1. Ebert S, Buffet T, Sermet BS, Marre O, Cessac B. Temporal pattern recognition in retinal ganglion cells is mediated by dynamical inhibitory synapses. Nat Commun 2024; 15:6118. PMID: 39033142; PMCID: PMC11271269; DOI: 10.1038/s41467-024-50506-7.
Abstract
A fundamental task for the brain is to generate predictions of future sensory inputs, and signal errors in these predictions. Many neurons have been shown to signal omitted stimuli during periodic stimulation, even in the retina. However, the mechanisms of this error signaling are unclear. Here we show that depressing inhibitory synapses shape the timing of the response to an omitted stimulus in the retina. While ganglion cells, the retinal output, responded to an omitted flash with a constant latency over many frequencies of the flash sequence, we found that this was not the case once inhibition was blocked. We built a simple circuit model and showed that depressing inhibitory synapses were a necessary component to reproduce our experimental findings. A new prediction of our model is that the accuracy of the constant latency requires a sufficient number of flashes in the stimulus, which we could confirm experimentally. Depressing inhibitory synapses could thus be a key component to generate the predictive responses observed in the retina, and potentially in many brain areas.
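The mechanism highlighted here, short-term depression of inhibitory synapses, can be illustrated with a generic Tsodyks-Markram-style resource variable. The sketch below only shows how such a synapse behaves under a periodic flash train with one omission; the equations, parameters, and flash protocol are arbitrary illustrations, not the circuit model of the paper.

```python
import numpy as np

def depressing_synapse(spike_times, tau_rec=0.5, U=0.5, t_end=2.5, dt=1e-3):
    """Short-term depression sketch: available resources R recover toward 1
    with time constant tau_rec, and a fraction U of R is consumed per
    presynaptic spike (U*R is the momentary synaptic efficacy)."""
    t = np.arange(0.0, t_end, dt)
    spike_idx = set(int(round(s / dt)) for s in spike_times)
    R = np.ones_like(t)
    efficacy = np.zeros_like(t)
    for i in range(1, len(t)):
        R[i] = R[i - 1] + dt * (1.0 - R[i - 1]) / tau_rec
        if i in spike_idx:
            efficacy[i] = U * R[i]
            R[i] -= efficacy[i]
    return t, R, efficacy

# Hypothetical protocol: 10 Hz flashes with the flash at t = 1.0 s omitted.
flashes = [s for s in np.arange(0.1, 2.0, 0.1) if not np.isclose(s, 1.0)]
t, R, eff = depressing_synapse(flashes)
print("steady-state efficacy before omission:", round(eff[eff > 0][7], 3))
print("recovered efficacy after the omission:", round(eff[t > 1.0][eff[t > 1.0] > 0][0], 3))
```

Because the synapse partially recovers during the skipped cycle, the first flash after the omission is transmitted more strongly, which is the kind of timing-sensitive signal the paper attributes to depressing inhibition.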
Affiliations
- Simone Ebert: INRIA Biovision Team, Université Côte d'Azur, Valbonne, France; Institute for Modeling in Neuroscience and Cognition (NeuroMod), Université Côte d'Azur, Nice, France; Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Thomas Buffet: Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- B Semihcan Sermet: Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France; Netherlands Institute for Neuroscience, Amsterdam, The Netherlands
- Olivier Marre: Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Bruno Cessac: INRIA Biovision Team, Université Côte d'Azur, Valbonne, France; Institute for Modeling in Neuroscience and Cognition (NeuroMod), Université Côte d'Azur, Nice, France
2. Mizuno H, Ikegaya Y. Late-spiking retrosplenial cortical neurons are not synchronized with neocortical slow waves in anesthetized mice. Neurosci Res 2024; 203:51-56. PMID: 38224839; DOI: 10.1016/j.neures.2024.01.001.
Abstract
Neocortical slow waves are critical for memory consolidation. The retrosplenial cortex is thought to facilitate the slow wave propagation to regions beyond the neocortex. However, it remains unclear which population is responsible for the slow wave propagation. To address this issue, we performed in vivo whole-cell recordings to identify neurons that were synchronous and asynchronous with slow waves. By quantifying their intrinsic membrane properties, we observed that the former exhibited regular spiking, whereas the latter exhibited late spiking. Thus, these two cell types transmit information in different directions between the neocortex and subcortical regions.
Affiliations
- Hiroyuki Mizuno: Graduate School of Pharmaceutical Sciences, The University of Tokyo, Tokyo 113-0033, Japan
- Yuji Ikegaya: Graduate School of Pharmaceutical Sciences, The University of Tokyo, Tokyo 113-0033, Japan; Institute for AI and Beyond, The University of Tokyo, Tokyo 113-0033, Japan; Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita City, Osaka 565-0871, Japan
3. Zeldenrust F, Calcini N, Yan X, Bijlsma A, Celikel T. The tuning of tuning: How adaptation influences single cell information transfer. PLoS Comput Biol 2024; 20:e1012043. PMID: 38739640; PMCID: PMC11115315; DOI: 10.1371/journal.pcbi.1012043.
Abstract
Sensory neurons reconstruct the world from action potentials (spikes) impinging on them. To effectively transfer information about the stimulus to the next processing level, a neuron needs to be able to adapt its working range to the properties of the stimulus. Here, we focus on the intrinsic neural properties that influence information transfer in cortical neurons and how tightly their properties need to be tuned to the stimulus statistics for them to be effective. We start by measuring the intrinsic information encoding properties of putative excitatory and inhibitory neurons in L2/3 of the mouse barrel cortex. Excitatory neurons show high thresholds and strong adaptation, making them fire sparsely and resulting in a strong compression of information, whereas inhibitory neurons, which favour fast spiking, transfer more information. Next, we turn to computational modelling and ask how two properties influence information transfer: 1) spike-frequency adaptation and 2) the shape of the IV-curve. We find that a subthreshold (but not threshold) adaptation, the 'h-current', and a properly tuned leak conductance can increase the information transfer of a neuron, whereas threshold adaptation can increase its working range. Finally, we verify the effect of the IV-curve slope in our experimental recordings and show that excitatory neurons form a more heterogeneous population than inhibitory neurons. These previously unquantified relationships between intrinsic neural features and neural coding will aid computational, theoretical and systems neuroscientists in understanding how neuronal populations can alter their coding properties, such as through the impact of neuromodulators. Why the variability of intrinsic properties of excitatory neurons is larger than that of inhibitory ones is an exciting question, for which future research is needed.
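Since the abstract centres on quantifying information transfer, a minimal plug-in estimate of the mutual information between a stimulus class and a spike count may help fix ideas. The estimator and the toy Poisson responses below are generic illustrations, not the analysis pipeline used in the paper.

```python
import numpy as np

def mutual_information_bits(stim, counts):
    """Plug-in estimate of I(stimulus; spike count) in bits from paired samples."""
    stim = np.asarray(stim)
    counts = np.asarray(counts)
    mi = 0.0
    for s in np.unique(stim):
        p_s = np.mean(stim == s)
        for c in np.unique(counts):
            p_c = np.mean(counts == c)
            p_sc = np.mean((stim == s) & (counts == c))
            if p_sc > 0:
                mi += p_sc * np.log2(p_sc / (p_s * p_c))
    return mi

# Toy data (hypothetical): a binary stimulus and Poisson spike counts whose
# mean increases for stimulus 1, mimicking a neuron that encodes the stimulus.
rng = np.random.default_rng(0)
stim = rng.integers(0, 2, size=5000)
counts = rng.poisson(lam=2 + 3 * stim)
print(round(mutual_information_bits(stim, counts), 3), "bits")
```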
Affiliations
- Fleur Zeldenrust: Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Niccolò Calcini: Maastricht Centre for Systems Biology (MaCSBio), University of Maastricht, Maastricht, The Netherlands
- Xuan Yan: Institute of Neuroscience, Chinese Academy of Sciences, Beijing, China
- Ate Bijlsma: Department of Population Health Sciences / Department of Biology, Universiteit Utrecht, The Netherlands
- Tansu Celikel: School of Psychology, Georgia Institute of Technology, Atlanta, GA, United States of America
4. Gort J. Emergence of Universal Computations Through Neural Manifold Dynamics. Neural Comput 2024; 36:227-270. PMID: 38101328; DOI: 10.1162/neco_a_01631.
Abstract
There is growing evidence that many forms of neural computation may be implemented by low-dimensional dynamics unfolding at the population scale. However, neither the connectivity structure nor the general capabilities of these embedded dynamical processes are currently understood. In this work, the two most common formalisms of firing-rate models are evaluated using tools from analysis, topology, and nonlinear dynamics in order to provide plausible explanations for these problems. It is shown that low-rank structured connectivities predict the formation of invariant and globally attracting manifolds in all these models. Regarding the dynamics arising in these manifolds, it is proved that they are topologically equivalent across the considered formalisms. This letter also shows that under the low-rank hypothesis, the flows emerging in neural manifolds, including input-driven systems, are universal, which broadens previous findings. It explores how low-dimensional orbits can support the production of continuous sets of muscular trajectories, the implementation of central pattern generators, and the storage of memory states. These dynamics can robustly simulate any Turing machine over arbitrary bounded memory strings, virtually endowing rate models with the power of universal computation. In addition, the letter shows how the low-rank hypothesis predicts the parsimonious correlation structure observed in cortical activity. Finally, it discusses how this theory could provide a useful tool from which to study neuropsychological phenomena using mathematical methods.
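One way to build intuition for the low-rank claim is a small numerical check: with rank-one connectivity, the activity of a firing-rate network collapses onto the line spanned by the connectivity vector. The sketch below uses a symmetric rank-one matrix and arbitrary parameters; it only conveys the flavour of the result, not the formalisms or proofs of the letter.

```python
import numpy as np

rng = np.random.default_rng(1)

N, dt, n_steps = 500, 0.01, 2000
m = rng.standard_normal(N)
J = 2.0 * np.outer(m, m) / N            # rank-one connectivity, gain above unity

x = rng.standard_normal(N)              # random initial condition
traj = np.empty((n_steps, N))
for t in range(n_steps):
    x = x + dt * (-x + J @ np.tanh(x))  # standard firing-rate dynamics
    traj[t] = x

late = traj[n_steps // 2:]              # discard the initial transient
u = m / np.linalg.norm(m)
explained = np.sum((late @ u) ** 2) / np.sum(late ** 2)
print("fraction of late-time activity energy along m:", round(float(explained), 3))
```

After the transient, essentially all of the activity lies along m, i.e., on a one-dimensional attracting manifold determined by the connectivity structure.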
Affiliations
- Joan Gort: Facultat de Psicologia, Universitat Autònoma de Barcelona, 08193, Bellaterra, Barcelona, Spain
5. Andrade-Talavera Y, Fisahn A, Rodríguez-Moreno A. Timing to be precise? An overview of spike timing-dependent plasticity, brain rhythmicity, and glial cells interplay within neuronal circuits. Mol Psychiatry 2023; 28:2177-2188. PMID: 36991134; PMCID: PMC10611582; DOI: 10.1038/s41380-023-02027-w.
Abstract
In the mammalian brain, information processing and storage rely on the complex coding and decoding events performed by neuronal networks. These actions are based on the computational ability of neurons and their functional engagement in neuronal assemblies where precise timing of action potential firing is crucial. Neuronal circuits manage a myriad of spatially and temporally overlapping inputs to compute specific outputs that are proposed to underlie memory trace formation, sensory perception, and cognitive behaviors. Spike-timing-dependent plasticity (STDP) and electrical brain rhythms are suggested to underlie such functions, while physiological evidence of the assembly structures and mechanisms driving both processes remains scarce. Here, we review foundational and current evidence on timing precision and cooperative neuronal electrical activity driving STDP and brain rhythms, their interactions, and the emerging role of glial cells in such processes. We also provide an overview of their cognitive correlates and discuss current limitations and controversies, future perspectives on experimental approaches, and their application in humans.
Affiliations
- Yuniesky Andrade-Talavera: Laboratory of Cellular Neuroscience and Plasticity, Department of Physiology, Anatomy and Cell Biology, Universidad Pablo de Olavide, ES-41013, Seville, Spain
- André Fisahn: Department of Biosciences and Nutrition and Department of Women's and Children's Health, Karolinska Institutet, 171 77, Stockholm, Sweden
- Antonio Rodríguez-Moreno: Laboratory of Cellular Neuroscience and Plasticity, Department of Physiology, Anatomy and Cell Biology, Universidad Pablo de Olavide, ES-41013, Seville, Spain
6. Winston CN, Mastrovito D, Shea-Brown E, Mihalas S. Heterogeneity in Neuronal Dynamics Is Learned by Gradient Descent for Temporal Processing Tasks. Neural Comput 2023; 35:555-592. PMID: 36827598; PMCID: PMC10044000; DOI: 10.1162/neco_a_01571.
Abstract
Individual neurons in the brain have complex intrinsic dynamics that are highly diverse. We hypothesize that the complex dynamics produced by networks of complex and heterogeneous neurons may contribute to the brain's ability to process and respond to temporally complex data. To study the role of complex and heterogeneous neuronal dynamics in network computation, we develop a rate-based neuronal model, the generalized-leaky-integrate-and-fire-rate (GLIFR) model, which is a rate equivalent of the generalized-leaky-integrate-and-fire model. The GLIFR model has multiple dynamical mechanisms, which add to the complexity of its activity while maintaining differentiability. We focus on the role of after-spike currents, currents induced or modulated by neuronal spikes, in producing rich temporal dynamics. We use machine learning techniques to learn both synaptic weights and parameters underlying intrinsic dynamics to solve temporal tasks. The GLIFR model allows the use of standard gradient descent techniques rather than surrogate gradient descent, which has been used in spiking neural networks. After establishing the ability to optimize parameters using gradient descent in single neurons, we ask how networks of GLIFR neurons learn and perform on temporally challenging tasks, such as sequential MNIST. We find that these networks learn diverse parameters, which gives rise to diversity in neuronal dynamics, as demonstrated by clustering of neuronal parameters. GLIFR networks have mixed performance when compared to vanilla recurrent neural networks, with higher performance in pixel-by-pixel MNIST but lower in line-by-line MNIST. However, they appear to be more robust to random silencing. We find that the ability to learn heterogeneity and the presence of after-spike currents contribute to these gains in performance. Our work demonstrates both the computational robustness of neuronal complexity and diversity in networks and a feasible method of training such models using exact gradients.
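As a rough, differentiable caricature of the modelling idea (a rate-based unit whose after-spike-like current shapes its temporal response), the sketch below uses a smooth firing-rate nonlinearity so the whole update could in principle be trained with automatic differentiation. The equations, names, and parameter values are my own simplifications, not the published GLIFR equations.

```python
import numpy as np

def rate_unit_step(V, a, I_ext, dt=1e-3, tau_m=0.02, R=1.0,
                   V_th=1.0, k=10.0, w_a=-10.0, tau_a=0.2):
    """One Euler step of a leaky rate unit with an adaptation current `a`.
    The rate is a sigmoid of voltage (smooth, hence differentiable), and the
    adaptation current is driven by the rate itself, loosely mimicking
    after-spike currents."""
    rate = 1.0 / (1.0 + np.exp(-k * (V - V_th)))
    V_new = V + dt / tau_m * (-V + R * (I_ext + a))
    a_new = a + dt * (-a / tau_a + w_a * rate)
    return V_new, a_new, rate

# Step-current response: the rate rises at stimulus onset and then adapts.
V, a, rates = 0.0, 0.0, []
for step in range(1500):
    I = 1.5 if step >= 200 else 0.0
    V, a, r = rate_unit_step(V, a, I)
    rates.append(r)
print("onset rate:", round(max(rates[200:400]), 2), " late rate:", round(rates[-1], 2))
```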
Affiliations
- Chloe N Winston: Departments of Neuroscience and Computer Science, University of Washington, Seattle, WA 98195, USA; University of Washington Computational Neuroscience Center, Seattle, WA 98195, USA
- Dana Mastrovito: Allen Institute for Brain Science, Seattle, WA 98109, USA
- Eric Shea-Brown: University of Washington Computational Neuroscience Center, Seattle, WA 98195, USA; Allen Institute for Brain Science, Seattle, WA 98109, USA; Department of Applied Mathematics, University of Washington, Seattle, WA 98195, USA
- Stefan Mihalas: University of Washington Computational Neuroscience Center, Seattle, WA 98195, USA; Allen Institute for Brain Science, Seattle, WA 98109, USA; Department of Applied Mathematics, University of Washington, Seattle, WA 98195, USA
7. Kubo Y, Chalmers E, Luczak A. Biologically-inspired neuronal adaptation improves learning in neural networks. Commun Integr Biol 2023; 16:2163131. PMID: 36685291; PMCID: PMC9851208; DOI: 10.1080/19420889.2022.2163131.
Abstract
Since humans still outperform artificial neural networks on many tasks, drawing inspiration from the brain may help to improve current machine learning algorithms. Contrastive Hebbian learning (CHL) and equilibrium propagation (EP) are biologically plausible algorithms that update weights using only local information (without explicitly calculating gradients) and still achieve performance comparable to conventional backpropagation. In this study, we augmented CHL and EP with Adjusted Adaptation, inspired by the adaptation effect observed in neurons, in which a neuron's response to a given stimulus is adjusted after a short time. We add this adaptation feature to multilayer perceptrons and convolutional neural networks trained on MNIST and CIFAR-10. Surprisingly, adaptation improved the performance of these networks. We discuss the biological inspiration for this idea and investigate why Neuronal Adaptation could be an important brain mechanism to improve the stability and accuracy of learning.
Affiliations
- Yoshimasa Kubo: Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, AB, Canada
- Eric Chalmers: Department of Mathematics & Computing, Mount Royal University, Calgary, AB, Canada
- Artur Luczak: Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, AB, Canada
8. Sidhu RS, Johnson EC, Jones DL, Ratnam R. A dynamic spike threshold with correlated noise predicts observed patterns of negative interval correlations in neuronal spike trains. Biol Cybern 2022; 116:611-633. PMID: 36244004; PMCID: PMC9691502; DOI: 10.1007/s00422-022-00946-5.
Abstract
Negative correlations in the sequential evolution of interspike intervals (ISIs) are a signature of memory in neuronal spike-trains. They provide coding benefits including firing-rate stabilization, improved detectability of weak sensory signals, and enhanced transmission of information by improving signal-to-noise ratio. Primary electrosensory afferent spike-trains in weakly electric fish fall into two categories based on the pattern of ISI correlations: non-bursting units have negative correlations which remain negative but decay to zero with increasing lags (Type I ISI correlations), and bursting units have oscillatory (alternating sign) correlation which damp to zero with increasing lags (Type II ISI correlations). Here, we predict and match observed ISI correlations in these afferents using a stochastic dynamic threshold model. We determine the ISI correlation function as a function of an arbitrary discrete noise correlation function [Formula: see text], where k is a multiple of the mean ISI. The function permits forward and inverse calculations of the correlation function. Both types of correlation functions can be generated by adding colored noise to the spike threshold with Type I correlations generated with slow noise and Type II correlations generated with fast noise. A first-order autoregressive (AR) process with a single parameter is sufficient to predict and accurately match both types of afferent ISI correlation functions, with the type being determined by the sign of the AR parameter. The predicted and experimentally observed correlations are in geometric progression. The theory predicts that the limiting sum of ISI correlations is [Formula: see text] yielding a perfect DC-block in the power spectrum of the spike train. Observed ISI correlations from afferents have a limiting sum that is slightly larger at [Formula: see text] ([Formula: see text]). We conclude that the underlying process for generating ISIs may be a simple combination of low-order AR and moving average processes and discuss the results from the perspective of optimal coding.
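The paper's central object, a spike threshold perturbed by correlated (autoregressive) noise on top of spike-triggered adaptation, can be sketched with a leaky integrate-and-fire neuron. The simulation below is a simplified illustration with arbitrary parameters, not the authors' exact model; negative serial ISI correlations at small lags are the typical signature of such threshold adaptation, and a perfect DC block in the spike-train power spectrum corresponds to the serial correlation coefficients summing to -1/2.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_isis(n_spikes=5000, dt=1e-4, mu=1.2, tau_m=0.01,
                  theta0=1.0, jump=0.4, tau_theta=0.05, phi=0.99, sigma=0.02):
    """LIF with a dynamic threshold: a spike-triggered increment `jump` that
    decays with tau_theta, plus AR(1) ('colored') noise eps added to the
    threshold at every time step."""
    v = theta_adapt = eps = 0.0
    isis, t_last, t = [], 0.0, 0.0
    while len(isis) < n_spikes:
        t += dt
        eps = phi * eps + sigma * np.sqrt(1.0 - phi**2) * rng.standard_normal()
        v += dt / tau_m * (mu - v)
        theta_adapt -= dt / tau_theta * theta_adapt
        if v >= theta0 + theta_adapt + eps:
            isis.append(t - t_last)
            t_last, v = t, 0.0
            theta_adapt += jump
    return np.asarray(isis)

def serial_corr(isis, lag):
    return np.corrcoef(isis[:-lag], isis[lag:])[0, 1]

isis = simulate_isis()
print("ISI serial correlations, lags 1-4:",
      [round(serial_corr(isis, k), 3) for k in range(1, 5)])
```

Changing the noise correlation time relative to the mean ISI (the `phi` parameter here) is the kind of knob that, per the abstract, switches the correlation pattern between decaying (Type I) and alternating (Type II) forms.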
Affiliations
- Robin S Sidhu: Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Erik C Johnson: The Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA
- Douglas L Jones: Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Rama Ratnam: Division of Biological and Life Sciences, School of Arts and Sciences, Ahmedabad University, Ahmedabad, Gujarat, India
9. Gansel KS. Neural synchrony in cortical networks: mechanisms and implications for neural information processing and coding. Front Integr Neurosci 2022; 16:900715. PMID: 36262373; PMCID: PMC9574343; DOI: 10.3389/fnint.2022.900715.
Abstract
Synchronization of neuronal discharges on the millisecond scale has long been recognized as a prevalent and functionally important attribute of neural activity. In this article, I review classical concepts and corresponding evidence of the mechanisms that govern the synchronization of distributed discharges in cortical networks and relate those mechanisms to their possible roles in coding and cognitive functions. To accommodate the need for a selective, directed synchronization of cells, I propose that synchronous firing of distributed neurons is a natural consequence of spike-timing-dependent plasticity (STDP) that associates cells repetitively receiving temporally coherent input: the “synchrony through synaptic plasticity” hypothesis. Neurons that are excited by a repeated sequence of synaptic inputs may learn to selectively respond to the onset of this sequence through synaptic plasticity. Multiple neurons receiving coherent input could thus actively synchronize their firing by learning to selectively respond at corresponding temporal positions. The hypothesis makes several predictions: first, the position of the cells in the network, as well as the source of their input signals, would be irrelevant as long as their input signals arrive simultaneously; second, repeating discharge patterns should get compressed until all or some part of the signals are synchronized; and third, this compression should be accompanied by a sparsening of signals. In this way, selective groups of cells could emerge that would respond to some recurring event with synchronous firing. Such a learned response pattern could further be modulated by synchronous network oscillations that provide a dynamic, flexible context for the synaptic integration of distributed signals. I conclude by suggesting experimental approaches to further test this new hypothesis.
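The hypothesis is built on the canonical pair-based STDP rule, so it may help to keep its standard form in view. The exponential window below, with illustrative amplitudes and 20 ms time constants, is the textbook rule rather than anything specific to this article.

```python
import numpy as np

def stdp_weight_change(delta_t, a_plus=0.010, a_minus=0.012,
                       tau_plus=0.020, tau_minus=0.020):
    """Pair-based STDP: potentiation when the presynaptic spike precedes the
    postsynaptic one (delta_t = t_post - t_pre > 0), depression otherwise.
    Times in seconds; amplitudes are illustrative."""
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t >= 0.0,
                    a_plus * np.exp(-delta_t / tau_plus),
                    -a_minus * np.exp(delta_t / tau_minus))

# Millisecond-scale timing decides both the sign and the size of the change.
print(stdp_weight_change([-0.010, 0.002, 0.010]))
```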
10. Huang C, Zeldenrust F, Celikel T. Cortical Representation of Touch in Silico. Neuroinformatics 2022; 20:1013-1039. PMID: 35486347; PMCID: PMC9588483; DOI: 10.1007/s12021-022-09576-5.
Abstract
With its six layers and ~12,000 neurons, a cortical column is a complex network whose function is plausibly greater than the sum of its constituents'. Functional characterization of its network components will require going beyond the brute-force modulation of the neural activity of a small group of neurons. Here we introduce an open-source, biologically inspired, computationally efficient network model of the somatosensory cortex's granular and supragranular layers after reconstructing the barrel cortex in soma resolution. Comparisons of the network activity to empirical observations showed that the in silico network replicates the known properties of touch representations and the whisker deprivation-induced changes in synaptic strength observed in vivo. Simulations show that the history of the membrane potential acts as a spatial filter that determines the presynaptic population of neurons contributing to a post-synaptic action potential; this spatial filtering might be critical for synaptic integration of top-down and bottom-up information.
Affiliations
- Chao Huang: Department of Biology, University of Leipzig, Leipzig, Germany
- Fleur Zeldenrust: Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Tansu Celikel: Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands; School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
11. Rikhye RV, Yildirim M, Hu M, Breton-Provencher V, Sur M. Reliable Sensory Processing in Mouse Visual Cortex through Cooperative Interactions between Somatostatin and Parvalbumin Interneurons. J Neurosci 2021; 41:8761-8778. PMID: 34493543; PMCID: PMC8528503; DOI: 10.1523/jneurosci.3176-20.2021.
Abstract
Intrinsic neuronal variability significantly limits information encoding in the primary visual cortex (V1). However, under certain conditions, neurons can respond reliably with highly precise responses to the same visual stimuli from trial to trial. This suggests that there exist intrinsic neural circuit mechanisms that dynamically modulate the intertrial variability of visual cortical neurons. Here, we sought to elucidate the role of different inhibitory interneurons (INs) in reliable coding in mouse V1. To study the interactions between somatostatin-expressing interneurons (SST-INs) and parvalbumin-expressing interneurons (PV-INs), we used a dual-color calcium imaging technique that allowed us to simultaneously monitor these two neural ensembles while awake mice, of both sexes, passively viewed natural movies. SST neurons were more active during epochs of reliable pyramidal neuron firing, whereas PV neurons were more active during epochs of unreliable firing. SST neuron activity lagged that of PV neurons, consistent with a feedback inhibitory SST→PV circuit. To dissect the role of this circuit in pyramidal neuron activity, we used temporally limited optogenetic activation and inactivation of SST and PV interneurons during periods of reliable and unreliable pyramidal cell firing. Transient firing of SST neurons increased pyramidal neuron reliability by actively suppressing PV neurons, a proposal that was supported by a rate-based model of V1 neurons. These results identify a cooperative functional role for the SST→PV circuit in modulating the reliability of pyramidal neuron activity.
Significance Statement: Cortical neurons often respond to identical sensory stimuli with large variability. However, under certain conditions, the same neurons can also respond highly reliably. The circuit mechanisms that contribute to this modulation remain unknown. Here, we used novel dual-wavelength calcium imaging and temporally selective optical perturbation to identify an inhibitory neural circuit in visual cortex that can modulate the reliability of pyramidal neurons to naturalistic visual stimuli. Our results, supported by computational models, suggest that somatostatin interneurons increase pyramidal neuron reliability by suppressing parvalbumin interneurons via the inhibitory SST→PV circuit. These findings reveal a novel role of the SST→PV circuit in modulating the fidelity of neural coding critical for visual perception.
Affiliations
- Rajeev V Rikhye: Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Murat Yildirim: Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Ming Hu: Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Vincent Breton-Provencher: Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Mriganka Sur: Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
12. Goethals S, Sierksma MC, Nicol X, Réaux-Le Goazigo A, Brette R. Electrical match between initial segment and somatodendritic compartment for action potential backpropagation in retinal ganglion cells. J Neurophysiol 2021; 126:28-46. PMID: 34038184; DOI: 10.1152/jn.00005.2021.
Abstract
The action potential of most vertebrate neurons initiates in the axon initial segment (AIS) and is then transmitted to the soma where it is regenerated by somatodendritic sodium channels. For successful transmission, the AIS must produce a strong axial current, so as to depolarize the soma to the threshold for somatic regeneration. Theoretically, this axial current depends on AIS geometry and Na+ conductance density. We measured the axial current of mouse retinal ganglion cells using whole cell recordings with post hoc AIS labeling. We found that this current is large, implying high Na+ conductance density, and carries a charge that covaries with capacitance so as to depolarize the soma by ∼30 mV. Additionally, we observed that the axial current attenuates strongly with depolarization, consistent with sodium channel inactivation, but temporally broadens so as to preserve the transmitted charge. Thus, the AIS appears to be organized so as to reliably backpropagate the axonal action potential.
New & Noteworthy: We measured the axial current produced at spike initiation by the axon initial segment of mouse retinal ganglion cells. We found that it is a large current, requiring high sodium channel conductance density, which covaries with cell capacitance so as to ensure a ∼30 mV depolarization. During sustained depolarization the current attenuated, but it broadened to preserve somatic depolarization. Thus, properties of the initial segment are adjusted to ensure backpropagation of the axonal action potential.
Affiliations
- Sarah Goethals: Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Martijn C Sierksma: Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France; Department of Neuroscience, Erasmus MC, Erasmus University Medical Center Rotterdam, Rotterdam, The Netherlands
- Xavier Nicol: Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Romain Brette: Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
13. Tikidji-Hamburyan RA, Colonnese MT. Polynomial, piecewise-Linear, Step (PLS): A Simple, Scalable, and Efficient Framework for Modeling Neurons. Front Neuroinform 2021; 15:642933. PMID: 34025382; PMCID: PMC8134741; DOI: 10.3389/fninf.2021.642933.
Abstract
Biological neurons can be modeled with different levels of biophysical/biochemical detail. The accuracy with which a model reflects the actual physiological processes, and ultimately the information function of a neuron, can range from very detailed to a schematic phenomenological representation. This range exists due to a common problem: one needs to find an optimal trade-off between the level of detail needed to capture the necessary information processing in a neuron and the computational load needed to compute 1 s of model time. An increase in modeled network size or model-time, for which the solution should be obtained, makes this trade-off pivotal in model development. Numerical simulations become incredibly challenging when an extensive network with a detailed representation of each neuron needs to be modeled over a long time interval to study slowly evolving processes, e.g., development of the thalamocortical circuits. Here we suggest a simple, powerful and flexible approach in which we approximate the right-hand sides of differential equations by combinations of functions from three families: Polynomial, piecewise-Linear, Step (PLS). To obtain a single coherent framework, we provide four core principles in which PLS functions should be combined. We show the rationale behind each of the core principles. Two examples illustrate how to build a conductance-based or phenomenological model using the PLS-framework. We use the first example as a benchmark on three different computational platforms: CPU, GPU, and mobile system-on-chip devices. We show that the PLS-framework speeds up computations without increasing the memory footprint and maintains high model fidelity comparable to the fully computed model or to a lookup-table approximation. We are convinced that the full range of neuron models, from biophysical to phenomenological and even to abstract models, may benefit from using the PLS-framework.
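The core trick described in the abstract, replacing an expensive term in a model's right-hand side with a cheap approximation, can be shown on a single function. Below, the exponential spike-initiation term of an exponential integrate-and-fire neuron is swapped for a piecewise-linear interpolant over a handful of breakpoints; this only illustrates the piecewise-linear family, not the paper's four core principles or its benchmarks, and the parameter values are arbitrary.

```python
import numpy as np

V_T, Delta_T = -50.0, 2.0                          # mV, illustrative values

def exp_term(v):
    """Exponential spike-initiation term of an exponential IF neuron."""
    return Delta_T * np.exp((v - V_T) / Delta_T)

# Piecewise-linear surrogate: tabulate the term at a few breakpoints and
# interpolate linearly in between (cheap to evaluate, easy to store).
breakpoints = np.linspace(-80.0, -40.0, 9)
table = exp_term(breakpoints)

def exp_term_pls(v):
    return np.interp(v, breakpoints, table)

v_test = np.array([-62.5, -57.5, -52.5, -47.5])
print("exact:     ", np.round(exp_term(v_test), 2))
print("piecewise: ", np.round(exp_term_pls(v_test), 2))
```

With more (or better placed) breakpoints the surrogate can be made as tight as desired, which is exactly the accuracy-versus-cost trade-off the abstract describes.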
Affiliations
- Matthew T Colonnese: School of Medicine and Health Sciences, George Washington University, Washington, DC, United States
14. Platkiewicz J, Saccomano Z, McKenzie S, English D, Amarasingham A. Monosynaptic inference via finely-timed spikes. J Comput Neurosci 2021; 49:131-157. PMID: 33507429; DOI: 10.1007/s10827-020-00770-5.
Abstract
Observations of finely-timed spike relationships in population recordings have been used to support partial reconstruction of neural microcircuit diagrams. In this approach, fine-timescale components of paired spike train interactions are isolated and subsequently attributed to synaptic parameters. Recent perturbation studies strengthen the case for such an inference, yet the complete set of measurements needed to calibrate statistical models is unavailable. To address this gap, we study features of pairwise spiking in a large-scale in vivo dataset where presynaptic neurons were explicitly decoupled from network activity by juxtacellular stimulation. We then construct biophysical models of paired spike trains to reproduce the observed phenomenology of in vivo monosynaptic interactions, including both fine-timescale spike-spike correlations and firing irregularity. A key characteristic of these models is that the paired neurons are coupled by rapidly-fluctuating background inputs. We quantify a monosynapse's causal effect by comparing the postsynaptic train with its counterfactual, when the monosynapse is removed. Subsequently, we develop statistical techniques for estimating this causal effect from the pre- and post-synaptic spike trains. A particular focus is the justification and application of a nonparametric separation of timescale principle to implement synaptic inference. Using simulated data generated from the biophysical models, we characterize the regimes in which the estimators accurately identify the monosynaptic effect. A secondary goal is to initiate a critical exploration of neurostatistical assumptions in terms of biophysical mechanisms, particularly with regards to the challenging but arguably fundamental issue of fast, unobservable nonstationarities in background dynamics.
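One widely used way to implement a nonparametric separation of timescales is to compare an observed cross-correlogram against surrogates in which presynaptic spike times are jittered by more than the synaptic timescale: slow comodulation survives the jitter, so the difference isolates the fast, putatively monosynaptic component. The sketch below is that generic jitter idea applied to made-up spike trains, not the estimators developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def ccg_counts(pre, post, window=0.005, bin_w=0.0005):
    """Histogram of post-minus-pre spike-time lags within +/- window (seconds)."""
    edges = np.arange(-window, window + bin_w, bin_w)
    lags = []
    for t in pre:
        d = post - t
        lags.append(d[np.abs(d) <= window])
    return np.histogram(np.concatenate(lags), bins=edges)[0], edges

def jitter_excess(pre, post, jitter=0.01, n_surr=200, **kw):
    """Observed CCG minus the mean CCG of jittered surrogates: the residual
    keeps only correlations faster than the jitter window."""
    obs, edges = ccg_counts(pre, post, **kw)
    surr = np.zeros((n_surr, obs.size))
    for i in range(n_surr):
        pre_j = pre + rng.uniform(-jitter, jitter, size=pre.size)
        surr[i], _ = ccg_counts(pre_j, post, **kw)
    return obs - surr.mean(axis=0), edges

# Synthetic pair: background Poisson firing plus a 10% chance that each
# presynaptic spike adds a postsynaptic spike 2 ms later (hypothetical numbers).
pre = np.sort(rng.uniform(0, 100, 2000))
post = np.sort(np.concatenate([rng.uniform(0, 100, 1800),
                               pre[rng.random(pre.size) < 0.1] + 0.002]))
excess, edges = jitter_excess(pre, post)
peak = int(np.argmax(excess))
lag_ms = 0.5 * (edges[peak] + edges[peak + 1]) * 1000
print(f"peak excess of {excess[peak]:.0f} coincidences at lag {lag_ms:.2f} ms")
```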
Affiliations
- Jonathan Platkiewicz: Department of Mathematics, The City College of New York, The City University of New York, New York, NY, 10031, USA
- Zachary Saccomano: Department of Biology, The Graduate Center, The City University of New York, New York, NY, 10016, USA
- Sam McKenzie: Neuroscience Institute, New York University, New York, NY, 10016, USA
- Daniel English: School of Neuroscience, Virginia Tech, Blacksburg, VA, 24060, USA
- Asohan Amarasingham: Department of Mathematics, The City College of New York, The City University of New York, New York, NY, 10031, USA; Department of Biology, The Graduate Center, The City University of New York, New York, NY, 10016, USA; Departments of Computer Science and Psychology, The Graduate Center, The City University of New York, New York, NY, 10016, USA
15. Guo W, Fouda ME, Yantir HE, Eltawil AM, Salama KN. Unsupervised Adaptive Weight Pruning for Energy-Efficient Neuromorphic Systems. Front Neurosci 2020; 14:598876. PMID: 33281549; PMCID: PMC7689062; DOI: 10.3389/fnins.2020.598876.
Abstract
To tackle real-world challenges, deep and complex neural networks are generally used with a massive number of parameters, which require large memory size, extensive computational operations, and high energy consumption in neuromorphic hardware systems. In this work, we propose an unsupervised online adaptive weight pruning method that dynamically removes non-critical weights from a spiking neural network (SNN) to reduce network complexity and improve energy efficiency. The adaptive pruning method explores neural dynamics and firing activity of SNNs and adapts the pruning threshold over time and neurons during training. The proposed adaptation scheme allows the network to effectively identify critical weights associated with each neuron by changing the pruning threshold dynamically over time and neurons. It balances the connection strength of neurons with the previous layer with adaptive thresholds and prevents weak neurons from failure after pruning. We also evaluated improvement in the energy efficiency of SNNs with our method by computing synaptic operations (SOPs). Simulation results and detailed analyses have revealed that applying adaptation in the pruning threshold can significantly improve network performance and reduce the number of SOPs. The pruned SNN with 800 excitatory neurons can achieve a 30% reduction in SOPs during training and a 55% reduction during inference, with only 0.44% accuracy loss on the MNIST dataset. Compared with a previously reported online soft pruning method, the proposed adaptive pruning method shows 3.33% higher classification accuracy and 67% more reduction in SOPs. The effectiveness of our method was confirmed on different datasets and for different network sizes. Our evaluation showed that the implementation overhead of the adaptive method regarding speed, area, and energy is negligible in the network. Therefore, this work offers a promising solution for effective network compression and building highly energy-efficient neuromorphic systems in real-time applications.
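The idea of a per-neuron pruning threshold can be sketched in a few lines. The rule below (threshold set to a fraction of each postsynaptic neuron's mean absolute incoming weight) is a generic stand-in chosen for illustration; the paper's rule adapts the threshold over training time and neurons using the SNN's firing dynamics.

```python
import numpy as np

def adaptive_prune(W, frac=0.5):
    """Per-neuron pruning sketch: each column (postsynaptic neuron) gets its
    own threshold, here `frac` times its mean absolute incoming weight, and
    weaker connections are removed."""
    thresholds = frac * np.mean(np.abs(W), axis=0, keepdims=True)
    mask = np.abs(W) >= thresholds
    return W * mask, mask

rng = np.random.default_rng(6)
W = rng.normal(scale=0.1, size=(784, 100))   # hypothetical input-to-hidden weights
W_pruned, mask = adaptive_prune(W)
print("fraction of weights removed:", round(1.0 - mask.mean(), 3))
```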
Affiliations
- Wenzhe Guo: Sensors Lab, Advanced Membranes & Porous Materials Center, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia; Communication and Computing Systems Lab, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
- Mohammed E. Fouda: Department of Electrical Engineering and Computer Science, University of California, Irvine, Irvine, CA, United States
- Hasan Erdem Yantir: Sensors Lab, Advanced Membranes & Porous Materials Center, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia; Communication and Computing Systems Lab, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
- Ahmed M. Eltawil: Communication and Computing Systems Lab, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia; Department of Electrical Engineering and Computer Science, University of California, Irvine, Irvine, CA, United States
- Khaled Nabil Salama: Sensors Lab, Advanced Membranes & Porous Materials Center, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
16. Zbili M, Rama S, Yger P, Inglebert Y, Boumedine-Guignon N, Fronzaroli-Moliniere L, Brette R, Russier M, Debanne D. Axonal Na+ channels detect and transmit levels of input synchrony in local brain circuits. Sci Adv 2020; 6:eaay4313. PMID: 32494697; PMCID: PMC7202877; DOI: 10.1126/sciadv.aay4313.
Abstract
Sensory processing requires mechanisms of fast coincidence detection to discriminate synchronous from asynchronous inputs. Spike threshold adaptation enables such a discrimination but is ineffective in transmitting this information to the network. We show here that presynaptic axonal sodium channels read and transmit precise levels of input synchrony to the postsynaptic cell by modulating the presynaptic action potential (AP) amplitude. As a consequence, synaptic transmission is facilitated at cortical synapses when the presynaptic spike is produced by synchronous inputs. Using dual soma-axon recordings, imaging, and modeling, we show that this facilitation results from enhanced AP amplitude in the axon due to minimized inactivation of axonal sodium channels. Quantifying local circuit activity and using network modeling, we found that spikes induced by synchronous inputs produced a larger effect on network activity than spikes induced by asynchronous inputs. Therefore, this input synchrony-dependent facilitation may constitute a powerful mechanism, regulating synaptic transmission at proximal synapses.
Affiliations
- Mickaël Zbili: UNIS, INSERM, UMR 1072, Aix-Marseille Université, 13015, Marseille, France
- Sylvain Rama: UNIS, INSERM, UMR 1072, Aix-Marseille Université, 13015, Marseille, France
- Pierre Yger: Sorbonne Université, INSERM, CNRS, Institut de la Vision, 75012 Paris, France
- Yanis Inglebert: UNIS, INSERM, UMR 1072, Aix-Marseille Université, 13015, Marseille, France
- Romain Brette: Sorbonne Université, INSERM, CNRS, Institut de la Vision, 75012 Paris, France
- Michaël Russier: UNIS, INSERM, UMR 1072, Aix-Marseille Université, 13015, Marseille, France
- Dominique Debanne: UNIS, INSERM, UMR 1072, Aix-Marseille Université, 13015, Marseille, France
17. Effect of Stimulus-Dependent Spike Timing on Population Coding of Sound Location in the Owl's Auditory Midbrain. eNeuro 2020; 7:ENEURO.0244-19.2020. PMID: 32188709; PMCID: PMC7189487; DOI: 10.1523/eneuro.0244-19.2020.
Abstract
In the auditory system, the spectrotemporal structure of acoustic signals determines the temporal pattern of spikes. Here, we investigated this effect in neurons of the barn owl's auditory midbrain (Tyto furcata) that are selective for auditory space and whether it can influence the coding of sound direction. We found that in the nucleus where neurons first become selective to combinations of sound localization cues, reproducibility of spike trains across repeated trials of identical sounds, a metric of across-trial temporal fidelity of spiking patterns evoked by a stimulus, was maximal at the sound direction that elicited the highest firing rate. We then tested the hypothesis that this stimulus-dependent patterning resulted in rate co-modulation of cells with similar frequency and spatial selectivity, driving stimulus-dependent synchrony of population responses. Tetrodes were used to simultaneously record multiple nearby units in the optic tectum (OT), where auditory space is topographically represented. While spiking of neurons in OT showed lower reproducibility across trials compared with upstream nuclei, spike-time synchrony between nearby OT neurons was highest for sounds at their preferred direction. A model of the midbrain circuit explained the relationship between stimulus-dependent reproducibility and synchrony, and demonstrated that this effect can improve the decoding of sound location from the OT output. Thus, stimulus-dependent spiking patterns in the auditory midbrain can have an effect on spatial coding. This study reports a functional connection between spike patterning elicited by spectrotemporal features of a sound and the coding of its location.
18. Hart WL, Richardson RT, Kameneva T, Thompson AC, Wise AK, Fallon JB, Stoddart PR, Needham K. Combined optogenetic and electrical stimulation of auditory neurons increases effective stimulation frequency-an in vitro study. J Neural Eng 2020; 17:016069. PMID: 31923907; DOI: 10.1088/1741-2552/ab6a68.
Abstract
Objective: The performance of neuroprostheses, including cochlear and retinal implants, is currently constrained by the spatial resolution of electrical stimulation. Optogenetics has improved the spatial control of neurons in vivo but lacks the fast temporal dynamics required for auditory and retinal signalling. The objective of this study is to demonstrate that combining optical and electrical stimulation in vitro could address some of the limitations associated with each of the stimulus modes when used independently.
Approach: The response of murine auditory neurons expressing ChR2-H134 to combined optical and electrical stimulation was characterised using whole cell patch clamp electrophysiology.
Main results: Optogenetic costimulation produces a three-fold increase in peak firing rate compared to optical stimulation alone and allows spikes to be evoked by combined subthreshold optical and electrical inputs. Subthreshold optical depolarisation also facilitated spiking in auditory neurons for periods of up to 30 ms without evidence of wide-scale Na+ inactivation.
Significance: These findings may contribute to the development of spatially and temporally selective optogenetic-based neuroprosthetics and complement recent developments in 'fast opsins'.
Affiliations
- William L Hart: ARC Training Centre in Biodevices, Swinburne University of Technology, Hawthorn, VIC 3122, Australia
- Rachael T Richardson: The Bionics Institute, East Melbourne, VIC 3002, Australia; Department of Surgery (Otolaryngology), University of Melbourne, The Royal Victorian Eye and Ear Hospital, East Melbourne, VIC 3002, Australia; Medical Bionics Department, University of Melbourne, East Melbourne, VIC 3002, Australia
- Tatiana Kameneva: Swinburne University of Technology, Hawthorn, VIC 3122, Australia
- Andrew K Wise: The Bionics Institute, East Melbourne, VIC 3002, Australia; Department of Surgery (Otolaryngology), University of Melbourne, The Royal Victorian Eye and Ear Hospital, East Melbourne, VIC 3002, Australia; Medical Bionics Department, University of Melbourne, East Melbourne, VIC 3002, Australia
- James B Fallon: The Bionics Institute, East Melbourne, VIC 3002, Australia; Department of Surgery (Otolaryngology), University of Melbourne, The Royal Victorian Eye and Ear Hospital, East Melbourne, VIC 3002, Australia; Medical Bionics Department, University of Melbourne, East Melbourne, VIC 3002, Australia
- Paul R Stoddart: ARC Training Centre in Biodevices, Swinburne University of Technology, Hawthorn, VIC 3122, Australia
- Karina Needham (corresponding author): Department of Surgery (Otolaryngology), University of Melbourne, The Royal Victorian Eye and Ear Hospital, East Melbourne, VIC 3002, Australia
19. Idei H, Murata S, Yamashita Y, Ogata T. Homogeneous Intrinsic Neuronal Excitability Induces Overfitting to Sensory Noise: A Robot Model of Neurodevelopmental Disorder. Front Psychiatry 2020; 11:762. PMID: 32903328; PMCID: PMC7434834; DOI: 10.3389/fpsyt.2020.00762.
Abstract
Neurodevelopmental disorders, including autism spectrum disorder, have been intensively investigated at the neural, cognitive, and behavioral levels, but the accumulated knowledge remains fragmented. In particular, developmental learning aspects of symptoms and interactions with the physical environment remain largely unexplored in computational modeling studies, although a leading computational theory has posited associations between psychiatric symptoms and an unusual estimation of information uncertainty (precision), which is an essential aspect of the real world and is estimated through learning processes. Here, we propose a mechanistic explanation that unifies the disparate observations via a hierarchical predictive coding and developmental learning framework, which is demonstrated in experiments using a neural network-controlled robot. The results show that, through the developmental learning process, homogeneous intrinsic neuronal excitability at the neural level induced, via self-organization, changes at the information-processing level, such as hyper sensory precision and overfitting to sensory noise. These changes led to multifaceted alterations at the behavioral level, such as inflexibility, reduced generalization, and motor clumsiness. In addition, these behavioral alterations were accompanied by fluctuating neural activity and excessive development of synaptic connections. These findings might bridge various levels of understanding in autism spectrum and other neurodevelopmental disorders and provide insights into the disease processes underlying observed behaviors and brain activities in individual patients. This study shows the potential of neurorobotics frameworks for modeling how psychiatric disorders arise from dynamic interactions among the brain, body, and uncertain environments.
Affiliations
- Hayato Idei: Department of Intermedia Studies, Waseda University, Tokyo, Japan
- Shingo Murata: Principles of Informatics Research Division, National Institute of Informatics, Tokyo, Japan
- Yuichi Yamashita: Department of Information Medicine, National Center of Neurology and Psychiatry, Tokyo, Japan
- Tetsuya Ogata: Department of Intermedia Art and Science, Waseda University, Tokyo, Japan
20. Tyukin I, Gorban AN, Calvo C, Makarova J, Makarov VA. High-Dimensional Brain: A Tool for Encoding and Rapid Learning of Memories by Single Neurons. Bull Math Biol 2019; 81:4856-4888. PMID: 29556797; PMCID: PMC6874527; DOI: 10.1007/s11538-018-0415-5.
Abstract
Codifying memories is one of the fundamental problems of modern Neuroscience. The functional mechanisms behind this phenomenon remain largely unknown. Experimental evidence suggests that some of the memory functions are performed by stratified brain structures such as the hippocampus. In this particular case, single neurons in the CA1 region receive a highly multidimensional input from the CA3 area, which is a hub for information processing. We thus assess how the abundance of neuronal signalling routes converging onto single cells affects information processing. We show that single neurons can selectively detect and learn arbitrary information items, given that they operate in high dimensions. The argument is based on stochastic separation theorems and the concentration of measure phenomena. We demonstrate that a simple enough functional neuronal model is capable of explaining: (i) the extreme selectivity of single neurons to the information content, (ii) simultaneous separation of several uncorrelated stimuli or informational items from a large set, and (iii) dynamic learning of new items by associating them with already "known" ones. These results constitute a basis for organization of complex memories in ensembles of single neurons. Moreover, they show that no a priori assumptions on the structural organization of neuronal ensembles are necessary for explaining basic concepts of static and dynamic memories.
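The claim that a single linear functional can separate one item from a large random set when the dimension is high is easy to probe numerically. The toy experiment below draws points uniformly on the unit sphere and asks whether the readout w = x (a Fisher-like choice) with a fixed threshold separates the first point from all the others; the sample sizes and threshold are arbitrary, and this only conveys the flavour of the stochastic separation theorems, not their precise statements.

```python
import numpy as np

rng = np.random.default_rng(7)

def one_vs_rest_separable(n_points, dim, threshold=0.5):
    """Can the linear readout w = x separate point x from the rest of a random
    cloud on the unit sphere?  x itself scores 1.0; separation succeeds if
    every other point scores below the threshold."""
    X = rng.standard_normal((n_points, dim))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    scores = X[1:] @ X[0]
    return bool(np.all(scores < threshold))

for dim in (3, 10, 100, 1000):
    success = np.mean([one_vs_rest_separable(2000, dim) for _ in range(20)])
    print(f"dim={dim:4d}: separation success rate {success:.2f}")
```

The success rate climbs toward one as the dimension grows, which is the "blessing of dimensionality" that the single-neuron selectivity argument rests on.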
Affiliations
- Ivan Tyukin: Department of Mathematics, University of Leicester, University Road, Leicester, LE1 7RH, UK; Saint-Petersburg State Electrotechnical University, Prof. Popova Str. 5, Saint Petersburg, Russia
- Alexander N Gorban: Department of Mathematics, University of Leicester, University Road, Leicester, LE1 7RH, UK
- Carlos Calvo: Instituto de Matemática Interdisciplinar, Faculty of Mathematics, Universidad Complutense de Madrid, Avda Complutense s/n, 28040, Madrid, Spain
- Julia Makarova: Department of Translational Neuroscience, Cajal Institute, CSIC, Madrid, Spain; Lobachevsky State University of Nizhny Novgorod, Gagarin Ave. 23, Nizhny Novgorod, Russia, 603950
- Valeri A Makarov: Instituto de Matemática Interdisciplinar, Faculty of Mathematics, Universidad Complutense de Madrid, Avda Complutense s/n, 28040, Madrid, Spain; Lobachevsky State University of Nizhny Novgorod, Gagarin Ave. 23, Nizhny Novgorod, Russia, 603950
21. Lubejko ST, Fontaine B, Soueidan SE, MacLeod KM. Spike threshold adaptation diversifies neuronal operating modes in the auditory brain stem. J Neurophysiol 2019; 122:2576-2590. PMID: 31577531; DOI: 10.1152/jn.00234.2019.
Abstract
Single neurons function along a spectrum of neuronal operating modes whose properties determine how the output firing activity is generated from synaptic input. The auditory brain stem contains a diversity of neurons, from pure coincidence detectors to pure integrators and those with intermediate properties. We investigated how intrinsic spike initiation mechanisms regulate neuronal operating mode in the avian cochlear nucleus. Although the neurons in one division of the avian cochlear nucleus, nucleus magnocellularis, have been studied in depth, the spike threshold dynamics of the tonically firing neurons of a second division of cochlear nucleus, nucleus angularis (NA), remained unexplained. The input-output functions of tonically firing NA neurons were interrogated with directly injected in vivo-like current stimuli during whole cell patch-clamp recordings in vitro. Increasing the amplitude of the noise fluctuations in the current stimulus enhanced the firing rates in one subset of tonically firing neurons ("differentiators") but not another ("integrators"). We found that spike thresholds showed significantly greater adaptation and variability in the differentiator neurons. A leaky integrate-and-fire neuronal model with an adaptive spike initiation process derived from sodium channel dynamics was fit to the firing responses and could recapitulate >80% of the precise temporal firing across a range of fluctuation and mean current levels. Greater threshold adaptation explained the frequency-current curve changes due to a hyperpolarized shift in the effective adaptation voltage range and longer-lasting threshold adaptation in differentiators. The fine-tuning of the intrinsic properties of different NA neurons suggests they may have specialized roles in spectrotemporal processing.
New & Noteworthy: Avian cochlear nucleus angularis (NA) neurons are responsible for encoding sound intensity for sound localization and spectrotemporal processing. An adaptive spike threshold mechanism fine-tunes a subset of repetitive-spiking neurons in NA to confer coincidence detector-like properties. A model based on sodium channel inactivation properties reproduced the activity via a hyperpolarized shift in adaptation conferring fluctuation sensitivity.
Collapse
Affiliation(s)
- Susan T Lubejko
- Department of Biology, University of Maryland, College Park, Maryland
| | - Bertrand Fontaine
- Laboratory of Auditory Neurophysiology, University of Leuven, Leuven, Belgium
| | - Sara E Soueidan
- Department of Biology, University of Maryland, College Park, Maryland
| | - Katrina M MacLeod
- Department of Biology, University of Maryland, College Park, Maryland
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland
- Center for the Comparative and Evolutionary Biology of Hearing, University of Maryland, College Park, Maryland
| |
Collapse
|
22
|
Busch SE, Khakhalin AS. Intrinsic temporal tuning of neurons in the optic tectum is shaped by multisensory experience. J Neurophysiol 2019; 122:1084-1096. [PMID: 31291161 DOI: 10.1152/jn.00099.2019] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
For a biological neural network to be functional, its neurons need to be connected with synapses of appropriate strength, and each neuron needs to appropriately respond to its synaptic inputs. This second aspect of network tuning is maintained by intrinsic plasticity; yet it is often considered secondary to changes in connectivity and mostly limited to adjustments of overall excitability of each neuron. Here we argue that even nonoscillatory neurons can be tuned to inputs of different temporal dynamics and that they can routinely adjust this tuning to match the statistics of their synaptic activation. Using the dynamic clamp technique, we show that, in the tectum of Xenopus tadpole, neurons become selective for faster inputs when animals are exposed to fast visual stimuli but remain responsive to longer inputs in animals exposed to slower, looming, or multisensory stimulation. We also report a homeostatic cotuning between synaptic and intrinsic temporal properties of individual tectal cells. These results expand our understanding of intrinsic plasticity in the brain and suggest that there may exist an additional dimension of network tuning that has been so far overlooked.

NEW & NOTEWORTHY We use dynamic clamp to show that individual neurons in the tectum of Xenopus tadpoles are selectively tuned to either shorter (more synchronous) or longer (less synchronous) synaptic inputs. We also demonstrate that this intrinsic temporal tuning is strongly shaped by sensory experiences. This new phenomenon, which is likely to be mediated by changes in sodium channel inactivation, is bound to have important consequences for signal processing and the development of local recurrent connections.
Collapse
Affiliation(s)
- Silas E Busch
- Biology Program, Bard College, Annandale-on-Hudson, New York
| | | |
Collapse
|
23
|
Gorban AN, Makarov VA, Tyukin IY. The unreasonable effectiveness of small neural ensembles in high-dimensional brain. Phys Life Rev 2018; 29:55-88. [PMID: 30366739 DOI: 10.1016/j.plrev.2018.09.005] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2018] [Accepted: 09/20/2018] [Indexed: 10/28/2022]
Abstract
Complexity is an indisputable, well-known, and broadly accepted feature of the brain. Despite the apparently obvious and widely-spread consensus on the brain complexity, sprouts of the single neuron revolution emerged in neuroscience in the 1970s. They brought many unexpected discoveries, including grandmother or concept cells and sparse coding of information in the brain. In machine learning for a long time, the famous curse of dimensionality seemed to be an unsolvable problem. Nevertheless, the idea of the blessing of dimensionality becomes gradually more and more popular. Ensembles of non-interacting or weakly interacting simple units prove to be an effective tool for solving essentially multidimensional and apparently incomprehensible problems. This approach is especially useful for one-shot (non-iterative) correction of errors in large legacy artificial intelligence systems and when the complete re-training is impossible or too expensive. These simplicity revolutions in the era of complexity have deep fundamental reasons grounded in geometry of multidimensional data spaces. To explore and understand these reasons we revisit the background ideas of statistical physics. In the course of the 20th century they were developed into the concentration of measure theory. The Gibbs equivalence of ensembles with further generalizations shows that the data in high-dimensional spaces are concentrated near shells of smaller dimension. New stochastic separation theorems reveal the fine structure of the data clouds. We review and analyse biological, physical, and mathematical problems at the core of the fundamental question: how can high-dimensional brain organise reliable and fast learning in high-dimensional world of data by simple tools? To meet this challenge, we outline and setup a framework based on statistical physics of data. Two critical applications are reviewed to exemplify the approach: one-shot correction of errors in intellectual systems and emergence of static and associative memories in ensembles of single neurons. Error correctors should be simple; not damage the existing skills of the system; allow fast non-iterative learning and correction of new mistakes without destroying the previous fixes. All these demands can be satisfied by new tools based on the concentration of measure phenomena and stochastic separation theory. We show how a simple enough functional neuronal model is capable of explaining: i) the extreme selectivity of single neurons to the information content of high-dimensional data, ii) simultaneous separation of several uncorrelated informational items from a large set of stimuli, and iii) dynamic learning of new items by associating them with already "known" ones. These results constitute a basis for organisation of complex memories in ensembles of single neurons.
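The "blessing of dimensionality" argument can be illustrated numerically. The toy script below (an independent example, not the authors' code or theorems) checks how often a single random point can be cut off from a large random data cloud by one simple linear functional; the separation probability approaches one as the dimension grows, which is what makes one-shot error correctors built from simple, non-interacting units plausible.

```python
import numpy as np

def separability_rate(dim, n_points=5000, n_trials=100, eps=0.1, seed=0):
    """Fraction of trials in which one random point x is cut off from a random
    n-point cloud by the single linear functional h(y) = <y, x> / <x, x>, i.e.
    h(y) < 1 - eps for every cloud point y. Points are uniform in [-1, 1]^dim."""
    rng = np.random.default_rng(seed)
    successes = 0
    for _ in range(n_trials):
        cloud = rng.uniform(-1.0, 1.0, size=(n_points, dim))
        x = rng.uniform(-1.0, 1.0, size=dim)
        projections = cloud @ x / (x @ x)          # projections of the cloud onto x
        successes += bool(np.all(projections < 1.0 - eps))
    return successes / n_trials

if __name__ == "__main__":
    for dim in (2, 10, 50, 200):
        print(f"dim = {dim:3d}: P(random point separable by one linear unit) "
              f"~ {separability_rate(dim):.2f}")
```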
Collapse
Affiliation(s)
- Alexander N Gorban
- Department of Mathematics, University of Leicester, Leicester, LE1 7RH, UK; Lobachevsky University, Nizhni Novgorod, Russia.
| | - Valeri A Makarov
- Lobachevsky University, Nizhni Novgorod, Russia; Instituto de Matemática Interdisciplinar, Faculty of Mathematics, Universidad Complutense de Madrid, Avda Complutense s/n, 28040 Madrid, Spain.
| | - Ivan Y Tyukin
- Department of Mathematics, University of Leicester, Leicester, LE1 7RH, UK; Lobachevsky University, Nizhni Novgorod, Russia; Saint-Petersburg State Electrotechnical University, Saint-Petersburg, Russia.
| |
Collapse
|
24
|
Masquelier T, Kheradpisheh SR. Optimal Localist and Distributed Coding of Spatiotemporal Spike Patterns Through STDP and Coincidence Detection. Front Comput Neurosci 2018; 12:74. [PMID: 30279653 PMCID: PMC6153331 DOI: 10.3389/fncom.2018.00074] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2018] [Accepted: 08/17/2018] [Indexed: 11/13/2022] Open
Abstract
Repeating spatiotemporal spike patterns exist and carry information. Here we investigated how a single spiking neuron can optimally respond to one given pattern (localist coding), or to either one of several patterns (distributed coding, i.e., the neuron's response is ambiguous but the identity of the pattern could be inferred from the response of multiple neurons), but not to random inputs. To do so, we extended a theory developed in a previous paper (Masquelier, 2017), which was limited to localist coding. More specifically, we computed analytically the signal-to-noise ratio (SNR) of a multi-pattern-detector neuron, using a threshold-free leaky integrate-and-fire (LIF) neuron model with non-plastic unitary synapses and homogeneous Poisson inputs. Surprisingly, when increasing the number of patterns, the SNR decreases slowly, and remains acceptable for several tens of independent patterns. In addition, we investigated whether spike-timing-dependent plasticity (STDP) could enable a neuron to reach the theoretical optimal SNR. To this aim, we simulated a LIF equipped with STDP, and repeatedly exposed it to multiple input spike patterns, embedded in equally dense Poisson spike trains. The LIF progressively became selective to every repeating pattern with no supervision, and stopped discharging during the Poisson spike trains. Furthermore, tuning certain STDP parameters, the resulting pattern detectors were optimal. Tens of independent patterns could be learned by a single neuron using a low adaptive threshold, in contrast with previous studies, in which higher thresholds led to localist coding only. Taken together these results suggest that coincidence detection and STDP are powerful mechanisms, fully compatible with distributed coding. Yet we acknowledge that our theory is limited to single neurons, and thus also applies to feed-forward networks, but not to recurrent ones.
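The learning setup can be prototyped compactly. The sketch below is a deliberately simplified re-implementation of the general idea (one LIF unit with trace-based additive STDP, one frozen 50-ms spike pattern embedded in rate-matched Poisson activity); the parameters are untuned guesses rather than the paper's values, so the degree of selectivity reached within a given simulation time may vary and may require adjustment.

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 500, 0.005                                   # afferents, spike prob per 1-ms step (= 5 Hz)
steps, pat_len, pat_period = 200_000, 50, 300       # 200 s, pattern every 300 ms
pattern = rng.random((pat_len, N)) < p              # frozen 50-ms snippet (same rate as background)

tau_m, thr, refrac, w0 = 15.0, 24.0, 5, 0.55        # LIF in arbitrary EPSP units
tau_plus, tau_minus, a_plus, a_minus = 15.0, 20.0, 0.03, 0.025
decay_m, decay_p, decay_q = [np.exp(-1.0 / tc) for tc in (tau_m, tau_plus, tau_minus)]

w = np.full(N, w0)
v, pre_tr, post_tr, ref = 0.0, np.zeros(N), 0.0, 0
spike_times, in_pattern = [], []

for t in range(steps):
    phase = t % pat_period
    pat_now = phase < pat_len
    x = pattern[phase] if pat_now else (rng.random(N) < p)   # presynaptic spikes this ms

    pre_tr = pre_tr * decay_p + x                 # presynaptic traces (drive LTP)
    post_tr *= decay_q                            # postsynaptic trace (drives LTD)
    w[x] -= a_minus * post_tr                     # LTD on presynaptic spikes after a postsynaptic one

    v = v * decay_m + (w * x).sum()               # leaky integration of weighted EPSPs
    if ref > 0:
        ref -= 1
        v = 0.0
    elif v >= thr:                                # postsynaptic spike
        spike_times.append(t)
        in_pattern.append(pat_now)
        w += a_plus * pre_tr                      # LTP for inputs that fired just before
        post_tr += 1.0
        v, ref = 0.0, refrac
    np.clip(w, 0.0, 1.0, out=w)

late = [ip for s, ip in zip(spike_times, in_pattern) if s > steps // 2]
print(f"spikes in second half: {len(late)}, fraction inside pattern windows: "
      f"{np.mean(late) if late else 0:.2f} (chance ~ {pat_len / pat_period:.2f})")
```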
Collapse
Affiliation(s)
- Timothée Masquelier
- Centre de Recherche Cerveau et Cognition, UMR5549 CNRS-Université Toulouse 3, Toulouse, France
- Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC, Universidad de Sevilla, Sevilla, Spain
| | - Saeed R Kheradpisheh
- Department of Computer Science, Faculty of Mathematical Sciences and Computer, Kharazmi University, Tehran, Iran
| |
Collapse
|
25
|
Azarfar A, Calcini N, Huang C, Zeldenrust F, Celikel T. Neural coding: A single neuron's perspective. Neurosci Biobehav Rev 2018; 94:238-247. [PMID: 30227142 DOI: 10.1016/j.neubiorev.2018.09.007] [Citation(s) in RCA: 38] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2017] [Revised: 08/27/2018] [Accepted: 09/07/2018] [Indexed: 12/15/2022]
Abstract
What any sensory neuron knows about the world is one of the cardinal questions in Neuroscience. Information from the sensory periphery travels across synaptically coupled neurons as each neuron encodes information by varying the rate and timing of its action potentials (spikes). Spatiotemporally correlated changes in this spiking regimen across neuronal populations are the neural basis of sensory representations. In the somatosensory cortex, however, spiking of individual (or pairs of) cortical neurons is only minimally informative about the world. Recent studies showed that one solution neurons implement to counteract this information loss is adapting their rate of information transfer to the ongoing synaptic activity by changing the membrane potential at which spike is generated. Here we first introduce the principles of information flow from the sensory periphery to the primary sensory cortex in a model sensory (whisker) system, and subsequently discuss how the adaptive spike threshold gates the intracellular information transfer from the somatic post-synaptic potential to action potentials, controlling the information content of communication across somatosensory cortical neurons.
Collapse
Affiliation(s)
- Alireza Azarfar
- Department of Neurophysiology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, the Netherlands
| | - Niccoló Calcini
- Department of Neurophysiology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, the Netherlands
| | - Chao Huang
- Department of Neurophysiology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, the Netherlands
| | - Fleur Zeldenrust
- Department of Neurophysiology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, the Netherlands
| | - Tansu Celikel
- Department of Neurophysiology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, the Netherlands.
| |
Collapse
|
26
|
Shi X, Jin Y, Cang J. Transformation of Feature Selectivity From Membrane Potential to Spikes in the Mouse Superior Colliculus. Front Cell Neurosci 2018; 12:163. [PMID: 29970991 PMCID: PMC6018398 DOI: 10.3389/fncel.2018.00163] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2018] [Accepted: 05/25/2018] [Indexed: 11/13/2022] Open
Abstract
Neurons in the visual system display varying degrees of selectivity for stimulus features such as orientation and direction. Such feature selectivity is generated and processed by intricate circuit and synaptic mechanisms. A key factor in this process is the input-output transformation from membrane potential (Vm) to spikes in individual neurons. Here, we use in vivo whole-cell recording to study Vm-to-spike transformation of visual feature selectivity in the superficial neurons of the mouse superior colliculus (SC). As expected from the spike threshold effect, direction and orientation selectivity increase from Vm to spike responses. The degree of this increase is highly variable, and interestingly, it is correlated with the receptive field size of the recorded neurons. We find that the relationships between Vm and spike rate and between Vm dynamics and spike initiation are also correlated with receptive field size, which likely contribute to the observed input-output transformation of feature selectivity. Together, our findings provide useful information for understanding information processing and visual transformation in the mouse SC.
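The sharpening of selectivity from Vm to spikes by the spike threshold (the "iceberg effect") can be illustrated with a toy calculation on a synthetic tuning curve; the numbers below are invented for illustration and are not taken from the recordings in the paper.

```python
import numpy as np

def osi(responses, angles_deg):
    """Orientation selectivity index: |sum r*exp(2i*theta)| / sum r (circular measure)."""
    theta = np.deg2rad(angles_deg)
    return np.abs(np.sum(responses * np.exp(2j * theta))) / np.sum(responses)

angles = np.arange(0, 180, 15)                                   # stimulus orientations (deg)
vm = 4.0 + 6.0 * np.exp(-((angles - 90.0) / 30.0) ** 2)          # synthetic Vm tuning, mV above rest

threshold = 7.0                                                  # spike threshold, mV above rest
gain = 5.0                                                       # spikes/s per mV above threshold
spike_rate = gain * np.clip(vm - threshold, 0.0, None)           # rectifying Vm-to-spike transform

print(f"OSI of Vm responses:    {osi(vm, angles):.2f}")
print(f"OSI of spike responses: {osi(spike_rate, angles):.2f}")
```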
Collapse
Affiliation(s)
- Xuefeng Shi
- Department of Neurobiology, Northwestern University, Evanston, IL, United States
- Tianjin Eye Hospital, Tianjin Key Laboratory of Ophthalmology and Visual Science, Tianjin Eye Institute, Clinical College of Ophthalmology, Tianjin Medical University, Tianjin, China
| | - Yanjiao Jin
- Department of Neurobiology, Northwestern University, Evanston, IL, United States
- General Hospital, Tianjin Medical University, Tianjin, China
| | - Jianhua Cang
- Department of Neurobiology, Northwestern University, Evanston, IL, United States
- Department of Biology and Department of Psychology, University of Virginia, Charlottesville, VA, United States
| |
Collapse
|
27
|
Chambers B, Levy M, Dechery JB, MacLean JN. Ensemble stacking mitigates biases in inference of synaptic connectivity. Netw Neurosci 2018; 2:60-85. [PMID: 29911678 PMCID: PMC5989998 DOI: 10.1162/netn_a_00032] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2017] [Accepted: 10/11/2017] [Indexed: 01/26/2023] Open
Abstract
A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches.
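Only the stacking step is sketched below: per-pair scores from two hypothetical inference methods are simulated as noisy readouts of a ground-truth connectivity, combined with a logistic-regression weighting fit on a training set, and compared by ROC AUC on held-out pairs. This is a schematic of the ensemble idea only, not a re-implementation of the spiking-network simulations or the specific inference algorithms used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def simulate_scores(n_pairs=5000, p_conn=0.1):
    """Hypothetical per-pair outputs of two inference methods, modeled as noisy
    readouts (with independent noise) of the same ground-truth connectivity."""
    truth = rng.random(n_pairs) < p_conn
    method1 = 1.0 * truth + rng.normal(0.0, 1.0, n_pairs)   # stand-in for, e.g., a correlation score
    method2 = 1.0 * truth + rng.normal(0.0, 1.3, n_pairs)   # stand-in for a second, noisier method
    return np.column_stack([method1, method2]), truth.astype(int)

X_train, y_train = simulate_scores()
X_test, y_test = simulate_scores()

# "Stacking": learn a linear combination of the individual scores on one network,
# then evaluate all predictors on held-out pairs.
stacker = LogisticRegression().fit(X_train, y_train)
combined = stacker.predict_proba(X_test)[:, 1]

for name, score in [("method 1 alone", X_test[:, 0]),
                    ("method 2 alone", X_test[:, 1]),
                    ("stacked ensemble", combined)]:
    print(f"{name:17s} AUC = {roc_auc_score(y_test, score):.3f}")
```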
Collapse
Affiliation(s)
- Brendan Chambers
- Committee on Computational Neuroscience, University of Chicago, Chicago, IL, USA
| | - Maayan Levy
- Committee on Computational Neuroscience, University of Chicago, Chicago, IL, USA
| | - Joseph B Dechery
- Committee on Computational Neuroscience, University of Chicago, Chicago, IL, USA
| | - Jason N MacLean
- Committee on Computational Neuroscience, University of Chicago, Chicago, IL, USA
- Department of Neurobiology, University of Chicago, Chicago, IL, USA
| |
Collapse
|
28
|
Grigonis R, Alaburda A. Spike threshold dynamics in spinal motoneurons during scratching and swimming. J Physiol 2017; 595:5843-5855. [PMID: 28653361 PMCID: PMC5577544 DOI: 10.1113/jp274434] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2017] [Accepted: 06/13/2017] [Indexed: 01/06/2023] Open
Abstract
KEY POINTS: Action potential threshold can vary depending on firing history and synaptic inputs. We used an ex vivo carapace-spinal cord preparation from adult turtles to study spike threshold dynamics in motoneurons during two distinct types of functional motor behaviour - fictive scratching and fictive swimming. The threshold potential depolarizes by about 10 mV within each burst of spikes generated during scratch and swim network activity and recovers between bursts to a slightly depolarized level. Slow synaptic integration resulting in a wave of membrane potential depolarization is the factor influencing the threshold potential within firing bursts during motor behaviours. Depolarization of the threshold potential decreases the excitability of motoneurons and may provide a mechanism for stabilization of the response of a motoneuron to intense synaptic inputs to maintain the motor commands within an optimal range for muscle activation.

ABSTRACT: During functional spinal neural network activity motoneurons receive intense synaptic input, and this could modulate the threshold for action potential generation, providing the ability to dynamically adjust the excitability and recruitment order for functional needs. In the present study we investigated the dynamics of action potential threshold during motor network activity. Intracellular recordings from spinal motoneurons in an ex vivo carapace-spinal cord preparation from adult turtles were performed during two distinct types of motor behaviour - fictive scratching and fictive swimming. We found that the threshold of the first spike in episodes of scratching and swimming was the lowest. The threshold potential depolarizes by about 10 mV within each burst of spikes generated during scratch and swim network activity and recovers between bursts to a slightly depolarized level. Depolarization of the threshold potential results in decreased excitability of motoneurons. Synaptic inputs do not modulate the threshold of the first action potential during episodes of scratching or of swimming. There is no correlation between changes in spike threshold and interspike intervals within bursts. Slow synaptic integration that results in a wave of membrane potential depolarization rather than fast synaptic events preceding each spike is the factor influencing the threshold potential within firing bursts during motor behaviours.
Collapse
Affiliation(s)
- Ramunas Grigonis
- Department of Neurobiology and Biophysics, Institute of Biosciences, Vilnius University, Sauletekio ave. 7, LT-10257 Vilnius, Lithuania
| | - Aidas Alaburda
- Department of Neurobiology and Biophysics, Institute of Biosciences, Vilnius University, Sauletekio ave. 7, LT-10257 Vilnius, Lithuania
| |
Collapse
|
29
|
Singer AC, Talei Franzesi G, Kodandaramaiah SB, Flores FJ, Cohen JD, Lee AK, Borgers C, Forest CR, Kopell NJ, Boyden ES. Mesoscale-duration activated states gate spiking in response to fast rises in membrane voltage in the awake brain. J Neurophysiol 2017; 118:1270-1291. [PMID: 28566460 PMCID: PMC5558023 DOI: 10.1152/jn.00116.2017] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2017] [Revised: 05/26/2017] [Accepted: 05/29/2017] [Indexed: 12/13/2022] Open
Abstract
Seconds-scale network states, affecting many neurons within a network, modulate neural activity by complementing fast integration of neuron-specific inputs that arrive in the milliseconds before spiking. Nonrhythmic subthreshold dynamics at intermediate timescales, however, are less well characterized. We found, using automated whole cell patch clamping in vivo, that spikes recorded in CA1 and barrel cortex in awake mice are often preceded not only by monotonic voltage rises lasting milliseconds but also by more gradual (lasting tens to hundreds of milliseconds) depolarizations. The latter exert a gating function on spiking, in a fashion that depends on the gradual rise duration: the probability of spiking was higher for longer gradual rises, even when controlled for the amplitude of the gradual rises. Barrel cortex double-autopatch recordings show that gradual rises are shared across some, but not all, neurons. The gradual rises may represent a new kind of state, intermediate both in timescale and in proportion of neurons participating, which gates a neuron's ability to respond to subsequent inputs.

NEW & NOTEWORTHY We analyzed subthreshold activity preceding spikes in hippocampus and barrel cortex of awake mice. Aperiodic voltage ramps extending over tens to hundreds of milliseconds consistently precede and facilitate spikes, in a manner dependent on both their amplitude and their duration. These voltage ramps represent a "mesoscale" activated state that gates spike production in vivo.
Collapse
Affiliation(s)
- Annabelle C Singer
- Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, Georgia
- Media Laboratory and McGovern Institute for Brain Research, Departments of Biological Engineering and Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
| | - Giovanni Talei Franzesi
- Media Laboratory and McGovern Institute for Brain Research, Departments of Biological Engineering and Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
| | - Suhasa B Kodandaramaiah
- Media Laboratory and McGovern Institute for Brain Research, Departments of Biological Engineering and Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Department of Mechanical Engineering, University of Minnesota, Twin Cities, Minneapolis, Minnesota
| | - Francisco J Flores
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
| | - Jeremy D Cohen
- Howard Hughes Medical Institute, Janelia Research Campus, Ashburn, Virginia
| | - Albert K Lee
- Howard Hughes Medical Institute, Janelia Research Campus, Ashburn, Virginia
| | | | - Craig R Forest
- George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia; and
| | - Nancy J Kopell
- Department of Mathematics and Statistics, Boston University, Boston, Massachusetts
| | - Edward S Boyden
- Media Laboratory and McGovern Institute for Brain Research, Departments of Biological Engineering and Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts;
| |
Collapse
|
30
|
Yoon YC. LIF and Simplified SRM Neurons Encode Signals Into Spikes via a Form of Asynchronous Pulse Sigma-Delta Modulation. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2017; 28:1192-1205. [PMID: 26929065 DOI: 10.1109/tnnls.2016.2526029] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
We show how two spiking neuron models encode continuous-time signals into spikes (action potentials, time-encoded pulses, or point processes) using a special form of sigma-delta modulation (SDM). In particular, we show that the well-known leaky integrate-and-fire (LIF) neuron and the simplified spike response model (SRM0) neuron encode the continuous-time signals into spikes via a proposed asynchronous pulse SDM (APSDM) scheme. The encoder is clock free using level-crossing sampling with a single-level quantizer, unipolar signaling, differential coding, and pulse-shaping filters. The decoder, in the form of a low-pass filter or bandpass smoothing filter, can be fed with the spikes to reconstruct an estimate of the signal. The density of the spikes reflects the amplitude of the encoded signal. Numerical examples illustrating the concepts and the signaling efficiency of APSDM vis-à-vis SDM for comparable reconstruction accuracies are presented. We anticipate these results will facilitate the design of spiking neurons and spiking neural networks as well as cross fertilizations between the fields of neural coding and the SDM.
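The encode/decode loop can be sketched in a few lines. For clarity the encoder below drops the leak (a plain integrate-and-fire with subtractive reset), whereas the paper's APSDM analysis treats the leaky LIF and SRM0 cases in full; the decoder is a unit-DC-gain low-pass filter, and all parameter values here are illustrative. The spike density tracks the signal amplitude, and smoothing the spike train recovers an estimate of the input.

```python
import numpy as np

dt = 1e-4
t = np.arange(0.0, 2.0, dt)
signal = 1.5 + np.sin(2 * np.pi * 1 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)   # non-negative input

# Encoder: integrate-and-fire with subtractive reset (level-crossing behaviour).
thr = 0.01
v, spikes = 0.0, np.zeros_like(t)
for i, s in enumerate(signal):
    v += dt * s                     # integrate the input
    if v >= thr:                    # level crossing -> emit a spike
        spikes[i] = 1.0
        v -= thr                    # subtractive reset keeps the residual

# Decoder: low-pass smoothing of the spike train; spike density ~ signal amplitude.
tau_d = 0.02
kernel = np.exp(-np.arange(0.0, 5 * tau_d, dt) / tau_d)
kernel /= kernel.sum()                                       # unit DC gain
estimate = (thr / dt) * np.convolve(spikes, kernel)[: len(t)]

mask = t > 0.1                                               # skip the filter transient
rmse = np.sqrt(np.mean((signal[mask] - estimate[mask]) ** 2))
print(f"{int(spikes.sum())} spikes emitted; reconstruction RMSE = {rmse:.3f} "
      f"(signal SD = {signal.std():.3f})")
```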
Collapse
|
31
|
Keine C, Rübsamen R, Englitz B. Inhibition in the auditory brainstem enhances signal representation and regulates gain in complex acoustic environments. eLife 2016; 5. [PMID: 27855778 PMCID: PMC5148601 DOI: 10.7554/elife.19295] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2016] [Accepted: 11/17/2016] [Indexed: 12/30/2022] Open
Abstract
Inhibition plays a crucial role in neural signal processing, shaping and limiting responses. In the auditory system, inhibition already modulates second order neurons in the cochlear nucleus, e.g. spherical bushy cells (SBCs). While the physiological basis of inhibition and excitation is well described, their functional interaction in signal processing remains elusive. Using a combination of in vivo loose-patch recordings, iontophoretic drug application, and detailed signal analysis in the Mongolian Gerbil, we demonstrate that inhibition is widely co-tuned with excitation, and leads only to minor sharpening of the spectral response properties. Combinations of complex stimuli and neuronal input-output analysis based on spectrotemporal receptive fields revealed inhibition to render the neuronal output temporally sparser and more reproducible than the input. Overall, inhibition plays a central role in improving the temporal response fidelity of SBCs across a wide range of input intensities and thereby provides the basis for high-fidelity signal processing.
Collapse
Affiliation(s)
- Christian Keine
- Faculty of Bioscience, Pharmacy and Psychology, University of Leipzig, Leipzig, Germany
| | - Rudolf Rübsamen
- Faculty of Bioscience, Pharmacy and Psychology, University of Leipzig, Leipzig, Germany
| | - Bernhard Englitz
- Department of Neurophysiology, Donders Center for Neuroscience, Radboud University, Nijmegen, Netherlands
| |
Collapse
|
32
|
Wang L, Wang H, Yu L, Chen Y. Spike-Threshold Variability Originated from Separatrix-Crossing in Neuronal Dynamics. Sci Rep 2016; 6:31719. [PMID: 27546614 PMCID: PMC4992847 DOI: 10.1038/srep31719] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2016] [Accepted: 07/26/2016] [Indexed: 11/09/2022] Open
Abstract
The threshold voltage for action potential generation is a key regulator of neuronal signal processing, yet the mechanism of its dynamic variation is still not well described. In this paper, we propose that threshold phenomena can be classified as parameter thresholds and state thresholds. Voltage thresholds which belong to the state threshold are determined by the 'general separatrix' in state space. We demonstrate that the separatrix generally exists in the state space of neuron models. The general form of separatrix was assumed as the function of both states and stimuli and the previously assumed threshold evolving equation versus time is naturally deduced from the separatrix. In terms of neuronal dynamics, the threshold voltage variation, which is affected by different stimuli, is determined by crossing the separatrix at different points in state space. We suggest that the separatrix-crossing mechanism in state space is the intrinsic dynamic mechanism for threshold voltages and post-stimulus threshold phenomena. These proposals are also systematically verified in example models, three of which have analytic separatrices and one is the classic Hodgkin-Huxley model. The separatrix-crossing framework provides an overview of the neuronal threshold and will facilitate understanding of the nature of threshold variability.
Collapse
Affiliation(s)
- Longfei Wang
- Institute of Theoretical Physics, Lanzhou University, Lanzhou, Gansu 730000, China
| | - Hengtong Wang
- College of Physics and Information Technology, Shaanxi Normal University, Xi'an 710062, China
| | - Lianchun Yu
- Institute of Theoretical Physics, Lanzhou University, Lanzhou, Gansu 730000, China
| | - Yong Chen
- Center of Soft Matter Physics and its Application, Beihang University, Beijing 100191, China
- School of Physics and Nuclear Energy Engineering, Beihang University, Beijing 100191, China
| |
Collapse
|
33
|
Huang C, Resnik A, Celikel T, Englitz B. Adaptive Spike Threshold Enables Robust and Temporally Precise Neuronal Encoding. PLoS Comput Biol 2016; 12:e1004984. [PMID: 27304526 PMCID: PMC4909286 DOI: 10.1371/journal.pcbi.1004984] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2015] [Accepted: 05/16/2016] [Indexed: 01/29/2023] Open
Abstract
Neural processing rests on the intracellular transformation of information as synaptic inputs are translated into action potentials. This transformation is governed by the spike threshold, which depends on the history of the membrane potential on many temporal scales. While the adaptation of the threshold after spiking activity has been addressed before both theoretically and experimentally, it has only recently been demonstrated that the subthreshold membrane state also influences the effective spike threshold. The consequences for neural computation are not well understood yet. We address this question here using neural simulations and whole cell intracellular recordings in combination with information theoretic analysis. We show that an adaptive spike threshold leads to better stimulus discrimination for tight input correlations than would be achieved otherwise, independent from whether the stimulus is encoded in the rate or pattern of action potentials. The time scales of input selectivity are jointly governed by membrane and threshold dynamics. Encoding information using adaptive thresholds further ensures robust information transmission across cortical states i.e. decoding from different states is less state dependent in the adaptive threshold case, if the decoding is performed in reference to the timing of the population response. Results from in vitro neural recordings were consistent with simulations from adaptive threshold neurons. In summary, the adaptive spike threshold reduces information loss during intracellular information transfer, improves stimulus discriminability and ensures robust decoding across membrane states in a regime of highly correlated inputs, similar to those seen in sensory nuclei during the encoding of sensory information. A neuron is a tiny computer that transforms electrical inputs into electrical outputs. While neurons have been investigated and modeled for many decades, some aspects remain elusive. Recently, it was demonstrated that the membrane (voltage) state of a neuron determines its threshold to spiking. In the present study we asked, what are the consequences of this dependence for the computation the neuron performs. We find that this so called adaptive threshold allows neurons to be more focused on inputs which arrive close in time with other inputs. Also, it allows neurons to represent their information more robustly, such that a readout of their activity is less influenced by the state the brain is in. The present use of information theory provides a solid foundation for these results. We obtained the results primarily in detailed simulations, but performed neural recordings to verify these properties in real neurons. In summary, an adaptive spiking threshold allows neurons to specifically compute robustly with a focus on tight temporal correlations in their input.
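One minimal way to see the effect (an independent sketch, not the authors' model or information-theoretic analysis) is to couple the spike threshold to a slowly low-pass-filtered copy of the subthreshold membrane potential: slowly accumulating depolarization then raises the threshold, while fast, tightly correlated input can still reach it. Parameters below are illustrative.

```python
import numpy as np

def count_spikes(drive, adaptive, dt=1e-4, tau_m=0.01,
                 theta0=15e-3, tau_theta=0.005, c_theta=0.6):
    """LIF with voltage measured relative to rest. If `adaptive`, the threshold
    tracks a low-pass-filtered copy of the subthreshold membrane potential."""
    v, theta_slow, n_spikes = 0.0, 0.0, 0
    for i_t in drive:
        v += dt / tau_m * (-v) + dt * i_t
        theta_slow += dt / tau_theta * (c_theta * v - theta_slow)  # slow Vm-coupled component
        theta = theta0 + (theta_slow if adaptive else 0.0)
        if v >= theta:
            n_spikes += 1
            v = 0.0
    return n_spikes

dt = 1e-4
n = int(0.2 / dt)
charge = 80e-3                                              # identical integrated drive in both cases
fast = np.zeros(n); fast[1000:1020] = charge / (20 * dt)    # 2-ms transient ("correlated" input)
slow = np.zeros(n); slow[1000:1400] = charge / (400 * dt)   # 40-ms plateau of the same area

for label, drive in [("fast, tightly correlated-like", fast),
                     ("slow, temporally dispersed   ", slow)]:
    print(f"{label} input: fixed threshold -> {count_spikes(drive, False)} spike(s), "
          f"Vm-coupled threshold -> {count_spikes(drive, True)} spike(s)")
```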
Collapse
Affiliation(s)
- Chao Huang
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Laboratory of Neural Circuits and Plasticity, University of Southern California, Los Angeles, California, United States of America
| | - Andrey Resnik
- Laboratory of Neural Circuits and Plasticity, University of Southern California, Los Angeles, California, United States of America
| | - Tansu Celikel
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- * E-mail: (BE); (TC)
| | - Bernhard Englitz
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- * E-mail: (BE); (TC)
| |
Collapse
|
34
|
Regulation of Irregular Neuronal Firing by Autaptic Transmission. Sci Rep 2016; 6:26096. [PMID: 27185280 PMCID: PMC4869121 DOI: 10.1038/srep26096] [Citation(s) in RCA: 77] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2016] [Accepted: 04/27/2016] [Indexed: 11/08/2022] Open
Abstract
The importance of self-feedback autaptic transmission in modulating spike-time irregularity is still poorly understood. By using a biophysical model that incorporates autaptic coupling, we here show that self-innervation of neurons participates in the modulation of irregular neuronal firing, primarily by regulating the occurrence frequency of burst firing. In particular, we find that both excitatory and electrical autapses increase the occurrence of burst firing, thus reducing neuronal firing regularity. In contrast, inhibitory autapses suppress burst firing and therefore tend to improve the regularity of neuronal firing. Importantly, we show that these findings are independent of the firing properties of individual neurons, and as such can be observed for neurons operating in different modes. Our results provide an insightful mechanistic understanding of how different types of autapses shape irregular firing at the single-neuron level, and they highlight the functional importance of autaptic self-innervation in taming and modulating neurodynamics.
Collapse
|
35
|
Kobayashi R, Kitano K. Impact of slow K(+) currents on spike generation can be described by an adaptive threshold model. J Comput Neurosci 2016; 40:347-62. [PMID: 27085337 PMCID: PMC4860204 DOI: 10.1007/s10827-016-0601-0] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2015] [Revised: 03/06/2016] [Accepted: 04/01/2016] [Indexed: 12/01/2022]
Abstract
A neuron that is stimulated by rectangular current injections initially responds with a high firing rate, followed by a decrease in the firing rate. This phenomenon is called spike-frequency adaptation and is usually mediated by slow K(+) currents, such as the M-type K(+) current (I_M) or the Ca(2+)-activated K(+) current (I_AHP). It is not clear how the detailed biophysical mechanisms regulate spike generation in a cortical neuron. In this study, we investigated the impact of slow K(+) currents on spike generation mechanism by reducing a detailed conductance-based neuron model. We showed that the detailed model can be reduced to a multi-timescale adaptive threshold model, and derived the formulae that describe the relationship between slow K(+) current parameters and reduced model parameters. Our analysis of the reduced model suggests that slow K(+) currents have a differential effect on the noise tolerance in neural coding.
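The reduced model class referred to here, a multi-timescale adaptive threshold unit, is compact to state: a non-resetting leaky integrator is compared against a threshold that jumps after every spike and relaxes back with two time constants. The sketch below uses illustrative values, not the formulae or fitted parameters derived in the paper, and shows the resulting spike-frequency adaptation to a current step.

```python
import numpy as np

def mat_neuron(I_ext, dt=1e-4, tau_m=0.005, R=50e6, omega=15e-3,
               alphas=(10e-3, 2e-3), taus=(0.01, 0.2), t_ref=0.002):
    """Multi-timescale adaptive-threshold-style unit: a non-resetting leaky integrator
    plus a spike-triggered threshold with two decay time constants. Returns spike times (s)."""
    v, h, last_spike, spikes = 0.0, [0.0, 0.0], -np.inf, []
    for i, i_ext in enumerate(I_ext):
        t = i * dt
        v += dt / tau_m * (-v + R * i_ext)
        h = [hj * np.exp(-dt / tj) for hj, tj in zip(h, taus)]   # threshold components decay
        theta = omega + sum(h)
        if v >= theta and (t - last_spike) >= t_ref:
            spikes.append(t)
            last_spike = t
            h = [hj + aj for hj, aj in zip(h, alphas)]           # threshold jumps, voltage is not reset
    return np.array(spikes)

# Step current: the adaptation carried by slow K+ currents in the detailed model
# appears here as the slowly decaying threshold component.
dt = 1e-4
I_ext = np.zeros(int(1.0 / dt)); I_ext[int(0.1 / dt):] = 0.6e-9   # 0.6 nA step at t = 0.1 s
spk = mat_neuron(I_ext, dt=dt)
early = np.sum((spk > 0.1) & (spk < 0.2)); late = np.sum(spk > 0.9)
print(f"{len(spk)} spikes; rate 0.1-0.2 s: {early/0.1:.0f} Hz, rate 0.9-1.0 s: {late/0.1:.0f} Hz")
```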
Collapse
Affiliation(s)
- Ryota Kobayashi
- Principles of Informatics Research Division, National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo, Japan
- Department of Informatics, SOKENDAI (The Graduate University for Advanced Studies), 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo, Japan
| | - Katsunori Kitano
- Department of Human and Computer Intelligence, Ritsumeikan University, 1-1-1 Nojihigashi, Kusatsu, Shiga, 525-8577, Japan
| |
Collapse
|
36
|
Masquelier T, Portelli G, Kornprobst P. Microsaccades enable efficient synchrony-based coding in the retina: a simulation study. Sci Rep 2016; 6:24086. [PMID: 27063867 PMCID: PMC4827057 DOI: 10.1038/srep24086] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2015] [Accepted: 03/15/2016] [Indexed: 11/09/2022] Open
Abstract
It is now reasonably well established that microsaccades (MS) enhance visual perception, although the underlying neuronal mechanisms are unclear. Here, using numerical simulations, we show that MSs enable efficient synchrony-based coding among the primate retinal ganglion cells (RGC). First, using a jerking contrast edge as stimulus, we demonstrate a qualitative change in the RGC responses: synchronous firing, with a precision in the 10 ms range, only occurs at high speed and high contrast. MSs appear to be sufficiently fast to be able to reach the synchronous regime. Conversely, the other kinds of fixational eye movements known as tremor and drift both hardly synchronize RGCs because of a too weak amplitude and a too slow speed respectively. Then, under natural image stimulation, we find that each MS causes certain RGCs to fire synchronously, namely those whose receptive fields contain contrast edges after the MS. The emitted synchronous spike volley thus rapidly transmits the most salient edges of the stimulus, which often constitute the most crucial information. We demonstrate that the readout could be done rapidly by simple coincidence-detector neurons without knowledge of the MS landing time, and that the required connectivity could emerge spontaneously with spike timing-dependent plasticity.
Collapse
Affiliation(s)
- Timothée Masquelier
- INSERM, U968, Paris, F-75012, France
- Sorbonne Universités, UPMC Univ Paris 06, UMR_S 968, Institut de la Vision, Paris, F-75012, France
- CNRS, UMR_7210, Paris, F-75012, France
| | | | | |
Collapse
|
37
|
NMDA Receptors Multiplicatively Scale Visual Signals and Enhance Directional Motion Discrimination in Retinal Ganglion Cells. Neuron 2016; 89:1277-1290. [PMID: 26948896 DOI: 10.1016/j.neuron.2016.02.013] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2015] [Revised: 11/25/2015] [Accepted: 01/15/2016] [Indexed: 11/24/2022]
Abstract
Postsynaptic responses in many CNS neurons are typically small and variable, often making it difficult to distinguish physiologically relevant signals from background noise. To extract salient information, neurons are thought to integrate multiple synaptic inputs and/or selectively amplify specific synaptic activation patterns. Here, we present evidence for a third strategy: directionally selective ganglion cells (DSGCs) in the mouse retina multiplicatively scale visual signals via a mechanism that requires both nonlinear NMDA receptor (NMDAR) conductances in DSGC dendrites and directionally tuned inhibition provided by the upstream retinal circuitry. Postsynaptic multiplication enables DSGCs to discriminate visual motion more accurately in noisy visual conditions without compromising directional tuning. These findings demonstrate a novel role for NMDARs in synaptic processing and provide new insights into how synaptic and network features interact to accomplish physiologically relevant neural computations.
Collapse
|
38
|
Mensi S, Hagens O, Gerstner W, Pozzorini C. Enhanced Sensitivity to Rapid Input Fluctuations by Nonlinear Threshold Dynamics in Neocortical Pyramidal Neurons. PLoS Comput Biol 2016; 12:e1004761. [PMID: 26907675 PMCID: PMC4764342 DOI: 10.1371/journal.pcbi.1004761] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2015] [Accepted: 01/19/2016] [Indexed: 11/25/2022] Open
Abstract
The way in which single neurons transform input into output spike trains has fundamental consequences for network coding. Theories and modeling studies based on standard Integrate-and-Fire models implicitly assume that, in response to increasingly strong inputs, neurons modify their coding strategy by progressively reducing their selective sensitivity to rapid input fluctuations. Combining mathematical modeling with in vitro experiments, we demonstrate that, in L5 pyramidal neurons, the firing threshold dynamics adaptively adjust the effective timescale of somatic integration in order to preserve sensitivity to rapid signals over a broad range of input statistics. For that, a new Generalized Integrate-and-Fire model featuring nonlinear firing threshold dynamics and conductance-based adaptation is introduced that outperforms state-of-the-art neuron models in predicting the spiking activity of neurons responding to a variety of in vivo-like fluctuating currents. Our model allows for efficient parameter extraction and can be analytically mapped to a Generalized Linear Model in which both the input filter—describing somatic integration—and the spike-history filter—accounting for spike-frequency adaptation—dynamically adapt to the input statistics, as experimentally observed. Overall, our results provide new insights on the computational role of different biophysical processes known to underlie adaptive coding in single neurons and support previous theoretical findings indicating that the nonlinear dynamics of the firing threshold due to Na+-channel inactivation regulate the sensitivity to rapid input fluctuations. Over the last decades, a variety of simplified spiking models have been shown to achieve a surprisingly high performance in predicting the neuronal responses to in vitro somatic current injections. Because of the complex adaptive behavior featured by cortical neurons, this success is however restricted to limited stimulus ranges: model parameters optimized for a specific input regime are often inappropriate to describe the response to input currents with different statistical properties. In the present study, a new spiking neuron model is introduced that captures single-neuron computation over a wide range of input statistics and explains different aspects of the neuronal dynamics within a single framework. Our results indicate that complex forms of single neuron adaptation are mediated by the nonlinear dynamics of the firing threshold and that the input-output transformation performed by cortical pyramidal neurons can be intuitively understood in terms of an enhanced Generalized Linear Model in which both the input filter and the spike-history filter adapt to the input statistics.
Collapse
Affiliation(s)
- Skander Mensi
- Laboratory of Computational Neuroscience (LCN), Brain Mind Institute, School of Computer and Communication Sciences and School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Olivier Hagens
- Laboratory of Neural Microcircuitry (LNMC), Brain Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Wulfram Gerstner
- Laboratory of Computational Neuroscience (LCN), Brain Mind Institute, School of Computer and Communication Sciences and School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Christian Pozzorini
- Laboratory of Computational Neuroscience (LCN), Brain Mind Institute, School of Computer and Communication Sciences and School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- * E-mail:
| |
Collapse
|
39
|
Fontaine B. Enhanced novelty detection in auditory scenes through adaptation of inhibition. BMC Neurosci 2015. [PMCID: PMC4697530 DOI: 10.1186/1471-2202-16-s1-p72] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022] Open
|
40
|
Yi GS, Wang J, Deng B, Hong SH, Wei XL, Chen YY. Action potential threshold of wide dynamic range neurons in rat spinal dorsal horn evoked by manual acupuncture at ST36. Neurocomputing 2015. [DOI: 10.1016/j.neucom.2015.03.077] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
41
|
Harrison PM, Badel L, Wall MJ, Richardson MJE. Experimentally Verified Parameter Sets for Modelling Heterogeneous Neocortical Pyramidal-Cell Populations. PLoS Comput Biol 2015; 11:e1004165. [PMID: 26291316 PMCID: PMC4546387 DOI: 10.1371/journal.pcbi.1004165] [Citation(s) in RCA: 36] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2014] [Accepted: 01/30/2015] [Indexed: 11/19/2022] Open
Abstract
Models of neocortical networks are increasingly including the diversity of excitatory and inhibitory neuronal classes. Significant variability in cellular properties is also seen within a nominal neuronal class and this heterogeneity can be expected to influence the population response and information processing in networks. Recent studies have examined the population and network effects of variability in a particular neuronal parameter with some plausibly chosen distribution. However, the empirical variability and covariance seen across multiple parameters are rarely included, partly due to the lack of data on parameter correlations in forms convenient for model construction. To address this we quantify the heterogeneity within and between the neocortical pyramidal-cell classes in layers 2/3, 4, and the slender-tufted and thick-tufted pyramidal cells of layer 5 using a combination of intracellular recordings, single-neuron modelling and statistical analyses. From the response to both square-pulse and naturalistic fluctuating stimuli, we examined the class-dependent variance and covariance of electrophysiological parameters and identify the role of the h current in generating parameter correlations. A byproduct of the dynamic I-V method we employed is the straightforward extraction of reduced neuron models from experiment. Empirically these models took the refractory exponential integrate-and-fire form and provide an accurate fit to the perisomatic voltage responses of the diverse pyramidal-cell populations when the class-dependent statistics of the model parameters were respected. By quantifying the parameter statistics we obtained an algorithm which generates populations of model neurons, for each of the four pyramidal-cell classes, that adhere to experimentally observed marginal distributions and parameter correlations. As well as providing this tool, which we hope will be of use for exploring the effects of heterogeneity in neocortical networks, we also provide the code for the dynamic I-V method and make the full electrophysiological data set available. Neurons are the fundamental components of the nervous system and a quantitative description of their properties is a prerequisite to understanding the complex structures they comprise, from microcircuits to networks. Mathematical modelling provides an essential tool to this end and there has been intense effort directed at analysing networks constructed from different classes of neurons. However, even neurons from the same class show a broad variability in parameter values and the distributions and correlations between these parameters are likely to significantly affect network properties. To quantify this variability, we used a combination of intracellular recording, single-neuron modelling, and statistical analysis to measure the physiological variability in pyramidal-cell populations of the neocortex. We employ protocols that measure parameters from both square-pulse and naturalistic stimuli, characterising the perisomatic integration properties of these cells and allowing for the straightforward extraction of mathematically tractable reduced neuron models. We provide algorithms to generate populations of these neuron models that respect the parameter variability and co-variability observed in our experiments. These represent novel tools for exploring heterogeneity in neocortical networks that will be useful for subsequent theoretical and numerical studies.
Finally, we make our full electrophysiological dataset available for other research groups to extend and improve on our analysis.
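The population-generation step can be sketched as follows. The parameter names follow the refractory exponential integrate-and-fire form, but the means, spreads, and correlations below are invented placeholders; in practice they would be replaced by the class-dependent statistics reported with the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (not fitted) statistics for log-parameters of a heterogeneous
# exponential integrate-and-fire population: capacitance C (nF), leak conductance
# gL (nS), spike-onset sharpness DeltaT (mV). Correlations here are invented.
names = ["C", "gL", "DeltaT"]
mean_log = np.log([0.25, 10.0, 1.5])
sd_log = np.array([0.3, 0.4, 0.5])
corr = np.array([[1.0, 0.6, -0.2],
                 [0.6, 1.0, -0.3],
                 [-0.2, -0.3, 1.0]])
cov = np.outer(sd_log, sd_log) * corr

# Sample the population in log-space (log-normal marginals preserve positivity)
# while respecting the specified covariance structure.
population = np.exp(rng.multivariate_normal(mean_log, cov, size=1000))

print("parameter  mean    CV")
for j, nm in enumerate(names):
    col = population[:, j]
    print(f"{nm:9s}  {col.mean():6.2f}  {col.std() / col.mean():.2f}")
print("empirical correlation of log-parameters:")
print(np.round(np.corrcoef(np.log(population).T), 2))
```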
Collapse
Affiliation(s)
- Paul M. Harrison
- MOAC Doctoral Training Centre, University of Warwick, Coventry, United Kingdom
- School of Life Sciences, University of Warwick, Coventry, United Kingdom
- Warwick Systems Biology Centre, University of Warwick, Coventry, United Kingdom
| | - Laurent Badel
- Laboratory for Circuit Mechanisms of Sensory Perception, RIKEN Brain Science Institute, Wako, Saitama, Japan
| | - Mark J. Wall
- School of Life Sciences, University of Warwick, Coventry, United Kingdom
| | | |
Collapse
|
42
|
Smirnova EY, Zaitsev AV, Kim KK, Chizhov AV. The domain of neuronal firing on a plane of input current and conductance. J Comput Neurosci 2015; 39:217-33. [PMID: 26278407 DOI: 10.1007/s10827-015-0573-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2014] [Revised: 08/04/2015] [Accepted: 08/06/2015] [Indexed: 10/23/2022]
Abstract
The activation of neurotransmitter receptors increases the current flow and membrane conductance and thus controls the firing rate of a neuron. In the present work, we justified the two-dimensional representation of a neuronal input by voltage-independent current and conductance and obtained experimentally and numerically a complete input-output (I/O) function. The dependence of the steady-state firing rate on the input current and conductance was studied as a two-parameter I/O function. We employed the dynamic patch clamp technique in slices to get this dependence for the whole domain of two input signals that evoke stationary spike trains in a single neuron (Ω-domain). As found, the Ω-domain is finite and an additional conductance decreases the range of spike-evoking currents. The I/O function has been reproduced in a Hodgkin-Huxley-like model. Among the simulated effects of different factors on the I/O function, including passive and active membrane properties, external conditions and input signal properties, the most interesting were: the shift of the right boundary of the Ω-domain (corresponding to the excitation block) leftwards due to the decrease of the maximal potassium conductance; and the reduction of the Ω-domain by the decrease of the maximal sodium concentration. As found in experiments and simulations, the Ω-domain is reduced by the decrease of extracellular sodium concentration, by cooling, and by adding slow potassium currents providing interspike interval adaptation; the Ω-domain height is increased by adding color noise. Our modeling data provided a generalization of I/O dependencies that is consistent with previous studies and our experiments. Our results suggest that both current flow and membrane conductance should be taken into account when determining neuronal firing activity.
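The two-parameter input-output idea can be explored qualitatively with a plain integrate-and-fire sketch. Such a model has no excitation block, so only the left boundary of the firing domain appears; the point illustrated is simply that an added shunting conductance raises rheobase and compresses the firing rates. Parameters are illustrative, not fitted to the recordings.

```python
import numpy as np

def lif_rate(I_inj, g_add, C=200e-12, gL=10e-9, EL=-70e-3, E_add=-70e-3,
             vth=-50e-3, vreset=-65e-3, t_ref=0.002, T=2.0, dt=1e-4):
    """Steady firing rate (Hz) of a LIF with an extra shunting conductance g_add.
    I_inj in amperes, conductances in siemens."""
    v, n_spikes, t_last = EL, 0, -np.inf
    for i in range(int(T / dt)):
        t = i * dt
        if t - t_last < t_ref:          # absolute refractory period
            v = vreset
            continue
        dv = (-gL * (v - EL) - g_add * (v - E_add) + I_inj) / C
        v += dt * dv
        if v >= vth:
            n_spikes += 1
            t_last = t
            v = vreset
    return n_spikes / T

currents = np.arange(0.0, 1.01e-9, 0.2e-9)           # injected current, 0 to 1 nA
for g_add in (0.0, 10e-9, 20e-9):                     # added conductance, 0 to 20 nS
    rates = [lif_rate(I, g_add) for I in currents]
    row = "  ".join(f"{r:5.1f}" for r in rates)
    print(f"g_add = {g_add*1e9:4.1f} nS | rates (Hz) for I = 0..1 nA: {row}")
```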
Collapse
Affiliation(s)
- E Yu Smirnova
- Ioffe Physical-Technical Institute of the Russian Academy of Sciences, Politekhnicheskaya str., 26, 194021, St.-Petersburg, Russia.
| | - A V Zaitsev
- Sechenov Institute of Evolutionary Physiology and Biochemistry of the Russian Academy of Sciences, Saint-Petersburg, Russia
| | - K Kh Kim
- Sechenov Institute of Evolutionary Physiology and Biochemistry of the Russian Academy of Sciences, Saint-Petersburg, Russia
| | - A V Chizhov
- Ioffe Physical-Technical Institute of the Russian Academy of Sciences, Politekhnicheskaya str., 26, 194021, St.-Petersburg, Russia
- Sechenov Institute of Evolutionary Physiology and Biochemistry of the Russian Academy of Sciences, Saint-Petersburg, Russia
| |
Collapse
|
43
|
Biophysical Insights into How Spike Threshold Depends on the Rate of Membrane Potential Depolarization in Type I and Type II Neurons. PLoS One 2015; 10:e0130250. [PMID: 26083350 PMCID: PMC4471164 DOI: 10.1371/journal.pone.0130250] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2014] [Accepted: 05/19/2015] [Indexed: 01/22/2023] Open
Abstract
Dynamic spike threshold plays a critical role in neuronal input-output relations. In many neurons, the threshold potential depends on the rate of membrane potential depolarization (dV/dt) preceding a spike. There are two basic classes of neural excitability, i.e., Type I and Type II, according to input-output properties. Although the dynamical and biophysical basis of their spike initiation has been established, the spike threshold dynamic for each cell type has not been well described. Here, we use a biophysical model to investigate how spike threshold depends on dV/dt in two types of neuron. It is observed that Type II spike threshold is more depolarized and more sensitive to dV/dt than Type I. With phase plane analysis, we show that each threshold dynamic arises from the different separatrix and K+ current kinetics. By analyzing subthreshold properties of membrane currents, we find the activation of hyperpolarizing current prior to spike initiation is a major factor that regulates the threshold dynamics. The outward K+ current in Type I neuron does not activate at the perithresholds, which makes its spike threshold insensitive to dV/dt. The Type II K+ current activates prior to spike initiation and there is a large net hyperpolarizing current at the perithresholds, which results in a depolarized threshold as well as a pronounced threshold dynamic. These predictions are further attested in several other functionally equivalent cases of neural excitability. Our study provides a fundamental description about how intrinsic biophysical properties contribute to the threshold dynamics in Type I and Type II neurons, which could decipher their significant functions in neural coding.
Collapse
|
44
|
Jones DL, Johnson EC, Ratnam R. A stimulus-dependent spike threshold is an optimal neural coder. Front Comput Neurosci 2015; 9:61. [PMID: 26082710 PMCID: PMC4451370 DOI: 10.3389/fncom.2015.00061] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2014] [Accepted: 05/05/2015] [Indexed: 11/13/2022] Open
Abstract
A neural code based on sequences of spikes can consume a significant portion of the brain's energy budget. Thus, energy considerations would dictate that spiking activity be kept as low as possible. However, a high spike-rate improves the coding and representation of signals in spike trains, particularly in sensory systems. These are competing demands, and selective pressure has presumably worked to optimize coding by apportioning a minimum number of spikes so as to maximize coding fidelity. The mechanisms by which a neuron generates spikes while maintaining a fidelity criterion are not known. Here, we show that a signal-dependent neural threshold, similar to a dynamic or adapting threshold, optimizes the trade-off between spike generation (encoding) and fidelity (decoding). The threshold mimics a post-synaptic membrane (a low-pass filter) and serves as an internal decoder. Further, it sets the average firing rate (the energy constraint). The decoding process provides an internal copy of the coding error to the spike-generator which emits a spike when the error equals or exceeds a spike threshold. When optimized, the trade-off leads to a deterministic spike firing-rule that generates optimally timed spikes so as to maximize fidelity. The optimal coder is derived in closed-form in the limit of high spike-rates, when the signal can be approximated as a piece-wise constant signal. The predicted spike-times are close to those obtained experimentally in the primary electrosensory afferent neurons of weakly electric fish (Apteronotus leptorhynchus) and pyramidal neurons from the somatosensory cortex of the rat. We suggest that KCNQ/Kv7 channels (underlying the M-current) are good candidates for the decoder. They are widely coupled to metabolic processes and do not inactivate. We conclude that the neural threshold is optimized to generate an energy-efficient and high-fidelity neural code.
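The encoder/decoder loop described here is straightforward to sketch: the unit maintains an internal low-pass reconstruction of its own spike train and fires whenever the coding error (signal minus reconstruction) reaches a threshold, so the threshold sets the spike-rate/fidelity trade-off. The toy below uses made-up parameters, not the paper's closed-form optimum or the electric-fish and rat data.

```python
import numpy as np

def error_feedback_encoder(signal, dt, threshold, tau_dec=0.02):
    """Spike when the coding error s(t) - s_hat(t) reaches `threshold`, where s_hat
    is a low-pass (postsynaptic-membrane-like) reconstruction of the spike train."""
    decay = np.exp(-dt / tau_dec)
    s_hat, spikes, recon = 0.0, np.zeros_like(signal), np.zeros_like(signal)
    for i, s in enumerate(signal):
        s_hat *= decay                          # internal decoder: leaky kernel
        if s - s_hat >= threshold:              # error reaches threshold -> emit a spike
            spikes[i] = 1.0
            s_hat += threshold                  # each spike adds one kernel of height = threshold
        recon[i] = s_hat
    return spikes, recon

dt = 1e-4
t = np.arange(0.0, 2.0, dt)
signal = 1.0 + 0.8 * np.sin(2 * np.pi * 2 * t) + 0.3 * np.sin(2 * np.pi * 5 * t)

for thr in (0.1, 0.2, 0.4):                     # lower threshold -> more spikes, higher fidelity
    spikes, recon = error_feedback_encoder(signal, dt, thr)
    rmse = np.sqrt(np.mean((signal - recon) ** 2))
    print(f"threshold {thr:.1f}: {int(spikes.sum()):4d} spikes, reconstruction RMSE {rmse:.3f}")
```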
Collapse
Affiliation(s)
- Douglas L. Jones
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Advanced Digital Sciences Center, Illinois at Singapore Pte. Ltd., Singapore, Singapore
| | - Erik C. Johnson
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL, USA
| | - Rama Ratnam
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Advanced Digital Sciences Center, Illinois at Singapore Pte. Ltd., Singapore, Singapore
| |
Collapse
|
45
|
Yi GS, Wang J, Tsang KM, Wei XL, Deng B. Input-output relation and energy efficiency in the neuron with different spike threshold dynamics. Front Comput Neurosci 2015; 9:62. [PMID: 26074810 PMCID: PMC4444831 DOI: 10.3389/fncom.2015.00062] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2015] [Accepted: 05/08/2015] [Indexed: 11/13/2022] Open
Abstract
Neurons encode and transmit information by generating sequences of output spikes, a process with high energy consumption. A spike is initiated when membrane depolarization reaches a threshold voltage. In many neurons, the threshold is dynamic and depends on the rate of membrane depolarization (dV/dt) preceding a spike. Identifying the metabolic energy involved in neural coding and its relationship to threshold dynamics is critical to understanding neuronal function and evolution. Here, we use a modified Morris-Lecar model to investigate the neuronal input-output properties and energy efficiency associated with different spike threshold dynamics. We find that neurons with a dynamic threshold sensitive to dV/dt generate a discontinuous frequency-current curve and a type II phase response curve (PRC) through a Hopf bifurcation, and that weak noise can prohibit spiking just after the bifurcation occurs. A threshold insensitive to dV/dt instead results in a continuous frequency-current curve, a type I PRC and a saddle-node on invariant circle bifurcation, and weak noise cannot inhibit spiking. We also show that the bifurcation, frequency-current curve and PRC type associated with different threshold dynamics arise from distinct subthreshold interactions of membrane currents. Further, we observe that the energy consumption of the neuron is related to its firing characteristics. Depolarization of the spike threshold improves neuronal energy efficiency by reducing the overlap of Na+ and K+ currents during an action potential. The highest energy efficiency is achieved at a more depolarized spike threshold and high stimulus current. These results provide a fundamental biophysical connection that links spike threshold dynamics, the input-output relation, energetics and spike initiation, which could help uncover the mechanisms of neural encoding.
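One common way to quantify the energetic cost referred to above, the overlap of inward Na+ and outward K+ currents during an action potential, is the Na+ entry ratio: the total Na+ charge per spike divided by the capacitive minimum C·ΔV. The sketch below computes both quantities for a single spike of the standard Hodgkin-Huxley squid-axon model (an assumption; the paper itself uses a modified Morris-Lecar model), with an arbitrary brief current pulse used to trigger the spike.

```python
import numpy as np

# Standard Hodgkin-Huxley squid-axon parameters (uF/cm^2, mS/cm^2, mV).
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
ENa, EK, EL = 50.0, -77.0, -54.4

def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

def single_spike(I_amp=20.0, pulse_ms=2.0, T=20.0, dt=0.01):
    """Trigger one action potential with a brief current pulse (forward Euler)."""
    n = int(T / dt)
    V, m, h, nn = -65.0, 0.05, 0.6, 0.32
    Vs, INa, IK = np.empty(n), np.empty(n), np.empty(n)
    for i in range(n):
        INa[i] = gNa * m ** 3 * h * (V - ENa)   # inward Na+ current (negative)
        IK[i] = gK * nn ** 4 * (V - EK)         # outward K+ current (positive)
        Vs[i] = V
        I_stim = I_amp if i * dt < pulse_ms else 0.0
        V += dt * (I_stim - INa[i] - IK[i] - gL * (V - EL)) / C
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        nn += dt * (a_n(V) * (1 - nn) - b_n(V) * nn)
    return Vs, INa, IK, dt

Vs, INa, IK, dt = single_spike()
na_load = -dt * INa[INa < 0].sum()                      # Na+ charge per spike (nC/cm^2)
overlap = dt * np.minimum(-np.clip(INa, None, 0.0), np.clip(IK, 0.0, None)).sum()
cap_min = C * (Vs.max() - Vs.min())                     # capacitive minimum (nC/cm^2)
print(f"Na+ load {na_load:.0f} nC/cm^2, capacitive minimum {cap_min:.0f} nC/cm^2, "
      f"Na+ entry ratio {na_load / cap_min:.2f}, Na+/K+ overlap charge {overlap:.0f} nC/cm^2")
```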
Collapse
Affiliation(s)
- Guo-Sheng Yi
- School of Electrical Engineering and Automation, Tianjin University, Tianjin, China
| | - Jiang Wang
- School of Electrical Engineering and Automation, Tianjin University, Tianjin, China
| | - Kai-Ming Tsang
- Department of Electrical Engineering, The Hong Kong Polytechnic University, Hong Kong, China
| | - Xi-Le Wei
- School of Electrical Engineering and Automation, Tianjin University, Tianjin, China
| | - Bin Deng
- School of Electrical Engineering and Automation, Tianjin University, Tianjin, China
| |
Collapse
|
46
|
Braun W, Matthews PC, Thul R. First-passage times in integrate-and-fire neurons with stochastic thresholds. Phys Rev E Stat Nonlin Soft Matter Phys 2015; 91:052701. [PMID: 26066193 DOI: 10.1103/PhysRevE.91.052701] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/10/2014] [Indexed: 06/04/2023]
Abstract
We consider a leaky integrate-and-fire neuron with deterministic subthreshold dynamics and a firing threshold that evolves as an Ornstein-Uhlenbeck process. The formulation of this minimal model is motivated by the experimentally observed widespread variation of neural firing thresholds. We show numerically that the mean first-passage time can depend nonmonotonically on the noise amplitude. For sufficiently large values of the correlation time of the stochastic threshold the mean first-passage time is maximal for nonvanishing noise. We provide an explanation for this effect by analytically transforming the original model into a first-passage-time problem for Brownian motion. This transformation also allows for a perturbative calculation of the first-passage-time histograms. In turn this provides quantitative insights into the mechanisms that lead to the nonmonotonic behavior of the mean first-passage time. The perturbation expansion is in excellent agreement with direct numerical simulations. The approach developed here can be applied to any deterministic subthreshold dynamics and any Gauss-Markov processes for the firing threshold. This opens up the possibility to incorporate biophysically detailed components into the subthreshold dynamics, rendering our approach a powerful framework that sits between traditional integrate-and-fire models and complex mechanistic descriptions of neural dynamics.
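This model is also easy to explore numerically. The following Monte-Carlo sketch (not the authors' analytical transformation, and with purely illustrative parameter values) simulates a deterministic leaky integrate-and-fire voltage together with an Ornstein-Uhlenbeck firing threshold and estimates the mean first-passage time for several noise amplitudes sigma.

```python
import numpy as np

def mean_fpt(sigma, mu=1.2, tau_m=10.0, theta0=1.0, tau_th=50.0,
             dt=0.01, t_max=300.0, trials=4000, seed=1):
    """Mean first-passage time (over trials that fire within t_max) of a
    deterministic LIF voltage through an Ornstein-Uhlenbeck threshold."""
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    v = 0.0                                   # deterministic membrane potential
    th = np.full(trials, theta0)              # one stochastic threshold per trial
    fpt = np.full(trials, np.nan)
    alive = np.ones(trials, dtype=bool)
    for i in range(n):
        v += dt * (mu - v) / tau_m
        th += dt * (theta0 - th) / tau_th + sigma * np.sqrt(dt) * rng.standard_normal(trials)
        crossed = alive & (v >= th)
        fpt[crossed] = i * dt
        alive &= ~crossed
        if not alive.any():
            break
    return np.nanmean(fpt), 1.0 - alive.mean()

# Scan the threshold-noise amplitude; with sigma = 0 the passage time is the
# deterministic value tau_m * ln(mu / (mu - theta0)), about 17.9 ms here.
for sigma in (0.0, 0.02, 0.05, 0.1):
    m, frac = mean_fpt(sigma)
    print(f"sigma = {sigma:.2f}: mean FPT = {m:.1f} ms (fraction fired: {frac:.2f})")
```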
Collapse
Affiliation(s)
- Wilhelm Braun
- School of Mathematical Sciences and Centre for Mathematical Medicine and Biology, University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom
| | - Paul C Matthews
- School of Mathematical Sciences and Centre for Mathematical Medicine and Biology, University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom
| | - Rüdiger Thul
- School of Mathematical Sciences and Centre for Mathematical Medicine and Biology, University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom
| |
Collapse
|
47
|
Bibikov NG. Some features of the sound-signal envelope extracted by cochlear nucleus neurons in grass frog. Biophysics (Nagoya-shi) 2015. [DOI: 10.1134/s0006350915030045] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
|
48
|
Brette R. What is the most realistic single-compartment model of spike initiation? PLoS Comput Biol 2015; 11:e1004114. [DOI: 10.1371/journal.pcbi.1004114]
Abstract
A large variety of neuron models are used in theoretical and computational neuroscience, and among these, single-compartment models are a popular kind. These models do not explicitly include the dendrites or the axon, and range from the Hodgkin-Huxley (HH) model to various flavors of integrate-and-fire (IF) models. The main classes of models differ in the way spikes are initiated. Which one is the most realistic? Starting with some general epistemological considerations, I show that the notion of realism comes in two dimensions: empirical content (the sort of predictions that a model can produce) and empirical accuracy (whether these predictions are correct). I then examine the realism of the main classes of single-compartment models along these two dimensions, in light of recent experimental evidence.
Collapse
Affiliation(s)
- Romain Brette
- Institut d’Etudes de la Cognition, Ecole Normale Supérieure, Paris, France
- Sorbonne Universités, UPMC Univ. Paris 06, UMR_S 968, Institut de la Vision, Paris, France
- INSERM, U968, Paris, France
- CNRS, UMR_7210, Paris, France
| |
Collapse
|
49
|
Afshar S, George L, Tapson J, van Schaik A, Hamilton TJ. Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels. Front Neurosci 2014; 8:377. [PMID: 25505378 PMCID: PMC4243566 DOI: 10.3389/fnins.2014.00377] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2014] [Accepted: 11/05/2014] [Indexed: 11/17/2022] Open
Abstract
This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single-neuron scale. The rule set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale, neurons are locked in a race with one another, with the fastest neuron to spike effectively "hiding" its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems, as demonstrated through an implementation on a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research.
Collapse
Affiliation(s)
- Saeed Afshar
- Bioelectronics and Neurosciences, The MARCS Institute, University of Western Sydney, Penrith, NSW, Australia
| | - Libin George
- School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, NSW, Australia
| | - Jonathan Tapson
- Bioelectronics and Neurosciences, The MARCS Institute, University of Western Sydney, Penrith, NSW, Australia
| | - André van Schaik
- Bioelectronics and Neurosciences, The MARCS Institute, University of Western Sydney, Penrith, NSW, Australia
| | - Tara J. Hamilton
- Bioelectronics and Neurosciences, The MARCS Institute, University of Western Sydney, Penrith, NSW, Australia
- School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, NSW, Australia
| |
Collapse
|
50
|
Laudanski J, Zheng Y, Brette R. A Structural Theory of Pitch. eNeuro 2014; 1:ENEURO.0033-14.2014. [PMID: 26464959 PMCID: PMC4596137 DOI: 10.1523/eneuro.0033-14.2014] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2014] [Revised: 11/07/2014] [Accepted: 11/07/2014] [Indexed: 11/21/2022] Open
Abstract
Musical notes can be ordered from low to high along a perceptual dimension called "pitch". A characteristic property of these sounds is their periodic waveform, and periodicity generally correlates with pitch. Thus, pitch is often described as the perceptual correlate of the periodicity of the sound's waveform. However, the existence and salience of pitch also depend in a complex way on other factors, in particular harmonic content. For example, periodic sounds made of high-order harmonics tend to have a weaker pitch than those made of low-order harmonics. Here we examine the theoretical proposition that pitch is the perceptual correlate of the regularity structure of the vibration pattern of the basilar membrane, across place and time, a generalization of the traditional view of pitch. While this proposition also attributes pitch to periodic sounds, we show that it predicts differences between resolved and unresolved harmonic complexes and a complex domain of existence of pitch, in agreement with psychophysical experiments. We also present a possible neural mechanism for pitch estimation based on coincidence detection, which does not require long delays, in contrast with standard temporal models of pitch.
Collapse
Affiliation(s)
- Jonathan Laudanski
- Institut d’Etudes de la Cognition, Ecole Normale Supérieure, Paris, France
- Scientific and Clinical Research Department, Neurelec, Vallauris, France
| | - Yi Zheng
- Institut d’Etudes de la Cognition, Ecole Normale Supérieure, Paris, France
- Sorbonne Universités, UPMC Université Paris 06, UMR_S 968, Institut De La Vision, Paris, F-75012, France
- INSERM, U968, Paris, F-75012, France
- CNRS, UMR_7210, Paris, F-75012, France
| | - Romain Brette
- Institut d’Etudes de la Cognition, Ecole Normale Supérieure, Paris, France
- Sorbonne Universités, UPMC Université Paris 06, UMR_S 968, Institut De La Vision, Paris, F-75012, France
- INSERM, U968, Paris, F-75012, France
- CNRS, UMR_7210, Paris, F-75012, France
| |
Collapse
|