1
Mahuas G, Marre O, Mora T, Ferrari U. Small-correlation expansion to quantify information in noisy sensory systems. Phys Rev E 2023; 108:024406. [PMID: 37723816] [DOI: 10.1103/physreve.108.024406]
Abstract
Neural networks encode information through their collective spiking activity in response to external stimuli. This population response is noisy and strongly correlated, with a complex interplay between correlations induced by the stimulus, and correlations caused by shared noise. Understanding how these correlations affect information transmission has so far been limited to pairs or small groups of neurons, because the curse of dimensionality impedes the evaluation of mutual information in larger populations. Here, we develop a small-correlation expansion to compute the stimulus information carried by a large population of neurons, yielding interpretable analytical expressions in terms of the neurons' firing rates and pairwise correlations. We validate the approximation on synthetic data and demonstrate its applicability to electrophysiological recordings in the vertebrate retina, allowing us to quantify the effects of noise correlations between neurons and of memory in single neurons.
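The paper's expansion itself is not reproduced here; as a generic illustration of how stimulus information can be assessed from firing rates and pairwise noise correlations, the sketch below computes the linear Fisher information (a standard lower-dimensional proxy, not the authors' method) for a synthetic population with weak, uniform noise correlations. All parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50
# Hypothetical stimulus sensitivity (derivative of each neuron's tuning curve).
fprime = rng.normal(0.0, 1.0, n)

# Noise covariance: independent variances plus weak uniform pairwise correlations.
var = rng.uniform(0.5, 2.0, n)
c = 0.05  # assumed small noise-correlation coefficient
sigma = c * np.sqrt(np.outer(var, var))
np.fill_diagonal(sigma, var)

# Linear Fisher information, with and without the noise correlations:
J_corr = fprime @ np.linalg.solve(sigma, fprime)
J_ind = float(np.sum(fprime ** 2 / var))
print(J_corr, J_ind)
```

Comparing `J_corr` against `J_ind` quantifies how much the assumed shared noise helps or hurts encoding, in the same spirit as the paper's comparison of correlated and independent populations.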
Affiliation(s)
- Gabriel Mahuas
- Institut de la Vision, Sorbonne Université, CNRS, INSERM, 17 rue Moreau, 75012 Paris, France
- Laboratoire de Physique de l'École Normale Supérieure, CNRS, PSL University, Sorbonne Université, Université Paris-Cité, 24 rue Lhomond, 75005 Paris, France
- Olivier Marre
- Institut de la Vision, Sorbonne Université, CNRS, INSERM, 17 rue Moreau, 75012 Paris, France
- Thierry Mora
- Laboratoire de Physique de l'École Normale Supérieure, CNRS, PSL University, Sorbonne Université, Université Paris-Cité, 24 rue Lhomond, 75005 Paris, France
- Ulisse Ferrari
- Institut de la Vision, Sorbonne Université, CNRS, INSERM, 17 rue Moreau, 75012 Paris, France
2
Palabas T, Longtin A, Ghosh D, Uzuntarla M. Controlling the spontaneous firing behavior of a neuron with astrocyte. Chaos 2022; 32:051101. [PMID: 35649970] [DOI: 10.1063/5.0093234]
Abstract
Mounting evidence in recent years suggests that astrocytes, a sub-type of glial cells, not only provide metabolic and structural support for neurons and synapses but also play critical roles in regulating the proper functioning of the nervous system. In this work, we investigate the effect of astrocytes on the spontaneous firing activity of a neuron through a combined model that includes a neuron-astrocyte pair. First, we show that an astrocyte may provide a kind of multistability in neuron dynamics by inducing different firing modes such as random and bursty spiking. Then, we identify the underlying mechanism of this behavior and search for the astrocytic factors that may have regulatory roles in different firing regimes. More specifically, we explore how an astrocyte can participate in the occurrence and control of spontaneous irregular spiking activity of a neuron in random spiking mode. Additionally, we systematically investigate the bursty firing regime dynamics of the neuron under variation of biophysical factors related to the intracellular environment of the astrocyte. It is found that an astrocyte coupled to a neuron can provide a control mechanism for both spontaneous firing irregularity and burst firing statistics, i.e., burst regularity and size.
Affiliation(s)
- Tugba Palabas
- Department of Biomedical Engineering, Zonguldak Bulent Ecevit University, 67100 Zonguldak, Turkey
- Andre Longtin
- Department of Physics, University of Ottawa, Ottawa, Ontario K1N 6N5, Canada
- Dibakar Ghosh
- Physics and Applied Mathematics Unit, Indian Statistical Institute, Kolkata 700108, India
- Muhammet Uzuntarla
- Department of Bioengineering, Gebze Technical University, 41400 Kocaeli, Turkey
3
Bias-free estimation of information content in temporally sparse neuronal activity. PLoS Comput Biol 2022; 18:e1009832. [PMID: 35148310] [PMCID: PMC8836373] [DOI: 10.1371/journal.pcbi.1009832]
Abstract
Applying information theoretic measures to neuronal activity data enables the quantification of neuronal encoding quality. However, when the sample size is limited, a naïve estimation of the information content typically contains a systematic overestimation (upward bias), which may lead to misinterpretation of coding characteristics. This bias is exacerbated in Ca2+ imaging because of the temporal sparsity of elevated Ca2+ signals. Here, we introduce methods to correct for the bias in the naïve estimation of information content from limited sample sizes and temporally sparse neuronal activity. We demonstrate the higher accuracy of our methods over previous ones, when applied to Ca2+ imaging data recorded from the mouse hippocampus and primary visual cortex, as well as to simulated data with matching tuning properties and firing statistics. Our bias-correction methods allowed an accurate estimation of the information place cells carry about the animal’s position (spatial information) and uncovered the spatial resolution of hippocampal coding. Furthermore, using our methods, we found that cells with higher peak firing rates carry higher spatial information per spike and exposed differences between distinct hippocampal subfields in the long-term evolution of the spatial code. These results could be masked by the bias when applying the commonly used naïve calculation of information content. Thus, a bias-free estimation of information content can uncover otherwise overlooked properties of the neural code.

Neuroscientists interested in understanding the nature of the neural code often apply methods derived from the mathematical framework of information theory to quantify the statistical relationship between neuronal activity and a certain variable of interest. For instance, when studying the neural basis for spatial navigation, it is useful to estimate how much information hippocampal neurons carry about the position of an animal within a specific environment.
However, the standard measures for estimating information content suffer from an upward bias when applied to small sample sizes, which may lead to misinterpretation of the data. This bias is more pronounced in data from calcium imaging, a widely used technique for recording neuronal activity, because the activity extracted from the measured calcium signal is sparse in time. In this work, we introduce new methods to correct the bias in the naïve estimation of information content from limited sample sizes and such temporally sparse neuronal activity. We show that our bias-correction methods allow an accurate estimation of the information content carried by the activity obtained from calcium imaging data in both hippocampal and cortical neurons, and help uncover differences in the way information content changes during learning across neural circuits.
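The paper's specific corrections are not reproduced here; the sketch below illustrates the general problem and a generic shuffle-based remedy, using the classic Skaggs spatial-information measure on synthetic, temporally sparse activity that carries no true positional information (all parameter values are illustrative).

```python
import numpy as np

rng = np.random.default_rng(1)

def spatial_information(rate_map, occupancy):
    """Skaggs spatial information (bits/spike) from a rate map and occupancy."""
    p = occupancy / occupancy.sum()
    mean_rate = np.sum(p * rate_map)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * rate_map * np.log2(rate_map / mean_rate)
    return np.nansum(terms) / mean_rate

# Synthetic example: activity independent of position, so true information is 0,
# yet the naive estimate is biased upward.
n_bins, n_samples = 20, 2000
position = rng.integers(0, n_bins, n_samples)
spikes = rng.poisson(0.1, n_samples)  # sparse events, as in Ca2+ imaging

occupancy = np.bincount(position, minlength=n_bins).astype(float)
rate_map = np.bincount(position, weights=spikes, minlength=n_bins) / occupancy
naive = spatial_information(rate_map, occupancy)

# Shuffle correction: circularly shift the activity to break the spike-position
# relationship while preserving firing statistics, then subtract the mean.
shuffled = []
for _ in range(100):
    s = np.roll(spikes, rng.integers(1, n_samples))
    rm = np.bincount(position, weights=s, minlength=n_bins) / occupancy
    shuffled.append(spatial_information(rm, occupancy))

corrected = naive - np.mean(shuffled)
print(naive, corrected)
```

The naive estimate is strictly positive despite the absence of real spatial coding; the shuffle-corrected value lies near zero, which is the qualitative behavior the paper's bias corrections formalize.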
4
Schittler Neves F, Timme M. Decoding complex state space trajectories for neural computing. Chaos 2021; 31:123105. [PMID: 34972334] [DOI: 10.1063/5.0053429]
Abstract
In biological neural circuits as well as in bio-inspired information processing systems, trajectories in high-dimensional state-space encode the solutions to computational tasks performed by complex dynamical systems. Due to the high state-space dimensionality and the number of possible encoding trajectories rapidly growing with input signal dimension, decoding these trajectories constitutes a major challenge on its own, in particular as exponentially growing (space or time) requirements for decoding would render the original computational paradigm inefficient. Here, we suggest an approach to overcome this problem. We propose an efficient decoding scheme for trajectories emerging in spiking neural circuits, one that exhibits linear scaling with input signal dimensionality. We focus on the dynamics near a sequence of unstable saddle states that naturally emerge in a range of physical systems and provide a novel paradigm for analog computing, for instance, in the form of heteroclinic computing. Identifying simple measures of coordinated activity (synchrony) that are commonly applicable to all trajectories representing the same percept, we design robust readouts whose sizes and time requirements increase only linearly with the system size. These results move the conceptual boundary that has so far hindered the implementation of heteroclinic computing in hardware and may also catalyze efficient decoding strategies in spiking neural networks in general.
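As a toy illustration of the kind of synchrony measure the abstract refers to (not the authors' heteroclinic construction), the sketch below computes mean pairwise correlation of binary population activity and shows it separates a trajectory driven by a shared event from one with independent units at the same firing rate. All names and statistics are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def pairwise_synchrony(spikes):
    """Mean pairwise correlation of binary activity across units:
    a simple measure of coordinated activity (synchrony)."""
    c = np.corrcoef(spikes)
    n = c.shape[0]
    return (c.sum() - n) / (n * (n - 1))

n_units, T = 20, 1000

# Trajectory A: units share common events, producing synchronized activity.
shared = rng.random(T) < 0.2
percept_a = (rng.random((n_units, T)) < 0.1) | shared

# Trajectory B: independent units with a matched overall firing probability.
percept_b = rng.random((n_units, T)) < 0.27

print(pairwise_synchrony(percept_a), pairwise_synchrony(percept_b))
```

A readout thresholding such a statistic distinguishes the two activity patterns without enumerating individual trajectories, which is the practical point the abstract makes about synchrony-based decoding.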
Affiliation(s)
- Fabio Schittler Neves
- Center for Advancing Electronics Dresden (CFAED) and Institute for Theoretical Physics, TU Dresden, 01062 Dresden, Germany
- Marc Timme
- Center for Advancing Electronics Dresden (CFAED) and Institute for Theoretical Physics, TU Dresden, 01062 Dresden, Germany
5
Vakilna YS, Tang WC, Wheeler BC, Brewer GJ. The Flow of Axonal Information Among Hippocampal Subregions: 1. Feed-Forward and Feedback Network Spatial Dynamics Underpinning Emergent Information Processing. Front Neural Circuits 2021; 15:660837. [PMID: 34512275] [PMCID: PMC8430040] [DOI: 10.3389/fncir.2021.660837]
Abstract
The tri-synaptic pathway in the mammalian hippocampus enables cognitive learning and memory. Despite decades of reports on anatomy and physiology, the functional architecture of the hippocampal network remains poorly understood in terms of the dynamics of axonal information transfer between subregions. Information inputs largely flow from the entorhinal cortex (EC) to the dentate gyrus (DG), and then are processed further in the CA3 and CA1 before returning to the EC. Here, we reconstructed elements of the rat hippocampus in a novel device over an electrode array that allowed for monitoring the directionality of individual axons between the subregions. The direction of spike propagation was determined by the transmission delay of the axons recorded between two electrodes in microfluidic tunnels. The majority of axons from the EC to the DG operated in the feed-forward direction, with other regions developing unexpectedly large proportions of feedback axons to balance excitation. Spike timing in axons between each region followed single exponential log-log distributions over two orders of magnitude from 0.01 to 1 s, indicating that conventional descriptors of mean firing rates are misleading assumptions. Most of the spiking occurred in bursts that required two exponentials to fit the distribution of inter-burst intervals. This suggested the presence of up-states and down-states in every region, with the least up-states in the DG to CA3 feed-forward axons and the CA3 subregion. The peaks of the log-normal distributions of intra-burst spike rates were similar in axons between regions with modes around 95 Hz distributed over an order of magnitude. Burst durations were also log-normally distributed around a peak of 88 ms over two orders of magnitude. Despite the diversity of these spike distributions, spike rates from individual axons were often linearly correlated to subregions. 
These linear relationships enabled the generation of structural connectivity graphs, not possible previously without the directional flow of axonal information. The rich axonal spike dynamics between subregions of the hippocampus reveal both constraints and broad emergent dynamics of hippocampal architecture. Knowledge of this network architecture may enable more efficient computational artificial intelligence (AI) networks, neuromorphic hardware, and stimulation and decoding from cognitive implants.
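The core measurement in this study, inferring the direction of spike propagation from the transmission delay between two electrodes in a tunnel, can be sketched simply: match each spike at electrode A to its nearest spike at electrode B and take the median delay. The matching window, delay, and jitter below are illustrative assumptions, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(2)

def propagation_delay(t_a, t_b, window=0.002):
    """Median delay (s) of electrode-B spikes relative to matched A spikes.
    A positive value indicates propagation from A to B."""
    deltas = []
    for t in t_a:
        i = np.searchsorted(t_b, t)
        candidates = t_b[max(i - 1, 0):i + 1]  # nearest neighbors in t_b
        if candidates.size:
            d = candidates[np.argmin(np.abs(candidates - t))] - t
            if abs(d) <= window:
                deltas.append(d)
    return np.median(deltas) if deltas else np.nan

# Synthetic feed-forward axon: B detects each A spike ~0.4 ms later, with jitter.
t_a = np.sort(rng.uniform(0, 10, 200))
t_b = np.sort(t_a + 4e-4 + rng.normal(0, 5e-5, t_a.size))

delay = propagation_delay(t_a, t_b)
print(delay)
```

Classifying axons by the sign of this delay is what lets the study separate feed-forward from feedback fibers between subregions.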
Affiliation(s)
- Yash S Vakilna
- Department of Biomedical Engineering, University of California, Irvine, Irvine, CA, United States
- William C Tang
- Department of Biomedical Engineering, University of California, Irvine, Irvine, CA, United States
- Bruce C Wheeler
- Department of Bioengineering, University of California, San Diego, San Diego, CA, United States
- Gregory J Brewer
- Department of Biomedical Engineering, University of California, Irvine, Irvine, CA, United States
- Center for Neuroscience of Learning and Memory, Memory Impairments and Neurological Disorders (MIND) Institute, University of California, Irvine, Irvine, CA, United States
6
Williams E, Payeur A, Gidon A, Naud R. Neural burst codes disguised as rate codes. Sci Rep 2021; 11:15910. [PMID: 34354118] [PMCID: PMC8342467] [DOI: 10.1038/s41598-021-95037-z]
Abstract
The burst coding hypothesis posits that the occurrence of sudden high-frequency patterns of action potentials constitutes a salient syllable of the neural code. Many neurons, however, do not produce clearly demarcated bursts, an observation invoked to rule out the pervasiveness of this coding scheme across brain areas and cell types. Here we ask how detrimental ambiguous spike patterns, those that are neither clearly bursts nor isolated spikes, are for neuronal information transfer. We addressed this question using information theory and computational simulations. By quantifying how information transmission depends on firing statistics, we found that the information transmitted is not strongly influenced by the presence of clearly demarcated modes in the interspike interval distribution, a feature often used to identify the presence of burst coding. Instead, we found that neurons having unimodal interval distributions were still able to ascribe different meanings to bursts and isolated spikes. In this regime, information transmission depends on dynamical properties of the synapses as well as the length and relative frequency of bursts. Furthermore, we found that common metrics used to quantify burstiness were unable to predict the degree with which bursts could be used to carry information. Our results provide guiding principles for the implementation of coding strategies based on spike-timing patterns, and show that even unimodal firing statistics can be consistent with a bivariate neural code.
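The paper asks when bursts and isolated spikes carry distinct meanings even without clearly demarcated interval modes. The sketch below shows one common operational way to separate the two event types, an interspike-interval threshold rule; the threshold and the synthetic firing statistics are assumptions for illustration, not the authors' values.

```python
import numpy as np

rng = np.random.default_rng(3)

def label_bursts(spike_times, max_isi=0.01):
    """Group a sorted spike train into events; any event containing two or
    more spikes separated by ISIs <= max_isi counts as a burst."""
    events, current = [], [spike_times[0]]
    for t_prev, t in zip(spike_times[:-1], spike_times[1:]):
        if t - t_prev <= max_isi:
            current.append(t)
        else:
            events.append(current)
            current = [t]
    events.append(current)
    return events

# Synthetic train: isolated spikes plus occasional 3-spike bursts at ~200 Hz.
isolated = np.sort(rng.uniform(0, 10, 50))
burst_starts = rng.uniform(0, 10, 10)
bursts = np.concatenate([[s, s + 0.005, s + 0.010] for s in burst_starts])
train = np.sort(np.concatenate([isolated, bursts]))

events = label_bursts(train)
burst_events = [e for e in events if len(e) > 1]
burst_fraction = len(burst_events) / len(events)
print(len(events), burst_fraction)
```

With a unimodal ISI distribution such a threshold is necessarily somewhat arbitrary, which is exactly why the paper evaluates information transmission rather than interval statistics to decide whether a bivariate (burst vs. isolated spike) code is in use.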
Affiliation(s)
- Ezekiel Williams
- Department of Mathematics and Statistics, University of Ottawa, 150 Louis Pasteur, Ottawa, K1N 6N5, Canada
- Alexandre Payeur
- University of Ottawa Brain and Mind Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, 451 Smyth Rd., Ottawa, K1H 8M5, Canada
- Albert Gidon
- Institute for Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- Richard Naud
- University of Ottawa Brain and Mind Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, 451 Smyth Rd., Ottawa, K1H 8M5, Canada
- Department of Physics, University of Ottawa, 150 Louis Pasteur, Ottawa, K1N 6N5, Canada
7
Tomar R, Kostal L. Variability and Randomness of the Instantaneous Firing Rate. Front Comput Neurosci 2021; 15:620410. [PMID: 34163344] [PMCID: PMC8215133] [DOI: 10.3389/fncom.2021.620410]
Abstract
The apparent stochastic nature of neuronal activity significantly affects the reliability of neuronal coding. To quantify the encountered fluctuations, both in neural data and simulations, the notions of variability and randomness of inter-spike intervals have been proposed and studied. In this article we focus on the concept of the instantaneous firing rate, which is also based on the spike timing. We use several classical statistical models of neuronal activity and study the corresponding probability distributions of the instantaneous firing rate. To characterize the firing rate variability and randomness under different spiking regimes, we use different indices of statistical dispersion. We find that the relationship between the variability of interspike intervals and the instantaneous firing rate is, in general, not straightforward. Counter-intuitively, an increase in the randomness (based on entropy) of spike times may either decrease or increase the randomness of the instantaneous firing rate, depending on the neuronal firing model. Finally, we apply our methods to experimental data, establishing that instantaneous rate analysis can indeed provide additional information about the spiking activity.
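The article's central distinction can be seen in a worked example with a gamma renewal model (shape and mean chosen arbitrarily): the dispersion of the instantaneous firing rate 1/ISI need not match the dispersion of the ISIs themselves.

```python
import numpy as np

rng = np.random.default_rng(4)

def cv(x):
    """Coefficient of variation, a common index of statistical dispersion."""
    return np.std(x) / np.mean(x)

# Gamma-distributed ISIs with shape k: a classical renewal model of spiking.
n, k, mean_isi = 100_000, 6.0, 0.05
isi = rng.gamma(k, mean_isi / k, n)
inst_rate = 1.0 / isi  # instantaneous firing rate

# Theory: CV of the ISIs is 1/sqrt(k) ~ 0.41, while the CV of the
# instantaneous rate (inverse-gamma distributed) is 1/sqrt(k - 2) ~ 0.5.
print(round(cv(isi), 2), round(cv(inst_rate), 2))
```

Even in this simple model the two dispersion indices disagree, illustrating why the relationship between ISI variability and instantaneous-rate variability is not straightforward in general.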
Affiliation(s)
- Rimjhim Tomar
- Department of Computational Neuroscience, Institute of Physiology, Czech Academy of Sciences, Prague, Czechia
- Second Medical Faculty, Charles University, Prague, Czechia
- Lubomir Kostal
- Department of Computational Neuroscience, Institute of Physiology, Czech Academy of Sciences, Prague, Czechia
8
Yu GJ, Bouteiller JMC, Berger TW. Topographic Organization of Correlation Along the Longitudinal and Transverse Axes in Rat Hippocampal CA3 Due to Excitatory Afferents. Front Comput Neurosci 2020; 14:588881. [PMID: 33328947] [PMCID: PMC7715032] [DOI: 10.3389/fncom.2020.588881]
Abstract
The topographic organization of afferents to the hippocampal CA3 subfield is well studied, but their role in influencing the spatiotemporal dynamics of population activity is not understood. Using a large-scale, computational neuronal network model of the entorhinal-dentate-CA3 system, the effects of the perforant path, mossy fibers, and associational system on the propagation and transformation of network spiking patterns were investigated. A correlation map was constructed to characterize the spatial structure and temporal evolution of the pairwise correlations which underlie the emergent patterns found in the population activity. The topographic organization of the associational system gave rise to changes in the spatial correlation structure along the longitudinal and transverse axes of the CA3. The resulting gradients may provide a basis for the known functional organization observed in the hippocampus.
Affiliation(s)
- Gene J Yu
- Department of Biomedical Engineering, Center for Neural Engineering, University of Southern California, Los Angeles, CA, United States
- Jean-Marie C Bouteiller
- Department of Biomedical Engineering, Center for Neural Engineering, University of Southern California, Los Angeles, CA, United States
- Theodore W Berger
- Department of Biomedical Engineering, Center for Neural Engineering, University of Southern California, Los Angeles, CA, United States
9
Recanatesi S, Ocker GK, Buice MA, Shea-Brown E. Dimensionality in recurrent spiking networks: Global trends in activity and local origins in connectivity. PLoS Comput Biol 2019; 15:e1006446. [PMID: 31299044] [PMCID: PMC6655892] [DOI: 10.1371/journal.pcbi.1006446]
Abstract
The dimensionality of a network's collective activity is of increasing interest in neuroscience. This is because dimensionality provides a compact measure of how coordinated network-wide activity is, in terms of the number of modes (or degrees of freedom) that it can independently explore. A low number of modes suggests a compressed low dimensional neural code and reveals interpretable dynamics [1], while findings of high dimension may suggest flexible computations [2, 3]. Here, we address the fundamental question of how dimensionality is related to connectivity, in both autonomous and stimulus-driven networks. Working with a simple spiking network model, we derive three main findings. First, the dimensionality of global activity patterns can be strongly, and systematically, regulated by local connectivity structures. Second, the dimensionality is a better indicator than average correlations in determining how constrained neural activity is. Third, stimulus evoked neural activity interacts systematically with neural connectivity patterns, leading to network responses of either greater or lesser dimensionality than the stimulus.
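Dimensionality in this sense is commonly quantified by the participation ratio of the activity covariance spectrum, PR = (Σλ)² / Σλ². The sketch below (synthetic Gaussian activity standing in for spiking data; all parameters are illustrative) shows how a shared input collapses the dimensionality of otherwise independent units.

```python
import numpy as np

rng = np.random.default_rng(5)

def participation_ratio(activity):
    """Effective dimensionality of population activity (rows = neurons):
    PR = (sum of covariance eigenvalues)^2 / sum of squared eigenvalues."""
    lam = np.linalg.eigvalsh(np.cov(activity))
    return lam.sum() ** 2 / np.sum(lam ** 2)

n_neurons, n_time = 100, 5000

# Independent neurons: dimensionality close to the number of neurons.
independent = rng.normal(size=(n_neurons, n_time))

# A single shared mode drives all neurons: dimensionality collapses toward 1.
shared = rng.normal(size=n_time)
coordinated = 0.2 * independent + shared

print(participation_ratio(independent), participation_ratio(coordinated))
```

The same measure applied to spiking-network activity is what makes "global activity dimension" comparable across connectivity structures in studies like this one.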
Affiliation(s)
- Stefano Recanatesi
- Center for Computational Neuroscience, University of Washington, Seattle, Washington, United States of America
- Gabriel Koch Ocker
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Michael A. Buice
- Center for Computational Neuroscience, University of Washington, Seattle, Washington, United States of America
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
- Eric Shea-Brown
- Center for Computational Neuroscience, University of Washington, Seattle, Washington, United States of America
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
10
Abstract
Quantifying mutual information between inputs and outputs of a large neural circuit is an important open problem in both machine learning and neuroscience. However, evaluation of the mutual information is known to be generally intractable for large systems due to the exponential growth in the number of terms that need to be evaluated. Here we show how information contained in the responses of large neural populations can be effectively computed provided the input-output functions of individual neurons can be measured and approximated by a logistic function applied to a potentially nonlinear function of the stimulus. Neural responses in this model can remain sensitive to multiple stimulus components. We show that the mutual information in this model can be effectively approximated as a sum of lower-dimensional conditional mutual information terms. The approximations become exact in the limit of large neural populations and for certain conditions on the distribution of receptive fields across the neural population. We empirically find that these approximations continue to work well even when the conditions on the receptive field distributions are not fulfilled. The computing cost for the proposed methods grows linearly in the dimension of the input and compares favorably with other approximations.
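For the single-neuron case of this model family, the mutual information can be computed directly by discretizing the stimulus; the sketch below does so for one logistic neuron (the gain and threshold are arbitrary, and this does not implement the paper's population-level approximation).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Discretized standard-normal stimulus distribution.
s = np.linspace(-4, 4, 4001)
ps = np.exp(-s ** 2 / 2)
ps /= ps.sum()

# One neuron of the logistic model: P(spike | s) = sigmoid(w*s + b).
w, b = 2.0, -1.0  # hypothetical gain and threshold
p_spike_given_s = sigmoid(w * s + b)
p_spike = np.sum(ps * p_spike_given_s)

# I(R; S) = H(R) - H(R|S) for a binary response.
mi = binary_entropy(p_spike) - np.sum(ps * binary_entropy(p_spike_given_s))
print(round(mi, 3))
```

The paper's contribution is that the information of a large population of such neurons can be approximated by sums of low-dimensional conditional terms of this kind, avoiding the exponential cost of the exact calculation.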
Affiliation(s)
- John A Berkowitz
- Department of Physics, University of California San Diego, San Diego, CA 92093, U.S.A.
- Tatyana O Sharpee
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037, U.S.A.
- Department of Physics, University of California San Diego, San Diego, CA 92093, U.S.A.
11
Herfurth T, Tchumatchenko T. Quantifying encoding redundancy induced by rate correlations in Poisson neurons. Phys Rev E 2019; 99:042402. [PMID: 31108645] [DOI: 10.1103/physreve.99.042402]
Abstract
Temporal correlations in neuronal spike trains are known to introduce redundancy to stimulus encoding. However, exact methods to describe how these correlations impact neural information transmission quantitatively are lacking. Here, we provide a general measure for the information carried by correlated rate modulations only, neglecting other spike correlations, and use it to investigate the effect of rate correlations on encoding redundancy. We derive it analytically by calculating the mutual information between a time-correlated, rate modulating signal and the resulting spikes of Poisson neurons. Whereas this information is determined by spike autocorrelations only, the redundancy in information encoding due to rate correlations depends on both the distribution and the autocorrelation of the rate histogram. We further demonstrate that at very small signal strengths the information carried by rate correlated spikes becomes identical to that of independent spikes, in effect measuring the signal modulation depth. In contrast, a vanishing signal correlation time maximizes information but does not generally yield the information of independent spikes. Overall, our study sheds light on the role of signal-induced temporal correlations for neural coding, by providing insight into how signal features shape redundancy and by establishing mathematical links between existing methods.
Affiliation(s)
- Tim Herfurth
- Max Planck Institute for Brain Research, Theory of Neural Dynamics, Max-von-Laue-Strasse 4, 60438 Frankfurt, Germany
- Tatjana Tchumatchenko
- Max Planck Institute for Brain Research, Theory of Neural Dynamics, Max-von-Laue-Strasse 4, 60438 Frankfurt, Germany
12
Herfurth T, Tchumatchenko T. Information transmission of mean and variance coding in integrate-and-fire neurons. Phys Rev E 2019; 99:032420. [PMID: 30999481] [DOI: 10.1103/physreve.99.032420]
Abstract
Neurons process information by translating continuous signals into patterns of discrete spike times. An open question is how much information these spike times contain about signals which modulate either the mean or the variance of the somatic currents in neurons, as is observed experimentally. Here we calculate the exact information contained in discrete spike times about a continuous signal in both encoding strategies. We show that the information content about mean modulating signals is generally substantially larger than about variance modulating signals for biological parameters. Our analysis further reveals that higher information transmission is associated with a larger proportion of nonlinear signal encoding. Our study measures the complete information content of mean and variance coding and provides a method to determine what fraction of the total information is linearly decodable.
Affiliation(s)
- Tim Herfurth
- Max Planck Institute for Brain Research, Theory of Neural Dynamics, Max-von-Laue-Strasse 4, 60438 Frankfurt, Germany
- Tatjana Tchumatchenko
- Max Planck Institute for Brain Research, Theory of Neural Dynamics, Max-von-Laue-Strasse 4, 60438 Frankfurt, Germany
13
Madar AD, Ewell LA, Jones MV. Temporal pattern separation in hippocampal neurons through multiplexed neural codes. PLoS Comput Biol 2019; 15:e1006932. [PMID: 31009459] [PMCID: PMC6476466] [DOI: 10.1371/journal.pcbi.1006932]
Abstract
Pattern separation is a central concept in current theories of episodic memory: this computation is thought to support our ability to avoid confusion between similar memories by transforming similar cortical input patterns of neural activity into dissimilar output patterns before their long-term storage in the hippocampus. Because there are many ways one can define patterns of neuronal activity and the similarity between them, pattern separation could in theory be achieved through multiple coding strategies. Using our recently developed assay that evaluates pattern separation in isolated tissue by controlling and recording the input and output spike trains of single hippocampal neurons, we explored neural codes through which pattern separation is performed by systematic testing of different similarity metrics and various time resolutions. We discovered that granule cells, the projection neurons of the dentate gyrus, can exhibit both pattern separation and its opposite computation, pattern convergence, depending on the neural code considered and the statistical structure of the input patterns. Pattern separation is favored when inputs are highly similar, and is achieved through spike time reorganization at short time scales (< 100 ms) as well as through variations in firing rate and burstiness at longer time scales. These multiplexed forms of pattern separation are network phenomena, notably controlled by GABAergic inhibition, that involve many cell types with input-output transformations that participate in pattern separation to different extents and with complementary neural codes: a rate code for dentate fast-spiking interneurons, a burstiness code for hilar mossy cells and a synchrony code at long time scales for CA3 pyramidal cells. Therefore, the isolated hippocampal circuit itself is capable of performing temporal pattern separation using multiplexed coding strategies that might be essential to optimally disambiguate multimodal mnemonic representations.
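The kind of similarity analysis described, comparing spike trains at several time resolutions, can be sketched with binned spike-count correlation (the synthetic trains and all parameters are illustrative, not the study's assay).

```python
import numpy as np

rng = np.random.default_rng(6)

def similarity(train_a, train_b, bin_size, duration=10.0):
    """Pearson correlation of spike counts binned at a given time resolution."""
    edges = np.arange(0.0, duration + bin_size, bin_size)
    a, _ = np.histogram(train_a, edges)
    b, _ = np.histogram(train_b, edges)
    return np.corrcoef(a, b)[0, 1]

# Two trains sharing a slow rate envelope but with independent fine timing:
# similar at coarse time resolutions, dissimilar at fine ones.
t = np.arange(0.0, 10.0, 0.001)                      # 1 ms grid
envelope = 10.0 * (1.0 + np.sin(2 * np.pi * 2 * t))  # rate in Hz, 2 Hz modulation

def sample_train():
    return t[rng.random(t.size) < envelope * 0.001]

a, b = sample_train(), sample_train()
for bin_size in (0.005, 0.02, 0.1, 0.25):
    print(bin_size, round(similarity(a, b, bin_size), 2))
```

Because the measured similarity of the very same pair of spike trains depends on the time resolution, a circuit can appear to separate patterns under one code and converge them under another, which is the paper's motivation for testing metrics systematically.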
Affiliation(s)
- Antoine D. Madar
- Department of Neuroscience, University of Wisconsin-Madison, WI, United States of America
- Neuroscience Training Program, University of Wisconsin-Madison, WI, United States of America
- Department of Neurobiology, Grossman Institute for Neuroscience, Quantitative Biology and Human Behavior, University of Chicago, IL, United States of America
- Laura A. Ewell
- Department of Neuroscience, University of Wisconsin-Madison, WI, United States of America
- Institute of Experimental Epileptology and Cognition Research, University of Bonn Medical Center, Germany
- Mathew V. Jones
- Department of Neuroscience, University of Wisconsin-Madison, WI, United States of America
14
Pregowska A, Kaplan E, Szczepanski J. How Far can Neural Correlations Reduce Uncertainty? Comparison of Information Transmission Rates for Markov and Bernoulli Processes. Int J Neural Syst 2019; 29:1950003. [PMID: 30841769] [DOI: 10.1142/s0129065719500035]
Abstract
The nature of neural codes is central to neuroscience. Do neurons encode information through relatively slow changes in their firing rates (rate code) or through the precise timing of every spike (temporal code)? Here we compare the loss of information due to correlations for these two possible neural codes. The essence of Shannon's definition of information is to tie information to uncertainty: the higher the uncertainty of a given event, the more information is conveyed when that event occurs. Correlations can reduce this uncertainty, and with it the amount of information, but by how much? In this paper we address this question by directly comparing the information per symbol conveyed by words coming from a binary Markov source (temporal code) with the information per symbol coming from the corresponding Bernoulli source (uncorrelated, rate code). In a previous paper we found that a crucial role in the relation between information transmission rates (ITRs) and firing rates is played by a parameter s, the sum of the transition probabilities from the no-spike state to the spike state and vice versa. We find that the same parameter s plays a crucial role here as well. We calculated the maximal and minimal bounds of the quotient of the ITRs of these sources. Next, making use of the entropy grouping axiom, we determined the loss of information in a Markov source compared with the corresponding Bernoulli source for a given word length. Our results show that for correlated signals the loss of information is relatively small, so that temporal codes, which are more energetically efficient, can effectively replace rate codes. These results were confirmed by experiments.
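The comparison described here is easy to sketch numerically: the entropy rate of a two-state (no-spike/spike) Markov source with transition probabilities p01 and p10, where s = p01 + p10, against the entropy of the Bernoulli source with the same stationary firing rate. The function names below are illustrative, not taken from the paper.

```python
import numpy as np

def h(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def markov_vs_bernoulli(p01, p10):
    """Entropy rate (bits/symbol) of a two-state Markov source with
    transition probabilities p01 (no-spike -> spike) and p10
    (spike -> no-spike), versus the entropy of the Bernoulli source
    with the same stationary firing rate."""
    s = p01 + p10                 # the parameter s from the abstract
    pi1 = p01 / s                 # stationary probability of a spike
    markov_rate = (1 - pi1) * h(p01) + pi1 * h(p10)
    bernoulli_rate = h(pi1)
    return s, markov_rate, bernoulli_rate
```

For s = 1 the two rows of the transition matrix coincide, the Markov source degenerates to a Bernoulli source, and the two rates are equal; for s < 1 (positively correlated spiking) the Markov entropy rate is lower, the gap quantifying the information loss due to correlations.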
Affiliation(s)
- Agnieszka Pregowska
- Institute of Fundamental Technological Research, Polish Academy of Sciences, ul. Pawinskiego 5B, 02-106 Warsaw, Poland
- Ehud Kaplan
- Icahn School of Medicine at Mount Sinai, One Gustave Levy Place, New York, NY 10029, USA
- Department of Philosophy and History of Science, Faculty of Science, Charles University, Albertov 6, 128 43 Praha 2, Czech Republic
- The National Institute of Mental Health, Topolová 748, 250 67 Klecany, Czech Republic
- Janusz Szczepanski
- Institute of Fundamental Technological Research, Polish Academy of Sciences, ul. Pawinskiego 5B, 02-106 Warsaw, Poland
15
Naud R, Sprekeler H. Sparse bursts optimize information transmission in a multiplexed neural code. Proc Natl Acad Sci U S A 2018; 115:E6329-E6338. [PMID: 29934400 PMCID: PMC6142200 DOI: 10.1073/pnas.1720995115] [Citation(s) in RCA: 66] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022] Open
Abstract
Many cortical neurons combine the information ascending and descending the cortical hierarchy. In the classical view, this information is combined nonlinearly to give rise to a single firing-rate output, which collapses all input streams into one. We analyze the extent to which neurons can simultaneously represent multiple input streams by using a code that distinguishes spike timing patterns at the level of a neural ensemble. Using computational simulations constrained by experimental data, we show that cortical neurons are well suited to generate such multiplexing. Interestingly, this neural code maximizes information for short and sparse bursts, a regime consistent with in vivo recordings. Neurons can also demultiplex this information, using specific connectivity patterns. The anatomy of the adult mammalian cortex suggests that these connectivity patterns are used by the nervous system to maintain sparse bursting and optimal multiplexing. Contrary to firing-rate coding, our findings indicate that the physiology and anatomy of the cortex may be interpreted as optimizing the transmission of multiple independent signals to different targets.
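One concrete way to realize the multiplexed code described here is to partition a spike train into events using an inter-spike-interval threshold, so that the event rate and the fraction of events that are bursts form two separable channels. This is a hedged sketch: the 16 ms threshold and the function name are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def event_burst_channels(spike_times_s, isi_thresh_s=0.016):
    """Partition a spike train into events: consecutive spikes closer
    than isi_thresh_s belong to the same event (a burst if the event
    holds more than one spike). Returns (number of events, burst
    fraction), two candidate channels of a multiplexed code."""
    t = np.asarray(spike_times_s, dtype=float)
    if t.size == 0:
        return 0, 0.0
    # Start a new event wherever the preceding ISI exceeds the threshold.
    new_event = np.concatenate(([True], np.diff(t) > isi_thresh_s))
    event_ids = np.cumsum(new_event)       # event label per spike: 1, 1, 2, ...
    sizes = np.bincount(event_ids)[1:]     # spikes in each event
    return int(sizes.size), float(np.mean(sizes > 1))
```

In this scheme the event rate can track one input stream while the burst fraction tracks another, which is the sense in which sparse bursting supports the transmission of multiple independent signals.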
Affiliation(s)
- Richard Naud
- University of Ottawa Brain and Mind Research Institute, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON K1H 8M5, Canada;
- Department of Physics, University of Ottawa, Ottawa, ON K1N 6N5, Canada
- Henning Sprekeler
- Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
- Modelling of Cognitive Processes, Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, 10587 Berlin, Germany
16
Pernice V, da Silveira RA. Interpretation of correlated neural variability from models of feed-forward and recurrent circuits. PLoS Comput Biol 2018; 14:e1005979. [PMID: 29408930 PMCID: PMC5833435 DOI: 10.1371/journal.pcbi.1005979] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2017] [Revised: 03/01/2018] [Accepted: 01/10/2018] [Indexed: 11/18/2022] Open
Abstract
Neural populations respond to repeated presentations of a sensory stimulus with correlated variability. These correlations have been studied in detail with respect to their mechanistic origin, as well as their influence on stimulus discrimination and on the performance of population codes. A number of theoretical studies have endeavored to link network architecture to the nature of the correlations in neural activity. Here, we contribute to this effort: in models of circuits of stochastic neurons, we elucidate the implications of various network architectures (recurrent connections, shared feed-forward projections, and shared gain fluctuations) for the stimulus dependence of correlations. Specifically, we derive mathematical relations that specify the dependence of population-averaged covariances on firing rates for different network architectures. In turn, these relations can be used to analyze data on population activity. We examine recordings from neural populations in mouse auditory cortex and find that a recurrent network model with random effective connections captures the observed statistics. Furthermore, using our circuit model, we investigate the relation between network parameters, correlations, and how well different stimuli can be discriminated from one another based on the population activity. As such, our approach allows us to relate properties of the neural circuit to information processing.
The response of neurons to a stimulus is variable across trials. A natural solution for reliable coding in the face of noise is averaging across a neural population. The nature of this averaging depends on the structure of noise correlations in the population, which in turn depends on the way noise and correlations are generated in neural circuits. It is in general difficult to identify the origin of correlations from the observed population activity alone.
In this article, we explore different theoretical scenarios of the way in which correlations can be generated, and we relate these to the architecture of feed-forward and recurrent neural circuits. Analyzing population recordings of the activity in mouse auditory cortex in response to sound stimuli, we find that population statistics are consistent with those generated in a recurrent network model. Using this model, we can then quantify the effects of network properties on average population responses, noise correlations, and the representation of sensory information.
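A standard linear-response relation used in this family of models (the paper's exact formulation may differ) expresses the response covariance in terms of an effective connectivity matrix W and a private-noise covariance D: C = (I - W)^-1 D (I - W)^-T. A minimal sketch:

```python
import numpy as np

def response_covariance(W, D):
    """Steady-state response covariance of a linear recurrent network:
    C = (I - W)^-1 D (I - W)^-T, where W is the effective connectivity
    and D the covariance of the private noise."""
    n = W.shape[0]
    B = np.linalg.inv(np.eye(n) - W)   # linear-response propagator
    return B @ D @ B.T
```

With W = 0 the private noise passes through unchanged (C = D), whereas recurrent coupling reshapes both variances and covariances; it is this dependence of C on W that lets the observed correlation structure carry a signature of the circuit architecture.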
Affiliation(s)
- Volker Pernice
- Department of Physics, Ecole Normale Supérieure, Paris, France
- Laboratoire de Physique Statistique, Ecole Normale Supérieure, PSL Research University; Université Paris Diderot Sorbonne Paris-Cité, Sorbonne Universités UPMC Univ Paris 06; CNRS, Paris, France
- Rava Azeredo da Silveira
- Department of Physics, Ecole Normale Supérieure, Paris, France
- Laboratoire de Physique Statistique, Ecole Normale Supérieure, PSL Research University; Université Paris Diderot Sorbonne Paris-Cité, Sorbonne Universités UPMC Univ Paris 06; CNRS, Paris, France
- Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey, United States of America
17
How linear response shaped models of neural circuits and the quest for alternatives. Curr Opin Neurobiol 2017; 46:234-240. [DOI: 10.1016/j.conb.2017.09.001] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2017] [Accepted: 09/07/2017] [Indexed: 11/23/2022]
|
18
|
Ocker GK, Hu Y, Buice MA, Doiron B, Josić K, Rosenbaum R, Shea-Brown E. From the statistics of connectivity to the statistics of spike times in neuronal networks. Curr Opin Neurobiol 2017; 46:109-119. [PMID: 28863386 DOI: 10.1016/j.conb.2017.07.011] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2017] [Revised: 07/21/2017] [Accepted: 07/27/2017] [Indexed: 10/19/2022]
Abstract
An essential step toward understanding neural circuits is linking their structure and their dynamics. In general, this relationship can be almost arbitrarily complex. Recent theoretical work has, however, begun to identify some broad principles underlying collective spiking activity in neural circuits. The first is that local features of network connectivity can be surprisingly effective in predicting global statistics of activity across a network. The second is that, for the important case of large networks with excitatory-inhibitory balance, correlated spiking persists or vanishes depending on the spatial scales of recurrent and feedforward connectivity. We close by showing how these ideas, together with plasticity rules, can help to close the loop between network structure and activity statistics.
Affiliation(s)
- Yu Hu
- Center for Brain Science, Harvard University, United States
- Michael A Buice
- Allen Institute for Brain Science, United States; Department of Applied Mathematics, University of Washington, United States
- Brent Doiron
- Department of Mathematics, University of Pittsburgh, United States; Center for the Neural Basis of Cognition, Pittsburgh, United States
- Krešimir Josić
- Department of Mathematics, University of Houston, United States; Department of Biology and Biochemistry, University of Houston, United States; Department of BioSciences, Rice University, United States
- Robert Rosenbaum
- Department of Mathematics, University of Notre Dame, United States
- Eric Shea-Brown
- Allen Institute for Brain Science, United States; Department of Applied Mathematics, University of Washington, United States; Department of Physiology and Biophysics, and University of Washington Institute for Neuroengineering, United States.