1. Pascual LMM, Vusirikala A, Nemenman IM, Sober SJ, Pasek M. Millisecond-scale motor coding precedes sensorimotor learning in songbirds. bioRxiv 2024:2024.09.27.615500. [PMID: 39386477 PMCID: PMC11463345 DOI: 10.1101/2024.09.27.615500]
Abstract
A key goal of the nervous system in young animals is to learn motor skills. Songbirds learn to sing as juveniles, providing a unique opportunity to identify the neural correlates of skill acquisition. Prior studies have shown that spike rate variability in vocal motor cortex decreases substantially during song acquisition, suggesting a transition from rate-based neural control to the millisecond-precise motor codes known to underlie adult vocal performance. By distinguishing how the ensemble of spike patterns fired by cortical neurons (the "neural vocabulary") and the relationship between spike patterns and song acoustics (the "neural code") change during song acquisition, we quantified how vocal control changes across learning in juvenile Bengalese finches. We found that despite the expected drop in rate variability (a learning-related change in spike vocabulary), the precision of the neural code in the youngest singers is the same as in adults, with 1-2 ms variations in spike timing transduced into quantifiably different behaviors. In contrast, fluctuations in firing rates on longer timescales fail to affect the motor output in both juvenile and adult animals. The consistent presence of millisecond-scale motor coding during changing levels of spike rate and behavioral variability suggests that learning-related changes in cortical activity reflect the brain's changing its spiking vocabulary to better match the underlying motor code, rather than a change in the precision of the code itself.
Affiliation(s)
- Leila May M. Pascual
- Neuroscience Graduate Program, Emory University, Atlanta, United States
- Department of Biology, Emory University, Atlanta, United States
- Ilya M. Nemenman
- Department of Physics, Emory University, Atlanta, United States
- Initiative in Theory and Modeling of Living Systems, Emory University, Atlanta, United States
- Department of Biology, Emory University, Atlanta, United States
- Samuel J. Sober
- Department of Biology, Emory University, Atlanta, United States
- Michael Pasek
- Department of Physics, Emory University, Atlanta, United States
- Initiative in Theory and Modeling of Living Systems, Emory University, Atlanta, United States
2. Koçillari L, Lorenz GM, Engel NM, Celotto M, Curreli S, Malerba SB, Engel AK, Fellin T, Panzeri S. Sampling bias corrections for accurate neural measures of redundant, unique, and synergistic information. bioRxiv 2024:2024.06.04.597303. [PMID: 38895197 PMCID: PMC11185652 DOI: 10.1101/2024.06.04.597303]
Abstract
Shannon information theory has long been a tool of choice for measuring empirically how populations of neurons in the brain encode information about cognitive variables. Recently, Partial Information Decomposition (PID) has emerged as a principled way to break down this information into components, identifying not only the unique information carried by each neuron but also whether the relationships between neurons generate synergistic or redundant information. While it has long been recognized that Shannon information measures computed on neural activity suffer from a (mostly upward) limited-sampling estimation bias, this issue has largely been ignored in the burgeoning field of PID analysis of neural activity. We used simulations to investigate the limited-sampling bias of PID computed from discrete probabilities (suited to describe neural spiking activity). We found that PID suffers from a large bias that is uneven across components, with synergy by far the most biased. Using approximate analytical expansions, we found that the bias of synergy increases quadratically with the number of discrete responses of each neuron, whereas the bias of unique and redundant information increases only linearly or sub-linearly. Based on these bias properties, we developed simple yet effective procedures that correct for the bias and greatly improve PID estimation relative to current state-of-the-art procedures. We applied these bias-correction procedures to datasets of 53,117 pairs of neurons in the auditory cortex, posterior parietal cortex, and hippocampus of mice performing cognitive tasks, deriving precise estimates and bounds of how synergy and redundancy vary across these brain regions.
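The limited-sampling bias this abstract builds on is easy to reproduce for the simplest Shannon quantity. The sketch below is an illustration of the underlying problem, not of the paper's PID-specific corrections; all names and parameter choices are ours. It shows the plug-in mutual information between two independent discrete variables coming out well above its true value of zero:

```python
import random
from collections import Counter
from math import log2

def plugin_mi(xs, ys):
    """Plug-in (maximum-likelihood) mutual information between two discrete
    sequences, in bits: probabilities are replaced by observed frequencies."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

random.seed(0)
# Two independent 8-valued "responses": the true mutual information is 0 bits,
# but with only 50 samples the plug-in estimate comes out far above zero
# (roughly (Kx - 1)(Ky - 1) / (2 N ln 2) bits, the first-order bias term).
xs = [random.randrange(8) for _ in range(50)]
ys = [random.randrange(8) for _ in range(50)]
spurious_bits = plugin_mi(xs, ys)
```

The same upward bias propagates, unevenly, into every PID component computed from such plug-in probabilities, which is the problem the paper's corrections address.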
Affiliation(s)
- Loren Koçillari
- Institute for Neural Information Processing, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
- Gabriel Matías Lorenz
- Institute for Neural Information Processing, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
- Istituto Italiano di Tecnologia, Genova, Italy
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Nicola Marie Engel
- Institute for Neural Information Processing, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
- Marco Celotto
- Institute for Neural Information Processing, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
- Istituto Italiano di Tecnologia, Genova, Italy
- Simone Blanco Malerba
- Institute for Neural Information Processing, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
- Andreas K. Engel
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
- Stefano Panzeri
- Institute for Neural Information Processing, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
- Istituto Italiano di Tecnologia, Genova, Italy
3. Barta T, Kostal L. Shared input and recurrency in neural networks for metabolically efficient information transmission. PLoS Comput Biol 2024; 20:e1011896. [PMID: 38394341 PMCID: PMC10917264 DOI: 10.1371/journal.pcbi.1011896]
Abstract
Shared input to a population of neurons induces noise correlations, which can decrease the information carried by the population activity. Inhibitory feedback in recurrent neural networks can reduce these noise correlations and thus increase the information carried by the population activity. However, the activity of inhibitory neurons is costly, and inhibitory feedback decreases the gain of the population, so that depolarizing its neurons requires stronger excitatory synaptic input, which is associated with higher ATP consumption. Given that the goal of neural populations is to transmit as much information as possible at minimal metabolic cost, it is unclear whether the increased reliability of information transmission provided by inhibitory feedback compensates for the additional costs. We analyze this problem in a network of leaky integrate-and-fire neurons receiving correlated input. By maximizing mutual information under metabolic cost constraints, we show that there is an optimal strength of recurrent connections in the network, which maximizes mutual information per cost. For higher values of input correlation, the mutual information per cost is higher for recurrent networks with inhibitory feedback than for feedforward networks without any inhibitory neurons. Our results therefore show that the optimal synaptic strength of a recurrent network can be inferred from metabolically efficient coding arguments, and that the decorrelation of input by inhibitory feedback compensates for the associated increase in metabolic costs.
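The premise that shared input induces noise correlations can be seen in a toy simulation. This is a sketch with arbitrary illustrative parameters, not the paper's metabolic-cost optimization: two leaky integrate-and-fire neurons whose inputs share part of their noise develop correlated trial-to-trial spike counts, while neurons with independent inputs do not:

```python
import math
import random

def lif_pair_counts(shared, n_trials=200, steps=500, seed=1):
    """Trial spike counts of two leaky integrate-and-fire neurons that share a
    fraction `shared` of their input noise variance (toy units throughout)."""
    rng = random.Random(seed)
    tau, v_th, dt, mu, sigma = 20.0, 1.0, 1.0, 0.03, 0.1
    counts = []
    for _ in range(n_trials):
        v = [0.0, 0.0]
        c = [0, 0]
        for _ in range(steps):
            common = rng.gauss(0.0, 1.0)
            for i in range(2):
                noise = (math.sqrt(shared) * common
                         + math.sqrt(1.0 - shared) * rng.gauss(0.0, 1.0))
                v[i] += dt * (-v[i] / tau + mu + sigma * noise)
                if v[i] >= v_th:  # threshold crossing: spike and reset
                    c[i] += 1
                    v[i] = 0.0
        counts.append(tuple(c))
    return counts

def pearson(pairs):
    """Pearson correlation of paired spike counts across trials."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    syy = sum((y - my) ** 2 for _, y in pairs)
    return sxy / math.sqrt(sxx * syy)
```

Comparing `pearson(lif_pair_counts(0.8))` against `pearson(lif_pair_counts(0.0))` shows the noise correlation appearing with shared input; the paper's question is what recurrent inhibition strength removes it at acceptable metabolic cost.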
Affiliation(s)
- Tomas Barta
- Laboratory of Computational Neuroscience, Institute of Physiology of the Czech Academy of Sciences, Prague, Czech Republic
- Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology, Onna-son, Okinawa, Japan
- Lubomir Kostal
- Laboratory of Computational Neuroscience, Institute of Physiology of the Czech Academy of Sciences, Prague, Czech Republic
4. De Gregorio J, Sánchez D, Toral R. Entropy Estimators for Markovian Sequences: A Comparative Analysis. Entropy 2024; 26:79. [PMID: 38248204 PMCID: PMC11154276 DOI: 10.3390/e26010079]
Abstract
Entropy estimation is a fundamental problem in information theory that has applications in various fields, including physics, biology, and computer science. Estimating the entropy of discrete sequences can be challenging due to limited data and the lack of unbiased estimators. Most existing entropy estimators are designed for sequences of independent events and their performances vary depending on the system being studied and the available data size. In this work, we compare different entropy estimators and their performance when applied to Markovian sequences. Specifically, we analyze both binary Markovian sequences and Markovian systems in the undersampled regime. We calculate the bias, standard deviation, and mean squared error for some of the most widely employed estimators. We discuss the limitations of entropy estimation as a function of the transition probabilities of the Markov processes and the sample size. Overall, this paper provides a comprehensive comparison of entropy estimators and their performance in estimating entropy for systems with memory, which can be useful for researchers and practitioners in various fields.
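As a flavor of such a comparison, the sketch below contrasts only the two simplest estimators, the plug-in and its Miller-Madow correction, on length-6 words of a binary Markov chain in the undersampled regime; the parameters are illustrative choices of ours, not the paper's benchmark:

```python
import random
from collections import Counter
from math import log, log2

def markov_chain(n, p_flip, seed=5):
    """Binary Markov chain that keeps its current state and flips with p_flip."""
    rng = random.Random(seed)
    x = [rng.randrange(2)]
    for _ in range(n - 1):
        x.append(x[-1] ^ (rng.random() < p_flip))
    return x

def plugin_entropy(samples):
    """Maximum-likelihood ("plug-in") entropy estimate, in bits."""
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in Counter(samples).values())

def miller_madow_entropy(samples):
    """Plug-in estimate plus the (K_observed - 1)/(2N) first-order bias term."""
    k = len(set(samples))
    return plugin_entropy(samples) + (k - 1) / (2 * len(samples) * log(2))

# Undersampled regime: 120 length-6 words (64 possible) from one chain; the
# words are treated as samples even though neighbouring words are correlated.
chain = markov_chain(720, p_flip=0.3)
words = [tuple(chain[i:i + 6]) for i in range(0, 720, 6)]
h_ml = plugin_entropy(words)        # biased downward
h_mm = miller_madow_entropy(words)  # recovers part of the gap to the true
                                    # block entropy 1 + 5*h(0.3) ≈ 5.41 bits
```

The memory in the chain is exactly what breaks the independence assumption behind most estimators, which is the regime the paper analyzes systematically.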
Affiliation(s)
- David Sánchez
- Institute for Cross-Disciplinary Physics and Complex Systems IFISC (UIB-CSIC), Campus Universitat de les Illes Balears, E-07122 Palma de Mallorca, Spain; (J.D.G.); (R.T.)
5. Bryant SJ, Machta BB. Physical Constraints in Intracellular Signaling: The Cost of Sending a Bit. Phys Rev Lett 2023; 131:068401. [PMID: 37625074 PMCID: PMC11146629 DOI: 10.1103/physrevlett.131.068401]
Abstract
Many biological processes require timely communication between molecular components. Cells employ diverse physical channels to this end, transmitting information through diffusion, electrical depolarization, and mechanical waves, among other strategies. Here we bound the energetic cost of transmitting information through these physical channels, in units of k_B T per bit, as a function of the size of the sender and receiver, their spatial separation, and the communication latency. These calculations provide an estimate of the energy costs associated with information processing arising from the physical constraints of the cellular environment, which we find to be many orders of magnitude larger than unity in natural units. From these calculations, we construct a phase diagram indicating where each strategy is most efficient. Our results suggest that intracellular information transfer may constitute a substantial energetic cost, and they provide a new tool for understanding tradeoffs in cellular network function.
Affiliation(s)
- Samuel J. Bryant
- Department of Physics, Yale University, New Haven, Connecticut 06511, USA
- Benjamin B. Machta
- Department of Physics, Yale University and Quantitative Biology Institute, Yale University, New Haven, Connecticut 06511, USA
6. Hernández DG, Roman A, Nemenman I. Low-probability states, data statistics, and entropy estimation. Phys Rev E 2023; 108:014101. [PMID: 37583218 DOI: 10.1103/physreve.108.014101]
Abstract
A fundamental problem in the analysis of complex systems is getting a reliable estimate of the entropy of their probability distributions over the state space. This is difficult because unsampled states can contribute substantially to the entropy, while they do not contribute to the maximum likelihood estimator of entropy, which replaces probabilities by the observed frequencies. Bayesian estimators overcome this obstacle by introducing a model of the low-probability tail of the probability distribution. Which statistical features of the observed data determine the model of the tail, and hence the output of such estimators, remains unclear. Here we show that well-known entropy estimators for probability distributions on discrete state spaces model the structure of the low-probability tail based largely on a few statistics of the data: the sample size, the maximum likelihood estimate, the number of coincidences among the samples, and the dispersion of the coincidences. We derive approximate analytical entropy estimators for undersampled distributions based on these statistics, and we use the results to propose an intuitive understanding of how the Bayesian entropy estimators work.
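The role that the abstract assigns to coincidences can be seen in Ma's classic coincidence-based estimate, a much older estimator than the ones derived in the paper, shown here only to illustrate why coincidence counts carry the information the plug-in estimator lacks; the parameters are illustrative:

```python
import random
from collections import Counter
from math import log2

def coincidences(samples):
    """Number of coinciding sample pairs: the statistic that carries most of
    the usable information about the unseen low-probability tail."""
    return sum(c * (c - 1) // 2 for c in Counter(samples).values())

def ma_entropy(samples):
    """Ma's coincidence-based entropy estimate in bits: minus log2 of the
    estimated probability that two independent draws coincide."""
    n = len(samples)
    return -log2(coincidences(samples) / (n * (n - 1) / 2))

random.seed(7)
# Deeply undersampled: 100 draws from a uniform distribution over 200 states.
# The plug-in estimate cannot exceed log2(100) ≈ 6.6 bits, while the
# coincidence count alone lands near the true value log2(200) ≈ 7.6 bits
# (Ma's estimate is accurate for near-uniform distributions).
draws = [random.randrange(200) for _ in range(100)]
```

Bayesian estimators refine this picture by also using the maximum-likelihood estimate and the dispersion of the coincidences, which is the structure the paper makes explicit.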
Affiliation(s)
- Damián G Hernández
- Department of Physics, Emory University, Atlanta, Georgia, USA
- Department of Medical Physics, Centro Atómico Bariloche and Instituto Balseiro, 8400 San Carlos de Bariloche, Argentina
- Ahmed Roman
- Department of Physics, Emory University, Atlanta, Georgia, USA
- Ilya Nemenman
- Department of Physics, Emory University, Atlanta, Georgia, USA
- Department of Biology, Emory University, Atlanta, Georgia, USA
- Initiative for Theory and Modeling of Living Systems, Emory University, Atlanta, Georgia, USA
7. Özdilek Ü. The Role of Thermodynamic and Informational Entropy in Improving Real Estate Valuation Methods. Entropy 2023; 25:907. [PMID: 37372251 DOI: 10.3390/e25060907]
Abstract
Price, Cost and Income (PCI) are distinct economic indicators intrinsically linked to the values they denote. These observables take center stage in the multi-criteria decision-making process that enables economic agents to convey the subjective utilities of market-exchanged commodities objectively. The valuation of these commodities relies heavily on PCI-based empirical observables and the methodologies they support. The accuracy of this valuation measure is critical, as it influences subsequent decisions within the market chain. However, measurement errors often arise from inherent uncertainties in the value state, affecting economic agents' wealth, particularly when significant commodities such as real estate properties are traded. This paper addresses the issue by incorporating entropy measurements into real estate valuation: a mathematical technique that adjusts and integrates triadic PCI estimates, improving the final stage of appraisal systems where definitive value decisions are crucial. Employing entropy within the appraisal system can also aid market agents in devising informed production and trading strategies for optimal returns. In our practical demonstration, integrating entropy with the PCI estimates significantly improved the precision of the value measurement and reduced errors in economic decision-making.
Affiliation(s)
- Ünsal Özdilek
- Business School, Department of Strategy, Social and Environmental Responsibility, University of Quebec, Montreal, QC H3C 3P8, Canada
8. Putney J, Niebur T, Wood L, Conn R, Sponberg S. An information theoretic method to resolve millisecond-scale spike timing precision in a comprehensive motor program. PLoS Comput Biol 2023; 19:e1011170. [PMID: 37307288 PMCID: PMC10289674 DOI: 10.1371/journal.pcbi.1011170]
Abstract
Sensory inputs in nervous systems are often encoded at the millisecond scale in a precise spike timing code. There is now growing evidence, in behaviors ranging from slow breathing to rapid flight, for the prevalence of precise timing encoding in motor systems. Despite this, we largely do not know at what scale timing matters in these circuits, due to the difficulty of recording a complete set of spike-resolved motor signals and of assessing spike timing precision for encoding continuous motor signals. We also do not know whether the precision scale varies depending on the functional role of different motor units. We introduce a method to estimate spike timing precision in motor circuits using continuous mutual information (MI) estimation at increasing levels of added uniform noise. This method can assess spike timing precision at fine scales for encoding rich motor output variation. We demonstrate the advantages of this approach compared to a previously established discrete information-theoretic method of assessing spike timing precision. We use this method to analyze the precision in a nearly complete, spike-resolved recording of the 10 primary wing muscles controlling flight in an agile hawk moth, Manduca sexta. Tethered moths visually tracked a robotic flower producing a range of turning (yaw) torques. We know that all 10 muscles in this motor program encode the majority of information about yaw torque in spike timings, but we do not know whether individual muscles encode motor information at different levels of precision. We demonstrate that the scale of temporal precision in all motor units in this insect flight circuit is at the sub-millisecond or millisecond scale, with variation in precision scale between muscle types. This method can be applied broadly to estimate spike timing precision in sensory and motor circuits in both invertebrates and vertebrates.
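A minimal discrete sketch of the added-noise idea (our toy version with a plug-in estimator and 1-ms binning; the paper's method uses continuous MI estimation, and all parameters here are illustrative): information about a binary motor condition carried by a spike time survives jitter narrower than the timing difference, but washes out as the jitter grows:

```python
import random
from collections import Counter
from math import log2

def plugin_mi(pairs):
    """Plug-in mutual information (bits) between the two discrete coordinates."""
    n = len(pairs)
    px, py, pxy = Counter(), Counter(), Counter(pairs)
    for x, y in pairs:
        px[x] += 1
        py[y] += 1
    return sum(c / n * log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def mi_at_jitter(width_ms, n=5000, seed=3):
    """MI between a binary motor condition and a spike time whose mean differs
    by 1 ms between conditions, after adding uniform jitter of width_ms."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        cond = rng.randrange(2)
        t = cond * 1.0 + rng.uniform(-width_ms / 2, width_ms / 2)
        pairs.append((cond, round(t)))  # 1-ms binning gives a discrete code
    return plugin_mi(pairs)
```

Scanning `width_ms` and watching where the information collapses locates the precision scale, which is the quantity the paper estimates per muscle with a continuous estimator instead of this coarse binning.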
Affiliation(s)
- Joy Putney
- School of Biological Sciences, Georgia Institute of Technology, Atlanta, Georgia, United States of America
- Graduate Program in Quantitative Biosciences, Georgia Institute of Technology, Atlanta, Georgia, United States of America
- Tobias Niebur
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States of America
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, United States of America
- Leo Wood
- Graduate Program in Quantitative Biosciences, Georgia Institute of Technology, Atlanta, Georgia, United States of America
- School of Physics, Georgia Institute of Technology, Atlanta, Georgia, United States of America
- Rachel Conn
- School of Physics, Georgia Institute of Technology, Atlanta, Georgia, United States of America
- Neuroscience Program, Emory University, Atlanta, Georgia, United States of America
- Simon Sponberg
- School of Biological Sciences, Georgia Institute of Technology, Atlanta, Georgia, United States of America
- Graduate Program in Quantitative Biosciences, Georgia Institute of Technology, Atlanta, Georgia, United States of America
- School of Physics, Georgia Institute of Technology, Atlanta, Georgia, United States of America
9. Tan AK, Tegmark M, Chuang IL. Pareto-Optimal Clustering with the Primal Deterministic Information Bottleneck. Entropy 2022; 24:771. [PMID: 35741492 PMCID: PMC9222302 DOI: 10.3390/e24060771]
Abstract
At the heart of both lossy compression and clustering is a trade-off between the fidelity and size of the learned representation. Our goal is to map out and study the Pareto frontier that quantifies this trade-off. We focus on the optimization of the Deterministic Information Bottleneck (DIB) objective over the space of hard clusterings. To this end, we introduce the primal DIB problem, which we show results in a much richer frontier than its previously studied Lagrangian relaxation when optimized over discrete search spaces. We present an algorithm for mapping out the Pareto frontier of the primal DIB trade-off that is also applicable to other two-objective clustering problems. We study general properties of the Pareto frontier, and we give both analytic and numerical evidence for logarithmic sparsity of the frontier in general. We provide evidence that our algorithm has polynomial scaling despite the super-exponential search space, and additionally, we propose a modification to the algorithm that can be used where sampling noise is expected to be significant. Finally, we use our algorithm to map the DIB frontier of three different tasks: compressing the English alphabet, extracting informative color classes from natural images, and compressing a group theory-inspired dataset, revealing interesting features of the frontier and demonstrating how its structure can be used for model selection, with a focus on points previously hidden by the cloak of the convex hull.
Affiliation(s)
- Andrew K. Tan
- Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; (M.T.); (I.L.C.)
- The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, Cambridge, MA 02139, USA
- Max Tegmark
- Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; (M.T.); (I.L.C.)
- The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, Cambridge, MA 02139, USA
- Isaac L. Chuang
- Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; (M.T.); (I.L.C.)
- The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, Cambridge, MA 02139, USA
- Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
10. Inferring a Property of a Large System from a Small Number of Samples. Entropy 2022; 24:125. [PMID: 35052151 PMCID: PMC8775033 DOI: 10.3390/e24010125]
Abstract
Inferring the value of a property of a large stochastic system is a difficult task when the number of samples is insufficient to reliably estimate the probability distribution. The Bayesian estimator of the property of interest requires knowledge of the prior distribution, and in many situations it is not clear which prior should be used. Several estimators have been developed so far in which the proposed prior is individually tailored for each property of interest; such is the case, for example, for the entropy, the amount of mutual information, or the correlation between pairs of variables. In this paper, we propose a general framework for selecting priors that is valid for arbitrary properties. We first demonstrate that only certain aspects of the prior distribution actually affect the inference process. We then expand the sought prior as a linear combination of a one-dimensional family of indexed priors, each of which is obtained through a maximum entropy approach with constrained mean values of the property under study. In many cases of interest, only one or very few components of the expansion turn out to contribute to the Bayesian estimator, so it is often valid to keep only a single component. The relevant component is selected by the data, so no handcrafted priors are required. We test the performance of this approximation on a few paradigmatic examples and show that it performs well in comparison to the ad hoc methods previously proposed in the literature. Our method highlights the connection between Bayesian inference and equilibrium statistical mechanics, since the most relevant component of the expansion can be argued to be the one with the right temperature.
11. Optimizing Measures of Information Encoding in Astrocytic Calcium Signals. Brain Inform 2022. [DOI: 10.1007/978-3-031-15037-1_10]
12. Zbili M, Rama S. A Quick and Easy Way to Estimate Entropy and Mutual Information for Neuroscience. Front Neuroinform 2021; 15:596443. [PMID: 34211385 PMCID: PMC8239197 DOI: 10.3389/fninf.2021.596443]
Abstract
Calculations of the entropy of a signal or of the mutual information between two variables are valuable analytical tools in neuroscience. They can be applied to all types of data, capture non-linear interactions and are model independent. Yet the limited size and number of recordings one can collect in a series of experiments makes their calculation highly prone to sampling bias. Mathematical methods to overcome this so-called "sampling disaster" exist, but they require considerable expertise and substantial time and computational resources. There is therefore a need for a simple, unbiased and computationally efficient tool for estimating entropy and mutual information. In this article, we propose that entropy-encoding compression algorithms widely used in text and image compression fulfill these requirements. By simply saving the signal in PNG picture format and measuring the size of the file on the hard drive, we can estimate entropy changes across conditions. Furthermore, with some simple modifications of the PNG file, we can also estimate the evolution of the mutual information between a stimulus and the observed responses across conditions. We first demonstrate the applicability of this method using white-noise-like signals. Then, although the method can be used under all kinds of experimental conditions, we provide examples of its application to patch-clamp recordings, the detection of place cells and histological data. Although this method does not give an absolute value of entropy or mutual information, it is mathematically correct, and its simplicity and broad applicability make it a powerful tool for estimating these quantities in experiments.
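The idea can be sketched with any entropy-coding compressor; the toy below uses zlib's DEFLATE (the compressor inside the PNG format) directly on a binary signal instead of an actual PNG file, so, exactly as the authors caution, the numbers are only relative:

```python
import random
import zlib

def compressed_bits_per_symbol(symbols):
    """Upper-bound proxy for the entropy rate: bits per symbol after DEFLATE,
    the entropy-coding compressor that PNG itself uses internally."""
    data = bytes(symbols)
    return 8 * len(zlib.compress(data, 9)) / len(data)

rng = random.Random(4)
fair = [rng.randrange(2) for _ in range(20000)]            # true H = 1 bit/symbol
biased = [int(rng.random() < 0.05) for _ in range(20000)]  # true H ≈ 0.29 bit/symbol
# The compressed sizes rank the two signals by entropy, which is all the
# method needs; absolute values overshoot the true entropies because of
# compressor and container overhead.
```

Comparing `compressed_bits_per_symbol(fair)` with `compressed_bits_per_symbol(biased)` recovers the entropy ordering without any explicit probability estimation, which is the article's central point.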
Affiliation(s)
- Mickael Zbili
- Lyon Neuroscience Research Center (CRNL), Inserm U1028, CNRS UMR 5292, Université Claude Bernard Lyon1, Bron, France
- Sylvain Rama
- Laboratory of Synaptic Imaging, Department of Clinical and Experimental Epilepsy, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
13. Tomar R, Kostal L. Variability and Randomness of the Instantaneous Firing Rate. Front Comput Neurosci 2021; 15:620410. [PMID: 34163344 PMCID: PMC8215133 DOI: 10.3389/fncom.2021.620410]
Abstract
The apparent stochastic nature of neuronal activity significantly affects the reliability of neuronal coding. To quantify the encountered fluctuations, both in neural data and in simulations, the notions of variability and randomness of inter-spike intervals have been proposed and studied. In this article we focus on the concept of the instantaneous firing rate, which is also based on spike timing. We use several classical statistical models of neuronal activity and study the corresponding probability distributions of the instantaneous firing rate. To characterize the firing rate variability and randomness under different spiking regimes, we use different indices of statistical dispersion. We find that the relationship between the variability of inter-spike intervals and the instantaneous firing rate is not straightforward in general. Counter-intuitively, an increase in the randomness (based on entropy) of spike times may either decrease or increase the randomness of the instantaneous firing rate, depending on the neuronal firing model. Finally, we apply our methods to experimental data, establishing that instantaneous-rate analysis can indeed provide additional information about the spiking activity.
Affiliation(s)
- Rimjhim Tomar
- Department of Computational Neuroscience, Institute of Physiology, Czech Academy of Sciences, Prague, Czechia
- Second Medical Faculty, Charles University, Prague, Czechia
- Lubomir Kostal
- Department of Computational Neuroscience, Institute of Physiology, Czech Academy of Sciences, Prague, Czechia
14. Rudelt L, González Marx D, Wibral M, Priesemann V. Embedding optimization reveals long-lasting history dependence in neural spiking activity. PLoS Comput Biol 2021; 17:e1008927. [PMID: 34061837 PMCID: PMC8205186 DOI: 10.1371/journal.pcbi.1008927]
Abstract
Information processing can leave distinct footprints on the statistics of neural spiking. For example, efficient coding minimizes the statistical dependencies on the spiking history, while temporal integration of information may require the maintenance of information over different timescales. To investigate these footprints, we developed a novel approach to quantify history dependence within the spiking of a single neuron, using the mutual information between the entire past and current spiking. This measure captures how much past information is necessary to predict current spiking. In contrast, classical time-lagged measures of temporal dependence like the autocorrelation capture how long-potentially redundant-past information can still be read out. Strikingly, we find for model neurons that our method disentangles the strength and timescale of history dependence, whereas the two are mixed in classical approaches. When applying the method to experimental data, which are necessarily of limited size, a reliable estimation of mutual information is only possible for a coarse temporal binning of past spiking, a so-called past embedding. To still account for the vastly different spiking statistics and potentially long history dependence of living neurons, we developed an embedding-optimization approach that does not only vary the number and size, but also an exponential stretching of past bins. For extra-cellular spike recordings, we found that the strength and timescale of history dependence indeed can vary independently across experimental preparations. While hippocampus indicated strong and long history dependence, in visual cortex it was weak and short, while in vitro the history dependence was strong but short. This work enables an information-theoretic characterization of history dependence in recorded spike trains, which captures a footprint of information processing that is beyond time-lagged measures of temporal dependence. 
To facilitate the application of the method, we provide practical guidelines and a toolbox.
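The history-dependence measure described above can be illustrated with a simple plug-in estimate on a uniform past embedding. This is only a sketch of the general idea, not the authors' optimized, exponentially stretched embedding, and the spike train and all parameters below are hypothetical:

```python
import math
import random
from collections import Counter

random.seed(0)

# Toy spike train with built-in history dependence: a spike in the last
# three bins raises the probability of spiking now (burstiness).
T = 20000
spikes = []
for t in range(T):
    boost = 0.25 if any(spikes[max(0, t - 3):t]) else 0.0
    spikes.append(1 if random.random() < 0.05 + boost else 0)

def entropy(counts):
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def history_dependence(x, d):
    """Plug-in estimate of R = I(past; current) / H(current), where the past
    is the word of the last d bins (a uniform, unoptimized embedding)."""
    joint, past, cur = Counter(), Counter(), Counter()
    for t in range(d, len(x)):
        p = tuple(x[t - d:t])
        joint[(p, x[t])] += 1
        past[p] += 1
        cur[x[t]] += 1
    mi = entropy(past) + entropy(cur) - entropy(joint)
    return mi / entropy(cur)

for d in (1, 3, 6):
    print(d, round(history_dependence(spikes, d), 3))
```

Deeper embeddings capture more of the dependence but also carry more estimation bias, which is exactly the trade-off the embedding-optimization approach is designed to manage.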
Affiliation(s)
- Lucas Rudelt
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Michael Wibral
- Campus Institute for Dynamics of Biological Networks, University of Göttingen, Göttingen, Germany
- Viola Priesemann
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
15
Wason TD. A model integrating multiple processes of synchronization and coherence for information instantiation within a cortical area. Biosystems 2021; 205:104403. [PMID: 33746019 DOI: 10.1016/j.biosystems.2021.104403]
Abstract
What is the form of dynamic, e.g., sensory, information in the mammalian cortex? Information in the cortex is modeled as a coherence map of a mixed chimera state of synchronous, phasic, and disordered minicolumns. The theoretical model is built on neurophysiological evidence. Complex spatiotemporal information is instantiated through a system of interacting biological processes that generate a synchronized cortical area, a coherent aperture. Minicolumn elements are grouped in macrocolumns in an array analogous to a phased-array radar, modeled as an aperture, a "hole through which radiant energy flows." Coherence maps in a cortical area transform inputs from multiple sources into outputs to multiple targets, while reducing complexity and entropy. Coherent apertures can assume extremely large numbers of different information states as coherence maps, which can be communicated among apertures with corresponding very large bandwidths. The coherent aperture model incorporates considerable reported research, integrating five conceptually and mathematically independent processes: 1) a damped Kuramoto network model, 2) a pumped area field potential, 3) the gating of nearly coincident spikes, 4) the coherence of activity across cortical lamina, and 5) complex information formed through functions in macrocolumns. Biological processes and their interactions are described in equations and a functional circuit such that the mathematical pieces can be assembled the same way the neurophysiological ones are. The model can be conceptually convolved over the specifics of local cortical areas within and across species. A coherent aperture becomes a node in a graph of cortical areas with a corresponding distribution of information.
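The first ingredient listed above is a Kuramoto network model. A minimal sketch of the standard mean-field (undamped) Kuramoto system, under assumed parameters rather than the paper's damped, multi-process model, shows the synchronization behavior the minicolumn model builds on:

```python
import math
import random

random.seed(1)

N = 200   # oscillators (stand-ins for minicolumn elements; hypothetical count)
K = 2.0   # coupling strength, chosen well above the synchronization threshold
dt = 0.01
omega = [random.gauss(0.0, 0.5) for _ in range(N)]      # natural frequencies
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def order_parameter(phases):
    """Kuramoto order parameter r in [0, 1]; r ~ 0 disordered, r ~ 1 synchronized."""
    re = sum(math.cos(p) for p in phases) / len(phases)
    im = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(re, im)

r0 = order_parameter(theta)
for _ in range(3000):   # Euler integration of dtheta_i/dt
    re = sum(math.cos(p) for p in theta) / N
    im = sum(math.sin(p) for p in theta) / N
    r, psi = math.hypot(re, im), math.atan2(im, re)
    # Mean-field form: K*r*sin(psi - theta_i) == (K/N) * sum_j sin(theta_j - theta_i)
    theta = [(t + dt * (w + K * r * math.sin(psi - t))) % (2 * math.pi)
             for t, w in zip(theta, omega)]
print(round(r0, 2), "->", round(order_parameter(theta), 2))
```

With coupling well above the critical value, the order parameter rises from near zero toward one, i.e., the population locks into a coherent phase.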
Affiliation(s)
- Thomas D Wason
- North Carolina State University, Department of Biological Sciences, Meitzen Laboratory, Campus Box 7617, 128 David Clark Labs, Raleigh, NC 27695-7617, USA.
16
Meijers M, Ito S, Ten Wolde PR. Behavior of information flow near criticality. Phys Rev E 2021; 103:L010102. [PMID: 33601642 DOI: 10.1103/physreve.103.l010102]
Abstract
Recent experiments have indicated that many biological systems self-organize near their critical point, which hints at a common design principle. While it has been suggested that information transmission is optimized near the critical point, it remains unclear how information transmission depends on the dynamics of the input signal, the distance over which the information needs to be transmitted, and the distance to the critical point. Here we employ stochastic simulations of a driven two-dimensional Ising system and study the instantaneous mutual information and the information transmission rate between a driven input spin and an output spin. The instantaneous mutual information varies nonmonotonically with the temperature but increases monotonically with the correlation time of the input signal. In contrast, there exists not only an optimal temperature but also an optimal finite input correlation time that maximizes the information transmission rate. This global optimum arises from a fundamental trade-off between the need to maximize the frequency of independent input messages, the necessity to respond fast to changes in the input, and the need to respond reliably to these changes. The optimal temperature lies above the critical point but moves toward it as the distance between the input and output spin is increased.
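A stripped-down analogue of this setup can be sketched with an equilibrium 2-D Ising model and plug-in mutual information between two spins. Unlike the paper, there is no driven input or transmission rate here, and the lattice size, temperature, and sample counts are arbitrary choices:

```python
import math
import random
from collections import Counter

random.seed(2)

N = 16        # lattice side
TEMP = 2.5    # slightly above the 2-D Ising critical temperature ~2.269
spins = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(N)]

def sweep():
    """One Metropolis sweep (N*N attempted single-spin flips, periodic boundaries)."""
    for _ in range(N * N):
        i, j = random.randrange(N), random.randrange(N)
        s = spins[i][j]
        nb = (spins[(i + 1) % N][j] + spins[(i - 1) % N][j]
              + spins[i][(j + 1) % N] + spins[i][(j - 1) % N])
        dE = 2 * s * nb
        if dE <= 0 or random.random() < math.exp(-dE / TEMP):
            spins[i][j] = -s

def mutual_info(pairs):
    """Plug-in mutual information (bits) between two +/-1 spin variables."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(a for a, _ in pairs)
    py = Counter(b for _, b in pairs)
    return sum((c / n) * math.log2(c * n / (px[a] * py[b]))
               for (a, b), c in joint.items())

for _ in range(200):      # equilibrate
    sweep()

near, far = [], []
for _ in range(2000):     # sample the chain once per sweep
    sweep()
    near.append((spins[0][0], spins[0][1]))            # adjacent spins
    far.append((spins[0][0], spins[N // 2][N // 2]))   # maximally distant spins
print(round(mutual_info(near), 3), round(mutual_info(far), 3))
```

Even in this static analogue, the mutual information falls off with the distance between the two spins, the dependence that the paper studies as a function of temperature and input dynamics.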
Affiliation(s)
- Sosuke Ito
- NWO Institute AMOLF, 1098 XG Amsterdam, The Netherlands
- Universal Biology Institute, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
17
Li W, Li Y. Entropy, mutual information, and systematic measures of structured spiking neural networks. J Theor Biol 2020; 501:110310. [PMID: 32416092 DOI: 10.1016/j.jtbi.2020.110310]
Abstract
The aim of this paper is to investigate various information-theoretic measures, including entropy, mutual information, and some systematic measures based on mutual information, for a class of structured spiking neuronal networks. To analyze and compute these measures for large networks, we coarse-grained the data by ignoring the order of spikes that fall into the same small time bin. The resultant coarse-grained entropy mainly captures the information contained in the rhythm produced by a local population of the network. We first show that these information-theoretic measures are well defined and computable by proving stochastic stability and the law of large numbers. We then use three neuronal network examples, from simple to complex, to investigate these measures, and give several analytical and computational results about their properties.
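The coarse-graining step, discarding spike order within a small time bin, can be sketched as follows (toy Bernoulli spike data; the rate and bin sizes are hypothetical choices):

```python
import math
import random
from collections import Counter

random.seed(3)

# 50 s of Bernoulli spiking at 1 ms resolution, roughly 20 Hz (hypothetical data).
DUR_MS = 50000
spike_times = [t for t in range(DUR_MS) if random.random() < 0.02]

def coarse_grained_entropy(times, bin_ms, duration_ms):
    """Plug-in entropy (bits per bin) of the spike-count distribution after
    discarding the order of spikes inside each bin (the coarse-graining step)."""
    per_bin = Counter(t // bin_ms for t in times)
    counts = Counter(per_bin[b] for b in range(duration_ms // bin_ms))
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

for bin_ms in (1, 5, 20):
    print(bin_ms, round(coarse_grained_entropy(spike_times, bin_ms, DUR_MS), 3))
```

The entropy per bin grows with bin width simply because wider bins admit more count values, so comparisons across bin sizes need care; the point here is only to make the coarse-graining operation concrete.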
Affiliation(s)
- Wenjie Li
- Department of Mathematics and Statistics, Washington University, St. Louis, MO 63130, USA.
| | - Yao Li
- Department of Mathematics and Statistics, University of Massachusetts Amherst, Amherst, MA 01002, USA.
18
On the Use of Correlation and MI as a Measure of Metabolite-Metabolite Association for Network Differential Connectivity Analysis. Metabolites 2020; 10:metabo10040171. [PMID: 32344593 PMCID: PMC7241243 DOI: 10.3390/metabo10040171]
Abstract
Metabolite differential connectivity analysis has been successful in investigating potential molecular mechanisms underlying different conditions in biological systems. Correlation and mutual information (MI) are two of the most common measures used to quantify association, build metabolite-metabolite association networks, and calculate differential connectivity. In this study, we investigated the performance of correlation and MI in identifying significantly differentially connected metabolites. These association measures were compared on (i) 23 publicly available metabolomic data sets and 7 data sets from other fields, (ii) simulated data with known correlation structures, and (iii) data generated using a dynamic metabolic model to simulate realistic metabolite concentration profiles. In all cases, we found more differentially connected metabolites when using correlation indices as the measure of association than when using MI. We also observed that different MI estimation algorithms performed differently when applied to data generated using a dynamic model. We conclude that there is no significant benefit in using MI as a replacement for standard Pearson or Spearman correlation when the goal is to quantify and detect differentially connected metabolites.
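A minimal comparison of the two association measures on simulated data might look like this; plug-in MI with equal-frequency binning stands in for the MI estimators compared in the paper, and all parameters are hypothetical:

```python
import math
import random
from collections import Counter

random.seed(4)

n = 2000
x = [random.gauss(0, 1) for _ in range(n)]
# Linearly associated "metabolite" with noise (hypothetical concentrations).
y = [0.7 * xi + 0.5 * random.gauss(0, 1) for xi in x]

def pearson(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    return cov / math.sqrt(sum((p - ma) ** 2 for p in a)
                           * sum((q - mb) ** 2 for q in b))

def binned_mi(a, b, bins=8):
    """Plug-in MI (bits) after equal-frequency binning of each variable."""
    def to_bins(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        lab = [0] * len(v)
        for r, idx in enumerate(order):
            lab[idx] = r * bins // len(v)
        return lab
    da, db = to_bins(a), to_bins(b)
    joint, pa, pb = Counter(zip(da, db)), Counter(da), Counter(db)
    return sum((c / len(a)) * math.log2(c * len(a) / (pa[i] * pb[j]))
               for (i, j), c in joint.items())

shuffled = random.sample(y, len(y))   # destroys the association
print(round(pearson(x, y), 2),
      round(binned_mi(x, y), 2),
      round(binned_mi(x, shuffled), 2))
```

Both measures detect the association, but the binned MI carries a small positive bias even for shuffled (independent) data, one of the estimation issues the study weighs against correlation.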
19
Zia M, Chung B, Sober S, Bakir MS. Flexible Multielectrode Arrays With 2-D and 3-D Contacts for In Vivo Electromyography Recording. IEEE Transactions on Components, Packaging, and Manufacturing Technology 2020; 10:197-202. [PMID: 32280561 PMCID: PMC7150534 DOI: 10.1109/tcpmt.2019.2963556]
Abstract
We present a system for recording in vivo electromyographic (EMG) signals from songbirds using hybrid polyimide-polydimethylsiloxane (PDMS) flexible multielectrode arrays (MEAs). 2-D electrodes with a diameter of 200, 125, and 50 μm and a center-to-center pitch of 300, 200, and 100 μm, respectively, were fabricated. 3-D MEAs were fabricated using a photoresist reflow process to obtain hemispherical domes utilized to form the 3-D electrodes. Biocompatibility and flexibility of the arrays were ensured by using polyimide and PDMS as the materials of choice for the arrays. EMG activity was recorded from the expiratory muscle group of anesthetized songbirds using the fabricated 2-D and 3-D arrays. Air pressure data were also recorded simultaneously from the air sac of the songbird. Together, EMG recordings and air pressure measurements can be used to characterize how the nervous system controls breathing and other motor behaviors. Such technologies can in turn provide unique insights into motor control in a range of species, including humans. An improvement of over 7× in the signal-to-noise ratio (SNR) is observed with the utilization of 3-D MEAs in comparison to 2-D MEAs.
Affiliation(s)
- Muneeb Zia
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332 USA
- Bryce Chung
- Department of Biology, Emory University, Atlanta, GA 30322 USA
- Samuel Sober
- Department of Biology, Emory University, Atlanta, GA 30322 USA
- Muhannad S Bakir
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332 USA
20
Abstract
This paper presents methods that quantify the structure of statistical interactions within a given data set, and were applied in a previous article. It establishes new results on the k-multivariate mutual information (Ik) inspired by the topological formulation of information introduced in a series of studies. In particular, we show that the vanishing of all Ik for 2≤k≤n of n random variables is equivalent to their statistical independence. Pursuing the work of Hu Kuo Ting and Te Sun Han, we show that information functions provide coordinates for binary variables, and that they are analytically independent of the probability simplex for any set of finite variables. The maximal positive Ik identifies the variables that co-vary the most in the population, whereas the minimal negative Ik identifies synergistic clusters and the variables that differentiate or segregate the most in the population. Finite-data-size effects and estimation biases severely constrain the effective computation of the information topology on data, and we provide simple statistical tests for the undersampling bias and the k-dependences. We give an example of application of these methods to genetic expression and unsupervised cell-type classification. The methods unravel biologically relevant subtypes, with a sample size of 41 genes and with few errors. This establishes generic basic methods to quantify the epigenetic information storage and a unified epigenetic unsupervised learning formalism. We propose that higher-order statistical interactions and non-identically distributed variables are constitutive characteristics of biological systems that should be estimated in order to unravel their significant statistical structure and diversity. The topological information data analysis presented here allows for precisely estimating this higher-order structure characteristic of biological systems.
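The sign behavior of I3 described above, positive for co-varying variables and negative for synergistic ones under the convention used here, can be checked directly from the alternating entropy sum (a textbook calculation, not the paper's full framework):

```python
import math
from itertools import product
from collections import Counter

def entropy(samples):
    """Plug-in Shannon entropy (bits) of a list of hashable outcomes."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

def I3(triples):
    """Multivariate mutual information I3 via the alternating entropy sum:
    I3 = H(X)+H(Y)+H(Z) - H(XY) - H(XZ) - H(YZ) + H(XYZ)."""
    xs = [t[0] for t in triples]
    ys = [t[1] for t in triples]
    zs = [t[2] for t in triples]
    return (entropy(xs) + entropy(ys) + entropy(zs)
            - entropy(list(zip(xs, ys))) - entropy(list(zip(xs, zs)))
            - entropy(list(zip(ys, zs))) + entropy(triples))

# Exhaustive "populations": every (x, y) appears once, so marginals are uniform.
xor = [(x, y, x ^ y) for x, y in product((0, 1), repeat=2)]   # synergistic
copy = [(x, x, x) for x in (0, 1)]                            # redundant, co-varying
print(I3(xor), I3(copy))   # prints: -1.0 1.0
```

The XOR triple is maximally synergistic (I3 = -1 bit), while three copies of one bit are maximally redundant (I3 = +1 bit), matching the interpretation of minimal negative and maximal positive Ik in the abstract.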
21
Pregowska A, Casti A, Kaplan E, Wajnryb E, Szczepanski J. Information processing in the LGN: a comparison of neural codes and cell types. Biological Cybernetics 2019; 113:453-464. [PMID: 31243531 PMCID: PMC6658673 DOI: 10.1007/s00422-019-00801-0]
Abstract
To understand how anatomy and physiology allow an organism to perform its function, it is important to know how information that is transmitted by spikes in the brain is received and encoded. A natural question is whether the spike rate alone encodes the information about a stimulus (rate code), or whether additional information is contained in the temporal pattern of the spikes (temporal code). Here we address this question using data from the cat lateral geniculate nucleus (LGN), the visual portion of the thalamus, through which visual information from the retina is communicated to the visual cortex. We analyzed the responses of LGN neurons to spatially homogeneous spots of various sizes with temporally random luminance modulation. We compared the firing rate with the Shannon information transmission rate, which quantifies the information contained in the temporal relationships between spikes. We found that the behavior of these two rates can differ quantitatively, suggesting that the energy used for spiking does not translate directly into the information to be transmitted. We also compared firing rates with information rates for X-ON and X-OFF cells. For X-ON cells the firing rate and information rate often behave in completely different ways, whereas for X-OFF cells the two rates are much more highly correlated. Our results suggest that X-ON cells employ a more efficient "temporal code," whereas X-OFF cells use a straightforward "rate code," which is more reliable and is correlated with energy consumption.
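The rate-versus-temporal-code distinction can be sketched with two toy spike trains that share a firing rate but differ in temporal structure; the plug-in word entropy below is only a crude stand-in for the Shannon information transmission rate used in the paper, and all parameters are hypothetical:

```python
import math
import random
from collections import Counter

random.seed(5)

T = 30000
# Train A: Bernoulli spiking, p = 0.1 per bin (irregular, information-rich timing).
a = [1 if random.random() < 0.1 else 0 for _ in range(T)]
# Train B: the same mean rate, but perfectly regular (a spike every 10th bin).
b = [1 if t % 10 == 0 else 0 for t in range(T)]

def firing_rate(x):
    return sum(x) / len(x)

def word_entropy_rate(x, w=5):
    """Plug-in entropy of non-overlapping w-bin words, in bits per bin (a crude
    stand-in for an information transmission rate)."""
    words = Counter(tuple(x[i:i + w]) for i in range(0, len(x) - w, w))
    n = sum(words.values())
    return -sum((c / n) * math.log2(c / n) for c in words.values()) / w

print(round(firing_rate(a), 3), round(firing_rate(b), 3))
print(round(word_entropy_rate(a), 3), round(word_entropy_rate(b), 3))
```

The two trains have essentially identical firing rates, yet very different word-entropy rates, illustrating why the paper's firing-rate and information-rate comparisons can dissociate.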
Affiliation(s)
- Agnieszka Pregowska
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B, 02-106 Warsaw, Poland
- Alex Casti
- Department of Mathematics, Gildart-Haase School of Computer Sciences and Engineering, Fairleigh Dickinson University, Teaneck, NJ 07666, USA
- Ehud Kaplan
- Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- National Institute of Mental Health (NUDZ), Topolova 748, 250 67 Klecany, Czech Republic
- Department of Philosophy of Science, Charles University, Prague, Czech Republic
- Eligiusz Wajnryb
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B, 02-106 Warsaw, Poland
- Janusz Szczepanski
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B, 02-106 Warsaw, Poland
22
Verdú S. Empirical Estimation of Information Measures: A Literature Guide. Entropy 2019; 21:e21080720. [PMID: 33267434 PMCID: PMC7515235 DOI: 10.3390/e21080720]
Abstract
We give a brief survey of the literature on the empirical estimation of entropy, differential entropy, relative entropy, mutual information and related information measures. While those quantities are of central importance in information theory, universal algorithms for their estimation are increasingly important in data science, machine learning, biology, neuroscience, economics, language, and other experimental sciences.
Affiliation(s)
- Sergio Verdú
- Independent Researcher, Princeton, NJ 08540, USA
23
Hernández DG, Samengo I. Estimating the Mutual Information between Two Discrete, Asymmetric Variables with Limited Samples. Entropy 2019; 21:e21060623. [PMID: 33267337 PMCID: PMC7515115 DOI: 10.3390/e21060623]
Abstract
Determining the strength of nonlinear, statistical dependencies between two variables is a crucial matter in many research fields. The established measure for quantifying such relations is the mutual information. However, estimating mutual information from limited samples is a challenging task. Since the mutual information is the difference of two entropies, the existing Bayesian estimators of entropy may be used to estimate information. This procedure, however, is still biased in the severely under-sampled regime. Here, we propose an alternative estimator that is applicable to those cases in which the marginal distribution of one of the two variables (the one with minimal entropy) is well sampled. The other variable, as well as the joint and conditional distributions, can be severely undersampled. We obtain a consistent estimator that presents very low bias, outperforming previous methods even when the sampled data contain few coincidences. As with other Bayesian estimators, our proposal focuses on the strength of the interaction between the two variables, without seeking to model the specific way in which they are related. A distinctive property of our method is that the main data statistic determining the amount of mutual information is the inhomogeneity of the conditional distribution of the low-entropy variable in those states in which the large-entropy variable registers coincidences.
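The undersampling problem this estimator addresses can be demonstrated with the ordinary plug-in estimator, whose positive bias for independent variables is roughly (|X|-1)(|Y|-1)/(2N ln 2) bits; this sketch shows the baseline bias, not the authors' Bayesian estimator, and the alphabet sizes and sample count are arbitrary:

```python
import math
import random
from collections import Counter

random.seed(6)

def plugin_mi(xs, ys):
    """Plug-in (maximum-likelihood) mutual information in bits."""
    n = len(xs)
    joint, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * math.log2(c * n / (px[a] * py[b]))
               for (a, b), c in joint.items())

KX, KY, N = 4, 16, 200    # asymmetric alphabets, few samples
xs = [random.randrange(KX) for _ in range(N)]
ys = [random.randrange(KY) for _ in range(N)]   # independent of xs: true MI = 0

est = plugin_mi(xs, ys)
# First-order (Miller-Madow style) bias of the plug-in estimator:
bias = (KX - 1) * (KY - 1) / (2 * N * math.log(2))
print(round(est, 3), round(bias, 3))
```

Although the true mutual information is zero, the plug-in estimate sits near the predicted bias, which is the kind of systematic error that motivates Bayesian estimators in the undersampled regime.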
24
Abstract
Quantifying mutual information between inputs and outputs of a large neural circuit is an important open problem in both machine learning and neuroscience. However, evaluation of the mutual information is known to be generally intractable for large systems due to the exponential growth in the number of terms that need to be evaluated. Here we show how information contained in the responses of large neural populations can be effectively computed provided the input-output functions of individual neurons can be measured and approximated by a logistic function applied to a potentially nonlinear function of the stimulus. Neural responses in this model can remain sensitive to multiple stimulus components. We show that the mutual information in this model can be effectively approximated as a sum of lower-dimensional conditional mutual information terms. The approximations become exact in the limit of large neural populations and for certain conditions on the distribution of receptive fields across the neural population. We empirically find that these approximations continue to work well even when the conditions on the receptive field distributions are not fulfilled. The computing cost for the proposed methods grows linearly in the dimension of the input and compares favorably with other approximations.
Affiliation(s)
- John A Berkowitz
- Department of Physics, University of California San Diego, San Diego, CA 92093, U.S.A.
- Tatyana O Sharpee
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037, and Department of Physics, University of California San Diego, San Diego, CA 92093, U.S.A.
25
Huang W, Zhang K. Approximations of Shannon Mutual Information for Discrete Variables with Applications to Neural Population Coding. Entropy 2019; 21:e21030243. [PMID: 33266958 PMCID: PMC7514724 DOI: 10.3390/e21030243]
Abstract
Although Shannon mutual information has been widely used, its effective calculation is often difficult for many practical problems, including those in neural population coding. Asymptotic formulas based on Fisher information sometimes provide accurate approximations to the mutual information but this approach is restricted to continuous variables because the calculation of Fisher information requires derivatives with respect to the encoded variables. In this paper, we consider information-theoretic bounds and approximations of the mutual information based on Kullback-Leibler divergence and Rényi divergence. We propose several information metrics to approximate Shannon mutual information in the context of neural population coding. While our asymptotic formulas all work for discrete variables, one of them has consistent performance and high accuracy regardless of whether the encoded variables are discrete or continuous. We performed numerical simulations and confirmed that our approximation formulas were highly accurate for approximating the mutual information between the stimuli and the responses of a large neural population. These approximation formulas may potentially bring convenience to the applications of information theory to many practical and theoretical problems.
Affiliation(s)
- Wentao Huang
- Key Laboratory of Cognition and Intelligence and Information Science Academy of China Electronics Technology Group Corporation, Beijing 100086, China
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Correspondence: (W.H.); (K.Z.); Tel.: +1-443-204-0536 (W.H.); +1-410-955-3538 (K.Z.)
- Kechen Zhang
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Correspondence: (W.H.); (K.Z.); Tel.: +1-443-204-0536 (W.H.); +1-410-955-3538 (K.Z.)
26
Abstract
It is difficult to estimate the mutual information between spike trains because established methods require more data than are usually available. Kozachenko-Leonenko estimators promise to solve this problem but include a smoothing parameter that must be set. We propose here that the smoothing parameter can be selected by maximizing the estimated unbiased mutual information. This is tested on fictive data and shown to work very well.
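A minimal 1-D Kozachenko-Leonenko entropy estimate (k = 1 nearest neighbour, in nats) can be sketched as below; the paper's contribution, selecting the smoothing parameter by maximizing the estimated unbiased mutual information, is not reproduced here, and the data and sample size are hypothetical:

```python
import math
import random

random.seed(7)

EULER_GAMMA = 0.5772156649015329

def digamma_int(m):
    """Digamma at a positive integer: psi(m) = -gamma + sum_{j=1}^{m-1} 1/j."""
    return -EULER_GAMMA + sum(1.0 / j for j in range(1, m))

def kl_entropy_1d(sample):
    """Kozachenko-Leonenko differential-entropy estimate (nats) in 1-D with
    k = 1: H ~= psi(N) - psi(1) + log 2 + (1/N) * sum_i log eps_i,
    where eps_i is the distance to the nearest neighbour of sample i."""
    xs = sorted(sample)
    n = len(xs)
    eps = ([xs[1] - xs[0]]
           + [min(xs[i] - xs[i - 1], xs[i + 1] - xs[i]) for i in range(1, n - 1)]
           + [xs[-1] - xs[-2]])
    return (digamma_int(n) - digamma_int(1) + math.log(2)
            + sum(math.log(e) for e in eps) / n)

data = [random.gauss(0.0, 1.0) for _ in range(4000)]
true_h = 0.5 * math.log(2 * math.pi * math.e)   # differential entropy of N(0, 1)
print(round(kl_entropy_1d(data), 3), round(true_h, 3))
```

The nearest-neighbour distances play the role of an adaptive smoothing scale, which is why the choice of k (the smoothing parameter in the abstract's sense) matters and is worth optimizing.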
Affiliation(s)
- Conor Houghton
- Computational Neuroscience Unit, School of Computer Science, Electrical and Electronic Engineering, and Engineering Maths, University of Bristol, Bristol, Avon BS8 1UB, UK
27
Zia M, Chung B, Sober SJ, Bakir MS. Fabrication and Characterization of 3D Multi-Electrode Array on Flexible Substrate for In Vivo EMG Recording from Expiratory Muscle of Songbird. Technical Digest, International Electron Devices Meeting 2018; 2018:29.4.1-29.4.4. [PMID: 30846889 PMCID: PMC6400221 DOI: 10.1109/iedm.2018.8614503]
Abstract
This work presents fabrication and characterization of flexible three-dimensional (3D) multi-electrode arrays (MEAs) capable of high signal-to-noise ratio (SNR) electromyogram (EMG) recordings from the expiratory muscle of a songbird. The fabrication utilizes a photoresist reflow process to obtain 3D structures to serve as the electrodes. A polyimide base with a PDMS top insulation was utilized to ensure flexibility and biocompatibility of the fabricated 3D MEA devices. SNR measurements from the fabricated 3D electrodes show up to a 7x improvement compared to the 2D MEAs.
Affiliation(s)
- Muneeb Zia
- Georgia Institute of Technology, Atlanta, GA, USA
28
Assessing the Relevance of Specific Response Features in the Neural Code. Entropy 2018; 20:e20110879. [PMID: 33266602 PMCID: PMC7512461 DOI: 10.3390/e20110879]
Abstract
The study of the neural code aims at deciphering how the nervous system maps external stimuli into neural activity (the encoding phase) and subsequently transforms such activity into adequate responses to the original stimuli (the decoding phase). Several information-theoretical methods have been proposed to assess the relevance of individual response features, as for example, the spike count of a given neuron, or the amount of correlation in the activity of two cells. These methods work under the premise that the relevance of a feature is reflected in the information loss that is induced by eliminating the feature from the response. The alternative methods differ in the procedure by which the tested feature is removed, and the algorithm with which the lost information is calculated. Here we compare these methods, and show that more often than not, each method assigns a different relevance to the tested feature. We demonstrate that the differences are both quantitative and qualitative, and connect them with the method employed to remove the tested feature, as well as the procedure to calculate the lost information. By studying a collection of carefully designed examples, and working on analytic derivations, we identify the conditions under which the relevance of features diagnosed by different methods can be ranked, or sometimes even equated. The condition for equality involves both the amount and the type of information contributed by the tested feature. We conclude that the quest for relevant response features is more delicate than previously thought, and may yield multiple answers depending on methodological subtleties.
29
Abstract
Estimation of mutual information between random variables has become crucial in a range of fields, from physics to neuroscience to finance. Estimating information accurately over a wide range of conditions relies on the development of flexible methods to describe statistical dependencies among variables, without imposing potentially invalid assumptions on the data. Such methods are needed in cases that lack prior knowledge of their statistical properties and that have limited sample numbers. Here we propose a powerful and generally applicable information estimator based on non-parametric copulas. This estimator, called the non-parametric copula-based estimator (NPC), is tailored to take into account detailed stochastic relationships in the data independently of the data's marginal distributions. The NPC estimator can be used both for continuous and discrete numerical variables and thus provides a single framework for the mutual information estimation of both continuous and discrete data. By extensive validation on artificial samples drawn from various statistical distributions, we found that the NPC estimator compares well against commonly used alternatives. Unlike methods not based on copulas, it allows an estimation of information that is robust to changes of the details of the marginal distributions. Unlike parametric copula methods, it remains accurate regardless of the precise form of the interactions between the variables. In addition, the NPC estimator had accurate information estimates even at low sample numbers, in comparison to alternative estimators. The NPC estimator therefore provides a good balance between general applicability to arbitrarily shaped statistical dependencies in the data and shows accurate and robust performance when working with small sample sizes. We anticipate that the non-parametric copula information estimator will be a powerful tool in estimating mutual information between a broad range of data.
Affiliation(s)
- Houman Safaai
- Department of Neurobiology, Harvard Medical School, Boston, MA
- Istituto Italiano di Tecnologia, Rovereto, Italy
- Arno Onken
- School of Informatics, University of Edinburgh, Edinburgh, UK
30
Bitbol AF. Inferring interaction partners from protein sequences using mutual information. PLoS Comput Biol 2018; 14:e1006401. [PMID: 30422978 PMCID: PMC6258550 DOI: 10.1371/journal.pcbi.1006401]
Abstract
Functional protein-protein interactions are crucial in most cellular processes. They enable multi-protein complexes to assemble and to remain stable, and they allow signal transduction in various pathways. Functional interactions between proteins result in coevolution between the interacting partners, and thus in correlations between their sequences. Pairwise maximum-entropy based models have enabled successful inference of pairs of amino-acid residues that are in contact in the three-dimensional structure of multi-protein complexes, starting from the correlations in the sequence data of known interaction partners. Recently, algorithms inspired by these methods have been developed to identify which proteins are functional interaction partners among the paralogous proteins of two families, starting from sequence data alone. Here, we demonstrate that a slightly higher performance for partner identification can be reached by an approximate maximization of the mutual information between the sequence alignments of the two protein families. Our mutual information-based method also provides signatures of the existence of interactions between protein families. These results stand in contrast with structure prediction of proteins and of multi-protein complexes from sequence data, where pairwise maximum-entropy based global statistical models substantially improve performance compared to mutual information. Our findings entail that the statistical dependences allowing interaction partner prediction from sequence data are not restricted to the residue pairs that are in direct contact at the interface between the partner proteins.
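Column-wise mutual information between paired alignments, the raw ingredient of such methods, can be sketched on a toy example; the sequences below are invented, and the paper's pairing-optimization step over paralogs is not shown:

```python
import math
from collections import Counter

def column_mi(col_a, col_b):
    """Plug-in MI (bits) between two alignment columns over paired sequences."""
    n = len(col_a)
    joint, pa, pb = Counter(zip(col_a, col_b)), Counter(col_a), Counter(col_b)
    return sum((c / n) * math.log2(c * n / (pa[x] * pb[y]))
               for (x, y), c in joint.items())

# Hypothetical paired alignments: one coevolving column pair, one independent pair.
fam_a = ["AKLV", "AKIV", "GRLV", "GRIM", "AKLM", "GRIV"]
fam_b = ["DE", "DE", "NE", "NQ", "DQ", "NQ"]
# Column 0 of family A ("A"/"G") covaries perfectly with column 0 of family B ("D"/"N").
colA0 = [s[0] for s in fam_a]
colB0 = [s[0] for s in fam_b]
colB1 = [s[1] for s in fam_b]
print(round(column_mi(colA0, colB0), 3), round(column_mi(colA0, colB1), 3))
```

The coevolving column pair carries one full bit of mutual information, whereas the unrelated pair carries much less; summing such column-pair scores over correctly matched partners is the kind of signal the paper's method maximizes.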
Affiliation(s)
- Anne-Florence Bitbol
- Sorbonne Université, CNRS, Laboratoire Jean Perrin (UMR 8237), F-75005 Paris, France
31
Azarfar A, Calcini N, Huang C, Zeldenrust F, Celikel T. Neural coding: A single neuron's perspective. Neurosci Biobehav Rev 2018; 94:238-247. [PMID: 30227142 DOI: 10.1016/j.neubiorev.2018.09.007]
Abstract
What any sensory neuron knows about the world is one of the cardinal questions in Neuroscience. Information from the sensory periphery travels across synaptically coupled neurons as each neuron encodes information by varying the rate and timing of its action potentials (spikes). Spatiotemporally correlated changes in this spiking regime across neuronal populations are the neural basis of sensory representations. In the somatosensory cortex, however, spiking of individual (or pairs of) cortical neurons is only minimally informative about the world. Recent studies showed that one solution neurons implement to counteract this information loss is adapting their rate of information transfer to the ongoing synaptic activity by changing the membrane potential at which a spike is generated. Here we first introduce the principles of information flow from the sensory periphery to the primary sensory cortex in a model sensory (whisker) system, and subsequently discuss how the adaptive spike threshold gates the intracellular information transfer from the somatic post-synaptic potential to action potentials, controlling the information content of communication across somatosensory cortical neurons.
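A cartoon of adaptive spike-threshold gain control can be sketched with a leaky integrate-and-fire neuron whose threshold jumps after each spike and decays back; the dynamics and all parameters below are illustrative assumptions, not a model taken from the review:

```python
import random

random.seed(8)

def lif_spikes(drive, adapt=True, T=5000):
    """Leaky integrate-and-fire neuron with an activity-dependent spike
    threshold: each spike raises the threshold by d_theta, which then decays
    back toward theta0 (hypothetical parameters, units of ms and a.u.)."""
    dt, tau_v, tau_th = 1.0, 20.0, 200.0
    v_rest, theta0, d_theta = 0.0, 1.0, 0.5
    v, theta, n_spikes = v_rest, theta0, 0
    for _ in range(T):
        noise = random.gauss(0.0, 0.2)
        v += dt * (-(v - v_rest) + drive + noise) / tau_v
        theta += dt * (theta0 - theta) / tau_th
        if v >= theta:
            n_spikes += 1
            v = v_rest
            if adapt:
                theta += d_theta
    return n_spikes

lo_a, hi_a = lif_spikes(1.2, adapt=True), lif_spikes(2.4, adapt=True)
lo_f, hi_f = lif_spikes(1.2, adapt=False), lif_spikes(2.4, adapt=False)
print(lo_a, hi_a, lo_f, hi_f)
```

With adaptation on, the spike count for a doubled drive grows far less than without it: the moving threshold compresses the neuron's output range, a toy version of how an adaptive spike threshold can regulate information transfer under changing synaptic activity.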
Affiliation(s)
- Alireza Azarfar
- Department of Neurophysiology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, the Netherlands
- Niccoló Calcini
- Department of Neurophysiology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, the Netherlands
- Chao Huang
- Department of Neurophysiology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, the Netherlands
- Fleur Zeldenrust
- Department of Neurophysiology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, the Netherlands
- Tansu Celikel
- Department of Neurophysiology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, the Netherlands

32
Wang S, Chen X, Du D, Zheng W, Hu L, Yang H, Cheng J, Gong M. MetaboGroupS: A Group Entropy-Based Web Platform for Evaluating Normalization Methods in Blood Metabolomics Data from Maintenance Hemodialysis Patients. Anal Chem 2018; 90:11124-11130. [PMID: 30118600 DOI: 10.1021/acs.analchem.8b03065] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Affiliation(s)
- Shisheng Wang
- West China-Washington Mitochondria and Metabolism Research Center and Key Lab of Transplant Engineering and Immunology, MOH, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Xiaolei Chen
- Department of Nephrology, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Dan Du
- West China-Washington Mitochondria and Metabolism Research Center and Key Lab of Transplant Engineering and Immunology, MOH, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Wen Zheng
- West China-Washington Mitochondria and Metabolism Research Center and Key Lab of Transplant Engineering and Immunology, MOH, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Liqiang Hu
- West China-Washington Mitochondria and Metabolism Research Center and Key Lab of Transplant Engineering and Immunology, MOH, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Hao Yang
- West China-Washington Mitochondria and Metabolism Research Center and Key Lab of Transplant Engineering and Immunology, MOH, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Jingqiu Cheng
- West China-Washington Mitochondria and Metabolism Research Center and Key Lab of Transplant Engineering and Immunology, MOH, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Meng Gong
- West China-Washington Mitochondria and Metabolism Research Center and Key Lab of Transplant Engineering and Immunology, MOH, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China

33
Junker M, Endres D, Sun ZP, Dicke PW, Giese M, Thier P. Learning from the past: A reverberation of past errors in the cerebellar climbing fiber signal. PLoS Biol 2018; 16:e2004344. [PMID: 30067764 PMCID: PMC6089447 DOI: 10.1371/journal.pbio.2004344] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2017] [Revised: 08/13/2018] [Accepted: 07/13/2018] [Indexed: 01/31/2023] Open
Abstract
The cerebellum allows us to rapidly adjust motor behavior to the needs of the situation. It is commonly assumed that cerebellum-based motor learning is guided by the difference between the desired and the actual behavior, i.e., by error information. Not only immediate but also future behavior will benefit from an error because it induces lasting changes of parallel fiber synapses on Purkinje cells (PCs), whose output mediates the behavioral adjustments. Olivary climbing fibers, likewise connecting with PCs, are thought to transport information on instant errors needed for the synaptic modification yet not to contribute to error memory. Here, we report work on monkeys tested in a saccadic learning paradigm that challenges this concept. We demonstrate not only a clear complex spike (CS) signature of the error at the time of its occurrence but also a reverberation of this signature much later, before a new manifestation of the behavior, suited to improving it.
Affiliation(s)
- Marc Junker
- Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Dominik Endres
- Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Section on Computational Sensomotorics, Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Zong Peng Sun
- Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Peter W. Dicke
- Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Martin Giese
- Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Section on Computational Sensomotorics, Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Peter Thier
- Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany

34
Information-theoretic analysis of realistic odor plumes: What cues are useful for determining location? PLoS Comput Biol 2018; 14:e1006275. [PMID: 29990365 PMCID: PMC6054425 DOI: 10.1371/journal.pcbi.1006275] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2018] [Revised: 07/20/2018] [Accepted: 05/31/2018] [Indexed: 01/30/2023] Open
Abstract
Many species rely on olfaction to navigate towards food sources or mates. Olfactory navigation is a challenging task since odor environments are typically turbulent. While time-averaged odor concentration varies smoothly with the distance to the source, instantaneous concentrations are intermittent, and obtaining stable averages takes longer than the typical intervals between animals' navigation decisions. How to effectively sample from the odor distribution to determine sampling location is the focus of this article. To investigate which sampling strategies are most informative about the location of an odor source, we recorded three naturalistic stimuli with planar laser-induced fluorescence and used an information-theoretic approach to quantify the information that different sampling strategies provide about sampling location. Specifically, we compared multiple sampling strategies based on a fixed number of coding bits for encoding the olfactory stimulus. When the coding bits were all allocated to representing odor concentration at a single sensor, information rapidly saturated. Using the same number of coding bits in two sensors provides more information, as does coding multiple samples at different times. When accumulating multiple samples at a fixed location, the temporal sequence does not yield a large amount of information and can be averaged with minimal loss. Furthermore, we show that histogram equalization is not the most efficient way to use coding bits when using the olfactory sample to determine location. Navigating towards a food source or mating partner based on an animal's sense of smell is a difficult task due to the complex spatiotemporal distribution of odor molecules. The most basic aspect of this task is the acquisition of samples from the environment. It is clear that odor concentration does not vary smoothly across space in many natural foraging environments.
Using data from three different naturalistic environments, we compare different sampling strategies and assess their efficacy in determining the source's location. Our findings show that coarsely encoding the concentration of samples at separate sensors and/or multiple times provides more information than encoding fewer samples with higher resolution. Furthermore, coding resources should be focused on discriminating rare high-concentration odor samples, which are very informative about the sampling location. Such a nonlinear transformation can be implemented biologically by the receptor binding kinetics that capture odorants as the first stage of the sampling process. A further implication is that animals, as well as computational models and algorithms, can operate efficiently with a coarse representation of the odor concentration.
35
Baravalle R, Rosso OA, Montani F. Rhythmic activities of the brain: Quantifying the high complexity of beta and gamma oscillations during visuomotor tasks. CHAOS (WOODBURY, N.Y.) 2018; 28:075513. [PMID: 30070505 DOI: 10.1063/1.5025187] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/07/2018] [Accepted: 06/11/2018] [Indexed: 06/08/2023]
Abstract
Electroencephalography (EEG) signals depict the electrical activity that takes place at the surface of the brain and provide an important tool for understanding a variety of cognitive processes. The EEG is the product of synchronized activity of the brain, and variations in EEG oscillation patterns reflect the underlying changes in neuronal synchrony. Our aim is to characterize the complexity of the EEG rhythmic oscillation bands when subjects perform visuomotor or imagined cognitive tasks (imagined movement), providing a causal mapping of the dynamical rhythmic activities of the brain as a measure of attentional investment. We estimate the intrinsic correlational structure of the signals within the causality entropy-complexity plane H×C, where the enhanced complexity in the gamma 1, gamma 2, and beta 1 bands allows us to distinguish motor-visual memory tasks from control conditions. We identify the dynamics of the gamma 1, gamma 2, and beta 1 rhythmic oscillations within the zone of chaotic dissipative behavior, whereas the beta 2 band shows a much higher level of entropy and a significantly lower level of complexity, corresponding to a non-invertible cubic map. Our findings underscore the importance of the gamma band for attention and perceptual feature binding during the visuomotor/imagery tasks.
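The H coordinate of the entropy-complexity plane is commonly the normalized Bandt-Pompe permutation entropy of the signal's ordinal patterns. A minimal Python sketch (illustrative only, not the authors' pipeline; the embedding order of 3 is an arbitrary choice):

```python
import numpy as np
from itertools import permutations
from math import factorial, log2

def permutation_entropy(signal, order=3):
    """Normalized Bandt-Pompe permutation entropy: close to 1 for white
    noise, 0 for a monotone signal (assumes no ties within a window)."""
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(len(signal) - order + 1):
        # The ordinal pattern is the ranking of values in the window.
        counts[tuple(np.argsort(signal[i:i + order]))] += 1
    p = np.array([c for c in counts.values() if c > 0], dtype=float)
    p /= p.sum()
    # Shannon entropy of the pattern distribution, normalized by its maximum.
    return float(-(p * np.log2(p)).sum() / log2(factorial(order)))

rng = np.random.default_rng(2)
print(permutation_entropy(rng.standard_normal(10000)))  # close to 1
print(permutation_entropy(np.arange(10000.0)))          # exactly 0
```

The complexity coordinate C additionally weights H by a Jensen-Shannon disequilibrium relative to the uniform pattern distribution, which this sketch omits.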
Affiliation(s)
- Roman Baravalle
- IFLYSIB, CONICET & Universidad Nacional de La Plata, Calle 59-789, 1900 La Plata, Argentina
- Osvaldo A Rosso
- Departamento de Informática en Salud, Hospital Italiano de Buenos Aires & CONICET, C1199ABB Ciudad Autónoma de Buenos Aires, Argentina
- Fernando Montani
- IFLYSIB, CONICET & Universidad Nacional de La Plata, Calle 59-789, 1900 La Plata, Argentina

36
Timme NM, Lapish C. A Tutorial for Information Theory in Neuroscience. eNeuro 2018; 5:ENEURO.0052-18.2018. [PMID: 30211307 PMCID: PMC6131830 DOI: 10.1523/eneuro.0052-18.2018] [Citation(s) in RCA: 105] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2018] [Revised: 04/10/2018] [Accepted: 05/30/2018] [Indexed: 11/21/2022] Open
Abstract
Understanding how neural systems integrate, encode, and compute information is central to understanding brain function. Frequently, data from neuroscience experiments are multivariate, the interactions between the variables are nonlinear, and the landscape of hypothesized or possible interactions between variables is extremely broad. Information theory is well suited to address these types of data, as it possesses multivariate analysis tools, it can be applied to many different types of data, it can capture nonlinear interactions, and it does not require assumptions about the structure of the underlying data (i.e., it is model independent). In this article, we walk through the mathematics of information theory along with common logistical problems associated with data type, data binning, data quantity requirements, bias, and significance testing. Next, we analyze models inspired by canonical neuroscience experiments to improve understanding and demonstrate the strengths of information theory analyses. To facilitate the use of information theory analyses, and an understanding of how these analyses are implemented, we also provide a free MATLAB software package that can be applied to a wide range of data from neuroscience experiments, as well as from other fields of study.
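The software accompanying the tutorial is MATLAB, but the plug-in estimator at the heart of such analyses is compact in any language. A minimal Python sketch (illustrative, not the authors' code) of discrete mutual information from a joint histogram:

```python
import numpy as np

def mutual_information(x, y, bins=4):
    """Plug-in estimate of mutual information (bits) from paired samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                 # joint probability table
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y
    nz = pxy > 0                              # skip zero cells (0 log 0 = 0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])))

x = np.repeat(np.arange(4), 100)              # 4 equiprobable stimulus classes
print(mutual_information(x, x))               # identity mapping: 2 bits exactly
rng = np.random.default_rng(0)
print(mutual_information(x, rng.permutation(x)))  # shuffled: near zero, plus bias
```

The shuffled control comes out slightly above zero, which is the small positive bias of plug-in estimates that motivates the article's discussion of bias correction and significance testing.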
Affiliation(s)
- Nicholas M Timme
- Department of Psychology, Indiana University - Purdue University Indianapolis, 402 N. Blackford St, Indianapolis, IN 46202
- Christopher Lapish
- Department of Psychology, Indiana University - Purdue University Indianapolis, 402 N. Blackford St, Indianapolis, IN 46202

37
Akrami A, Kopec CD, Diamond ME, Brody CD. Posterior parietal cortex represents sensory history and mediates its effects on behaviour. Nature 2018; 554:368-372. [DOI: 10.1038/nature25510] [Citation(s) in RCA: 202] [Impact Index Per Article: 28.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2017] [Accepted: 01/09/2018] [Indexed: 11/09/2022]
38
Xiao Z, Wang B, Sornborger AT, Tao L. Mutual Information and Information Gating in Synfire Chains. ENTROPY 2018; 20:e20020102. [PMID: 33265193 PMCID: PMC7512595 DOI: 10.3390/e20020102] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/22/2017] [Revised: 01/29/2018] [Accepted: 01/30/2018] [Indexed: 11/27/2022]
Abstract
Coherent neuronal activity is believed to underlie the transfer and processing of information in the brain. Coherent activity in the form of synchronous firing and oscillations has been measured in many brain regions and has been correlated with enhanced feature processing and other sensory and cognitive functions. In the theoretical context, synfire chains and the transfer of transient activity packets in feedforward networks have been appealed to in order to describe coherent spiking and information transfer. Recently, it has been demonstrated that the classical synfire chain architecture, with the addition of suitably timed gating currents, can support the graded transfer of mean firing rates in feedforward networks (called synfire-gated synfire chains—SGSCs). Here we study information propagation in SGSCs by examining mutual information as a function of layer number in a feedforward network. We explore the effects of gating and noise on information transfer in synfire chains and demonstrate that asymptotically, two main regions exist in parameter space where information may be propagated and its propagation is controlled by pulse-gating: a large region where binary codes may be propagated, and a smaller region near a cusp in parameter space that supports graded propagation across many layers.
Affiliation(s)
- Zhuocheng Xiao
- Department of Mathematics, University of Arizona, Tucson, AZ 85721, USA
- Binxu Wang
- Center for Bioinformatics, National Laboratory of Protein Engineering and Plant Genetic Engineering, School of Life Sciences, Peking University, Beijing 100871, China
- Yuanpei School, Peking University, Beijing 100871, China
- Andrew T. Sornborger
- Information Sciences, CCS-3, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
- Department of Mathematics, University of California, Davis, CA 95616, USA
- Correspondence: (A.T.S.); (L.T.)
- Louis Tao
- Center for Bioinformatics, National Laboratory of Protein Engineering and Plant Genetic Engineering, School of Life Sciences, Peking University, Beijing 100871, China
- Center for Quantitative Biology, Peking University, Beijing 100871, China
- Correspondence: (A.T.S.); (L.T.)

39
Minimum and Maximum Entropy Distributions for Binary Systems with Known Means and Pairwise Correlations. ENTROPY 2017; 19:e19080427. [DOI: 10.3390/e19080427] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/27/2017] [Revised: 08/08/2017] [Accepted: 08/18/2017] [Indexed: 11/16/2022]
40
Daniels BC, Flack JC, Krakauer DC. Dual Coding Theory Explains Biphasic Collective Computation in Neural Decision-Making. Front Neurosci 2017; 11:313. [PMID: 28634436 PMCID: PMC5459926 DOI: 10.3389/fnins.2017.00313] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2017] [Accepted: 05/18/2017] [Indexed: 11/13/2022] Open
Abstract
A central question in cognitive neuroscience is how unitary, coherent decisions at the whole organism level can arise from the distributed behavior of a large population of neurons with only partially overlapping information. We address this issue by studying neural spiking behavior recorded from a multielectrode array with 169 channels during a visual motion direction discrimination task. It is well known that in this task there are two distinct phases in neural spiking behavior. Here we show Phase I is a distributed or incompressible phase in which uncertainty about the decision is substantially reduced by pooling information from many cells. Phase II is a redundant or compressible phase in which numerous single cells contain all the information present at the population level in Phase I, such that the firing behavior of a single cell is enough to predict the subject's decision. Using an empirically grounded dynamical modeling framework, we show that in Phase I large cell populations with low redundancy produce a slow timescale of information aggregation through critical slowing down near a symmetry-breaking transition. Our model indicates that increasing collective amplification in Phase II leads naturally to a faster timescale of information pooling and consensus formation. Based on our results and others in the literature, we propose that a general feature of collective computation is a "coding duality" in which there are accumulation and consensus formation processes distinguished by different timescales.
Affiliation(s)
- Bryan C. Daniels
- ASU–SFI Center for Biosocial Complex Systems, Arizona State University, Tempe, AZ, United States
- Jessica C. Flack
- ASU–SFI Center for Biosocial Complex Systems, Arizona State University, Tempe, AZ, United States
- Santa Fe Institute, Santa Fe, NM, United States
- David C. Krakauer
- ASU–SFI Center for Biosocial Complex Systems, Arizona State University, Tempe, AZ, United States
- Santa Fe Institute, Santa Fe, NM, United States

41
Wollstadt P, Sellers KK, Rudelt L, Priesemann V, Hutt A, Fröhlich F, Wibral M. Breakdown of local information processing may underlie isoflurane anesthesia effects. PLoS Comput Biol 2017; 13:e1005511. [PMID: 28570661 PMCID: PMC5453425 DOI: 10.1371/journal.pcbi.1005511] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2016] [Accepted: 04/11/2017] [Indexed: 02/07/2023] Open
Abstract
The disruption of coupling between brain areas has been suggested as the mechanism underlying loss of consciousness in anesthesia. This hypothesis has been tested previously by measuring the information transfer between brain areas, and by taking reduced information transfer as a proxy for decoupling. Yet, information transfer is a function of the amount of information available in the information source—such that transfer decreases even for unchanged coupling when less source information is available. Therefore, we reconsidered past interpretations of reduced information transfer as a sign of decoupling, and asked whether impaired local information processing leads to a loss of information transfer. An important prediction of this alternative hypothesis is that changes in locally available information (signal entropy) should be at least as pronounced as changes in information transfer. We tested this prediction by recording local field potentials in two ferrets after administration of isoflurane in concentrations of 0.0%, 0.5%, and 1.0%. We found strong decreases in the source entropy under isoflurane in area V1 and the prefrontal cortex (PFC)—as predicted by our alternative hypothesis. The decrease in source entropy was stronger in PFC compared to V1. Information transfer between V1 and PFC was reduced bidirectionally, but with a stronger decrease from PFC to V1. This links the stronger decrease in information transfer to the stronger decrease in source entropy—suggesting reduced source entropy reduces information transfer. This conclusion fits the observation that the synaptic targets of isoflurane are located in local cortical circuits rather than on the synapses formed by interareal axonal projections. Thus, changes in information transfer under isoflurane seem to be a consequence of changes in local processing more than of decoupling between brain areas. 
We suggest that source entropy changes must be considered whenever interpreting changes in information transfer as decoupling. Currently we do not understand how anesthesia leads to loss of consciousness (LOC). One popular idea is that we lose consciousness when brain areas lose their ability to communicate with each other, as anesthetics might interrupt transmission on the nerve fibers coupling them. This idea has been tested by measuring the amount of information transferred between brain areas, and taking this transfer to reflect the coupling itself. Yet, information that is not available in the source area cannot be transferred to a target. Hence, the decreases in information transfer could be related to less information being available in the source, rather than to a decoupling. We tested this possibility by measuring the information available in source brain areas and found that it decreased under isoflurane anesthesia. In addition, a stronger decrease in source information led to a stronger decrease in the transferred information. Thus, the input to the connection between brain areas determined the communicated information, not the strength of the coupling (which would result in a stronger decrease in the target). We suggest that interrupted information processing within brain areas makes an important contribution to LOC and deserves more attention in attempts to understand loss of consciousness under anesthesia.
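The core argument, that measured transfer falls when source entropy falls even at fixed coupling, can be reproduced in a toy binary model. This is a sketch under stated assumptions: the study analyzed continuous field potentials with more sophisticated estimators, and the chain construction, parameters, and function names here are hypothetical.

```python
import numpy as np

def transfer_entropy(src, tgt):
    """Plug-in transfer entropy src->tgt (bits) for binary series:
    I(tgt[t+1]; src[t] | tgt[t]), from empirical joint frequencies."""
    joint = np.zeros((2, 2, 2))
    for a, b, c in zip(tgt[1:], src[:-1], tgt[:-1]):
        joint[a, b, c] += 1
    p = joint / joint.sum()
    te = 0.0
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                if p[a, b, c] > 0:
                    num = p[a, b, c] * p[:, :, c].sum()
                    den = p[a, :, c].sum() * p[:, b, c].sum()
                    te += p[a, b, c] * np.log2(num / den)
    return float(te)

def coupled_pair(p_src, coupling, n, rng):
    """tgt copies the previous src value with probability `coupling`,
    otherwise takes a fair coin flip; src is an i.i.d. Bernoulli chain."""
    src = (rng.random(n) < p_src).astype(int)
    lagged = np.concatenate([[0], src[:-1]])
    tgt = np.where(rng.random(n) < coupling, lagged, rng.integers(0, 2, n))
    return src, tgt

rng = np.random.default_rng(4)
hi = transfer_entropy(*coupled_pair(0.5, 0.8, 50000, rng))  # high-entropy source
lo = transfer_entropy(*coupled_pair(0.9, 0.8, 50000, rng))  # low-entropy source
print(hi, lo)   # transfer drops although the coupling is identical
```

Both chains use the same coupling probability of 0.8; only the source's marginal entropy differs, yet the measured transfer entropy shrinks with it, which is exactly the confound the authors raise.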
Affiliation(s)
- Patricia Wollstadt
- MEG Unit, Brain Imaging Center, Goethe University, Frankfurt/Main, Germany
- * E-mail: (PW); (VP)
- Kristin K. Sellers
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Neurobiology Curriculum, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Lucas Rudelt
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Viola Priesemann
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, BCCN, Göttingen, Germany
- * E-mail: (PW); (VP)
- Axel Hutt
- Deutscher Wetterdienst, Section FE 12 - Data Assimilation, Offenbach/Main, Germany
- Department of Mathematics and Statistics, University of Reading, Reading, United Kingdom
- Flavio Fröhlich
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Neurobiology Curriculum, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Department of Cell Biology and Physiology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Department of Biomedical Engineering, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Department of Neurology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Michael Wibral
- MEG Unit, Brain Imaging Center, Goethe University, Frankfurt/Main, Germany

42
Granados AA, Crane MM, Montano-Gutierrez LF, Tanaka RJ, Voliotis M, Swain PS. Distributing tasks via multiple input pathways increases cellular survival in stress. eLife 2017; 6. [PMID: 28513433 PMCID: PMC5464774 DOI: 10.7554/elife.21415] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2016] [Accepted: 05/12/2017] [Indexed: 12/23/2022] Open
Abstract
Improving in one aspect of a task can undermine performance in another, but how such opposing demands play out in single cells and impact on fitness is mostly unknown. Here we study budding yeast in dynamic environments of hyperosmotic stress and show how the corresponding signalling network increases cellular survival both by assigning the requirements of high response speed and high response accuracy to two separate input pathways and by having these pathways interact to converge on Hog1, a p38 MAP kinase. Cells with only the less accurate, reflex-like pathway are fitter in sudden stress, whereas cells with only the slow, more accurate pathway are fitter in increasing but fluctuating stress. Our results demonstrate that cellular signalling is vulnerable to trade-offs in performance, but that these trade-offs can be mitigated by assigning the opposing tasks to different signalling subnetworks. Such division of labour could function broadly within cellular signal transduction. DOI:http://dx.doi.org/10.7554/eLife.21415.001 The faster we do tasks the harder it is to do them well. For example, when we wish to judge if, say, a cup, is too hot, we first quickly withdraw our hand after touching it: we know that the cup is hot but not how much. Next we hold a finger against the cup to accurately judge its temperature. Such speed-accuracy trade-offs are studied widely in fields ranging from neuroscience to engineering, but their consequences for single cells are unknown. This is despite the fact that when cells are exposed to stress they must respond both quickly (to survive) and accurately (to reduce how many resources they consume). One way of stressing yeast cells is to place them in a syrupy substance called sorbitol. This causes the cells to lose water, shrink in size, and launch a stress response to regain volume. If the cells respond inappropriately to the situation, they may die. 
The signalling network that produces the stress response is unusual in that it has a Y-shaped structure, where the two ‘arms’ of the Y are the input pathways. Although it was known that one input pathway responds to stress faster than the other, the advantages of having two inputs in the signalling network were not understood. Granados, Crane et al. thought that the differences in speed and the Y-shaped structure could allow the cell to respond to stress with both speed and accuracy. To investigate this theory, Granados, Crane et al. used a microscope to study individual yeast cells that had been exposed to sorbitol. Combining these results with a mathematical model of the cell signalling network revealed that a mutant yeast cell that only has one of the input pathways specializes in speed but is inaccurate, similar to a reflex-like response. In contrast, a mutant with only the other pathway specializes in accuracy, being slower but matching the level of the cell’s response to the level of stress placed on it. This trade-off is reflected in rates of cell survival: the first mutant survives best in sudden shocks of stress; the second mutant survives best in gradually increasing stress. Normal yeast cells that have both input pathways survive more often than either mutant. Overall, the results presented by Granados, Crane et al. reveal principles behind cellular decision-making that should hold true in more complex organisms and could be exploited by synthetic biologists to programme cells with new behaviours. DOI:http://dx.doi.org/10.7554/eLife.21415.002
Affiliation(s)
- Alejandro A Granados
- SynthSys - Synthetic and Systems Biology, University of Edinburgh, Edinburgh, United Kingdom
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Matthew M Crane
- SynthSys - Synthetic and Systems Biology, University of Edinburgh, Edinburgh, United Kingdom
- School of Biological Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Luis F Montano-Gutierrez
- SynthSys - Synthetic and Systems Biology, University of Edinburgh, Edinburgh, United Kingdom
- School of Biological Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Reiko J Tanaka
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Margaritis Voliotis
- Department of Mathematics and Living Systems Institute, College of Engineering, Mathematics, and Physical Sciences, University of Exeter, Exeter, United Kingdom
- Peter S Swain
- SynthSys - Synthetic and Systems Biology, University of Edinburgh, Edinburgh, United Kingdom
- School of Biological Sciences, University of Edinburgh, Edinburgh, United Kingdom

43
Ince RA, Giordano BL, Kayser C, Rousselet GA, Gross J, Schyns PG. A statistical framework for neuroimaging data analysis based on mutual information estimated via a gaussian copula. Hum Brain Mapp 2017; 38:1541-1573. [PMID: 27860095 PMCID: PMC5324576 DOI: 10.1002/hbm.23471] [Citation(s) in RCA: 155] [Impact Index Per Article: 19.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2016] [Revised: 10/25/2016] [Accepted: 11/07/2016] [Indexed: 12/17/2022] Open
Abstract
We begin by reviewing the statistical framework of information theory as applicable to neuroimaging data analysis. A major factor hindering wider adoption of this framework in neuroimaging is the difficulty of estimating information theoretic quantities in practice. We present a novel estimation technique that combines the statistical theory of copulas with the closed form solution for the entropy of Gaussian variables. This results in a general, computationally efficient, flexible, and robust multivariate statistical framework that provides effect sizes on a common meaningful scale, allows for unified treatment of discrete, continuous, unidimensional and multidimensional variables, and enables direct comparisons of representations from behavioral and brain responses across any recording modality. We validate the use of this estimate as a statistical test within a neuroimaging context, considering both discrete stimulus classes and continuous stimulus features. We also present examples of analyses facilitated by these developments, including application of multivariate analyses to MEG planar magnetic field gradients, and pairwise temporal interactions in evoked EEG responses. We show the benefit of considering the instantaneous temporal derivative together with the raw values of M/EEG signals as a multivariate response, how we can separately quantify modulations of amplitude and direction for vector quantities, and how we can measure the emergence of novel information over time in evoked responses. Open-source Matlab and Python code implementing the new methods accompanies this article. Hum Brain Mapp 38:1541-1573, 2017. © 2016 Wiley Periodicals, Inc.
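For two scalar variables, the proposed estimator reduces to rank-transforming each margin to a standard normal and applying the closed-form Gaussian result. A minimal sketch of the idea (an illustration, not the open-source MATLAB/Python code that accompanies the article):

```python
import numpy as np
from scipy.stats import norm

def copula_normalize(x):
    """Map a sample to standard-normal marginals via its empirical ranks."""
    ranks = np.argsort(np.argsort(x)) + 1.0
    return norm.ppf(ranks / (len(x) + 1))

def gcmi(x, y):
    """Gaussian-copula MI lower bound (bits) between two scalar variables."""
    r = np.corrcoef(copula_normalize(x), copula_normalize(y))[0, 1]
    return -0.5 * np.log2(1.0 - r ** 2)

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
print(gcmi(x, x + rng.standard_normal(5000)))   # dependent pair: clearly positive
print(gcmi(x, rng.standard_normal(5000)))       # independent pair: near zero
```

Because only ranks enter, the estimate is invariant to monotone transformations of each variable, which is what makes it robust across recording modalities; it is a lower bound on the true mutual information.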
Collapse
Affiliation(s)
- Robin A.A. Ince
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
| | - Bruno L. Giordano
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
| | - Christoph Kayser
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
| | | | - Joachim Gross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
| | - Philippe G. Schyns
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
| |
Collapse
|
44
|
Abstract
A fundamental problem in neuroscience is understanding how sequences of action potentials ("spikes") encode information about sensory signals and motor outputs. Although traditional theories assume that this information is conveyed by the total number of spikes fired within a specified time interval (spike rate), recent studies have shown that additional information is carried by the millisecond-scale timing patterns of action potentials (spike timing). However, it is unknown whether or how subtle differences in spike timing drive differences in perception or behavior, leaving it unclear whether the information in spike timing actually plays a role in brain function. By examining the activity of individual motor units (the muscle fibers innervated by a single motor neuron) and manipulating patterns of activation of these neurons, we provide both correlative and causal evidence that the nervous system uses millisecond-scale variations in the timing of spikes within multispike patterns to control a vertebrate behavior, namely respiration in the Bengalese finch, a songbird. These findings suggest that a fundamental assumption of current theories of motor coding requires revision.
Collapse
|
45
|
Marshall N, Timme NM, Bennett N, Ripp M, Lautzenhiser E, Beggs JM. Analysis of Power Laws, Shape Collapses, and Neural Complexity: New Techniques and MATLAB Support via the NCC Toolbox. Front Physiol 2016; 7:250. [PMID: 27445842 PMCID: PMC4921690 DOI: 10.3389/fphys.2016.00250] [Citation(s) in RCA: 60] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2016] [Accepted: 06/08/2016] [Indexed: 11/13/2022] Open
Abstract
Neural systems include interactions that occur across many scales. Two divergent methods for characterizing such interactions have drawn on the physical analysis of critical phenomena and the mathematical study of information. Inferring criticality in neural systems has traditionally rested on fitting power laws to the property distributions of "neural avalanches" (contiguous bursts of activity), but the fractal nature of avalanche shapes has recently emerged as another signature of criticality. On the other hand, neural complexity, an information theoretic measure, has been used to capture the interplay between the functional localization of brain regions and their integration for higher cognitive functions. Unfortunately, treatments of all three methods (power-law fitting, avalanche shape collapse, and neural complexity) have suffered from shortcomings. Empirical data often contain biases that introduce deviations from a true power law in the tail and head of the distribution, but deviations in the tail have often gone unaddressed; avalanche shape collapse has required manual parameter tuning; and the estimation of neural complexity has relied on small data sets or statistical assumptions for the sake of computational efficiency. In this paper we present technical advancements in the analysis of criticality and complexity in neural systems. We use maximum-likelihood estimation to automatically fit power laws with left and right cutoffs, present the first automated shape collapse algorithm, and describe new techniques to account for large numbers of neural variables and small data sets in the calculation of neural complexity. In order to facilitate future research in criticality and complexity, we have made the software utilized in this analysis freely available online in the MATLAB NCC (Neural Complexity and Criticality) Toolbox.
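The doubly truncated maximum-likelihood fit the abstract mentions can be sketched compactly. The version below treats both cutoffs as given and grid-searches the exponent (the NCC Toolbox itself is in MATLAB and also selects the cutoffs; the function name and grid are illustrative assumptions):

```python
import numpy as np

def fit_powerlaw_mle(data, xmin, xmax, alphas=None):
    """Maximum-likelihood exponent for a discrete power law
    p(x) ~ x**(-alpha) restricted to the support [xmin, xmax],
    found by grid search over candidate exponents."""
    if alphas is None:
        alphas = np.arange(1.01, 4.0, 0.001)
    x = np.asarray(data, dtype=float)
    x = x[(x >= xmin) & (x <= xmax)]       # enforce both cutoffs
    support = np.arange(xmin, xmax + 1, dtype=float)
    # negative log-likelihood: alpha * sum(log x) + n * log Z(alpha),
    # where Z(alpha) normalizes the truncated power law
    log_z = np.array([np.log(np.sum(support ** -a)) for a in alphas])
    nll = alphas * np.log(x).sum() + len(x) * log_z
    return alphas[np.argmin(nll)]
```

Fitting with an explicit right cutoff is what lets the method absorb the tail deviations (e.g., from finite system size) that a head-cutoff-only fit would misattribute to the exponent.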
Collapse
Affiliation(s)
- Najja Marshall
- Department of Neuroscience, Columbia University, New York, NY, USA
| | - Nicholas M Timme
- Department of Psychology, Indiana University - Purdue University Indianapolis, Indianapolis, IN, USA
| | | | - Monica Ripp
- Department of Physics, Syracuse University, Syracuse, NY, USA
| | | | - John M Beggs
- Department of Physics, Indiana University, Bloomington, IN, USA; Biocomplexity Institute, Indiana University, Bloomington, IN, USA
| |
Collapse
|
47
|
Mora T, Walczak AM. Rényi entropy, abundance distribution, and the equivalence of ensembles. Phys Rev E 2016; 93:052418. [PMID: 27300934 DOI: 10.1103/physreve.93.052418] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2016] [Indexed: 01/09/2023]
Abstract
Distributions of abundances or frequencies play an important role in many fields of science, from biology to sociology, as does the Rényi entropy, which measures the diversity of a statistical ensemble. We derive a mathematical relation between the abundance distribution and the Rényi entropy, by analogy with the equivalence of ensembles in thermodynamics. The abundance distribution is mapped onto the density of states, and the Rényi entropy to the free energy. The two quantities are related in the thermodynamic limit by a Legendre transform, by virtue of the equivalence between the micro-canonical and canonical ensembles. In this limit, we show how the Rényi entropy can be constructed geometrically from rank-frequency plots. This mapping predicts that non-concave regions of the rank-frequency curve should result in kinks in the Rényi entropy as a function of its order. We illustrate our results on simple examples, and emphasize the limitations of the equivalence of ensembles when a thermodynamic limit is not well defined. Our results help choose reliable diversity measures based on the experimental accuracy of the abundance distributions in particular frequency ranges.
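The Rényi entropy at the center of this analysis is computed directly from an abundance distribution. A minimal sketch (function name ours; the Shannon case is taken as the order-1 limit):

```python
import numpy as np

def renyi_entropy(p, beta):
    """Renyi entropy (in nats) of order beta for an abundance/frequency
    distribution p; the beta -> 1 limit recovers the Shannon entropy."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    p = p / p.sum()
    if np.isclose(beta, 1.0):
        return -np.sum(p * np.log(p))       # Shannon limit
    return np.log(np.sum(p ** beta)) / (1.0 - beta)
```

For a uniform distribution over K states every order gives log K, while for non-uniform distributions the entropy decreases with increasing order, weighting rare species heavily at low orders and common species at high orders. This order dependence is exactly what the rank-frequency construction in the abstract recovers geometrically.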
Collapse
Affiliation(s)
- Thierry Mora
- Laboratoire de physique statistique, CNRS, UPMC and École normale supérieure, 24, rue Lhomond, Paris, France
| | - Aleksandra M Walczak
- Laboratoire de physique théorique, CNRS, UPMC and École normale supérieure, 24, rue Lhomond, Paris, France
| |
Collapse
|
48
|
Characterizing Protease Specificity: How Many Substrates Do We Need? PLoS One 2015; 10:e0142658. [PMID: 26559682 PMCID: PMC4641643 DOI: 10.1371/journal.pone.0142658] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2015] [Accepted: 10/26/2015] [Indexed: 12/26/2022] Open
Abstract
Calculation of cleavage entropies makes it possible to quantify, map, and compare protease substrate specificity using an information-entropy-based approach. The metric intrinsically depends on the number of experimentally determined substrates (data points). Thus, a statistical analysis of its numerical stability is crucial to estimate the systematic error made by estimating specificity from a limited number of substrates. In this contribution, we show the mathematical basis for estimating the uncertainty in cleavage entropies. Sets of cleavage entropies are calculated using experimental cleavage data and modeled extreme cases. By analyzing the underlying mathematics and applying statistical tools, the metric was found to depend linearly on 1/n. This allows us to extrapolate the values to an infinite number of samples and to estimate the errors. Analyzing the errors, a minimum of 30 substrates was found to be necessary to characterize the substrate specificity of a protease (S4-S4’), in terms of amino acid variability, with an uncertainty of 5 percent. Therefore, we encourage experimental researchers in the protease field to record specificity profiles of novel proteases, aiming to identify at least 30 peptide substrates of maximum sequence diversity. We expect a full characterization of protease specificity to be helpful for rationalizing the biological functions of proteases and for assisting rational drug design.
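The two ingredients of this analysis, a positional entropy and the linear-in-1/n extrapolation, can be sketched as follows (function names, alphabet, and fitting scheme are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np
from collections import Counter

def position_entropy(residues):
    """Shannon entropy (bits) of the amino-acid distribution observed at a
    single substrate position; lower entropy means higher specificity."""
    counts = np.array(list(Counter(residues).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def extrapolate_entropy(sample_sizes, entropies):
    """Fit H(n) = H_inf + c / n by least squares and return the intercept
    H_inf, i.e. the extrapolation to an infinite number of substrates."""
    slope, intercept = np.polyfit(
        1.0 / np.asarray(sample_sizes, dtype=float),
        np.asarray(entropies, dtype=float), 1)
    return float(intercept)
```

With few substrates the plug-in entropy is biased downward (rare residues go unobserved), which is why the finite-sample values fall below the extrapolated intercept and why a minimum substrate count is needed for a stated accuracy.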
Collapse
|
49
|
Izquierdo EJ, Williams PL, Beer RD. Information Flow through a Model of the C. elegans Klinotaxis Circuit. PLoS One 2015; 10:e0140397. [PMID: 26465883 PMCID: PMC4605772 DOI: 10.1371/journal.pone.0140397] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2015] [Accepted: 09/24/2015] [Indexed: 11/29/2022] Open
Abstract
Understanding how information about external stimuli is transformed into behavior is one of the central goals of neuroscience. Here we characterize the information flow through a complete sensorimotor circuit: from stimulus, to sensory neurons, to interneurons, to motor neurons, to muscles, to motion. Specifically, we apply a recently developed framework for quantifying information flow to a previously published ensemble of models of salt klinotaxis in the nematode worm Caenorhabditis elegans. Despite large variations in the neural parameters of individual circuits, we found that the overall information flow architecture of the circuit is remarkably consistent across the ensemble. This suggests that structural connectivity is not necessarily predictive of effective connectivity. It also suggests that information flow analysis captures general principles of operation for the klinotaxis circuit. In addition, information flow analysis reveals several key principles underlying how the models operate: (1) Interneuron class AIY is responsible for integrating information about positive and negative changes in concentration, and exhibits a strong left/right information asymmetry. (2) Gap junctions play a crucial role in the transfer of information responsible for the information symmetry observed in interneuron class AIZ. (3) Neck motor neuron class SMB implements an information gating mechanism that underlies the circuit’s state-dependent response. (4) The neck carries more information about small changes in concentration than about large ones, and more information about positive changes in concentration than about negative ones. Thus, not all directions of movement are equally informative for the worm. Each of these findings corresponds to hypotheses that could potentially be tested in the worm. Knowing the results of these experiments would greatly refine our understanding of the neural circuit underlying klinotaxis.
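The flavor of tracing information from one stage of a circuit to another can be illustrated with a much simpler stand-in than the multivariate framework applied in the paper: time-lagged mutual information between two quantized signals (the function name, binning scheme, and test signals below are our own assumptions):

```python
import numpy as np

def lagged_mi_bits(source, target, lag, bins=8):
    """Plug-in mutual information (bits) between a source signal and the
    target signal `lag` samples later, after quantizing each signal into
    `bins` equal-occupancy levels. A crude pairwise stand-in for the
    information-flow measures used in the paper."""
    s = source[:-lag] if lag > 0 else source
    t = target[lag:] if lag > 0 else target
    edges = np.linspace(0, 1, bins + 1)[1:-1]
    qs = np.digitize(s, np.quantile(s, edges))
    qt = np.digitize(t, np.quantile(t, edges))
    joint, _, _ = np.histogram2d(qs, qt, bins=bins)
    p = joint / joint.sum()
    ps = p.sum(axis=1, keepdims=True)       # marginal of source
    pt = p.sum(axis=0, keepdims=True)       # marginal of target
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (ps @ pt)[nz])))
```

Scanning the lag at which such a measure peaks gives a first, pairwise picture of how information propagates from stage to stage; the framework used in the paper goes further by decomposing multivariate contributions.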
Collapse
Affiliation(s)
- Eduardo J. Izquierdo
- Cognitive Science Program, Indiana University, Bloomington, Indiana, United States of America
- School of Informatics and Computing, Indiana University, Bloomington, Indiana, United States of America
| | - Paul L. Williams
- Cognitive Science Program, Indiana University, Bloomington, Indiana, United States of America
| | - Randall D. Beer
- Cognitive Science Program, Indiana University, Bloomington, Indiana, United States of America
- School of Informatics and Computing, Indiana University, Bloomington, Indiana, United States of America
| |
Collapse
|
50
|
Marzen SE, DeWeese MR, Crutchfield JP. Time resolution dependence of information measures for spiking neurons: scaling and universality. Front Comput Neurosci 2015; 9:105. [PMID: 26379538 PMCID: PMC4551861 DOI: 10.3389/fncom.2015.00105] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2015] [Accepted: 07/30/2015] [Indexed: 11/15/2022] Open
Abstract
The mutual information between stimulus and spike-train response is commonly used to monitor neural coding efficiency, but neuronal computation broadly conceived requires more refined and targeted information measures of input-output joint processes. A first step toward that larger goal is to develop information measures for individual output processes, including information generation (entropy rate), stored information (statistical complexity), predictable information (excess entropy), and active information accumulation (bound information rate). We calculate these for spike trains generated by a variety of noise-driven integrate-and-fire neurons as a function of time resolution and for alternating renewal processes. We show that their time-resolution dependence reveals coarse-grained structural properties of interspike interval statistics; e.g., τ-entropy rates that diverge less quickly than the firing rate indicated by interspike interval correlations. We also find evidence that the excess entropy and regularized statistical complexity of different types of integrate-and-fire neurons are universal in the continuous-time limit in the sense that they do not depend on mechanism details. This suggests a surprising simplicity in the spike trains generated by these model neurons. Interestingly, neurons with gamma-distributed ISIs and neurons whose spike trains are alternating renewal processes do not fall into the same universality class. These results lead to two conclusions. First, the dependence of information measures on time resolution reveals mechanistic details about spike train generation. Second, information measures can be used as model selection tools for analyzing spike train processes.
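The time-resolution dependence at the heart of this analysis is easy to reproduce in miniature: discretize a spike train at bin width τ and watch the plug-in entropy rate grow as τ shrinks. The sketch below uses single-bin (Bernoulli) words and a Poisson test train as simplifying assumptions; it is not one of the paper's estimators:

```python
import numpy as np

def binned_entropy_rate(spike_times, tau, t_max):
    """Entropy rate (bits per second) of a spike train discretized at bin
    width tau, from the plug-in entropy of single-bin occupancy. For small
    tau this grows without bound, illustrating the resolution dependence."""
    bins = np.floor(np.asarray(spike_times) / tau).astype(int)
    n_bins = int(np.ceil(t_max / tau))
    occupied = np.zeros(n_bins, dtype=bool)
    occupied[bins[bins < n_bins]] = True
    p = occupied.mean()                     # probability a bin has a spike
    if p in (0.0, 1.0):
        return 0.0
    h = -p * np.log2(p) - (1 - p) * np.log2(1 - p)   # bits per bin
    return h / tau                          # bits per second
```

Comparing how such curves scale with τ across neuron models, rather than their values at any single resolution, is what allows the universality claims in the abstract to be tested.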
Collapse
Affiliation(s)
- Sarah E. Marzen
- Department of Physics, University of California, Berkeley, Berkeley, CA, USA
| | - Michael R. DeWeese
- Department of Physics, University of California, Berkeley, Berkeley, CA, USA
- Helen Wills Neuroscience Institute and Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA, USA
| | - James P. Crutchfield
- Complexity Sciences Center and Department of Physics, University of California, Davis, Davis, CA, USA
| |
Collapse
|