1
Barnett L, Seth AK. Dynamical independence: Discovering emergent macroscopic processes in complex dynamical systems. Phys Rev E 2023;108:014304. PMID: 37583178. DOI: 10.1103/physreve.108.014304.
Abstract
We introduce a notion of emergence for macroscopic variables associated with highly multivariate microscopic dynamical processes. Dynamical independence instantiates the intuition of an emergent macroscopic process as one possessing the characteristics of a dynamical system "in its own right," with its own dynamical laws distinct from those of the underlying microscopic dynamics. We quantify (departure from) dynamical independence by a transformation-invariant Shannon information-based measure of dynamical dependence. We emphasize the data-driven discovery of dynamically independent macroscopic variables, and introduce the idea of a multiscale "emergence portrait" for complex systems. We show how dynamical dependence may be computed explicitly for linear systems in both time and frequency domains, facilitating discovery of emergent phenomena across spatiotemporal scales, and outline application of the linear operationalization to inference of emergence portraits for neural systems from neurophysiological time-series data. We discuss dynamical independence for discrete- and continuous-time deterministic dynamics, with potential application to Hamiltonian mechanics and classical complex systems such as flocking and cellular automata.
Affiliation(s)
- L Barnett
- Sussex Centre for Consciousness Science, Department of Informatics, University of Sussex, Falmer, Brighton BN1 9QJ, United Kingdom
- A K Seth
- Sussex Centre for Consciousness Science, Department of Informatics, University of Sussex, Falmer, Brighton BN1 9QJ, United Kingdom
- Canadian Institute for Advanced Research, Program on Brain, Mind, and Consciousness, Toronto, Ontario M5G 1M1, Canada
2
Fagerholm ED, Dezhina Z, Moran RJ, Turkheimer FE, Leech R. A primer on entropy in neuroscience. Neurosci Biobehav Rev 2023;146:105070. PMID: 36736445. DOI: 10.1016/j.neubiorev.2023.105070.
Abstract
Entropy is not just a property of a system - it is a property of a system and an observer. Specifically, entropy is a measure of the amount of hidden information in a system that arises due to an observer's limitations. Here we provide an account of entropy from first principles in statistical mechanics with the aid of toy models of neural systems. In particular, we describe the distinction between microstates and macrostates in the context of simplified binary-state neurons and the characteristics of entropy required to capture an associated measure of hidden information. We discuss the origin of the mathematical form of entropy via the indistinguishable re-arrangements of discrete-state neurons and show the way in which the arguments are extended into a phase space description for continuous large-scale neural systems. Finally, we show the ways in which limitations in neuroimaging resolution, as represented by coarse-graining operations in phase space, lead to an increase in entropy over time, as per the second law of thermodynamics. It is our hope that this primer will support the increasing number of studies that use entropy as a way of characterising neuroimaging time series and of making inferences about brain states.
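The counting argument summarized above can be sketched in a few lines. This is an illustrative toy (our construction, not the paper's code): the macrostate is the number of active neurons k out of N, the microstates are the C(N, k) indistinguishable re-arrangements compatible with it, and the Boltzmann entropy is the log of that count.

```python
# Toy sketch: Boltzmann entropy of N binary-state neurons, where the
# observer sees only the macrostate k (number of active neurons) and
# the hidden information is the number of compatible microstates.
from math import comb, log

N = 10  # number of binary neurons (arbitrary illustrative choice)

def boltzmann_entropy(k: int, n: int = N) -> float:
    """S = ln(Omega), with Omega = number of microstates for macrostate k."""
    return log(comb(n, k))

entropies = [boltzmann_entropy(k) for k in range(N + 1)]

# Entropy vanishes for the fully specified macrostates (k = 0 or k = N)
# and peaks at k = N/2, where the observer's ignorance is greatest.
k_max = max(range(N + 1), key=lambda k: entropies[k])
print(k_max)  # 5
```

The peak at k = N/2 is the discrete analogue of the phase-space coarse-graining argument: the coarser the observer's description, the more microstates per macrostate, and the larger the entropy.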
Affiliation(s)
- Erik D Fagerholm
- Department of Neuroimaging, King's College London, United Kingdom.
- Zalina Dezhina
- Department of Neuroimaging, King's College London, United Kingdom
- Rosalyn J Moran
- Department of Neuroimaging, King's College London, United Kingdom
- Robert Leech
- Department of Neuroimaging, King's College London, United Kingdom
3
Kathpalia A, Nagaraj N. Granger causality for compressively sensed sparse signals. Phys Rev E 2023;107:034308. PMID: 37072975. DOI: 10.1103/physreve.107.034308.
Abstract
Compressed sensing is a scheme that allows sparse signals to be acquired, transmitted, and stored using far fewer measurements than conventional Nyquist-rate sampling requires. Since many naturally occurring signals are sparse (in some domain), compressed sensing has rapidly gained popularity in a number of applied physics and engineering applications, particularly in designing signal and image acquisition strategies, e.g., magnetic resonance imaging, quantum state tomography, scanning tunneling microscopy, and analog-to-digital conversion technologies. Contemporaneously, causal inference has become an important tool for the analysis and understanding of processes and their interactions in many disciplines of science, especially those dealing with complex systems. Direct causal analysis of compressively sensed data is desirable because it avoids the task of reconstructing the compressed data. Also, for some sparse signals, such as sparse temporal data, it may be difficult to discover causal relations directly using available data-driven or model-free causality estimation techniques. In this work, we provide a mathematical proof that structured compressed sensing matrices, specifically circulant and Toeplitz, preserve causal relationships in the compressed signal domain, as measured by Granger causality (GC). We then verify this theorem on a number of bivariate and multivariate coupled sparse signal simulations which are compressed using these matrices. We also demonstrate a real-world application: network causal connectivity estimation from sparse neural spike train recordings from rat prefrontal cortex. In addition to demonstrating the effectiveness of structured matrices for GC estimation from sparse signals, we also show a computational time advantage of the proposed strategy for causal inference from compressed signals of both sparse and regular autoregressive processes as compared to standard GC estimation from the original signals.
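The preservation claim can be illustrated numerically. The sketch below is our own toy (not the paper's proof or its sparse-signal setting, and all parameter values are arbitrary): a coupled bivariate AR process in which x drives y, an order-1 least-squares GC estimate, and a circulant measurement matrix applied identically to both channels.

```python
# Illustrative sketch: Granger causality (GC) on a coupled AR pair
# before and after compression with the SAME circulant matrix.
import numpy as np

rng = np.random.default_rng(0)
T = 2000
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):  # x Granger-causes y, not vice versa
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.standard_normal()

def gc_1(source, target):
    """Order-1 GC: ln(restricted residual var / full residual var)."""
    tgt, own, src = target[1:], target[:-1], source[:-1]
    A = np.column_stack([own, np.ones_like(own)])        # own past only
    res_r = tgt - A @ np.linalg.lstsq(A, tgt, rcond=None)[0]
    B = np.column_stack([own, src, np.ones_like(own)])   # own + source past
    res_f = tgt - B @ np.linalg.lstsq(B, tgt, rcond=None)[0]
    return np.log(res_r.var() / res_f.var())

# m rows of a circulant matrix built from one random vector, applied
# identically to both signals (the structured-matrix condition).
m = 500
c = rng.standard_normal(T)
C = np.stack([np.roll(c, i) for i in range(m)])  # m x T circulant block
xc, yc = C @ x, C @ y

print(gc_1(x, y), gc_1(y, x))      # forward dominates
print(gc_1(xc, yc), gc_1(yc, xc))  # dominance survives compression
```

Because circulant matrices commute with cyclic shifts, the compressed pair obeys (up to boundary terms) the same lag-1 regression as the original, which is the intuition behind the preserved causal asymmetry.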
Affiliation(s)
- Aditi Kathpalia
- Department of Complex Systems, Institute of Computer Science of the Czech Academy of Sciences, Prague 18200, Czech Republic
- Nithin Nagaraj
- Consciousness Studies Programme, National Institute of Advanced Studies, Bengaluru 560012, India
4
Sowinski DR, Carroll-Nellenback J, DeSilva J, Frank A, Ghoshal G, Gleiser M. The Consensus Problem in Polities of Agents with Dissimilar Cognitive Architectures. Entropy (Basel) 2022;24:1378. PMID: 37420398. DOI: 10.3390/e24101378.
Abstract
Agents interacting with their environments, machine or otherwise, arrive at decisions based on their incomplete access to data and their particular cognitive architecture, including data sampling frequency and memory storage limitations. In particular, the same data streams, sampled and stored differently, may cause agents to arrive at different conclusions and to take different actions. This phenomenon has a drastic impact on polities: populations of agents predicated on the sharing of information. We show that, even under ideal conditions, polities consisting of epistemic agents with heterogeneous cognitive architectures might not achieve consensus concerning what conclusions to draw from data streams. Transfer entropy applied to a toy model of a polity is analyzed to showcase this effect when the dynamics of the environment is known. As an illustration where the dynamics is not known, we examine empirical data streams relevant to climate and show how the consensus problem manifests.
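The effect of sampling frequency on apparent information flow can be sketched with a plug-in transfer entropy on binary sequences. This is our own toy (not the paper's polity model): two observers watch the same coupled pair of streams, one at every step and one at every second step, and compute different TE values.

```python
# Toy sketch: plug-in transfer entropy TE_{X->Y} (history length 1,
# in bits) for binary sequences, showing that subsampling the same
# streams changes the apparent information flow.
from collections import Counter
import numpy as np

def transfer_entropy(x, y):
    """Plug-in TE_{X->Y} = sum p(y1,y0,x0) log2[p(y1|y0,x0)/p(y1|y0)]."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    singles_y = Counter(y[:-1])
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_cond_full = c / pairs_yx[(y0, x0)]
        p_cond_own = pairs_yy[(y1, y0)] / singles_y[y0]
        te += (c / n) * np.log2(p_cond_full / p_cond_own)
    return te

rng = np.random.default_rng(1)
T = 20000
x = rng.integers(0, 2, T)
y = np.empty(T, dtype=int)
y[0] = 0
flips = rng.random(T) < 0.1
y[1:] = np.where(flips[1:], 1 - x[:-1], x[:-1])  # y copies x's past, 10% flips

te_full = transfer_entropy(x, y)            # observer sampling every step
te_sub = transfer_entropy(x[::2], y[::2])   # observer sampling every 2nd step
print(round(te_full, 2), round(te_sub, 2))
```

The slower observer misses the one-step coupling lag entirely, so the two observers would draw different conclusions about whether x influences y at all.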
Affiliation(s)
- Jeremy DeSilva
- Department of Anthropology, Dartmouth College, Hanover, NH 03755, USA
- Adam Frank
- Department of Physics and Astronomy, University of Rochester, Rochester, NY 14627, USA
- Gourab Ghoshal
- Department of Physics and Astronomy, University of Rochester, Rochester, NY 14627, USA
- Marcelo Gleiser
- Department of Physics and Astronomy, Dartmouth College, Hanover, NH 03755, USA
5
Kim SH, Woo J, Choi K, Choi M, Han K. Neural Information Processing and Computations of Two-Input Synapses. Neural Comput 2022;34:2102-2131. PMID: 36027799. DOI: 10.1162/neco_a_01534.
Abstract
Information processing in artificial neural networks is largely dependent on the nature of the neuron models used. While commonly used models are designed for linear integration of synaptic inputs, accumulating experimental evidence suggests that biological neurons are capable of nonlinear computations over many converging synaptic inputs via homo- and heterosynaptic mechanisms. This nonlinear neuronal computation may play an important role in complex information processing at the neural circuit level. Here we characterize the dynamics and coding properties of neuron models receiving synaptic transmissions delivered from two hidden states. The neuronal information processing is influenced by the cooperative and competitive interactions among synapses and the coherence of the hidden states. Furthermore, we demonstrate that neuronal information processing under two-input synaptic transmission can be mapped to the linearly nonseparable XOR operation as well as basic AND/OR operations. In particular, mixtures of linear and nonlinear neuron models outperform networks consisting of only one neuron type on the Fashion-MNIST test. This study provides a computational framework for assessing the information processing of neuron and synapse models that may be beneficial for the design of brain-inspired artificial intelligence algorithms and neuromorphic systems.
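The XOR point can be made concrete in a few lines. This is our own minimal construction (not the paper's neuron model): a single linear threshold unit on two synaptic inputs cannot realize XOR for any weights, while one multiplicative (nonlinear) interaction term suffices.

```python
# Minimal sketch: XOR is linearly nonseparable for a single threshold
# neuron on two inputs, but solvable with a multiplicative term.
from itertools import product

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
xor = [0, 1, 1, 0]

def fires_as_xor(w1, w2, bias):
    """True if the linear threshold unit reproduces XOR on all inputs."""
    return all(int(w1 * a + w2 * b + bias > 0) == t
               for (a, b), t in zip(inputs, xor))

# Brute-force a grid of linear weights: none reproduces XOR.
grid = [i / 4 for i in range(-8, 9)]
linear_solves = any(fires_as_xor(w1, w2, b)
                    for w1, w2, b in product(grid, repeat=3))

# One multiplicative synapse interaction is enough:
# a + b - 2ab is exactly XOR on {0, 1}.
nonlinear = [int(a + b - 2 * a * b > 0.5) for a, b in inputs]

print(linear_solves, nonlinear)  # False [0, 1, 1, 0]
```

The grid search only illustrates the (provable) impossibility; the multiplicative term stands in for the homo-/heterosynaptic nonlinearities the abstract refers to.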
Affiliation(s)
- Soon Ho Kim
- Laboratory of Computational Neurophysics, Convergence Research Center for Brain Science, Brain Science Institute, Korea Institute of Science and Technology, Seoul 02792, South Korea
- Junhyuk Woo
- Laboratory of Computational Neurophysics, Convergence Research Center for Brain Science, Brain Science Institute, Korea Institute of Science and Technology, Seoul 02792, South Korea
- Kiri Choi
- School of Computational Sciences, Korea Institute for Advanced Study, Seoul 02455, South Korea
- MooYoung Choi
- Department of Physics and Astronomy and Center for Theoretical Physics, Seoul National University, Seoul 08826, South Korea
- Kyungreem Han
- Laboratory of Computational Neurophysics, Convergence Research Center for Brain Science, Brain Science Institute, Korea Institute of Science and Technology, Seoul 02792, South Korea
6
Sattari S, Basak US, James RG, Perrin LW, Crutchfield JP, Komatsuzaki T. Modes of information flow in collective cohesion. Sci Adv 2022;8:eabj1720. PMID: 35138896. PMCID: PMC8827646. DOI: 10.1126/sciadv.abj1720.
Abstract
Pairwise interactions are fundamental drivers of collective behavior, responsible for group cohesion. The abiding question is how each individual influences the collective. However, time-delayed mutual information and transfer entropy, commonly used to quantify mutual influence in aggregated individuals, can result in misleading interpretations. Here, we show that these information measures have substantial pitfalls in measuring information flow between agents from their trajectories. We decompose the information measures into three distinct modes of information flow to expose the role of individual and group memory in collective behavior. We find that the decomposed information modes between a single pair of agents reveal the nature of mutual influence involving many-body nonadditive interactions, without conditioning on additional agents. The pairwise decomposed modes of information flow facilitate an improved diagnosis of mutual influence in collectives.
Affiliation(s)
- Sulimon Sattari
- Research Center of Mathematics for Social Creativity, Research Institute for Electronic Science, Hokkaido University Kita 20, Nishi 10, Kita-ku, Sapporo, Hokkaido 001-0020, Japan
- Udoy S. Basak
- Research Center of Mathematics for Social Creativity, Research Institute for Electronic Science, Hokkaido University Kita 20, Nishi 10, Kita-ku, Sapporo, Hokkaido 001-0020, Japan
- Pabna University of Science and Technology, Pabna 6600, Bangladesh
- Ryan G. James
- Reddit Inc., 420 Taylor Street, San Francisco, CA 94102, USA
- Department of Physics, Complexity Sciences Center, University of California, Davis, Davis, CA 95616, USA
- Louis W. Perrin
- Research Center of Mathematics for Social Creativity, Research Institute for Electronic Science, Hokkaido University Kita 20, Nishi 10, Kita-ku, Sapporo, Hokkaido 001-0020, Japan
- École Normale Supérieure de Rennes, Campus de Ker Lann, Avenue Robert Schuman, 35170 Bruz, France
- James P. Crutchfield
- Department of Physics, Complexity Sciences Center, University of California, Davis, Davis, CA 95616, USA
- Tamiki Komatsuzaki
- Research Center of Mathematics for Social Creativity, Research Institute for Electronic Science, Hokkaido University Kita 20, Nishi 10, Kita-ku, Sapporo, Hokkaido 001-0020, Japan
- Institute for Chemical Reaction Design and Discovery (WPI-ICReDD), Hokkaido University Kita 21 Nishi 10, Kita-ku, Sapporo, Hokkaido 001-0021, Japan
- Graduate School of Chemical Sciences and Engineering Materials Chemistry and Energy Course, Hokkaido University Kita 13, Nishi 8, Kita-ku Sapporo, Hokkaido 060-0812, Japan
7
Dalla Porta L, Castro DM, Copelli M, Carelli PV, Matias FS. Feedforward and feedback influences through distinct frequency bands between two spiking-neuron networks. Phys Rev E 2021;104:054404. PMID: 34942789. DOI: 10.1103/physreve.104.054404.
Abstract
Several studies of brain signals have suggested that bottom-up and top-down influences are exerted through distinct frequency bands among visual cortical areas. It was recently shown in primates that theta and gamma rhythms subserve feedforward influences, whereas the feedback influence is dominated by the alpha-beta rhythm. A few theoretical models for reproducing these effects have been proposed so far. Here we show that a simple but biophysically plausible two-network motif composed of spiking-neuron models and chemical synapses can exhibit feedforward and feedback influences through distinct frequency bands. Unlike previous studies, this kind of model allows us to study directed influences not only at the population level, by using a proxy for the local field potential, but also at the cellular level, by using the neuronal spiking series.
Affiliation(s)
- Leonardo Dalla Porta
- Systems Neuroscience, Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona 08036, Spain
- Daniel M Castro
- Departamento de Física, Universidade Federal de Pernambuco, Recife PE 50670-901, Brazil
- Mauro Copelli
- Departamento de Física, Universidade Federal de Pernambuco, Recife PE 50670-901, Brazil
- Pedro V Carelli
- Departamento de Física, Universidade Federal de Pernambuco, Recife PE 50670-901, Brazil
- Fernanda S Matias
- Instituto de Física, Universidade Federal de Alagoas, Maceió, Alagoas 57072-970, Brazil
8
Mijatovic G, Antonacci Y, Faes L. Measuring the Rate of Information Transfer in Point-Process Data: Application to Cardiovascular Interactions. Annu Int Conf IEEE Eng Med Biol Soc 2021;2021:341-344. PMID: 34891305. DOI: 10.1109/embc46164.2021.9629688.
Abstract
We present the application to cardiovascular variability of a method for the information-theoretic estimation of directed interactions between event-based data streams. The method computes the transfer entropy rate (TER) from a source to a target point process in continuous time, thus overcoming the severe limitations associated with time discretization of event-based processes. In this work, the method is evaluated on coupled cardiovascular point processes representing the heartbeat dynamics and the related peripheral pulsation, first using a physiologically based simulation model and then studying real point-process data from healthy subjects monitored at rest and during postural stress. Our results document the ability of TER to detect the direction and strength of the interactions between cardiovascular processes, also highlighting physiologically plausible interaction mechanisms.
9
Mijatovic G, Antonacci Y, Loncar-Turukalo T, Minati L, Faes L. An Information-Theoretic Framework to Measure the Dynamic Interaction Between Neural Spike Trains. IEEE Trans Biomed Eng 2021;68:3471-3481. PMID: 33872139. DOI: 10.1109/tbme.2021.3073833.
Abstract
OBJECTIVE: While understanding the interaction patterns among simultaneous recordings of spike trains from multiple neuronal units is a key topic in neuroscience, existing methods either do not consider the inherent point-process nature of spike trains or are based on parametric assumptions. This work presents an information-theoretic framework for the model-free, continuous-time estimation of both undirected (symmetric) and directed (Granger-causal) interactions between spike trains. METHODS: The framework computes the mutual information rate (MIR) and the transfer entropy rate (TER) for two point processes X and Y, showing that the MIR between X and Y can be decomposed as the sum of the TER along the directions X → Y and Y → X. We present theoretical expressions and introduce strategies to estimate the two measures efficiently through nearest-neighbor statistics. RESULTS: Using simulations of independent and coupled point processes, we show the accuracy of MIR and TER in assessing interactions even for weakly coupled and short realizations, and demonstrate the superiority of continuous-time estimation over the standard discrete-time approach. We also apply the MIR and TER to real-world data, specifically recordings from in-vitro preparations of spontaneously growing cultures of cortical neurons. Using this dataset, we demonstrate the ability of MIR and TER to describe how the functional networks between recording units emerge over the course of the maturation of the neuronal cultures. CONCLUSION AND SIGNIFICANCE: The proposed framework provides principled measures to assess undirected and directed spike train interactions with more efficiency and flexibility than previous discrete-time or parametric approaches, opening new perspectives for the analysis of point-process data in neuroscience and many other fields.
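The decomposition stated in the abstract can be written compactly (notation ours, sketching only the claim as stated, not the paper's derivation):

```latex
% For two point processes X and Y observed over [0, T], the mutual
% information rate (MIR) between their continuous-time histories splits
% into the two directed transfer entropy rates (TER):
\begin{aligned}
  \mathrm{MIR}_{X;Y} &= \lim_{T \to \infty} \frac{1}{T}\,
      I\!\left(X_{[0,T]};\, Y_{[0,T]}\right), \\
  \mathrm{MIR}_{X;Y} &= \mathrm{TER}_{X \to Y} + \mathrm{TER}_{Y \to X}.
\end{aligned}
```

The practical consequence is that estimating the two directed rates also yields the symmetric coupling measure, with no separate estimator needed.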
10
Shorten DP, Spinney RE, Lizier JT. Estimating Transfer Entropy in Continuous Time Between Neural Spike Trains or Other Event-Based Data. PLoS Comput Biol 2021;17:e1008054. PMID: 33872296. PMCID: PMC8084348. DOI: 10.1371/journal.pcbi.1008054.
Abstract
Transfer entropy (TE) is a widely used measure of directed information flows in a number of domains including neuroscience. Many real-world time series for which we are interested in information flows come in the form of (near) instantaneous events occurring over time. Examples include the spiking of biological neurons, trades on stock markets and posts to social media, amongst myriad other systems involving events in continuous time throughout the natural and social sciences. However, there exist severe limitations to the current approach to TE estimation on such event-based data via discretising the time series into time bins: it is not consistent, has high bias, converges slowly and cannot simultaneously capture relationships that occur with very fine time precision as well as those that occur over long time intervals. Building on recent work which derived a theoretical framework for TE in continuous time, we present an estimation framework for TE on event-based data and develop a k-nearest-neighbours estimator within this framework. This estimator is provably consistent, has favourable bias properties and converges orders of magnitude more quickly than the current state-of-the-art in discrete-time estimation on synthetic examples. We demonstrate failures of the traditionally-used source-time-shift method for null surrogate generation. In order to overcome these failures, we develop a local permutation scheme for generating surrogate time series conforming to the appropriate null hypothesis in order to test for the statistical significance of the TE and, as such, test for the conditional independence between the history of one point process and the updates of another. Our approach is shown to be capable of correctly rejecting or accepting the null hypothesis of conditional independence even in the presence of strong pairwise time-directed correlations. 
This capacity to accurately test for conditional independence is further demonstrated on models of a spiking neural circuit inspired by the pyloric circuit of the crustacean stomatogastric ganglion, succeeding where previous related estimators have failed.
Affiliation(s)
- David P. Shorten
- Complex Systems Research Group and Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, Australia
- Richard E. Spinney
- Complex Systems Research Group and Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, Australia
- School of Physics and EMBL Australia Node Single Molecule Science, School of Medical Sciences, The University of New South Wales, Sydney, Australia
- Joseph T. Lizier
- Complex Systems Research Group and Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, Australia
11
Hansen M, Burns A, Monk C, Schutz C, Lizier J, Ramnarine I, Ward A, Krause J. The effect of predation risk on group behaviour and information flow during repeated collective decisions. Anim Behav 2021. DOI: 10.1016/j.anbehav.2021.01.005.
12
Kim J, Jakobsen ST, Natarajan KN, Won KJ. TENET: gene network reconstruction using transfer entropy reveals key regulatory factors from single cell transcriptomic data. Nucleic Acids Res 2021;49:e1. PMID: 33170214. PMCID: PMC7797076. DOI: 10.1093/nar/gkaa1014.
Abstract
Accurate prediction of gene regulatory rules is important for understanding cellular processes. Existing computational algorithms devised for bulk transcriptomics typically require a large number of time points to infer gene regulatory networks (GRNs), are applicable only to small numbers of genes, and fail to detect potential causal relationships effectively. Here, we propose a novel approach, 'TENET', to reconstruct GRNs from single cell RNA sequencing (scRNAseq) datasets. Employing transfer entropy (TE) to measure the strength of causal relationships between genes, TENET predicts large-scale gene regulatory cascades/relationships from scRNAseq data. TENET showed better performance than other GRN reconstructors in identifying key regulators from public datasets. Specifically from scRNAseq, TENET identified key transcription factors in embryonic stem cells (ESCs) and during direct cardiomyocyte reprogramming, where other predictors failed. We further demonstrate that known target genes have significantly higher TE values, and that genes with higher TENET-predicted TE were more influenced by the perturbation of their regulator. Using TENET, we identified and validated Nme2 as a culture-condition-specific stem cell factor. These results indicate that TENET is uniquely capable of identifying key regulators from scRNAseq data.
Affiliation(s)
- Junil Kim
- Biotech Research and Innovation Centre (BRIC), University of Copenhagen, 2200 Copenhagen N, Denmark
- Novo Nordisk Foundation Center for Stem Cell Biology, DanStem, Faculty of Health and Medical Sciences, University of Copenhagen, Ole Maaløes Vej 5, 2200 Copenhagen N, Denmark
- Simon T. Jakobsen
- Functional Genomics and Metabolism Unit, Department of Biochemistry and Molecular Biology, University of Southern Denmark, Denmark
- Kedar N Natarajan
- Functional Genomics and Metabolism Unit, Department of Biochemistry and Molecular Biology, University of Southern Denmark, Denmark
- Danish Institute of Advanced Study (D-IAS), University of Southern Denmark, Denmark
- Kyoung-Jae Won
- Biotech Research and Innovation Centre (BRIC), University of Copenhagen, 2200 Copenhagen N, Denmark
- Novo Nordisk Foundation Center for Stem Cell Biology, DanStem, Faculty of Health and Medical Sciences, University of Copenhagen, Ole Maaløes Vej 5, 2200 Copenhagen N, Denmark
13
Mijatovic G, Pernice R, Perinelli A, Antonacci Y, Busacca A, Javorka M, Ricci L, Faes L. Measuring the Rate of Information Exchange in Point-Process Data With Application to Cardiovascular Variability. Front Netw Physiol 2021;1:765332. PMID: 36925567. PMCID: PMC10013020. DOI: 10.3389/fnetp.2021.765332.
Abstract
The amount of information exchanged per unit of time between two dynamic processes is an important concept for the analysis of complex systems. Theoretical formulations and data-efficient estimators have been recently introduced for this quantity, known as the mutual information rate (MIR), allowing its continuous-time computation for event-based data sets measured as realizations of coupled point processes. This work presents the implementation of MIR for point process applications in Network Physiology and cardiovascular variability, which typically feature short and noisy experimental time series. We assess the bias of MIR estimated for uncoupled point processes in the frame of surrogate data, and we compensate it by introducing a corrected MIR (cMIR) measure designed to return zero values when the two processes do not exchange information. The method is first tested extensively in synthetic point processes including a physiologically-based model of the heartbeat dynamics and the blood pressure propagation times, where we show the ability of cMIR to compensate the negative bias of MIR and return statistically significant values even for weakly coupled processes. The method is then assessed in real point-process data measured from healthy subjects during different physiological conditions, showing that cMIR between heartbeat and pressure propagation times increases significantly during postural stress, though not during mental stress. These results document that cMIR reflects physiological mechanisms of cardiovascular variability related to the joint neural autonomic modulation of heart rate and arterial compliance.
Affiliation(s)
- Gorana Mijatovic
- Faculty of Technical Science, University of Novi Sad, Novi Sad, Serbia
- Riccardo Pernice
- Department of Engineering, University of Palermo, Palermo, Italy
- Alessio Perinelli
- CIMeC, Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy
- Yuri Antonacci
- Department of Physics and Chemistry "Emilio Segrè," University of Palermo, Palermo, Italy
- Michal Javorka
- Department of Physiology and Biomedical Center Martin, Jessenius Faculty of Medicine, Comenius University, Martin, Slovakia
- Leonardo Ricci
- Department of Physics, University of Trento, Trento, Italy
- Luca Faes
- Department of Engineering, University of Palermo, Palermo, Italy
14
Nandi M, Banik SK, Chaudhury P. Restricted information in a two-step cascade. Phys Rev E 2019;100:032406. PMID: 31639964. DOI: 10.1103/physreve.100.032406.
Abstract
A cell must sense extracellular and intracellular fluctuations and respond appropriately in order to survive and function optimally. Accordingly, a cell builds up biochemical networks which can transduce information about extracellular and intracellular fluctuations accurately. We consider a generic two-step cascade as a model gene regulatory network containing three regulatory proteins S, X, and Y connected as S→X→Y. The intermediate node X is a stochastic variable that acts as an obstacle and impedes the information flow from S to Y. We quantify the information that is restricted by X using the tools of information theory and term this restricted information. In this context, we further propose two measurable quantities, restricted efficiency and information transfer efficiency. The former determines how efficiently X restricts the upstream information coming from S, while the latter computes the efficiency of X in passing the upstream information toward Y. We also quantify the information that is uniquely transferred from X to Y, which determines the extent to which X can act as a source of information. Our analysis shows that when the signal strength (or mean population of S, 〈s〉) is low, the intermediate X can both relay the upstream information reliably and act as a better source of information, thereby increasing the fidelity of the network. At high signal strength, however, X restricts most of the upstream information, and its ability to act as a source of information is reduced. This leads to a loss of fidelity of the network.
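The core loss mechanism in such a cascade can be sketched numerically. This is our own drastic simplification (not the paper's model or its exact definition of restricted information): each step S → X and X → Y is a binary symmetric channel, so the data-processing inequality forces I(S;Y) ≤ I(S;X), and the shortfall serves as a crude proxy for the information restricted by the intermediate node X.

```python
# Toy sketch: information lost through a noisy intermediate node in a
# binary S -> X -> Y cascade, via plug-in mutual information.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
s = rng.integers(0, 2, n)
x = np.where(rng.random(n) < 0.1, 1 - s, s)  # S -> X, 10% flip noise
y = np.where(rng.random(n) < 0.1, 1 - x, x)  # X -> Y, 10% flip noise

def mi(a, b):
    """Plug-in mutual information (bits) between two binary arrays."""
    total = 0.0
    for va in (0, 1):
        for vb in (0, 1):
            p_ab = np.mean((a == va) & (b == vb))
            p_a, p_b = np.mean(a == va), np.mean(b == vb)
            if p_ab > 0:
                total += p_ab * np.log2(p_ab / (p_a * p_b))
    return total

i_sx, i_sy = mi(s, x), mi(s, y)
restricted = i_sx - i_sy  # crude proxy for information restricted by X
assert i_sy <= i_sx       # data-processing inequality
print(round(i_sx, 3), round(i_sy, 3))
```

With two cascaded 10% channels the effective S → Y flip probability is 0.18, so the downstream mutual information is visibly smaller than the upstream one, mirroring the fidelity loss discussed above.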
Affiliation(s)
- Mintu Nandi
- Department of Chemistry, University of Calcutta, 92 A P C Road, Kolkata 700009, India
- Suman K Banik
- Department of Chemistry, Bose Institute, 93/1 A P C Road, Kolkata 700009, India
- Pinaki Chaudhury
- Department of Chemistry, University of Calcutta, 92 A P C Road, Kolkata 700009, India
15
Li M, Han Y, Aburn MJ, Breakspear M, Poldrack RA, Shine JM, Lizier JT. Transitions in information processing dynamics at the whole-brain network level are driven by alterations in neural gain. PLoS Comput Biol 2019; 15:e1006957. [PMID: 31613882 PMCID: PMC6793849 DOI: 10.1371/journal.pcbi.1006957] [Citation(s) in RCA: 39] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2019] [Accepted: 09/02/2019] [Indexed: 12/20/2022] Open
Abstract
A key component of the flexibility and complexity of the brain is its ability to dynamically adapt its functional network structure between integrated and segregated brain states depending on the demands of different cognitive tasks. Integrated states are prevalent when performing tasks of high complexity, such as maintaining items in working memory, consistent with models of a global workspace architecture. Recent work has suggested that the balance between integration and segregation is under the control of ascending neuromodulatory systems, such as the noradrenergic system, via changes in neural gain (in terms of the amplification and non-linearity in stimulus-response transfer function of brain regions). In a previous large-scale nonlinear oscillator model of neuronal network dynamics, we showed that manipulating neural gain parameters led to a 'critical' transition in phase synchrony that was associated with a shift from segregated to integrated topology, thus confirming our original prediction. In this study, we advance these results by demonstrating that the gain-mediated phase transition is characterized by a shift in the underlying dynamics of neural information processing. Specifically, the dynamics of the subcritical (segregated) regime are dominated by information storage, whereas the supercritical (integrated) regime is associated with increased information transfer (measured via transfer entropy). Operating near to the critical regime with respect to modulating neural gain parameters would thus appear to provide computational advantages, offering flexibility in the information processing that can be performed with only subtle changes in gain control. Our results thus link studies of whole-brain network topology and the ascending arousal system with information processing dynamics, and suggest that the constraints imposed by the ascending arousal system constrain low-dimensional modes of information processing within the brain.
Affiliation(s)
- Mike Li
- Centre for Complex Systems, The University of Sydney, Sydney, Australia
- Brain and Mind Centre, The University of Sydney, Sydney, Australia
- Complex Systems Research Group, Faculty of Engineering, The University of Sydney, Sydney, Australia
- Yinuo Han
- Centre for Complex Systems, The University of Sydney, Sydney, Australia
- Brain and Mind Centre, The University of Sydney, Sydney, Australia
- Matthew J. Aburn
- QIMR Berghofer Medical Research Institute, Queensland, Australia
- Russell A. Poldrack
- Department of Psychology, Stanford University, Stanford, California, United States of America
- James M. Shine
- Centre for Complex Systems, The University of Sydney, Sydney, Australia
- Brain and Mind Centre, The University of Sydney, Sydney, Australia
- Joseph T. Lizier
- Centre for Complex Systems, The University of Sydney, Sydney, Australia
- Complex Systems Research Group, Faculty of Engineering, The University of Sydney, Sydney, Australia
16
Energy-efficient information transfer at thalamocortical synapses. PLoS Comput Biol 2019; 15:e1007226. [PMID: 31381555 PMCID: PMC6695202 DOI: 10.1371/journal.pcbi.1007226] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2016] [Revised: 08/15/2019] [Accepted: 06/28/2019] [Indexed: 12/04/2022] Open
Abstract
We have previously shown that the physiological size of postsynaptic currents maximises energy efficiency rather than information transfer across the retinothalamic relay synapse. Here, we investigate information transmission and postsynaptic energy use at the next synapse along the visual pathway: from relay neurons in the thalamus to spiny stellate cells in layer 4 of the primary visual cortex (L4SS). Using both multicompartment Hodgkin-Huxley-type simulations and electrophysiological recordings in rodent brain slices, we find that increasing or decreasing the postsynaptic conductance of the set of thalamocortical inputs to one L4SS cell decreases the energy efficiency of information transmission from a single thalamocortical input. This result is obtained in the presence of random background input to the L4SS cell from excitatory and inhibitory corticocortical connections, which were simulated (both excitatory and inhibitory) or injected experimentally using dynamic-clamp (excitatory only). Thus, energy efficiency is not a unique property of strong relay synapses: even at the relatively weak thalamocortical synapse, each of which contributes minimally to the output firing of the L4SS cell, evolutionarily selected postsynaptic properties appear to maximise the information transmitted per energy used.
Compared to other organs, the brain consumes a vast amount of energy for its size. Most of this energy is used to power the electrical and chemical processes that support neural computation. As the energy supply to the brain is limited, it follows that this computation should be energetically efficient. Previously, we showed that this is indeed the case for transmission of information between cells at synapses. Synapses transferring information from the retina to the brain do not maximise information transmission: some information is lost and does not reach the visual cortex. Instead, these synapses maximise the information transmitted per energy used.
Here, we demonstrate that this principle of energetic efficiency also holds at the next synapse in the visual pathway, the thalamocortical synapse. This synapse is weaker and competes with hundreds of other inputs to influence the output firing of the next cell. Using detailed simulations of cortical neurons, and electrophysiological recordings in rodent brain slices, we found that this relatively weak synapse also does not maximise information transmission. Instead, it maximises the amount of information transmitted per energy used. This suggests that energy efficiency at synapses could be a common design principle in the brain.
17
Gong X, Li W, Liang H. Spike-field Granger causality for hybrid neural data analysis. J Neurophysiol 2019; 122:809-822. [DOI: 10.1152/jn.00246.2019] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023] Open
Abstract
Neurotechnological innovations allow for simultaneous recording at various scales, ranging from spiking activity of individual neurons to large neural populations’ local field potentials (LFPs). This capability necessitates developing multiscale analysis of spike-field activity. A joint analysis of the hybrid neural data is crucial for bridging the scales between single neurons and local networks. Granger causality is a fundamental measure to evaluate directional influences among neural signals. However, it is mainly limited to inferring causal influence between the same type of signals—either LFPs or spike trains—and not well developed between two different signal types. Here we propose a model-free, nonparametric spike-field Granger causality measure for hybrid data analysis. Our measure is distinct from existing methods in that we use “binless” spikes (precise spike timing) rather than “binned” spikes (spike counts within small consecutive time windows). The latter clearly distort the information in the mixed analysis of spikes and LFP. Therefore, our spectral estimate of spike trains is directly applied to the neural point process itself, i.e., sequences of spike times rather than spike counts. Our measure is validated by an extensive set of simulated data. When the measure is applied to LFPs and spiking activity simultaneously recorded from visual areas V1 and V4 of monkeys performing a contour detection task, we are able to confirm computationally the long-standing experimental finding of the input-output relationship between LFPs and spikes. Importantly, we demonstrate that spike-field Granger causality can be used to reveal the modulatory effects that are inaccessible by traditional methods, such that spike→LFP Granger causality is modulated by the behavioral task, whereas LFP→spike Granger causality is mainly related to the average synaptic input. 
NEW & NOTEWORTHY It is a pressing question to study the directional interactions between local field potential (LFP) and spiking activity. In this report, we propose a model-free, nonparametric spike-field Granger causality measure that can be used to reveal directional influences between spikes and LFPs. This new measure is crucial for bridging the scales between single neurons and neural networks; hence it represents an important step to explicate how the brain orchestrates information processing.
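For readers unfamiliar with the underlying quantity: standard time-domain Granger causality compares the residual variance of an autoregressive model of the target with and without the candidate source. Below is a minimal lag-1, ordinary-least-squares sketch on a synthetic, unidirectionally coupled pair (the coupling coefficients are assumed toys; the paper's nonparametric spike-field measure is considerably more involved).

```python
import math, random

def _var(r):
    """Population variance of a list of residuals."""
    m = sum(r) / len(r)
    return sum((v - m) ** 2 for v in r) / len(r)

def gc_1lag(x, y):
    """Granger causality y -> x at lag 1: ln(restricted / full residual variance)."""
    x0, x1, y1 = x[1:], x[:-1], y[:-1]
    # Restricted model: x_t = a * x_{t-1} + e  (no source term)
    a = sum(p * q for p, q in zip(x1, x0)) / sum(p * p for p in x1)
    var_r = _var([q - a * p for p, q in zip(x1, x0)])
    # Full model: x_t = b * x_{t-1} + c * y_{t-1} + e  (solve 2x2 normal equations)
    sxx = sum(p * p for p in x1)
    syy = sum(p * p for p in y1)
    sxy = sum(p * q for p, q in zip(x1, y1))
    sx0 = sum(p * q for p, q in zip(x1, x0))
    sy0 = sum(p * q for p, q in zip(y1, x0))
    det = sxx * syy - sxy * sxy
    b = (syy * sx0 - sxy * sy0) / det
    c = (sxx * sy0 - sxy * sx0) / det
    var_f = _var([q - b * p - c * r for p, q, r in zip(x1, x0, y1)])
    return math.log(var_r / var_f)

random.seed(1)
n = 4_000
y = [random.gauss(0, 1) for _ in range(n)]   # i.i.d. driver
x = [0.0] * n
for t in range(1, n):                        # y unidirectionally drives x
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * random.gauss(0, 1)

gc_yx = gc_1lag(x, y)   # large: past y helps predict x
gc_xy = gc_1lag(y, x)   # near zero: past x does not help predict i.i.d. y
```

Because the full model nests the restricted one, the measure is nonnegative; the asymmetry between the two directions is what carries the causal interpretation.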
Affiliation(s)
- Xiajing Gong
- School of Biomedical Engineering, Science, and Health Systems, Drexel University, Philadelphia, Pennsylvania
- Wu Li
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Hualou Liang
- School of Biomedical Engineering, Science, and Health Systems, Drexel University, Philadelphia, Pennsylvania
18
Novelli L, Wollstadt P, Mediano P, Wibral M, Lizier JT. Large-scale directed network inference with multivariate transfer entropy and hierarchical statistical testing. Netw Neurosci 2019; 3:827-847. [PMID: 31410382 PMCID: PMC6663300 DOI: 10.1162/netn_a_00092] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2019] [Accepted: 04/24/2019] [Indexed: 12/14/2022] Open
Abstract
Network inference algorithms are valuable tools for the study of large-scale neuroimaging datasets. Multivariate transfer entropy is well suited for this task, being a model-free measure that captures nonlinear and lagged dependencies between time series to infer a minimal directed network model. Greedy algorithms have been proposed to efficiently deal with high-dimensional datasets while avoiding redundant inferences and capturing synergistic effects. However, multiple statistical comparisons may inflate the false positive rate and are computationally demanding, which limited the size of previous validation studies. The algorithm we present-as implemented in the IDTxl open-source software-addresses these challenges by employing hierarchical statistical tests to control the family-wise error rate and to allow for efficient parallelization. The method was validated on synthetic datasets involving random networks of increasing size (up to 100 nodes), for both linear and nonlinear dynamics. The performance increased with the length of the time series, reaching consistently high precision, recall, and specificity (>98% on average) for 10,000 time samples. Varying the statistical significance threshold showed a more favorable precision-recall trade-off for longer time series. Both the network size and the sample size are one order of magnitude larger than previously demonstrated, showing feasibility for typical EEG and magnetoencephalography experiments.
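The base quantity behind this inference pipeline can be sketched compactly. The following is only the bivariate plug-in transfer entropy at history length 1 on toy binary data (assumed processes); IDTxl's multivariate, greedily selected, statistically tested estimator is far richer than this.

```python
import math, random
from collections import Counter

def transfer_entropy(src, dst):
    """Plug-in bivariate transfer entropy src -> dst (bits), history length 1."""
    n = len(dst) - 1
    triples = Counter(zip(dst[1:], dst[:-1], src[:-1]))   # (x_{t+1}, x_t, y_t)
    pairs = Counter(zip(dst[1:], dst[:-1]))               # (x_{t+1}, x_t)
    hist = Counter(zip(dst[:-1], src[:-1]))               # (x_t, y_t)
    marg = Counter(dst[:-1])                              # x_t
    return sum((c / n) * math.log2((c / hist[(x0, y0)])
                                   / (pairs[(x1, x0)] / marg[x0]))
               for (x1, x0, y0), c in triples.items())

random.seed(0)
n = 10_000
y = [random.randint(0, 1) for _ in range(n)]              # driver: i.i.d. bits
x = [0] + [y[t - 1] if random.random() < 0.9 else 1 - y[t - 1]
           for t in range(1, n)]                          # noisy copy of y, lag 1

te_yx = transfer_entropy(y, x)   # roughly 1 - H(0.1) ≈ 0.53 bits
te_xy = transfer_entropy(x, y)   # near zero: no reverse coupling
```

The strong directional asymmetry of the estimate is exactly what a network-inference algorithm turns into a directed edge, after significance testing.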
Affiliation(s)
- Leonardo Novelli
- Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, Australia
- Pedro Mediano
- Computational Neurodynamics Group, Department of Computing, Imperial College London, London, United Kingdom
- Michael Wibral
- Campus Institute for Dynamics of Biological Networks, Georg-August University, Göttingen, Germany
- Joseph T. Lizier
- Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, Australia
19
Harding N, Nigmatullin R, Prokopenko M. Thermodynamic efficiency of contagions: a statistical mechanical analysis of the SIS epidemic model. Interface Focus 2018; 8:20180036. [PMID: 30443333 PMCID: PMC6227806 DOI: 10.1098/rsfs.2018.0036] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/10/2018] [Indexed: 01/23/2023] Open
Abstract
We present a novel approach to the study of epidemics on networks as thermodynamic phenomena, quantifying the thermodynamic efficiency of contagions, considered as distributed computational processes. Modelling SIS dynamics on a contact network statistical-mechanically, we follow the maximum entropy (MaxEnt) principle to obtain steady-state distributions and derive, under certain assumptions, relevant thermodynamic quantities both analytically and numerically. In particular, we obtain closed-form solutions for some cases, while interpreting key epidemic variables, such as the reproductive ratio of a SIS model, in a statistical mechanical setting. On the other hand, we consider configuration and free entropy, as well as the Fisher information, in the epidemiological context. This allowed us to identify criticality and distinct phases of epidemic processes. For each of the considered thermodynamic quantities, we compare the analytical solutions informed by the MaxEnt principle with the numerical estimates for SIS epidemics simulated on Watts-Strogatz random graphs.
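The reproductive ratio that the authors reinterpret statistical-mechanically has a simple dynamical meaning. The sketch below is only the deterministic mean-field (well-mixed) SIS equation with assumed rates, not the paper's network or MaxEnt machinery: above the threshold R0 = β/γ = 1, prevalence settles at the endemic value 1 − 1/R0; below it, the epidemic dies out.

```python
def sis_mean_field(beta, gamma, i0=0.01, dt=0.01, steps=50_000):
    """Euler-integrate the mean-field SIS equation di/dt = beta*i*(1 - i) - gamma*i."""
    i = i0
    for _ in range(steps):
        i += dt * (beta * i * (1 - i) - gamma * i)
    return i

# R0 = beta/gamma; endemic prevalence is 1 - 1/R0 above the threshold R0 = 1.
i_super = sis_mean_field(beta=0.6, gamma=0.2)   # R0 = 3 -> converges to 2/3
i_sub = sis_mean_field(beta=0.1, gamma=0.2)     # R0 = 0.5 -> extinction
```

The fixed point of the Euler map coincides with the continuous-time equilibrium here, since the update vanishes exactly where β·i·(1 − i) = γ·i.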
Affiliation(s)
- Nathan Harding
- Centre for Complex Systems, Faculty of Engineering and IT, University of Sydney, Sydney, New South Wales 2006, Australia
- Ramil Nigmatullin
- Centre for Complex Systems, Faculty of Engineering and IT, University of Sydney, Sydney, New South Wales 2006, Australia
- Mikhail Prokopenko
- Centre for Complex Systems, Faculty of Engineering and IT, University of Sydney, Sydney, New South Wales 2006, Australia
- Marie Bashir Institute for Infectious Diseases and Biosecurity, University of Sydney, Westmead, New South Wales 2145, Australia
20
Kolchinsky A, Wolpert DH. Semantic information, autonomous agency and non-equilibrium statistical physics. Interface Focus 2018; 8:20180041. [PMID: 30443338 PMCID: PMC6227811 DOI: 10.1098/rsfs.2018.0041] [Citation(s) in RCA: 52] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/04/2018] [Indexed: 01/24/2023] Open
Abstract
Shannon information theory provides various measures of so-called syntactic information, which reflect the amount of statistical correlation between systems. By contrast, the concept of 'semantic information' refers to those correlations which carry significance or 'meaning' for a given system. Semantic information plays an important role in many fields, including biology, cognitive science and philosophy, and there has been a long-standing interest in formulating a broadly applicable and formal theory of semantic information. In this paper, we introduce such a theory. We define semantic information as the syntactic information that a physical system has about its environment which is causally necessary for the system to maintain its own existence. 'Causal necessity' is defined in terms of counter-factual interventions which scramble correlations between the system and its environment, while 'maintaining existence' is defined in terms of the system's ability to keep itself in a low entropy state. We also use recent results in non-equilibrium statistical physics to analyse semantic information from a thermodynamic point of view. Our framework is grounded in the intrinsic dynamics of a system coupled to an environment, and is applicable to any physical system, living or otherwise. It leads to formal definitions of several concepts that have been intuitively understood to be related to semantic information, including 'value of information', 'semantic content' and 'agency'.
Affiliation(s)
- David H. Wolpert
- Santa Fe Institute, Santa Fe, NM 87501, USA
- Massachusetts Institute of Technology, Cambridge, MA, USA
- Arizona State University, Tempe, AZ, USA
21
Nandi M, Biswas A, Banik SK, Chaudhury P. Information processing in a simple one-step cascade. Phys Rev E 2018; 98:042310. [DOI: 10.1103/physreve.98.042310] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/19/2023]
22
Spinney RE, Lizier JT. Characterizing information-theoretic storage and transfer in continuous time processes. Phys Rev E 2018; 98:012314. [PMID: 30110808 DOI: 10.1103/physreve.98.012314] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2018] [Indexed: 11/07/2022]
Abstract
The characterization of information processing is an important task in complex systems science. Information dynamics is a quantitative methodology for modeling the intrinsic information processing conducted by a process represented as a time series, but to date has only been formulated in discrete time. Building on previous work which demonstrated how to formulate transfer entropy in continuous time, we give a total account of information processing in this setting, incorporating information storage. We find that a convergent rate of predictive capacity, comprising the transfer entropy and active information storage, does not exist, arising through divergent rates of active information storage. We identify that active information storage can be decomposed into two separate quantities that characterize predictive capacity stored in a process: active memory utilization and instantaneous predictive capacity. The latter involves prediction related to path regularity and so solely inherits the divergent properties of the active information storage, while the former permits definitions of pathwise and rate quantities. We formulate measures of memory utilization for jump and neural spiking processes and illustrate measures of information processing in synthetic neural spiking models and coupled Ornstein-Uhlenbeck models. The application to synthetic neural spiking models demonstrates that active memory utilization for point processes consists of discontinuous jump contributions (at spikes) interrupting a continuously varying contribution (relating to waiting times between spikes), complementing the behavior previously demonstrated for transfer entropy in these processes.
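In discrete time, the active information storage that this paper generalizes is simply the mutual information between a process's past and its next sample. A plug-in sketch with history length 1 on toy binary series (assumed processes, not the paper's continuous-time models, where the abstract notes this rate behaves very differently):

```python
import math, random
from collections import Counter

def active_info_storage(x):
    """Plug-in active information storage, history length 1: I(X_t ; X_{t-1}) in bits."""
    n = len(x) - 1
    joint = Counter(zip(x[1:], x[:-1]))
    now, past = Counter(x[1:]), Counter(x[:-1])
    return sum((c / n) * math.log2((c / n) / ((now[a] / n) * (past[b] / n)))
               for (a, b), c in joint.items())

random.seed(0)
iid = [random.randint(0, 1) for _ in range(20_000)]       # memoryless process
sticky = [0]
for _ in range(20_000 - 1):                               # Markov chain, flips w.p. 0.1
    sticky.append(sticky[-1] if random.random() < 0.9 else 1 - sticky[-1])

ais_iid = active_info_storage(iid)        # ~0 bits: nothing stored
ais_sticky = active_info_storage(sticky)  # roughly 1 - H(0.1) ≈ 0.53 bits
```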
Affiliation(s)
- Richard E Spinney
- Complex Systems Research Group and Centre for Complex Systems, Faculty of Engineering and Information Technologies, University of Sydney, Sydney, New South Wales 2006, Australia
- Joseph T Lizier
- Complex Systems Research Group and Centre for Complex Systems, Faculty of Engineering and Information Technologies, University of Sydney, Sydney, New South Wales 2006, Australia
23
Crosato E, Spinney RE, Nigmatullin R, Lizier JT, Prokopenko M. Thermodynamics and computation during collective motion near criticality. Phys Rev E 2018; 97:012120. [PMID: 29448440 DOI: 10.1103/physreve.97.012120] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2017] [Indexed: 11/07/2022]
Abstract
We study self-organization of collective motion as a thermodynamic phenomenon in the context of the first law of thermodynamics. It is expected that the coherent ordered motion typically self-organises in the presence of changes in the (generalized) internal energy and of (generalized) work done on, or extracted from, the system. We aim to explicitly quantify changes in these two quantities in a system of simulated self-propelled particles and contrast them with changes in the system's configuration entropy. In doing so, we adapt a thermodynamic formulation of the curvatures of the internal energy and the work, with respect to two parameters that control the particles' alignment. This allows us to systematically investigate the behavior of the system by varying the two control parameters to drive the system across a kinetic phase transition. Our results identify critical regimes and show that during the phase transition, where the configuration entropy of the system decreases, the rates of change of the work and of the internal energy also decrease, while their curvatures diverge. Importantly, the reduction of entropy achieved through expenditure of work is shown to peak at criticality. We relate this both to a thermodynamic efficiency and the significance of the increased order with respect to a computational path. Additionally, this study provides an information-geometric interpretation of the curvature of the internal energy as the difference between two curvatures: the curvature of the free entropy, captured by the Fisher information, and the curvature of the configuration entropy.
Affiliation(s)
- Emanuele Crosato
- Complex Systems Research Group and Centre for Complex Systems, Faculty of Engineering and IT, The University of Sydney, Sydney, NSW 2006, Australia
- Richard E Spinney
- Complex Systems Research Group and Centre for Complex Systems, Faculty of Engineering and IT, The University of Sydney, Sydney, NSW 2006, Australia
- Ramil Nigmatullin
- Complex Systems Research Group and Centre for Complex Systems, Faculty of Engineering and IT, The University of Sydney, Sydney, NSW 2006, Australia
- Joseph T Lizier
- Complex Systems Research Group and Centre for Complex Systems, Faculty of Engineering and IT, The University of Sydney, Sydney, NSW 2006, Australia
- Mikhail Prokopenko
- Complex Systems Research Group and Centre for Complex Systems, Faculty of Engineering and IT, The University of Sydney, Sydney, NSW 2006, Australia
24
Da Rold F. Information-theoretic decomposition of embodied and situated systems. Neural Netw 2018; 103:94-107. [PMID: 29665540 DOI: 10.1016/j.neunet.2018.03.011] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2017] [Revised: 01/01/2018] [Accepted: 03/14/2018] [Indexed: 11/30/2022]
Abstract
The embodied and situated view of cognition stresses the importance of real-time and nonlinear bodily interaction with the environment for developing concepts and structuring knowledge. In this article, populations of robots controlled by an artificial neural network learn a wall-following task through artificial evolution. At the end of the evolutionary process, time series are recorded from perceptual and motor neurons of selected robots. Information-theoretic measures are estimated on pairings of variables to unveil nonlinear interactions that structure the agent-environment system. Specifically, the mutual information is utilized to quantify the degree of dependence and the transfer entropy to detect the direction of the information flow. Furthermore, the system is analyzed with the local form of such measures, thus capturing the underlying dynamics of information. Results show that different measures are interdependent and complementary in uncovering aspects of the robots' interaction with the environment, as well as characteristics of the functional neural structure. Therefore, the set of information-theoretic measures provides a decomposition of the system, capturing the intricacy of nonlinear relationships that characterize robots' behavior and neural dynamics.
Affiliation(s)
- Federico Da Rold
- School of Computing, Electronics and Mathematics, Plymouth University, Plymouth PL4 8AA, UK.
25
Cliff OM, Prokopenko M, Fitch R. Minimising the Kullback-Leibler Divergence for Model Selection in Distributed Nonlinear Systems. Entropy 2018; 20:e20020051. [PMID: 33265171 PMCID: PMC7512642 DOI: 10.3390/e20020051] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/21/2017] [Revised: 01/17/2018] [Accepted: 01/18/2018] [Indexed: 02/04/2023]
Abstract
The Kullback-Leibler (KL) divergence is a fundamental measure of information geometry that is used in a variety of contexts in artificial intelligence. We show that, when system dynamics are given by distributed nonlinear systems, this measure can be decomposed as a function of two information-theoretic measures, transfer entropy and stochastic interaction. More specifically, these measures are applicable when selecting a candidate model for a distributed system, where individual subsystems are coupled via latent variables and observed through a filter. We represent this model as a directed acyclic graph (DAG) that characterises the unidirectional coupling between subsystems. Standard approaches to structure learning are not applicable in this framework due to the hidden variables; however, we can exploit the properties of certain dynamical systems to formulate exact methods based on differential topology. We approach the problem by using reconstruction theorems to derive an analytical expression for the KL divergence of a candidate DAG from the observed dataset. Using this result, we present a scoring function based on transfer entropy to be used as a subroutine in a structure learning algorithm. We then demonstrate its use in recovering the structure of coupled Lorenz and Rössler systems.
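As a reminder of the base quantity being decomposed, here is the discrete KL divergence in a few lines (the example distributions are arbitrary; the paper's contribution is its decomposition into transfer entropy and stochastic interaction for distributed systems, not this elementary definition):

```python
import math

def kl_divergence(p, q):
    """D_KL(p || q) in nats for discrete distributions over the same support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.7, 0.2, 0.1]
q = [0.5, 0.3, 0.2]
d_pq = kl_divergence(p, q)
d_qp = kl_divergence(q, p)
# Nonnegative, zero iff p == q, and asymmetric in its arguments.
assert d_pq > 0 and d_qp > 0 and d_pq != d_qp
assert kl_divergence(p, p) == 0.0
```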
Affiliation(s)
- Oliver M. Cliff
- Australian Centre for Field Robotics, The University of Sydney, Sydney NSW 2006, Australia
- Complex Systems Research Group, The University of Sydney, Sydney NSW 2006, Australia
- Correspondence: ; Tel.: +61-2-9351-3040
- Mikhail Prokopenko
- Complex Systems Research Group, The University of Sydney, Sydney NSW 2006, Australia
- Robert Fitch
- Australian Centre for Field Robotics, The University of Sydney, Sydney NSW 2006, Australia
- Centre for Autonomous Systems, University of Technology Sydney, Ultimo NSW 2007, Australia
26
Darmon D, Rapp PE. Specific transfer entropy and other state-dependent transfer entropies for continuous-state input-output systems. Phys Rev E 2017; 96:022121. [PMID: 28950488 DOI: 10.1103/physreve.96.022121] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2017] [Indexed: 11/07/2022]
Abstract
Since its original formulation in 2000, transfer entropy has become an invaluable tool in the toolbox of nonlinear dynamicists working with empirical data. Transfer entropy and its generalizations provide a precise definition of uncertainty and information transfer that are central to the coupled systems studied in nonlinear science. However, a canonical definition of state-dependent transfer entropy has yet to be introduced. We introduce a candidate measure, the specific transfer entropy, and compare its properties to both total and local transfer entropy. Specific transfer entropy makes possible both state- and time-resolved analysis of the predictive impact of a candidate input system on a candidate output system. We also present principled methods for estimating total, local, and specific transfer entropies from empirical data. We demonstrate the utility of specific transfer entropy and our proposed estimation procedures with two model systems, and find that specific transfer entropy provides more, and more easily interpretable, information about an input-output system compared to currently existing methods.
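The total/local distinction the abstract draws can be made concrete: the local transfer entropy is the log-ratio evaluated at a single time step, and averaging the local values recovers the total plug-in estimate. A history-length-1 sketch on toy binary data (specific transfer entropy itself, the paper's contribution, is a further state-conditioned average not shown here):

```python
import math, random
from collections import Counter

def local_te(src, dst):
    """Pointwise (local) transfer entropy values src -> dst (bits), history 1."""
    n = len(dst) - 1
    triples = Counter(zip(dst[1:], dst[:-1], src[:-1]))
    pairs = Counter(zip(dst[1:], dst[:-1]))
    hist = Counter(zip(dst[:-1], src[:-1]))
    marg = Counter(dst[:-1])
    return [math.log2((triples[(dst[t + 1], dst[t], src[t])] / hist[(dst[t], src[t])])
                      / (pairs[(dst[t + 1], dst[t])] / marg[dst[t]]))
            for t in range(n)]

random.seed(2)
n = 5_000
src = [random.randint(0, 1) for _ in range(n)]
dst = [0] + [src[t - 1] if random.random() < 0.8 else random.randint(0, 1)
             for t in range(1, n)]

loc = local_te(src, dst)
total = sum(loc) / len(loc)   # average of local values = plug-in transfer entropy
# Individual local values can be negative (misinformative steps)
# even though the average is always nonnegative.
```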
Affiliation(s)
- David Darmon
- Department of Military and Emergency Medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland 20814, USA and The Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, Maryland 20817, USA
- Paul E Rapp
- Department of Military and Emergency Medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland 20814, USA
27
Wollstadt P, Sellers KK, Rudelt L, Priesemann V, Hutt A, Fröhlich F, Wibral M. Breakdown of local information processing may underlie isoflurane anesthesia effects. PLoS Comput Biol 2017; 13:e1005511. [PMID: 28570661 PMCID: PMC5453425 DOI: 10.1371/journal.pcbi.1005511] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2016] [Accepted: 04/11/2017] [Indexed: 02/07/2023] Open
Abstract
The disruption of coupling between brain areas has been suggested as the mechanism underlying loss of consciousness in anesthesia. This hypothesis has been tested previously by measuring the information transfer between brain areas, and by taking reduced information transfer as a proxy for decoupling. Yet, information transfer is a function of the amount of information available in the information source—such that transfer decreases even for unchanged coupling when less source information is available. Therefore, we reconsidered past interpretations of reduced information transfer as a sign of decoupling, and asked whether impaired local information processing leads to a loss of information transfer. An important prediction of this alternative hypothesis is that changes in locally available information (signal entropy) should be at least as pronounced as changes in information transfer. We tested this prediction by recording local field potentials in two ferrets after administration of isoflurane in concentrations of 0.0%, 0.5%, and 1.0%. We found strong decreases in the source entropy under isoflurane in area V1 and the prefrontal cortex (PFC)—as predicted by our alternative hypothesis. The decrease in source entropy was stronger in PFC compared to V1. Information transfer between V1 and PFC was reduced bidirectionally, but with a stronger decrease from PFC to V1. This links the stronger decrease in information transfer to the stronger decrease in source entropy—suggesting reduced source entropy reduces information transfer. This conclusion fits the observation that the synaptic targets of isoflurane are located in local cortical circuits rather than on the synapses formed by interareal axonal projections. Thus, changes in information transfer under isoflurane seem to be a consequence of changes in local processing more than of decoupling between brain areas. 
We suggest that source entropy changes must be considered whenever interpreting changes in information transfer as decoupling.
Currently we do not understand how anesthesia leads to loss of consciousness (LOC). One popular idea is that we lose consciousness when brain areas lose their ability to communicate with each other, as anesthetics might interrupt transmission on the nerve fibers coupling them. This idea has been tested by measuring the amount of information transferred between brain areas, and taking this transfer to reflect the coupling itself. Yet, information that isn't available in the source area can't be transferred to a target. Hence, decreases in information transfer could be related to less information being available in the source, rather than to a decoupling. We tested this possibility by measuring the information available in source brain areas and found that it decreased under isoflurane anesthesia. In addition, a stronger decrease in source information led to a stronger decrease in the information transferred. Thus, the input to the connection between brain areas determined the communicated information, not the strength of the coupling (which would result in a stronger decrease in the target). We suggest that interrupted information processing within brain areas makes an important contribution to LOC and should be given more attention in attempts to understand loss of consciousness under anesthesia.
Affiliation(s)
- Patricia Wollstadt
- MEG Unit, Brain Imaging Center, Goethe University, Frankfurt/Main, Germany
- * E-mail: (PW); (VP)
| | - Kristin K. Sellers
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Neurobiology Curriculum, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Lucas Rudelt
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
| | - Viola Priesemann
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, BCCN, Göttingen, Germany
- * E-mail: (PW); (VP)
| | - Axel Hutt
- Deutscher Wetterdienst, Section FE 12 - Data Assimilation, Offenbach/Main, Germany
- Department of Mathematics and Statistics, University of Reading, Reading, United Kingdom
| | - Flavio Fröhlich
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Neurobiology Curriculum, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Department of Cell Biology and Physiology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Department of Biomedical Engineering, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Department of Neurology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Michael Wibral
- MEG Unit, Brain Imaging Center, Goethe University, Frankfurt/Main, Germany
| |
|
28
|
Ramos AMT, Builes-Jaramillo A, Poveda G, Goswami B, Macau EEN, Kurths J, Marwan N. Recurrence measure of conditional dependence and applications. Phys Rev E 2017; 95:052206. [PMID: 28618513 DOI: 10.1103/physreve.95.052206] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2017] [Indexed: 06/07/2023]
Abstract
Identifying causal relations from observational data sets poses great challenges for data-driven causality inference. One successful approach to detecting direct coupling within the information-theory framework is transfer entropy. However, the core of entropy-based tools lies in the probability estimation of the underlying variables. Here we propose a data-driven approach for causality inference that incorporates recurrence-plot features into the framework of information theory. We define it as the recurrence measure of conditional dependence (RMCD), and we present some applications. The RMCD quantifies the causal dependence between two processes based on joint recurrence patterns between the past of the possible driver and the present of the potentially driven variable, excluding the contribution of the contemporaneous past of the driven variable. Finally, it can unveil the time scale of the influence of the sea-surface temperature of the Pacific Ocean on precipitation in Amazonia during recent major droughts.
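The joint-recurrence ingredient behind RMCD can be sketched in a few lines. The toy below is not the published RMCD estimator (which additionally conditions out the contemporaneous past of the driven variable); it builds binary recurrence plots for an assumed driver-response pair and compares the joint recurrence rate of the driver's past and the driven variable's present against a driver-shuffled surrogate. The coupling strengths and recurrence thresholds are illustrative choices.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence plot: R[i, j] = 1 iff |x_i - x_j| < eps."""
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(float)

# Toy driver-response pair: y is driven by the past of x.
rng = np.random.default_rng(0)
n = 800
x = np.zeros(n)
y = np.zeros(n)
for k in range(1, n):
    x[k] = 0.8 * x[k - 1] + rng.normal(scale=0.5)
    y[k] = 0.5 * y[k - 1] + 0.6 * x[k - 1] + rng.normal(scale=0.5)

# Recurrence plots of the driver's past and the driven variable's present.
Rx = recurrence_matrix(x[:-1], 0.5 * np.std(x))
Ry = recurrence_matrix(y[1:], 0.5 * np.std(y))
jrr = (Rx * Ry).mean()  # joint recurrence rate

# Surrogate: shuffling the driver destroys the coupling but preserves its
# recurrence rate, giving a chance-level baseline for the joint rate.
Rs = recurrence_matrix(rng.permutation(x[:-1]), 0.5 * np.std(x))
jrr_surr = (Rs * Ry).mean()

print(f"joint recurrence rate: {jrr:.4f} (surrogate baseline: {jrr_surr:.4f})")
```

When the driver genuinely influences the response, recurrences in the driver's past co-occur with recurrences in the response's present more often than the surrogate baseline; RMCD refines this idea by discounting what the response's own past already explains.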
Affiliation(s)
- Antônio M T Ramos
- National Institute for Space Research - INPE, 12227-010 São José dos Campos, São Paulo, Brazil
- Potsdam Institute for Climate Impact Research, Potsdam 14473, Germany
| | - Alejandro Builes-Jaramillo
- Universidad Nacional de Colombia, Sede Medellín, Department of Geosciences and Environment, Facultad de Minas, Carrera 80 No 65-223, Bloque M2. Medellín, Colombia
- Facultad de Arquitectura e Ingeniería, Institución Universitaria Colegio Mayor de Antioquia, Carrera 78 65 - 46, Edificio patrimonial. Medellín, Colombia
| | - Germán Poveda
- Universidad Nacional de Colombia, Sede Medellín, Department of Geosciences and Environment, Facultad de Minas, Carrera 80 No 65-223, Bloque M2. Medellín, Colombia
| | - Bedartha Goswami
- Potsdam Institute for Climate Impact Research, Potsdam 14473, Germany
- Institute of Earth and Environmental Science, University of Potsdam, Karl-Liebknecht Str. 2425, Potsdam 14476, Germany
| | - Elbert E N Macau
- National Institute for Space Research - INPE, 12227-010 São José dos Campos, São Paulo, Brazil
| | - Jürgen Kurths
- Potsdam Institute for Climate Impact Research, Potsdam 14473, Germany
- Department of Physics, Humboldt University Berlin, Berlin, Germany
| | - Norbert Marwan
- Potsdam Institute for Climate Impact Research, Potsdam 14473, Germany
| |
|
29
|
|