1
Ruan Z, Li H. Two Levels of Integrated Information Theory: From Autonomous Systems to Conscious Life. Entropy (Basel) 2024; 26:761. PMID: 39330094; PMCID: PMC11431274; DOI: 10.3390/e26090761.
Abstract
Integrated Information Theory (IIT) is one of the most prominent candidates for a theory of consciousness, although it has drawn much criticism for failing to live up to expectations. Based on the relevance of three issues generalized from the developments of IIT, we summarize the theory's main ideas into two levels. At the second level, IIT claims to be strictly anchoring consciousness, but the first level on which it is based is more about autonomous systems, or systems that have reached some other critical complexity. In this paper, we argue that the clear gap between IIT's two levels of explanation has invited these criticisms, and that the theory's panpsychist tendency plays a crucial role in this. We suggest that, despite its problems, IIT is far from "pseudoscience": by adding the necessary elements and combining the first level with the second, IIT can genuinely move toward an appropriate theory of consciousness that provides necessary and sufficient interpretations.
Affiliation(s)
- Zenan Ruan
- Department of Public Administration, Hangzhou Institute of Administration, Hangzhou 310024, China
- Hengwei Li
- School of Philosophy, Zhejiang University, Hangzhou 310058, China
- Center for the Study of Language and Cognition, Zhejiang University, Hangzhou 310058, China
- The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou 310058, China
2
Luppi AI, Mediano PAM, Rosas FE, Allanson J, Pickard J, Carhart-Harris RL, Williams GB, Craig MM, Finoia P, Owen AM, Naci L, Menon DK, Bor D, Stamatakis EA. A synergistic workspace for human consciousness revealed by Integrated Information Decomposition. eLife 2024; 12:RP88173. PMID: 39022924; PMCID: PMC11257694; DOI: 10.7554/elife.88173.
Abstract
How is the information-processing architecture of the human brain organised, and how does its organisation support consciousness? Here, we combine network science and a rigorous information-theoretic notion of synergy to delineate a 'synergistic global workspace', comprising gateway regions that gather synergistic information from specialised modules across the human brain. This information is then integrated within the workspace and widely distributed via broadcaster regions. Through functional MRI analysis, we show that gateway regions of the synergistic workspace correspond to the human brain's default mode network, whereas broadcasters coincide with the executive control network. We find that loss of consciousness due to general anaesthesia or disorders of consciousness corresponds to diminished ability of the synergistic workspace to integrate information, which is restored upon recovery. Thus, loss of consciousness coincides with a breakdown of information integration within the synergistic workspace of the human brain. This work contributes to conceptual and empirical reconciliation between two prominent scientific theories of consciousness, the Global Neuronal Workspace and Integrated Information Theory, while also advancing our understanding of how the human brain supports consciousness through the synergistic integration of information.
Affiliation(s)
- Andrea I Luppi
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- University Division of Anaesthesia, School of Clinical Medicine, University of Cambridge, Cambridge, United Kingdom
- Pedro AM Mediano
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Fernando E Rosas
- Center for Psychedelic Research, Department of Brain Science, Imperial College London, London, United Kingdom
- Center for Complexity Science, Imperial College London, London, United Kingdom
- Data Science Institute, Imperial College London, London, United Kingdom
- Judith Allanson
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- Department of Neurosciences, Cambridge University Hospitals NHS Foundation, Addenbrooke's Hospital, Cambridge, United Kingdom
- John Pickard
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- Wolfson Brain Imaging Centre, University of Cambridge, Cambridge, United Kingdom
- Division of Neurosurgery, School of Clinical Medicine, University of Cambridge, Addenbrooke's Hospital, Cambridge, United Kingdom
- Robin L Carhart-Harris
- Center for Psychedelic Research, Department of Brain Science, Imperial College London, London, United Kingdom
- Psychedelics Division - Neuroscape, Department of Neurology, University of California, San Francisco, United States
- Guy B Williams
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- Wolfson Brain Imaging Centre, University of Cambridge, Cambridge, United Kingdom
- Michael M Craig
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- University Division of Anaesthesia, School of Clinical Medicine, University of Cambridge, Cambridge, United Kingdom
- Paola Finoia
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- Adrian M Owen
- Department of Psychology and Department of Physiology and Pharmacology, The Brain and Mind Institute, University of Western Ontario, London, Canada
- Lorina Naci
- Trinity College Institute of Neuroscience, School of Psychology, Lloyd Building, Trinity College, Dublin, Ireland
- David K Menon
- University Division of Anaesthesia, School of Clinical Medicine, University of Cambridge, Cambridge, United Kingdom
- Wolfson Brain Imaging Centre, University of Cambridge, Cambridge, United Kingdom
- Daniel Bor
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Emmanuel A Stamatakis
- University Division of Anaesthesia, School of Clinical Medicine, University of Cambridge, Cambridge, United Kingdom
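The "synergy" at the heart of the workspace analysis above can be illustrated with the simplest possible case. This is a minimal sketch, not the paper's Integrated Information Decomposition (which additionally requires a redundancy function): for an XOR gate, the two inputs jointly determine the output even though each input alone carries zero information about it, so the whole-minus-sum difference is pure synergy.

```python
from collections import Counter
from itertools import product
from math import log2

def mutual_info(pairs):
    """I(A;B) in bits from a list of equally likely (a, b) outcomes."""
    n = len(pairs)
    pab = Counter(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    return sum(c / n * log2((c / n) / (pa[a] / n * pb[b] / n))
               for (a, b), c in pab.items())

# XOR: the output is fully determined by the inputs jointly,
# yet invisible to either input on its own.
samples = [((x1, x2), x1 ^ x2) for x1, x2 in product((0, 1), repeat=2)]
joint = mutual_info(samples)                          # I(X1,X2 ; Y) = 1 bit
solo1 = mutual_info([(x[0], y) for x, y in samples])  # I(X1 ; Y) = 0 bits
solo2 = mutual_info([(x[1], y) for x, y in samples])  # I(X2 ; Y) = 0 bits
synergy = joint - solo1 - solo2   # whole-minus-sum: 1 bit of pure synergy
print(joint, solo1, solo2, synergy)
```

The whole-minus-sum quantity can go negative when sources are redundant, which is exactly why full decompositions such as ΦID introduce an explicit redundancy measure.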
3
Proca AM, Rosas FE, Luppi AI, Bor D, Crosby M, Mediano PAM. Synergistic information supports modality integration and flexible learning in neural networks solving multiple tasks. PLoS Comput Biol 2024; 20:e1012178. PMID: 38829900; PMCID: PMC11175422; DOI: 10.1371/journal.pcbi.1012178.
Abstract
Striking progress has been made in understanding cognition by analyzing how the brain is engaged in different modes of information processing. For instance, so-called synergistic information (information encoded by a set of neurons but not by any subset) plays a key role in areas of the human brain linked with complex cognition. However, two questions remain unanswered: (a) how and why a cognitive system can become highly synergistic; and (b) how informational states map onto artificial neural networks in various learning modes. Here we employ an information-decomposition framework to investigate neural networks performing cognitive tasks. Our results show that synergy increases as networks learn multiple diverse tasks, and that in tasks requiring integration of multiple sources, performance critically relies on synergistic neurons. Overall, our results suggest that synergy is used to combine information from multiple modalities, and more generally for flexible and efficient learning. These findings reveal new ways of investigating how and why learning systems employ specific information-processing strategies, and support the principle that the capacity for general-purpose learning critically relies on the system's information dynamics.
Affiliation(s)
- Alexandra M. Proca
- Department of Computing, Imperial College London, London, United Kingdom
- Fernando E. Rosas
- Department of Informatics, University of Sussex, Brighton, United Kingdom
- Sussex Centre for Consciousness Science and Sussex AI, University of Sussex, Brighton, United Kingdom
- Centre for Psychedelic Research and Centre for Complexity Science, Department of Brain Sciences, Imperial College London, London, United Kingdom
- Centre for Eudaimonia and Human Flourishing, University of Oxford, Oxford, United Kingdom
- Andrea I. Luppi
- Department of Clinical Neurosciences and Division of Anaesthesia, University of Cambridge, Cambridge, United Kingdom
- Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, United Kingdom
- Montreal Neurological Institute, McGill University, Montreal, Canada
- Daniel Bor
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Department of Psychology, Queen Mary University of London, London, United Kingdom
- Matthew Crosby
- Department of Computing, Imperial College London, London, United Kingdom
- Pedro A. M. Mediano
- Department of Computing, Imperial College London, London, United Kingdom
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
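A toy version of the reliance-on-synergistic-neurons finding above, using an assumed hand-wired network rather than the paper's trained ones: in a 2-2-1 threshold network computing XOR (h1 = OR, h2 = AND, y = h1 AND NOT h2), lesioning either hidden unit breaks the task, because the answer is carried only by the units' joint state.

```python
# Hand-wired 2-2-1 threshold network that solves XOR; a minimal probe of how
# task information can be spread jointly across hidden units (illustrative
# only, not the paper's trained multitask networks).
def forward(x1, x2, ablate=None):
    h1 = 0 if ablate == "h1" else int(x1 + x2 >= 1)   # OR unit
    h2 = 0 if ablate == "h2" else int(x1 + x2 >= 2)   # AND unit
    return int(h1 - h2 >= 1)                          # output threshold unit

cases = [(0, 0), (0, 1), (1, 0), (1, 1)]
accs = {ablate: sum(forward(a, b, ablate) == (a ^ b) for a, b in cases) / 4
        for ablate in (None, "h1", "h2")}
print(accs)  # lesioning either hidden unit degrades XOR below perfect accuracy
```

Neither hidden unit is individually sufficient; the intact network succeeds only by reading both together, which is the behavioral signature of a synergistic code.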
4
Varley TF, Bongard J. Evolving higher-order synergies reveals a trade-off between stability and information-integration capacity in complex systems. Chaos 2024; 34:063127. PMID: 38865092; DOI: 10.1063/5.0200425.
Abstract
There has recently been an explosion of interest in how "higher-order" structures emerge in complex systems composed of many interacting elements (often called "synergistic" information). This "emergent" organization has been found in a variety of natural and artificial systems, although at present, the field lacks a unified understanding of what the consequences of higher-order synergies and redundancies are for the systems under study. Typical research treats the presence (or absence) of synergistic information as a dependent variable and reports changes in the level of synergy in response to some change in the system. Here, we attempt to flip the script: rather than treating higher-order information as a dependent variable, we use evolutionary optimization to evolve Boolean networks with significant higher-order redundancies, synergies, or statistical complexity. We then analyze these evolved populations of networks using established tools for characterizing discrete dynamics: the number of attractors, the average transient length, and the Derrida coefficient. We also assess the capacity of the systems to integrate information. We find that high-synergy systems are unstable and chaotic, but with a high capacity to integrate information. In contrast, evolved redundant systems are extremely stable, but have negligible capacity to integrate information. Finally, the complex systems that balance integration and segregation (known as Tononi-Sporns-Edelman complexity) show features of both chaoticity and stability, with a greater capacity to integrate information than the redundant systems while being more stable than the random and synergistic systems. We conclude that there may be a fundamental trade-off between the robustness of a system's dynamics and its capacity to integrate information (which inherently requires flexibility and sensitivity), and that certain kinds of complexity naturally balance this trade-off.
Affiliation(s)
- Thomas F Varley
- Department of Computer Science, University of Vermont, Burlington, Vermont 05405, USA
- Vermont Complex Systems Center, University of Vermont, Burlington, Vermont 05405, USA
- Josh Bongard
- Department of Computer Science, University of Vermont, Burlington, Vermont 05405, USA
- Vermont Complex Systems Center, University of Vermont, Burlington, Vermont 05405, USA
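The dynamical diagnostics named in the abstract above (attractor count, Derrida-style perturbation spread) can be computed by brute force for small Boolean networks. A rough sketch, with random wiring and seed values standing in for the paper's evolved networks:

```python
import random
from itertools import product

random.seed(0)                   # assumed seed, for reproducibility only
N, K = 8, 2                      # 8 nodes, 2 random inputs per node
wiring = [random.sample(range(N), K) for _ in range(N)]
rules = [{bits: random.randint(0, 1) for bits in product((0, 1), repeat=K)}
         for _ in range(N)]

def step(state):
    """Synchronous update: every node reads its inputs through its lookup table."""
    return tuple(rules[i][tuple(state[j] for j in wiring[i])] for i in range(N))

# Attractors: follow every initial state until its trajectory revisits itself.
attractors = set()
for s0 in product((0, 1), repeat=N):
    trajectory, s = [], s0
    while s not in trajectory:
        trajectory.append(s)
        s = step(s)
    attractors.add(frozenset(trajectory[trajectory.index(s):]))

# Derrida coefficient: mean Hamming spread of a one-bit flip after one step
# (> 1 suggests chaotic dynamics, < 1 ordered, around 1 critical).
spread = 0
for s in product((0, 1), repeat=N):
    for i in range(N):
        t = list(s); t[i] ^= 1
        spread += sum(a != b for a, b in zip(step(s), step(tuple(t))))
derrida = spread / (2 ** N * N)
print(len(attractors), round(derrida, 3))
```

Exhaustive enumeration over the 2^N states is feasible only for small N; the evolutionary search in the paper optimizes informational targets over exactly this kind of dynamics.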
5
McMillen P, Levin M. Collective intelligence: A unifying concept for integrating biology across scales and substrates. Commun Biol 2024; 7:378. PMID: 38548821; PMCID: PMC10978875; DOI: 10.1038/s42003-024-06037-4.
Abstract
A defining feature of biology is the use of a multiscale architecture, ranging from molecular networks to cells, tissues, organs, whole bodies, and swarms. Crucially, however, biology is not only nested structurally, but also functionally: each level is able to solve problems in distinct problem spaces, such as physiological, morphological, and behavioral state space. Percolating adaptive functionality from one level of competent subunits to a higher functional level of organization requires collective dynamics: multiple components must work together to achieve specific outcomes. Here we survey a number of biological examples at different scales which highlight the ability of cellular material to make decisions that implement cooperation toward specific homeodynamic endpoints, and implement collective intelligence by solving problems at the cell, tissue, and whole-organism levels. We explore the hypothesis that collective intelligence is not only the province of groups of animals, and that an important symmetry exists between the behavioral science of swarms and the competencies of cells and other biological systems at different scales. We then briefly outline the implications of this approach, and the possible impact of tools from the field of diverse intelligence for regenerative medicine and synthetic bioengineering.
Affiliation(s)
- Patrick McMillen
- Department of Biology, Tufts University, Medford, MA, 02155, USA
- Allen Discovery Center at Tufts University, Medford, MA, 02155, USA
- Michael Levin
- Department of Biology, Tufts University, Medford, MA, 02155, USA
- Allen Discovery Center at Tufts University, Medford, MA, 02155, USA
- Wyss Institute for Biologically Inspired Engineering, Harvard University, Boston, MA, 02115, USA
6
Murphy KA, Bassett DS. Information decomposition in complex systems via machine learning. Proc Natl Acad Sci U S A 2024; 121:e2312988121. PMID: 38498714; PMCID: PMC10990158; DOI: 10.1073/pnas.2312988121.
Abstract
One of the fundamental steps toward understanding a complex system is identifying variation at the scale of the system's components that is most relevant to behavior on a macroscopic scale. Mutual information provides a natural means of linking variation across scales of a system because it is independent of the functional relationship between observables. However, characterizing the manner in which information is distributed across a set of observables is computationally challenging and generally infeasible beyond a handful of measurements. Here, we propose a practical and general methodology that uses machine learning to decompose the information contained in a set of measurements by jointly optimizing a lossy compression of each measurement. Guided by the distributed information bottleneck as a learning objective, the information decomposition identifies the variation in the measurements of the system state most relevant to specified macroscale behavior. We focus our analysis on two paradigmatic complex systems: a Boolean circuit and an amorphous material undergoing plastic deformation. In both examples, the large entropy of the system state is decomposed, bit by bit, in terms of what is most related to macroscale behavior. The identification of meaningful variation in data, with the full generality brought by information theory, is made practical for studying the connection between micro- and macroscale structure in complex systems.
Affiliation(s)
- Kieran A. Murphy
- Department of Bioengineering, School of Engineering & Applied Science, University of Pennsylvania, Philadelphia, PA 19104
- Dani S. Bassett
- Department of Bioengineering, School of Engineering & Applied Science, University of Pennsylvania, Philadelphia, PA 19104
- Department of Electrical & Systems Engineering, School of Engineering & Applied Science, University of Pennsylvania, Philadelphia, PA 19104
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104
- Department of Physics & Astronomy, College of Arts & Sciences, University of Pennsylvania, Philadelphia, PA 19104
- The Santa Fe Institute, Santa Fe, NM 87501
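The question the abstract above asks — which measurements carry the bits relevant to macroscale behavior — can be answered exhaustively for a tiny system; the paper's contribution is making it tractable for large ones with a machine-learned distributed information bottleneck. A brute-force sketch on an assumed toy circuit Y = (X1 AND X2) OR X3 with uniform inputs:

```python
from collections import Counter
from itertools import product
from math import log2

def mi(xs, ys):
    """I(X;Y) in bits from two equal-length lists of equally likely samples."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum(c / n * log2((c / n) / (px[x] / n * py[y] / n))
               for (x, y), c in pxy.items())

# Toy Boolean circuit: which input variable carries the most information
# about the macroscale output?
states = list(product((0, 1), repeat=3))
ys = [(x1 & x2) | x3 for x1, x2, x3 in states]
per_input = [mi([s[i] for s in states], ys) for i in range(3)]
total = mi(states, ys)   # equals H(Y), since the circuit is deterministic
print([round(v, 3) for v in per_input], round(total, 3))
```

X3 alone accounts for most of the output information (it can force Y to 1 on its own), while X1 and X2 contribute little individually; the shortfall between the per-input values and the joint value is information carried only by combinations of inputs.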
7
Varley TF. Generalized decomposition of multivariate information. PLoS One 2024; 19:e0297128. PMID: 38315691; PMCID: PMC10843128; DOI: 10.1371/journal.pone.0297128.
Abstract
Since its introduction, the partial information decomposition (PID) has emerged as a powerful, information-theoretic technique useful for studying the structure of (potentially higher-order) interactions in complex systems. Despite its utility, the applicability of the PID is restricted by the need to assign elements as either "sources" or "targets", as well as the specific structure of the mutual information itself. Here, I introduce a generalized information decomposition that relaxes the source/target distinction while still satisfying the basic intuitions about information. This approach is based on the decomposition of the Kullback-Leibler divergence, and consequently allows for the analysis of any information gained when updating from an arbitrary prior to an arbitrary posterior. As a result, any information-theoretic measure that can be written as a linear combination of Kullback-Leibler divergences admits a decomposition in the style of Williams and Beer, including the total correlation, the negentropy, and the mutual information as special cases. This paper explores how the generalized information decomposition can reveal novel insights into existing measures, as well as the nature of higher-order synergies. I show that synergistic information is intimately related to the well-known Tononi-Sporns-Edelman (TSE) complexity, and that synergistic information requires an integration/segregation balance similar to that of high TSE complexity. Finally, I end with a discussion of how this approach fits into other attempts to generalize the PID and the possibilities for empirical applications.
Affiliation(s)
- Thomas F. Varley
- Department of Computer Science, University of Vermont, Burlington, VT, United States of America
- Vermont Complex Systems Center, University of Vermont, Burlington, VT, United States of America
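One concrete instance of the claim above that measures expressible as Kullback-Leibler divergences fall under the generalized framework: the total correlation is itself a single KL divergence (joint versus product of marginals), and for two variables it reduces to the mutual information. A quick numerical check on an assumed correlated binary pair:

```python
from math import log2

def kl(p, q):
    """D_KL(p || q) in bits over a shared finite alphabet."""
    return sum(v * log2(v / q[k]) for k, v in p.items() if v)

# Correlated binary pair (assumed toy distribution).
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
p1 = {a: sum(v for (x, _), v in p.items() if x == a) for a in (0, 1)}
p2 = {b: sum(v for (_, y), v in p.items() if y == b) for b in (0, 1)}
prod = {(a, b): p1[a] * p2[b] for a in (0, 1) for b in (0, 1)}

tc = kl(p, prod)   # total correlation, written directly as a KL divergence
entropy = lambda d: -sum(v * log2(v) for v in d.values() if v)
mi_check = entropy(p1) + entropy(p2) - entropy(p)  # I(X1;X2) for two variables
print(round(tc, 3), round(mi_check, 3))
```

Because the total correlation is a KL divergence from a posterior (the joint) to a prior (the independent product), it inherits a Williams-and-Beer-style lattice decomposition under the generalized scheme.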
8
Kemp JT, Kline AG, Bettencourt LMA. Information synergy maximizes the growth rate of heterogeneous groups. PNAS Nexus 2024; 3:pgae072. PMID: 38420213; PMCID: PMC10901557; DOI: 10.1093/pnasnexus/pgae072.
Abstract
Collective action and group formation are fundamental behaviors among both organisms cooperating to maximize their fitness and people forming socioeconomic organizations. Researchers have extensively explored social interaction structures via game theory and homophilic linkages, such as kin selection and scalar stress, to understand emergent cooperation in complex systems. However, we still lack a general theory capable of predicting how agents benefit from heterogeneous preferences, joint information, or skill complementarities in statistical environments. Here, we derive general statistical dynamics for the origin of cooperation based on the management of resources and pooled information. Specifically, we show how groups that optimally combine complementary agent knowledge about resources in statistical environments maximize their growth rate. We show that these advantages are quantified by the information synergy embedded in the conditional probability of environmental states given agents' signals, such that groups with a greater diversity of signals maximize their collective information. It follows that, when constraints are placed on group formation, agents must intelligently select with whom they cooperate to maximize the synergy available to their own signal. Our results show how the general properties of information underlie the optimal collective formation and dynamics of groups of heterogeneous agents across social and biological phenomena.
Affiliation(s)
- Jordan T Kemp
- Department of Physics, University of Chicago, 5720 S Ellis Ave #201, Chicago, IL 60637, USA
- Adam G Kline
- Department of Physics, University of Chicago, 5720 S Ellis Ave #201, Chicago, IL 60637, USA
- Luís M A Bettencourt
- Department of Ecology & Evolution, University of Chicago, 1101 E 57th St, Chicago, IL 60637, USA
- Mansueto Institute for Urban Innovation, University of Chicago, 1155 E 60th Street, Chicago, IL 60637, USA
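The growth-rate reading above builds on Kelly's classic result that side information Y about a statistical environment E raises the achievable log growth rate by exactly I(E; Y). A hedged sketch with an assumed binary environment and two agents holding independently noisy signals: pooling the signals strictly beats either one alone, and the pooled mutual information is the group's growth advantage.

```python
from itertools import product
from math import log2

def mi(joint):
    """I(A;B) in bits from a dict {(a, b): probability}."""
    pa, pb = {}, {}
    for (a, b), v in joint.items():
        pa[a] = pa.get(a, 0.0) + v
        pb[b] = pb.get(b, 0.0) + v
    return sum(v * log2(v / (pa[a] * pb[b])) for (a, b), v in joint.items() if v)

# Environment E is a fair coin; the agents see independently corrupted copies
# of E with flip probabilities 0.1 and 0.2 (assumed toy values).
f1, f2 = 0.1, 0.2
pe_s1, pe_s2, pe_pair = {}, {}, {}
for e, s1, s2 in product((0, 1), repeat=3):
    v = 0.5 * (1 - f1 if s1 == e else f1) * (1 - f2 if s2 == e else f2)
    pe_s1[(e, s1)] = pe_s1.get((e, s1), 0.0) + v
    pe_s2[(e, s2)] = pe_s2.get((e, s2), 0.0) + v
    pe_pair[(e, (s1, s2))] = v

# Each mutual information is the extra log growth rate that betting with that
# signal buys (Kelly); the pooled signal strictly dominates either alone.
i1, i2, i12 = mi(pe_s1), mi(pe_s2), mi(pe_pair)
print(round(i1, 3), round(i2, 3), round(i12, 3))
```

Here the two signals are partly redundant (both are noisy copies of the same E), so the pooled value falls short of the sum i1 + i2; the paper's point is that groups gain most when signal diversity pushes the joint information toward, or beyond, that sum.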
9
Yuan B, Zhang J, Lyu A, Wu J, Wang Z, Yang M, Liu K, Mou M, Cui P. Emergence and Causality in Complex Systems: A Survey of Causal Emergence and Related Quantitative Studies. Entropy (Basel) 2024; 26:108. PMID: 38392363; PMCID: PMC10887681; DOI: 10.3390/e26020108.
Abstract
Emergence and causality are two fundamental, interconnected concepts for understanding complex systems. On the one hand, emergence refers to the phenomenon whereby macroscopic properties cannot be attributed solely to the properties of the individual components. On the other hand, causality can exhibit emergence, meaning that new causal laws may arise as we increase the level of abstraction. Causal emergence (CE) theory aims to bridge these two concepts and even employs measures of causality to quantify emergence. This paper provides a comprehensive review of recent advancements in quantitative theories and applications of CE. It focuses on two primary challenges: quantifying CE and identifying it from data. The latter task requires the integration of machine learning and neural network techniques, establishing a significant link between causal emergence and machine learning. We highlight two problem categories: CE with machine learning and CE for machine learning, both of which emphasize the crucial role of effective information (EI) as a measure of causal emergence. The final section of this review explores potential applications and provides insights into future perspectives.
Affiliation(s)
- Bing Yuan
- Swarma Research, Beijing 100085, China
- Jiang Zhang
- Swarma Research, Beijing 100085, China
- School of Systems Sciences, Beijing Normal University, Beijing 100875, China
- Aobo Lyu
- Department of Electrical and Systems Engineering, Washington University, St. Louis, MO 63130, USA
- Jiayun Wu
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
- Zhipeng Wang
- School of Systems Sciences, Beijing Normal University, Beijing 100875, China
- Mingzhe Yang
- School of Systems Sciences, Beijing Normal University, Beijing 100875, China
- Kaiwei Liu
- School of Systems Sciences, Beijing Normal University, Beijing 100875, China
- Muyun Mou
- School of Systems Sciences, Beijing Normal University, Beijing 100875, China
- Peng Cui
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
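The survey's central quantity, effective information (EI), is easy to compute exactly for a small transition matrix: intervene on the current state with a uniform distribution and measure the mutual information with the next state. In an assumed Hoel-style toy chain, a noisy 4-state micro level is beaten by its deterministic 2-state coarse-graining, which is what the survey calls causal emergence.

```python
from math import log2

def effective_information(tpm):
    """EI in bits: I(X_t ; X_{t+1}) when X_t is set uniformly at random."""
    n = len(tpm)
    avg = [sum(row[j] for row in tpm) / n for j in range(n)]  # effect distribution
    return sum(p * log2(p / avg[j])
               for row in tpm for j, p in enumerate(row) if p) / n

# Toy example: three micro states mix uniformly among themselves, a fourth is
# fixed; grouping {s1,s2,s3} into one macro state yields a deterministic map.
micro = [[1/3, 1/3, 1/3, 0],
         [1/3, 1/3, 1/3, 0],
         [1/3, 1/3, 1/3, 0],
         [0, 0, 0, 1]]
macro = [[1, 0],   # {s1,s2,s3} -> itself
         [0, 1]]   # {s4} -> itself
ei_micro, ei_macro = effective_information(micro), effective_information(macro)
ce = ei_macro - ei_micro   # positive value = causal emergence at the macro scale
print(round(ei_micro, 3), round(ei_macro, 3), round(ce, 3))
```

Coarse-graining discards microscale noise while preserving the causal map, so the macro description is more informative about effects than the micro one despite having fewer states.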
10
Idesis S, Geli S, Faskowitz J, Vohryzek J, Sanz Perl Y, Pieper F, Galindo-Leon E, Engel AK, Deco G. Functional hierarchies in brain dynamics characterized by signal reversibility in ferret cortex. PLoS Comput Biol 2024; 20:e1011818. PMID: 38241383; PMCID: PMC10836715; DOI: 10.1371/journal.pcbi.1011818.
Abstract
Brain signal irreversibility has been shown to be a promising approach to study neural dynamics. Nevertheless, the relation with cortical hierarchy and the influence of different electrophysiological features is not completely understood. In this study, we recorded local field potentials (LFPs) during spontaneous behavior, including awake and sleep periods, using custom micro-electrocorticographic (μECoG) arrays implanted in ferrets. In contrast to humans, ferrets spend less time in each state across the sleep-wake cycle. We deployed a diverse set of metrics in order to measure the levels of complexity of the different behavioral states. In particular, brain irreversibility, which is a signature of non-equilibrium dynamics captured by the arrow of time of the signal, revealed the hierarchical organization of the ferret's cortex. We found different signatures of irreversibility and functional hierarchy of large-scale dynamics in three different brain states (active awake, quiet awake, and deep sleep), showing a lower level of irreversibility in the deep sleep stage compared to the other two. Irreversibility also allowed us to disentangle the influence of different cortical areas and frequency bands in this process, showing a predominance of the parietal cortex and the theta band. Furthermore, when inspecting the embedded dynamics through a Hidden Markov Model, the deep sleep stage was revealed to have a lower switching rate and lower entropy production. These results suggest functional hierarchies in organization that can be revealed through thermodynamic features and information theory metrics.
Affiliation(s)
- Sebastian Idesis
- Center for Brain and Cognition (CBC), Department of Information Technologies and Communications (DTIC), Pompeu Fabra University, Edifici Mercè Rodoreda, Barcelona, Catalonia, Spain
- Sebastián Geli
- Center for Brain and Cognition (CBC), Department of Information Technologies and Communications (DTIC), Pompeu Fabra University, Edifici Mercè Rodoreda, Barcelona, Catalonia, Spain
- Joshua Faskowitz
- Department of Psychological and Brain Sciences, Indiana University Bloomington, Bloomington, Indiana, United States of America
- Jakub Vohryzek
- Center for Brain and Cognition (CBC), Department of Information Technologies and Communications (DTIC), Pompeu Fabra University, Edifici Mercè Rodoreda, Barcelona, Catalonia, Spain
- Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, United Kingdom
- Yonatan Sanz Perl
- Center for Brain and Cognition (CBC), Department of Information Technologies and Communications (DTIC), Pompeu Fabra University, Edifici Mercè Rodoreda, Barcelona, Catalonia, Spain
- National Scientific and Technical Research Council, Buenos Aires, Argentina
- Institut du Cerveau et de la Moelle épinière, ICM, Paris, France
- Florian Pieper
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Edgar Galindo-Leon
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Andreas K. Engel
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Gustavo Deco
- Center for Brain and Cognition (CBC), Department of Information Technologies and Communications (DTIC), Pompeu Fabra University, Edifici Mercè Rodoreda, Barcelona, Catalonia, Spain
- Institució Catalana de Recerca I Estudis Avançats (ICREA), Barcelona, Catalonia, Spain
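Irreversibility as used above is the statistical arrow of time: a stationary process is irreversible exactly when forward and time-reversed trajectories have different statistics. For a Markov chain this is captured by the entropy production rate, which vanishes under detailed balance. A minimal sketch with assumed transition matrices (not the paper's LFP estimator):

```python
from math import log2

def entropy_production(T, pi):
    """Per-step entropy production (bits) of a stationary Markov chain:
    sum_ij pi_i T_ij log2(pi_i T_ij / (pi_j T_ji)); zero iff detailed balance."""
    n = len(T)
    return sum(pi[i] * T[i][j] * log2(pi[i] * T[i][j] / (pi[j] * T[j][i]))
               for i in range(n) for j in range(n) if T[i][j])

pi = [1/3, 1/3, 1/3]           # uniform stationary state (both chains are doubly stochastic)
driven = [[0.0, 0.8, 0.2],     # biased clockwise rotation: time-asymmetric
          [0.2, 0.0, 0.8],
          [0.8, 0.2, 0.0]]
balanced = [[0.0, 0.5, 0.5],   # symmetric hopping: fully time-reversible
            [0.5, 0.0, 0.5],
            [0.5, 0.5, 0.0]]
ep_driven = entropy_production(driven, pi)
ep_balanced = entropy_production(balanced, pi)
print(round(ep_driven, 3), ep_balanced)
```

The driven chain produces entropy every step because probability circulates in a preferred direction, mirroring the paper's finding that deep sleep (more reversible dynamics) shows lower entropy production than wakefulness.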
11
Varley TF, Pope M, Puxeddu MG, Faskowitz J, Sporns O. Partial entropy decomposition reveals higher-order information structures in human brain activity. Proc Natl Acad Sci U S A 2023; 120:e2300888120. PMID: 37467265; PMCID: PMC10372615; DOI: 10.1073/pnas.2300888120.
Abstract
The standard approach to modeling the human brain as a complex system is with a network, where the basic unit of interaction is a pairwise link between two brain regions. While powerful, this approach is limited by the inability to assess higher-order interactions involving three or more elements directly. In this work, we explore a method for capturing higher-order dependencies in multivariate data: the partial entropy decomposition (PED). Our approach decomposes the joint entropy of the whole system into a set of nonnegative atoms that describe the redundant, unique, and synergistic interactions that compose the system's structure. PED gives insight into the mathematics of functional connectivity and its limitations. When applied to resting-state fMRI data, we find robust evidence of higher-order synergies that are largely invisible to standard functional connectivity analyses. Our approach can also be localized in time, allowing a frame-by-frame analysis of how the distributions of redundancies and synergies change over the course of a recording. We find that different ensembles of regions can transiently change from being redundancy-dominated to synergy-dominated, and that this temporal pattern is itself structured in time. These results provide strong evidence that there exists a large space of unexplored structures in human brain data that have been largely missed by a focus on bivariate network connectivity models. This synergistic structure is dynamic in time and likely will illuminate interesting links between brain and behavior. Beyond brain-specific application, the PED provides a very general approach for understanding higher-order structures in a variety of complex systems.
Affiliation(s)
- Thomas F. Varley
- School of Informatics, Computing and Engineering, Indiana University, Bloomington, IN47405
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN47405
| | - Maria Pope
- School of Informatics, Computing and Engineering, Indiana University, Bloomington, IN47405
- Program in Neuroscience, Indiana University, Bloomington, IN47405
| | - Maria Grazia Puxeddu
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN47405
| | - Joshua Faskowitz
- School of Informatics, Computing and Engineering, Indiana University, Bloomington, IN 47405
- Program in Neuroscience, Indiana University, Bloomington, IN 47405
| | - Olaf Sporns
- School of Informatics, Computing and Engineering, Indiana University, Bloomington, IN 47405
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405
- Program in Neuroscience, Indiana University, Bloomington, IN 47405
| |
|
12
|
Yurchenko SB. Is information the other face of causation in biological systems? Biosystems 2023; 229:104925. [PMID: 37182834 DOI: 10.1016/j.biosystems.2023.104925] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2023] [Revised: 05/08/2023] [Accepted: 05/08/2023] [Indexed: 05/16/2023]
Abstract
Is information the other face of causation? This question cannot be clarified without discussing how both are related to physical laws, logic, computation, networks, bio-signaling, and the mind-body problem. The relation between information and causation is also intrinsically linked to many other concepts in complex systems theory such as emergence, self-organization, synergy, criticality, and hierarchy, which in turn involve various notions such as observer-dependence, dimensionality reduction, and especially downward causation. A canonical example proposed for downward causation is the collective behavior of the whole system at a macroscale affecting the behavior of each of its members at a microscale. In neuroscience, downward causation is suggested as a strong candidate to account for mental causation (free will). However, this would be possible only on the condition that information has causal power. After introducing the Causal Equivalence Principle, which extends the relativity principle to coarse-grained and fine-grained linear causal chains, and a set-theoretical definition of multiscale nested hierarchy composed of modular ⊂-chains, it is shown that downward causation can be spurious. It emerges only in the eyes of an observer, owing to information that could not be obtained by "looking" exclusively at the behavior of a system at a microscale. On the other hand, since biological systems are hierarchically organized, this information gain is indicative of how information can be a function of scale in these systems and a prerequisite for the scale-dependent emergence of cognition and consciousness in neural networks.
Affiliation(s)
- Sergey B Yurchenko
- Brain and Consciousness Independent Research Center, Andijan, Uzbekistan.
| |
|
13
|
van Enk SJ. Pooling probability distributions and partial information decomposition. Phys Rev E 2023; 107:054133. [PMID: 37329048 DOI: 10.1103/physreve.107.054133] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Accepted: 05/09/2023] [Indexed: 06/18/2023]
Abstract
Notwithstanding various attempts to construct a partial information decomposition (PID) for multiple variables by defining synergistic, redundant, and unique information, there is no consensus on how one ought to precisely define any of these quantities. One aim here is to illustrate how that ambiguity (or, more positively, freedom of choice) may arise. Using the basic idea that information equals the average reduction in uncertainty when going from an initial to a final probability distribution, synergistic information will likewise be defined as a difference between two entropies. One term is uncontroversial and characterizes "the whole": the information that source variables carry jointly about a target variable T. The other term is meant to characterize the information carried by the "sum of its parts." Here we interpret that concept as requiring a suitable probability distribution aggregated ("pooled") from multiple marginal distributions (the parts). Ambiguity arises in defining the optimum way to pool two (or more) probability distributions. Independent of the exact definition of optimum pooling, the concept of pooling leads to a lattice that differs from the often-used redundancy-based lattice. One can associate not just a number (an average entropy) with each node of the lattice, but also (pooled) probability distributions. As an example, one simple and reasonable approach to pooling is presented, which naturally gives rise to the overlap between different probability distributions as a crucial quantity characterizing both synergistic and unique information.
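A minimal sketch of the pooling idea, using arithmetic (linear) pooling as one of the many admissible rules the abstract alludes to; the function names and the equal weighting are our own illustrative choices, not the paper's definition of optimum pooling:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability vector, ignoring zeros."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def pool(p, q, w=0.5):
    """Arithmetic ('linear') pooling of two distributions over the same
    target variable. This is just one admissible pooling rule; the
    ambiguity in choosing it is exactly the freedom of choice the
    abstract describes."""
    return w * np.asarray(p, dtype=float) + (1 - w) * np.asarray(q, dtype=float)

p = np.array([0.9, 0.1])   # what source 1 alone suggests about T
q = np.array([0.1, 0.9])   # what source 2 alone suggests about T
pooled = pool(p, q)        # [0.5, 0.5]: the two views barely overlap,
                           # so the pooled "sum of parts" is maximally uncertain
print(entropy(pooled) - 0.5 * (entropy(p) + entropy(q)))  # ~0.53 bits
```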
Affiliation(s)
- S J van Enk
- Department of Physics, University of Oregon, Eugene, Oregon 97403, USA
| |
|
14
|
Varley TF, Pope M, Faskowitz J, Sporns O. Multivariate information theory uncovers synergistic subsystems of the human cerebral cortex. Commun Biol 2023; 6:451. [PMID: 37095282 PMCID: PMC10125999 DOI: 10.1038/s42003-023-04843-w] [Citation(s) in RCA: 15] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2022] [Accepted: 04/14/2023] [Indexed: 04/26/2023] Open
Abstract
One of the most well-established tools for modeling the brain is the functional connectivity network, which is constructed from pairs of interacting brain regions. While powerful, the network model is limited by the restriction that only pairwise dependencies are considered and potentially higher-order structures are missed. Here, we explore how multivariate information theory reveals higher-order dependencies in the human brain. We begin with a mathematical analysis of the O-information, showing analytically and numerically how it is related to previously established information theoretic measures of complexity. We then apply the O-information to brain data, showing that synergistic subsystems are widespread in the human brain. Highly synergistic subsystems typically sit between canonical functional networks, and may serve an integrative role. We then use simulated annealing to find maximally synergistic subsystems, finding that such systems typically comprise ≈10 brain regions, recruited from multiple canonical brain systems. Though ubiquitous, highly synergistic subsystems are invisible when considering pairwise functional connectivity, suggesting that higher-order dependencies form a kind of shadow structure that has been unrecognized by established network-based analyses. We assert that higher-order interactions in the brain represent an under-explored space that, accessible with tools of multivariate information theory, may offer novel scientific insights.
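The O-information has a closed form, Ω(X) = (n - 2)H(X) + Σ_i [H(X_i) - H(X_{-i})], which can be evaluated directly for a small discrete system. This sketch (our own code, not the authors') checks the sign convention on the canonical XOR example:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability array, ignoring zeros."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def o_information(joint):
    """O-information of a discrete joint distribution given as an
    n-dimensional array P(X1, ..., Xn). Positive values indicate a
    redundancy-dominated system, negative values a synergy-dominated one."""
    n = joint.ndim
    omega = (n - 2) * entropy(joint)
    for i in range(n):
        others = tuple(j for j in range(n) if j != i)
        marginal_i = joint.sum(axis=others)  # P(Xi)
        rest = joint.sum(axis=i)             # P(X_{-i})
        omega += entropy(marginal_i) - entropy(rest)
    return omega

# Three-bit XOR: X3 = X1 ^ X2 with X1, X2 fair coins -- a canonical
# synergy-dominated system, so the O-information should be negative.
xor = np.zeros((2, 2, 2))
for x1 in (0, 1):
    for x2 in (0, 1):
        xor[x1, x2, x1 ^ x2] = 0.25
print(o_information(xor))  # -1.0
```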
Affiliation(s)
- Thomas F Varley
- School of Informatics, Computing & Engineering, Indiana University, Bloomington, IN, 47405, USA.
- Department of Psychological & Brain Sciences, Indiana University, Bloomington, IN, 47405, USA.
| | - Maria Pope
- School of Informatics, Computing & Engineering, Indiana University, Bloomington, IN, 47405, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, 47405, USA
| | - Joshua Faskowitz
- Department of Psychological & Brain Sciences, Indiana University, Bloomington, IN, 47405, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, 47405, USA
| | - Olaf Sporns
- School of Informatics, Computing & Engineering, Indiana University, Bloomington, IN, 47405, USA
- Department of Psychological & Brain Sciences, Indiana University, Bloomington, IN, 47405, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, 47405, USA
| |
|
15
|
Varley TF. Decomposing past and future: Integrated information decomposition based on shared probability mass exclusions. PLoS One 2023; 18:e0282950. [PMID: 36952508 PMCID: PMC10035902 DOI: 10.1371/journal.pone.0282950] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Accepted: 02/27/2023] [Indexed: 03/25/2023] Open
Abstract
A core feature of complex systems is that the interactions between elements in the present causally constrain their own futures, and the futures of other elements as the system evolves through time. To fully model all of these interactions (between elements, as well as ensembles of elements), it is possible to decompose the total information flowing from past to future into a set of non-overlapping temporal interactions that describe all the different modes by which information can be stored, transferred, or modified. To achieve this, I propose a novel information-theoretic measure of temporal dependency (Iτsx) based on the logic of local probability mass exclusions. This integrated information decomposition can reveal emergent and higher-order interactions within the dynamics of a system, as well as refining existing measures. To demonstrate the utility of this framework, I apply the decomposition to spontaneous spiking activity recorded from dissociated neural cultures of rat cerebral cortex to show how different modes of information processing are distributed over the system. Furthermore, being a localizable analysis, Iτsx can provide insight into the computational structure of single moments. I explore the time-resolved computational structure of neuronal avalanches and find that different types of information atoms have distinct profiles over the course of an avalanche, with the majority of non-trivial information dynamics happening before the first half of the cascade is completed. These analyses allow us to move beyond the historical focus on single measures of dependency such as information transfer or information integration, and explore a panoply of different relationships between elements (and groups of elements) in complex systems.
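The pointwise logic this decomposition builds on can be seen in miniature with the local mutual information, which, unlike its average, can go negative. This is an illustrative sketch of the pointwise idea only, not the paper's Iτsx measure:

```python
import numpy as np

def local_mutual_information(joint, x, y):
    """Pointwise mutual information i(x; y) = log2 p(x,y) / (p(x) p(y)).
    Unlike the average mutual information, this local quantity can be
    negative: observing x can be *misinformative* about y."""
    px = joint.sum(axis=1)[x]
    py = joint.sum(axis=0)[y]
    return float(np.log2(joint[x, y] / (px * py)))

# A noisy channel where x = 0 usually co-occurs with y = 0.
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
print(local_mutual_information(joint, 0, 0))  # positive: informative
print(local_mutual_information(joint, 0, 1))  # negative: misinformative
```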
Affiliation(s)
- Thomas F. Varley
- Department of Psychological and Brain Sciences, Indiana University Bloomington, Bloomington, IN, United States of America
- School of Informatics, Computing, and Engineering, Indiana University Bloomington, Bloomington, IN, United States of America
| |
|
16
|
Graham DJ. Nine insights from internet engineering that help us understand brain network communication. FRONTIERS IN COMPUTER SCIENCE 2023. [DOI: 10.3389/fcomp.2022.976801] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/12/2023] Open
Abstract
Philosophers have long recognized the value of metaphor as a tool that opens new avenues of investigation. By seeing brains as having the goal of representation, the computer metaphor in its various guises has helped systems neuroscience approach a wide array of neuronal behaviors at small and large scales. Here I advocate a complementary metaphor, the internet. Adopting this metaphor shifts our focus from computing to communication, and from seeing neuronal signals as localized representational elements to seeing neuronal signals as traveling messages. In doing so, we can take advantage of a comparison with the internet's robust and efficient routing strategies to understand how the brain might meet the challenges of network communication. I lay out nine engineering strategies that help the internet solve routing challenges similar to those faced by brain networks. The internet metaphor helps us by reframing neuronal activity across the brain as, in part, a manifestation of routing, which may, in different parts of the system, resemble the internet more, less, or not at all. I describe suggestive evidence consistent with the brain's use of internet-like routing strategies and conclude that, even if empirical data do not directly implicate internet-like routing, the metaphor is valuable as a reference point for those investigating the difficult problem of network communication in the brain and in particular the problem of routing.
|
17
|
Varley TF. Flickering Emergences: The Question of Locality in Information-Theoretic Approaches to Emergence. ENTROPY (BASEL, SWITZERLAND) 2022; 25:54. [PMID: 36673195 PMCID: PMC9858457 DOI: 10.3390/e25010054] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/17/2022] [Revised: 12/08/2022] [Accepted: 12/25/2022] [Indexed: 05/25/2023]
Abstract
"Emergence", the phenomenon where a complex system displays properties, behaviours, or dynamics not trivially reducible to its constituent elements, is one of the defining properties of complex systems. Recently, there has been a concerted effort to formally define emergence using the mathematical framework of information theory, which proposes that emergence can be understood in terms of how the states of wholes and parts collectively disclose information about the system's collective future. In this paper, we show how a common, foundational component of information-theoretic approaches to emergence implies an inherent instability to emergent properties, which we call flickering emergence. A system may, on average, display a meaningful emergent property (be it an informative coarse-graining, or higher-order synergy), but for particular configurations, that emergent property falls apart and becomes misinformative. We show existence proofs that flickering emergence occurs in two different frameworks (one based on coarse-graining and another based on multivariate information decomposition) and argue that any approach based on temporal mutual information will display it. Finally, we argue that flickering emergence should not be a disqualifying property of any model of emergence, but that it should be accounted for when attempting to theorize about how emergence relates to practical models of the natural world.
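Flickering can be demonstrated in the simplest temporal setting: the local mutual information between past and future of a sticky two-state Markov chain is positive on average, yet dips negative at individual transitions. This is our own toy example with arbitrary parameters, not one of the paper's two frameworks:

```python
import numpy as np

rng = np.random.default_rng(0)

# A sticky two-state Markov chain: on average the past is informative
# about the future, but individual transitions can be misinformative.
stay = 0.9
chain = [0]
for _ in range(10000):
    chain.append(chain[-1] if rng.random() < stay else 1 - chain[-1])
x = np.array(chain)

# Empirical joint distribution of (past, future) pairs.
joint = np.zeros((2, 2))
for a, b in zip(x[:-1], x[1:]):
    joint[a, b] += 1
joint /= joint.sum()
px = joint.sum(axis=1)
py = joint.sum(axis=0)

# Local temporal mutual information at every time step.
local_mi = np.array([np.log2(joint[a, b] / (px[a] * py[b]))
                     for a, b in zip(x[:-1], x[1:])])
print(local_mi.mean() > 0)   # informative on average...
print((local_mi < 0).any())  # ...but it flickers negative at transitions
```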
Affiliation(s)
- Thomas F. Varley
- Department of Psychological and Brain Sciences, Indiana University Bloomington, Bloomington, IN 47405, USA
- School of Informatics, Computing, & Engineering, Indiana University Bloomington, Bloomington, IN 47405, USA
| |
|
18
|
Zhang J, Liu K. Neural Information Squeezer for Causal Emergence. ENTROPY (BASEL, SWITZERLAND) 2022; 25:26. [PMID: 36673167 PMCID: PMC9858212 DOI: 10.3390/e25010026] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/25/2022] [Revised: 12/19/2022] [Accepted: 12/19/2022] [Indexed: 05/28/2023]
Abstract
Conventional studies of causal emergence have revealed that stronger causality can be obtained at the macro-level than at the micro-level of the same Markovian dynamical system if an appropriate coarse-graining strategy is applied to the micro-states. However, identifying this emergent causality from data remains an unsolved problem, because the appropriate coarse-graining strategy cannot be found easily. This paper proposes a general machine learning framework called the Neural Information Squeezer to automatically extract the effective coarse-graining strategy and the macro-level dynamics, as well as to identify causal emergence directly from time series data. By using an invertible neural network, we can decompose any coarse-graining strategy into two separate procedures: information conversion and information discarding. In this way, we can not only exactly control the width of the information channel but also derive some important properties analytically. We also show how our framework can extract the coarse-graining functions and the dynamics at different levels, as well as identify causal emergence from data in several example systems.
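The quantity this framework targets, effective information (EI), can be sketched directly for a hand-picked coarse-graining. The example below is a standard toy case of causal emergence (a noisy micro-scale whose noise a macro partition removes), our own sketch rather than one of the paper's learned examples:

```python
import numpy as np

def effective_information(tpm):
    """Effective information (bits) of a transition probability matrix:
    the mutual information between a uniformly distributed intervention
    on the current state and the resulting next state, computed as the
    mean KL divergence of each row from the average next-state
    distribution."""
    tpm = np.asarray(tpm, dtype=float)
    avg = tpm.mean(axis=0)  # next-state distribution under uniform input
    kl_rows = [
        sum(p * np.log2(p / avg[j]) for j, p in enumerate(row) if p > 0)
        for row in tpm
    ]
    return float(np.mean(kl_rows))

# Micro scale: states {0, 1, 2} mix uniformly among themselves, state 3
# is absorbing. The within-group noise hurts micro-level causality.
micro = np.array([
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
# Macro scale: coarse-grain {0, 1, 2} -> A and {3} -> B; the dynamics
# become deterministic and the EI rises -- causal emergence.
macro = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
])
print(effective_information(micro))  # ~0.81 bits
print(effective_information(macro))  # 1.0 bit
```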
Affiliation(s)
- Jiang Zhang
- School of Systems Sciences, Beijing Normal University, Beijing 100875, China
- Swarma Research, Beijing 100085, China
| | - Kaiwei Liu
- School of Systems Sciences, Beijing Normal University, Beijing 100875, China
| |
|
19
|
Varley TF, Kaminski P. Untangling Synergistic Effects of Intersecting Social Identities with Partial Information Decomposition. ENTROPY (BASEL, SWITZERLAND) 2022; 24:1387. [PMID: 37420406 PMCID: PMC9611752 DOI: 10.3390/e24101387] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/23/2022] [Revised: 09/17/2022] [Accepted: 09/22/2022] [Indexed: 05/10/2023]
Abstract
The theory of intersectionality proposes that an individual's experience of society has aspects that are irreducible to the sum of one's various identities considered individually, but are "greater than the sum of their parts". In recent years, this framework has become a frequent topic of discussion both in social sciences and among popular movements for social justice. In this work, we show that the effects of intersectional identities can be statistically observed in empirical data using information theory, particularly the partial information decomposition framework. We show that, when considering the predictive relationship between various identity categories such as race and sex, on outcomes such as income, health and wellness, robust statistical synergies appear. These synergies show that there are joint-effects of identities on outcomes that are irreducible to any identity considered individually and only appear when specific categories are considered together (for example, there is a large, synergistic effect of race and sex considered jointly on income irreducible to either race or sex). Furthermore, these synergies are robust over time, remaining largely constant year-to-year. We then show using synthetic data that the most widely used method of assessing intersectionalities in data (linear regression with multiplicative interaction coefficients) fails to disambiguate between truly synergistic, greater-than-the-sum-of-their-parts interactions, and redundant interactions. We explore the significance of these two distinct types of interactions in the context of making inferences about intersectional relationships in data and the importance of being able to reliably differentiate the two. Finally, we conclude that information theory, as a model-free framework sensitive to nonlinearities and synergies in data, is a natural method by which to explore the space of higher-order social dynamics.
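The distinction can be illustrated with interaction information, a simple signed proxy for the synergy/redundancy balance (not the full PID used in the paper): it is negative for a purely synergistic XOR relationship and positive for pure redundancy, exactly the two cases a multiplicative interaction coefficient cannot tell apart:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability array, ignoring zeros."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def interaction_information(joint):
    """Co-information I(X1; X2; Y) of a joint distribution P(X1, X2, Y):
    negative when synergy dominates, positive when redundancy dominates."""
    h = entropy
    p1 = joint.sum(axis=(1, 2))
    p2 = joint.sum(axis=(0, 2))
    py = joint.sum(axis=(0, 1))
    p12 = joint.sum(axis=2)
    p1y = joint.sum(axis=1)
    p2y = joint.sum(axis=0)
    return (h(p1) + h(p2) + h(py)
            - h(p12) - h(p1y) - h(p2y)
            + h(joint))

# Pure synergy: Y = X1 XOR X2 -- neither identity alone predicts the outcome.
xor = np.zeros((2, 2, 2))
for x1 in (0, 1):
    for x2 in (0, 1):
        xor[x1, x2, x1 ^ x2] = 0.25

# Pure redundancy: X1 = X2 = Y -- either variable alone predicts everything.
copy = np.zeros((2, 2, 2))
copy[0, 0, 0] = copy[1, 1, 1] = 0.5

print(interaction_information(xor))   # negative: synergy-dominated
print(interaction_information(copy))  # positive: redundancy-dominated
```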
Affiliation(s)
- Thomas F. Varley
- School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN 47405, USA
- Department of Psychology & Brain Sciences, Indiana University, Bloomington, IN 47405, USA
| | - Patrick Kaminski
- School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN 47405, USA
- Department of Sociology, Indiana University, Bloomington, IN 47405, USA
| |
|
20
|
Artime O, De Domenico M. From the origin of life to pandemics: emergent phenomena in complex systems. PHILOSOPHICAL TRANSACTIONS. SERIES A, MATHEMATICAL, PHYSICAL, AND ENGINEERING SCIENCES 2022; 380:20200410. [PMID: 35599559 PMCID: PMC9125231 DOI: 10.1098/rsta.2020.0410] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Accepted: 02/08/2022] [Indexed: 05/31/2023]
Abstract
When a large number of similar entities interact among each other and with their environment at a low scale, unexpected outcomes at higher spatio-temporal scales might spontaneously arise. This non-trivial phenomenon, known as emergence, characterizes a broad range of distinct complex systems, from physical to biological and social, and is often related to collective behaviour. It is ubiquitous, from non-living entities such as oscillators that under specific conditions synchronize, to living ones, such as birds flocking or fish schooling. Despite the ample phenomenological evidence for systems' emergent properties, central theoretical questions in the study of emergence remain unanswered, such as the lack of a widely accepted, rigorous definition of the phenomenon and the identification of the essential physical conditions that favour emergence. We offer here a general overview of the phenomenon of emergence and sketch current and future challenges on the topic. Our short review also serves as an introduction to the theme issue 'Emergent phenomena in complex physical and socio-technical systems: from cells to societies', where we provide a synthesis of the contents tackled in the issue and outline how they relate to these challenges, spanning from current advances in our understanding of the origin of life to the large-scale propagation of infectious diseases. This article is part of the theme issue 'Emergent phenomena in complex physical and socio-technical systems: from cells to societies'.
Affiliation(s)
- Oriol Artime
- Fondazione Bruno Kessler, Via Sommarive 18, Povo, TN 38123, Italy
| | - Manlio De Domenico
- Department of Physics and Astronomy ‘Galileo Galilei’, University of Padua, Padova, Veneto, Italy
| |
|