1
van Elteren C, Quax R, Sloot PMA. Cascades Towards Noise-Induced Transitions on Networks Revealed Using Information Flows. Entropy 2024;26:1050. PMID: 39766679; PMCID: PMC11675381; DOI: 10.3390/e26121050.
Abstract
Complex networks, from neuronal assemblies to social systems, can exhibit abrupt, system-wide transitions without external forcing. These endogenously generated "noise-induced transitions" emerge from the intricate interplay between network structure and local dynamics, yet their underlying mechanisms remain elusive. Our study unveils two critical roles that nodes play in catalyzing these transitions within dynamical networks governed by the Boltzmann-Gibbs distribution. We introduce the concept of "initiator nodes", which absorb and propagate short-lived fluctuations, temporarily destabilizing their neighbors. This process initiates a domino effect, where the stability of a node inversely correlates with the number of destabilized neighbors required to tip it. As the system approaches a tipping point, we identify "stabilizer nodes" that encode the system's long-term memory, ultimately reversing the domino effect and settling the network into a new stable attractor. Through targeted interventions, we demonstrate how these roles can be manipulated to either promote or inhibit systemic transitions. Our findings provide a novel framework for understanding and potentially controlling endogenously generated metastable behavior in complex networks. This approach opens new avenues for predicting and managing critical transitions in diverse fields, from neuroscience to social dynamics and beyond.
Affiliation(s)
- Casper van Elteren
- Institute of Informatics, University of Amsterdam, 1098 XH Amsterdam, The Netherlands
- Institute for Advanced Study, 1012 GC Amsterdam, The Netherlands
- Rick Quax
- Institute of Informatics, University of Amsterdam, 1098 XH Amsterdam, The Netherlands
- Institute for Advanced Study, 1012 GC Amsterdam, The Netherlands
- Peter M. A. Sloot
- Institute of Informatics, University of Amsterdam, 1098 XH Amsterdam, The Netherlands
- Institute for Advanced Study, 1012 GC Amsterdam, The Netherlands
- Complexity Science Hub Vienna, 1080 Vienna, Austria
2
Beer RD, Barwich AS, Severino GJ. Milking a spherical cow: Toy models in neuroscience. Eur J Neurosci 2024;60:6359-6374. PMID: 39257366; DOI: 10.1111/ejn.16529.
Abstract
There are many different kinds of models, and they play many different roles in the scientific endeavour. Neuroscience, and biology more generally, has understandably tended to emphasise empirical models that are grounded in data and make specific, experimentally testable predictions. Meanwhile, strongly idealised or 'toy' models have played a central role in the theoretical development of other sciences such as physics. In this paper, we examine the nature of toy models and their prospects in neuroscience.
Affiliation(s)
- Randall D Beer
- Cognitive Science Program, Indiana University, Bloomington, Indiana, USA
- Neuroscience Program, Indiana University, Bloomington, Indiana, USA
- Department of Informatics, Indiana University, Bloomington, Indiana, USA
- Ann-Sophie Barwich
- Cognitive Science Program, Indiana University, Bloomington, Indiana, USA
- Neuroscience Program, Indiana University, Bloomington, Indiana, USA
- Department of History and Philosophy of Science and Medicine, Indiana University, Bloomington, Indiana, USA
- Gabriel J Severino
- Cognitive Science Program, Indiana University, Bloomington, Indiana, USA
3
de Llanza Varona M, Martínez M. Synergy Makes Direct Perception Inefficient. Entropy 2024;26:708. PMID: 39202178; PMCID: PMC11353286; DOI: 10.3390/e26080708.
Abstract
A typical claim in anti-representationalist approaches to cognition such as ecological psychology or radical embodied cognitive science is that ecological information is sufficient for guiding behavior. According to this view, affordances are immediately perceptually available to the agent (in the so-called "ambient energy array"), so sensory data does not require much further inner processing. As a consequence, mental representations are explanatorily idle: perception is immediate and direct. Here we offer one way to formalize this direct-perception claim and identify some important limits to it. We argue that the claim should be read as saying that successful behavior just implies picking out affordance-related information from the ambient energy array. By relying on the Partial Information Decomposition framework, and more concretely on its development of the notion of synergy, we show that in multimodal perception, where various energy arrays carry affordance-related information, the "just pick out affordance-related information" approach is very inefficient, as it is bound to miss all synergistic components. Efficient multimodal information combination requires transmitting sensory-specific (and not affordance-specific) information to wherever it is that the various information streams are combined. The upshot is that some amount of computation is necessary for efficient affordance reconstruction.
Affiliation(s)
- Manolo Martínez
- Philosophy Department, Universitat de Barcelona, 08001 Barcelona, Spain
4
Luppi AI, Rosas FE, Mediano PAM, Menon DK, Stamatakis EA. Information decomposition and the informational architecture of the brain. Trends Cogn Sci 2024;28:352-368. PMID: 38199949; DOI: 10.1016/j.tics.2023.11.005.
Abstract
To explain how the brain orchestrates information-processing for cognition, we must understand information itself. Importantly, information is not a monolithic entity. Information decomposition techniques provide a way to split information into its constituent elements: unique, redundant, and synergistic information. We review how disentangling synergistic and redundant interactions is redefining our understanding of integrative brain function and its neural organisation. To explain how the brain navigates the trade-offs between redundancy and synergy, we review converging evidence integrating the structural, molecular, and functional underpinnings of synergy and redundancy; their roles in cognition and computation; and how they might arise over evolution and development. Overall, disentangling synergistic and redundant information provides a guiding principle for understanding the informational architecture of the brain and cognition.
Affiliation(s)
- Andrea I Luppi
- Division of Anaesthesia, University of Cambridge, Cambridge, UK; Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK; Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Fernando E Rosas
- Department of Informatics, University of Sussex, Brighton, UK; Centre for Psychedelic Research, Department of Brain Sciences, Imperial College London, London, UK; Centre for Complexity Science, Imperial College London, London, UK; Centre for Eudaimonia and Human Flourishing, University of Oxford, Oxford, UK
- Pedro A M Mediano
- Department of Computing, Imperial College London, London, UK; Department of Psychology, University of Cambridge, Cambridge, UK
- David K Menon
- Department of Medicine, University of Cambridge, Cambridge, UK; Wolfson Brain Imaging Centre, University of Cambridge, Cambridge, UK
- Emmanuel A Stamatakis
- Division of Anaesthesia, University of Cambridge, Cambridge, UK; Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK.
5
Szorkovszky A, Veenstra F, Glette K. From real-time adaptation to social learning in robot ecosystems. Front Robot AI 2023;10:1232708. PMID: 37860631; PMCID: PMC10584317; DOI: 10.3389/frobt.2023.1232708.
Abstract
While evolutionary robotics can create novel morphologies and controllers that are well-adapted to their environments, learning is still the most efficient way to adapt to changes that occur on shorter time scales. Learning proposals for evolving robots to date have focused on new individuals either learning a controller from scratch, or building on the experience of direct ancestors and/or robots with similar configurations. Here we propose and demonstrate a novel means for social learning of gait patterns, based on sensorimotor synchronization. Using movement patterns of other robots as input can drive nonlinear decentralized controllers such as CPGs into new limit cycles, hence encouraging diversity of movement patterns. Stable autonomous controllers can then be locked in, which we demonstrate using a quasi-Hebbian feedback scheme. We propose that in an ecosystem of robots evolving in a heterogeneous environment, such a scheme may allow for the emergence of generalist task-solvers from a population of specialists.
Affiliation(s)
- Alex Szorkovszky
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Department of Informatics, University of Oslo, Oslo, Norway
- Frank Veenstra
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Department of Informatics, University of Oslo, Oslo, Norway
- Kyrre Glette
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Department of Informatics, University of Oslo, Oslo, Norway
6
Beer RD. On the Proper Treatment of Dynamics in Cognitive Science. Top Cogn Sci 2023. PMID: 37531569; DOI: 10.1111/tops.12686.
Abstract
This essay examines the relevance of dynamical ideas for cognitive science. On its own, the mere mathematical idea of a dynamical system is too weak to serve as a scientific theory of anything, and dynamical approaches within cognitive science are too rich and varied to be subsumed under a single "dynamical hypothesis." Instead, after first attempting to dissect the different notions of "dynamics" and "cognition" at play, a more specific theoretical framework for cognitive science broadly construed is sketched. This framework draws upon not only dynamical ideas, but also such contemporaneous perspectives as situatedness, embodiment, ecological psychology, enaction, neuroethology/neuroscience, artificial life, and biogenic approaches. The paper ends with some methodological suggestions for pursuing this theoretical framework.
Affiliation(s)
- Randall D Beer
- Cognitive Science Program, Informatics Department, Luddy School of Informatics, Computing and Engineering, Indiana University
7
Arsiwalla XD, Solé R, Moulin-Frier C, Herreros I, Sánchez-Fibla M, Verschure P. The Morphospace of Consciousness: Three Kinds of Complexity for Minds and Machines. NeuroSci 2023;4:79-102. PMID: 39483317; PMCID: PMC11523714; DOI: 10.3390/neurosci4020009.
Abstract
In this perspective article, we show that a morphospace, based on information-theoretic measures, can be a useful construct for comparing biological agents with artificial intelligence (AI) systems. The axes of this space label three kinds of complexity: (i) autonomic, (ii) computational and (iii) social complexity. On this space, we map biological agents such as bacteria, bees, C. elegans, primates and humans; as well as AI technologies such as deep neural networks, multi-agent bots, social robots, Siri and Watson. A complexity-based conceptualization provides a useful framework for identifying defining features and classes of conscious and intelligent systems. Starting with cognitive and clinical metrics of consciousness that assess awareness and wakefulness, we ask how AI and synthetically engineered life-forms would measure on homologous metrics. We argue that awareness and wakefulness stem from computational and autonomic complexity. Furthermore, tapping insights from cognitive robotics, we examine the functional role of consciousness in the context of evolutionary games. This points to a third kind of complexity for describing consciousness, namely, social complexity. Based on these metrics, our morphospace suggests the possibility of additional types of consciousness other than biological; namely, synthetic, group-based and simulated. This space provides a common conceptual framework for comparing traits and highlighting design principles of minds and machines.
Affiliation(s)
- Xerxes D. Arsiwalla
- Department of Information and Communication Technologies, Universitat Pompeu Fabra (UPF), 08018 Barcelona, Spain
- Ricard Solé
- Complex Systems Lab, Universitat Pompeu Fabra, 08003 Barcelona, Spain
- Institut de Biologia Evolutiva (CSIC-UPF), 08003 Barcelona, Spain
- Santa Fe Institute, Santa Fe, NM 87501, USA
- Institució Catalana de Recerca i Estudis Avançats (ICREA), 08010 Barcelona, Spain
- Ivan Herreros
- Department of Information and Communication Technologies, Universitat Pompeu Fabra (UPF), 08018 Barcelona, Spain
- Martí Sánchez-Fibla
- Department of Information and Communication Technologies, Universitat Pompeu Fabra (UPF), 08018 Barcelona, Spain
- Paul Verschure
- Donders Institute for Brain, Cognition and Behavior, Radboud University, 6525 GD Nijmegen, The Netherlands
8
Baker B, Lansdell B, Kording KP. Three aspects of representation in neuroscience. Trends Cogn Sci 2022;26:942-958. PMID: 36175303; PMCID: PMC11749295; DOI: 10.1016/j.tics.2022.08.014.
Abstract
Neuroscientists often describe neural activity as a representation of something, or claim to have found evidence for a neural representation, but there is considerable ambiguity about what such claims entail. Here we develop a thorough account of what 'representation' does and should do for neuroscientists in terms of three key aspects of representation. (i) Correlation: a neural representation correlates to its represented content; (ii) causal role: the representation has a characteristic effect on behavior; and (iii) teleology: a goal or purpose served by the behavior and thus the representation. We draw broadly on literature in both neuroscience and philosophy to show how these three aspects are rooted in common approaches to understanding the brain and mind. We first describe different contexts that 'representation' has been closely linked to in neuroscience, then discuss each of the three aspects in detail.
Affiliation(s)
- Ben Baker
- Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, USA.
- Benjamin Lansdell
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Konrad P Kording
- Departments of Neuroscience and Bioengineering, University of Pennsylvania, Philadelphia, PA, USA; CIFAR
9
Rorot W. Counting with Cilia: The Role of Morphological Computation in Basal Cognition Research. Entropy 2022;24:1581. PMID: 36359671; PMCID: PMC9689127; DOI: 10.3390/e24111581.
Abstract
"Morphological computation" is an increasingly important concept in robotics, artificial intelligence, and philosophy of the mind. It is used to understand how the body contributes to cognition and control of behavior. Its understanding in terms of "offloading" computation from the brain to the body has been criticized as misleading, and it has been suggested that the use of the concept conflates three classes of distinct processes. In fact, these criticisms implicitly hang on accepting a semantic definition of what constitutes computation. Here, I argue that an alternative, mechanistic view on computation offers a significantly different understanding of what morphological computation is. These theoretical considerations are then used to analyze the existing research program in developmental biology, which understands morphogenesis, the process of development of shape in biological systems, as a computational process. This important line of research shows that cognition and intelligence can be found across all scales of life, as the proponents of the basal cognition research program propose. Hence, clarifying the connection between morphological computation and morphogenesis allows for strengthening the role of the former concept in this emerging research field.
Affiliation(s)
- Wiktor Rorot
- Human Interactivity and Language Lab, Faculty of Psychology, University of Warsaw, 00-927 Warszawa, Poland
10
Dvoretskii S, Gong Z, Gupta A, Parent J, Alicea B. Braitenberg Vehicles as Developmental Neurosimulation. Artificial Life 2022;28:369-395. PMID: 35881679; DOI: 10.1162/artl_a_00384.
Abstract
Connecting brain and behavior is a longstanding issue in the areas of behavioral science, artificial intelligence, and neurobiology. As is standard among models of artificial and biological neural networks, an analogue of the fully mature brain is presented as a blank slate. However, this does not consider the realities of biological development and developmental learning. Our purpose is to model the development of an artificial organism that exhibits complex behaviors. We introduce three alternate approaches to demonstrate how developmental embodied agents can be implemented. The resulting developmental Braitenberg vehicles (dBVs) will generate behaviors ranging from stimulus responses to group behavior that resembles collective motion. We will situate this work in the domain of artificial brain networks along with broader themes such as embodied cognition, feedback, and emergence. Our perspective is exemplified by three software instantiations that demonstrate how a BV-genetic algorithm hybrid model, a multisensory Hebbian learning model, and multi-agent approaches can be used to approach BV development. We introduce use cases such as optimized spatial cognition (vehicle-genetic algorithm hybrid model), hinges connecting behavioral and neural models (multisensory Hebbian learning model), and cumulative classification (multi-agent approaches). In conclusion, we consider future applications of the developmental neurosimulation approach.
Affiliation(s)
- Bradly Alicea
- Orthogonal Research and Education Laboratory
- OpenWorm Foundation.
11
Fields C, Levin M. Competency in Navigating Arbitrary Spaces as an Invariant for Analyzing Cognition in Diverse Embodiments. Entropy 2022;24:819. PMID: 35741540; PMCID: PMC9222757; DOI: 10.3390/e24060819.
Abstract
One of the most salient features of life is its capacity to handle novelty and namely to thrive and adapt to new circumstances and changes in both the environment and internal components. An understanding of this capacity is central to several fields: the evolution of form and function, the design of effective strategies for biomedicine, and the creation of novel life forms via chimeric and bioengineering technologies. Here, we review instructive examples of living organisms solving diverse problems and propose competent navigation in arbitrary spaces as an invariant for thinking about the scaling of cognition during evolution. We argue that our innate capacity to recognize agency and intelligence in unfamiliar guises lags far behind our ability to detect it in familiar behavioral contexts. The multi-scale competency of life is essential to adaptive function, potentiating evolution and providing strategies for top-down control (not micromanagement) to address complex disease and injury. We propose an observer-focused viewpoint that is agnostic about scale and implementation, illustrating how evolution pivoted similar strategies to explore and exploit metabolic, transcriptional, morphological, and finally 3D motion spaces. By generalizing the concept of behavior, we gain novel perspectives on evolution, strategies for system-level biomedical interventions, and the construction of bioengineered intelligences. This framework is a first step toward relating to intelligence in highly unfamiliar embodiments, which will be essential for progress in artificial intelligence and regenerative medicine and for thriving in a world increasingly populated by synthetic, bio-robotic, and hybrid beings.
Affiliation(s)
- Chris Fields
- Allen Discovery Center at Tufts University, Science and Engineering Complex, 200 College Ave., Medford, MA 02155, USA;
- Michael Levin
- Allen Discovery Center at Tufts University, Science and Engineering Complex, 200 College Ave., Medford, MA 02155, USA;
- Wyss Institute for Biologically Inspired Engineering at Harvard University, Boston, MA 02115, USA
12
Information Fragmentation, Encryption and Information Flow in Complex Biological Networks. Entropy 2022;24:735. PMID: 35626617; PMCID: PMC9141881; DOI: 10.3390/e24050735.
Abstract
Assessing where and how information is stored in biological networks (such as neuronal and genetic networks) is a central task both in neuroscience and in molecular genetics, but most available tools focus on the network’s structure as opposed to its function. Here, we introduce a new information-theoretic tool—information fragmentation analysis—that, given full phenotypic data, allows us to localize information in complex networks, determine how fragmented (across multiple nodes of the network) the information is, and assess the level of encryption of that information. Using information fragmentation matrices we can also create information flow graphs that illustrate how information propagates through these networks. We illustrate the use of this tool by analyzing how artificial brains that evolved in silico solve particular tasks, and show how information fragmentation analysis provides deeper insights into how these brains process information and “think”. The measures of information fragmentation and encryption that result from our methods also quantify complexity of information processing in these networks and how this processing complexity differs between primary exposure to sensory data (early in the lifetime) and later routine processing.
13
Levin M. Technological Approach to Mind Everywhere: An Experimentally-Grounded Framework for Understanding Diverse Bodies and Minds. Front Syst Neurosci 2022;16:768201. PMID: 35401131; PMCID: PMC8988303; DOI: 10.3389/fnsys.2022.768201.
Abstract
Synthetic biology and bioengineering provide the opportunity to create novel embodied cognitive systems (otherwise known as minds) in a very wide variety of chimeric architectures combining evolved and designed material and software. These advances are disrupting familiar concepts in the philosophy of mind, and require new ways of thinking about and comparing truly diverse intelligences, whose composition and origin are not like any of the available natural model species. In this Perspective, I introduce TAME-Technological Approach to Mind Everywhere-a framework for understanding and manipulating cognition in unconventional substrates. TAME formalizes a non-binary (continuous), empirically-based approach to strongly embodied agency. TAME provides a natural way to think about animal sentience as an instance of collective intelligence of cell groups, arising from dynamics that manifest in similar ways in numerous other substrates. When applied to regenerating/developmental systems, TAME suggests a perspective on morphogenesis as an example of basal cognition. The deep symmetry between problem-solving in anatomical, physiological, transcriptional, and 3D (traditional behavioral) spaces drives specific hypotheses by which cognitive capacities can increase during evolution. An important medium exploited by evolution for joining active subunits into greater agents is developmental bioelectricity, implemented by pre-neural use of ion channels and gap junctions to scale up cell-level feedback loops into anatomical homeostasis. This architecture of multi-scale competency of biological systems has important implications for plasticity of bodies and minds, greatly potentiating evolvability. Considering classical and recent data from the perspectives of computational science, evolutionary biology, and basal cognition, reveals a rich research program with many implications for cognitive science, evolutionary biology, regenerative medicine, and artificial intelligence.
Affiliation(s)
- Michael Levin
- Allen Discovery Center at Tufts University, Medford, MA, United States
- Wyss Institute for Biologically Inspired Engineering at Harvard University, Cambridge, MA, United States
14
Westerberg JA, Schall MS, Maier A, Woodman GF, Schall JD. Laminar microcircuitry of visual cortex producing attention-associated electric fields. eLife 2022;11:e72139. PMID: 35089128; PMCID: PMC8846592; DOI: 10.7554/eLife.72139.
Abstract
Cognitive operations are widely studied by measuring electric fields through EEG and ECoG. However, despite their widespread use, the neural circuitry giving rise to these signals remains unknown because the functional architecture of cortical columns producing attention-associated electric fields has not been explored. Here, we detail the laminar cortical circuitry underlying an attention-associated electric field measured over posterior regions of the brain in humans and monkeys. First, we identified visual cortical area V4 as one plausible contributor to this attention-associated electric field through inverse modeling of cranial EEG in macaque monkeys performing a visual attention task. Next, we performed laminar neurophysiological recordings on the prelunate gyrus and identified the electric-field-producing dipoles as synaptic activity in distinct cortical layers of area V4. Specifically, activation in the extragranular layers of cortex resulted in the generation of the attention-associated dipole. Feature selectivity of a given cortical column determined the overall contribution to this electric field. Columns selective for the attended feature contributed more to the electric field than columns selective for a different feature. Last, the laminar profile of synaptic activity generated by V4 was sufficient to produce an attention-associated signal measurable outside of the column. These findings suggest that the top-down recipient cortical layers produce an attention-associated electric field that can be measured extracortically with the relative contribution of each column depending upon the underlying functional architecture.
Affiliation(s)
- Jacob A Westerberg
- Department of Psychology, Vanderbilt University, Nashville, United States
- Michelle S Schall
- Department of Psychology, Vanderbilt University, Nashville, United States
- Alexander Maier
- Department of Psychology, Vanderbilt University, Nashville, United States
- Geoffrey F Woodman
- Department of Psychology, Vanderbilt University, Nashville, United States
15
Manicka S, Levin M. Minimal Developmental Computation: A Causal Network Approach to Understand Morphogenetic Pattern Formation. Entropy 2022;24:107. PMID: 35052133; PMCID: PMC8774453; DOI: 10.3390/e24010107.
Abstract
What information-processing strategies and general principles are sufficient to enable self-organized morphogenesis in embryogenesis and regeneration? We designed and analyzed a minimal model of self-scaling axial patterning consisting of a cellular network that develops activity patterns within implicitly set bounds. The properties of the cells are determined by internal 'genetic' networks with an architecture shared across all cells. We used machine-learning to identify models that enable this virtual mini-embryo to pattern a typical axial gradient while simultaneously sensing the set boundaries within which to develop it from homogeneous conditions-a setting that captures the essence of early embryogenesis. Interestingly, the model revealed several features (such as planar polarity and regenerative re-scaling capacity) for which it was not directly selected, showing how these common biological design principles can emerge as a consequence of simple patterning modes. A novel "causal network" analysis of the best model furthermore revealed that the originally symmetric model dynamically integrates into intercellular causal networks characterized by broken-symmetry, long-range influence and modularity, offering an interpretable macroscale-circuit-based explanation for phenotypic patterning. This work shows how computation could occur in biological development and how machine learning approaches can generate hypotheses and deepen our understanding of how featureless tissues might develop sophisticated patterns-an essential step towards predictive control of morphogenesis in regenerative medicine or synthetic bioengineering contexts. The tools developed here also have the potential to benefit machine learning via new forms of backpropagation and by leveraging the novel distributed self-representation mechanisms to improve robustness and generalization.
Affiliation(s)
- Michael Levin
- Allen Discovery Center, Tufts University, Medford, MA 02155, USA;
16
Mediano PAM, Rosas FE, Farah JC, Shanahan M, Bor D, Barrett AB. Integrated information as a common signature of dynamical and information-processing complexity. Chaos 2022; 32:013115. [PMID: 35105139] [PMCID: PMC7614772] [DOI: 10.1063/5.0063384]
Abstract
The apparent dichotomy between information-processing and dynamical approaches to complexity science forces researchers to choose between two diverging sets of tools and explanations, creating conflict and often hindering scientific progress. Nonetheless, given the shared theoretical goals between both approaches, it is reasonable to conjecture the existence of underlying common signatures that capture interesting behavior in both dynamical and information-processing systems. Here, we argue that a pragmatic use of integrated information theory (IIT), originally conceived in theoretical neuroscience, can provide a potential unifying framework to study complexity in general multivariate systems. By leveraging metrics put forward by the integrated information decomposition framework, our results reveal that integrated information can effectively capture surprisingly heterogeneous signatures of complexity, including metastability and criticality in networks of coupled oscillators as well as distributed computation and emergent stable particles in cellular automata, without relying on idiosyncratic, ad hoc criteria. These results show how an agnostic use of IIT can provide important steps toward bridging the gap between informational and dynamical approaches to complex systems.
Affiliation(s)
- Pedro A. M. Mediano
- Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
- Fernando E. Rosas
- Centre for Psychedelic Research, Department of Brain Science, Imperial College London, London SW7 2DD, United Kingdom
- Data Science Institute, Imperial College London, London SW7 2AZ, United Kingdom
- Centre for Complexity Science, Imperial College London, London SW7 2AZ, United Kingdom
- Juan Carlos Farah
- School of Engineering, École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland
- Murray Shanahan
- Department of Computing, Imperial College London, London SW7 2RH, United Kingdom
- Daniel Bor
- Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
- Adam B. Barrett
- Sackler Center for Consciousness Science, Department of Informatics, University of Sussex, Brighton BN1 9RH, United Kingdom
- The Data Intensive Science Centre, Department of Physics and Astronomy, University of Sussex, Brighton BN1 9QH, United Kingdom
17
Albantakis L. Quantifying the Autonomy of Structurally Diverse Automata: A Comparison of Candidate Measures. Entropy (Basel) 2021; 23:1415. [PMID: 34828113] [PMCID: PMC8624265] [DOI: 10.3390/e23111415]
Abstract
Should the internal structure of a system matter when it comes to autonomy? While there is still no consensus on a rigorous, quantifiable definition of autonomy, multiple candidate measures and related quantities have been proposed across various disciplines, including graph theory, information theory, and complex systems science. Here, I review and compare a range of measures related to autonomy and intelligent behavior. To that end, I analyzed the structural, information-theoretical, causal, and dynamical properties of simple artificial agents evolved to solve a spatial navigation task, with or without a need for associative memory. In contrast to standard artificial neural networks with fixed architectures and node functions, here, independent evolution simulations produced successful agents with diverse neural architectures and functions. This makes it possible to distinguish quantities that characterize task demands and input-output behavior from those that capture intrinsic differences between substrates, which may help to determine more stringent requisites for autonomous behavior and the means to measure it.
Affiliation(s)
- Larissa Albantakis
- Department of Psychiatry, University of Wisconsin-Madison, Madison, WI 53719, USA
18
Abstract
This paper explores current developments in evolutionary and bio-inspired approaches to autonomous robotics, concentrating on research from our group at the University of Sussex. These developments are discussed in the context of advances in the wider fields of adaptive and evolutionary approaches to AI and robotics, focusing on the exploitation of embodied dynamics to create behaviour. Four case studies highlight various aspects of such exploitation. The first exploits the dynamical properties of a physical electronic substrate, demonstrating for the first time how component-level analog electronic circuits can be evolved directly in hardware to act as robot controllers. The second develops novel, effective and highly parsimonious navigation methods inspired by the way insects exploit the embodied dynamics of innate behaviours. Combining biological experiments with robotic modeling, it is shown how rapid route learning can be achieved with the aid of navigation-specific visual information that is provided and exploited by the innate behaviours. The third study focuses on the exploitation of neuromechanical chaos in the generation of robust motor behaviours. It is demonstrated how chaotic dynamics can be exploited to power a goal-driven search for desired motor behaviours in embodied systems using a particular control architecture based around neural oscillators. The dynamics are shown to be chaotic at all levels in the system, from the neural to the embodied mechanical. The final study explores the exploitation of the dynamics of brain-body-environment interactions for efficient, agile flapping-winged flight. It is shown how a multi-objective evolutionary algorithm can be used to evolve dynamical neural controllers for a simulated flapping-wing robot with feathered wings. Results demonstrate that robust, stable, agile flight is achieved in the face of random wind gusts by exploiting complex asymmetric dynamics partly enabled by continually changing wing and tail morphologies.
19
Wiebel-Herboth CB, Krüger M, Wollstadt P. Measuring inter- and intra-individual differences in visual scan patterns in a driving simulator experiment using active information storage. PLoS One 2021; 16:e0248166. [PMID: 33735199] [PMCID: PMC7971706] [DOI: 10.1371/journal.pone.0248166]
Abstract
Scan pattern analysis has been discussed as a promising tool in the context of real-time gaze-based applications. In particular, information-theoretic measures of scan path predictability, such as the gaze transition entropy (GTE), have been proposed for detecting relevant changes in user state or task demand. These measures model scan patterns as first-order Markov chains, assuming that only the location of the previous fixation is predictive of the next fixation in time. However, this assumption may not be sufficient in general, as recent research has shown that scan patterns may also exhibit longer-range temporal correlations. Thus, we here evaluate the active information storage (AIS) as a novel information-theoretic approach to quantifying scan path predictability in a dynamic task. In contrast to the GTE, the AIS provides a means to statistically test and account for temporal correlations in scan path data beyond the last fixation. We compare AIS to GTE in a driving simulator experiment, in which participants drove in a highway scenario, with trials defined by an experimental manipulation that encouraged the driver to start an overtaking maneuver. Two levels of difficulty were realized by varying the time left to complete the task. We found that individual observers indeed showed temporal correlations beyond a single past fixation and that the length of the correlation varied between observers. No effect of task difficulty was observed on scan path predictability for either AIS or GTE, but we found a significant increase in predictability during overtaking. Importantly, for participants for whom the first-order Markov chain assumption did not hold, this was only shown using AIS but not GTE. We conclude that accounting for longer time horizons in scan paths in a personalized fashion is beneficial for interpreting gaze patterns in dynamic tasks.
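To make the first-order assumption behind the GTE concrete: the measure is the conditional entropy of the next fixation location given only the current one. The sketch below is a generic illustration under that definition, not the authors' implementation, and the area-of-interest labels and sequences are hypothetical:

```python
from collections import Counter
from math import log2

def gaze_transition_entropy(fixations):
    """First-order GTE: conditional entropy H(X_{t+1} | X_t) in bits,
    estimated from a sequence of discrete fixation (AOI) labels."""
    pairs = list(zip(fixations, fixations[1:]))
    pair_counts = Counter(pairs)               # joint counts of (from, to)
    from_counts = Counter(f for f, _ in pairs) # marginal counts of "from"
    n = len(pairs)
    gte = 0.0
    for (i, j), c in pair_counts.items():
        p_ij = c / n                       # joint probability p(i, j)
        p_j_given_i = c / from_counts[i]   # transition probability p(j | i)
        gte -= p_ij * log2(p_j_given_i)
    return gte

# A perfectly alternating scan path is fully predictable: GTE = 0 bits.
# A path whose transitions are 50/50 from each AOI yields ~1 bit.
```

The AIS discussed in the abstract generalizes this by conditioning on the past k fixations (with k chosen by statistical testing) rather than fixing k = 1.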
Affiliation(s)
- Matti Krüger
- Honda Research Institute Europe, Offenbach/Main, Germany
20
Bongard J, Levin M. Living Things Are Not (20th Century) Machines: Updating Mechanism Metaphors in Light of the Modern Science of Machine Behavior. Front Ecol Evol 2021. [DOI: 10.3389/fevo.2021.650726]
Abstract
One of the most useful metaphors for driving scientific and engineering progress has been that of the “machine.” Much controversy exists about the applicability of this concept in the life sciences. Advances in molecular biology have revealed numerous design principles that can be harnessed to understand cells from an engineering perspective, and build novel devices to rationally exploit the laws of chemistry, physics, and computation. At the same time, organicists point to the many unique features of life, especially at larger scales of organization, which have resisted decomposition analysis and artificial implementation. Here, we argue that much of this debate has focused on inessential aspects of machines – classical properties which have been surpassed by advances in modern Machine Behavior and no longer apply. This emerging multidisciplinary field, at the interface of artificial life, machine learning, and synthetic bioengineering, is highlighting the inadequacy of existing definitions. Key terms such as machine, robot, program, software, evolved, designed, etc., need to be revised in light of technological and theoretical advances that have moved past the dated philosophical conceptions that have limited our understanding of both evolved and designed systems. Moving beyond contingent aspects of historical and current machines will enable conceptual tools that embrace inevitable advances in synthetic and hybrid bioengineering and computer science, toward a framework that identifies essential distinctions between fundamental concepts of devices and living agents. Progress in both theory and practical applications requires the establishment of a novel conception of “machines as they could be,” based on the profound lessons of biology at all scales. 
We sketch a perspective that acknowledges the remarkable, unique aspects of life to help re-define key terms, and identify deep, essential features of concepts for a future in which sharp boundaries between evolved and designed systems will not exist.
21
Jactat B. Mechanics of the Peripheral Auditory System: Foundations for Embodied Listening Using Dynamic Systems Theory and the Coupling Devices as a Metaphor. F1000Res 2021; 10:193. [PMID: 34249336] [PMCID: PMC8258707] [DOI: 10.12688/f1000research.51125.2]
Abstract
Current approaches to listening are built on standard cognitive science, which considers the brain as the locus of all cognitive activity. This work aims to investigate listening as phenomena occurring within a brain, a body (embodiment), and an environment (situatedness). Drawing on insights from physiology, acoustics, and audiology, this essay presents listening as an interdependent brain-body-environment construct grounded in dynamic systems theory. Coupling, self-organization, and attractors are the central characteristics of dynamic systems. This article reviews the first of these aspects in order to develop a fuller understanding of how embodied auditory perception occurs. It introduces the mind-body problem before reviewing dynamic systems theory and exploring the notion of coupling in human hearing by way of current and original analogies drawn from engineering. It posits that the current use of the Watt governor device as an analogy for coupling is too simplistic to account for the coupling phenomena in the human ear. In light of this review of the physiological characteristics of the peripheral auditory system, coupling in hearing appears more variegated than originally thought and accounts for the diversity of perception among individuals, a cause for individual variance in how the mind emerges, which in turn affects academic performance. Understanding the constraints and affordances of the physical ear with regard to incoming sound supports the embodied listening paradigm.
Affiliation(s)
- Bruno Jactat
- Faculty of Humanities and Social Sciences, University of Tsukuba, Tsukuba, Ibaraki, 305-8577, Japan
22
Candadai M, Izquierdo EJ. Sources of predictive information in dynamical neural networks. Sci Rep 2020; 10:16901. [PMID: 33037274] [PMCID: PMC7547683] [DOI: 10.1038/s41598-020-73380-x]
Abstract
Behavior involves the ongoing interaction between an organism and its environment. One of the prevailing theories of adaptive behavior is that organisms are constantly making predictions about their future environmental stimuli. However, how they acquire that predictive information is still poorly understood. Two complementary mechanisms have been proposed: predictions are generated from an agent's internal model of the world, or predictions are extracted directly from the environmental stimulus. In this work, we demonstrate that predictive information, measured using bivariate mutual information, cannot distinguish between these two kinds of systems. Furthermore, we show that predictive information cannot distinguish between organisms that are adapted to their environments and random dynamical systems exposed to the same environment. To understand the role of predictive information in adaptive behavior, we need to be able to identify where it is generated. To do this, we decompose information transfer across the different components of the organism-environment system and track the flow of information in the system over time. To validate the proposed framework, we applied it to a set of computational models of idealized agent-environment systems. Analysis of the systems revealed three key insights. First, predictive information, when sourced from the environment, can be reflected in any agent irrespective of its ability to perform a task. Second, predictive information, when sourced from the nervous system, requires special dynamics acquired during the process of adapting to the environment. Third, the magnitude of predictive information in a system can be different for the same task if the environmental structure changes.
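The bivariate predictive information discussed in the abstract is plain mutual information between a variable's present and its future. A minimal sketch of that quantity for discrete signals is below; this is a generic textbook estimator, not the authors' code, and the toy signals are hypothetical:

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """I(X;Y) in bits, estimated from paired discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))   # joint counts
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def predictive_information(signal, lag=1):
    """MI between the signal's present and its value `lag` steps ahead."""
    return mutual_information(signal[:-lag], signal[lag:])

# A strictly periodic binary signal carries ~1 bit of predictive
# information, while a constant signal carries none; both values say
# nothing about *where* the information is generated, which is the
# paper's point.
```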
Affiliation(s)
- Madhavun Candadai
- Cognitive Science program, Indiana University, Bloomington, IN, USA
- The Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, USA
- Eduardo J Izquierdo
- Cognitive Science program, Indiana University, Bloomington, IN, USA
- The Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, USA
23
Di Paolo EA. Picturing Organisms and Their Environments: Interaction, Transaction, and Constitution Loops. Front Psychol 2020; 11:1912. [PMID: 32849121] [PMCID: PMC7406660] [DOI: 10.3389/fpsyg.2020.01912]
Abstract
Changing conceptions of the relation between organisms and their environments make up a crucial chapter in the history of psychology. This may be approached by a comparative study of how schematic diagrams portray this relation. Diagrams drive the communication and the teaching of ideas, the sedimentation of epistemic norms and methods of analysis, and in some cases the articulation of novel concepts through pictographic variants. Through a sampling of schematic representations, I offer a concise comparison of how different authors, with different interests and motivations, have portrayed important aspects of the organism–environment relation. I compare example diagrams according to the features they underscore (or omit) and group them into classes that emphasize interaction, transaction, and constitution loops.
Affiliation(s)
- Ezequiel A Di Paolo
- Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- Centre for Computational Neuroscience and Robotics, University of Sussex, Brighton, United Kingdom
- IAS-Research, University of the Basque Country, San Sebastián, Spain
24
Bermudez-Contreras E, Clark BJ, Wilber A. The Neuroscience of Spatial Navigation and the Relationship to Artificial Intelligence. Front Comput Neurosci 2020; 14:63. [PMID: 32848684] [PMCID: PMC7399088] [DOI: 10.3389/fncom.2020.00063]
Abstract
Recent advances in artificial intelligence (AI) and neuroscience are impressive. In AI, this includes the development of computer programs that can beat a grandmaster at Go or outperform human radiologists at cancer detection. Many of these technological developments are directly related to progress in artificial neural networks, initially inspired by our knowledge about how the brain carries out computation. In parallel, neuroscience has also experienced significant advances in understanding the brain. For example, in the field of spatial navigation, work on the mechanisms and brain regions involved in neural computations of cognitive maps, an internal representation of space, recently received the Nobel Prize in Medicine. Much of the recent progress in neuroscience has partly been due to the development of technology used to record from very large populations of neurons in multiple regions of the brain with exquisite temporal and spatial resolution in behaving animals. With the advent of the vast quantities of data that these techniques allow us to collect, there has been increased interest in the intersection between AI and neuroscience; many of these intersections involve using AI as a novel tool to explore and analyze these large data sets. However, given their common initial motivation, to understand the brain, these disciplines could be more strongly linked. Currently, much of this potential synergy is not being realized. We propose that spatial navigation is an excellent area in which these two disciplines can converge to help advance what we know about the brain. In this review, we first summarize progress in the neuroscience of spatial navigation and reinforcement learning. We then turn our attention to how spatial navigation has been modeled using descriptive, mechanistic, and normative approaches, and the use of AI in such models. Next, we discuss how AI can advance neuroscience, how neuroscience can advance AI, and the limitations of these approaches. We finally conclude by highlighting promising lines of research in which spatial navigation can be the point of intersection between neuroscience and AI and how this can contribute to the advancement of the understanding of intelligent behavior.
Affiliation(s)
- Benjamin J. Clark
- Department of Psychology, University of New Mexico, Albuquerque, NM, United States
- Aaron Wilber
- Department of Psychology, Program in Neuroscience, Florida State University, Tallahassee, FL, United States
25
Egbert MD, Jeong V, Postlethwaite CM. Where Computation and Dynamics Meet: Heteroclinic Network-Based Controllers in Evolutionary Robotics. IEEE Trans Neural Netw Learn Syst 2020; 31:1084-1097. [PMID: 31226088] [DOI: 10.1109/tnnls.2019.2917471]
Abstract
In the fields of artificial neural networks and robotics, complicated, often high-dimensional systems can be designed using evolutionary and other algorithms to successfully solve very complex tasks. However, dynamical analysis of the underlying controller can often be nearly impossible, due to the high dimensionality and nonlinearities of the system. In this paper, we propose a more restricted form of controller, such that the underlying dynamical systems are forced to contain a dynamical object called a heteroclinic network. Systems containing heteroclinic networks share some properties with finite-state machines (FSMs) but are not discrete: both space and time are still described with continuous variables. Thus, we suggest that heteroclinic networks can provide a hybrid between continuous and discrete systems. We investigate this novel architecture in a minimal categorical perception task. The similarity of the controller to an FSM allows us to describe some of the system's behaviors as transitions between states. However, other, essential behavior involves subtle ongoing interaction between the controller and the environment that eludes description at this level.
26
Fischer D, Mostaghim S, Albantakis L. How cognitive and environmental constraints influence the reliability of simulated animats in groups. PLoS One 2020; 15:e0228879. [PMID: 32032380] [PMCID: PMC7006938] [DOI: 10.1371/journal.pone.0228879]
Abstract
Evolving in groups can either enhance or reduce an individual’s task performance. Still, we know little about the factors underlying group performance, which may be reduced to three major dimensions: (a) the individual’s ability to perform a task, (b) the dependency on environmental conditions, and (c) the perception of, and the reaction to, other group members. In our research, we investigated how these dimensions interrelate in simulated evolution experiments using adaptive agents equipped with Markov brains (“animats”). We evolved the animats to perform a spatial-navigation task under various evolutionary setups. The last generation of each evolution simulation was tested across modified conditions to evaluate and compare the animats’ reliability when faced with change. Moreover, the complexity of the evolved Markov brains was assessed based on measures of information integration. We found that, under the right conditions, specialized animats could be as reliable as animats already evolved for the modified tasks, and that reliability across varying group sizes correlated with evolved fitness in most tested evolutionary setups. Our results further suggest that balancing the number of individuals in a group may lead to higher reliability but also lower individual performance. In addition, high brain complexity was associated with balanced group sizes and, thus, high reliability under limited sensory capacity. However, additional sensors allowed for even higher reliability across modified environments without a need for complex, integrated Markov brains. Despite complex dependencies between the individual, the group, and the environment, our computational approach provides a way to study reliability in group behavior under controlled conditions. In all, our study revealed that balancing the group size and individual cognitive abilities prevents over-specialization and can help to evolve better reliability under unknown environmental situations.
Affiliation(s)
- Dominik Fischer
- School of Management, Technical University of Munich, Munich, Germany
- Sanaz Mostaghim
- Faculty of Computer Science, Otto von Guericke University of Magdeburg, Magdeburg, Germany
- Larissa Albantakis
- Department of Psychiatry, Wisconsin Institute for Sleep and Consciousness, University of Wisconsin–Madison, Madison, Wisconsin, United States of America
27
Roli A, Ligot A, Birattari M. Complexity Measures: Open Questions and Novel Opportunities in the Automatic Design and Analysis of Robot Swarms. Front Robot AI 2019; 6:130. [PMID: 33501145] [PMCID: PMC7805888] [DOI: 10.3389/frobt.2019.00130]
Abstract
Complexity measures and information theory metrics in general have recently been attracting the interest of multi-agent and robotics communities, owing to their capability of capturing relevant features of robot behaviors, while abstracting from implementation details. We believe that theories and tools from complex systems science and information theory may be fruitfully applied in the near future to support the automatic design of robot swarms and the analysis of their dynamics. In this paper we discuss opportunities and open questions in this scenario.
Affiliation(s)
- Andrea Roli
- Department of Computer Science and Engineering, Campus of Cesena, Alma Mater Studiorum Università di Bologna, Bologna, Italy
- Antoine Ligot
- IRIDIA, Université libre de Bruxelles, Brussels, Belgium
28
Abstract
The dynamical evolution of a system of interacting elements can be predicted in terms of its elementary constituents and their interactions, or in terms of the system’s global state transitions. For this reason, systems with equivalent global dynamics are often taken to be equivalent for all relevant purposes. Nevertheless, such systems may still vary in their causal composition—the way mechanisms within the system specify causes and effects over different subsets of system elements. We demonstrate this point based on a set of small discrete dynamical systems with reversible dynamics that cycle through all their possible states. Our analysis elucidates the role of composition within the formal framework of integrated information theory. We show that the global dynamical and information-theoretic capacities of reversible systems can be maximal even though they may differ, quantitatively and qualitatively, in the information that their various subsets specify about each other (intrinsic information). This can be the case even for a system and its time-reversed equivalent. Due to differences in their causal composition, two systems with equivalent global dynamics may still differ in their capacity for autonomy, agency, and phenomenology.
29
Egbert M, Gagnon JS, Pérez-Mercader J. From chemical soup to computing circuit: transforming a contiguous chemical medium into a logic gate network by modulating its external conditions. J R Soc Interface 2019; 16:20190190. [PMID: 31506047] [DOI: 10.1098/rsif.2019.0190]
Abstract
It has been shown that it is possible to transform a well-stirred chemical medium into a logic gate simply by varying the chemistry's external conditions (feed rates, lighting conditions, etc.). We extend this work, showing that the same method can be generalized to spatially extended systems. We vary the external conditions of a well-known chemical medium (a cubic autocatalytic reaction-diffusion model), so that different regions of the simulated chemistry are operating under particular conditions at particular times. In so doing, we are able to transform the initially uniform chemistry, not just into a single logic gate, but into a functionally integrated network of diverse logic gates that operate as a basic computational circuit known as a full-adder.
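For reference, the target computation the chemical gate network realizes, a standard full-adder, can be written out directly as a Boolean circuit. This is the generic textbook construction, not the paper's reaction-diffusion implementation, and `ripple_add` is an illustrative extra showing how full adders compose:

```python
def full_adder(a, b, cin):
    """Sum and carry-out of three input bits via XOR/AND/OR gates."""
    partial = a ^ b                    # first XOR gate
    s = partial ^ cin                  # second XOR gate: sum bit
    cout = (a & b) | (partial & cin)   # carry-out from AND/OR gates
    return s, cout

def ripple_add(x, y, bits=4):
    """Chain full adders bit by bit to add two small non-negative ints."""
    carry, result = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result | (carry << bits)
```

The network of diverse gates described in the abstract implements the three-gate structure of `full_adder`, with each gate realized by a region of the medium held under different external conditions.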
Affiliation(s)
- Matthew Egbert
- Department of Earth and Planetary Sciences, Harvard University, Cambridge, MA, USA
- University of Auckland, Auckland, New Zealand
- Jean-Sébastien Gagnon
- Department of Earth and Planetary Sciences, Harvard University, Cambridge, MA, USA
- Physics Department, Norwich University, Northfield, VT, USA
- Juan Pérez-Mercader
- Department of Earth and Planetary Sciences, Harvard University, Cambridge, MA, USA
- Santa Fe Institute, Santa Fe, NM, USA
30
Koglin T, Sándor B, Gros C. When the goal is to generate a series of activities: A self-organized simulated robot arm. PLoS One 2019; 14:e0217004. [PMID: 31216272] [PMCID: PMC6584010] [DOI: 10.1371/journal.pone.0217004]
Abstract
Behavior is characterized by sequences of goal-oriented activities, such as food uptake, socializing, and resting. Classically, one would define for each task a corresponding satisfaction level, with the agent engaging, at a given time, in the activity having the lowest satisfaction level. Alternatively, one may consider that the agent follows the overarching objective of generating sequences of distinct activities. To achieve a balanced distribution of activities would then be the primary goal, not to master a specific task. In this setting the agent would show two types of behaviors, task-oriented and task-searching phases, with the latter interspersed between the former. We study the emergence of autonomous task switching for the case of a simulated robot arm. Grasping one of several moving objects corresponds in this setting to a specific activity. Overall, the arm should follow a given object temporarily and then move away, in order to search for a new target and reengage. We show that this behavior can be generated robustly when modeling the arm as an adaptive dynamical system. The dissipation function is in this approach time dependent. The arm is in a dissipative state when searching for a nearby object, dissipating energy on approach. Once close, the dissipation function starts to increase, with the eventual sign change implying that the arm will take up energy and wander off. The resulting explorative state ends when the dissipation function becomes negative again and the arm selects a new target. We believe that our approach may be generalized to produce self-organized sequences of activities in other settings.
Affiliation(s)
- Tim Koglin
- Institute for Theoretical Physics, Goethe University Frankfurt, Frankfurt am Main, Germany
- Bulcsú Sándor
- Department of Physics, Babeș-Bolyai University, Cluj-Napoca, Romania
- Claudius Gros
- Institute for Theoretical Physics, Goethe University Frankfurt, Frankfurt am Main, Germany
31
Manicka S, Levin M. The Cognitive Lens: a primer on conceptual tools for analysing information processing in developmental and regenerative morphogenesis. Philos Trans R Soc Lond B Biol Sci 2019; 374:20180369. [PMID: 31006373 PMCID: PMC6553590 DOI: 10.1098/rstb.2018.0369] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Accepted: 11/20/2018] [Indexed: 12/31/2022]
Abstract
Brains exhibit plasticity, multi-scale integration of information, computation and memory, having evolved by specialization of non-neural cells that already possessed many of the same molecular components and functions. The emerging field of basal cognition provides many examples of decision-making throughout a wide range of non-neural systems. How can biological information processing across scales of size and complexity be quantitatively characterized and exploited in biomedical settings? We use pattern regulation as a context in which to introduce the Cognitive Lens: a strategy using well-established concepts from cognitive and computer science to complement mechanistic investigation in biology. To facilitate the assimilation and application of these approaches across biology, we review tools from various quantitative disciplines, including dynamical systems, information theory and least-action principles. We propose that these tools can be extended beyond neural settings to predict and control systems-level outcomes, and to understand biological patterning as a form of primitive cognition. We hypothesize that a cognitive-level information-processing view of the functions of living systems can complement reductive perspectives, improving efficient top-down control of organism-level outcomes. Exploration of the deep parallels across diverse quantitative paradigms will drive integrative advances in evolutionary biology, regenerative medicine, synthetic bioengineering, cognitive neuroscience and artificial intelligence. This article is part of the theme issue 'Liquid brains, solid brains: How distributed cognitive architectures process information'.
Affiliation(s)
- Michael Levin
- Allen Discovery Center, Tufts University, Medford, MA 02155, USA
32
Reed SK, Vallacher RR. A comparison of information processing and dynamical systems perspectives on problem solving. Thinking & Reasoning 2019. [DOI: 10.1080/13546783.2019.1605930] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Indexed: 01/31/2023]
Affiliation(s)
- Stephen K. Reed
- Psychology and CRMSE, San Diego State University, San Diego, CA, USA
- Department of Psychology, University of California, San Diego, La Jolla, CA, USA
- Robin R. Vallacher
- Department of Psychology, Florida Atlantic University, Boca Raton, FL, USA
33
Del Giudice M, Crespi BJ. Basic functional trade-offs in cognition: An integrative framework. Cognition 2018; 179:56-70. [DOI: 10.1016/j.cognition.2018.06.008] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Received: 01/08/2018] [Revised: 06/05/2018] [Accepted: 06/11/2018] [Indexed: 01/23/2023]
34
Frey S, Albino DK, Williams PL. Synergistic Information Processing Encrypts Strategic Reasoning in Poker. Cogn Sci 2018; 42:1457-1476. [PMID: 29904937 DOI: 10.1111/cogs.12632] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Received: 09/28/2015] [Revised: 04/02/2018] [Accepted: 04/03/2018] [Indexed: 12/01/2022]
Abstract
There is a tendency in decision-making research to treat uncertainty only as a problem to be overcome. But it is also a feature that can be leveraged, particularly in social interaction. Comparing the behavior of profitable and unprofitable poker players, we reveal a strategic use of information processing that keeps decision makers unpredictable. To win at poker, a player must exploit public signals from others. But using public inputs makes it easier for an observer to reconstruct that player's strategy and predict his or her behavior. How should players trade off between exploiting profitable opportunities and remaining unexploitable themselves? Using a recent multivariate approach to information-theoretic data analysis and 1.75 million hands of online two-player No-Limit Texas Hold'em, we find that the important difference between winning and losing players is not in the amount of information they process, but how they process it. In particular, winning players are better at integrative information processing: creating new information from the interaction between their cards and their opponents' signals. We argue that integrative information processing does not just produce better decisions, it makes decision-making harder for others to reverse engineer, as an expert poker player's cards act like the private key in public-key cryptography. Poker players encrypt their reasoning with the way they process information. The encryption function of integrative information processing makes it possible for players to exploit others while remaining unexploitable. By recognizing the act of information processing as a strategic behavior in its own right, we offer a detailed account of how experts use endemic uncertainty to conceal their intentions in high-stakes competitive environments, and we highlight new opportunities between cognitive science, information theory, and game theory.
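A toy illustration of the "integrative" (synergistic) information described here is the XOR relation: each source alone tells an observer nothing about the target, yet the two sources together determine it fully. The sketch below is a generic plug-in estimate on the XOR example, not the authors' actual analysis pipeline.

```python
from collections import Counter
from itertools import product
from math import log2

def entropy(samples):
    """Shannon entropy (bits) of a list of hashable outcomes."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

def mutual_info(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from paired samples."""
    return entropy(list(xs)) + entropy(list(ys)) - entropy(list(zip(xs, ys)))

# XOR target: y carries no information about either input alone,
# but one full bit about the pair -- pure synergy.
x1, x2 = zip(*product([0, 1], repeat=2))   # uniform over the four input pairs
y = tuple(a ^ b for a, b in zip(x1, x2))

print(mutual_info(x1, y))                   # 0.0 bits: one source alone is useless
print(mutual_info(x2, y))                   # 0.0 bits
print(mutual_info(list(zip(x1, x2)), y))    # 1.0 bit: information exists only jointly
```

In the paper's framing, a player whose betting depends on the XOR-like interaction of private cards and public signals leaks nothing through either channel alone.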
Affiliation(s)
- Seth Frey
- Department of Communication, University of California Davis
- Neukom Institute for Computational Science, Dartmouth College
- Cognitive Science Program, Indiana University
35
Timme NM, Lapish C. A Tutorial for Information Theory in Neuroscience. eNeuro 2018; 5:ENEURO.0052-18.2018. [PMID: 30211307 PMCID: PMC6131830 DOI: 10.1523/eneuro.0052-18.2018] [Citation(s) in RCA: 105] [Impact Index Per Article: 15.0] [Received: 01/19/2018] [Revised: 04/10/2018] [Accepted: 05/30/2018] [Indexed: 11/21/2022]
Abstract
Understanding how neural systems integrate, encode, and compute information is central to understanding brain function. Frequently, data from neuroscience experiments are multivariate, the interactions between the variables are nonlinear, and the landscape of hypothesized or possible interactions between variables is extremely broad. Information theory is well suited to address these types of data, as it possesses multivariate analysis tools, it can be applied to many different types of data, it can capture nonlinear interactions, and it does not require assumptions about the structure of the underlying data (i.e., it is model independent). In this article, we walk through the mathematics of information theory along with common logistical problems associated with data type, data binning, data quantity requirements, bias, and significance testing. Next, we analyze models inspired by canonical neuroscience experiments to improve understanding and demonstrate the strengths of information theory analyses. To facilitate the use of information theory analyses, and an understanding of how these analyses are implemented, we also provide a free MATLAB software package that can be applied to a wide range of data from neuroscience experiments, as well as from other fields of study.
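The discretize-then-count (plug-in) recipe the tutorial walks through can be sketched in a few lines. The authors' package is in MATLAB, so the NumPy version below is an illustrative analogue, not their implementation; the bin count of 8 and the simulated stimulus/response data are arbitrary choices.

```python
import numpy as np

def binned_mi(x, y, bins=8):
    """Plug-in mutual information (bits) after equal-width binning.

    Note the plug-in estimator is biased upward for finite samples,
    which is why binning choices, bias correction, and shuffle-based
    significance testing matter in practice.
    """
    xb = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yb = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    joint = np.histogram2d(xb, yb, bins=(bins, bins))[0]
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
stim = rng.normal(size=20_000)
response = stim + 0.5 * rng.normal(size=20_000)  # noisy coding of the stimulus
noise = rng.normal(size=20_000)                  # unrelated channel

print(binned_mi(stim, response))  # substantially positive (about 1 bit)
print(binned_mi(stim, noise))     # near zero, up to a small positive bias
```

The residual positive value on the unrelated channel is exactly the finite-sample bias the tutorial's sections on data quantity and significance testing address.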
Affiliation(s)
- Nicholas M Timme
- Department of Psychology, Indiana University - Purdue University Indianapolis, 402 N. Blackford St, Indianapolis, IN 46202
- Christopher Lapish
- Department of Psychology, Indiana University - Purdue University Indianapolis, 402 N. Blackford St, Indianapolis, IN 46202
36
Da Rold F. Information-theoretic decomposition of embodied and situated systems. Neural Netw 2018; 103:94-107. [PMID: 29665540 DOI: 10.1016/j.neunet.2018.03.011] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Received: 07/13/2017] [Revised: 01/01/2018] [Accepted: 03/14/2018] [Indexed: 11/30/2022]
Abstract
The embodied and situated view of cognition stresses the importance of real-time and nonlinear bodily interaction with the environment for developing concepts and structuring knowledge. In this article, populations of robots controlled by an artificial neural network learn a wall-following task through artificial evolution. At the end of the evolutionary process, time series are recorded from perceptual and motor neurons of selected robots. Information-theoretic measures are estimated on pairings of variables to unveil nonlinear interactions that structure the agent-environment system. Specifically, the mutual information is utilized to quantify the degree of dependence and the transfer entropy to detect the direction of the information flow. Furthermore, the system is analyzed with the local form of such measures, thus capturing the underlying dynamics of information. Results show that different measures are interdependent and complementary in uncovering aspects of the robots' interaction with the environment, as well as characteristics of the functional neural structure. Therefore, the set of information-theoretic measures provides a decomposition of the system, capturing the intricacy of nonlinear relationships that characterize robots' behavior and neural dynamics.
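The directional measure used here, transfer entropy, can be sketched with a plug-in estimator on a toy pair of binary time series. This minimal version assumes history length 1 and discrete states; it illustrates the quantity, not the paper's estimator for the robot data.

```python
import random
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy (bits) of a list of hashable outcomes."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

def cond_entropy(a, b):
    """H(A | B) = H(A, B) - H(B), from paired samples."""
    return entropy(list(zip(a, b))) - entropy(list(b))

def transfer_entropy(src, dst):
    """TE(src -> dst) with history length 1:
    H(dst_t | dst_{t-1}) - H(dst_t | dst_{t-1}, src_{t-1})."""
    present, dst_past, src_past = dst[1:], dst[:-1], src[:-1]
    return (cond_entropy(present, dst_past)
            - cond_entropy(present, list(zip(dst_past, src_past))))

random.seed(1)
x = [random.randint(0, 1) for _ in range(5000)]
y = [0] + x[:-1]   # y copies x with a one-step lag

print(transfer_entropy(x, y))   # close to 1 bit: information flows x -> y
print(transfer_entropy(y, x))   # close to 0: no flow back
```

The asymmetry of the two values is what lets transfer entropy detect the direction of the information flow, which mutual information alone (being symmetric) cannot do.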
Affiliation(s)
- Federico Da Rold
- School of Computing, Electronics and Mathematics, Plymouth University, Plymouth PL4 8AA, UK
37
Chicharro D, Pica G, Panzeri S. The Identity of Information: How Deterministic Dependencies Constrain Information Synergy and Redundancy. Entropy (Basel) 2018; 20:e20030169. [PMID: 33265260 PMCID: PMC7512685 DOI: 10.3390/e20030169] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Received: 11/13/2017] [Revised: 02/26/2018] [Accepted: 02/28/2018] [Indexed: 06/12/2023]
Abstract
Understanding how different information sources together transmit information is crucial in many domains. For example, understanding the neural code requires characterizing how different neurons contribute unique, redundant, or synergistic pieces of information about sensory or behavioral variables. Williams and Beer (2010) proposed a partial information decomposition (PID) that separates the mutual information that a set of sources contains about a set of targets into nonnegative terms interpretable as these pieces. Quantifying redundancy requires assigning an identity to different information pieces, to assess when information is common across sources. Harder et al. (2013) proposed an identity axiom that imposes necessary conditions to quantify qualitatively common information. However, Bertschinger et al. (2012) showed that, in a counterexample with deterministic target-source dependencies, the identity axiom is incompatible with ensuring PID nonnegativity. Here, we study systematically the consequences of information identity criteria that assign identity based on associations between target and source variables resulting from deterministic dependencies. We show how these criteria are related to the identity axiom and to previously proposed redundancy measures, and we characterize how they lead to negative PID terms. This constitutes a further step to more explicitly address the role of information identity in the quantification of redundancy. The implications for studying neural coding are discussed.
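To make the decomposition concrete, the baseline Williams-Beer redundancy measure I_min, which the identity-based measures studied in this paper build on, can be computed by hand for the AND gate with uniform inputs, giving the textbook split of about 0.311 bits redundancy, zero unique information, and 0.5 bits synergy. A sketch of I_min only, under those standard definitions:

```python
from collections import defaultdict
from math import log2

def marginal(p, keep):
    """Marginalize a joint distribution {outcome-tuple: prob} onto indices `keep`."""
    m = defaultdict(float)
    for outcome, prob in p.items():
        m[tuple(outcome[i] for i in keep)] += prob
    return dict(m)

def mutual_info(p, a_keep, b_keep):
    """I(A; B) in bits between two index groups of the joint distribution."""
    pa, pb = marginal(p, a_keep), marginal(p, b_keep)
    pab = marginal(p, a_keep + b_keep)
    return sum(v * log2(v / (pa[k[:len(a_keep)]] * pb[k[len(a_keep):]]))
               for k, v in pab.items())

def specific_info(p, t_val, s_idx, t_idx):
    """Williams-Beer specific information I(T = t; S)."""
    pt = marginal(p, (t_idx,))[(t_val,)]
    ps = marginal(p, (s_idx,))
    pst = marginal(p, (s_idx, t_idx))
    return sum((pst[(s, t_val)] / pt) * (log2(pst[(s, t_val)] / p_s) - log2(pt))
               for (s,), p_s in ps.items() if (s, t_val) in pst)

def i_min(p, s_idxs=(0, 1), t_idx=2):
    """Redundancy I_min: for each target state, the least specific information
    any single source provides, averaged over target states."""
    return sum(pv * min(specific_info(p, t, i, t_idx) for i in s_idxs)
               for (t,), pv in marginal(p, (t_idx,)).items())

# AND gate with uniform binary inputs: outcomes are (s1, s2, t = s1 AND s2).
p_and = {(a, b, a & b): 0.25 for a in (0, 1) for b in (0, 1)}

redundancy = i_min(p_and)
synergy = (mutual_info(p_and, (0, 1), (2,))
           - mutual_info(p_and, (0,), (2,))
           - mutual_info(p_and, (1,), (2,))
           + redundancy)
unique_1 = mutual_info(p_and, (0,), (2,)) - redundancy

print(round(redundancy, 3))        # 0.311
print(round(synergy, 3))           # 0.5
print(round(abs(unique_1), 3))     # 0.0 (and likewise for source 2, by symmetry)
```

The deterministic target-source dependencies discussed in the paper concern exactly how such a redundancy term should assign identity to information pieces, and when that assignment forces some decomposition terms to go negative.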
Affiliation(s)
- Daniel Chicharro
- Department of Neurobiology, Harvard Medical School, Boston, MA 02115, USA
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems@UniTn, Istituto Italiano di Tecnologia, Rovereto (TN) 38068, Italy
- Giuseppe Pica
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems@UniTn, Istituto Italiano di Tecnologia, Rovereto (TN) 38068, Italy
- Stefano Panzeri
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems@UniTn, Istituto Italiano di Tecnologia, Rovereto (TN) 38068, Italy
38
39
The explanatory structure of unexplainable events: Causal constraints on magical reasoning. Psychon Bull Rev 2017; 24:1573-1585. [DOI: 10.3758/s13423-016-1206-3] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Indexed: 11/08/2022]
40
41
Aguilera M, Bedia MG, Barandiaran XE. Extended Neural Metastability in an Embodied Model of Sensorimotor Coupling. Front Syst Neurosci 2016; 10:76. [PMID: 27721746 PMCID: PMC5033977 DOI: 10.3389/fnsys.2016.00076] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Received: 05/23/2016] [Accepted: 08/31/2016] [Indexed: 11/14/2022]
Abstract
The hypothesis that brain organization is based on mechanisms of metastable synchronization in neural assemblies has been popularized during the last decades of neuroscientific research. Nevertheless, the role of body and environment for understanding the functioning of metastable assemblies is frequently dismissed. The main goal of this paper is to investigate the contribution of sensorimotor coupling to neural and behavioral metastability using a minimal computational model of plastic neural ensembles embedded in a robotic agent in a behavioral preference task. Our hypothesis is that, under some conditions, the metastability of the system is not restricted to the brain but extends to the system composed by the interaction of brain, body and environment. We test this idea, comparing an agent in continuous interaction with its environment in a task demanding behavioral flexibility with an equivalent model from the point of view of "internalist neuroscience." A statistical characterization of our model and tools from information theory allow us to show how (1) the bidirectional coupling between agent and environment brings the system closer to a regime of criticality and triggers the emergence of additional metastable states which are not found in the brain in isolation but extended to the whole system of sensorimotor interaction, (2) the synaptic plasticity of the agent is fundamental to sustain open structures in the neural controller of the agent flexibly engaging and disengaging different behavioral patterns that sustain sensorimotor metastable states, and (3) these extended metastable states emerge when the agent generates an asymmetrical circular loop of causal interaction with its environment, in which the agent responds to variability of the environment at fast timescales while acting over the environment at slow timescales, suggesting the constitution of the agent as an autonomous entity actively modulating its sensorimotor coupling with the world. We conclude with a reflection on how our results contribute more generally to current progress in neuroscientific research.
Affiliation(s)
- Miguel Aguilera
- Department of Computer Science and Systems Engineering, University of Zaragoza, Zaragoza, Spain; Department of Psychology, University of the Balearic Islands, Palma de Mallorca, Spain; ISAAC Lab, Aragon Institute of Engineering Research, University of Zaragoza, Zaragoza, Spain
- Manuel G Bedia
- Department of Computer Science and Systems Engineering, University of Zaragoza, Zaragoza, Spain; ISAAC Lab, Aragon Institute of Engineering Research, University of Zaragoza, Zaragoza, Spain
- Xabier E Barandiaran
- Department of Philosophy, University School of Social Work, University of the Basque Country (UPV/EHU), Vitoria-Gasteiz, Spain; Department of Logic and Philosophy of Science, IAS-Research Center for Life, Mind and Society, University of the Basque Country (UPV/EHU), Donostia-San Sebastián, Spain
42
Sándor B, Jahn T, Martin L, Gros C. The Sensorimotor Loop as a Dynamical System: How Regular Motion Primitives May Emerge from Self-Organized Limit Cycles. Front Robot AI 2015. [DOI: 10.3389/frobt.2015.00031] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Indexed: 11/13/2022]
43
Izquierdo EJ, Williams PL, Beer RD. Information Flow through a Model of the C. elegans Klinotaxis Circuit. PLoS One 2015; 10:e0140397. [PMID: 26465883 PMCID: PMC4605772 DOI: 10.1371/journal.pone.0140397] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Received: 05/12/2015] [Accepted: 09/24/2015] [Indexed: 11/29/2022]
Abstract
Understanding how information about external stimuli is transformed into behavior is one of the central goals of neuroscience. Here we characterize the information flow through a complete sensorimotor circuit: from stimulus, to sensory neurons, to interneurons, to motor neurons, to muscles, to motion. Specifically, we apply a recently developed framework for quantifying information flow to a previously published ensemble of models of salt klinotaxis in the nematode worm Caenorhabditis elegans. Despite large variations in the neural parameters of individual circuits, we found that the overall information flow architecture of the circuit is remarkably consistent across the ensemble. This suggests that structural connectivity is not necessarily predictive of effective connectivity. It also suggests information flow analysis captures general principles of operation for the klinotaxis circuit. In addition, information flow analysis reveals several key principles underlying how the models operate: (1) Interneuron class AIY is responsible for integrating information about positive and negative changes in concentration, and exhibits a strong left/right information asymmetry. (2) Gap junctions play a crucial role in the transfer of information responsible for the information symmetry observed in interneuron class AIZ. (3) Neck motor neuron class SMB implements an information gating mechanism that underlies the circuit's state-dependent response. (4) The neck carries more information about small changes in concentration than about large ones, and more information about positive changes in concentration than about negative ones. Thus, not all directions of movement are equally informative for the worm. Each of these findings corresponds to hypotheses that could potentially be tested in the worm. Knowing the results of these experiments would greatly refine our understanding of the neural circuit underlying klinotaxis.
Affiliation(s)
- Eduardo J. Izquierdo
- Cognitive Science Program, Indiana University, Bloomington, Indiana, United States of America
- School of Informatics and Computing, Indiana University, Bloomington, Indiana, United States of America
- Paul L. Williams
- Cognitive Science Program, Indiana University, Bloomington, Indiana, United States of America
- Randall D. Beer
- Cognitive Science Program, Indiana University, Bloomington, Indiana, United States of America
- School of Informatics and Computing, Indiana University, Bloomington, Indiana, United States of America
44
Lizier JT. JIDT: An Information-Theoretic Toolkit for Studying the Dynamics of Complex Systems. Front Robot AI 2014. [DOI: 10.3389/frobt.2014.00011] [Citation(s) in RCA: 182] [Impact Index Per Article: 16.5] [Indexed: 02/05/2023]
45
Applying Information Theory to Neuronal Networks: From Theory to Experiments. Entropy 2014. [DOI: 10.3390/e16115721] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Indexed: 11/16/2022]
46
Pfeifer R, Iida F, Lungarella M. Cognition from the bottom up: on biological inspiration, body morphology, and soft materials. Trends Cogn Sci 2014; 18:404-13. [DOI: 10.1016/j.tics.2014.04.004] [Citation(s) in RCA: 46] [Impact Index Per Article: 4.2] [Received: 02/17/2014] [Revised: 04/07/2014] [Accepted: 04/08/2014] [Indexed: 10/25/2022]