1
Heins C, Millidge B, Da Costa L, Mann RP, Friston KJ, Couzin ID. Collective behavior from surprise minimization. Proc Natl Acad Sci U S A 2024; 121:e2320239121. PMID: 38630721; PMCID: PMC11046639; DOI: 10.1073/pnas.2320239121.
Abstract
Collective motion is ubiquitous in nature; groups of animals, such as fish, birds, and ungulates, appear to move as a whole, exhibiting a rich behavioral repertoire that ranges from directed movement to milling to disordered swarming. Typically, such macroscopic patterns arise from decentralized, local interactions among constituent components (e.g., individual fish in a school). Preeminent models of this process describe individuals as self-propelled particles, subject to self-generated motion and "social forces" such as short-range repulsion and long-range attraction or alignment. However, organisms are not particles; they are probabilistic decision-makers. Here, we introduce an approach to modeling collective behavior based on active inference. This cognitive framework casts behavior as the consequence of a single imperative: to minimize surprise. We demonstrate that many empirically observed collective phenomena, including cohesion, milling, and directed motion, emerge naturally when behavior is treated as driven by active Bayesian inference, without explicitly building behavioral rules or goals into individual agents. Furthermore, we show that active inference can recover and generalize the classical notion of social forces, as agents attempt to suppress prediction errors that conflict with their expectations. By exploring the parameter space of the belief-based model, we reveal nontrivial relationships between individual beliefs and group properties such as polarization and the tendency to visit different collective states. We also explore how individual beliefs about uncertainty determine collective decision-making accuracy. Finally, we show how agents can update their generative model over time, resulting in groups that are collectively more sensitive to external fluctuations and that encode information more robustly.
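As a rough illustration of the paper's central idea, the toy simulation below has each agent hold a prior expectation about the distance to its nearest neighbour and turn so as to suppress the resulting prediction error. This is not the authors' model; every parameter name and value here is invented for illustration only.

```python
import numpy as np

# Toy sketch of surprise minimisation in a group (illustrative, not the
# paper's model): each agent "expects" its nearest neighbour at a
# preferred distance and turns to suppress the distance prediction error.
rng = np.random.default_rng(0)

N, STEPS, DT = 30, 200, 0.1
SPEED = 1.0
PREFERRED_DIST = 1.0   # distance at which each agent expects its neighbour
GAIN = 0.5             # how strongly prediction errors drive turning

pos = rng.uniform(-2.0, 2.0, size=(N, 2))
theta = rng.uniform(0.0, 2.0 * np.pi, size=N)

for _ in range(STEPS):
    new_theta = theta.copy()
    for i in range(N):
        diff = pos - pos[i]
        dist = np.linalg.norm(diff, axis=1)
        dist[i] = np.inf
        j = int(np.argmin(dist))                  # nearest neighbour
        err = dist[j] - PREFERRED_DIST            # distance prediction error
        heading_to_j = np.arctan2(diff[j, 1], diff[j, 0])
        # Signed angular difference to the neighbour, wrapped into (-pi, pi]
        delta = np.arctan2(np.sin(heading_to_j - theta[i]),
                           np.cos(heading_to_j - theta[i]))
        # Turn towards the neighbour when too far (err > 0), away when too close
        new_theta[i] = theta[i] + GAIN * np.tanh(err) * delta
    theta = new_theta
    pos += SPEED * np.stack([np.cos(theta), np.sin(theta)], axis=1) * DT

# Mean distance to the group centroid as a crude cohesion measure
spread = float(np.linalg.norm(pos - pos.mean(axis=0), axis=1).mean())
print(round(spread, 3))
```

No attraction or repulsion rule is written down anywhere; the steering term is just an attempt to cancel a prediction error, which is the sense in which active inference "recovers" social forces.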
Affiliation(s)
- Conor Heins
- Department of Collective Behaviour, Max Planck Institute of Animal Behavior, Konstanz D-78457, Germany
- Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Konstanz D-78457, Germany
- Department of Biology, University of Konstanz, Konstanz D-78457, Germany
- VERSES Research Lab, Los Angeles, CA 90016
- Beren Millidge
- Medical Research Council Brain Networks Dynamics Unit, University of Oxford, Oxford OX1 3TH, United Kingdom
- Lancelot Da Costa
- VERSES Research Lab, Los Angeles, CA 90016
- Department of Mathematics, Imperial College London, London SW7 2AZ, United Kingdom
- Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3AR, United Kingdom
- Richard P. Mann
- Department of Statistics, School of Mathematics, University of Leeds, Leeds LS2 9JT, United Kingdom
- Karl J. Friston
- VERSES Research Lab, Los Angeles, CA 90016
- Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3AR, United Kingdom
- Iain D. Couzin
- Department of Collective Behaviour, Max Planck Institute of Animal Behavior, Konstanz D-78457, Germany
- Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Konstanz D-78457, Germany
- Department of Biology, University of Konstanz, Konstanz D-78457, Germany
2
Mannella F, Maggiore F, Baltieri M, Pezzulo G. Active inference through whiskers. Neural Netw 2021; 144:428-437. PMID: 34563752; DOI: 10.1016/j.neunet.2021.08.037.
Abstract
Rodents use whisking to actively probe their environment and to locate objects in space, providing a paradigmatic biological example of active sensing. Numerous studies show that the control of whisking has anticipatory aspects. For example, rodents target their whisker protraction to the distance at which they expect objects, rather than merely reacting quickly to contacts with unexpected objects. Here we characterize the anticipatory control of whisking in rodents as an active inference process. In this perspective, the rodent is endowed with a prior belief that it will touch something at the end of the whisker protraction, and it continuously modulates its whisking amplitude to minimize (proprioceptive and somatosensory) prediction errors arising from an unexpected whisker-object contact, or from the lack of an expected contact. We use the model to qualitatively reproduce key empirical findings about the ways rodents modulate their whisking amplitude during exploration and during the scanning of (expected or unexpected) objects. Furthermore, we discuss how the components of the active inference model can in principle map onto the neurobiological circuits of rodent whisking.
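The core mechanism can be caricatured in a few lines: if the animal holds a prior belief that the whisker tip contacts a surface at the protraction peak, gradient descent on the contact prediction error drives the amplitude toward the object's distance. This is a hypothetical discrete-time sketch, not the paper's model; all names and constants are illustrative.

```python
# Hypothetical sketch: the whisking amplitude `a` is adjusted whisk by
# whisk to fulfil a prior belief that the whisker tip touches a surface
# at the peak of protraction. Names and constants are invented.

def simulate(object_distance, n_whisks=50, lr=0.2):
    a = 1.0  # initial whisk amplitude (maximum tip extension)
    for _ in range(n_whisks):
        # Prediction error at the protraction peak: positive if the tip
        # overshoots the expected contact point, negative on a missed contact
        err = a - object_distance
        a -= lr * err  # retract on overshoot, protract further on shortfall
    return a

# Amplitude converges to the distance at which contact is expected
print(round(simulate(0.8), 2), round(simulate(1.5), 2))  # 0.8 1.5
```

The same update handles both cases the abstract mentions: an unexpected contact (overshoot, positive error) and the lack of an expected contact (shortfall, negative error).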
Affiliation(s)
- Francesco Mannella
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Federico Maggiore
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Manuel Baltieri
- Laboratory for Neural Computation and Adaptation, RIKEN Centre for Brain Science, Wako-shi, Japan
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
3
Hipólito I, Baltieri M, Friston K, Ramstead MJD. Embodied skillful performance: where the action is. Synthese 2021; 199:4457-4481. PMID: 34866668; PMCID: PMC8602225; DOI: 10.1007/s11229-020-02986-5.
Abstract
When someone masters a skill, their performance looks to us like second nature: it looks as if their actions are smoothly performed without explicit, knowledge-driven, online monitoring of their performance. Contemporary computational models in motor control theory, however, are instructionist: that is, they cast skillful performance as a knowledge-driven process. Optimal motor control theory (OMCT), as the representative par excellence of such approaches, casts skillful performance as an instruction, instantiated in the brain, that needs to be executed: a motor command. This paper aims to show the limitations of such instructionist approaches to skillful performance. We specifically address the question of whether the assumptions of control-theoretic models are warranted. The first section of this paper examines the instructionist assumption, according to which skillful performance consists of the execution of theoretical instructions harnessed in motor representations. The second and third sections characterize the implementation of motor representations as motor commands, with a special focus on formulations from OMCT. The final sections examine predictive coding and active inference, behavioral modeling frameworks that descend from, but are distinct from, OMCT, and argue that the instructionist, control-theoretic assumptions are ill-motivated in light of new developments in active inference.
Affiliation(s)
- Inês Hipólito
- Berlin School of Mind and Brain and Institut für Philosophie, Humboldt-Universität zu Berlin, Berlin, Germany
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Manuel Baltieri
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Wako, Saitama, Japan
- Karl Friston
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Maxwell J. D. Ramstead
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Mind, Brain Imaging and Neuroethics, Institute of Mental Health Research, University of Ottawa, Ottawa, Canada
- Division of Social and Transcultural Psychiatry, Department of Psychiatry, McGill University, Montreal, QC, Canada
- Culture, Mind, and Brain Program, McGill University, Montreal, QC, Canada
4
Ramstead MJD, Friston KJ, Hipólito I. Is the Free-Energy Principle a Formal Theory of Semantics? From Variational Density Dynamics to Neural and Phenotypic Representations. Entropy (Basel) 2020; 22:E889. PMID: 33286659; PMCID: PMC7517505; DOI: 10.3390/e22080889.
Abstract
The aim of this paper is twofold: (1) to assess whether the construct of neural representations plays an explanatory role under the variational free-energy principle and its corollary process theory, active inference; and (2) if so, to assess which philosophical stance, in relation to the ontological and epistemological status of representations, is most appropriate. We focus on non-realist (deflationary and fictionalist-instrumentalist) approaches. We consider a deflationary account of mental representation, according to which the explanatorily relevant contents of neural representations are mathematical rather than cognitive, and a fictionalist or instrumentalist account, according to which representations are scientifically useful fictions that serve explanatory (and other) aims. After reviewing the free-energy principle and active inference, we argue that the model of adaptive phenotypes under the free-energy principle can be used to furnish a formal semantics, enabling us to assign semantic content to specific phenotypic states (the internal states of a Markovian system that exists far from equilibrium). We propose a modified fictionalist account: an organism-centered fictionalism or instrumentalism. We argue that, under the free-energy principle, pursuing even a deflationary account of the content of neural representations licenses the appeal to the kind of semantic content involved in the 'aboutness' or intentionality of cognitive systems; our position is thus coherent with, but rests on distinct assumptions from, the realist position. We argue that the free-energy principle thereby explains the aboutness or intentionality of living systems, and hence their capacity to parse their sensory stream using an ontology or set of semantic factors.
Affiliation(s)
- Maxwell J. D. Ramstead
- Division of Social and Transcultural Psychiatry, Department of Psychiatry, McGill University, Montreal, QC H3A 1A1, Canada
- Culture, Mind, and Brain Program, McGill University, Montreal, QC H3A 1A1, Canada
- Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3AR, UK
- Karl J. Friston
- Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3AR, UK
- Inês Hipólito
- Faculty of Arts, Social Sciences, and Humanities, University of Wollongong, Wollongong 2522, Australia
- Institute of Psychiatry, Psychology and Neuroscience (IoPPN), King's College London, London SE5 8AF, UK
5
Smith R, Schwartenbeck P, Parr T, Friston KJ. An Active Inference Approach to Modeling Structure Learning: Concept Learning as an Example Case. Front Comput Neurosci 2020; 14:41. PMID: 32508611; PMCID: PMC7250191; DOI: 10.3389/fncom.2020.00041.
Abstract
Within computational neuroscience, the algorithmic and neural basis of structure learning remains poorly understood. Concept learning is one primary example, which requires both a type of internal model expansion process (adding novel hidden states that explain new observations) and a model reduction process (merging different states into one underlying cause and thus reducing model complexity via meta-learning). Although various algorithmic models of concept learning have been proposed within machine learning and cognitive science, many are limited to various degrees by an inability to generalize, the need for very large amounts of training data, and/or insufficiently established biological plausibility. Using concept learning as an example case, we introduce a novel approach for modeling structure learning, specifically state-space expansion and reduction, within the active inference framework and its accompanying neural process theory. Our aim is to demonstrate its potential to facilitate a novel line of active inference research in this area. The approach we lay out is based on the idea that a generative model can be equipped with extra (hidden state or cause) "slots" that can be engaged when an agent learns about novel concepts. This can be combined with a Bayesian model reduction process, in which any concept learning associated with these slots can be reset in favor of a simpler model with higher model evidence. We use simulations to illustrate this model's ability to add new concepts to its state space (with relatively few observations) and to increase the granularity of the concepts it currently possesses. We also simulate the predicted neural basis of these processes. We further show that the model can accomplish a simple form of "one-shot" generalization to new stimuli. Although deliberately simple, these simulation results highlight ways in which active inference could offer useful resources for developing neurocomputational models of structure learning. They provide a template for how future active inference research could apply this approach to real-world structure learning problems and assess the added utility it may offer.
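The model reduction step the abstract describes can be illustrated with Dirichlet parameters, for which Bayesian model reduction has a closed form: the change in log evidence from replacing the full prior with a reduced prior that switches spare slots off is computable directly from the posterior concentration parameters. The slot counts below are invented for illustration and do not come from the paper, whose generative model is richer than this.

```python
from math import lgamma

def log_beta(alpha):
    # Log normaliser of a Dirichlet distribution with concentrations `alpha`
    return sum(lgamma(a) for a in alpha) - lgamma(sum(alpha))

def bmr_delta_f(prior, posterior, reduced_prior):
    # Change in log evidence from swapping `prior` for `reduced_prior`
    # post hoc (Bayesian model reduction for Dirichlet parameters);
    # positive values favour the reduced (simpler) model.
    reduced_post = [q + rp - p for q, rp, p in zip(posterior, reduced_prior, prior)]
    return (log_beta(reduced_post) - log_beta(posterior)
            + log_beta(prior) - log_beta(reduced_prior))

prior = [1.0, 1.0, 1.0, 1.0]              # two concepts in use, two spare slots
reduced_prior = [1.0, 1.0, 1e-3, 1e-3]    # reduced model: spare slots switched off

# Case 1: the spare slots were never engaged during learning
unused = [31.0, 26.0, 1.0, 1.0]
# Case 2: the spare slots came to explain many new observations
used = [31.0, 26.0, 21.0, 16.0]

print(bmr_delta_f(prior, unused, reduced_prior) > 0)  # True: prune the slots
print(bmr_delta_f(prior, used, reduced_prior) > 0)    # False: retain the slots
```

This is the sense in which unused concept slots can be "reset in favor of a simpler model with higher model evidence" without refitting anything: the comparison needs only the prior and posterior counts.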
Affiliation(s)
- Ryan Smith
- Laureate Institute for Brain Research, Tulsa, OK, United States
- Philipp Schwartenbeck
- Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, London, United Kingdom
- Thomas Parr
- Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, London, United Kingdom
- Karl J. Friston
- Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, London, United Kingdom
6
Tschantz A, Seth AK, Buckley CL. Learning action-oriented models through active inference. PLoS Comput Biol 2020; 16:e1007805. PMID: 32324758; PMCID: PMC7200021; DOI: 10.1371/journal.pcbi.1007805.
Abstract
Converging theories suggest that organisms learn and exploit probabilistic models of their environment. However, it remains unclear how such models can be learned in practice. The open-ended complexity of natural environments means that it is generally infeasible for organisms to model their environment comprehensively. Alternatively, action-oriented models attempt to encode a parsimonious representation of adaptive agent-environment interactions. One approach to learning action-oriented models is to learn online in the presence of goal-directed behaviours. This constrains an agent to behaviourally relevant trajectories, reducing the diversity of the data a model must account for. Unfortunately, this approach can cause models to converge prematurely to sub-optimal solutions, through a process we refer to as a bad bootstrap. Here, we exploit the normative framework of active inference to show that efficient action-oriented models can be learned by balancing goal-directed and epistemic (information-seeking) behaviours in a principled manner. We illustrate our approach using a simple agent-based model of bacterial chemotaxis. We first demonstrate that learning via goal-directed behaviour indeed constrains models to behaviourally relevant aspects of the environment, but that this approach is prone to sub-optimal convergence. We then demonstrate that epistemic behaviours facilitate the construction of accurate and comprehensive models, but that these models are not tailored to any specific behavioural niche and are therefore less efficient in their use of data. Finally, we show that active inference agents learn models that are parsimonious, tailored to action, and which avoid bad bootstraps and sub-optimal convergence. Critically, our results indicate that models learned through active inference can support adaptive behaviour in spite of, and indeed because of, their departure from veridical representations of the environment. Our approach provides a principled method for learning adaptive models from limited interactions with an environment, highlighting a route to sample-efficient learning algorithms.
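The balance the abstract describes, between pragmatic (goal-directed) and epistemic (information-seeking) value, is standard in active inference via expected free energy. The toy calculation below is a sketch under invented numbers, not the paper's chemotaxis model: it shows that an action whose observations reveal the hidden state receives lower expected free energy than an uninformative one, even when both predict the same outcome distribution.

```python
from math import log

def entropy(p):
    return -sum(x * log(x) for x in p if x > 0)

def expected_free_energy(prior, likelihood, log_pref):
    # prior[s]: belief over hidden states; likelihood[s][o]: P(o | s) under
    # this action; log_pref[o]: log preference over outcomes.
    n_obs = len(log_pref)
    q_o = [sum(prior[s] * likelihood[s][o] for s in range(len(prior)))
           for o in range(n_obs)]
    # Epistemic value: expected information gain about the hidden state,
    # i.e. the mutual information between states and predicted outcomes
    epistemic = entropy(q_o) - sum(prior[s] * entropy(likelihood[s])
                                   for s in range(len(prior)))
    # Pragmatic value: expected log preference under predicted outcomes
    pragmatic = sum(q_o[o] * log_pref[o] for o in range(n_obs))
    return -epistemic - pragmatic  # lower is better

prior = [0.5, 0.5]          # uncertain which way the nutrient gradient runs
log_pref = [0.0, -4.0]      # outcome 0 (nutrient) is strongly preferred

ambiguous = [[0.5, 0.5], [0.5, 0.5]]    # sensing reveals nothing about the state
informative = [[0.9, 0.1], [0.1, 0.9]]  # sensing mostly reveals the state

g_amb = expected_free_energy(prior, ambiguous, log_pref)
g_inf = expected_free_energy(prior, informative, log_pref)
print(g_inf < g_amb)  # True: the informative action is preferred
```

An agent choosing actions by this score is driven to resolve uncertainty first and exploit its preferences second, which is the mechanism the authors credit with avoiding bad bootstraps.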
Affiliation(s)
- Alexander Tschantz
- Sackler Centre for Consciousness Science, University of Sussex, Falmer, Brighton, United Kingdom
- Department of Informatics, University of Sussex, Brighton, United Kingdom
- Anil K. Seth
- Sackler Centre for Consciousness Science, University of Sussex, Falmer, Brighton, United Kingdom
- Department of Informatics, University of Sussex, Brighton, United Kingdom
- Canadian Institute for Advanced Research, Azrieli Programme on Brain, Mind, and Consciousness, Toronto, Ontario, Canada
- Christopher L. Buckley
- Department of Informatics, University of Sussex, Brighton, United Kingdom
- Evolutionary and Adaptive Systems Research Group, University of Sussex, Falmer, United Kingdom