1. Boyd JL. Moral considerability of brain organoids from the perspective of computational architecture. Oxford Open Neuroscience 2024; 3:kvae004. PMID: 38595940; PMCID: PMC10995847; DOI: 10.1093/oons/kvae004.
Abstract
Human brain organoids equipped with complex cytoarchitecture and closed-loop feedback from virtual environments could provide insights into neural mechanisms underlying cognition. Yet organoids with certain cognitive capacities might also merit moral consideration. A precautionary approach has been proposed to address these ethical concerns by focusing on the epistemological question of whether organoids possess neural structures for morally relevant capacities that bear resemblance to those found in human brains. Critics challenge this similarity approach on philosophical, scientific, and practical grounds but do so without a suitable alternative. Here, I introduce an architectural approach that infers the potential for cognitive-like processing in brain organoids based on the pattern of information flow through the system. The kind of computational architecture acquired by an organoid then informs the kind of cognitive capacities that could, theoretically, be supported and empirically investigated. The implications of this approach for the moral considerability of brain organoids are discussed.
Affiliation(s)
- J Lomax Boyd
- Berman Institute of Bioethics, Johns Hopkins University, 1809 Ashland Ave, Baltimore, MD 21205, USA
2. Hodson R, Mehta M, Smith R. The empirical status of predictive coding and active inference. Neurosci Biobehav Rev 2024; 157:105473. PMID: 38030100; DOI: 10.1016/j.neubiorev.2023.105473.
Abstract
Research on predictive processing models has focused largely on two specific algorithmic theories: Predictive Coding for perception and Active Inference for decision-making. While these interconnected theories possess broad explanatory potential, they have only recently begun to receive direct empirical evaluation. Here, we review recent studies of Predictive Coding and Active Inference with a focus on evaluating the degree to which they are empirically supported. For Predictive Coding, we find that existing empirical evidence offers modest support. However, some positive results can also be explained by alternative feedforward (e.g., feature detection-based) models. For Active Inference, most empirical studies have focused on fitting these models to behavior as a means of identifying and explaining individual or group differences. While Active Inference models tend to explain behavioral data reasonably well, there has not been a focus on testing the empirical validity of Active Inference theory per se, which would require formal comparison to other models (e.g., non-Bayesian or model-free reinforcement learning models). This review suggests that, while these theories are promising, a number of specific research directions are still necessary to evaluate their empirical adequacy and explanatory power.
Affiliation(s)
- Ryan Smith
- Laureate Institute for Brain Research, USA.
3. Corcoran AW, Perrykkad K, Feuerriegel D, Robinson JE. Body as First Teacher: The Role of Rhythmic Visceral Dynamics in Early Cognitive Development. Perspectives on Psychological Science 2023:17456916231185343. PMID: 37694720; DOI: 10.1177/17456916231185343.
Abstract
Embodied cognition, the idea that mental states and processes should be understood in relation to one's bodily constitution and interactions with the world, remains a controversial topic within cognitive science. Recently, however, increasing interest in predictive processing theories among proponents and critics of embodiment alike has raised hopes of a reconciliation. This article sets out to appraise the unificatory potential of predictive processing, focusing in particular on embodied formulations of active inference. Our analysis suggests that most active-inference accounts invoke weak, potentially trivial conceptions of embodiment; those making stronger claims do so independently of the theoretical commitments of the active-inference framework. We argue that a more compelling version of embodied active inference can be motivated by adopting a diachronic perspective on the way rhythmic physiological activity shapes neural development in utero. According to this visceral afferent training hypothesis, early-emerging physiological processes are essential not only for supporting the biophysical development of neural structures but also for configuring the cognitive architecture those structures entail. Focusing in particular on the cardiovascular system, we propose three candidate mechanisms through which visceral afferent training might operate: (a) activity-dependent neuronal development, (b) periodic signal modeling, and (c) oscillatory network coordination.
Affiliation(s)
- Andrew W Corcoran
- Monash Centre for Consciousness and Contemplative Studies, Monash University
- Cognition and Philosophy Laboratory, School of Philosophical, Historical, and International Studies, Monash University
- Kelsey Perrykkad
- Cognition and Philosophy Laboratory, School of Philosophical, Historical, and International Studies, Monash University
- Jonathan E Robinson
- Cognition and Philosophy Laboratory, School of Philosophical, Historical, and International Studies, Monash University
4. Isomura T, Kotani K, Jimbo Y, Friston KJ. Experimental validation of the free-energy principle with in vitro neural networks. Nat Commun 2023; 14:4547. PMID: 37550277; PMCID: PMC10406890; DOI: 10.1038/s41467-023-40141-z.
Abstract
Empirical applications of the free-energy principle are not straightforward because they entail a commitment to a particular process theory, especially at the cellular and synaptic levels. Using a recently established reverse engineering technique, we confirm the quantitative predictions of the free-energy principle using in vitro networks of rat cortical neurons that perform causal inference. Upon receiving electrical stimuli, generated by mixing two hidden sources, neurons self-organised to selectively encode the two sources. Pharmacological up- and downregulation of network excitability disrupted the ensuing inference, consistent with changes in prior beliefs about hidden sources. As predicted, changes in effective synaptic connectivity reduced variational free energy, where the connection strengths encoded parameters of the generative model. In short, we show that variational free energy minimisation can quantitatively predict the self-organisation of neuronal networks, in terms of their responses and plasticity. These results demonstrate the applicability of the free-energy principle to in vitro neural networks and establish its predictive validity in this setting.
Affiliation(s)
- Takuya Isomura
- Brain Intelligence Theory Unit, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako, Saitama, 351-0198, Japan.
- Kiyoshi Kotani
- Research Center for Advanced Science and Technology, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo, 153-8904, Japan
- Yasuhiko Jimbo
- Department of Precision Engineering, School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
- Karl J Friston
- Wellcome Centre for Human Neuroimaging, Queen Square Institute of Neurology, University College London, London, WC1N 3AR, UK
- VERSES AI Research Lab, Los Angeles, CA, 90016, USA
5. Jeon I, Kim T. Distinctive properties of biological neural networks and recent advances in bottom-up approaches toward a better biologically plausible neural network. Front Comput Neurosci 2023; 17:1092185. PMID: 37449083; PMCID: PMC10336230; DOI: 10.3389/fncom.2023.1092185.
Abstract
Although it may appear infeasible and impractical, building artificial intelligence (AI) using a bottom-up approach based on an understanding of neuroscience is straightforward. The lack of a generalized governing principle for biological neural networks (BNNs) forces us to address this problem by converting piecemeal information on the diverse features of neurons, synapses, and neural circuits into AI. In this review, we describe recent attempts to build a biologically plausible neural network by following neuroscientifically similar strategies of neural network optimization or by implanting the outcome of the optimization, such as the properties of single computational units and the characteristics of the network architecture. In addition, we propose a formalism for the relationship between the set of objectives that neural networks attempt to achieve and neural network classes categorized by how closely their architectural features resemble those of BNNs. This formalism is expected to define the potential roles of top-down and bottom-up approaches in building a biologically plausible neural network and to offer a map for navigating the gap between neuroscience and AI engineering.
Affiliation(s)
- Taegon Kim
- Brain Science Institute, Korea Institute of Science and Technology, Seoul, Republic of Korea
6. Chen Z, Liang Q, Wei Z, Chen X, Shi Q, Yu Z, Sun T. An Overview of In Vitro Biological Neural Networks for Robot Intelligence. Cyborg and Bionic Systems 2023; 4:0001. PMID: 37040493; PMCID: PMC10076061; DOI: 10.34133/cbsystems.0001.
Abstract
In vitro biological neural networks (BNNs) interconnected with robots, so-called BNN-based neurorobotic systems, can interact with the external world and thereby exhibit preliminary intelligent behaviors, including learning, memory, and robot control. This work aims to provide a comprehensive overview of the intelligent behaviors presented by BNN-based neurorobotic systems, with a particular focus on those related to robot intelligence. We first introduce the necessary biological background for understanding the 2 key characteristics of BNNs: nonlinear computing capacity and network plasticity. Then, we describe the typical architecture of BNN-based neurorobotic systems and outline the mainstream techniques for realizing such an architecture from 2 aspects: from robots to BNNs and from BNNs to robots. Next, we separate the intelligent behaviors into 2 parts according to whether they rely solely on the computing capacity (computing capacity-dependent) or also depend on the network plasticity (network plasticity-dependent), each expounded with a focus on the realization of robot intelligence. Finally, the development trends and challenges of BNN-based neurorobotic systems are discussed.
Affiliation(s)
- Zhe Chen
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Key Laboratory of Biomimetic Robots and Systems (Beijing Institute of Technology), Ministry of Education, Beijing 100081, China
- Advanced Innovation Center for Intelligent Robots and Systems, Beijing Institute of Technology, Beijing 100081, China
- Qian Liang
- Key Laboratory of Biomimetic Robots and Systems (Beijing Institute of Technology), Ministry of Education, Beijing 100081, China
- Advanced Innovation Center for Intelligent Robots and Systems, Beijing Institute of Technology, Beijing 100081, China
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Zihou Wei
- Key Laboratory of Biomimetic Robots and Systems (Beijing Institute of Technology), Ministry of Education, Beijing 100081, China
- Advanced Innovation Center for Intelligent Robots and Systems, Beijing Institute of Technology, Beijing 100081, China
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Xie Chen
- Key Laboratory of Biomimetic Robots and Systems (Beijing Institute of Technology), Ministry of Education, Beijing 100081, China
- Advanced Innovation Center for Intelligent Robots and Systems, Beijing Institute of Technology, Beijing 100081, China
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Qing Shi
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Key Laboratory of Biomimetic Robots and Systems (Beijing Institute of Technology), Ministry of Education, Beijing 100081, China
- Advanced Innovation Center for Intelligent Robots and Systems, Beijing Institute of Technology, Beijing 100081, China
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Zhiqiang Yu
- Key Laboratory of Biomimetic Robots and Systems (Beijing Institute of Technology), Ministry of Education, Beijing 100081, China
- Advanced Innovation Center for Intelligent Robots and Systems, Beijing Institute of Technology, Beijing 100081, China
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Tao Sun
- Key Laboratory of Biomimetic Robots and Systems (Beijing Institute of Technology), Ministry of Education, Beijing 100081, China
- Advanced Innovation Center for Intelligent Robots and Systems, Beijing Institute of Technology, Beijing 100081, China
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
7. Kagan BJ, Kitchen AC, Tran NT, Habibollahi F, Khajehnejad M, Parker BJ, Bhat A, Rollo B, Razi A, Friston KJ. In vitro neurons learn and exhibit sentience when embodied in a simulated game-world. Neuron 2022; 110:3952-3969.e8. PMID: 36228614; DOI: 10.1016/j.neuron.2022.09.001.
Abstract
Integrating neurons into digital systems may enable performance infeasible with silicon alone. Here, we develop DishBrain, a system that harnesses the inherent adaptive computation of neurons in a structured environment. In vitro neural networks from human or rodent origins are integrated with in silico computing via a high-density multielectrode array. Through electrophysiological stimulation and recording, cultures are embedded in a simulated game-world, mimicking the arcade game "Pong." Applying implications from the theory of active inference via the free energy principle, we find apparent learning within five minutes of real-time gameplay not observed in control conditions. Further experiments demonstrate the importance of closed-loop structured feedback in eliciting learning over time. Cultures display the ability to self-organize activity in a goal-directed manner in response to sparse sensory information about the consequences of their actions, which we term synthetic biological intelligence. Future applications may provide further insights into the cellular correlates of intelligence.
Affiliation(s)
- Nhi T Tran
- The Ritchie Centre, Hudson Institute of Medical Research, Clayton, VIC, Australia
- Forough Habibollahi
- Department of Biomedical Engineering, The University of Melbourne, Parkville, Australia
- Moein Khajehnejad
- Department of Data Science and AI, Monash University, Melbourne, Australia
- Bradyn J Parker
- Department of Materials Science and Engineering, Monash University, Melbourne, VIC, Australia
- Anjali Bhat
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, London, UK
- Ben Rollo
- Department of Neuroscience, Central Clinical School, Monash University, Melbourne, Australia
- Adeel Razi
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, London, UK; Turner Institute for Brain and Mental Health, Monash University, Clayton, VIC, Australia; Monash Biomedical Imaging, Monash University, Clayton, VIC, Australia; CIFAR Azrieli Global Scholars Program, CIFAR, Toronto, Canada
- Karl J Friston
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, London, UK
8. Wright JJ, Bourke PD. Unification of free energy minimization, spatiotemporal energy, and dimension reduction models of V1 organization: Postnatal learning on an antenatal scaffold. Front Comput Neurosci 2022; 16:869268. PMID: 36313813; PMCID: PMC9614369; DOI: 10.3389/fncom.2022.869268.
Abstract
Developmental selection of neurons and synapses so as to maximize pulse synchrony has recently been used to explain antenatal cortical development. Consequences of the same selection process—an application of the Free Energy Principle—are here followed into the postnatal phase in V1, and the implications for cognitive function are considered. Structured inputs transformed via lag relay in superficial patch connections lead to the generation of circumferential synaptic connectivity superimposed upon the antenatal, radial, “like-to-like” connectivity surrounding each singularity. The spatiotemporal energy and dimension reduction models of cortical feature preferences are accounted for and unified within the expanded model, and relationships of orientation preference (OP), space frequency preference (SFP), and temporal frequency preference (TFP) are resolved. The emergent anatomy provides a basis for “active inference” that includes interpolative modification of synapses so as to anticipate future inputs, as well as learn directly from present stimuli. Neurodynamic properties are those of heteroclinic networks with coupled spatial eigenmodes.
Affiliation(s)
- James Joseph Wright
- Centre for Brain Research, University of Auckland, Auckland, New Zealand
- Department of Psychological Medicine, School of Medicine, University of Auckland, Auckland, New Zealand
- Paul David Bourke
- Faculty of Arts, Business, Law and Education, School of Social Sciences, University of Western Australia, Perth, WA, Australia
9. Lin CHS, Garrido MI. Towards a cross-level understanding of Bayesian inference in the brain. Neurosci Biobehav Rev 2022; 137:104649. PMID: 35395333; DOI: 10.1016/j.neubiorev.2022.104649.
Abstract
Perception emerges from unconscious probabilistic inference, which guides behaviour in our ubiquitously uncertain environment. Bayesian decision theory is a prominent computational model that describes how people make rational decisions using noisy and ambiguous sensory observations. However, critical questions have been raised about the validity of the Bayesian framework in explaining the mental process of inference. Firstly, some natural behaviours deviate from the Bayesian optimum. Secondly, the neural mechanisms that support Bayesian computations in the brain are yet to be understood. Taking Marr's cross-level approach, we review the recent progress made in addressing these challenges. We first review studies that combined behavioural paradigms and modelling approaches to explain both optimal and suboptimal behaviours. Next, we evaluate the theoretical advances and the current evidence for ecologically feasible algorithms and neural implementations in the brain, which may enable probabilistic inference. We argue that this cross-level approach is necessary for the worthwhile pursuit of uncovering mechanistic accounts of human behaviour.
Affiliation(s)
- Chin-Hsuan Sophie Lin
- Melbourne School of Psychological Sciences, The University of Melbourne, Australia; Australian Research Council for Integrative Brain Function, Australia.
- Marta I Garrido
- Melbourne School of Psychological Sciences, The University of Melbourne, Australia; Australian Research Council for Integrative Brain Function, Australia
10. Gandolfi D, Puglisi FM, Boiani GM, Pagnoni G, Friston KJ, D'Angelo EU, Mapelli J. Emergence of associative learning in a neuromorphic inference network. J Neural Eng 2022; 19. PMID: 35508120; DOI: 10.1088/1741-2552/ac6ca7.
Abstract
Objective: In the theoretical framework of predictive coding and active inference, the brain can be viewed as instantiating a rich generative model of the world that predicts incoming sensory data while continuously updating its parameters via minimization of prediction errors. While this theory has been successfully applied to cognitive processes, by modelling the activity of functional neural networks at a mesoscopic scale, the validity of the approach when modelling neurons as an ensemble of inferring agents, in a biologically plausible architecture, remained to be explored.
Approach: We modelled a simplified cerebellar circuit with individual neurons acting as Bayesian agents to simulate the classical delayed eyeblink conditioning protocol. Neurons and synapses adjusted their activity to minimize their prediction error, which was used as the network cost function. This cerebellar network was then implemented in hardware by replicating digital neuronal elements via a low-power microcontroller.
Main results: Persistent changes of synaptic strength, mirroring neurophysiological observations, emerged via local (neurocentric) prediction error minimization, leading to the expression of associative learning. The same paradigm was effectively emulated in low-power hardware, showing remarkably efficient performance compared to conventional neuromorphic architectures.
Significance: These findings show that (i) an ensemble of free energy minimizing neurons, organized in a biologically plausible architecture, can recapitulate the functional self-organization observed in nature, such as associative plasticity, and (ii) a neuromorphic network of inference units can learn unsupervised tasks without embedding predefined learning rules in the circuit, providing a potential avenue to a novel form of brain-inspired artificial intelligence.
Affiliation(s)
- Daniela Gandolfi
- Department of Biomedical, Metabolic and Neural Sciences, University of Modena and Reggio Emilia, Via Campi 287, Modena, Emilia-Romagna, 41121, Italy
- Francesco Maria Puglisi
- DIEF, Universita degli Studi di Modena e Reggio Emilia, Via P. Vivarelli 10/1, Modena, MO, 41121, Italy
- Giulia Maria Boiani
- Department of Biomedical, Metabolic and Neural Sciences, University of Modena and Reggio Emilia, Via Campi 287, Modena, Emilia-Romagna, 41121, Italy
- Giuseppe Pagnoni
- Department of Biomedical, Metabolic and Neural Sciences, University of Modena and Reggio Emilia, Via Campi 287, Modena, Emilia-Romagna, 41121, Italy
- Karl J Friston
- Institute of Neurology, University College London, 23 Queen Square, London, WC1N 3BG, UK
- Egidio Ugo D'Angelo
- Department of Brain and Behavioral Sciences, University of Pavia, Via Forlanini 6, Pavia, Lombardia, 27100, Italy
- Jonathan Mapelli
- Department of Biomedical, Metabolic and Neural Sciences, University of Modena and Reggio Emilia, Via Campi 287, Modena, 41125, Italy
11. Isomura T, Shimazaki H, Friston KJ. Canonical neural networks perform active inference. Commun Biol 2022; 5:55. PMID: 35031656; PMCID: PMC8760273; DOI: 10.1038/s42003-021-02994-2.
Abstract
This work considers a class of canonical neural networks comprising rate coding models, wherein neural activity and plasticity minimise a common cost function, and plasticity is modulated with a certain delay. We show that such neural networks implicitly perform active inference and learning to minimise the risk associated with future outcomes. Mathematical analyses demonstrate that this biological optimisation can be cast as maximisation of model evidence, or equivalently minimisation of variational free energy, under the well-known form of a partially observed Markov decision process model. This equivalence indicates that the delayed modulation of Hebbian plasticity, accompanied by adaptation of firing thresholds, is a sufficient neuronal substrate to attain Bayes optimal inference and control. We corroborated this proposition using numerical analyses of maze tasks. This theory offers a universal characterisation of canonical neural networks in terms of Bayesian belief updating and provides insight into the neuronal mechanisms underlying planning and adaptive behavioural control.
Affiliation(s)
- Takuya Isomura
- Brain Intelligence Theory Unit, RIKEN Center for Brain Science, Wako, Saitama, 351-0198, Japan.
- Hideaki Shimazaki
- Center for Human Nature, Artificial Intelligence, and Neuroscience (CHAIN), Hokkaido University, Sapporo, Hokkaido, 060-0812, Japan
- Karl J Friston
- Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London, WC1N 3AR, UK
12. Isomura T. Active inference leads to Bayesian neurophysiology. Neurosci Res 2021; 175:38-45. PMID: 34968557; DOI: 10.1016/j.neures.2021.12.003.
Abstract
The neuronal substrates that implement the free-energy principle and ensuing active inference at the neuron and synapse level have not been fully elucidated. This Review considers possible neuronal substrates underlying the principle. First, the foundations of the free-energy principle are introduced, and then its ability to empirically explain various brain functions and psychological and biological phenomena in terms of Bayesian inference is described. Mathematically, the dynamics of neural activity and plasticity that minimise a cost function can be cast as performing Bayesian inference that minimises variational free energy. This equivalence licenses the adoption of the free-energy principle as a universal characterisation of neural networks. Further, the neural network structure itself represents a generative model under which an agent operates. A virtue of this perspective is that it enables the formal association of neural network properties with prior beliefs that regulate inference and learning. The possible neuronal substrates that implement prior beliefs and how to empirically examine the theory are discussed. This perspective renders brain activity explainable, leading to a deeper understanding of the neuronal mechanisms underlying basic psychology and psychiatric disorders in terms of an implicit generative model.
Affiliation(s)
- Takuya Isomura
- Brain Intelligence Theory Unit, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan.
13. Colombi I, Nieus T, Massimini M, Chiappalone M. Spontaneous and Perturbational Complexity in Cortical Cultures. Brain Sci 2021; 11:1453. PMID: 34827452; PMCID: PMC8615728; DOI: 10.3390/brainsci11111453.
Abstract
Dissociated cortical neurons in vitro display spontaneously synchronized, low-frequency firing patterns, which can resemble the slow wave oscillations characterizing sleep in vivo. Experiments in humans, rodents, and cortical slices have shown that awakening or the administration of activating neuromodulators decrease slow waves, while increasing the spatio-temporal complexity of responses to perturbations. In this study, we attempted to replicate those findings using in vitro cortical cultures coupled with micro-electrode arrays and chemically treated with carbachol (CCh), to modulate sleep-like activity and suppress slow oscillations. We adapted metrics such as neural complexity (NC) and the perturbational complexity index (PCI), typically employed in animal and human brain studies, to quantify complexity in simplified, unstructured networks, both during resting state and in response to electrical stimulation. After CCh administration, we found a decrease in the amplitude of the initial response and a marked enhancement of the complexity during spontaneous activity. Crucially, unlike in cortical slices and intact brains, PCI in cortical cultures displayed only a moderate increase. This dissociation suggests that PCI, a measure of the complexity of causal interactions, requires more than activating neuromodulation and that additional factors, such as an appropriate circuit architecture, may be necessary. Exploring more structured in vitro networks, characterized by the presence of strong lateral connections, recurrent excitation, and feedback loops, may thus help to identify the features that are more relevant to support causal complexity.
Affiliation(s)
- Ilaria Colombi
- Brain Development and Disease Laboratory, Istituto Italiano di Tecnologia, 16163 Genova, Italy
- Thierry Nieus
- Department of Biomedical and Clinical Sciences “L. Sacco”, University of Milan, 20157 Milan, Italy
- Marcello Massimini
- Department of Biomedical and Clinical Sciences “L. Sacco”, University of Milan, 20157 Milan, Italy
- IRCCS, Fondazione Don Carlo Gnocchi, 20148 Milan, Italy
- Michela Chiappalone
- Department of Informatics, Bioengineering, Robotics and System Engineering, 16145 Genova, Italy
- Rehab Technologies Lab, Istituto Italiano di Tecnologia, 16163 Genova, Italy
14
|
Fernandez-Leon JA, Acosta G. A heuristic perspective on non-variational free energy modulation at the sleep-like edge. Biosystems 2021; 208:104466. [PMID: 34246689 DOI: 10.1016/j.biosystems.2021.104466] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Revised: 06/03/2021] [Accepted: 06/21/2021] [Indexed: 11/16/2022]
Abstract
BACKGROUND The variational Free Energy Principle (FEP) establishes that a neural system minimizes a free energy function of its internal states through environmental sensing, entailing beliefs about hidden states in its environment. PROBLEM Because sensations are drastically reduced during sleep, it remains unclear how a self-organizing neural network can modulate free energy during sleep transitions. GOAL To address this issue, we study how a network's state-dependent changes in energy, entropy, and free energy connect with changes at the synaptic level, in the absence of sensing, during a sleep-like transition. APPROACH We use simulations of a physically plausible, environmentally isolated neuronal network that self-organizes after induction of a thalamic input, showing that the reduction of non-variational free energy depends sensitively upon thalamic input at a slow, rhythmic Poisson (delta) frequency, owing to spike-timing-dependent plasticity. METHODS We define a non-variational free energy in terms of the relative difference between the energy and entropy of the network, from the initial distribution (prior to activity-dependent plasticity) to the nonequilibrium steady-state distribution (after plasticity). We repeated the analysis under different levels of thalamic drive, defined by the number of cortical neurons in receipt of thalamic input. RESULTS Entraining slow activity with thalamic input induces a transition from a gamma (awake-like) to a delta (sleep-like) mode of activity, which can be characterized through a modulation of the network's energy and entropy (non-variational free energy) in the ensuing dynamics. The self-organizing response to low and high thalamic drive also showed characteristic differences in frequency content due to spike-timing-dependent plasticity. CONCLUSIONS The modulation of this non-variational free energy in a self-organizing network appears to be an organizational network principle. This could open a window onto new, empirically testable hypotheses about state changes in a neural network.
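The non-variational free energy described in METHODS can be made concrete with a toy network: take F = ⟨E⟩ − S (in units where k_B·T = 1) and compare a flat initial distribution over network states with a post-plasticity steady state. The Ising-style energy function and the Boltzmann steady state below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(states, J):
    """Toy quadratic (Ising-like) energy for each binary network state."""
    return -0.5 * np.einsum('si,ij,sj->s', states, J, states)

def free_energy(p, E):
    """Non-variational free energy F = <E>_p - S(p), with k_B T = 1."""
    S = -np.sum(p * np.log(p + 1e-12))        # Shannon entropy
    return np.sum(p * E) - S

n = 8                                          # neurons
J = rng.normal(scale=0.3, size=(n, n))
J = (J + J.T) / 2                              # symmetric couplings
states = ((np.arange(2**n)[:, None] >> np.arange(n)) & 1) * 2 - 1  # all +/-1 states
E = energy(states.astype(float), J)

p_initial = np.full(2**n, 1 / 2**n)            # flat prior, before plasticity
p_steady = np.exp(-E) / np.exp(-E).sum()       # Boltzmann steady state, after

# F decreases from the initial to the steady-state distribution
print(free_energy(p_initial, E) - free_energy(p_steady, E))
```

Because the Boltzmann distribution minimizes F at unit temperature, the difference printed is positive whenever the energy landscape is non-flat, mirroring the abstract's reduction of non-variational free energy across the transition.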
Collapse
Affiliation(s)
- Jose A Fernandez-Leon
- Neurology, Harvard Medical School, Brigham and Women's Hospital, Boston, MA, 02115, USA; Neuroscience, Baylor College of Medicine, Houston, TX, 77030, USA.
| | - Gerardo Acosta
- INTELYMEC-CIFICEN (UNCPBA-CICPBA-CONICET), Olavarría, B7400JWI, Argentina
| |
Collapse
|
15
|
Artificial neurovascular network (ANVN) to study the accuracy vs. efficiency trade-off in an energy dependent neural network. Sci Rep 2021; 11:13808. [PMID: 34226588 PMCID: PMC8257640 DOI: 10.1038/s41598-021-92661-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2021] [Accepted: 06/03/2021] [Indexed: 01/03/2023] Open
Abstract
Artificial feedforward neural networks perform a wide variety of classification and function approximation tasks with high accuracy. Unlike their artificial counterparts, biological neural networks require a supply of adequate energy delivered to single neurons by a network of cerebral microvessels. Since energy is a limited resource, a natural question is whether the cerebrovascular network is capable of ensuring maximum performance of the neural network while consuming minimum energy. Should the cerebrovascular network also be trained, along with the neural network, to achieve such an optimum? In order to answer the above questions in a simplified modeling setting, we constructed an Artificial Neurovascular Network (ANVN) comprising a multilayered perceptron (MLP) connected to a vascular tree structure. The root node of the vascular tree structure is connected to an energy source, and the terminal nodes of the vascular tree supply energy to the hidden neurons of the MLP. The energy delivered by the terminal vascular nodes to the hidden neurons determines the biases of the hidden neurons. The "weights" on the branches of the vascular tree depict the energy distribution from the parent node to the child nodes. The vascular weights are updated by a kind of "backpropagation" of the energy demand error generated by the hidden neurons. We observed that higher performance was achieved at lower energy levels when the vascular network was also trained along with the neural network. This indicates that the vascular network needs to be trained to ensure efficient neural performance. We observed that below a certain network size, the energetic dynamics of the network in the per capita energy consumption vs. classification accuracy space approaches a fixed-point attractor for various initial conditions. Once the number of hidden neurons increases beyond a threshold, the fixed point appears to vanish, giving way to a line of attractors.
The model also showed that when there is a limited resource, the energy consumption of neurons is strongly correlated to their individual contribution to the network's performance.
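The vascular "weights" and the "backpropagation" of energy demand error can be sketched in miniature. The one-level softmax tree, the squared-error objective, and its gradient below are illustrative assumptions standing in for the paper's ANVN tree and update rule:

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(w):
    e = np.exp(w - w.max())
    return e / e.sum()

# Illustrative one-level "vascular tree": the root holds the total energy
# budget and splits it among hidden neurons via softmax-normalized branch
# weights (a hypothetical stand-in for the ANVN's full tree).
k = 6
budget = 1.0
w = rng.normal(size=k)                 # trainable branch weights
demand = rng.dirichlet(np.ones(k))     # hypothetical per-neuron energy demand

lr = 2.0
for _ in range(5000):
    p = softmax(w)
    error = budget * p - demand        # energy-demand error at each neuron
    # gradient of 0.5*||error||^2 through the softmax split
    grad = budget * (p * error - p * np.dot(p, error))
    w -= lr * grad

print(np.abs(budget * softmax(w) - demand).max())  # small after training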
Collapse
|
16
|
Hipólito I, Ramstead MJD, Convertino L, Bhat A, Friston K, Parr T. Markov blankets in the brain. Neurosci Biobehav Rev 2021; 125:88-97. [PMID: 33607182 PMCID: PMC8373616 DOI: 10.1016/j.neubiorev.2021.02.003] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2020] [Revised: 01/18/2021] [Accepted: 02/01/2021] [Indexed: 01/19/2023]
Abstract
Recent characterisations of self-organising systems depend upon the presence of a 'Markov blanket': a statistical boundary that mediates the interactions between the inside and outside of a system. We leverage this idea to provide an analysis of partitions in neuronal systems. This is applicable to brain architectures at multiple scales, enabling partitions into single neurons, brain regions, and brain-wide networks. This treatment is based upon the canonical micro-circuitry used in empirical studies of effective connectivity, so as to speak directly to practical applications. The notion of effective connectivity depends upon the dynamic coupling between functional units, whose form recapitulates that of a Markov blanket at each level of analysis. The nuance afforded by partitioning neural systems in this way highlights certain limitations of 'modular' perspectives of brain function that only consider a single level of description.
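The core construct here, a Markov blanket as the set of nodes rendering inside and outside conditionally independent, is straightforward to compute for a directed graph: it is the union of a node's parents, children, and co-parents. A small sketch (the example graph is invented):

```python
import numpy as np

def markov_blanket(A, i):
    """Markov blanket of node i in a directed graph with adjacency
    matrix A (A[j, k] = 1 means an edge j -> k): the union of i's
    parents, children, and co-parents (other parents of i's children)."""
    parents = set(np.flatnonzero(A[:, i]))
    children = set(np.flatnonzero(A[i, :]))
    co_parents = {p for c in children for p in np.flatnonzero(A[:, c])}
    return (parents | children | co_parents) - {i}

# a small directed network: 0 -> 2, 1 -> 2, 2 -> 3, 4 -> 3
A = np.zeros((5, 5), int)
for j, k in [(0, 2), (1, 2), (2, 3), (4, 3)]:
    A[j, k] = 1

print(sorted(markov_blanket(A, 2)))  # [0, 1, 3, 4]
```

Node 2's blanket includes node 4, which shares no edge with node 2 but co-parents its child; this is the nuance that blankets add over purely "modular" partitions based on direct connections alone.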
Collapse
Affiliation(s)
- Inês Hipólito
- Humboldt-Universität zu Berlin, Department of Philosophy & Berlin School of Mind and Brain, Germany; Wellcome Centre for Human Neuroimaging, University College London, United Kingdom.
| | - Maxwell J D Ramstead
- Wellcome Centre for Human Neuroimaging, University College London, United Kingdom; Division of Social and Transcultural Psychiatry, Department of Psychiatry, McGill University, Montreal, Quebec, Canada; Culture, Mind, and Brain Program, McGill University, Montreal, Quebec, Canada
| | - Laura Convertino
- Wellcome Centre for Human Neuroimaging, University College London, United Kingdom; Institute of Cognitive Neuroscience (ICN), University College London, London, United Kingdom
| | - Anjali Bhat
- Wellcome Centre for Human Neuroimaging, University College London, United Kingdom
| | - Karl Friston
- Wellcome Centre for Human Neuroimaging, University College London, United Kingdom
| | - Thomas Parr
- Wellcome Centre for Human Neuroimaging, University College London, United Kingdom
| |
Collapse
|
17
|
Parr T. Message Passing and Metabolism. ENTROPY (BASEL, SWITZERLAND) 2021; 23:606. [PMID: 34068913 PMCID: PMC8156486 DOI: 10.3390/e23050606] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/30/2021] [Revised: 05/09/2021] [Accepted: 05/10/2021] [Indexed: 11/16/2022]
Abstract
Active inference is an increasingly prominent paradigm in theoretical biology. It frames the dynamics of living systems as if they were solving an inference problem. This rests upon their flow towards some (non-equilibrium) steady state, or equivalently, their maximisation of the Bayesian model evidence for an implicit probabilistic model. For many models, these self-evidencing dynamics manifest as messages passed among elements of a system. Such messages resemble synaptic communication at a neuronal network level but could also apply to other network structures. This paper attempts to apply the same formulation to biochemical networks. The chemical computation that occurs in the regulation of metabolism relies upon sparse interactions between coupled reactions, where enzymes induce conditional dependencies between reactants. We will see that these reactions may be viewed as the movement of probability mass between alternative categorical states. When framed in this way, the master equations describing such systems can be reformulated in terms of their steady-state distribution. This distribution plays the role of a generative model, affording an inferential interpretation of the underlying biochemistry. Finally, we see that, in analogy with computational neurology and psychiatry, metabolic disorders may be characterized as false inference under aberrant prior beliefs.
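The reformulation the abstract describes, reading a master equation's steady-state distribution as a generative model, begins with finding that steady state. A sketch for a toy three-state categorical system (the rate matrix is invented, and the column convention is an assumption):

```python
import numpy as np

# Rate matrix Q for a toy 3-state system (columns index the "from" state):
# Q[i, j] is the rate at which probability mass moves from state j to
# state i, with diagonal entries set so each column sums to zero.
Q = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [0.5, 1.0, 0.0]])
np.fill_diagonal(Q, -Q.sum(axis=0))

# the steady state is the null vector of Q, normalized to a distribution
eigvals, eigvecs = np.linalg.eig(Q)
v = np.real(eigvecs[:, np.argmin(np.abs(eigvals))])
p_ss = v / v.sum()

print(p_ss)                       # steady-state distribution over states
print(np.abs(Q @ p_ss).max())     # ~0: dp/dt vanishes at the steady state
```

For an irreducible system this null vector is unique and strictly positive, which is what licenses treating it as a (generative) probability distribution over categorical states.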
Collapse
Affiliation(s)
- Thomas Parr
- Wellcome Centre for Human Neuroimaging, Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
| |
Collapse
|
18
|
Isomura T, Toyoizumi T. On the Achievability of Blind Source Separation for High-Dimensional Nonlinear Source Mixtures. Neural Comput 2021; 33:1433-1468. [PMID: 34496387 DOI: 10.1162/neco_a_01378] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2020] [Accepted: 12/23/2020] [Indexed: 11/04/2022]
Abstract
For many years, a combination of principal component analysis (PCA) and independent component analysis (ICA) has been used for blind source separation (BSS). However, it remains unclear why these linear methods work well with real-world data that involve nonlinear source mixtures. This work theoretically validates that a cascade of linear PCA and ICA can solve a nonlinear BSS problem accurately, provided the sensory inputs are generated from hidden sources via nonlinear mappings with sufficient dimensionality. Our proposed theorem, termed the asymptotic linearization theorem, theoretically guarantees that applying linear PCA to the inputs can reliably extract a subspace spanned by the linear projections from every hidden source as the major components, and thus projecting the inputs onto their major eigenspace can effectively recover a linear transformation of the hidden sources. Subsequent application of linear ICA can then accurately separate all the true independent hidden sources. Zero-element-wise-error nonlinear BSS is asymptotically attained when the source dimensionality is large and the input dimensionality is sufficiently larger than the source dimensionality. Our proposed theorem is validated analytically and numerically. Moreover, the same computation can be performed using Hebbian-like plasticity rules, implying the biological plausibility of this nonlinear BSS strategy. Our results highlight the utility of linear PCA and ICA for accurately and reliably recovering nonlinearly mixed sources and suggest the importance of employing sensors with sufficient dimensionality to identify the true hidden sources of real-world data.
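The first half of the asymptotic linearization theorem, that linear PCA on a high-dimensional nonlinear mixture recovers a linear transformation of the hidden sources, can be checked numerically. The tanh mixture and the dimensions below are illustrative choices, and the subsequent ICA step is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(3)

# two independent hidden sources observed through a high-dimensional
# nonlinear mixture x = tanh(A s): the setting of the theorem
T, n_src, n_obs = 5000, 2, 200
s = rng.uniform(-1, 1, size=(T, n_src))
A = rng.normal(size=(n_obs, n_src))
x = np.tanh(s @ A.T)

# linear PCA: project onto the top-n_src principal subspace
xc = x - x.mean(axis=0)
_, _, Vt = np.linalg.svd(xc, full_matrices=False)
y = xc @ Vt[:n_src].T                  # (T, n_src) principal projections

# y should be (close to) a linear transformation of the hidden sources:
# regress s on y and check the fraction of source variance explained
B, *_ = np.linalg.lstsq(y, s, rcond=None)
r2 = 1 - ((s - y @ B) ** 2).sum(axis=0) / (s ** 2).sum(axis=0)
print(r2)  # close to 1 for both sources
```

Each source is almost perfectly linearly decodable from the two principal projections, even though no single sensor observes the sources linearly; a subsequent linear ICA would then unmix the rotation that PCA leaves unresolved.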
Collapse
Affiliation(s)
- Takuya Isomura
- Laboratory for Neural Computation and Adaptation and Brain Intelligence Theory Unit, RIKEN Center for Brain Science, Wako, Saitama 351-0198, Japan
| | - Taro Toyoizumi
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Wako, Saitama 351-0198, Japan, and Department of Mathematical Informatics, Graduate School of Information Science and Technology, University of Tokyo, Bunkyo-ku, Tokyo 113-8656, Japan
| |
Collapse
|
19
|
Kim CS. Bayesian mechanics of perceptual inference and motor control in the brain. BIOLOGICAL CYBERNETICS 2021; 115:87-102. [PMID: 33471182 PMCID: PMC7925488 DOI: 10.1007/s00422-021-00859-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/21/2020] [Accepted: 01/06/2021] [Indexed: 06/12/2023]
Abstract
The free energy principle (FEP) in the neurosciences stipulates that all viable agents induce and minimize informational free energy in the brain to fit their environmental niche. In this study, we continue our effort to make the FEP a more physically principled formalism by implementing free energy minimization based on the principle of least action. We build a Bayesian mechanics (BM) by extending the formulation reported in an earlier publication (Kim, Neural Comput 30:2616-2659, 2018, https://doi.org/10.1162/neco_a_01115 ) to active inference beyond passive perception. The BM is a neural implementation of variational Bayes under the FEP in continuous time. The resulting BM is provided as an effective Hamilton's equation of motion and is subject to the control signal arising from the brain's prediction errors at the proprioceptive level. To demonstrate the utility of our approach, we adopt a simple agent-based model and present a concrete numerical illustration of the brain performing recognition dynamics by integrating the BM in neural phase space. Furthermore, we recapitulate the major theoretical architectures in the FEP by comparing our approach with the common state-space formulations.
Collapse
Affiliation(s)
- Chang Sub Kim
- Department of Physics, Chonnam National University, Gwangju, 61186, Republic of Korea.
| |
Collapse
|
20
|
Ramstead MJD, Hesp C, Tschantz A, Smith R, Constant A, Friston K. Neural and phenotypic representation under the free-energy principle. Neurosci Biobehav Rev 2021; 120:109-122. [PMID: 33271162 PMCID: PMC7955287 DOI: 10.1016/j.neubiorev.2020.11.024] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2020] [Revised: 11/19/2020] [Accepted: 11/27/2020] [Indexed: 01/19/2023]
Abstract
The aim of this paper is to leverage the free-energy principle and its corollary process theory, active inference, to develop a generic, generalizable model of the representational capacities of living creatures; that is, a theory of phenotypic representation. Given their ubiquity, we are concerned with distributed forms of representation (e.g., population codes), whereby patterns of ensemble activity in living tissue come to represent the causes of sensory input or data. The active inference framework rests on the Markov blanket formalism, which allows us to partition systems of interest, such as biological systems, into internal states, external states, and the blanket (active and sensory) states that render internal and external states conditionally independent of each other. In this framework, the representational capacity of living creatures emerges as a consequence of their Markovian structure and nonequilibrium dynamics, which together entail a dual-aspect information geometry. This entails a modest representational capacity: internal states have an intrinsic information geometry that describes their trajectory over time in state space, as well as an extrinsic information geometry that allows internal states to encode (the parameters of) probabilistic beliefs about (fictive) external states. Building on this, we describe here how, in an automatic and emergent manner, information about stimuli can come to be encoded by groups of neurons bound by a Markov blanket; what is known as the neuronal packet hypothesis. As a concrete demonstration of this type of emergent representation, we present numerical simulations showing that self-organizing ensembles of active inference agents sharing the right kind of probabilistic generative model are able to encode recoverable information about a stimulus array.
Collapse
Affiliation(s)
- Maxwell J D Ramstead
- Division of Social and Transcultural Psychiatry, Department of Psychiatry, McGill University, Montreal, Quebec, Canada; Culture, Mind, and Brain Program, McGill University, Montreal, Quebec, Canada; Wellcome Centre for Human Neuroimaging, University College London, London, WC1N3BG, UK.
| | - Casper Hesp
- Wellcome Centre for Human Neuroimaging, University College London, London, WC1N3BG, UK; Department of Psychology, University of Amsterdam, Science Park 904, 1098 XH, Amsterdam, the Netherlands; Amsterdam Brain and Cognition Centre, University of Amsterdam, Science Park 904, 1098 XH, Amsterdam, the Netherlands; Institute for Advanced Study, University of Amsterdam, Oude Turfmarkt 147, 1012 GC Amsterdam, the Netherlands.
| | - Alexander Tschantz
- Sackler Centre for Consciousness Science, University of Sussex, Brighton, UK; Department of Informatics, University of Sussex, Brighton, UK.
| | - Ryan Smith
- Laureate Institute for Brain Research, Tulsa, OK, USA.
| | - Axel Constant
- Culture, Mind, and Brain Program, McGill University, Montreal, Quebec, Canada; Wellcome Centre for Human Neuroimaging, University College London, London, WC1N3BG, UK; Theory and Method in Biosciences, Level 6, Charles Perkins Centre D17, Johns Hopkins Drive, University of Sydney, NSW, 2006, Australia.
| | - Karl Friston
- Wellcome Centre for Human Neuroimaging, University College London, London, WC1N3BG, UK.
| |
Collapse
|
21
|
Sajid N, Parr T, Hope TM, Price CJ, Friston KJ. Degeneracy and Redundancy in Active Inference. Cereb Cortex 2020; 30:5750-5766. [PMID: 32488244 PMCID: PMC7899066 DOI: 10.1093/cercor/bhaa148] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2020] [Revised: 05/11/2020] [Accepted: 05/11/2020] [Indexed: 12/16/2022] Open
Abstract
The notions of degeneracy and redundancy are important constructs in many areas, ranging from genomics through to network science. Degeneracy finds a powerful role in neuroscience, explaining key aspects of distributed processing and structure-function relationships in the brain. For example, degeneracy accounts for the superadditive effect of lesions on functional deficits in terms of a "many-to-one" structure-function mapping. In this paper, we offer a principled account of degeneracy and redundancy, when function is operationalized in terms of active inference, namely, a formulation of perception and action as belief updating under generative models of the world. In brief, "degeneracy" is quantified by the "entropy" of posterior beliefs about the causes of sensations, while "redundancy" is the "complexity" cost incurred by forming those beliefs. From this perspective, degeneracy and redundancy are complementary: Active inference tries to minimize redundancy while maintaining degeneracy. This formulation is substantiated using statistical and mathematical notions of degenerate mappings and statistical efficiency. We then illustrate changes in degeneracy and redundancy during the learning of a word repetition task. Finally, we characterize the effects of lesions (to intrinsic and extrinsic connections) using in silico disconnections. These numerical analyses highlight the fundamental difference between degeneracy and redundancy, and how they score distinct imperatives for perceptual inference and structure learning that are relevant to synthetic and biological intelligence.
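The paper's operationalization, degeneracy as the entropy of posterior beliefs about causes and redundancy as the complexity cost of forming them, can be computed directly for a toy discrete model (the prior and likelihood values below are invented):

```python
import numpy as np

def entropy(p):
    return -np.sum(p * np.log(p + 1e-16))

def kl(q, p):
    return np.sum(q * np.log((q + 1e-16) / (p + 1e-16)))

# discrete generative model: 4 hidden causes, 3 possible observations
prior = np.array([0.25, 0.25, 0.25, 0.25])          # p(s)
likelihood = np.array([[0.8, 0.1, 0.1],             # p(o | s), one row per cause
                       [0.7, 0.2, 0.1],
                       [0.1, 0.1, 0.8],
                       [0.1, 0.2, 0.7]])

o = 0                                                # observed outcome
posterior = prior * likelihood[:, o]
posterior /= posterior.sum()

degeneracy = entropy(posterior)      # spread of beliefs: many causes fit
redundancy = kl(posterior, prior)    # complexity cost of belief updating
print(degeneracy, redundancy)
```

Here the first two causes both explain the observation well (a many-to-one mapping), so the posterior retains substantial entropy (degeneracy) while incurring only a modest complexity cost (redundancy) relative to the flat prior.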
Collapse
Affiliation(s)
- Noor Sajid
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, WC1N 3AR, UK
| | - Thomas Parr
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, WC1N 3AR, UK
| | - Thomas M Hope
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, WC1N 3AR, UK
| | - Cathy J Price
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, WC1N 3AR, UK
| | - Karl J Friston
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, WC1N 3AR, UK
| |
Collapse
|
22
|
Isomura T, Friston K. Reverse-Engineering Neural Networks to Characterize Their Cost Functions. Neural Comput 2020; 32:2085-2121. [PMID: 32946704 DOI: 10.1162/neco_a_01315] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
This letter considers a class of biologically plausible cost functions for neural networks, where the same cost function is minimized by both neural activity and plasticity. We show that such cost functions can be cast as a variational bound on model evidence under an implicit generative model. Using generative models based on partially observed Markov decision processes (POMDP), we show that neural activity and plasticity perform Bayesian inference and learning, respectively, by maximizing model evidence. Using mathematical and numerical analyses, we establish the formal equivalence between neural network cost functions and variational free energy under some prior beliefs about the latent states that generate inputs. These prior beliefs are determined by particular constants (e.g., thresholds) that define the cost function. This means that the Bayes-optimal encoding of latent or hidden states is achieved when the network's implicit priors match the process that generates its inputs. This equivalence is potentially important because it suggests that any hyperparameter of a neural network can itself be optimized, by minimization with respect to variational free energy. Furthermore, it enables one to characterize a neural network formally, in terms of its prior beliefs.
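The variational bound at the heart of this equivalence can be spelled out for a categorical model: free energy decomposes into complexity minus accuracy, and it is minimized, attaining −ln p(o), exactly when the encoded beliefs match the Bayesian posterior. A sketch with invented numbers:

```python
import numpy as np

def vfe(q, prior, lik_o):
    """Variational free energy for a categorical model and one observation:
    F = KL(q || p(s)) - E_q[ln p(o|s)]  (complexity minus accuracy)."""
    eps = 1e-16
    complexity = np.sum(q * np.log((q + eps) / (prior + eps)))
    accuracy = np.sum(q * np.log(lik_o + eps))
    return complexity - accuracy

prior = np.array([0.5, 0.3, 0.2])       # p(s): the network's implicit priors
lik_o = np.array([0.9, 0.2, 0.1])       # p(o | s) for the observed outcome o

evidence = np.sum(prior * lik_o)        # p(o)
posterior = prior * lik_o / evidence    # exact Bayesian posterior

# F is minimized (and equals -ln p(o)) when q is the exact posterior
q_flat = np.full(3, 1 / 3)
print(vfe(posterior, prior, lik_o), -np.log(evidence))  # equal
print(vfe(q_flat, prior, lik_o))                        # strictly larger
```

Any mismatch between the encoding distribution q and the true posterior shows up as excess free energy, which is why hyperparameters that shape the implicit priors can themselves be tuned by minimizing F.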
Collapse
Affiliation(s)
- Takuya Isomura
- Brain Intelligence Theory Unit, RIKEN Center for Brain Science, Wako, Saitama 351-0198, Japan
| | - Karl Friston
- Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, London, WC1N 3AR, U.K.
| |
Collapse
|
23
|
Roberts TP, Kern FB, Fernando C, Szathmáry E, Husbands P, Philippides AO, Staras K. Encoding Temporal Regularities and Information Copying in Hippocampal Circuits. Sci Rep 2019; 9:19036. [PMID: 31836825 PMCID: PMC6910951 DOI: 10.1038/s41598-019-55395-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2019] [Accepted: 11/23/2019] [Indexed: 12/02/2022] Open
Abstract
Discriminating, extracting and encoding temporal regularities is a critical requirement in the brain, relevant to sensory-motor processing and learning. However, the cellular mechanisms responsible remain enigmatic; for example, whether such abilities require specific, elaborately organized neural networks or arise from more fundamental, inherent properties of neurons. Here, using multi-electrode array technology, and focusing on interval learning, we demonstrate that sparse reconstituted rat hippocampal neural circuits are intrinsically capable of encoding and storing sub-second time intervals over timescales exceeding an hour, represented in changes in the spatio-temporal architecture of firing relationships among populations of neurons. This learning is accompanied by increases in mutual information and transfer entropy, formal measures related to information storage and flow. Moreover, temporal relationships derived from previously trained circuits can act as templates for copying intervals into untrained networks, suggesting the possibility of circuit-to-circuit information transfer. Our findings illustrate that dynamic encoding and stable copying of temporal relationships are fundamental properties of simple in vitro networks, with general significance for understanding elemental principles of information processing, storage and replication.
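The mutual information measure used here to track information storage can be sketched with a plug-in estimator on binarized spike trains. The spike statistics below are invented, standing in for recorded data:

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in mutual information (in bits) between two binary sequences."""
    joint = np.zeros((2, 2))
    for a, b in zip(x, y):
        joint[a, b] += 1
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    mi = 0.0
    for a in range(2):
        for b in range(2):
            if joint[a, b] > 0:
                mi += joint[a, b] * np.log2(joint[a, b] / (px[a] * py[b]))
    return mi

rng = np.random.default_rng(4)
n = 10_000
driver = rng.random(n) < 0.2                          # presynaptic spikes
follower = np.where(driver, rng.random(n) < 0.8,      # driven unit follows...
                    rng.random(n) < 0.05)             # ...rarely fires alone
independent = rng.random(n) < 0.2

print(mutual_information(driver.astype(int), follower.astype(int)))
print(mutual_information(driver.astype(int), independent.astype(int)))
```

The coupled pair shares a substantial fraction of a bit per time bin, while the independent pair's estimate sits near zero; shifting one train relative to the other before estimating turns the same machinery into a crude directed (transfer-entropy-like) probe.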
Collapse
Affiliation(s)
- Terri P Roberts
- Sussex Neuroscience, University of Sussex, Brighton, BN1 9QG, UK
| | - Felix B Kern
- Sussex Neuroscience, University of Sussex, Brighton, BN1 9QG, UK
- Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, BN1 9QJ, UK
| | - Chrisantha Fernando
- School of EECS, Queen Mary University of London, E1 4NS, London, UK
- Google DeepMind, London, N1C 4AG, UK
| | - Eörs Szathmáry
- Parmenides Center for the Conceptual Foundations of Science, 82049, Pullach, Munich, Germany
- Institute of Evolution, Centre for Ecological Research, 3 Klebelsberg Kuno Street, 8237, Tihany, Hungary
| | - Phil Husbands
- Sussex Neuroscience, University of Sussex, Brighton, BN1 9QG, UK.
- Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, BN1 9QJ, UK.
| | - Andrew O Philippides
- Sussex Neuroscience, University of Sussex, Brighton, BN1 9QG, UK
- Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, BN1 9QJ, UK
| | - Kevin Staras
- Sussex Neuroscience, University of Sussex, Brighton, BN1 9QG, UK.
| |
Collapse
|
24
|
Connolly P. The Gravity of Objects: How Affectively Organized Generative Models Influence Perception and Social Behavior. Front Psychol 2019; 10:2599. [PMID: 31824382 PMCID: PMC6881275 DOI: 10.3389/fpsyg.2019.02599] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2019] [Accepted: 11/01/2019] [Indexed: 11/13/2022] Open
Abstract
Friston's (2010) free energy principle (FEP) offers an opportunity to rethink what is meant by the psychoanalytic concept of an object, or discrete mental representation (Ogden, 1992). The significance of such objects in psychoanalysis is that they may be superimposed on current experience, so that perceptions are partly composed of projected fantasy and partly of more realistic perception. From a free energy perspective, the psychoanalytic (person) object may be understood as a bounded set of prior beliefs about a "platonic" sort of person, which provides a free-energy-minimizing, evidence-maximizing hypothesis to explain inference about, or dyadic interactions with, another. The degree to which realistic perception supervenes, relative to a platonic person object, will depend upon the precision assigned to the sensory evidence (concerning the person) relative to the prior beliefs about a platonic form. This provides a basis not only for explaining projection and transference phenomena but also for conceptualizing a central assumption within object relations psychoanalysis. As an example, the paper examines the Kleinian theory of split good or bad part objects as affectively organized generative models (or platonic part-object models) formed in early infancy. This also provides a basis for building on work by Kernberg (1984, 1996) by conceptualizing the role of the part object(s) in a continuum of reality testing: from mild errors in perception that are relatively easily corrected, through borderline affective instability and frequent shifts between part-object experience, to psychotic failures of reality testing, where Friston et al. (2016) proposed that aberrant precisions bias perception towards high-precision false beliefs (here cast as platonic part objects), such as stable perceptions of others (and possibly oneself) as persecutory agents of some sort.
The paper demonstrates the value that the history of clinical insights into psychoanalysis (including object relations) and a system-based approach to the brain (including the free energy principle) can have for one another. This is offered as a demonstration of the potential value of an "Integrative Clinical Systems Psychology" proposed by Tretter and Löffler-Stastka (2018), which has the potential to integrate the major theoretical frameworks in the field today.
Collapse
Affiliation(s)
- Patrick Connolly
- Counselling and Psychology Department, Hong Kong Shue Yan University, North Point, Hong Kong
| |
Collapse
|
25
|
Isomura T, Toyoizumi T. Multi-context blind source separation by error-gated Hebbian rule. Sci Rep 2019; 9:7127. [PMID: 31073206 PMCID: PMC6509167 DOI: 10.1038/s41598-019-43423-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2018] [Accepted: 04/23/2019] [Indexed: 11/08/2022] Open
Abstract
Animals need to adjust their inferences according to the context they are in. This is required for the multi-context blind source separation (BSS) task, where an agent needs to infer hidden sources from their context-dependent mixtures. The agent is expected to invert this mixing process for all contexts. Here, we show that a neural network that implements the error-gated Hebbian rule (EGHR) with sufficiently redundant sensory inputs can successfully learn this task. After training, the network can perform the multi-context BSS without further updating synapses, by retaining memories of all experienced contexts. This demonstrates an attractive use of the EGHR for dimensionality reduction by extracting low-dimensional sources across contexts. Finally, if there is a common feature shared across contexts, the EGHR can extract it and generalize the task to even inexperienced contexts. The results highlight the utility of the EGHR as a model for perceptual adaptation in animals.
Collapse
Affiliation(s)
- Takuya Isomura
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Wako, Saitama, 351-0198, Japan.
| | - Taro Toyoizumi
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Wako, Saitama, 351-0198, Japan.
- RIKEN CBS-OMRON Collaboration Center, Wako, Saitama, 351-0198, Japan.
| |
Collapse
|
26
|
Palacios ER, Isomura T, Parr T, Friston K. The emergence of synchrony in networks of mutually inferring neurons. Sci Rep 2019; 9:6412. [PMID: 31040386 PMCID: PMC6491596 DOI: 10.1038/s41598-019-42821-7] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2018] [Accepted: 04/08/2019] [Indexed: 01/05/2023] Open
Abstract
This paper considers the emergence of a generalised synchrony in ensembles of coupled self-organising systems, such as neurons. We start from the premise that any self-organising system complies with the free energy principle, in virtue of placing an upper bound on its entropy. Crucially, the free energy principle allows one to interpret biological systems as inferring the state of their environment or external milieu. An emergent property of this inference is synchronisation among an ensemble of systems that infer each other. Here, we investigate the implications for neuronal dynamics by simulating neuronal networks, where each neuron minimises its free energy. We cast the ensuing ensemble dynamics in terms of inference and show that cardinal behaviours of neuronal networks, both in vivo and in vitro, can be explained by this framework. In particular, we test the hypotheses that (i) generalised synchrony is an emergent property of free energy minimisation, thereby explaining synchronisation in the resting brain; (ii) desynchronisation is induced by exogenous input, thereby explaining event-related desynchronisation; and (iii) structure learning emerges in response to causal structure in exogenous input, thereby explaining functional segregation in real neuronal systems.
Collapse
Affiliation(s)
- Ensor Rafael Palacios
- The Wellcome Centre for Human Neuroimaging, University College London, Queen Square, London, WC1N 3BG, UK.
| | - Takuya Isomura
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Hirosawa, Wako, Saitama, 351-0198, Japan
| | - Thomas Parr
- The Wellcome Centre for Human Neuroimaging, University College London, Queen Square, London, WC1N 3BG, UK
| | - Karl Friston
- The Wellcome Centre for Human Neuroimaging, University College London, Queen Square, London, WC1N 3BG, UK
| |
Collapse
|
27
|
Holmes J, Nolte T. "Surprise" and the Bayesian Brain: Implications for Psychotherapy Theory and Practice. Front Psychol 2019; 10:592. [PMID: 30984063 PMCID: PMC6447687 DOI: 10.3389/fpsyg.2019.00592] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2018] [Accepted: 03/04/2019] [Indexed: 01/19/2023] Open
Abstract
The free energy principle (FEP) has gained widespread interest and growing acceptance as a new paradigm of brain function, but has had little impact on the theory and practice of psychotherapy. The aim of this paper is to redress this. Brains rely on Bayesian inference, during which “bottom-up” sensations are matched with “top-down” predictions. Discrepancies result in “prediction error.” The brain abhors informational “surprise,” which is minimized by (1) action enhancing the statistical likelihood of sensory samples, (2) revising inferences in the light of experience, updating “priors” to reality-aligned “posteriors,” and (3) optimizing the complexity of our generative models of a capricious world. In all three, free energy is converted to bound energy. In psychopathology, energy either remains unbound, as in trauma and inhibition of agency, or manifests as restricted, anachronistic “top-down” narratives. Psychotherapy fosters client agency, linguistic and practical. Temporarily uncoupling bottom-up sensation from top-down automatism and fostering scrutinized simulations sets a number of salutary processes in train. Mentalising enriches Bayesian inference, enabling experience and feeling states to be “metabolized” and assimilated. “Free association” enhances more inclusive sensory sampling, while dream analysis foregrounds salient emotional themes as “attractors.” FEP parallels with psychoanalytic theory are outlined, including Freud’s unpublished “Project,” Bion’s “contact barrier” concept, the Fonagy/Target model of sexuality, Laplanche’s therapist as “enigmatic signifier,” and the role of projective identification. Therapy stimulates patients to become aware of, and revise, the priors they bring to interpersonal experience. In the therapeutic “duet for one,” the energy-binding skills and non-partisan stance of the analyst help sufferers face trauma without being overwhelmed by psychic entropy. Overall, the FEP provides a sound theoretical basis for psychotherapy practice, training, and research.
Collapse
Affiliation(s)
- Jeremy Holmes
- Department of Psychology, College of Life and Environmental Sciences, University of Exeter, Exeter, United Kingdom
| | - Tobias Nolte
- University College London, Anna Freud National Centre for Children and Families, London, United Kingdom
| |
Collapse
|