1. Torresan F, Baltieri M. Disentangled representations for causal cognition. Phys Life Rev 2024;51:343-381. PMID: 39500032. doi:10.1016/j.plrev.2024.10.003.
Abstract
Complex adaptive agents consistently achieve their goals by solving problems that seem to require an understanding of causal information, information pertaining to the causal relationships that exist among elements of combined agent-environment systems. Causal cognition studies and describes the main characteristics of causal learning and reasoning in human and non-human animals, offering a conceptual framework to discuss cognitive performances based on the level of apparent causal understanding of a task. Despite the use of formal intervention-based models of causality, including causal Bayesian networks, psychological and behavioural research on causal cognition does not yet offer a computational account that operationalises how agents acquire a causal understanding of the world seemingly from scratch, i.e. without a priori knowledge of relevant features of the environment. Research on causality in machine and reinforcement learning, especially involving disentanglement as a candidate process for building causal representations, represents, on the other hand, a concrete attempt at designing artificial agents that can learn about causality, shedding light on the inner workings of natural causal cognition. In this work, we connect these two areas of research to build a unifying framework for causal cognition that will offer a computational perspective on studies of animal cognition, and provide insights into the development of new algorithms for causal reinforcement learning in AI.
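As a concrete illustration of the intervention-based models the abstract refers to, the sketch below builds a toy causal Bayesian network and contrasts an observational query with an interventional (do-operator) query. The variables, probabilities, and Monte Carlo estimation are illustrative assumptions, not taken from the paper.

```python
import random

# Toy causal Bayesian network: Rain -> Sprinkler, Rain -> WetGrass, Sprinkler -> WetGrass.
# Structure and probabilities are illustrative assumptions only.

def sample(do_sprinkler=None):
    """Draw one joint sample; passing do_sprinkler implements the intervention do(Sprinkler = x)."""
    rain = random.random() < 0.2
    if do_sprinkler is None:
        sprinkler = random.random() < (0.01 if rain else 0.4)  # observational mechanism
    else:
        sprinkler = do_sprinkler  # intervention: the usual mechanism is cut
    p_wet = 0.99 if (rain and sprinkler) else 0.90 if (rain or sprinkler) else 0.05
    wet = random.random() < p_wet
    return rain, sprinkler, wet

n = 100_000

# Interventional query: P(WetGrass = 1 | do(Sprinkler = 1)), estimated by Monte Carlo.
p_do = sum(sample(do_sprinkler=True)[2] for _ in range(n)) / n

# Observational query: P(WetGrass = 1 | Sprinkler = 1), estimated by conditioning on samples.
obs = [s for s in (sample() for _ in range(n)) if s[1]]
p_obs = sum(s[2] for s in obs) / len(obs)

print(f"do(Sprinkler=1): {p_do:.3f}   observe Sprinkler=1: {p_obs:.3f}")
```

The two estimates differ because conditioning on the sprinkler being on changes the belief about rain, whereas intervening does not; this gap is exactly what distinguishes causal from purely associative knowledge.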
Affiliation(s)
- Filippo Torresan
- University of Sussex, Falmer, Brighton, BN1 9RH, United Kingdom.
- Manuel Baltieri
- University of Sussex, Falmer, Brighton, BN1 9RH, United Kingdom; Araya Inc., Chiyoda City, Tokyo, 101 0025, Japan.
2. Singh SH, van Breugel F, Rao RPN, Brunton BW. Emergent behaviour and neural dynamics in artificial agents tracking odour plumes. Nat Mach Intell 2023;5:58-70. PMID: 37886259. PMCID: PMC10601839. doi:10.1038/s42256-022-00599-w.
Abstract
Tracking an odour plume to locate its source under variable wind and plume statistics is a complex task. Flying insects routinely accomplish such tracking, often over long distances, in pursuit of food or mates. Several aspects of this remarkable behaviour and its underlying neural circuitry have been studied experimentally. Here we take a complementary in silico approach to develop an integrated understanding of their behaviour and neural computations. Specifically, we train artificial recurrent neural network agents using deep reinforcement learning to locate the source of simulated odour plumes that mimic features of plumes in a turbulent flow. Interestingly, the agents' emergent behaviours resemble those of flying insects, and the recurrent neural networks learn to compute task-relevant variables with distinct dynamic structures in population activity. Our analyses put forward a testable behavioural hypothesis for tracking plumes in changing wind direction, and we provide key intuitions for memory requirements and neural dynamics in odour plume tracking.
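The core architectural idea in the abstract, a recurrent policy whose hidden state supplies the memory needed for plume tracking, can be sketched as follows. The observation layout, dimensions, and action semantics are illustrative assumptions, not the network or training setup reported in the paper.

```python
import torch
import torch.nn as nn

# Minimal sketch of a recurrent plume-tracking policy: a GRU cell carries a memory
# of recent odour/wind observations and a linear head outputs continuous actions.

class RecurrentAgent(nn.Module):
    def __init__(self, obs_dim=3, hidden_dim=64, act_dim=2):
        super().__init__()
        self.cell = nn.GRUCell(obs_dim, hidden_dim)    # recurrence acts as working memory
        self.policy = nn.Linear(hidden_dim, act_dim)   # e.g. turn rate and forward thrust

    def forward(self, obs, h):
        h = self.cell(obs, h)                  # update memory with the latest observation
        return torch.tanh(self.policy(h)), h  # bounded continuous action

agent = RecurrentAgent()
h = torch.zeros(1, 64)  # initial hidden state
for t in range(100):
    # Hypothetical egocentric observation: [odour concentration, wind x, wind y].
    obs = torch.rand(1, 3)
    action, h = agent(obs, h)
# This rollout only shows the forward pass; in the paper's setting such a policy
# is trained with deep reinforcement learning on simulated turbulent plumes.
```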
3. Voudouris K, Crosby M, Beyret B, Hernández-Orallo J, Shanahan M, Halina M, Cheke LG. Direct Human-AI Comparison in the Animal-AI Environment. Front Psychol 2022;13:711821. PMID: 35686061. PMCID: PMC9172850. doi:10.3389/fpsyg.2022.711821.
Abstract
Artificial Intelligence is making rapid and remarkable progress in the development of more sophisticated and powerful systems. However, the acknowledgement of several problems with modern machine learning approaches has prompted a shift in AI benchmarking away from task-oriented testing (such as Chess and Go) towards ability-oriented testing, in which AI systems are tested on their capacity to solve certain kinds of novel problems. The Animal-AI Environment is one such benchmark which aims to apply the ability-oriented testing used in comparative psychology to AI systems. Here, we present the first direct human-AI comparison in the Animal-AI Environment, using children aged 6-10 (n = 52). We found that children of all ages were significantly better than a sample of 30 AIs across most of the tests we examined, as well as performing significantly better than the two top-scoring AIs, "ironbar" and "Trrrrr," from the Animal-AI Olympics Competition 2019. While children and AIs performed similarly on basic navigational tasks, AIs performed significantly worse in more complex cognitive tests, including detour tasks, spatial elimination tasks, and object permanence tasks, indicating that AIs lack several cognitive abilities that children aged 6-10 possess. Both children and AIs performed poorly on tool-use tasks, suggesting that these tests are challenging for both biological and non-biological machines.
Affiliation(s)
- Konstantinos Voudouris
- Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, United Kingdom
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Matthew Crosby
- Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, United Kingdom
- Department of Computing, Imperial College London, London, United Kingdom
- Benjamin Beyret
- Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, United Kingdom
- Department of Computing, Imperial College London, London, United Kingdom
- José Hernández-Orallo
- Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, United Kingdom
- Valencian Research Institute for Artificial Intelligence (VRAIN), Universitat Politècnica de València, València, Spain
- Murray Shanahan
- Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, United Kingdom
- Department of Computing, Imperial College London, London, United Kingdom
- Marta Halina
- Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, United Kingdom
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Department of History and Philosophy of Science, University of Cambridge, Cambridge, United Kingdom
- Lucy G. Cheke
- Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, United Kingdom
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
4. Exploring Data-Driven Components of Socially Intelligent AI through Cooperative Game Paradigms. Multimodal Technologies and Interaction 2022. doi:10.3390/mti6020016.
Abstract
The development of new approaches for creating more “life-like” artificial intelligence (AI) capable of natural social interaction is of interest to a number of scientific fields, from virtual reality to human–robot interaction to natural language speech systems. Yet how such “Social AI” agents might be manifested remains an open question. Previous research has shown that both behavioral factors related to the artificial agent itself and contextual factors beyond the agent (i.e., interaction context) play a critical role in how people perceive interactions with interactive technology. As such, there is a need for customizable agents and customizable environments that allow us to explore both sides in a simultaneous manner. To that end, we describe here the development of a cooperative game environment and Social AI using a data-driven approach, which allows us to simultaneously manipulate different components of the social interaction (both behavioral and contextual). We conducted multiple human–human and human–AI interaction experiments to better understand the components necessary for the creation of a Social AI virtual avatar capable of autonomously speaking and interacting with humans in multiple languages during cooperative gameplay (in this case, a social survival video game) in context-relevant ways.
5. A philosophical view on singularity and strong AI. AI & Society 2022. doi:10.1007/s00146-021-01327-5.
6. Kneer M. Can a Robot Lie? Exploring the Folk Concept of Lying as Applied to Artificial Agents. Cogn Sci 2021;45:e13032. PMID: 34606119. PMCID: PMC9285490. doi:10.1111/cogs.13032.
Abstract
The potential capacity for robots to deceive has received considerable attention recently. Many papers explore the technical possibility for a robot to engage in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment that investigates the following three questions: (a) Are ordinary people willing to ascribe deceptive intentions to artificial agents? (b) Are they as willing to judge a robot lie as a lie as they would be when human agents engage in verbal deception? (c) Do people blame a lying artificial agent to the same extent as a lying human agent? The response to all three questions is a resounding yes. This, I argue, implies that robot deception and its normative consequences deserve considerably more attention than they presently receive.
Affiliation(s)
- Markus Kneer
- Center for Ethics, Department of Philosophy, University of Zurich
- Digital Society Initiative, University of Zurich
7. Shanahan M, Crosby M, Beyret B, Cheke L. Artificial Intelligence and the Common Sense of Animals. Trends Cogn Sci 2020;24:862-872. PMID: 33041199. doi:10.1016/j.tics.2020.09.002.
Abstract
The problem of common sense remains a major obstacle to progress in artificial intelligence. Here, we argue that common sense in humans is founded on a set of basic capacities that are possessed by many other animals, capacities pertaining to the understanding of objects, space, and causality. The field of animal cognition has developed numerous experimental protocols for studying these capacities and, thanks to progress in deep reinforcement learning (RL), it is now possible to apply these methods directly to evaluate RL agents in 3D environments. Besides evaluation, the animal cognition literature offers a rich source of behavioural data, which can serve as inspiration for RL tasks and curricula.