1
Abstract
The explainability of robotic systems depends on people’s ability to reliably attribute perceptual beliefs to robots, i.e., what robots know (or believe) about objects and events in the world based on their perception. However, the perceptual systems of robots are not necessarily well understood by the majority of people interacting with them. In this article, we explain why this is a significant, difficult, and unique problem in social robotics. The inability to judge what a robot knows (and does not know) about the physical environment it shares with people gives rise to a host of communicative and interactive issues, including difficulties in communicating about objects or adapting to events in the environment. The challenge faced by social robotics researchers or designers who want to facilitate appropriate attributions of perceptual beliefs to robots is to shape human–robot interactions so that people understand what robots know about objects and events in the environment. To meet this challenge, we argue, it is necessary to advance our knowledge of when and why people form incorrect or inadequate mental models of robots’ perceptual and cognitive mechanisms. We outline a general approach to studying this empirically and discuss potential solutions to the problem.
2
Chen B, Vondrick C, Lipson H. Visual behavior modelling for robotic theory of mind. Sci Rep 2021; 11:424. PMID: 33431917. PMCID: PMC7801744. DOI: 10.1038/s41598-020-77918-x.
Abstract
Behavior modeling is an essential cognitive ability that underlies many aspects of human and animal social behavior (Watson in Psychol Rev 20:158, 1913), and an ability with which we would like to endow robots. Most studies of machine behavior modelling, however, rely on symbolic or selected parametric sensory inputs and built-in knowledge relevant to a given task. Here, we propose that an observer can model the behavior of an actor through visual processing alone, without any prior symbolic information or assumptions about relevant inputs. To test this hypothesis, we designed a non-verbal, non-symbolic robotic experiment in which an observer must visualize the future plans of an actor robot, based only on an image depicting the initial scene of the actor robot. We found that an AI observer is able to visualize the future plans of the actor with 98.5% success across four different activities, even when the activity is not known a priori. We hypothesize that such visual behavior modeling is an essential cognitive ability that will allow machines to understand and coordinate with surrounding agents, while sidestepping the notorious symbol grounding problem. Through a false-belief test, we suggest that this approach may be a precursor to Theory of Mind, one of the distinguishing hallmarks of primate social cognition.
Affiliations
- Boyuan Chen
- Computer Science, Columbia University, Mudd 535, 500 W 120 St, New York, NY, 10027, USA
- Carl Vondrick
- Computer Science, Columbia University, Mudd 535, 500 W 120 St, New York, NY, 10027, USA
- 611 CEPSR, 530 W 120 St, New York, NY, 10027, USA
- Hod Lipson
- Mechanical Engineering, Columbia University, Mudd 535E, 500 W 120 St, New York, NY, 10027, USA
- Data Science, Columbia University, New York, NY, 10027, USA
3
Liu R, Zhang X. A review of methodologies for natural-language-facilitated human–robot cooperation. Int J Adv Robot Syst 2019. DOI: 10.1177/1729881419851402.
Abstract
Natural-language-facilitated human–robot cooperation refers to using natural language to facilitate interactive information sharing and task execution, under a common goal constraint, between robots and humans. Recently, natural-language-facilitated human–robot cooperation research has received increasing attention. Typical scenarios include robotic daily assistance, robotic health caregiving, intelligent manufacturing, autonomous navigation, and robotic social accompaniment. However, a thorough review revealing the latest methodologies for using natural language to facilitate human–robot cooperation has been missing. In this review, we comprehensively investigate natural-language-facilitated human–robot cooperation methodologies, summarizing the research along three aspects: natural language instruction understanding, natural-language-based execution plan generation, and knowledge–world mapping. We also provide an in-depth analysis of theoretical methods, applications, and the advantages and disadvantages of each model. Based on our review and perspective, we discuss future directions of natural-language-facilitated human–robot cooperation research.
Affiliations
- Rui Liu
- Robotics Institute (RI), Carnegie Mellon University, Pittsburgh, PA, USA
- Xiaoli Zhang
- Intelligent Robotics and Systems Lab (IRSL), Colorado School of Mines, Golden, CO, USA
4
Chatila R, Renaudo E, Andries M, Chavez-Garcia RO, Luce-Vayrac P, Gottstein R, Alami R, Clodic A, Devin S, Girard B, Khamassi M. Toward Self-Aware Robots. Front Robot AI 2018; 5:88. PMID: 33500967. PMCID: PMC7805649. DOI: 10.3389/frobt.2018.00088.
Abstract
Despite major progress in Robotics and AI, robots are still basically "zombies" repeatedly performing actions and tasks without understanding what they are doing. Deep-learning AI programs classify tremendous amounts of data without grasping the meaning of their inputs or outputs. We still lack a genuine theory of the underlying principles and methods that would enable robots to understand their environment, to be cognizant of what they do, to take appropriate and timely initiatives, to learn from their own experience, and to show that they know that they have learned and how. The rationale of this paper is that an agent's understanding of its environment (including the agent itself and its effects on that environment) requires self-awareness, which itself emerges as a result of this understanding and of the distinction the agent is capable of making between its own mind-body and its environment. The paper develops along five issues: agent perception and interaction with the environment; learning actions; agent interaction with other agents, specifically humans; decision-making; and the cognitive architecture integrating these capacities.
Affiliations
- Raja Chatila
- Institute of Intelligent Systems and Robotics, Sorbonne Université, CNRS, Paris, France
- Erwan Renaudo
- Institute of Intelligent Systems and Robotics, Sorbonne Université, CNRS, Paris, France
- Intelligent and Interactive Systems, Department of Computer Science, University of Innsbruck, Innsbruck, Austria
- Mihai Andries
- Institute of Intelligent Systems and Robotics, Sorbonne Université, CNRS, Paris, France
- Institute for Systems and Robotics, Instituto Superior Técnico, Lisbon, Portugal
- Ricardo-Omar Chavez-Garcia
- Institute of Intelligent Systems and Robotics, Sorbonne Université, CNRS, Paris, France
- Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA), Università della Svizzera Italiana - Scuola universitaria professionale della Svizzera italiana (USI-SUPSI), Lugano, Switzerland
- Pierre Luce-Vayrac
- Institute of Intelligent Systems and Robotics, Sorbonne Université, CNRS, Paris, France
- Raphael Gottstein
- Institute of Intelligent Systems and Robotics, Sorbonne Université, CNRS, Paris, France
- Rachid Alami
- Intelligent and Interactive Systems, Department of Computer Science, University of Innsbruck, Innsbruck, Austria
- Aurélie Clodic
- LAAS-CNRS, Université de Toulouse, CNRS, Toulouse, France
- Sandra Devin
- Intelligent and Interactive Systems, Department of Computer Science, University of Innsbruck, Innsbruck, Austria
- Benoît Girard
- Institute of Intelligent Systems and Robotics, Sorbonne Université, CNRS, Paris, France
- Mehdi Khamassi
- Institute of Intelligent Systems and Robotics, Sorbonne Université, CNRS, Paris, France
5
Winfield AFT. Experiments in Artificial Theory of Mind: From Safety to Story-Telling. Front Robot AI 2018; 5:75. PMID: 33500954. PMCID: PMC7806090. DOI: 10.3389/frobt.2018.00075.
Abstract
Theory of mind is the term given by philosophers and psychologists for the ability to form a predictive model of self and others. In this paper we focus on synthetic models of theory of mind. We contend firstly that such models—especially when tested experimentally—can provide useful insights into cognition, and secondly that artificial theory of mind can provide intelligent robots with powerful new capabilities, in particular social intelligence for human-robot interaction. This paper advances the hypothesis that simulation-based internal models offer a powerful and realisable, theory-driven basis for artificial theory of mind. Proposed as a computational model of the simulation theory of mind, our simulation-based internal model equips a robot with an internal model of itself and its environment, including other dynamic actors, which can test (i.e., simulate) the robot's next possible actions and hence anticipate the likely consequences of those actions both for itself and others. Although it falls far short of a full artificial theory of mind, our model does allow us to test several interesting scenarios: in some of these a robot equipped with the internal model interacts with other robots without an internal model, but acting as proxy humans; in others two robots each with a simulation-based internal model interact with each other. We outline a series of experiments which each demonstrate some aspect of artificial theory of mind.
Affiliations
- Alan F T Winfield
- Bristol Robotics Laboratory, University of the West of England, Bristol, United Kingdom
7
Liu R, Zhang X. Understanding Human Behaviors with an Object Functional Role Perspective for Robotics. IEEE Trans Cogn Dev Syst 2016. DOI: 10.1109/tamd.2015.2504919.
8
Dang THH, Tapus A. Stress Game: The Role of Motivational Robotic Assistance in Reducing User’s Task Stress. Int J Soc Robot 2014. DOI: 10.1007/s12369-014-0256-9.