1
Matarese M, Rea F, Sciutti A. Perception is Only Real When Shared: A Mathematical Model for Collaborative Shared Perception in Human-Robot Interaction. Front Robot AI 2022; 9:733954. [PMID: 35783020 PMCID: PMC9240641 DOI: 10.3389/frobt.2022.733954]
Abstract
Partners in everyday collaborative tasks have to build a shared understanding of their environment by aligning their perceptions and establishing a common ground. This is one of the aims of shared perception: revealing characteristics of one's individual perception to others with whom we share the same environment. In this regard, social cognitive processes such as joint attention and perspective-taking form a shared perception. From a Human-Robot Interaction (HRI) perspective, robots would benefit from the ability to establish shared perception with humans and, consequently, a common understanding of the environment with their partners. In this work, we assessed whether a robot that accounts for the differences in perception between itself and its partner can be more effective in its helping role, and to what extent this improves task completion and the interaction experience. For this purpose, we designed a mathematical model for collaborative shared perception that aims to maximise the collaborators' knowledge of the environment when there are asymmetries in perception. Moreover, we instantiated and tested our model in a real HRI scenario: a cooperative game in which participants had to build towers of Lego bricks while the robot took the role of a suggester. In particular, we conducted experiments using two different robot behaviours. In one condition, based on shared perception, the robot gave suggestions by considering the partner's point of view and using its inference about their common ground to select the most informative hint. In the other condition, the robot simply indicated the brick that would yield the highest score from its individual perspective. The adoption of shared perception in the selection of suggestions led to better performance in all the instances of the game where the visual information was not a priori common to both agents. However, the subjective evaluation of the robot's behaviour did not differ between conditions.
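The hint-selection idea described in this abstract can be sketched as follows. This is a minimal illustration under assumed representations (sets of visible brick ids, a score per brick), with hypothetical names; it is not the paper's actual mathematical model:

```python
# Hypothetical sketch: choosing the most informative hint under
# asymmetric perception. Bricks visible only to the robot carry
# information the partner lacks; bricks already in the common
# ground tell the partner nothing new.

def select_hint(robot_view, partner_view, scores):
    """Pick the brick that maximises the partner's knowledge gain.

    robot_view, partner_view: sets of brick ids each agent can see.
    scores: dict mapping brick id -> points it yields in the game.
    """
    common_ground = robot_view & partner_view
    private = robot_view - partner_view       # only the robot sees these
    # Prefer high-scoring bricks outside the common ground; fall back
    # to the best commonly visible brick when perception is symmetric.
    candidates = private if private else common_ground
    return max(candidates, key=lambda b: scores[b])

robot_view = {"red", "blue", "green"}
partner_view = {"red", "green"}
scores = {"red": 3, "blue": 5, "green": 1}
print(select_hint(robot_view, partner_view, scores))  # -> blue
```

When the two views coincide, this sketch degenerates to the paper's baseline condition: the robot simply points at the highest-scoring brick from its own perspective.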
Affiliation(s)
- Marco Matarese
- DIBRIS Department, University of Genoa, Genoa, Italy
- RBCS Unit, Italian Institute of Technology, Genoa, Italy
- Correspondence: Marco Matarese
- Francesco Rea
- RBCS Unit, Italian Institute of Technology, Genoa, Italy
2
Blum S, Klaproth O, Russwinkel N. Cognitive Modeling of Anticipation: Unsupervised Learning and Symbolic Modeling of Pilots' Mental Representations. Top Cogn Sci 2022; 14:718-738. [PMID: 35005841 DOI: 10.1111/tops.12594]
Abstract
The ability to anticipate team members' actions enables joint action towards a common goal. Task knowledge and mental simulation allow agents to anticipate others' actions and to make inferences about their underlying mental representations. In human-AI teams, providing AI agents with anticipatory mechanisms can facilitate collaboration and the successful execution of joint action. This paper presents a computational cognitive model demonstrating mental simulation of operators' mental models of a situation and anticipation of their behavior. The work proposes two successive steps: (1) a hierarchical clustering algorithm is applied to recognize patterns of behavior among pilots; these behavioral clusters are used to derive commonalities in situation models from empirical data (N = 13 pilots). (2) An ACT-R (Adaptive Control of Thought - Rational) cognitive model is implemented to mentally simulate a pilot's possible action decisions and their timing; the model-tracing mechanism of ACT-R allows following up on operators' individual actions. Two models are implemented using the symbolic representations of ACT-R: one simulating normative behavior and the other simulating individual differences via subsymbolic learning. Model performance is analyzed by comparing the two models. Results indicate improved performance of the individual-differences model over the normative model and are discussed with regard to cognitive assistance capable of anticipating operator behavior.
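Step (1) above, hierarchical clustering of behaviour data, can be sketched in a few lines. This is a generic single-linkage agglomerative clustering over assumed per-pilot feature vectors; the paper's actual feature encoding and linkage choice are not specified here:

```python
# Hypothetical sketch of step (1): single-linkage agglomerative
# clustering of behaviour vectors (e.g. per-pilot action timings),
# stopping once a target number of clusters remains.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def agglomerate(points, k):
    """Repeatedly merge the two closest clusters until only k remain."""
    clusters = [[i] for i in range(len(points))]  # start: one point each
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single linkage: distance between the closest members
                d = min(euclidean(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)            # merge j into i
    return clusters

# Two tight groups of simulated behaviour vectors.
pilots = [(0.1, 0.2), (0.2, 0.1), (5.0, 5.1), (5.2, 4.9)]
print(agglomerate(pilots, 2))  # -> [[0, 1], [2, 3]]
```

In practice a library routine such as SciPy's hierarchical clustering would replace this loop; the sketch only shows the merge logic the abstract refers to.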
Affiliation(s)
- Sebastian Blum
- Department of Cognitive Modeling in Dynamic Human-Machine Systems, TU Berlin
- Nele Russwinkel
- Department of Cognitive Modeling in Dynamic Human-Machine Systems, TU Berlin
3
Buyukgoz S, Pandey AK, Chamoux M, Chetouani M. Exploring Behavioral Creativity of a Proactive Robot. Front Robot AI 2021; 8:694177. [PMID: 34901167 PMCID: PMC8662345 DOI: 10.3389/frobt.2021.694177]
Abstract
Creativity, in one sense, can be seen as an effort or action to bring novelty. Following this, we explore how a robot can be creative by bringing novelty into a human-robot interaction (HRI) scenario. Studies suggest that proactivity is closely linked with creativity, where proactivity is defined as acting or interacting in anticipation of future needs or actions. This study explores the effect of proactive behavior on two aspects of creativity: 1) the creativity the user perceives in the robot's proactive behavior and 2) the user's own creativity, assessed by how creativity in HRI is shaped or influenced by proactivity. We do so through an experimental study in which the robot exhibits anticipatory proactive behaviors, instantiated as a set of verbal communications, to support the user in completing the task regardless of whether the end result is novel. To our knowledge, this is among the first user studies to establish and investigate the relationship between creativity and proactivity in the HRI context. The initial results indicate a relationship between observed proactivity, creativity, and task achievement, and provide valuable pointers for further investigation in this domain.
Affiliation(s)
- Sera Buyukgoz
- SoftBank Robotics Europe, Paris, France
- Institute for Intelligent Systems and Robotics, CNRS UMR 7222, Sorbonne University, Paris, France
- Amit Kumar Pandey
- Socients AI and Robotics, Paris, France
- BeingAI Limited, Hong Kong, Hong Kong SAR, China
- Mohamed Chetouani
- Institute for Intelligent Systems and Robotics, CNRS UMR 7222, Sorbonne University, Paris, France
4
People Do Not Automatically Take the Level-1 Visual Perspective of Humanoid Robot Avatars. Int J Soc Robot 2021. [DOI: 10.1007/s12369-021-00773-x]
5
Fischer T, Demiris Y. Computational Modeling of Embodied Visual Perspective Taking. IEEE Trans Cogn Dev Syst 2020. [DOI: 10.1109/tcds.2019.2949861]
6
Tan H, Zhao Y, Li S, Wang W, Zhu M, Hong J, Yuan X. Relationship between social robot proactive behavior and the human perception of anthropomorphic attributes. Adv Robot 2020. [DOI: 10.1080/01691864.2020.1831699]
Affiliation(s)
- Hao Tan
- State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body, Hunan University, Hunan, People’s Republic of China
- Ying Zhao
- School of Design, Hunan University, Hunan, People’s Republic of China
- Shiyan Li
- AI HCI Lab of Baidu, Baidu Online Network Technology Co., Ltd, Beijing, People’s Republic of China
- Wei Wang
- School of Industrial Design, Georgia Institute of Technology, Atlanta, GA, USA
- Ming Zhu
- School of Design, Hunan University, Hunan, People’s Republic of China
- Jie Hong
- School of Design, Hunan University, Hunan, People’s Republic of China
- Xiang Yuan
- School of Design, Hunan University, Hunan, People’s Republic of China
8
Melnyk A, Hénaff P. Physical Analysis of Handshaking Between Humans: Mutual Synchronisation and Social Context. Int J Soc Robot 2019. [DOI: 10.1007/s12369-019-00525-y]
9
Grosinger J, Pecora F, Saffiotti A. Robots that maintain equilibrium: Proactivity by reasoning about user intentions and preferences. Pattern Recognit Lett 2019. [DOI: 10.1016/j.patrec.2018.05.014]
10
Bhat AA, Mohan V. Goal-Directed Reasoning and Cooperation in Robots in Shared Workspaces: an Internal Simulation Based Neural Framework. Cognit Comput 2018; 10:558-576. [PMID: 30147802 PMCID: PMC6096944 DOI: 10.1007/s12559-018-9553-1]
Abstract
From social dining in households to product assembly in manufacturing lines, goal-directed reasoning and cooperation with other agents in shared workspaces is a ubiquitous aspect of our day-to-day activities. Critical for such behaviours is the ability to spontaneously anticipate what is doable by oneself as well as by the interacting partner, based on the evolving environmental context, and to exploit such information to engage in goal-oriented action sequences. In the setting of an industrial task where two robots jointly assemble objects in a shared workspace, we describe a bioinspired neural architecture for goal-directed action planning based on coupled interactions between multiple internal models, primarily of each robot's body and its peripersonal space. The internal models are learnt jointly through a process of sensorimotor exploration and then employed in a range of anticipations related to the feasibility and consequence of the two industrial robots' potential actions in the context of a joint goal. The ensuing behaviours are demonstrated in a real-world industrial scenario where two robots assemble fuse-boxes from multiple constituent objects (fuses, fuse-stands) scattered randomly in their workspace. In this spatially unstructured and temporally evolving assembly scenario, the robots employ reward-based dynamics to plan and anticipate which objects to act on at which time instances so as to complete as many assemblies as possible, while planning collision-free trajectories that keep the robots from interfering with each other. Furthermore, a scenario in which the assembly goal is realizable by neither robot individually but only through meaningful cooperation is used to demonstrate the interplay between perception, simulation of multiple internal models, and the resulting complementary goal-directed actions of both robots. Finally, the proposed neural framework is benchmarked against a typically engineered solution to evaluate its performance in the assembly task. The framework provides a computational outlook on emerging results from the neurosciences related to the learning and use of body schema and peripersonal space for embodied simulation of action and prediction. While the experiments reported here engage the architecture in a complex planning task, the internal-model-based framework is domain-agnostic, facilitating portability to other tasks and platforms.
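The reward-based object-selection idea in this abstract can be sketched greedily. All names, reachability sets, and reward values below are illustrative assumptions, not the paper's learnt internal models:

```python
# Hypothetical sketch: each object goes to a robot that can reach it,
# highest-reward objects first, and no object is claimed twice.

def plan_steps(reach, rewards):
    """Greedily assign objects to robots by descending reward.

    reach: dict robot -> set of objects within its peripersonal space.
    rewards: dict object -> expected contribution to the assembly goal.
    """
    taken, plan = set(), []
    for obj in sorted(rewards, key=rewards.get, reverse=True):
        for robot in sorted(reach):           # deterministic robot order
            if obj in reach[robot] and obj not in taken:
                taken.add(obj)
                plan.append((robot, obj))
                break
    return plan

reach = {"robot_A": {"fuse1", "stand"}, "robot_B": {"fuse2", "stand"}}
rewards = {"fuse1": 2.0, "fuse2": 2.0, "stand": 3.0}
print(plan_steps(reach, rewards))
```

An object reachable by neither robot is simply left unassigned, which corresponds to the cooperative case the abstract highlights: such goals become achievable only if one robot first moves the object into the other's workspace.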
Affiliation(s)
- Ajaz A Bhat
- School of Psychology, University of East Anglia, Norwich, UK
11
Chatila R, Renaudo E, Andries M, Chavez-Garcia RO, Luce-Vayrac P, Gottstein R, Alami R, Clodic A, Devin S, Girard B, Khamassi M. Toward Self-Aware Robots. Front Robot AI 2018; 5:88. [PMID: 33500967 PMCID: PMC7805649 DOI: 10.3389/frobt.2018.00088]
Abstract
Despite major progress in Robotics and AI, robots are still basically "zombies" repeatedly achieving actions and tasks without understanding what they are doing. Deep-learning AI programs classify tremendous amounts of data without grasping the meaning of their inputs or outputs. We still lack a genuine theory of the underlying principles and methods that would enable robots to understand their environment, to be cognizant of what they do, to take appropriate and timely initiatives, to learn from their own experience, and to show that they know that they have learned and how. The rationale of this paper is that an agent's understanding of its environment (including the agent itself and its effects on the environment) requires self-awareness, which itself emerges as a result of this understanding and of the distinction the agent is capable of making between its own mind-body and its environment. The paper develops along five issues: agent perception and interaction with the environment; learning actions; agent interaction with other agents, specifically humans; decision-making; and the cognitive architecture integrating these capacities.
Affiliation(s)
- Raja Chatila
- Institute of Intelligent Systems and Robotics, Sorbonne Université, CNRS, Paris, France
- Erwan Renaudo
- Institute of Intelligent Systems and Robotics, Sorbonne Université, CNRS, Paris, France
- Intelligent and Interactive Systems, Department of Computer Science, University of Innsbruck, Innsbruck, Austria
- Mihai Andries
- Institute of Intelligent Systems and Robotics, Sorbonne Université, CNRS, Paris, France
- Institute for Systems and Robotics, Instituto Superior Técnico, Lisbon, Portugal
- Ricardo-Omar Chavez-Garcia
- Institute of Intelligent Systems and Robotics, Sorbonne Université, CNRS, Paris, France
- Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA), Università della Svizzera Italiana - Scuola universitaria professionale della Svizzera italiana (USI-SUPSI), Lugano, Switzerland
- Pierre Luce-Vayrac
- Institute of Intelligent Systems and Robotics, Sorbonne Université, CNRS, Paris, France
- Raphael Gottstein
- Institute of Intelligent Systems and Robotics, Sorbonne Université, CNRS, Paris, France
- Rachid Alami
- Intelligent and Interactive Systems, Department of Computer Science, University of Innsbruck, Innsbruck, Austria
- Aurélie Clodic
- LAAS-CNRS, Université de Toulouse, CNRS, Toulouse, France
- Sandra Devin
- Intelligent and Interactive Systems, Department of Computer Science, University of Innsbruck, Innsbruck, Austria
- Benoît Girard
- Institute of Intelligent Systems and Robotics, Sorbonne Université, CNRS, Paris, France
- Mehdi Khamassi
- Institute of Intelligent Systems and Robotics, Sorbonne Université, CNRS, Paris, France
12
Liu P, Glas DF, Kanda T, Ishiguro H. Learning proactive behavior for interactive social robots. Auton Robots 2017. [DOI: 10.1007/s10514-017-9671-8]
14
Ham J, Cuijpers RH, Cabibihan JJ. Combining Robotic Persuasive Strategies: The Persuasive Power of a Storytelling Robot that Uses Gazing and Gestures. Int J Soc Robot 2015. [DOI: 10.1007/s12369-015-0280-4]
15
Pandey AK, Alami R. Towards Human-Level Semantics Understanding of Human-Centered Object Manipulation Tasks for HRI: Reasoning About Effect, Ability, Effort and Perspective Taking. Int J Soc Robot 2014. [DOI: 10.1007/s12369-014-0246-y]