1
Valenzo D, Ciria A, Schillaci G, Lara B. Grounding Context in Embodied Cognitive Robotics. Front Neurorobot 2022;16:843108. [PMID: 35812785] [PMCID: PMC9262126] [DOI: 10.3389/fnbot.2022.843108]
Abstract
Biological agents are context-dependent systems that exhibit behavioral flexibility. The internal and external information agents process, their actions, and their emotions are all grounded in the context within which they are situated. However, in the field of cognitive robotics, the concept of context is far from clear, with most studies making little to no reference to it. The aim of this paper is to provide an interpretation of the notion of context and its core elements based on different studies in natural agents, to review how these core contextual elements have been modeled in cognitive robotics, and to introduce a new hypothesis about the interactions between these contextual elements. Here, global context is categorized as agent-related, environmental, and task-related context. The interaction of their core elements allows agents, first, to select self-relevant tasks depending on their current needs, or to learn and master their environment through exploration; second, to perform a task and continuously monitor its performance; and third, to abandon a task in case its execution is not going as expected. Here, the monitoring of prediction error, the difference between sensorimotor predictions and incoming sensory information, is at the core of behavioral flexibility during situated action cycles. Additionally, monitoring prediction error dynamics and comparing them with the expected reduction rate should indicate to the agent its overall performance in executing the task. Sensitivity to performance evokes emotions that function as the driving element for autonomous behavior which, at the same time, depends on the processing of the interacting core elements. Taking all this into account, an interactionist model of contexts and their core elements is proposed. The model is embodied, affective, and situated, by means of the processing of the agent-related and environmental core contextual elements. Additionally, it is grounded in the processing of the task-related context and the associated situated action cycles during task execution. Finally, the model proposed here aims to guide how artificial agents should process the core contextual elements of the agent-related and environmental context to give rise to the task-related context, allowing agents to autonomously select a task and plan, execute, and monitor it for behavioral flexibility.
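The abstract's core mechanism, monitoring prediction-error dynamics against an expected reduction rate to decide whether to continue or abandon a task, can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, threshold, and patience parameter are all hypothetical choices made only for illustration.

```python
def monitor_task(errors, expected_reduction=0.1, patience=3):
    """Return 'continue' or 'abandon' given a sequence of prediction errors.

    Hypothetical sketch of the monitoring idea described in the abstract:
    errors             -- prediction-error magnitudes over successive action cycles
    expected_reduction -- expected per-cycle drop in prediction error
    patience           -- cycles of insufficient reduction tolerated before abandoning
    """
    shortfall_streak = 0
    for prev, curr in zip(errors, errors[1:]):
        if prev - curr < expected_reduction:  # error is not shrinking as expected
            shortfall_streak += 1
            if shortfall_streak >= patience:
                return "abandon"              # sustained poor performance
        else:
            shortfall_streak = 0              # performance back on track
    return "continue"
```

Under this sketch, a steadily shrinking error keeps the task alive, while a plateau in prediction error, i.e. a reduction rate persistently below expectation, triggers abandonment.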
Affiliation(s)
- Diana Valenzo
- Laboratorio de Robótica Cognitiva, Centro de Investigación en Ciencias, Universidad Autónoma del Estado de Morelos, Cuernavaca, Mexico
- Alejandra Ciria
- Facultad de Psicología, Universidad Nacional Autónoma de México, Mexico City, Mexico
- Bruno Lara
- Laboratorio de Robótica Cognitiva, Centro de Investigación en Ciencias, Universidad Autónoma del Estado de Morelos, Cuernavaca, Mexico
- *Correspondence: Bruno Lara
2
Andriella A, Torras C, Alenyà G. Cognitive System Framework for Brain-Training Exercise Based on Human-Robot Interaction. Cognit Comput 2020. [DOI: 10.1007/s12559-019-09696-2]
3
Marfil R, Romero-Garces A, Bandera JP, Manso LJ, Calderita LV, Bustos P, Bandera A, Garcia-Polo J, Fernandez F, Voilmy D. Perceptions or Actions? Grounding How Agents Interact Within a Software Architecture for Cognitive Robotics. Cognit Comput 2019. [DOI: 10.1007/s12559-019-09685-5]
4
Zech P, Renaudo E, Haller S, Zhang X, Piater J. Action representations in robotics: A taxonomy and systematic classification. Int J Rob Res 2019. [DOI: 10.1177/0278364919835020]
Abstract
Understanding and defining the meaning of “action” is essential for robotics research. This becomes utterly evident when aiming to equip autonomous robots with robust manipulation skills for action execution. Unfortunately, to this day we still lack both a clear understanding of the concept of an action and a set of established criteria that ultimately characterize an action. In this survey, we thus first review existing ideas and theories on the notion and meaning of action. Subsequently, we discuss the role of action in robotics and attempt to give a seminal definition of action in accordance with its use in robotics research. Given this definition, we then introduce a taxonomy for categorizing action representations in robotics along various dimensions. Finally, we provide a meticulous literature survey on action representations in robotics, in which we categorize the relevant literature along our taxonomy. After discussing the current state of the art, we conclude with an outlook towards promising research directions.
Affiliation(s)
- Philipp Zech
- Department of Computer Science, University of Innsbruck, Tyrol, Austria
- Erwan Renaudo
- Department of Computer Science, University of Innsbruck, Tyrol, Austria
- Simon Haller
- Department of Computer Science, University of Innsbruck, Tyrol, Austria
- Xiang Zhang
- Department of Computer Science, University of Innsbruck, Tyrol, Austria
- Justus Piater
- Department of Computer Science, University of Innsbruck, Tyrol, Austria
5
Bhat AA, Mohan V. Goal-Directed Reasoning and Cooperation in Robots in Shared Workspaces: an Internal Simulation Based Neural Framework. Cognit Comput 2018;10:558-576. [PMID: 30147802] [PMCID: PMC6096944] [DOI: 10.1007/s12559-018-9553-1]
Abstract
From social dining in households to product assembly on manufacturing lines, goal-directed reasoning and cooperation with other agents in shared workspaces is a ubiquitous aspect of our day-to-day activities. Critical for such behaviours is the ability to spontaneously anticipate what is doable by oneself as well as by the interacting partner, based on the evolving environmental context, and thereby exploit such information to engage in goal-oriented action sequences. In the setting of an industrial task where two robots are jointly assembling objects in a shared workspace, we describe a bioinspired neural architecture for goal-directed action planning based on coupled interactions between multiple internal models, primarily of the robot's body and its peripersonal space. The internal models (of each robot's body and peripersonal space) are learnt jointly through a process of sensorimotor exploration and then employed in a range of anticipations related to the feasibility and consequences of the potential actions of two industrial robots in the context of a joint goal. The ensuing behaviours are demonstrated in a real-world industrial scenario where two robots assemble industrial fuse-boxes from multiple constituent objects (fuses, fuse-stands) scattered randomly in their workspace. In a spatially unstructured and temporally evolving assembly scenario, the robots employ reward-based dynamics to plan and anticipate which objects to act on at which time instances so as to successfully complete as many assemblies as possible. The shared spatial setting fundamentally requires the robots to plan collision-free trajectories. Furthermore, an interesting scenario in which the assembly goal is not realizable by either robot individually, but only if they meaningfully cooperate, is used to demonstrate the interplay between perception, simulation of multiple internal models, and the resulting complementary goal-directed actions of both robots. Finally, the proposed neural framework is benchmarked against a typically engineered solution to evaluate its performance in the assembly task. The framework provides a computational outlook on emerging results from the neurosciences related to the learning and use of the body schema and peripersonal space for embodied simulation of action and prediction. While the experiments reported here engage the architecture in a complex planning task specifically, the internal-model-based framework is domain-agnostic, facilitating portability to several other tasks and platforms.
Affiliation(s)
- Ajaz A Bhat
- School of Psychology, University of East Anglia, Norwich, UK
6
Combining Non-negative Matrix Factorization and Sparse Coding for Functional Brain Overlapping Community Detection. Cognit Comput 2018. [DOI: 10.1007/s12559-018-9585-6]
7
Sandini G, Mohan V, Sciutti A, Morasso P. Social Cognition for Human-Robot Symbiosis-Challenges and Building Blocks. Front Neurorobot 2018;12:34. [PMID: 30050425] [PMCID: PMC6051162] [DOI: 10.3389/fnbot.2018.00034]
Abstract
The next generation of robot companions or robot working partners will need to satisfy social requirements somewhat similar to the famous laws of robotics envisaged by Isaac Asimov long ago (Asimov, 1942). The necessary technology has almost reached the required level, including sensors and actuators, but the cognitive organization is still in its infancy and is only partially supported by the current understanding of brain cognitive processes. The brain of symbiotic robots will certainly not be a “positronic” replica of the human brain: probably, the greater part of it will be a set of interacting computational processes running in the cloud. In this article, we review the challenges that must be met in the design of a set of interacting computational processes as building blocks of a cognitive architecture that may give symbiotic capabilities to the collaborative robots of the next decades: (1) an animated body schema; (2) an imitation machinery; (3) a motor-intentions machinery; (4) a set of physical interaction mechanisms; and (5) a shared memory system for incremental symbiotic development. We would like to stress that our approach is totally non-hierarchical: the five building blocks of the shared cognitive architecture are fully bidirectionally connected. For example, imitation and intentional processes require the “services” of the animated body schema which, on the other hand, can run its simulations if appropriately prompted by imitation and/or intention, with or without physical interaction. Successful experiences can leave a trace in the shared memory system, and chunks of memory fragments may compete to participate in novel cooperative actions. And so on and so forth. At the heart of the system is lifelong training and learning but, unlike conventional learning paradigms in neural networks, where learning is somehow passively imposed by an external agent, in symbiotic robots there is an element of free choice of what is worth learning, driven by the interaction between the robot and the human partner. The proposed set of building blocks is certainly a rough approximation of what is needed by symbiotic robots, but we believe it is a useful starting point for building a computational framework.
Affiliation(s)
- Giulio Sandini
- Research Unit of Robotics, Brain, and Cognitive Sciences (RBCS), Istituto Italiano di Tecnologia, Genoa, Italy
- Vishwanathan Mohan
- School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
- Alessandra Sciutti
- Research Unit of Robotics, Brain, and Cognitive Sciences (RBCS), Istituto Italiano di Tecnologia, Genoa, Italy
- Pietro Morasso
- Research Unit of Robotics, Brain, and Cognitive Sciences (RBCS), Istituto Italiano di Tecnologia, Genoa, Italy
8
Badarna M, Shimshoni I, Luria G, Rosenblum S. The Importance of Pen Motion Pattern Groups for Semi-Automatic Classification of Handwriting into Mental Workload Classes. Cognit Comput 2017. [DOI: 10.1007/s12559-017-9520-2]
9
Bhat AA, Mohan V, Sandini G, Morasso P. Humanoid infers Archimedes' principle: understanding physical relations and object affordances through cumulative learning experiences. J R Soc Interface 2017;13:rsif.2016.0310. [PMID: 27466440] [PMCID: PMC4971221] [DOI: 10.1098/rsif.2016.0310]
Abstract
Emerging studies indicate that several species such as corvids, apes and children solve 'The Crow and the Pitcher' task (from Aesop's Fables) in diverse conditions. Hidden beneath this fascinating paradigm is a fundamental question: by cumulatively interacting with different objects, how can an agent abstract the underlying cause-effect relations to predict and creatively exploit the potential affordances of novel objects in the context of sought goals? Re-enacting this Aesop's Fable task on a humanoid within an open-ended 'learning-prediction-abstraction' loop, we address this problem and (i) present a brain-guided neural framework that emulates rapid one-shot encoding of ongoing experiences into a long-term memory and (ii) propose four task-agnostic learning rules (elimination, growth, uncertainty and status quo) that correlate predictions from remembered past experiences with the unfolding present situation to gradually abstract the underlying causal relations. Driven by the proposed architecture, the ensuing robot behaviours illustrated causal learning and anticipation similar to that of natural agents. The results further demonstrate that, by cumulatively interacting with a few objects, the robot's predictions for novel objects converge close to the physical law, i.e. the Archimedes principle, independently of both the objects explored during learning and the order of their cumulative exploration.
Affiliation(s)
- Ajaz Ahmad Bhat
- Robotics, Brain and Cognitive Science Department, Istituto Italiano di Tecnologia, Via Morego 30, Genova, Italy
- Vishwanathan Mohan
- Robotics, Brain and Cognitive Science Department, Istituto Italiano di Tecnologia, Via Morego 30, Genova, Italy
- Giulio Sandini
- Robotics, Brain and Cognitive Science Department, Istituto Italiano di Tecnologia, Via Morego 30, Genova, Italy
- Pietro Morasso
- Robotics, Brain and Cognitive Science Department, Istituto Italiano di Tecnologia, Via Morego 30, Genova, Italy
10
Olier JS, Barakova E, Regazzoni C, Rauterberg M. Re-framing the characteristics of concepts and their relation to learning and cognition in artificial agents. Cogn Syst Res 2017. [DOI: 10.1016/j.cogsys.2017.03.005]
11
Applying a Handwriting Measurement Model for Capturing Cognitive Load Implications Through Complex Figure Drawing. Cognit Comput 2015. [DOI: 10.1007/s12559-015-9343-y]
12
Mohan V, Sandini G, Morasso P. A Neural Framework for Organization and Flexible Utilization of Episodic Memory in Cumulatively Learning Baby Humanoids. Neural Comput 2014;26:2692-734. [DOI: 10.1162/neco_a_00664]
Abstract
Cumulatively developing robots offer a unique opportunity to reenact the constant interplay between neural mechanisms related to learning, memory, prospection, and abstraction from the perspective of an integrated system that acts, learns, remembers, reasons, and makes mistakes. Situated within such interplay lie some of the computationally elusive and fundamental aspects of cognitive behavior: the ability to recall and flexibly exploit diverse experiences of one’s past in the context of the present to realize goals, simulate the future, and keep learning further. This article is an adventurous exploration in this direction using a simple engaging scenario of how the humanoid iCub learns to construct the tallest possible stack given an arbitrary set of objects to play with. The learning takes place cumulatively, with the robot interacting with different objects (some previously experienced, some novel) in an open-ended fashion. Since the solution itself depends on what objects are available in the “now,” multiple episodes of past experiences have to be remembered and creatively integrated in the context of the present to be successful. Starting from zero, where the robot knows nothing, we explore the computational basis of organizing episodic memory in a cumulatively learning humanoid and address (1) how relevant past experiences can be reconstructed based on the present context, (2) how multiple stored episodic memories compete to survive in the neural space and not be forgotten, (3) how remembered past experiences can be combined with explorative actions to learn something new, and (4) how multiple remembered experiences can be recombined to generate novel behaviors (without exploration).
Through the resulting behaviors of the robot as it builds, breaks, learns, and remembers, we emphasize that mechanisms of episodic memory are fundamental design features necessary to enable the survival of autonomous robots in a real world where neither everything can be known nor can everything be experienced.
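Two of the mechanisms this abstract names, recalling past experiences based on the present context and having stored memories compete to survive, can be given a toy illustration. The sketch below is purely hypothetical and is not the paper's neural model: episodes are keyed by context vectors, recall picks the most similar episode, and unused episodes decay until forgotten. All class and parameter names are invented for illustration.

```python
import math

class EpisodicStore:
    """Toy context-cued episodic memory with decay-based forgetting."""

    def __init__(self, decay=0.9, forget_below=0.1):
        self.episodes = []                # each entry: [context_vector, payload, strength]
        self.decay = decay                # per-recall decay applied to unused episodes
        self.forget_below = forget_below  # strength threshold below which an episode is dropped

    @staticmethod
    def _similarity(a, b):
        # cosine similarity between two context vectors
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    def store(self, context, payload):
        self.episodes.append([list(context), payload, 1.0])

    def recall(self, context):
        """Return the payload whose stored context best matches the present one."""
        if not self.episodes:
            return None
        best = max(self.episodes, key=lambda e: self._similarity(e[0], context))
        for e in self.episodes:
            if e is best:
                e[2] = 1.0            # the recalled memory is reinforced
            else:
                e[2] *= self.decay    # unused memories decay toward forgetting
        self.episodes = [e for e in self.episodes if e[2] >= self.forget_below]
        return best[1]
```

In this toy version, memories "compete to survive" only through usage-driven decay; the paper's actual account of competition in neural space is richer than this similarity-plus-decay scheme.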
Affiliation(s)
- Vishwanathan Mohan
- Robotics, Brain and Cognitive Science Department, Istituto Italiano di Tecnologia, Genova, Italy
- Giulio Sandini
- Robotics, Brain and Cognitive Science Department, Istituto Italiano di Tecnologia, Genova, Italy
- Pietro Morasso
- Robotics, Brain and Cognitive Science Department, Istituto Italiano di Tecnologia, Genova, Italy