1
|
Colas C, Karch T, Moulin-Frier C, Oudeyer PY. Language and culture internalization for human-like autotelic AI. NAT MACH INTELL 2022. [DOI: 10.1038/s42256-022-00591-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
2
|
Rohlfing KJ, Altvater-Mackensen N, Caruana N, van den Berghe R, Bruno B, Tolksdorf NF, Hanulíková A. Social/dialogical roles of social robots in supporting children's learning of language and literacy-A review and analysis of innovative roles. Front Robot AI 2022; 9:971749. [PMID: 36274914 PMCID: PMC9581183 DOI: 10.3389/frobt.2022.971749] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2022] [Accepted: 08/19/2022] [Indexed: 11/16/2022] Open
Abstract
One of the many purposes for which social robots are designed is education, and there have been many attempts to systematize their potential in this field. What these attempts have in common is the recognition that learning can be supported in a variety of ways because a learner can be engaged in different activities that foster learning. Up to now, three roles have been proposed when designing these activities for robots: as a teacher or tutor, a learning peer, or a novice. Current research proposes that deciding in favor of one role over another depends on the content or preferred pedagogical form. However, the design of activities changes not only the content of learning, but also the nature of a human-robot social relationship. This is particularly important in language acquisition, which has been recognized as a social endeavor. The following review aims to specify the differences in human-robot social relationships when children learn language through interacting with a social robot. After proposing categories for comparing these different relationships, we review established and more specific, innovative roles that a robot can play in language-learning scenarios. This follows Mead's (1946) theoretical approach proposing that social roles are performed in interactive acts. These acts are crucial for learning, because not only can they shape the social environment of learning but also engage the learner to different degrees. We specify the degree of engagement by referring to Chi's (2009) progression of learning activities that range from active, constructive, toward interactive with the latter fostering deeper learning. Taken together, this approach enables us to compare and evaluate different human-robot social relationships that arise when applying a robot in a particular social role.
Collapse
Affiliation(s)
- Katharina J. Rohlfing
- Developmental Psycholinguistics, Faculty of Arts and Humanities, Paderborn University, Paderborn, Germany
| | - Nicole Altvater-Mackensen
- Developmental Psychology, Psychologisches Institut, Johannes-Gutenberg-Universität Mainz, English Linguistics, University of Mannheim, Mainz, Germany
| | - Nathan Caruana
- School of Psychological Science, Macquarie University Centre for Reading, Macquarie University, Sydney, NSW, Australia
| | - Rianne van den Berghe
- Urban Care & Education, Windesheim University of Applied Sciences, Almere, Netherlands
| | | | - Nils F. Tolksdorf
- Developmental Psycholinguistics, Faculty of Arts and Humanities, Paderborn University, Paderborn, Germany
| | - Adriana Hanulíková
- Language and Cognition, Deutsches Seminar, Albert-Ludwigs-Universität Freiburg, Freiburg, Germany
| |
Collapse
|
3
|
Rohlfing KJ, Cimiano P, Scharlau I, Matzner T, Buhl HM, Buschmeier H, Esposito E, Grimminger A, Hammer B, Hab-Umbach R, Horwath I, Hullermeier E, Kern F, Kopp S, Thommes K, Ngonga Ngomo AC, Schulte C, Wachsmuth H, Wagner P, Wrede B. Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems. IEEE Trans Cogn Dev Syst 2021. [DOI: 10.1109/tcds.2020.3044366] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
4
|
A Case Study of a Robot-Assisted Speech Therapy for Children with Language Disorders. SUSTAINABILITY 2021. [DOI: 10.3390/su13052771] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
The aim of this study was to explore the potential of using a social robot in speech therapy interventions in children. A descriptive and explorative case study design was implemented involving the intervention for language disorder in five children with different needs with an age ranging from 9 to 12 years. Children participated in sessions with a NAO-type robot in individual sessions. Qualitative methods were used to collect data on aspects of viability, usefulness, barriers and facilitators for the child as well as for the therapist in order to obtain an indication of the effects on learning and the achievement of goals. The main results pointed out the affordances and possibilities of the use of a NAO robot in achieving speech therapy and educational goals. A NAO can contribute towards eliciting motivation, readiness towards learning and improving attention span of the children. The results of the study showed the potential that NAO has in therapy and education for children with different disabilities. More research is needed to gain insight into how a NAO can be applied best in speech therapy to make a more inclusive education conclusions.
Collapse
|
5
|
Tanevska A, Rea F, Sandini G, Cañamero L, Sciutti A. A Socially Adaptable Framework for Human-Robot Interaction. Front Robot AI 2020; 7:121. [PMID: 33501287 PMCID: PMC7806058 DOI: 10.3389/frobt.2020.00121] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2020] [Accepted: 07/31/2020] [Indexed: 11/13/2022] Open
Abstract
In our everyday lives we regularly engage in complex, personalized, and adaptive interactions with our peers. To recreate the same kind of rich, human-like interactions, a social robot should be aware of our needs and affective states and continuously adapt its behavior to them. Our proposed solution is to have the robot learn how to select the behaviors that would maximize the pleasantness of the interaction for its peers. To make the robot autonomous in its decision making, this process could be guided by an internal motivation system. We wish to investigate how an adaptive robotic framework of this kind would function and personalize to different users. We also wish to explore whether the adaptability and personalization would bring any additional richness to the human-robot interaction (HRI), or whether it would instead bring uncertainty and unpredictability that would not be accepted by the robot's human peers. To this end, we designed a socially adaptive framework for the humanoid robot iCub. As a result, the robot perceives and reuses the affective and interactive signals from the person as input for the adaptation based on internal social motivation. We strive to investigate the value of the generated adaptation in our framework in the context of HRI. In particular, we compare how users will experience interaction with an adaptive versus a non-adaptive social robot. To address these questions, we propose a comparative interaction study with iCub whereby users act as the robot's caretaker, and iCub's social adaptation is guided by an internal comfort level that varies with the stimuli that iCub receives from its caretaker. We investigate and compare how iCub's internal dynamics would be perceived by people, both in a condition when iCub does not personalize its behavior to the person, and in a condition where it is instead adaptive. 
Finally, we establish the potential benefits that an adaptive framework could bring to the context of repeated interactions with a humanoid robot.
Collapse
Affiliation(s)
- Ana Tanevska
- Department of Robotics, Brain and Cognitive Science, Italian Institute of Technology (IIT), Genova, Italy.,EECAiA Lab, School of Computer Science, University of Hertfordshire, Hatfield, United Kingdom.,Cognitive Architecture for Collaborative Technologies Unit, Italian Institute of Technology (IIT), Genova, Italy
| | - Francesco Rea
- Department of Robotics, Brain and Cognitive Science, Italian Institute of Technology (IIT), Genova, Italy
| | - Giulio Sandini
- Department of Robotics, Brain and Cognitive Science, Italian Institute of Technology (IIT), Genova, Italy
| | - Lola Cañamero
- EECAiA Lab, School of Computer Science, University of Hertfordshire, Hatfield, United Kingdom
| | - Alessandra Sciutti
- Cognitive Architecture for Collaborative Technologies Unit, Italian Institute of Technology (IIT), Genova, Italy
| |
Collapse
|
6
|
Kuniyoshi Y. Fusing autonomy and sociality via embodied emergence and development of behaviour and cognition from fetal period. Philos Trans R Soc Lond B Biol Sci 2020; 374:20180031. [PMID: 30852992 PMCID: PMC6452254 DOI: 10.1098/rstb.2018.0031] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
Abstract
Human-centred AI/Robotics are quickly becoming important. Their core claim is that AI systems or robots must be designed and work for the benefits of humans with no harm or uneasiness. It essentially requires the realization of autonomy, sociality and their fusion at all levels of system organization, even beyond programming or pre-training. The biologically inspired core principle of such a system is described as the emergence and development of embodied behaviour and cognition. The importance of embodiment, emergence and continuous autonomous development is explained in the context of developmental robotics and dynamical systems view of human development. We present a hypothetical early developmental scenario that fills in the very beginning part of the comprehensive scenarios proposed in developmental robotics. Then our model and experiments on emergent embodied behaviour are presented. They consist of chaotic maps embedded in sensory–motor loops and coupled via embodiment. Behaviours that are consistent with embodiment and adaptive to environmental structure emerge within a few seconds without any external reward or learning. Next, our model and experiments on human fetal development are presented. A precise musculo-skeletal fetal body model is placed in a uterus model. Driven by spinal nonlinear oscillator circuits coupled together via embodiment, somatosensory signals are evoked and learned by a model of the cerebral cortex with 2.6 million neurons and 5.3 billion synapses. The model acquired cortical representations of self–body and multi-modal sensory integration. This work is important because it models very early autonomous development in realistic detailed human embodiment. Finally, discussions toward human-like cognition are presented including other important factors such as motivation, emotion, internal organs and genetic factors. 
This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction’.
Collapse
Affiliation(s)
- Yasuo Kuniyoshi
- Next Generation Artificial Intelligence Research Center & School of Information Science and Technology, The University of Tokyo , 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 , Japan
| |
Collapse
|
7
|
Taniguchi T, Ugur E, Ogata T, Nagai T, Demiris Y. Editorial: Machine Learning Methods for High-Level Cognitive Capabilities in Robotics. Front Neurorobot 2019; 13:83. [PMID: 31695604 PMCID: PMC6817914 DOI: 10.3389/fnbot.2019.00083] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2019] [Accepted: 09/25/2019] [Indexed: 11/18/2022] Open
Affiliation(s)
- Tadahiro Taniguchi
- Department of Information Science and Engineering, Ritsumeikan University, Kyoto, Japan
| | - Emre Ugur
- Department of Computer Engineering, Boǧaziçi University, Istanbul, Turkey
| | - Tetsuya Ogata
- Department of Intermedia Art and Science, School of Fundamental Science and Engineering, Waseda University, Tokyo, Japan
| | - Takayuki Nagai
- Department of Systems Innovation, Graduate School of Engineering Science, Osaka University, Osaka, Japan
| | - Yiannis Demiris
- Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
| |
Collapse
|
8
|
Scorolli C. Re-enacting the Bodily Self on Stage: Embodied Cognition Meets Psychoanalysis. Front Psychol 2019; 10:492. [PMID: 31024371 PMCID: PMC6460994 DOI: 10.3389/fpsyg.2019.00492] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2018] [Accepted: 02/19/2019] [Indexed: 12/27/2022] Open
Abstract
The embodied approach to cognition consists in a range of theoretical proposals sharing the idea that our concepts are constitutively shaped by the physical and social constraints of our body and environment. Still far from a mutually enriching interplay, in recent years embodied and psychoanalytic approaches are converging on similar constructs as the ones of intersubjectivity, bodily self, and affective quality of verbal communication. Some efforts to cope with the sentient subject were already present in classical cognitivism: having expunged desires and conflicts from the cognitive harmony, bodily emotions re-emerged but only as a noisy dynamic friction. In contrast, the new, neural, embodied cognitive science with its focus on bodily effects/affects has enabled a dialogue between neuro-cognitive perspectives and clinic-psychological ones, through shared conceptual frameworks. I will address crucial issues that should be faced on this reconciling path. With reference to two kinds of contemporary addictions - internet addiction disorder and eating disorders - I will introduce a possible therapeutic approach that is built upon the core role of the acting-sentient bodily self in a dynamic-social and affective environment. In Psychoanalytic Psychodrama, the spontaneous re-enactment of a past (socially and physically constrained) experience is actualized by means of the other, the Auxiliary Ego. This allows homeostatic and social-emotional affects, i.e., drives and instincts, to be re-experienced by the agent, the Protagonist, in a safe scenario. The director-psychoanalyst smoothly traces back this simulation to the motivated, and constrained, early proximal embodied interactions with significant others, and to the related instinctual conflicting aims. 
The psychoanalytic reframing of classical psychodrama does not merely exploit its original cathartic function, rather stands out for exploring the interpersonal constitution of the self, through an actual "re-somatization" of psychoanalytic therapy. Unspoken/unspeakable feelings pop up on stage: the strength of this treatment mainly rests on re-establishing the priority of the embodied Self over the narrative Self. By pointing out the possible conflicts between these two selves, this method can broaden the embodied cognition perspective. The psychodramatic approach will be briefly discussed in light of connectionist models, to finally address linguistic and methodological pivotal issues.
Collapse
Affiliation(s)
- Claudia Scorolli
- Department of Philosophy and Communication Studies, University of Bologna, Bologna, Italy
| |
Collapse
|
9
|
Ahn H. A sentential cognitive system of robots for conversational human-robot interaction. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2018. [DOI: 10.3233/jifs-169845] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Affiliation(s)
- Hyunsik Ahn
- Department of Robot System Engineering, Tongmyong University, Nam-gu, Busan, Republic of Korea
| |
Collapse
|
10
|
Moulin-Frier C, Fischer T, Petit M, Pointeau G, Puigbo JY, Pattacini U, Low SC, Camilleri D, Nguyen P, Hoffmann M, Chang HJ, Zambelli M, Mealier AL, Damianou A, Metta G, Prescott TJ, Demiris Y, Dominey PF, Verschure PFMJ. DAC-h3: A Proactive Robot Cognitive Architecture to Acquire and Express Knowledge About the World and the Self. IEEE Trans Cogn Dev Syst 2018. [DOI: 10.1109/tcds.2017.2754143] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
11
|
Cochet H, Guidetti M. Contribution of Developmental Psychology to the Study of Social Interactions: Some Factors in Play, Joint Attention and Joint Action and Implications for Robotics. Front Psychol 2018; 9:1992. [PMID: 30405484 PMCID: PMC6202940 DOI: 10.3389/fpsyg.2018.01992] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2017] [Accepted: 09/28/2018] [Indexed: 11/29/2022] Open
Abstract
Children exchange information through multiple modalities, including verbal communication, gestures and social gaze and they gradually learn to plan their behavior and coordinate successfully with their partners. The development of joint attention and joint action, especially in the context of social play, provides rich opportunities for describing the characteristics of interactions that can lead to shared outcomes. In the present work, we argue that human-robot interactions (HRI) can benefit from these developmental studies, through influencing the human's perception and interpretation of the robot's behavior. We thus endeavor to describe some components that could be implemented in the robot to strengthen the feeling of dealing with a social agent, and therefore improve the success of collaborative tasks. Focusing in particular on motor precision, coordination, and anticipatory planning, we discuss the question of complexity in HRI. In the context of joint activities, we highlight the necessity of (1) considering multiple speech acts involving multimodal communication (both verbal and non-verbal signals), and (2) analyzing separately the forms and functions of communication. Finally, we examine some challenges related to robot competencies, such as the issue of language and symbol grounding, which might be tackled by bringing together expertise of researchers in developmental psychology and robotics.
Collapse
Affiliation(s)
- Hélène Cochet
- CLLE, Université de Toulouse, CNRS, UT2J, Toulouse, France
| | | |
Collapse
|
12
|
Sensorimotor input as a language generalisation tool: a neurorobotics model for generation and generalisation of noun-verb combinations with sensorimotor inputs. Auton Robots 2018. [DOI: 10.1007/s10514-018-9793-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
|
13
|
Nakajo R, Murata S, Arie H, Ogata T. Acquisition of Viewpoint Transformation and Action Mappings via Sequence to Sequence Imitative Learning by Deep Neural Networks. Front Neurorobot 2018; 12:46. [PMID: 30087605 PMCID: PMC6066551 DOI: 10.3389/fnbot.2018.00046] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2017] [Accepted: 07/03/2018] [Indexed: 11/13/2022] Open
Abstract
We propose an imitative learning model that allows a robot to acquire positional relations between the demonstrator and the robot, and to transform observed actions into robotic actions. Providing robots with imitative capabilities allows us to teach novel actions to them without resorting to trial-and-error approaches. Existing methods for imitative robotic learning require mathematical formulations or conversion modules to translate positional relations between demonstrators and robots. The proposed model uses two neural networks, a convolutional autoencoder (CAE) and a multiple timescale recurrent neural network (MTRNN). The CAE is trained to extract visual features from raw images captured by a camera. The MTRNN is trained to integrate sensory-motor information and to predict next states. We implement this model on a robot and conducted sequence to sequence learning that allows the robot to transform demonstrator actions into robot actions. Through training of the proposed model, representations of actions, manipulated objects, and positional relations are formed in the hierarchical structure of the MTRNN. After training, we confirm capability for generating unlearned imitative patterns.
Collapse
Affiliation(s)
- Ryoichi Nakajo
- Department of Intermedia Art and Science, Waseda University, Tokyo, Japan
| | - Shingo Murata
- Department of Modern Mechanical Engineering, Waseda University, Tokyo, Japan
| | - Hiroaki Arie
- Department of Modern Mechanical Engineering, Waseda University, Tokyo, Japan
| | - Tetsuya Ogata
- Department of Intermedia Art and Science, Waseda University, Tokyo, Japan
| |
Collapse
|
14
|
Cangelosi A, Stramandinoli F. A review of abstract concept learning in embodied agents and robots. Philos Trans R Soc Lond B Biol Sci 2018; 373:20170131. [PMID: 29914999 PMCID: PMC6015819 DOI: 10.1098/rstb.2017.0131] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/22/2017] [Indexed: 11/12/2022] Open
Abstract
This paper reviews computational modelling approaches to the learning of abstract concepts and words in embodied agents such as humanoid robots. This will include a discussion of the learning of abstract words such as 'use' and 'make' in humanoid robot experiments, and the acquisition of numerical concepts via gesture and finger counting strategies. The current approaches share a strong emphasis on embodied cognition aspects for the grounding of abstract concepts, and a continuum, rather than dichotomy, view of concrete/abstract concepts differences.This article is part of the theme issue 'Varieties of abstract concepts: development, use and representation in the brain'.
Collapse
Affiliation(s)
- Angelo Cangelosi
- Centre for Robotics and Neural Systems, Plymouth University, Plymouth PL4 8AA, UK
| | - Francesca Stramandinoli
- iCub Facility Department, Istituto Italiano di Tecnologia, Via Morego 30, 16163 Genoa, Italy
| |
Collapse
|
15
|
Jamone L, Ugur E, Cangelosi A, Fadiga L, Bernardino A, Piater J, Santos-Victor J. Affordances in Psychology, Neuroscience, and Robotics: A Survey. IEEE Trans Cogn Dev Syst 2018. [DOI: 10.1109/tcds.2016.2594134] [Citation(s) in RCA: 72] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
16
|
Braud R, Pitti A, Gaussier P. A Modular Dynamic Sensorimotor Model for Affordances Learning, Sequences Planning, and Tool-Use. IEEE Trans Cogn Dev Syst 2018. [DOI: 10.1109/tcds.2016.2647439] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
17
|
Yamada T, Murata S, Arie H, Ogata T. Representation Learning of Logic Words by an RNN: From Word Sequences to Robot Actions. Front Neurorobot 2017; 11:70. [PMID: 29311891 PMCID: PMC5744442 DOI: 10.3389/fnbot.2017.00070] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2017] [Accepted: 12/14/2017] [Indexed: 11/13/2022] Open
Abstract
An important characteristic of human language is compositionality. We can efficiently express a wide variety of real-world situations, events, and behaviors by compositionally constructing the meaning of a complex expression from a finite number of elements. Previous studies have analyzed how machine-learning models, particularly neural networks, can learn from experience to represent compositional relationships between language and robot actions with the aim of understanding the symbol grounding structure and achieving intelligent communicative agents. Such studies have mainly dealt with the words (nouns, adjectives, and verbs) that directly refer to real-world matters. In addition to these words, the current study deals with logic words, such as “not,” “and,” and “or” simultaneously. These words are not directly referring to the real world, but are logical operators that contribute to the construction of meaning in sentences. In human–robot communication, these words may be used often. The current study builds a recurrent neural network model with long short-term memory units and trains it to learn to translate sentences including logic words into robot actions. We investigate what kind of compositional representations, which mediate sentences and robot actions, emerge as the network's internal states via the learning process. Analysis after learning shows that referential words are merged with visual information and the robot's own current state, and the logical words are represented by the model in accordance with their functions as logical operators. Words such as “true,” “false,” and “not” work as non-linear transformations to encode orthogonal phrases into the same area in a memory cell state space. The word “and,” which required a robot to lift up both its hands, worked as if it was a universal quantifier. The word “or,” which required action generation that looked apparently random, was represented as an unstable space of the network's dynamical system.
Collapse
Affiliation(s)
- Tatsuro Yamada
- Department of Intermedia Art and Science, Waseda University, Tokyo, Japan
| | - Shingo Murata
- Department of Modern Mechanical Engineering, Waseda University, Tokyo, Japan
| | - Hiroaki Arie
- Department of Modern Mechanical Engineering, Waseda University, Tokyo, Japan
| | - Tetsuya Ogata
- Department of Intermedia Art and Science, Waseda University, Tokyo, Japan
| |
Collapse
|
18
|
Oudeyer PY. What do we learn about development from baby robots? WILEY INTERDISCIPLINARY REVIEWS. COGNITIVE SCIENCE 2016; 8. [PMID: 27906505 DOI: 10.1002/wcs.1395] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/15/2015] [Revised: 03/25/2016] [Accepted: 04/06/2016] [Indexed: 12/21/2022]
Abstract
Understanding infant development is one of the great scientific challenges of contemporary science. In addressing this challenge, robots have proven useful as they allow experimenters to model the developing brain and body and understand the processes by which new patterns emerge in sensorimotor, cognitive, and social domains. Robotics also complements traditional experimental methods in psychology and neuroscience, where only a few variables can be studied at the same time. Moreover, work with robots has enabled researchers to systematically explore the role of the body in shaping the development of skill. All told, this work has shed new light on development as a complex dynamical system. WIREs Cogn Sci 2017, 8:e1395. doi: 10.1002/wcs.1395 For further resources related to this article, please visit the WIREs website.
Collapse
|
19
|
Min H, Yi C, Luo R, Zhu J, Bi S. Affordance Research in Developmental Robotics: A Survey. IEEE Trans Cogn Dev Syst 2016. [DOI: 10.1109/tcds.2016.2614992] [Citation(s) in RCA: 57] [Impact Index Per Article: 7.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
|
20
|
Sciutti A, Lohan KS, Gredebäck G, Koch B, Rohlfing KJ. Language Meddles with Infants’ Processing of Observed Actions. Front Robot AI 2016. [DOI: 10.3389/frobt.2016.00046] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
|
21
|
Yamada T, Murata S, Arie H, Ogata T. Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human-Robot Interaction. Front Neurorobot 2016; 10:5. [PMID: 27471463 PMCID: PMC4946379 DOI: 10.3389/fnbot.2016.00005] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2016] [Accepted: 06/23/2016] [Indexed: 12/03/2022] Open
Abstract
To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior responding to a human's linguistic instruction. After learning, the network actually formed the attractor structure representing both language-behavior relationships and the task's temporal pattern in its internal dynamics. In the dynamics, language-behavior mapping was achieved by the branching structure. Repetition of human's instruction and robot's behavioral response was represented as the cyclic structure, and besides, waiting to a subsequent instruction was represented as the fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases.
Collapse
Affiliation(s)
- Tatsuro Yamada
- Department of Intermedia Art and Science, Waseda University, Tokyo, Japan
| | - Shingo Murata
- Department of Modern Mechanical Engineering, Waseda University, Tokyo, Japan
| | - Hiroaki Arie
- Department of Modern Mechanical Engineering, Waseda University, Tokyo, Japan
| | - Tetsuya Ogata
- Department of Intermedia Art and Science, Waseda University, Tokyo, Japan
| |
Collapse
|
22
|
Stramandinoli F, Marocco D, Cangelosi A. Making sense of words: a robotic model for language abstraction. Auton Robots 2016. [DOI: 10.1007/s10514-016-9587-8] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
23
|
Taniguchi T, Nagai T, Nakamura T, Iwahashi N, Ogata T, Asoh H. Symbol emergence in robotics: a survey. Adv Robot 2016. [DOI: 10.1080/01691864.2016.1164622] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
24
|
Moore RK. Introducing a Pictographic Language for Envisioning a Rich Variety of Enactive Systems with Different Degrees of Complexity. INT J ADV ROBOT SYST 2016. [DOI: 10.5772/62244] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022] Open
Abstract
Notwithstanding the considerable amount of progress that has been made in recent years, the parallel fields of cognitive science and cognitive systems lack a unifying methodology for describing, understanding, simulating and implementing advanced cognitive behaviours. Growing interest in ‘enactivism’ - as pioneered by the Chilean biologists Humberto Maturana and Francisco Varela - may lead to new perspectives in these areas, but a common framework for expressing many of the key concepts is still missing. This paper attempts to lay a tentative foundation in that direction by extending Maturana and Varela's pictographic depictions of autopoietic unities to create a rich visual language for envisioning a wide range of enactive systems - natural or artificial - with different degrees of complexity. It is shown how such a diagrammatic taxonomy can help in the comprehension of important relationships between a variety of complex concepts from a pan-theoretic perspective. In conclusion, it is claimed that visual language is not only valuable for teaching and learning, but also offers important insights into the design and implementation of future advanced robotic systems.
|
25
|
Lyon C, Nehaniv CL, Saunders J, Belpaeme T, Bisio A, Fischer K, Förster F, Lehmann H, Metta G, Mohan V, Morse A, Nolfi S, Nori F, Rohlfing K, Sciutti A, Tani J, Tuci E, Wrede B, Zeschel A, Cangelosi A. Embodied Language Learning and Cognitive Bootstrapping: Methods and Design Principles. INT J ADV ROBOT SYST 2016. [DOI: 10.5772/63462] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022] Open
Abstract
Co-development of action, conceptualization and social interaction mutually scaffold and support each other within a virtuous feedback cycle in the development of human language in children. Within this framework, the purpose of this article is to bring together diverse but complementary accounts of research methods that jointly contribute to our understanding of cognitive development and in particular, language acquisition in robots. Thus, we include research pertaining to developmental robotics, cognitive science, psychology, linguistics and neuroscience, as well as practical computer science and engineering. The different studies are not at this stage all connected into a cohesive whole; rather, they are presented to illuminate the need for multiple different approaches that complement each other in the pursuit of understanding cognitive development in robots. Extensive experiments involving the humanoid robot iCub are reported, while human learning relevant to developmental robotics has also contributed useful results. Disparate approaches are brought together via common underlying design principles. Without claiming to model human language acquisition directly, we are nonetheless inspired by analogous development in humans and consequently, our investigations include the parallel co-development of action, conceptualization and social interaction. Though these different approaches need to ultimately be integrated into a coherent, unified body of knowledge, progress is currently also being made by pursuing individual methods.
Affiliation(s)
- Caroline Lyon
- Adaptive Systems Research Group, University of Hertfordshire, UK
- Joe Saunders
- Adaptive Systems Research Group, University of Hertfordshire, UK
- Tony Belpaeme
- Center for Robotics and Neural Systems, Plymouth University, UK
- Ambra Bisio
- Dept. of Experimental Medicine, University of Genoa, Italy
- Kerstin Fischer
- Dept. for Design and Communication, University of Southern Denmark, Denmark
- Frank Förster
- Adaptive Systems Research Group, University of Hertfordshire, UK
- Hagen Lehmann
- Adaptive Systems Research Group, University of Hertfordshire, UK
- Italian Institute of Technology, iCub Facility, Genoa, Italy
- Giorgio Metta
- Italian Institute of Technology, iCub Facility, Genoa, Italy
- Vishwanathan Mohan
- Italian Institute of Technology, Robotics, Brain and Cognitive Science, Genoa, Italy
- Anthony Morse
- Center for Robotics and Neural Systems, Plymouth University, UK
- Stefano Nolfi
- Institute of Cognitive Science and Technology, National Research Council, Rome, Italy
- Francesco Nori
- Italian Institute of Technology, iCub Facility, Genoa, Italy
- Alessandra Sciutti
- Italian Institute of Technology, Robotics, Brain and Cognitive Science, Genoa, Italy
- Jun Tani
- Department of Electrical Engineering, KAIST, South Korea
- Elio Tuci
- Institute of Cognitive Science and Technology, National Research Council, Rome, Italy
- Britta Wrede
- Applied Computer Science Group, University of Bielefeld, Germany
- Arne Zeschel
- Dept. for Design and Communication, University of Southern Denmark, Denmark
|
26
|
Chu WS, Zeng J, De la Torre F, Cohn JF, Messinger DS. Unsupervised Synchrony Discovery in Human Interaction. PROCEEDINGS. IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION 2015; 2015:3146-3154. [PMID: 27346988 PMCID: PMC4918688 DOI: 10.1109/iccv.2015.360] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
People are inherently social. Social interaction plays an important and natural role in human behavior. Most computational methods focus on individuals alone rather than in social context, and they also require labelled training data. We present an unsupervised approach to discover interpersonal synchrony, referred to as two or more persons performing common actions in overlapping video frames or segments. For computational efficiency, we develop a branch-and-bound (B&B) approach that affords exhaustive search while guaranteeing a globally optimal solution. The proposed method is entirely general: it takes from two or more videos any multi-dimensional signal that can be represented as a histogram. We derive three novel bounding functions and provide efficient extensions, including multi-synchrony detection and accelerated search, using a warm-start strategy and parallelism. We evaluate the effectiveness of our approach on multiple databases, including human actions using the CMU Mocap dataset [1], spontaneous facial behaviors using the group-formation task dataset [37], and the parent-infant interaction dataset [28].
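As a toy illustration of the search problem the branch-and-bound method accelerates, the following sketch performs the naive exhaustive scan over window pairs with a histogram-intersection score (illustrative only; the function names and toy data are not from the paper, which prunes this quadratic scan with bounding functions):

```python
def hist_intersection(p, q):
    """Histogram-intersection similarity (higher = more alike)."""
    return sum(min(a, b) for a, b in zip(p, q))

def find_synchrony(seq1, seq2, win):
    """Exhaustively score every pair of length-`win` windows; the paper
    replaces this quadratic scan with branch-and-bound pruning."""
    n_bins = len(seq1[0])
    best = (-1, None, None)
    for i in range(len(seq1) - win + 1):
        h1 = [sum(f[k] for f in seq1[i:i + win]) for k in range(n_bins)]
        for j in range(len(seq2) - win + 1):
            h2 = [sum(f[k] for f in seq2[j:j + win]) for k in range(n_bins)]
            s = hist_intersection(h1, h2)
            if s > best[0]:
                best = (s, i, j)
    return best

# Toy per-frame histogram streams: the shared action appears at frames
# 2-3 of stream A and frames 0-1 of stream B.
A = [[1, 0], [1, 0], [0, 2], [0, 2], [1, 0]]
B = [[0, 2], [0, 2], [1, 0], [1, 0], [1, 0]]
score, i, j = find_synchrony(A, B, win=2)
```

The scan correctly pairs the matching segments; the B&B machinery exists precisely because this brute-force version scales quadratically in video length.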
Affiliation(s)
- Jeffrey F Cohn
- Robotics Institute, Carnegie Mellon University; University of Pittsburgh, USA
|
27
|
Golosio B, Cangelosi A, Gamotina O, Masala GL. A Cognitive Neural Architecture Able to Learn and Communicate through Natural Language. PLoS One 2015; 10:e0140866. [PMID: 26560154 PMCID: PMC4641699 DOI: 10.1371/journal.pone.0140866] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2015] [Accepted: 10/01/2015] [Indexed: 11/18/2022] Open
Abstract
Communicative interactions involve a kind of procedural knowledge that is used by the human brain for processing verbal and nonverbal inputs and for language production. Although considerable work has been done on modeling human language abilities, it has been difficult to bring them together into a comprehensive tabula rasa system compatible with current knowledge of how verbal information is processed in the brain. This work presents a cognitive system, entirely based on a large-scale neural architecture, which was developed to shed light on the procedural knowledge involved in language elaboration. The main component of this system is the central executive, a supervising system that coordinates the other components of the working memory. In our model, the central executive is a neural network that takes as input the neural activation states of the short-term memory and yields as output mental actions, which control the flow of information among the working memory components through neural gating mechanisms. The proposed system is capable of learning to communicate through natural language starting from a tabula rasa, without any a priori knowledge of the structure of phrases, the meaning of words, or the roles of the different word classes, only by interacting with a human through a text-based interface, using an open-ended incremental learning process. It is able to learn nouns, verbs, adjectives, pronouns and other word classes, and to use them in expressive language. The model was validated on a corpus of 1587 input sentences, based on the literature on early language assessment, at the level of a child of about 4 years, and produced 521 output sentences, expressing a broad range of language processing functionalities.
Affiliation(s)
- Bruno Golosio
- POLCOMING Department, Section of Engineering and Information Technologies, University of Sassari, Sassari, Italy
- Angelo Cangelosi
- Centre for Robotics and Neural Systems, School of Computing and Mathematics, University of Plymouth, Plymouth, United Kingdom
- Olesya Gamotina
- POLCOMING Department, Section of Engineering and Information Technologies, University of Sassari, Sassari, Italy
- Giovanni Luca Masala
- POLCOMING Department, Section of Engineering and Information Technologies, University of Sassari, Sassari, Italy
|
28
|
Park G, Tani J. Development of compositional and contextual communicable congruence in robots by using dynamic neural network models. Neural Netw 2015; 72:109-22. [PMID: 26498195 DOI: 10.1016/j.neunet.2015.09.004] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2015] [Revised: 09/04/2015] [Accepted: 09/20/2015] [Indexed: 10/23/2022]
Abstract
The current study presents neurorobotics experiments on the acquisition, via learning, of skills for "communicable congruence" with humans. A dynamic neural network model characterized by its multiple-timescale dynamics was utilized as a neuromorphic model for controlling a humanoid robot. In the experimental task, the humanoid robot was trained to generate specific sequential movement patterns in response to various sequences of imperative gesture patterns demonstrated by human subjects, following predefined compositional semantic rules. The experimental results showed that (1) the adopted MTRNN can achieve generalization by learning in the lower feature-perception level by using a limited set of tutoring patterns, (2) the MTRNN can learn to extract compositional semantic rules with generalization in its higher level characterized by slow timescale dynamics, and (3) the MTRNN can develop another type of cognitive capability for controlling the internal contextual processes as situated in ongoing task sequences without being provided with cues explicitly indicating task segmentation points. The analysis of the dynamic property developed in the MTRNN via learning indicated that the aforementioned cognitive mechanisms were achieved by self-organization of an adequate functional hierarchy utilizing the constraint of the multiple-timescale property and the topological connectivity imposed on the network configuration. These results could contribute to the development of socially intelligent robots endowed with cognitive communicative competency similar to that of humans.
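The multiple-timescale property at the heart of the MTRNN can be illustrated with the standard leaky-integrator update, where a unit's time constant tau sets how quickly its state tracks its input (a minimal sketch, not the authors' model; the specific tau values are invented):

```python
def leaky_step(u, x, tau):
    """Leaky-integrator update: a unit with time constant tau moves a
    fraction 1/tau of the way toward its input each step."""
    return (1.0 - 1.0 / tau) * u + (1.0 / tau) * x

# Drive a fast unit (small tau) and a slow unit (large tau) with the
# same constant input: the slow unit lags far behind.
fast, slow = 0.0, 0.0
for _ in range(10):
    fast = leaky_step(fast, 1.0, tau=2.0)
    slow = leaky_step(slow, 1.0, tau=70.0)
```

In an MTRNN the fast units handle low-level sensorimotor features while the slow units, changing over much longer horizons, come to encode task-level context, which is the functional hierarchy the paper analyzes.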
Affiliation(s)
- Gibeom Park
- Department of Electrical Engineering, KAIST, Yuseong-gu, Daejeon, Republic of Korea
- Jun Tani
- Department of Electrical Engineering, KAIST, Yuseong-gu, Daejeon, Republic of Korea.
|
29
|
Mangin O, Filliat D, ten Bosch L, Oudeyer PY. MCA-NMF: Multimodal Concept Acquisition with Non-Negative Matrix Factorization. PLoS One 2015; 10:e0140732. [PMID: 26489021 PMCID: PMC4619362 DOI: 10.1371/journal.pone.0140732] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2015] [Accepted: 09/28/2015] [Indexed: 11/19/2022] Open
Abstract
In this paper we introduce MCA-NMF, a computational model of the acquisition of multimodal concepts by an agent grounded in its environment. More precisely our model finds patterns in multimodal sensor input that characterize associations across modalities (speech utterances, images and motion). We propose this computational model as an answer to the question of how some class of concepts can be learnt. In addition, the model provides a way of defining such a class of plausibly learnable concepts. We detail why the multimodal nature of perception is essential to reduce the ambiguity of learnt concepts as well as to communicate about them through speech. We then present a set of experiments that demonstrate the learning of such concepts from real non-symbolic data consisting of speech sounds, images, and motions. Finally we consider structure in perceptual signals and demonstrate that a detailed knowledge of this structure, named compositional understanding can emerge from, instead of being a prerequisite of, global understanding. An open-source implementation of the MCA-NMF learner as well as scripts and associated experimental data to reproduce the experiments are publicly available.
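A minimal, dependency-free sketch of the factorization machinery underlying MCA-NMF, using the classic Lee-Seung multiplicative updates on a toy "multimodal" histogram matrix (illustrative only; the real model operates on speech, image, and motion data, and all names here are invented):

```python
import random

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def nmf(V, rank, iters=300, eps=1e-9):
    """Plain NMF via Lee-Seung multiplicative updates: V ~= W @ H with
    W (observation x concept) and H (concept x bin) kept non-negative."""
    rng = random.Random(0)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(rank)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(rank)]
    for _ in range(iters):
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, matmul(W, H))
        H = [[H[r][j] * num[r][j] / (den[r][j] + eps) for j in range(m)]
             for r in range(rank)]
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(matmul(W, H), Ht)
        W = [[W[i][r] * num[i][r] / (den[i][r] + eps) for r in range(rank)]
             for i in range(n)]
    return W, H

# Toy "multimodal" data: rows are observations, columns are concatenated
# histogram bins from two modalities, generated by two latent concepts.
V = [[2, 0, 1, 0], [0, 2, 0, 1], [4, 0, 2, 0], [0, 4, 0, 2]]
W, H = nmf(V, rank=2)
R = matmul(W, H)
err = sum((V[i][j] - R[i][j]) ** 2 for i in range(4) for j in range(4))
```

Because columns from different modalities sit in the same matrix, each learned row of H couples patterns across modalities, which is the cross-modal association the paper treats as a concept.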
Affiliation(s)
- Olivier Mangin
- Flowers Team, Inria, Bordeaux, France
- U2IS, ENSTA ParisTech, Université Paris Saclay, Saclay, France
- David Filliat
- Flowers Team, Inria, Bordeaux, France
- U2IS, ENSTA ParisTech, Université Paris Saclay, Saclay, France
- Louis ten Bosch
- Centre for Language and Speech Technology, Radboud University, Nijmegen, Netherlands
- Pierre-Yves Oudeyer
- Flowers Team, Inria, Bordeaux, France
- U2IS, ENSTA ParisTech, Université Paris Saclay, Saclay, France
|
30
|
Learn Like Infants: A Strategy for Developmental Learning of Symbolic Skills Using Humanoid Robots. Int J Soc Robot 2015. [DOI: 10.1007/s12369-015-0289-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
31
|
Atıl İ, Kalkan S. Towards an Embodied Developing Vision System. KUNSTLICHE INTELLIGENZ 2015. [DOI: 10.1007/s13218-015-0351-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
32
|
Li S, Ferraro M, Caelli T, Pathirana PN. A syntactic two-component encoding model for the trajectories of human actions. IEEE J Biomed Health Inform 2014; 18:1903-14. [PMID: 25375687 DOI: 10.1109/jbhi.2014.2304519] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Human actions have been widely studied for their potential application in various areas such as sports, pervasive patient monitoring, and rehabilitation. However, challenges persist in determining the most useful ways to describe human actions at the sensor, limb, and complete-action levels of representation, and in deriving important relations between these levels, each involving its own atomic components. In this paper, we report on a motion encoder developed for the sensor level, based on the need to distinguish between the shape of the sensor's trajectory and its temporal characteristics during execution. This distinction is critical, as it provides a different encoding scheme than the usual velocity and acceleration measures, which confound these two attributes of any motion. At the same time, we eliminate noise from sensors by comparing temporal and spatial indexing schemes and a number of optimal filtering models for robust encoding. Results demonstrate the benefits of spatial indexing and of separating the shape and dynamics of a motion, as well as the encoder's ability to decompose complex motions into several atomic ones. Finally, we discuss how this specific type of sensor encoder bears on the derivation of limb and complete-action descriptions.
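One simple way to see the shape/dynamics split argued for above is to index a trajectory by cumulative arc length (pure spatial shape) separately from its time stamps (pure execution dynamics). A hedged sketch, not the paper's encoder; the data and names are invented:

```python
import math

def arc_length_profile(points):
    """Cumulative arc length: indexes a trajectory by its spatial shape,
    independently of how fast it was executed."""
    s = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        s.append(s[-1] + math.hypot(x1 - x0, y1 - y0))
    return s

# The same L-shaped path executed slowly and quickly: identical shape
# profile, different speed profiles.
path = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
slow_t = [0.0, 2.0, 4.0]
fast_t = [0.0, 0.5, 1.0]
s = arc_length_profile(path)
slow_speed = [(s[k + 1] - s[k]) / (slow_t[k + 1] - slow_t[k]) for k in range(2)]
fast_speed = [(s[k + 1] - s[k]) / (fast_t[k + 1] - fast_t[k]) for k in range(2)]
```

The two executions share an identical arc-length profile while their speed profiles differ by a factor of four, whereas raw velocity or acceleration would mix the two attributes together.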
|
33
|
Abstract
Research findings indicate that synchrony between events in two different modalities is a key concept in early social learning. Our longitudinal pilot study with 14 mother–child dyads is the first to support the idea that synchrony between action and language as a form of responsive behaviour in mothers relates to later language acquisition in their children. We conducted a fine-grained coding of multimodal behaviour within the dyad during an everyday diapering activity when the children were three and six months old. When the children attained 24 months, their mothers completed language surveys; this data was then related to the dyadic measures in early interaction. We propose a ‘role-switching’ model according to which it is important for three-month-olds to be exposed to multimodal input for a great deal of time, whereas for six-month-old infants, the mother should respond to the infant’s attention and provide multimodal input when her child is gazing at her.
|
34
|
Borghi AM, Cangelosi A. Action and language integration: from humans to cognitive robots. Top Cogn Sci 2014; 6:344-58. [PMID: 24943900 DOI: 10.1111/tops.12103] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2014] [Accepted: 04/25/2014] [Indexed: 11/27/2022]
Abstract
The topic is characterized by a highly interdisciplinary approach to the issue of action and language integration. Such an approach, combining computational models and cognitive robotics experiments with neuroscience, psychology, philosophy, and linguistics, can be a powerful means of helping researchers disentangle ambiguous issues, provide better and clearer definitions, and formulate clearer predictions on the links between action and language. In the introduction we briefly describe the papers and discuss the challenges they pose to future research. We identify four important phenomena the papers address and discuss in light of empirical and computational evidence: (a) the role played not only by sensorimotor and emotional information but also by natural language in conceptual representation; (b) the contextual dependency and high flexibility of the interaction between action, concepts, and language; (c) the involvement of the mirror neuron system in action and language processing; (d) the way in which the integration between action and language can be addressed by developmental robotics and Human-Robot Interaction.
Affiliation(s)
- Anna M Borghi
- Department of Psychology, University of Bologna; Institute of Cognitive Sciences and Technologies, Italian National Research Council
|
35
|
Ivaldi S, Nguyen SM, Lyubova N, Droniou A, Padois V, Filliat D, Oudeyer PY, Sigaud O. Object Learning Through Active Exploration. ACTA ACUST UNITED AC 2014. [DOI: 10.1109/tamd.2013.2280614] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
36
|
|
37
|
|
38
|
Lyon C, Nehaniv CL, Saunders J. Interactive language learning by robots: the transition from babbling to word forms. PLoS One 2012; 7:e38236. [PMID: 22719871 PMCID: PMC3374830 DOI: 10.1371/journal.pone.0038236] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2011] [Accepted: 05/01/2012] [Indexed: 11/29/2022] Open
Abstract
The advent of humanoid robots has enabled a new approach to investigating the acquisition of language, and we report on the development of robots able to acquire rudimentary linguistic skills. Our work focuses on early stages analogous to some characteristics of a human child of about 6 to 14 months, the transition from babbling to first word forms. We investigate one mechanism among many that may contribute to this process, a key factor being the sensitivity of learners to the statistical distribution of linguistic elements. As well as being necessary for learning word meanings, the acquisition of anchor word forms facilitates the segmentation of an acoustic stream through other mechanisms. In our experiments some salient one-syllable word forms are learnt by a humanoid robot in real-time interactions with naive participants. Words emerge from random syllabic babble through a learning process based on a dialogue between the robot and the human participant, whose speech is perceived by the robot as a stream of phonemes. Numerous ways of representing the speech as syllabic segments are possible. Furthermore, the pronunciation of many words in spontaneous speech is variable. However, in line with research elsewhere, we observe that salient content words are more likely than function words to have consistent canonical representations; thus their relative frequency increases, as does their influence on the learner. Variable pronunciation may contribute to early word form acquisition. The importance of contingent interaction in real-time between teacher and learner is reflected by a reinforcement process, with variable success. The examination of individual cases may be more informative than group results. Nevertheless, word forms are usually produced by the robot after a few minutes of dialogue, employing a simple, real-time, frequency-dependent mechanism. This work shows the potential of human-robot interaction systems in studies of the dynamics of early language acquisition.
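The frequency-dependent mechanism can be caricatured in a few lines: count syllable forms in the perceived stream and treat those whose relative frequency crosses a threshold as acquired word forms (a toy stand-in for the robot's real-time learner; the stream, names, and threshold are invented):

```python
from collections import Counter

def salient_forms(stream, threshold):
    """Syllable forms whose relative frequency exceeds the threshold:
    a toy stand-in for frequency-dependent word-form acquisition."""
    counts = Counter(stream)
    total = len(stream)
    return {syl for syl, c in counts.items() if c / total > threshold}

# A carer repeatedly names an object amid variable babble: the consistent
# content word dominates the distribution and is acquired first.
stream = ["ba", "lo", "ball", "mi", "ball", "da", "ball", "ku", "ball", "re"]
learned = salient_forms(stream, threshold=0.25)
```

The consistent content word rises above the varied babble exactly because, as the abstract notes, canonical forms accumulate relative frequency faster than inconsistent ones.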
Affiliation(s)
- Caroline Lyon
- Adaptive Systems Research Group, University of Hertfordshire, Hertfordshire, United Kingdom.
|
39
|
Abstract
Language and action have been found to share a common neural basis and in particular a common 'syntax', an analogous hierarchical and compositional organization. While language structure analysis has led to the formulation of different grammatical formalisms and associated discriminative or generative computational models, the structure of action is still elusive and so are the related computational models. However, structuring action has important implications on action learning and generalization, in both human cognition research and computation. In this study, we present a biologically inspired generative grammar of action, which employs the structure-building operations and principles of Chomsky's Minimalist Programme as a reference model. In this grammar, action terminals combine hierarchically into temporal sequences of actions of increasing complexity; the actions are bound with the involved tools and affected objects and are governed by certain goals. We show how the tool role and the affected-object role of an entity within an action drive the derivation of the action syntax in this grammar and control recursion, merge and move, the latter being mechanisms that manifest themselves not only in human language, but in human action too.
Affiliation(s)
- Katerina Pastra
- Cognitive Systems Research Institute, 7 Makedonomachou Prantouna Street, Athens 11525, Greece.
|
40
|
Fields C. Motion as manipulation: implementation of force-motion analogies by event-file binding and action planning. Cogn Process 2012; 13:231-41. [PMID: 22331426 DOI: 10.1007/s10339-012-0436-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2011] [Accepted: 01/31/2012] [Indexed: 11/28/2022]
Abstract
Tool-improvisation analogies are structure-mapping inferences implemented, in many species, by event-file binding and pre-motor action planning. These processes act on multi-modal representations of currently perceived situations and eventuate in motor acts that can be directly evaluated for success or failure; they employ implicit representations of force-motion relations encoded by the pre-motor system and do not depend on explicit, language-like representations of relational concepts. A detailed reconstruction of the analogical reasoning steps involved in Rutherford's and Bohr's development of the first quantized-orbit model of atomic structure is used to show that human force-motion analogies can in general be implemented by these mechanisms. This event-file manipulation model of the implementation of force-motion analogies is distinguished from the standard view that structure-mapping analogies require the manipulation of explicit, language-like representations of relational concepts.
|
41
|
Stramandinoli F, Marocco D, Cangelosi A. The grounding of higher order concepts in action and language: a cognitive robotics model. Neural Netw 2012; 32:165-73. [PMID: 22386502 DOI: 10.1016/j.neunet.2012.02.012] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2011] [Revised: 01/10/2012] [Accepted: 02/07/2012] [Indexed: 10/14/2022]
Abstract
In this paper we present a neuro-robotic model that uses artificial neural networks for investigating the relations between the development of symbol manipulation capabilities and of sensorimotor knowledge in the humanoid robot iCub. We describe a cognitive robotics model in which the linguistic input provided by the experimenter guides the autonomous organization of the robot's knowledge. In this model, sequences of linguistic inputs lead to the development of higher-order concepts grounded in basic concepts and actions. In particular, we show that higher-order symbolic representations can be indirectly grounded in action primitives that are directly grounded in sensorimotor experiences. The use of a recurrent neural network also permits the learning of higher-order concepts based on temporal sequences of action primitives. Hence, the meaning of a higher-order concept is obtained through the combination of basic sensorimotor knowledge. We argue that such a hierarchical organization of concepts can be a possible account for the acquisition of abstract words in cognitive robots.
Affiliation(s)
- Francesca Stramandinoli
- Centre for Robotics and Neural Systems, University of Plymouth, Devon, PL48AA, United Kingdom.
|
42
|
Cangelosi A. Embodied compositionality. Comment on "Modeling the cultural evolution of language" by Luc Steels. Phys Life Rev 2011; 8:379-80. [PMID: 22056395 DOI: 10.1016/j.plrev.2011.10.004] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2011] [Accepted: 10/11/2011] [Indexed: 11/29/2022]
|
43
|
Kopp S, Steil JJ. Special corner on "cognitive robotics". Cogn Process 2011; 12:317-8. [PMID: 21953385 DOI: 10.1007/s10339-011-0415-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2011] [Accepted: 09/15/2011] [Indexed: 11/27/2022]
|
44
|
Pezzulo G, Baldassarre G, Cesta A, Nolfi S. Research on cognitive robotics at the Institute of Cognitive Sciences and Technologies, National Research Council of Italy. Cogn Process 2011; 12:367-74. [PMID: 21468745 DOI: 10.1007/s10339-011-0402-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2010] [Accepted: 03/21/2011] [Indexed: 10/18/2022]
Affiliation(s)
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy.
|