1
Silverstein P, Feng J, Westermann G, Parise E, Twomey KE. Infants Learn to Follow Gaze in Stages: Evidence Confirming a Robotic Prediction. Open Mind 2022; 5:174-188. [PMID: 35024530] [PMCID: PMC8746125] [DOI: 10.1162/opmi_a_00049]
Abstract
Gaze following is an early-emerging skill in infancy argued to be fundamental to joint attention and later language development. However, how gaze following emerges is a topic of great debate. Representational theories assume that in order to follow adults’ gaze, infants must have a rich sensitivity to adults’ communicative intention from birth. In contrast, learning-based theories hold that infants may learn to gaze follow based on low-level social reinforcement, without the need to understand others’ mental states. Nagai et al. (2006) successfully taught a robot to gaze follow through social reinforcement and found that the robot learned in stages: first in the horizontal plane, and later in the vertical plane—a prediction that does not follow from representational theories. In the current study, we tested this prediction in an eye-tracking paradigm. Six-month-olds did not follow gaze in either the horizontal or vertical plane, whereas 12-month-olds and 18-month-olds only followed gaze in the horizontal plane. These results confirm the core prediction of the robot model, suggesting that children may also learn to gaze follow through social reinforcement coupled with a structured learning environment.
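The staged-learning account suggests a simple simulation. The sketch below is a toy illustration of mine, not the Nagai et al. (2006) architecture: a tabular learner is rewarded whenever its gaze shift lands where the caregiver is looking, under the illustrative assumption that vertical head-pose cues are read less reliably than horizontal ones, so horizontal following is acquired first.

```python
# Toy sketch of reward-driven gaze-following (not the Nagai et al. 2006 model).
# Assumption for illustration: vertical head-pose cues are noisier than horizontal ones.
import random

random.seed(0)
DIRS_H = ["left", "right"]
DIRS_V = ["up", "down"]
ALL_DIRS = DIRS_H + DIRS_V

# Q[cue][action]: learned value of shifting gaze toward `action` after perceiving `cue`.
Q = {c: {a: 0.0 for a in ALL_DIRS} for c in ALL_DIRS}
ALPHA = 0.1                      # learning rate
NOISE = {"h": 0.1, "v": 0.4}     # assumed cue-reading error rate per axis

def observe(true_dir):
    """Perceived head-pose cue; vertical cues are misread more often (assumption)."""
    axis = "h" if true_dir in DIRS_H else "v"
    pool = DIRS_H if axis == "h" else DIRS_V
    return random.choice(pool) if random.random() < NOISE[axis] else true_dir

def choose(cue, eps=0.2):
    """Epsilon-greedy gaze shift based on learned values."""
    if random.random() < eps:
        return random.choice(ALL_DIRS)
    return max(Q[cue], key=Q[cue].get)

def accuracy(dirs, trials=500):
    hits = 0
    for _ in range(trials):
        true_dir = random.choice(dirs)
        cue = observe(true_dir)
        hits += max(Q[cue], key=Q[cue].get) == true_dir
    return hits / trials

for step in range(1, 3001):
    true_dir = random.choice(ALL_DIRS)              # where the caregiver actually looks
    cue = observe(true_dir)
    action = choose(cue)
    reward = 1.0 if action == true_dir else 0.0     # social reinforcement: shared object found
    Q[cue][action] += ALPHA * (reward - Q[cue][action])
    if step % 1000 == 0:
        print(step, "horizontal acc:", accuracy(DIRS_H), "vertical acc:", accuracy(DIRS_V))
```

Under these assumptions horizontal accuracy rises above vertical accuracy, which is the qualitative pattern the eye-tracking study tests for; the mechanism that produced staging in the original robot model is richer than this toy.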
Affiliation(s)
- Jinzhi Feng
- Psychology Department, Lancaster University, UK
- Katherine E Twomey
- Division of Human Communication, Development and Hearing, University of Manchester, UK
2
Abstract
What is a fundamental ability for cognitive development? Although many researchers have been addressing this question, no shared understanding has been acquired yet. We propose that predictive learning of sensorimotor signals plays a key role in early cognitive development. The human brain is known to represent sensorimotor signals in a predictive manner, i.e. it attempts to minimize prediction error between incoming sensory signals and top–down prediction. We extend this view and suggest that two mechanisms for minimizing prediction error lead to the development of cognitive abilities during early infancy. The first mechanism is to update an immature predictor. The predictor must be trained through sensorimotor experiences because it does not inherently have prediction ability. The second mechanism is to execute an action anticipated by the predictor. Interacting with other individuals often increases prediction error, which can be minimized by executing one's own action corresponding to others’ action. Our experiments using robotic systems replicated developmental dynamics observed in infants. The capabilities of self–other cognition and goal-directed action were acquired based on the first mechanism, whereas imitation and prosocial behaviours emerged based on the second mechanism. Our theory further provides a potential mechanism for autism spectrum condition. Atypical tolerance for prediction error is hypothesized to be a cause of perceptual and social difficulties. This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction’.
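The abstract describes two routes for minimizing prediction error: updating an immature predictor through experience, and executing the action that the predictor anticipates. The sketch below is a minimal illustration of mine, not the authors' robot implementation: a linear forward model of a toy body is trained by a delta rule, and then an action is selected whose predicted outcome best matches a state observed in another agent, a crude analogue of the imitation-like second mechanism.

```python
# Minimal sketch (illustrative only) of the two prediction-error-minimization routes.
import numpy as np

rng = np.random.default_rng(0)

# Unknown "body": the sensory outcome is a fixed linear function of the motor command.
TRUE_W = np.array([[0.8, -0.2],
                   [0.1,  0.9]])

def sense(action):
    return TRUE_W @ action + rng.normal(0, 0.01, 2)

W = np.zeros((2, 2))   # immature predictor, trained online
LR = 0.1

# Route 1: update the predictor to reduce prediction error over self-generated actions.
for _ in range(2000):
    a = rng.normal(0, 1, 2)
    err = sense(a) - W @ a          # prediction error
    W += LR * np.outer(err, a)      # delta-rule update

# Route 2: given a sensory state produced by another agent, choose one's own action
# whose *predicted* outcome minimizes the error to it (imitation-like behaviour).
observed_other = np.array([0.5, -0.3])
candidates = rng.normal(0, 1, (500, 2))
pred = candidates @ W.T
best = candidates[np.argmin(np.linalg.norm(pred - observed_other, axis=1))]

print("learned predictor:\n", W.round(2))
print("chosen action:", best.round(2), "-> predicted outcome:", (W @ best).round(2))
```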
Affiliation(s)
- Yukie Nagai
- National Institute of Information and Communications Technology, Suita, Osaka 565-0871, Japan
3
Slone LK, Smith LB, Yu C. Self-generated variability in object images predicts vocabulary growth. Dev Sci 2019; 22:e12816. [PMID: 30770597] [PMCID: PMC6697249] [DOI: 10.1111/desc.12816]
Abstract
Object names are a major component of early vocabularies and learning object names depends on being able to visually recognize objects in the world. However, the fundamental visual challenge of the moment-to-moment variations in object appearances that learners must resolve has received little attention in word learning research. Here we provide the first evidence that image-level object variability matters and may be the link that connects infant object manipulation to vocabulary development. Using head-mounted eye tracking, the present study objectively measured individual differences in the moment-to-moment variability of visual instances of the same object, from infants' first-person views. Infants who generated more variable visual object images through manual object manipulation at 15 months of age experienced greater vocabulary growth over the next six months. Elucidating infants' everyday visual experiences with objects may constitute a crucial missing link in our understanding of the developmental trajectory of object name learning.
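The key individual-difference measure is how much the object's image varies from moment to moment in the infant's first-person view. As a hedged sketch (not the authors' head-camera pipeline; the bounding boxes and descriptor are hypothetical), one could summarize each frame's object view by a few geometric features and score variability as the mean pairwise distance between frames:

```python
# Illustrative variability score over per-frame object descriptors (hypothetical data).
import numpy as np

def frame_descriptor(bbox, frame_size=(640, 480)):
    """bbox = (x, y, w, h) of the tracked object in one egocentric frame."""
    x, y, w, h = bbox
    fw, fh = frame_size
    return np.array([w * h / (fw * fh),      # relative image size
                     (x + w / 2) / fw,       # normalized centroid x
                     (y + h / 2) / fh])      # normalized centroid y

def image_variability(bboxes):
    d = np.array([frame_descriptor(b) for b in bboxes])
    diffs = d[:, None, :] - d[None, :, :]
    return np.sqrt((diffs ** 2).sum(-1)).mean()  # mean pairwise descriptor distance

# Toy comparison: manual handling that rotates/moves the object vs. holding it still.
varied = [(100 + 20 * i, 80 + 10 * i, 120 + 5 * i, 90 + 8 * i) for i in range(10)]
static = [(200, 150, 100, 80)] * 10
print("varied handling:", round(image_variability(varied), 3))
print("static handling:", round(image_variability(static), 3))
```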
Affiliation(s)
- Lauren K Slone
- Department of Psychological and Brain Sciences, Indiana University Bloomington, Bloomington, Indiana
- Linda B Smith
- Department of Psychological and Brain Sciences, Indiana University Bloomington, Bloomington, Indiana
- Chen Yu
- Department of Psychological and Brain Sciences, Indiana University Bloomington, Bloomington, Indiana
4
Horii T, Nagai Y, Asada M. Modeling Development of Multimodal Emotion Perception Guided by Tactile Dominance and Perceptual Improvement. IEEE Trans Cogn Dev Syst 2018. [DOI: 10.1109/tcds.2018.2809434]
5
Cangelosi A, Schlesinger M. From Babies to Robots: The Contribution of Developmental Robotics to Developmental Psychology. Child Dev Perspect 2018. [DOI: 10.1111/cdep.12282]
6
Stulp F, Oudeyer PY. Proximodistal exploration in motor learning as an emergent property of optimization. Dev Sci 2017; 21:e12638. [PMID: 29285864] [DOI: 10.1111/desc.12638]
Abstract
To harness the complexity of their high-dimensional bodies during sensorimotor development, infants are guided by patterns of freezing and freeing of degrees of freedom. For instance, when learning to reach, infants free the degrees of freedom in their arm proximodistally, that is, from joints that are closer to the body to those that are more distant. Here, we formulate and study computationally the hypothesis that such patterns can emerge spontaneously as the result of a family of stochastic optimization processes, without an innate encoding of a maturational schedule. In particular, we present simulated experiments with an arm where a computational learner progressively acquires reaching skills through adaptive exploration, and we show that a proximodistal organization appears spontaneously, which we denote PDFF (Proximodistal Freezing and Freeing of degrees of freedom). We also compare this emergent organization across different arm morphologies, from human-like to quite unnatural ones, to study the effect of different kinematic structures on the emergence of PDFF.
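The following sketch, which is simplified and not the paper's algorithm, shows the kind of setup involved: a planar arm, a reaching cost, and a stochastic hill-climber that maintains a separate exploration magnitude per joint. The per-joint magnitudes are the quantity one would inspect for proximodistal freezing and freeing; the credit heuristic used here to adapt them is my own, and whether PDFF actually emerges depends on the optimization details studied in the paper.

```python
# Simplified (1+1)-style stochastic optimization of a planar reaching task.
import numpy as np

rng = np.random.default_rng(1)
N_JOINTS, LINK_LEN = 6, 1.0
TARGET = np.array([3.0, 2.0])

def hand_position(angles):
    """Forward kinematics of a planar chain with equal link lengths."""
    acc, pos = 0.0, np.zeros(2)
    for a in angles:
        acc += a
        pos = pos + LINK_LEN * np.array([np.cos(acc), np.sin(acc)])
    return pos

def cost(angles):
    return np.linalg.norm(hand_position(angles) - TARGET)

angles = np.zeros(N_JOINTS)
sigma = np.full(N_JOINTS, 0.3)      # per-joint exploration magnitude ("free" vs. "frozen")
best = cost(angles)

for it in range(5001):
    trial = angles + rng.normal(0.0, sigma)
    c = cost(trial)
    if c < best:
        # Heuristic credit assignment: keep exploring joints whose change helped.
        sigma = 0.9 * sigma + 0.1 * np.abs(trial - angles)
        angles, best = trial, c
    if it % 1000 == 0:
        print(it, "cost", round(best, 3), "per-joint sigma", sigma.round(2))
```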
Affiliation(s)
- Freek Stulp
- FLOWERS Team, INRIA Bordeaux Sud-Ouest, Talence, France; ENSTA ParisTech, Université Paris-Saclay, Paris, France; German Aerospace Center (DLR), Institute of Robotics and Mechatronics, Wessling, Germany
- Pierre-Yves Oudeyer
- FLOWERS Team, INRIA Bordeaux Sud-Ouest, Talence, France; ENSTA ParisTech, Université Paris-Saclay, Paris, France
7
A neurocomputational investigation of reinforcement-based decision making as a candidate latent vulnerability mechanism in maltreated children. Dev Psychopathol 2017; 29:1689-1705. [DOI: 10.1017/s095457941700133x]
Abstract
Alterations in reinforcement-based decision making may be associated with increased psychiatric vulnerability in children who have experienced maltreatment. A probabilistic passive avoidance task and a model-based functional magnetic resonance imaging analytic approach were implemented to assess the neurocomputational components underlying decision making: (a) reinforcement expectancies (the representation of the outcomes associated with a stimulus) and (b) prediction error signaling (the ability to detect the differences between expected and actual outcomes). There were three main findings. First, the maltreated group (n = 18; mean age = 13), relative to nonmaltreated peers (n = 19; mean age = 13), showed decreased activity during expected value processing in a widespread network commonly associated with reinforcement expectancies representation, including the striatum (especially the caudate), the orbitofrontal cortex, and medial temporal structures including the hippocampus and insula. Second, consistent with previously reported hyperresponsiveness to negative cues in the context of childhood abuse, the maltreated group showed increased prediction error signaling in the middle cingulate gyrus, somatosensory cortex, superior temporal gyrus, and thalamus. Third, the maltreated group showed increased activity in frontodorsal regions and in the putamen during expected value representation. These findings suggest that early adverse environments disrupt the development of decision-making processes, which in turn may compromise psychosocial functioning in ways that increase latent vulnerability to psychiatric disorder.
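The two model-derived quantities named here, expected value and prediction error, typically come from fitting a simple reinforcement-learning model to trial-by-trial behaviour and then regressing those quantities against brain activity. The sketch below is a generic Rescorla-Wagner-style illustration with hypothetical stimuli and contingencies, not the study's actual task or model-based fMRI analysis:

```python
# Generic expected-value / prediction-error trace, Rescorla-Wagner style (illustrative).
import random

random.seed(0)
# Hypothetical contingencies: probability that responding to a stimulus yields
# reward (+1) rather than punishment (-1).
P_REWARD = {"A": 0.8, "B": 0.6, "C": 0.4, "D": 0.2}
ev = {s: 0.0 for s in P_REWARD}   # expected value per stimulus
ALPHA = 0.2                       # learning rate
trace = []                        # per-trial (stimulus, EV, PE): the model regressors

for t in range(200):
    s = random.choice(list(P_REWARD))
    outcome = 1.0 if random.random() < P_REWARD[s] else -1.0
    pe = outcome - ev[s]          # prediction error
    trace.append((s, round(ev[s], 2), round(pe, 2)))
    ev[s] += ALPHA * pe           # expected-value update

print("final expected values:", {s: round(v, 2) for s, v in ev.items()})
print("first trials (stimulus, EV, PE):", trace[:5])
```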
8
Lee K, Choo H. Constructing Perceptual Common Ground Between Human and Robot Through Joint Attention. Int J Hum Robot 2017. [DOI: 10.1142/s0219843617500207]
Abstract
Joint attention is a communicative activity that allows social partners to share perceptual experiences by jointly attending to an environmental object. Unlike the common approach to joint attention in robotics, which is based on the developmental view, here joint attention is conceptualized with a psychophysical paradigm known as cueing. The triadic interaction of joint attention is formalized as the conditional probability of an attentional response for a given target candidate derived from object features and a cue derived from a human partner's indication. A robotic system to which the joint attention model is applied conducted a series of tasks to demonstrate the properties of the computational model. The robotic system successfully performed tasks that could not be specified by the information derived from a target object alone; furthermore, the system demonstrated how perceptual and selection ambiguity is resolved through joint attentive interaction, with the interacting partners converging on a common perceptual state. The results imply that a perceptual common ground is constructed on the triadic relationship between user, robot, and objects through joint attentive interaction.
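The formalization described here, a conditional probability of attending to a candidate object given scores derived from the object's features and from the partner's indication, can be sketched roughly as follows. This is my own rendering under assumed feature and cue scores, not the paper's equations:

```python
# Toy combination of object-feature scores with a partner-derived cue (illustrative).
import math

def feature_score(obj):
    """Bottom-up conspicuity from object features (toy: size * contrast)."""
    return obj["size"] * obj["contrast"]

def cue_score(obj, cue_angle, kappa=8.0):
    """Higher when the object lies near the partner's pointing/gaze direction."""
    return math.exp(kappa * math.cos(obj["angle"] - cue_angle))  # von Mises-like tuning

def attention_distribution(objects, cue_angle):
    w = [feature_score(o) * cue_score(o, cue_angle) for o in objects]
    z = sum(w)
    return [x / z for x in w]     # normalized: P(attend to object | features, cue)

objects = [
    {"name": "red ball",  "size": 1.0, "contrast": 0.9, "angle": 0.2},
    {"name": "blue cup",  "size": 0.6, "contrast": 0.8, "angle": 1.5},
    {"name": "green toy", "size": 0.8, "contrast": 0.5, "angle": -1.0},
]

# The partner indicates roughly toward 1.4 rad: the cue resolves the ambiguity.
for o, p in zip(objects, attention_distribution(objects, cue_angle=1.4)):
    print(o["name"], round(p, 3))
```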
Affiliation(s)
- Kangwoo Lee
- College of Information and Communication Engineering, Sungkyunkwan University, Chonchon-dong, Jangan-gu, Suwon 440-746, South Korea
- Hyunseung Choo
- College of Software, Sungkyunkwan University, Chonchon-dong, Jangan-gu, Suwon 440-746, South Korea
9
Oudeyer PY. What do we learn about development from baby robots? Wiley Interdiscip Rev Cogn Sci 2016; 8:e1395. [PMID: 27906505] [DOI: 10.1002/wcs.1395]
Abstract
Understanding infant development is one of the great scientific challenges of contemporary science. In addressing this challenge, robots have proven useful as they allow experimenters to model the developing brain and body and understand the processes by which new patterns emerge in sensorimotor, cognitive, and social domains. Robotics also complements traditional experimental methods in psychology and neuroscience, where only a few variables can be studied at the same time. Moreover, work with robots has enabled researchers to systematically explore the role of the body in shaping the development of skill. All told, this work has shed new light on development as a complex dynamical system.
10
Sheikhi S, Odobez JM. Combining dynamic head pose–gaze mapping with the robot conversational state for attention recognition in human–robot interactions. Pattern Recognit Lett 2015. [DOI: 10.1016/j.patrec.2014.10.002]
11
Liu C, Ishi CT, Ishiguro H, Hagita N. Generation of Nodding, Head Tilting and Gazing for Human–Robot Speech Interaction. Int J Hum Robot 2013. [DOI: 10.1142/s0219843613500096]
Abstract
Head motion occurs naturally and in synchrony with speech during human dialogue communication, and may carry paralinguistic information, such as intentions, attitudes and emotions. Therefore, natural-looking head motion by a robot is important for smooth human–robot interaction. Based on rules inferred from analyses of the relationship between head motion and dialogue acts, this paper proposes a model for generating head tilting and nodding, and evaluates the model using three types of humanoid robot (a very human-like android, "Geminoid F", a typical humanoid robot with less facial degrees of freedom, "Robovie R2", and a robot with a 3-axis rotatable neck and movable lips, "Telenoid R2"). Analysis of subjective scores shows that the proposed model including head tilting and nodding can generate head motion with increased naturalness compared to nodding only or directly mapping people's original motions without gaze information. We also find that an upward motion of a robot's face can be used by robots which do not have a mouth in order to provide the appearance that utterance is taking place. Finally, we conduct an experiment in which participants act as visitors to an information desk attended by robots. As a consequence, we verify that our generation model performs equally to directly mapping people's original motions with gaze information in terms of perceived naturalness.
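The generation model is rule-based: dialogue-act information about the utterance is mapped to head-motion primitives such as nods and tilts. The sketch below only illustrates that general structure; the actual labels, rules, and parameters in the paper were derived from the authors' corpus analysis and differ from these hypothetical ones.

```python
# Hypothetical dialogue-act-to-head-motion rule table (illustrative structure only).
from dataclasses import dataclass
import random

@dataclass
class HeadMotion:
    nod: bool = False
    tilt_deg: float = 0.0
    face_up: bool = False   # upward face motion to signal speaking on mouthless robots

RULES = {
    "affirmation": lambda: HeadMotion(nod=True),
    "question":    lambda: HeadMotion(tilt_deg=random.choice([-10.0, 10.0])),
    "thinking":    lambda: HeadMotion(tilt_deg=15.0),
    "utterance":   lambda: HeadMotion(face_up=True),
    "backchannel": lambda: HeadMotion(nod=True, tilt_deg=5.0),
}

def generate(dialogue_act: str) -> HeadMotion:
    """Return the head motion for a dialogue act, or a neutral pose if unknown."""
    return RULES.get(dialogue_act, lambda: HeadMotion())()

for act in ["affirmation", "question", "utterance", "greeting"]:
    print(act, "->", generate(act))
```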
Affiliation(s)
- Chaoran Liu
- ATR Intelligent Robotics and Communication Labs, Kyoto, 619-0288, Japan
- Carlos T. Ishi
- ATR Intelligent Robotics and Communication Labs, Kyoto, 619-0288, Japan
- Norihiro Hagita
- ATR Intelligent Robotics and Communication Labs, Kyoto, 619-0288, Japan
12
Coradeschi S, Loutfi A, Wrede B. A Short Review of Symbol Grounding in Robotic and Intelligent Systems. Künstliche Intelligenz 2013. [DOI: 10.1007/s13218-013-0247-2]
13
14
Abstract
Computational models of development aim to describe the mechanisms that underlie the acquisition of new skills or the emergence of new capabilities. The strength of a model is judged by both its ability to explain the phenomena in question as well as its ability to generate new hypotheses, generalize to new situations, and provide a unifying conceptual framework. Although often constructed using traditional engineering methodologies, evaluating the performance of a computational model of development in terms of traditional perspectives is a flawed approach. This paper addresses the fundamental issues that confound quantitative analysis of computational models of developmental systems. In particular, we focus on the following recommendations: (i) do not equate the success of a developmental model with its peak performance at some task; (ii) do not employ purely subjective or vague measures of model fitness; and (iii) do not hide or reject variation as found in the computational model. Along the way, we discuss the aspects of computational models of development that lead to the requirements for specialized methods of analysis.
Affiliation(s)
- Frederick Shic
- Yale Social Robotics Laboratory, Yale University, 51 Prospect Street, New Haven, CT 06511, USA
- Brian Scassellati
- Yale Social Robotics Laboratory, Yale University, 51 Prospect Street, New Haven, CT 06511, USA
15
Takahashi Y, Yoshida K, Hibino F, Maeda Y. Human Pointing Navigation Interface for Mobile Robot with Spherical Vision System. J Adv Comput Intell Intell Inform 2011. [DOI: 10.20965/jaciii.2011.p0869]
Abstract
Human-robot interaction requires an intuitive interface, which is not achievable with devices such as a joystick or teaching pendant that also demand some training. Instruction by gesture is one example of an intuitive interface requiring no training, and pointing is one of the simplest gestures. We propose simple pointing recognition for a mobile robot equipped with an upward-directed camera system. Using this system, the robot recognizes pointing and navigates to the location the user points to through simple visual feedback control. This paper explores the feasibility and utility of our proposal, as shown by the results of a questionnaire comparing the proposed and conventional interfaces.
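Once the pointed-at floor location has been estimated in robot coordinates from the upward-directed camera view, the navigation step can be a simple closed-loop controller. The following toy proportional controller is an assumption of mine for illustration, not the paper's implementation:

```python
# Toy proportional controller driving a mobile robot toward an estimated goal point.
import math

def control_step(robot_xy, robot_heading, goal_xy, k_lin=0.5, k_ang=1.5):
    """Return forward and angular velocity commands plus the remaining distance."""
    dx, dy = goal_xy[0] - robot_xy[0], goal_xy[1] - robot_xy[1]
    dist = math.hypot(dx, dy)
    heading_err = math.atan2(dy, dx) - robot_heading
    heading_err = math.atan2(math.sin(heading_err), math.cos(heading_err))  # wrap to [-pi, pi]
    return k_lin * dist, k_ang * heading_err, dist

# Simulate the loop until the (hypothetical) pointed-at goal is reached.
x, y, th = 0.0, 0.0, 0.0
goal, dt = (2.0, 1.0), 0.1
for step in range(300):
    v, w, dist = control_step((x, y), th, goal)
    if dist < 0.05:
        print("reached pointed-at location after", step, "steps")
        break
    th += w * dt
    x += v * math.cos(th) * dt
    y += v * math.sin(th) * dt
```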
16
Nagai Y, Rohlfing K. Computational Analysis of Motionese Toward Scaffolding Robot Action Learning. IEEE Trans Auton Ment Dev 2009. [DOI: 10.1109/tamd.2009.2021090]
17
Asada M, Hosoda K, Kuniyoshi Y, Ishiguro H, Inui T, Yoshikawa Y, Ogino M, Yoshida C. Cognitive Developmental Robotics: A Survey. IEEE Trans Auton Ment Dev 2009. [DOI: 10.1109/tamd.2009.2021702]
18
Socialization between toddlers and robots at an early childhood education center. Proc Natl Acad Sci U S A 2007; 104:17954-8. [PMID: 17984068] [DOI: 10.1073/pnas.0707769104]
Abstract
A state-of-the-art social robot was immersed in a classroom of toddlers for >5 months. The quality of the interaction between children and robots improved steadily for 27 sessions, quickly deteriorated for 15 sessions when the robot was reprogrammed to behave in a predictable manner, and improved in the last three sessions when the robot displayed again its full behavioral repertoire. Initially, the children treated the robot very differently than the way they treated each other. By the last sessions, 5 months later, they treated the robot as a peer rather than as a toy. Results indicate that current robot technology is surprisingly close to achieving autonomous bonding and socialization with human toddlers for sustained periods of time and that it could have great potential in educational settings assisting teachers and enriching the classroom environment.
19
Muhl C, Nagai Y, Sagerer G. On Constructing a Communicative Space in HRI. Lect Notes Comput Sci 2007. [DOI: 10.1007/978-3-540-74565-5_21]