1
Bergoin R, Boucenna S, D'Urso R, Cohen D, Pitti A. A developmental model of audio-visual attention (MAVA) for bimodal language learning in infants and robots. Sci Rep 2024; 14:20492. PMID: 39242623; PMCID: PMC11379723; DOI: 10.1038/s41598-024-69245-2.
Abstract
A social individual must effectively manage the amount of complex information in his or her environment, relative to his or her own purpose, to obtain relevant information. This paper presents a neural architecture that aims to reproduce in robots the attention mechanisms (alerting, orienting, selecting) that humans use efficiently during audiovisual tasks. We evaluated the system on its ability to identify relevant sources of information in the faces of subjects uttering vowels. We propose a developmental model of audio-visual attention (MAVA) that combines Hebbian learning with a competition between saliency maps based on visual movement and audio energy. MAVA effectively combines bottom-up and top-down information to orient the system toward pertinent areas. The system has several advantages, including online and autonomous learning, low computation time, and robustness to environmental noise. MAVA outperforms other artificial models for detecting speech sources under various noise conditions.
Affiliation(s)
- Raphaël Bergoin
- ETIS, UMR 8051, ENSEA, CY Cergy Paris Université, CNRS, Cergy-Pontoise, France
- Sofiane Boucenna
- ETIS, UMR 8051, ENSEA, CY Cergy Paris Université, CNRS, Cergy-Pontoise, France
- Raphaël D'Urso
- ETIS, UMR 8051, ENSEA, CY Cergy Paris Université, CNRS, Cergy-Pontoise, France
- David Cohen
- Service de Psychiatrie de l'Enfant et de l'Adolescent, Hôpital Pitié-Salpêtrière, AP-HP, Paris, France
- Institut des Systèmes Intelligents et de Robotiques, Université Pierre et Marie Curie, Paris, France
- Alexandre Pitti
- ETIS, UMR 8051, ENSEA, CY Cergy Paris Université, CNRS, Cergy-Pontoise, France
2
Fears NE, Sherrod GM, Blankenship D, Patterson RM, Hynan LS, Wijayasinghe I, Popa DO, Bugnariu NL, Miller HL. Motor differences in autism during a human-robot imitative gesturing task. Clin Biomech (Bristol, Avon) 2023; 106:105987. PMID: 37207496; PMCID: PMC10684312; DOI: 10.1016/j.clinbiomech.2023.105987.
Abstract
BACKGROUND Difficulty with imitative gesturing is frequently observed as a clinical feature of autism. Current practices for assessing imitative gesturing ability (behavioral observation and parent report) do not allow precise measurement of specific components of imitative gesturing performance, relying instead on subjective judgments. Advances in technology allow researchers to objectively quantify the nature of these movement differences and to use less socially stressful interaction partners (e.g., robots). In this study, we aimed to quantify differences in imitative gesturing between autistic and neurotypical participants during human-robot interaction. METHODS Thirty-five autistic (n = 19) and neurotypical (n = 16) participants imitated social gestures of an interactive robot (e.g., waving). The movements of the participants and the robot were recorded using an infrared motion-capture system with reflective markers on corresponding head and body locations. We used dynamic time warping to quantify the degree to which the participant's and robot's movements were aligned across the movement cycle, and work contribution to determine how much each joint angle contributed to producing the movements. FINDINGS Results revealed differences between autistic and neurotypical participants in imitative accuracy and work contribution, primarily in movements requiring unilateral extension of the arm. Autistic participants imitated the robot less accurately and used less work at the shoulder than neurotypical participants. INTERPRETATION These findings indicate differences in autistic participants' ability to imitate an interactive robot. They build on our understanding of the motor control and sensorimotor integration mechanisms that support imitative gesturing in autism, which may aid in identifying appropriate intervention targets.
Affiliation(s)
- Nicholas E Fears
- University of North Texas Health Science Center, Fort Worth, TX, USA; University of Michigan, Ann Arbor, MI, USA; Louisiana State University, Baton Rouge, LA, USA
- Gabriela M Sherrod
- University of North Texas Health Science Center, Fort Worth, TX, USA; University of Alabama at Birmingham, USA
- Rita M Patterson
- University of North Texas Health Science Center, Fort Worth, TX, USA
- Linda S Hynan
- University of Texas Southwestern Medical Center, Dallas, TX, USA
- Dan O Popa
- University of Louisville, Louisville, KY, USA
- Nicoleta L Bugnariu
- University of North Texas Health Science Center, Fort Worth, TX, USA; University of the Pacific, School of Health Sciences, USA
- Haylie L Miller
- University of North Texas Health Science Center, Fort Worth, TX, USA; University of Michigan, Ann Arbor, MI, USA
3
Belo JPR, Azevedo H, Ramos JJG, Romero RAF. Deep Q-network for social robotics using emotional social signals. Front Robot AI 2022; 9:880547. PMID: 36226257; PMCID: PMC9548603; DOI: 10.3389/frobt.2022.880547.
Abstract
Social robotics is a branch of human-robot interaction dedicated to developing systems that allow robots to operate in unstructured environments in the presence of human beings. Social robots must interact with humans by understanding social signals and responding appropriately to them. Most social robots are still pre-programmed and have little ability to learn and to respond with appropriate actions during an interaction with humans. More elaborate recent methods use body movements, gaze direction, and body language, but they generally neglect vital signals present during an interaction, such as the human emotional state. In this article, we address the problem of developing a system that enables a robot to decide, autonomously, which behaviors to emit as a function of the human emotional state. On one side, Reinforcement Learning (RL) offers social robots a way to learn advanced models of social cognition, following a self-learning paradigm, using features automatically extracted from high-dimensional sensory information. On the other side, Deep Learning (DL) models can help robots capture information from the environment, abstracting complex patterns from visual information. The combination of these two techniques is known as Deep Reinforcement Learning (DRL). The purpose of this work is the development of a DRL system to promote natural and socially acceptable interaction between humans and robots. To this end, we propose an architecture, Social Robotics Deep Q-Network (SocialDQN), for teaching social robots to behave and interact appropriately with humans based on social signals, especially human emotional states. This is a relevant contribution to the area, since social signals must not only be recognized by the robot but must also help it take actions appropriate to the situation at hand. Features extracted from people's faces are used to estimate the human emotional state and improve the robot's perception. The development and validation of the system were carried out with the support of the SimDRLSR simulator. Results obtained through several tests demonstrate that the system learned to maximize the rewards satisfactorily and, consequently, that the robot behaves in a socially acceptable way.
Affiliation(s)
- José Pedro R. Belo
- Computer Science Department, Institute of Mathematics and Computer Science, University of São Paulo, São Carlos, Brazil
- *Correspondence: José Pedro R. Belo
- Helio Azevedo
- Center for Information Technology Renato Archer, Campinas, Brazil
- Roseli A. F. Romero
- Computer Science Department, Institute of Mathematics and Computer Science, University of São Paulo, São Carlos, Brazil
4
Chevalère J, Kirtay M, Hafner VV, Lazarides R. Who to Observe and Imitate in Humans and Robots: The Importance of Motivational Factors. Int J Soc Robot 2022. DOI: 10.1007/s12369-022-00923-9.
Abstract
Imitation is a vital skill that humans leverage in various situations. Humans achieve imitation by observing others with apparent ease. Yet it is computationally expensive to model imitation in artificial agents (e.g., social robots) so that they can acquire new skills by imitating an expert agent. Although learning through imitation has been extensively addressed in the robotics literature, most studies focus on two questions: what to imitate and how to imitate. In this conceptual paper, we focus on one of the overlooked questions of imitation through observation: who to imitate. We present possible answers to the who-to-imitate question by exploring motivational factors documented in psychological research and their possible implementation in robotics. To this end, we focus on two critical instances of the who-to-imitate question that guide agents to prioritize one demonstrator over another: outcome expectancies, viewed as the anticipated learning gains, and efficacy expectations, viewed as the anticipated costs of performing actions.
5
Feng H, Mahoor MH, Dino F. A Music-Therapy Robotic Platform for Children With Autism: A Pilot Study. Front Robot AI 2022; 9:855819. PMID: 35677082; PMCID: PMC9169087; DOI: 10.3389/frobt.2022.855819.
Abstract
Children with Autism Spectrum Disorder (ASD) experience deficits in verbal and nonverbal communication skills, including motor control, turn-taking, and emotion recognition. Innovative technology, such as socially assistive robots, has been shown to be a viable method for autism therapy. This paper presents a novel robot-based music-therapy platform for modeling and improving the social responses and behaviors of children with ASD. Our autonomous social interactive system consists of three modules. Module one provides an autonomous positioning system for the robot, NAO, to properly localize and play the instrument (a xylophone) using the robot's arms. Module two allows NAO to play customized songs composed by individuals. Module three provides a real-life music-therapy experience to the users. We adopted the short-time Fourier transform and Levenshtein distance to fulfill the design requirements: 1) "music detection" and 2) "smart scoring and feedback", which allow NAO to understand music and provide additional practice and oral feedback to the users as applicable. We designed and implemented six human-robot interaction (HRI) sessions, including four intervention sessions. Nine children with ASD and seven typically developing children participated in a total of fifty HRI experimental sessions. Using our platform, we collected and analyzed data on social behavioral changes and emotion recognition using electrodermal activity (EDA) signals. The results demonstrate that most of the participants were able to complete motor control tasks with 70% accuracy. Six of the nine ASD participants showed stable turn-taking behavior when playing music. The results of automated emotion classification using support vector machines illustrate that emotional arousal in the ASD group can be detected and well recognized via EDA bio-signals. In summary, the results of our data analyses, including emotion classification using EDA signals, indicate that the proposed robot-based music-therapy platform is an attractive and promising assistive tool for facilitating the improvement of fine motor control and turn-taking skills in children with ASD.
Affiliation(s)
- Mohammad H. Mahoor
- Computer Vision and Social Robotics Laboratory, Department of Electrical and Computer Engineering, University of Denver, Denver, CO, United States
- *Correspondence: Mohammad H. Mahoor
- Francesca Dino
- Computer Vision and Social Robotics Laboratory, Department of Electrical and Computer Engineering, University of Denver, Denver, CO, United States
6
Irfan B, Ortiz MG, Lyubova N, Belpaeme T. Multi-modal Open World User Identification. ACM Transactions on Human-Robot Interaction 2022. DOI: 10.1145/3477963.
Abstract
User identification is an essential step in creating a personalised long-term interaction with robots. This requires learning the users continuously and incrementally, possibly starting from a state without any known user. In this article, we describe a multi-modal incremental Bayesian network with online learning, which is the first method that can be applied in such scenarios. Face recognition is used as the primary biometric, and it is combined with ancillary information, such as gender, age, height, and time of interaction, to improve the recognition. The Multi-modal Long-term User Recognition Dataset is generated to simulate various human-robot interaction (HRI) scenarios and to evaluate our approach in comparison to face recognition, soft biometrics, and a state-of-the-art open world recognition method (Extreme Value Machine). The results show that the proposed methods significantly outperform the baselines, with an increase in the identification rate of up to 47.9% in open-set and closed-set scenarios, and a significant decrease in long-term recognition performance loss. The proposed models generalise well to new users, provide stability, improve over time, and decrease the bias of face recognition. The models were applied in HRI studies for user recognition, personalised rehabilitation, and customer-oriented service, which showed that they are suitable for long-term HRI in the real world.
Affiliation(s)
- Bahar Irfan
- Centre for Robotics and Neural Systems, University of Plymouth, Drake Circus, Plymouth, United Kingdom
- Michael Garcia Ortiz
- AI Lab, SoftBank Robotics Europe and City, University of London, London, United Kingdom
- Tony Belpaeme
- IDLab-imec, Ghent University and Centre for Robotics and Neural Systems, University of Plymouth, Plymouth, United Kingdom
7
Kang W, Pineda Hernández S, Mei J. Neural Mechanisms of Observational Learning: A Neural Working Model. Front Hum Neurosci 2021; 14:609312. PMID: 33967717; PMCID: PMC8100516; DOI: 10.3389/fnhum.2020.609312.
Abstract
Humans and some animal species are able to learn stimulus-response (S-R) associations by observing others' behavior. It saves energy and time and avoids the danger of trying the wrong actions. Observational learning (OL) depends on the capability of mapping the actions of others onto our own behaviors, processing outcomes, and combining this knowledge to serve our goals. Observational learning plays a central role in the learning of social skills, cultural knowledge, and tool use. Thus, it is one of the fundamental processes by which infants learn about and from adults (Byrne and Russon, 1998). In this paper, we review current methodological approaches employed in observational learning research. We highlight the important role of the prefrontal cortex and cognitive flexibility in supporting this learning process, develop a new neural working model of observational learning, illustrate how imitation relates to observational learning, and provide directions for future research.
Affiliation(s)
- Weixi Kang
- Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Department of Medicine, Imperial College London, London, United Kingdom
- Jie Mei
- Department of Anatomy, Université du Québec à Trois-Rivières, Québec City, QC, Canada
8
Ohata W, Tani J. Investigation of the Sense of Agency in Social Cognition, Based on Frameworks of Predictive Coding and Active Inference: A Simulation Study on Multimodal Imitative Interaction. Front Neurorobot 2020; 14:61. PMID: 33013346; PMCID: PMC7509423; DOI: 10.3389/fnbot.2020.00061.
Abstract
When agents interact socially with different intentions (or wills), conflicts are difficult to avoid. Although the means by which social agents can resolve such problems autonomously has not been determined, dynamic characteristics of agency may shed light on underlying mechanisms. Therefore, the current study focused on the sense of agency, a specific aspect of agency referring to congruence between the agent's intention in acting and the outcome, especially in social interaction contexts. Employing predictive coding and active inference as theoretical frameworks of perception and action generation, we hypothesize that regulation of complexity in the evidence lower bound of an agent's model should affect the strength of the agent's sense of agency and should have a significant impact on social interactions. To evaluate this hypothesis, we built a computational model of imitative interaction between a robot and a human via visuo-proprioceptive sensation with a variational Bayes recurrent neural network, and simulated the model in the form of pseudo-imitative interaction using recorded human body movement data, which serve as the counterpart in the interactions. A key feature of the model is that the complexity of each modality can be regulated differently by changing the values of a hyperparameter assigned to each local module of the model. We first searched for an optimal setting of hyperparameters that endow the model with appropriate coordination of multimodal sensation. These searches revealed that complexity of the vision module should be more tightly regulated than that of the proprioception module because of greater uncertainty in visual information flow. Using this optimally trained model as a default model, we investigated how changing the tightness of complexity regulation in the entire network after training affects the strength of the sense of agency during imitative interactions. The results showed that with looser regulation of complexity, an agent tends to act more egocentrically, without adapting to the other. In contrast, with tighter regulation, the agent tends to follow the other by adjusting its intention. We conclude that the tightness of complexity regulation significantly affects the strength of the sense of agency and the dynamics of interactions between agents in social settings.
Affiliation(s)
- Wataru Ohata
- Cognitive Neurorobotics Research Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
- Jun Tani
- Cognitive Neurorobotics Research Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
9
Hoang K, Pitti A, Goudou JF, Dufour JY, Gaussier P. Active vision: on the relevance of a bio-inspired approach for object detection. Bioinspiration & Biomimetics 2020; 15:025003. PMID: 31639780; DOI: 10.1088/1748-3190/ab504c.
Abstract
Starting from biological systems, we review the interest of active perception for object recognition in an autonomous system. Foveated vision and control of the eye saccade bring strong benefits related to the differentiation of a 'what' pathway, which recognizes local parts of the image, and a 'where' pathway, which moves the fovea to that part of the image. Experiments on a dataset illustrate the capability of our model to deal with complex visual scenes. The results highlight the interest of top-down contextual information to serialize the exploration and to perform a kind of hypothesis test. Moreover, learning to control the ocular saccade from the previous one can help reduce the exploration area and improve recognition performance. Yet our results show that the selection of the next saccade should take into account broader statistical information. This opens new avenues for the control of ocular saccades and the active exploration of complex visual scenes.
Affiliation(s)
- Kevin Hoang
- ETIS, UMR 8051, ENSEA, University of Cergy-Pontoise, France; Thales SIX GTS, Vision and Sensing Laboratory, Palaiseau, France
10
Huang K, Ma X, Song R, Rong X, Tian X, Li Y. A self-organizing developmental cognitive architecture with interactive reinforcement learning. Neurocomputing 2020. DOI: 10.1016/j.neucom.2019.07.109.
11
ILRA: Novelty Detection in Face-Based Intervener Re-Identification. Symmetry (Basel) 2019. DOI: 10.3390/sym11091154.
Abstract
Transparency laws make it easier for citizens to monitor the activities of political representatives. In this context, automatic or manual diarization of parliamentary sessions is required, the latter being time consuming. In the present work, this problem is addressed as a person re-identification problem. Re-identification is defined as the process of matching individuals across different camera views. This paper, in particular, deals with open world person re-identification scenarios, where the probe captured by one camera is not always present in the gallery collected by another, i.e., it must be determined whether the probe belongs to a novel identity or not. This procedure is mandatory before matching the identity. In most cases, novelty detection is tackled by applying a threshold based on a linear separation of the identities. We propose a threshold-less approach to the novelty detection problem, based on a one-class classifier, which therefore does not need any user-defined threshold. Unlike other approaches that combine audio-visual features, an Isometric LogRatio transformation of a posteriori (ILRA) probabilities is applied to local and deep descriptors extracted from the face, which exhibits symmetry and, unlike audio streams, can be exploited in the re-identification process. These features are used to train the one-class classifier to detect the novelty of the individual. The proposal is evaluated on real parliamentary session recordings that exhibit challenging variations in the pose and location of the interveners. The experimental evaluation explores different configurations, and our system achieves significant improvement on the given scenario, obtaining an average F-measure of 71.29% for online analyzed videos. In addition, ILRA performs better than face descriptors used in recent face-based closed world recognition approaches, achieving an average improvement of 1.6% with respect to a deep descriptor.
12
Xavier J, Guedjou H, Anzalone SM, Boucenna S, Guigon E, Chetouani M, Cohen D. Toward a motor signature in autism: Studies from human-machine interaction. Encephale 2019; 45:182-187. PMID: 30503684; DOI: 10.1016/j.encep.2018.08.002.
Abstract
BACKGROUND Autism spectrum disorder (ASD) is a heterogeneous group of neurodevelopmental disorders whose core symptoms are impairments in socio-communication and repetitive symptoms and stereotypies. Although not cardinal symptoms per se, motor impairments are fundamental aspects of ASD. These impairments are associated with postural and motor control disabilities that we investigated using computational modeling and developmental robotics through human-machine interaction paradigms. METHOD First, in a set of studies involving human-robot posture imitation, we explored the impact of three different groups of partners (including a group of children with ASD) on robot learning by imitation. Second, using an ecological task, i.e., real-time motor imitation with a tightrope walker (TW) avatar, we investigated interpersonal synchronization, motor coordination, and motor control during the task in children with ASD (n=29), TD children (n=39), and children with developmental coordination disorder (DCD, n=17). RESULTS The human-robot experiments showed that the motor signature, at both the group and individual levels, had a key influence on imitation learning, posture recognition, and identity recognition. The more dynamic motor imitation paradigm with a TW avatar showed that interpersonal synchronization, motor coordination, and motor control were more impaired in children with ASD than in both TD children and children with DCD. Taken together, these results confirm the motor peculiarities of children with ASD, even though the imitation tasks were adequately performed. DISCUSSION Studies from human-machine interaction support the idea of a behavioral signature in children with ASD. However, several issues need to be addressed. Is this behavioral signature motoric in essence? Is it possible to ascertain that these peculiarities occur during all motor tasks (e.g., posture, voluntary movement)? Could this motor signature be considered specific to autism, notably in comparison to DCD, which also entails poor motor coordination skills? We suggest that more work comparing the two conditions should be carried out, including analysis of kinematics and movement smoothness, with sufficient measurement quality to allow spectral analysis.
Affiliation(s)
- J Xavier
- Département de psychiatrie de l'enfant et de l'adolescent, hôpital Pitié-Salpêtrière, AP-HP, Paris, France; Sorbonne université, institut des systèmes intelligents et de robotique, CNRS UMR 7222, Paris, France
- H Guedjou
- Sorbonne université, institut des systèmes intelligents et de robotique, CNRS UMR 7222, Paris, France
- S M Anzalone
- Laboratoire CHArt-THIM, EA4004, université Paris 8, 93000 Saint-Denis, France
- S Boucenna
- Sorbonne université, institut des systèmes intelligents et de robotique, CNRS UMR 7222, Paris, France
- E Guigon
- Sorbonne université, institut des systèmes intelligents et de robotique, CNRS UMR 7222, Paris, France
- M Chetouani
- Sorbonne université, institut des systèmes intelligents et de robotique, CNRS UMR 7222, Paris, France
- D Cohen
- Département de psychiatrie de l'enfant et de l'adolescent, hôpital Pitié-Salpêtrière, AP-HP, Paris, France; Sorbonne université, institut des systèmes intelligents et de robotique, CNRS UMR 7222, Paris, France
13
It Does Not Matter Who You Are: Fairness in Pre-schoolers Interacting with Human and Robotic Partners. Int J Soc Robot 2019. DOI: 10.1007/s12369-019-00528-9.
14
Quantifying patterns of joint attention during human-robot interactions: An application for autism spectrum disorder assessment. Pattern Recognit Lett 2019. DOI: 10.1016/j.patrec.2018.03.007.
15
Cohen L, Billard A. Social babbling: The emergence of symbolic gestures and words. Neural Netw 2018; 106:194-204. PMID: 30081346; DOI: 10.1016/j.neunet.2018.06.016.
Abstract
Language acquisition theories classically distinguish passive language understanding from active language production. However, recent findings show that brain areas such as Broca's region are shared between language understanding and production. Furthermore, these areas are also implicated in understanding and producing goal-oriented actions. These observations call into question the passive view of language development. In this work, we propose a cognitive developmental model of symbol acquisition consistent with an active view of language learning. For that purpose, we introduce the concept of social babbling. In this view, symbols are learned in the same way as goal-oriented actions, in the context of specific caregiver-infant interactions. We show that this model allows a virtual agent to learn both symbolic words and gestures to refer to objects while interacting with a caregiver. We validate our model by reproducing results from studies on the influence of parental responsiveness on infants' language acquisition.
Affiliation(s)
- Laura Cohen
- Learning Algorithms and Systems Laboratory, School of Engineering, EPFL, Lausanne, Switzerland
- Aude Billard
- Learning Algorithms and Systems Laboratory, School of Engineering, EPFL, Lausanne, Switzerland
16
Xavier J, Gauthier S, Cohen D, Zahoui M, Chetouani M, Villa F, Berthoz A, Anzalone S. Interpersonal Synchronization, Motor Coordination, and Control Are Impaired During a Dynamic Imitation Task in Children With Autism Spectrum Disorder. Front Psychol 2018; 9:1467. PMID: 30233439; PMCID: PMC6129607; DOI: 10.3389/fpsyg.2018.01467.
Abstract
Background: Impairments in imitation abilities have been commonly described in children with autism spectrum disorder (ASD). How motricity in interpersonal coordination impacts imitation during long-lasting, semi-ecological conditions has not been carefully investigated. Methods: Eighty-five children and adolescents (39 controls with typical development, TD; 29 patients with ASD; 17 patients with developmental coordination disorder, DCD), aged 6 to 20 years, participated in a behavioral paradigm in which participants, standing and moving, interacted with a virtual tightrope walker that was also standing and moving. During the protocol, we automatically and continuously measured body postures and movements from RGB sensor recordings to assess participants' behavioral imitation. Results: We show that (1) interpersonal synchronization (as evidenced by the synchrony between the participant's and the tightrope walker's bars) and (2) motor coordination (as evidenced by the synchrony between the participant's bar and their own head axis) increased with age and were more impaired in patients with ASD. Also, motor control, as evidenced by the standard deviations of the movement angles of the participants' bar and head, was significantly impaired in ASD compared with TD or DCD. Conclusion: Interpersonal synchronization and motor coordination during ecological interaction both show subtle impairment in children with ASD compared with children with TD or DCD. These results raise the question of how motricity matures in terms of motor control and proprioception in children with ASD.
Collapse
Affiliation(s)
- Jean Xavier
- Département de Psychiatrie de l'Enfant et de l'Adolescent, AP-HP, Hôpital Pitié-Salpêtrière, Paris, France; Sorbonne Université, Institut des Systèmes Intelligents et de Robotique, CNRS UMR 7222, Paris, France
| | - Soizic Gauthier
- Département de Psychiatrie de l'Enfant et de l'Adolescent, AP-HP, Hôpital Pitié-Salpêtrière, Paris, France; CRPMS, EA 3522, Université Paris Diderot, Sorbonne Paris Cité, Paris, France; Equipe Berthoz, Collège de France, Paris, France
| | - David Cohen
- Département de Psychiatrie de l'Enfant et de l'Adolescent, AP-HP, Hôpital Pitié-Salpêtrière, Paris, France; Sorbonne Université, Institut des Systèmes Intelligents et de Robotique, CNRS UMR 7222, Paris, France
| | - Mohamed Chetouani
- Sorbonne Université, Institut des Systèmes Intelligents et de Robotique, CNRS UMR 7222, Paris, France
| | - François Villa
- CRPMS, EA 3522, Université Paris Diderot, Sorbonne Paris Cité, Paris, France
| |
Collapse
|
17
|
Saadatzi MN, Pennington RC, Welch KC, Graham JH. Small-Group Technology-Assisted Instruction: Virtual Teacher and Robot Peer for Individuals with Autism Spectrum Disorder. J Autism Dev Disord 2018; 48:3816-3830. [DOI: 10.1007/s10803-018-3654-2] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
|
18
|
Marchetti A, Manzi F, Itakura S, Massaro D. Theory of Mind and Humanoid Robots From a Lifespan Perspective. Zeitschrift für Psychologie / Journal of Psychology 2018. [DOI: 10.1027/2151-2604/a000326] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
This review focuses on some relevant issues concerning the relationship between theory of mind (ToM) and humanoid robots. Humanoid robots are employed in different everyday-life contexts, so it seems relevant to question whether the relationships between human beings and humanoids can be characterized by a mode of interaction typical of the relationships between human beings, that is, the attribution of mental states. Because ToM development continuously undergoes changes from early childhood to late adulthood, we adopted a lifespan perspective. We analyzed contributions from the literature by organizing them around the partition between “mental states and actions” and “human-like features.” Finally, we considered how studying human–robot interaction, within a ToM context, can contribute to our understanding of the intersubjective nature of this interaction.
Collapse
Affiliation(s)
- Antonella Marchetti
- Research Unit on Theory of Mind, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
| | - Federico Manzi
- Research Unit on Theory of Mind, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
| | - Shoji Itakura
- Department of Psychology, Graduate School of Letters, Kyoto University, Japan
| | - Davide Massaro
- Research Unit on Theory of Mind, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
| |
Collapse
|
19
|
|
20
|
Alderisio F, Fiore G, Salesse RN, Bardy BG, Bernardo MD. Interaction patterns and individual dynamics shape the way we move in synchrony. Sci Rep 2017; 7:6846. [PMID: 28754908 PMCID: PMC5533803 DOI: 10.1038/s41598-017-06559-4] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2017] [Accepted: 06/13/2017] [Indexed: 11/09/2022] Open
Abstract
An important open problem in human behaviour is to understand how coordination emerges in human ensembles. This problem has seldom been studied quantitatively in the existing literature, in contrast to situations involving dyadic interaction. Here we study motor coordination (or synchronisation) in a group of individuals where participants are asked to visually coordinate an oscillatory hand motion. We separately tested two groups of seven participants. We observed that the coordination level of the ensemble depends on group homogeneity, as well as on the pattern of visual couplings (who looked at whom). Despite the complexity of social interactions, we show that networks of coupled heterogeneous oscillators with different structures capture the group dynamics well. Our findings are relevant to any activity requiring the coordination of several people, as in music, sport or at work, and can be extended to account for other perceptual forms of interaction such as sound or touch.
Collapse
Affiliation(s)
- Francesco Alderisio
- Department of Engineering Mathematics, Merchant Venturers Building, University of Bristol, Woodland Road, Clifton, Bristol, BS8 1UB, United Kingdom
| | - Gianfranco Fiore
- Department of Engineering Mathematics, Merchant Venturers Building, University of Bristol, Woodland Road, Clifton, Bristol, BS8 1UB, United Kingdom
| | - Robin N Salesse
- EuroMov, Montpellier University, 700 Avenue du Pic Saint-Loup, 34090, Montpellier, France
| | - Benoît G Bardy
- EuroMov, Montpellier University, 700 Avenue du Pic Saint-Loup, 34090, Montpellier, France; Institut Universitaire de France, 1 rue Descartes, 75231, Paris Cedex 05, France
| | - Mario di Bernardo
- Department of Engineering Mathematics, Merchant Venturers Building, University of Bristol, Woodland Road, Clifton, Bristol, BS8 1UB, United Kingdom; Department of Electrical Engineering and Information Technology, University of Naples Federico II, Via Claudio 21, 80125, Naples, Italy
| |
Collapse
|
21
|
Xavier J, Magnat J, Sherman A, Gauthier S, Cohen D, Chaby L. A developmental and clinical perspective of rhythmic interpersonal coordination: From mimicry toward the interconnection of minds. J Physiol Paris 2017. [PMID: 28625683 DOI: 10.1016/j.jphysparis.2017.06.001] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Imitation plays a critical role in the development of intersubjectivity and serves as a prerequisite for understanding the emotions and intentions of others. In our review, we consider spontaneous motor imitation between children and their peers as a developmental process involving repetition and perspective-taking as well as flexibility and reciprocity. During childhood, this playful dynamic challenges developing visuospatial abilities and requires temporal coordination between partners. As such, we address synchrony as a form of communication and a social signal per se that leads, from an experience of similarity, to the interconnection of minds. In this way, we argue that, from a developmental perspective, rhythmic interpersonal coordination through childhood imitative interactions serves as a precursor to higher-level social and cognitive abilities, such as theory of mind (ToM) and empathy. Finally, to clinically illustrate our idea, we focus on developmental coordination disorder (DCD), a condition characterized not only by learning difficulties, but also by childhood deficits in motor imitation. We address the challenges faced by these children on an emotional and socio-interactional level through the perspective of their impairments in intra- and interpersonal synchrony.
Collapse
Affiliation(s)
- Jean Xavier
- Département de Psychiatrie de l'Enfant et l'Adolescent, APHP, Groupe Hospitalier Pitié-Salpêtrière, Paris, France; Institut des Systèmes Intelligents et Robotique, ISIR, CNRS UMR 7222, Paris, France.
| | - Julien Magnat
- Pôle de psychiatrie de l'enfant et de l'adolescent, centre hospitalier Montperrin, 109, avenue du Petit-Barthélémy, 13617 Aix-en-Provence, France
| | - Alain Sherman
- Department of Psychology, Northwestern University, Evanston, IL, USA
| | - Soizic Gauthier
- CRPMS, EA 3522, Université Paris Diderot, et Equipe Berthoz, Collège de France, Paris, France
| | - David Cohen
- Département de Psychiatrie de l'Enfant et l'Adolescent, APHP, Groupe Hospitalier Pitié-Salpêtrière, Paris, France; Institut des Systèmes Intelligents et Robotique, ISIR, CNRS UMR 7222, Paris, France
| | - Laurence Chaby
- Institut des Systèmes Intelligents et Robotique, ISIR, CNRS UMR 7222, Paris, France; Université Paris Descartes, Sorbonne Paris Cité, Institut de Psychologie, Boulogne-Billancourt, France
| |
Collapse
|
22
|
Alderisio F, Lombardi M, Fiore G, di Bernardo M. A Novel Computer-Based Set-Up to Study Movement Coordination in Human Ensembles. Front Psychol 2017. [PMID: 28649217 PMCID: PMC5465282 DOI: 10.3389/fpsyg.2017.00967] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Existing experimental works on movement coordination in human ensembles mostly investigate situations where each subject is connected to all the others through direct visual and auditory coupling, so that unavoidable social interaction affects their coordination level. Here, we present a novel computer-based set-up to study movement coordination in human groups so as to minimize the influence of social interaction among participants and implement different visual pairings between them. In so doing, players can only take into consideration the motion of a designated subset of the others. This allows the evaluation of the exclusive effects on coordination of the structure of interconnections among the players in the group and their own dynamics. In addition, our set-up enables the deployment of virtual computer players to investigate dyadic interaction between a human and a virtual agent, as well as group synchronization in mixed teams of human and virtual agents. We show how this novel set-up can be employed to study coordination both in dyads and in groups over different structures of interconnections, in the presence as well as in the absence of virtual agents acting as followers or leaders. Finally, in order to illustrate the capabilities of the architecture, we describe some preliminary results. The platform is available to any researcher who wishes to unfold the mechanisms underlying group synchronization in human ensembles and shed light on its socio-psychological aspects.
Collapse
Affiliation(s)
- Francesco Alderisio
- Department of Engineering Mathematics, University of Bristol, Bristol, United Kingdom
| | - Maria Lombardi
- Department of Electrical Engineering and Information Technology, University of Naples Federico II, Naples, Italy
| | - Gianfranco Fiore
- Department of Engineering Mathematics, University of Bristol, Bristol, United Kingdom
| | - Mario di Bernardo
- Department of Engineering Mathematics, University of Bristol, Bristol, United Kingdom; Department of Electrical Engineering and Information Technology, University of Naples Federico II, Naples, Italy
| |
Collapse
|
23
|
Cohen D, Grossard C, Grynszpan O, Anzalone S, Boucenna S, Xavier J, Chetouani M, Chaby L. Autisme, jeux sérieux et robotique : réalité tangible ou abus de langage ? [Autism, serious games and robotics: tangible reality or misuse of language?] Annales Médico-Psychologiques 2017. [DOI: 10.1016/j.amp.2017.03.013] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
24
|
Anzalone SM, Varni G, Ivaldi S, Chetouani M. Automated Prediction of Extraversion During Human–Humanoid Interaction. Int J Soc Robot 2017. [DOI: 10.1007/s12369-017-0399-6] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
25
|
Alderisio F, Lombardi M, Fiore G, di Bernardo M. A Novel Computer-Based Set-Up to Study Movement Coordination in Human Ensembles. Front Psychol 2017; 8:967. [PMID: 28649217 DOI: 10.3389/fpsyg.2017.00967] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2016] [Accepted: 05/26/2017] [Indexed: 05/19/2023] Open
|