1
Chouinard B, Pesquita A, Enns JT, Chapman CS. Processing of visual social-communication cues during a social-perception of action task in autistic and non-autistic observers. Neuropsychologia 2024; 198:108880. PMID: 38555063; DOI: 10.1016/j.neuropsychologia.2024.108880.
Abstract
Social perception and communication differ between those with and without autism, even when verbal fluency and intellectual ability are equated. Previous work found that observers responded more quickly to an actor's points if the actor had chosen by themselves where to point instead of being directed where to point. Notably, this 'choice-advantage' effect decreased across non-autistic participants as the number of autistic-like traits and tendencies increased (Pesquita et al., 2016). Here, we build on that work using the same task to study individuals over a broader range of the spectrum, from autistic to non-autistic, measuring both response initiation and mouse movement times, and considering the response to each actor separately. Autistic and non-autistic observers viewed videos of three different actors pointing to one of two locations, without knowing that the actors were sometimes freely choosing to point to one target and other times being directed where to point. All observers exhibited a choice-advantage overall, meaning they responded more rapidly when actors were freely choosing versus when they were directed, indicating a sensitivity to the actors' postural cues and movements. Our fine-grained analyses found a more robust choice-advantage to some actors than others, with autistic observers showing a choice-advantage only in response to one of the actors, suggesting that both actor and observer characteristics influence the overall effect. We briefly explore existing actor characteristics that may have contributed to this effect, finding that both duration of exposure to pre-movement cues and kinematic cues of the actors likely influence the choice advantage to different degrees across the groups. Altogether, the evidence suggested that both autistic and non-autistic individuals could detect the choice-advantage signal, but that for autistic observers the choice-advantage was actor specific. Notably, we found that the influence of the signal, when present, was detected early for all actors by the non-autistic observers, but detected later and only for one actor by the autistic observers. Altogether, we have more accurately characterized social-perception ability in autistic individuals as intact, but highlighted that detection of the signal is likely delayed/distributed compared to non-autistic observers and that it is important to investigate actor characteristics that may influence detection and use of their social-perception signals.
Affiliation(s)
- J T Enns
- University of British Columbia, Canada
2
Silva F, Ribeiro S, Silva S, Garrido MI, Soares SC. Exploring the use of visual predictions in social scenarios while under anticipatory threat. Sci Rep 2024; 14:10913. PMID: 38740937; DOI: 10.1038/s41598-024-61682-3.
Abstract
One of the less recognized effects of anxiety lies in perception alterations caused by how one weighs both sensory evidence and contextual cues. Here, we investigated how anxiety affects our ability to use social cues to anticipate the others' actions. We adapted a paradigm to assess expectations in social scenarios, whereby participants were asked to identify the presence of agents therein, while supported by contextual cues from another agent. Participants (N = 66) underwent this task under safe and threat-of-shock conditions. We extracted both criterion and sensitivity measures as well as gaze data. Our analysis showed that whilst the type of action had the expected effect, threat-of-shock had no effect over criterion and sensitivity. Although showing similar dwell times, gaze exploration of the contextual cue was associated with shorter fixation durations whilst participants were under threat. Our findings suggest that anxiety does not appear to influence the use of expectations in social scenarios.
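The criterion and sensitivity measures extracted here are the standard signal-detection quantities c and d'. As a rough illustration only (not the authors' analysis code), a minimal stdlib Python sketch with invented trial counts; the log-linear correction used below is one common choice for avoiding infinite z-scores at extreme rates:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute signal-detection sensitivity (d') and criterion (c).

    Adding 0.5 to each cell (log-linear correction) keeps z-scores
    finite when a hit or false-alarm rate is exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical counts for one participant in one condition
d, c = sdt_measures(hits=40, misses=10, false_alarms=12, correct_rejections=38)
print(f"d' = {d:.2f}, c = {c:.2f}")
```

A positive c would indicate a conservative bias (reporting "agent absent" more readily), which is how threat effects on criterion would surface in such an analysis.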
Affiliation(s)
- Fábio Silva
- William James Center for Research, Department of Education and Psychology, University of Aveiro, 3810-193 Aveiro, Portugal
- Sérgio Ribeiro
- Department of Education and Psychology, University of Aveiro, Aveiro, Portugal
- Samuel Silva
- IEETA, DETI, University of Aveiro, Aveiro, Portugal
- Marta I Garrido
- Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, VIC, Australia
- Graeme Clark Institute for Biomedical Engineering, The University of Melbourne, Melbourne, Australia
- Sandra C Soares
- William James Center for Research, Department of Education and Psychology, University of Aveiro, 3810-193 Aveiro, Portugal
3
Bianco V, Finisguerra A, Urgesi C. Contextual Priors Shape Action Understanding before and beyond the Unfolding of Movement Kinematics. Brain Sci 2024; 14:164. PMID: 38391738; PMCID: PMC10887018; DOI: 10.3390/brainsci14020164.
Abstract
Previous studies have shown that contextual information may aid in guessing the intention underlying others' actions in conditions of perceptual ambiguity. Here, we aimed to evaluate the temporal deployment of contextual influence on action prediction with increasing availability of kinematic information during the observation of ongoing actions. We used action videos depicting an actor grasping an object placed on a container to perform individual or interpersonal actions featuring different kinematic profiles. Crucially, the container could be of different colors. First, in a familiarization phase, the probability of co-occurrence between each action kinematics and color cues was implicitly manipulated to 80% and 20%, thus generating contextual priors. Then, in a testing phase, participants were asked to predict action outcome when the same action videos were occluded at five different timeframes of the entire movement, ranging from when the actor was still to when the grasp of the object was fully accomplished. In this phase, all possible action-contextual cues' associations were equally presented. The results showed that for all occlusion intervals, action prediction was more facilitated when action kinematics deployed in high- than low-probability contextual scenarios. Importantly, contextual priors shaped action prediction even in the latest occlusion intervals, where the kinematic cues clearly unveiled an action outcome that was previously associated with low-probability scenarios. These residual contextual effects were stronger in individuals with higher subclinical autistic traits. Our findings highlight the relative contribution of kinematic and contextual information to action understanding and provide evidence in favor of their continuous integration during action observation.
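The residual pull of an 80/20 contextual prior against late, unambiguous kinematic evidence can be illustrated with a two-outcome Bayes' rule sketch. All probabilities below are invented for illustration; the paper reports behavioral predictions, not this model:

```python
def posterior_outcome(prior_a, likelihood_a, likelihood_b):
    """P(outcome A | kinematics) by Bayes' rule over two outcomes."""
    num = prior_a * likelihood_a
    return num / (num + (1 - prior_a) * likelihood_b)

# Contextual prior from the familiarization phase: the container color
# predicted outcome A on 80% of trials (values here are illustrative).
prior = 0.8

# Early occlusion: kinematics barely discriminate the two outcomes,
# so the contextual prior dominates the prediction.
early = posterior_outcome(prior, likelihood_a=0.55, likelihood_b=0.45)

# Late occlusion: kinematics clearly favor outcome B, yet the prior
# still pulls the posterior above the kinematics-only value of 0.10,
# mirroring the residual contextual effect reported here.
late = posterior_outcome(prior, likelihood_a=0.10, likelihood_b=0.90)

print(f"early occlusion P(A) = {early:.2f}")
print(f"late occlusion  P(A) = {late:.2f}")
```

A stronger residual effect, as found in observers with higher autistic traits, would correspond in this toy model to weighting the prior more heavily relative to the kinematic likelihoods.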
Affiliation(s)
- Valentina Bianco
- Department of Brain and Behavioural Sciences, University of Pavia, 27100 Pavia, Italy
- Laboratory of Cognitive Neuroscience, Department of Languages and Literatures, Communication, Education and Society, University of Udine, 33100 Udine, Italy
- Cosimo Urgesi
- Laboratory of Cognitive Neuroscience, Department of Languages and Literatures, Communication, Education and Society, University of Udine, 33100 Udine, Italy
- Scientific Institute, IRCCS E. Medea, Pasian di Prato, 33037 Udine, Italy
4
Hauge TC, Ferris DP, Seidler RD. Individual differences in cooperative and competitive play strategies. PLoS One 2023; 18:e0293583. PMID: 37943863; PMCID: PMC10635547; DOI: 10.1371/journal.pone.0293583.
Abstract
INTRODUCTION Cooperation and competition are common in social interactions. It is not clear how individual differences in personality may predict performance strategies under these two contexts. We evaluated whether instructions to play cooperatively and competitively would differentially affect dyads playing a Pong video game. We hypothesized that instructions to play cooperatively would result in lower overall points scored and differences in paddle control kinematics relative to when participants were instructed to play competitively. We also predicted that higher scores in prosociality and Sportspersonship would be related to better performance during cooperative than competitive conditions. METHODS Pairs of participants played a Pong video game under cooperative and competitive instructions. During competitive trials, participants were instructed to score more points against one another to win the game. During the cooperative trials, participants were instructed to work together to score as few points against one another as possible. After game play, each participant completed surveys so we could measure their trait prosociality and Sportspersonship. RESULTS Condition was a significant predictor of where along the paddle participants hit the ball, which controlled ball exit angles. Specifically, during cooperation participants concentrated ball contacts on the paddle towards the center to produce more consistent rebound angles. We found a significant correlation of Sex and the average points scored by participants during cooperative games, competitive games, and across all trials. Sex was also significantly correlated with paddle kinematics during cooperative games. The overall scores on the prosociality and Sportspersonship surveys were not significantly correlated with the performance outcomes in cooperative and competitive games. The dimension of prosociality assessing empathic concern was significantly correlated with performance outcomes during cooperative video game play. DISCUSSION No Sportspersonship survey score was able to predict cooperative or competitive game performance, suggesting that Sportspersonship personality assessments are not reliable predictors of cooperative or competitive behaviors translated to a virtual game setting. Survey items and dimensions probing broader empathic concern may be more effective predictors of cooperative and competitive performance during interactive video game play. Further testing is encouraged to assess the efficacy of prosocial personality traits as predictors of cooperative and competitive video game behavior.
Affiliation(s)
- Theresa C. Hauge
- Department of Applied Physiology & Kinesiology, College of Health and Human Performance, University of Florida, Gainesville, FL, United States of America
- Daniel P. Ferris
- J. Crayton Pruitt Family Department of Biomedical Engineering, Herbert Wertheim College of Engineering, University of Florida, Gainesville, FL, United States of America
- Rachael D. Seidler
- Department of Applied Physiology & Kinesiology, College of Health and Human Performance, University of Florida, Gainesville, FL, United States of America
5
Raghavan R, Raviv L, Peeters D. What's your point? Insights from virtual reality on the relation between intention and action in the production of pointing gestures. Cognition 2023; 240:105581. PMID: 37573692; DOI: 10.1016/j.cognition.2023.105581.
Abstract
Human communication involves the process of translating intentions into communicative actions. But how exactly do our intentions surface in the visible communicative behavior we display? Here we focus on pointing gestures, a fundamental building block of everyday communication, and investigate whether and how different types of underlying intent modulate the kinematics of the pointing hand and the brain activity preceding the gestural movement. In a dynamic virtual reality environment, participants pointed at a referent to either share attention with their addressee, inform their addressee, or get their addressee to perform an action. Behaviorally, it was observed that these different underlying intentions modulated how long participants kept their arm and finger still, both prior to starting the movement and when keeping their pointing hand in apex position. In early planning stages, a neurophysiological distinction was observed between a gesture that is used to share attitudes and knowledge with another person versus a gesture that mainly uses that person as a means to perform an action. Together, these findings suggest that our intentions influence our actions from the earliest neurophysiological planning stages to the kinematic endpoint of the movement itself.
Affiliation(s)
- Renuka Raghavan
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Radboud University, Donders Institute for Brain, Cognition, and Behavior, Nijmegen, The Netherlands
- Limor Raviv
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Centre for Social, Cognitive and Affective Neuroscience (cSCAN), University of Glasgow, United Kingdom
- David Peeters
- Tilburg University, Department of Communication and Cognition, TiCC, Tilburg, The Netherlands.
6
Bosco A, Filippini M, Borra D, Kirchner EA, Fattori P. Depth and direction effects in the prediction of static and shifted reaching goals from kinematics. Sci Rep 2023; 13:13115. PMID: 37573413; PMCID: PMC10423273; DOI: 10.1038/s41598-023-40127-3.
Abstract
The kinematic parameters of reach-to-grasp movements are modulated by action intentions. However, when an unexpected change in visual target goal during reaching execution occurs, it is still unknown whether the action intention changes with target goal modification and which is the temporal structure of the target goal prediction. We recorded the kinematics of the pointing finger and wrist during the execution of reaching movements in 23 naïve volunteers where the targets could be located at different directions and depths with respect to the body. During the movement execution, the targets could remain static for the entire duration of movement or shifted, with different timings, to another position. We performed temporal decoding of the final goals and of the intermediate trajectory from the past kinematics exploiting a recurrent neural network. We observed a progressive increase of the classification performance from the onset to the end of movement in both horizontal and sagittal dimensions, as well as in decoding shifted targets. The classification accuracy in decoding horizontal targets was higher than the classification accuracy of sagittal targets. These results are useful for establishing how human and artificial agents could take advantage from the observed kinematics to optimize their cooperation in three-dimensional space.
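The study decodes target goals from real finger and wrist kinematics with a recurrent neural network. As a toy stand-in (a nearest-centroid decoder on simulated 2D reach trajectories, with all data and parameters invented), the sketch below shows the same qualitative pattern: classification accuracy rises as longer prefixes of the movement become available.

```python
import random

random.seed(7)

TARGETS = {"left": (-1.0, 1.0), "right": (1.0, 1.0)}
N_STEPS, NOISE = 20, 0.4

def simulate(target, rng=random):
    """Noisy straight-line reach toward `target`, as (x, y) samples."""
    tx, ty = TARGETS[target]
    return [(t / (N_STEPS - 1) * tx + rng.gauss(0, NOISE),
             t / (N_STEPS - 1) * ty + rng.gauss(0, NOISE))
            for t in range(N_STEPS)]

def prefix_dist(a, b, k):
    """Squared distance between two trajectories over their first k samples."""
    return sum((ax - bx) ** 2 + (ay - by) ** 2
               for (ax, ay), (bx, by) in zip(a[:k], b[:k]))

def class_means(trials):
    """Per-timestep mean trajectory for each target label."""
    return {label: [(sum(p[t][0] for p in trs) / len(trs),
                     sum(p[t][1] for p in trs) / len(trs))
                    for t in range(N_STEPS)]
            for label, trs in trials.items()}

train = {lab: [simulate(lab) for _ in range(40)] for lab in TARGETS}
test = {lab: [simulate(lab) for _ in range(40)] for lab in TARGETS}
means = class_means(train)

def accuracy(k):
    """Decode each test trial from its first k samples; return accuracy."""
    correct = total = 0
    for lab, trs in test.items():
        for tr in trs:
            pred = min(means, key=lambda m: prefix_dist(tr, means[m], k))
            correct += (pred == lab)
            total += 1
    return correct / total

for frac in (0.25, 0.5, 1.0):
    k = max(1, int(N_STEPS * frac))
    print(f"decoded with {frac:.0%} of the movement: accuracy = {accuracy(k):.2f}")
```

Early prefixes are dominated by noise because the trajectories have barely diverged, which is why decoding improves monotonically as the movement unfolds toward the goal.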
Affiliation(s)
- A Bosco
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy.
- Alma Mater Research Institute for Human-Centered Artificial Intelligence, University of Bologna, Bologna, Italy.
- M Filippini
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Alma Mater Research Institute for Human-Centered Artificial Intelligence, University of Bologna, Bologna, Italy
- D Borra
- Department of Electrical, Electronic and Information Engineering, University of Bologna, Bologna, Italy
- E A Kirchner
- Department of Electrical Engineering and Information Technology, University of Duisburg-Essen, Duisburg, Germany
- Robotics Innovation Center, German Research Center for Artificial Intelligence GmbH, Kaiserslautern, Germany
- P Fattori
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Alma Mater Research Institute for Human-Centered Artificial Intelligence, University of Bologna, Bologna, Italy
7
Torricelli F, Tomassini A, Pezzulo G, Pozzo T, Fadiga L, D'Ausilio A. Motor invariants in action execution and perception. Phys Life Rev 2023; 44:13-47. PMID: 36462345; DOI: 10.1016/j.plrev.2022.11.003.
Abstract
The nervous system is sensitive to statistical regularities of the external world and forms internal models of these regularities to predict environmental dynamics. Given the inherently social nature of human behavior, being capable of building reliable predictive models of others' actions may be essential for successful interaction. While social prediction might seem to be a daunting task, the study of human motor control has accumulated ample evidence that our movements follow a series of kinematic invariants, which can be used by observers to reduce their uncertainty during social exchanges. Here, we provide an overview of the most salient regularities that shape biological motion, examine the role of these invariants in recognizing others' actions, and speculate that anchoring socially-relevant perceptual decisions to such kinematic invariants provides a key computational advantage for inferring conspecifics' goals and intentions.
Affiliation(s)
- Francesco Torricelli
- Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
- Alice Tomassini
- Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Via San Martino della Battaglia 44, 00185 Rome, Italy
- Thierry Pozzo
- Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; INSERM UMR1093-CAPS, UFR des Sciences du Sport, Université Bourgogne Franche-Comté, F-21000, Dijon, France
- Luciano Fadiga
- Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
- Alessandro D'Ausilio
- Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy.
8
Dockendorff M, Schmitz L, Vesper C, Knoblich G. Understanding others' distal goals from proximal communicative actions. PLoS One 2023; 18:e0280265. PMID: 36662700; PMCID: PMC9858010; DOI: 10.1371/journal.pone.0280265.
Abstract
Many social interactions require individuals to coordinate their actions and to inform each other about their goals. Often these goals concern an immediate (i.e., proximal) action, as when people give each other a brief handshake, but they sometimes also refer to a future (i.e. distal) action, as when football players perform a passing sequence. The present study investigates whether observers can derive information about such distal goals by relying on kinematic modulations of an actor's instrumental actions. In Experiment 1 participants were presented with animations of a box being moved at different velocities towards an apparent endpoint. The distal goal, however, was for the object to be moved past this endpoint, to one of two occluded target locations. Participants then selected the location which they considered the likely distal goal of the action. As predicted, participants were able to detect differences in movement velocity and, based on these differences, systematically mapped the movements to the two distal goal locations. Adding a distal goal led to more variation in the way participants mapped the observed movements onto different target locations. The results of Experiments 2 and 3 indicated that this cannot be explained by difficulties in perceptual discrimination. Rather, the increased variability likely reflects differences in interpreting the underlying connection between proximal communicative actions and distal goals. The present findings extend previous research on sensorimotor communication by demonstrating that communicative action modulations are not restricted to predicting proximal goals but can also be used to infer more distal goals.
Affiliation(s)
- Martin Dockendorff
- Department of Cognitive Science, Central European University, Vienna, Austria
- Laura Schmitz
- Department of Neurology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Cordula Vesper
- Department of Linguistics, Cognitive Science, and Semiotics, Aarhus University, Aarhus, Denmark
- Interacting Minds Centre, Aarhus University, Aarhus, Denmark
- Günther Knoblich
- Department of Cognitive Science, Central European University, Vienna, Austria
9
Gowen E, Poliakoff E, Shepherd H, Stadler W. Measuring the prediction of observed actions using an occlusion paradigm: Comparing autistic and non-autistic adults. Autism Res 2022; 15:1636-1648. PMID: 35385218; PMCID: PMC9543210; DOI: 10.1002/aur.2716.
Abstract
Action prediction involves observing and predicting the actions of others and plays an important role in social cognition and interacting with others. It is thought to use simulation, whereby the observers use their own motor system to predict the observed actions. As individuals diagnosed with autism are characterized by difficulties understanding the actions of others and motor coordination issues, it is possible that action prediction ability is altered in this population. This study compared action prediction ability between 20 autistic and 22 non-autistic adults using an occlusion paradigm. Participants watched different videos of a female actor carrying out everyday actions. During each video, the action was transiently occluded by a gray rectangle for 1000 ms. During occlusions, the video was allowed to continue as normal or was moved forward (i.e., appearing to continue too far ahead) or moved backwards (i.e., appearing to continue too far behind). Participants were asked to indicate after each occlusion whether the action continued with the correct timing or was too far ahead/behind. Autistic individuals were less accurate than non-autistic individuals, particularly when the video was too far behind. A trend analysis suggested that autistic participants were more likely to judge too far behind occlusions as being in time. These preliminary results suggest that prediction ability may be altered in autistic adults, potentially due to slower simulation or a delayed onset of these processes. LAY SUMMARY: When we observe other people performing everyday actions, we use their movements to help us understand and predict what they are doing. In this study, we found that autistic compared to non-autistic adults were slightly less accurate at predicting other people's actions. These findings help to unpick the different ways that social understanding is affected in autism.
Affiliation(s)
- Emma Gowen
- Division of Neuroscience and Experimental Psychology, School of Biology, Faculty of Biology, Medicine and Health Sciences, The University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
- Ellen Poliakoff
- Division of Neuroscience and Experimental Psychology, School of Biology, Faculty of Biology, Medicine and Health Sciences, The University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
- Hayley Shepherd
- Division of Neuroscience and Experimental Psychology, School of Biology, Faculty of Biology, Medicine and Health Sciences, The University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
- Waltraud Stadler
- Technical University of Munich, Department of Sport and Health Sciences, Munich, Germany
10
Hemeren P, Veto P, Thill S, Li C, Sun J. Kinematic-Based Classification of Social Gestures and Grasping by Humans and Machine Learning Techniques. Front Robot AI 2021; 8:699505. PMID: 34746242; PMCID: PMC8565478; DOI: 10.3389/frobt.2021.699505.
Abstract
The affective motion of humans conveys messages that other humans perceive and understand without conventional linguistic processing. This ability to classify human movement into meaningful gestures or segments plays also a critical role in creating social interaction between humans and robots. In the research presented here, grasping and social gesture recognition by humans and four machine learning techniques (k-Nearest Neighbor, Locality-Sensitive Hashing Forest, Random Forest and Support Vector Machine) is assessed by using human classification data as a reference for evaluating the classification performance of machine learning techniques for thirty hand/arm gestures. The gestures are rated according to the extent of grasping motion on one task and the extent to which the same gestures are perceived as social according to another task. The results indicate that humans clearly rate differently according to the two different tasks. The machine learning techniques provide a similar classification of the actions according to grasping kinematics and social quality. Furthermore, there is a strong association between gesture kinematics and judgments of grasping and the social quality of the hand/arm gestures. Our results support previous research on intention-from-movement understanding that demonstrates the reliance on kinematic information for perceiving the social aspects and intentions in different grasping actions as well as communicative point-light actions.
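One of the four techniques compared, k-Nearest Neighbor, reduces to a few lines over kinematic feature vectors. The features and labeled clips below are hypothetical stand-ins for illustration, not the study's data:

```python
import math
from collections import Counter

# Hypothetical kinematic features per gesture clip:
# (mean hand speed, grip aperture change, movement smoothness)
TRAINING = [
    ((0.9, 0.8, 0.3), "grasp"),
    ((0.8, 0.7, 0.4), "grasp"),
    ((1.0, 0.9, 0.2), "grasp"),
    ((0.3, 0.1, 0.9), "social"),
    ((0.2, 0.2, 0.8), "social"),
    ((0.4, 0.1, 0.7), "social"),
]

def knn_classify(features, k=3):
    """Label a clip by majority vote among its k nearest training neighbors."""
    dists = sorted((math.dist(features, f), label) for f, label in TRAINING)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

print(knn_classify((0.85, 0.75, 0.35)))  # lies near the grasp cluster
print(knn_classify((0.25, 0.15, 0.85)))  # lies near the social cluster
```

The study's other classifiers (LSH Forest, Random Forest, SVM) operate on the same feature-vector representation, which is why agreement between machine and human classifications can be read as evidence that kinematics carry the relevant information.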
Affiliation(s)
- Paul Hemeren
- School of Informatics, University of Skövde, Skövde, Sweden
- Peter Veto
- School of Informatics, University of Skövde, Skövde, Sweden
- Serge Thill
- School of Informatics, University of Skövde, Skövde, Sweden; Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
- Cai Li
- Pin An Technology Co. Ltd., Shenzhen, China
11
Abstract
Why do we run toward people we love, but only walk toward others? One reason is to let them know we love them. In this commentary, we elaborate on how subjective utility information encoded in vigor is read out by others. We consider the potential implications for understanding and modeling the link between movements and decisions in social environments.
12
Savaki HE, Kavroulakis E, Papadaki E, Maris TG, Simos PG. Action Observation Responses Are Influenced by Movement Kinematics and Target Identity. Cereb Cortex 2021; 32:490-503. PMID: 34259867; DOI: 10.1093/cercor/bhab225.
Abstract
In order to inform the debate whether cortical areas related to action observation provide a pragmatic or a semantic representation of goal-directed actions, we performed 2 functional magnetic resonance imaging (fMRI) experiments in humans. The first experiment, involving observation of aimless arm movements, resulted in activation of most of the components known to support action execution and action observation. Given the absence of a target/goal in this experiment and the activation of parieto-premotor cortical areas, which were associated in the past with direction, amplitude, and velocity of movement of biological effectors, our findings suggest that during action observation we could be monitoring movement kinematics. With the second, double dissociation fMRI experiment, we revealed the components of the observation-related cortical network affected by 1) actions that have the same target/goal but different reaching and grasping kinematics and 2) actions that have very similar kinematics but different targets/goals. We found that certain areas related to action observation, including the mirror neuron ones, are informed about movement kinematics and/or target identity, hence providing a pragmatic rather than a semantic representation of goal-directed actions. Overall, our findings support a process-driven simulation-like mechanism of action understanding, in agreement with the theory of motor cognition, and question motor theories of action concept processing.
Affiliation(s)
- Helen E Savaki
- Institute of Applied and Computational Mathematics, Foundation for Research and Technology Hellas, Iraklion, Crete 70013, Greece; Faculty of Medicine, School of Health Sciences, University of Crete, Iraklion, Crete 70013, Greece
- Eleftherios Kavroulakis
- Faculty of Medicine, School of Health Sciences, University of Crete, Iraklion, Crete 70013, Greece
- Efrosini Papadaki
- Faculty of Medicine, School of Health Sciences, University of Crete, Iraklion, Crete 70013, Greece; Computational Bio-Medicine Laboratory, Institute of Computer Science, Foundation for Research and Technology Hellas, Iraklion, Crete 70013, Greece
- Thomas G Maris
- Faculty of Medicine, School of Health Sciences, University of Crete, Iraklion, Crete 70013, Greece; Computational Bio-Medicine Laboratory, Institute of Computer Science, Foundation for Research and Technology Hellas, Iraklion, Crete 70013, Greece
- Panagiotis G Simos
- Faculty of Medicine, School of Health Sciences, University of Crete, Iraklion, Crete 70013, Greece; Computational Bio-Medicine Laboratory, Institute of Computer Science, Foundation for Research and Technology Hellas, Iraklion, Crete 70013, Greece
| |
13
Cerullo S, Fulceri F, Muratori F, Contaldo A. Acting with shared intentions: A systematic review on joint action coordination in Autism Spectrum Disorder. Brain Cogn 2021; 149:105693. [PMID: 33556847; DOI: 10.1016/j.bandc.2021.105693]
Abstract
BACKGROUND Joint actions, described as a form of social interaction in which individuals coordinate their actions in space and time to bring about a change in the environment, rely on sensory-motor processes that play a role in the development of social skills. Two brain networks, associated with "mirroring" and "mentalizing", are engaged during these actions: the mirror neuron and the theory of mind systems. People with autism spectrum disorder (ASD) show impairments in interpersonal coordination during joint actions. Studying joint action coordination in ASD will help to clarify the interplay between sensory-motor and social processes throughout development, as well as the interactions between brain and behavior. METHOD This review focused on empirical studies that reported behavioral and kinematic findings related to joint action coordination in people with ASD. RESULTS The literature on the mechanisms involved in impaired joint action coordination in ASD is still limited, and the available data are conflicting. Different key components of joint action coordination may be impaired, such as cooperative behavior, temporal coordination, and motor planning. CONCLUSIONS Interpersonal coordination during joint actions relies on early sensory-motor processes that play a key role in guiding social development. Early intervention targeting the sensory-motor processes involved in the development of joint action coordination could positively support social skills.
Affiliation(s)
- Sonia Cerullo: IRCCS Stella Maris Foundation, 331 Viale del Tirreno, 56018 Pisa, Italy
- Francesca Fulceri: Research Coordination and Support Service, Istituto Superiore di Sanità, Viale Regina Elena 299, 00161 Rome, Italy
- Filippo Muratori: IRCCS Stella Maris Foundation, 331 Viale del Tirreno, 56018 Pisa, Italy; Department of Clinical and Experimental Medicine, University of Pisa, Pisa, Italy
- Annarita Contaldo: IRCCS Stella Maris Foundation, 331 Viale del Tirreno, 56018 Pisa, Italy
14
Trujillo JP, Simanova I, Bekkering H, Özyürek A. The communicative advantage: how kinematic signaling supports semantic comprehension. Psychological Research 2020; 84:1897-1911. [PMID: 31079227; PMCID: PMC7772160; DOI: 10.1007/s00426-019-01198-y]
Abstract
Humans are unique in their ability to communicate information through representational gestures, which visually simulate an action (e.g., moving the hands as if opening a jar). Previous research indicates that the intention to communicate modulates the kinematics (e.g., velocity, size) of such gestures. If and how this modulation influences addressees' comprehension of gestures has not been investigated. Here we ask whether communicative kinematic modulation enhances semantic comprehension (i.e., identification) of gestures. We additionally investigate whether any comprehension advantage is due to enhanced early identification or late identification. Participants (n = 20) watched videos of representational gestures produced in a more-communicative (n = 60) or less-communicative (n = 60) context and performed a forced-choice recognition task. We tested the isolated role of kinematics by removing visibility of the actors' faces in Experiment I, and by reducing the stimuli to stick-light figures in Experiment II. Three video lengths were used to disentangle early from late identification. Accuracy and response time quantified main effects. Kinematic modulation was tested for correlations with task performance. We found higher identification performance for more-communicative compared to less-communicative gestures. However, early identification was only enhanced within a full visual context, while late identification occurred even when viewing isolated kinematics. Additionally, temporally segmented acts with more post-stroke holds were associated with higher accuracy. Our results demonstrate that communicative signaling, interacting with other visual cues, generally supports gesture identification, while kinematic modulation specifically enhances late identification in the absence of other cues. These results provide insights into mutual-understanding processes as well as into the creation of artificial communicative agents.
Affiliation(s)
- James P Trujillo: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Montessorilaan 3, B.01.25, 6525GR Nijmegen, The Netherlands; Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
- Irina Simanova: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Montessorilaan 3, B.01.25, 6525GR Nijmegen, The Netherlands
- Harold Bekkering: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Montessorilaan 3, B.01.25, 6525GR Nijmegen, The Netherlands
- Asli Özyürek: Centre for Language Studies, Radboud University, Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525XD Nijmegen, The Netherlands
15
Lewkowicz D, Delevoye-Turrell YN. Predictable real-time constraints reveal anticipatory strategies of coupled planning in a sequential pick and place task. Q J Exp Psychol (Hove) 2020; 73:594-616. [DOI: 10.1177/1747021819888081]
Abstract
Planning a sequence of two motor elements is much more than concatenating two independent movements. However, very little is known about the cognitive strategies that are used to perform fluent sequences for intentional object manipulation. In this series of studies, the participants' task was to reach for and pick up a wooden cylinder and set it on a place pad of one of three diameters, which served to modify terminal accuracy constraints. Participants were required to perform the sequences (1) at their preferred speed or (2) as fast as possible. Action kinematics were recorded with the Qualisys motion-capture system to implement a real-time protocol in which participants engaged in a truly interactive relation. Results revealed that with low internal constraints (at preferred speed), low coupling between the two elements of the motor sequence was observed, suggesting a step-by-step planning strategy. Under high constraints (at fastest speed), an important terminal-accuracy effect back-propagated to modify early kinematic parameters of the first element, suggesting strong coupling of the parameters in an encapsulated planning strategy. In Studies 2 and 3, we further manipulated instructions and timing constraints to confirm the importance of time and of the predictability of external information for coupled planning. Overall, these findings support the hypothesis that coupled planning can take place in a pick and place task when anticipatory strategies are possible. This mode of action planning may be the key reason why motor intention can be read through the observation of micro-variations in body kinematics.
Affiliation(s)
- Daniel Lewkowicz: Sciences Cognitives et Sciences Affectives (SCALab), UMR CNRS 9193, Université de Lille, Villeneuve d'Ascq, France
- Yvonne N Delevoye-Turrell: Sciences Cognitives et Sciences Affectives (SCALab), UMR CNRS 9193, Université de Lille, Villeneuve d'Ascq, France
16
Trujillo JP, Simanova I, Özyürek A, Bekkering H. Seeing the Unexpected: How Brains Read Communicative Intent through Kinematics. Cereb Cortex 2020; 30:1056-1067. [PMID: 31504305; PMCID: PMC7132920; DOI: 10.1093/cercor/bhz148]
Abstract
Social interaction requires us to recognize subtle cues in behavior, such as kinematic differences in actions and gestures produced with different social intentions. Neuroscientific studies indicate that the putative mirror neuron system (pMNS) in the premotor cortex and mentalizing system (MS) in the medial prefrontal cortex support inferences about contextually unusual actions. However, little is known regarding the brain dynamics of these systems when viewing communicatively exaggerated kinematics. In an event-related functional magnetic resonance imaging experiment, 28 participants viewed stick-light videos of pantomime gestures, recorded in a previous study, which contained varying degrees of communicative exaggeration. Participants made either social or nonsocial classifications of the videos. Using participant responses and pantomime kinematics, we modeled the probability of each video being classified as communicative. Interregion connectivity and activity were modulated by kinematic exaggeration, depending on the task. In the Social Task, communicativeness of the gesture increased activation of several pMNS and MS regions and modulated top-down coupling from the MS to the pMNS, but engagement of the pMNS and MS was not found in the nonsocial task. Our results suggest that expectation violations can be a key cue for inferring communicative intention, extending previous findings from wholly unexpected actions to more subtle social signaling.
Affiliation(s)
- James P Trujillo: Donders Institute for Brain, Cognition and Behaviour; Centre for Language Studies, Radboud University Nijmegen, 6500HD Nijmegen, the Netherlands
- Asli Özyürek: Centre for Language Studies, Radboud University Nijmegen, 6500HD Nijmegen, the Netherlands; Max Planck Institute for Psycholinguistics, 6525XD Nijmegen, the Netherlands
17
De Marco D, Scalona E, Bazzini MC, Avanzini P, Fabbri-Destro M. Observer-Agent Kinematic Similarity Facilitates Action Intention Decoding. Sci Rep 2020; 10:2605. [PMID: 32054915; PMCID: PMC7018748; DOI: 10.1038/s41598-020-59176-z]
Abstract
It is well known that the kinematics of an action is modulated by the underlying motor intention. In turn, kinematics serves as a cue during action observation, providing hints about the intention of the observed action. However, an open question is whether decoding others' intentions on the basis of their kinematics depends solely on how much the kinematics varies across different actions, or whether it is also influenced by its similarity with the observer's motor repertoire. The execution of reach-to-grasp and place actions, differing in target size and context, was recorded in terms of upper-limb kinematics in 21 volunteers and in an actor. Volunteers later had to observe only the reach-to-grasp phase of the actor's actions and predict the underlying intention. The potential benefit of actor-participant kinematic similarity for recognition accuracy was evaluated. In execution, both target size and context modulated specific kinematic parameters. More importantly, although participants performed above chance in intention recognition, the similarity of motor patterns positively correlated with recognition accuracy. Overall, these data indicate that kinematic similarity exerts a facilitative role in intention recognition, providing further support for the view of action intention recognition as a visuo-motor process grounded in motor resonance.
Affiliation(s)
- Doriana De Marco: Consiglio Nazionale delle Ricerche (CNR), Istituto di Neuroscienze, sede di Parma, Italy
- Emilia Scalona: Consiglio Nazionale delle Ricerche (CNR), Istituto di Neuroscienze, sede di Parma, Italy
- Maria Chiara Bazzini: Consiglio Nazionale delle Ricerche (CNR), Istituto di Neuroscienze, sede di Parma, Italy
- Pietro Avanzini: Consiglio Nazionale delle Ricerche (CNR), Istituto di Neuroscienze, sede di Parma, Italy
18
McMahon EG, Zheng CY, Pereira F, Gonzalez R, Ungerleider LG, Vaziri-Pashkam M. Subtle predictive movements reveal actions regardless of social context. J Vis 2020; 19:16. [PMID: 31355865; PMCID: PMC6662941; DOI: 10.1167/19.7.16]
Abstract
Humans have a remarkable ability to predict the actions of others. To address what information enables this prediction and how the information is modulated by social context, we used videos collected during an interactive reaching game. Two participants (an “initiator” and a “responder”) sat on either side of a plexiglass screen on which two targets were affixed. The initiator was directed to tap one of the two targets, and the responder had to either beat the initiator to the target (competition) or arrive at the same time (cooperation). In a psychophysics experiment, new observers predicted the direction of the initiators' reach from brief clips, which were clipped relative to when the initiator began reaching. A machine learning classifier performed the same task. Both humans and the classifier were able to determine the direction of movement before the finger lift-off in both social conditions. Further, using an information mapping technique, the relevant information was found to be distributed throughout the body of the initiator in both social conditions. Our results indicate that we reveal our intentions during cooperation, in which communicating the future course of actions is beneficial, and also during competition despite the social motivation to reveal less information.
Affiliation(s)
- Emalie G McMahon: Section on Neurocircuitry, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Charles Y Zheng: Machine Learning Team, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Francisco Pereira: Machine Learning Team, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Ray Gonzalez: Vision Laboratory, Department of Psychology, Harvard University, Cambridge, MA, USA
- Leslie G Ungerleider: Section on Neurocircuitry, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Maryam Vaziri-Pashkam: Section on Neurocircuitry, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
19
Bartlett ME, Edmunds CER, Belpaeme T, Thill S, Lemaignan S. What Can You See? Identifying Cues on Internal States From the Movements of Natural Social Interactions. Front Robot AI 2019; 6:49. [PMID: 33501065; PMCID: PMC7805824; DOI: 10.3389/frobt.2019.00049]
Abstract
In recent years, the field of Human-Robot Interaction (HRI) has seen an increasing demand for technologies that can recognize and adapt to human behaviors and internal states (e.g., emotions and intentions). Psychological research suggests that human movements are important for inferring internal states. There is, however, a need to better understand what kind of information can be extracted from movement data, particularly in unconstrained, natural interactions. The present study examines which internal states and social constructs humans identify from movement in naturalistic social interactions. Participants either viewed clips of the full scene or processed versions of it displaying 2D positional data. Then, they were asked to fill out questionnaires assessing their social perception of the viewed material. We analyzed whether the full scene clips were more informative than the 2D positional data clips. First, we calculated the inter-rater agreement between participants in both conditions. Then, we employed machine learning classifiers to predict the internal states of the individuals in the videos based on the ratings obtained. Although we found a higher inter-rater agreement for full scenes compared to positional data, the level of agreement in the latter case was still above chance, thus demonstrating that the internal states and social constructs under study were identifiable in both conditions. A factor analysis run on participants' responses showed that participants identified the constructs interaction imbalance, interaction valence and engagement regardless of video condition. The machine learning classifiers achieved a similar performance in both conditions, again supporting the idea that movement alone carries relevant information. 
Overall, our results suggest it is reasonable to expect a machine learning algorithm, and consequently a robot, to successfully decode and classify a range of internal states and social constructs using low-dimensional data (such as the movements and poses of observed individuals) as input.
Affiliation(s)
- Madeleine E Bartlett: Centre for Robotics and Neural Systems (CRNS), University of Plymouth, Plymouth, United Kingdom
- Tony Belpaeme: Centre for Robotics and Neural Systems (CRNS), University of Plymouth, Plymouth, United Kingdom; ID Lab-imec, University of Ghent, Ghent, Belgium
- Serge Thill: Interaction Lab, School of Informatics, University of Skövde, Skövde, Sweden; Donders Institute for Brain, Cognition, and Behavior, Radboud University, Nijmegen, Netherlands
- Séverin Lemaignan: Bristol Robotics Lab, University of the West of England, Bristol, United Kingdom
20
Zillekens IC, Brandi ML, Lahnakoski JM, Koul A, Manera V, Becchio C, Schilbach L. Increased functional coupling of the left amygdala and medial prefrontal cortex during the perception of communicative point-light stimuli. Soc Cogn Affect Neurosci 2019; 14:97-107. [PMID: 30481356; PMCID: PMC6318468; DOI: 10.1093/scan/nsy105]
Abstract
Interpersonal predictive coding (IPPC) describes the behavioral phenomenon whereby seeing a communicative rather than an individual action helps to discern a masked second agent. As little is yet known about the neural correlates of IPPC, we conducted a functional magnetic resonance imaging study in a group of 27 healthy participants using point-light displays of moving agents embedded in distractors. We discovered that seeing communicative compared to individual actions was associated with higher activation of the right superior frontal gyrus, whereas the reverse contrast elicited increased neural activation in an action observation network that was activated during all trials. Our findings therefore potentially indicate the formation of action predictions and a reduced demand for executive control in response to communicative actions. Further, in a regression analysis, we revealed that increased perceptual sensitivity was associated with a deactivation of the left amygdala during the perceptual task. A consecutive psychophysiological interaction analysis showed increased connectivity of the amygdala with the medial prefrontal cortex in the context of communicative compared to individual actions. Thus, whereas increased amygdala signaling might interfere with task-relevant processes, increased co-activation of the amygdala and the medial prefrontal cortex in a communicative context might represent the integration of mentalizing computations.
Affiliation(s)
- Imme C Zillekens: Independent Max Planck Research Group for Social Neuroscience, Max Planck Institute of Psychiatry, Munich, Germany; International Max Planck Research School for Translational Psychiatry, Munich, Germany
- Marie-Luise Brandi: Independent Max Planck Research Group for Social Neuroscience, Max Planck Institute of Psychiatry, Munich, Germany
- Juha M Lahnakoski: Independent Max Planck Research Group for Social Neuroscience, Max Planck Institute of Psychiatry, Munich, Germany
- Atesh Koul: Cognition, Motion and Neuroscience Unit, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Cristina Becchio: Cognition, Motion and Neuroscience Unit, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy; Department of Psychology, University of Turin, Turin, Italy
- Leonhard Schilbach: Independent Max Planck Research Group for Social Neuroscience, Max Planck Institute of Psychiatry, Munich, Germany; International Max Planck Research School for Translational Psychiatry, Munich, Germany; Department of Psychiatry, Ludwig-Maximilians-Universität, Munich, Germany
21
Cole EJ, Barraclough NE. Timing of mirror system activation when inferring the intentions of others. Brain Res 2018; 1700:109-117. [DOI: 10.1016/j.brainres.2018.07.015]
22
Donnarumma F, Dindo H, Pezzulo G. Sensorimotor Communication for Humans and Robots: Improving Interactive Skills by Sending Coordination Signals. IEEE Trans Cogn Dev Syst 2018. [DOI: 10.1109/tcds.2017.2756107]
23
Dosso JA, Kingstone A. Social modulation of object-directed but not image-directed actions. PLoS One 2018; 13:e0205830. [PMID: 30352061; PMCID: PMC6198971; DOI: 10.1371/journal.pone.0205830]
Abstract
There has recently been an increased research focus on the influence of social factors on human cognition, attention, and action. While this represents an important step towards an ecologically valid description of real-world behaviour, this work has primarily examined dyads interacting with virtual stimuli, i.e., on-screen images of objects. Though differences between actions directed at images and at real items are known, their relative sensitivity to social factors is largely unknown. We argue that because images and real items elicit different neural representations, patterns of attention, and hand actions, a direct comparison of the magnitude of social effects while interacting with images and with real objects is warranted. We examined patterns of reaching as individuals performed a shape-matching game. Images and real objects were used as stimuli, and social context was manipulated via the proximity of an observer. We found that social context interacted with stimulus type to modulate behaviour. Specifically, there was a delay in reaching for distant objects when a participant was facing another individual, but this social effect only occurred when the stimuli were real objects. Our data suggest that even when images and real objects are arranged to share the affordance of reachability, they differ in their sensitivity to social influences. Therefore, the measurement of social effects using on-screen stimuli may poorly predict the social effects of actions directed towards real objects. Accordingly, generalizations between these two domains should be treated with caution.
Affiliation(s)
- Jill A. Dosso: Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
- Alan Kingstone: Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
24
Visuo-motor interference with a virtual partner is equally present in cooperative and competitive interactions. Psychological Research 2018; 84:810-822. [PMID: 30191316; DOI: 10.1007/s00426-018-1090-8]
Abstract
Automatic imitation of observed actions is thought to be a powerful mechanism, one that may mediate the reward value of interpersonal interactions, but that could also generate visuo-motor interference when interactions involve complementary movements. Since interpersonal coordination seems to be crucial both when cooperating and when competing with others, the question arises as to whether imitation, and thus visuo-motor interference, occurs in both scenarios. To address this issue, we asked human participants to engage in high- or low-interactive (Interactive or Cued condition, respectively), cooperative or competitive, joint reach-to-grasps with a virtual partner. More specifically, interactions occurred in: (i) a Cued condition, where participants simply adapted their movement timing to synchronize with (during cooperation) or anticipate (during competition) the virtual partner's grasp; (ii) an Interactive condition requiring the same adaptation, as well as a real-time selection of their action according to the virtual character's movement. To simulate a realistic human-human interaction, the virtual character changed its movement speed in consecutive trials according to the participants' behaviour. Results demonstrate that visuo-motor interference, as indexed by movement kinematics (higher maximum wrist height during complementary compared to imitative power grips), emerges in both cooperative and competitive motor interactions only when predictions about the partner's movements are needed to perform one's own action (Interactive condition). These results support the idea that simulative imitation is heavily present when individuals need to match their behaviours closely.
25
Toward the markerless and automatic analysis of kinematic features: A toolkit for gesture and movement research. Behav Res Methods 2018; 51:769-777. [PMID: 30143970; PMCID: PMC6478643; DOI: 10.3758/s13428-018-1086-8]
Abstract
Action, gesture, and sign represent unique aspects of human communication that use form and movement to convey meaning. Researchers typically use manual coding of video data to characterize naturalistic, meaningful movements at various levels of description, but the availability of markerless motion-tracking technology allows for quantification of the kinematic features of gestures or any meaningful human movement. We present a novel protocol for extracting a set of kinematic features from movements recorded with Microsoft Kinect. Our protocol captures spatial and temporal features, such as height, velocity, submovements/strokes, and holds. This approach is based on studies of communicative actions and gestures and attempts to capture features that are consistently implicated as important kinematic aspects of communication. We provide open-source code for the protocol, a description of how the features are calculated, a validation of these features as quantified by our protocol versus manual coders, and a discussion of how the protocol can be applied. The protocol effectively quantifies kinematic features that are important in the production (e.g., characterizing different contexts) as well as the comprehension (e.g., used by addressees to understand intent and semantics) of manual acts. The protocol can also be integrated with qualitative analysis, allowing fast and objective demarcation of movement units, providing accurate coding even of complex movements. This can be useful to clinicians, as well as to researchers studying multimodal communication or human–robot interactions. By making this protocol available, we hope to provide a tool that can be applied to understanding meaningful movement characteristics in human communication.
26
Identifying others' informative intentions from movement kinematics. Cognition 2018; 180:246-258. [PMID: 30096482; DOI: 10.1016/j.cognition.2018.08.001]
Abstract
Previous research has demonstrated that people can reliably distinguish between actions with different instrumental intentions on the basis of the kinematic signatures of these actions (Cavallo, Koul, Ansuini, Capozzi, & Becchio, 2016). It has also been demonstrated that different informative intentions result in distinct action kinematics (McEllin, Knoblich, & Sebanz, 2017). However, it is unknown whether people can discriminate between instrumental actions and actions performed with an informative intention, and between actions performed with different informative intentions, on the basis of kinematic cues produced in these actions. We addressed these questions using a visual discrimination paradigm in which participants were presented with point light animations of an actor playing a virtual xylophone. We systematically manipulated and amplified kinematic parameters that have been shown to reflect different informative intentions. We found that participants reliably used both spatial and temporal cues in order to discriminate between instrumental actions and actions performed with an informative intention, and between actions performed with different informative intentions. Our findings indicate that the informative cues produced in joint action and teaching go beyond serving a general informative purpose and can be used to infer specific informative intentions.
27
Betti S, Zani G, Granziol U, Guerra S, Castiello U, Sartori L. Look at Me: Early Gaze Engagement Enhances Corticospinal Excitability During Action Observation. Front Psychol 2018; 9:1408. [PMID: 30140243; PMCID: PMC6095062; DOI: 10.3389/fpsyg.2018.01408]
Abstract
Direct gaze is a powerful social cue able to capture the onlooker’s attention. Besides gaze, head and limb movements can also provide relevant sources of information for social interaction. This study investigated the joint role of direct gaze and hand gestures on onlookers’ corticospinal excitability (CE). In two experiments we manipulated the temporal and spatial aspects of observed gaze and hand behavior to assess their role in affecting motor preparation. To do this, transcranial magnetic stimulation (TMS) over the primary motor cortex (M1) coupled with electromyography (EMG) recording was used in both experiments. In the crucial manipulation, we showed participants four video clips of an actor who initially displayed eye contact while starting a social request gesture, and then completed the action while directing his gaze toward an object salient for the interaction. In this way, the observed gaze potentially expressed the intention to interact. Eye-tracking data confirmed that the gaze manipulation was effective in drawing observers’ attention to the actor’s hand gesture. To reveal possible time-locked modulations, we tracked CE at the onset and offset of the request gesture. Neurophysiological results showed an early CE modulation when the actor was about to start the request gesture while looking straight at the participants, compared to when his gaze was averted from the gesture. This effect was time-locked to the kinematics of the actor’s arm movement. Overall, data from the two experiments indicate that the joint contribution of direct gaze and early kinematic information, gained while a request gesture is on the verge of beginning, increases the subjective experience of involvement and allows observers to prepare for an appropriate social interaction. Conversely, the separation of gaze cues and body kinematics can have adverse effects on social motor preparation.
CE is highly susceptible to biological cues, such as averted gaze, which can automatically capture and divert the observer’s attention. This points to the existence of heuristics based on early action and gaze cues that allow observers to interact appropriately.
Affiliation(s)
- Sonia Betti
- Dipartimento di Psicologia Generale, Università di Padova, Padova, Italy
- Giovanni Zani
- Dipartimento di Psicologia Generale, Università di Padova, Padova, Italy
- Umberto Granziol
- Dipartimento di Psicologia Generale, Università di Padova, Padova, Italy
- Silvia Guerra
- Dipartimento di Psicologia Generale, Università di Padova, Padova, Italy
- Umberto Castiello
- Dipartimento di Psicologia Generale, Università di Padova, Padova, Italy; Centro Beniamino Segre, Accademia Nazionale dei Lincei, Rome, Italy
- Luisa Sartori
- Dipartimento di Psicologia Generale, Università di Padova, Padova, Italy; Center for Cognitive Neuroscience, Università di Padova, Padova, Italy
28
Trujillo JP, Simanova I, Bekkering H, Özyürek A. Communicative intent modulates production and comprehension of actions and gestures: A Kinect study. Cognition 2018; 180:38-51. [PMID: 29981967 DOI: 10.1016/j.cognition.2018.04.003] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2017] [Revised: 03/16/2018] [Accepted: 04/02/2018] [Indexed: 10/28/2022]
Abstract
Actions may be used to directly act on the world around us, or as a means of communication. Effective communication requires the addressee to recognize the act as communicative. Humans are sensitive to ostensive communicative cues, such as direct eye gaze (Csibra & Gergely, 2009). However, there may be additional cues present in the action or gesture itself. Here we investigate features that characterize the initiation of a communicative interaction in both production and comprehension. We asked 40 participants to perform 31 pairs of object-directed actions and representational gestures in more- or less-communicative contexts. Data were collected using motion-capture technology for kinematics and video recording for eye gaze. With these data, we focused on two issues: first, whether and how actions and gestures are systematically modulated when performed in a communicative context; second, whether observers exploit such kinematic information to classify an act as communicative. Our study showed that during production the communicative context modulates the space-time dimensions of kinematics and elicits an increase in addressee-directed eye gaze. Naïve participants detected communicative intent in actions and gestures preferentially using eye-gaze information, utilizing kinematic information only when eye gaze was unavailable. Our study highlights the general communicative modulation of action and gesture kinematics during production, but also shows that addressees exploit this modulation to recognize communicative intention only in the absence of eye gaze. We discuss these findings in terms of the distinctive but potentially overlapping functions of addressee-directed eye gaze and kinematic modulations within the wider context of human communication and learning.
Affiliation(s)
- James P Trujillo
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, The Netherlands; Centre for Language Studies, Radboud University Nijmegen, The Netherlands
- Irina Simanova
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, The Netherlands
- Harold Bekkering
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, The Netherlands
- Asli Özyürek
- Centre for Language Studies, Radboud University Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525XD Nijmegen, The Netherlands
29
Craighero L, Mele S. Equal kinematics and visual context but different purposes: Observer's moral rules modulate motor resonance. Cortex 2018; 104:1-11. [DOI: 10.1016/j.cortex.2018.03.032] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2017] [Revised: 02/19/2018] [Accepted: 03/30/2018] [Indexed: 10/17/2022]
30
Schmitz L, Vesper C, Sebanz N, Knoblich G. When Height Carries Weight: Communicating Hidden Object Properties for Joint Action. Cogn Sci 2018; 42:2021-2059. [PMID: 29936705 PMCID: PMC6120543 DOI: 10.1111/cogs.12638] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2017] [Revised: 05/19/2019] [Accepted: 05/23/2018] [Indexed: 11/29/2022]
Abstract
In the absence of pre-established communicative conventions, people create novel communication systems to successfully coordinate their actions toward a joint goal. In this study, we address two types of such novel communication systems: sensorimotor communication, where the kinematics of instrumental actions are systematically modulated, versus symbolic communication. We ask which of the two systems co-actors preferentially create when aiming to communicate about hidden object properties such as weight. The results of three experiments consistently show that actors who knew the weight of an object transmitted this weight information to their uninformed co-actors by systematically modulating their instrumental actions, grasping objects of particular weights at particular heights. This preference for sensorimotor communication was reduced in a fourth experiment where co-actors could communicate with weight-related symbols. Our findings demonstrate that the use of sensorimotor communication extends beyond the communication of spatial locations to non-spatial, hidden object properties.
Affiliation(s)
- Laura Schmitz
- Department of Cognitive Science, Central European University
- Cordula Vesper
- Department of Cognitive Science, Central European University
- School of Communication and Culture, Aarhus University
- Natalie Sebanz
- Department of Cognitive Science, Central European University
31
Cole EJ, Slocombe KE, Barraclough NE. Abilities to Explicitly and Implicitly Infer Intentions from Actions in Adults with Autism Spectrum Disorder. J Autism Dev Disord 2018; 48:1712-1726. [PMID: 29214604 PMCID: PMC5889782 DOI: 10.1007/s10803-017-3425-5] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Previous research suggests that Autism Spectrum Disorder (ASD) might be associated with impairments on implicit but not explicit mentalizing tasks. However, such comparisons are made difficult by the heterogeneity of stimuli and the techniques used to measure mentalizing capabilities. We tested the abilities of 34 individuals (17 with ASD) to derive intentions from others' actions during both explicit and implicit tasks and tracked their eye-movements. Adults with ASD displayed explicit but not implicit mentalizing deficits. Adults with ASD displayed typical fixation patterns during both implicit and explicit tasks. These results illustrate an explicit mentalizing deficit in adults with ASD, which cannot be attributed to differences in fixation patterns.
Affiliation(s)
- Eleanor J Cole
- The Department of Psychology, The University of York, Heslington, York, YO10 5DD, UK
- Katie E Slocombe
- The Department of Psychology, The University of York, Heslington, York, YO10 5DD, UK
- Nick E Barraclough
- The Department of Psychology, The University of York, Heslington, York, YO10 5DD, UK
32
Seeing mental states: An experimental strategy for measuring the observability of other minds. Phys Life Rev 2018; 24:67-80. [DOI: 10.1016/j.plrev.2017.10.002] [Citation(s) in RCA: 46] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2017] [Revised: 09/29/2017] [Accepted: 10/01/2017] [Indexed: 02/03/2023]
33
Di Cesare G, De Stefani E, Gentilucci M, De Marco D. Vitality Forms Expressed by Others Modulate Our Own Motor Response: A Kinematic Study. Front Hum Neurosci 2017; 11:565. [PMID: 29204114 PMCID: PMC5698685 DOI: 10.3389/fnhum.2017.00565] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2017] [Accepted: 11/07/2017] [Indexed: 01/12/2023] Open
Abstract
During social interaction, actions and words may be expressed in different ways, for example gently or rudely. A handshake can be gentle or vigorous and, similarly, tone of voice can be pleasant or rude. These aspects of social communication were named vitality forms by Daniel Stern. Vitality forms represent how an action is performed and characterize all human interactions. In spite of their importance in social life, to date it is not clear whether the vitality forms expressed by an agent can influence the execution of a subsequent action performed by the receiver. To shed light on this matter, in the present study we carried out a kinematic experiment to assess whether and how the visual and auditory properties of vitality forms expressed by others influenced the motor response of participants. In particular, participants were presented with video clips showing a male and a female actor performing a "giving request" (give me) or a "taking request" (take it) in visual, auditory, and mixed (visual and auditory) modalities. Most importantly, requests were expressed with rude or gentle vitality forms. After the actor's request, participants performed a subsequent action. Results showed that the vitality forms expressed by the actors influenced the kinematic parameters of the participants' actions regardless of the modality in which they were conveyed.
Affiliation(s)
- Giuseppe Di Cesare
- Department of Robotics, Brain and Cognitive Sciences, Istituto Italiano di Tecnologia, Genova, Italy
- Elisa De Stefani
- Neuroscience Unit, Department of Medicine and Surgery, University of Parma, Parma, Italy
- Doriana De Marco
- Istituto di Neuroscienze, Consiglio Nazionale delle Ricerche, Parma, Italy
34
Gallagher S. Seeing in context: Comment on "Seeing mental states: An experimental strategy for measuring the observability of other minds" by Cristina Becchio et al. Phys Life Rev 2017; 24:104-106. [PMID: 29126778 DOI: 10.1016/j.plrev.2017.11.002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2017] [Accepted: 11/01/2017] [Indexed: 10/18/2022]
Affiliation(s)
- Shaun Gallagher
- Philosophy, University of Memphis, USA; Faculty of Law, Humanities and the Arts, University of Wollongong, Australia
35
36
Sahaï A, Pacherie E, Grynszpan O, Berberian B. Predictive Mechanisms Are Not Involved the Same Way during Human-Human vs. Human-Machine Interactions: A Review. Front Neurorobot 2017; 11:52. [PMID: 29081744 PMCID: PMC5645494 DOI: 10.3389/fnbot.2017.00052] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2017] [Accepted: 09/19/2017] [Indexed: 11/13/2022] Open
Abstract
Nowadays, interactions with others involve not only human peers but also automated systems. Many studies suggest that the motor predictive systems engaged during action execution are also involved during joint actions with peers and during the observation of actions generated by other humans. Indeed, the comparator-model hypothesis suggests that the comparison between a predicted state and an estimated real state enables motor control and, by a similar functioning, the understanding and anticipation of observed actions. Such a mechanism allows predictions to be made about an ongoing action and is essential to action regulation, especially during joint actions with peers. Interestingly, the same comparison process has been shown to be involved in the construction of an individual's sense of agency, both for self-generated actions and for observed actions generated by other humans. However, the involvement of such predictive mechanisms during interactions with machines is not consensual, probably owing to the great heterogeneity of the automata used in these experiments, from very simplistic devices to full humanoid robots. The discrepancies observed during human/machine interactions could arise from the absence of action/observation matching abilities when interacting with traditional low-level automata. Consistently, the difficulty of building a joint sense of agency with this kind of machine could stem from the same problem. In this context, we review the studies investigating predictive mechanisms during social interactions with humans and with automated artificial systems. We start by presenting human data that show the involvement of predictions in action control and in the sense of agency during social interactions. Thereafter, we confront this literature with data from the field of robotics. Finally, we address upcoming issues in robotics related to automated systems designed to act as collaborative agents.
Affiliation(s)
- Aïsha Sahaï
- Département d'Etudes Cognitives, ENS, EHESS, Centre National de la Recherche Scientifique, Institut Jean-Nicod, PSL Research University, Paris, France; ONERA, The French Aerospace Lab, Département Traitement de l'Information et Systèmes, Salon-de-Provence, France
- Elisabeth Pacherie
- Département d'Etudes Cognitives, ENS, EHESS, Centre National de la Recherche Scientifique, Institut Jean-Nicod, PSL Research University, Paris, France
- Ouriel Grynszpan
- Institut des Systèmes Intelligents et de Robotique, Université Pierre et Marie Curie, Paris, France
- Bruno Berberian
- ONERA, The French Aerospace Lab, Département Traitement de l'Information et Systèmes, Salon-de-Provence, France
37
How can the study of action kinematics inform our understanding of human social interaction? Neuropsychologia 2017; 105:101-110. [DOI: 10.1016/j.neuropsychologia.2017.01.018] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2016] [Revised: 01/17/2017] [Accepted: 01/18/2017] [Indexed: 11/17/2022]
38
Quesque F, Mignon A, Coello Y. Cooperative and competitive contexts do not modify the effect of social intention on motor action. Conscious Cogn 2017; 56:91-99. [PMID: 28697981 DOI: 10.1016/j.concog.2017.06.011] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2016] [Revised: 04/19/2017] [Accepted: 06/18/2017] [Indexed: 10/19/2022]
Abstract
In social interactions, the movements performed by others can be used to anticipate their intentions. The present paper investigates whether cooperative vs. competitive contexts influence the kinematics of object-directed motor actions and whether they modulate the effect of social intention on motor actions. An "Actor" and a "Partner" participated in a task consisting of displacing a wooden dowel under time constraint. Before this Main action, the Actor performed a Preparatory action, which consisted of placing the dowel at the center of the table. Information about who would make the forthcoming Main action was provided only to the Actor, through headphones. Results demonstrate an exaggeration of the spatial and temporal parameters of actions performed for the Partner, in cooperative as well as competitive contexts. This finding suggests that the motor manifestation of social intention is largely determined by non-conscious implicit processes that seem little influenced by the context of social interaction.
Affiliation(s)
- François Quesque
- Univ. Lille, CNRS, CHU Lille, UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, F-59000 Lille, France
- Astrid Mignon
- Univ. Lille, CNRS, CHU Lille, UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, F-59000 Lille, France
- Yann Coello
- Univ. Lille, CNRS, CHU Lille, UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, F-59000 Lille, France
39
Somon B, Campagne A, Delorme A, Berberian B. Performance Monitoring Applied to System Supervision. Front Hum Neurosci 2017; 11:360. [PMID: 28744209 PMCID: PMC5504305 DOI: 10.3389/fnhum.2017.00360] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2017] [Accepted: 06/26/2017] [Indexed: 12/30/2022] Open
Abstract
Nowadays, automation is present in every aspect of our daily life and has some benefits. Nonetheless, empirical data suggest that traditional automation has many negative performance and safety consequences as it changed task performers into task supervisors. In this context, we propose to use recent insights into the anatomical and neurophysiological substrates of action monitoring in humans, to help further characterize performance monitoring during system supervision. Error monitoring is critical for humans to learn from the consequences of their actions. A wide variety of studies have shown that the error monitoring system is involved not only in our own errors, but also in the errors of others. We hypothesize that the neurobiological correlates of the self-performance monitoring activity can be applied to system supervision. At a larger scale, a better understanding of system supervision may allow its negative effects to be anticipated or even countered. This review is divided into three main parts. First, we assess the neurophysiological correlates of self-performance monitoring and their characteristics during error execution. Then, we extend these results to include performance monitoring and error observation of others or of systems. Finally, we provide further directions in the study of system supervision and assess the limits preventing us from studying a well-known phenomenon: the Out-Of-the-Loop (OOL) performance problem.
Affiliation(s)
- Bertille Somon
- ONERA, Information Processing and Systems Department, Salon-de-Provence, France; Univ. Grenoble Alpes, CNRS, LPNC UMR 5105, Grenoble, France
- Arnaud Delorme
- Centre de Recherche Cerveau & Cognition, Pavillon Baudot, Hopital Purpan, BP-25202, Toulouse, France; Swartz Center for Computational Neurosciences, University of California San Diego, La Jolla, CA, United States
- Bruno Berberian
- ONERA, Information Processing and Systems Department, Salon-de-Provence, France
40
The role of perspective in discriminating between social and non-social intentions from reach-to-grasp kinematics. Psychological Research 2017; 82:915-928. [PMID: 28444467 DOI: 10.1007/s00426-017-0868-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2016] [Accepted: 04/18/2017] [Indexed: 10/19/2022]
Abstract
Making correct inferences regarding social and individual intentions may be crucial for successful interactions, especially when we are required to discriminate between cooperative and competitive behaviors. The results of previous studies indicate that reach-to-grasp kinematic parameters may be used to infer the social or individual outcome of a movement. However, the majority of studies have investigated this ability by presenting reach-to-grasp movements from a third-person perspective only. The aim of the present study was to assess whether the ability to recognize the intent associated with a reach-to-grasp movement varies as a function of perspective, by manipulating the perspective of observation (second- and third-person) within participants. To this end, we presented participants with video clips of models performing a reach-to-grasp movement with different intents. The video clips were recorded both from a lateral view (third-person perspective) and from a frontal view (second-person perspective). After viewing the clips, in two subsequent tasks participants were asked to distinguish between social and non-social intentions by observing the initial phase of the same action recorded from the two different views. Results showed that, when a fast-speed movement was presented from a lateral view, participants were able to predict its social intention. In contrast, when the same movement was observed from a frontal view, performance was impaired. These results indicate that the ability to detect social intentions from motor cues can be biased by the visual perspective of the observer, specifically for fast-speed movements.
41
Donnarumma F, Dindo H, Pezzulo G. Sensorimotor Coarticulation in the Execution and Recognition of Intentional Actions. Front Psychol 2017; 8:237. [PMID: 28280475 PMCID: PMC5322223 DOI: 10.3389/fpsyg.2017.00237] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2016] [Accepted: 02/07/2017] [Indexed: 11/13/2022] Open
Abstract
Humans excel at recognizing (or inferring) another's distal intentions, and recent experiments suggest that this may be possible using only subtle kinematic cues elicited during early phases of movement. Still, the cognitive and computational mechanisms underlying the recognition of intentional (sequential) actions are incompletely known and it is unclear whether kinematic cues alone are sufficient for this task, or if it instead requires additional mechanisms (e.g., prior information) that may be more difficult to fully characterize in empirical studies. Here we present a computationally-guided analysis of the execution and recognition of intentional actions that is rooted in theories of motor control and the coarticulation of sequential actions. In our simulations, when a performer agent coarticulates two successive actions in an action sequence (e.g., "reach-to-grasp" a bottle and "grasp-to-pour"), he automatically produces kinematic cues that an observer agent can reliably use to recognize the performer's intention early on, during the execution of the first part of the sequence. This analysis lends computational-level support for the idea that kinematic cues may be sufficiently informative for early intention recognition. Furthermore, it suggests that the social benefits of coarticulation may be a byproduct of a fundamental imperative to optimize sequential actions. Finally, we discuss possible ways a performer agent may combine automatic (coarticulation) and strategic (signaling) ways to facilitate, or hinder, an observer's action recognition processes.
Affiliation(s)
- Francesco Donnarumma
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Haris Dindo
- Computer Science Engineering, University of Palermo, Palermo, Italy
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
42
Finisguerra A, Amoruso L, Makris S, Urgesi C. Dissociated Representations of Deceptive Intentions and Kinematic Adaptations in the Observer's Motor System. Cereb Cortex 2016; 28:33-47. [DOI: 10.1093/cercor/bhw346] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2016] [Accepted: 10/24/2016] [Indexed: 11/13/2022] Open
Affiliation(s)
- Alessandra Finisguerra
- Dipartimento di Lingue e Letterature, Comunicazione, Formazione e Società, Università degli Studi di Udine, I-33100 Udine, Italy
- Lucia Amoruso
- Dipartimento di Lingue e Letterature, Comunicazione, Formazione e Società, Università degli Studi di Udine, I-33100 Udine, Italy
- Stergios Makris
- Department of Psychology, Edge Hill University, Ormskirk, Lancashire L39 4QP, UK
- Cosimo Urgesi
- Dipartimento di Lingue e Letterature, Comunicazione, Formazione e Società, Università degli Studi di Udine, I-33100 Udine, Italy
- Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS) Eugenio Medea, Polo Friuli Venezia Giulia, I-33078 San Vito al Tagliamento, Pordenone, Italy
43
Cavallo A, Koul A, Ansuini C, Capozzi F, Becchio C. Decoding intentions from movement kinematics. Sci Rep 2016; 6:37036. [PMID: 27845434 PMCID: PMC5109236 DOI: 10.1038/srep37036] [Citation(s) in RCA: 95] [Impact Index Per Article: 11.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2016] [Accepted: 10/24/2016] [Indexed: 11/09/2022] Open
Abstract
How do we understand the intentions of other people? There has been a longstanding controversy over whether it is possible to understand others' intentions by simply observing their movements. Here, we show that indeed movement kinematics can form the basis for intention detection. By combining kinematics and psychophysical methods with classification and regression tree (CART) modeling, we found that observers utilized a subset of discriminant kinematic features over the total kinematic pattern in order to detect intention from observation of simple motor acts. Intention discriminability covaried with movement kinematics on a trial-by-trial basis, and was directly related to the expression of discriminative features in the observed movements. These findings demonstrate a definable and measurable relationship between the specific features of observed movements and the ability to discriminate intention, providing quantitative evidence of the significance of movement kinematics for anticipating others' intentional actions.
Affiliation(s)
- Andrea Cavallo
- Department of Psychology, University of Torino, Torino, Italy
- Atesh Koul
- C’MON, Cognition, Motion and Neuroscience Unit, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
- Caterina Ansuini
- C’MON, Cognition, Motion and Neuroscience Unit, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
- Cristina Becchio
- Department of Psychology, University of Torino, Torino, Italy; C’MON, Cognition, Motion and Neuroscience Unit, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
44
Doing It Your Way: How Individual Movement Styles Affect Action Prediction. PLoS One 2016; 11:e0165297. [PMID: 27780259 PMCID: PMC5079573 DOI: 10.1371/journal.pone.0165297] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2016] [Accepted: 10/10/2016] [Indexed: 01/12/2023] Open
Abstract
Individuals show significant variations in performing a motor act. Previous studies in the action observation literature have largely ignored this ubiquitous, if often unwanted, characteristic of motor performance, assuming movement patterns to be highly similar across repetitions and individuals. In the present study, we examined the possibility that individual variations in motor style directly influence the ability to understand and predict others' actions. To this end, we first recorded grasping movements performed with different intents and used a two-step cluster analysis to identify quantitatively 'clusters' of movements performed with similar movement styles (Experiment 1). Next, using videos of the same movements, we proceeded to examine the influence of these styles on the ability to judge intention from action observation (Experiments 2 and 3). We found that motor styles directly influenced observers' ability to 'read' others' intention, with some styles always being less 'readable' than others. These results provide experimental support for the significance of motor variability for action prediction, suggesting that the ability to predict what another person is likely to do next directly depends on her individual movement style.
45
Humans are sensitive to attention control when predicting others' actions. Proc Natl Acad Sci U S A 2016; 113:8669-74. [PMID: 27436897 DOI: 10.1073/pnas.1601872113] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Studies of social perception report acute human sensitivity to where another's attention is aimed. Here we ask whether humans are also sensitive to how the other's attention is deployed. Observers viewed videos of actors reaching to targets without knowing that those actors were sometimes choosing to reach to one of the targets (endogenous control) and sometimes being directed to reach to one of the targets (exogenous control). Experiments 1 and 2 showed that observers could respond more rapidly when actors chose where to reach, yet were at chance when guessing whether the reach was chosen or directed. This implicit sensitivity to attention control held when either actor's faces or limbs were masked (experiment 3) and when only the earliest actor's movements were visible (experiment 4). Individual differences in sensitivity to choice correlated with an independent measure of social aptitude. We conclude that humans are sensitive to attention control through an implicit kinematic process linked to empathy. The findings support the hypothesis that social cognition involves the predictive modeling of others' attentional states.
|
46
|
I see what you say: Prior knowledge of other’s goals automatically biases the perception of their actions. Cognition 2016; 146:245-50. [DOI: 10.1016/j.cognition.2015.09.021] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2014] [Revised: 08/19/2015] [Accepted: 09/27/2015] [Indexed: 10/22/2022]
|
47
|
Catmur C. Understanding intentions from actions: Direct perception, inference, and the roles of mirror and mentalizing systems. Conscious Cogn 2015; 36:426-33. [DOI: 10.1016/j.concog.2015.03.012] [Citation(s) in RCA: 44] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2014] [Revised: 03/16/2015] [Accepted: 03/18/2015] [Indexed: 10/23/2022]
|
48
|
Quesque F, Delevoye-Turrell Y, Coello Y. Facilitation effect of observed motor deviants in a cooperative motor task: Evidence for direct perception of social intention in action. Q J Exp Psychol (Hove) 2015; 69:1451-63. [PMID: 26288247 DOI: 10.1080/17470218.2015.1083596] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
Spatiotemporal parameters of voluntary motor action may help optimize human social interactions. Yet it is unknown whether individuals performing a cooperative task spontaneously perceive subtly informative social cues emerging through voluntary actions. In the present study, an auditory cue was provided through headphones to an actor and a partner who faced each other. Depending on the pitch of the auditory cue, either the actor or the partner was required to grasp and move a wooden dowel under time constraints from a central to a lateral position. Before this main action, the actor performed a preparatory action under no time constraint, consisting of placing the wooden dowel on the central location when receiving either a neutral ("prêt"-ready) or an informative auditory cue indicating who would be asked to perform the main action (the actor: "moi"-me, or the partner: "lui"-him). Although the task focused on the main action, analysis of motor performance revealed that actors performed the preparatory action with longer reaction times and higher trajectories when informed that the partner would be performing the main action. In this same condition, partners executed the main actions with shorter reaction times and lower velocities, despite having received no previous informative cues. These results demonstrate that the mere observation of socially driven motor actions spontaneously influences the low-level kinematics of voluntary motor actions performed by the observer during a cooperative motor task. These findings indicate that social intention can be anticipated from the mere observation of action patterns.
Affiliation(s)
- François Quesque
- Cognitive and Affective Sciences Laboratory-SCALab, UMR CNRS 9193, University of Lille, Villeneuve d'Ascq, France
- Yvonne Delevoye-Turrell
- Cognitive and Affective Sciences Laboratory-SCALab, UMR CNRS 9193, University of Lille, Villeneuve d'Ascq, France
- Yann Coello
- Cognitive and Affective Sciences Laboratory-SCALab, UMR CNRS 9193, University of Lille, Villeneuve d'Ascq, France
|
49
|
Marty B, Bourguignon M, Jousmäki V, Wens V, Op de Beeck M, Van Bogaert P, Goldman S, Hari R, De Tiège X. Cortical kinematic processing of executed and observed goal-directed hand actions. Neuroimage 2015; 119:221-8. [DOI: 10.1016/j.neuroimage.2015.06.064] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2015] [Revised: 04/29/2015] [Accepted: 06/23/2015] [Indexed: 12/01/2022] Open
|
50
|
Sciutti A, Ansuini C, Becchio C, Sandini G. Investigating the ability to read others' intentions using humanoid robots. Front Psychol 2015; 6:1362. [PMID: 26441738 PMCID: PMC4563880 DOI: 10.3389/fpsyg.2015.01362] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2015] [Accepted: 08/24/2015] [Indexed: 11/13/2022] Open
Abstract
The ability to interact with other people hinges crucially on the possibility of anticipating how their actions will unfold. Recent evidence suggests that this skill may be grounded in the fact that we perform an action differently when it is led by different intentions. Human observers can detect these differences and use them to predict the purpose of the action. Although intention reading from movement observation is receiving growing interest in research, the currently applied experimental paradigms have important limitations. Here, we describe a new approach to studying intention understanding that takes advantage of robots, especially humanoid robots. We posit that this choice may overcome the drawbacks of previous methods by guaranteeing an ideal trade-off between controllability and naturalness of the interactive scenario. Robots can indeed establish an interaction in a controlled manner while sharing the same action space and exhibiting contingent behaviors. To conclude, we discuss the advantages of this research strategy and the aspects to be taken into consideration when attempting to define which human (and robot) motion features allow for intention reading during social interactive tasks.
Affiliation(s)
- Alessandra Sciutti
- Department of Robotics, Brain and Cognitive Sciences, Istituto Italiano di Tecnologia, Genoa, Italy
- Caterina Ansuini
- Department of Robotics, Brain and Cognitive Sciences, Istituto Italiano di Tecnologia, Genoa, Italy
- Cristina Becchio
- Department of Robotics, Brain and Cognitive Sciences, Istituto Italiano di Tecnologia, Genoa, Italy; Department of Psychology, Centre for Cognitive Science, University of Torino, Torino, Italy
- Giulio Sandini
- Department of Robotics, Brain and Cognitive Sciences, Istituto Italiano di Tecnologia, Genoa, Italy
|