1. Kroczek LOH, Lingnau A, Schwind V, Wolff C, Mühlberger A. Observers predict actions from facial emotional expressions during real-time social interactions. Behav Brain Res 2024; 471:115126. PMID: 38950784. DOI: 10.1016/j.bbr.2024.115126.
Abstract
In face-to-face social interactions, emotional expressions provide insights into the mental state of an interactive partner. This information can be crucial to infer action intentions and react towards another person's actions. Here we investigate how facial emotional expressions impact subjective experience and physiological and behavioral responses to social actions during real-time interactions. Thirty-two participants interacted with virtual agents while fully immersed in Virtual Reality. Agents displayed an angry or happy facial expression before they directed an appetitive (fist bump) or aversive (punch) social action towards the participant. Participants responded to these actions, either by reciprocating the fist bump or by defending the punch. For all interactions, subjective experience was measured using ratings. In addition, physiological responses (electrodermal activity, electrocardiogram) and participants' response times were recorded. Aversive actions were judged to be more arousing and less pleasant relative to appetitive actions. In addition, angry expressions increased heart rate relative to happy expressions. Crucially, interaction effects between facial emotional expression and action were observed. Angry expressions reduced pleasantness more strongly for appetitive compared to aversive actions. Furthermore, skin conductance responses to aversive actions were increased for happy compared to angry expressions, and reaction times were faster to aversive compared to appetitive actions when agents showed an angry expression. These results indicate that observers used facial emotional expression to generate expectations for particular actions. Consequently, the present study demonstrates that observers integrate information from facial emotional expressions with actions during social interactions.
Affiliation(s)
- Leon O H Kroczek: Department of Psychology, Clinical Psychology and Psychotherapy, University of Regensburg, Regensburg, Germany
- Angelika Lingnau: Department of Psychology, Cognitive Neuroscience, University of Regensburg, Regensburg, Germany
- Valentin Schwind: Human Computer Interaction, University of Applied Sciences in Frankfurt a. M., Frankfurt a. M., Germany; Department of Media Informatics, University of Regensburg, Regensburg, Germany
- Christian Wolff: Department of Media Informatics, University of Regensburg, Regensburg, Germany
- Andreas Mühlberger: Department of Psychology, Clinical Psychology and Psychotherapy, University of Regensburg, Regensburg, Germany
2. Azaad S, Sebanz N. Predicting others' actions from their social contexts. Sci Rep 2023; 13:22047. PMID: 38086897. PMCID: PMC10716130. DOI: 10.1038/s41598-023-49081-6.
Abstract
Contextual cues have been shown to inform our understanding and predictions of others' actions. In this study, we tested whether observers' predictions about unfolding actions depend upon the social context in which they occur. Across five experiments, we showed participants videos of an actor walking toward a piece of furniture either with (joint context) or without (solo context) a partner standing by it. We found greater predictive bias, indicative of stronger action expectations when videos contained a second actor (Experiment 1), even when the solo condition had a perceptually-matched control object in place of the actor (Experiment 2). Critically, belief manipulations about the actions the walking actor would perform suppressed the difference between social context conditions when the manipulation specified an action possible in both contexts (Experiment 5) but not when the action was one that would be difficult without a partner (Experiment 4). Interestingly, the social context effect persisted when the belief manipulation specified an unlikely action given the depicted scene (Experiment 3). These findings provide novel evidence that kinematically-identical actions can elicit different predictions depending on the social context in which they occur.
3. Torricelli F, Tomassini A, Pezzulo G, Pozzo T, Fadiga L, D'Ausilio A. Motor invariants in action execution and perception. Phys Life Rev 2023; 44:13-47. PMID: 36462345. DOI: 10.1016/j.plrev.2022.11.003.
Abstract
The nervous system is sensitive to statistical regularities of the external world and forms internal models of these regularities to predict environmental dynamics. Given the inherently social nature of human behavior, being capable of building reliable predictive models of others' actions may be essential for successful interaction. While social prediction might seem to be a daunting task, the study of human motor control has accumulated ample evidence that our movements follow a series of kinematic invariants, which can be used by observers to reduce their uncertainty during social exchanges. Here, we provide an overview of the most salient regularities that shape biological motion, examine the role of these invariants in recognizing others' actions, and speculate that anchoring socially-relevant perceptual decisions to such kinematic invariants provides a key computational advantage for inferring conspecifics' goals and intentions.
Affiliation(s)
- Francesco Torricelli: Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
- Alice Tomassini: Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
- Giovanni Pezzulo: Institute of Cognitive Sciences and Technologies, National Research Council, Via San Martino della Battaglia 44, 00185 Rome, Italy
- Thierry Pozzo: Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; INSERM UMR1093-CAPS, UFR des Sciences du Sport, Université Bourgogne Franche-Comté, F-21000 Dijon, France
- Luciano Fadiga: Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
- Alessandro D'Ausilio: Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
4. Bayani KYT, Natraj N, Gale MK, Temples D, Atawala N, Wheaton LA. Flexible constraint hierarchy during the visual encoding of tool-object interactions. Eur J Neurosci 2021; 54:6520-6532. PMID: 34523764. DOI: 10.1111/ejn.15460.
Abstract
Tools and objects are associated with numerous action possibilities that are reduced depending on the task-related internal and external constraints presented to the observer. Action hierarchies propose that goals represent higher levels of the hierarchy while kinematic patterns represent lower levels of the hierarchy. Prior work suggests that tool-object perception is heavily influenced by grasp and action context. The current study sought to evaluate whether the presence of action hierarchy can be perceptually identified using eye tracking during tool-object observation. We hypothesized that gaze patterns would reveal a perceptual hierarchy based on the observed task context and grasp constraints. Participants viewed tool-object scenes with two types of constraints: task-context and grasp constraints. Task-context constraints consisted of correct (e.g., frying pan-spatula) and incorrect tool-object pairings (e.g., stapler-spatula). Grasp constraints involved modified tool orientations, which required participants to understand how initially awkward grasp postures can help achieve the task. The visual scene contained three areas of interest (AOIs): the object, the functional tool-end (e.g., spoon handle) and the manipulative tool-end (e.g., spoon bowl). Results revealed two distinct processes based on stimulus constraints. Goal-oriented encoding, the attentional bias towards the object and manipulative tool-end, was demonstrated when grasp did not lead to meaningful tool-use. In images where grasp postures were critical to action performance, attentional bias was primarily between the object and functional tool-end, which suggests means-related encoding of the graspable properties of the object. This study expands on previous work and demonstrates a flexible constraint hierarchy depending on the observed task constraints.
Affiliation(s)
- Nikhilesh Natraj: School of Biological Sciences, Georgia Institute of Technology, Atlanta, Georgia, USA; Weill Institute of Neurosciences, University of California, San Francisco, California, USA
- Mary Kate Gale: School of Biological Sciences, Georgia Institute of Technology, Atlanta, Georgia, USA
- Danielle Temples: School of Biological Sciences, Georgia Institute of Technology, Atlanta, Georgia, USA
- Neel Atawala: School of Biological Sciences, Georgia Institute of Technology, Atlanta, Georgia, USA
- Lewis A Wheaton: School of Biological Sciences, Georgia Institute of Technology, Atlanta, Georgia, USA
5. Kroczek LOH, Lingnau A, Schwind V, Wolff C, Mühlberger A. Angry facial expressions bias towards aversive actions. PLoS One 2021; 16:e0256912. PMID: 34469494. PMCID: PMC8409676. DOI: 10.1371/journal.pone.0256912.
Abstract
Social interaction requires fast and efficient processing of another person's intentions. In face-to-face interactions, aversive or appetitive actions typically co-occur with emotional expressions, allowing an observer to anticipate action intentions. In the present study, we investigated the influence of facial emotions on the processing of action intentions. Thirty-two participants were presented with video clips showing virtual agents displaying a facial emotion (angry vs. happy) while performing an action (punch vs. fist-bump) directed towards the observer. During each trial, video clips stopped at varying durations of the unfolding action, and participants had to recognize the presented action. Naturally, participants' recognition accuracy improved with increasing duration of the unfolding actions. Interestingly, while facial emotions did not influence accuracy, there was a significant influence on participants' action judgements. Participants were more likely to judge a presented action as a punch when agents showed an angry compared to a happy facial emotion. This effect was more pronounced in short video clips, showing only the beginning of an unfolding action, than in long video clips, showing near-complete actions. These results suggest that facial emotions influence anticipatory processing of action intentions allowing for fast and adaptive responses in social interactions.
Affiliation(s)
- Leon O. H. Kroczek: Department of Psychology, Clinical Psychology and Psychotherapy, University of Regensburg, Regensburg, Germany
- Angelika Lingnau: Department of Psychology, Cognitive Neuroscience, University of Regensburg, Regensburg, Germany
- Valentin Schwind: Human Computer Interaction, University of Applied Sciences in Frankfurt a. M., Frankfurt a. M., Germany; Department of Media Informatics, University of Regensburg, Regensburg, Germany
- Christian Wolff: Department of Media Informatics, University of Regensburg, Regensburg, Germany
- Andreas Mühlberger: Department of Psychology, Clinical Psychology and Psychotherapy, University of Regensburg, Regensburg, Germany
6. Bellebaum C, Ghio M, Wollmer M, Weismüller B, Thoma P. The role of trait empathy in the processing of observed actions in a false-belief task. Soc Cogn Affect Neurosci 2021; 15:53-61. PMID: 31993669. PMCID: PMC7171373. DOI: 10.1093/scan/nsaa009.
Abstract
Empathic brain responses are characterized by overlapping activations between active experience and observation of an emotion in another person, with the pattern for observation being modulated by trait empathy. Similar brain activity has also been described for self-performed and observed errors, but findings concerning the role of empathy are mixed. We hypothesized that trait empathy modulates the processing of observed responses if expectations concerning the response are based on the beliefs of the observed person. In the present study, we utilized a false-belief task in which the observed person's and the observer's task-related knowledge were dissociated and errors and correct responses could be expected or unexpected. While theta power was generally modulated by the expectancy of the observed response, a negative mediofrontal event-related potential (ERP) component was more pronounced for unexpected observed actions only in participants with higher trait empathy (assessed by the Empathy Quotient), as revealed by linear mixed effects analyses. Cognitive and affective empathy, assessed by the Interpersonal Reactivity Index, were not significantly related to the ERP component. The results suggest that trait empathy can facilitate the generation of predictions and thereby modulate specific aspects of the processing of observed actions, while the contributions of specific empathy components remain unclear.
Affiliation(s)
- Christian Bellebaum: Institute of Experimental Psychology, Heinrich-Heine University Düsseldorf, 40225 Düsseldorf, Germany
- Marta Ghio: Institute of Experimental Psychology, Heinrich-Heine University Düsseldorf, 40225 Düsseldorf, Germany
- Marie Wollmer: Institute of Experimental Psychology, Heinrich-Heine University Düsseldorf, 40225 Düsseldorf, Germany
- Benjamin Weismüller: Institute of Experimental Psychology, Heinrich-Heine University Düsseldorf, 40225 Düsseldorf, Germany
- Patrizia Thoma: Faculty of Psychology, Clinical Neuropsychology, Neuropsychological Therapy Centre, Ruhr University Bochum, 44780 Bochum, Germany
7. Ganglmayer K, Haupt M, Finke K, Paulus M. Adults, but not preschoolers or toddlers integrate situational constraints in their action anticipations: a developmental study on the flexibility of anticipatory gaze. Cogn Process 2021; 22:515-528. PMID: 33763791. PMCID: PMC8324589. DOI: 10.1007/s10339-021-01015-8.
Abstract
Recent theories stress the role of situational information in understanding others' behaviour. For example, the predictive coding framework assumes that people take contextual information into account when anticipating others' actions. Likewise, the teleological stance theory assumes an early developing ability to consider situational constraints in action prediction. The current study investigates, over a wide age range, whether humans flexibly integrate situational constraints into their action anticipations. By means of an eye-tracking experiment, 2-year-olds, 5-year-olds, and younger and older adults (together N = 181) observed an agent repeatedly taking one of two paths to reach a goal. Then this path became blocked, and in test trials only the other path was passable. Results demonstrated that in test trials younger and older adults anticipated that the agent would take the continuous path, indicating that they took the situational constraints into account. In contrast, 2- and 5-year-olds anticipated that the agent would take the blocked path, indicating that they still relied on the agent's previously observed behaviour and, contrary to claims of the teleological stance theory, did not take the situational constraints into account. The results highlight developmental changes in humans' ability to include situational constraints in their visual anticipations. Overall, the study contributes to theories on predictive coding and the development of action understanding.
Affiliation(s)
- Kerstin Ganglmayer: Department Psychology, Developmental Psychology, Ludwig-Maximilians-Universität München, Leopoldstr. 13, 80802 Munich, Germany
- Marleen Haupt: Department Psychology, General and Experimental Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Kathrin Finke: Department Psychology, General and Experimental Psychology, Ludwig-Maximilians-Universität München, Munich, Germany; Hans-Berger Department of Neurology, University Hospital Jena, Jena, Germany
- Markus Paulus: Department Psychology, Developmental Psychology, Ludwig-Maximilians-Universität München, Leopoldstr. 13, 80802 Munich, Germany
8. O'Shea H, Redmond SJ. A review of the neurobiomechanical processes underlying secure gripping in object manipulation. Neurosci Biobehav Rev 2021; 123:286-300. PMID: 33497782. DOI: 10.1016/j.neubiorev.2021.01.007.
Abstract
O'SHEA, H. and S. J. Redmond. A review of the neurobiomechanical processes underlying secure gripping in object manipulation. NEUROSCI BIOBEHAV REV 286-300, 2021. Humans display skilful control over the objects they manipulate, so much so that biomimetic systems have yet to emulate this remarkable behaviour. Two key control processes are assumed to facilitate such dexterity: predictive cognitive-motor processes that guide manipulation procedures by anticipating action outcomes; and reactive sensorimotor processes that provide important error-based information for movement adaptation. Notwithstanding increased interdisciplinary research interest in object manipulation behaviour, the complexity of the perceptual-sensorimotor-cognitive processes involved and the theoretical divide regarding the fundamentality of control mean that the essential mechanisms underlying manipulative action remain undetermined. In this paper, following a detailed discussion of the theoretical and empirical bases for understanding human dexterous movement, we emphasise the role of tactile-related sensory events in secure object handling, and consider the contribution of certain biophysical and biomechanical phenomena. We aim to provide an integrated account of the current state-of-art in skilled human-object interaction that bridges the literature in neuroscience, cognitive psychology, and biophysics. We also propose novel directions for future research exploration in this area.
9. Craighero L, Mele S. Proactive gaze is present during biological and non-biological motion observation. Cognition 2020; 206:104461. PMID: 33010721. DOI: 10.1016/j.cognition.2020.104461.
Abstract
Others' action observation activates in the observer a coordinated hand-eye motor program, covert for the hand (i.e. motor resonance), and overt for the eye (i.e. proactive gaze), similar to that of the observed agent. The biological motion hypothesis of action anticipation claims that proactive gaze occurs only in the presence of biological motion, and that kinematic information is sufficient to determine the anticipation process. The results of the present study did not support the biological motion hypothesis of action anticipation. Specifically, proactive gaze was present during observation of both a biological accelerated-decelerated motion and a non-biological constant velocity motion (Experiment 1), in the presence of a barrier able to restrict differences between the two kinematics to the motion profile of individual markers prior to contact (Experiment 2), but only if an object was present at the end point of the movement trajectory (Experiment 3). Furthermore, proactive gaze was found independently of the presence of end effects temporally congruent with the instant in which the movement stopped (Experiments 4, and 5). We propose that the involvement of the observer's motor system is not restricted to when the agent moves with natural kinematics, and it is mandatory whenever the presence of an agent or a goal is evident, regardless of physical appearance, natural kinematics, and the possibility to identify the action behind the stimulus.
Affiliation(s)
- Laila Craighero: Department of Biomedical and Surgical Specialist Sciences, University of Ferrara, Italy
- Sonia Mele: Department of Biomedical and Surgical Specialist Sciences, University of Ferrara, Italy
10. McDonough KL, Costantini M, Hudson M, Ward E, Bach P. Affordance matching predictively shapes the perceptual representation of others' ongoing actions. J Exp Psychol Hum Percept Perform 2020; 46:847-859. PMID: 32378934. PMCID: PMC7391862. DOI: 10.1037/xhp0000745.
Abstract
Predictive processing accounts of social perception argue that action observation is a predictive process, in which inferences about others' goals are tested against the perceptual input, inducing a subtle perceptual confirmation bias that distorts observed action kinematics toward the inferred goals. Here we test whether such biases are induced even when goals are not explicitly given but have to be derived from the unfolding action kinematics. In 2 experiments, participants briefly saw an actor reach ambiguously toward a large object and a small object, with either a whole-hand power grip or an index-finger and thumb precision grip. During its course, the hand suddenly disappeared, and participants reported its last seen position on a touch-screen. As predicted, judgments were consistently biased toward apparent action targets, such that power grips were perceived closer to large objects and precision grips closer to small objects, even if the reach kinematics were identical. Strikingly, these biases were independent of participants' explicit goal judgments. They were of equal size when action goals had to be explicitly derived in each trial (Experiment 1) or not (Experiment 2) and, across trials and across participants, explicit judgments and perceptual biases were uncorrelated. This provides evidence, for the first time, that people make online adjustments of observed actions based on the match between hand grip and object goals, distorting their perceptual representation toward implied goals. These distortions may not reflect high-level goal assumptions, but emerge from relatively low-level processing of kinematic features within the perceptual system.
11. Zillekens IC, Schliephake LM, Brandi ML, Schilbach L. A look at actions: direct gaze modulates functional connectivity of the right TPJ with an action control network. Soc Cogn Affect Neurosci 2020; 14:977-986. PMID: 31593216. PMCID: PMC6917026. DOI: 10.1093/scan/nsz071.
Abstract
Social signals such as eye contact and motor actions are essential elements of social interactions. However, our knowledge about the interplay of gaze signals and the control of actions remains limited. In a group of 30 healthy participants, we investigated the effect of gaze (direct gaze vs averted) on behavioral and neural measures of action control as assessed by a spatial congruency task (spatially congruent vs incongruent button presses in response to gaze shifts). Behavioral results demonstrate that inter-individual differences in condition-specific incongruency costs were associated with autistic traits. While there was no interaction effect of gaze and action control on brain activation, in a context of incongruent responses to direct gaze shifts, a psychophysiological interaction analysis showed increased functional coupling between the right temporoparietal junction, a key region in gaze processing, and the inferior frontal gyri, which have been related to both social cognition and motor inhibition. Conversely, incongruency costs to averted gaze were reflected in increased connectivity with action control areas implicated in top-down attentional processes. Our findings indicate that direct gaze perception inter-individually modulates motor actions and enforces the functional integration of gaze-related social cognition and action control processes, thereby connecting functional elements of social interactions.
Affiliation(s)
- Imme Christina Zillekens: Independent Max Planck Research Group for Social Neuroscience, Max Planck Institute of Psychiatry, Munich, Germany; International Max Planck Research School for Translational Psychiatry (IMPRS-TP), Munich, Germany
- Marie-Luise Brandi: Independent Max Planck Research Group for Social Neuroscience, Max Planck Institute of Psychiatry, Munich, Germany
- Leonhard Schilbach: Independent Max Planck Research Group for Social Neuroscience, Max Planck Institute of Psychiatry, Munich, Germany; International Max Planck Research School for Translational Psychiatry (IMPRS-TP), Munich, Germany; Department of Psychiatry, Ludwig-Maximilians-Universität, Munich, Germany; Outpatient and Day Clinic for Disorders of Social Interaction, Max Planck Institute of Psychiatry, Munich, Germany
12. Elsner B, Adam M. Infants’ Goal Prediction for Simple Action Events: The Role of Experience and Agency Cues. Top Cogn Sci 2020; 13:45-62. DOI: 10.1111/tops.12494.
13. Ward E, Ganis G, McDonough KL, Bach P. Perspective taking as virtual navigation? Perceptual simulation of what others see reflects their location in space but not their gaze. Cognition 2020; 199:104241. PMID: 32105910. DOI: 10.1016/j.cognition.2020.104241.
Abstract
Other people's (imagined) visual perspectives are represented perceptually in a similar way to our own, and can drive bottom-up processes in the same way as our own perceptual input (Ward, Ganis, & Bach, 2019). Here we test directly whether visual perspective taking is driven by where another person is looking, or whether these perceptual simulations represent their position in space more generally. Across two experiments, we asked participants to identify whether alphanumeric characters, presented at one of eight possible orientations away from upright, were presented normally or in their mirror-inverted form (e.g., "R" vs. "Я"). In some scenes, a person would appear sitting to the left or the right of the participant. We manipulated either between trials (Experiment 1) or between subjects (Experiment 2) the gaze direction of the inserted person, such that they either (1) looked towards the to-be-judged item, (2) averted their gaze away from the participant, or (3) gazed out towards the participant (Exp. 2 only). In the absence of another person, we replicated the well-established mental rotation effect, where recognition of items becomes slower the more they are oriented away from upright (e.g., Shepard and Metzler, 1971). Crucially, in both experiments and in all conditions, this response pattern changed when another person was inserted into the scene. People spontaneously took the perspective of the other person and made faster judgements about the presented items if the characters were oriented towards upright with respect to that person. The gaze direction of the other person did not influence these effects. We propose that visual perspective taking is therefore a general spatial-navigational ability, allowing us to calculate more easily how a scene would (in principle) look from another position in space, and that such calculations reflect the spatial location of another person, but not their gaze.
Affiliation(s)
- Eleanor Ward: School of Psychology, University of Plymouth, Drake Circus, Devon PL4 8AA, UK
- Giorgio Ganis: School of Psychology, University of Plymouth, Drake Circus, Devon PL4 8AA, UK
- Katrina L McDonough: School of Psychology, University of Plymouth, Drake Circus, Devon PL4 8AA, UK
- Patric Bach: School of Psychology, University of Plymouth, Drake Circus, Devon PL4 8AA, UK
14. What first drives visual attention during the recognition of object-directed actions? The role of kinematics and goal information. Atten Percept Psychophys 2020; 81:2400-2409. PMID: 31292941. DOI: 10.3758/s13414-019-01784-7.
Abstract
The recognition of others' object-directed actions is known to involve the decoding of both the visual kinematics of the action and the action goal. Yet whether action recognition is first guided by the processing of visual kinematics or by a prediction about the goal of the actor remains debated. To provide experimental evidence on this issue, the present study investigated whether visual attention is preferentially captured by visual kinematics or by action-goal information when processing others' actions. In a visual search task, participants were asked to find correct actions (e.g., drinking from a glass) among distractor actions. Distractor actions contained grip and/or goal violations and could therefore share the correct goal and/or the correct grip with the target. The time course of the fixation proportion on each distractor action was taken as an indicator of visual attention allocation. Results show that visual attention is first captured by the distractor action with a similar goal. The subsequent withdrawal of visual attention from this distractor suggests a later attentional capture by the distractor action with a similar grip. Overall, the results are in line with predictive approaches to action understanding, which assume that observers first make a prediction about the actor's goal before verifying this prediction against the visual kinematics of the action.
15
Visual attention and action: How cueing, direct mapping, and social interactions drive orienting. Psychon Bull Rev 2018; 25:1585-1605. [PMID: 28808932 DOI: 10.3758/s13423-017-1354-0] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Despite considerable interest in both action perception and social attention over the last two decades, there has been surprisingly little investigation of how the manual actions of other humans orient visual attention. The present review draws together studies that have measured the orienting of attention following observation of another's goal-directed action. Our review proposes that, in line with the literature on eye gaze, action is a particularly strong orienting cue for the visual system. However, we additionally suggest that action may orient visual attention via mechanisms that gaze direction does not (i.e., neural direct mapping and co-representation). Finally, we review the implications of these gaze-independent mechanisms for the study of attention to action. We suggest that our understanding of attention to action may benefit from its being studied in the context of joint action paradigms, where the role of higher-level action goals and social factors can be investigated.
16
The Role of Attention and Saccades on Parietofrontal Encoding of Contextual and Grasp-specific Affordances of Tools: An ERP Study. Neuroscience 2018; 394:243-266. [PMID: 30347278 DOI: 10.1016/j.neuroscience.2018.10.019] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2018] [Revised: 10/02/2018] [Accepted: 10/10/2018] [Indexed: 11/23/2022]
Abstract
The ability to recognize a tool's affordances (e.g., how a spoon should be appropriately grasped and used) is vital for daily life. Prior research has identified parietofrontal circuits, including mirror neurons, as critical to understanding affordances. However, parietofrontal action-encoding regions receive extensive visual input and are adjacent to parietofrontal attention-control networks. It is unclear how eye movements and attention modulate parietofrontal encoding of affordances. To address this issue, scenes depicting tools in different use-contexts and grasp-postures were presented to healthy subjects across two experiments, with stimulus durations of 100 ms or 500 ms. The 100-ms experiment automatically restricted saccades and required covert attention, while the 500-ms experiment allowed overt attention. The two experiments elicited similar behavioral decisions on tool-use correctness and isolated the influence of attention on parietofrontal activity. Parietofrontal ERPs (P600) distinguishing tool-use contexts (e.g., spoon-yogurt vs. spoon-ball) were similar in both experiments. Conversely, parietofrontal ERPs distinguishing tool-grasps were characterized by posterior-to-frontal N130-N200 ERPs in the 100-ms experiment and by saccade-perturbed N130-N200 ERPs, a frontal N400, and a parietal P500 in the 500-ms experiment. In particular, only overt gaze toward the hand-tool interaction engaged mirror neurons (frontal N400) when discerning grasps that manipulate but do not functionally use a tool (grasping the bowl rather than the stem of a spoon). The results detail the first human electrophysiological evidence of how attention selectively modulates multiple parietofrontal grasp-perception circuits, especially the mirror neuron system, while leaving parietofrontal encoding of tool-use contexts unaffected. These results are pertinent to neurophysiological models of affordances, which typically neglect the role of attention in action perception.
17
Wermelinger S, Gampe A, Daum MM. The dynamics of the interrelation of perception and action across the life span. PSYCHOLOGICAL RESEARCH 2018; 83:116-131. [PMID: 30083839 DOI: 10.1007/s00426-018-1058-8] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2018] [Accepted: 07/14/2018] [Indexed: 11/30/2022]
Abstract
Successful social interaction relies on the interaction partners' perception, anticipation and understanding of their respective actions. The perception of a particular action and the capability to produce this action share a common representational ground. So far, no study has explored the interrelation between action perception and production across the life span using the same tasks and the same measurement techniques. This study was designed to fill this gap. Participants between 3 and 80 years of age (N = 214) observed two multistep actions of different familiarity and then reproduced the corresponding actions. Using eye tracking, we measured participants' action perception via their prediction of action goals during observation. To capture subtler perceptual processes, we additionally analysed the dynamics and recurrent patterns within participants' gaze behaviour. Action production was assessed via the accuracy of the participants' reproduction of the observed actions. No age-related differences were found for the perception of the familiar action, where participants of all ages could rely on previous experience. In the unfamiliar action, where participants had less experience, action goals were predicted more frequently with increasing age. The recurrence in participants' gaze behaviour was related to both age and action production: gaze behaviour was more recurrent (i.e. less flexible) in very young and very old participants, and lower levels of recurrence (i.e. greater flexibility) were related to higher scores in action production across participants. Incorporating a life-span perspective, this study illustrates the dynamic nature of developmental differences in the associations of action production with action perception.
Affiliation(s)
- Stephanie Wermelinger
- Department of Psychology, University of Zurich, Binzmuehlestrasse 14, Box 21, 8050 Zurich, Switzerland
- Anja Gampe
- Department of Psychology, University of Zurich, Binzmuehlestrasse 14, Box 21, 8050 Zurich, Switzerland
- Moritz M Daum
- Department of Psychology, University of Zurich, Binzmuehlestrasse 14, Box 21, 8050 Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
18
Typical predictive eye movements during action observation without effector-specific motor simulation. Psychon Bull Rev 2018; 24:1152-1157. [PMID: 28004256 DOI: 10.3758/s13423-016-1219-y] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
When watching someone reaching to grasp an object, we typically gaze at the object before the agent's hand reaches it; that is, we make a "predictive eye movement" to the object. The received explanation is that predictive eye movements rely on a direct matching process, by which the observed action is mapped onto the motor representation of the same body movements in the observer's brain. In this article, we report evidence that calls for a reexamination of this account. We recorded the eye movements of an individual born without arms (D.C.) while he watched an actor reaching for one of two different-sized objects with a power grasp, a precision grasp, or a closed fist. D.C. showed typical predictive eye movements modulated by the actor's hand shape. This finding constitutes proof of concept that predictive eye movements during action observation can rely on visual and inferential processes, unaided by effector-specific motor simulation.
19
Recognition memory and featural similarity between concepts: The pupil's point of view. Biol Psychol 2018; 135:159-169. [PMID: 29665431 DOI: 10.1016/j.biopsycho.2018.04.004] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2017] [Revised: 02/18/2018] [Accepted: 04/10/2018] [Indexed: 11/20/2022]
Abstract
Differences in pupil dilation are observed for studied compared to new items in recognition memory. According to cognitive load theory, this effect reflects the greater cognitive demands of retrieving contextual information from the study phase. Pupil dilation can also occur when new items conceptually related to old ones are erroneously recognized as old, but the aspects of similarity that modulate false memory and the related pupil responses remain unclear. We investigated this issue by manipulating the degree of featural similarity between new (unstudied) and old (studied) concepts in an old/new recognition task. We found that new concepts with high similarity were mistakenly identified as old and elicited greater pupil dilation than those with low similarity. This suggests that pupil dilation reflects the strength of evidence on which recognition judgments are based and, importantly, greater locus coeruleus and prefrontal activity resulting from the higher degree of retrieval monitoring involved in recognizing these items.
20
Bach P, Schenke KC. Predictive social perception: Towards a unifying framework from action observation to person knowledge. SOCIAL AND PERSONALITY PSYCHOLOGY COMPASS 2017. [DOI: 10.1111/spc3.12312] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
21
Donnarumma F, Costantini M, Ambrosini E, Friston K, Pezzulo G. Action perception as hypothesis testing. Cortex 2017; 89:45-60. [PMID: 28226255 PMCID: PMC5383736 DOI: 10.1016/j.cortex.2017.01.016] [Citation(s) in RCA: 48] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2016] [Revised: 11/21/2016] [Accepted: 01/18/2017] [Indexed: 01/27/2023]
Abstract
We present a novel computational model that describes action perception as an active inferential process that combines motor prediction (the reuse of our own motor system to predict perceived movements) and hypothesis testing (the use of eye movements to disambiguate amongst hypotheses). The system uses a generative model of how (arm and hand) actions are performed to generate hypothesis-specific visual predictions, and directs saccades to the most informative places of the visual scene to test these predictions and the hypotheses underlying them. We test the model using eye movement data from a human action observation study. In both the human study and our model, saccades are proactive whenever context affords accurate action prediction, but uncertainty induces a more reactive gaze strategy in which the observed movements are tracked. Our model offers a novel perspective on action observation that highlights its active nature, based on prediction dynamics and hypothesis testing.
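The saccades-as-hypothesis-testing idea can be illustrated with a toy sketch. This is not the authors' model: the three action hypotheses, the four gaze locations, and every number in the likelihood table are invented for illustration. Saccades are directed to the location with the highest expected information gain, and beliefs are updated by Bayes' rule.

```python
import numpy as np

# likelihood[h, l] = P(a visual "match" is observed at location l | hypothesis h).
# Hypotheses and values are purely illustrative.
likelihood = np.array([
    [0.9, 0.1, 0.5, 0.5],   # hypothesis 0, e.g. "reach for the cup"
    [0.1, 0.9, 0.5, 0.5],   # hypothesis 1, e.g. "reach for the bottle"
    [0.5, 0.5, 0.9, 0.1],   # hypothesis 2, e.g. "withdraw the hand"
])

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def expected_info_gain(posterior, loc):
    """Expected entropy reduction over hypotheses after fixating `loc`."""
    gain = 0.0
    for like in (likelihood[:, loc], 1.0 - likelihood[:, loc]):  # match / no match
        p_obs = float(posterior @ like)
        if p_obs > 0:
            gain += p_obs * (entropy(posterior) - entropy(posterior * like / p_obs))
    return gain

true_action = 0
posterior = np.full(3, 1 / 3)          # start with uniform beliefs
for _ in range(5):
    # Saccade to the most informative location given current beliefs.
    loc = int(np.argmax([expected_info_gain(posterior, l) for l in range(4)]))
    match = likelihood[true_action, loc] > 0.5   # deterministic toy "world"
    like = likelihood[:, loc] if match else 1.0 - likelihood[:, loc]
    posterior = posterior * like                 # Bayesian belief update
    posterior /= posterior.sum()

print(int(posterior.argmax()))   # hypothesis 0 wins under this toy setup
```

With a confident posterior the model keeps "fixating" the location that discriminates the leading hypothesis from its rivals, mirroring the proactive-versus-reactive gaze distinction drawn in the abstract.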
Affiliation(s)
- Francesco Donnarumma
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Marcello Costantini
- Centre for Brain Science, Department of Psychology, University of Essex, Colchester, UK; Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University G. d'Annunzio, Chieti, Italy; Institute for Advanced Biomedical Technologies - ITAB, Foundation University G. d'Annunzio, Chieti, Italy
- Ettore Ambrosini
- Department of Neuroscience, University of Padua, Padua, Italy; Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University G. d'Annunzio, Chieti, Italy; Institute for Advanced Biomedical Technologies - ITAB, Foundation University G. d'Annunzio, Chieti, Italy
- Karl Friston
- The Wellcome Trust Centre for Neuroimaging, UCL, London, UK
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
22
Aldaqre I, Schuwerk T, Daum MM, Sodian B, Paulus M. Sensitivity to communicative and non-communicative gestures in adolescents and adults with autism spectrum disorder: saccadic and pupillary responses. Exp Brain Res 2016; 234:2515-27. [DOI: 10.1007/s00221-016-4656-y] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2015] [Accepted: 04/19/2016] [Indexed: 10/21/2022]
23
Ansuini C, Cavallo A, Koul A, D'Ausilio A, Taverna L, Becchio C. Grasping others' movements: Rapid discrimination of object size from observed hand movements. J Exp Psychol Hum Percept Perform 2016; 42:918-29. [PMID: 27078036 DOI: 10.1037/xhp0000169] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
During reach-to-grasp movements, the hand is gradually molded to conform to the size and shape of the object to be grasped. Yet the ability to glean information about object properties by observing grasping movements is poorly understood. In this study, we capitalized on the effect of object size to investigate the ability to discriminate the size of an invisible object from movement kinematics. The study consisted of two phases. In the first, action execution phase, to assess grip scaling, we recorded and analyzed reach-to-grasp movements performed toward differently sized objects. In the second, action observation phase, video clips of the corresponding movements were presented to participants in a two-alternative forced-choice task. To probe discrimination performance over time, videos were edited to provide selective vision of different periods from two viewpoints. Separate analyses were conducted to determine how the participants' ability to discriminate between stimulus alternatives (Type I sensitivity) and their metacognitive ability to discriminate between correct and incorrect responses (Type II sensitivity) varied over time and viewpoint. We found that as early as 80 ms after movement onset, participants were able to discriminate object size from the observation of grasping movements presented from the lateral viewpoint. For both viewpoints, information pickup closely matched the evolution of the hand's kinematics, reaching an almost perfect performance well before the fingers made contact with the object (60% of movement duration). These findings suggest that observers are able to decode object size from kinematic sources specified early in the movement.
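The Type I / Type II distinction used in this abstract can be made concrete with a minimal sketch. All counts and confidence ratings below are invented for illustration, not the study's data: Type I sensitivity is computed as d' from hit and false-alarm rates, and Type II sensitivity as a simple nonparametric area under the ROC comparing confidence on correct versus incorrect trials.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf   # inverse of the standard normal CDF

# Hypothetical counts from a two-alternative ("large" vs. "small") block.
hits, misses = 40, 10              # "large" objects judged "large" / "small"
false_alarms, corr_rej = 15, 35    # "small" objects judged "large" / "small"

hit_rate = hits / (hits + misses)                    # 0.8
fa_rate = false_alarms / (false_alarms + corr_rej)   # 0.3
d_prime = z(hit_rate) - z(fa_rate)                   # Type I sensitivity

# Type II sensitivity: does confidence separate correct from incorrect
# responses? One simple index is the area under the ROC built from all
# (correct, incorrect) confidence pairs. Ratings below are illustrative.
conf_correct = [4, 3, 4, 2, 3, 4]
conf_incorrect = [2, 1, 3, 2]
pairs = [(c, i) for c in conf_correct for i in conf_incorrect]
auc = sum(1.0 if c > i else 0.5 if c == i else 0.0 for c, i in pairs) / len(pairs)

print(round(d_prime, 3), round(auc, 3))   # prints: 1.366 0.875
```

An AUC of 0.5 would mean confidence carries no information about accuracy; values approaching 1 indicate good metacognitive discrimination.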
Affiliation(s)
- Caterina Ansuini
- Department of Robotics, Brain and Cognitive Sciences, Fondazione Istituto Italiano di Tecnologia
- Atesh Koul
- Department of Robotics, Brain and Cognitive Sciences, Fondazione Istituto Italiano di Tecnologia
- Alessandro D'Ausilio
- Department of Robotics, Brain and Cognitive Sciences, Fondazione Istituto Italiano di Tecnologia
- Laura Taverna
- Department of Robotics, Brain and Cognitive Sciences, Fondazione Istituto Italiano di Tecnologia
- Cristina Becchio
- Department of Robotics, Brain and Cognitive Sciences, Fondazione Istituto Italiano di Tecnologia
24
Schuwerk T, Paulus M. Preschoolers, adolescents, and adults visually anticipate an agent's efficient action; but only after having observed it frequently. Q J Exp Psychol (Hove) 2016; 69:800-16. [DOI: 10.1080/17470218.2015.1061028] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
The present study examined the contribution of efficiency reasoning and statistical learning to visual action anticipation in preschool children, adolescents, and adults. To this end, Experiment 1 assessed proactive eye movements of 5-year-old children, 15-year-old adolescents, and adults, who observed an agent stating the intent to reach a goal as quickly as possible. Subsequently, on four occasions, the agent could take either a short (hence efficient) or a long (hence inefficient) path to get to the goal. The results showed that in the first trial, participants in none of the age groups predicted above chance level that the agent would produce the efficient action. Instead, we observed an age-dependent increase in action predictions across the subsequent repeated presentations of the same action. Experiment 2 ruled out that participants' nonconsideration of the efficient path was due to a lack of understanding of the agent's action goal. Moreover, it demonstrated that 5-year-old children do predict that the agent will act efficiently when verbally reasoning about his future action. Overall, the study supports the view that rapid learning from frequency information guides visual action anticipation.
Affiliation(s)
- Tobias Schuwerk
- Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Department of Psychiatry and Psychotherapy, University of Regensburg, Regensburg, Germany
- Markus Paulus
- Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
25
Motor system contribution to action prediction: Temporal accuracy depends on motor experience. Cognition 2016; 148:71-8. [DOI: 10.1016/j.cognition.2015.12.007] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2014] [Revised: 11/17/2015] [Accepted: 12/12/2015] [Indexed: 12/19/2022]
26
Filippi CA, Woodward AL. Action Experience Changes Attention to Kinematic Cues. Front Psychol 2016; 7:19. [PMID: 26913012 PMCID: PMC4753290 DOI: 10.3389/fpsyg.2016.00019] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2015] [Accepted: 01/06/2016] [Indexed: 11/13/2022] Open
Abstract
The current study used remote corneal-reflection eye tracking to examine the relationship between motor experience and action anticipation in 13-month-old infants. To measure online anticipation of actions, infants watched videos where the actor's hand provided kinematic information (in its orientation) about the type of object that the actor was going to reach for. The actor's hand orientation either matched the orientation of a rod (congruent cue) or did not match the orientation of the rod (incongruent cue). To examine relations between motor experience and action anticipation, we used a 2 (reach first vs. observe first) × 2 (congruent kinematic cue vs. incongruent kinematic cue) between-subjects design. We show that 13-month-old infants in the observe-first condition spontaneously generate rapid online visual predictions to congruent hand-orientation cues and do not visually anticipate when presented with incongruent cues. We further demonstrate that the speed with which these infants generate predictions to congruent motor cues is correlated with their own ability to pre-shape their hands. Finally, we demonstrate that following reaching experience, infants generate rapid predictions to both congruent and incongruent hand-shape cues, suggesting that short-term experience changes attention to kinematics.
27
The motor way: Clinical implications of understanding and shaping actions with the motor system in autism and drug addiction. COGNITIVE AFFECTIVE & BEHAVIORAL NEUROSCIENCE 2015; 16:191-206. [DOI: 10.3758/s13415-015-0399-7] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
28
Natraj N, Pella Y, Borghi A, Wheaton L. The visual encoding of tool–object affordances. Neuroscience 2015; 310:512-27. [DOI: 10.1016/j.neuroscience.2015.09.060] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2015] [Revised: 09/19/2015] [Accepted: 09/22/2015] [Indexed: 10/23/2022]
29
Abstract
An important element in social interactions is predicting the goals of others, including the goals of others' manual actions. Over a decade ago, Flanagan and Johansson demonstrated that, when observing other people reaching for objects, the observer's gaze arrives at the goal before the action is completed. Moreover, those authors proposed that this behavior was mediated by an embodied process, which takes advantage of the observer's motor knowledge. Here, we scrutinize work that has followed that seminal article. We include studies on adults that have used combined eye tracking and transcranial magnetic stimulation technologies to test causal hypotheses about underlying brain circuits. We also include developmental studies on human infants. We conclude that, although several aspects of the embodied process of predictive eye movements remain to be clarified, current evidence strongly suggests that the motor system plays a causal role in guiding predictive gaze shifts that focus on another person's future goal. The early emergence of the predictive gaze in infant development underlines its importance for social cognition and interaction.
Affiliation(s)
- Terje Falck-Ytter
- Department of Psychology, Uppsala University; Department of Women's and Children's Health, Karolinska Institutet
30
Quesque F, Coello Y. Perceiving what you intend to do from what you do: evidence for embodiment in social interactions. SOCIOAFFECTIVE NEUROSCIENCE & PSYCHOLOGY 2015; 5:28602. [PMID: 26246478 PMCID: PMC4526771 DOI: 10.3402/snp.v5.28602] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/20/2015] [Accepted: 07/13/2015] [Indexed: 11/14/2022]
Abstract
Although action and perception are central components of our interactions with the external world, the most recent experimental investigations also support their involvement in the emotional, decision-making, and goal-ascription processes at work in social contexts. In this article, we review the existing literature supporting this view and highlighting a link between reach-to-grasp motor actions and social communicative processes. First, we discuss the most recent experimental findings showing how the social context subtly influences the execution of object-oriented motor actions. Then, we show that the kinematic characteristics of object-oriented motor actions are modulated by the actor's social intention. Finally, we demonstrate that naïve observers can implicitly take advantage of these kinematic effects for their own motor productions. Considered together, these data are compatible with the embodied cognition framework, which states that cognition, and in our case social cognition, is grounded in knowledge associated with past sensory and motor experiences.
Affiliation(s)
- Yann Coello
- UMR CNRS 9193 SCALab, University of Lille, Lille, France
31
Letesson C, Grade S, Edwards MG. Different but complementary roles of action and gaze in action observation priming: Insights from eye- and motion-tracking measures. Front Psychol 2015; 6:569. [PMID: 25999886 PMCID: PMC4419854 DOI: 10.3389/fpsyg.2015.00569] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2015] [Accepted: 04/20/2015] [Indexed: 11/18/2022] Open
Abstract
Action priming following action observation is thought to be caused by the observed action kinematics being represented in the same brain areas as those used for action execution. However, action priming can also be explained by shared goal representations, with compatibility between observation of the agent's gaze and the intended action of the observer. To assess the contribution of action kinematics and eye-gaze cues to the prediction of an agent's action goal and to action priming, participants observed actions in which the availability of both cues was manipulated. Action observation was followed by action execution, and the congruency between the target of the agent's and the observer's actions, as well as the congruency between the observed and executed action's spatial location, were manipulated. Eye movements were recorded during the observation phase, and action priming was assessed using motion analysis. The results showed that observing gaze information influenced the speed with which observers predictively attended to the target, and that observing action kinematic information influenced the accuracy of these predictions. Motion-analysis results showed that observed action cues alone primed both spatially incongruent and object-congruent actions, consistent with the idea that the priming effect was driven by similarity between goals and kinematics. Observation of action and eye-gaze cues together induced a priming effect that was complementarily sensitive to object and spatial congruency. While observation of the agent's action kinematics alone triggered an object-centered and kinematic-centered action representation, the complementary observation of eye gaze triggered a more fine-grained representation, reflecting a specification of action kinematics toward the selected goal. Even though the two cues contributed differently to action priming, their complementary integration led to a more refined pattern of action priming.
Affiliation(s)
- Clément Letesson
- Psy-NAPS Group, Institut de Recherches en Sciences Psychologiques, Université Catholique de Louvain, Louvain-la-Neuve, Belgium; Institute of Neuroscience, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
- Stéphane Grade
- Psy-NAPS Group, Institut de Recherches en Sciences Psychologiques, Université Catholique de Louvain, Louvain-la-Neuve, Belgium; Institute of Neuroscience, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
- Martin G Edwards
- Psy-NAPS Group, Institut de Recherches en Sciences Psychologiques, Université Catholique de Louvain, Louvain-la-Neuve, Belgium; Institute of Neuroscience, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
32
Ambrosini E, Pezzulo G, Costantini M. The eye in hand: predicting others' behavior by integrating multiple sources of information. J Neurophysiol 2015; 113:2271-9. [PMID: 25568158 DOI: 10.1152/jn.00464.2014] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2014] [Accepted: 01/07/2015] [Indexed: 11/22/2022] Open
Abstract
The ability to predict the outcome of other beings' actions confers significant adaptive advantages. Experiments have shown that human action observation can use multiple information sources, but it is currently unknown how they are integrated and how conflicts between them are resolved. To address this issue, we designed an action observation paradigm requiring the integration of multiple, potentially conflicting sources of evidence about the action target: the actor's gaze direction, hand preshape, and arm trajectory, and their availability and relative uncertainty over time. In two experiments, we analyzed participants' action prediction ability using eye-tracking and behavioral measures. The results show that the information provided by the actor's gaze affected participants' explicit predictions. However, the results also show that gaze information was disregarded as soon as information on the actor's hand preshape was available, and this latter information source had widespread effects on participants' prediction ability. Furthermore, as the action unfolded in time, participants relied increasingly on the arm movement source, showing sensitivity to its increasing informativeness. The results therefore suggest that the brain forms a robust estimate of the actor's motor intention by integrating multiple sources of information. However, when informative motor cues, such as a hand preshaped with a given grip, are available and might help in selecting action targets, people tend to capitalize on such motor cues, thus turning out to be more accurate and faster in inferring the object to be manipulated by the other's hand.
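The kind of cue integration described here can be sketched, purely illustratively (this is not the authors' analysis), as reliability-weighted evidence combination: each cue contributes a log-likelihood ratio for one target over the other, scaled by a weight standing in for that cue's current reliability. All numbers below are invented.

```python
import math

# Each cue: (log-likelihood ratio for "object A" over "object B", weight).
# The weight stands in for the cue's momentary reliability (inverse uncertainty),
# e.g. gaze is available early but weak, preshape is strong once visible.
cues = {
    "gaze":       (0.4, 0.5),    # weakly favors A, low reliability
    "preshape":   (1.2, 1.0),    # strongly favors A, high reliability
    "trajectory": (-0.3, 0.7),   # slightly favors B, reliability grows over time
}

# Weighted log-odds combine additively; a logistic squashes them to a probability.
log_odds = sum(llr * w for llr, w in cues.values())
p_a = 1 / (1 + math.exp(-log_odds))   # posterior probability that A is the target

print(round(p_a, 3))   # prints: 0.767
```

Letting the weights evolve over the course of the movement (gaze weight falling as preshape becomes visible, trajectory weight rising late) reproduces the qualitative pattern the abstract reports: early reliance on gaze, then a shift to preshape and arm trajectory.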
Affiliation(s)
- Ettore Ambrosini
- Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience, Imaging and Cognitive Sciences, University "G. d'Annunzio," Chieti, Italy; Institute for Advanced Biomedical Technologies, University "G. d'Annunzio," Chieti, Italy
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Marcello Costantini
- Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience, Imaging and Cognitive Sciences, University "G. d'Annunzio," Chieti, Italy; Institute for Advanced Biomedical Technologies, University "G. d'Annunzio," Chieti, Italy; Mind, Brain Imaging and Neuroethics, Institute of Mental Health Research, University of Ottawa, Ottawa, Ontario, Canada
33
Elsner C, Bakker M, Rohlfing K, Gredebäck G. Infants' online perception of give-and-take interactions. J Exp Child Psychol 2014; 126:280-94. [PMID: 24973626 PMCID: PMC4119258 DOI: 10.1016/j.jecp.2014.05.007] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2013] [Revised: 05/22/2014] [Accepted: 05/27/2014] [Indexed: 11/22/2022]
Abstract
This research investigated infants' online perception of give-me gestures during observation of a social interaction. In the first experiment, goal-directed eye movements of 12-month-olds were recorded as they observed a give-and-take interaction in which an object is passed from one individual to another. Infants' gaze shifts from the passing hand to the receiving hand were significantly faster when the receiving hand formed a give-me gesture relative to when it was presented as an inverted hand shape. Experiment 2 revealed that infants' goal-directed gaze shifts were not based on different affordances of the two receiving hands. Two additional control experiments further demonstrated that differences in infants' online gaze behavior were not mediated by an attentional preference for the give-me gesture. Together, our findings provide evidence that properties of social action goals influence infants' online gaze during action observation. The current studies demonstrate that infants have expectations about well-formed object transfer actions between social agents. We suggest that 12-month-olds are sensitive to social goals within the context of give-and-take interactions while observing from a third-party perspective.
Affiliation(s)
- Claudia Elsner
- Uppsala Child and Baby Lab, Department of Psychology, Uppsala University, 751 42 Uppsala, Sweden.
- Marta Bakker
- Uppsala Child and Baby Lab, Department of Psychology, Uppsala University, 751 42 Uppsala, Sweden
- Katharina Rohlfing
- Center of Excellence Cognitive Interaction Technology, Bielefeld University, 33615 Bielefeld, Germany
- Gustaf Gredebäck
- Uppsala Child and Baby Lab, Department of Psychology, Uppsala University, 751 42 Uppsala, Sweden
34
Möller C, Zimmer HD, Aschersleben G. Effects of short-term experience on anticipatory eye movements during action observation. Exp Brain Res 2014; 233:69-77. [PMID: 25209915 DOI: 10.1007/s00221-014-4091-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2014] [Accepted: 08/29/2014] [Indexed: 10/24/2022]
Abstract
Recent studies have shown that anticipatory eye movements occur during both action observation and action execution. These findings strongly support the direct matching hypothesis, which states that in observing others' actions, people take advantage of the same action knowledge that enables them to perform the same actions. Furthermore, a connection between action experience and the ability to anticipate action goals has been proposed. Concerning the role of experience, most studies concentrated on motor experts such as athletes and musicians, whereas only few studies investigated whether motor programs can be activated by short-term experience. Applying a pre-post design, we examined whether short-term experience affects anticipatory eye movements during observation. Participants (N = 150 university students) observed scenes showing an actor performing a block stacking task. Subsequently, participants performed either a block stacking task, puzzles, or a pursuit rotor task. Afterward, participants were again provided with the aforementioned block stacking task scenes. Results revealed that the block stacking task group directed their gaze significantly earlier toward the action goals of the block stacking task during posttest trials, compared with the puzzle and pursuit rotor task groups, which did not differ from each other. In accordance with the direct matching hypothesis, our study provides evidence that short-term experience with the block stacking task activates task-specific action knowledge.
Affiliation(s)
- Corina Möller
- Developmental Psychology Unit, Saarland University, Building A 1 3, 66123, Saarbrücken, Germany
35
Where there is a goal, there is a way: what, why and how the parieto-frontal mirror network can mediate imitative behaviours. Neurosci Biobehav Rev 2014; 47:177-93. [PMID: 25149267 DOI: 10.1016/j.neubiorev.2014.08.004] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2014] [Revised: 05/29/2014] [Accepted: 08/08/2014] [Indexed: 11/23/2022]
Abstract
The relationships between mirror neurons (MNs) and motor imitation, and its clinical implications in autism spectrum disorder (ASD) have been widely investigated; however, the literature remains—at least partially—controversial. In this review we support a multi-level action understanding model focusing on the mirror-based understanding. We review the functional role of the parieto-frontal MNs (PFMN) network claiming that PFMNs function cannot be limited to imitation nor can imitation be explained solely by the activity of PFMNs. The distinction between movement, motor act and motor action is useful to characterize deeply both act(ion) understanding and imitation of act(ion). A more abstract representation of act(ion) may be crucial for clarifying what, why and how an imitator is imitating. What counts in social interactions is achieving goals: it does not matter which effector or string of motor acts you eventually use for achieving (proximal and distal) goals. Similarly, what counts is the ability to recognize/imitate the style of act(ion) regardless of the way in which it is expressed. We address this crucial point referring to its potential implications in ASD.
36
What do infants understand of others’ action? A theoretical account of early social cognition. Psychol Res 2013; 78:609-22. [DOI: 10.1007/s00426-013-0519-3] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2013] [Accepted: 09/26/2013] [Indexed: 10/26/2022]
37
Falck-Ytter T, Bölte S, Gredebäck G. Eye tracking in early autism research. J Neurodev Disord 2013; 5:28. [PMID: 24069955 PMCID: PMC3849191 DOI: 10.1186/1866-1955-5-28] [Citation(s) in RCA: 128] [Impact Index Per Article: 11.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/31/2013] [Accepted: 09/13/2013] [Indexed: 12/21/2022] Open
Abstract
Eye tracking has the potential to characterize autism at a unique intermediate level, with links 'down' to underlying neurocognitive networks, as well as 'up' to everyday function and dysfunction. Because it is non-invasive and does not require advanced motor responses or language, eye tracking is particularly important for the study of young children and infants. In this article, we review eye tracking studies of young children with autism spectrum disorder (ASD) and children at risk for ASD. Reduced looking time at people and faces, as well as problems with disengagement of attention, appear to be among the earliest signs of ASD, emerging during the first year of life. In toddlers with ASD, altered looking patterns across facial parts such as the eyes and mouth have been found, together with limited orienting to biological motion. We provide a detailed discussion of these and other key findings and highlight methodological opportunities and challenges for eye tracking research of young children with ASD. We conclude that eye tracking can reveal important features of the complex picture of autism.
Affiliation(s)
- Terje Falck-Ytter
- Department of Women’s & Children’s Health, Center of Neurodevelopmental Disorders at Karolinska Institute (KIND), Pediatric Neuropsychiatry Unit, Child and Adolescent Psychiatry Research Center, Gävlegatan 22, Stockholm, SE-11330, Sweden
- Uppsala Child & Babylab, Department of Psychology, Uppsala University, Uppsala, Sweden
- Sven Bölte
- Department of Women’s & Children’s Health, Center of Neurodevelopmental Disorders at Karolinska Institute (KIND), Pediatric Neuropsychiatry Unit, Child and Adolescent Psychiatry Research Center, Gävlegatan 22, Stockholm, SE-11330, Sweden
- Division of Child and Adolescent Psychiatry, Stockholm County Council, Stockholm, Sweden
- Gustaf Gredebäck
- Uppsala Child & Babylab, Department of Psychology, Uppsala University, Uppsala, Sweden
38
Causer J, McCormick SA, Holmes PS. Congruency of gaze metrics in action, imagery and action observation. Front Hum Neurosci 2013; 7:604. [PMID: 24068996 PMCID: PMC3781353 DOI: 10.3389/fnhum.2013.00604] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2013] [Accepted: 09/04/2013] [Indexed: 11/25/2022] Open
Abstract
The aim of this paper is to provide a review of eye movements during action execution, action observation, and movement imagery. Furthermore, the paper highlights aspects of congruency in gaze metrics between these states. The implications of the imagery, observation, and action gaze congruency are discussed in terms of motor learning and rehabilitation. Future research directions are outlined in order to further the understanding of shared gaze metrics between overt and covert states. Suggestions are made for how researchers and practitioners can structure action observation and movement imagery interventions to maximize (re)learning.
Affiliation(s)
- Joe Causer
- Brain and Behaviour Laboratory, Liverpool John Moores University, Liverpool, UK
- Sheree A. McCormick
- Centre for Cognitive Motor Function, Institute for Performance Research, Manchester Metropolitan University, Crewe, UK
- Paul S. Holmes
- Centre for Cognitive Motor Function, Institute for Performance Research, Manchester Metropolitan University, Crewe, UK
39
Southgate V. Do infants provide evidence that the mirror system is involved in action understanding? Conscious Cogn 2013; 22:1114-21. [PMID: 23773550 PMCID: PMC3807794 DOI: 10.1016/j.concog.2013.04.008] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2012] [Revised: 04/17/2013] [Accepted: 04/18/2013] [Indexed: 11/19/2022]
Abstract
The mirror neuron theory of action understanding makes predictions concerning how the limited motor repertoire of young infants should impact on their ability to interpret others' actions. In line with this theory, an increasing body of research has identified a correlation between infants' abilities to perform an action, and their ability to interpret that action as goal-directed when performed by others. In this paper, I will argue that the infant data by no means unequivocally supports the mirror neuron theory of action understanding and that alternative interpretations of the data should be considered. Furthermore, some of this data can be better interpreted in terms of an alternative view, which holds that the role of the motor system in action perception is more likely to be one of enabling the observer to predict, after a goal has been identified, how that goal will be attained.
Affiliation(s)
- Victoria Southgate
- Centre for Brain and Cognitive Development, Birkbeck College, Malet Street, London WC1E 7HX, United Kingdom.
40
Ambrosini E, Reddy V, de Looper A, Costantini M, Lopez B, Sinigaglia C. Looking ahead: anticipatory gaze and motor ability in infancy. PLoS One 2013; 8:e67916. [PMID: 23861832 PMCID: PMC3701628 DOI: 10.1371/journal.pone.0067916] [Citation(s) in RCA: 74] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2013] [Accepted: 05/21/2013] [Indexed: 11/17/2022] Open
Abstract
The present study asks when infants are able to selectively anticipate the goals of observed actions, and how this ability relates to infants' own abilities to produce those specific actions. Using eye-tracking technology to measure on-line anticipation, 6-, 8- and 10-month-old infants and a control group of adults were tested while observing an adult reach with a whole hand grasp, a precision grasp or a closed fist towards one of two different sized objects. The same infants were also given a comparable action production task. All infants showed proactive gaze to the whole hand grasps, with increased degrees of proactivity in the older groups. Gaze proactivity to the precision grasps, however, was present from 8 months of age. Moreover, the infants' ability in performing precision grasping strongly predicted their ability in using the actor's hand shape cues to differentially anticipate the goal of the observed action, even when age was partialled out. The results are discussed in terms of the specificity of action anticipation, and the fine-grained relationship between action production and action perception.
Affiliation(s)
- Ettore Ambrosini
- Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University G. d'Annunzio, Chieti, Italy.
41
Southgate V, Begus K. Motor activation during the prediction of nonexecutable actions in infants. Psychol Sci 2013; 24:828-35. [PMID: 23678509 PMCID: PMC3938142 DOI: 10.1177/0956797612459766] [Citation(s) in RCA: 41] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2012] [Accepted: 07/24/2012] [Indexed: 11/23/2022] Open
Abstract
Although it is undeniable that the motor system is recruited when people observe others' actions, the inferences that the brain generates from motor activation and the mechanisms involved in the motor system's recruitment are still unknown. Here, we challenged the popular hypothesis that motor involvement in action observation enables the observer to identify and predict an agent's goal by matching observed actions with existing and corresponding motor representations. Using a novel neural indication of action prediction--sensorimotor-cortex activation measured by electroencephalography--we demonstrated that 9-month-old infants recruit their motor system whenever a context suggests an impending action, but that this recruitment is not dependent on being able to match the observed action with a corresponding motor representation. Our data are thus inconsistent with the view that action prediction depends on motor correspondence; instead, they support an alternative view in which motor activation is the result of, rather than the cause of, goal identification.
Affiliation(s)
- Victoria Southgate
- Centre for Brain and Cognitive Development, University of London, England.
42
Montefinese M, Ambrosini E, Fairfield B, Mammarella N. The "subjective" pupil old/new effect: is the truth plain to see? Int J Psychophysiol 2013; 89:48-56. [PMID: 23665094 DOI: 10.1016/j.ijpsycho.2013.05.001] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2013] [Revised: 04/29/2013] [Accepted: 05/02/2013] [Indexed: 11/24/2022]
Abstract
Human memory is an imperfect process, prone to distortion and errors that range from minor disturbances to major errors that can have serious consequences on everyday life. In this study, we investigated false remembering of manipulatory verbs using an explicit recognition task and pupillometry. Our results replicated the "classical" pupil old/new effect as well as data in the false remembering literature showing that items must be recognized as old in order for pupil size to increase (e.g., the "subjective" pupil old/new effect), even though these items do not necessarily have to be truly old. These findings support the strength-of-memory trace account, which holds that pupil dilation is related to experience rather than to the accuracy of recognition. Moreover, behavioral results showed higher rates of true and false recognitions for manipulatory verbs and a consequent larger pupil diameter, supporting the embodied view of language.
43
Costantini M, Ambrosini E, Cardellicchio P, Sinigaglia C. How your hand drives my eyes. Soc Cogn Affect Neurosci 2013; 9:705-11. [PMID: 23559593 DOI: 10.1093/scan/nst037] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
When viewing object-related hand actions people make proactive eye movements of the same kind as those made when performing such actions. Why is this so? It has been suggested that proactive gaze when viewing a given hand action depends on the recruitment of motor areas such as the ventral premotor (PMv) cortex that would be involved in the execution of that action. However, direct evidence for a distinctive role of the PMv cortex in driving gaze behavior is still lacking. We recorded eye movements while viewing hand actions before and immediately after delivering repetitive transcranial magnetic stimulation (rTMS) over the left PMv and the posterior part of the left superior temporal sulcus, which is known to be involved in high-order visual action processing. Our results showed that rTMS-induced effects were selective with respect to the viewed actions following the virtual lesion of the left PMv only. This, for the first time, provides direct evidence that the PMv cortex might selectively contribute to driving the viewer's gaze to the action's target. When people view another's action, their eyes may be driven by motor processes similar to those they would need to perform the action themselves.
Affiliation(s)
- Marcello Costantini
- Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University G. d'Annunzio, 66100, Chieti, Italy.
44
The motor cortex is causally related to predictive eye movements during action observation. Neuropsychologia 2012; 51:488-92. [PMID: 23267825 DOI: 10.1016/j.neuropsychologia.2012.12.007] [Citation(s) in RCA: 61] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2012] [Revised: 11/27/2012] [Accepted: 12/14/2012] [Indexed: 11/23/2022]
Abstract
We examined the hypothesis that predictive gaze during observation of other people's actions depends on the activation of corresponding action plans in the observer. Using transcranial magnetic stimulation and eye-tracking technology we found that stimulation of the motor hand area, but not of the leg area, slowed gaze predictive behavior (compared to no TMS). This result shows that predictive eye movements to others' action goals depend on a somatotopical recruitment of the observer's motor system. The study provides direct support for the view that a direct matching process implemented in the mirror-neuron system plays a functional role for real-time goal prediction.
45
Gowen E. Imitation in autism: why action kinematics matter. Front Integr Neurosci 2012; 6:117. [PMID: 23248591 PMCID: PMC3521151 DOI: 10.3389/fnint.2012.00117] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2012] [Accepted: 11/28/2012] [Indexed: 11/13/2022] Open
Affiliation(s)
- Emma Gowen
- Faculty of Life Sciences, The University of Manchester, Manchester, UK
46
Henrichs I, Elsner C, Elsner B, Gredebäck G. Goal salience affects infants' goal-directed gaze shifts. Front Psychol 2012; 3:391. [PMID: 23087658 PMCID: PMC3466991 DOI: 10.3389/fpsyg.2012.00391] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2012] [Accepted: 09/19/2012] [Indexed: 11/13/2022] Open
Abstract
Around their first year of life, infants are able to anticipate the goal of others' ongoing actions. For instance, 12-month-olds anticipate the goal of everyday feeding actions and manual actions such as reaching and grasping. However, little is known about whether the salience of the goal influences infants' online assessment of others' actions. The aim of the current eye-tracking study was to elucidate infants' ability to anticipate reaching actions depending on the visual salience of the goal object. In Experiment 1, 12-month-old infants' goal-directed gaze shifts were recorded as they observed a hand reaching for and grasping either a large (high-salience condition) or a small (low-salience condition) goal object. Infants exhibited predictive gaze shifts significantly earlier when the observed hand reached for the large goal object compared to when it reached for the small goal object. In addition, findings revealed rapid learning over the course of trials in the high-salience condition and no learning in the low-salience condition. Experiment 2 demonstrated that the results could not be simply attributed to the different grip aperture of the hand used when reaching for small and large objects. Together, our data indicate that by the end of their first year of life, infants rely on information about the goal salience to make inferences about the action goal.
Affiliation(s)
- Ivanina Henrichs
- Department of Psychology, University of Potsdam, Potsdam, Germany
47
Costantini M, Ambrosini E, Sinigaglia C. Does how I look at what you're doing depend on what I'm doing? Acta Psychol (Amst) 2012; 141:199-204. [PMID: 22968193 DOI: 10.1016/j.actpsy.2012.07.012] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2012] [Revised: 07/25/2012] [Accepted: 07/30/2012] [Indexed: 10/27/2022] Open
Abstract
Previous studies showed that people proactively gaze at the target of another's action by taking advantage of their own motor representation of that action. But just how selectively is one's own motor representation implicated in another's action processing? If people observe another's action while performing a compatible or an incompatible action themselves, will this impact on their gaze behaviour? We recorded proactive eye movements while participants observed an actor grasping small or large objects. The participants' right hand either freely rested on the table or held a large or a small object, respectively, with a suitable grip. Proactivity of gaze behaviour significantly decreased when participants observed the actor reaching her target with a grip that was incompatible with respect to that used by them to hold the object in their own hand. This indicates that effective observation of action may depend on what one is actually doing, with actions being observed best when the suitable motor representations may be readily recruited.
48
de Bruin L, van Elk M, Newen A. Reconceptualizing second-person interaction. Front Hum Neurosci 2012; 6:151. [PMID: 22679421 PMCID: PMC3368580 DOI: 10.3389/fnhum.2012.00151] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/29/2012] [Accepted: 05/14/2012] [Indexed: 11/13/2022] Open
Abstract
Over the last couple of decades, most neuroscientific research on social cognition has been dominated by a third-person paradigm in which participating subjects are not actively engaging with other agents but merely observe them. Recently this paradigm has been challenged by researchers who promote a second-person approach to social cognition, and emphasize the importance of dynamic, real-time interactions with others. The present article's contribution to this debate is twofold. First, we critically analyze the second-person challenge to social neuroscience, and assess the various ways in which the distinction between second- versus third-person modes of social cognition has been articulated. Second, we put forward an alternative conceptualization of this distinction-one that gives pride of place to the notion of reciprocity. We discuss the implications of our proposal for neuroscientific studies on social cognition.
Affiliation(s)
- Leon de Bruin
- Department of Philosophy II, Ruhr-University Bochum, Bochum, Germany
49
Falck-Ytter T. Predicting other people's action goals with low-level motor information. J Neurophysiol 2012; 107:2923-5. [DOI: 10.1152/jn.00783.2011] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
In support of the direct-matching hypothesis, Ambrosini et al. (2011) recently reported that goal-directed saccades during action observation were modulated by manipulations of basic motor information. This finding indicates that motor programs, activated by low-level visual descriptions of others' actions, are involved in predicting other people's action goals. Here, I put this result into a broader context, review alternative interpretations, and suggest strategies for future studies.
Affiliation(s)
- Terje Falck-Ytter
- Center of Neurodevelopmental Disorders at Karolinska Institutet, Astrid Lindgren Children's Hospital, Stockholm; and Department of Psychology, Uppsala University, Uppsala, Sweden
50
Costantini M, Ambrosini E, Sinigaglia C. Out of your hand's reach, out of my eyes' reach. Q J Exp Psychol (Hove) 2012; 65:848-55. [DOI: 10.1080/17470218.2012.679945] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Abstract
When witnessing another's action, people recruit the same motor resources that enable them to efficiently perform that action, thus gazing at its target well before the agent's hand. But just to what extent does this recruitment help people in grabbing another's action target? If the latter seems to be out of the agent's reach, will this impact on people's gaze behaviour? We recorded proactive eye movements while participants witnessed someone else trying to reach for and grasp objects located either within or outside his reach. Proactivity of gaze was impaired when the targets were just out of the agent's reach. This effect is likely to be due to an interpersonal bodily space representation that allows one to map another's reaching space, thus prompting proactive eye movements towards the target just in case the agent is in the position to act upon it.
Affiliation(s)
- Marcello Costantini
- Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University G. d'Annunzio, Chieti, Italy
- Institute for Advanced Biomedical Technologies–ITAB, Foundation University G. d'Annunzio, Chieti, Italy
- Ettore Ambrosini
- Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University G. d'Annunzio, Chieti, Italy
- Institute for Advanced Biomedical Technologies–ITAB, Foundation University G. d'Annunzio, Chieti, Italy