1
Luabeya GN, Yan X, Freud E, Crawford JD. Influence of gaze, vision, and memory on hand kinematics in a placement task. J Neurophysiol 2024; 132:147-161. PMID: 38836297. DOI: 10.1152/jn.00362.2023.
Abstract
People usually reach for objects to place them in some position and orientation, but the placement component of this sequence is often ignored. For example, reaches are influenced by gaze position, visual feedback, and memory delays, but their influence on object placement is unclear. Here, we tested these factors in a task where participants placed and oriented a trapezoidal block against two-dimensional (2-D) visual templates displayed on a frontally located computer screen. In experiment 1, participants matched the block to three possible orientations: 0° (horizontal), +45°, and -45°, with gaze fixated 10° to the left/right. The hand and template either remained illuminated (closed-loop), or visual feedback was removed (open-loop). Here, hand location consistently overshot the template relative to gaze, especially in the open-loop task; likewise, orientation was influenced by gaze position (depending on template orientation and visual feedback). In experiment 2, a memory delay was added, and participants sometimes performed saccades (toward, away from, or across the template). In this task, the influence of gaze on orientation vanished, but location errors were influenced by both template orientation and final gaze position. Contrary to our expectations, the previous saccade metrics also impacted placement overshoot. Overall, hand orientation was influenced by template orientation in a nonlinear fashion. These results demonstrate interactions between gaze and orientation signals in the planning and execution of hand placement and suggest different neural mechanisms for closed-loop, open-loop, and memory-delay placement.

NEW & NOTEWORTHY Eye-hand coordination studies usually focus on object acquisition, but placement is equally important. We investigated how gaze position influences object placement toward a 2-D template with different levels of visual feedback. Like reach, placement overestimated goal location relative to gaze and was influenced by previous saccade metrics. Gaze also modulated hand orientation, depending on template orientation and level of visual feedback. Gaze influence was feedback-dependent, with location errors having no significant effect after a memory delay.
Affiliation(s)
- Gaelle N Luabeya
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Xiaogang Yan
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Erez Freud
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Department of Psychology, York University, Toronto, Ontario, Canada
- J Douglas Crawford
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Department of Psychology, York University, Toronto, Ontario, Canada
- Department of Kinesiology & Health Sciences, York University, Toronto, Ontario, Canada
- Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario, Canada
2
Su X, Swallow KM. People can reliably detect action changes and goal changes during naturalistic perception. Mem Cognit 2024; 52:1093-1111. PMID: 38315292. DOI: 10.3758/s13421-024-01525-8.
Abstract
As a part of ongoing perception, the human cognitive system segments others' activities into discrete episodes (event segmentation). Although prior research has shown that this process is likely related to changes in an actor's actions and goals, it has not yet been determined whether untrained observers can reliably identify action and goal changes as naturalistic activities unfold, or whether the changes they identify are tied to visual features of the activity (e.g., the beginnings and ends of object interactions). This study addressed these questions by examining untrained participants' identification of action changes, goal changes, and event boundaries while they watched videos of everyday activities presented from both first-person and third-person perspectives. We found that untrained observers can identify goal changes and action changes consistently, and that these changes are not explained by visual change or by the onsets and offsets of contact with objects. Moreover, the action and goal changes identified by untrained observers were associated with event boundaries, even after accounting for objective visual features of the videos. These findings suggest that people can identify action and goal changes consistently and with high agreement, that they do so by using sensory information flexibly, and that the action and goal changes they identify may contribute to event segmentation.
Affiliation(s)
- Xing Su
- Department of Psychological and Brain Sciences, Washington University in Saint Louis, Saint Louis, MO, USA
- Khena M Swallow
- Department of Psychology and Cognitive Science Program, Cornell University, 211 Uris Hall, Ithaca, NY, 14853, USA.
3
Bianco V, Finisguerra A, Urgesi C. Contextual Priors Shape Action Understanding before and beyond the Unfolding of Movement Kinematics. Brain Sci 2024; 14:164. PMID: 38391738. PMCID: PMC10887018. DOI: 10.3390/brainsci14020164.
Abstract
Previous studies have shown that contextual information may aid in guessing the intention underlying others' actions in conditions of perceptual ambiguity. Here, we aimed to evaluate the temporal deployment of contextual influence on action prediction as kinematic information becomes increasingly available during the observation of ongoing actions. We used action videos depicting an actor grasping an object placed on a container to perform individual or interpersonal actions featuring different kinematic profiles. Crucially, the container could be of different colors. First, in a familiarization phase, the probability of co-occurrence between each action's kinematics and a color cue was implicitly manipulated to 80% and 20%, thus generating contextual priors. Then, in a testing phase, participants were asked to predict the action outcome when the same action videos were occluded at five different timeframes of the entire movement, ranging from when the actor was still to when the grasp of the object was fully accomplished. In this phase, all possible associations between actions and contextual cues were presented equally often. The results showed that, for all occlusion intervals, action prediction was facilitated more when action kinematics unfolded in high- than in low-probability contextual scenarios. Importantly, contextual priors shaped action prediction even in the latest occlusion intervals, where the kinematic cues clearly unveiled an action outcome that was previously associated with low-probability scenarios. These residual contextual effects were stronger in individuals with higher subclinical autistic traits. Our findings highlight the relative contribution of kinematic and contextual information to action understanding and provide evidence in favor of their continuous integration during action observation.
Affiliation(s)
- Valentina Bianco
- Department of Brain and Behavioural Sciences, University of Pavia, 27100 Pavia, Italy
- Laboratory of Cognitive Neuroscience, Department of Languages and Literatures, Communication, Education and Society, University of Udine, 33100 Udine, Italy
- Cosimo Urgesi
- Laboratory of Cognitive Neuroscience, Department of Languages and Literatures, Communication, Education and Society, University of Udine, 33100 Udine, Italy
- Scientific Institute, IRCCS E. Medea, Pasian di Prato, 33037 Udine, Italy
4
Lombardi G, Sciutti A, Rea F, Vannucci F, Di Cesare G. Humanoid facial expressions as a tool to study human behaviour. Sci Rep 2024; 14:133. PMID: 38167552. PMCID: PMC10762044. DOI: 10.1038/s41598-023-45825-6.
Abstract
Besides action vitality forms, facial expressions represent another fundamental social cue that enables observers to infer the affective state of others. In the present study, we proposed the iCub robot as an interactive and controllable agent to investigate whether and how different facial expressions, associated with different action vitality forms, could modulate the motor behaviour of participants. To this purpose, we carried out a kinematic experiment in which 18 healthy participants observed video clips of the iCub robot performing a rude or gentle request with a happy or angry facial expression. After this request, they were asked to grasp an object and pass it to the iCub robot. Results showed that the iCub's facial expressions significantly modulated participants' motor responses. In particular, the observation of a happy facial expression, associated with a rude action, decreased specific kinematic parameters such as velocity, acceleration, and maximum height of movement. In contrast, the observation of an angry facial expression, associated with a gentle action, increased the same kinematic parameters. Moreover, a behavioural study corroborated these findings, showing that the perception of the same action vitality form was modified when associated with a positive or negative facial expression.
Affiliation(s)
- G Lombardi
- Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), University of Genoa, Genova, Italy
- Cognitive Architecture for Collaborative Technologies Unit (CONTACT), Italian Institute of Technology, Genova, Italy
- A Sciutti
- Cognitive Architecture for Collaborative Technologies Unit (CONTACT), Italian Institute of Technology, Genova, Italy
- F Rea
- Robotics Brain and Cognitive Sciences Unit, Italian Institute of Technology, Genova, Italy
- F Vannucci
- Cognitive Architecture for Collaborative Technologies Unit (CONTACT), Italian Institute of Technology, Genova, Italy
- G Di Cesare
- Cognitive Architecture for Collaborative Technologies Unit (CONTACT), Italian Institute of Technology, Genova, Italy.
- Department of Medicine and Surgery, Neuroscience Unit, University of Parma, via Volturno 39/E, 43125, Parma, Italy.
5
The visual encoding of graspable unfamiliar objects. Psychol Res 2023; 87:452-461. PMID: 35322276. DOI: 10.1007/s00426-022-01673-z.
Abstract
We used eye-tracking to explore the visual encoding modalities of participants (N = 20) in a free-observation task in which three repetitions of ten unfamiliar graspable objects were administered. We then analysed the temporal allocation (t = 1500 ms) of visual-spatial attention to the objects' manipulation areas (i.e., the part aimed at grasping the object) and functional areas (i.e., the part aimed at recognizing the function and identity of the object). Within the first 750 ms, participants tended to shift their gaze to the functional areas while decreasing their attention to the manipulation areas. Participants then reversed this trend, decreasing their visual-spatial attention to the functional areas while fixating the manipulation areas relatively more. Crucially, the global amount of visual-spatial attention to the objects' functional areas significantly decreased as an effect of stimulus repetition while remaining stable for the manipulation areas, thus indicating stimulus-familiarity effects. These findings support the action-reappraisal theoretical approach, which considers object/tool processing as abilities emerging from the integration of semantic, technical/mechanical, and sensorimotor knowledge.
6
Isenstein EL, Waz T, LoPrete A, Hernandez Y, Knight EJ, Busza A, Tadin D. Rapid assessment of hand reaching using virtual reality and application in cerebellar stroke. PLoS One 2022; 17:e0275220. PMID: 36174027. PMCID: PMC9522266. DOI: 10.1371/journal.pone.0275220.
Abstract
The acquisition of sensory information about the world is a dynamic and interactive experience, yet the majority of sensory research focuses on perception without action and is conducted with participants who are passive observers with very limited control over their environment. This approach allows for highly controlled, repeatable experiments and has led to major advances in our understanding of basic sensory processing. Typical human perceptual experiences, however, are far more complex than conventional action-perception experiments and often involve bi-directional interactions between perception and action. Innovations in virtual reality (VR) technology offer an approach to close this notable disconnect between perceptual experiences and experiments. VR experiments can be conducted with a high level of empirical control while also allowing for movement and agency as well as controlled naturalistic environments. New VR technology also permits tracking of fine hand movements, allowing for seamless empirical integration of perception and action. Here, we used VR to assess how multisensory information and cognitive demands affect hand movements while reaching for virtual targets. First, we manipulated the visibility of the reaching hand to uncouple vision and proprioception in a task measuring accuracy while reaching toward a virtual target (n = 20, healthy young adults). The results, which as expected revealed multisensory facilitation, provided a rapid and a highly sensitive measure of isolated proprioceptive accuracy. In the second experiment, we presented the virtual target only briefly and showed that VR can be used as an efficient and robust measurement of spatial memory (n = 18, healthy young adults). Finally, to assess the feasibility of using VR to study perception and action in populations with physical disabilities, we showed that the results from the visual-proprioceptive task generalize to two patients with recent cerebellar stroke. 
Overall, we show that VR coupled with hand-tracking offers an efficient and adaptable way to study human perception and action.
Affiliation(s)
- E. L. Isenstein
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, United States of America
- Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, United States of America
- Center for Visual Science, University of Rochester, Rochester, NY, United States of America
- T. Waz
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, United States of America
- A. LoPrete
- Center for Visual Science, University of Rochester, Rochester, NY, United States of America
- Center for Neuroscience and Behavior, American University, Washington, DC, United States of America
- Bioengineering Graduate Group, University of Pennsylvania, Philadelphia, PA, United States of America
- Y. Hernandez
- Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, United States of America
- The City College of New York, CUNY, New York, NY, United States of America
- E. J. Knight
- Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, United States of America
- Division of Developmental and Behavioral Pediatrics, Department of Pediatrics, University of Rochester School of Medicine and Dentistry, Rochester, New York, United States of America
- A. Busza
- Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, United States of America
- Department of Neurology, University of Rochester Medical Center, Rochester, NY, United States of America
- D. Tadin
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, United States of America
- Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, United States of America
- Center for Visual Science, University of Rochester, Rochester, NY, United States of America
- Department of Ophthalmology, University of Rochester School of Medicine and Dentistry, Rochester, New York, United States of America
7
Mnif M, Chikh S, Jarraya M. Effect of Social Context on Cognitive and Motor Behavior: A Systematic Review. J Mot Behav 2022; 54:631-647. PMID: 35379082. DOI: 10.1080/00222895.2022.2060928.
Abstract
Human cognitive and motor behavior is influenced by social context. The aim of this systematic review is to investigate the impact of social context on human behavior. A systematic search of the literature was performed via PubMed/MEDLINE, Web of Science, Google Scholar, ScienceDirect, SpringerLink, and EMBASE, and 68 articles were selected. After applying all the inclusion and exclusion criteria, 16 articles were retained. The results show that the presence of other people and the social context influence motor behavior (i.e., movement duration, trajectory behavior, maximum speed) and cognitive behavior (reaction time). Some studies have shown an improvement in performance in the presence of other people compared to the individual situation; other studies, however, showed that the presence of other people led to a deterioration in performance. The improvement of behavior is attributed to the social phenomenon of facilitation, while the deterioration has been explained by conduct theory or by distraction-conflict theory. These social phenomena of facilitation or inhibition could be related to perception-action theory, which comes into play during interaction with others and, in turn, seems to be associated with the neural circuits of mirror neurons and the motor system.
Affiliation(s)
- Maha Mnif
- Education, Motricity, Sport and Health Research Laboratory, EMSS-LR19JS01, University of Sfax, Sfax, Tunisia
- High Institute of Sports and Physical Education, Sfax, Tunisia
- Soufien Chikh
- Education, Motricity, Sport and Health Research Laboratory, EMSS-LR19JS01, University of Sfax, Sfax, Tunisia
- High Institute of Sports and Physical Education, Sfax, Tunisia
- Mohamed Jarraya
- Education, Motricity, Sport and Health Research Laboratory, EMSS-LR19JS01, University of Sfax, Sfax, Tunisia
- High Institute of Sports and Physical Education, Sfax, Tunisia
8
Savaki HE, Kavroulakis E, Papadaki E, Maris TG, Simos PG. Action Observation Responses Are Influenced by Movement Kinematics and Target Identity. Cereb Cortex 2021; 32:490-503. PMID: 34259867. DOI: 10.1093/cercor/bhab225.
Abstract
To inform the debate about whether cortical areas related to action observation provide a pragmatic or a semantic representation of goal-directed actions, we performed two functional magnetic resonance imaging (fMRI) experiments in humans. The first experiment, involving observation of aimless arm movements, resulted in activation of most of the components known to support action execution and action observation. Given the absence of a target/goal in this experiment and the activation of parieto-premotor cortical areas, which have previously been associated with the direction, amplitude, and velocity of movement of biological effectors, our findings suggest that during action observation we may be monitoring movement kinematics. With the second, double-dissociation fMRI experiment, we revealed the components of the observation-related cortical network affected by 1) actions that have the same target/goal but different reaching and grasping kinematics and 2) actions that have very similar kinematics but different targets/goals. We found that certain areas related to action observation, including the mirror neuron ones, are informed about movement kinematics and/or target identity, hence providing a pragmatic rather than a semantic representation of goal-directed actions. Overall, our findings support a process-driven, simulation-like mechanism of action understanding, in agreement with the theory of motor cognition, and question motor theories of action concept processing.
Affiliation(s)
- Helen E Savaki
- Institute of Applied and Computational Mathematics, Foundation for Research and Technology Hellas, Iraklion, Crete 70013, Greece
- Faculty of Medicine, School of Health Sciences, University of Crete, Iraklion, Crete 70013, Greece
- Eleftherios Kavroulakis
- Faculty of Medicine, School of Health Sciences, University of Crete, Iraklion, Crete 70013, Greece
- Efrosini Papadaki
- Faculty of Medicine, School of Health Sciences, University of Crete, Iraklion, Crete 70013, Greece
- Computational Bio-Medicine Laboratory, Institute of Computer Science, Foundation for Research and Technology Hellas, Iraklion, Crete 70013, Greece
- Thomas G Maris
- Faculty of Medicine, School of Health Sciences, University of Crete, Iraklion, Crete 70013, Greece
- Computational Bio-Medicine Laboratory, Institute of Computer Science, Foundation for Research and Technology Hellas, Iraklion, Crete 70013, Greece
- Panagiotis G Simos
- Faculty of Medicine, School of Health Sciences, University of Crete, Iraklion, Crete 70013, Greece
- Computational Bio-Medicine Laboratory, Institute of Computer Science, Foundation for Research and Technology Hellas, Iraklion, Crete 70013, Greece
9
Rutkowska JM, Meyer M, Hunnius S. Adults Do Not Distinguish Action Intentions Based on Movement Kinematics Presented in Naturalistic Settings. Brain Sci 2021; 11:821. PMID: 34205675. PMCID: PMC8234011. DOI: 10.3390/brainsci11060821.
Abstract
Predicting others’ actions is an essential part of acting in the social world. Action kinematics have been proposed to be a cue about others’ intentions. It is still an open question as to whether adults can use kinematic information in naturalistic settings when presented as a part of a richer visual scene than previously examined. We investigated adults’ intention perceptions from kinematics using naturalistic stimuli in two experiments. In experiment 1, thirty participants watched grasp-to-drink and grasp-to-place movements and identified the movement intention (to drink or to place), whilst their mouth-opening muscle activity was measured with electromyography (EMG) to examine participants’ motor simulation of the observed actions. We found anecdotal evidence that participants could correctly identify the intentions from the action kinematics, although we found no evidence for increased activation of their mylohyoid muscle during the observation of grasp-to-drink compared to grasp-to-place actions. In pre-registered experiment 2, fifty participants completed the same task online. With the increased statistical power, we found strong evidence that participants were not able to discriminate intentions based on movement kinematics. Together, our findings suggest that the role of action kinematics in intention perception is more complex than previously assumed. Although previous research indicates that under certain circumstances observers can perceive and act upon intention-specific kinematic information, perceptual differences in everyday scenes or the observers’ ability to use kinematic information in more naturalistic scenes seems limited.
10
Federico G, Osiurak F, Reynaud E, Brandimonte MA. Semantic congruency effects of prime words on tool visual exploration. Brain Cogn 2021; 152:105758. PMID: 34102405. DOI: 10.1016/j.bandc.2021.105758.
Abstract
Most recent research on human tool use has highlighted how people might integrate multiple sources of information through different neurocognitive systems to exploit the environment for action. This mechanism of integration is known as "action reappraisal". In the present eye-tracking study, we further tested the action-reappraisal idea by devising a word-priming paradigm to investigate how semantically congruent (e.g., "nail") vs. semantically incongruent (e.g., "jacket") words preceding the vision of tools (e.g., a hammer) may affect participants' visual exploration of those tools. We found an implicit modulation of participants' temporal allocation of visuospatial attention as a function of object-word consistency. Indeed, participants tended to increase their fixations on tools' manipulation areas over time under semantically congruent conditions. Conversely, participants tended to concentrate their visual-spatial attention on tools' functional areas when inconsistent object-word pairs were presented. These results support and extend the information-integrated perspective of the action-reappraisal approach. Also, these findings provide further evidence about how higher-level semantic information may influence the visual exploration of tools.
Affiliation(s)
- François Osiurak
- Laboratoire d'Etude des Mécanismes Cognitifs, Université de Lyon, Lyon, France
- Institut Universitaire de France, Paris, France
- Emanuelle Reynaud
- Laboratoire d'Etude des Mécanismes Cognitifs, Université de Lyon, Lyon, France
- Maria A Brandimonte
- Laboratory of Experimental Psychology, Suor Orsola Benincasa University, Naples, Italy
11
Federico G, Osiurak F, Brandimonte MA. Hazardous tools: the emergence of reasoning in human tool use. Psychol Res 2021; 85:3108-3118. PMID: 33404904. DOI: 10.1007/s00426-020-01466-2.
Abstract
Humans are unique in the way they understand the causal relationships between the use of tools and achieving a goal. The idea at the core of the present research is that tool use can be considered as an instance of problem-solving situations supported by technical reasoning. In an eye-tracking study, we investigated the fixation patterns of participants (N = 32) looking at 3D images of thematically consistent (e.g., nail-steel hammer) and thematically inconsistent (e.g., scarf-steel hammer) object-tool pairs that could be either "hazardous" (accidentally electrified) or not. Results showed that under thematically consistent conditions, participants focused on the tool's manipulation area (e.g., the handle of a steel hammer). However, when electrified tools were present or when the visual scene was not action-prompting, regardless of the presence of electricity, the tools' functional/identity areas (e.g., the head of a steel hammer) were fixated longer than the tools' manipulation areas. These results support an integrated and reasoning-based approach to human tool use and document, for the first time, the crucial role of mechanical/semantic knowledge in tool visual exploration.
Affiliation(s)
- François Osiurak
- Laboratoire d'Etude des Mécanismes Cognitifs, Université de Lyon, Lyon, France
- Institut Universitaire de France, Paris, France
- Maria A Brandimonte
- Laboratory of Experimental Psychology, Suor Orsola Benincasa University, Naples, Italy
12
Osiurak F, Federico G, Brandimonte MA, Reynaud E, Lesourd M. On the Temporal Dynamics of Tool Use. Front Hum Neurosci 2020; 14:579378. PMID: 33364928. PMCID: PMC7750203. DOI: 10.3389/fnhum.2020.579378.
Affiliation(s)
- François Osiurak
- Laboratoire d'Etude des Mécanismes Cognitifs, Université de Lyon, Lyon, France
- Institut Universitaire de France, Paris, France
- Giovanni Federico
- Laboratory of Experimental Psychology, Suor Orsola Benincasa University, Naples, Italy
- Maria A. Brandimonte
- Laboratory of Experimental Psychology, Suor Orsola Benincasa University, Naples, Italy
- Emanuelle Reynaud
- Laboratoire d'Etude des Mécanismes Cognitifs, Université de Lyon, Lyon, France
- Mathieu Lesourd
- Laboratoire de Psychologie, Université de Bourgogne Franche-Comté, Besançon, France
13
Trujillo JP, Simanova I, Bekkering H, Özyürek A. The communicative advantage: how kinematic signaling supports semantic comprehension. Psychol Res 2020; 84:1897-1911. PMID: 31079227. PMCID: PMC7772160. DOI: 10.1007/s00426-019-01198-y.
Abstract
Humans are unique in their ability to communicate information through representational gestures which visually simulate an action (e.g., moving hands as if opening a jar). Previous research indicates that the intention to communicate modulates the kinematics (e.g., velocity, size) of such gestures. If and how this modulation influences addressees' comprehension of gestures has not been investigated. Here we ask whether communicative kinematic modulation enhances semantic comprehension (i.e., identification) of gestures. We additionally investigate whether any comprehension advantage is due to enhanced early identification or late identification. Participants (n = 20) watched videos of representational gestures produced in a more-communicative (n = 60) or less-communicative (n = 60) context and performed a forced-choice recognition task. We tested the isolated role of kinematics by removing visibility of the actor's face in Experiment I, and by reducing the stimuli to stick-light figures in Experiment II. Three video lengths were used to disentangle early identification from late identification. Accuracy and response time quantified main effects. Kinematic modulation was tested for correlations with task performance. We found higher gesture-identification performance for more- compared to less-communicative gestures. However, early identification was enhanced only within a full visual context, while late identification occurred even when viewing isolated kinematics. Additionally, temporally segmented acts with more post-stroke holds were associated with higher accuracy. Our results demonstrate that communicative signaling, interacting with other visual cues, generally supports gesture identification, while kinematic modulation specifically enhances late identification in the absence of other cues. These results provide insights into processes of mutual understanding, as well as into creating artificial communicative agents.
Affiliation(s)
- James P Trujillo
  - Donders Institute for Brain, Cognition and Behaviour, Radboud University, Montessorilaan 3, B.01.25, 6525GR, Nijmegen, The Netherlands.
  - Centre for Language Studies, Radboud University, Nijmegen, The Netherlands.
- Irina Simanova
  - Donders Institute for Brain, Cognition and Behaviour, Radboud University, Montessorilaan 3, B.01.25, 6525GR, Nijmegen, The Netherlands
- Harold Bekkering
  - Donders Institute for Brain, Cognition and Behaviour, Radboud University, Montessorilaan 3, B.01.25, 6525GR, Nijmegen, The Netherlands
- Asli Özyürek
  - Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
  - Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525XD, Nijmegen, The Netherlands
14
The combined effects of motor and social goals on the kinematics of object-directed motor action. Sci Rep 2020; 10:6369. [PMID: 32286415 PMCID: PMC7156435 DOI: 10.1038/s41598-020-63314-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2019] [Accepted: 03/25/2020] [Indexed: 11/08/2022] Open
Abstract
Voluntary actions towards manipulable objects are usually performed with a particular motor goal (i.e., a task-specific object-target-effector interaction) and in a particular social context (i.e., who would benefit from these actions), but the mutual influence of these two constraints has not yet been properly studied. For this purpose, we asked participants to grasp an object and place it on either a small or large target in relation to Fitts’ law (motor goal). This first action prepared them for a second grasp-to-place action which was performed under temporal constraints, either by the participants themselves or by a confederate (social goal). Kinematic analysis of the first preparatory grasp-to-place action showed that, while deceleration time was impacted by the motor goal, peak velocity was influenced by the social goal. Movement duration and trajectory height were modulated by both goals, the effect of the social goal being attenuated by the effect of the motor goal. Overall, these results suggest that both motor and social constraints influence the characteristics of object-oriented actions, with effects that combine in a hierarchical way.
15
De Marco D, Scalona E, Bazzini MC, Avanzini P, Fabbri-Destro M. Observer-Agent Kinematic Similarity Facilitates Action Intention Decoding. Sci Rep 2020; 10:2605. [PMID: 32054915 PMCID: PMC7018748 DOI: 10.1038/s41598-020-59176-z] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2019] [Accepted: 01/22/2020] [Indexed: 11/12/2022] Open
Abstract
It is well known that the kinematics of an action is modulated by the underlying motor intention. In turn, kinematics serves as a cue during action observation, providing hints about the intention of the observed action. However, an open question is whether decoding others' intentions on the basis of their kinematics depends solely on how much the kinematics varies across different actions, or whether it is also influenced by its similarity to the observer's motor repertoire. The execution of reach-to-grasp and place actions, differing in target size and context, was recorded in terms of upper-limb kinematics in 21 volunteers and in an actor. Volunteers later had to observe only the reach-to-grasp phase of the actor's actions and predict the underlying intention. The potential benefit of actor-participant kinematic similarity for recognition accuracy was evaluated. In execution, both target size and context modulated specific kinematic parameters. More importantly, although participants performed above chance in intention recognition, the similarity of motor patterns positively correlated with recognition accuracy. Overall, these data indicate that kinematic similarity exerts a facilitative role in intention recognition, providing further support to the view of action intention recognition as a visuo-motor process grounded in motor resonance.
Affiliation(s)
- Doriana De Marco
  - Consiglio Nazionale delle Ricerche (CNR), Istituto di Neuroscienze, sede di Parma, Italy.
- Emilia Scalona
  - Consiglio Nazionale delle Ricerche (CNR), Istituto di Neuroscienze, sede di Parma, Italy
- Maria Chiara Bazzini
  - Consiglio Nazionale delle Ricerche (CNR), Istituto di Neuroscienze, sede di Parma, Italy
- Pietro Avanzini
  - Consiglio Nazionale delle Ricerche (CNR), Istituto di Neuroscienze, sede di Parma, Italy
16
What first drives visual attention during the recognition of object-directed actions? The role of kinematics and goal information. Atten Percept Psychophys 2020; 81:2400-2409. [PMID: 31292941 DOI: 10.3758/s13414-019-01784-7] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
Abstract
The recognition of others' object-directed actions is known to involve the decoding of both the visual kinematics of the action and the action goal. Yet whether action recognition is first guided by the processing of visual kinematics or by a prediction about the goal of the actor remains debated. To provide experimental evidence on this issue, the present study investigated whether visual attention is preferentially captured by visual kinematics or by action-goal information when processing others' actions. In a visual search task, participants were asked to find correct actions (e.g., drinking from a glass) among distractor actions. Distractor actions contained grip and/or goal violations and could therefore share the correct goal and/or the correct grip with the target. The time course of fixation proportion on each distractor action was taken as an indicator of visual attention allocation. Results show that visual attention is first captured by the distractor action with a similar goal. The subsequent withdrawal of visual attention from this distractor suggests a later attentional capture by the distractor action with a similar grip. Overall, the results are in line with predictive approaches to action understanding, which assume that observers first make a prediction about the actor's goal before verifying this prediction using the visual kinematics of the action.
17
The left cerebral hemisphere may be dominant for the control of bimanual symmetric reach-to-grasp movements. Exp Brain Res 2019; 237:3297-3311. [PMID: 31664489 DOI: 10.1007/s00221-019-05672-2] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2018] [Accepted: 10/19/2019] [Indexed: 12/20/2022]
Abstract
Previous research has established that the left cerebral hemisphere is dominant for the control of continuous bimanual movements. The lateralisation of motor control for discrete bimanual movements, in contrast, is underexplored. The purpose of the current study was to investigate which (if either) hemisphere is dominant for discrete bimanual movements. Twenty-one participants made bimanual reach-to-grasp movements towards pieces of candy. Participants grasped the candy to either place it in their mouths (grasp-to-eat) or in a receptacle near their mouths (grasp-to-place). Research has shown smaller maximum grip apertures (MGAs) for unimanual grasp-to-eat movements than for unimanual grasp-to-place movements when controlled by the left hemisphere. In Experiment 1, participants made bimanual symmetric movements in which both hands made grasp-to-eat or grasp-to-place movements. We hypothesised that a left-hemisphere dominance for bimanual movements would cause smaller MGAs in both hands during bimanual grasp-to-eat movements than during bimanual grasp-to-place movements. The results revealed that MGAs were indeed smaller for bimanual grasp-to-eat movements than for grasp-to-place movements. This supports the hypothesis that the left hemisphere is dominant for the control of bimanual symmetric movements, in agreement with studies on continuous bimanual movements. In Experiment 2, participants made bimanual asymmetric movements in which one hand made a grasp-to-eat movement while the other hand made a grasp-to-place movement. The results failed to support the predictions of left-hemisphere dominance, right-hemisphere dominance, or contralateral control.
18
Amoruso L, Finisguerra A. Low or High-Level Motor Coding? The Role of Stimulus Complexity. Front Hum Neurosci 2019; 13:332. [PMID: 31680900 PMCID: PMC6798151 DOI: 10.3389/fnhum.2019.00332] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2019] [Accepted: 09/09/2019] [Indexed: 11/13/2022] Open
Abstract
Transcranial magnetic stimulation (TMS) studies have shown that observing an action induces activity in the onlooker's motor system. In light of the muscle specificity and time-locked mirroring nature of the effect, this motor resonance has been traditionally viewed as an inner automatic replica of the observed movement. Notably, studies highlighting this aspect have classically considered movement in isolation (i.e., using non-realistic stimuli such as snapshots of hands detached from background). However, a few recent studies accounting for the role of contextual cues, motivational states, and social factors have challenged this view by showing that motor resonance is not completely impervious to top-down modulations. The debate is ongoing. We reasoned that motor resonance reflects an inner replica of the observed movement only when its modulation is assessed during the observation of movements in isolation. Conversely, top-down modulations of motor resonance emerge when other high-level factors (i.e., contextual cues, past experience, and social and motivational states) are taken into account. Here, we review current TMS studies assessing this issue and discuss the results in terms of their potential to favor the inner-replica or the top-down-modulation hypothesis. In doing so, we seek to shed light on this ongoing debate and suggest specific avenues for future research, highlighting the need for a more ecological approach when studying the motor resonance phenomenon.
Affiliation(s)
- Lucia Amoruso
  - Basque Center on Cognition, Brain and Language, San Sebastian, Spain; IKERBASQUE, Basque Foundation for Science, Bilbao, Spain
19
Thompson EL, Bird G, Catmur C. Conceptualizing and testing action understanding. Neurosci Biobehav Rev 2019; 105:106-114. [DOI: 10.1016/j.neubiorev.2019.08.002] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2018] [Revised: 06/25/2019] [Accepted: 08/04/2019] [Indexed: 11/30/2022]
20
van Ommeren AL, Sawaryn B, Prange-Lasonder GB, Buurke JH, Rietman JS, Veltink PH. Detection of the Intention to Grasp During Reaching in Stroke Using Inertial Sensing. IEEE Trans Neural Syst Rehabil Eng 2019; 27:2128-2134. [DOI: 10.1109/tnsre.2019.2939202] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
21
Senna I, Cardinali L, Farnè A, Brozzoli C. Aim and Plausibility of Action Chains Remap Peripersonal Space. Front Psychol 2019; 10:1681. [PMID: 31379692 PMCID: PMC6652232 DOI: 10.3389/fpsyg.2019.01681] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2018] [Accepted: 07/03/2019] [Indexed: 11/22/2022] Open
Abstract
Successful interaction with objects in the peripersonal space requires that the information relative to current and upcoming positions of our body is continuously monitored and updated with respect to the location of target objects. Voluntary actions, for example, are known to induce an anticipatory remapping of the peri-hand space (PHS, i.e., the space near the acting hand) during the very early stages of the action chain: planning and initiating an object grasp increase the interference exerted by visual stimuli coming from the object on touches delivered to the grasping hand, thus allowing for hand-object position monitoring and guidance. Voluntarily grasping an object, though, is rarely performed in isolation. Grasping a candy, for example, is most typically followed by concatenated secondary action steps (bringing the candy to the mouth and swallowing it) that represent the agent’s ultimate intention (to eat the candy). However, whether and when complex action chains remap the PHS remains unknown, just as whether remapping is conditional to goal achievability (e.g., candy-mouth fit). Here we asked these questions by assessing changes in visuo-tactile interference on the acting hand while participants had to grasp an object serving as a support for an elongated candy, and bring it toward their mouth. Depending on its orientation, the candy could potentially enter the participants’ mouth (plausible goal), or not (implausible goal). We observed increased visuo-tactile interference at relatively late stages of the action chain, after the object had been grasped, and only when the action goal was plausible. These findings suggest that multisensory interactions during action execution depend upon the final aim and plausibility of complex goal-directed actions, and extend our knowledge about the role of peripersonal space in guiding goal-directed voluntary actions.
Affiliation(s)
- Irene Senna
  - Integrative Multisensory Perception Action and Cognition Team (ImpAct), Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, Lyon, France; Department of Applied Cognitive Psychology, Ulm University, Ulm, Germany
- Lucilla Cardinali
  - Cognition, Motion and Neuroscience Unit, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Alessandro Farnè
  - Integrative Multisensory Perception Action and Cognition Team (ImpAct), Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, Lyon, France; University of Lyon 1, Lyon, France; Hospices Civils de Lyon, Mouvement et Handicap & Neuro-Immersion, Lyon, France; Center for Mind/Brain Sciences, University of Trento, Trento, Italy
- Claudio Brozzoli
  - Integrative Multisensory Perception Action and Cognition Team (ImpAct), Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, Lyon, France; University of Lyon 1, Lyon, France; Hospices Civils de Lyon, Mouvement et Handicap & Neuro-Immersion, Lyon, France; Institutionen för Neurobiologi, Vårdvetenskap och Samhälle, Aging Research Center, Karolinska Institutet, Stockholm, Sweden
22
Rigato S, Banissy MJ, Romanska A, Thomas R, van Velzen J, Bremner AJ. Cortical signatures of vicarious tactile experience in four-month-old infants. Dev Cogn Neurosci 2019; 35:75-80. [PMID: 28942240 PMCID: PMC6968956 DOI: 10.1016/j.dcn.2017.09.003] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2017] [Revised: 06/30/2017] [Accepted: 09/11/2017] [Indexed: 11/29/2022] Open
Abstract
The human brain recruits similar brain regions when a state is experienced (e.g., touch, pain, actions) and when that state is passively observed in other individuals. In adults, seeing other people being touched activates similar brain areas as when we experience touch ourselves. Here we show that already by four months of age, cortical responses to tactile stimulation are modulated by visual information specifying another person being touched. We recorded somatosensory evoked potentials (SEPs) in 4-month-old infants while they were presented with brief vibrotactile stimuli to the hands. At the same time that the tactile stimuli were presented, the infants observed another person's hand being touched by a soft paintbrush, or approached by the paintbrush which then touched the surface next to their hand. A prominent positive peak in SEPs contralateral to the site of tactile stimulation, around 130 ms after tactile stimulus onset, was of significantly larger amplitude for the "Surface" trials than for the "Hand" trials. These findings indicate that, even at four months of age, somatosensory cortex is not only involved in the personal experience of touch but can also be vicariously recruited by seeing other people being touched.
Affiliation(s)
- Silvia Rigato
  - Centre for Brain Science, Department of Psychology, University of Essex, Colchester, CO4 3SQ, UK
- Michael J Banissy
  - Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, London, SE14 6NW, UK
- Aleksandra Romanska
  - Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, London, SE14 6NW, UK
- Rhiannon Thomas
  - Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, London, SE14 6NW, UK
- José van Velzen
  - Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, London, SE14 6NW, UK
- Andrew J Bremner
  - Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, London, SE14 6NW, UK.
23
Donnarumma F, Dindo H, Pezzulo G. Sensorimotor Communication for Humans and Robots: Improving Interactive Skills by Sending Coordination Signals. IEEE Trans Cogn Dev Syst 2018. [DOI: 10.1109/tcds.2017.2756107] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
24
Abstract
Estimation of intentions from the observation of other people's actions has been proposed to rely on the same motor chain organization supporting the execution of intentional actions. However, the nature of the mechanism by which a specific neuronal chain is selected among possible alternatives during action observation remains obscure. Our study shows that in the absence of discriminative contextual cues, subtle changes in the kinematics of the observed action inform mapping to the most probable chain. These results shed light on the importance of kinematics for the attribution of intentions to actions. The ability to understand intentions based on another's movements is crucial for human interaction. This ability has been ascribed to the so-called motor chaining mechanism: anytime a motor chain is activated (e.g., grasp-to-drink), the observer attributes to the agent the corresponding intention (i.e., to drink) from the first motor act (i.e., the grasp). However, the mechanisms by which a specific chain is selected in the observer remain poorly understood. In the current study, we investigate the possibility that in the absence of discriminative contextual cues, slight kinematic variations in the observed grasp inform mapping to the most probable chain. Chaining of motor acts predicts that, in a sequential grasping task (e.g., grasp-to-drink), electromyographic (EMG) components that are required for the final act [e.g., the mouth-opening mylohyoid (MH) muscle] show anticipatory activation. To test this prediction, we used MH EMG, transcranial magnetic stimulation (TMS; MH motor-evoked potentials), and predictive models of movement kinematics to measure the level and timing of MH activation during the execution (Experiment 1) and the observation (Experiment 2) of reach-to-grasp actions. We found that MH-related corticobulbar excitability during grasping observation varied as a function of the goal (to drink or to pour) and the kinematics of the observed grasp. These results show that subtle changes in movement kinematics drive the selection of the most probable motor chain, allowing the observer to link an observed act to the agent's intention.
25
Abstract
A common deflationary tendency has emerged recently in both philosophical accounts and comparative animal studies concerned with how subjects understand the actions of others. The suggestion emerging from both arenas is that the default mechanism for understanding action involves only a sensitivity to the observable, behavioural (non-mental) features of a situation. This kind of 'smart behaviour reading' thus suggests that, typically, predicting or explaining the behaviour of conspecifics does not require seeing the other through the lens of mental state attribution. This paper aims to explore and assess this deflationary move. In §1 I clarify what might be involved in a smart behaviour reading account by looking at some concrete examples. Then in §2 I critically assess the deflationary move, arguing that, at least in the human case, it would in fact be a mistake to assume that our default method of action understanding proceeds without appeal to mental state attribution. Finally, in §3 I consider briefly how the positive view proposed here relates to discussions about standard two-system models of cognition.
Affiliation(s)
- Emma Borg
  - Reading Centre for Cognition Research, Department of Philosophy, University of Reading, Reading, RG6 6AA, UK; Australian Research Council Centre of Excellence in Cognition and its Disorders, Macquarie University, Sydney, Australia
26
Eilbeigi E, Setarehdan SK. Detecting intention to execute the next movement while performing current movement from EEG using global optimal constrained ICA. Comput Biol Med 2018; 99:63-75. [DOI: 10.1016/j.compbiomed.2018.05.024] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2018] [Revised: 05/02/2018] [Accepted: 05/25/2018] [Indexed: 10/16/2022]
27
Trujillo JP, Simanova I, Bekkering H, Özyürek A. Communicative intent modulates production and comprehension of actions and gestures: A Kinect study. Cognition 2018; 180:38-51. [PMID: 29981967 DOI: 10.1016/j.cognition.2018.04.003] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2017] [Revised: 03/16/2018] [Accepted: 04/02/2018] [Indexed: 10/28/2022]
Abstract
Actions may be used to directly act on the world around us, or as a means of communication. Effective communication requires the addressee to recognize the act as being communicative. Humans are sensitive to ostensive communicative cues, such as direct eye gaze (Csibra & Gergely, 2009). However, there may be additional cues present in the action or gesture itself. Here we investigate features that characterize the initiation of a communicative interaction in both production and comprehension. We asked 40 participants to perform 31 pairs of object-directed actions and representational gestures in more- or less-communicative contexts. Data were collected using motion capture technology for kinematics and video recording for eye-gaze. With these data, we focused on two issues: first, whether and how actions and gestures are systematically modulated when performed in a communicative context; second, whether observers exploit such kinematic information to classify an act as communicative. Our study showed that during production the communicative context modulates the space-time dimensions of kinematics and elicits an increase in addressee-directed eye-gaze. Naïve participants detected communicative intent in actions and gestures preferentially using eye-gaze information, utilizing kinematic information only when eye-gaze was unavailable. Our study highlights the general communicative modulation of action and gesture kinematics during production, but also shows that addressees only exploit this modulation to recognize communicative intention in the absence of eye-gaze. We discuss these findings in terms of the distinctive but potentially overlapping functions of addressee-directed eye-gaze and kinematic modulation within the wider context of human communication and learning.
Affiliation(s)
- James P Trujillo
  - Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, The Netherlands; Centre for Language Studies, Radboud University Nijmegen, The Netherlands.
- Irina Simanova
  - Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, The Netherlands
- Harold Bekkering
  - Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, The Netherlands
- Asli Özyürek
  - Centre for Language Studies, Radboud University Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525XD Nijmegen, The Netherlands
28
Craighero L, Mele S. Equal kinematics and visual context but different purposes: Observer's moral rules modulate motor resonance. Cortex 2018; 104:1-11. [DOI: 10.1016/j.cortex.2018.03.032] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2017] [Revised: 02/19/2018] [Accepted: 03/30/2018] [Indexed: 10/17/2022]
29
Koul A, Cavallo A, Cauda F, Costa T, Diano M, Pontil M, Becchio C. Action Observation Areas Represent Intentions From Subtle Kinematic Features. Cereb Cortex 2018; 28:2647-2654. [PMID: 29722797 PMCID: PMC5998953 DOI: 10.1093/cercor/bhy098] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2017] [Revised: 03/15/2018] [Indexed: 12/05/2022] Open
Abstract
Mirror neurons have been proposed to underlie humans' ability to understand others' actions and intentions. Despite 2 decades of research, however, the exact computational and neuronal mechanisms implicated in this ability remain unclear. In the current study, we investigated whether, in the absence of contextual cues, regions considered to be part of the human mirror neuron system represent intention from movement kinematics. A total of 21 participants observed reach-to-grasp movements, performed with either the intention to drink or to pour, while undergoing functional magnetic resonance imaging. Multivoxel pattern analysis revealed successful decoding of intentions from distributed patterns of activity in a network of structures comprising the inferior parietal lobule, the superior parietal lobule, the inferior frontal gyrus, and the middle frontal gyrus. Consistent with the proposal that parietal regions play a key role in intention understanding, classifier weights were higher in the inferior parietal region. These results provide the first demonstration that putative mirror neuron regions represent subtle differences in movement kinematics to read the intention of an observed motor act.
Affiliation(s)
- Atesh Koul
  - Department of Psychology, University of Torino, Torino, Italy
  - C’MON, Cognition, Motion and Neuroscience Unit, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
- Andrea Cavallo
  - Department of Psychology, University of Torino, Torino, Italy
  - C’MON, Cognition, Motion and Neuroscience Unit, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
- Franco Cauda
  - Department of Psychology, University of Torino, Torino, Italy
  - GCS-fMRI, Koelliker Hospital and Department of Psychology, University of Torino, Torino, Italy
  - Focus Lab, Department of Psychology, University of Torino, Torino, Italy
- Tommaso Costa
  - Department of Psychology, University of Torino, Torino, Italy
  - GCS-fMRI, Koelliker Hospital and Department of Psychology, University of Torino, Torino, Italy
  - Focus Lab, Department of Psychology, University of Torino, Torino, Italy
- Matteo Diano
  - Department of Psychology, University of Torino, Torino, Italy
- Massimiliano Pontil
  - Computational Statistics and Machine Learning, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
  - Department of Computer Science, University College London, London, UK
- Cristina Becchio
  - Department of Psychology, University of Torino, Torino, Italy
  - C’MON, Cognition, Motion and Neuroscience Unit, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
30
Decroix J, Kalénine S. Timing of grip and goal activation during action perception: a priming study. Exp Brain Res 2018; 236:2411-2426. [DOI: 10.1007/s00221-018-5309-0] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2017] [Accepted: 06/07/2018] [Indexed: 01/23/2023]
31
Cole EJ, Slocombe KE, Barraclough NE. Abilities to Explicitly and Implicitly Infer Intentions from Actions in Adults with Autism Spectrum Disorder. J Autism Dev Disord 2018; 48:1712-1726. [PMID: 29214604 PMCID: PMC5889782 DOI: 10.1007/s10803-017-3425-5] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Previous research suggests that Autism Spectrum Disorder (ASD) might be associated with impairments on implicit but not explicit mentalizing tasks. However, such comparisons are made difficult by the heterogeneity of stimuli and the techniques used to measure mentalizing capabilities. We tested the abilities of 34 individuals (17 with ASD) to derive intentions from others' actions during both explicit and implicit tasks and tracked their eye-movements. Adults with ASD displayed explicit but not implicit mentalizing deficits. Adults with ASD displayed typical fixation patterns during both implicit and explicit tasks. These results illustrate an explicit mentalizing deficit in adults with ASD, which cannot be attributed to differences in fixation patterns.
Affiliation(s)
- Eleanor J Cole
  - The Department of Psychology, The University of York, Heslington, York, YO10 5DD, UK.
- Katie E Slocombe
  - The Department of Psychology, The University of York, Heslington, York, YO10 5DD, UK
- Nick E Barraclough
  - The Department of Psychology, The University of York, Heslington, York, YO10 5DD, UK
32
Beke C, Flindall JW, Gonzalez CLR. Kinematics of ventrally mediated grasp-to-eat actions: right-hand advantage is dependent on dorsal stream input. Exp Brain Res 2018; 236:1621-1630. [PMID: 29589079 DOI: 10.1007/s00221-018-5242-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2017] [Accepted: 03/21/2018] [Indexed: 11/24/2022]
Abstract
Studies have suggested a left-hemisphere specialization for visually guided grasp-to-eat actions by way of task-dependent kinematic asymmetries (i.e., smaller maximum grip apertures for right-handed grasp-to-eat movements than for right-handed grasp-to-place movements or left-handed movements of either type). It is unknown, however, whether this left-hemisphere/right-hand kinematic advantage relies on the dorsal "vision-for-action" visual stream. The present study investigates the kinematic differences between grasp-to-eat and grasp-to-place actions performed during closed-loop (i.e., dorsally mediated) and open-loop delay (i.e., ventrally mediated) conditions. Twenty-one right-handed adult participants were asked to reach to grasp small food items to (1) eat them, or (2) place them in a container below the mouth. Grasps were performed in both closed-loop and open-loop delay conditions, in separate sessions. We show that participants displayed the right-hand grasp-to-eat kinematic advantage in the closed-loop condition, but not in the open-loop delay condition. As no task-dependent kinematic differences were found in ventrally mediated grasps, we posit that the left-hemisphere/right-hand advantage is dependent on dorsal stream processing.
Affiliation(s)
- Clarissa Beke
- The Brain in Action Laboratory, Department of Kinesiology, University of Lethbridge, 4401 University Dr W, Lethbridge, AB, T1K 6T5, Canada
- Jason W Flindall
- The Brain in Action Laboratory, Department of Kinesiology, University of Lethbridge, 4401 University Dr W, Lethbridge, AB, T1K 6T5, Canada
- Department of Psychology, University of British Columbia, 2136 West Mall, Vancouver, BC, V6T 1Z4, Canada
- Claudia L R Gonzalez
- The Brain in Action Laboratory, Department of Kinesiology, University of Lethbridge, 4401 University Dr W, Lethbridge, AB, T1K 6T5, Canada

33
Seeing mental states: An experimental strategy for measuring the observability of other minds. Phys Life Rev 2018; 24:67-80. [DOI: 10.1016/j.plrev.2017.10.002]
34
Ruggiero M, Catmur C. Mirror neurons and intention understanding: Dissociating the contribution of object type and intention to mirror responses using electromyography. Psychophysiology 2018; 55:e13061. [PMID: 29349781 DOI: 10.1111/psyp.13061]
Abstract
Since their discovery in the monkey and human brain, mirror neurons have been claimed to play a key role in understanding others' intentions. For example, "action-constrained" mirror neurons in inferior parietal lobule fire when the monkey observes a grasping movement that is followed by an eating action, but not when it is followed by a placing action. It is claimed these responses enable the monkey to predict the intentions of the actor. These findings have been replicated in human observers by recording electromyography responses of the mouth-opening mylohyoid muscle during action observation. Mylohyoid muscle activity was greater during the observation of actions performed with the intention to eat than of actions performed with the intention to place, again suggesting an ability to predict the actor's intentions. However, in previous studies, intention was confounded with object type (food for eating actions, nonfood for placing actions). We therefore used electromyography to measure mylohyoid activity in participants observing eating and placing actions. Unlike previous studies, we used a design in which each object (food, nonfood) could be both eaten and placed, and thus participants could not predict the actor's intention at the onset of the action. Greater mylohyoid activity was found for the observation of actions performed on food objects, irrespective of intention, indicating that the object type, not the actor's intention, drives the mirror response. This result suggests that observers' motor responses during action observation reflect the presence of a particular object, rather than the actor's underlying intentions.
Affiliation(s)
- Maura Ruggiero
- School of Human and Social Science, Università degli Studi di Napoli Federico II, Naples, Italy
- Department of Psychology, University of Surrey, Guildford, United Kingdom
- Caroline Catmur
- Department of Psychology, University of Surrey, Guildford, United Kingdom
- Department of Psychology, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom

35
de Vries JC, van Ommeren AL, Prange-Lasonder GP, Rietman JS, Veltink PH. Detection of the intention to grasp during reach movements. J Rehabil Assist Technol Eng 2018; 5:2055668317752850. [PMID: 31191924 PMCID: PMC6453090 DOI: 10.1177/2055668317752850]
Abstract
INTRODUCTION Soft-robotic gloves have been developed to enhance grip and support stroke patients during daily life tasks. Studies have shown that users perform tasks faster without the glove than with it. We investigated whether grasp intention can be detected earlier than is possible with force sensors, in order to enhance the performance of the glove. METHODS Reach-to-grasp movements were distinguished from reach movements without the intention to grasp, using minimal inertial sensing and machine learning. Both single-user and multi-user support vector machine classifiers were investigated. Data were gathered in an experiment with healthy subjects who were asked to perform grasp and reach movements. RESULTS The classifiers achieved a mean accuracy of 98.2% for single-user and 91.4% for multi-user classification, both using only two sensors: one on the hand and one on the middle finger. Furthermore, using only the first 40% of the trial length, an accuracy of 85.3% was achieved, which would allow grasp to be predicted 1200 ms earlier during the reach movement. CONCLUSIONS Based on these promising results, further research will investigate the possibility of using this movement classification in stroke patients.
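The classification approach summarized in this abstract can be illustrated with a minimal NumPy sketch: a linear support vector machine, trained by sub-gradient descent on the hinge loss, separating grasp from reach trials. This is not the authors' pipeline; the four per-trial features and the data below are synthetic stand-ins for the inertial measures the study actually recorded.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Synthetic per-trial features standing in for inertial measures
# (e.g. peak angular velocities of hand and middle-finger sensors).
grasp = rng.normal([1.0, 2.0, 1.5, 2.5], 0.5, size=(n, 4))
reach = rng.normal([0.2, 0.8, 0.4, 1.0], 0.5, size=(n, 4))
X = np.vstack([grasp, reach])
y = np.array([1.0] * n + [-1.0] * n)  # +1 = reach-to-grasp, -1 = reach only

# Linear SVM trained by sub-gradient descent on the regularized hinge loss
w, b, lam, lr = np.zeros(4), 0.0, 0.01, 0.01
for _ in range(2000):
    viol = y * (X @ w + b) < 1  # samples inside or beyond the margin
    w -= lr * (lam * w - (y[viol, None] * X[viol]).sum(axis=0) / len(X))
    b -= lr * (-y[viol].sum() / len(X))

acc = (np.sign(X @ w + b) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

On well-separated synthetic clusters like these, the accuracy is high by construction; the study's reported 98.2%/91.4% figures come from real sensor data and cross-validation, which this sketch does not reproduce.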
Affiliation(s)
- JC de Vries
- Department of Biomedical Signals and Systems, University of Twente, Enschede, the Netherlands
- AL van Ommeren
- Roessingh Research and Development, Enschede, the Netherlands
- Department of Biomechanical Engineering, University of Twente, Enschede, the Netherlands
- GP Prange-Lasonder
- Roessingh Research and Development, Enschede, the Netherlands
- Department of Biomechanical Engineering, University of Twente, Enschede, the Netherlands
- JS Rietman
- Roessingh Research and Development, Enschede, the Netherlands
- Department of Biomechanical Engineering, University of Twente, Enschede, the Netherlands
- PH Veltink
- Department of Biomedical Signals and Systems, University of Twente, Enschede, the Netherlands

36
Reader AT. Optimal motor synergy extraction for novel actions and virtual environments. J Neurophysiol 2017; 118:652-654. [PMID: 28539395 DOI: 10.1152/jn.00165.2017]
Abstract
Dimensionality reduction techniques such as factor analysis can be used to identify the smallest number of components (motor synergies) that explain motion. Lambert-Shirzad and Van der Loos (J Neurophysiol 117: 290-302, 2017) compared dimensionality reduction techniques in bimanual hand movements, concluding that nonnegative matrix factorization was the optimal technique for extracting meaningful synergies. Their results provide a useful measure for examining how the motor system deals with novel motor tasks that allow the actor to engage with a virtual environment.
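The technique this commentary discusses, nonnegative matrix factorization, can be sketched in a few lines of NumPy using the classic Lee-Seung multiplicative updates. The data here are a synthetic rank-3 "muscle activity" matrix, purely to show how synergies (columns of W) and their activations (rows of H) are recovered; it is not the evaluated study's code.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic nonnegative "muscle activity": 8 channels x 500 samples,
# generated from 3 underlying synergies.
W_true = rng.random((8, 3))
H_true = rng.random((3, 500))
V = W_true @ H_true

def nmf(V, k, iters=1000, eps=1e-9):
    """Factor V ≈ W @ H (all entries nonnegative) by Lee-Seung multiplicative updates."""
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update synergies
    return W, H

W, H = nmf(V, k=3)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error with 3 synergies: {rel_err:.3f}")
```

In practice, the number of synergies k is chosen by fitting several values and inspecting the reconstruction error, which is essentially the model-selection question the evaluated paper addresses.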
Affiliation(s)
- Arran T Reader
- Centre for Integrative Neuroscience and Neurodynamics, School of Psychology and Clinical Language Sciences, University of Reading, Reading, United Kingdom

37
Abstract
When we reach to grasp something, we need to take into account both the properties of the object we are grasping and the intention we have in mind. Previous research has found these constraints to be visible in the reach-to-grasp kinematics, but there is no consensus on which kinematic parameters are the most sensitive. To examine this, a systematic literature search and meta-analyses were performed. The search identified studies assessing how changes in either an object property or a prior intention affect reach-to-grasp kinematics in healthy participants. Subsequently, meta-analyses were conducted using a restricted maximum likelihood random effect model. The meta-analyses showed that changes in both object properties and prior intentions affected reach-to-grasp kinematics. Based on these results, the authors argue for a tripartition of the reach-to-grasp movement in which the accelerating part of the reach is primarily associated with transporting the hand to the object (i.e., extrinsic object properties), the decelerating part of the reach is used as a preparation for object manipulation (i.e., prepare the grasp or the subsequent action), and the grasp is associated with manipulating the object's intrinsic properties, especially object size.
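A random-effects pooled estimate of the kind used in this meta-analysis can be illustrated numerically. The sketch below uses the simpler moment-based DerSimonian-Laird estimator of the between-study variance rather than the restricted maximum likelihood estimator the authors used, and the effect sizes and variances are invented for illustration only.

```python
import numpy as np

# Invented per-study standardized effect sizes and their sampling variances
effects = np.array([0.80, 0.10, 0.55, -0.05, 0.61, 0.35])
variances = np.array([0.02, 0.05, 0.03, 0.04, 0.06, 0.02])

# DerSimonian-Laird moment estimate of the between-study variance tau^2
w = 1.0 / variances                          # fixed-effect (inverse-variance) weights
fixed_mean = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - fixed_mean) ** 2)  # Cochran's heterogeneity statistic
k = len(effects)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)           # truncated at zero

# Random-effects pooled estimate and its standard error
w_re = 1.0 / (variances + tau2)
pooled = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled effect: {pooled:.3f} (95% CI half-width {1.96 * se:.3f}), tau^2 = {tau2:.3f}")
```

The random-effects weights shrink toward equality as tau² grows, so heterogeneous studies contribute more evenly than under a fixed-effect model.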
Affiliation(s)
- Ida Egmose
- Department of Psychology, University of Copenhagen, Denmark
- Simo Køppe
- Department of Psychology, University of Copenhagen, Denmark

38
Blanchard CCV, McGlashan HL, French B, Sperring RJ, Petrocochino B, Holmes NP. Online Control of Prehension Predicts Performance on a Standardized Motor Assessment Test in 8- to 12-Year-Old Children. Front Psychol 2017; 8:374. [PMID: 28360874 PMCID: PMC5352659 DOI: 10.3389/fpsyg.2017.00374]
Abstract
Goal-directed hand movements are guided by sensory information and may be adjusted 'online,' during the movement. If the target of a movement unexpectedly changes position, trajectory corrections can be initiated in as little as 100 ms in adults. This rapid visual online control is impaired in children with developmental coordination disorder (DCD), and potentially in other neurodevelopmental conditions. We investigated the visual control of hand movements in children in a 'center-out' double-step reaching and grasping task, and examined how parameters of this visuomotor control co-vary with performance on standardized motor tests often used with typically and atypically developing children. Two groups of children aged 8-12 years were asked to reach and grasp an illuminated central ball on a vertically oriented board. On a proportion of trials, and at movement onset, the illumination switched unpredictably to one of four other balls in a center-out configuration (left, right, up, or down). When the target moved, all but one of the children were able to correct their movements before reaching the initial target, at least on some trials, but the latencies to initiate these corrections were longer than those typically reported in the adult literature, ranging from 211 to 581 ms. These later corrections may be due to less developed motor skills in children, or to the increased cognitive and biomechanical complexity of switching movements in four directions. In the first group (n = 187), reaching and grasping parameters significantly predicted standardized movement scores on the MABC-2, most strongly for the aiming and catching component. In the second group (n = 85), these same parameters did not significantly predict scores on the DCDQ'07 parent questionnaire. Our reaching and grasping task provides a sensitive and continuous measure of movement skill that predicts scores on standardized movement tasks used to screen for DCD.
Affiliation(s)
- Hannah L McGlashan
- School of Psychology, University of Nottingham, University Park, Nottingham, UK
- Blandine French
- School of Psychology, University of Nottingham, University Park, Nottingham, UK
- Rachel J Sperring
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Bianca Petrocochino
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Nicholas P Holmes
- School of Psychology, University of Nottingham, University Park, Nottingham, UK

39
Donnarumma F, Dindo H, Pezzulo G. Sensorimotor Coarticulation in the Execution and Recognition of Intentional Actions. Front Psychol 2017; 8:237. [PMID: 28280475 PMCID: PMC5322223 DOI: 10.3389/fpsyg.2017.00237]
Abstract
Humans excel at recognizing (or inferring) another's distal intentions, and recent experiments suggest that this may be possible using only subtle kinematic cues elicited during early phases of movement. Still, the cognitive and computational mechanisms underlying the recognition of intentional (sequential) actions are incompletely known and it is unclear whether kinematic cues alone are sufficient for this task, or if it instead requires additional mechanisms (e.g., prior information) that may be more difficult to fully characterize in empirical studies. Here we present a computationally-guided analysis of the execution and recognition of intentional actions that is rooted in theories of motor control and the coarticulation of sequential actions. In our simulations, when a performer agent coarticulates two successive actions in an action sequence (e.g., "reach-to-grasp" a bottle and "grasp-to-pour"), he automatically produces kinematic cues that an observer agent can reliably use to recognize the performer's intention early on, during the execution of the first part of the sequence. This analysis lends computational-level support for the idea that kinematic cues may be sufficiently informative for early intention recognition. Furthermore, it suggests that the social benefits of coarticulation may be a byproduct of a fundamental imperative to optimize sequential actions. Finally, we discuss possible ways a performer agent may combine automatic (coarticulation) and strategic (signaling) ways to facilitate, or hinder, an observer's action recognition processes.
Affiliation(s)
- Francesco Donnarumma
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Haris Dindo
- Computer Science Engineering, University of Palermo, Palermo, Italy
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy

40
Finisguerra A, Amoruso L, Makris S, Urgesi C. Dissociated Representations of Deceptive Intentions and Kinematic Adaptations in the Observer's Motor System. Cereb Cortex 2016; 28:33-47. [DOI: 10.1093/cercor/bhw346]
Affiliation(s)
- Alessandra Finisguerra
- Dipartimento di Lingue e Letterature, Comunicazione, Formazione e Società, Università degli Studi di Udine, I-33100 Udine, Italy
- Lucia Amoruso
- Dipartimento di Lingue e Letterature, Comunicazione, Formazione e Società, Università degli Studi di Udine, I-33100 Udine, Italy
- Stergios Makris
- Department of Psychology, Edge Hill University, Ormskirk, Lancashire, L39 4QP, UK
- Cosimo Urgesi
- Dipartimento di Lingue e Letterature, Comunicazione, Formazione e Società, Università degli Studi di Udine, I-33100 Udine, Italy
- Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS) Eugenio Medea, Polo Friuli Venezia Giulia, I-33078 San Vito al Tagliamento, Pordenone, Italy

41
Cavallo A, Koul A, Ansuini C, Capozzi F, Becchio C. Decoding intentions from movement kinematics. Sci Rep 2016; 6:37036. [PMID: 27845434 PMCID: PMC5109236 DOI: 10.1038/srep37036]
Abstract
How do we understand the intentions of other people? There has been a longstanding controversy over whether it is possible to understand others' intentions by simply observing their movements. Here, we show that indeed movement kinematics can form the basis for intention detection. By combining kinematics and psychophysical methods with classification and regression tree (CART) modeling, we found that observers utilized a subset of discriminant kinematic features over the total kinematic pattern in order to detect intention from observation of simple motor acts. Intention discriminability covaried with movement kinematics on a trial-by-trial basis, and was directly related to the expression of discriminative features in the observed movements. These findings demonstrate a definable and measurable relationship between the specific features of observed movements and the ability to discriminate intention, providing quantitative evidence of the significance of movement kinematics for anticipating others' intentional actions.
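The core operation of the classification and regression tree (CART) modeling mentioned in this abstract is a recursive search for the feature/threshold pair that best separates the classes. A single such split can be sketched on invented kinematic features (synthetic data; the study fitted full trees on real motion-capture features, which this does not reproduce):

```python
import numpy as np

rng = np.random.default_rng(2)
# Invented per-trial kinematic features: [peak wrist height (mm), max grip aperture (mm)]
intent_a = rng.normal([120.0, 70.0], [15.0, 5.0], size=(150, 2))
intent_b = rng.normal([90.0, 55.0], [15.0, 5.0], size=(150, 2))
X = np.vstack([intent_a, intent_b])
y = np.array([1] * 150 + [0] * 150)

def gini(labels):
    """Gini impurity of a binary label array."""
    if len(labels) == 0:
        return 0.0
    p = labels.mean()
    return 2.0 * p * (1.0 - p)

def best_split(X, y):
    """Exhaustive CART-style search for the split minimizing weighted Gini impurity."""
    best_feat, best_thr, best_score = None, None, np.inf
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            left, right = y[X[:, f] <= t], y[X[:, f] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if score < best_score:
                best_feat, best_thr, best_score = f, t, score
    return best_feat, best_thr, best_score

feat, thr, _ = best_split(X, y)
pred = (X[:, feat] > thr).astype(int)
acc = max((pred == y).mean(), 1.0 - (pred == y).mean())
print(f"best single split: feature {feat} at {thr:.1f} -> accuracy {acc:.2f}")
```

A full CART model applies this search recursively to each resulting subset, which is how the study identified the small set of discriminant kinematic features.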
Affiliation(s)
- Andrea Cavallo
- Department of Psychology, University of Torino, Torino, Italy
- Atesh Koul
- C’MON, Cognition, Motion and Neuroscience Unit, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
- Caterina Ansuini
- C’MON, Cognition, Motion and Neuroscience Unit, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
- Cristina Becchio
- Department of Psychology, University of Torino, Torino, Italy
- C’MON, Cognition, Motion and Neuroscience Unit, Fondazione Istituto Italiano di Tecnologia, Genova, Italy

42
Doing It Your Way: How Individual Movement Styles Affect Action Prediction. PLoS One 2016; 11:e0165297. [PMID: 27780259 PMCID: PMC5079573 DOI: 10.1371/journal.pone.0165297]
Abstract
Individuals show significant variations in performing a motor act. Previous studies in the action observation literature have largely ignored this ubiquitous, if often unwanted, characteristic of motor performance, assuming movement patterns to be highly similar across repetitions and individuals. In the present study, we examined the possibility that individual variations in motor style directly influence the ability to understand and predict others' actions. To this end, we first recorded grasping movements performed with different intents and used a two-step cluster analysis to identify quantitatively 'clusters' of movements performed with similar movement styles (Experiment 1). Next, using videos of the same movements, we proceeded to examine the influence of these styles on the ability to judge intention from action observation (Experiments 2 and 3). We found that motor styles directly influenced observers' ability to 'read' others' intention, with some styles always being less 'readable' than others. These results provide experimental support for the significance of motor variability for action prediction, suggesting that the ability to predict what another person is likely to do next directly depends on her individual movement style.
43
Reader AT, Holmes NP. Examining ecological validity in social interaction: problems of visual fidelity, gaze, and social potential. Culture and Brain 2016; 4:134-146. [PMID: 27867831 PMCID: PMC5095160 DOI: 10.1007/s40167-016-0041-8]
Abstract
Social interaction is an essential part of the human experience, and much work has been done to study it. However, several common approaches to examining social interactions in psychological research may inadvertently either unnaturally constrain the observed behaviour by causing it to deviate from naturalistic performance, or introduce unwanted sources of variance. In particular, these sources are the differences between naturalistic and experimental behaviour that occur from changes in visual fidelity (quality of the observed stimuli), gaze (whether it is controlled for in the stimuli), and social potential (potential for the stimuli to provide actual interaction). We expand on these possible sources of extraneous variance and why they may be important. We review the ways in which experimenters have developed novel designs to remove these sources of extraneous variance. New experimental designs using a 'two-person' approach are argued to be one of the most effective ways to develop more ecologically valid measures of social interaction, and we suggest that future work on social interaction should use these designs wherever possible.
Affiliation(s)
- Arran T. Reader
- School of Psychology and Clinical Language Sciences, University of Reading, Earley Gate, Whiteknights Road, Reading, RG6 6AL, UK

44
Catmur C. Understanding intentions from actions: Direct perception, inference, and the roles of mirror and mentalizing systems. Conscious Cogn 2015; 36:426-33. [DOI: 10.1016/j.concog.2015.03.012]
45
Quesque F, Delevoye-Turrell Y, Coello Y. Facilitation effect of observed motor deviants in a cooperative motor task: Evidence for direct perception of social intention in action. Q J Exp Psychol (Hove) 2015; 69:1451-63. [PMID: 26288247 DOI: 10.1080/17470218.2015.1083596]
Abstract
Spatiotemporal parameters of voluntary motor action may help optimize human social interactions. Yet it is unknown whether individuals performing a cooperative task spontaneously perceive subtly informative social cues emerging through voluntary actions. In the present study, an auditory cue was provided through headphones to an actor and a partner who faced each other. Depending on the pitch of the auditory cue, either the actor or the partner were required to grasp and move a wooden dowel under time constraints from a central to a lateral position. Before this main action, the actor performed a preparatory action under no time constraint, consisting in placing the wooden dowel on the central location when receiving either a neutral ("prêt"-ready) or an informative auditory cue relative to who will be asked to perform the main action (the actor: "moi"-me, or the partner: "lui"-him). Although the task focused on the main action, analysis of motor performances revealed that actors performed the preparatory action with longer reaction times and higher trajectories when informed that the partner would be performing the main action. In this same condition, partners executed the main actions with shorter reaction times and lower velocities, despite having received no previous informative cues. These results demonstrate that the mere observation of socially driven motor actions spontaneously influences the low-level kinematics of voluntary motor actions performed by the observer during a cooperative motor task. These findings indicate that social intention can be anticipated from the mere observation of action patterns.
Affiliation(s)
- François Quesque
- Cognitive and Affective Sciences Laboratory-SCALab, UMR CNRS 9193, University of Lille, Villeneuve d'Ascq, France
- Yvonne Delevoye-Turrell
- Cognitive and Affective Sciences Laboratory-SCALab, UMR CNRS 9193, University of Lille, Villeneuve d'Ascq, France
- Yann Coello
- Cognitive and Affective Sciences Laboratory-SCALab, UMR CNRS 9193, University of Lille, Villeneuve d'Ascq, France

46
Lewkowicz D, Quesque F, Coello Y, Delevoye-Turrell YN. Individual differences in reading social intentions from motor deviants. Front Psychol 2015; 6:1175. [PMID: 26347673 PMCID: PMC4538241 DOI: 10.3389/fpsyg.2015.01175]
Abstract
As social animals, it is crucial to understand others’ intention. But is it possible to detect social intention in two actions that have the exact same motor goal? In the present study, we presented participants with video clips of an individual reaching for and grasping an object to either use it (personal trial) or to give his partner the opportunity to use it (social trial). In Experiment 1, the ability of naïve participants to classify correctly social trials through simple observation of short video clips was tested. In addition, detection levels were analyzed as a function of individual scores in psychological questionnaires of motor imagery, visual imagery, and social cognition. Results revealed that the between-participant heterogeneity in the ability to distinguish social from personal actions was predicted by the social skill abilities. A second experiment was then conducted to assess what predictive mechanism could contribute to the detection of social intention. Video clips were sliced and normalized to control for either the reaction times (RTs) or/and the movement times (MTs) of the grasping action. Tested in a second group of participants, results showed that the detection of social intention relies on the variation of both RT and MT that are implicitly perceived in the grasping action. The ability to use implicitly these motor deviants for action-outcome understanding would be the key to intuitive social interaction.
Affiliation(s)
- Daniel Lewkowicz
- SCALab, UMR CNRS 9193, Department of Psychology, Université de Lille, Villeneuve-d'Ascq, France
- Francois Quesque
- SCALab, UMR CNRS 9193, Department of Psychology, Université de Lille, Villeneuve-d'Ascq, France
- Yann Coello
- SCALab, UMR CNRS 9193, Department of Psychology, Université de Lille, Villeneuve-d'Ascq, France

47
Reader AT, Holmes NP. Video stimuli reduce object-directed imitation accuracy: a novel two-person motion-tracking approach. Front Psychol 2015; 6:644. [PMID: 26042073 PMCID: PMC4436526 DOI: 10.3389/fpsyg.2015.00644]
Abstract
Imitation is an important form of social behavior, and research has aimed to discover and explain the neural and kinematic aspects of imitation. However, much of this research has featured single participants imitating in response to pre-recorded video stimuli. This is in spite of findings that show reduced neural activation to video vs. real life movement stimuli, particularly in the motor cortex. We investigated the degree to which video stimuli may affect the imitation process using a novel motion tracking paradigm with high spatial and temporal resolution. We recorded 14 positions on the hands, arms, and heads of two individuals in an imitation experiment. One individual freely moved within given parameters (moving balls across a series of pegs) and a second participant imitated. This task was performed with either simple (one ball) or complex (three balls) movement difficulty, and either face-to-face or via a live video projection. After an exploratory analysis, three dependent variables were chosen for examination: 3D grip position, joint angles in the arm, and grip aperture. A cross-correlation and multivariate analysis revealed that object-directed imitation task accuracy (as represented by grip position) was reduced in video compared to face-to-face feedback, and in complex compared to simple difficulty. This was most prevalent in the left-right and forward-back motions, relevant to the imitator sitting face-to-face with the actor or with a live projected video of the same actor. The results suggest that for tasks which require object-directed imitation, video stimuli may not be an ecologically valid way to present task materials. However, no similar effects were found in the joint angle and grip aperture variables, suggesting that there are limits to the influence of video stimuli on imitation. The implications of these results are discussed with regards to previous findings, and with suggestions for future experimentation.
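The cross-correlation step in this analysis, estimating how closely (and with what delay) an imitator's trajectory follows an actor's, can be sketched with synthetic one-dimensional position traces. The signals and the 30-sample lag below are invented for illustration; this is not the study's data or exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)
n, true_lag = 1000, 30  # samples; imitator trails the actor by 30 samples

# Synthetic actor trajectory: smoothed noise (broadband, so the correlation peak is sharp)
actor = np.convolve(rng.normal(size=n), np.ones(5) / 5, mode="same")
imitator = np.roll(actor, true_lag) + rng.normal(0.0, 0.05, n)

# Normalized cross-correlation; the lag of the peak estimates the imitation delay
a = (actor - actor.mean()) / actor.std()
b = (imitator - imitator.mean()) / imitator.std()
xcorr = np.correlate(b, a, mode="full") / n
lags = np.arange(-n + 1, n)
est_lag = lags[np.argmax(xcorr)]
print(f"estimated imitation lag: {est_lag} samples")
```

A positive peak lag means the second signal trails the first; the peak correlation value itself serves as the accuracy measure, analogous to the grip-position accuracy compared across the video and face-to-face conditions.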
Affiliation(s)
- Arran T Reader
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK

48
Flindall JW, Gonzalez CL. Children’s bilateral advantage for grasp-to-eat actions becomes unimanual by age 10 years. J Exp Child Psychol 2015; 133:57-71. [DOI: 10.1016/j.jecp.2015.01.011]
49
Quesque F, Coello Y. For your eyes only: effect of confederate's eye level on reach-to-grasp action. Front Psychol 2014; 5:1407. [PMID: 25538657 PMCID: PMC4255501 DOI: 10.3389/fpsyg.2014.01407]
Abstract
Previous studies have shown that the spatio-temporal parameters of reach-to-grasp movement are influenced by the social context in which the motor action is performed. In particular, when interacting with a confederate, movements are slower, with longer initiation times and more ample trajectories, which has been interpreted as implicit communicative information emerging through voluntary movement to catch the partner’s attention and optimize cooperation (Quesque et al., 2013). Because gaze is a crucial component of social interactions, the present study evaluated the role of a confederate’s eye level on the social modulation of trajectory curvature. An actor and a partner facing each other took part in a cooperative task consisting, for one of them, of grasping and moving a wooden dowel under time constraints. Before this Main action, the actor performed a Preparatory action, which consisted of placing the wooden dowel on a central marking. The partner’s eye level was unnoticeably varied using an adjustable seat that matched or was higher than the actor’s seat. Our data confirmed the previous effects of social intention on motor responses. Furthermore, we observed an effect of the partner’s eye level on the Preparatory action, leading the actors to exaggerate unconsciously the trajectory curvature in relation to their partner’s eye level. No interaction was found between the actor’s social intention and their partner’s eye level. These results suggest that other bodies are implicitly taken into account when a reach-to-grasp movement is produced in a social context.
Collapse
Affiliation(s)
- François Quesque
- Psychology Department, Unité de Recherche en Sciences Cognitives et Affectives, Charles de Gaulle-Lille 3 University, University of Lille Nord de France, Villeneuve d'Ascq, France
- Yann Coello
- Psychology Department, Unité de Recherche en Sciences Cognitives et Affectives, Charles de Gaulle-Lille 3 University, University of Lille Nord de France, Villeneuve d'Ascq, France
Ansuini C, Cavallo A, Bertone C, Becchio C. The visible face of intention: why kinematics matters. Front Psychol 2014; 5:815. [PMID: 25104946 PMCID: PMC4109428 DOI: 10.3389/fpsyg.2014.00815] [Citation(s) in RCA: 56] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2014] [Accepted: 07/09/2014] [Indexed: 11/13/2022] Open
Abstract
A key component of social understanding is the ability to read intentions from movements. But how do we discern intentions in others' actions? What kind of intention information is actually available in the features of others' movements? Based on the assumption that intentions are hidden away in the other person's mind, standard theories of social cognition have mainly focused on the contribution of higher level processes. Here, we delineate an alternative approach to the problem of intention-from-movement understanding. We argue that intentions become "visible" in the surface flow of agents' motions. Consequently, the ability to understand others' intentions cannot be divorced from the capability to detect essential kinematics. This hypothesis has far reaching implications for how we know other minds and predict others' behavior.
Affiliation(s)
- Caterina Ansuini
- Department of Robotics, Brain and Cognitive Sciences, Italian Institute of Technology, Genova, Italy
- Andrea Cavallo
- Department of Psychology, Centre for Cognitive Science, University of Torino, Torino, Italy
- Cesare Bertone
- Department of Psychology, Centre for Cognitive Science, University of Torino, Torino, Italy
- Cristina Becchio
- Department of Robotics, Brain and Cognitive Sciences, Italian Institute of Technology, Genova, Italy; Department of Psychology, Centre for Cognitive Science, University of Torino, Torino, Italy