1. Sulfikar Ali A, Bhat M, Palaniswamy HP, Ramachandran S, Kumaran SD. Does Action Observation of the Whole Task Influence Mirror Neuron System and Upper Limb Muscle Activity Better Than Part Task in People With Stroke? Stroke Res Treat 2024;2024:9967369. [PMID: 39399483] [PMCID: PMC11470815] [DOI: 10.1155/2024/9967369]
Abstract
Background: Task-based action observation and imitation (AOI) is a promising intervention for enhancing upper limb (UL) motor function poststroke. However, whether the whole task or its component parts should be trained in AOI therapy needs further substantiation. Objective: To assess and compare mirror neuron activity and UL muscle activity during AOI of a reaching task presented as a whole task (the complete movement) and as part tasks (proximal arm movements and distal arm movements). Methods: In this cross-sectional study, 26 participants with first-time unilateral stroke observed prerecorded videos of a reaching task, presented as a whole task and as its proximal and distal components, each followed by imitation of the task. Electroencephalographic (EEG) mu rhythm suppression and the electromyographic amplitude of six UL muscles were measured during the task. Results: EEG analysis revealed statistically significant mu suppression, indicating mirror neuron system activity, during AOI of the whole task at the C3 (p < 0.001) and C4 (p < 0.001) electrodes compared with the part tasks. Percentage maximum voluntary contraction amplitudes of the deltoid (p = 0.002), supraspinatus (p < 0.001), triceps brachii (p = 0.002), brachioradialis (p = 0.006), and extensor carpi radialis (p < 0.001) muscles showed significantly greater muscle activity during AOI of the whole task. There also appeared to be task-specific activation of muscles following AOI of the proximal or distal tasks. Conclusion: Practice of the whole task should be emphasized when framing AOI treatment modules to enhance reaching in people with stroke. Trial registration: Clinical Trials Registry-India (CTRI) identifier: CTRI/2018/04/013466.
Affiliation(s)
- A. Sulfikar Ali
- Department of Physiotherapy, Manipal College of Health Professions, Manipal Academy of Higher Education, Karnataka, Manipal 576104, India
- Department of Physiotherapy, Kasturba Medical College Mangalore, Manipal Academy of Higher Education, Karnataka, Manipal 576104, India
- Mayur Bhat
- Department of Audiology and Speech Language Pathology, Kasturba Medical College Mangalore, Manipal Academy of Higher Education, Karnataka, Manipal 576104, India
- Hari Prakash Palaniswamy
- Department of Speech and Hearing, Manipal College of Health Professions, Manipal Academy of Higher Education, Karnataka, Manipal 576104, India
- Selvam Ramachandran
- Department of Physiotherapy, Manipal College of Health Professions, Manipal Academy of Higher Education, Karnataka, Manipal 576104, India
- Senthil D. Kumaran
- Department of Physiotherapy, Manipal College of Health Professions, Manipal Academy of Higher Education, Karnataka, Manipal 576104, India
- Department of Medical Rehabilitation-Physical Therapy Program, School of Rehabilitation and Medical Sciences, College of Health Sciences, University of Nizwa, Nizwa, Oman
2. Linnunsalo S, Küster D, Yrttiaho S, Peltola MJ, Hietanen JK. Psychophysiological responses to eye contact with a humanoid robot: Impact of perceived intentionality. Neuropsychologia 2023;189:108668. [PMID: 37619935] [DOI: 10.1016/j.neuropsychologia.2023.108668]
Abstract
Eye contact with a social robot has been shown to elicit psychophysiological responses similar to those elicited by eye contact with another human. However, it is becoming increasingly clear that attention- and affect-related psychophysiological responses differentiate between direct (toward the observer) and averted gaze mainly when the observer views embodied faces capable of social interaction, a capability that pictorial or prerecorded stimuli lack. It has been suggested that genuine eye contact, as indicated by differential psychophysiological responses to direct and averted gaze, requires a feeling of being watched by another mind. We therefore measured event-related potentials (N170 and frontal P300) with EEG, facial electromyography, skin conductance, and heart rate deceleration responses while participants viewed a humanoid robot's direct versus averted gaze, and we manipulated the impression of the robot's intentionality. The results showed that the N170 and facial zygomatic responses were greater to the robot's direct than averted gaze, independently of the robot's intentionality, whereas the frontal P300 responses were more positive to direct than to averted gaze only when the robot appeared intentional. The study provides further evidence that the gaze behavior of a social robot elicits attentional and affective responses, and adds that the robot's seemingly autonomous social behavior plays an important role in eliciting higher-level socio-cognitive processing.
Affiliation(s)
- Samuli Linnunsalo
- Human Information Processing Laboratory, Faculty of Social Sciences/Psychology, Tampere University, Tampere, Finland
- Dennis Küster
- Cognitive Systems Lab, Department of Computer Science, University of Bremen, Bremen, Germany
- Santeri Yrttiaho
- Human Information Processing Laboratory, Faculty of Social Sciences/Psychology, Tampere University, Tampere, Finland
- Mikko J Peltola
- Human Information Processing Laboratory, Faculty of Social Sciences/Psychology, Tampere University, Tampere, Finland; Tampere Institute for Advanced Study, Tampere University, Tampere, Finland
- Jari K Hietanen
- Human Information Processing Laboratory, Faculty of Social Sciences/Psychology, Tampere University, Tampere, Finland
3. Cipriano M, Carneiro P, Albuquerque PB, Pinheiro AP, Lindner I. Stimuli in 3 Acts: A normative study on action-statements, action videos and object photos. Behav Res Methods 2023;55:3504-3512. [PMID: 36131196] [DOI: 10.3758/s13428-022-01972-8]
Abstract
The study of action observation and imagery, separately and combined, is expanding in diverse research areas (e.g., sports psychology, neurosciences), making clear the need for action-related stimuli (i.e., action statements, videos, and pictures). Although several databases of object and action pictures are available, norms on action videos are scarce. In this study, we validated a set of 60 object-related everyday actions in three different formats: action-statements, and corresponding dynamic (action videos) and static (object photos) stimuli. In Study 1, ratings of imageability, image agreement, action familiarity, action frequency, and action valence were collected from 161 participants. In Study 2, a different sample of 115 participants rated object familiarity, object valence, and object-action prototypicality. Most actions were rated as easy to imagine, familiar, and neutral or positive in valence. However, there was variation in the frequency with which participants perform these actions on a daily basis. High agreement between participants' mental image and action videos was also found, showing that the videos depict a conventional way of performing the actions. Objects were considered familiar and positive in valence. High ratings on object-action prototypicality indicate that the actions correspond to prototypical actions for most objects. 3ActStimuli is a comprehensive set of stimuli that can be useful in several research areas, allowing the combined study of action observation and imagery.
Affiliation(s)
- Margarida Cipriano
- Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
- Paula Carneiro
- Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
- Ana P Pinheiro
- Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
- Isabel Lindner
- Universität Kassel, Institut für Psychologie, Kassel, Germany
4. Tidoni E, Holle H, Scandola M, Schindler I, Hill L, Cross ES. Human but not robotic gaze facilitates action prediction. iScience 2022;25:104462. [PMID: 35707718] [PMCID: PMC9189121] [DOI: 10.1016/j.isci.2022.104462]
Abstract
Do people ascribe intentions to humanoid robots as they would to humans or non-human-like animated objects? In six experiments, we compared people's ability to extract non-mentalistic (i.e., where an agent is looking) and mentalistic (i.e., what an agent is looking at; what an agent is going to do) information from gaze and directional cues performed by humans, human-like robots, and a non-human-like object. People were faster to infer the mental content of human agents compared to robotic agents. Furthermore, although the absence of differences in control conditions rules out the use of non-mentalizing strategies, the human-like appearance of non-human agents may engage mentalizing processes to solve the task. Overall, results suggest that human-like robotic actions may be processed differently from humans' and objects' behavior. These findings inform our understanding of the relevance of an object's physical features in triggering mentalizing abilities and its relevance for human-robot interaction.
Highlights:
- People differently ascribe mental content to human-like and non-human-like agents
- A human-like shape may automatically engage mentalizing processes
- Human actions are interpreted faster than non-human actions
5. Kathleen B, Víctor FC, Amandine M, Aurélie C, Elisabeth P, Michèle G, Rachid A, Hélène C. Addressing joint action challenges in HRI: Insights from psychology and philosophy. Acta Psychol (Amst) 2022;222:103476. [PMID: 34974283] [DOI: 10.1016/j.actpsy.2021.103476]
Abstract
The vast expansion of research on human-robot interaction (HRI) in recent decades has been accompanied by the design of increasingly skilled robots for engaging in joint actions with humans. However, these advances still face significant challenges in ensuring fluent interactions and sustaining human motivation through the different steps of joint action. After exploring the current literature on joint action in HRI, leading to a more precise definition of these challenges, this article proposes some perspectives borrowed from psychology and philosophy that show the key role of communication in human interactions. From mutual recognition between individuals to the expression of commitment and social expectations, we argue that communicative cues can facilitate coordination, prediction, and motivation in the context of joint action. The description of several such notions suggests that certain communicative capacities can be implemented in the context of joint action for HRI, leading to an integrated perspective on robotic communication.
Affiliation(s)
- Belhassein Kathleen
- CLLE, UMR5263, Toulouse University, CNRS, UT2J, France; LAAS-CNRS, UPR8001, Toulouse University, CNRS, France
- Alami Rachid
- LAAS-CNRS, UPR8001, Toulouse University, CNRS, France
- Cochet Hélène
- CLLE, UMR5263, Toulouse University, CNRS, UT2J, France
6. Bek J, Gowen E, Vogt S, Crawford TJ, Poliakoff E. Action observation and imitation in Parkinson's disease: The influence of biological and non-biological stimuli. Neuropsychologia 2020;150:107690. [PMID: 33259870] [DOI: 10.1016/j.neuropsychologia.2020.107690]
Abstract
Action observation and imitation have been found to influence movement in people with Parkinson's disease (PD), but simple visual stimuli can also guide their movement. To investigate whether action observation may provide a more effective stimulus than other visual cues, the present study examined the effects of observing human pointing movements and simple visual stimuli on hand kinematics and eye movements in people with mild to moderate PD and age-matched controls. In Experiment 1, participants observed videos of movement sequences between horizontal positions, depicted by a simple cue with or without a moving human hand, then imitated the sequence either without further visual input (consecutive task) or while watching the video again (concurrent task). Modulation of movement duration, in accordance with changes in the observed stimulus, increased when the simple cue was accompanied by the hand and in the concurrent task, whereas modulation of horizontal amplitude was greater with the simple cue alone and in the consecutive task. Experiment 2 compared imitation of kinematically-matched dynamic biological (human hand) and non-biological (shape) stimuli, which moved with a high or low vertical trajectory. Both groups exhibited greater modulation for the hand than the shape, and differences in eye movements suggested closer tracking of the hand. Despite producing slower and smaller movements overall, the PD group showed a similar pattern of imitation to controls across tasks and conditions. The findings demonstrate that observing human action influences aspects of movement such as duration or trajectory more strongly than non-biological stimuli, particularly during concurrent imitation.
Affiliation(s)
- Judith Bek
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, Faculty of Biology Medicine and Health, Manchester Academic Health Science Centre, University of Manchester, UK
- Emma Gowen
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, Faculty of Biology Medicine and Health, Manchester Academic Health Science Centre, University of Manchester, UK
- Stefan Vogt
- Department of Psychology, Lancaster University, UK
- Ellen Poliakoff
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, Faculty of Biology Medicine and Health, Manchester Academic Health Science Centre, University of Manchester, UK
7. Kiilavuori H, Sariola V, Peltola MJ, Hietanen JK. Making eye contact with a robot: Psychophysiological responses to eye contact with a human and with a humanoid robot. Biol Psychol 2020;158:107989. [PMID: 33217486] [DOI: 10.1016/j.biopsycho.2020.107989]
Abstract
Previous research has shown that eye contact in human-human interaction elicits increased affective and attention-related psychophysiological responses. In the present study, we investigated whether eye contact with a humanoid robot would elicit these responses. Participants faced a humanoid robot (NAO) or a human partner, both physically present and looking at or away from the participant. The results showed that in both the human-robot and human-human conditions, eye contact versus averted gaze elicited greater skin conductance responses indexing autonomic arousal, greater facial zygomatic muscle responses (and smaller corrugator responses) associated with positive affect, and greater heart deceleration responses indexing attention allocation. For the skin conductance and zygomatic responses, the human model's gaze direction had a greater effect than the robot's gaze direction. In conclusion, eye contact elicits automatic affective and attentional reactions whether shared with a humanoid robot or with another human.
Affiliation(s)
- Helena Kiilavuori
- Human Information Processing Laboratory, Psychology, Faculty of Social Sciences, FI-33014, Tampere University, Finland
- Veikko Sariola
- Faculty of Medicine and Health Technology, Korkeakoulunkatu 3, FI-33720, Tampere University, Finland
- Mikko J Peltola
- Human Information Processing Laboratory, Psychology, Faculty of Social Sciences, FI-33014, Tampere University, Finland
- Jari K Hietanen
- Human Information Processing Laboratory, Psychology, Faculty of Social Sciences, FI-33014, Tampere University, Finland
8. Guidali G, Carneiro MI, Bolognini N. Paired Associative Stimulation drives the emergence of motor resonance. Brain Stimul 2020;13:627-636. [DOI: 10.1016/j.brs.2020.01.017]
9. The Mimicry Among Us: Intra- and Inter-Personal Mechanisms of Spontaneous Mimicry. J Nonverbal Behav 2019. [DOI: 10.1007/s10919-019-00324-z]
Abstract
This review explores spontaneous mimicry in the context of three questions. The first question concerns the role of spontaneous mimicry in processing conceptual information. The second concerns the debate over whether spontaneous mimicry is driven by simple associative processes or reflects higher-order processes such as goals, intentions, and social context. The third addresses the implications of these debates for understanding atypical individuals and states. We review the relevant literature and argue for a dynamic, context-sensitive role of spontaneous mimicry in social cognition and behavior. We highlight how the modulation of mimicry is often adaptive but also point out some maladaptive modulations that impair an individual's engagement in social life.
10. Reuten A, van Dam M, Naber M. Pupillary Responses to Robotic and Human Emotions: The Uncanny Valley and Media Equation Confirmed. Front Psychol 2018;9:774. [PMID: 29875722] [PMCID: PMC5974161] [DOI: 10.3389/fpsyg.2018.00774]
Abstract
Physiological responses during human-robot interaction are useful alternatives to subjective measures of the uncanny feelings evoked by nearly humanlike robots (the uncanny valley) and of comparable emotional responses to humans and robots (the media equation). However, no studies have employed the easily accessible measure of pupillometry to test the uncanny valley and media equation hypotheses, evidence for these hypotheses in interaction with emotional robots is scarce, and previous studies have not controlled for low-level image statistics across robot appearances. We therefore recorded the pupil size of 40 participants who viewed and rated pictures of robotic and human faces expressing a variety of basic emotions. The robotic faces varied along the dimension of human likeness from cartoonish to humanlike. We strictly controlled for confounding factors by removing backgrounds, hair, and color, and by equalizing low-level image statistics. After the presentation phase, participants indicated to what extent the robots appeared uncanny and humanlike, and whether they could imagine social interaction with the robots in real-life situations. The results show that robots rated as nearly humanlike scored higher on uncanniness, scored lower on imagined social interaction, evoked weaker pupil dilations, and had emotional expressions that were more difficult to recognize. Pupils dilated most strongly to negative expressions, and the pattern of pupil responses across emotions was highly similar between robot and human stimuli. These results highlight the usefulness of pupillometry in emotion studies and robot design by confirming the uncanny valley and media equation hypotheses.
Affiliation(s)
- Anne Reuten
- Experimental Psychology, Helmholtz Institute, Faculty of Social Sciences, Utrecht University, Utrecht, Netherlands
- Maureen van Dam
- Experimental Psychology, Helmholtz Institute, Faculty of Social Sciences, Utrecht University, Utrecht, Netherlands
- Marnix Naber
- Experimental Psychology, Helmholtz Institute, Faculty of Social Sciences, Utrecht University, Utrecht, Netherlands
11. Hortensius R, Cross ES. From automata to animate beings: the scope and limits of attributing socialness to artificial agents. Ann N Y Acad Sci 2018;1426:93-110. [PMID: 29749634] [DOI: 10.1111/nyas.13727]
Abstract
Understanding the mechanisms and consequences of attributing socialness to artificial agents has important implications for how we can use technology to lead more productive and fulfilling lives. Here, we integrate recent findings on the factors that shape behavioral and brain mechanisms that support social interactions between humans and artificial agents. We review how visual features of an agent, as well as knowledge factors within the human observer, shape attributions across dimensions of socialness. We explore how anthropomorphism and dehumanization further influence how we perceive and interact with artificial agents. Based on these findings, we argue that the cognitive reconstruction within the human observer is likely to be far more crucial in shaping our interactions with artificial agents than previously thought, while the artificial agent's visual features are possibly of lesser importance. We combine these findings to provide an integrative theoretical account based on the "like me" hypothesis, and discuss the key role played by the Theory-of-Mind network, especially the temporal parietal junction, in the shift from mechanistic to social attributions. We conclude by highlighting outstanding questions on the impact of long-term interactions with artificial agents on the behavioral and brain mechanisms of attributing socialness to these agents.
Affiliation(s)
- Ruud Hortensius
- Wales Institute for Cognitive Neuroscience, School of Psychology, Bangor University, Wales, United Kingdom
- Institute of Neuroscience and Psychology, School of Psychology, University of Glasgow, Scotland, United Kingdom
- Emily S Cross
- Wales Institute for Cognitive Neuroscience, School of Psychology, Bangor University, Wales, United Kingdom
- Institute of Neuroscience and Psychology, School of Psychology, University of Glasgow, Scotland, United Kingdom
12. Hofree G, Ruvolo P, Reinert A, Bartlett MS, Winkielman P. Behind the Robot's Smiles and Frowns: In Social Context, People Do Not Mirror Android's Expressions But React to Their Informational Value. Front Neurorobot 2018;12:14. [PMID: 29740307] [PMCID: PMC5928139] [DOI: 10.3389/fnbot.2018.00014]
Abstract
Facial actions are key elements of non-verbal behavior. Perceivers' reactions to others' facial expressions often represent a match or mirroring (e.g., they smile to a smile). However, the information conveyed by an expression depends on context. Thus, when shown by an opponent, a smile conveys bad news and evokes frowning. The availability of anthropomorphic agents capable of facial actions raises the question of how people respond to such agents in social context. We explored this issue in a study where participants played a strategic game with or against a facially expressive android. Electromyography (EMG) recorded participants' reactions over zygomaticus muscle (smiling) and corrugator muscle (frowning). We found that participants' facial responses to android's expressions reflect their informational value, rather than a direct match. Overall, participants smiled more, and frowned less, when winning than losing. Critically, participants' responses to the game outcome were similar regardless of whether it was conveyed via the android's smile or frown. Furthermore, the outcome had greater impact on people's facial reactions when it was conveyed through android's face than a computer screen. These findings demonstrate that facial actions of artificial agents impact human facial responding. They also suggest a sophistication in human-robot communication that highlights the signaling value of facial expressions.
Affiliation(s)
- Galit Hofree
- Department of Psychology, University of California, San Diego, San Diego, CA, United States
- Paul Ruvolo
- Department of Engineering, Franklin W. Olin College of Engineering, Needham, MA, United States
- Audrey Reinert
- Department of Industrial Engineering, Purdue University, West Lafayette, IN, United States
- Marian S Bartlett
- Institute for Neural Computation, University of California, San Diego, San Diego, CA, United States
- Piotr Winkielman
- Department of Psychology, University of California, San Diego, San Diego, CA, United States; Department of Psychology, SWPS University of Social Sciences and Humanities, Warsaw, Poland
13. Genschow O, Klomfar S, d'Haene I, Brass M. Mimicking and anticipating others' actions is linked to Social Information Processing. PLoS One 2018;13:e0193743. [PMID: 29590127] [PMCID: PMC5873994] [DOI: 10.1371/journal.pone.0193743]
Abstract
It is widely known that individuals frequently imitate each other in social situations and that such mimicry fulfills an important social role, functioning as a kind of social glue. With reference to the anticipated action effect, it has recently been demonstrated that individuals not only imitate others but also engage in anticipated action before the observed person starts that action. Interestingly, both phenomena (i.e., mimicry and anticipated action) rely on tracking others' social behavior. Therefore, in the present research we investigated whether mimicry and anticipated action are related to social abilities as indicated by measures of social intelligence. The results demonstrate for the first time that both mimicry and anticipated action are correlated with an important aspect of social intelligence, namely the ability to process social information. Theoretical implications and limitations are discussed.
Affiliation(s)
- Oliver Genschow
- Social Cognition Center Cologne, University of Cologne, Cologne, Germany
- Sophie Klomfar
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Ine d'Haene
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Marcel Brass
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
14. Kupferberg A, Iacoboni M, Flanagin V, Huber M, Kasparbauer A, Baumgartner T, Hasler G, Schmidt F, Borst C, Glasauer S. Fronto-parietal coding of goal-directed actions performed by artificial agents. Hum Brain Mapp 2017;39:1145-1162. [PMID: 29205671] [DOI: 10.1002/hbm.23905]
Abstract
With advances in technology, artificial agents such as humanoid robots will soon become a part of our daily lives. For safe and intuitive collaboration, it is important to understand the goals behind their motor actions. In humans, this process is mediated by changes in activity in fronto-parietal brain areas. The extent to which these areas are activated when observing artificial agents indicates how natural and easy interaction with them is likely to be. Previous studies indicated that fronto-parietal activity does not depend on whether the observed agent is human or artificial. However, it was unknown whether this activity is modulated by observing grasping (a self-related action) and pointing (an other-related action) performed by an artificial agent, depending on the action goal. We therefore designed an experiment in which subjects observed human and artificial agents perform pointing and grasping actions aimed at two different object categories suggesting different goals. We found a signal increase in the bilateral inferior parietal lobule and the premotor cortex when tool rather than food items were pointed to or grasped by either agent, probably reflecting the association of hand actions with the functional use of tools. Our results show that goal attribution engages the fronto-parietal network when observing not only a human but also a robotic agent, for both self-related and social actions. Debriefing after the experiment showed that the actions of human-like artificial agents can be perceived as goal-directed. Humans should therefore be able to interact intuitively with service robots in various domains such as education, healthcare, public service, and entertainment.
Affiliation(s)
- Aleksandra Kupferberg
- Division of Molecular Psychiatry, Translational Research Center, University Hospital of Psychiatry, University of Bern, Bern, Switzerland
- Marco Iacoboni
- David Geffen School of Medicine at UCLA, Ahmanson-Lovelace Brain Mapping Center, Semel Institute for Neuroscience and Human Behavior, Brain Research Institute, Los Angeles, California
- Virginia Flanagin
- German Center for Vertigo and Balance Disorders (DSGZ), Ludwig-Maximilian University Munich, München, Germany; Center for Sensorimotor Research, Department of Neurology, Ludwig-Maximilian University, München, Germany
- Markus Huber
- Center for Sensorimotor Research, Department of Neurology, Ludwig-Maximilian University, München, Germany
- Thomas Baumgartner
- Department of Social Psychology and Social Neuroscience, University of Bern, Bern, Switzerland
- Gregor Hasler
- Division of Molecular Psychiatry, Translational Research Center, University Hospital of Psychiatry, University of Bern, Bern, Switzerland
- Florian Schmidt
- Department of Robotics, DLR, Oberpfaffenhofen, Bavaria, Germany
- Christoph Borst
- Department of Robotics, DLR, Oberpfaffenhofen, Bavaria, Germany
- Stefan Glasauer
- German Center for Vertigo and Balance Disorders (DSGZ), Ludwig-Maximilian University Munich, München, Germany; Center for Sensorimotor Research, Department of Neurology, Ludwig-Maximilian University, München, Germany
15
Genschow O, van Den Bossche S, Cracco E, Bardi L, Rigoni D, Brass M. Mimicry and automatic imitation are not correlated. PLoS One 2017; 12:e0183784. [PMID: 28877197] [PMCID: PMC5587324] [DOI: 10.1371/journal.pone.0183784] [Citation(s) in RCA: 52] [Impact Index Per Article: 7.4] [Received: 04/04/2017] [Accepted: 08/10/2017] [Indexed: 12/30/2022] Open
Abstract
It is widely known that individuals tend to imitate each other. However, different psychological disciplines assess imitation in different ways: social psychologists assess mimicry by means of action observation, whereas cognitive psychologists assess automatic imitation with reaction-time-based measures on a trial-by-trial basis. Although these methods differ in crucial methodological respects, both phenomena are assumed to rely on similar underlying mechanisms. This raises the fundamental question of whether mimicry and automatic imitation are actually correlated. In the present research we assessed both phenomena and did not find a meaningful correlation. Moreover, personality traits such as empathy, autism traits, and traits related to self- versus other-focus did not correlate with mimicry or automatic imitation either. Theoretical implications are discussed.
Affiliation(s)
- Oliver Genschow
- Social Cognition Center Cologne, University of Cologne, Cologne, Germany
- Emiel Cracco
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Lara Bardi
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Davide Rigoni
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Marcel Brass
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
16
Dickerson K, Gerhardstein P, Moser A. The Role of the Human Mirror Neuron System in Supporting Communication in a Digital World. Front Psychol 2017; 8:698. [PMID: 28553240] [PMCID: PMC5427119] [DOI: 10.3389/fpsyg.2017.00698] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Received: 06/24/2016] [Accepted: 04/21/2017] [Indexed: 11/13/2022] Open
Abstract
Humans use both verbal and non-verbal communication to interact with others and their environment, and increasingly these interactions occur in a digital medium. Whether live or digital, learning to communicate requires overcoming the correspondence problem: there is no direct mapping, or correspondence, between perceived and self-produced signals. Reconciling the differences between perceived and produced actions, including linguistic actions, is difficult and requires integration across multiple modalities and neuro-cognitive networks. Recent work on the neural substrates of social learning suggests that there may be a common mechanism underlying the perception-production cycle for verbal and non-verbal communication. The purpose of this paper is to review evidence supporting the link between verbal and non-verbal communication, and to extend the human mirror neuron system (hMNS) literature by proposing that recent advances in communication technology, which at times have had deleterious effects on behavioral and perceptual performance, may disrupt the success of the hMNS in supporting social interactions because these technologies are virtual and spatiotemporally distributed in nature.
Affiliation(s)
- Kelly Dickerson
- U.S. Army Research Laboratory, Human Research and Engineering, Aberdeen, MD, USA
- Alecia Moser
- Department of Psychology, Binghamton University, Binghamton, NY, USA
17
Crawford LE, Vavra DT, Corbin JC. Thinking Outside the Button Box: EMG as a Computer Input Device for Psychological Research. Collabra: Psychology 2017. [DOI: 10.1525/collabra.92] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/17/2022] Open
Abstract
Experimental psychology research commonly has participants respond to stimuli by pressing buttons or keys. Standard computer input devices constrain the range of motoric responses participants can make, even as the field advances theory about the importance of the motor system in cognitive and social information processing. Here we describe an inexpensive way to use an electromyographic (EMG) signal as a computer input device, enabling participants to control a computer by contracting muscles that are not usually used for that purpose, but which may be theoretically relevant. We tested this approach in a study of facial mimicry, a well-documented phenomenon in which viewing emotional faces elicits automatic activation of corresponding muscles in the face of the viewer. Participants viewed happy and angry faces and were instructed to indicate the emotion on each face as quickly as possible by either furrowing their brow or contracting their cheek. The mapping of motor response to judgment was counterbalanced, so that one block of trials required a congruent mapping (contract brow to respond “angry,” cheek to respond “happy”) and the other block required an incongruent mapping (brow for “happy,” cheek for “angry”). EMG sensors placed over the left corrugator supercilii muscle and left zygomaticus major muscle fed readings of muscle activation to a microcontroller, which sent a response to a computer when activation reached a pre-determined threshold. Response times were faster when the motor-response mapping was congruent than when it was incongruent, extending prior studies on facial mimicry. We discuss further applications of the method for research that seeks to expand the range of human-computer interaction beyond the button box.
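The response mechanism this abstract describes reduces to a simple threshold crossing: the microcontroller treats the first EMG sample at or above a preset activation level as the participant's response. A minimal host-side sketch in Python illustrates the idea; the sample values, threshold, and function name here are hypothetical, and the original study performed this detection on a microcontroller rather than on the host computer:

```python
def first_activation(samples, threshold):
    """Return the index of the first EMG amplitude sample that reaches
    the activation threshold, or None if no contraction is detected."""
    for i, amplitude in enumerate(samples):
        if amplitude >= threshold:
            return i
    return None

# Simulated rectified EMG amplitudes (arbitrary units): baseline noise
# followed by a contraction that crosses the threshold.
corrugator = [0.02, 0.03, 0.02, 0.15, 0.40, 0.55]
THRESHOLD = 0.30

onset = first_activation(corrugator, THRESHOLD)
if onset is not None:
    # At this point the microcontroller would send a key/button event
    # to the experiment computer and the response time would be logged.
    print(f"response registered at sample {onset}")
```

In the actual setup, one such detector per channel (corrugator vs. zygomaticus) would map each muscle to a different response key, with the congruent/incongruent mapping counterbalanced across blocks as the abstract describes.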
18
Arévalo A, Baldo J, González-Perilli F, Ibáñez A. Editorial: What can we make of theories of embodiment and the role of the human mirror neuron system? Front Hum Neurosci 2015; 9:500. [PMID: 26441598] [PMCID: PMC4565976] [DOI: 10.3389/fnhum.2015.00500] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Received: 03/20/2015] [Accepted: 08/28/2015] [Indexed: 11/13/2022] Open
Affiliation(s)
- Analía Arévalo
- Center for Aphasia and Related Disorders, East Bay Institute for Research and Education, Martinez, CA, USA
- Juliana Baldo
- Center for Aphasia and Related Disorders, East Bay Institute for Research and Education, Martinez, CA, USA
- Fernando González-Perilli
- Center for Basic Research in Psychology and Faculty of Information and Communication, University of the Republic, Montevideo, Uruguay; Department of Basic, Evolutionary and Educational Psychology, Universitat Autonoma de Barcelona, Barcelona, Spain
- Agustín Ibáñez
- Laboratory of Experimental Psychology and Neuroscience, Institute of Cognitive Neurology (INECO), Favaloro University, Buenos Aires, Argentina; National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina; UDP-INECO Foundation Core on Neuroscience, Diego Portales University, Santiago, Chile; Department of Psychology, Universidad Autónoma del Caribe, Barranquilla, Colombia; Centre of Excellence in Cognition and its Disorders, Sydney, NSW, Australia