1
Kroczek LOH, Lingnau A, Schwind V, Wolff C, Mühlberger A. Observers predict actions from facial emotional expressions during real-time social interactions. Behav Brain Res 2024; 471:115126. PMID: 38950784. DOI: 10.1016/j.bbr.2024.115126.
Abstract
In face-to-face social interactions, emotional expressions provide insights into the mental state of an interactive partner. This information can be crucial for inferring action intentions and reacting to another person's actions. Here we investigate how facial emotional expressions impact subjective experience as well as physiological and behavioral responses to social actions during real-time interactions. Thirty-two participants interacted with virtual agents while fully immersed in virtual reality. Agents displayed an angry or happy facial expression before directing an appetitive (fist bump) or aversive (punch) social action towards the participant. Participants responded to these actions either by reciprocating the fist bump or by defending against the punch. For all interactions, subjective experience was measured using ratings. In addition, physiological responses (electrodermal activity, electrocardiogram) and participants' response times were recorded. Aversive actions were judged to be more arousing and less pleasant than appetitive actions. In addition, angry expressions increased heart rate relative to happy expressions. Crucially, interaction effects between facial emotional expression and action were observed: angry expressions reduced pleasantness more strongly for appetitive than for aversive actions. Furthermore, skin conductance responses to aversive actions were increased for happy compared to angry expressions, and reaction times were faster for aversive than for appetitive actions when agents showed an angry expression. These results indicate that observers used facial emotional expressions to generate expectations about particular actions. The present study thus demonstrates that observers integrate information from facial emotional expressions with actions during social interactions.
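The cardiac measure reported above (heart rate from the electrocardiogram) is conventionally derived from the intervals between successive R-peaks. A minimal sketch under that assumption; the R-peak timestamps below are invented illustration data, not the study's recordings:

```python
def mean_heart_rate(r_peak_times_s):
    """Mean heart rate in beats per minute from R-peak timestamps (seconds)."""
    if len(r_peak_times_s) < 2:
        raise ValueError("need at least two R-peaks")
    # Inter-beat (R-R) intervals in seconds
    rr = [t2 - t1 for t1, t2 in zip(r_peak_times_s, r_peak_times_s[1:])]
    mean_rr = sum(rr) / len(rr)
    return 60.0 / mean_rr

# Example: R-peaks 0.8 s apart correspond to 75 bpm.
print(mean_heart_rate([0.0, 0.8, 1.6, 2.4]))  # -> 75.0
```

In practice the R-peaks themselves would first be detected from the raw ECG; this sketch starts after that step.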
Affiliation(s)
- Leon O H Kroczek
- Department of Psychology, Clinical Psychology and Psychotherapy, University of Regensburg, Regensburg, Germany
- Angelika Lingnau
- Department of Psychology, Cognitive Neuroscience, University of Regensburg, Regensburg, Germany
- Valentin Schwind
- Human Computer Interaction, University of Applied Sciences in Frankfurt a. M., Frankfurt a. M., Germany; Department of Media Informatics, University of Regensburg, Regensburg, Germany
- Christian Wolff
- Department of Media Informatics, University of Regensburg, Regensburg, Germany
- Andreas Mühlberger
- Department of Psychology, Clinical Psychology and Psychotherapy, University of Regensburg, Regensburg, Germany
2
Zhang P, Feng S, Zhang Q, Chen Y, Liu Y, Liu T, Bai X, Yin J. Online chasing action recruits both mirror neuron and mentalizing systems: A pilot fNIRS study. Acta Psychol (Amst) 2024; 248:104363. PMID: 38905953. DOI: 10.1016/j.actpsy.2024.104363.
Abstract
Engaging in chasing, in which an actor actively pursues a target, is considered a crucial activity for the development of social skills. Previous studies have focused predominantly on the neural correlates of chasing from an observer's perspective, but the neural mechanisms underlying the real-time implementation of chasing actions remain poorly understood. To gain deeper insight into this phenomenon, the current study employed functional near-infrared spectroscopy (fNIRS) and a novel interactive game. In this game, participants (N = 29) engaged in chasing behavior by controlling an on-screen character with a gamepad, with the goal of catching a virtual partner. To specifically examine the brain activations associated with the interactive nature of chasing, we included two additional interactive actions: a following action, in which participants traced the path of a virtual partner, and a free action, in which they moved without a specific pursuit goal. The results revealed that chasing and following actions elicited activation in a broad and overlapping network of brain regions, including the temporoparietal junction (TPJ), medial prefrontal cortex (mPFC), premotor cortex (PMC), primary somatosensory cortex (SI), and primary motor cortex (M1). Crucially, these regions were modulated by the type of interaction, with greater activation and functional connectivity during the chasing interaction than during the following and free interactions. These findings suggest that both the mirror neuron system (MNS), encompassing regions such as the PMC, M1, and SI, and the mentalizing system (MS), involving the TPJ and mPFC, contribute to the execution of online chasing actions. The present study thus represents an initial step toward future investigations into the roles of the MNS and MS in real-time chasing interactions.
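The functional connectivity contrast mentioned above is commonly operationalized as the correlation between the time series of two recording channels. A minimal sketch under that assumption; the channel labels ("TPJ", "mPFC") and signal values below are invented toy data, not the study's fNIRS recordings:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

tpj = [0.1, 0.4, 0.3, 0.8, 0.6]    # toy "TPJ" channel signal
mpfc = [0.2, 0.5, 0.4, 0.9, 0.7]   # toy "mPFC" signal: same shape, shifted
print(pearson_r(tpj, mpfc))        # identical shape -> 1.0
```

A condition difference in connectivity would then be a comparison of such correlations across the chasing, following, and free blocks.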
Affiliation(s)
- Peng Zhang
- Academy of Psychology and Behavior, Tianjin Normal University, Tianjin, China
- Shuyuan Feng
- Academy of Psychology and Behavior, Tianjin Normal University, Tianjin, China
- Qihan Zhang
- Academy of Psychology and Behavior, Tianjin Normal University, Tianjin, China
- Yixin Chen
- Academy of Psychology and Behavior, Tianjin Normal University, Tianjin, China
- Yu Liu
- Academy of Psychology and Behavior, Tianjin Normal University, Tianjin, China
- Tao Liu
- School of Management, Shanghai University, Shanghai, China
- Xuejun Bai
- Academy of Psychology and Behavior, Tianjin Normal University, Tianjin, China
- Jun Yin
- Department of Psychology, Ningbo University, Ningbo, China
3
Bauer A, Kuder A, Schulder M, Schepens J. Phonetic differences between affirmative and feedback head nods in German Sign Language (DGS): A pose estimation study. PLoS One 2024; 19:e0304040. PMID: 38814896. PMCID: PMC11139280. DOI: 10.1371/journal.pone.0304040.
Abstract
This study investigates head nods in natural dyadic German Sign Language (DGS) interaction, with the aim of determining whether head nods serving different functions vary in their phonetic characteristics. Earlier research on spoken and sign language interaction has revealed that head nods vary in the form of the movement. However, most claims about the phonetic properties of head nods have been based on manual annotation without reference to naturalistic text types, and the head nods produced by the addressee have been largely ignored. Detailed information about the phonetic properties of the addressee's head nods and their interaction with manual cues is lacking for DGS as well as for other sign languages, and the existence of a form-function relationship for head nods remains uncertain. We hypothesize that head nods functioning in the context of affirmation differ from those signaling feedback in their form and in their co-occurrence with manual items. To test this hypothesis, we apply OpenPose, a computer vision toolkit, to extract head nod measurements from video recordings and examine head nods in terms of their duration, amplitude, and velocity. We describe the basic phonetic properties of head nods in DGS and their interaction with manual items in naturalistic corpus data. Our results show that the phonetic properties of affirmative nods differ from those of feedback nods: feedback nods appear to be on average slower in production and smaller in amplitude than affirmation nods, and they are commonly produced without a co-occurring manual element. We attribute these variations in phonetic properties to the distinct roles the two cues fulfill in the turn-taking system. This research underlines the importance of non-manual cues in shaping the turn-taking system of sign languages, linking research fields such as sign language linguistics, conversation analysis, quantitative linguistics, and computer vision.
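The three kinematic measures named above (duration, amplitude, velocity) can be sketched as simple computations over a head-keypoint time series such as a tracked nose position. The function, frame rate, and sample trajectory below are hypothetical illustrations, not the authors' actual pipeline:

```python
def nod_kinematics(y_positions, fps):
    """Duration (s), amplitude (px), and peak velocity (px/s) of one nod,
    given the vertical position of a head keypoint per video frame."""
    duration = (len(y_positions) - 1) / fps
    amplitude = max(y_positions) - min(y_positions)
    # Frame-to-frame speed, scaled from pixels/frame to pixels/second
    velocities = [abs(b - a) * fps for a, b in zip(y_positions, y_positions[1:])]
    return duration, amplitude, max(velocities)

# Toy nod: the head moves down 20 px and back up over 5 frames at 25 fps.
d, a, v = nod_kinematics([100, 110, 120, 110, 100], fps=25)
print(d, a, v)  # -> 0.16 20 250
```

A real pipeline would also segment nods from continuous keypoint streams and smooth out pose-estimation jitter before measuring.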
Affiliation(s)
- Anastasia Bauer
- Department of Linguistics, General Linguistics, University of Cologne, Cologne, Germany
- Anna Kuder
- Department of Linguistics, General Linguistics, University of Cologne, Cologne, Germany
- Marc Schulder
- Institute for German Sign Language and Communication of the Deaf, University of Hamburg, Hamburg, Germany
- Job Schepens
- Department of Linguistics, General Linguistics, University of Cologne, Cologne, Germany
4
Tsantani M, Yon D, Cook R. Neural Representations of Observed Interpersonal Synchrony/Asynchrony in the Social Perception Network. J Neurosci 2024; 44:e2009222024. PMID: 38527811. PMCID: PMC11097257. DOI: 10.1523/jneurosci.2009-22.2024.
Abstract
The visual perception of individuals is thought to be mediated by a network of regions in the occipitotemporal cortex that supports specialized processing of faces, bodies, and actions. In comparison, we know relatively little about the neural mechanisms that support the perception of multiple individuals and the interactions between them. The present study sought to elucidate the visual processing of social interactions by identifying which regions of the social perception network represent interpersonal synchrony. In an fMRI study with 32 human participants (26 female, 6 male), we used multivoxel pattern analysis to investigate whether activity in face-selective, body-selective, and interaction-sensitive regions across the social perception network supports the decoding of synchronous versus asynchronous head-nodding and head-shaking. Several regions were found to support significant decoding of synchrony/asynchrony, including extrastriate body area (EBA), face-selective and interaction-sensitive mid/posterior right superior temporal sulcus, and occipital face area. We also saw robust cross-classification across actions in the EBA, suggestive of movement-invariant representations of synchrony/asynchrony. Exploratory whole-brain analyses also identified a region of the right fusiform cortex that responded more strongly to synchronous than to asynchronous motion. Critically, perceiving interpersonal synchrony/asynchrony requires the simultaneous extraction and integration of dynamic information from more than one person. Hence, the representation of synchrony/asynchrony cannot be attributed to augmented or additive processing of individual actors. Our findings therefore provide important new evidence that social interactions recruit dedicated visual processing within the social perception network that extends beyond that engaged by the faces and bodies of the constituent individuals.
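The multivoxel pattern analysis described above trains a classifier to decode condition labels (synchronous vs. asynchronous) from voxel activity patterns; above-chance cross-validated accuracy indicates the region carries information about the distinction. A minimal sketch using a nearest-centroid classifier with leave-one-out cross-validation on invented toy patterns (real MVPA typically uses linear classifiers on fMRI response estimates):

```python
def nearest_centroid_loo(patterns, labels):
    """Leave-one-out decoding accuracy with a nearest-centroid classifier."""
    correct = 0
    for i, test in enumerate(patterns):
        train = [(p, l) for j, (p, l) in enumerate(zip(patterns, labels)) if j != i]
        # Class centroids computed from the training folds only
        centroids = {}
        for lab in set(l for _, l in train):
            pts = [p for p, l in train if l == lab]
            centroids[lab] = [sum(dim) / len(pts) for dim in zip(*pts)]
        # Predict the held-out pattern's label by squared Euclidean distance
        pred = min(centroids,
                   key=lambda lab: sum((a - b) ** 2 for a, b in zip(test, centroids[lab])))
        correct += pred == labels[i]
    return correct / len(patterns)

sync = [[1.0, 0.1], [0.9, 0.2], [1.1, 0.0]]    # toy "synchronous" patterns
async_ = [[0.1, 1.0], [0.2, 0.9], [0.0, 1.1]]  # toy "asynchronous" patterns
acc = nearest_centroid_loo(sync + async_, ["sync"] * 3 + ["async"] * 3)
print(acc)  # clearly separable toy data -> 1.0
```

The study's cross-classification analysis follows the same logic, except training on one action (e.g. nodding) and testing on the other (shaking).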
Affiliation(s)
- Maria Tsantani
- Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, United Kingdom
- Daniel Yon
- Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, United Kingdom
- Richard Cook
- School of Psychology, University of Leeds, Leeds LS2 9JU, United Kingdom
- Department of Psychology, University of York, York YO10 5DD, United Kingdom
5
Kausel L, Michon M, Soto-Icaza P, Aboitiz F. A multimodal interface for speech perception: the role of the left superior temporal sulcus in social cognition and autism. Cereb Cortex 2024; 34:84-93. PMID: 38696598. DOI: 10.1093/cercor/bhae066.
Abstract
Multimodal integration is crucial for human interaction, in particular for social communication, which relies on integrating information from various sensory modalities. Recently a third visual pathway specialized in social perception was proposed, which includes the right superior temporal sulcus (STS) playing a key role in processing socially relevant cues and high-level social perception. Importantly, it has also recently been proposed that the left STS contributes to audiovisual integration of speech processing. In this article, we propose that brain areas along the right STS that support multimodal integration for social perception and cognition can be considered homologs to those in the left, language-dominant hemisphere, sustaining multimodal integration of speech and semantic concepts fundamental for social communication. Emphasizing the significance of the left STS in multimodal integration and associated processes such as multimodal attention to socially relevant stimuli, we underscore its potential relevance in comprehending neurodevelopmental conditions characterized by challenges in social communication such as autism spectrum disorder (ASD). Further research into this left lateral processing stream holds the promise of enhancing our understanding of social communication in both typical development and ASD, which may lead to more effective interventions that could improve the quality of life for individuals with atypical neurodevelopment.
Affiliation(s)
- Leonie Kausel
- Centro de Estudios en Neurociencia Humana y Neuropsicología (CENHN), Facultad de Psicología, Universidad Diego Portales, Vergara 275, 8370076 Santiago, Chile
- Maëva Michon
- Praxiling Laboratory, Joint Research Unit (UMR 5267), Centre National de la Recherche Scientifique (CNRS), Université Paul Valéry, Route de Mende, 34199 Montpellier cedex 5, France
- Centro Interdisciplinario de Neurociencia, Pontificia Universidad Católica de Chile, Marcoleta 391, 2do piso, 8330024 Santiago, Chile
- Laboratorio de Neurociencia Cognitiva y Evolutiva, Facultad de Medicina, Pontificia Universidad Católica de Chile, Marcoleta 391, 2do piso, 8330024 Santiago, Chile
- Patricia Soto-Icaza
- Centro de Investigación en Complejidad Social (CICS), Facultad de Gobierno, Universidad del Desarrollo, Av. Las Condes 12461, edificio 3, piso 3, 7590943 Las Condes, Santiago, Chile
- Francisco Aboitiz
- Centro Interdisciplinario de Neurociencia, Pontificia Universidad Católica de Chile, Marcoleta 391, 2do piso, 8330024 Santiago, Chile
- Laboratorio de Neurociencia Cognitiva y Evolutiva, Facultad de Medicina, Pontificia Universidad Católica de Chile, Marcoleta 391, 2do piso, 8330024 Santiago, Chile
6
Hackel LM, Kalkstein DA, Mende-Siedlecki P. Simplifying social learning. Trends Cogn Sci 2024; 28:428-440. PMID: 38331595. DOI: 10.1016/j.tics.2024.01.004.
Abstract
Social learning is complex, but people often seem to navigate social environments with ease. This ability creates a puzzle for traditional accounts of reinforcement learning (RL) that assume people negotiate a tradeoff between easy-but-simple behavior (model-free learning) and complex-but-difficult behavior (e.g., model-based learning). We offer a theoretical framework for resolving this puzzle: although social environments are complex, people have social expertise that helps them behave flexibly with low cognitive cost. Specifically, by using familiar concepts instead of focusing on novel details, people can turn hard learning problems into simpler ones. This ability highlights social learning as a prototype for studying cognitive simplicity in the face of environmental complexity and identifies a role for conceptual knowledge in everyday reward learning.
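The "easy-but-simple" model-free pole of the tradeoff described above can be illustrated with a tabular value update that caches reward per option without any model of the environment. The option name, rewards, and learning rate below are invented illustration values:

```python
def q_update(q, option, reward, alpha=0.5):
    """One model-free update: nudge the cached value of an option toward
    the reward just received (delta-rule / tabular Q-learning)."""
    old = q.get(option, 0.0)
    q[option] = old + alpha * (reward - old)
    return q

q = {}
for r in [1.0, 1.0, 0.0]:          # outcomes of repeatedly choosing "partner A"
    q_update(q, "partner A", r)
print(q["partner A"])              # -> 0.375
```

The framework's point is that social expertise lets people key such cached values to familiar concepts (e.g. "generous partner") rather than to each novel individual, keeping learning cheap without a full model of the environment.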
Affiliation(s)
- Leor M Hackel
- University of Southern California, Los Angeles, CA 90089, USA
7
Papeo L. What is abstract about seeing social interactions? Trends Cogn Sci 2024; 28:390-391. PMID: 38632008. DOI: 10.1016/j.tics.2024.02.004.
Affiliation(s)
- Liuba Papeo
- Institute of Cognitive Sciences Marc Jeannerod - UMR 5229, Centre National de la Recherche Scientifique (CNRS) and Université Claude Bernard Lyon 1, France
8
McMahon E, Isik L. Abstract social interaction representations along the lateral pathway. Trends Cogn Sci 2024; 28:392-393. PMID: 38632007. DOI: 10.1016/j.tics.2024.03.007.
Affiliation(s)
- Emalie McMahon
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Leyla Isik
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
9
Grossmann T. Neurodevelopmental and evolutionary origins of processing social interactions. Trends Cogn Sci 2024; 28:193-194. PMID: 38296746. DOI: 10.1016/j.tics.2023.11.006.
Affiliation(s)
- Tobias Grossmann
- Department of Psychology, University of Virginia, Charlottesville, VA, USA
10
McMahon E, Isik L. The neurodevelopmental origins of seeing social interactions. Trends Cogn Sci 2024; 28:195-196. PMID: 38296745. DOI: 10.1016/j.tics.2023.12.007.
Affiliation(s)
- Emalie McMahon
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Leyla Isik
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
11
Malik M, Isik L. Relational visual representations underlie human social interaction recognition. Nat Commun 2023; 14:7317. PMID: 37951960. PMCID: PMC10640586. DOI: 10.1038/s41467-023-43156-8.
Abstract
Humans effortlessly recognize social interactions from visual input. Attempts to model this ability have typically relied on generative inverse planning models, which make predictions by inverting a generative model of agents' interactions based on their inferred goals, suggesting humans use a similar process of mental inference to recognize interactions. However, growing behavioral and neuroscience evidence suggests that recognizing social interactions is a visual process, separate from complex mental state inference. Yet despite their success in other domains, visual neural network models have been unable to reproduce human-like interaction recognition. We hypothesize that humans rely on relational visual information in particular, and develop a relational, graph neural network model, SocialGNN. Unlike prior models, SocialGNN accurately predicts human interaction judgments across both animated and natural videos. These results suggest that humans can make complex social interaction judgments without an explicit model of the social and physical world, and that structured, relational visual representations are key to this behavior.
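The relational idea behind a graph neural network such as SocialGNN can be sketched as message passing over a graph whose nodes are agents: each agent's representation is updated with information from the agents it interacts with. The single mean-aggregation step below is an invented minimal illustration, not the published architecture:

```python
def message_pass(features, edges):
    """One mean-aggregation message-passing step over a directed graph.
    features: node -> feature vector; edges: list of (src, dst) pairs."""
    updated = {}
    for node, feats in features.items():
        senders = [src for src, dst in edges if dst == node]
        if senders:
            msgs = [features[s] for s in senders]
            # Aggregate incoming messages dimension-wise (mean)
            agg = [sum(vals) / len(msgs) for vals in zip(*msgs)]
            # Combine the node's own features with the aggregated message
            updated[node] = [(a + b) / 2 for a, b in zip(feats, agg)]
        else:
            updated[node] = list(feats)
    return updated

# Two agents; agent "B" receives a message from agent "A".
feats = {"A": [1.0, 0.0], "B": [0.0, 1.0]}
out = message_pass(feats, edges=[("A", "B")])
print(out["B"])  # -> [0.5, 0.5]
```

The structural point is that the edges, not just the individual node features, carry the interaction: a purely node-wise model could not produce this dependence of one agent's representation on another's.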
Affiliation(s)
- Manasi Malik
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, 21218, USA
- Leyla Isik
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, 21218, USA