1
Sarzedas J, Lima CF, Roberto MS, Scott SK, Pinheiro AP, Conde T. Blindness influences emotional authenticity perception in voices: Behavioral and ERP evidence. Cortex 2024;172:254-270. PMID: 38123404. DOI: 10.1016/j.cortex.2023.11.005.
Abstract
The ability to distinguish spontaneous from volitional emotional expressions is an important social skill. How do blind individuals perceive emotional authenticity? Unlike sighted individuals, they cannot rely on facial and body language cues, relying instead on vocal cues alone. Here, we combined behavioral and ERP measures to investigate authenticity perception in laughter and crying in individuals with early- or late-blindness onset. Early-blind, late-blind, and sighted control participants (n = 17 per group, N = 51) completed authenticity and emotion discrimination tasks while EEG data were recorded. The stimuli consisted of laughs and cries that were either spontaneous or volitional. The ERP analysis focused on the N1, P2, and late positive potential (LPP). Behaviorally, early-blind participants showed intact authenticity perception, but late-blind participants performed worse than controls. There were no group differences in the emotion discrimination task. In brain responses, all groups were sensitive to laughter authenticity at the P2 stage, and to crying authenticity at the early LPP stage. Nevertheless, only early-blind participants were sensitive to crying authenticity at the N1 and middle LPP stages, and to laughter authenticity at the early LPP stage. Furthermore, early-blind and sighted participants were more sensitive than late-blind ones to crying authenticity at the P2 and late LPP stages. Altogether, these findings suggest that early blindness relates to facilitated brain processing of authenticity in voices, both at early sensory and late cognitive-evaluative stages. Late-onset blindness, in contrast, relates to decreased sensitivity to authenticity at behavioral and brain levels.
Affiliation(s)
- João Sarzedas
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- César F Lima
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal; Institute of Cognitive Neuroscience, University College London, London, UK
- Magda S Roberto
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, London, UK
- Ana P Pinheiro
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Tatiana Conde
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
2
Olszanowski M, Frankowska N, Tołopiło A. "Rear bias" in spatial auditory perception: Attentional and affective vigilance to sounds occurring outside the visual field. Psychophysiology 2023;60:e14377. PMID: 37357967. DOI: 10.1111/psyp.14377.
Abstract
The present studies explored the rear bias phenomenon, that is, the attentional and affective bias toward sounds occurring behind the listener. Physiological and psychological reactions (i.e., fEMG, EDA/SCR, a simple reaction task [SRT], and self-assessments of affect-related states) were measured in response to tones of different frequencies (Study 1) and emotional vocalizations (Study 2) presented in rear and front spatial locations. Results showed that emotional vocalizations located behind the listener facilitate reactions related to attention orienting (i.e., auricularis muscle responses and simple reaction times) and evoke higher arousal, both physiological (as measured by SCR) and psychological (self-assessment scale). Importantly, the observed asymmetries were larger for negative, threat-related signals (e.g., anger) than for positive, nonthreatening ones (e.g., achievement). By contrast, there were only small differences for the relatively higher-frequency tones. The observed relationships are discussed in terms of one of the auditory system's postulated functions: monitoring the environment in order to quickly detect potential threats that occur outside the visual field (e.g., behind one's back).
Affiliation(s)
- Michal Olszanowski
- Center for Research on Biological Basis of Social Behavior, SWPS University, Warsaw, Poland
- Natalia Frankowska
- Center for Research on Biological Basis of Social Behavior, SWPS University, Warsaw, Poland
- Aleksandra Tołopiło
- Center for Research on Biological Basis of Social Behavior, SWPS University, Warsaw, Poland
3
Lee M, Lori A, Langford NA, Rilling JK. The neural basis of smile authenticity judgments and the potential modulatory role of the oxytocin receptor gene (OXTR). Behav Brain Res 2023;437:114144. PMID: 36216140. DOI: 10.1016/j.bbr.2022.114144.
Abstract
Accurate perception of genuine vs. posed smiles is crucial for successful social navigation in humans. While people vary in their ability to assess the authenticity of smiles, little is known about the specific biological mechanisms underlying this variation. We investigated the neural substrates of smile authenticity judgments using functional magnetic resonance imaging (fMRI). We also tested a preliminary hypothesis that a common polymorphism in the oxytocin receptor gene (OXTR), rs53576, would modulate the behavioral and neural indices of accurate smile authenticity judgments. A total of 185 healthy adult participants (neuroimaging arm: N = 44; behavioral arm: N = 141) determined the authenticity of dynamic facial expressions of genuine and posed smiles either with or without fMRI scanning. Correctly identified genuine vs. posed smiles activated brain areas involved in reward processing, facial mimicry, and mentalizing. Activation within the inferior frontal gyrus and dorsomedial prefrontal cortex correlated with individual differences in sensitivity (d') and response criterion (C), respectively. Our exploratory genetic analysis revealed that rs53576 G homozygotes in the neuroimaging arm had a stronger tendency than A allele carriers to judge posed smiles as genuine, and showed decreased activation in the medial prefrontal cortex when viewing genuine vs. posed smiles. Yet OXTR rs53576 did not modulate task performance in the behavioral arm, which calls for further studies to evaluate the robustness of this result. Our findings extend previous literature on the biological foundations of smile authenticity judgments, particularly emphasizing the involvement of brain regions implicated in reward, facial mimicry, and mentalizing.
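The sensitivity (d') and response criterion (C) indices mentioned in this abstract are standard signal-detection measures computed from hit and false-alarm rates. As a rough illustration only (this is not the authors' analysis code, and the counts below are invented), a minimal sketch:

```python
from statistics import NormalDist

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Compute signal-detection sensitivity (d') and response criterion (C)
    from raw counts, with a log-linear correction (adding 0.5 to each cell)
    so that rates of exactly 0 or 1 do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical example: 40 genuine and 40 posed smiles, treating
# "genuine" responses to genuine smiles as hits.
d, c = sdt_indices(hits=32, misses=8, false_alarms=12, correct_rejections=28)
```

Positive d' indicates above-chance discrimination of genuine from posed smiles; a negative C indicates a bias toward responding "genuine", which is the kind of bias the abstract reports for rs53576 G homozygotes.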
Affiliation(s)
- Adriana Lori
- Department of Psychiatry and Behavioral Science, USA
- Nicole A Langford
- Department of Psychiatry and Behavioral Science, USA; Nell Hodgson Woodruff School of Nursing, USA
- James K Rilling
- Department of Anthropology, USA; Department of Psychiatry and Behavioral Science, USA; Center for Behavioral Neuroscience, USA; Emory National Primate Research Center, USA; Center for Translational Social Neuroscience, USA
4
Wołoszyn K, Hohol M, Kuniecki M, Winkielman P. Restricting movements of lower face leaves recognition of emotional vocalizations intact but introduces a valence positivity bias. Sci Rep 2022;12:16101. PMID: 36167865. PMCID: PMC9515079. DOI: 10.1038/s41598-022-18888-0.
Abstract
Blocking facial mimicry can disrupt the recognition of emotional stimuli. Many previous studies have focused on facial expressions, and it remains unclear whether this generalises to other types of emotional expressions. Furthermore, by emphasizing categorical recognition judgments, previous studies neglected the role of mimicry in other processing stages, including dimensional (valence and arousal) evaluations. In the present study, we addressed both issues by asking participants to listen to brief non-verbal vocalizations of four emotion categories (anger, disgust, fear, happiness) and to neutral sounds under two conditions. In one condition, facial mimicry was blocked by creating constant tension in the lower face muscles; in the other, the facial muscles remained relaxed. After each stimulus presentation, participants evaluated the sound's category, valence, and arousal. Although the blocking manipulation did not influence emotion recognition, it led to higher valence ratings in a non-category-specific manner, including for neutral sounds. Our findings suggest that somatosensory and motor feedback play a role in the evaluation of affective vocalizations, perhaps introducing a directional bias. This distinction between stimulus recognition, stimulus categorization, and stimulus evaluation is important for understanding which cognitive and emotional processing stages involve somatosensory and motor processes.
Affiliation(s)
- Kinga Wołoszyn
- Institute of Psychology, Jagiellonian University, Kraków, Poland
- Mateusz Hohol
- Copernicus Center for Interdisciplinary Studies, Jagiellonian University, Kraków, Poland
- Michał Kuniecki
- Institute of Psychology, Jagiellonian University, Kraków, Poland
- Piotr Winkielman
- Department of Psychology, University of California San Diego, La Jolla, USA
5
Namba S, Sato W, Nakamura K, Watanabe K. Computational Process of Sharing Emotion: An Authentic Information Perspective. Front Psychol 2022;13:849499. PMID: 35645906. PMCID: PMC9134197. DOI: 10.3389/fpsyg.2022.849499.
Abstract
Although many psychology studies have shown that sharing emotion supports dyadic interaction, no report has examined the transmission of authentic information from emotional expressions that can strengthen perceivers. In this study, we used computational modeling (a multinomial processing tree) to formally quantify the process of sharing emotion, emphasizing the perception of authentic information about expressers' feeling states from facial expressions. Results indicated that the probability of perceiving authentic information about feeling states from a happy expression is higher than that of judging authentic information from angry expressions. Second, happy facial expressions can activate both emotional elicitation and emotion sharing in perceivers, whereas for angry facial expressions emotional elicitation alone is at work rather than emotion sharing. Third, parameters for detecting anger experiences were positively correlated with those for happiness. No robust correlation was found between the parameters extracted from this experimental task and questionnaire-measured emotional contagion, empathy, and social anxiety. These results reveal the possibility that a new computational approach can contribute to describing the process of sharing emotion.
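A multinomial processing tree (MPT) model, as mentioned in this abstract, predicts the probabilities of observable response categories as sums of products of latent branch parameters. The sketch below is a generic two-parameter toy tree, not the authors' actual model; the parameter names `a` and `g` are illustrative assumptions:

```python
def mpt_branch_probabilities(a, g):
    """Toy multinomial-processing-tree sketch with hypothetical parameters:
    a = probability of perceiving authentic feeling-state information,
    g = probability of guessing 'authentic' when perception fails.
    Returns the predicted probabilities of the two observable responses."""
    p_judge_authentic = a + (1 - a) * g   # perceive, or fail and guess "authentic"
    p_judge_other = (1 - a) * (1 - g)     # fail to perceive and guess otherwise
    return p_judge_authentic, p_judge_other

# Example: a perceiver who detects authenticity 60% of the time
# and otherwise guesses "authentic" at chance.
p_yes, p_no = mpt_branch_probabilities(a=0.6, g=0.5)
```

Fitting such a model to response frequencies (e.g., by maximum likelihood) yields separate estimates of perception and guessing, which is how MPT models disentangle the processes that a raw accuracy score confounds.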
Affiliation(s)
- Shushi Namba
- Psychological Process Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan
- Wataru Sato
- Psychological Process Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan
- Koyo Nakamura
- Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria; Japan Society for the Promotion of Science, Tokyo, Japan; Faculty of Science and Engineering, Waseda University, Tokyo, Japan
- Katsumi Watanabe
- Faculty of Science and Engineering, Waseda University, Tokyo, Japan; Faculty of Arts, Design and Architecture, University of New South Wales, Sydney, NSW, Australia
6
Szameitat DP, Szameitat AJ, Wildgruber D. Vocal Expression of Affective States in Spontaneous Laughter Reveals the Bright and the Dark Side of Laughter. Sci Rep 2022;12:5613. PMID: 35379847. PMCID: PMC8980048. DOI: 10.1038/s41598-022-09416-1.
Abstract
It has been shown that the acoustical signal of posed laughter can convey affective information to the listener. However, because posed and spontaneous laughter differ in a number of significant aspects, it is unclear whether affective communication generalises to spontaneous laughter. To answer this question, we created a stimulus set of 381 spontaneous laughter audio recordings, produced by 51 different speakers and representing different types of laughter. In Experiment 1, 159 participants were presented with these audio recordings, without any further information about the situational context of the speakers, and asked to classify the laughter sounds. Results showed that joyful, tickling, and schadenfreude laughter could be classified significantly above chance level. In Experiment 2, 209 participants were presented with a subset of 121 laughter recordings correctly classified in Experiment 1 and asked to rate the laughter on four emotional dimensions, i.e., arousal, dominance, sender's valence, and receiver-directed valence. Results showed that the laughter types differed significantly in their ratings on all dimensions. Joyful laughter and tickling laughter both showed a positive sender's valence and receiver-directed valence, whereby tickling laughter had a particularly high arousal. Schadenfreude had a negative receiver-directed valence and a high dominance, thus providing empirical evidence for the existence of a dark side in spontaneous laughter. The present results suggest that, with the evolution of human social communication, laughter diversified from the former play signal of non-human primates into a much more fine-grained signal that can serve a multitude of social functions in order to regulate group structure and hierarchy.