1. Fino E, Menegatti M, Avenanti A, Rubini M. Reading of ingroup politicians' smiles triggers smiling in the corner of one's eyes. PLoS One 2024;19:e0290590. PMID: 38635525; PMCID: PMC11025833; DOI: 10.1371/journal.pone.0290590.
Abstract
Spontaneous smiles in response to politicians can serve as an implicit barometer for gauging electorate preferences. However, it is unclear whether a subtle Duchenne smile, an authentic expression involving coactivation of the zygomaticus major (ZM) and orbicularis oculi (OO) muscles, would be elicited while reading about a favored politician smiling, indicating a more positive disposition and political endorsement. From an embodied simulation perspective, we investigated whether written descriptions of a politician's smile would trigger morphologically different smiles in readers depending on shared or opposing political orientation. In a controlled laboratory reading task, participants were presented with subject-verb phrases describing left- and right-wing politicians smiling or frowning. Concurrently, their facial muscular reactions were measured via electromyography (EMG) at three facial muscles: the ZM and OO, coactive during Duchenne smiles, and the corrugator supercilii (CS), involved in frowning. Participants responded with a Duchenne smile, detected at the ZM and OO muscles, when exposed to portrayals of smiling politicians of the same political orientation, and they reported more positive emotions toward those politicians. In contrast, when reading about outgroup politicians smiling, there was weaker activation of the ZM muscle and no activation of the OO muscle, suggesting a weak non-Duchenne smile, and reported emotions toward outgroup politicians were significantly more negative. A more pronounced frown response in the CS was also found for ingroup compared to outgroup politicians' frown expressions. These findings suggest that a politician's smile may go a long way toward influencing electorates through both non-verbal and verbal pathways, and they add another layer to our understanding of how language and social information shape embodied effects in a highly nuanced manner. Implications for verbal communication in the political context are discussed.
Affiliation(s)
- Edita Fino: Department of Psychology “Renzo Canestrari”, Alma Mater Studiorum Università di Bologna, Bologna, Italy
- Michela Menegatti: Department of Psychology “Renzo Canestrari”, Alma Mater Studiorum Università di Bologna, Bologna, Italy
- Alessio Avenanti: Department of Psychology “Renzo Canestrari”, Alma Mater Studiorum Università di Bologna, Bologna, Italy; Centro Studi e Ricerche in Neuroscienze Cognitive, Department of Psychology “Renzo Canestrari”, Alma Mater Studiorum Università di Bologna, Campus di Cesena, Cesena, Italy; Centro de Investigación en Neuropsicología y Neurociencias Cognitivas, Universidad Católica del Maule, Talca, Chile
- Monica Rubini: Department of Psychology “Renzo Canestrari”, Alma Mater Studiorum Università di Bologna, Bologna, Italy
2. Neumann R, Schneider LJ. What is in a smile: The role of evaluation goal and response labels in facial muscle responses to prejudiced groups. Psychophysiology 2024;61:e14518. PMID: 38200628; DOI: 10.1111/psyp.14518.
Abstract
Based on the assumption that valence is permanently linked to facial responses, we expected the corrugator muscle to contract faster in response to overweight persons than to slim persons, and the zygomaticus muscle to contract faster in response to slim persons than to overweight persons. To detect such differences, we conducted experiments with different versions of a facial stimulus-response compatibility task that required participants to respond with the two facial muscles to photos of overweight or slim persons. Contrary to that assumption, in Experiments 1 and 2 the social categories (overweight vs. slim persons) did not influence response latencies assessed by electromyography. Whereas Experiments 1 and 2 used neutral labels for the muscle responses, Experiment 3 used affective response labels (smile vs. frown). In Experiment 3, responses with the corrugator were faster to overweight than to slim persons, and responses with the zygomaticus were faster to slim than to overweight persons. The influence of task and response labels is consistent with the theory of event coding, which posits a more flexible link between valence and action.
Affiliation(s)
- Roland Neumann: Department of Psychology, Institute for Cognitive & Affective Neuroscience (ICAN), University of Trier, Trier, Germany
- Lisa J Schneider: Department of Psychology, Institute for Cognitive & Affective Neuroscience (ICAN), University of Trier, Trier, Germany
3. Huber R, Fischer R, Kozlik J. When a smile is still a conflict: Affective conflicts from emotional facial expressions of ingroup or outgroup members occur irrespective of the social interaction context. Acta Psychol (Amst) 2023;239:104008. PMID: 37603901; DOI: 10.1016/j.actpsy.2023.104008.
Abstract
Facial expressions play a crucial role in human interactions. Typically, a positive (negative) expression evokes a congruent positive (negative) reaction in the observer. This congruent behavior is inverted, however, when the same positive (negative) expression is displayed by an outgroup member. Two approaches explain this phenomenon: the social intentions account proposes that the facial display carries underlying social messages, whereas the processing conflict account assumes an affective conflict triggered by incongruent combinations of emotion and the affective connotation of group membership. In three experiments, we aimed to further substantiate the processing conflict account by separating the affective conflict from potential social intentions. To this end, we created a new paradigm in which the participant was an outside observer of a social interaction between two faces. Participants were required to respond to the emotional target person, who could represent an ingroup or outgroup member. In all three experiments, irrespective of any social intention, responses were consistently affected by the group relation between participant and emotional target, i.e., by the affective (in)congruency of the target as seen by participants. These results further support the processing conflict account. Implications for the two theoretical accounts are discussed.
Affiliation(s)
- Robert Huber: Department of Psychology, University of Greifswald, Greifswald, Germany
- Rico Fischer: Department of Psychology, University of Greifswald, Greifswald, Germany
- Julia Kozlik: University Medicine Greifswald, Greifswald, Germany
4. Doğdu C, Kessler T, Schneider D, Shadaydeh M, Schweinberger SR. A Comparison of Machine Learning Algorithms and Feature Sets for Automatic Vocal Emotion Recognition in Speech. Sensors (Basel) 2022;22:7561. PMID: 36236658; PMCID: PMC9571288; DOI: 10.3390/s22197561.
Abstract
Vocal emotion recognition (VER) in natural speech, often referred to as speech emotion recognition (SER), remains challenging for both humans and computers. Applied fields including clinical diagnosis and intervention, social interaction research, and human-computer interaction (HCI) increasingly benefit from efficient VER algorithms. Several feature sets have been used with machine learning (ML) algorithms for discrete emotion classification, but there is no consensus on which low-level descriptors and classifiers are optimal. We therefore compared the performance of ML algorithms across several feature sets. Concretely, seven ML algorithms were compared on the Berlin Database of Emotional Speech: Multilayer Perceptron Neural Network (MLP), J48 Decision Tree (DT), Support Vector Machine with Sequential Minimal Optimization (SMO), Random Forest (RF), k-Nearest Neighbor (KNN), Simple Logistic Regression (LOG), and Multinomial Logistic Regression (MLR), with 10-fold cross-validation using four openSMILE feature sets (IS-09, emobase, GeMAPS, and eGeMAPS). Results indicated that SMO, MLP, and LOG perform better (reaching 87.85%, 84.00%, and 83.74% accuracy, respectively) than RF, DT, MLR, and KNN (with minimum accuracies of 73.46%, 53.08%, 70.65%, and 58.69%, respectively). Overall, the emobase feature set performed best. We discuss the implications of these findings for applications in diagnosis, intervention, and HCI.
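The evaluation protocol this abstract describes (several classifiers compared by 10-fold cross-validated accuracy on one feature matrix) can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the study used openSMILE features and the Berlin Database of Emotional Speech, which are not reproduced here, so a synthetic feature matrix of the same rough shape stands in, and scikit-learn classifiers approximate a subset of the seven algorithms compared.

```python
# Sketch of a classifier-comparison protocol with 10-fold cross-validation.
# Synthetic data stands in for openSMILE acoustic features; the classifier
# set loosely mirrors some of the algorithms named in the abstract.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

# Stand-in feature matrix: 535 utterances x 88 features (the eGeMAPS set
# has 88 descriptors) with 7 emotion classes, as in the Berlin database.
X, y = make_classification(n_samples=535, n_features=88, n_informative=30,
                           n_classes=7, random_state=0)

classifiers = {
    "SVM (SMO-like)": SVC(kernel="rbf"),
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
    "LOG": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(random_state=0),
    "DT": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
}

# Mean accuracy over 10 stratified folds, as in the study's evaluation.
results = {}
for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)  # scale features per fold
    scores = cross_val_score(pipe, X, y, cv=10, scoring="accuracy")
    results[name] = scores.mean()

for name, acc in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{name:14s} mean 10-fold accuracy: {acc:.3f}")
```

On real data the ranking would depend on the feature set and tuning; the point of the sketch is only the shape of the comparison, with scaling kept inside the pipeline so it is fit on each training fold rather than on the full data.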
Affiliation(s)
- Cem Doğdu: Department of Social Psychology, Institute of Psychology, Friedrich Schiller University Jena, Humboldtstraße 26, 07743 Jena, Germany; Michael Stifel Center Jena for Data-Driven and Simulation Science, Friedrich Schiller University Jena, 07743 Jena, Germany; Social Potential in Autism Research Unit, Friedrich Schiller University Jena, 07743 Jena, Germany
- Thomas Kessler: Department of Social Psychology, Institute of Psychology, Friedrich Schiller University Jena, Humboldtstraße 26, 07743 Jena, Germany
- Dana Schneider: Department of Social Psychology, Institute of Psychology, Friedrich Schiller University Jena, Humboldtstraße 26, 07743 Jena, Germany; Michael Stifel Center Jena for Data-Driven and Simulation Science, Friedrich Schiller University Jena, 07743 Jena, Germany; Social Potential in Autism Research Unit, Friedrich Schiller University Jena, 07743 Jena, Germany; DFG Scientific Network “Understanding Others”, 10117 Berlin, Germany
- Maha Shadaydeh: Michael Stifel Center Jena for Data-Driven and Simulation Science, Friedrich Schiller University Jena, 07743 Jena, Germany; Computer Vision Group, Department of Mathematics and Computer Science, Friedrich Schiller University Jena, 07743 Jena, Germany
- Stefan R. Schweinberger: Michael Stifel Center Jena for Data-Driven and Simulation Science, Friedrich Schiller University Jena, 07743 Jena, Germany; Social Potential in Autism Research Unit, Friedrich Schiller University Jena, 07743 Jena, Germany; Department of General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Am Steiger 3/Haus 1, 07743 Jena, Germany; German Center for Mental Health (DZPG), Site Jena-Magdeburg-Halle, 07743 Jena, Germany
5. Sinvani RT, Sapir S. Sentence vs. Word Perception by Young Healthy Females: Toward a Better Understanding of Emotion in Spoken Language. Front Glob Womens Health 2022;3:829114. PMID: 35692948; PMCID: PMC9174644; DOI: 10.3389/fgwh.2022.829114.
Abstract
Expression and perception of emotions through the voice are fundamental for basic mental health stability. Since results vary across languages, studies should be guided by the relationship between speech complexity and emotional perception. The aim of our study was therefore to analyze the efficiency of speech stimuli, word vs. sentence, as it relates to the accuracy of recognizing four emotional categories: anger, sadness, happiness, and neutrality. To this end, a total of 2,235 audio clips were presented to 49 female native Hebrew speakers aged 20–30 years (M = 23.7; SD = 2.13), who were asked to judge each utterance as belonging to one of the four emotional categories. The simulated voice samples consisted of words and meaningful sentences provided by 15 healthy young female native Hebrew speakers. Word vs. sentence had not previously been established as a factor in vocal emotion recognition; however, introducing a variety of speech utterances revealed differences in perception. Anger was identified more accurately from the single word (χ2 = 10.21, p < 0.01) than from the sentence, while sadness was identified more accurately from the sentence (χ2 = 3.83, p = 0.05). Our findings contribute to a better understanding of how speech-stimulus type shapes emotion perception, as a part of mental health.
Affiliation(s)
- Rachel-Tzofia Sinvani (corresponding author): School of Occupational Therapy, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem, Israel
- Shimon Sapir: Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Sciences, University of Haifa, Haifa, Israel