1. Obayashi Y, Uehara S, Yuasa A, Otaka Y. The other person's smiling amount affects one's smiling response during face-to-face conversations. Front Behav Neurosci 2024;18:1420361. PMID: 39184933; PMCID: PMC11341491; DOI: 10.3389/fnbeh.2024.1420361.
Abstract
Introduction: Smiling during conversation occurs interactively between people and is known to build good interpersonal relationships. However, whether and to what extent an individual's amount of smiling is influenced by the other person's smile has remained unclear. This study aimed to quantify the amount of two individuals' smiles during conversations and investigate the dependency of one person's smile amount (i.e., intensity and frequency) on that of the other. Method: Forty participants (20 females) engaged in three-minute face-to-face conversations as speakers with a listener (male or female) under three conditions, in which the amount of the listener's smiling response was controlled as "less," "moderate," or "greater." The amount of smiling was quantified from facial movements through automated facial expression analysis. Results: The amount of smiling by the speaker changed significantly depending on the listener's smile amount; when the listeners smiled to a greater extent, the speakers tended to smile more, especially when they were of the same gender (i.e., male-male and female-female pairs). Further analysis revealed that the smiling intensities of the two individuals changed in a temporally synchronized manner. Discussion: These results provide quantitative evidence for the dependence of one person's smile on the other's, and for a differential effect between gender pairs.
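The temporal synchrony reported above can be illustrated with a lagged cross-correlation between two smile-intensity time series. The sketch below is not the authors' analysis code; the sampling rate, lag window, and function name are assumptions for illustration.

```python
import numpy as np

def smile_synchrony(speaker, listener, fs=30, max_lag_s=2.0):
    """Peak normalized cross-correlation between two smile-intensity
    series, scanning lags up to +/- max_lag_s seconds (fs = frames/s)."""
    speaker = (speaker - speaker.mean()) / speaker.std()
    listener = (listener - listener.mean()) / listener.std()
    max_lag = int(max_lag_s * fs)
    lags = list(range(-max_lag, max_lag + 1))
    corrs = []
    for lag in lags:
        if lag < 0:    # listener leads the speaker
            c = np.corrcoef(speaker[:lag], listener[-lag:])[0, 1]
        elif lag > 0:  # speaker leads the listener
            c = np.corrcoef(speaker[lag:], listener[:-lag])[0, 1]
        else:
            c = np.corrcoef(speaker, listener)[0, 1]
        corrs.append(c)
    best = int(np.argmax(corrs))
    return lags[best] / fs, corrs[best]  # best lag (s), peak correlation
```

A correlation peak near lag zero would indicate that the two intensity curves rise and fall together, the synchrony pattern the abstract describes.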
Affiliation(s)
- Yota Obayashi: Department of Rehabilitation, Fujita Health University Hospital, Aichi, Japan
- Shintaro Uehara: Faculty of Rehabilitation, Fujita Health University School of Health Sciences, Aichi, Japan
- Akiko Yuasa: Department of Rehabilitation Medicine, Fujita Health University School of Medicine, Aichi, Japan; Japan Society for the Promotion of Science, Tokyo, Japan
- Yohei Otaka: Department of Rehabilitation Medicine, Fujita Health University School of Medicine, Aichi, Japan
2. Patterson ML, Fridlund AJ, Crivelli C. Four Misconceptions About Nonverbal Communication. Perspect Psychol Sci 2023;18:1388-1411. PMID: 36791676; PMCID: PMC10623623; DOI: 10.1177/17456916221148142.
Abstract
Research and theory in nonverbal communication have made great advances toward understanding the patterns and functions of nonverbal behavior in social settings. Progress has been hindered, we argue, by presumptions about nonverbal behavior that follow from both received wisdom and faulty evidence. In this article, we document four persistent misconceptions about nonverbal communication: namely, that people communicate using decodable body language; that they have a stable personal space by which they regulate contact with others; that they express emotion using universal, evolved, iconic, categorical facial expressions; and that they can deceive and detect deception using dependable telltale clues. We show how these misconceptions permeate research as well as the practices of popular behavior experts, with consequences that extend from intimate relationships to the boardroom and courtroom and even to the arena of international security. Notwithstanding these misconceptions, existing frameworks of nonverbal communication are being challenged by more comprehensive systems approaches and by virtual technologies that ambiguate the roles and identities of interactants and the contexts of interaction.
Affiliation(s)
- Alan J. Fridlund: Department of Psychological and Brain Sciences, University of California, Santa Barbara
3. Venkitakrishnan S, Wu YH. Facial Expressions as an Index of Listening Difficulty and Emotional Response. Semin Hear 2023;44:166-187. PMID: 37122878; PMCID: PMC10147507; DOI: 10.1055/s-0043-1766104.
Abstract
Knowledge about the listening difficulty experienced during a task can be used to better understand speech perception processes, to guide amplification outcomes, and by individuals to decide whether to participate in communication. Another factor affecting these decisions is an individual's emotional response, which has not previously been measured objectively. In this study, we describe a novel method of measuring the listening difficulty and affect of individuals in adverse listening situations using an automatic facial expression algorithm. The purpose of our study was to determine whether facial expressions of confusion and frustration are sensitive to changes in listening difficulty. We recorded speech recognition scores, facial expressions, subjective listening effort scores, and subjective emotional responses in 33 young participants with normal hearing. We varied the difficulty level using signal-to-noise ratios of -1, +2, and +5 dB SNR plus a quiet condition. We found that facial expressions of confusion and frustration increased with increasing difficulty level, but not with every step between adjacent levels. We also found a relationship between facial expressions and both subjective emotion ratings and subjective listening effort. Emotional responses in the form of facial expressions show promise as a measure of affect and listening difficulty. Further research is needed to determine the specific contribution of affect to communication in challenging listening environments.
Affiliation(s)
- Soumya Venkitakrishnan: Department of Communication Sciences and Disorders, California State University, Sacramento, California
- Yu-Hsiang Wu: Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa
4. Emotional face recognition when a colored mask is worn: a cross-sectional study. Sci Rep 2023;13:174. PMID: 36599964; PMCID: PMC9812539; DOI: 10.1038/s41598-022-27049-2.
Abstract
Studies of the impact of face masks on emotional facial expression recognition are sparse in children. Moreover, to our knowledge no study has so far considered mask color (in adults and in children), even though this esthetic property is thought to have an impact on information processing. To explore these issues, the present study examined whether first- and fifth-graders and young adults were influenced by the absence or presence (and color: pink, green, red, black, or white) of a face mask when asked to judge emotional facial expressions of fear, anger, sadness, or neutrality. Analysis of the results suggested that the presence of a mask affected the recognition of sad or fearful faces but did not significantly influence the perception of angry and neutral faces. Mask color slightly modulated the recognition of facial emotional expressions, without a systematic pattern that would allow a clear conclusion to be drawn. Moreover, none of these findings varied according to age group. The contribution of different facial areas to efficient emotion recognition is discussed with reference to methodological and theoretical considerations and in the light of recent studies.
5. Heydari F, Sheybani S, Yoonessi A. Iranian emotional face database: Acquisition and validation of a stimulus set of basic facial expressions. Behav Res Methods 2023;55:143-150. PMID: 35297015; DOI: 10.3758/s13428-022-01812-9.
Abstract
Facial expressions play an essential role in social interactions. Databases of face images have informed theories of emotion perception and have applications in other disciplines such as facial recognition technology. However, the faces of many ethnicities remain largely underrepresented in existing face databases, which can limit the generalizability of the theories and technologies developed from them. Here, we present the first survey-validated database of Iranian faces. It consists of 248 images from 40 Iranian individuals portraying six emotional expressions (anger, sadness, fear, disgust, happiness, and surprise) as well as the neutral state. The photos were taken in a studio setting, following common emotion-induction scenarios and controlling for lighting, camera setup, and the model's head posture. An evaluation survey confirmed high agreement between the models' intended expressions and the raters' perception of them. The database is freely available online for academic research purposes.
Affiliation(s)
- Faeze Heydari: Institute for Cognitive Science Studies, Tehran, Iran
- Ali Yoonessi: Department of Neuroscience and Addiction Studies, School of Advanced Technologies in Medicine, Tehran University of Medical Sciences, Tehran, Iran
6. Denault V, Zloteanu M. Darwin's illegitimate children: How body language experts undermine Darwin's legacy. Evol Hum Sci 2022;4:e53. PMID: 37588916; PMCID: PMC10426054; DOI: 10.1017/ehs.2022.50.
Abstract
The Expression of the Emotions in Man and Animals has received and continues to receive much attention from emotion researchers and behavioural scientists. However, the common misconception that Darwin advocated for the universality of emotional reactions has led to a host of unfounded and discredited claims promoted by 'body language experts' on both traditional and social media. These 'experts' receive unparalleled public attention. Thus, rather than being presented with empirically supported findings on non-verbal behaviour, the public is exposed to 'body language analysis' of celebrities, politicians and defendants in criminal trials. In this perspective piece, we address the misinformation surrounding non-verbal behaviour. We also discuss the nature and scope of statements from body language experts, unpacking the claims of the most viewed YouTube video by a body language expert, comparing these claims with actual research findings, and giving specific attention to the implications for the justice system. We explain how body language experts use (and misuse) Darwin's legacy and conclude with a call for researchers to unite their voices and work towards stopping the spread of misinformation about non-verbal behaviour.
Affiliation(s)
- Vincent Denault: Department of Educational and Counselling Psychology, McGill University, Canada
7. Méndez CA, Celeghin A, Diano M, Orsenigo D, Ocak B, Tamietto M. A deep neural network model of the primate superior colliculus for emotion recognition. Philos Trans R Soc Lond B Biol Sci 2022;377:20210512. PMID: 36126660; PMCID: PMC9489290; DOI: 10.1098/rstb.2021.0512.
Abstract
Although sensory processing is pivotal to nearly every theory of emotion, the evaluation of visual input as 'emotional' (e.g. a smile as signalling happiness) has traditionally been assumed to take place in supramodal 'limbic' brain regions. Accordingly, subcortical structures of ancient evolutionary origin that receive direct input from the retina, such as the superior colliculus (SC), are traditionally conceptualized as passive relay centres. However, mounting evidence suggests that the SC is endowed with the necessary infrastructure and computational capabilities for the innate recognition and initial categorization of emotionally salient features from retinal information. Here, we built a neurobiologically inspired convolutional deep neural network (DNN) model that approximates physiological, anatomical and connectional properties of the retino-collicular circuit. This enabled us to characterize and isolate the initial computations and discriminations that the DNN model of the SC can perform on facial expressions, based uniquely on the information it directly receives from the virtual retina. Trained to discriminate facial expressions of basic emotions, our model matches human error patterns and achieves above-chance, yet suboptimal, classification accuracy analogous to that reported in patients with V1 damage, who rely on retino-collicular pathways for non-conscious vision of emotional attributes. When presented with gratings of different spatial frequencies and orientations never 'seen' before, the SC model exhibits spontaneous tuning to low spatial frequencies and reduced orientation discrimination, as can be expected from the prevalence of magnocellular (M) over parvocellular (P) projections. Likewise, face manipulation that biases processing towards the M or P pathway affects expression recognition in the SC model accordingly, an effect that dovetails with variations of activity in the human SC purposely measured with ultra-high-field functional magnetic resonance imaging. Lastly, the DNN generates saliency maps and extracts visual features, demonstrating that certain face parts, like the mouth or the eyes, provide higher discriminative information than others as a function of emotional expressions like happiness and sadness. The present findings support the contention that the SC possesses the necessary infrastructure to analyse the visual features that define facial emotional stimuli even without additional processing stages in the visual cortex or in 'limbic' areas. This article is part of the theme issue 'Cracking the laugh code: laughter through the lens of biology, psychology and neuroscience'.
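As a companion illustration, a convolutional network can be given a built-in low-spatial-frequency bias of the kind the abstract attributes to magnocellular dominance. This is only a toy sketch, not the authors' architecture; the layer sizes, kernel widths, and pooling choices are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RetinoCollicularToy(nn.Module):
    """Toy CNN with coarse first-layer filters and aggressive average
    pooling, approximating a low-spatial-frequency (magnocellular-like)
    bias; six outputs for the six basic emotions."""
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=11, stride=2, padding=5),  # coarse "retinal" filters
            nn.ReLU(),
            nn.AvgPool2d(4),  # blurring downsample: discards high spatial frequencies
            nn.Conv2d(16, 32, kernel_size=5, padding=2),            # shallow "collicular" stage
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = RetinoCollicularToy()
logits = model(torch.randn(8, 1, 128, 128))  # batch of grayscale face images
```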
Affiliation(s)
- Carlos Andrés Méndez: Department of Psychology, University of Torino, Via Verdi 10, Torino 10124, Italy
- Alessia Celeghin: Department of Psychology, University of Torino, Via Verdi 10, Torino 10124, Italy
- Matteo Diano: Department of Psychology, University of Torino, Via Verdi 10, Torino 10124, Italy
- Davide Orsenigo: Department of Psychology, University of Torino, Via Verdi 10, Torino 10124, Italy
- Brian Ocak: Department of Psychology, University of Torino, Via Verdi 10, Torino 10124, Italy; Section of Cognitive Neurophysiology and Imaging, National Institute of Mental Health, 49 Convent Drive, Bethesda, MD 20892, USA
- Marco Tamietto: Department of Psychology, University of Torino, Via Verdi 10, Torino 10124, Italy; Department of Medical and Clinical Psychology, and CoRPS - Center of Research on Psychology in Somatic diseases, Tilburg University, PO Box 90153, 5000 LE Tilburg, The Netherlands
8. Schmid I, Witkower Z, Götz FM, Stieger S. Registered report: Social face evaluation: ethnicity-specific differences in the judgement of trustworthiness of faces and facial parts. Sci Rep 2022;12:18311. PMID: 36316450; PMCID: PMC9622746; DOI: 10.1038/s41598-022-22709-9.
Abstract
Social face evaluation is a common and consequential element of everyday life based on the judgement of trustworthiness. However, the particular facial regions that guide such trustworthiness judgements are largely unknown. It is also unclear whether different facial regions are consistently utilized to guide judgments for different ethnic groups, and whether previous exposure to specific ethnicities in one's social environment has an influence on trustworthiness judgements made from faces or facial regions. This registered report addressed these questions through a global online survey study that recruited Asian, Black, Latino, and White raters (N = 4580). Raters were shown full faces and specific parts of the face for an ethnically diverse, sex-balanced set of 32 targets and rated targets' trustworthiness. Multilevel modelling showed that in forming trustworthiness judgements, raters relied most strongly on the eyes (with no substantial information loss vis-à-vis full faces). Corroborating ingroup-outgroup effects, raters rated faces and facial parts of targets with whom they shared their ethnicity, sex, or eye color as significantly more trustworthy. Exposure to ethnic groups in raters' social environment predicted trustworthiness ratings of other ethnic groups in nuanced ways. That is, raters from the ambient ethnic majority provided slightly higher trustworthiness ratings for stimuli of their own ethnicity compared to minority ethnicities. In contrast, raters from an ambient ethnic minority (e.g., immigrants) provided substantially lower trustworthiness ratings for stimuli of the ethnic majority. Taken together, the current study provides a new window into the psychological processes underlying social face evaluation and its cultural generalizability. PROTOCOL REGISTRATION: The stage 1 protocol for this Registered Report was accepted in principle on 7 January 2022. The protocol, as accepted by the journal, can be found at: https://doi.org/10.6084/m9.figshare.18319244.
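The multilevel modelling step described above can be sketched as a random-intercept regression, for instance with statsmodels. The data file, column names, and predictor set below are hypothetical placeholders, not the authors' specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per rating, with columns
# rater_id, trust_rating, region (eyes/nose/mouth/full), same_ethnicity, ...
df = pd.read_csv("trust_ratings.csv")

model = smf.mixedlm(
    "trust_rating ~ C(region) + same_ethnicity + same_sex + same_eye_color",
    data=df,
    groups=df["rater_id"],  # random intercept per rater (ratings nested in raters)
)
print(model.fit().summary())
```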
Affiliation(s)
- Irina Schmid: Department of Psychology and Psychodynamics, Karl Landsteiner University of Health Sciences, Krems an der Donau, Austria
- Zachary Witkower: Department of Psychology, University of Toronto, Toronto, Canada
- Friedrich M. Götz: Department of Psychology, University of British Columbia, Vancouver, Canada; Institute of Personality and Social Research, University of California, Berkeley, USA
- Stefan Stieger: Department of Psychology and Psychodynamics, Karl Landsteiner University of Health Sciences, Krems an der Donau, Austria
9. Roberts SC, Třebická Fialová J, Sorokowska A, Langford B, Sorokowski P, Třebický V, Havlíček J. Emotional expression in human odour. Evol Hum Sci 2022;4:e44. PMID: 37588919; PMCID: PMC10426192; DOI: 10.1017/ehs.2022.44.
Abstract
Recent work has demonstrated that human body odour alters with changing emotional state and that emotionally laden odours can affect the physiology and behaviour of people exposed to them. Here we review these discoveries, which we believe add to a growing recognition that the human sense of smell and its potential role in social interactions have been underappreciated. However, we also critically evaluate the current evidence, with a particular focus on methodology and the interpretation of emotional odour studies. We argue that while the evidence convincingly indicates that humans retain a capacity for olfactory communication of emotion, the extent to which this occurs in ordinary social interaction remains an open question. Future studies should place fewer restrictions on participant selection and lifestyle and adopt more realistic experimental designs. We also need to devote more consideration to underlying mechanisms and to recognise the constraints that these may place on effective communication. Finally, we outline some promising approaches to address these issues, and raise some broader theoretical questions that such approaches may help us to answer.
Affiliation(s)
- Ben Langford: UK Centre for Ecology and Hydrology, Penicuik, UK
- Vít Třebický: Faculty of Physical Education and Sport, Charles University, Prague, Czech Republic
- Jan Havlíček: Faculty of Science, Charles University, Prague, Czech Republic
10. Comunicación no verbal de emociones: variables sociodemográficas y ventaja endogrupal [Nonverbal communication of emotions: sociodemographic variables and in-group advantage]. Revista Iberoamericana de Psicología 2022. DOI: 10.33881/2027-1786.rip.15209.
Abstract
In the field of nonverbal communication of emotions, there is still debate about the universality of emotional expressions and the effect that culture has on them. Two theories currently attempt to explain this phenomenon: neurocultural theory and dialect theory. Both focus on explaining the nonverbal communication of emotions, but the former centers on universal aspects, whereas the latter centers on culture. The aim of the present study was to investigate the in-group advantage within a culture. A quasi-experiment was designed in which 107 participants were asked to indicate the emotion expressed in 42 stimuli presented in three different formats. The results indicate that this advantage exists among women and young people, illustrating the effects of culture on this phenomenon.
11. The Effect of Mouth-Opening on Recognition of Facial Expressions in the NimStim Set: An Evaluation from Chinese College Students. J Nonverbal Behav 2022. DOI: 10.1007/s10919-022-00417-2.
12. Cooper H, Brar A, Beyaztas H, Jennings BJ, Bennetts RJ. The effects of face coverings, own-ethnicity biases, and attitudes on emotion recognition. Cogn Res Princ Implic 2022;7:57. PMID: 35780221; PMCID: PMC9250564; DOI: 10.1186/s41235-022-00400-x.
Abstract
As a result of the COVID-19 pandemic, face coverings were introduced as a safety measure in certain environments in England, and some research suggests that they can affect emotion recognition. Factors such as own-ethnicity bias (e.g., whether the people perceiving and expressing emotions are of the same ethnicity) and social biases are also known to influence emotion recognition. However, it is unclear whether these factors interact with face coverings to affect emotion recognition. Therefore, this study examined the effects of face coverings, own-ethnicity biases, and attitudes on emotion recognition accuracy. In this study, 131 participants viewed masked and unmasked emotional faces varying in ethnicity and completed a questionnaire on their attitudes towards face masks. We found that emotion recognition was associated with masks and attitudes: accuracy was lower in masked than unmasked conditions, and attitudes towards masks worn inside and outside were associated with emotion recognition. However, a match between perceiver and stimulus ethnicity did not have a significant effect on emotion recognition. Ultimately, our results suggest that masks, and negative attitudes towards them, were associated with poorer emotion recognition. Future research should explore different mask-wearing behaviours and possible in-group/out-group biases and their interaction with other social cues.
Affiliation(s)
- Holly Cooper: Division of Psychology, College of Health, Medicine, and Life Sciences, Brunel University London, Kingston Lane, Uxbridge, UB8 3PH, UK
- Amrit Brar: Division of Psychology, College of Health, Medicine, and Life Sciences, Brunel University London, Kingston Lane, Uxbridge, UB8 3PH, UK
- Hazel Beyaztas: Division of Psychology, College of Health, Medicine, and Life Sciences, Brunel University London, Kingston Lane, Uxbridge, UB8 3PH, UK
- Ben J Jennings: Division of Psychology, College of Health, Medicine, and Life Sciences, Brunel University London, Kingston Lane, Uxbridge, UB8 3PH, UK
- Rachel J Bennetts: Division of Psychology, College of Health, Medicine, and Life Sciences, Brunel University London, Kingston Lane, Uxbridge, UB8 3PH, UK
13. Lennie TM, Eerola T. The CODA Model: A Review and Skeptical Extension of the Constructionist Model of Emotional Episodes Induced by Music. Front Psychol 2022;13:822264. PMID: 35496245; PMCID: PMC9043863; DOI: 10.3389/fpsyg.2022.822264.
Abstract
This paper discusses contemporary advancements in the affective sciences (described together as skeptical theories) that can inform the music-emotion literature. Key concepts in these theories are outlined, highlighting their points of agreement and disagreement. This summary shows the importance of appraisal within the emotion process, places greater emphasis on goal-directed accounts of (emotion) behavior, and highlights the need to move away from discrete emotion "folk" concepts and toward the study of an emotional episode and its components. Consequently, three contemporary music emotion theories (BRECVEMA, the Multifactorial Process Approach, and a Constructionist Account) are examined through a skeptical lens. This critique highlights an over-reliance upon categorization and a lack of acknowledgment of appraisal processes, specifically goal-directed appraisal, in examining how individual experiences of music emerge in different contexts. Based on this critique of current music-emotion models, we present our skeptically informed CODA (Constructivistly-Organised Dimensional-Appraisal) model. This model addresses the skeptical limitations of existing theories, reinstates goal-directed appraisal as central to what makes music relevant and meaningful to an individual in different contexts, and brings together different theoretical frameworks into a single model. From the development of the CODA model, several hypotheses are proposed and applied to musical contexts. These hypotheses address theoretical issues such as acknowledging individual and contextual differences in emotional intensity and valence, differentiating between induced and perceived emotions, and distinguishing utilitarian from aesthetic emotions. We conclude with a section of recommendations for future research. Altogether, this theoretical critique and the proposed model point toward a positive future direction for music-emotion science, one where researchers can take forward testable predictions about what makes music relevant and meaningful to an individual.
Affiliation(s)
- Thomas M Lennie: Department of Music, Durham University, Durham, United Kingdom
- Tuomas Eerola: Department of Music, Durham University, Durham, United Kingdom
14. Fang X, Sauter DA, Heerdink MW, van Kleef GA. Culture Shapes the Distinctiveness of Posed and Spontaneous Facial Expressions of Anger and Disgust. J Cross Cult Psychol 2022. DOI: 10.1177/00220221221095208.
Abstract
There is a growing consensus that culture influences the perception of facial expressions of emotion. However, relatively few studies have examined whether and how culture shapes the production of emotional facial expressions. Drawing on prior work on cultural differences in communication styles, we tested the prediction that people from the Netherlands (a low-context culture) produce facial expressions that are more distinct across emotions compared to people from China (a high-context culture). Furthermore, we examined whether the degree of distinctiveness varies across posed and spontaneous expressions. Dutch and Chinese participants were instructed to either pose facial expressions of anger and disgust, or to share autobiographical events that elicited spontaneous expressions of anger or disgust. Using a supervised machine learning approach to categorize expressions based on the patterns of activated facial action units, we showed that both posed and spontaneous facial expressions of anger and disgust were more distinct when produced by Dutch compared to Chinese participants. Yet, the distinctiveness of posed and spontaneous expressions differed in their sources. The difference in the distinctiveness of posed expressions appears to be due to a larger array of facial expression prototypes for each emotion in Chinese culture than in Dutch culture. The difference in the distinctiveness of spontaneous expressions, however, appears to reflect the greater similarity of expressions of anger and disgust from the same Chinese individual than from the same Dutch individual. The implications of these findings are discussed in relation to cross-cultural emotion communication, including via cultural products.
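The supervised machine learning step mentioned above can be sketched as a classifier over facial action unit (AU) activations, where higher cross-validated accuracy in separating anger from disgust indicates more distinct expressions. The file, feature, and label names are hypothetical, not the authors' pipeline.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("au_activations.csv")  # hypothetical: one row per expression clip
X = df.filter(like="AU")                # e.g. AU04, AU09, AU10 ... intensities
y = df["emotion"]                       # "anger" or "disgust"

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean classification accuracy: {scores.mean():.2f}")  # higher = more distinct
```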
Affiliation(s)
- Xia Fang: Zhejiang University, Hangzhou, China
15. Dixson BJW, Spiers T, Miller PA, Sidari MJ, Nelson NL, Craig BM. Facial hair may slow detection of happy facial expressions in the face in the crowd paradigm. Sci Rep 2022;12:5911. PMID: 35396450; PMCID: PMC8993935; DOI: 10.1038/s41598-022-09397-1.
Abstract
Human visual systems have evolved to extract ecologically relevant information from complex scenery. In some cases, the face in the crowd visual search task demonstrates an anger superiority effect, where anger is allocated preferential attention. Across three studies (N = 419), we tested whether facial hair guides attention in visual search and influences the speed of detecting angry and happy facial expressions in large arrays of faces. In Study 1, participants were faster to search through clean-shaven crowds and detect bearded targets than to search through bearded crowds and detect clean-shaven targets. In Study 2, targets were angry and happy faces presented against neutral backgrounds, and the facial hair of the target faces was manipulated. An anger superiority effect emerged that was augmented by the presence of facial hair, owing to the slower detection of happiness on bearded faces. In Study 3, targets were happy and angry faces presented against either bearded or clean-shaven backgrounds, and the facial hair of the background faces was systematically manipulated. A significant anger superiority effect was revealed, although it was not moderated by the target's facial hair; rather, the effect was larger against clean-shaven than bearded backgrounds. Together, the results suggest that facial hair does influence the detection of emotional expressions in visual search. However, rather than facilitating an anger superiority effect as part of a potential threat-detection system, facial hair may reduce the detection of happy faces within the face in the crowd paradigm.
Affiliation(s)
- Barnaby J W Dixson: School of Psychology, The University of Queensland, St. Lucia, QLD, 4067, Australia; School of Health and Behavioural Sciences, University of the Sunshine Coast, Sippy Downs, 4502, Australia
- Tamara Spiers: School of Psychology, The University of Queensland, St. Lucia, QLD, 4067, Australia
- Paul A Miller: School of Psychology, The University of Queensland, St. Lucia, QLD, 4067, Australia
- Morgan J Sidari: School of Psychology, The University of Queensland, St. Lucia, QLD, 4067, Australia
- Nicole L Nelson: School of Psychology, The University of Queensland, St. Lucia, QLD, 4067, Australia; School of Psychology, University of Adelaide, Adelaide, SA, 5005, Australia
- Belinda M Craig: School of Population Health, Curtin University, Bentley, WA, 6102, Australia; Faculty of Health Sciences and Medicine, Bond University, Robina, QLD, 4229, Australia
16. Szameitat DP, Szameitat AJ, Wildgruber D. Vocal Expression of Affective States in Spontaneous Laughter reveals the Bright and the Dark Side of Laughter. Sci Rep 2022;12:5613. PMID: 35379847; PMCID: PMC8980048; DOI: 10.1038/s41598-022-09416-1.
Abstract
It has been shown that the acoustical signal of posed laughter can convey affective information to the listener. However, because posed and spontaneous laughter differ in a number of significant aspects, it is unclear whether affective communication generalises to spontaneous laughter. To answer this question, we created a stimulus set of 381 spontaneous laughter audio recordings, produced by 51 different speakers, representing different types of laughter. In Experiment 1, 159 participants were presented with these audio recordings, without any further information about the situational context of the speakers, and asked to classify the laughter sounds. Results showed that joyful, tickling, and schadenfreude laughter could be classified significantly above chance level. In Experiment 2, 209 participants were presented with a subset of 121 laughter recordings correctly classified in Experiment 1 and asked to rate the laughter along four emotional dimensions: arousal, dominance, sender's valence, and receiver-directed valence. Results showed that the laughter types differed significantly in their ratings on all dimensions. Joyful laughter and tickling laughter both showed a positive sender's valence and receiver-directed valence, with tickling laughter having particularly high arousal. Schadenfreude had a negative receiver-directed valence and a high dominance, thus providing empirical evidence for the existence of a dark side in spontaneous laughter. The present results suggest that, with the evolution of human social communication, laughter diversified from the former play signal of non-human primates into a much more fine-grained signal that can serve a multitude of social functions to regulate group structure and hierarchy.
Affiliation(s)
- Diana P Szameitat: Department of Psychiatry and Psychotherapy, University of Tübingen, Calwerstraße 14, 72076, Tübingen, Germany; Division of Psychology, Brunel University London, Uxbridge, UK
- André J Szameitat: Division of Psychology, Brunel University London, Uxbridge, UK; Centre for Cognitive Neuroscience, Division of Psychology, Department of Life Sciences, College of Health, Medicine and Life Sciences, Brunel University London, Kingston Lane, Uxbridge, UB8 3PH, UK
- Dirk Wildgruber: Department of Psychiatry and Psychotherapy, University of Tübingen, Calwerstraße 14, 72076, Tübingen, Germany
17. Rostovtseva VV, Mezentseva AA, Butovskaya ML. Perception of Emergent Leaders' Faces and Evolution of Social Cheating: Cross-Cultural Experiments. Evol Psychol 2022;20:14747049221081733. PMID: 35238674; PMCID: PMC10355292; DOI: 10.1177/14747049221081733.
Abstract
The aim of the present study was to investigate whether the neutral faces of individuals with different propensities for leadership convey information about their personal qualities, and whether sex, population, and social environment affect facial perception. This study builds on a previous experiment (Rostovtseva et al., 2022) in which emergent leadership in the context of male group cooperation was investigated in Buryats (a Mongolian population of Siberia). In the previous study, three behavioural types of participants were identified: non-leaders, prosocial leaders, and leaders-cheaters, each with a distinguishing set of personality, communicative, and cooperative features. In the current study, three composite portraits representing the different leadership qualities of Buryat men from the prior experiment were created. The composites were then scored on a number of traits by male and female Russian and Buryat independent raters (N = 435). The results revealed that ratings of masculinity, physical strength, dominance, competitiveness, and perceived leadership were positively correlated, while perceived trustworthiness was negatively associated with these traits. However, the composite portraits of actual leaders were generally scored as more trustworthy, masculine, and physically strong, with the prosocial leaders' portrait being perceived as healthier than the others. Surprisingly, the composite of leaders-cheaters was scored as the most trustworthy and generous, and the least competitive, of all. No significant effects of raters' sex, origin, or degree of familiarity with Mongolian appearance were revealed. We conclude that static facial morphology contributes to appearing trustworthy, which may allow exploitation of others.
Affiliation(s)
- Anna A. Mezentseva: Institute of Ethnology and Anthropology, Russian Academy of Sciences, Moscow 119334, Russia
- Marina L. Butovskaya: Institute of Ethnology and Anthropology, Russian Academy of Sciences, Moscow 119334, Russia; Russian State University for the Humanities, Moscow 125047, Russia
18. Matt S, Dzhelyova M, Maillard L, Lighezzolo-Alnot J, Rossion B, Caharel S. The rapid and automatic categorization of facial expression changes in highly variable natural images. Cortex 2021;144:168-184. PMID: 34666300; DOI: 10.1016/j.cortex.2021.08.005.
Abstract
Emotional expressions are quickly and automatically read from human faces under natural viewing conditions. Yet categorization of facial expressions is typically measured in experimental contexts with homogeneous sets of face stimuli. Here we evaluated how the six basic facial emotions (Fear, Disgust, Happiness, Anger, Surprise, and Sadness) can be rapidly and automatically categorized with faces varying in head orientation, lighting condition, identity, gender, age, ethnic origin, and background context. High-density electroencephalography was recorded in 17 participants viewing 50-s sequences of natural, variable images of neutral-expression faces alternating at a 6 Hz rate. Every fifth stimulus (i.e., at 1.2 Hz) was a variable natural image of one of the six basic expressions. Despite the wide physical variability across images, a significant F/5 = 1.2 Hz response and its harmonics (e.g., 2F/5 = 2.4 Hz) were observed for all expression changes at the group level and in every individual participant. Facial categorization responses were found mainly over occipito-temporal sites, with distinct hemispheric lateralization and cortical topographies across the different expressions. Specifically, a stronger response was found to Sadness categorization, especially over the left hemisphere, compared with Fear and Happiness, together with a right-hemispheric dominance for the categorization of Fearful faces. Importantly, these differences were specific to upright faces, ruling out the contribution of low-level visual cues. Overall, these observations point to robust, rapid, and automatic facial expression categorization processes in the human brain.
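The frequency-tagging logic described above (a 6 Hz base rate with expression changes every fifth image, i.e. 1.2 Hz) can be sketched as a spectrum analysis that pulls out the tagged frequency and its harmonics. This is a generic illustration rather than the authors' pipeline; the sampling rate and baseline window are assumptions.

```python
import numpy as np

def tagged_response(eeg, fs=512, f_tag=1.2, n_harmonics=4, n_neighbors=10):
    """Baseline-corrected amplitude at f_tag and its harmonics for one
    channel of one stimulation sequence (eeg: 1-D array)."""
    spectrum = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1 / fs)
    out = {}
    for k in range(1, n_harmonics + 1):
        idx = int(np.argmin(np.abs(freqs - k * f_tag)))
        # baseline: mean of neighboring bins, excluding the tagged bin itself
        neighbors = np.r_[spectrum[idx - n_neighbors:idx],
                          spectrum[idx + 1:idx + n_neighbors + 1]]
        out[k * f_tag] = spectrum[idx] - neighbors.mean()
    return out
```

An amplitude clearly above the neighboring-bin baseline at 1.2 Hz and its harmonics is the signature of the expression-categorization response the abstract reports.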
Affiliation(s)
- Stéphanie Matt: Université de Lorraine, 2LPN, Nancy, France; Université de Lorraine, Laboratoire INTERPSY, Nancy, France
- Milena Dzhelyova: Université Catholique de Louvain, Institute of Research in Psychological Science, Louvain-la-Neuve, Belgium
- Louis Maillard: Université de Lorraine, CNRS, CRAN, Nancy, France; Université de Lorraine, CHRU-Nancy, Service de Neurologie, Nancy, France
- Bruno Rossion: Université Catholique de Louvain, Institute of Research in Psychological Science, Louvain-la-Neuve, Belgium; Université de Lorraine, CNRS, CRAN, Nancy, France; Université de Lorraine, CHRU-Nancy, Service de Neurologie, Nancy, France
- Stéphanie Caharel: Université de Lorraine, 2LPN, Nancy, France; Institut Universitaire de France, Paris, France
19. Obayashi Y, Uehara S, Kokuwa R, Otaka Y. Quantitative Evaluation of Facial Expression in a Patient With Minimally Conscious State After Severe Traumatic Brain Injury. J Head Trauma Rehabil 2021;36:E337-E344. PMID: 33741824; DOI: 10.1097/htr.0000000000000666.
Abstract
OBJECTIVE: To investigate whether automatic facial expression analysis can quantify differences in the intensity of facial responses to different affective stimuli in a patient in a minimally conscious state (MCS). METHODS: We filmed the facial responses of a patient with MCS during the delivery of three 1-minute auditory stimuli: audio clips from comedy movies, a nurse talking in a humorous manner, and a recitation from a novel (the comedy, nurse, and recitation conditions, respectively). These measurements were repeated at least 13 times per condition on different days over approximately 10 months. The intensity of being "happy" was estimated from the smiling face using software called FaceReader. Intensity was compared across the five conditions, including two resting conditions (pre- and post-stimulus), using the Kruskal-Wallis test and the Dunn-Bonferroni test for multiple comparisons. RESULTS: Significantly higher "happy" intensities were found in the comedy and nurse conditions than in the other conditions, with no significant differences between the recitation and pre- or post-stimulus conditions. These findings indicate that automated facial expression analysis can quantify differences in context-dependent facial responses in the patient recruited in this study. CONCLUSIONS: This case study demonstrates the feasibility of using automated facial expression analysis to quantitatively evaluate differences in facial expressions and their corresponding emotions in a single patient with MCS.
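The statistical comparison described above can be sketched with SciPy's Kruskal-Wallis test followed by Dunn's test with Bonferroni correction (here via the third-party scikit-posthocs package). The intensity arrays below are random placeholders, not the patient's data.

```python
import numpy as np
from scipy.stats import kruskal
import scikit_posthocs as sp

rng = np.random.default_rng(0)
conditions = {name: rng.random(13) for name in
              ("comedy", "nurse", "recitation", "pre", "post")}

H, p = kruskal(*conditions.values())  # omnibus test across the five conditions
print(f"Kruskal-Wallis H = {H:.2f}, p = {p:.4f}")

posthoc = sp.posthoc_dunn(list(conditions.values()), p_adjust="bonferroni")
print(posthoc)  # pairwise Dunn-Bonferroni p-values
```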
Affiliation(s)
- Yota Obayashi: Department of Rehabilitation, Fujita Health University Hospital, Aichi, Japan (Dr Obayashi and Mr Kokuwa); Faculty of Rehabilitation, Fujita Health University School of Health Sciences, Aichi, Japan (Dr Uehara); Department of Rehabilitation Medicine I, Fujita Health University School of Medicine, Aichi, Japan (Dr Otaka)
20. Lorette P. Investigating Emotion Perception via the Two-Dimensional Affect and Feeling Space: An Example of a Cross-Cultural Study Among Chinese and Non-Chinese Participants. Front Psychol 2021;12:662610. PMID: 34366981; PMCID: PMC8343541; DOI: 10.3389/fpsyg.2021.662610.
Abstract
The categorical approach to cross-cultural emotion perception research has mainly relied on constrained experimental tasks, which have arguably biased previous findings and attenuated cross-cultural differences. In the constructionist approach, on the other hand, conclusions about the universal nature of valence and arousal have mainly been drawn indirectly, based on participants' word-matching or free-sorting behaviors; studies based on participants' continuous valence and arousal ratings are very scarce. When it comes to self-reports of specific emotion perception, constructionists tend to rely on free labeling, which has its own limitations. In an attempt to move beyond the limitations of previous methods, a new instrument called the Two-Dimensional Affect and Feeling Space (2DAFS) has been developed. The 2DAFS is a useful, innovative, and user-friendly instrument that can easily be integrated into online surveys and allows for the quick and flexible collection of both continuous valence and arousal ratings and categorical emotion perception data. To illustrate the usefulness of this tool, a cross-cultural emotion perception study based on the 2DAFS is reported. The results indicate cross-cultural variation in valence and arousal perception, suggesting that the minimal universality hypothesis might need to be nuanced.
Affiliation(s)
- Pernelle Lorette: Department of English Studies, University of Mannheim, Mannheim, Germany
21. Wang X, Han S. Processing of facial expressions of same-race and other-race faces: distinct and shared neural underpinnings. Soc Cogn Affect Neurosci 2021;16:576-592. PMID: 33624818; PMCID: PMC8138088; DOI: 10.1093/scan/nsab027.
Abstract
People understand others' emotions quickly from their facial expressions. However, facial expressions of ingroup and outgroup members may signal different social information and thus be mediated by distinct neural activities. We investigated whether there are distinct neuronal responses to fearful and happy expressions of same-race (SR) and other-race (OR) faces. We recorded the electroencephalogram (EEG) from Chinese adults as they viewed an adaptor face (with fearful/neutral expressions in Experiment 1 but happy/neutral expressions in Experiment 2) and a target face (with fearful expressions in Experiment 1 but happy expressions in Experiment 2) presented in rapid succession. We found that both fearful and happy (vs neutral) adaptor faces increased the amplitude of a frontocentral positivity (P2). However, a fearful but not happy (vs neutral) adaptor face decreased the P2 amplitudes to target faces, and this repetition suppression (RS) effect occurred when adaptor and target faces were of the same race but not when they were of different races. RS was observed in two late parietal/central positive activities to fearful/happy target faces, which, however, occurred regardless of whether adaptor and target faces were of the same or different races. Our findings suggest that early affective processing of fearful expressions may engage distinct neural activities for SR and OR faces.
Affiliation(s)
- Xuena Wang: School of Psychological and Cognitive Sciences, PKU-IDG/McGovern Institute for Brain Research, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100080, China
- Shihui Han: School of Psychological and Cognitive Sciences, PKU-IDG/McGovern Institute for Brain Research, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100080, China
22. Guan H, Wei H, Hauer RJ, Liu P. Facial expressions of Asian people exposed to constructed urban forests: Accuracy validation and variation assessment. PLoS One 2021;16:e0253141. PMID: 34138924; PMCID: PMC8211262; DOI: 10.1371/journal.pone.0253141.
Abstract
An outcome of building sustainable urban forests is that people's well-being improves when they are exposed to trees. Facial expressions directly represent one's inner emotions and can be used to assess real-time perception. The emergence of and changes in the facial expressions of forest visitors are implicit processes; moreover, the reserved character of Asian people makes instrument-based ratings necessary for accurate expression recognition. In this study, a dataset was established with 2,886 randomly photographed faces of visitors at a constructed urban forest park and at a promenade during summertime in Shenyang City, Northeast China. Six experts were invited to choose 160 photos in total, with 20 images representing each of eight typical expressions: angry, contempt, disgusted, happy, neutral, sad, scared, and surprised. The FireFACE ver. 3.0 software was used for hit-ratio validation as an accuracy measurement (ac.), matching machine-recognized photos against those identified by the experts. According to a Kruskal-Wallis test against the averaged scores in 20 recently published papers, the contempt (ac. = 0.40%, P = 0.0038) and scared (ac. = 25.23%, P = 0.0018) expressions did not pass the validation test. Both happy and sad expression scores were higher in forests than in promenades, but there was no difference in the net positive response (happy minus sad) between locations. Men had a higher happy score but a lower disgusted score in forests than in promenades; men also had a higher angry score in forests. We conclude that FireFACE can be used for analyzing facial expressions in Asian people within urban forests. Women are encouraged to visit urban forests rather than promenades to elicit more positive emotions.
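The hit-ratio validation described above amounts to checking, per expression, how often the software's top-scoring label matches the expert label. Below is a minimal sketch with hypothetical data structures rather than FireFACE's actual output format.

```python
from collections import defaultdict

def hit_ratio(records):
    """records: (expert_label, scores) pairs, where scores maps each of
    the eight expressions to a value; returns per-label accuracy."""
    hits, totals = defaultdict(int), defaultdict(int)
    for expert_label, scores in records:
        predicted = max(scores, key=scores.get)  # top-scoring expression
        totals[expert_label] += 1
        hits[expert_label] += (predicted == expert_label)
    return {label: hits[label] / totals[label] for label in totals}

example = [("happy", {"happy": 0.8, "sad": 0.1, "neutral": 0.1}),
           ("scared", {"surprised": 0.6, "scared": 0.4})]
print(hit_ratio(example))  # {'happy': 1.0, 'scared': 0.0}
```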
Affiliation(s)
- Haoming Guan: School of Geographical Sciences, Northeast Normal University, Changchun, China
- Hongxu Wei: Key Laboratory of Wetland Ecology and Environment, Northeast Institute of Geography and Agroecology, Chinese Academy of Sciences, Changchun, China; University of Chinese Academy of Sciences, Beijing, China
- Richard J. Hauer: College of Natural Resources, University of Wisconsin-Stevens Point, Stevens Point, Wisconsin, United States of America
- Ping Liu: College of Forestry, Shenyang Agricultural University, Shenyang, China
23. Sitting in Judgment: How Body Posture Influences Deception Detection and Gazing Behavior. Behav Sci (Basel) 2021;11(6):85. PMID: 34200633; PMCID: PMC8229315; DOI: 10.3390/bs11060085.
Abstract
Body postures can affect how we process and attend to information. Here, a novel effect of adopting an open or closed posture on the ability to detect deception was investigated. It was hypothesized that the posture adopted by judges would affect their social acuity, resulting in differences in the detection of nonverbal behavior (i.e., microexpression recognition) and the discrimination of deceptive and truthful statements. In Study 1, adopting an open posture produced higher accuracy for detecting naturalistic lies, but no difference was observed in the recognition of brief facial expressions as compared to adopting a closed posture; trait empathy was found to have an additive effect on posture, with more empathic judges having higher deception detection scores. In Study 2, with the use of an eye-tracker, posture effects on gazing behavior when judging both low-stakes and high-stakes lies were measured. Sitting in an open posture reduced judges’ average dwell times looking at senders, and in particular, the amount and length of time they focused on their hands. The findings suggest that simply shifting posture can impact judges’ attention to visual information and veracity judgments (Mg = 0.40, 95% CI (0.03, 0.78)).
24. Okubo M, Oyama T. Do you know your best side? Awareness of lateral posing asymmetries. Laterality 2021;27:6-20. PMID: 34088246; DOI: 10.1080/1357650x.2021.1938105.
Abstract
People tend to show the left cheek when broadly expressing emotions and the right cheek when hiding them, because emotions are expressed more strongly on the left side of the face than on the right. The present study investigated the level of awareness of left- and right-cheek poses using the method of structural knowledge attributions. When asked to broadly express emotions for a family portrait, right-handed participants were more likely to show the left cheek than the right. Conversely, when asked to conceal emotions and convey a calm, reassuring attitude as a scientist, they were more likely to show the right cheek. After the posing session, participants selected the conscious level of their posing knowledge from five categories: random, intuition, familiarity, recollection, and rules. Most participants rated their knowledge as unconscious (i.e., as random, intuition, or familiarity), and this choice did not differ across posing orientations or posing instructions. These results suggest that although people do not have an acute awareness of their lateral posing preference, they reliably show one side of the face to express or hide emotions.
Affiliation(s)
- Matia Okubo: Department of Psychology, Senshu University, Kawasaki, Japan
- Takato Oyama: Department of Psychology, Senshu University, Kawasaki, Japan
25. Israelashvili J, Pauw LS, Sauter DA, Fischer AH. Emotion Recognition from Realistic Dynamic Emotional Expressions Cohere with Established Emotion Recognition Tests: A Proof-of-Concept Validation of the Emotional Accuracy Test. J Intell 2021;9:25. PMID: 34067013; PMCID: PMC8162550; DOI: 10.3390/jintelligence9020025.
Abstract
Individual differences in understanding other people's emotions have typically been studied with recognition tests using prototypical emotional expressions. These tests have been criticized for their use of posed, prototypical displays, raising the question of whether such tests tell us anything about the ability to understand spontaneous, non-prototypical emotional expressions. Here, we employ the Emotional Accuracy Test (EAT), which uses natural emotional expressions and defines recognition as the match between the emotion ratings of a target and a perceiver. In two preregistered studies (total N = 231), we compared performance on the EAT with two well-established tests of emotion recognition ability: the Geneva Emotion Recognition Test (GERT) and the Reading the Mind in the Eyes Test (RMET). We found significant overlap (r > 0.20) between individuals' performance in recognizing spontaneous emotions in naturalistic settings (EAT) and on posed (or enacted) non-verbal measures of emotion recognition (GERT, RMET), even when controlling for individual differences in verbal IQ. On average, however, participants reported enjoying the EAT more than the other tasks. Thus, the current research provides a proof-of-concept validation of the EAT as a useful measure for testing the understanding of others' emotions, a crucial feature of emotional intelligence. Further, our findings indicate that emotion recognition tests using prototypical expressions are valid proxies for measuring the understanding of others' emotions in more realistic everyday contexts.
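The convergent-validity analysis described above (correlating EAT scores with GERT and RMET while controlling for verbal IQ) can be sketched as a partial correlation, for example with the pingouin package. The data file and column names are hypothetical.

```python
import pandas as pd
import pingouin as pg

df = pd.read_csv("emotion_tests.csv")  # hypothetical columns: eat, gert, rmet, verbal_iq

for other in ("gert", "rmet"):
    res = pg.partial_corr(data=df, x="eat", y=other, covar="verbal_iq")
    print(other, res[["r", "p-val"]].to_string(index=False))
```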
Affiliation(s)
- Jacob Israelashvili
- Psychology Department, The Hebrew University of Jerusalem, Jerusalem 9190501, Israel
- Lisanne S. Pauw
- Department of Psychology, University of Münster, 48149 Münster, Germany
- Disa A. Sauter
- Faculty of Social and Behavioral Sciences, Department of Psychology, University of Amsterdam, 1001 NK Amsterdam, The Netherlands
- Agneta H. Fischer
- Faculty of Social and Behavioral Sciences, Department of Psychology, University of Amsterdam, 1001 NK Amsterdam, The Netherlands
26
Yousefi Heris A. Emotions and two senses of simulation. PHILOSOPHICAL PSYCHOLOGY 2021. [DOI: 10.1080/09515089.2021.1914831]
Affiliation(s)
- Ali Yousefi Heris
- Department of Philosophy, Shahid Beheshti University, Tehran, Iran
- School of Cognitive Science, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
27
Zloteanu M, Krumhuber EG, Richardson DC. Acting Surprised: Comparing Perceptions of Different Dynamic Deliberate Expressions. JOURNAL OF NONVERBAL BEHAVIOR 2020. [DOI: 10.1007/s10919-020-00349-9]
Abstract
People are accurate at classifying emotions from facial expressions but much poorer at determining whether such expressions are spontaneously felt or deliberately posed. We explored whether the method used by senders to produce an expression influences the decoder's ability to discriminate authenticity, drawing inspiration from two well-known acting techniques: the Stanislavski (internal) and Mimic (external) methods. We compared spontaneous surprise expressions in response to a jack-in-the-box (genuine condition) with posed displays of senders who focused either on their past affective state (internal condition) or on the outward expression (external condition). Although decoders performed better than chance at discriminating the authenticity of all expressions, their accuracy was lower in classifying external surprise compared with internal surprise. Decoders also found it harder to discriminate external surprise from spontaneous surprise and were less confident in their decisions, perceiving these displays to be similarly intense but less genuine-looking. The findings suggest that senders can voluntarily produce genuine-looking expressions of emotion with minimal effort, especially by mimicking a genuine expression. Implications for research on emotion recognition are discussed.
28
Malsert J, Tran K, Tran TAT, Ha-Vinh T, Gentaz E, Leuchter RHV. Cross-Cultural and Environmental Influences on Facial Emotional Discrimination Sensitivity in 9-Year-Old Children from Swiss and Vietnamese Schools. SWISS JOURNAL OF PSYCHOLOGY 2020. [DOI: 10.1024/1421-0185/a000240]
Abstract
The Other Race Effect (ORE), i.e., recognition facilitation for own-race faces, is a well-established phenomenon with broad evidence in adults and infants. Nevertheless, the ORE in older children is poorly understood, and even less so for emotional face processing. This research sampled 87 nine-year-old children from Vietnamese and Swiss schools. In two separate studies, we evaluated the children's ability to perceive the disappearance of emotions in Asian and Caucasian faces in an offset task. The first study evaluated an "emotional ORE" in Vietnamese-Asian, Swiss-Caucasian, and Swiss-Multicultural children. Offset times showed an emotional ORE in Vietnamese-Asian children living in an ethnically homogeneous environment, whereas the mixed ethnicities encountered by Swiss children seem to have balanced performance between face types. The second study compared socioemotionally trained versus untrained Vietnamese-Asian children. Vietnamese children showed a strong emotional ORE and tended to increase their sensitivity to emotion offset after training. Moreover, an effect of emotion consistent with previous observations in adults could suggest a cultural sensitivity to signs of disapproval. Taken together, the results suggest that 9-year-old children can present an emotional ORE, but that a heterogeneous environment or emotional training can strengthen face-processing abilities without reducing skills for their own group.
Affiliation(s)
- Jennifer Malsert
- SensoriMotor, Affective, and Social Development Lab, University of Geneva, Geneva, Switzerland
- Swiss Center for Affective Sciences, Campus Biotech, Geneva, Switzerland
- Khanh Tran
- Eurasia Foundation and Association for Special Education in Vietnam, Ho Chi Minh City, Vietnam
- Tu Anh Thi Tran
- University of Education, Hue University, Thua Thien Hue, Vietnam
- Tho Ha-Vinh
- Eurasia Foundation and Association for Special Education in Vietnam, Ho Chi Minh City, Vietnam
- Edouard Gentaz
- SensoriMotor, Affective, and Social Development Lab, University of Geneva, Geneva, Switzerland
- Swiss Center for Affective Sciences, Campus Biotech, Geneva, Switzerland
- Russia Ha-Vinh Leuchter
- Division of Development and Growth, Department of Pediatrics, University of Geneva, Geneva, Switzerland
29
Consistent behavioral and electrophysiological evidence for rapid perceptual discrimination among the six human basic facial expressions. COGNITIVE AFFECTIVE & BEHAVIORAL NEUROSCIENCE 2020; 20:928-948. [PMID: 32918269 DOI: 10.3758/s13415-020-00811-7]
Abstract
The extent to which the six basic human facial expressions perceptually differ from one another remains controversial. For instance, despite the importance of rapidly decoding fearful faces, this expression is often confused with others, such as surprise, in explicit behavioral categorization tasks. We quantified implicit visual discrimination among rapidly presented facial expressions with an oddball periodic visual stimulation approach combined with electroencephalography (EEG), testing for the relationship with explicit behavioral measures of facial emotion discrimination. We report robust facial expression discrimination responses bilaterally over the occipito-temporal cortex for each pairwise expression change. While fearful faces presented as repeated stimuli led to the smallest deviant responses from all other basic expressions, deviant fearful faces were well discriminated overall, and to a larger extent than expressions of sadness and anger. Expressions of happiness did not differ quantitatively as much in EEG as in behavioral subjective judgments, suggesting that the clear dissociation between happy and other expressions typically observed in behavioral studies reflects higher-order processes. However, this expression differed from all others in terms of scalp topography, pointing to a qualitative rather than quantitative difference. Despite this difference, overall, we report for the first time a tight relationship between the similarity matrices across facial expressions obtained for implicit EEG responses and explicit behavioral measures collected under the same temporal constraints, paving the way for new approaches to understanding facial expression discrimination in developmental, intercultural, and clinical populations.
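The final step described here, relating implicit and explicit measures, amounts to correlating two expression-by-expression similarity matrices over the same pairs. Below is a minimal sketch of that comparison; the 6x6 matrices and their values are random placeholders, not the study's data.

```python
import numpy as np
from itertools import combinations

emotions = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
rng = np.random.default_rng(0)

# Symmetric 6x6 discrimination matrices (placeholder values).
eeg = rng.random((6, 6))
eeg = (eeg + eeg.T) / 2                          # implicit (EEG) discrimination
behav = eeg + 0.1 * rng.standard_normal((6, 6))  # explicit (behavioral) counterpart

# Correlate the 15 unique off-diagonal expression pairs.
pairs = list(combinations(range(len(emotions)), 2))
eeg_vec = np.array([eeg[i, j] for i, j in pairs])
behav_vec = np.array([behav[i, j] for i, j in pairs])
r = np.corrcoef(eeg_vec, behav_vec)[0, 1]
print(f"EEG-behavior agreement across {len(pairs)} expression pairs: r = {r:.2f}")
```

Only the unique off-diagonal pairs enter the correlation, since the diagonal (an expression compared with itself) carries no discrimination information.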
30
Cowen AS, Keltner D. Universal facial expressions uncovered in art of the ancient Americas: A computational approach. SCIENCE ADVANCES 2020; 6:eabb1005. [PMID: 32875109 PMCID: PMC7438103 DOI: 10.1126/sciadv.abb1005]
Abstract
Central to the study of emotion is evidence concerning its universality, particularly the degree to which emotional expressions are similar across cultures. Here, we present an approach to studying the universality of emotional expression that rules out cultural contact and circumvents potential biases in survey-based methods: a computational analysis of apparent facial expressions portrayed in artwork created by members of cultures isolated from Western civilization. Using data-driven methods, we find that facial expressions depicted in 63 sculptures from the ancient Americas tend to accord with Western expectations for emotions that unfold in specific social contexts. Ancient American sculptures tend to portray at least five facial expressions in contexts predicted by Westerners, including "pain" in torture, "determination"/"strain" in heavy lifting, "anger" in combat, "elation" in social touch, and "sadness" in defeat, supporting the universality of these expressions.
Affiliation(s)
- Alan S. Cowen
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
- Dacher Keltner
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
31
Barker MS, Bidstrup EM, Robinson GA, Nelson NL. "Grumpy" or "furious"? Arousal of emotion labels influences judgments of facial expressions. PLoS One 2020; 15:e0235390. [PMID: 32609780 PMCID: PMC7329125 DOI: 10.1371/journal.pone.0235390]
Abstract
Whether language information influences recognition of emotion from facial expressions remains the subject of debate. The current studies investigate how variations in the emotion labels paired with expressions influence participants' judgments of the emotion displayed. Static (Study 1) and dynamic (Study 2) facial expressions depicting eight emotion categories were paired with emotion labels that systematically varied in arousal (low and high). Participants rated the arousal, valence, and dominance of expressions paired with labels; isolated faces and isolated labels were also rated. As predicted, the label presented influenced participants' judgments of the expressions. Across both studies, higher-arousal labels were associated with (1) higher ratings of arousal for sad, angry, and scared expressions, and (2) higher ratings of dominance for angry, proud, and disgusted expressions. These results indicate that emotion labels influence judgments of facial expressions.
Affiliation(s)
- Megan S. Barker
- School of Psychology, The University of Queensland, St Lucia, Brisbane, QLD, Australia
- Department of Neurology, Taub Institute for Research on Alzheimer’s Disease and the Aging Brain, Gertrude H. Sergievsky Center, Columbia University Medical Center, New York, NY, United States of America
- Emma M. Bidstrup
- School of Psychology, The University of Queensland, St Lucia, Brisbane, QLD, Australia
- Gail A. Robinson
- School of Psychology, The University of Queensland, St Lucia, Brisbane, QLD, Australia
- Queensland Brain Institute, The University of Queensland, St Lucia, Brisbane, QLD, Australia
- Nicole L. Nelson
- School of Psychology, The University of Queensland, St Lucia, Brisbane, QLD, Australia
32
Interactive situations reveal more about children's emotional knowledge. J Exp Child Psychol 2020; 198:104879. [PMID: 32590198 DOI: 10.1016/j.jecp.2020.104879]
Abstract
Research examining children's emotion judgments has generally used nonsocial tasks that do not resemble children's daily experiences in judging others' emotions. Here, younger children (4- to 6-year-olds) and older children (7- to 9-year-olds) participated in a socially interactive task in which an experimenter opened boxes and made an expression (happy, sad, scared, or disgusted) based on the object inside. Children guessed which of four objects (a sticker, a broken toy car, a spider, or toy poop) was in the box. Subsequently, children opened a set of boxes and generated facial expressions for the experimenter. Children also labeled the emotion elicited by the objects and by static facial expressions. Children's ability to guess which object caused the experimenter's expression increased with age but did not predict their ability to generate a recognizable expression. Children's demonstration of emotion knowledge also varied across tasks, suggesting that when emotion judgment tasks more closely mimic their daily experiences, children demonstrate broader emotion knowledge.
33
Bidet-Ildei C, Decatoire A, Gil S. Recognition of Emotions From Facial Point-Light Displays. Front Psychol 2020; 11:1062. [PMID: 32581934 PMCID: PMC7287185 DOI: 10.3389/fpsyg.2020.01062]
Abstract
Facial emotion recognition occupies a prominent place in emotion psychology. How perceivers recognize messages conveyed by faces can be studied either explicitly or implicitly, and with different kinds of facial stimuli. In the present study, we explored for the first time how facial point-light displays (PLDs) (i.e., biological motion with minimal perceptual properties) can elicit both explicit and implicit mechanisms of facial emotion recognition. Participants completed tasks of explicit or implicit facial emotion recognition from PLDs. Results showed that point-light stimuli are sufficient to allow facial emotion recognition, whether explicit or implicit. We argue that this finding could encourage the use of PLDs in research on the perception of emotional cues from faces.
Affiliation(s)
- Christel Bidet-Ildei
- Université de Poitiers, Poitiers, France
- Université de Tours, Tours, France
- Centre de Recherches sur la Cognition et l'Apprentissage, UMR 7295, Poitiers, France
- Centre National de la Recherche Scientifique (CNRS), Paris, France
- Arnaud Decatoire
- Université de Poitiers, Poitiers, France
- Centre National de la Recherche Scientifique (CNRS), Paris, France
- Institut Pprime UPR 3346, Poitiers, France
- Sandrine Gil
- Université de Poitiers, Poitiers, France
- Université de Tours, Tours, France
- Centre de Recherches sur la Cognition et l'Apprentissage, UMR 7295, Poitiers, France
- Centre National de la Recherche Scientifique (CNRS), Paris, France
34
Gendron M, Hoemann K, Crittenden AN, Mangola SM, Ruark GA, Barrett LF. Emotion Perception in Hadza Hunter-Gatherers. Sci Rep 2020; 10:3867. [PMID: 32123191 PMCID: PMC7051983 DOI: 10.1038/s41598-020-60257-2]
Abstract
It has long been claimed that certain configurations of facial movements are universally recognized as emotional expressions because they evolved to signal emotional information in situations that posed fitness challenges for our hunting and gathering hominin ancestors. Experiments from the last decade have called this particular evolutionary hypothesis into doubt by studying emotion perception in a wider sample of small-scale societies with discovery-based research methods. We replicate these newer findings in the Hadza of Northern Tanzania; the Hadza are semi-nomadic hunters and gatherers who live in tight-knit social units and collect wild foods for a large portion of their diet, making them a particularly relevant population for testing evolutionary hypotheses about emotion. Across two studies, we found little evidence of universal emotion perception. Rather, our findings are consistent with the hypothesis that people infer emotional meaning in facial movements using emotion knowledge embrained by cultural learning.
Affiliation(s)
- Maria Gendron
- Yale University, Department of Psychology, New Haven, USA.
- Katie Hoemann
- Northeastern University, Department of Psychology, Boston, USA
- Gregory A Ruark
- U.S. Army Research Institute for the Behavioral and Social Sciences, Foundational Science Research Unit (FSRU), Fort Belvoir, USA
- Lisa Feldman Barrett
- Northeastern University, Department of Psychology, Boston, USA
- Massachusetts General Hospital, Martinos Center for Biomedical Imaging and Department of Psychiatry, Boston, USA
35
Analysis of the efficacy and reliability of the Moodies app for detecting emotions through speech: Does it actually work? COMPUTERS IN HUMAN BEHAVIOR 2020. [DOI: 10.1016/j.chb.2019.106156]
36
Cowen A, Sauter D, Tracy JL, Keltner D. Mapping the Passions: Toward a High-Dimensional Taxonomy of Emotional Experience and Expression. Psychol Sci Public Interest 2019; 20:69-90. [PMID: 31313637 PMCID: PMC6675572 DOI: 10.1177/1529100619850176]
Abstract
What would a comprehensive atlas of human emotions include? For 50 years, scientists have sought to map emotion-related experience, expression, physiology, and recognition in terms of the "basic six": anger, disgust, fear, happiness, sadness, and surprise. Claims about the relationships between these six emotions and prototypical facial configurations have provided the basis for a long-standing debate over the diagnostic value of expression (for a review and the latest installment in this debate, see Barrett et al., p. 1). Building on recent empirical findings and methodologies, we offer an alternative conceptual and methodological approach that reveals a richer taxonomy of emotion. Dozens of distinct varieties of emotion are reliably distinguished by language, evoked in distinct circumstances, and perceived in distinct expressions of the face, body, and voice. Traditional models, both the basic six and the affective-circumplex model (valence and arousal), capture a fraction of the systematic variability in emotional response. In contrast, emotion-related responses (e.g., the smile of embarrassment, triumphant postures, sympathetic vocalizations, blends of distinct expressions) can be explained by richer models of emotion. Given these developments, we discuss why tests of a basic-six model of emotion are not tests of the diagnostic value of facial expression more generally. Determining the full extent of what facial expressions can tell us, marginally and in conjunction with other behavioral and contextual cues, will require mapping the high-dimensional, continuous space of facial, bodily, and vocal signals onto richly multifaceted experiences using large-scale statistical modeling and machine-learning methods.
Affiliation(s)
- Alan Cowen
- Department of Psychology, University of California, Berkeley
- Disa Sauter
- Faculty of Social and Behavioural Sciences, University of Amsterdam
- Dacher Keltner
- Department of Psychology, University of California, Berkeley
37
Keltner D, Sauter D, Tracy J, Cowen A. Emotional Expression: Advances in Basic Emotion Theory. JOURNAL OF NONVERBAL BEHAVIOR 2019; 43:133-160. [PMID: 31395997 PMCID: PMC6687086 DOI: 10.1007/s10919-019-00293-3]
Abstract
In this article, we review recent developments in the study of emotional expression within a basic emotion framework. Dozens of new studies find that upwards of 20 emotions are signaled in multimodal and dynamic patterns of expressive behavior. Moving beyond word-to-stimulus matching paradigms, new studies are detailing the more nuanced and complex processes involved in emotion recognition and the structure of how people perceive emotional expression. Finally, we consider new studies documenting contextual influences on emotion recognition. We conclude by extending these recent findings to questions about emotion-related physiology and the mammalian precursors of human emotion.
38
Gunes H, Celiktutan O, Sariyanidi E. Live human-robot interactive public demonstrations with automatic emotion and personality prediction. Philos Trans R Soc Lond B Biol Sci 2019; 374:20180026. [PMID: 30853000 PMCID: PMC6452249 DOI: 10.1098/rstb.2018.0026]
Abstract
Communication with humans is a multi-faceted phenomenon where the emotions, personality and non-verbal behaviours, as well as the verbal behaviours, play a significant role, and human-robot interaction (HRI) technologies should respect this complexity to achieve efficient and seamless communication. In this paper, we describe the design and execution of five public demonstrations made with two HRI systems that aimed at automatically sensing and analysing human participants' non-verbal behaviour and predicting their facial action units, facial expressions and personality in real time while they interacted with a small humanoid robot. We describe an overview of the challenges faced together with the lessons learned from those demonstrations in order to better inform the science and engineering fields to design and build better robots with more purposeful interaction capabilities. This article is part of the theme issue 'From social brains to social robots: applying neurocognitive insights to human-robot interaction'.
Affiliation(s)
- Hatice Gunes
- Department of Computer Science and Technology, University of Cambridge, Cambridge CB3 0FD, UK
- Oya Celiktutan
- Centre for Robotics Research, Department of Informatics, King’s College London, London WC2R 2LS, UK
39
Saumure C, Plouffe-Demers MP, Estéphan A, Fiset D, Blais C. The use of visual information in the recognition of posed and spontaneous facial expressions. J Vis 2019; 18:21. [PMID: 30372755 DOI: 10.1167/18.9.21]
Abstract
Recognizing facial expressions is crucial for the success of social interactions, and the visual processes underlying this ability have been the subject of many studies in the field of face perception. Nevertheless, the stimuli used in the majority of these studies consist of facial expressions produced on request rather than spontaneously induced. In the present study, we directly compared the visual strategies underlying the recognition of posed and spontaneous expressions of happiness, disgust, surprise, and sadness. We used the Bubbles method with pictures of the same individuals spontaneously expressing an emotion or posing with an expression on request. Two key findings were obtained: visual strategies were less systematic with spontaneous than with posed expressions, suggesting higher heterogeneity in the useful facial cues across identities; and with spontaneous expressions, the relative reliance on the mouth and eye areas was more evenly distributed, contrasting with the greater reliance on the mouth than on the eye area observed with posed expressions.
Affiliation(s)
- Camille Saumure
- Department of Psychoeducation and Psychology, Université du Québec en Outaouais, Gatineau, Québec, Canada
- Marie-Pier Plouffe-Demers
- Department of Psychoeducation and Psychology, Université du Québec en Outaouais, Gatineau, Québec, Canada
- Amanda Estéphan
- Department of Psychoeducation and Psychology, Université du Québec en Outaouais, Gatineau, Québec, Canada
- Daniel Fiset
- Department of Psychoeducation and Psychology, Université du Québec en Outaouais, Gatineau, Québec, Canada
- Caroline Blais
- Department of Psychoeducation and Psychology, Université du Québec en Outaouais, Gatineau, Québec, Canada
40
Abstract
The goal of this study was to validate AFFDEX and FACET, two algorithms for classifying emotions from facial expressions, within the iMotions software suite. In Study 1, pictures of standardized emotional facial expressions from three databases, the Warsaw Set of Emotional Facial Expression Pictures (WSEFEP), the Amsterdam Dynamic Facial Expression Set (ADFES), and the Radboud Faces Database (RaFD), were classified with both modules. Accuracy (matching scores) was computed to assess and compare classification quality. Results show a large variance in accuracy across emotions and databases, with a performance advantage for FACET over AFFDEX. In Study 2, 110 participants' facial expressions were measured while they were exposed to emotionally evocative pictures from the International Affective Picture System (IAPS), the Geneva Affective Picture Database (GAPED), and the Radboud Faces Database (RaFD). Accuracy again differed across emotions, and FACET performed better. Overall, iMotions can achieve acceptable accuracy for standardized pictures of prototypical (vs. natural) facial expressions, but performs worse for more natural facial expressions. We discuss potential sources of limited validity and suggest research directions in the broader context of emotion research.
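The matching-score logic can be illustrated with a small sketch: for each stimulus, take the emotion with the highest classifier evidence and score a hit when it equals the intended label, then aggregate per database and emotion. The records and evidence values below are invented placeholders, not AFFDEX or FACET output.

```python
from collections import defaultdict

# (database, intended_label, {emotion: classifier_evidence}) -- invented records.
records = [
    ("WSEFEP", "happy", {"happy": 0.92, "sad": 0.01, "angry": 0.02}),
    ("WSEFEP", "sad",   {"happy": 0.05, "sad": 0.61, "angry": 0.20}),
    ("RaFD",   "angry", {"happy": 0.02, "sad": 0.30, "angry": 0.25}),
]

hits = defaultdict(int)
totals = defaultdict(int)
for database, label, evidence in records:
    predicted = max(evidence, key=evidence.get)   # emotion with highest evidence
    hits[(database, label)] += int(predicted == label)
    totals[(database, label)] += 1

for key in sorted(totals):
    print(key, f"accuracy = {hits[key] / totals[key]:.0%}")
```

The third record is a deliberate miss ("sad" outscores "angry"), illustrating how accuracy can vary sharply across emotions and databases, as the study reports.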
41
Mayo LM, Heilig M. In the face of stress: Interpreting individual differences in stress-induced facial expressions. Neurobiol Stress 2019; 10:100166. [PMID: 31193535 PMCID: PMC6535645 DOI: 10.1016/j.ynstr.2019.100166]
Abstract
Stress is an inevitable part of life that can profoundly impact social and emotional functioning, contributing to the development of psychiatric disease. One key component of emotion and social processing is facial expressions, which humans can readily detect and react to even without conscious awareness. Facial expressions have been the focus of philosophic and scientific interest for centuries. Historically, facial expressions have been relegated to peripheral indices of fixed emotion states. More recently, affective neuroscience has undergone a conceptual revolution, resulting in novel interpretations of these muscle movements. Here, we review the role of facial expressions according to the leading affective neuroscience theories, including constructed-emotion and social-motivation accounts. We specifically highlight recent data (Mayo et al., 2018) demonstrating the way in which stress shapes facial expressions and how this is influenced by individual factors. In particular, we focus on the consequences of genetic variation within the endocannabinoid system, a neuromodulatory system implicated in stress and emotion, and its impact on stress-induced facial muscle activity. In a re-analysis of this dataset, we highlight how gender may also influence these processes, conceptualized as variation in the "fight-or-flight" or "tend-and-befriend" behavioral responses to stress. We speculate on how these interpretations may contribute to a broader understanding of facial expressions, discuss the potential use of facial expressions as a trans-diagnostic marker of psychiatric disease, and suggest future work necessary to resolve outstanding questions.
Affiliation(s)
- Leah M. Mayo
- Center for Social and Affective Neuroscience, Department of Clinical and Experimental Medicine, Linköping University, Sweden
42

43
Yik M, Wong KFE, Zeng KJ. Anchoring-and-Adjustment During Affect Inferences. Front Psychol 2019; 9:2567. [PMID: 30670994 PMCID: PMC6331480 DOI: 10.3389/fpsyg.2018.02567]
Abstract
People can easily infer the thoughts and feelings of others from brief descriptions of scenarios. But how do they arrive at these inferences? Three studies tested how, through anchoring-and-adjustment, people used semantic and numerical anchors (irrelevant values provided by experimenters) when inferring feelings from scenario descriptions. In a between-subject design, people's inferences were biased toward the anchoring information (Studies 1 and 2). People made fewer adjustments (anchoring increased) under time pressure in the high-anchor condition but not in the low-anchor condition (Study 3). When inferring affect from scenario descriptions, not only did people integrate their inference with the context, they also adjusted away from the initial anchors provided by the experimenters. However, time pressure discouraged people from making adequate adjustments.
Affiliation(s)
- Michelle Yik
- Division of Social Science, Hong Kong University of Science and Technology, Kowloon, Hong Kong
- Kin Fai Ellick Wong
- Department of Management, Hong Kong University of Science and Technology, Kowloon, Hong Kong
- Kevin J Zeng
- Division of Social Science, Hong Kong University of Science and Technology, Kowloon, Hong Kong
44
Kleindienst N, Hauschild S, Liebke L, Thome J, Bertsch K, Hensel S, Lis S. A negative bias in decoding positive social cues characterizes emotion processing in patients with symptom-remitted Borderline Personality Disorder. Borderline Personal Disord Emot Dysregul 2019; 6:17. [PMID: 31788316 PMCID: PMC6858731 DOI: 10.1186/s40479-019-0114-3]
Abstract
BACKGROUND: Impairments in the domain of interpersonal functioning, such as feelings of loneliness and fear of abandonment, have been associated with a negative bias during the processing of social cues in Borderline Personality Disorder (BPD). Since these symptoms show low rates of remission, high rates of recurrence, and relative resistance to treatment, the present study investigated whether a negative bias during social-cognitive processing exists in BPD even after symptomatic remission. We focused on facial emotion recognition, since it is one of the basal social-cognitive processes required for successful social interactions and building relationships. METHODS: Ninety-eight female participants (46 with symptom-remitted BPD [r-BPD], 52 healthy controls [HC]) rated the intensity of anger and happiness in ambiguous (anger/happiness blends) and unambiguous (emotion/neutral blends) emotional facial expressions. Additionally, participants assessed the confidence they experienced in their own judgments. RESULTS: r-BPD participants assessed ambiguous expressions as less happy and more angry when the faces displayed predominantly happiness. Confidence in these judgments did not differ between groups, but confidence in judging happiness in predominantly happy faces was lower in BPD patients with a higher level of BPD psychopathology. CONCLUSIONS: Evaluating social cues that signal the willingness to affiliate is characterized by a negative bias that appears to be a trait-like feature of social cognition in BPD. In contrast, confidence in judging positive social signals appears to be a state-like feature of emotion recognition in BPD that improves as the level of acute BPD symptoms attenuates.
Affiliation(s)
- Nikolaus Kleindienst
- Institute of Psychiatric and Psychosomatic Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, PO Box 12 21 20, 68072 Mannheim, Germany
- Sophie Hauschild
- Institute of Psychiatric and Psychosomatic Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, PO Box 12 21 20, 68072 Mannheim, Germany
- Institute for Psychosocial Prevention, University Heidelberg, Heidelberg, Germany
- Lisa Liebke
- Institute of Psychiatric and Psychosomatic Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, PO Box 12 21 20, 68072 Mannheim, Germany
- Janine Thome
- Institute of Psychiatric and Psychosomatic Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, PO Box 12 21 20, 68072 Mannheim, Germany
- Department of Psychiatry, Western University, London, Canada
- Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty, Heidelberg University, Mannheim, Germany
- Katja Bertsch
- Department of General Psychiatry, Center for Psychosocial Medicine, University of Heidelberg, Heidelberg, Germany
- Department of Psychology, LMU Munich, Munich, Germany
- Saskia Hensel
- Institute of Psychiatric and Psychosomatic Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, PO Box 12 21 20, 68072 Mannheim, Germany
- Stefanie Lis
- Institute of Psychiatric and Psychosomatic Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, PO Box 12 21 20, 68072 Mannheim, Germany
45
Abstract
Jussim's critique of social psychology's embrace of error and bias is needed and often persuasive. In opting for perceptual realism over social constructivism, however, he seems to ignore a third choice: a cognitive constructivism that has a long and distinguished history in the study of nonsocial perception and that enables us to understand both accuracy and error.
46
Calvo MG, Fernández-Martín A, Gutiérrez-García A, Lundqvist D. Selective eye fixations on diagnostic face regions of dynamic emotional expressions: KDEF-dyn database. Sci Rep 2018; 8:17039. [PMID: 30451919 PMCID: PMC6242984 DOI: 10.1038/s41598-018-35259-w]
Abstract
Prior research using static facial stimuli (photographs) has identified the diagnostic face regions (i.e., those functional for recognition) of emotional expressions. In the current study, we aimed to determine attentional orienting, engagement, and the time course of fixation on diagnostic regions. To this end, we assessed the eye movements of observers inspecting dynamic expressions that changed from a neutral to an emotional face. A new stimulus set (KDEF-dyn) was developed, comprising 240 video clips of 40 human models portraying six basic emotions (happy, sad, angry, fearful, disgusted, and surprised). For validation purposes, 72 observers categorized the expressions while gaze behavior was measured (probability of first fixation, entry time, gaze duration, and number of fixations). Specific visual scanpath profiles characterized each emotional expression: the eye region was looked at earlier and longer for angry and sad faces; the mouth region, for happy faces; and the nose/cheek region, for disgusted faces; the eye and mouth regions attracted attention in a more balanced manner for surprise and fear. These profiles reflected enhanced selective attention to expression-specific diagnostic face regions. The KDEF-dyn stimuli and the validation data will be available to the scientific community as a useful tool for research on emotional facial expression processing.
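The four gaze measures named here can be computed from a time-ordered list of fixations labeled by area of interest (AOI). The sketch below uses hypothetical fixation data for a single trial; in a real analysis, fixation coordinates would first be mapped onto AOI masks, and the probability of first fixation would be tallied across trials.

```python
from collections import defaultdict

# (aoi, onset_ms, duration_ms) for one trial, in temporal order; invented data.
fixations = [("eyes", 120, 240), ("mouth", 380, 180), ("eyes", 600, 300)]

first_fixation_aoi = fixations[0][0]   # tallied across trials -> probability
entry_time = {}                        # time each AOI is first entered
gaze_duration = defaultdict(int)       # summed fixation time per AOI
fixation_count = defaultdict(int)      # number of fixations per AOI
for aoi, onset_ms, duration_ms in fixations:
    entry_time.setdefault(aoi, onset_ms)
    gaze_duration[aoi] += duration_ms
    fixation_count[aoi] += 1

print("first fixation:", first_fixation_aoi)    # eyes
print("entry times:", entry_time)               # {'eyes': 120, 'mouth': 380}
print("gaze durations:", dict(gaze_duration))   # {'eyes': 540, 'mouth': 180}
print("fixation counts:", dict(fixation_count)) # {'eyes': 2, 'mouth': 1}
```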
Affiliation(s)
- Manuel G Calvo
- Department of Cognitive Psychology, Universidad de La Laguna, Tenerife, Spain.
- Instituto Universitario de Neurociencia (IUNE), Universidad de La Laguna, Tenerife, Spain.
47
Calvo MG, Fernández-Martín A, Recio G, Lundqvist D. Human Observers and Automated Assessment of Dynamic Emotional Facial Expressions: KDEF-dyn Database Validation. Front Psychol 2018; 9:2052. [PMID: 30416473 PMCID: PMC6212581 DOI: 10.3389/fpsyg.2018.02052]
Abstract
Most experimental studies of facial expression processing have used static stimuli (photographs), yet facial expressions in daily life are generally dynamic. In its original photographic format, the Karolinska Directed Emotional Faces (KDEF) database has been frequently utilized. In the current study, we validate a dynamic version of this database, the KDEF-dyn. To this end, we applied animation between neutral and emotional expressions (happy, sad, angry, fearful, disgusted, and surprised; 1,033-ms unfolding) to 40 KDEF models using morphing software. Ninety-six human observers categorized the expressions of the resulting 240 video-clip stimuli, and automated face analysis assessed the evidence for six expressions and 20 facial action units (AUs) at 31 intensities. Low-level image properties (luminance, signal-to-noise ratio, etc.) and other purely perceptual factors (e.g., size, unfolding speed) were controlled. Human recognition performance (accuracy, efficiency, and confusions) patterns were consistent with prior research using static and other dynamic expressions. Automated assessment of expressions and AUs was sensitive to the intensity manipulations. Significant correlations emerged between human observers' categorization and automated classification. The KDEF-dyn database aims to provide a balance between experimental control and ecological validity for research on emotional facial expression processing. The stimuli and the validation data are available to the scientific community.
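As a simplified illustration of the intensity manipulation, the sketch below generates a 31-step sequence by linearly cross-fading a neutral and an apex image. Real morphing software also warps facial geometry between landmark points; the pixel-only blend and the placeholder images here are assumptions for illustration only.

```python
import numpy as np

def blend(neutral, apex, intensity):
    """Cross-fade frame at a given expression intensity in [0, 1]."""
    frame = (1.0 - intensity) * neutral + intensity * apex
    return frame.astype(np.uint8)

# Placeholder images standing in for a model's neutral and apex photographs.
neutral_img = np.zeros((256, 256, 3), dtype=np.uint8)
apex_img = np.full((256, 256, 3), 255, dtype=np.uint8)

# 31 intensity levels, mirroring the graded assessment described above.
frames = [blend(neutral_img, apex_img, i) for i in np.linspace(0.0, 1.0, 31)]
print(len(frames), frames[15].mean())  # 31 frames; the middle frame is ~50% blended
```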
Affiliation(s)
- Manuel G. Calvo
- Department of Cognitive Psychology, Universidad de La Laguna, San Cristóbal de La Laguna, Spain
- Instituto Universitario de Neurociencia (IUNE), Universidad de La Laguna, Santa Cruz de Tenerife, Spain
- Guillermo Recio
- Institute of Psychology, Universität Hamburg, Hamburg, Germany
- Daniel Lundqvist
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
48
Recognition of facial emotions on human and canine faces in children with and without autism spectrum disorders. MOTIVATION AND EMOTION 2018. [DOI: 10.1007/s11031-018-9736-9]
49
Mishra MV, Ray SB, Srinivasan N. Cross-cultural emotion recognition and evaluation of Radboud faces database with an Indian sample. PLoS One 2018; 13:e0203959. [PMID: 30273355 PMCID: PMC6166925 DOI: 10.1371/journal.pone.0203959]
Abstract
Emotional databases are important tools for studying emotion recognition and its effects on various cognitive processes. Since a well-standardized, large-scale emotional expression database is not available in India, we evaluated the Radboud Faces Database (RaFD), a freely available database of emotional facial expressions of adult Caucasian models, with an Indian sample. Using the pictures from RaFD, we investigated the similarities and differences in self-reported ratings of emotion recognition accuracy, as well as in the parameters of valence, clarity, genuineness, intensity, and arousal of emotional expression, following the same rating procedure as used for the validation of RaFD. We also systematically evaluated the universality hypothesis of emotion perception by analyzing differences in accuracy and ratings for different emotional parameters across Indian and Dutch participants. As the original Radboud database lacked an arousal rating, we added this as an emotional parameter alongside all the others. The results show that the overall accuracy of emotional expression recognition by Indian participants was high and very similar to the ratings from Dutch participants. However, there were significant cross-cultural differences in the classification of emotion categories and their corresponding parameters. Indians rated certain expressions as comparatively more genuine, higher in valence, and less intense than the original Radboud ratings. The misclassifications/confusions for specific emotional categories differed across the two cultures, indicating subtle but significant differences between the cultures. In addition to advancing the understanding of facial emotion recognition, this study also evaluates and enables the use of RaFD within the Indian population.
Affiliation(s)
- Maruti Vijayshankar Mishra
- Centre of Behavioural and Cognitive Sciences (CBCS), University of Allahabad, Allahabad, UP, India
- Sonia Baloni Ray
- Centre of Behavioural and Cognitive Sciences (CBCS), University of Allahabad, Allahabad, UP, India
- Narayanan Srinivasan
- Centre of Behavioural and Cognitive Sciences (CBCS), University of Allahabad, Allahabad, UP, India
50
Bailey Bisson J. It's written all over their faces: Preschoolers' emotion understanding. SOCIAL DEVELOPMENT 2018. [DOI: 10.1111/sode.12322]