1
Kavanagh E, Whitehouse J, Waller BM. Being facially expressive is socially advantageous. Sci Rep 2024; 14:12798. [PMID: 38871925] [DOI: 10.1038/s41598-024-62902-6]
Abstract
Individuals vary in how they move their faces in everyday social interactions. In a first large-scale study, we measured variation in dynamic facial behaviour during social interaction and examined dyadic outcomes and impression formation. In Study 1, we recorded semi-structured video calls with 52 participants interacting with a confederate across various everyday contexts. Video clips were rated by 176 independent participants. In Study 2, we examined video calls of 1315 participants engaging in unstructured video-call interactions. Facial expressivity indices were extracted using automated Facial Action Coding Scheme analysis, and measures of personality and partner impressions were obtained by self-report. Facial expressivity varied considerably across participants but little across contexts, social partners, or time. In Study 1, more facially expressive participants were better liked, more agreeable, and more successful at negotiating (if also more agreeable). Participants who were more facially competent, readable, and perceived as readable were also better liked. In Study 2, we replicated the findings that facial expressivity was associated with agreeableness and with liking by the social partner, and additionally found it to be associated with extraversion and neuroticism. Findings suggest that facial behaviour is a stable individual difference that confers social advantages, pointing towards an affiliative, adaptive function.
Affiliation(s)
- Eithne Kavanagh
  - Department of Psychology, Nottingham Trent University, Nottingham, UK
- Jamie Whitehouse
  - Department of Psychology, Nottingham Trent University, Nottingham, UK
- Bridget M Waller
  - Department of Psychology, Nottingham Trent University, Nottingham, UK
2
Virk T, Letendre T, Pathman T. The convergence of naturalistic paradigms and cognitive neuroscience methods to investigate memory and its development. Neuropsychologia 2024; 196:108779. [PMID: 38154592] [DOI: 10.1016/j.neuropsychologia.2023.108779]
Abstract
Studies that involve lab-based stimuli (e.g., words, pictures) are fundamental in the memory literature. At the same time, there is growing acknowledgment that memory processes assessed in the lab may not be analogous to how memory operates in the real world. Naturalistic paradigms can bridge this gap, and over the decades a growing proportion of memory research has involved more naturalistic events. However, there is significant variation in the types of naturalistic studies used to study memory and its development, each with its own advantages and limitations. Further, there are notable gaps in how often different types of naturalistic approaches have been combined with cognitive neuroscience methods (e.g., fMRI, EEG) to elucidate the neural processes and substrates involved in memory encoding and retrieval in the real world. Here we summarize and discuss what we identify as progressively more naturalistic methodologies used in the memory literature (movie, virtual reality, staged events inside and outside of the lab, photo-taking, and naturally occurring event studies). Our goal is to describe each approach's benefits (e.g., naturalistic quality, feasibility), its limitations (e.g., viability of neuroimaging methods for event encoding versus event retrieval), and possible future directions. We focus on child studies, when available, but also highlight past adult studies. Although there is a growing body of child memory research, naturalistic approaches combined with cognitive neuroscience methodologies in this domain remain sparse. Overall, this viewpoint article reviews how we can study memory through the lens of developmental cognitive neuroscience while utilizing naturalistic and real-world events.
3
Robles M, Ramos-Grille I, Hervás A, Duran-Tauleria E, Galiano-Landeira J, Wormwood JB, Falter-Wagner CM, Chanes L. Reduced stereotypicality and spared use of facial expression predictions for social evaluation in autism. Int J Clin Health Psychol 2024; 24:100440. [PMID: 38426036] [PMCID: PMC10901834] [DOI: 10.1016/j.ijchp.2024.100440]
Abstract
Background/Objective: Autism has typically been investigated with traditional emotion recognition paradigms that measure only accuracy, constraining how potential differences between autistic and control individuals may be observed, identified, and described. Moreover, understanding how emotional facial expression information is used for social functioning in autism is relevant for a deeper characterization of the condition.
Method: Adult autistic individuals (n = 34) and adult control individuals (n = 34) completed a social perception behavioral paradigm probing facial expression predictions and their impact on social evaluation.
Results: Autistic individuals held less stereotypical predictions than controls. Importantly, despite these differences in predictions, the use of the predictions for social evaluation did not differ significantly between groups: autistic individuals relied on their predictions to evaluate others to the same extent as controls did.
Conclusions: These results help clarify how autistic individuals perceive social stimuli and evaluate others, revealing a deviation from stereotypicality beyond which social evaluation strategies may be intact.
Affiliation(s)
- Marta Robles
  - Department of Clinical and Health Psychology, Universitat Autònoma de Barcelona, Barcelona, Spain
  - Department of Psychiatry and Psychotherapy, LMU University Hospital, LMU Munich, Germany
- Irene Ramos-Grille
  - Department of Clinical and Health Psychology, Universitat Autònoma de Barcelona, Barcelona, Spain
  - Division of Mental Health, Consorci Sanitari de Terrassa, Terrassa, Catalunya, Spain
- Amaia Hervás
  - Child and Adolescent Mental Health Service, Hospital Universitari Mútua de Terrassa, Barcelona, Spain
  - Institut Global d'Atenció Integral del Neurodesenvolupament (IGAIN), Barcelona, Spain
- Enric Duran-Tauleria
  - Institut Global d'Atenció Integral del Neurodesenvolupament (IGAIN), Barcelona, Spain
- Jordi Galiano-Landeira
  - Department of Clinical and Health Psychology, Universitat Autònoma de Barcelona, Barcelona, Spain
- Lorena Chanes
  - Department of Clinical and Health Psychology, Universitat Autònoma de Barcelona, Barcelona, Spain
  - Institut de Neurociències, Universitat Autònoma de Barcelona, Barcelona, Spain
  - Serra Húnter Programme, Generalitat de Catalunya, Barcelona, Spain
4
Hsu CT, Sato W, Yoshikawa S. An investigation of the modulatory effects of empathic and autistic traits on emotional and facial motor responses during live social interactions. PLoS One 2024; 19:e0290765. [PMID: 38194416] [PMCID: PMC10775989] [DOI: 10.1371/journal.pone.0290765]
Abstract
A close relationship between emotional contagion and spontaneous facial mimicry has been theoretically proposed and is supported by empirical data. Facial expressions are essential in terms of both emotional and motor synchrony. Previous studies have demonstrated that trait emotional empathy enhances spontaneous facial mimicry, but the relationship between autistic traits and spontaneous mimicry remains controversial. Moreover, previous studies presented faces that were static or videotaped, which may lack the "liveliness" of real-life social interactions. We addressed this limitation by using an image relay system to present live performances and pre-recorded videos of smiling or frowning dynamic facial expressions to 94 healthy female participants. We assessed their subjective experiential valence and arousal ratings to infer the amplitude of emotional contagion. We measured the electromyographic activities of the zygomaticus major and corrugator supercilii muscles to estimate spontaneous facial mimicry. Individual difference measures included trait emotional empathy (empathic concern) and the autism-spectrum quotient. We did not find that live performances enhanced the modulatory effect of trait differences on emotional contagion or spontaneous facial mimicry. However, we found that high trait empathic concern was associated with stronger emotional contagion and corrugator mimicry. We found no two-way interaction between the autism-spectrum quotient and emotional condition, suggesting that autistic traits did not modulate emotional contagion or spontaneous facial mimicry. Our findings imply that previous findings regarding the relationship between emotional empathy and emotional contagion/spontaneous facial mimicry obtained using videos and photos could be generalized to real-life interactions.
Affiliation(s)
- Chun-Ting Hsu
  - Psychological Process Research Team, Guardian Robot Project, RIKEN, Soraku-gun, Kyoto, Japan
- Wataru Sato
  - Psychological Process Research Team, Guardian Robot Project, RIKEN, Soraku-gun, Kyoto, Japan
- Sakiko Yoshikawa
  - Institute of Philosophy and Human Values, Kyoto University of the Arts, Kyoto, Japan
5
Hsu CT, Sato W. Electromyographic Validation of Spontaneous Facial Mimicry Detection Using Automated Facial Action Coding. Sensors (Basel) 2023; 23:9076. [PMID: 38005462] [PMCID: PMC10675524] [DOI: 10.3390/s23229076]
Abstract
Although electromyography (EMG) remains the standard, researchers have begun using automated facial action coding system (FACS) software to evaluate spontaneous facial mimicry despite the lack of evidence of its validity. Using facial EMG of the zygomaticus major (ZM) as the standard, we confirmed the detection of spontaneous facial mimicry in action unit 12 (AU12, lip corner puller) via automated FACS. Participants were alternately presented with real-time model performances and prerecorded videos of dynamic facial expressions, while simultaneous ZM signals and frontal facial videos were acquired. AU12 activity was estimated from the facial videos using FaceReader, Py-Feat, and OpenFace. The automated FACS was less sensitive and less accurate than facial EMG, but AU12 mimicking responses were significantly correlated with ZM responses. All three software programs detected enhanced facial mimicry during live performances. The AU12 time series showed a roughly 100 to 300 ms latency relative to the ZM signal. Our results suggest that while automated FACS cannot replace facial EMG in mimicry detection, it can serve when expected effect sizes are large. Researchers should be cautious with automated FACS outputs, especially when studying clinical populations. In addition, developers should consider EMG validation of AU estimation as a benchmark.
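For orientation only: a latency figure of the kind quoted above can be read off by cross-correlating an AU12 time series against a ZM EMG envelope. The sketch below does this on synthetic signals; the sampling rate, signal shapes, and 200 ms lag are illustrative assumptions, not the study's data or method.

```python
import numpy as np

def estimate_latency_ms(emg_env, au12, fs_hz, max_lag_s=0.5):
    """Lag (in ms) at which au12 best matches emg_env: the peak of the
    Pearson correlation over integer-sample shifts. Positive means
    au12 trails emg_env."""
    max_lag = int(max_lag_s * fs_hz)
    lags = np.arange(-max_lag, max_lag + 1)
    # correlate the overlapping portions of the two traces at each shift
    corr = [np.corrcoef(emg_env[:len(emg_env) - l], au12[l:])[0, 1] if l >= 0
            else np.corrcoef(emg_env[-l:], au12[:len(au12) + l])[0, 1]
            for l in lags]
    return 1000.0 * lags[int(np.argmax(corr))] / fs_hz

# synthetic demo: the "AU12" trace is the EMG envelope delayed by 200 ms
fs = 100  # Hz (illustrative)
t = np.arange(0, 10, 1 / fs)
emg_env = np.exp(-(t - 5.0) ** 2)   # smooth activation burst at t = 5 s
au12 = np.exp(-(t - 5.2) ** 2)      # same burst, 200 ms later
print(estimate_latency_ms(emg_env, au12, fs))  # 200.0
```

On real recordings the two traces would first be resampled to a common rate and the EMG rectified and smoothed into an envelope before this comparison.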
Affiliation(s)
- Chun-Ting Hsu
  - Psychological Process Research Team, Guardian Robot Project, RIKEN, Soraku-gun, Kyoto 619-0288, Japan
- Wataru Sato
  - Psychological Process Research Team, Guardian Robot Project, RIKEN, Soraku-gun, Kyoto 619-0288, Japan
6
Wang L, Hu X, Ren Y, Lv J, Zhao S, Guo L, Liu T, Han J. Arousal modulates the amygdala-insula reciprocal connectivity during naturalistic emotional movie watching. Neuroimage 2023; 279:120316. [PMID: 37562718] [DOI: 10.1016/j.neuroimage.2023.120316]
Abstract
Emotional arousal is a complex state recruiting distributed cortical and subcortical structures, among which the amygdala and insula play an important role. Although previous neuroimaging studies have shown that the amygdala and insula manifest reciprocal connectivity, the effective connectivities and the modulatory patterns of amygdala-insula interactions underpinning arousal are still largely unknown. One reason may be the static and discrete laboratory brain imaging paradigms used in most existing studies. In this study, by integrating naturalistic-paradigm (i.e., movie watching) functional magnetic resonance imaging (fMRI) with a computational affective model that predicts dynamic arousal for the movie stimuli, we investigated the effective amygdala-insula interactions and the modulatory effect of the arousal input on the effective connections. Specifically, the predicted dynamic arousal of the movie served as a regressor in a general linear model (GLM) analysis, and brain activations were identified accordingly. The regions of interest (i.e., the bilateral amygdala and insula) were localized according to the GLM activation map. The effective connectivity and modulatory effect were then inferred using dynamic causal modeling (DCM). Our experimental results demonstrated that the amygdala was the site of the driving arousal input and that arousal had a modulatory effect on the reciprocal connections between the amygdala and insula. Our study provides novel evidence on the neural mechanisms underlying arousal in a dynamic naturalistic setting.
Affiliation(s)
- Liting Wang
  - School of Automation, Northwestern Polytechnical University, Xi'an, China
- Xintao Hu
  - School of Automation, Northwestern Polytechnical University, Xi'an, China
- Yudan Ren
  - School of Information Science and Technology, Northwest University, Xi'an, China
- Jinglei Lv
  - School of Biomedical Engineering and Brain and Mind Centre, University of Sydney, Sydney, Australia
- Shijie Zhao
  - School of Automation, Northwestern Polytechnical University, Xi'an, China
- Lei Guo
  - School of Automation, Northwestern Polytechnical University, Xi'an, China
- Tianming Liu
  - School of Computing, University of Georgia, Athens, USA
- Junwei Han
  - School of Automation, Northwestern Polytechnical University, Xi'an, China
7
Turkstra LS, Hosseini-Moghaddam S, Wohltjen S, Nurre SV, Mutlu B, Duff MC. Facial affect recognition in context in adults with and without TBI. Front Psychol 2023; 14:1111686. [PMID: 37645059] [PMCID: PMC10461638] [DOI: 10.3389/fpsyg.2023.1111686]
Abstract
Introduction: Several studies have reported impaired emotion recognition in adults with traumatic brain injury (TBI), but these studies share two design features that limit the application of their results to real-world contexts: (1) participants choose from lists of basic emotions rather than generating emotion labels, and (2) images are typically presented in isolation rather than in context. To address these limitations, we created an open-labeling task with faces shown alone or in real-life scenes, to more closely approximate how adults with TBI label facial emotions beyond the lab.
Methods: Participants were 55 adults (29 female) with moderate to severe TBI and 55 uninjured comparison peers, individually matched for race, sex, and age. Participants viewed 60 photographs of faces, either alone or in the pictured person's real-life context, and were asked what that person was feeling. We calculated the percentage of responses that were standard forced-choice-task options, and used sentiment intensity analysis to compare verbal responses between the two groups. We tracked eye movements for a subset of participants to explore whether gaze duration or number of fixations helped explain any group differences in labels.
Results: Over 50% of responses in both groups were words other than the basic emotions on standard affect tasks, highlighting the importance of eliciting open-ended responses. Valence of labels by participants with TBI was attenuated relative to that of comparison group labels: TBI group responses were less positive to positive images, and the same was true for negative images, although TBI group responses had higher lexical diversity. There were no significant differences in gaze duration or number of fixations between groups.
Discussion: Results revealed qualitative differences in affect labels between adults with and without TBI that would not have emerged on standard forced-choice tasks. Verbal differences did not appear to be attributable to differences in gaze patterns, leaving open the question of the mechanisms of atypical affect processing in adults with TBI.
Affiliation(s)
- Lyn S. Turkstra
  - Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Sophie Wohltjen
  - Department of Computer Sciences, University of Wisconsin-Madison, Madison, WI, United States
- Sara V. Nurre
  - American Speech-Language-Hearing Association, Rockville, MD, United States
- Bilge Mutlu
  - Department of Computer Sciences, University of Wisconsin-Madison, Madison, WI, United States
- Melissa C. Duff
  - Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
8
Sato W, Kochiyama T. Crosstalk in Facial EMG and Its Reduction Using ICA. Sensors (Basel) 2023; 23:2720. [PMID: 36904924] [PMCID: PMC10007323] [DOI: 10.3390/s23052720]
Abstract
There is ample evidence that electromyography (EMG) signals from the corrugator supercilii and zygomatic major muscles can provide valuable information for the assessment of subjective emotional experiences. Although previous research suggested that facial EMG data could be affected by crosstalk from adjacent facial muscles, it remains unproven whether such crosstalk occurs and, if so, how it can be reduced. To investigate this, we instructed participants (n = 29) to perform the facial actions of frowning, smiling, chewing, and speaking, in isolation and combination. During these actions, we measured facial EMG signals from the corrugator supercilii, zygomatic major, masseter, and suprahyoid muscles. We performed an independent component analysis (ICA) of the EMG data and removed crosstalk components. Speaking and chewing induced EMG activity in the masseter and suprahyoid muscles, as well as the zygomatic major muscle. The ICA-reconstructed EMG signals reduced the effects of speaking and chewing on zygomatic major activity, compared with the original signals. These data suggest that: (1) mouth actions could induce crosstalk in zygomatic major EMG signals, and (2) ICA can reduce the effects of such crosstalk.
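The crosstalk-removal procedure described above can be sketched on synthetic data. Here scikit-learn's FastICA stands in for whatever ICA implementation one prefers; the source waveforms, mixing weights, and channel labels are invented for illustration, not taken from the study:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
fs = 1000  # Hz (illustrative)
t = np.arange(0, 2, 1 / fs)

# two synthetic sources: a smile-related burst and a chewing rhythm
zygomatic = np.sin(2 * np.pi * 80 * t) * np.exp(-(t - 0.5) ** 2 / 0.01)
chewing = np.sign(np.sin(2 * np.pi * 3 * t)) * np.sin(2 * np.pi * 120 * t)

# each electrode records a linear mixture of both sources (crosstalk)
mixing = np.array([[1.0, 0.4],   # hypothetical "zygomatic" channel
                   [0.3, 1.0]])  # hypothetical "masseter" channel
channels = np.column_stack([zygomatic, chewing]) @ mixing.T
channels += 0.01 * rng.standard_normal(channels.shape)

# unmix, flag the component dominating the masseter channel as the
# chewing artifact, zero it, and reconstruct the recordings
ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(channels)
artifact = int(np.argmax([abs(np.corrcoef(sources[:, k], channels[:, 1])[0, 1])
                          for k in range(2)]))
sources[:, artifact] = 0.0
cleaned = ica.inverse_transform(sources)  # crosstalk-reduced channels
```

After reconstruction, the first column of `cleaned` tracks the smile-related source with the chewing rhythm largely removed; on real data the artifact component would similarly be identified by its loading on the masseter/suprahyoid electrodes or by its waveform.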
Affiliation(s)
- Wataru Sato
  - Psychological Process Research Team, Guardian Robot Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0288, Japan
  - Field Science Education and Research Center, Kyoto University, Oiwake-cho, Kitashirakawa, Kyoto 606-8502, Japan
- Takanori Kochiyama
  - Brain Activity Imaging Center, ATR-Promotions, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0288, Japan
9
Stasiak JE, Mitchell WJ, Reisman SS, Gregory DF, Murty VP, Helion C. Physiological arousal guides situational appraisals and metacognitive recall for naturalistic experiences. Neuropsychologia 2023; 180:108467. [PMID: 36610494] [DOI: 10.1016/j.neuropsychologia.2023.108467]
Abstract
As individuals navigate the world, they are bound to have emotionally intense experiences. These events not only influence momentary physiological and affective responses but may also have a powerful impact on one's memory for the emotional experience. In this research, we used the naturalistic context of a haunted house to examine how physiological arousal is associated with metacognitive emotional memory (i.e., the extent to which an individual remembers having experienced a certain emotion). Participants first navigated the haunted house while heart rate and explicit situational appraisals were recorded, and then, approximately one week later, recalled specific events from the haunted house and the intensity of these affective events. We found that heart rate predicted both the intensity of reported scariness in the haunted house and metacognitive memory of affect during recall. Critically, we found evidence for malleability in metacognitive emotional memory based on how an event was initially labeled. Individuals tended to recall events that they had explicitly labeled as fear-evoking as being more intense than they reported at the time of the event; we found the opposite relationship for events labeled as not fear-evoking. Taken together, this indicates that there are strong relationships between physiological arousal and emotional experiences in naturalistic contexts, but that affective labeling can modulate the relationship between these features when reflecting on the emotionality of the experience in memory.
Affiliation(s)
- Joanne E Stasiak
  - Department of Psychological and Brain Sciences, University of California, Santa Barbara, USA
- Samantha S Reisman
  - Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, USA
10
Six facial prosodic expressions caregivers similarly display to infants and dogs. Sci Rep 2023; 13:929. [PMID: 36650174] [PMCID: PMC9845226] [DOI: 10.1038/s41598-022-26981-7]
Abstract
Parents tend to use a specific communication style, including specific facial expressions, when speaking to their preverbal infants, which has important implications for children's healthy development. In the present study, we investigated these facial prosodic features of caregivers with a novel method that compares infant-, dog-, and adult-directed communication. We identified three novel facial displays in addition to the three already described facial expressions (i.e., the 'prosodic faces') that mothers and fathers typically display when interacting with their 1-18-month-old infants and family dogs, but not when interacting with another adult. The so-called Special Happy expression proved to be the most frequent face type during infant- and dog-directed communication and always includes a Duchenne marker, conveying an honest and intense happy emotion of the speaker. These results suggest that the 'prosodic faces' play an important role in both adult-infant and human-dog interactions and fulfil specific functions: to call and maintain the partner's attention, to foster emotionally positive interactions, and to strengthen social bonds. Our study highlights the relevance of future comparative studies on facial prosody and its potential contribution to the healthy emotional and cognitive development of infants.
11
Moulds DJ, Meyer J, McLean JF, Kempe V. Exploring effects of response biases in affect induction procedures. PLoS One 2023; 18:e0285706. [PMID: 37167316] [PMCID: PMC10174507] [DOI: 10.1371/journal.pone.0285706]
Abstract
This study examined whether self-reports or ratings of experienced affect, often used as manipulation checks on the efficacy of affect induction procedures (AIPs), reflect genuine changes in affective states rather than response biases arising from demand characteristics or social desirability effects. In a between-participants design, participants were exposed to positive, negative and neutral images with valence-congruent music or sound to induce happy, sad and neutral mood. Half of the participants had to actively appraise each image whereas the other half viewed images passively. We hypothesised that if ratings of affective valence are subject to response biases then they should reflect the target mood in the same way for active appraisal and passive exposure as participants encountered the same affective stimuli in both conditions. We also tested whether the AIP resulted in mood-congruent changes in facial expressions analysed by FaceReader to see whether behavioural indicators corroborate the self-reports. The results showed that while participants' ratings reflected the induced target valence, the difference between positive and negative AIP was significantly attenuated in the active appraisal condition, suggesting that self-reports of mood experienced after the AIP are not entirely a reflection of response biases. However, there were no effects of the AIP on FaceReader valence scores, in line with theories questioning the existence of cross-culturally and inter-individually universal behavioural indicators of affective states. Efficacy of AIPs is therefore best checked using self-reports.
Affiliation(s)
- David J Moulds
  - Division of Psychology, School of Applied Sciences, Abertay University, Dundee, United Kingdom
- Jona Meyer
  - Division of Psychology, School of Applied Sciences, Abertay University, Dundee, United Kingdom
- Janet F McLean
  - Division of Psychology, School of Applied Sciences, Abertay University, Dundee, United Kingdom
- Vera Kempe
  - Division of Psychology, School of Applied Sciences, Abertay University, Dundee, United Kingdom
12
Chen Y, Xu Q, Fan C, Wang Y, Jiang Y. Eye gaze direction modulates nonconscious affective contextual effect. Conscious Cogn 2022; 102:103336. [DOI: 10.1016/j.concog.2022.103336]
13
Krumhuber EG, Kappas A. More What Duchenne Smiles Do, Less What They Express. Perspect Psychol Sci 2022; 17:1566-1575. [PMID: 35712993] [DOI: 10.1177/17456916211071083]
Abstract
We comment on an article by Sheldon et al. from a previous issue of Perspectives (May 2021). They argued that the presence of positive emotion (Hypothesis 1), the intensity of positive emotion (Hypothesis 2), and chronic positive mood (Hypothesis 3) are reliably signaled by the Duchenne smile (DS). We reexamined the cited literature in support of each hypothesis and show that the study findings were mostly inconclusive, irrelevant, incomplete, and/or misread. In fact, there is no single (empirical) article that unequivocally supports the idea that DSs function solely as indicators of felt positive affect. Additional evidence is reviewed, suggesting that DSs can be, and often are, displayed deliberately and in the absence of positive feelings. Although DSs may lead to favorable interpersonal perceptions and positive emotional responses in the observer, we propose a functional view that focuses on what facial actions (here, specifically DSs) do rather than on what they express.
Affiliation(s)
- Eva G Krumhuber
  - Department of Experimental Psychology, University College London
- Arvid Kappas
  - Department of Psychology, Jacobs University Bremen
14
Comparing self-reported emotions and facial expressions of joy in heterosexual romantic couples. Pers Individ Differ 2022. [DOI: 10.1016/j.paid.2021.111182]
15
Sato W, Usui N, Sawada R, Kondo A, Toichi M, Inoue Y. Impairment of emotional expression detection after unilateral medial temporal structure resection. Sci Rep 2021; 11:20617. [PMID: 34663869] [PMCID: PMC8523523] [DOI: 10.1038/s41598-021-99945-y]
Abstract
Detecting emotional facial expressions is an initial and indispensable component of face-to-face communication. Neuropsychological studies of the neural substrates of this process have shown that bilateral amygdala lesions impair the detection of emotional facial expressions. However, the findings were inconsistent, possibly due to the limited number of patients examined. Furthermore, whether this processing is based on emotional or visual factors of facial expressions remains unknown. To investigate this issue, we tested a group of patients (n = 23) with unilateral resection of medial temporal lobe structures, including the amygdala, and compared their performance under resected- and intact-hemisphere stimulation conditions. The participants were asked to detect normal facial expressions of anger and happiness, and artificially created anti-expressions, among a crowd with neutral expressions. Reaction times for the detection of normal expressions versus anti-expressions were shorter when the target faces were presented to the visual field contralateral to the intact hemisphere (i.e., stimulation of the intact hemisphere; e.g., the right visual field for patients with right-hemisphere resection) than when they were presented to the visual field contralateral to the resected hemisphere (i.e., stimulation of the resected hemisphere). Our findings imply that the medial temporal lobe structures, including the amygdala, play an essential role in the detection of emotional facial expressions according to the emotional significance of the expressions.
Affiliation(s)
- Wataru Sato
  - Psychological Process Team, Guardian Robot Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0288, Japan
- Naotaka Usui
  - National Epilepsy Center, NHO Shizuoka Institute of Epilepsy and Neurological Disorders, Urushiyama 886, Shizuoka 420-8688, Japan
- Reiko Sawada
  - Graduate School of Medicine, Kyoto University, 53 Shogoin-Kawaharacho, Sakyo, Kyoto 606-8507, Japan
- Akihiko Kondo
  - National Epilepsy Center, NHO Shizuoka Institute of Epilepsy and Neurological Disorders, Urushiyama 886, Shizuoka 420-8688, Japan
- Motomi Toichi
  - Graduate School of Medicine, Kyoto University, 53 Shogoin-Kawaharacho, Sakyo, Kyoto 606-8507, Japan
- Yushi Inoue
  - National Epilepsy Center, NHO Shizuoka Institute of Epilepsy and Neurological Disorders, Urushiyama 886, Shizuoka 420-8688, Japan
16
The Emotional Experiences of Paralympic Swimming Medalists: Not All Wins and Losses Are Equal. Adapt Phys Activ Q 2021; 38:396-412. [PMID: 33819911] [DOI: 10.1123/apaq.2020-0138]
Abstract
The goal of this study was to determine if emotional expressions at the end of swimmers' 2016 Paralympic races varied according to medal won and if their race wins and losses were close or not close. Using FaceReader software, videos of 46 races of medal-winning Paralympic (M age = 24.6; SD = 5.4) swimmers' faces (78 males and 60 females) from 22 countries were analyzed. Silver medalists were angrier and sadder than gold medalists and angrier and more disgusted than bronze medalists. Swimmers who swam slower than their 2015 best time were angrier than Paralympians who swam faster. Paralympians who finished lower than their 2015 world ranking had more neutral emotions and were less happy than Paralympians who finished higher. Gold medalists who narrowly defeated silver medalists were less happy and more fearful than gold medalists who won easily. Bronze medalists with close wins had fewer neutral emotions and were happier, less angry, and more surprised than bronze medalists with not-close wins. All medalists with close wins were more surprised than medalists with easier wins. Bronze medalists with close losses to silver medalists were happier and less angry than bronze medalists who lost more easily. Effect sizes ranged from d = 0.27 to 1.01. These results provide theoretical support to basic emotion theory and confirm the anecdotal observations that Paralympic competition generates wide-ranging and diverse emotions.
17
Motion Increases Recognition of Naturalistic Postures but not Facial Expressions. Journal of Nonverbal Behavior 2021. [DOI: 10.1007/s10919-021-00372-4]
18
Kittel AFD, Olderbak S, Wilhelm O. Sty in the Mind's Eye: A Meta-Analytic Investigation of the Nomological Network and Internal Consistency of the "Reading the Mind in the Eyes" Test. Assessment 2021; 29:872-895. [PMID: 33645295] [DOI: 10.1177/1073191121996469]
Abstract
The Reading the Mind in the Eyes Test (RMET) is the most popular adult measure of individual differences in theory of mind. We present a meta-analytic investigation of the test's psychometric properties (k = 119 effect sizes, 61 studies, ntotal = 8,611 persons). Using random effects models, we found the internal consistency of the test was acceptable (α = .73). However, the RMET was more strongly related with emotion perception (r = .33, ρ = .48) relative to alternative theory of mind measures (r = .29, ρ = .39), and weakly to moderately related with vocabulary (r = .25, ρ = .32), cognitive empathy (r = .14, ρ = .20), and affective empathy (r = .13, ρ = .19). Overall, we conclude that the RMET operates rather as emotion perception measure than as theory of mind measure, challenging the interpretation of RMET results.
19
Zloteanu M, Krumhuber EG. Expression Authenticity: The Role of Genuine and Deliberate Displays in Emotion Perception. Front Psychol 2021; 11:611248. [PMID: 33519624] [PMCID: PMC7840656] [DOI: 10.3389/fpsyg.2020.611248]
Abstract
People dedicate significant attention to others' facial expressions and to deciphering their meaning. Hence, knowing whether such expressions are genuine or deliberate is important. Early research proposed that authenticity could be discerned based on reliable facial muscle activations unique to genuine emotional experiences that are impossible to produce voluntarily. With an increasing body of research, such claims may no longer hold up to empirical scrutiny. In this article, expression authenticity is considered within the context of senders' ability to produce convincing facial displays that resemble genuine affect and human decoders' judgments of expression authenticity. This includes a discussion of spontaneous vs. posed expressions, as well as appearance- vs. elicitation-based approaches for defining emotion recognition accuracy. We further expand on the functional role of facial displays as neurophysiological states and communicative signals, thereby drawing upon the encoding-decoding and affect-induction perspectives of emotion expressions. Theoretical and methodological issues are addressed with the aim to instigate greater conceptual and operational clarity in future investigations of expression authenticity.
Affiliation(s)
- Mircea Zloteanu
- Department of Criminology and Sociology, Kingston University London, Kingston, United Kingdom
- Department of Psychology, Kingston University London, Kingston, United Kingdom
- Eva G Krumhuber
- Department of Experimental Psychology, University College London, London, United Kingdom
20
Song SY, Curtis AM, Aragón OR. Anger and Sadness Expressions Situated in Both Positive and Negative Contexts: An Investigation in South Korea and the United States. Front Psychol 2021; 11:579509. [PMID: 33519596] [PMCID: PMC7838562] [DOI: 10.3389/fpsyg.2020.579509]
Abstract
A formidable challenge to the research of non-verbal behavior can be in the assumptions that we sometimes make, and the subsequent questions that arise from those assumptions. In this article, we proceed with an investigation that would have been precluded by the assumption of a 1:1 correspondence between facial expressions and discrete emotional experiences. We investigated two expressions that in the normative sense are considered negative expressions. One expression, "anger" could be described as clenched fists, furrowed brows, tense jaws and lips, the showing of teeth, and flared nostrils, and the other "sadness" could be described as downward turned mouths, tears, drooping eyes, and wrinkled foreheads. Here, we investigated the prevalence, understanding, and use of these expressions in both positive and negative contexts in South Korea and the United States. We found evidence in both cultures, that anger and sadness displays are used to express positive emotions, a notion relevant to Dimorphous Theory. Moreover, we found that anger and sadness expressions communicated appetitive feelings of wanting to "go!" and consummatory feelings of wanting to "pause," respectively. There were moderations of our effects consistent with past work in Affect Valuation Theory and Display Rule Theory. We discuss our findings, their theoretical relevance, and how the assumptions that are made can narrow the questions that we ask in the field of non-verbal behavior.
Affiliation(s)
- Sunny Youngok Song
- Department of Marketing, Wilbur O. and Ann Powers College of Business, Clemson University, Clemson, SC, United States
- School of Marketing and International Business, Spears School of Business, Oklahoma State University, Stillwater, OK, United States
- Alexandria M. Curtis
- Department of Marketing, Wilbur O. and Ann Powers College of Business, Clemson University, Clemson, SC, United States
- Oriana R. Aragón
- Department of Marketing, Wilbur O. and Ann Powers College of Business, Clemson University, Clemson, SC, United States
21
Abstract
Everyday social interactions hinge on our ability to resolve uncertainty in nonverbal cues. For example, although some facial expressions (e.g. happy, angry) convey a clear affective meaning, others (e.g. surprise) are ambiguous, in that their meaning is determined by the context. Here, we used mouse-tracking to examine the underlying process of resolving uncertainty. Previous work has suggested an initial negativity, in part via faster response times for negative than positive ratings of surprise. We examined valence categorizations of filtered images in order to compare faster (low spatial frequencies; LSF) versus more deliberate processing (high spatial frequencies; HSF). When participants categorised faces as "positive", they first exhibited a partial attraction toward the competing ("negative") response option, and this effect was exacerbated for HSF than LSF faces. Thus, the effect of response conflict due to an initial negativity bias was exaggerated for HSF faces, likely because these images allow for greater deliberation than the LSFs. These results are consistent with the notion that more positive categorizations are characterised by an initial attraction to a default, negative response.
Affiliation(s)
- Maital Neta
- Department of Psychology and Center for Brain, Biology, and Behavior, University of Nebraska-Lincoln, Lincoln, NE, USA
22
Tcherkassof A, Dupré D. The emotion-facial expression link: evidence from human and automatic expression recognition. Psychological Research 2020; 85:2954-2969. [PMID: 33236175] [DOI: 10.1007/s00426-020-01448-4]
Abstract
While it has been taken for granted in the development of several automatic facial expression recognition tools, the question of the coherence between subjective feelings and facial expressions is still a subject of debate. On one hand, the "Basic Emotion View" conceives emotions as genetically hardwired and, therefore, being genuinely displayed through facial expressions. Consequently, emotion recognition is perceiver independent. On the other hand, the constructivist approach conceives emotions as socially constructed, the emotional meaning of a facial expression being inferred by the perceiver. Hence, emotion recognition is perceiver dependent. In order (1) to evaluate the coherence between the subjective feeling of emotions and their spontaneous facial displays, and (2) to compare the recognition of such displays by human perceivers and by an automatic facial expression classifier, 232 videos of expressers recruited to carry out an emotion elicitation task were annotated by 1383 human perceivers as well as by Affdex, an automatic classifier. Results show a weak consistency between self-reported emotional states by expressers and their facial emotional displays. They also show low accuracy both of human perceivers and of the automatic classifier to infer the subjective feeling from the spontaneous facial expressions displayed by expressers. However, the results are more in favor of a perceiver-dependent view. Based on these results, the hypothesis of genetically hardwired emotion genuinely displayed is difficult to support, whereas the idea of emotion and facial expression as being socially constructed appears to be more likely. Accordingly, automatic emotion recognition tools based on facial expressions should be questioned.
Affiliation(s)
- Anna Tcherkassof
- Psychology Department, Université Grenoble Alpes, Bâtiment Michel Dubois, 1251 Avenue Centrale, Saint-Martin-d'Hères, 38400, France.
- Damien Dupré
- Business School, Dublin City University, DCU Glasnevin Campus, Dublin, D09, Ireland
23
Tabibnia G. An affective neuroscience model of boosting resilience in adults. Neurosci Biobehav Rev 2020; 115:321-350. [DOI: 10.1016/j.neubiorev.2020.05.005]
24
Shablack H, Stein AG, Lindquist KA. Comment: A Role of Language in Infant Emotion Concept Acquisition. Emotion Review 2020. [DOI: 10.1177/1754073919897297]
Abstract
Ruba and Repacholi (2020) review an important debate in the emotion development literature: whether infants can perceive and understand facial configurations as instances of discrete emotion categories. Consistent with a psychological constructionist account (Lindquist & Gendron, 2013; Shablack & Lindquist, 2019), they conclude that infants can perceive valence on faces, but argue the evidence is far from clear that infants perceive and understand discrete emotions. Ruba and Repacholi outline a novel developmental trajectory of emotion perception and understanding in which early emotion concept learning may be language-independent. In this comment, we argue that language may play a role in emotion concept acquisition even prior to children’s ability to produce emotion labels. We look forward to future research addressing this hypothesis.
Affiliation(s)
- Holly Shablack
- Department of Psychology and Neuroscience, University of North Carolina at Chapel Hill, USA
- Andrea G. Stein
- Department of Psychology, University of Wisconsin-Madison, USA
- Kristen A. Lindquist
- Department of Psychology and Neuroscience, University of North Carolina at Chapel Hill, USA
25
Zeng H, Wang X, Wu A, Wang Y, Li Q, Endert A, Qu H. EmoCo: Visual Analysis of Emotion Coherence in Presentation Videos. IEEE Transactions on Visualization and Computer Graphics 2020; 26:927-937. [PMID: 31443002] [DOI: 10.1109/tvcg.2019.2934656]
Abstract
Emotions play a key role in human communication and public presentations. Human emotions are usually expressed through multiple modalities. Therefore, exploring multimodal emotions and their coherence is of great value for understanding emotional expressions in presentations and improving presentation skills. However, manually watching and studying presentation videos is often tedious and time-consuming. There is a lack of tool support to help conduct an efficient and in-depth multi-level analysis. Thus, in this paper, we introduce EmoCo, an interactive visual analytics system to facilitate efficient analysis of emotion coherence across facial, text, and audio modalities in presentation videos. Our visualization system features a channel coherence view and a sentence clustering view that together enable users to obtain a quick overview of emotion coherence and its temporal evolution. In addition, a detail view and word view enable detailed exploration and comparison from the sentence level and word level, respectively. We thoroughly evaluate the proposed system and visualization techniques through two usage scenarios based on TED Talk videos and interviews with two domain experts. The results demonstrate the effectiveness of our system in gaining insights into emotion coherence in presentations.
26
Abstract
The current article discusses the distinction between affective valence—the degree to which an affective response represents pleasure or displeasure—and semantic valence, the degree to which an object or event is considered positive or negative. To date, measures that reflect positivity and negativity are usually placed under the same conceptual umbrella (e.g., valence, affective, emotional), with minimal distinction between the modes of valence they reflect. Recent work suggests that what might seem to reflect a monolithic structure of valence has at least two different, confounding underlying sources, affective and semantic, that are fundamentally distinct, dissociable, and that obey different, recognizable rules. The current work discusses this distinction and provides implications for affective science from both the theoretical and the empirical perspective.
Affiliation(s)
- Oksana Itkes
- Department of Psychology, University of Haifa, Israel
- Assaf Kron
- Department of Psychology, University of Haifa, Israel
27
Nath EC, Cannon PR, Philipp MC. An unfamiliar social presence reduces facial disgust responses to food stimuli. Food Res Int 2019; 126:108662. [PMID: 31732049] [DOI: 10.1016/j.foodres.2019.108662]
Abstract
Consumers' emotional responses complement sensory and hedonic ratings in the prediction of food choice and consumption behaviour. The challenge with the measurement of consumption emotions is that emotions are highly context dependent. For emotion evaluations to bring greater insight to food research and development, it is essential that the influence of contextual variables on emotion are quantified. The present study contributes to the discussion with an investigation of the effect of an unfamiliar social presence on affective facial responses to visual food stimuli. Seventy participants (52 female and 18 male) viewed food images of varying acceptability either alone, or in the presence of the researcher. Subjective liking ratings were measured using a labelled affective magnitude scale, and facial muscle activity from zygomaticus major (contracted during smiling), corrugator supercilii (contracted during frowning) and levator labii superioris (contracted during nose wrinkling) were measured with an EMG recording system. Controlling for individual differences in facial expressivity and food image acceptability using linear mixed models, it was found that social context did not predict smiling or frowning muscle activity. Social context did predict the intensity of muscle activity indicative of a disgust response, with participants in the observed condition exhibiting less levator activity than participants in the alone condition. Regardless of social context, each muscle was found to have a relationship with subjective liking, with the direction of effects as expected. The results indicate that emotional stimuli and social context both influence food-evoked facial expression and provides support for the utility of facial EMG in measuring food-evoked emotion.
Affiliation(s)
- Elizabeth C Nath
- School of Psychology, Massey University, Private Bag 11-222, Palmerston North 4442, New Zealand.
- Peter R Cannon
- School of Psychology, Massey University, Private Bag 11-222, Palmerston North 4442, New Zealand
- Michael C Philipp
- School of Psychology, Massey University, Private Bag 11-222, Palmerston North 4442, New Zealand
28
Barrett LF, Adolphs R, Marsella S, Martinez A, Pollak SD. Emotional Expressions Reconsidered: Challenges to Inferring Emotion From Human Facial Movements. Psychol Sci Public Interest 2019; 20:1-68. [PMID: 31313636] [PMCID: PMC6640856] [DOI: 10.1177/1529100619832930]
Abstract
It is commonly assumed that a person's emotional state can be readily inferred from his or her facial movements, typically called emotional expressions or facial expressions. This assumption influences legal judgments, policy decisions, national security protocols, and educational practices; guides the diagnosis and treatment of psychiatric illness, as well as the development of commercial applications; and pervades everyday social interactions as well as research in other scientific fields such as artificial intelligence, neuroscience, and computer vision. In this article, we survey examples of this widespread assumption, which we refer to as the common view, and we then examine the scientific evidence that tests this view, focusing on the six most popular emotion categories used by consumers of emotion research: anger, disgust, fear, happiness, sadness, and surprise. The available scientific evidence suggests that people do sometimes smile when happy, frown when sad, scowl when angry, and so on, as proposed by the common view, more than what would be expected by chance. Yet how people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation. Furthermore, similar configurations of facial movements variably express instances of more than one emotion category. In fact, a given configuration of facial movements, such as a scowl, often communicates something other than an emotional state. Scientists agree that facial movements convey a range of information and are important for social communication, emotional or otherwise. But our review suggests an urgent need for research that examines how people actually move their faces to express emotions and other social information in the variety of contexts that make up everyday life, as well as careful study of the mechanisms by which people perceive instances of emotion in one another. We make specific research recommendations that will yield a more valid picture of how people move their faces to express emotions and how they infer emotional meaning from facial movements in situations of everyday life. This research is crucial to provide consumers of emotion research with the translational information they require.
Affiliation(s)
- Lisa Feldman Barrett
- Northeastern University, Department of Psychology, Boston, MA
- Massachusetts General Hospital, Department of Psychiatry and the Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA
- Harvard Medical School, Department of Psychiatry, Boston, MA
- Ralph Adolphs
- California Institute of Technology, Departments of Psychology, Neuroscience, and Biology, Pasadena, CA
- Stacy Marsella
- Northeastern University, Department of Psychology, Boston, MA
- Northeastern University, College of Computer and Information Science, Boston, MA
- University of Glasgow, Glasgow, Scotland
- Aleix Martinez
- The Ohio State University, Department of Electrical and Computer Engineering, and Center for Cognitive and Brain Sciences, Columbus, OH
- Seth D. Pollak
- University of Wisconsin-Madison, Department of Psychology, Madison, WI
29
Abstract
Emotion recognition is widely assumed to be determined by face and body features, and measures of emotion perception typically use unnatural, static, or decontextualized face stimuli. Using our method called affective tracking, we show that observers can infer, recognize, and track over time the affect of an invisible person based solely on visual spatial context. We further show that visual context provides a substantial and unique contribution to the perception of human affect, beyond the information available from face and body. This method reveals that emotion recognition is, at its heart, a context-based process. Emotion recognition is an essential human ability critical for social functioning. It is widely assumed that identifying facial expression is the key to this, and models of emotion recognition have mainly focused on facial and bodily features in static, unnatural conditions. We developed a method called affective tracking to reveal and quantify the enormous contribution of visual context to affect (valence and arousal) perception. When characters’ faces and bodies were masked in silent videos, viewers inferred the affect of the invisible characters successfully and in high agreement based solely on visual context. We further show that the context is not only sufficient but also necessary to accurately perceive human affect over time, as it provides a substantial and unique contribution beyond the information available from face and body. Our method (which we have made publicly available) reveals that emotion recognition is, at its heart, an issue of context as much as it is about faces.
30
Sato W, Hyniewska S, Minemoto K, Yoshikawa S. Facial Expressions of Basic Emotions in Japanese Laypeople. Front Psychol 2019; 10:259. [PMID: 30809180] [PMCID: PMC6379788] [DOI: 10.3389/fpsyg.2019.00259]
Abstract
Facial expressions that show emotion play an important role in human social interactions. In previous theoretical studies, researchers have suggested that there are universal, prototypical facial expressions specific to basic emotions. However, the results of some empirical studies that tested the production of emotional facial expressions based on particular scenarios only partially supported the theoretical predictions. In addition, all of the previous studies were conducted in Western cultures. We investigated Japanese laypeople (n = 65) to provide further empirical evidence regarding the production of emotional facial expressions. The participants produced facial expressions for six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) in specific scenarios. Under the baseline condition, the participants imitated photographs of prototypical facial expressions. The produced facial expressions were automatically coded using FaceReader in terms of the intensities of emotions and facial action units. In contrast to the photograph condition, where all target emotions were shown clearly, the scenario condition elicited the target emotions clearly only for happy and surprised expressions. The photograph and scenario conditions yielded different profiles for the intensities of emotions and facial action units associated with all of the facial expressions tested. These results provide partial support for the theory of universal, prototypical facial expressions for basic emotions but suggest the possibility that the theory may need to be modified based on empirical evidence.
Affiliation(s)
- Wataru Sato
- Kokoro Research Center, Kyoto University, Kyoto, Japan
- Sylwia Hyniewska
- Kokoro Research Center, Kyoto University, Kyoto, Japan
- Bioimaging Research Center, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
31
Mayo LM, Heilig M. In the face of stress: Interpreting individual differences in stress-induced facial expressions. Neurobiol Stress 2019; 10:100166. [PMID: 31193535] [PMCID: PMC6535645] [DOI: 10.1016/j.ynstr.2019.100166]
Abstract
Stress is an inevitable part of life that can profoundly impact social and emotional functioning, contributing to the development of psychiatric disease. One key component of emotion and social processing is facial expressions, which humans can readily detect and react to even without conscious awareness. Facial expressions have been the focus of philosophic and scientific interest for centuries. Historically, facial expressions have been relegated to peripheral indices of fixed emotion states. More recently, affective neuroscience has undergone a conceptual revolution, resulting in novel interpretations of these muscle movements. Here, we review the role of facial expressions according to the leading affective neuroscience theories, including constructed emotion and social-motivation accounts. We specifically highlight recent data (Mayo et al, 2018) demonstrating the way in which stress shapes facial expressions and how this is influenced by individual factors. In particular, we focus on the consequence of genetic variation within the endocannabinoid system, a neuromodulatory system implicated in stress and emotion, and its impact on stress-induced facial muscle activity. In a re-analysis of this dataset, we highlight how gender may also influence these processes, conceptualized as variation in the "fight-or-flight" or "tend-and-befriend" behavioral responses to stress. We speculate on how these interpretations may contribute to a broader understanding of facial expressions, discuss the potential use of facial expressions as a trans-diagnostic marker of psychiatric disease, and suggest future work necessary to resolve outstanding questions.
Affiliation(s)
- Leah M. Mayo
- Center for Social and Affective Neuroscience, Department of Clinical and Experimental Medicine, Linköping University, Sweden
32
33
Ferrari C, Papagno C, Todorov A, Cattaneo Z. Differences in Emotion Recognition From Body and Face Cues Between Deaf and Hearing Individuals. Multisens Res 2019; 32:499-519. [PMID: 31117046] [DOI: 10.1163/22134808-20191353]
Abstract
Deaf individuals may compensate for the lack of the auditory input by showing enhanced capacities in certain visual tasks. Here we assessed whether this also applies to recognition of emotions expressed by bodily and facial cues. In Experiment 1, we compared deaf participants and hearing controls in a task measuring recognition of the six basic emotions expressed by actors in a series of video-clips in which either the face, the body, or both the face and body were visible. In Experiment 2, we measured the weight of body and face cues in conveying emotional information when intense genuine emotions are expressed, a situation in which face expressions alone may have ambiguous valence. We found that deaf individuals were better at identifying disgust and fear from body cues (Experiment 1) and in integrating face and body cues in case of intense negative genuine emotions (Experiment 2). Our findings support the capacity of deaf individuals to compensate for the lack of the auditory input enhancing perceptual and attentional capacities in the spared modalities, showing that this capacity extends to the affective domain.
Affiliation(s)
- Chiara Ferrari
- Department of Psychology, University of Milano-Bicocca, Milan 20126, Italy
- Costanza Papagno
- Department of Psychology, University of Milano-Bicocca, Milan 20126, Italy; CeRiN and CIMeC, University of Trento, Rovereto 38068, Italy
- Zaira Cattaneo
- Department of Psychology, University of Milano-Bicocca, Milan 20126, Italy; IRCCS Mondino Foundation, Pavia, Italy

34
Chen C, Crivelli C, Garrod OGB, Schyns PG, Fernández-Dols JM, Jack RE. Distinct facial expressions represent pain and pleasure across cultures. Proc Natl Acad Sci U S A 2018; 115:E10013-E10021. [PMID: 30297420] [PMCID: PMC6205428] [DOI: 10.1073/pnas.1807862115]
Abstract
Real-world studies show that the facial expressions produced during pain and orgasm-two different and intense affective experiences-are virtually indistinguishable. However, this finding is counterintuitive, because facial expressions are widely considered to be a powerful tool for social interaction. Consequently, debate continues as to whether the facial expressions of these extreme positive and negative affective states serve a communicative function. Here, we address this debate from a novel angle by modeling the mental representations of dynamic facial expressions of pain and orgasm in 40 observers in each of two cultures (Western, East Asian) using a data-driven method. Using a complementary approach of machine learning, an information-theoretic analysis, and a human perceptual discrimination task, we show that mental representations of pain and orgasm are physically and perceptually distinct in each culture. Cross-cultural comparisons also revealed that pain is represented by similar face movements across cultures, whereas orgasm showed distinct cultural accents. Together, our data show that mental representations of the facial expressions of pain and orgasm are distinct, which questions their nondiagnosticity and instead suggests they could be used for communicative purposes. Our results also highlight the potential role of cultural and perceptual factors in shaping the mental representation of these facial expressions. We discuss new research directions to further explore their relationship to the production of facial expressions.
Affiliation(s)
- Chaona Chen
- School of Psychology, College of Science and Engineering, University of Glasgow, Glasgow G12 8QB, Scotland, United Kingdom
- Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QB, Scotland, United Kingdom
- Carlos Crivelli
- Institute for Psychological Science, School of Applied Social Sciences, De Montfort University, Leicester LE1 9BH, United Kingdom
- Oliver G B Garrod
- Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QB, Scotland, United Kingdom
- Philippe G Schyns
- School of Psychology, College of Science and Engineering, University of Glasgow, Glasgow G12 8QB, Scotland, United Kingdom
- Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QB, Scotland, United Kingdom
- José-Miguel Fernández-Dols
- Departamento de Psicología Social y Metodología, Facultad de Psicología, Universidad Autónoma de Madrid, 28049 Madrid, Spain
- Rachael E Jack
- School of Psychology, College of Science and Engineering, University of Glasgow, Glasgow G12 8QB, Scotland, United Kingdom
- Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QB, Scotland, United Kingdom

35
Abstract
Responses to surprising events are dynamic. We argue that initial responses are primarily driven by the unexpectedness of the surprising event and reflect an interrupted and surprised state in which the outcome does not make sense yet. Later responses, after sense-making, are more likely to incorporate the valence of the outcome itself. To identify initial and later responses to surprising stimuli, we conducted two repetition-change studies and coded the general valence of facial expressions using computerised facial coding and specific facial action using the Facial Action Coding System (FACS). Results partly supported our unfolding logic. The computerised coding showed that initial expressions to positive surprises were less positive than later expressions. Moreover, expressions to positive and negative surprises were initially similar, but after some time differentiated depending on the valence of the event. Importantly, these patterns were particularly pronounced in a subset of facially expressive participants, who also showed facial action in the FACS coding. The FACS data showed that the initial phase was characterised by limited facial action, whereas the later increase in positivity seems to be explained by smiling. Conceptual as well as methodological implications are discussed.
Affiliation(s)
- Marret K Noordewier
- Faculty of Social and Behavioural Sciences, Social and Organizational Psychology, Leiden University, Leiden, The Netherlands
- Eric van Dijk
- Faculty of Social and Behavioural Sciences, Social and Organizational Psychology, Leiden University, Leiden, The Netherlands

36
Shepherd SV, Freiwald WA. Functional Networks for Social Communication in the Macaque Monkey. Neuron 2018; 99:413-420.e3. [PMID: 30017395] [DOI: 10.1016/j.neuron.2018.06.027]
Abstract
All primates communicate. To dissect the neural circuits of social communication, we used fMRI to map non-human primate brain regions for social perception, second-person (interactive) social cognition, and orofacial movement generation. Face perception, second-person cognition, and face motor networks were largely non-overlapping and acted as distinct functional units rather than an integrated feedforward-processing pipeline. Whereas second-person context selectively engaged a region of medial prefrontal cortex, production of orofacial movements recruited distributed subcortical and cortical areas in medial and lateral frontal and insular cortex. These areas exhibited some specialization, but not dissociation, of function along the medio-lateral axis. Production of lipsmack movements recruited areas including putative homologs of Broca's area. These findings provide a new view of the neural architecture for social communication and suggest expressive orofacial movements generated by lateral premotor cortex as a putative evolutionary precursor to human speech.
Affiliation(s)
- Stephen V Shepherd
- The Laboratory of Neural Systems, The Rockefeller University, New York, NY 10065, USA
- Winrich A Freiwald
- The Laboratory of Neural Systems, The Rockefeller University, New York, NY 10065, USA

37
Jarillo S, Fridlund A, Crivelli C, Fernández-Dols JM, Russell JA. Rejoinder to Kret and Straffon. J Hum Evol 2018; 125:198-200. [PMID: 29880425] [DOI: 10.1016/j.jhevol.2018.03.002]
Affiliation(s)
- Sergio Jarillo
- Division of Anthropology, American Museum of Natural History, Central Park West at 79th Street, New York, NY 10024, USA
- Alan Fridlund
- Department of Psychological and Brain Sciences, University of California-Santa Barbara, 251 Ucen Drive, Santa Barbara, CA 93106, USA
- Carlos Crivelli
- School of Applied Social Sciences, De Montfort University, The Gateway, Leicester, LE1 9BH, UK
- James A Russell
- Department of Psychology, Boston College, Chestnut Hill, MA 02467, USA

38
Discrimination between smiling faces: Human observers vs. automated face analysis. Acta Psychol (Amst) 2018; 187:19-29. [PMID: 29758397] [DOI: 10.1016/j.actpsy.2018.04.019]
Abstract
This study investigated (a) how prototypical happy faces (with happy eyes and a smile) can be discriminated from blended expressions with a smile but non-happy eyes, depending on type and intensity of the eye expression; and (b) how smile discrimination differs for human perceivers versus automated face analysis, depending on affective valence and morphological facial features. Human observers categorized faces as happy or non-happy, or rated their valence. Automated analysis (FACET software) computed seven expressions (including joy/happiness) and 20 facial action units (AUs). Physical properties (low-level image statistics and visual saliency) of the face stimuli were controlled. Results revealed, first, that some blended expressions (especially, with angry eyes) had lower discrimination thresholds (i.e., they were identified as "non-happy" at lower non-happy eye intensities) than others (especially, with neutral eyes). Second, discrimination sensitivity was better for human perceivers than for automated FACET analysis. As an additional finding, affective valence predicted human discrimination performance, whereas morphological AUs predicted FACET discrimination. FACET can be a valid tool for categorizing prototypical expressions, but is currently more limited than human observers for discrimination of blended expressions. Configural processing facilitates detection of in/congruence(s) across regions, and thus detection of non-genuine smiling faces (due to non-happy eyes).
39
Wood A, Niedenthal P. Developing a social functional account of laughter. Soc Personal Psychol Compass 2018. [DOI: 10.1111/spc3.12383]
40
Abstract
Psychological research on emotion perception anchors heavily on an object perception analogy. We present static “cues,” such as facial expressions, as objects for perceivers to categorize. Yet in the real world, emotions play out as dynamic multidimensional events. Current theoretical approaches and research methods are limited in their ability to capture this complexity. We draw on insights from a predictive coding account of neural activity and a grounded cognition account of concept representation to conceive of emotion perception as a stream of synchronized conceptualizations between two individuals, which is supported and shaped by language. We articulate how this framework can illuminate the fundamental need to study culture, as well as other sources of conceptual variation, in unpacking conceptual synchrony in emotion. We close by suggesting that the conceptual system provides the necessary flexibility to overcome gaps in emotional synchrony.
Affiliation(s)
- Maria Gendron
- Department of Psychology, Northeastern University, USA
- Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, USA
- Lisa Feldman Barrett
- Department of Psychology, Northeastern University, USA
- Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, USA

41
Abstract
Emotions correspond to the execution of a number of computations by the central nervous system. Previous research has studied the hypothesis that some of these computations yield visually identifiable facial muscle movements. Here, we study the supplemental hypothesis that some of these computations yield facial blood flow changes unique to the category and valence of each emotion. These blood flow changes are visible as specific facial color patterns to observers, who can then successfully decode the emotion. We present converging computational and behavioral evidence in favor of this hypothesis. Our studies demonstrate that people identify the correct emotion category and valence from these facial colors, even in the absence of any facial muscle movement. Facial expressions of emotion in humans are believed to be produced by contracting one’s facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to successfully transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within and differential between emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent from that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion where facial color is an effective mechanism to visually transmit and decode emotion.
42
Crivelli C, Fridlund AJ. Facial Displays Are Tools for Social Influence. Trends Cogn Sci 2018; 22:388-399. [PMID: 29544997] [DOI: 10.1016/j.tics.2018.02.006]
Abstract
Based on modern theories of signal evolution and animal communication, the behavioral ecology view of facial displays (BECV) reconceives our 'facial expressions of emotion' as social tools that serve as lead signs to contingent action in social negotiation. BECV offers an externalist, functionalist view of facial displays that is not bound to Western conceptions about either expressions or emotions. It easily accommodates recent findings of diversity in facial displays, their public context-dependency, and the curious but common occurrence of solitary facial behavior. Finally, BECV restores continuity of human facial behavior research with modern functional accounts of non-human communication, and provides a non-mentalistic account of facial displays well-suited to new developments in artificial intelligence and social robotics.
Affiliation(s)
- Carlos Crivelli
- School of Applied Social Sciences, De Montfort University, The Gateway, LE1 9BH, Leicester, UK; these authors contributed equally to this work
- Alan J Fridlund
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, 251 Ucen Drive, Santa Barbara, CA, USA; these authors contributed equally to this work

43
Nozima AMM, Demos B, Souza WCD. Ausência de Prejuízo no Reconhecimento de Expressões Faciais entre Indivíduos com Parkinson. Psicologia: Teoria e Pesquisa 2018. [DOI: 10.1590/0102.3772e3421]
Abstract
ABSTRACT: Among the non-motor symptoms of Parkinson's disease, difficulty in recognizing emotional facial expressions has been widely discussed, since the brain areas underlying this ability may be affected by the disease. This study investigated, in older adults, recognition of the six facial expressions of emotion considered universal, using the Teste de Percepção Emocional de Faces (Facial Emotion Perception Test), in which participants perform a facial emotional expression recognition task. Forty-one individuals took part (27 men and 14 women; mean age 64.9 years). No significant difficulty in recognizing any of the emotional expressions was observed in the participants with Parkinson's. This result may indicate the need to develop instruments and techniques better suited to this type of investigation in the Brazilian population.
44
Caeiro C, Guo K, Mills D. Dogs and humans respond to emotionally competent stimuli by producing different facial actions. Sci Rep 2017; 7:15525. [PMID: 29138393] [PMCID: PMC5686192] [DOI: 10.1038/s41598-017-15091-4]
Abstract
The commonality of facial expressions of emotion has been studied in different species since Darwin, with most research focusing on closely related primate species. However, it is unclear to what extent common facial expressions exist in species that are more phylogenetically distant but share a need for interspecific emotional understanding. Here we used the objective, anatomically based tools FACS and DogFACS (Facial Action Coding Systems) to quantify and compare human and domestic dog facial expressions in response to emotionally competent stimuli associated with different categories of emotional arousal. We sought to answer two questions. First, do dogs display specific discriminatory facial movements in response to different categories of emotional stimuli? Second, do dogs display facial movements similar to those of humans when reacting in emotionally comparable contexts? We found that dogs displayed distinctive facial actions depending on the category of stimuli. However, dogs produced facial movements different from those of humans in comparable states of emotional arousal. These results refute the commonality of emotional expression across mammals, since dogs do not display human-like facial expressions. Given the unique interspecific relationship between dogs and humans, two highly social but evolutionarily distant species sharing a common environment, these findings give new insight into the origin of emotion expression.
Affiliation(s)
- Cátia Caeiro
- School of Psychology, University of Lincoln, Lincoln, UK; School of Life Sciences, University of Lincoln, Lincoln, UK
- Kun Guo
- School of Psychology, University of Lincoln, Lincoln, UK
- Daniel Mills
- School of Life Sciences, University of Lincoln, Lincoln, UK

45
The inherently contextualized nature of facial emotion perception. Curr Opin Psychol 2017; 17:47-54. [DOI: 10.1016/j.copsyc.2017.06.006]
46
Lopez LD, Reschke PJ, Knothe JM, Walle EA. Postural Communication of Emotion: Perception of Distinct Poses of Five Discrete Emotions. Front Psychol 2017; 8:710. [PMID: 28559860] [PMCID: PMC5432628] [DOI: 10.3389/fpsyg.2017.00710]
Abstract
Emotion can be communicated through multiple distinct modalities. However, an often-ignored channel of communication is posture. Recent research indicates that bodily posture plays an important role in the perception of emotion. However, research examining postural communication of emotion is limited by the variety of validated emotion poses and unknown cohesion of categorical and dimensional ratings. The present study addressed these limitations. Specifically, we examined individuals' (1) categorization of emotion postures depicting 5 discrete emotions (joy, sadness, fear, anger, and disgust), (2) categorization of different poses depicting the same discrete emotion, and (3) ratings of valence and arousal for each emotion pose. Findings revealed that participants successfully categorized each posture as the target emotion, including disgust. Moreover, participants accurately identified multiple distinct poses within each emotion category. In addition to the categorical responses, dimensional ratings of valence and arousal revealed interesting overlap and distinctions between and within emotion categories. These findings provide the first evidence of an identifiable posture for disgust and instantiate the principle of equifinality of emotional communication through the inclusion of distinct poses within emotion categories. Additionally, the dimensional ratings corroborated the categorical data and provide further granularity for future researchers to consider in examining how distinct emotion poses are perceived.
Affiliation(s)
- Lukas D Lopez
- Psychological Sciences, University of California, Merced, Merced, CA, USA
- Peter J Reschke
- Psychological Sciences, University of California, Merced, Merced, CA, USA
- Jennifer M Knothe
- Psychological Sciences, University of California, Merced, Merced, CA, USA
- Eric A Walle
- Psychological Sciences, University of California, Merced, Merced, CA, USA

47
Abstract
Posed stimuli dominate the study of nonverbal communication of emotion, but concerns have been raised that the use of posed stimuli may inflate recognition accuracy relative to spontaneous expressions. Here, we compare recognition of emotions from spontaneous expressions with that of matched posed stimuli. Participants made forced-choice judgments about the expressed emotion and whether the expression was spontaneous, and rated expressions on intensity (Experiments 1 and 2) and prototypicality (Experiment 2). Listeners were able to accurately infer emotions from both posed and spontaneous expressions, from auditory, visual, and audiovisual cues. Furthermore, perceived intensity and prototypicality were found to play a role in the accurate recognition of emotion, particularly from spontaneous expressions. Our findings demonstrate that perceivers can reliably recognise emotions from spontaneous expressions, and that depending on the comparison set, recognition levels can even be equivalent to that of posed stimulus sets.
Affiliation(s)
- Disa A Sauter
- Department of Social Psychology, University of Amsterdam, Amsterdam, Netherlands
- Agneta H Fischer
- Department of Social Psychology, University of Amsterdam, Amsterdam, Netherlands

48
Aragón OR, Bargh JA. “So Happy I Could Shout!” and “So Happy I Could Cry!” Dimorphous expressions represent and communicate motivational aspects of positive emotions. Cogn Emot 2017; 32:286-302. [DOI: 10.1080/02699931.2017.1301388]
Affiliation(s)
- Oriana R. Aragón
- Department of Psychology, Yale University, New Haven, CT, USA
- Department of Marketing, Clemson University School of Business, Clemson, SC, USA
- John A. Bargh
- Department of Psychology, Yale University, New Haven, CT, USA

49
Nelson NL, Mondloch CJ. Adults’ and children’s perception of facial expressions is influenced by body postures even for dynamic stimuli. Visual Cognition 2017. [DOI: 10.1080/13506285.2017.1301615]
Affiliation(s)
- Nicole L. Nelson
- School of Psychology, The University of Queensland, Brisbane, Australia
- Catherine J. Mondloch
- Department of Psychology, Brock University, St. Catharines, Canada
- ARC Centre of Excellence in Cognition and its Disorders, School of Psychology, University of Western Australia, Perth, Australia

50
“Tears of joy” and “tears and joy?” Personal accounts of dimorphous and mixed expressions of emotion. Motivation and Emotion 2017. [DOI: 10.1007/s11031-017-9606-x]