1. Becker C, Conduit R, Chouinard PA, Laycock R. Can deepfakes be used to study emotion perception? A comparison of dynamic face stimuli. Behav Res Methods 2024; 56:7674-7690. [PMID: 38834812] [PMCID: PMC11362322] [DOI: 10.3758/s13428-024-02443-y]
Abstract
Video recordings accurately capture facial expression movements; however, they are difficult for face perception researchers to standardise and manipulate. For this reason, dynamic morphs of photographs are often used, despite their lack of naturalistic facial motion. This study aimed to investigate how humans perceive emotions from faces using real videos and two approaches to artificially generating dynamic expressions: dynamic morphs and AI-synthesised deepfakes. Our participants perceived dynamic morphed expressions as less intense than videos (all emotions) and deepfakes (fearful, happy, sad). Videos and deepfakes were perceived similarly. Participants also perceived morphed happiness and sadness, but not morphed anger or fear, as less genuine than the other formats. Our findings support previous research indicating that social responses to morphed emotions are not representative of those to video recordings. They also suggest that deepfakes may offer a more suitable standardised stimulus type than morphs. Additionally, qualitative data were collected from participants and analysed using ChatGPT, a large language model, which successfully identified themes in the data consistent with those identified by an independent human researcher. According to this analysis, participants perceived dynamic morphs as less natural than videos and deepfakes. That participants perceived deepfakes and videos similarly suggests that deepfakes effectively replicate natural facial movements, making them a promising alternative for face perception research. The study contributes to the growing body of research exploring the usefulness of generative artificial intelligence for advancing the study of human perception.
2. Sarzedas J, Lima CF, Roberto MS, Scott SK, Pinheiro AP, Conde T. Blindness influences emotional authenticity perception in voices: Behavioral and ERP evidence. Cortex 2024; 172:254-270. [PMID: 38123404] [DOI: 10.1016/j.cortex.2023.11.005]
Abstract
The ability to distinguish spontaneous from volitional emotional expressions is an important social skill. How do blind individuals perceive emotional authenticity? Unlike sighted individuals, they cannot rely on facial and body language cues, relying instead on vocal cues alone. Here, we combined behavioral and ERP measures to investigate authenticity perception in laughter and crying in individuals with early- or late-blindness onset. Early-blind, late-blind, and sighted control participants (n = 17 per group, N = 51) completed authenticity and emotion discrimination tasks while EEG data were recorded. The stimuli consisted of laughs and cries that were either spontaneous or volitional. The ERP analysis focused on the N1, P2, and late positive potential (LPP). Behaviorally, early-blind participants showed intact authenticity perception, but late-blind participants performed worse than controls. There were no group differences in the emotion discrimination task. In brain responses, all groups were sensitive to laughter authenticity at the P2 stage, and to crying authenticity at the early LPP stage. Nevertheless, only early-blind participants were sensitive to crying authenticity at the N1 and middle LPP stages, and to laughter authenticity at the early LPP stage. Furthermore, early-blind and sighted participants were more sensitive than late-blind ones to crying authenticity at the P2 and late LPP stages. Altogether, these findings suggest that early blindness relates to facilitated brain processing of authenticity in voices, both at early sensory and late cognitive-evaluative stages. Late-onset blindness, in contrast, relates to decreased sensitivity to authenticity at behavioral and brain levels.
Affiliation(s)
- João Sarzedas: CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- César F Lima: Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal; Institute of Cognitive Neuroscience, University College London, London, UK
- Magda S Roberto: CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Sophie K Scott: Institute of Cognitive Neuroscience, University College London, London, UK
- Ana P Pinheiro: CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Tatiana Conde: CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
3. Kim H, Küster D, Girard JM, Krumhuber EG. Human and machine recognition of dynamic and static facial expressions: prototypicality, ambiguity, and complexity. Front Psychol 2023; 14:1221081. [PMID: 37794914] [PMCID: PMC10546417] [DOI: 10.3389/fpsyg.2023.1221081]
Abstract
A growing body of research suggests that movement aids facial expression recognition. However, less is known about the conditions under which this dynamic advantage occurs. The aim of this research was to test emotion recognition for static and dynamic facial expressions, exploring the role of three featural parameters (prototypicality, ambiguity, and complexity) in human and machine analysis. In two studies, facial expression videos and corresponding images depicting the peak of the target and non-target emotion were presented to human observers and a machine classifier (FACET). Results revealed higher recognition rates for dynamic stimuli than for non-target images. This benefit disappeared for target-emotion images, which were recognised as well as, or even better than, videos, and which were more prototypical, less ambiguous, and more complex in appearance than non-target images. While prototypicality and ambiguity exerted more predictive power for machine performance, complexity was more indicative of human emotion recognition. Interestingly, recognition performance by the machine was superior to that of humans for both target and non-target images. Together, the findings point towards a compensatory role of dynamic information, particularly when static stimuli lack relevant features of the target emotion. Implications for research using automatic facial expression analysis (AFEA) are discussed.
Affiliation(s)
- Hyunwoo Kim: Department of Experimental Psychology, University College London, London, United Kingdom
- Dennis Küster: Cognitive Systems Lab, Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany
- Jeffrey M. Girard: Department of Psychology, University of Kansas, Lawrence, KS, United States
- Eva G. Krumhuber: Department of Experimental Psychology, University College London, London, United Kingdom
4. Miolla A, Cardaioli M, Scarpazza C. Padova Emotional Dataset of Facial Expressions (PEDFE): A unique dataset of genuine and posed emotional facial expressions. Behav Res Methods 2023; 55:2559-2574. [PMID: 36002622] [PMCID: PMC10439033] [DOI: 10.3758/s13428-022-01914-4]
Abstract
Facial expressions are among the most powerful signals for human beings to convey their emotional states. Indeed, emotional facial datasets represent the most effective and controlled method of examining humans' interpretation of and reaction to various emotions. However, scientific research on emotion has mainly relied on static pictures of facial expressions posed (i.e., simulated) by actors, creating a significant bias in the emotion literature. This dataset aims to fill that gap, providing a considerable number (N = 1458) of dynamic genuine (N = 707) and posed (N = 751) clips of the six universal emotions from 56 participants. The dataset is available in two versions: original clips, including participants' body and background, and modified clips, where only the face of participants is visible. Notably, the original dataset was validated by 122 human raters, and the modified dataset by 280 human raters. Hit rates for emotion and genuineness, as well as the mean and standard deviation of genuineness and intensity ratings, are provided for each clip so that future users can select the clips best suited to their scientific questions.
Affiliation(s)
- A. Miolla: Department of General Psychology, University of Padua, Padua, Italy
- M. Cardaioli: Department of Mathematics, University of Padua, Padua, Italy; GFT Italy, Milan, Italy
- C. Scarpazza: Department of General Psychology, University of Padua, Padua, Italy
5. Pasqualette L, Klinger S, Kulke L. Development and validation of a natural dynamic facial expression stimulus set. PLoS One 2023; 18:e0287049. [PMID: 37379278] [DOI: 10.1371/journal.pone.0287049]
Abstract
Emotion research commonly uses either controlled, standardised pictures or natural video stimuli to measure participants' reactions to emotional content. Natural stimulus materials can be beneficial; however, certain measures, such as neuroscientific methods, require temporally and visually controlled stimulus material. The current study aimed to create and validate video stimuli in which a model displays positive, neutral and negative expressions. These stimuli were kept as natural as possible while their timing and visual features were edited to make them suitable for neuroscientific research (e.g. EEG). The stimuli were successfully controlled regarding their features, and the validation studies show that participants reliably classify the displayed expression correctly and perceive it as genuine. In conclusion, we present a motion stimulus set that is perceived as natural and suitable for neuroscientific research, together with a pipeline describing successful editing methods for controlling natural stimuli.
Affiliation(s)
- Laura Pasqualette: Neurocognitive Developmental Psychology, Psychology Department, Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Bavaria, Germany; Developmental and Educational Psychology Department, University of Bremen, Bremen, Germany
- Sara Klinger: Neurocognitive Developmental Psychology, Psychology Department, Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Bavaria, Germany
- Louisa Kulke: Neurocognitive Developmental Psychology, Psychology Department, Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Bavaria, Germany; Developmental and Educational Psychology Department, University of Bremen, Bremen, Germany
6. Sun J, Dong T, Liu P. Holistic processing and visual characteristics of regulated and spontaneous expressions. J Vis 2023; 23:6. [PMID: 36912592] [PMCID: PMC10019490] [DOI: 10.1167/jov.23.3.6]
Abstract
The rapid and efficient recognition of facial expressions is crucial for adaptive behaviour, and holistic processing is one of the critical mechanisms for achieving this adaptation. This study therefore examined how the authenticity of facial expressions affects holistic processing and its attentional characteristics. The results show that both regulated and spontaneous expressions were processed holistically. However, spontaneous expressions did not show the typical holistic pattern in detail: the congruency effect was observed equally in the aligned and misaligned conditions. No significant difference between the two expression types was observed in reaction times or eye movement characteristics (i.e., total fixation duration, fixation counts, and first fixation duration). These findings suggest that holistic processing strategies differ between the two expression types, although the difference was not reflected in attentional engagement.
Affiliation(s)
- Juncai Sun: School of Psychology, Qufu Normal University, Qufu, China
- Tiantian Dong: Department of Psychology, Shanghai Normal University, Shanghai, China
- Ping Liu: Department of Psychology, Shaoxing University, Shaoxing, China
7. Dong T, Sun J, He W. Positive and spontaneous facial expressions convey kindness and competence personality traits: Visual reasoning in personality attribution to faces. Personality and Individual Differences 2023. [DOI: 10.1016/j.paid.2022.111903]
8. Denault V, Zloteanu M. Darwin's illegitimate children: How body language experts undermine Darwin's legacy. Evolutionary Human Sciences 2022; 4:e53. [PMID: 37588916] [PMCID: PMC10426054] [DOI: 10.1017/ehs.2022.50]
Abstract
The Expression of the Emotions in Man and Animals has received and continues to receive much attention from emotion researchers and behavioural scientists. However, the common misconception that Darwin advocated for the universality of emotional reactions has led to a host of unfounded and discredited claims promoted by 'body language experts' on both traditional and social media. These 'experts' receive unparalleled public attention. Thus, rather than being presented with empirically supported findings on non-verbal behaviour, the public is exposed to 'body language analysis' of celebrities, politicians and defendants in criminal trials. In this perspective piece, we address the misinformation surrounding non-verbal behaviour. We also discuss the nature and scope of statements from body language experts, unpacking the claims of the most viewed YouTube video by a body language expert, comparing these claims with actual research findings, and giving specific attention to the implications for the justice system. We explain how body language experts use (and misuse) Darwin's legacy and conclude with a call for researchers to unite their voices and work towards stopping the spread of misinformation about non-verbal behaviour.
Affiliation(s)
- Vincent Denault: Department of Educational and Counselling Psychology, McGill University, Canada
9. Namba S, Sato W, Nakamura K, Watanabe K. Computational Process of Sharing Emotion: An Authentic Information Perspective. Front Psychol 2022; 13:849499. [PMID: 35645906] [PMCID: PMC9134197] [DOI: 10.3389/fpsyg.2022.849499]
Abstract
Although many psychology studies have shown that sharing emotion supports dyadic interaction, no study has examined the transmission of authentic information from emotional expressions to perceivers. Here, we used computational modelling, specifically a multinomial processing tree, to formally quantify the process of sharing emotion, with an emphasis on the perception of authentic information about expressers' feeling states from facial expressions. Results indicated that perceivers were more likely to perceive authentic information about feeling states from happy expressions than from angry expressions. Further, happy facial expressions activated both emotional elicitation and emotion sharing in perceivers, whereas angry facial expressions activated emotional elicitation alone. Third, parameters for detecting anger experiences correlated positively with those for happiness. No robust correlation was found between the parameters extracted from the experimental task and questionnaire-measured emotional contagion, empathy, and social anxiety. These results suggest that this new computational approach can contribute to describing the process of sharing emotion.
Affiliation(s)
- Shushi Namba (correspondence): Psychological Process Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan
- Wataru Sato: Psychological Process Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan
- Koyo Nakamura: Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria; Japan Society for the Promotion of Science, Tokyo, Japan; Faculty of Science and Engineering, Waseda University, Tokyo, Japan
- Katsumi Watanabe: Faculty of Science and Engineering, Waseda University, Tokyo, Japan; Faculty of Arts, Design and Architecture, University of New South Wales, Sydney, NSW, Australia
10. Wincenciak J, Palumbo L, Epihova G, Barraclough NE, Jellema T. Are adaptation aftereffects for facial emotional expressions affected by prior knowledge about the emotion? Cogn Emot 2022; 36:602-615. [PMID: 35094648] [DOI: 10.1080/02699931.2022.2031907]
Abstract
Accurate perception of the emotional signals conveyed by others is crucial for successful social interaction. Such perception is influenced not only by sensory input, but also by the knowledge we have about others' emotions. This study addresses whether knowing that another's emotional state is congruent or incongruent with their displayed emotional expression ("genuine" and "fake", respectively) affects the neural mechanisms underpinning the perception of their facial emotional expressions. We used a visual adaptation paradigm to investigate this question in three experiments employing increasing adaptation durations. The adapting stimuli consisted of photographs of emotional facial expressions of joy and anger, purported to reflect (in-)congruency between felt and expressed emotion, displayed by professional actors. A validity-checking procedure ensured that participants had the correct knowledge about the (in-)congruency. Significantly smaller adaptation aftereffects were obtained when participants knew that the displayed expression was incongruent with the felt emotion, following all tested adaptation periods. This study shows that knowledge about the congruency between felt and expressed emotion modulates facial expression aftereffects. We argue that this indicates that the neural substrate responsible for the perception of facial expressions of emotion incorporates the presumed felt emotion underpinning the expression.
Affiliation(s)
- Letizia Palumbo: Department of Psychology, Liverpool Hope University, Liverpool, UK
11. Motion Increases Recognition of Naturalistic Postures but not Facial Expressions. Journal of Nonverbal Behavior 2021. [DOI: 10.1007/s10919-021-00372-4]
12. Sitting in Judgment: How Body Posture Influences Deception Detection and Gazing Behavior. Behav Sci (Basel) 2021; 11:bs11060085. [PMID: 34200633] [PMCID: PMC8229315] [DOI: 10.3390/bs11060085]
Abstract
Body postures can affect how we process and attend to information. Here, a novel effect of adopting an open or closed posture on the ability to detect deception was investigated. It was hypothesized that the posture adopted by judges would affect their social acuity, resulting in differences in the detection of nonverbal behavior (i.e., microexpression recognition) and the discrimination of deceptive and truthful statements. In Study 1, adopting an open posture produced higher accuracy for detecting naturalistic lies, but no difference was observed in the recognition of brief facial expressions as compared to adopting a closed posture; trait empathy was found to have an additive effect on posture, with more empathic judges having higher deception detection scores. In Study 2, with the use of an eye-tracker, posture effects on gazing behavior when judging both low-stakes and high-stakes lies were measured. Sitting in an open posture reduced judges’ average dwell times looking at senders, and in particular, the amount and length of time they focused on their hands. The findings suggest that simply shifting posture can impact judges’ attention to visual information and veracity judgments (Mg = 0.40, 95% CI (0.03, 0.78)).
13. Krumhuber EG, Hyniewska S, Orlowska A. Contextual effects on smile perception and recognition memory. Current Psychology 2021. [DOI: 10.1007/s12144-021-01910-5]
Abstract
Most past research has focused on the role played by social context information in emotion classification, such as whether a display is perceived as belonging to one emotion category or another. The current study aims to investigate whether the effect of context extends to the interpretation of emotion displays, i.e. smiles that could be judged either as posed or spontaneous readouts of underlying positive emotion. A between-subjects design (N = 93) was used to investigate the perception and recall of posed smiles, presented together with a happy or polite social context scenario. Results showed that smiles seen in a happy context were judged as more spontaneous than the same smiles presented in a polite context. Also, smiles were misremembered as having more of the physical attributes (i.e., Duchenne marker) associated with spontaneous enjoyment when they appeared in the happy than polite context condition. Together, these findings indicate that social context information is routinely encoded during emotion perception, thereby shaping the interpretation and recognition memory of facial expressions.
14. Namba S, Matsui H, Zloteanu M. Distinct temporal features of genuine and deliberate facial expressions of surprise. Sci Rep 2021; 11:3362. [PMID: 33564091] [PMCID: PMC7873236] [DOI: 10.1038/s41598-021-83077-4]
Abstract
The physical properties of genuine and deliberate facial expressions remain elusive. This study focuses on observable dynamic differences between genuine and deliberate expressions of surprise based on the temporal structure of facial parts during emotional expression. Facial expressions of surprise were elicited using multiple methods and video recorded: senders were filmed as they experienced genuine surprise in response to a jack-in-the-box (Genuine), while other senders were asked to produce deliberate surprise with no preparation (Improvised), by mimicking the expression of another (External), or by reproducing the surprised face after having first experienced genuine surprise (Rehearsed). A total of 127 videos were analyzed, and moment-to-moment movements of eyelids and eyebrows were annotated with deep learning-based tracking software. Results showed that all surprise displays were composed mainly of eyebrow- and eyelid-raising movements. Genuine displays included horizontal movement in the left part of the face, but also showed the weakest movement coupling of all conditions. External displays had faster eyebrow and eyelid movement, while Improvised displays showed the strongest coupling of movements. The findings demonstrate the importance of dynamic information in the encoding of genuine and deliberate expressions of surprise and the importance of the production method employed in research.
Affiliation(s)
- Shushi Namba: Psychological Process Team, BZP, Robotics Project, RIKEN, Kyoto, 6190288, Japan
- Hiroshi Matsui: Center for Human-Nature, Artificial Intelligence, and Neuroscience, Hokkaido University, Hokkaido, 0600808, Japan
- Mircea Zloteanu: Department of Criminology and Sociology, Kingston University London, Kingston Upon Thames, KT1 2EE, UK
15. Zloteanu M, Krumhuber EG. Expression Authenticity: The Role of Genuine and Deliberate Displays in Emotion Perception. Front Psychol 2021; 11:611248. [PMID: 33519624] [PMCID: PMC7840656] [DOI: 10.3389/fpsyg.2020.611248]
Abstract
People dedicate significant attention to others' facial expressions and to deciphering their meaning. Hence, knowing whether such expressions are genuine or deliberate is important. Early research proposed that authenticity could be discerned based on reliable facial muscle activations unique to genuine emotional experiences that are impossible to produce voluntarily. With an increasing body of research, such claims may no longer hold up to empirical scrutiny. In this article, expression authenticity is considered within the context of senders' ability to produce convincing facial displays that resemble genuine affect and human decoders' judgments of expression authenticity. This includes a discussion of spontaneous vs. posed expressions, as well as appearance- vs. elicitation-based approaches for defining emotion recognition accuracy. We further expand on the functional role of facial displays as neurophysiological states and communicative signals, thereby drawing upon the encoding-decoding and affect-induction perspectives of emotion expressions. Theoretical and methodological issues are addressed with the aim to instigate greater conceptual and operational clarity in future investigations of expression authenticity.
Affiliation(s)
- Mircea Zloteanu: Department of Criminology and Sociology, Kingston University London, Kingston, United Kingdom; Department of Psychology, Kingston University London, Kingston, United Kingdom
- Eva G Krumhuber: Department of Experimental Psychology, University College London, London, United Kingdom
16. Zloteanu M, Krumhuber EG, Richardson DC. Acting Surprised: Comparing Perceptions of Different Dynamic Deliberate Expressions. Journal of Nonverbal Behavior 2020. [DOI: 10.1007/s10919-020-00349-9]
Abstract
People are accurate at classifying emotions from facial expressions but much poorer at determining if such expressions are spontaneously felt or deliberately posed. We explored if the method used by senders to produce an expression influences the decoder's ability to discriminate authenticity, drawing inspiration from two well-known acting techniques: the Stanislavski (internal) and Mimic method (external). We compared spontaneous surprise expressions in response to a jack-in-the-box (genuine condition), to posed displays of senders who either focused on their past affective state (internal condition) or the outward expression (external condition). Although decoders performed better than chance at discriminating the authenticity of all expressions, their accuracy was lower in classifying external surprise compared to internal surprise. Decoders also found it harder to discriminate external surprise from spontaneous surprise and were less confident in their decisions, perceiving these to be similarly intense but less genuine-looking. The findings suggest that senders are capable of voluntarily producing genuine-looking expressions of emotions with minimal effort, especially by mimicking a genuine expression. Implications for research on emotion recognition are discussed.
17. Zloteanu M, Bull P, Krumhuber EG, Richardson DC. Veracity judgement, not accuracy: Reconsidering the role of facial expressions, empathy, and emotion recognition training on deception detection. Q J Exp Psychol (Hove) 2020; 74:910-927. [PMID: 33234008] [PMCID: PMC8056713] [DOI: 10.1177/1747021820978851]
Abstract
People hold strong beliefs about the role of emotional cues in detecting deception. While research on the diagnostic value of such cues has been mixed, their influence on human veracity judgements is yet to be fully explored. Here, we address the relationship between emotional information and veracity judgements. In Study 1, the role of emotion recognition in the process of detecting naturalistic lies was investigated. Decoders’ veracity judgements were compared based on differences in trait empathy and their ability to recognise microexpressions and subtle expressions. Accuracy was found to be unrelated to facial cue recognition and negatively related to empathy. In Study 2, we manipulated decoders’ emotion recognition ability and the type of lies they saw: experiential or affective (emotional and unemotional). Decoders received either emotion recognition training, bogus training, or no training. In all scenarios, training did not affect veracity judgements. Experiential lies were easier to detect than affective lies; however, affective unemotional lies were overall the hardest to judge. The findings illustrate the complex relationship between emotion recognition and veracity judgements, with abilities for facial cue detection being high yet unrelated to deception accuracy.
Affiliation(s)
- Mircea Zloteanu
- Department of Psychology, Teesside University, Middlesbrough, UK
- Department of Criminology and Sociology, Kingston University, London, UK
- Peter Bull
- Department of Psychology, University of York, York, UK
- Department of Psychology, University of Salford, Salford, UK
- Eva G Krumhuber
- Department of Experimental Psychology, University College London, London, UK
- Daniel C Richardson
- Department of Experimental Psychology, University College London, London, UK
|
18
|
Namba S, Rychlowska M, Orlowska A, Aviezer H, Krumhuber EG. Social context and culture influence judgments of non-Duchenne smiles. JOURNAL OF CULTURAL COGNITIVE SCIENCE 2020. [DOI: 10.1007/s41809-020-00066-1] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
Extant evidence points toward the role of contextual information and related cross-cultural variations in emotion perception, but most of the work to date has focused on judgments of basic emotions. The current research examines how culture and situational context affect the interpretation of emotion displays, i.e. judgments of the extent to which ambiguous smiles communicate happiness versus polite intentions. We hypothesized that smiles associated with contexts implying happiness would be judged as conveying more positive feelings compared to smiles paired with contexts implying politeness or smiles presented without context. In line with existing research on cross-cultural variation in contextual influences, we also expected these effects to be larger in Japan than in the UK. In Study 1, British participants viewed non-Duchenne smiles presented on their own or paired with background scenes implying happiness or the need to be polite. Compared to face-only stimuli, happy contexts made smiles appear more genuine, whereas polite contexts led smiles to be seen as less genuine. Study 2 replicated this result using verbal vignettes, showing a similar pattern of contextual effects among British and Japanese participants. However, while the effect of vignettes describing happy situations was comparable in both cultures, the influence of vignettes describing polite situations was stronger in Japan than in the UK. Together, the findings document the importance of context information in judging smile expressions and highlight the need to investigate how culture moderates such influences.
|
19
|
Lander K, Butcher NL. Recognizing Genuine From Posed Facial Expressions: Exploring the Role of Dynamic Information and Face Familiarity. Front Psychol 2020; 11:1378. [PMID: 32719634 PMCID: PMC7347903 DOI: 10.3389/fpsyg.2020.01378] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2020] [Accepted: 05/22/2020] [Indexed: 11/13/2022] Open
Abstract
The accurate recognition of emotion is important for interpersonal interaction and when navigating our social world. However, not all facial displays reflect the emotional experience currently being felt by the expresser. Indeed, faces express both genuine and posed displays of emotion. In this article, we summarize the importance of motion for the recognition of face identity before critically outlining the role of dynamic information in determining facial expressions and distinguishing between genuine and posed expressions of emotion. We propose that both dynamic information and face familiarity may modulate our ability to determine whether an expression is genuine or not. Finally, we consider the shared role for dynamic information across different face recognition tasks and the wider impact of face familiarity on determining genuine from posed expressions during real-world interactions.
Affiliation(s)
- Karen Lander
- Division of Neuroscience and Experimental Psychology, University of Manchester, Manchester, United Kingdom
- Natalie L Butcher
- School of Social Sciences, Humanities and Law, Teesside University, Middlesbrough, United Kingdom
|
20
|
Dupré D, Krumhuber EG, Küster D, McKeown GJ. A performance comparison of eight commercially available automatic classifiers for facial affect recognition. PLoS One 2020; 15:e0231968. [PMID: 32330178 PMCID: PMC7182192 DOI: 10.1371/journal.pone.0231968] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2019] [Accepted: 04/03/2020] [Indexed: 02/03/2023] Open
Abstract
In the wake of rapid advances in automatic affect analysis, commercial automatic classifiers for facial affect recognition have attracted considerable attention in recent years. While several options now exist to analyze dynamic video data, less is known about the relative performance of these classifiers, in particular when facial expressions are spontaneous rather than posed. In the present work, we tested eight out-of-the-box automatic classifiers and compared their emotion recognition performance to that of human observers. A total of 937 videos were sampled from two large databases that conveyed the basic six emotions (happiness, sadness, anger, fear, surprise, and disgust) either in posed (BU-4DFE) or spontaneous (UT-Dallas) form. Results revealed a recognition advantage for human observers over automatic classification. Among the eight classifiers, there was considerable variance in recognition accuracy, ranging from 48% to 62%. Subsequent analyses per type of expression revealed that performance by the two best-performing classifiers approximated that of human observers, suggesting high agreement for posed expressions. However, classification accuracy was consistently lower (although above chance level) for spontaneous affective behavior. The findings indicate potential shortcomings of existing out-of-the-box classifiers for measuring emotions, and highlight the need for more spontaneous facial databases that can act as a benchmark in the training and testing of automatic emotion recognition systems. We further discuss some limitations of analyzing facial expressions that have been recorded in controlled environments.
Affiliation(s)
- Damien Dupré
- Business School, Dublin City University, Dublin, Republic of Ireland
- Eva G. Krumhuber
- Department of Experimental Psychology, University College London, London, England, United Kingdom
- Dennis Küster
- Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany
- Department of Psychology and Methods, Jacobs University Bremen, Bremen, Germany
- Gary J. McKeown
- Department of Psychology, Queen’s University Belfast, Belfast, Northern Ireland, United Kingdom
|
21
|
Roitblat Y, Cohensedgh S, Frig-Levinson E, Cohen M, Dadbin K, Shohed C, Shvartsman D, Shterenshis M. Emotional expressions with minimal facial muscle actions. Report 2: Recognition of emotions. CURRENT PSYCHOLOGY 2020. [DOI: 10.1007/s12144-020-00691-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
|
22
|
Perusquía-Hernández M, Ayabe-Kanamura S, Suzuki K. Human perception and biosignal-based identification of posed and spontaneous smiles. PLoS One 2019; 14:e0226328. [PMID: 31830111 PMCID: PMC6907846 DOI: 10.1371/journal.pone.0226328] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2019] [Accepted: 11/25/2019] [Indexed: 11/21/2022] Open
Abstract
Facial expressions are behavioural cues that represent an affective state. Because of this, they are an unobtrusive alternative to affective self-report. The perceptual identification of facial expressions can be performed automatically with technological assistance. Once the facial expressions have been identified, the interpretation is usually left to a field expert. However, facial expressions do not always represent the felt affect; they can also be a communication tool. Therefore, facial expression measurements are prone to the same biases as self-report. Hence, the automatic measurement of human affect should also make inferences on the nature of the facial expressions instead of describing facial movements only. We present two experiments designed to assess whether such automated inferential judgment could be advantageous. In particular, we investigated the differences between posed and spontaneous smiles. The aim of the first experiment was to elicit both types of expressions. In contrast to other studies, the temporal dynamics of the elicited posed expression were not constrained by the eliciting instruction. Electromyography (EMG) was used to automatically discriminate between them. Spontaneous smiles were found to differ from posed smiles in magnitude, onset time, and onset and offset speed independently of the producer's ethnicity. Agreement between the expression type and EMG-based automatic detection reached 94% accuracy. Finally, measurements of the agreement between human video coders showed that although agreement on perceptual labels is fairly good, the agreement worsens with inferential labels. A second experiment confirmed that a layperson's accuracy as regards distinguishing posed from spontaneous smiles is poor. Therefore, the automatic identification of inferential labels would be beneficial in terms of affective assessments and further research on this topic.
Affiliation(s)
- Monica Perusquía-Hernández
- Communication Science Laboratories, NTT, Atsugi, Kanagawa, Japan
- Artificial Intelligence Laboratory, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Kenji Suzuki
- Artificial Intelligence Laboratory, University of Tsukuba, Tsukuba, Ibaraki, Japan
|
23
|
Scherer KR, Ellgring H, Dieckmann A, Unfried M, Mortillaro M. Dynamic Facial Expression of Emotion and Observer Inference. Front Psychol 2019; 10:508. [PMID: 30941073 PMCID: PMC6434775 DOI: 10.3389/fpsyg.2019.00508] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2018] [Accepted: 02/20/2019] [Indexed: 11/13/2022] Open
Abstract
Research on facial emotion expression has mostly focused on emotion recognition, assuming that a small number of discrete emotions is elicited and expressed via prototypical facial muscle configurations as captured in still photographs. These are expected to be recognized by observers, presumably via template matching. In contrast, appraisal theories of emotion propose a more dynamic approach, suggesting that specific elements of facial expressions are directly produced by the result of certain appraisals and predicting the facial patterns to be expected for certain appraisal configurations. This approach has recently been extended to emotion perception, claiming that observers first infer individual appraisals and only then make categorical emotion judgments based on the estimated appraisal patterns, using inference rules. Here, we report two related studies to empirically investigate the facial action unit configurations that are used by actors to convey specific emotions in short affect bursts and to examine to what extent observers can infer a person's emotions from the predicted facial expression configurations. The results show that (1) professional actors use many of the predicted facial action unit patterns to enact systematically specified appraisal outcomes in a realistic scenario setting, and (2) naïve observers infer the respective emotions based on highly similar facial movement configurations with a degree of accuracy comparable to earlier research findings. Based on estimates of underlying appraisal criteria for the different emotions we conclude that the patterns of facial action units identified in this research correspond largely to prior predictions and encourage further research on appraisal-driven expression and inference.
Affiliation(s)
- Klaus R Scherer
- Department of Psychology and Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
- Heiner Ellgring
- Department of Psychology, University of Würzburg, Würzburg, Germany
- Marcello Mortillaro
- Department of Psychology and Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
|
24
|
Emotional expressions with minimal facial muscle actions. Report 1: Cues and targets. CURRENT PSYCHOLOGY 2019. [DOI: 10.1007/s12144-019-0151-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
25
|
Camerlink I, Coulange E, Farish M, Baxter EM, Turner SP. Facial expression as a potential measure of both intent and emotion. Sci Rep 2018; 8:17602. [PMID: 30514964 PMCID: PMC6279763 DOI: 10.1038/s41598-018-35905-3] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2018] [Accepted: 11/12/2018] [Indexed: 11/24/2022] Open
Abstract
Facial expressions convey information on emotion, physical sensations, and intent. The much-debated theories that facial expressions reflect emotions or signal intent have largely remained separate in animal studies. Here we integrate these approaches with the aim to (1) investigate whether pigs may use facial expressions as a signal of intent and (2) quantify differences in facial metrics between different contexts of potentially negative emotional state. Facial metrics of 38 pigs were recorded prior to aggression, during aggression, and during retreat from being attacked in a dyadic contest. Ear angle, snout ratio (length/height), and eye ratio from 572 images were measured. Prior to the occurrence of aggression, eventual initiators of the first bite had a smaller snout ratio, and eventual winners showed a non-significant tendency to hold their ears forward more than eventual losers. During aggression, pigs' ears were more forward-orientated and their snout ratio was smaller. During retreat, pigs' ears were backwards and their eyes less open. The results suggest that facial expressions can communicate aggressive intent related to fight success, and that facial metrics can convey information about emotional responses to contexts involving aggression and fear.
Affiliation(s)
- Irene Camerlink
- Animal Behaviour & Welfare, Animal and Veterinary Sciences Research Group, Scotland's Rural College (SRUC), West Mains Road, Edinburgh, EH9 3JG, UK.
- Institute of Animal Husbandry and Animal Welfare, Department of Farm Animals and Veterinary Public Health, University for Veterinary Medicine Vienna, Veterinärplatz 1, 1210, Vienna, Austria.
- Estelle Coulange
- Animal Behaviour & Welfare, Animal and Veterinary Sciences Research Group, Scotland's Rural College (SRUC), West Mains Road, Edinburgh, EH9 3JG, UK
- Marianne Farish
- Animal Behaviour & Welfare, Animal and Veterinary Sciences Research Group, Scotland's Rural College (SRUC), West Mains Road, Edinburgh, EH9 3JG, UK
- Emma M Baxter
- Animal Behaviour & Welfare, Animal and Veterinary Sciences Research Group, Scotland's Rural College (SRUC), West Mains Road, Edinburgh, EH9 3JG, UK
- Simon P Turner
- Animal Behaviour & Welfare, Animal and Veterinary Sciences Research Group, Scotland's Rural College (SRUC), West Mains Road, Edinburgh, EH9 3JG, UK
|