1
Scarpazza C, Gramegna C, Costa C, Pezzetta R, Saetti MC, Preti AN, Difonzo T, Zago S, Bolognini N. The Emotion Authenticity Recognition (EAR) test: normative data of an innovative test using dynamic emotional stimuli to evaluate the ability to recognize the authenticity of emotions expressed by faces. Neurol Sci 2024. doi: 10.1007/s10072-024-07689-0. PMID: 39023709.
Abstract
Although research has focused extensively on how emotions conveyed by faces are perceived, the perception of the authenticity of those emotions has been surprisingly overlooked. Here, we present the Emotion Authenticity Recognition (EAR) test, a test specifically developed using dynamic stimuli depicting authentic and posed emotions to evaluate the ability of individuals to correctly identify an emotion (emotion recognition index, ER Index) and to classify its authenticity (authenticity recognition index, EA Index). The EAR test has been validated on 522 healthy participants, and normative values are provided. Correlations with demographic characteristics, empathy and general cognitive status revealed that both indices are negatively correlated with age, and positively correlated with education, cognitive status and different facets of empathy. The EAR test offers a new ecological measure of the ability to detect emotion authenticity, allowing the exploration of possible social cognition deficits even in patients who are otherwise cognitively intact.
Affiliation(s)
- Cristina Scarpazza
- Department of General Psychology, University of Padova, Via Venezia 8, Padova, PD, Italy
- IRCCS S Camillo Hospital, Venezia, Italy
- Chiara Gramegna
- Ph.D. Program in Neuroscience, School of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Cristiano Costa
- Padova Neuroscience Center, University of Padova, Padova, Italy
- Maria Cristina Saetti
- Neurology Unit, IRCCS Fondazione Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy
- Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy
- Alice Naomi Preti
- Ph.D. Program in Neuroscience, School of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Teresa Difonzo
- Neurology Unit, Foundation IRCCS Ca' Granda Hospital Maggiore Policlinico, Milan, Italy
- Stefano Zago
- Neurology Unit, Foundation IRCCS Ca' Granda Hospital Maggiore Policlinico, Milan, Italy
- Nadia Bolognini
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Laboratory of Neuropsychology, Department of Neurorehabilitation Sciences, IRCCS Istituto Auxologico Italiano, Milan, Italy
2
Laukka P, Månsson KNT, Cortes DS, Manzouri A, Frick A, Fredborg W, Fischer H. Neural correlates of individual differences in multimodal emotion recognition ability. Cortex 2024; 175:1-11. doi: 10.1016/j.cortex.2024.03.009. PMID: 38691922.
Abstract
Studies have reported substantial variability in emotion recognition ability (ERA) - an important social skill - but the possible neural underpinnings of such individual differences are not well understood. This functional magnetic resonance imaging (fMRI) study investigated neural responses during emotion recognition in young adults (N = 49) who were selected for inclusion based on their performance (high or low) in previous testing of ERA. Participants were asked to judge brief video recordings in a forced-choice emotion recognition task, wherein stimuli were presented in visual, auditory and multimodal (audiovisual) blocks. Emotion recognition rates during brain scanning confirmed that individuals with high (vs low) ERA achieved higher accuracy for all presentation blocks. fMRI analyses focused on key regions of interest (ROIs) involved in the processing of multimodal emotion expressions, based on previous meta-analyses. In neural responses to emotional stimuli contrasted with neutral stimuli, individuals with high (vs low) ERA showed higher activation in the following ROIs during the multimodal condition: right middle superior temporal gyrus (mSTG), right posterior superior temporal sulcus (PSTS), and right inferior frontal cortex (IFC). Overall, the results suggest that individual variability in ERA may be reflected across several stages of decisional processing, including extraction (mSTG), integration (PSTS) and evaluation (IFC) of emotional information.
Affiliation(s)
- Petri Laukka
- Department of Psychology, Stockholm University, Stockholm, Sweden; Department of Psychology, Uppsala University, Uppsala, Sweden
- Kristoffer N T Månsson
- Centre for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Department of Clinical Psychology and Psychotherapy, Babeș-Bolyai University, Cluj-Napoca, Romania
- Diana S Cortes
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Amirhossein Manzouri
- Department of Psychology, Stockholm University, Stockholm, Sweden; Centre for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Andreas Frick
- Department of Medical Sciences, Psychiatry, Uppsala University, Uppsala, Sweden
- William Fredborg
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Håkan Fischer
- Department of Psychology, Stockholm University, Stockholm, Sweden; Stockholm University Brain Imaging Centre (SUBIC), Stockholm University, Stockholm, Sweden; Aging Research Center, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet and Stockholm University, Stockholm, Sweden
3
Sarzedas J, Lima CF, Roberto MS, Scott SK, Pinheiro AP, Conde T. Blindness influences emotional authenticity perception in voices: Behavioral and ERP evidence. Cortex 2024; 172:254-270. doi: 10.1016/j.cortex.2023.11.005. PMID: 38123404.
Abstract
The ability to distinguish spontaneous from volitional emotional expressions is an important social skill. How do blind individuals perceive emotional authenticity? Unlike sighted individuals, they cannot rely on facial and body language cues, relying instead on vocal cues alone. Here, we combined behavioral and ERP measures to investigate authenticity perception in laughter and crying in individuals with early- or late-blindness onset. Early-blind, late-blind, and sighted control participants (n = 17 per group, N = 51) completed authenticity and emotion discrimination tasks while EEG data were recorded. The stimuli consisted of laughs and cries that were either spontaneous or volitional. The ERP analysis focused on the N1, P2, and late positive potential (LPP). Behaviorally, early-blind participants showed intact authenticity perception, but late-blind participants performed worse than controls. There were no group differences in the emotion discrimination task. In brain responses, all groups were sensitive to laughter authenticity at the P2 stage, and to crying authenticity at the early LPP stage. Nevertheless, only early-blind participants were sensitive to crying authenticity at the N1 and middle LPP stages, and to laughter authenticity at the early LPP stage. Furthermore, early-blind and sighted participants were more sensitive than late-blind ones to crying authenticity at the P2 and late LPP stages. Altogether, these findings suggest that early blindness relates to facilitated brain processing of authenticity in voices, both at early sensory and late cognitive-evaluative stages. Late-onset blindness, in contrast, relates to decreased sensitivity to authenticity at behavioral and brain levels.
Affiliation(s)
- João Sarzedas
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- César F Lima
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal; Institute of Cognitive Neuroscience, University College London, London, UK
- Magda S Roberto
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, London, UK
- Ana P Pinheiro
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Tatiana Conde
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
4
Kim H, Kwak S, Yoo SY, Lee EC, Park S, Ko H, Bae M, Seo M, Nam G, Lee JY. Facial Expressions Track Depressive Symptoms in Old Age. Sensors (Basel) 2023; 23:7080. doi: 10.3390/s23167080. PMID: 37631616; PMCID: PMC10459725.
Abstract
Facial expressions play a crucial role in the diagnosis of mental illnesses characterized by mood changes. The Facial Action Coding System (FACS) is a comprehensive framework that systematically categorizes and captures even subtle changes in facial appearance, enabling the examination of emotional expressions. In this study, we investigated the association between facial expressions and depressive symptoms in a sample of 59 older adults without cognitive impairment. Utilizing the FACS and the Korean version of the Beck Depression Inventory-II, we analyzed both "posed" and "spontaneous" facial expressions across six basic emotions: happiness, sadness, fear, anger, surprise, and disgust. Through principal component analysis, we summarized 17 action units across these emotion conditions. Subsequently, multiple regression analyses were performed to identify specific facial expression features that explain depressive symptoms. Our findings revealed several distinct features of posed and spontaneous facial expressions. Specifically, among older adults with higher depressive symptoms, a posed face exhibited a downward and inward pull at the corner of the mouth, indicative of sadness. In contrast, a spontaneous face displayed raised and narrowed inner brows, which was associated with more severe depressive symptoms in older adults. These findings suggest that facial expressions can provide valuable insights into assessing depressive symptoms in older adults.
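The two-step analysis described above (principal component analysis over FACS action units, then multiple regression on depressive symptoms) can be sketched as follows. This is an illustrative reconstruction with random stand-in data, not the authors' code; the component count and score ranges are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-in data: 59 participants x 17 action-unit measures.
# The study's real AU data and BDI-II scores are not reproduced here.
au_features = rng.random((59, 17))
bdi_scores = rng.integers(0, 40, size=59).astype(float)  # BDI-II totals

# Step 1: PCA summarizes the correlated action units into a few components.
pca = PCA(n_components=5)
components = pca.fit_transform(au_features)

# Step 2: multiple regression relates component scores to symptom severity.
model = LinearRegression().fit(components, bdi_scores)
r_squared = model.score(components, bdi_scores)
print(f"variance in depression scores explained: {r_squared:.2f}")
```

With real data, the regression coefficients would indicate which AU components (e.g., mouth-corner depression in posed faces) track depressive symptoms.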
Affiliation(s)
- Hairin Kim
- Department of Psychiatry, Seoul National University College of Medicine & SMG-SNU Boramae Medical Center, Seoul 07061, Republic of Korea
- Seyul Kwak
- Department of Psychology, Pusan National University, Busan 46241, Republic of Korea
- So Young Yoo
- Department of Psychiatry, Seoul National University College of Medicine & SMG-SNU Boramae Medical Center, Seoul 07061, Republic of Korea
- Eui Chul Lee
- Department of Human-Centered Artificial Intelligence, Sangmyung University, Hongjimun 2-Gil 20, Jongno-Gu, Seoul 03016, Republic of Korea
- Soowon Park
- Division of Teacher Education, College of General Education for Truth, Sincerity and Love, Kyonggi University, Suwon 16227, Republic of Korea
- Hyunwoong Ko
- Samsung Advanced Institute for Health Sciences and Technology (SAIHST), Sungkyunkwan University, Samsung Medical Center, Seoul 06355, Republic of Korea
- Interdisciplinary Program in Cognitive Science, Seoul National University, Seoul 08826, Republic of Korea
- Minju Bae
- Interdisciplinary Program in Cognitive Science, Seoul National University, Seoul 08826, Republic of Korea
- Myogyeong Seo
- Department of Psychiatry, Seoul National University College of Medicine & SMG-SNU Boramae Medical Center, Seoul 07061, Republic of Korea
- Gieun Nam
- Department of Psychiatry, Seoul National University College of Medicine & SMG-SNU Boramae Medical Center, Seoul 07061, Republic of Korea
- Jun-Young Lee
- Department of Psychiatry, Seoul National University College of Medicine & SMG-SNU Boramae Medical Center, Seoul 07061, Republic of Korea
5
Pasqualette L, Klinger S, Kulke L. Development and validation of a natural dynamic facial expression stimulus set. PLoS One 2023; 18:e0287049. doi: 10.1371/journal.pone.0287049. PMID: 37379278.
Abstract
Emotion research commonly uses either controlled, standardised pictures or natural video stimuli to measure participants' reactions to emotional content. Natural stimulus materials can be beneficial; however, certain methods, such as neuroscientific measures, require temporally and visually controlled stimulus material. The current study aimed to create and validate video stimuli in which a model displays positive, neutral and negative expressions. These stimuli were kept as natural as possible while their timing and visual features were edited to make them suitable for neuroscientific research (e.g. EEG). The stimuli were successfully controlled with regard to these features, and the validation studies show that participants reliably classify the displayed expressions correctly and perceive them as genuine. In conclusion, we present a motion stimulus set that is perceived as natural and is suitable for neuroscientific research, together with a pipeline describing successful editing methods for controlling natural stimuli.
Affiliation(s)
- Laura Pasqualette
- Neurocognitive Developmental Psychology, Psychology Department, Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Bavaria, Germany
- Developmental and Educational Psychology Department, University of Bremen, Bremen, Germany
- Sara Klinger
- Neurocognitive Developmental Psychology, Psychology Department, Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Bavaria, Germany
- Louisa Kulke
- Neurocognitive Developmental Psychology, Psychology Department, Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Bavaria, Germany
- Developmental and Educational Psychology Department, University of Bremen, Bremen, Germany
6
Straulino E, Scarpazza C, Sartori L. What is missing in the study of emotion expression? Front Psychol 2023; 14:1158136. doi: 10.3389/fpsyg.2023.1158136. PMID: 37179857; PMCID: PMC10173880.
Abstract
As celebrations approach for the 150th anniversary of "The Expression of the Emotions in Man and Animals", scientists' conclusions on emotion expression are still debated. Emotion expression has traditionally been anchored to prototypical and mutually exclusive facial expressions (e.g., anger, disgust, fear, happiness, sadness, and surprise). However, people express emotions in nuanced patterns and - crucially - not everything is in the face. In recent decades, considerable work has critiqued this classical view, calling for a more fluid and flexible approach that considers how humans dynamically perform genuine expressions with their bodies in context. A growing body of evidence suggests that each emotional display is a complex, multi-component, motoric event. The human face is never static but continuously acts and reacts to internal and environmental stimuli, with the coordinated action of muscles throughout the body. Moreover, two anatomically and functionally different neural pathways subserve voluntary and involuntary expressions. An interesting implication is that we have distinct and independent pathways for genuine and posed facial expressions, and different combinations may occur across the vertical facial axis. Investigating the time course of these facial blends, which can only partly be controlled consciously, has recently provided a useful operational test for comparing the predictions of various models of the lateralization of emotions. This concise review identifies shortcomings and new challenges in the study of emotion expression at the face, body, and contextual levels, eventually pointing to a theoretical and methodological shift in the study of emotions. We contend that the most feasible way to address the complex world of emotion expression is to define a completely new and more complete approach to emotional investigation. This approach can potentially lead us to the roots of emotional displays and to the individual mechanisms underlying their expression (i.e., individual emotional signatures).
Affiliation(s)
- Elisa Straulino
- Department of General Psychology, University of Padova, Padova, Italy
- Correspondence: Elisa Straulino
- Cristina Scarpazza
- Department of General Psychology, University of Padova, Padova, Italy
- IRCCS San Camillo Hospital, Venice, Italy
- Luisa Sartori
- Department of General Psychology, University of Padova, Padova, Italy
- Padova Neuroscience Center, University of Padova, Padova, Italy
7
Denault V, Zloteanu M. Darwin's illegitimate children: How body language experts undermine Darwin's legacy. Evol Hum Sci 2022; 4:e53. doi: 10.1017/ehs.2022.50. PMID: 37588916; PMCID: PMC10426054.
Abstract
The Expression of the Emotions in Man and Animals has received and continues to receive much attention from emotion researchers and behavioural scientists. However, the common misconception that Darwin advocated for the universality of emotional reactions has led to a host of unfounded and discredited claims promoted by 'body language experts' on both traditional and social media. These 'experts' receive unparalleled public attention. Thus, rather than being presented with empirically supported findings on non-verbal behaviour, the public is exposed to 'body language analysis' of celebrities, politicians and defendants in criminal trials. In this perspective piece, we address the misinformation surrounding non-verbal behaviour. We also discuss the nature and scope of statements from body language experts, unpacking the claims of the most viewed YouTube video by a body language expert, comparing these claims with actual research findings, and giving specific attention to the implications for the justice system. We explain how body language experts use (and misuse) Darwin's legacy and conclude with a call for researchers to unite their voices and work towards stopping the spread of misinformation about non-verbal behaviour.
Affiliation(s)
- Vincent Denault
- Department of Educational and Counselling Psychology, McGill University, Canada
8
Introduction to the Special Issue of the Scientific Study of Laughter: Where We Have Been, Current Innovations, and Where We Might Go From Here. J Nonverbal Behav 2022. doi: 10.1007/s10919-022-00413-6.
9
Dawel A, Miller EJ, Horsburgh A, Ford P. A systematic survey of face stimuli used in psychological research 2000-2020. Behav Res Methods 2022; 54:1889-1901. doi: 10.3758/s13428-021-01705-3. PMID: 34731426.
Abstract
For decades, psychology has relied on highly standardized images to understand how people respond to faces. Many of these stimuli are rigorously generated and supported by excellent normative data; as such, they have played an important role in the development of face science. However, there is now clear evidence that testing with ambient images (i.e., naturalistic images "in the wild") and including expressions that are spontaneous can lead to new and important insights. To precisely quantify the extent to which our current knowledge base has relied on standardized and posed stimuli, we systematically surveyed the face stimuli used in 12 key journals in this field across 2000-2020 (N = 3374 articles). Although a small number of posed expression databases continue to dominate the literature, the use of spontaneous expressions seems to be increasing. However, there has been no increase in the use of ambient or dynamic stimuli over time. The vast majority of articles have used highly standardized and nonmoving pictures of faces. An emerging trend is that virtual faces are being used as stand-ins for human faces in research. Overall, the results of the present survey highlight that there has been a significant imbalance in favor of standardized face stimuli. We argue that psychology would benefit from a more balanced approach because ambient and spontaneous stimuli have much to offer. We advocate a cognitive ethological approach that involves studying face processing in natural settings as well as the lab, incorporating more stimuli from "the wild".
Affiliation(s)
- Amy Dawel
- Research School of Psychology (building 39), The Australian National University, Canberra, ACT 2600, Australia
- Elizabeth J Miller
- Research School of Psychology (building 39), The Australian National University, Canberra, ACT 2600, Australia
- Annabel Horsburgh
- Research School of Psychology (building 39), The Australian National University, Canberra, ACT 2600, Australia
- Patrice Ford
- Research School of Psychology (building 39), The Australian National University, Canberra, ACT 2600, Australia
10
The spatio-temporal features of perceived-as-genuine and deliberate expressions. PLoS One 2022; 17:e0271047. doi: 10.1371/journal.pone.0271047. PMID: 35839208; PMCID: PMC9286247.
Abstract
Reading the genuineness of facial expressions is important for assessing the credibility of information conveyed by faces. However, it remains unclear which spatio-temporal characteristics of facial movements serve as critical cues to the perceived genuineness of facial expressions. This study focused on observable spatio-temporal differences between perceived-as-genuine and deliberate expressions of happiness and anger. In this experiment, 89 Japanese participants were asked to judge the genuineness of faces in videos showing happiness or anger expressions. To identify diagnostic facial cues to the perceived genuineness of the facial expressions, we analyzed a total of 128 face videos using an automated facial action detection system: moment-to-moment activations of facial action units were annotated, and non-negative matrix factorization extracted sparse, meaningful components from the action unit data. The results showed that genuineness judgments decreased when more spatial patterns were observed in facial expressions. As for temporal features, the perceived-as-deliberate expressions of happiness generally had faster onsets to the peak than the perceived-as-genuine expressions of happiness. Moreover, opening the mouth negatively contributed to perceived-as-genuine expressions, irrespective of the type of facial expression. These findings provide the first evidence for dynamic facial cues to the perceived genuineness of happiness and anger expressions.
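The decomposition step described here (non-negative matrix factorization over frame-by-frame action-unit activations) can be sketched as follows. The data are random stand-ins and the component count is an assumption; the study analyzed 128 real videos.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)

# Synthetic stand-in: frame-by-frame activations of 17 action units for
# one face video (90 frames); real detector output is not reproduced here.
activations = rng.random((90, 17))

# NMF decomposes the AU time series into a small set of additive factors:
# temporal weights (frames x components) and spatial patterns over AUs
# (components x AUs). Non-negativity keeps each factor interpretable as a
# facial-movement pattern rather than a signed contrast.
nmf = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=1)
temporal_weights = nmf.fit_transform(activations)
spatial_patterns = nmf.components_

print(temporal_weights.shape, spatial_patterns.shape)
```

On real data, each spatial pattern groups action units that activate together, and its temporal weights show when that pattern appears during the expression.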
11
Namba S, Sato W, Matsui H. Spatio-Temporal Properties of Amused, Embarrassed, and Pained Smiles. J Nonverbal Behav 2022. doi: 10.1007/s10919-022-00404-7.
Abstract
Smiles are universal but nuanced facial expressions that are most frequently used in face-to-face communication, typically indicating amusement but sometimes conveying negative emotions such as embarrassment and pain. Although previous studies have suggested that spatial and temporal properties could differ among these various types of smiles, no study has thoroughly analyzed these properties. This study aimed to clarify the spatio-temporal properties of smiles conveying amusement, embarrassment, and pain using a spontaneous facial behavior database. The results regarding spatial patterns revealed that pained smiles showed less eye constriction and more overall facial tension than amused smiles; no spatial differences were identified between embarrassed and amused smiles. Regarding temporal properties, embarrassed and pained smiles remained in a state of higher facial tension than amused smiles. Moreover, embarrassed smiles showed a more gradual change from tension states to the smile state than amused smiles, and pained smiles had lower probabilities of staying in or transitioning to the smile state compared with amused smiles. By comparing the spatio-temporal properties of these three smile types, this study revealed that the probability of transitioning between discrete states could help distinguish amused, embarrassed, and pained smiles.
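The key quantity here, the probability of transitioning between discrete facial states, can be estimated from a state-labelled frame sequence as below. The state coding and example sequence are invented for illustration.

```python
import numpy as np

# Hypothetical frame-by-frame facial-state labels for one smile video:
# 0 = neutral, 1 = facial tension, 2 = smile (coding is illustrative).
states = [0, 0, 1, 1, 1, 2, 2, 2, 2, 1, 2, 2]

# Count observed transitions between consecutive frames, then normalize
# each row to obtain transition probabilities between discrete states.
n_states = 3
counts = np.zeros((n_states, n_states))
for prev, nxt in zip(states[:-1], states[1:]):
    counts[prev, nxt] += 1
row_sums = counts.sum(axis=1, keepdims=True)
transitions = np.divide(counts, row_sums,
                        out=np.zeros_like(counts), where=row_sums > 0)
print(transitions)
```

Row i, column j of `transitions` gives the probability of moving from state i to state j; comparing these matrices across smile types is the comparison the paper reports.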
12
Kim HN, Sutharson SJ. Individual differences in spontaneous facial expressions in people with visual impairment and blindness. Br J Vis Impair 2022. doi: 10.1177/02646196211070927.
Abstract
People display their spontaneous and voluntary emotions via facial expressions, which play a critical role in social interactions. However, less is known about the mechanisms of spontaneous emotion expression, especially in adults with visual impairment and blindness. Nineteen adults with visual impairment and blindness participated in interviews during which their spontaneous facial expressions were observed and analyzed via the Facial Action Coding System (FACS). We found a set of Action Units that were primarily engaged in expressing the spontaneous emotions and that appeared to be affected by participants' different characteristics. The results of this study could serve as evidence that adults with visual impairment and blindness show individual differences in their spontaneous facial expressions of emotions.
13
Changes in Computer-Analyzed Facial Expressions with Age. Sensors 2021; 21:4858. doi: 10.3390/s21144858. PMID: 34300600; PMCID: PMC8309819.
Abstract
Facial expressions are well known to change with age, but the quantitative properties of facial aging remain unclear. In the present study, we investigated differences in the intensity of facial expressions between older (n = 56) and younger (n = 113) adults. In laboratory experiments, the participants' posed facial expressions were elicited using six basic emotion and neutral facial expression stimuli, and the intensities of their facial movements were analyzed using a computer vision tool, the OpenFace software. Our results showed that the older adults produced stronger expressions for some negative emotions and neutral faces. Furthermore, when making facial expressions, older adults used more facial muscles than younger adults across the emotions. These results may help in understanding the characteristics of facial expressions in aging and can provide empirical evidence for other fields involving facial recognition.
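A group comparison of expression intensity like the one reported here could be tested as sketched below. The data are invented, and Welch's t-test is one conventional choice for unequal group sizes, not necessarily the authors' exact analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Synthetic stand-in: a per-participant expression-intensity measure
# (OpenFace-style AU intensity units) for older (n = 56) and younger
# (n = 113) groups; group means are chosen for illustration only.
older = rng.normal(2.6, 0.7, 56)
younger = rng.normal(2.1, 0.7, 113)

# Welch's t-test compares the group means without assuming equal
# variances, which suits unequal sample sizes.
t_stat, p_value = stats.ttest_ind(older, younger, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```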
14
Motion Increases Recognition of Naturalistic Postures but not Facial Expressions. J Nonverbal Behav 2021. doi: 10.1007/s10919-021-00372-4.
15
Namba S, Sato W, Osumi M, Shimokawa K. Assessing Automated Facial Action Unit Detection Systems for Analyzing Cross-Domain Facial Expression Databases. Sensors (Basel) 2021; 21:4222. doi: 10.3390/s21124222. PMID: 34203007; PMCID: PMC8235167.
Abstract
In the field of affective computing, accurate automatic detection of facial movements is an important goal, and great progress has already been made. However, a systematic evaluation of such systems on dynamic facial databases remains an unmet need. This study compared the performance of three systems (FaceReader, OpenFace, AFARtoolbox) that detect facial movements corresponding to action units (AUs) derived from the Facial Action Coding System. All three systems detected the presence of AUs in the dynamic facial database at above-chance levels. Moreover, OpenFace and AFAR provided higher area-under-the-receiver-operating-characteristic-curve values than FaceReader. In addition, several confusion biases between facial components (e.g., AU12 and AU14) were observed for each automated AU detection system, and static mode was superior to dynamic mode for analyzing the posed facial database. These findings characterize the prediction patterns of each system and provide guidance for research on facial expressions.
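The evaluation metric used in this comparison, area under the ROC curve for AU presence detection, can be computed as below. Ground truth and detector scores are random stand-ins; the two "detectors" are hypothetical, not the systems named in the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# Synthetic stand-in: manual FACS coding (ground truth) of one action
# unit across 200 frames, plus continuous scores from two hypothetical
# automated detectors.
truth = rng.integers(0, 2, size=200)
detector_a = truth * 0.5 + rng.random(200) * 0.8   # informative detector
detector_b = rng.random(200)                       # chance-level detector

# AUC summarizes detection performance over all possible thresholds,
# so detectors with different output scales remain comparable.
auc_a = roc_auc_score(truth, detector_a)
auc_b = roc_auc_score(truth, detector_b)
print(f"detector A AUC = {auc_a:.2f}, detector B AUC = {auc_b:.2f}")
```

An AUC of 0.5 corresponds to chance; repeating this per AU and per system yields the kind of comparison table the study reports.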
Affiliation(s)
- Shushi Namba
- Psychological Process Team, BZP, Robotics Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 6190288, Japan
- Wataru Sato
- Psychological Process Team, BZP, Robotics Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 6190288, Japan
- Masaki Osumi
- KOHINATA Limited Liability Company, 2-7-3, Tateba, Naniwa-ku, Osaka 5560020, Japan
- Koh Shimokawa
- KOHINATA Limited Liability Company, 2-7-3, Tateba, Naniwa-ku, Osaka 5560020, Japan
16
Abstract
With a shift in interest toward dynamic expressions, numerous corpora of dynamic facial stimuli have been developed over the past two decades. The present research aimed to test existing sets of dynamic facial expressions (published between 2000 and 2015) in a cross-corpus validation effort. For this, 14 dynamic databases were selected that featured facial expressions of the six basic emotions (anger, disgust, fear, happiness, sadness, surprise) in posed or spontaneous form. In Study 1, a subset of stimuli from each database (N = 162) was presented to human observers and machine analysis, yielding considerable variance in emotion recognition performance across the databases. Classification accuracy further varied with the perceived intensity and naturalness of the displays, with posed expressions being judged more accurately and as more intense, but less natural, than spontaneous ones. Study 2 aimed for a full validation of the 14 databases by subjecting the entire stimulus set (N = 3812) to machine analysis. A FACS-based action unit (AU) analysis revealed that facial AU configurations were more prototypical in posed than in spontaneous expressions. The prototypicality of an expression in turn predicted emotion classification accuracy, with higher performance observed for more prototypical facial behavior. Furthermore, technical features of each database (i.e., duration, face box size, head rotation, and motion) had a significant impact on recognition accuracy. Together, the findings suggest that existing databases vary in their ability to signal specific emotions, facing a trade-off between realism and ecological validity on the one hand, and expression uniformity and comparability on the other.
|
17
|
Namba S, Matsui H, Zloteanu M. Distinct temporal features of genuine and deliberate facial expressions of surprise. Sci Rep 2021; 11:3362. [PMID: 33564091 PMCID: PMC7873236 DOI: 10.1038/s41598-021-83077-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2020] [Accepted: 01/28/2021] [Indexed: 01/30/2023] Open
Abstract
The physical properties of genuine and deliberate facial expressions remain elusive. This study focuses on observable dynamic differences between genuine and deliberate expressions of surprise based on the temporal structure of facial parts during emotional expression. Facial expressions of surprise were elicited using multiple methods and video recorded: senders were filmed as they experienced genuine surprise in response to a jack-in-the-box (Genuine), other senders were asked to produce deliberate surprise with no preparation (Improvised), by mimicking the expression of another (External), or by reproducing the surprised face after having first experienced genuine surprise (Rehearsed). A total of 127 videos were analyzed, and moment-to-moment movements of eyelids and eyebrows were annotated with deep learning-based tracking software. Results showed that all surprise displays were mainly composed of raising movements of the eyebrows and eyelids. Genuine displays included horizontal movement in the left part of the face, but also showed the weakest movement coupling of all conditions. External displays had faster eyebrow and eyelid movement, while Improvised displays showed the strongest coupling of movements. The findings demonstrate the importance of dynamic information in the encoding of genuine and deliberate expressions of surprise and the importance of the production method employed in research.
Affiliation(s)
- Shushi Namba
- Psychological Process Team, BZP, Robotics Project, RIKEN, Kyoto, 6190288, Japan.
| | - Hiroshi Matsui
- Center for Human-Nature, Artificial Intelligence, and Neuroscience, Hokkaido University, Hokkaido, 0600808, Japan
| | - Mircea Zloteanu
- Department of Criminology and Sociology, Kingston University London, Kingston Upon Thames, KT1 2EE, UK
| |
|
18
|
Zloteanu M, Krumhuber EG. Expression Authenticity: The Role of Genuine and Deliberate Displays in Emotion Perception. Front Psychol 2021; 11:611248. [PMID: 33519624 PMCID: PMC7840656 DOI: 10.3389/fpsyg.2020.611248] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2020] [Accepted: 12/21/2020] [Indexed: 11/13/2022] Open
Abstract
People dedicate significant attention to others' facial expressions and to deciphering their meaning. Hence, knowing whether such expressions are genuine or deliberate is important. Early research proposed that authenticity could be discerned based on reliable facial muscle activations unique to genuine emotional experiences that are impossible to produce voluntarily. With an increasing body of research, such claims may no longer hold up to empirical scrutiny. In this article, expression authenticity is considered within the context of senders' ability to produce convincing facial displays that resemble genuine affect and human decoders' judgments of expression authenticity. This includes a discussion of spontaneous vs. posed expressions, as well as appearance- vs. elicitation-based approaches for defining emotion recognition accuracy. We further expand on the functional role of facial displays as neurophysiological states and communicative signals, thereby drawing upon the encoding-decoding and affect-induction perspectives of emotion expressions. Theoretical and methodological issues are addressed with the aim to instigate greater conceptual and operational clarity in future investigations of expression authenticity.
Affiliation(s)
- Mircea Zloteanu
- Department of Criminology and Sociology, Kingston University London, Kingston, United Kingdom
- Department of Psychology, Kingston University London, Kingston, United Kingdom
| | - Eva G Krumhuber
- Department of Experimental Psychology, University College London, London, United Kingdom
| |
|
19
|
How to Detect Altruists: Experiments Using a Zero-Acquaintance Video Presentation Paradigm. JOURNAL OF NONVERBAL BEHAVIOR 2021. [DOI: 10.1007/s10919-020-00352-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
In this study, we investigated the cognitive processes and nonverbal cues used to detect altruism in three experiments based on a zero-acquaintance video presentation paradigm. Cognitive mechanisms of altruism detection are thought to have evolved in humans to prevent subtle cheating. Several studies have demonstrated that people can correctly estimate levels of altruism in others. In this study, we asked participants to distinguish altruists from non-altruists in video clips using the Faith game. Participants decided whether they could trust allocation of money to the targets who were videotaped while talking to the experimenter. In our first experiment, we asked the participants to play the Faith game under cognitive load. The accuracy of altruism detection was not reduced when participants simultaneously performed a cognitive task, suggesting that altruist detection is rapid and effortless. In the second experiment, we investigated the effects of affective status on the accuracy of altruism detection. Compared with participants in a positive mood, those in a negative mood were more hesitant to trust videotaped targets. However, the accuracy with which altruism levels were detected did not change when we manipulated participants’ moods. In the third experiment, we investigated the facial cues by which participants detected altruists. Participants could not detect altruists when the upper half of the target’s face was hidden, suggesting that judgment cues exist around the eyes. We also conducted a meta-analysis on the effect size in each experimental condition to verify the robustness of altruism detection ability.
|
20
|
Zloteanu M, Krumhuber EG, Richardson DC. Acting Surprised: Comparing Perceptions of Different Dynamic Deliberate Expressions. JOURNAL OF NONVERBAL BEHAVIOR 2020. [DOI: 10.1007/s10919-020-00349-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
People are accurate at classifying emotions from facial expressions but much poorer at determining if such expressions are spontaneously felt or deliberately posed. We explored if the method used by senders to produce an expression influences the decoder’s ability to discriminate authenticity, drawing inspiration from two well-known acting techniques: the Stanislavski (internal) and Mimic method (external). We compared spontaneous surprise expressions in response to a jack-in-the-box (genuine condition), to posed displays of senders who either focused on their past affective state (internal condition) or the outward expression (external condition). Although decoders performed better than chance at discriminating the authenticity of all expressions, their accuracy was lower in classifying external surprise compared to internal surprise. Decoders also found it harder to discriminate external surprise from spontaneous surprise and were less confident in their decisions, perceiving these to be similarly intense but less genuine-looking. The findings suggest that senders are capable of voluntarily producing genuine-looking expressions of emotions with minimal effort, especially by mimicking a genuine expression. Implications for research on emotion recognition are discussed.
|
21
|
Namba S, Kambara T. Semantics Based on the Physical Characteristics of Facial Expressions Used to Produce Japanese Vowels. Behav Sci (Basel) 2020; 10:E157. [PMID: 33066229 PMCID: PMC7602070 DOI: 10.3390/bs10100157] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2020] [Revised: 10/02/2020] [Accepted: 10/12/2020] [Indexed: 12/18/2022] Open
Abstract
Previous studies have reported that verbal sounds are associated, non-arbitrarily, with specific meanings (e.g., sound symbolism and onomatopoeia), including visual forms of information such as facial expressions; however, it remains unclear how mouth shapes used to utter each vowel create our semantic impressions. We asked 81 Japanese participants to evaluate mouth shapes associated with five Japanese vowels by using 10 five-item semantic differential scales. The results reveal that the physical characteristics of the facial expressions (mouth shapes) induced specific evaluations. For example, the mouth shape made to voice the vowel "a" was the one with the biggest, widest, and highest facial components compared to other mouth shapes, and people perceived words containing that vowel sound as bigger. The mouth shapes used to pronounce the vowel "i" were perceived as more likable than the other four vowels. These findings indicate that the mouth shapes producing vowels imply specific meanings. Our study provides clues about the meaning of verbal sounds and what the facial expressions in communication represent to the perceiver.
Affiliation(s)
- Shushi Namba
- Psychological Process Team, BZP, Robotics Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 6190288, Japan;
| | - Toshimune Kambara
- Department of Psychology, Graduate School of Education, Hiroshima University, 1-1-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 7398524, Japan
| |
|
22
|
Namba S, Rychlowska M, Orlowska A, Aviezer H, Krumhuber EG. Social context and culture influence judgments of non-Duchenne smiles. JOURNAL OF CULTURAL COGNITIVE SCIENCE 2020. [DOI: 10.1007/s41809-020-00066-1] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
Extant evidence points toward the role of contextual information and related cross-cultural variations in emotion perception, but most of the work to date has focused on judgments of basic emotions. The current research examines how culture and situational context affect the interpretation of emotion displays, i.e. judgments of the extent to which ambiguous smiles communicate happiness versus polite intentions. We hypothesized that smiles associated with contexts implying happiness would be judged as conveying more positive feelings compared to smiles paired with contexts implying politeness or smiles presented without context. In line with existing research on cross-cultural variation in contextual influences, we also expected these effects to be larger in Japan than in the UK. In Study 1, British participants viewed non-Duchenne smiles presented on their own or paired with background scenes implying happiness or the need to be polite. Compared to face-only stimuli, happy contexts made smiles appear more genuine, whereas polite contexts led smiles to be seen as less genuine. Study 2 replicated this result using verbal vignettes, showing a similar pattern of contextual effects among British and Japanese participants. However, while the effects of vignettes describing happy situations were comparable in both cultures, the influence of vignettes describing polite situations was stronger in Japan than in the UK. Together, the findings document the importance of context information in judging smile expressions and highlight the need to investigate how culture moderates such influences.
|
23
|
Wang S, Zheng Z, Yin S, Yang J, Ji Q. A Novel Dynamic Model Capturing Spatial and Temporal Patterns for Facial Expression Analysis. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2020; 42:2082-2095. [PMID: 30998459 DOI: 10.1109/tpami.2019.2911937] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Facial expression analysis could be greatly improved by incorporating spatial and temporal patterns present in facial behavior, but the patterns have not yet been utilized to their full advantage. We remedy this via a novel dynamic model, an interval temporal restricted Boltzmann machine (IT-RBM), that is able to capture both universal spatial patterns and complicated temporal patterns in facial behavior for facial expression analysis. We regard a facial expression as a multifarious activity composed of sequential or overlapping primitive facial events. Allen's interval algebra is implemented to portray these complicated temporal patterns via a two-layer Bayesian network. The nodes in the uppermost layer are representative of the primitive facial events, and the nodes in the lower layer depict the temporal relationships between those events. Our model also captures inherent universal spatial patterns via a multi-value restricted Boltzmann machine in which the visible nodes are facial events, and the connections between hidden and visible nodes model intrinsic spatial patterns. Efficient learning and inference algorithms are proposed. Experiments on posed and spontaneous expression distinction and expression recognition demonstrate that our proposed IT-RBM achieves superior performance compared to state-of-the-art research due to its ability to incorporate these facial behavior patterns.
|
24
|
Barrett LF, Adolphs R, Marsella S, Martinez A, Pollak SD. Emotional Expressions Reconsidered: Challenges to Inferring Emotion From Human Facial Movements. Psychol Sci Public Interest 2019; 20:1-68. [PMID: 31313636 PMCID: PMC6640856 DOI: 10.1177/1529100619832930] [Citation(s) in RCA: 382] [Impact Index Per Article: 76.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
It is commonly assumed that a person's emotional state can be readily inferred from his or her facial movements, typically called emotional expressions or facial expressions. This assumption influences legal judgments, policy decisions, national security protocols, and educational practices; guides the diagnosis and treatment of psychiatric illness, as well as the development of commercial applications; and pervades everyday social interactions as well as research in other scientific fields such as artificial intelligence, neuroscience, and computer vision. In this article, we survey examples of this widespread assumption, which we refer to as the common view, and we then examine the scientific evidence that tests this view, focusing on the six most popular emotion categories used by consumers of emotion research: anger, disgust, fear, happiness, sadness, and surprise. The available scientific evidence suggests that people do sometimes smile when happy, frown when sad, scowl when angry, and so on, as proposed by the common view, more than what would be expected by chance. Yet how people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation. Furthermore, similar configurations of facial movements variably express instances of more than one emotion category. In fact, a given configuration of facial movements, such as a scowl, often communicates something other than an emotional state. Scientists agree that facial movements convey a range of information and are important for social communication, emotional or otherwise. But our review suggests an urgent need for research that examines how people actually move their faces to express emotions and other social information in the variety of contexts that make up everyday life, as well as careful study of the mechanisms by which people perceive instances of emotion in one another. 
We make specific research recommendations that will yield a more valid picture of how people move their faces to express emotions and how they infer emotional meaning from facial movements in situations of everyday life. This research is crucial to provide consumers of emotion research with the translational information they require.
Affiliation(s)
- Lisa Feldman Barrett
- Northeastern University, Department of Psychology, Boston, MA
- Massachusetts General Hospital, Department of Psychiatry and the Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA
- Harvard Medical School, Department of Psychiatry, Boston MA
| | - Ralph Adolphs
- California Institute of Technology, Departments of Psychology, Neuroscience, and Biology, Pasadena, CA
| | - Stacy Marsella
- Northeastern University, Department of Psychology, Boston, MA
- Northeastern University, College of Computer and Information Science, Boston, MA
- University of Glasgow, Glasgow, Scotland
| | - Aleix Martinez
- The Ohio State University, Department of Electrical and Computer Engineering, and Center for Cognitive and Brain Sciences, Columbus, OH
| | - Seth D. Pollak
- University of Wisconsin - Madison, Department of Psychology, Madison, WI
| |
|
25
|
Automatic Recognition of Posed Facial Expression of Emotion in Individuals with Autism Spectrum Disorder. J Autism Dev Disord 2019; 49:279-293. [PMID: 30298462 DOI: 10.1007/s10803-018-3757-9] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
Abstract
Facial expression is impaired in autism spectrum disorder (ASD), but rarely systematically studied. We focus on the ability of individuals with ASD to produce facial expressions of emotions in response to a verbal prompt. We used the Janssen Autism Knowledge Engine (JAKE®), including automated facial expression analysis software (FACET) to measure facial expressions in individuals with ASD (n = 144) and a typically developing (TD) comparison group (n = 41). Differences in ability to produce facial expressions were observed between ASD and TD groups, demonstrated by activation of facial action units (happy, scared, surprised, disgusted, but not angry or sad). Activation of facial action units correlated with parent-reported social communication skills. This approach has potential for diagnostic and response to intervention measures. Trial Registration: NCT02299700.
|
26
|
Zloteanu M, Krumhuber EG, Richardson DC. Detecting Genuine and Deliberate Displays of Surprise in Static and Dynamic Faces. Front Psychol 2018; 9:1184. [PMID: 30042717 PMCID: PMC6048358 DOI: 10.3389/fpsyg.2018.01184] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2018] [Accepted: 06/19/2018] [Indexed: 11/13/2022] Open
Abstract
People are good at recognizing emotions from facial expressions, but less accurate at determining the authenticity of such expressions. We investigated whether this depends upon the technique that senders use to produce deliberate expressions, and on decoders seeing these in a dynamic or static format. Senders were filmed as they experienced genuine surprise in response to a jack-in-the-box (Genuine). Other senders faked surprise with no preparation (Improvised) or after having first experienced genuine surprise themselves (Rehearsed). Decoders rated the genuineness and intensity of these expressions, and the confidence of their judgment. It was found that both expression type and presentation format impacted decoder perception and accurate discrimination. Genuine surprise achieved the highest ratings of genuineness, intensity, and judgmental confidence (dynamic only), and was fairly accurately discriminated from deliberate surprise expressions. In line with our predictions, Rehearsed expressions were perceived as more genuine (in dynamic presentation), whereas Improvised were seen as more intense (in static presentation). However, both were poorly discriminated as not being genuine. In general, dynamic stimuli improved authenticity discrimination accuracy and perceptual differences between expressions. While decoders could perceive subtle differences between different expressions (especially from dynamic displays), they were not adept at detecting if these were genuine or deliberate. We argue that senders are capable of producing genuine-looking expressions of surprise, enough to fool others as to their veracity.
Affiliation(s)
- Mircea Zloteanu
- Department of Computer Science, University College London, London, United Kingdom
- Department of Experimental Psychology, University College London, London, United Kingdom
| | - Eva G Krumhuber
- Department of Experimental Psychology, University College London, London, United Kingdom
| | - Daniel C Richardson
- Department of Experimental Psychology, University College London, London, United Kingdom
| |
|
27
|
Zane E, Neumeyer K, Mertens J, Chugg A, Grossman RB. I Think We're Alone Now: Solitary Social Behaviors in Adolescents with Autism Spectrum Disorder. JOURNAL OF ABNORMAL CHILD PSYCHOLOGY 2018; 46:1111-1120. [PMID: 28993938 PMCID: PMC5893442 DOI: 10.1007/s10802-017-0351-0] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Research into emotional responsiveness in Autism Spectrum Disorder (ASD) has yielded mixed findings. Some studies report uniform, flat and emotionless expressions in ASD; others describe highly variable expressions that are as or even more intense than those of typically developing (TD) individuals. Variability in findings is likely due to differences in study design: some studies have examined posed (i.e., not spontaneous) expressions and others have examined spontaneous expressions in social contexts, during which individuals with ASD, by nature of the disorder, are likely to behave differently than their TD peers. To determine whether (and how) spontaneous facial expressions and other emotional responses are different from TD individuals, we video-recorded the spontaneous responses of children and adolescents with and without ASD (between the ages of 10 and 17 years) as they watched emotionally evocative videos in a non-social context. Researchers coded facial expressions for intensity, and noted the presence of laughter and other responsive vocalizations. Adolescents with ASD displayed more intense, frequent and varied spontaneous facial expressions than their TD peers. They also produced significantly more emotional vocalizations, including laughter. Individuals with ASD may display their emotions more frequently and more intensely than TD individuals when they are unencumbered by social pressure. Differences in the interpretation of the social setting and/or understanding of emotional display rules may also contribute to differences in emotional behaviors between groups.
Affiliation(s)
- Emily Zane
- FACE Lab at Emerson College, 8 Park Plaza, Rm. 225, Boston, MA, 02116, USA.
| | - Kayla Neumeyer
- FACE Lab at Emerson College, 8 Park Plaza, Rm. 225, Boston, MA, 02116, USA
| | - Julia Mertens
- FACE Lab at Emerson College, 8 Park Plaza, Rm. 225, Boston, MA, 02116, USA
| | - Amanda Chugg
- FACE Lab at Emerson College, 8 Park Plaza, Rm. 225, Boston, MA, 02116, USA
| | - Ruth B Grossman
- FACE Lab at Emerson College, 8 Park Plaza, Rm. 225, Boston, MA, 02116, USA
- Communication Sciences and Disorders at Emerson College, 120 Boylston Street, Boston, MA, 02116, USA
- UMMS Shriver Center, UBank, Rm. 803, Boston, MA, 02116, USA
| |
|
28
|
Namba S, Kabir RS, Miyatani M, Nakao T. Dynamic Displays Enhance the Ability to Discriminate Genuine and Posed Facial Expressions of Emotion. Front Psychol 2018; 9:672. [PMID: 29896135 PMCID: PMC5987704 DOI: 10.3389/fpsyg.2018.00672] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2018] [Accepted: 04/18/2018] [Indexed: 11/13/2022] Open
Abstract
Accurately gauging the emotional experience of another person is important for navigating interpersonal interactions. This study investigated whether perceivers are capable of distinguishing between unintentionally expressed (genuine) and intentionally manipulated (posed) facial expressions attributed to four major emotions: amusement, disgust, sadness, and surprise. Sensitivity to this discrimination was explored by comparing unstaged dynamic and static facial stimuli and analyzing the results with signal detection theory. Participants indicated whether facial stimuli presented on a screen depicted a person showing a given emotion and whether that person was feeling a given emotion. The results showed that genuine displays were evaluated more as felt expressions than posed displays for all target emotions presented. In addition, sensitivity to the perception of emotional experience, or discriminability, was enhanced in dynamic facial displays, but was less pronounced in the case of static displays. This finding indicates that dynamic information in facial displays contributes to the ability to accurately infer the emotional experiences of another person.
Affiliation(s)
- Shushi Namba
- Graduate School of Education, Hiroshima University, Hiroshima, Japan
| | - Russell S Kabir
- Graduate School of Education, Hiroshima University, Hiroshima, Japan
| | - Makoto Miyatani
- Department of Psychology, Hiroshima University, Hiroshima, Japan
| | - Takashi Nakao
- Department of Psychology, Hiroshima University, Hiroshima, Japan
| |
|
29
|
Mortillaro M, Dukes D. Jumping for Joy: The Importance of the Body and of Dynamics in the Expression and Recognition of Positive Emotions. Front Psychol 2018; 9:763. [PMID: 29867704 PMCID: PMC5962906 DOI: 10.3389/fpsyg.2018.00763] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2018] [Accepted: 04/30/2018] [Indexed: 11/15/2022] Open
Abstract
The majority of research on emotion expression has focused on static facial prototypes of a few selected, mostly negative emotions. Implicitly, most researchers seem to have considered all positive emotions as sharing one common signal (namely, the smile), and consequently as being largely indistinguishable from each other in terms of expression. Recently, a new wave of studies has started to challenge the traditional assumption by considering the role of multiple modalities and the dynamics in the expression and recognition of positive emotions. Based on these recent studies, we suggest that positive emotions are better expressed and correctly perceived when (a) they are communicated simultaneously through the face and body and (b) perceivers have access to dynamic stimuli. Notably, we argue that this improvement is comparatively more important for positive emotions than for negative emotions. Our view is that the misperception of positive emotions has fewer immediate and potentially life-threatening consequences than the misperception of negative emotions; therefore, from an evolutionary perspective, there was only limited benefit in the development of clear, quick signals that allow observers to draw fine distinctions between them. Consequently, we suggest that the successful communication of positive emotions requires a stronger signal than that of negative emotions, and that this signal is provided by the use of the body and the way those movements unfold. We hope our contribution to this growing field provides a new direction and a theoretical grounding for the many lines of empirical research on the expression and recognition of positive emotions.
Affiliation(s)
- Marcello Mortillaro
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
| | - Daniel Dukes
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
- Psychology Research Institute, University of Amsterdam, Amsterdam, Netherlands
| |
|
30
|
Namba S, Kabir RS, Miyatani M, Nakao T. Spontaneous Facial Actions Map onto Emotional Experiences in a Non-social Context: Toward a Component-Based Approach. Front Psychol 2017; 8:633. [PMID: 28522979 PMCID: PMC5415601 DOI: 10.3389/fpsyg.2017.00633] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2017] [Accepted: 04/09/2017] [Indexed: 11/20/2022] Open
Abstract
While numerous studies have examined the relationships between facial actions and emotions, they have yet to account for the ways that specific spontaneous facial expressions map onto emotional experiences induced without expressive intent. Moreover, previous studies emphasized that a fine-grained investigation of facial components could establish the coherence of facial actions with actual internal states. Therefore, this study aimed to accumulate evidence for the correspondence between spontaneous facial components and emotional experiences. We reinvestigated data from previous research which secretly recorded spontaneous facial expressions of Japanese participants as they watched film clips designed to evoke four different target emotions: surprise, amusement, disgust, and sadness. The participants rated their emotional experiences via a self-reported questionnaire of 16 emotions. These spontaneous facial expressions were coded using the Facial Action Coding System, the gold standard for classifying visible facial movements. We corroborated each facial action that was present in the emotional experiences by applying stepwise regression models. The results found that spontaneous facial components occurred in ways that cohere to their evolutionary functions based on the rating values of emotional experiences (e.g., the inner brow raiser might be involved in the evaluation of novelty). This study provided new empirical evidence for the correspondence between each spontaneous facial component and first-person internal states of emotion as reported by the expresser.
Affiliation(s)
- Shushi Namba
- Graduate School of Education, Hiroshima University, Hiroshima, Japan
| | - Russell S Kabir
- Graduate School of Education, Hiroshima University, Hiroshima, Japan
| | - Makoto Miyatani
- Department of Psychology, Hiroshima University, Hiroshima, Japan
| | - Takashi Nakao
- Department of Psychology, Hiroshima University, Hiroshima, Japan
| |
|
31
|
Namba S, Kagamihara T, Miyatani M, Nakao T. Spontaneous Facial Expressions Reveal New Action Units for the Sad Experiences. JOURNAL OF NONVERBAL BEHAVIOR 2017. [DOI: 10.1007/s10919-017-0251-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|