1
von Eiff CI, Kauk J, Schweinberger SR. The Jena Audiovisual Stimuli of Morphed Emotional Pseudospeech (JAVMEPS): A database for emotional auditory-only, visual-only, and congruent and incongruent audiovisual voice and dynamic face stimuli with varying voice intensities. Behav Res Methods 2024; 56:5103-5115. PMID: 37821750; PMCID: PMC11289065; DOI: 10.3758/s13428-023-02249-4.
Abstract
We describe JAVMEPS, an audiovisual (AV) database for emotional voice and dynamic face stimuli, with voices varying in emotional intensity. JAVMEPS includes 2256 stimulus files comprising (A) recordings of 12 speakers, speaking four bisyllabic pseudowords with six naturalistically induced basic emotions plus neutral, in auditory-only, visual-only, and congruent AV conditions. It furthermore comprises (B) caricatures (140%), original voices (100%), and anti-caricatures (60%) for happy, fearful, angry, sad, disgusted, and surprised voices for eight speakers and two pseudowords. Crucially, JAVMEPS contains (C) precisely time-synchronized congruent and incongruent AV (and corresponding auditory-only) stimuli with two emotions (anger, surprise), (C1) with original intensity (ten speakers, four pseudowords) and (C2) with graded AV congruence (implemented via five voice morph levels, from caricatures to anti-caricatures; eight speakers, two pseudowords). We collected classification data for Stimulus Set A from 22 normal-hearing listeners and four cochlear implant (CI) users, for two pseudowords, in auditory-only, visual-only, and AV conditions. Normal-hearing individuals showed good classification performance (McorrAV = .59 to .92), with classification rates in the auditory-only condition ≥ .38 correct (surprise: .67, anger: .51). Despite compromised vocal emotion perception, CI users performed above the chance level of .14 for auditory-only stimuli, with the best rates for surprise (.31) and anger (.30). We anticipate that JAVMEPS will become a useful open resource for researchers studying auditory emotion perception, especially when adaptive testing or calibration of task difficulty is desirable. With its time-synchronized congruent and incongruent stimuli, JAVMEPS can also help fill a gap in research on the dynamic audiovisual integration of emotion perception via behavioral or neurophysiological recordings.
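The 60%/100%/140% levels in Stimulus Set B amount to interpolating toward, and extrapolating beyond, an emotional target along a neutral-to-emotion trajectory. A minimal numpy sketch of that logic with made-up feature values (the database itself was produced with dedicated voice-morphing software, not this two-feature formula):

```python
import numpy as np

def morph_voice_features(neutral, emotional, level):
    """Linear interpolation/extrapolation between acoustic feature vectors.

    level = 0.6 -> anti-caricature (60% emotional intensity)
    level = 1.0 -> original emotional voice (100%)
    level = 1.4 -> caricature (140%, extrapolated beyond the original)
    """
    neutral, emotional = np.asarray(neutral), np.asarray(emotional)
    return neutral + level * (emotional - neutral)

# Toy example: two hypothetical features (e.g., mean F0 in Hz, intensity in dB).
neutral_voice = [120.0, 60.0]
angry_voice = [180.0, 75.0]
for level in (0.6, 1.0, 1.4):
    print(level, morph_voice_features(neutral_voice, angry_voice, level))
```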
Affiliation(s)
- Celina I von Eiff
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, Am Steiger 3, 07743, Jena, Germany.
- Voice Research Unit, Institute of Psychology, Friedrich Schiller University Jena, Leutragraben 1, 07743, Jena, Germany.
- DFG SPP 2392 Visual Communication (ViCom), Frankfurt am Main, Germany.
- Jena University Hospital, 07747, Jena, Germany.
- Julian Kauk
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, Am Steiger 3, 07743, Jena, Germany.
- Stefan R Schweinberger
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, Am Steiger 3, 07743, Jena, Germany.
- Voice Research Unit, Institute of Psychology, Friedrich Schiller University Jena, Leutragraben 1, 07743, Jena, Germany.
- DFG SPP 2392 Visual Communication (ViCom), Frankfurt am Main, Germany.
- Jena University Hospital, 07747, Jena, Germany.
2
Chang CH, Drobotenko N, Ruocco AC, Lee ACH, Nestor A. Perception and memory-based representations of facial emotions: Associations with personality functioning, affective states and recognition abilities. Cognition 2024; 245:105724. PMID: 38266352; DOI: 10.1016/j.cognition.2024.105724.
Abstract
Personality traits and affective states are associated with biases in facial emotion perception. However, the precise personality impairments and affective states that underlie these biases remain largely unknown. To investigate how relevant factors influence facial emotion perception and recollection, Experiment 1 employed an image reconstruction approach in which community-dwelling adults (N = 89) rated the similarity of pairs of facial expressions, including those recalled from memory. Subsequently, perception- and memory-based expression representations derived from such ratings were assessed across participants and related to measures of personality impairment, state affect, and visual recognition abilities. Impairment in self-direction and level of positive affect accounted for the largest components of individual variability in perception and memory representations, respectively. Additionally, individual differences in these representations were impacted by face recognition ability. In Experiment 2, adult participants (N = 81) rated facial image reconstructions derived in Experiment 1, revealing that individual variability was associated with specific visual face properties, such as expressiveness, representation accuracy, and positivity/negativity. These findings highlight and clarify the influence of personality, affective state, and recognition abilities on individual differences in the perception and recollection of facial expressions.
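A common way to turn such pairwise similarity ratings into spatial expression representations is multidimensional scaling of the averaged dissimilarities; per-participant embeddings can then be related to personality, affect, or recognition-ability measures. A hedged sketch with a toy matrix (the paper's exact reconstruction pipeline may differ):

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical mean dissimilarity matrix for four expressions
# (e.g., 1 - normalized similarity rating, averaged over trials).
dissimilarity = np.array([
    [0.0, 0.3, 0.8, 0.7],
    [0.3, 0.0, 0.7, 0.6],
    [0.8, 0.7, 0.0, 0.4],
    [0.7, 0.6, 0.4, 0.0],
])

# Embed the four expressions in a 2D "expression space".
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
print(coords)
```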
Affiliation(s)
- Chi-Hsun Chang
- Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada
- Natalia Drobotenko
- Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada
- Anthony C Ruocco
- Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada; Department of Psychological Clinical Science at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada
- Andy C H Lee
- Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada; Rotman Research Institute, Baycrest Centre, 3560 Bathurst St, North York, Ontario M6A 2E1, Canada
- Adrian Nestor
- Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada.
3
Daşdemir Y. Classification of Emotional and Immersive Outcomes in the Context of Virtual Reality Scene Interactions. Diagnostics (Basel) 2023; 13:3437. PMID: 37998573; PMCID: PMC10670519; DOI: 10.3390/diagnostics13223437.
Abstract
The constantly evolving technological landscape of the Metaverse has introduced a significant concern: cybersickness (CS). There is growing academic interest in detecting and mitigating these adverse effects within virtual environments (VEs), but the development of effective methodologies has been hindered by the lack of sufficient benchmark datasets. To address this gap, we compiled a comprehensive dataset by analyzing the impact of virtual reality (VR) environments on CS, immersion levels, and EEG-based emotion estimation. Our dataset encompasses both implicit and explicit measurements: implicit measurements are based on brain signals, while explicit measurements are based on participant questionnaires. These measurements were used to quantify the cybersickness participants experienced in VEs. Using statistical methods, we conducted a comparative analysis of CS levels in VEs tailored for specific tasks and of their immersion factors. Our findings revealed statistically significant differences between VEs, highlighting crucial factors influencing participant engagement, engrossment, and immersion. Additionally, our study achieved a classification performance of 96.25% in distinguishing brain oscillations associated with VR scenes using the multi-instance learning method, and 95.63% in predicting emotions within the valence-arousal space with four labels. The dataset holds great promise for objectively evaluating CS in VR contexts, differentiating between VEs, and providing valuable insights for future research.
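Four-class prediction in valence-arousal space conventionally means quadrant coding of the two rating axes. A sketch of that convention (the 1-9 scale, midpoint, and label names are assumptions, not taken from the paper):

```python
def quadrant_label(valence, arousal, midpoint=5.0):
    """Map SAM-style 1-9 valence/arousal ratings to four quadrant labels."""
    if valence >= midpoint and arousal >= midpoint:
        return "HVHA"  # high valence, high arousal (e.g., excited)
    if valence >= midpoint:
        return "HVLA"  # high valence, low arousal (e.g., calm)
    if arousal >= midpoint:
        return "LVHA"  # low valence, high arousal (e.g., fearful)
    return "LVLA"      # low valence, low arousal (e.g., bored)

print(quadrant_label(7, 8))  # -> HVHA
```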
Affiliation(s)
- Yaşar Daşdemir
- Department of Computer Engineering, Erzurum Technical University, 25050 Erzurum, Turkey
4
Şentürk YD, Tavacioglu EE, Duymaz İ, Sayim B, Alp N. The Sabancı University Dynamic Face Database (SUDFace): Development and validation of an audiovisual stimulus set of recited and free speeches with neutral facial expressions. Behav Res Methods 2023; 55:3078-3099. PMID: 36018484; DOI: 10.3758/s13428-022-01951-z.
Abstract
Faces convey a wide range of information, including one's identity and emotional and mental states. Face perception is a major research topic in many fields, such as cognitive science, social psychology, and neuroscience. Frequently, stimuli are selected from a range of available face databases. However, even though faces are highly dynamic, most databases consist of static face stimuli. Here, we introduce the Sabancı University Dynamic Face (SUDFace) database. The SUDFace database consists of 150 high-resolution audiovisual videos acquired in a controlled lab environment and stored at a resolution of 1920 × 1080 pixels and a frame rate of 60 Hz. The multimodal database contains three videos of each human model in frontal view, recorded in three conditions: vocalizing two scripted texts (conditions 1 and 2) and one free speech (condition 3). The main focus of the SUDFace database is to provide a large set of dynamic faces with neutral facial expressions and natural speech articulation. Variables such as face orientation, illumination, and accessories (piercings, earrings, facial hair, etc.) were kept constant across all stimuli. We provide detailed stimulus information, including facial features (pixel-wise calculations of face length, eye width, etc.) and speech measures (e.g., duration of speech and repetitions). In two validation experiments, a total of 227 participants rated each video on several psychological dimensions (e.g., neutralness and naturalness of expressions, valence, and the perceived mental states of the models) using Likert scales. The database is freely accessible for research purposes.
Affiliation(s)
- İlker Duymaz
- Psychology, Sabancı University, Orta Mahalle, Tuzla, İstanbul, 34956, Turkey
- Bilge Sayim
- SCALab - Sciences Cognitives et Sciences Affectives, Université de Lille, CNRS, Lille, France
- Institute of Psychology, University of Bern, Fabrikstrasse 8, 3012, Bern, Switzerland
- Nihan Alp
- Psychology, Sabancı University, Orta Mahalle, Tuzla, İstanbul, 34956, Turkey.
5
Smiles and Angry Faces vs. Nods and Head Shakes: Facial Expressions at the Service of Autonomous Vehicles. Multimodal Technologies and Interaction 2023. DOI: 10.3390/mti7020010.
Abstract
When deciding whether or not to cross the street, pedestrians take into consideration information provided by both vehicle kinematics and the driver of an approaching vehicle. It will not be long, however, before drivers of autonomous vehicles (AVs) are unable to communicate their intention to pedestrians, as they will be engaged in activities unrelated to driving. External human–machine interfaces (eHMIs) have been developed to fill the resulting communication gap by offering pedestrians information about the situational awareness and intention of an AV. Several anthropomorphic eHMI concepts have employed facial expressions to communicate vehicle intention. The aim of the present study was to evaluate the efficiency of emotional (smile; angry expression) and conversational (nod; head shake) facial expressions in communicating vehicle intention (yielding; non-yielding). Participants completed a crossing intention task in which they had to decide appropriately whether or not to cross the street. Emotional expressions communicated vehicle intention more efficiently than conversational expressions, as evidenced by the lower response latency in the emotional expression condition. The implications of our findings for the development of anthropomorphic eHMIs that employ facial expressions to communicate vehicle intention are discussed.
6
Heydari F, Sheybani S, Yoonessi A. Iranian emotional face database: Acquisition and validation of a stimulus set of basic facial expressions. Behav Res Methods 2023; 55:143-150. PMID: 35297015; DOI: 10.3758/s13428-022-01812-9.
Abstract
Facial expressions play an essential role in social interactions. Databases of face images have informed theories of emotion perception and have applications in other disciplines, such as facial recognition technology. However, many ethnicities remain largely underrepresented in existing face databases, which can limit the generalizability of the theories and technologies developed from them. Here, we present the first survey-validated database of Iranian faces. It consists of 248 images from 40 Iranian individuals portraying six emotional expressions (anger, sadness, fear, disgust, happiness, and surprise) as well as the neutral state. The photos were taken in a studio setting, following common emotion-induction scenarios and controlling for lighting, camera setup, and the model's head posture. An evaluation survey confirmed high agreement between the models' intended expressions and the raters' perception of them. The database is freely available online for academic research purposes.
Affiliation(s)
- Faeze Heydari
- Institute for Cognitive Science Studies, Tehran, Iran.
- Ali Yoonessi
- Department of Neuroscience and Addiction Studies, School of Advanced Technologies in Medicine, Tehran University of Medical Sciences, Tehran, Iran
7
Jiang Z, Recio G, Li W, Zhu P, He J, Sommer W. The other-race effect in facial expression processing: Behavioral and ERP evidence from a balanced cross-cultural study in women. Int J Psychophysiol 2023; 183:53-60. PMID: 36410466; DOI: 10.1016/j.ijpsycho.2022.11.009.
Abstract
Although evidence for cultural variation in facial expression decoding is accumulating, the other-race effect in facial expression processing and its neural correlates are still unclear. We investigated this question with a fully balanced design in which a group of East Asian and a group of European Caucasian women categorized pictures of sad, happy, angry, and neutral facial expressions posed by individuals of their own race and the other race. Results revealed a disadvantage in categorizing expressions of anger in other-race faces in both samples, and for sad expressions in the European sample only. Partially consistent with this, East Asian participants showed longer latencies of the N170 component in the event-related potential (ERP), and European Caucasian participants showed larger N170 amplitudes, to other-race faces. The late positive complex in the ERP was less distinguishable among other-race facial expressions. The present study thus observed an other-race effect in both early and late stages of face processing, reflecting less efficient structural encoding and less elaborate processing of other-race than own-race faces.
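N170 latency and amplitude are typically quantified by locating the most negative deflection in a post-stimulus search window at occipito-temporal electrodes. A minimal sketch of that measurement (the 130-200 ms window and the toy ERP are illustrative assumptions):

```python
import numpy as np

def n170_peak(epoch, times, window=(0.13, 0.20)):
    """Return N170 peak amplitude and latency from a single-channel ERP.

    epoch: 1D averaged voltage array (e.g., at an occipito-temporal site)
    times: matching time axis in seconds; the search window is an assumption.
    """
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmin(epoch[mask])          # N170 is a negative deflection
    return epoch[mask][idx], times[mask][idx]

# Toy ERP: a negative deflection peaking around 170 ms post-stimulus.
times = np.linspace(-0.1, 0.5, 601)
epoch = -5 * np.exp(-((times - 0.17) ** 2) / (2 * 0.02 ** 2))
print(n170_peak(epoch, times))
```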
Affiliation(s)
- Zhongqing Jiang
- College of Psychology, Liaoning Normal University, Dalian, China.
- Guillermo Recio
- Institute of Neuroscience, Universitat de Barcelona, Barcelona, Spain
- Wenhui Li
- College of Preschool & Primary Education, Shenyang Normal University, Shenyang, China
- Peng Zhu
- School of Teacher Education, Huzhou University, Huzhou, China
- Jiamei He
- College of Psychology, Liaoning Normal University, Dalian, China
8
Hossain S, Umer S, Rout RK, Tanveer M. Fine-grained image analysis for facial expression recognition using deep convolutional neural networks with bilinear pooling. Appl Soft Comput 2023. DOI: 10.1016/j.asoc.2023.109997.
9
Leitner MC, Meurer V, Hutzler F, Schuster S, Hawelka S. The effect of masks on the recognition of facial expressions: A true-to-life study on the perception of basic emotions. Front Psychol 2022; 13:933438. PMID: 36619058; PMCID: PMC9815612; DOI: 10.3389/fpsyg.2022.933438.
Abstract
Mouth-and-nose face masks became ubiquitous during the COVID-19 pandemic, which ignited studies on the perception of emotions in masked faces. Most of these studies presented still images of an emotional face with a face mask digitally superimposed on the nose-mouth region. A common finding of these studies is that smiles become less perceivable. The present study investigated the recognition of basic emotions in video sequences of faces. We replicated much of the evidence gathered from still images with digitally superimposed masks, but we also found fundamental differences from existing studies regarding the perception of smiles, which was less impeded than previous studies implied.
Affiliation(s)
- Michael Christian Leitner
- Salzburg University of Applied Sciences, Salzburg, Austria; Centre for Cognitive Neuroscience (CCNS), University of Salzburg, Salzburg, Austria; Department of Psychology, University of Salzburg, Salzburg, Austria
- Verena Meurer
- Centre for Cognitive Neuroscience (CCNS), University of Salzburg, Salzburg, Austria; Department of Psychology, University of Salzburg, Salzburg, Austria
- Florian Hutzler
- Centre for Cognitive Neuroscience (CCNS), University of Salzburg, Salzburg, Austria; Department of Psychology, University of Salzburg, Salzburg, Austria
- Sarah Schuster
- Centre for Cognitive Neuroscience (CCNS), University of Salzburg, Salzburg, Austria; Department of Psychology, University of Salzburg, Salzburg, Austria
- Stefan Hawelka
- Centre for Cognitive Neuroscience (CCNS), University of Salzburg, Salzburg, Austria; Department of Psychology, University of Salzburg, Salzburg, Austria
10
Fabrício DDM, Ferreira BLC, Maximiano-Barreto MA, Muniz M, Chagas MHN. Construction of face databases for tasks to recognize facial expressions of basic emotions: a systematic review. Dement Neuropsychol 2022; 16:388-410. DOI: 10.1590/1980-5764-dn-2022-0039.
Abstract
Recognizing others' emotions is an important skill for the social context that can be modulated by variables such as gender, age, and race. A number of studies have developed specific face databases to assess the recognition of basic emotions in different contexts. Objectives: This systematic review gathered these studies, describing and comparing the methodologies used in their elaboration. Methods: The articles were retrieved from PubMed, Web of Science, PsycInfo, and Scopus using the search string "Facial expression database OR Stimulus set AND development OR Validation." Results: Thirty-six articles were included. Most studies used actors to express the emotions, which were elicited from specific situations to generate the most spontaneous emotion possible. The databases were mainly composed of colorful and static stimuli. In addition, most studies established and described standards for recording the stimuli, such as the color of the garments worn and the background. The psychometric properties of the databases are also described. Conclusions: The data presented in this review point to methodological heterogeneity among the studies. Nevertheless, we describe their patterns, contributing to the planning of new research that seeks to create databases for new contexts.
Affiliation(s)
- Monalisa Muniz
- Universidade Federal de São Carlos, Brazil
- Marcos Hortes Nisihara Chagas
- Universidade Federal de São Carlos, Brazil; Universidade de São Paulo, Brazil; Instituto Bairral de Psiquiatria, Brazil
11
Tan X, Fan Y, Sun M, Zhuang M, Qu F. An Emotion Index Estimation based on Facial Action Unit Prediction. Pattern Recognit Lett 2022. DOI: 10.1016/j.patrec.2022.11.019.
12
Ghost on the Windshield: Employing a Virtual Human Character to Communicate Pedestrian Acknowledgement and Vehicle Intention. Information 2022. DOI: 10.3390/info13090420.
Abstract
Pedestrians base their street-crossing decisions on vehicle-centric as well as driver-centric cues. In the future, however, drivers of autonomous vehicles will be preoccupied with non-driving related activities and will thus be unable to provide pedestrians with relevant communicative cues. External human–machine interfaces (eHMIs) hold promise for filling the expected communication gap by providing information about a vehicle’s situational awareness and intention. In this paper, we present an eHMI concept that employs a virtual human character (VHC) to communicate pedestrian acknowledgement and vehicle intention (non-yielding; cruising; yielding). Pedestrian acknowledgement is communicated via gaze direction while vehicle intention is communicated via facial expression. The effectiveness of the proposed anthropomorphic eHMI concept was evaluated in the context of a monitor-based laboratory experiment where the participants performed a crossing intention task (self-paced, two-alternative forced choice) and their accuracy in making appropriate street-crossing decisions was measured. In each trial, they were first presented with a 3D animated sequence of a VHC (male; female) that either looked directly at them or clearly to their right while producing either an emotional (smile; angry expression; surprised expression), a conversational (nod; head shake), or a neutral (neutral expression; cheek puff) facial expression. Then, the participants were asked to imagine they were pedestrians intending to cross a one-way street at a random uncontrolled location when they saw an autonomous vehicle equipped with the eHMI approaching from the right and indicate via mouse click whether they would cross the street in front of the oncoming vehicle or not. An implementation of the proposed concept where non-yielding intention is communicated via the VHC producing either an angry expression, a surprised expression, or a head shake; cruising intention is communicated via the VHC puffing its cheeks; and yielding intention is communicated via the VHC nodding, was shown to be highly effective in ensuring the safety of a single pedestrian or even two co-located pedestrians without compromising traffic flow in either case. The implications for the development of intuitive, culture-transcending eHMIs that can support multiple pedestrians in parallel are discussed.
13
Daşdemir Y. Cognitive investigation on the effect of augmented reality-based reading on emotion classification performance: A new dataset. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103942.
14
Lieberz J, Shamay-Tsoory SG, Saporta N, Kanterman A, Gorni J, Esser T, Kuskova E, Schultz J, Hurlemann R, Scheele D. Behavioral and Neural Dissociation of Social Anxiety and Loneliness. J Neurosci 2022; 42:2570-2583. PMID: 35165170; PMCID: PMC8944238; DOI: 10.1523/jneurosci.2029-21.2022.
Abstract
Loneliness is a public health concern with detrimental effects on physical and mental well-being. Given phenotypical overlaps between loneliness and social anxiety (SA), cognitive-behavioral interventions targeting SA might be adopted to reduce loneliness. However, whether SA and loneliness share the same underlying neurocognitive mechanisms is still an elusive question. The current study aimed at investigating to what extent known behavioral and neural correlates of social avoidance in SA are evident in loneliness. We used a prestratified approach involving 42 (21 females) participants with high loneliness (HL) and 40 (20 females) participants with low loneliness (LL) scores. During fMRI, participants completed a social gambling task to measure the subjective value of engaging in social situations and responses to social feedback. Univariate and multivariate analyses of behavioral and neural data replicated known task effects. However, although HL participants showed increased SA, loneliness was associated with a response pattern clearly distinct from SA. Specifically, contrary to expectations based on SA differences, Bayesian analyses revealed moderate evidence for equal subjective values of engaging in social situations and comparable amygdala responses to social decision-making and striatal responses to positive social feedback in both groups. Moreover, while explorative analyses revealed reduced pleasantness ratings, increased striatal activity, and decreased striatal-hippocampal connectivity in response to negative computer feedback in HL participants, these effects were diminished for negative social feedback. Our findings suggest that, unlike SA, loneliness is not associated with withdrawal from social interactions. Thus, established interventions for SA should be adjusted when targeting loneliness.

SIGNIFICANCE STATEMENT: Loneliness can cause serious health problems. Adapting well-established cognitive-behavioral therapies targeting social anxiety might be promising to reduce chronic loneliness given a close link between both constructs. However, a better understanding of behavioral and neurobiological factors associated with loneliness is needed to identify which specific mechanisms of social anxiety are shared by lonely individuals. We found that lonely individuals show a consistently distinct pattern of behavioral and neural responsiveness to social decision-making and social feedback compared with previous findings for social anxiety. Our results indicate that loneliness is associated with a biased emotional reactivity to negative events rather than social avoidance. Our findings thus emphasize the distinctiveness of loneliness from social anxiety and the need for adjusted psychotherapeutic protocols.
Affiliation(s)
- Jana Lieberz
- Research Section Medical Psychology, Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, 53127, Germany
- Nira Saporta
- Department of Psychology, University of Haifa, Haifa, 3498838, Israel
- Alisa Kanterman
- Department of Psychology, University of Haifa, Haifa, 3498838, Israel
- Jessica Gorni
- Research Section Medical Psychology, Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, 53127, Germany
- Timo Esser
- Research Section Medical Psychology, Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, 53127, Germany
- Ekaterina Kuskova
- Research Section Medical Psychology, Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, 53127, Germany
- Johannes Schultz
- Center for Economics and Neuroscience, University of Bonn, Bonn, 53127, Germany
- Institute of Experimental Epileptology and Cognition Research, University of Bonn, Bonn, 53127, Germany
- René Hurlemann
- Department of Psychiatry, School of Medicine & Health Sciences, University of Oldenburg, Oldenburg, 26129, Germany
- Research Center Neurosensory Science, University of Oldenburg, Oldenburg, 26129, Germany
- Dirk Scheele
- Research Section Medical Psychology, Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, 53127, Germany
- Department of Psychiatry, School of Medicine & Health Sciences, University of Oldenburg, Oldenburg, 26129, Germany
15
Development and validation of the Interoceptive States Static Images (ISSI) database. Behav Res Methods 2021; 54:1744-1765. PMID: 34651297; PMCID: PMC9374619; DOI: 10.3758/s13428-021-01706-2.
Abstract
Internal bodily signals provide an essential function for human survival. Accurate recognition of such signals in the self, known as interoception, supports the maintenance of homeostasis, and is closely related to emotional processing, learning and decision-making, and mental health. While numerous studies have investigated interoception in the self, the recognition of these states in others has not been examined despite its crucial importance for successful social relationships. This paper presents the development and validation of the Interoceptive States Static Images (ISSI), introducing a validated database of 423 visual stimuli for the study of non-affective internal state recognition in others, freely available to other researchers. Actors were photographed expressing various exemplars of both interoceptive states and control actions. The images went through a two-stage validation procedure, the first involving free-labelling and the second using multiple choice labelling and quality rating scales. Five scores were calculated for each stimulus, providing information about the quality and specificity of the depiction, as well as the extent to which labels matched the intended state/action. Results demonstrated that control action stimuli were more recognisable than internal state stimuli. Inter-category variability was found for the internal states, with some states being more recognisable than others. Recommendations for the utilisation of ISSI stimuli are discussed. The stimulus set is freely available to researchers, alongside data concerning recognisability.
16
Abstract
With a shift in interest toward dynamic expressions, numerous corpora of dynamic facial stimuli have been developed over the past two decades. The present research aimed to test existing sets of dynamic facial expressions (published between 2000 and 2015) in a cross-corpus validation effort. For this, 14 dynamic databases were selected that featured facial expressions of the six basic emotions (anger, disgust, fear, happiness, sadness, surprise) in posed or spontaneous form. In Study 1, a subset of stimuli from each database (N = 162) was presented to human observers and machine analysis, yielding considerable variance in emotion recognition performance across the databases. Classification accuracy further varied with the perceived intensity and naturalness of the displays, with posed expressions being judged more accurately and as more intense, but less natural, than spontaneous ones. Study 2 aimed for a full validation of the 14 databases by subjecting the entire stimulus set (N = 3812) to machine analysis. A FACS-based Action Unit (AU) analysis revealed that facial AU configurations were more prototypical in posed than in spontaneous expressions. The prototypicality of an expression in turn predicted emotion classification accuracy, with higher performance observed for more prototypical facial behavior. Furthermore, technical features of each database (i.e., duration, face box size, head rotation, and motion) had a significant impact on recognition accuracy. Together, the findings suggest that existing databases vary in their ability to signal specific emotions, facing a trade-off between realism and ecological validity on the one hand, and expression uniformity and comparability on the other.
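The reported link between AU prototypicality and classification accuracy is, at its core, a per-stimulus correlation/regression. A sketch with invented per-stimulus scores (illustrative numbers, not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical per-stimulus scores: AU-based prototypicality of the display
# and the proportion of observers/classifiers labelling it correctly.
prototypicality = np.array([0.91, 0.75, 0.60, 0.83, 0.45, 0.70])
accuracy = np.array([0.88, 0.71, 0.55, 0.80, 0.42, 0.66])

r, p = stats.pearsonr(prototypicality, accuracy)
slope, intercept, *_ = stats.linregress(prototypicality, accuracy)
print(f"r = {r:.2f}, p = {p:.3f}; "
      f"accuracy ~ {intercept:.2f} + {slope:.2f} * prototypicality")
```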
17
Jia S, Wang S, Hu C, Webster PJ, Li X. Detection of Genuine and Posed Facial Expressions of Emotion: Databases and Methods. Front Psychol 2021; 11:580287. PMID: 33519600; PMCID: PMC7844089; DOI: 10.3389/fpsyg.2020.580287.
Abstract
Facial expressions of emotion play an important role in human social interactions. However, posed expressions of emotion are not always the same as genuine feelings. Recent research has found that facial expressions are increasingly used as a tool for understanding social interactions instead of personal emotions. Therefore, the credibility assessment of facial expressions, namely, the discrimination of genuine (spontaneous) expressions from posed (deliberate/volitional/deceptive) ones, is a crucial yet challenging task in facial expression understanding. With recent advances in computer vision and machine learning techniques, rapid progress has been made in recent years for automatic detection of genuine and posed facial expressions. This paper presents a general review of the relevant research, including several spontaneous vs. posed (SVP) facial expression databases and various computer vision based detection methods. In addition, a variety of factors that will influence the performance of SVP detection methods are discussed along with open issues and technical challenges in this nascent field.
Affiliation(s)
- Shan Jia
- State Key Laboratory of Information Engineering in Surveying Mapping and Remote Sensing, Wuhan University, Wuhan, China; Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV, United States
- Shuo Wang
- Department of Chemical and Biomedical Engineering, West Virginia University, Morgantown, WV, United States
- Chuanbo Hu
- Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV, United States
- Paula J Webster
- Department of Chemical and Biomedical Engineering, West Virginia University, Morgantown, WV, United States
- Xin Li
- Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV, United States
18
Sonawane B, Sharma P. Deep Learning Based Approach of Emotion Detection and Grading System. Pattern Recognition and Image Analysis 2021. DOI: 10.1134/s1054661820040239.
19
Niu B, Gao Z, Guo B. Facial Expression Recognition with LBP and ORB Features. Comput Intell Neurosci 2021; 2021:8828245. PMID: 33505453; PMCID: PMC7815390; DOI: 10.1155/2021/8828245.
Abstract
Emotion plays an important role in communication, and facial expression recognition has become an indispensable part of human-computer interaction. Recently, deep neural networks (DNNs) have been widely used in this field, overcoming the limitations of conventional approaches; however, their application is limited by demanding hardware requirements. To obtain good results without DNNs on the low-specification hardware common in real-life conditions, we propose an algorithm that combines oriented FAST and rotated BRIEF (ORB) features with Local Binary Pattern (LBP) features extracted from facial expressions. First, every image is passed through a face detection algorithm so that more effective features can be extracted. Second, to increase computational speed, the ORB and LBP features are extracted from the face region only; in particular, region division is employed in the traditional ORB scheme to avoid spatial concentration of the features. The features are invariant to scale, grayscale, and rotation changes. Finally, the combined features are classified by a Support Vector Machine (SVM). The proposed method is evaluated on several challenging databases, such as the Cohn-Kanade database (CK+), the Japanese Female Facial Expressions database (JAFFE), and the MMI database; experimental results for seven emotion states (neutral, joy, sadness, surprise, anger, fear, and disgust) show that the proposed framework is effective and accurate.
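A minimal sketch of such a hand-crafted pipeline, using OpenCV's ORB, scikit-image's uniform LBP, and an SVM (mean-pooling the ORB descriptors and skipping the paper's region-division step are simplifications; the random "faces" are placeholders for cropped face images):

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def face_features(gray_face, lbp_points=8, lbp_radius=1):
    """Fixed-length ORB + uniform-LBP descriptor for one cropped face."""
    orb = cv2.ORB_create(nfeatures=64)
    _, desc = orb.detectAndCompute(gray_face, None)
    # Mean-pool the variable number of 32-byte ORB descriptors.
    orb_vec = desc.mean(axis=0) if desc is not None else np.zeros(32)
    # Uniform LBP with P=8 yields codes 0..9 -> a 10-bin histogram.
    lbp = local_binary_pattern(gray_face, lbp_points, lbp_radius,
                               method="uniform")
    hist, _ = np.histogram(lbp, bins=lbp_points + 2,
                           range=(0, lbp_points + 2), density=True)
    return np.concatenate([orb_vec, hist])

# Toy training run on random "faces" with two emotion labels.
rng = np.random.default_rng(0)
faces = [rng.integers(0, 255, (64, 64), dtype=np.uint8) for _ in range(8)]
labels = [0, 1] * 4
X = np.stack([face_features(f) for f in faces])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:2]))
```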
Affiliation(s)
- Ben Niu
- School of Electronic and Information Engineering, Jinling Institute of Technology, Nanjing 211169, China
- Zhenxing Gao
- College of Civil Aviation, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Bingbing Guo
- School of Psychology, South China Normal University, Guangzhou 510631, China
20
Burt AL, Crewther DP. The 4D Space-Time Dimensions of Facial Perception. Front Psychol 2020; 11:1842. PMID: 32849084; PMCID: PMC7399249; DOI: 10.3389/fpsyg.2020.01842.
Abstract
Facial information is a powerful channel for human-to-human communication. Faces can be characterized as biological objects that are four-dimensional (4D) patterns, possessing a spatial structure and surface together with temporal dynamics. The spatial characteristics of facial objects comprise a volume and surface in three dimensions (3D): breadth, height and, importantly, depth. The temporal properties of facial objects are defined by how a 3D facial structure and surface evolve dynamically over time, with time as the fourth dimension. Our entire perception of another's face, whether social, affective or cognitive, is therefore built on a combination of 3D and 4D visual cues. Counterintuitively, over the past few decades of experimental research in psychology, facial stimuli have largely been captured, reproduced and presented to participants in two dimensions (2D), while remaining largely static. The following review aims to update facial researchers on the recent revolution in computer-generated, realistic 4D facial models produced from real-life human subjects. We summarize recent studies that have utilized facial stimuli possessing 3D structural and surface cues (geometry, surface and depth) and 4D temporal cues (3D structure plus dynamic viewpoint and movement). In sum, we find that higher-order perceptions such as identity, gender, ethnicity, emotion and personality are critically influenced by 4D characteristics. In future, it is recommended that facial stimuli incorporate the 4D space-time perspective with the proposed time-resolved methods.
Affiliation(s)
- Adelaide L Burt
- Centre for Human Psychopharmacology, Swinburne University of Technology, Melbourne, VIC, Australia
- David P Crewther
- Centre for Human Psychopharmacology, Swinburne University of Technology, Melbourne, VIC, Australia
21
Benda MS, Scherf KS. The Complex Emotion Expression Database: A validated stimulus set of trained actors. PLoS One 2020; 15:e0228248. PMID: 32012179; PMCID: PMC6996812; DOI: 10.1371/journal.pone.0228248.
Abstract
The vast majority of empirical work investigating the mechanisms supporting the perception and recognition of facial expressions is focused on basic expressions. Less is known about the underlying mechanisms supporting the processing of complex expressions, which provide signals about emotions related to more nuanced social behavior and inner thoughts. Here, we introduce the Complex Emotion Expression Database (CEED), a digital stimulus set of 243 basic and 237 complex emotional facial expressions. The stimuli represent six basic expressions (angry, disgusted, fearful, happy, sad, and surprised) and nine complex expressions (affectionate, attracted, betrayed, brokenhearted, contemptuous, desirous, flirtatious, jealous, and lovesick) that were posed by Black and White formally trained, young adult actors. All images were validated by a minimum of 50 adults in a 4-alternative forced choice task. Only images for which ≥ 50% of raters endorsed the correct emotion label were included in the final database. This database will be an excellent resource for researchers interested in studying the developmental, behavioral, and neural mechanisms supporting the perception and recognition of complex emotion expressions.
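The ≥ 50% endorsement criterion described above is a simple per-image filter over the forced-choice validation data. A hedged sketch with invented ratings (the column names and data are illustrative, not the database's actual validation files):

```python
import pandas as pd

# Hypothetical long-format validation data: one row per rater x image,
# with the label chosen in the 4-alternative forced-choice task.
ratings = pd.DataFrame({
    "image":    ["f01_angry", "f01_angry", "f02_jealous", "f02_jealous"],
    "intended": ["angry",     "angry",     "jealous",     "jealous"],
    "chosen":   ["angry",     "angry",     "jealous",     "sad"],
})

# Proportion of raters endorsing the intended label, per image.
endorsement = (
    ratings.assign(correct=ratings["chosen"] == ratings["intended"])
    .groupby("image")["correct"].mean()
)

# Keep only images meeting the >= 50% endorsement criterion.
validated = endorsement[endorsement >= 0.5].index.tolist()
print(validated)
```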
Affiliation(s)
- Margaret S. Benda
- Department of Psychology, Pennsylvania State University, University Park, PA, United States of America
- K. Suzanne Scherf
- Department of Psychology, Pennsylvania State University, University Park, PA, United States of America
22
Chung KM, Kim S, Jung WH, Kim Y. Development and Validation of the Yonsei Face Database (YFace DB). Front Psychol 2019; 10:2626. PMID: 31849755; PMCID: PMC6901828; DOI: 10.3389/fpsyg.2019.02626.
Abstract
The purposes of this study were to develop the Yonsei Face Database (YFace DB), consisting of both static and dynamic face stimuli for six basic emotions (happiness, sadness, anger, surprise, fear, and disgust), and to test its validity. The database includes selected pictures (static stimuli) and film clips (dynamic stimuli) of 74 models (50% female) aged between 19 and 40. During the validation procedure, 221 undergraduate students assessed the 1480 selected pictures and film clips for accuracy, intensity, and naturalness. The overall accuracy of the pictures was 76%; film clips had a higher accuracy, of 83%. The highest accuracy was observed for happiness and the lowest for fear across all conditions (static with mouth open or closed, or dynamic). Accuracy was higher for film clips across all emotions but happiness and disgust, while naturalness was higher for the pictures, except for sadness and anger. Intensity varied the most across conditions and emotions. Significant gender effects on perception accuracy were found for both the gender of the models and that of the raters. Male raters perceived surprise more accurately in static stimuli with mouth open and in dynamic stimuli, while female raters perceived fear more accurately in all conditions. Moreover, sadness and anger expressed in static stimuli with mouth open, and fear expressed in dynamic stimuli, were perceived more accurately when models were male. Disgust expressed in static stimuli with mouth open and in dynamic stimuli, and fear expressed in static stimuli with mouth closed, were perceived more accurately when models were female. The YFace DB is the largest Asian face database to date and the first to include both static and dynamic facial expression stimuli; through its validation procedure, the current study provides researchers with a wealth of information about the validity of each stimulus.
Affiliation(s)
- Kyong-Mee Chung
- Department of Psychology, Yonsei University, Seoul, South Korea
- Soojin Kim
- Department of Psychology, Yonsei University, Seoul, South Korea
- Woo Hyun Jung
- Department of Psychology, Chungbuk National University, Cheongju, South Korea
- Yeunjoo Kim
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
23
Derya D, Kang J, Kwon DY, Wallraven C. Facial Expression Processing Is Not Affected by Parkinson's Disease, but by Age-Related Factors. Front Psychol 2019; 10:2458. PMID: 31798486; PMCID: PMC6868040; DOI: 10.3389/fpsyg.2019.02458.
Abstract
The question of whether facial expression processing is impaired in Parkinson's disease (PD) patients has so far yielded equivocal results – existing studies, however, have focused on testing expression processing in recognition tasks with static images of six standard, emotional facial expressions. Given that non-verbal communication contains both emotional and non-emotional, conversational expressions and that input to the brain is usually dynamic, here we address the question of potential facial expression processing differences in a novel format: we test a range of conversational and emotional, dynamic facial expressions in three groups – PD patients (n = 20), age- and education-matched older healthy controls (n = 20), and younger adult healthy controls (n = 20). This setup allows us to address both effects of PD and age-related differences. All groups performed a rating task in which 12 rating dimensions were used to assess evaluative processing of 27 expression videos from six different actors. We found that ratings were overall consistent across groups, with several rating dimensions (such as arousal or outgoingness) correlating strongly with the expressions' motion energy content as measured by optic flow analysis. Most importantly, the PD group did not differ on any rating dimension from the older healthy control group (HCG), indicating highly similar evaluative processing. Both older groups, however, did show significant differences on several rating scales in comparison with the younger adult HCG. Looking more closely, older participants rated negative expressions as more positive than did younger participants, but also as less natural, persuasive, empathic, and sincere. We interpret these findings in the context of the positivity effect and in-group processing advantages. Overall, our findings do not support strong processing deficits due to PD, but rather point to age-related differences in facial expression processing.
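Motion energy from optic flow is commonly computed as the mean flow-vector magnitude across a clip's frames. A generic Farneback-flow sketch with OpenCV (the paper's exact flow algorithm and summary statistic are not specified in the abstract):

```python
import cv2
import numpy as np

def motion_energy(video_path):
    """Mean optic-flow magnitude over a clip: one rough motion-energy index."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError(f"cannot read {video_path}")
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    magnitudes = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense Farneback flow between consecutive frames.
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitudes.append(np.linalg.norm(flow, axis=2).mean())
        prev = gray
    cap.release()
    return float(np.mean(magnitudes))

# Usage: motion_energy("actor01_surprise.mp4")  # hypothetical file name
```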
Affiliation(s)
- Dilara Derya
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- June Kang
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Do-Young Kwon
- Department of Neurology, Korea University Ansan Hospital, Korea University College of Medicine, Ansan-si, South Korea
- Christian Wallraven
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea; Department of Artificial Intelligence, Korea University, Seoul, South Korea
24
Abstract
Facial Expression Recognition (FER), as the primary processing method for non-verbal intentions, is an important and promising field of computer vision and artificial intelligence, and one of the subject areas of symmetry. This survey is a comprehensive and structured overview of recent advances in FER. We first categorise the existing FER methods into two main groups, i.e., conventional approaches and deep learning-based approaches. Methodologically, to highlight the differences and similarities, we propose a general framework of a conventional FER approach and review the possible technologies that can be employed in each component. As for deep learning-based methods, four kinds of neural network-based state-of-the-art FER approaches are presented and analysed. Besides, we introduce seventeen commonly used FER datasets and summarise four FER-related elements of datasets that may influence the choosing and processing of FER approaches. Evaluation methods and metrics are given in the later part to show how to assess FER algorithms, along with subsequent performance comparisons of different FER approaches on the benchmark datasets. At the end of the survey, we present some challenges and opportunities that need to be addressed in future.
25
Schultz J, Willems T, Gädeke M, Chakkour G, Franke A, Weber B, Hurlemann R. A human subcortical network underlying social avoidance revealed by risky economic choices. eLife 2019; 8:e45249. PMID: 31329098; PMCID: PMC6703852; DOI: 10.7554/eLife.45249.
Abstract
Social interactions have a major impact on well-being. While many individuals actively seek social situations, others avoid them, at great cost to their private and professional life. The neural mechanisms underlying individual differences in social approach or avoidance tendencies are poorly understood. Here we estimated people's subjective value of engaging in a social situation. In each trial, more or less socially anxious participants chose between an interaction with a human partner providing social feedback and a monetary amount. With increasing social anxiety, the subjective value of social engagement decreased; amygdala BOLD response during decision-making and when experiencing social feedback increased; ventral striatum BOLD response to positive social feedback decreased; and connectivity between these regions during decision-making increased. Amygdala response was negatively related to the subjective value of social engagement. These findings suggest a relation between trait social anxiety/social avoidance and activity in a subcortical network during social decision-making.

Your relationships with the people around you – friends, family, colleagues – have a strong influence on your overall life happiness. Even so, many people struggle to engage with the people around them. Social interactions can be stressful and many people choose to avoid them, even at a cost. Being able to measure these tendencies experimentally is a first useful step for assessing social avoidance without relying on people's, often biased, recollections of their actions and behaviours. But how can a tendency to avoid social situations be quantified? And what can an experiment to measure this tendency reveal about the neural underpinnings of social avoidance? Schultz et al. asked volunteers to play a social game. If they played, the volunteers had the chance to win three euros, but they could choose not to play and receive a fixed amount of money, which varied across trials between zero and three euros. This approach allowed Schultz et al. to quantify how much the volunteers valued playing the game. The game involved playing with other virtual human partners, who gave either positive or negative social feedback depending on the outcome of the game in the form of videos of facial expressions. In a non-social control experiment, a computer gave abstract feedback in the form of symbols. Schultz et al. found that the value people placed on playing the social game varied with their level of social anxiety (established using a standard questionnaire). The more anxious people attributed less value to engaging in the game. Neuroimaging experiments revealed that the activity and connectivity between the amygdala and ventral striatum, two parts of the brain involved in processing emotions and reward-related stimuli, varied according to people's levels of social anxiety. Social interactions have a major impact on the quality of life of both healthy people and those with mental disorders. Developing new ways to measure and understand the differences in the brain linked to social traits could help to characterise certain conditions and document therapy progress. Methods to quantify social anxiety and avoidance are also in line with efforts to explore the neuroscience behind the full range of human behaviour.
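Estimating the subjective value of social engagement from such choices typically means fitting a choice curve over the monetary offers and reading off the indifference point. A hedged sketch with invented choice data (the paper's actual model may differ):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: fixed monetary offers (EUR) and whether the participant
# chose the social game (1) over the sure amount (0) on each trial.
offers = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]).reshape(-1, 1)
chose_social = np.array([1, 1, 1, 0, 1, 0, 0])

# Near-unregularized logistic fit of choice probability against the offer.
model = LogisticRegression(C=1e6).fit(offers, chose_social)
b0, b1 = model.intercept_[0], model.coef_[0, 0]

# Indifference point: the offer at which P(choose social) = 0.5, one way
# to express the subjective value of social engagement in euros.
print(f"subjective value ~ {-b0 / b1:.2f} EUR")
```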
Affiliation(s)
- Johannes Schultz
- Division of Medical Psychology, University of Bonn, Bonn, Germany; Center for Economics and Neuroscience, University of Bonn, Bonn, Germany; Institute of Experimental Epileptology and Cognition Research, University of Bonn, Bonn, Germany
- Tom Willems
- Division of Medical Psychology, University of Bonn, Bonn, Germany
- Maria Gädeke
- Division of Medical Psychology, University of Bonn, Bonn, Germany
- Ghada Chakkour
- Division of Medical Psychology, University of Bonn, Bonn, Germany; Medical School, University of Bonn, Bonn, Germany
- Alexander Franke
- Division of Medical Psychology, University of Bonn, Bonn, Germany; Medical School, University of Bonn, Bonn, Germany
- Bernd Weber
- Center for Economics and Neuroscience, University of Bonn, Bonn, Germany; Institute of Experimental Epileptology and Cognition Research, University of Bonn, Bonn, Germany
- Rene Hurlemann
- Division of Medical Psychology, University of Bonn, Bonn, Germany; Department of Psychiatry and Psychotherapy, University of Bonn, Bonn, Germany
26
Darke H, Cropper SJ, Carter O. A Novel Dynamic Morphed Stimuli Set to Assess Sensitivity to Identity and Emotion Attributes in Faces. Front Psychol 2019; 10:757. PMID: 31024397; PMCID: PMC6465610; DOI: 10.3389/fpsyg.2019.00757.
Abstract
Face-based tasks are used ubiquitously in the study of human perception and cognition. Video-based (dynamic) face stimuli are increasingly utilized by researchers because they have higher ecological validity than static images. However, few of the ready-to-use dynamic stimulus sets currently available to researchers include non-emotional and non-face control stimuli. This paper outlines the development of three original dynamic stimulus sets: a set of emotional faces (fear and disgust), a set of non-emotional faces, and a set of car animations. Morphing software was employed to vary the intensity of the expression shown and the similarity between actors. Manipulating these dimensions permits the creation of tasks of varying difficulty that can be optimized to detect subtle differences in face-processing ability. Using these new stimuli, two preliminary experiments were conducted to evaluate different aspects of facial identity recognition, emotion recognition, and non-face object discrimination. Results suggest that the five resulting tasks successfully avoided floor and ceiling effects in a healthy sample. The second experiment found that dynamic versions of the emotional stimuli were recognized more accurately than static versions, in both labeling and discrimination paradigms. This indicates that, as with previous emotion-only stimulus sets, the use of dynamic stimuli confers an advantage over image-based stimuli. These stimuli therefore provide a useful resource for researchers looking to investigate both emotional and non-emotional face processing using dynamic stimuli. Moreover, the stimuli vary across crucial dimensions (i.e., face similarity and intensity of emotion), which allows researchers to modify task difficulty as required.
Collapse
Affiliation(s)
- Hayley Darke
- Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, VIC, Australia
| | - Simon J Cropper
- Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, VIC, Australia
| | - Olivia Carter
- Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, VIC, Australia
| |
Collapse
|
27
|
Müller T, Schäfer R, Hahn S, Franz M. Adults' facial reaction to affective facial expressions of children and adults. Int J Psychophysiol 2019; 139:33-39. [PMID: 30695699 DOI: 10.1016/j.ijpsycho.2019.01.001] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2018] [Revised: 11/22/2018] [Accepted: 01/02/2019] [Indexed: 11/16/2022]
Abstract
Facial mimicry, the unconscious imitation of others' affective facial expressions, serves as an important basis for interpersonal communication. Although there are many studies dealing with this phenomenon in interactions between adults, only a few experiments have explored facial mimicry in response to affective facial expressions of children. In the present study, affect-prototypical video clips of children's and adults' faces were presented to 44 adults while the activity of the corrugator supercilii and zygomaticus muscles was measured electromyographically. A distinct mimicry reaction was detected in response to each basic affect (fear, disgust, happiness, sadness, surprise, and anger). The activity of the corrugator supercilii muscle was significantly lower when affective facial expressions of children were presented rather than those of adults. In addition, negative correlations between alexithymia and averaged facial EMG activity were detected.
Collapse
Affiliation(s)
- Tobias Müller
- Clinical Institute for Psychosomatic Medicine and Psychotherapy (15.16), Heinrich-Heine-University, Moorenstraße 5, 40225 Düsseldorf, Germany.
| | - Ralf Schäfer
- Clinical Institute for Psychosomatic Medicine and Psychotherapy (15.16), Heinrich-Heine-University, Moorenstraße 5, 40225 Düsseldorf, Germany.
| | - Sina Hahn
- Clinical Institute for Psychosomatic Medicine and Psychotherapy (15.16), Heinrich-Heine-University, Moorenstraße 5, 40225 Düsseldorf, Germany.
| | - Matthias Franz
- Clinical Institute for Psychosomatic Medicine and Psychotherapy (15.16), Heinrich-Heine-University, Moorenstraße 5, 40225 Düsseldorf, Germany.
| |
Collapse
|
28
|
Tu YZ, Lin DW, Suzuki A, Goh JOS. East Asian Young and Older Adult Perceptions of Emotional Faces From an Age- and Sex-Fair East Asian Facial Expression Database. Front Psychol 2018; 9:2358. [PMID: 30555382 PMCID: PMC6281963 DOI: 10.3389/fpsyg.2018.02358] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2018] [Accepted: 11/10/2018] [Indexed: 11/21/2022] Open
Abstract
There is increasing interest in clarifying how different face emotion expressions are perceived by people from different cultures, of different ages, and of different sexes. However, the scant availability of well-controlled emotional face stimuli from non-Western populations limits the evaluation of cultural differences in face emotion perception and of how such differences might be modulated by age and sex. We present a database of East Asian facial expression stimuli, enacted by young and older, male and female Taiwanese adults using the Facial Action Coding System (FACS). Combined with a prior database, the present database comprises 90 identities with happy, sad, angry, fearful, disgusted, surprised, and neutral expressions, amounting to 628 photographs. Twenty young and 24 older East Asian raters scored the photographs for intensities of multiple dimensions of emotions and induced affect. Multivariate analyses characterized the dimensionality of perceived emotions and quantified effects of age and sex. We also applied commercial software to extract computer-based metrics of emotions in the photographs. Taiwanese raters perceived happy faces as one category; sad, angry, and disgusted expressions as one category; and fearful and surprised expressions as one category. Younger females were more sensitive to face emotions than younger males. Whereas older males showed reduced face emotion sensitivity, older females' sensitivity was similar to, or accentuated relative to, that of young females. Commercial software dissociated six emotions according to the FACS, demonstrating that defining visual features were present. Our findings show that East Asians perceive a different dimensionality of emotions than the Western-based definitions used in face recognition software, regardless of age and sex. Critically, stimuli with detailed cultural norms are indispensable for interpreting neural and behavioral responses involving human facial expression processing. To this end, we add to the available tools, which can be obtained upon request, for conducting such research.
Collapse
Affiliation(s)
- Yu-Zhen Tu
- Graduate Institute of Brain and Mind Sciences, College of Medicine, National Taiwan University, Taipei, Taiwan
| | - Dong-Wei Lin
- Graduate Institute of Brain and Mind Sciences, College of Medicine, National Taiwan University, Taipei, Taiwan
| | - Atsunobu Suzuki
- Department of Psychology, Graduate School of Humanities and Sociology, The University of Tokyo, Tokyo, Japan
| | - Joshua Oon Soo Goh
- Graduate Institute of Brain and Mind Sciences, College of Medicine, National Taiwan University, Taipei, Taiwan.,Department of Psychology, College of Science, National Taiwan University, Taipei, Taiwan.,Neurobiological and Cognitive Science Center, National Taiwan University, Taipei, Taiwan.,Center for Artificial Intelligence and Advanced Robotics, National Taiwan University, Taipei, Taiwan
| |
Collapse
|
29
|
Calvo MG, Fernández-Martín A, Recio G, Lundqvist D. Human Observers and Automated Assessment of Dynamic Emotional Facial Expressions: KDEF-dyn Database Validation. Front Psychol 2018; 9:2052. [PMID: 30416473 PMCID: PMC6212581 DOI: 10.3389/fpsyg.2018.02052] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2018] [Accepted: 10/05/2018] [Indexed: 12/11/2022] Open
Abstract
Most experimental studies of facial expression processing have used static stimuli (photographs), yet facial expressions in daily life are generally dynamic. In its original photographic format, the Karolinska Directed Emotional Faces (KDEF) has been frequently utilized. In the current study, we validate a dynamic version of this database, the KDEF-dyn. To this end, we applied animation between neutral and emotional expressions (happy, sad, angry, fearful, disgusted, and surprised; 1,033-ms unfolding) to 40 KDEF models, with morphing software. Ninety-six human observers categorized the expressions of the resulting 240 video-clip stimuli, and automated face analysis assessed the evidence for 6 expressions and 20 facial action units (AUs) at 31 intensities. Low-level image properties (luminance, signal-to-noise ratio, etc.) and other purely perceptual factors (e.g., size, unfolding speed) were controlled. Human recognition performance (accuracy, efficiency, and confusions) patterns were consistent with prior research using static and other dynamic expressions. Automated assessment of expressions and AUs was sensitive to intensity manipulations. Significant correlations emerged between human observers' categorization and automated classification. The KDEF-dyn database aims to provide a balance between experimental control and ecological validity for research on emotional facial expression processing. The stimuli and the validation data are available to the scientific community.
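The intensity grading described above was produced with morphing software. As a rough illustration of intensity grading only, the sketch below cross-fades pixel-wise between a neutral frame and an apex frame at 31 levels; real morphing software additionally warps facial geometry, and the input filenames here are hypothetical.

```python
import numpy as np
import cv2  # OpenCV; pip install opencv-python

# A crude stand-in for expression morphing: pixel-wise cross-fading between
# a neutral frame and the emotional apex frame. True morphing also warps
# facial geometry; this sketch only grades intensity by blending.
neutral = cv2.imread("neutral.png").astype(np.float32)    # hypothetical file
apex = cv2.imread("apex_happy.png").astype(np.float32)    # hypothetical file

n_levels = 31  # e.g., 31 intensity steps, as sampled in the automated analysis
for i, alpha in enumerate(np.linspace(0.0, 1.0, n_levels)):
    frame = (1.0 - alpha) * neutral + alpha * apex  # 0 = neutral, 1 = apex
    cv2.imwrite(f"morph_{i:02d}.png", frame.astype(np.uint8))
```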
Collapse
Affiliation(s)
- Manuel G. Calvo
- Department of Cognitive Psychology, Universidad de La Laguna, San Cristóbal de La Laguna, Spain
- Instituto Universitario de Neurociencia (IUNE), Universidad de La Laguna, Santa Cruz de Tenerife, Spain
| | | | - Guillermo Recio
- Institute of Psychology, Universität Hamburg, Hamburg, Germany
| | - Daniel Lundqvist
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
| |
Collapse
|
30
|
Dobs K, Bülthoff I, Schultz J. Use and Usefulness of Dynamic Face Stimuli for Face Perception Studies-a Review of Behavioral Findings and Methodology. Front Psychol 2018; 9:1355. [PMID: 30123162 PMCID: PMC6085596 DOI: 10.3389/fpsyg.2018.01355] [Citation(s) in RCA: 40] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2018] [Accepted: 07/13/2018] [Indexed: 01/01/2023] Open
Abstract
Faces that move contain rich information about facial form, such as facial features and their configuration, alongside the motion of those features. During social interactions, humans constantly decode and integrate these cues. To fully understand human face perception, it is important to investigate what information dynamic faces convey and how the human visual system extracts and processes information from this visual input. However, partly due to the difficulty of designing well-controlled dynamic face stimuli, many face perception studies still rely on static faces as stimuli. Here, we focus on evidence demonstrating the usefulness of dynamic faces as stimuli, and evaluate different types of dynamic face stimuli to study face perception. Studies based on dynamic face stimuli revealed a high sensitivity of the human visual system to natural facial motion and consistently reported dynamic advantages when static face information is insufficient for the task. These findings support the hypothesis that the human perceptual system integrates sensory cues for robust perception. In the present paper, we review the different types of dynamic face stimuli used in these studies, and assess their usefulness for several research questions. Natural videos of faces are ecological stimuli but provide limited control of facial form and motion. Point-light faces allow for good control of facial motion but are highly unnatural. Image-based morphing is a way to achieve control over facial motion while preserving the natural facial form. Synthetic facial animations allow separation of facial form and motion to study aspects such as identity-from-motion. While synthetic faces are less natural than videos of faces, recent advances in photo-realistic rendering may close this gap and provide naturalistic stimuli with full control over facial motion. We believe that many open questions, such as what dynamic advantages exist beyond emotion and identity recognition and which dynamic aspects drive these advantages, can be addressed adequately with different types of stimuli and will improve our understanding of face perception in more ecological settings.
Collapse
Affiliation(s)
- Katharina Dobs
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, United States.,Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| | - Isabelle Bülthoff
- Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| | - Johannes Schultz
- Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany.,Division of Medical Psychology and Department of Psychiatry, University of Bonn, Bonn, Germany
| |
Collapse
|
31
|
Livingstone SR, Russo FA. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS One 2018; 13:e0196391. [PMID: 29768426 PMCID: PMC5955500 DOI: 10.1371/journal.pone.0196391] [Citation(s) in RCA: 175] [Impact Index Per Article: 29.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2017] [Accepted: 04/12/2018] [Indexed: 11/19/2022] Open
Abstract
The RAVDESS is a validated multimodal database of emotional speech and song. The database is gender balanced, consisting of 24 professional actors vocalizing lexically matched statements in a neutral North American accent. Speech includes calm, happy, sad, angry, fearful, surprise, and disgust expressions, and song contains calm, happy, sad, angry, and fearful emotions. Each expression is produced at two levels of emotional intensity, with an additional neutral expression. All conditions are available in face-and-voice, face-only, and voice-only formats. Each of the 7356 recordings was rated 10 times on emotional validity, intensity, and genuineness. Ratings were provided by 247 individuals who were characteristic of untrained research participants from North America. A further set of 72 participants provided test-retest data. High levels of emotional validity and test-retest intrarater reliability were reported. Corrected accuracy and composite "goodness" measures are presented to assist researchers in the selection of stimuli. All recordings are made freely available under a Creative Commons license and can be downloaded at https://doi.org/10.5281/zenodo.1188976.
Collapse
Affiliation(s)
- Steven R. Livingstone
- Department of Psychology, Ryerson University, Toronto, Canada
- Department of Computer Science and Information Systems, University of Wisconsin-River Falls, Wisconsin, WI, United States of America
| | - Frank A. Russo
- Department of Psychology, Ryerson University, Toronto, Canada
| |
Collapse
|
32
|
Rummer R, Schweppe J. Talking emotions: vowel selection in fictional names depends on the emotional valence of the to-be-named faces and objects. Cogn Emot 2018; 33:404-416. [PMID: 29658373 DOI: 10.1080/02699931.2018.1456406] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
Abstract
One prestudy based on a corpus analysis and four experiments in which participants had to invent novel names for persons or objects (N = 336 participants in total) investigated how the valence of a face or an object affects the phonological characteristics of the respective novel name. Based on the articulatory feedback hypothesis, we predicted that /i:/ is included more frequently in fictional names for faces or objects with a positive valence than for those with a negative valence. For /o:/, the pattern should reverse. An analysis of the Berlin Affective Word List - Reloaded (BAWL-R) yielded a higher number of occurrences of /o:/ in German words with negative valence than in words with positive valence; for /i:/, the picture is less clear. In Experiments 1 and 2, participants named persons showing a positive or a negative facial expression. Names for smiling persons included more /i:/s and fewer /o:/s than names for persons with a negative facial expression. In Experiments 3 and 4, participants heard a Swahili narration and invented pseudo-Swahili names for objects with positive, neutral, or negative valence. Names for positive objects included more /i:/s than names for neutral or negative objects, and names for negative objects included more /o:/s than names for neutral or positive objects. These findings indicate a stable vowel-emotion link.
Collapse
Affiliation(s)
- Ralf Rummer
- Psychology, University of Kassel, Kassel, Germany
| | | |
Collapse
|
33
|
Ko BC. A Brief Review of Facial Emotion Recognition Based on Visual Information. SENSORS (BASEL, SWITZERLAND) 2018; 18:E401. [PMID: 29385749 PMCID: PMC5856145 DOI: 10.3390/s18020401] [Citation(s) in RCA: 122] [Impact Index Per Article: 20.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/06/2017] [Revised: 01/24/2018] [Accepted: 01/25/2018] [Indexed: 11/24/2022]
Abstract
Facial emotion recognition (FER) is an important topic in the fields of computer vision and artificial intelligence owing to its significant academic and commercial potential. Although FER can be conducted using multiple sensors, this review focuses on studies that exclusively use facial images, because visual expressions are one of the main information channels in interpersonal communication. This paper provides a brief review of research in the field of FER conducted over the past decades. First, conventional FER approaches are described, along with a summary of the representative categories of FER systems and their main algorithms. Deep-learning-based FER approaches using deep networks that enable "end-to-end" learning are then presented. This review also covers an up-to-date hybrid deep-learning approach that combines a convolutional neural network (CNN) for the spatial features of individual frames with long short-term memory (LSTM) for the temporal features of consecutive frames. In the later part of the paper, a brief review of publicly available evaluation metrics is given, together with a comparison against benchmark results, which serve as a standard for the quantitative comparison of FER research. This review can serve as a brief guidebook for newcomers to the field of FER, providing basic knowledge and a general understanding of the latest state-of-the-art studies, as well as for experienced researchers looking for productive directions for future work.
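To make the hybrid architecture mentioned above concrete, here is a minimal PyTorch sketch of a CNN-plus-LSTM video emotion classifier: a small CNN extracts spatial features from each frame, and an LSTM integrates them across consecutive frames. All layer sizes are illustrative assumptions, not taken from any specific published model.

```python
import torch
import torch.nn as nn

class CnnLstmFER(nn.Module):
    """Hybrid FER sketch: per-frame CNN features + LSTM over time."""
    def __init__(self, n_classes=7, feat_dim=128, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clips):                  # clips: (batch, time, 1, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1))  # spatial features per frame
        feats = feats.view(b, t, -1)
        out, _ = self.lstm(feats)              # temporal integration
        return self.head(out[:, -1])           # classify from the last state

logits = CnnLstmFER()(torch.randn(2, 16, 1, 48, 48))  # 2 clips of 16 frames
print(logits.shape)  # torch.Size([2, 7])
```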
Collapse
Affiliation(s)
- Byoung Chul Ko
- Department of Computer Engineering, Keimyung University, Daegu 42601, Korea.
| |
Collapse
|
34
|
Abstract
Socio-affective touch communication conveys a vast amount of information about emotions and intentions in social contexts. Despite the complexity of the socio-affective touch expressions we use daily, previous studies have addressed only a few aspects of social touch, mainly focusing on hedonics (such as stroking) and leaving a wide range of social touch behaviour unexplored. To overcome this limitation, we present the Socio-Affective Touch Expression Database (SATED), which includes a large range of dynamic interpersonal socio-affective touch events varying in valence and arousal. The original database contained 26 different social touch expressions, each performed by three actor pairs. To validate each touch expression, we conducted two behavioural experiments investigating perceived naturalness and affective values. Based on the rated naturalness and valence, 13 socio-affective touch expressions along with 12 corresponding non-social touch events were selected as the complete set, yielding 75 video clips in total. Moreover, we quantified the motion energy of each touch expression to investigate its intrinsic correlations with perceived affective values and its similarity among actor and action pairs. As a result, the touch expression database is not only systematically defined and well controlled, but also spontaneous and natural, while eliciting clear affective responses. This database will allow a fine-grained investigation of complex interpersonal socio-affective touch in social psychology and neuroscience, along with potential applications in affective computing and neighbouring fields.
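Motion energy, as used in the abstract above, is commonly computed from frame-to-frame pixel change. Below is a minimal sketch of one such definition (mean absolute grey-level difference between consecutive frames) using OpenCV; the database authors' exact metric may differ, and the clip filename is hypothetical.

```python
import numpy as np
import cv2

def motion_energy(video_path):
    """Per-clip motion energy as the mean absolute grey-level change
    between consecutive frames (one common definition; the authors'
    exact metric may differ)."""
    cap = cv2.VideoCapture(video_path)
    energies, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            energies.append(np.abs(grey - prev).mean())
        prev = grey
    cap.release()
    return float(np.mean(energies)) if energies else 0.0

print(motion_energy("touch_clip_01.mp4"))  # hypothetical clip filename
```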
Collapse
|
35
|
Fujimura T, Umemura H. Development and validation of a facial expression database based on the dimensional and categorical model of emotions. Cogn Emot 2018; 32:1663-1670. [PMID: 29334821 DOI: 10.1080/02699931.2017.1419936] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
The present study describes the development and validation of a facial expression database comprising five different horizontal face angles in dynamic and static presentations. The database includes twelve expression types portrayed by eight Japanese models. This database was inspired by the dimensional and categorical model of emotions: surprise, fear, sadness, anger with open mouth, anger with closed mouth, disgust with open mouth, disgust with closed mouth, excitement, happiness, relaxation, sleepiness, and neutral (static only). The expressions were validated using emotion classification and Affect Grid rating tasks [Russell, Weiss, & Mendelsohn, 1989. Affect Grid: A single-item scale of pleasure and arousal. Journal of Personality and Social Psychology, 57(3), 493-502]. The results indicate that most of the expressions were recognised as the intended emotions and could systematically represent affective valence and arousal. Furthermore, face angle and facial motion information influenced emotion classification and valence and arousal ratings. Our database will be available online at https://www.dh.aist.go.jp/database/face2017/.
Collapse
Affiliation(s)
- Tomomi Fujimura
- Human Informatics Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan
| | - Hiroyuki Umemura
- Human Informatics Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan
| |
Collapse
|
36
|
Garrido MV, Prada M. KDEF-PT: Valence, Emotional Intensity, Familiarity and Attractiveness Ratings of Angry, Neutral, and Happy Faces. Front Psychol 2017; 8:2181. [PMID: 29312053 PMCID: PMC5742208 DOI: 10.3389/fpsyg.2017.02181] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2017] [Accepted: 11/30/2017] [Indexed: 12/21/2022] Open
Abstract
The Karolinska Directed Emotional Faces (KDEF) is one of the most widely used databases of human facial expressions. Almost a decade after the original validation study (Goeleven et al., 2008), we present subjective rating norms for a subset of 210 pictures depicting 70 models (half female), each displaying an angry, a happy, and a neutral facial expression. Our main goals were to provide an additional and updated validation of this database, using a sample of a different nationality (N = 155 Portuguese students, M = 23.73 years old, SD = 7.24), and to extend the number of subjective dimensions used to evaluate each image. Specifically, participants reported emotional labeling (forced-choice task) and evaluated the emotional intensity and valence of the expression, as well as the attractiveness and familiarity of the model (7-point rating scales). Overall, results show that happy faces obtained the highest ratings across evaluative dimensions and the highest emotion-labeling accuracy. Female (vs. male) models were perceived as more attractive, familiar, and positive. The sex of the model also moderated the accuracy of emotional labeling and the ratings of different facial expressions. Each picture in the set was categorized as low, moderate, or high on each dimension. Normative data for each stimulus (hit proportions, means, standard deviations, and confidence intervals per evaluative dimension) are available as supplementary material at https://osf.io/fvc4m/.
Collapse
Affiliation(s)
| | - Marília Prada
- Instituto Universitário de Lisboa (ISCTE-IUL), CIS - IUL, Lisboa, Portugal
| |
Collapse
|
37
|
Perdikis D, Volhard J, Müller V, Kaulard K, Brick TR, Wallraven C, Lindenberger U. Brain synchronization during perception of facial emotional expressions with natural and unnatural dynamics. PLoS One 2017; 12:e0181225. [PMID: 28723957 PMCID: PMC5517022 DOI: 10.1371/journal.pone.0181225] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2016] [Accepted: 06/28/2017] [Indexed: 11/19/2022] Open
Abstract
Research on the perception of facial emotional expressions (FEEs) often uses static images that do not capture the dynamic character of social coordination in natural settings. Recent behavioral and neuroimaging studies suggest that dynamic FEEs (videos or morphs) enhance emotion perception. To identify mechanisms associated with the perception of FEEs with natural dynamics, the present electroencephalography (EEG) study compared (i) ecologically valid stimuli of angry and happy FEEs with natural dynamics to (ii) FEEs with unnatural dynamics and (iii) static FEEs. FEEs with unnatural dynamics showed faces moving in a biologically possible but unpredictable and atypical manner, generally resulting in ambivalent emotional content. Participants were asked to explicitly recognize the FEEs. Using whole power (WP) and phase synchrony (phase-locking index, PLI), we found that brain responses discriminated between natural and unnatural FEEs (both static and dynamic). Differences were primarily observed in the timing and brain topographies of delta and theta PLI and WP, and in alpha and beta WP. Our results support the view that biologically plausible, albeit atypical, FEEs are processed by the brain through different mechanisms than natural FEEs. We conclude that natural movement dynamics are essential for the perception of FEEs and the associated brain processes.
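For readers unfamiliar with the phase-synchrony measure mentioned above, here is a minimal sketch of one common definition of a phase-locking index: the consistency over time of the instantaneous phase difference between two band-limited signals, obtained via the Hilbert transform. Definitions in the literature vary (e.g., locking across trials rather than over time), so this illustrates the idea rather than the study's exact computation.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_index(x, y):
    """Phase synchrony between two band-passed EEG channels: the
    consistency of their instantaneous phase difference over time
    (one common definition among several used in the literature)."""
    phase_x = np.angle(hilbert(x))  # instantaneous phase via Hilbert transform
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Two synthetic 4-Hz (theta) signals with a fixed phase lag -> PLI near 1
t = np.arange(0, 2, 1 / 250.0)  # 2 s at 250 Hz
x = np.sin(2 * np.pi * 4 * t)
y = np.sin(2 * np.pi * 4 * t + 0.8) \
    + 0.1 * np.random.default_rng(1).standard_normal(t.size)
print(round(phase_locking_index(x, y), 3))
```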
Collapse
Affiliation(s)
- Dionysios Perdikis
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
| | - Jakob Volhard
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
| | - Viktor Müller
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
| | - Kathrin Kaulard
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| | - Timothy R. Brick
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
| | - Christian Wallraven
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
| | - Ulman Lindenberger
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
| |
Collapse
|
38
|
Krumhuber EG, Skora L, Küster D, Fou L. A Review of Dynamic Datasets for Facial Expression Research. EMOTION REVIEW 2016. [DOI: 10.1177/1754073916670022] [Citation(s) in RCA: 46] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Temporal dynamics have been increasingly recognized as an important component of facial expressions. With the need for appropriate stimuli in research and application, a range of databases of dynamic facial stimuli has been developed. The present article reviews the existing corpora and describes the key dimensions and properties of the available sets. This includes a discussion of conceptual features in terms of thematic issues in dataset construction as well as practical features which are of applied interest to stimulus usage. To identify the most influential sets, we further examine their citation rates and usage frequencies in existing studies. General limitations and implications for emotion research are noted and future directions for stimulus generation are outlined.
Collapse
Affiliation(s)
| | - Lina Skora
- Department of Experimental Psychology, University College London, UK
| | - Dennis Küster
- Department of Psychology and Methods, Jacobs University Bremen, Bremen, Germany
| | - Linyun Fou
- Department of Experimental Psychology, University College London, UK
| |
Collapse
|
39
|
Dobs K, Bülthoff I, Schultz J. Identity information content depends on the type of facial movement. Sci Rep 2016; 6:34301. [PMID: 27683087 PMCID: PMC5041143 DOI: 10.1038/srep34301] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2016] [Accepted: 09/09/2016] [Indexed: 11/09/2022] Open
Abstract
Facial movements convey information about many social cues, including identity. However, how much information about a person's identity is conveyed by different kinds of facial movements is unknown. We addressed this question using a recent motion capture and animation system, with which we animated one avatar head with facial movements of three types: (1) emotional, (2) emotional in social interaction and (3) conversational, all recorded from several actors. In a delayed match-to-sample task, observers were best at matching actor identity across conversational movements, worse with emotional movements in social interactions, and at chance level with emotional facial expressions. Model observers performing this task showed similar performance profiles, indicating that performance variation was due to differences in information content, rather than processing. Our results suggest that conversational facial movements transmit more dynamic identity information than emotional facial expressions, thus suggesting different functional roles and processing mechanisms for different types of facial motion.
Collapse
Affiliation(s)
- Katharina Dobs
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany.,Université de Toulouse, Centre de Recherche Cerveau et Cognition, Université Paul Sabatier, Toulouse, France.,CNRS, Faculté de Médecine de Purpan, UMR 5549, Toulouse, France
| | - Isabelle Bülthoff
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| | - Johannes Schultz
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany.,Division of Medical Psychology and Department of Psychiatry, University of Bonn, Bonn, Germany
| |
Collapse
|
40
|
Esins J, Schultz J, Stemper C, Kennerknecht I, Bülthoff I. Face Perception and Test Reliabilities in Congenital Prosopagnosia in Seven Tests. Iperception 2016; 7:2041669515625797. [PMID: 27482369 PMCID: PMC4954744 DOI: 10.1177/2041669515625797] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Congenital prosopagnosia, the innate impairment in recognizing faces, is a very heterogeneous disorder with different phenotypical manifestations. To investigate the nature of prosopagnosia in more detail, we tested 16 prosopagnosics and 21 controls with an extended test battery addressing various aspects of face recognition. Our results show that prosopagnosics exhibited significant impairments in several face recognition tasks: impaired holistic processing (assessed with, among other tests, the Cambridge Face Memory Test (CFMT)) as well as reduced processing of configural information in faces. The test battery also revealed some new findings. While controls recognized moving faces better than static faces, prosopagnosics did not exhibit this effect. Furthermore, prosopagnosics had significantly impaired gender recognition, which our study is the first to demonstrate at a group level. There was no difference between groups in the automatic extraction of face identity information or in object recognition as tested with the Cambridge Car Memory Test. In addition, a methodological analysis of the tests revealed reduced reliability of holistic face processing tests in prosopagnosics. To our knowledge, this is the first study to show that prosopagnosics have a significantly reduced reliability coefficient (Cronbach’s alpha) on the CFMT compared to controls. We suggest that compensatory strategies employed by the prosopagnosics might be the cause of the wide variety of response patterns revealed by the reduced test reliability. This finding raises the question of whether classical face tests measure the same perceptual processes in controls and prosopagnosics.
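As a reference for the reliability coefficient discussed above, Cronbach's alpha can be computed from a subjects-by-items score matrix as alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch with toy data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy data: 6 participants x 4 test items
scores = np.array([[3, 4, 3, 4],
                   [2, 2, 3, 2],
                   [4, 5, 5, 4],
                   [1, 2, 1, 2],
                   [3, 3, 4, 3],
                   [5, 4, 5, 5]])
print(round(cronbach_alpha(scores), 2))
```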
Collapse
Affiliation(s)
- Janina Esins
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| | | | - Claudia Stemper
- Institute of Human Genetics, Westfälische Wilhelms-Universität Münster, Münster, Germany
| | - Ingo Kennerknecht
- Institute of Human Genetics, Westfälische Wilhelms-Universität Münster, Münster, Germany
| | - Isabelle Bülthoff
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| |
Collapse
|
41
|
Wingenbach TSH, Ashwin C, Brosnan M. Validation of the Amsterdam Dynamic Facial Expression Set--Bath Intensity Variations (ADFES-BIV): A Set of Videos Expressing Low, Intermediate, and High Intensity Emotions. PLoS One 2016; 11:e0147112. [PMID: 26784347 PMCID: PMC4718603 DOI: 10.1371/journal.pone.0147112] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2015] [Accepted: 12/29/2015] [Indexed: 11/19/2022] Open
Abstract
Most existing sets of facial expressions of emotion contain static photographs. While increasing demand for stimuli with enhanced ecological validity in facial emotion recognition research has led to the development of video stimuli, these typically involve full-blown (apex) expressions. However, emotional facial expressions in real-life social interactions vary in intensity, with low-intensity expressions occurring frequently. The current study therefore developed and validated a set of video stimuli portraying three levels of intensity of emotional expressions, from low to high. The videos were adapted from the Amsterdam Dynamic Facial Expression Set (ADFES) and termed the Bath Intensity Variations (ADFES-BIV). A healthy sample of 92 people recruited from the University of Bath community (41 male, 51 female) completed a facial emotion recognition task including expressions of 6 basic emotions (anger, happiness, disgust, fear, surprise, sadness) and 3 complex emotions (contempt, embarrassment, pride), each expressed at the three intensities, plus neutral expressions. Accuracy scores (raw and unbiased (Hu) hit rates) were calculated, as well as response times. Accuracy rates above the chance level of responding were found for all emotion categories, producing an overall raw hit rate of 69% for the ADFES-BIV. The three intensity levels were validated as distinct categories, with higher accuracies and faster responses to high-intensity expressions than to intermediate-intensity expressions, which in turn had higher accuracies and faster responses than low-intensity expressions. To further validate the intensities, a second study with standardised display times was conducted, replicating this pattern. The ADFES-BIV has greater ecological validity than many other emotion stimulus sets and allows for versatile applications in emotion research. It can be retrieved free of charge for research purposes from the corresponding author.
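The unbiased (Hu) hit rate reported above is, in the standard formulation (Wagner, 1993), computed per category from the stimulus-by-response confusion matrix as the squared number of hits divided by the product of the row and column totals, which discounts accuracy inflated by a response bias toward that category. A minimal sketch with a toy confusion matrix:

```python
import numpy as np

def unbiased_hit_rates(confusion):
    """Wagner's (1993) unbiased hit rate Hu per emotion category:
    Hu_i = hits_i**2 / (row_total_i * column_total_i)."""
    confusion = np.asarray(confusion, dtype=float)
    hits = np.diag(confusion)
    return hits ** 2 / (confusion.sum(axis=1) * confusion.sum(axis=0))

# Toy stimulus-by-response confusion matrix for three emotions
cm = np.array([[18, 1, 1],   # anger trials
               [4, 14, 2],   # happiness trials
               [6, 2, 12]])  # fear trials
print(np.round(unbiased_hit_rates(cm), 2))
```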
Collapse
Affiliation(s)
| | - Chris Ashwin
- Department of Psychology, University of Bath, Bath, United Kingdom
| | - Mark Brosnan
- Department of Psychology, University of Bath, Bath, United Kingdom
| |
Collapse
|
42
|
Kim YB, Kang SJ, Lee SH, Jung JY, Kam HR, Lee J, Kim YS, Lee J, Kim CH. Efficiently detecting outlying behavior in video-game players. PeerJ 2015; 3:e1502. [PMID: 26713250 PMCID: PMC4690374 DOI: 10.7717/peerj.1502] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2015] [Accepted: 11/24/2015] [Indexed: 11/30/2022] Open
Abstract
In this paper, we propose a method for automatically detecting the times during which game players exhibit specific behavior, such as when players commonly show excitement, concentration, immersion, and surprise. The proposed method detects such outlying behavior based on the game players’ characteristics. These characteristics are captured non-invasively in a general game environment. In this paper, cameras were used to analyze observed data such as facial expressions and player movements. Moreover, multimodal data from the game players (i.e., data regarding adjustments to the volume and the use of the keyboard and mouse) was used to analyze high-dimensional game-player data. A support vector machine was used to efficiently detect outlying behaviors. We verified the effectiveness of the proposed method using games from several genres. The recall rate of the outlying behavior pre-identified by industry experts was approximately 70%. The proposed method can also be used for feedback analysis of various interactive content provided in PC environments.
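The pipeline described above reduces to a supervised classifier over per-time-window features. Below is a generic scikit-learn sketch in that spirit, using synthetic data and hypothetical feature semantics; it is not the authors' actual feature set or model configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical per-time-window features of a game player (e.g., facial
# movement magnitude, body movement, keyboard/mouse rate, volume changes);
# labels mark windows that experts tagged as outlying behavior.
rng = np.random.default_rng(42)
X = rng.normal(size=(400, 4))
y = (X[:, 0] + 0.8 * X[:, 2] > 1.5).astype(int)  # synthetic "outlying" label

# Scale features, then fit an RBF-kernel SVM, as in the paper's general setup
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X[:300], y[:300])
print("held-out accuracy:", clf.score(X[300:], y[300:]))
```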
Collapse
Affiliation(s)
- Young Bin Kim
- Interdisciplinary Program in Visual Information Processing, Korea University , Seoul , Korea
| | | | - Sang Hyeok Lee
- Department of Computer and Radio Communications Engineering, Korea University , Seoul , Korea
| | | | - Hyeong Ryeol Kam
- Interdisciplinary Program in Visual Information Processing, Korea University , Seoul , Korea
| | - Jung Lee
- Department of Computer and Radio Communications Engineering, Korea University , Seoul , Korea
| | - Young Sun Kim
- Department of Computer and Radio Communications Engineering, Korea University , Seoul , Korea
| | | | - Chang Hun Kim
- Department of Computer and Radio Communications Engineering, Korea University , Seoul , Korea
| |
Collapse
|
43
|
Reinl M, Bartels A. Perception of temporal asymmetries in dynamic facial expressions. Front Psychol 2015; 6:1107. [PMID: 26300807 PMCID: PMC4523710 DOI: 10.3389/fpsyg.2015.01107] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2015] [Accepted: 07/20/2015] [Indexed: 11/13/2022] Open
Abstract
In the current study we examined whether timeline reversals and the emotional direction of dynamic facial expressions affect the subjective experience of human observers. We recorded natural movies of faces that increased or decreased their expressions of fear, and played them either in the natural frame order or reversed from last to first frame (reversed timeline). This led to four conditions of increasing or decreasing fear, following either the natural or the reversed temporal trajectory of facial dynamics. This 2-by-2 factorial design controlled for visual low-level properties, static visual content, and motion energy across the different factors. It allowed us to examine the perceptual consequences that would occur if the timeline trajectory of facial muscle movements during the increase of an emotion is not the exact mirror of the timeline during its decrease. It additionally allowed us to study perceptual differences between increasing and decreasing emotional expressions. Perception of these time-dependent asymmetries has not yet been quantified. We found that three emotional measures, emotional intensity, artificialness of facial movement, and convincingness or plausibility of the emotion portrayal, were affected by timeline reversals as well as by the emotional direction of the facial expressions. Our results imply that natural dynamic facial expressions contain temporal asymmetries, and show that deviations from the natural timeline lead to a reduction of perceived emotional intensity and convincingness, and to an increase of perceived artificialness of the dynamic facial expression. In addition, they show that decreasing facial expressions are judged as less plausible than increasing facial expressions. Our findings are relevant for both behavioral and neuroimaging studies, as processing and perception are influenced by temporal asymmetries.
Collapse
Affiliation(s)
| | - Andreas Bartels
- Vision and Cognition Lab, Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
| |
Collapse
|
44
|
Generating an item pool for translational social cognition research: methodology and initial validation. Behav Res Methods 2015; 47:228-34. [PMID: 24719265 DOI: 10.3758/s13428-014-0464-0] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Existing sets of social and emotional stimuli suitable for social cognition research are limited in many ways, including size, unimodal stimulus delivery, and restriction to major universal emotions. Existing measures of social cognition could be improved by taking advantage of item response theory and adaptive testing technology to develop instruments that obtain more efficient measures of multimodal social cognition. However, for this to be possible, large pools of emotional stimuli must be obtained and validated. We present the development of a large, high-quality multimedia stimulus set produced by professional adult and child actors (ages 5 to 74) containing both visual and vocal emotional expressions. We obtained over 74,000 audiovisual recordings of a wide array of emotional and social behaviors, including the main universal emotions (happiness, sadness, anger, fear, disgust, and surprise), as well as more complex social expressions (pride, affection, sarcasm, jealousy, and shame). The actors generated a high quantity of technically superior, ecologically valid stimuli that were digitized, archived, and rated for accuracy and intensity of expressions. A subset of these facial and vocal expressions of emotion and social behavior were submitted for quantitative ratings to generate parameters for validity and discriminability. These stimuli are suitable for affective neuroscience-based psychometric tests, functional neuroimaging, and social cognitive rehabilitation programs. The purposes of this report are to describe the method of obtaining and validating this database and to make it accessible to the scientific community. We invite all those interested in participating in the use and validation of these stimuli to access them at www.med.upenn.edu/bbl/actors/index.shtml .
Collapse
|
45
|
Siddiqi MH, Ali R, Khan AM. Human facial expression recognition using stepwise linear discriminant analysis and hidden conditional random fields. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2015; 24:1386-1398. [PMID: 25856814 DOI: 10.1109/tip.2015.2405346] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
This paper introduces an accurate and robust facial expression recognition (FER) system. For feature extraction, the proposed FER system employs stepwise linear discriminant analysis (SWLDA). SWLDA selects localized features from the expression frames using partial F-test values, thereby reducing the within-class variance and increasing the between-class variance among different expression classes. For recognition, the hidden conditional random fields (HCRF) model is utilized. HCRF is capable of approximating a complex distribution using a mixture of Gaussian density functions. To achieve optimum results, the system employs a hierarchical recognition strategy. Under these settings, expressions are divided into three categories based on the parts of the face that contribute most toward an expression. During recognition, at the first level, SWLDA and HCRF are employed to recognize the expression category; at the second level, the label for the expression within the recognized category is determined using a separate set of SWLDA and HCRF models trained just for that category. To validate the system, four publicly available data sets were used, and a total of four experiments were performed. The weighted average recognition rate for the proposed FER approach was 96.37% across the four data sets, a significant improvement over existing FER methods.
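The core idea of SWLDA, F-test-driven feature selection followed by discriminant classification, can be illustrated with a simplified scikit-learn stand-in. True stepwise LDA adds and removes features one at a time based on partial F-tests; the one-shot F-score filter below only sketches the principle on synthetic data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# A simplified stand-in for SWLDA: rank features by class-separation
# F-scores and keep the strongest ones before LDA. This is a one-shot
# filter, not the iterative add/remove procedure of true stepwise LDA.
X, y = make_classification(n_samples=300, n_features=60, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
model = make_pipeline(SelectKBest(f_classif, k=10),
                      LinearDiscriminantAnalysis())
print(cross_val_score(model, X, y, cv=5).mean())
```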
Collapse
|
46
|
Olszanowski M, Pochwatko G, Kuklinski K, Scibor-Rylski M, Lewinski P, Ohme RK. Warsaw set of emotional facial expression pictures: a validation study of facial display photographs. Front Psychol 2015; 5:1516. [PMID: 25601846 PMCID: PMC4283518 DOI: 10.3389/fpsyg.2014.01516] [Citation(s) in RCA: 92] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2014] [Accepted: 12/09/2014] [Indexed: 11/24/2022] Open
Abstract
Emotional facial expressions play a critical role in theories of emotion and figure prominently in research on almost every aspect of emotion. This article provides the background for a new database of basic emotional expressions. The goal in creating this set was to provide high-quality photographs of genuine facial expressions. Thus, after proper training, participants were inclined to express "felt" emotions. The novel approach taken in this study was also used to establish whether a given expression was perceived as intended by untrained judges. The judgment task for perceivers was designed to be sensitive to subtle changes in meaning caused by the way an emotional display was evoked and expressed. Consequently, this allowed us to measure the purity and intensity of emotional displays, parameters that validation methods used by other researchers do not capture. The final set comprises the pictures that received the highest recognition marks (e.g., accuracy with respect to the intended display) from independent judges, totaling 210 high-quality photographs of 30 individuals. Descriptions of the accuracy, intensity, and purity of the displayed emotion, as well as FACS AU codes, are provided for each picture. Given the unique methodology applied to gathering and validating this set of pictures, it may be a useful tool for research using face stimuli. The Warsaw Set of Emotional Facial Expression Pictures (WSEFEP) is freely accessible to the scientific community for non-commercial use by request at http://www.emotional-face.org.
Collapse
Affiliation(s)
- Michal Olszanowski
- Department of Psychology, University of Social Sciences and Humanities, Warsaw, Poland
| | | | - Krzysztof Kuklinski
- Department of Psychology, University of Social Sciences and Humanities, Warsaw, Poland
| | - Michal Scibor-Rylski
- Department of Psychology, University of Social Sciences and Humanities, Warsaw, Poland
| | - Peter Lewinski
- Department of Communication, University of Amsterdam, Amsterdam, Netherlands
| | - Rafal K. Ohme
- Faculty in Wroclaw, University of Social Sciences and Humanities, Wroclaw, Poland
| |
Collapse
|
47
|
Volkova E, de la Rosa S, Bülthoff HH, Mohler B. The MPI emotional body expressions database for narrative scenarios. PLoS One 2014; 9:e113647. [PMID: 25461382 PMCID: PMC4252031 DOI: 10.1371/journal.pone.0113647] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2014] [Accepted: 10/22/2014] [Indexed: 12/03/2022] Open
Abstract
Emotion expression in human-human interaction takes place via various types of information, including body motion. Research on the perceptual-cognitive mechanisms underlying the processing of natural emotional body language can benefit greatly from datasets of natural emotional body expressions that facilitate stimulus manipulation and analysis. Existing databases have so far focused on a few emotion categories that display predominantly prototypical, exaggerated emotion expressions. Moreover, many of these databases consist of video recordings, which limit the ability to manipulate and analyse the physical properties of the stimuli. We present a new database consisting of a large set (over 1400) of natural emotional body expressions typical of monologues. To achieve close-to-natural emotional body expressions, amateur actors narrated coherent stories while their body movements were recorded with motion capture technology. The resulting three-dimensional motion data, recorded at a high frame rate (120 frames per second), provide fine-grained information about body movements and allow the manipulation of movement on a per-joint basis. For each expression, the database gives the positions and orientations in space of 23 body joints for every frame. We report the results of an analysis of physical motion properties and of an emotion categorisation study. The reactions of observers from the emotion categorisation study are included in the database. Moreover, we recorded the intended emotion expression for each motion sequence from the actor, to allow investigations of the link between intended and perceived emotions. The motion sequences, along with the accompanying information, are made available in a searchable MPI Emotional Body Expression Database. We hope that this database will enable researchers to study the expression and perception of naturally occurring emotional body expressions in greater depth.
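To illustrate the data layout described above (23 joints per frame at 120 fps) and one simple physical motion property, here is a minimal NumPy sketch with synthetic data; the array layout and the mean-joint-speed measure are illustrative assumptions, not the database's actual file format or the authors' analysis.

```python
import numpy as np

# Hypothetical layout of one motion sequence from such a database:
# positions of 23 body joints per frame, recorded at 120 frames per second.
fps = 120
n_frames, n_joints = 600, 23  # a 5-second expression
positions = np.random.default_rng(3).normal(size=(n_frames, n_joints, 3))

# One simple physical motion property: mean joint speed.
velocities = np.diff(positions, axis=0)     # frame-to-frame displacement
speed = np.linalg.norm(velocities, axis=2)  # per joint, per frame
print("mean joint speed (units/s):", speed.mean() * fps)
```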
Collapse
Affiliation(s)
- Ekaterina Volkova
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Graduate School of Neural & Behavioural Sciences, Tübingen, Germany
| | - Stephan de la Rosa
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| | - Heinrich H. Bülthoff
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
| | - Betty Mohler
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| |
Collapse
|
48
|
Reinl M, Bartels A. Face processing regions are sensitive to distinct aspects of temporal sequence in facial dynamics. Neuroimage 2014; 102 Pt 2:407-15. [PMID: 25132020 DOI: 10.1016/j.neuroimage.2014.08.011] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2014] [Revised: 07/25/2014] [Accepted: 08/04/2014] [Indexed: 12/16/2022] Open
Abstract
Facial movement conveys important information for social interactions, yet its neural processing is poorly understood. Computational models propose that shape-sensitive and temporal-sequence-sensitive mechanisms interact in the processing of dynamic faces. While face processing regions are known to respond to facial movement, their sensitivity to particular temporal sequences has barely been studied. Here we used fMRI to examine the sensitivity of human face-processing regions to two aspects of directionality in facial movement trajectories. We presented genuine movie recordings of increasing and decreasing fear expressions, each of which was played in natural or reversed frame order. This two-by-two factorial design matched low-level visual properties, static content, and motion energy within each factor: emotion direction (increasing or decreasing emotion) and timeline (natural versus artificial). The results showed sensitivity to emotion direction in the FFA, which was timeline-dependent as it only occurred within the natural frame order, and sensitivity to timeline in the STS, which was emotion-direction-dependent as it only occurred for decreasing fear. The occipital face area (OFA) was sensitive to the factor timeline. These findings reveal interacting temporal-sequence-sensitive mechanisms that are responsive both to ecological meaning and to the prototypical unfolding of facial dynamics. These mechanisms are temporally directional, provide socially relevant information regarding emotional state or naturalness of behavior, and agree with predictions from modeling and predictive coding theory.
Collapse
Affiliation(s)
- Maren Reinl
- Vision and Cognition Lab, Centre for Integrative Neuroscience, University of Tübingen, and Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany
| | - Andreas Bartels
- Vision and Cognition Lab, Centre for Integrative Neuroscience, University of Tübingen, and Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany.
| |
Collapse
|
49
|
Quantifying human sensitivity to spatio-temporal information in dynamic faces. Vision Res 2014; 100:78-87. [DOI: 10.1016/j.visres.2014.04.009] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2013] [Revised: 03/04/2014] [Accepted: 04/19/2014] [Indexed: 11/21/2022]
|
50
|
Revealing variations in perception of mental states from dynamic facial expressions: a cautionary note. PLoS One 2014; 9:e84395. [PMID: 24416226 PMCID: PMC3885558 DOI: 10.1371/journal.pone.0084395] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2013] [Accepted: 11/22/2013] [Indexed: 11/19/2022] Open
Abstract
Although a great deal of research has been conducted on the recognition of basic facial emotions (e.g., anger, happiness, sadness), much less research has been carried out on the more subtle facial expressions of an individual's mental state (e.g., anxiety, disinterest, relief). Of particular concern is that these mental state expressions provide a crucial source of communication in everyday life but little is known about the accuracy with which natural dynamic facial expressions of mental states are identified and, in particular, the variability in mental state perception that is produced. Here we report the findings of two studies that investigated the accuracy and variability with which dynamic facial expressions of mental states were identified by participants. Both studies used stimuli carefully constructed using procedures adopted in previous research, and free-report (Study 1) and forced-choice (Study 2) measures of response accuracy and variability. The findings of both studies showed levels of response accuracy that were accompanied by substantial variation in the labels assigned by observers to each mental state. Thus, when mental states are identified from facial expressions in experiments, the identities attached to these expressions appear to vary considerably across individuals. This variability raises important issues for understanding the identification of mental states in everyday situations and for the use of responses in facial expression research.
Collapse
|