1. Zaharieva MS, Salvadori EA, Messinger DS, Visser I, Colonnesi C. Automated facial expression measurement in a longitudinal sample of 4- and 8-month-olds: Baby FaceReader 9 and manual coding of affective expressions. Behav Res Methods 2024; 56:5709-5731. PMID: 38273072; PMCID: PMC11335827; DOI: 10.3758/s13428-023-02301-3.
Abstract
Facial expressions are among the earliest behaviors infants use to express emotional states, and are crucial to preverbal social interaction. Manual coding of infant facial expressions, however, is laborious and poses limitations to replicability. Recent developments in computer vision have advanced automated facial expression analyses in adults, providing reproducible results at lower time investment. Baby FaceReader 9 is commercially available software for automated measurement of infant facial expressions, but has received little validation. We compared Baby FaceReader 9 output to manual micro-coding of positive, negative, or neutral facial expressions in a longitudinal dataset of 58 infants at 4 and 8 months of age during naturalistic face-to-face interactions with the mother, father, and an unfamiliar adult. Baby FaceReader 9's global emotional valence formula yielded reasonable classification accuracy (AUC = .81) for discriminating manually coded positive from negative/neutral facial expressions; however, the discrimination of negative from neutral facial expressions was not reliable (AUC = .58). Automatically detected a priori action unit (AU) configurations for distinguishing positive from negative facial expressions based on existing literature were also not reliable. A parsimonious approach using only automatically detected smiling (AU12) yielded good performance for discriminating positive from negative/neutral facial expressions (AUC = .86). Likewise, automatically detected brow lowering (AU3+AU4) reliably distinguished neutral from negative facial expressions (AUC = .79). These results provide initial support for the use of selected automatically detected individual facial actions to index positive and negative affect in young infants, but shed doubt on the accuracy of complex a priori formulas.
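The AUC-based screening reported in this abstract can be illustrated with a minimal sketch: treat an automatically detected AU intensity (e.g., AU12, smiling) as a score for discriminating manually coded positive frames from negative/neutral frames, and compute the ROC-AUC via its Mann-Whitney formulation. The per-frame values below are invented for illustration, not the study's data.

```python
# ROC-AUC as the probability that a randomly chosen positive frame
# receives a higher AU12 intensity than a randomly chosen negative one
# (Mann-Whitney formulation; ties count as half a win).

def auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-frame AU12 intensities and manual codes (1 = positive).
au12 = [0.9, 0.7, 0.8, 0.2, 0.1, 0.4, 0.05, 0.3]
manual = [1, 1, 1, 0, 0, 0, 0, 0]
print(auc(au12, manual))  # 1.0 on these toy values; the paper reports .86
```

An AUC of .5 means chance-level discrimination and 1.0 perfect separation, which is why the paper can compare formulas (AUC = .81), single AUs (AUC = .86), and unreliable contrasts (AUC = .58) on a common scale.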
Affiliation(s)
- Martina S Zaharieva
- Department of Developmental Psychology, Faculty of Social and Behavioural Sciences, University of Amsterdam, Nieuwe Achtergracht 129b, 1001 NK, Amsterdam, The Netherlands
- Developmental Psychopathology Unit, Research Institute of Child Development and Education, Faculty of Social and Behavioural Sciences, University of Amsterdam, Nieuwe Achtergracht 129b, 1001 NK, Amsterdam, The Netherlands
- Research Priority Area Yield, University of Amsterdam, Amsterdam, The Netherlands
- Eliala A Salvadori
- Developmental Psychopathology Unit, Research Institute of Child Development and Education, Faculty of Social and Behavioural Sciences, University of Amsterdam, Nieuwe Achtergracht 129b, 1001 NK, Amsterdam, The Netherlands
- Research Priority Area Yield, University of Amsterdam, Amsterdam, The Netherlands
- Daniel S Messinger
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Department of Pediatrics, University of Miami, Coral Gables, FL, USA
- Department of Music Engineering, University of Miami, Coral Gables, FL, USA
- Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
- Ingmar Visser
- Department of Developmental Psychology, Faculty of Social and Behavioural Sciences, University of Amsterdam, Nieuwe Achtergracht 129b, 1001 NK, Amsterdam, The Netherlands
- Research Priority Area Yield, University of Amsterdam, Amsterdam, The Netherlands
- Cristina Colonnesi
- Developmental Psychopathology Unit, Research Institute of Child Development and Education, Faculty of Social and Behavioural Sciences, University of Amsterdam, Nieuwe Achtergracht 129b, 1001 NK, Amsterdam, The Netherlands
- Research Priority Area Yield, University of Amsterdam, Amsterdam, The Netherlands
2. Hsu CT, Sato W. Electromyographic Validation of Spontaneous Facial Mimicry Detection Using Automated Facial Action Coding. Sensors (Basel) 2023; 23:9076. PMID: 38005462; PMCID: PMC10675524; DOI: 10.3390/s23229076.
Abstract
Although electromyography (EMG) remains the standard, researchers have begun using automated facial action coding system (FACS) software to evaluate spontaneous facial mimicry despite the lack of evidence of its validity. Using facial EMG of the zygomaticus major (ZM) as the standard, we confirmed the detection of spontaneous facial mimicry in action unit 12 (AU12, lip corner puller) via automated FACS. Participants were alternately presented with real-time model performances and prerecorded videos of dynamic facial expressions, while the ZM signal and frontal facial videos were acquired simultaneously. AU12 activity was estimated from the facial videos using FaceReader, Py-Feat, and OpenFace. Automated FACS was less sensitive and less accurate than facial EMG, but AU12 mimicking responses were significantly correlated with ZM responses. All three software programs detected enhanced facial mimicry during live performances. The AU12 time series showed a latency of roughly 100 to 300 ms relative to the ZM. Our results suggest that while automated FACS cannot replace facial EMG in mimicry detection, it may be adequate when effect sizes are large. Researchers should be cautious with automated FACS outputs, especially when studying clinical populations, and developers should consider EMG validation of AU estimation as a benchmark.
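The 100-300 ms latency figure above comes from aligning the AU12 time series with the ZM EMG. A minimal sketch of such a lag estimate: shift one signal against the other and keep the lag with the highest correlation. The signals and sampling rate below are invented toy values, not the study's recordings.

```python
# Find the lag (in samples) at which `delayed` best aligns with
# `reference`, by maximizing a dot-product correlation over candidate lags.

def best_lag(reference, delayed, max_lag):
    def corr_at(lag):
        # Shift `delayed` back by `lag` samples and correlate.
        return sum(r * d for r, d in zip(reference, delayed[lag:]))
    return max(range(max_lag + 1), key=corr_at)

fs = 10  # assumed sampling rate in Hz, so one sample = 100 ms
emg = [0, 0, 1, 3, 5, 3, 1, 0, 0, 0, 0, 0]
au12 = [0, 0, 0, 0, 1, 3, 5, 3, 1, 0, 0, 0]  # same burst, 2 samples later
lag = best_lag(emg, au12, max_lag=4)
print(lag * 1000 // fs, "ms")  # 200 ms, within the 100-300 ms range reported
```

Real analyses would use normalized cross-correlation on much longer, denoised signals, but the alignment idea is the same.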
Affiliation(s)
- Chun-Ting Hsu
- Psychological Process Research Team, Guardian Robot Project, RIKEN, Soraku-gun, Kyoto 619-0288, Japan
- Wataru Sato
- Psychological Process Research Team, Guardian Robot Project, RIKEN, Soraku-gun, Kyoto 619-0288, Japan
3. Höfling TTA, Alpers GW. Automatic facial coding predicts self-report of emotion, advertisement and brand effects elicited by video commercials. Front Neurosci 2023; 17:1125983. PMID: 37205049; PMCID: PMC10185761; DOI: 10.3389/fnins.2023.1125983.
Abstract
Introduction: Consumers' emotional responses are the prime target of marketing commercials. Facial expressions provide information about a person's emotional state, and technological advances have enabled machines to decode them automatically. Method: Using automatic facial coding, we investigated the relationships between facial movements (i.e., action unit activity) and self-reported emotion, advertisement effects, and brand effects. We recorded and analyzed the facial responses of 219 participants while they watched a broad array of video commercials. Results: Facial expressions significantly predicted self-reported emotion as well as advertisement and brand effects. Interestingly, facial expressions had incremental value beyond self-reported emotion in the prediction of advertisement and brand effects. Hence, automatic facial coding appears to be useful as a non-verbal quantification of advertisement effects beyond self-report. Discussion: This is the first study to measure a broad spectrum of automatically scored facial responses to video commercials. Automatic facial coding is a promising non-invasive and non-verbal method to measure emotional responses in marketing.
4. Straulino E, Scarpazza C, Sartori L. What is missing in the study of emotion expression? Front Psychol 2023; 14:1158136. PMID: 37179857; PMCID: PMC10173880; DOI: 10.3389/fpsyg.2023.1158136.
Abstract
As celebrations approach for the 150th anniversary of "The Expression of the Emotions in Man and Animals", scientists' conclusions on emotion expression are still debated. Emotion expression has traditionally been anchored to prototypical and mutually exclusive facial expressions (e.g., anger, disgust, fear, happiness, sadness, and surprise). However, people express emotions in nuanced patterns and, crucially, not everything is in the face. In recent decades, considerable work has critiqued this classical view, calling for a more fluid and flexible approach that considers how humans dynamically perform genuine expressions with their bodies in context. A growing body of evidence suggests that each emotional display is a complex, multi-component, motoric event. The human face is never static but continuously acts and reacts to internal and environmental stimuli, through the coordinated action of muscles throughout the body. Moreover, two anatomically and functionally different neural pathways subserve voluntary and involuntary expressions. An interesting implication is that we have distinct and independent pathways for genuine and posed facial expressions, and different combinations may occur across the vertical facial axis. Investigating the time course of these facial blends, which can be controlled consciously only in part, has recently provided a useful operational test for comparing the predictions of various models on the lateralization of emotions. This concise review identifies shortcomings and new challenges regarding the study of emotion expression at the face, body, and contextual levels, eventually calling for a theoretical and methodological shift in the study of emotions. We contend that the most feasible solution to addressing the complex world of emotion expression is to define a new and more complete approach to emotional investigation. This approach can potentially lead us to the roots of emotional display and to the individual mechanisms underlying their expression (i.e., individual emotional signatures).
Affiliation(s)
- Elisa Straulino
- Department of General Psychology, University of Padova, Padova, Italy
- Correspondence: Elisa Straulino
- Cristina Scarpazza
- Department of General Psychology, University of Padova, Padova, Italy
- IRCCS San Camillo Hospital, Venice, Italy
- Luisa Sartori
- Department of General Psychology, University of Padova, Padova, Italy
- Padova Neuroscience Center, University of Padova, Padova, Italy
- Correspondence: Luisa Sartori
5. Deliberate control of facial expressions in a go/no-go task: An ERP study. Acta Psychol (Amst) 2022; 230:103773. DOI: 10.1016/j.actpsy.2022.103773.
6. Franěk M, Petružálek J, Šefara D. Facial Expressions and Self-Reported Emotions When Viewing Nature Images. Int J Environ Res Public Health 2022; 19:10588. PMID: 36078304; PMCID: PMC9518385; DOI: 10.3390/ijerph191710588.
Abstract
Many studies have demonstrated that exposure to simulated natural scenes has positive effects on emotions and reduces stress. In the present study, we investigated emotional facial expressions while viewing images of various types of natural environments. Both automated facial expression analysis by iMotions' AFFDEX 8.1 software (iMotions, Copenhagen, Denmark) and self-reported emotions were analyzed. Attractive and unattractive natural images were used, representing either open or closed natural environments. The goal was to further understand the actual features and characteristics of natural scenes that could positively affect emotional states and to evaluate face reading technology to measure such effects. It was predicted that attractive natural scenes would evoke significantly higher levels of positive emotions than unattractive scenes. The results showed generally small values of emotional facial expressions while observing the images. The facial expression of joy was significantly higher than that of other registered emotions. Contrary to predictions, there was no difference between facial emotions while viewing attractive and unattractive scenes. However, the self-reported emotions evoked by the images showed significantly larger differences between specific categories of images in accordance with the predictions. The differences between the registered emotional facial expressions and self-reported emotions suggested that the participants more likely described images in terms of common stereotypes linked with the beauty of natural environments. This result might be an important finding for further methodological considerations.
7. Test–Retest Reliability in Automated Emotional Facial Expression Analysis: Exploring FaceReader 8.0 on Data from Typically Developing Children and Children with Autism. Appl Sci (Basel) 2022; 12:7759. DOI: 10.3390/app12157759.
Abstract
Automated emotional facial expression analysis (AEFEA) is used widely in applied research, including the development of screening/diagnostic systems for atypical human neurodevelopmental conditions. The validity of AEFEA systems has been systematically studied, but their test–retest reliability has not been researched thus far. We explored the test–retest reliability of a specific AEFEA software, Noldus FaceReader 8.0 (FR8; by Noldus Information Technology). We collected intensity estimates for 8 repeated emotions through FR8 from facial video recordings of 60 children: 31 typically developing children and 29 children with autism spectrum disorder. Test–retest reliability was imperfect in 20% of cases, affecting a substantial proportion of data points; however, the test–retest differences were small. This shows that the test–retest reliability of FR8 is high but not perfect. A proportion of cases which initially failed to show perfect test–retest reliability reached it in a subsequent analysis by FR8. This suggests that repeated analyses by FR8 can, in some cases, lead to the “stabilization” of emotion intensity datasets. Under ANOVA, the test–retest differences did not influence the pattern of cross-emotion and cross-group effects and interactions. Our study does not question the validity of previous results gained by AEFEA technology, but it shows that further exploration of the test–retest reliability of AEFEA systems is desirable.
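A simple way to quantify the test-retest behavior described above is to run the same recordings through the software twice and summarize (a) what fraction of intensity estimates change at all and (b) how large those changes are. This is a minimal sketch with invented values, not actual FaceReader output.

```python
# Summarize agreement between two analysis runs over the same videos:
# the fraction of data points that differ, and the largest difference.

def retest_summary(run1, run2):
    diffs = [abs(a - b) for a, b in zip(run1, run2)]
    changed = sum(d > 0 for d in diffs)
    return changed / len(diffs), max(diffs)

# Hypothetical emotion-intensity estimates from two runs of the same clip.
run1 = [0.81, 0.10, 0.05, 0.40, 0.33]
run2 = [0.81, 0.10, 0.06, 0.40, 0.31]  # two estimates shift slightly
frac_changed, worst = retest_summary(run1, run2)
print(frac_changed, worst)  # a substantial share of points differ, but only minutely
```

This mirrors the paper's pattern: imperfect reliability affecting a noticeable proportion of data points, but with differences small enough not to change downstream ANOVA effects.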
8. Recio G, Surdzhiyska Y, Bagherzadeh-Azbari S, Hilpert P, Rostami HN, Xu Q, Sommer W. Deliberate control over facial expressions in motherhood: Evidence from a Stroop-like task. Acta Psychol (Amst) 2022; 228:103652. PMID: 35753142; DOI: 10.1016/j.actpsy.2022.103652.
Abstract
The deliberate control of facial expressions is an important ability in human interactions, in particular for mothers of prelinguistic infants. Because research on this topic is still scarce, we investigated control over facial expressions in a Stroop-like paradigm. Mothers of 2- to 6-month-old infants and nulliparous women produced smiles and frowns in response to verbal commands written on distractor faces of adults or infants showing expressions of happiness or anger/distress. Analyses of video recordings with a machine classifier for facial expressions revealed pronounced effects of congruency between the expressions required of the participants and those displayed by the face stimuli on the onset latencies of the deliberate facial expressions. With adult distractor faces, this Stroop effect was similar whether participants smiled or frowned. With infant distractor faces, mothers and non-mothers showed indistinguishable Stroop effects on smile responses; for frown responses, however, the Stroop effect in mothers was smaller than in non-mothers. We suggest that for frown responses in mothers facing infants, the effect of mimicry or stimulus-response compatibility, which produces the Stroop effect, is offset by a caregiving or empathic response.
Affiliation(s)
- Qiang Xu
- Humboldt-Universität zu Berlin, Germany; Ningbo University, China

9. A Deep Learning Approach for Predicting Subject-Specific Human Skull Shape from Head Toward a Decision Support System for Home-Based Facial Rehabilitation. Ing Rech Biomed 2022. DOI: 10.1016/j.irbm.2022.05.005.
10. Real-Time Classification of Pain Level Using Zygomaticus and Corrugator EMG Features. Electronics 2022; 11:1671. DOI: 10.3390/electronics11111671.
Abstract
Real-time recognition of pain level is required for accurate pain assessment of patients in the intensive care unit, infants, and other subjects who may not be able to communicate verbally or express the sensation of pain. Facial expression is a key pain-related behavior that may unlock the answer to an objective pain measurement tool. In this work, a machine learning-based pain level classification system using data collected from facial electromyograms (EMG) is presented. The dataset was acquired from part of the BioVid Heat Pain database to evaluate facial expression via corrugator and zygomaticus EMG, and an EMG signal processing and data analysis flow was adapted for continuous pain estimation. The extracted pain-associated facial electromyography (fEMG) features are classified with a k-nearest neighbor (KNN) classifier, with the value of k chosen to suit the nonlinearity of the data. Classification accuracy increased considerably when subject bias was omitted from the analysis. Such a classification algorithm could deliver valuable evidence for health care providers and aid treatment assessment. The proposed algorithm achieved 99.4% accuracy in classifying the pain tolerance level against baseline (P0 versus P4) without the influence of subject bias, which clearly shows the relevance of the approach.
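The KNN step described above can be sketched in a few lines: classify a pain level from an fEMG feature vector by majority vote among the k nearest training samples. The feature vectors and labels below are invented toy values (e.g., [zygomaticus RMS, corrugator RMS]), not BioVid data.

```python
from collections import Counter

# Classify `x` by majority vote among its k nearest neighbors
# (squared Euclidean distance in feature space).
def knn_predict(train_X, train_y, x, k=3):
    order = sorted(range(len(train_X)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    votes = Counter(train_y[i] for i in order[:k])
    return votes.most_common(1)[0][0]

train_X = [[0.1, 0.1], [0.2, 0.1], [0.1, 0.2],   # baseline (P0)
           [0.8, 0.9], [0.9, 0.8], [0.7, 0.9]]   # pain tolerance level (P4)
train_y = ["P0", "P0", "P0", "P4", "P4", "P4"]
print(knn_predict(train_X, train_y, [0.85, 0.85]))  # P4
```

A production pipeline would normalize features and choose k by cross-validation; removing subject identity from the features, as the paper does, prevents the classifier from shortcutting via subject bias.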
11. Katembu S, Xu Q, Rostami HN, Recio G, Sommer W. Effects of Social Context on Deliberate Facial Expressions: Evidence from a Stroop-like Task. J Nonverbal Behav 2022. DOI: 10.1007/s10919-022-00400-x.
Abstract
Facial expressions contribute to nonverbal communication, social coordination, and interaction. Facial expressions may reflect the emotional state of the expressor, but they may be modulated by the presence of others, for example, by facial mimicry or through social display rules. We examined how deliberate facial expressions of happiness and anger (smiles and frowns), prompted by written commands, are modulated by the congruency with the facial expression of background faces and how this effect depends on the age of the background face (infants vs. adults). Our main interest was whether the quality of the required expression could be influenced by a task-irrelevant background face and its emotional display. Background faces from adults and infants displayed happy, angry, or neutral expressions. To assess the activation pattern of different action units, we used a machine classifier software; the same classifier was used to assess the chronometry of the expression responses. Results indicated slower and less correct performance when an incongruent facial expression was in the background, especially when distractor stimuli showed adult faces. Interestingly, smile responses were more intense in congruent than incongruent conditions. Depending on stimulus age, frown responses were affected in their quality by incongruent (smile) expressions in terms of the additional activation or deactivation of the outer brow raiser (AU2), resulting in a blended expression, somewhat different from the prototypical expression for anger. Together, the present results show qualitative effects on deliberate facial expressions, beyond typical chronometric effects, confirming machine classification of facial expressions as a promising tool for emotion research.
12. Höfling TTA, Alpers GW, Büdenbender B, Föhl U, Gerdes ABM. What's in a face: Automatic facial coding of untrained study participants compared to standardized inventories. PLoS One 2022; 17:e0263863. PMID: 35239654; PMCID: PMC8893617; DOI: 10.1371/journal.pone.0263863.
Abstract
Automatic facial coding (AFC) is a novel research tool to automatically analyze emotional facial expressions. AFC can classify emotional expressions with high accuracy in standardized picture inventories of intensively posed and prototypical expressions. However, classification of facial expressions of untrained study participants is more error prone. This discrepancy requires a direct comparison between these two sources of facial expressions. To this end, 70 untrained participants were asked to express joy, anger, surprise, sadness, disgust, and fear in a typical laboratory setting. Recorded videos were scored with a well-established AFC software (FaceReader, Noldus Information Technology) and compared with AFC measures of standardized pictures from 70 trained actors (i.e., standardized inventories). We report the probability estimates of specific emotion categories and, in addition, Action Unit (AU) profiles for each emotion. Based on this, we used a novel machine learning approach to determine the relevant AUs for each emotion, separately for both datasets. First, misclassification was more frequent for some emotions of untrained participants. Second, AU intensities were generally lower in pictures of untrained participants than in standardized pictures for all emotions. Third, although the profiles of relevant AUs overlapped substantially across the two datasets, there were also substantial differences in their AU profiles. This research provides evidence that the application of AFC is not limited to standardized facial expression inventories but can also be used to code facial expressions of untrained participants in a typical laboratory setting.
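The paper derives emotion-relevant AUs with a machine learning approach; as a much simpler stand-in, the sketch below ranks AUs by how much their mean intensity in one emotion's recordings exceeds their mean across all other emotions. The AU intensities are invented toy values, not the study's data.

```python
# Rank AUs for one emotion by mean-intensity contrast against all
# other emotions (a crude proxy for learned feature relevance).

def relevant_aus(profiles, emotion, top=2):
    """profiles: {emotion: {AU: [intensity per video]}}."""
    def contrast(au):
        target = profiles[emotion][au]
        others = [v for e, p in profiles.items() if e != emotion for v in p[au]]
        return sum(target) / len(target) - sum(others) / len(others)
    return sorted(profiles[emotion], key=contrast, reverse=True)[:top]

profiles = {
    "joy":   {"AU6": [0.7, 0.8], "AU12": [0.9, 0.8], "AU4": [0.1, 0.0]},
    "anger": {"AU6": [0.1, 0.2], "AU12": [0.1, 0.0], "AU4": [0.8, 0.9]},
}
print(relevant_aus(profiles, "joy"))  # AU12 and AU6 top the ranking
```

Running the same ranking separately on posed-inventory and untrained-participant data is one way to see the partial overlap in AU profiles the abstract describes.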
Affiliation(s)
- T. Tim A. Höfling
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Georg W. Alpers
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Björn Büdenbender
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Ulrich Föhl
- Business School, Pforzheim University of Applied Sciences, Pforzheim, Germany
- Antje B. M. Gerdes
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
13. Kashef R. ECNN: Enhanced convolutional neural network for efficient diagnosis of autism spectrum disorder. Cogn Syst Res 2022. DOI: 10.1016/j.cogsys.2021.10.002.
14. Franěk M, Petružálek J. Viewing Natural vs. Urban Images and Emotional Facial Expressions: An Exploratory Study. Int J Environ Res Public Health 2021; 18:7651. PMID: 34300102; PMCID: PMC8307470; DOI: 10.3390/ijerph18147651.
Abstract
There is a large body of evidence that exposure to simulated natural scenes has positive effects on emotions and reduces stress. Some studies have used self-reported assessments, and others have used physiological measures or combined self-reports with physiological measures; however, analysis of emotional facial expression has rarely been assessed. In the present study, participants' facial expressions were analyzed while viewing forest trees with foliage, forest trees without foliage, and urban images, using iMotions' AFFDEX software designed for the recognition of facial emotions. It was assumed that natural images would evoke a higher magnitude of positive emotions in facial expressions and a lower magnitude of negative emotions than urban images. However, the results showed only very low magnitudes of facial emotional responses, and differences between natural and urban images were not significant. While the stimuli used in the present study represented an ordinary deciduous forest and urban streets, differences between the effects of mundane and attractive natural scenes and urban images are discussed. It is suggested that more attractive images could result in more pronounced emotional facial expressions. The findings have methodological relevance for future research. Moreover, not all urban dwellers have the possibility to spend time in nature; therefore, knowing more about the effects of simulated natural scenes as a surrogate for nature also has practical relevance.
15.
Abstract
With a shift in interest toward dynamic expressions, numerous corpora of dynamic facial stimuli have been developed over the past two decades. The present research aimed to test existing sets of dynamic facial expressions (published between 2000 and 2015) in a cross-corpus validation effort. For this, 14 dynamic databases were selected that featured facial expressions of the basic six emotions (anger, disgust, fear, happiness, sadness, surprise) in posed or spontaneous form. In Study 1, a subset of stimuli from each database (N = 162) were presented to human observers and machine analysis, yielding considerable variance in emotion recognition performance across the databases. Classification accuracy further varied with perceived intensity and naturalness of the displays, with posed expressions being judged more accurately and as intense, but less natural compared to spontaneous ones. Study 2 aimed for a full validation of the 14 databases by subjecting the entire stimulus set (N = 3812) to machine analysis. A FACS-based Action Unit (AU) analysis revealed that facial AU configurations were more prototypical in posed than spontaneous expressions. The prototypicality of an expression in turn predicted emotion classification accuracy, with higher performance observed for more prototypical facial behavior. Furthermore, technical features of each database (i.e., duration, face box size, head rotation, and motion) had a significant impact on recognition accuracy. Together, the findings suggest that existing databases vary in their ability to signal specific emotions, thereby facing a trade-off between realism and ecological validity on the one end, and expression uniformity and comparability on the other.
16. Küntzler T, Höfling TTA, Alpers GW. Automatic Facial Expression Recognition in Standardized and Non-standardized Emotional Expressions. Front Psychol 2021; 12:627561. PMID: 34025503; PMCID: PMC8131548; DOI: 10.3389/fpsyg.2021.627561.
Abstract
Emotional facial expressions can inform researchers about an individual's emotional state. Recent technological advances open up new avenues to automatic Facial Expression Recognition (FER). Based on machine learning, such technology can tremendously increase the amount of processed data. FER is now easily accessible and has been validated for the classification of standardized prototypical facial expressions. However, applicability to more naturalistic facial expressions still remains uncertain. Hence, we test and compare performance of three different FER systems (Azure Face API, Microsoft; Face++, Megvii Technology; FaceReader, Noldus Information Technology) with human emotion recognition (A) for standardized posed facial expressions (from prototypical inventories) and (B) for non-standardized acted facial expressions (extracted from emotional movie scenes). For the standardized images, all three systems classify basic emotions accurately (FaceReader is most accurate) and they are mostly on par with human raters. For the non-standardized stimuli, performance drops remarkably for all three systems, but Azure still performs similarly to humans. In addition, all systems and humans alike tend to misclassify some of the non-standardized emotional facial expressions as neutral. In sum, emotion recognition by automated facial expression recognition can be an attractive alternative to human emotion recognition for standardized and non-standardized emotional facial expressions. However, we also found limitations in accuracy for specific facial expressions; clearly there is need for thorough empirical evaluation to guide future developments in computer vision of emotional facial expressions.
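The comparison described above boils down to scoring each FER system (and human raters) against ground-truth emotion labels, overall and for the specific failure mode of labeling emotional expressions as neutral. The predictions below are invented toy labels, not the study's data.

```python
# Two evaluation metrics for a facial expression classifier:
# overall accuracy, and the rate at which truly emotional items
# are misclassified as "neutral".

def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def neutral_miss_rate(pred, truth):
    emotional = [(p, t) for p, t in zip(pred, truth) if t != "neutral"]
    return sum(p == "neutral" for p, _ in emotional) / len(emotional)

truth  = ["happy", "angry", "sad", "fear", "neutral", "happy"]
system = ["happy", "angry", "neutral", "neutral", "neutral", "happy"]
print(accuracy(system, truth))           # 4 of 6 correct
print(neutral_miss_rate(system, truth))  # 2 of 5 emotional items missed as neutral
```

Computing both metrics per system, separately for standardized and non-standardized stimuli, reproduces the structure of the comparison: high accuracy on posed inventories, and a drift toward "neutral" on naturalistic material.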
Affiliation(s)
- Theresa Küntzler
- Department of Politics and Public Administration, Center for Image Analysis in the Social Sciences, Graduate School of Decision Science, University of Konstanz, Konstanz, Germany
- T Tim A Höfling
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Georg W Alpers
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
17. Höfling TTA, Alpers GW, Gerdes ABM, Föhl U. Automatic facial coding versus electromyography of mimicked, passive, and inhibited facial response to emotional faces. Cogn Emot 2021; 35:874-889. PMID: 33761825; DOI: 10.1080/02699931.2021.1902786.
Abstract
Decoding someone's facial expressions provides insights into his or her emotional experience. Recently, Automatic Facial Coding (AFC) software has been developed to measure emotional facial expressions. Previous studies provided initial evidence for the sensitivity of such systems in detecting facial responses in study participants. In the present experiment, we set out to generalise these results to affective responses as they can occur in variable social interactions. Thus, we presented facial expressions (happy, neutral, angry) and instructed participants (N = 64) either to actively mimic them, to look at them passively (n = 21), or to inhibit their own facial reaction (n = 22). A video stream for AFC and an electromyogram (EMG) of the zygomaticus and corrugator muscles were recorded continuously. In the mimicking condition, both AFC and EMG differentiated well between facial responses to the different emotional pictures. In the passive-viewing and inhibition conditions, AFC did not detect changes in facial expressions, whereas EMG remained highly sensitive. Although only EMG is sensitive when participants intend to conceal their facial reactions, these data extend previous findings that Automatic Facial Coding is a promising tool for the detection of intense facial reactions.
Affiliation(s)
- T Tim A Höfling
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany; Business School, Pforzheim University of Applied Sciences, Pforzheim, Germany
- Georg W Alpers
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Antje B M Gerdes
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Ulrich Föhl
- Business School, Pforzheim University of Applied Sciences, Pforzheim, Germany
18
Viswanatha Reddy G, Dharma Savarni C, Mukherjee S. Facial expression recognition in the wild, by fusion of deep learnt and hand-crafted features. Cogn Syst Res 2020. [DOI: 10.1016/j.cogsys.2020.03.002]
19
Höfling TTA, Gerdes ABM, Föhl U, Alpers GW. Read My Face: Automatic Facial Coding Versus Psychophysiological Indicators of Emotional Valence and Arousal. Front Psychol 2020; 11:1388. [PMID: 32636788] [PMCID: PMC7316962] [DOI: 10.3389/fpsyg.2020.01388]
Abstract
Facial expressions provide insight into a person's emotional experience. Automatically decoding these expressions has been made possible by tremendous progress in the field of computer vision. Researchers are now able to decode emotional facial expressions with impressive accuracy in standardized images of prototypical basic emotions. We tested the sensitivity of a well-established automatic facial coding software program to detect spontaneous emotional reactions in individuals responding to emotional pictures. We compared automatically generated scores for valence and arousal from FaceReader (FR; Noldus Information Technology) with the current psychophysiological gold standards for measuring emotional valence (facial electromyography, EMG) and arousal (skin conductance, SC). We recorded physiological and behavioral measurements of 43 healthy participants while they looked at pleasant, unpleasant, or neutral scenes. When participants viewed pleasant pictures, FR valence and EMG were comparably sensitive. For unpleasant pictures, however, FR valence showed the expected negative shift, but the signal did not differentiate well between responses to neutral and unpleasant stimuli, which were distinguishable with EMG. Furthermore, FR arousal values correlated more strongly with self-reported valence than with arousal, whereas SC was sensitive and specifically associated with self-reported arousal. This is the first study to systematically compare FR measurement of spontaneous emotional reactions to standardized emotional images with established psychophysiological measurement tools. This novel technology has yet to surpass the sensitivity of established psychophysiological measures, but it provides a promising new technique for the non-contact assessment of emotional responses.
Affiliation(s)
- T. Tim A. Höfling
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Antje B. M. Gerdes
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Ulrich Föhl
- Business Unit, Pforzheim University of Applied Sciences, Pforzheim, Germany
- Georg W. Alpers
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
20
Küster D, Krumhuber EG, Steinert L, Ahuja A, Baker M, Schultz T. Opportunities and Challenges for Using Automatic Human Affect Analysis in Consumer Research. Front Neurosci 2020; 14:400. [PMID: 32410956] [PMCID: PMC7199103] [DOI: 10.3389/fnins.2020.00400]
Abstract
The ability to automatically assess emotional responses via contact-free video recording taps into a rapidly growing market aimed at predicting consumer choices. If consumer attention and engagement were measurable in a reliable and accessible manner, relevant marketing decisions could be informed by objective data. Although significant advances have been made in automatic affect recognition, several practical and theoretical issues remain largely unresolved. These concern the lack of cross-system validation, a historical emphasis on posed over spontaneous expressions, and more fundamental issues regarding the weak association between subjective experience and facial expressions. To address these limitations, the present paper argues that extant commercial and free facial expression classifiers should be rigorously validated in cross-system research. Furthermore, academics and practitioners must better leverage fine-grained emotional response dynamics, with a stronger emphasis on understanding naturally occurring spontaneous expressions in naturalistic choice settings. We posit that applied consumer research may be better situated to examine facial behavior in socio-emotional contexts than decontextualized laboratory studies, and we highlight how automatic human affect analysis (AHAA) can be successfully employed in this context. Facial activity should also be considered less as a single outcome variable and more as a starting point for further analyses. Implications of this approach and potential obstacles that need to be overcome are discussed within the context of consumer research.
Affiliation(s)
- Dennis Küster
- Cognitive Systems Lab, Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany; Department of Psychology and Methods, Jacobs University Bremen, Bremen, Germany
- Eva G Krumhuber
- Department of Experimental Psychology, University College London, London, United Kingdom
- Lars Steinert
- Cognitive Systems Lab, Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany
- Anuj Ahuja
- Maharaja Surajmal Institute of Technology, Guru Gobind Singh Indraprastha University, New Delhi, India
- Marc Baker
- Centre for Situated Action and Communication, Department of Psychology, University of Portsmouth, Portsmouth, United Kingdom
- Tanja Schultz
- Cognitive Systems Lab, Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany
21
The State of Automated Facial Expression Analysis (AFEA) in Evaluating Consumer Packaged Beverages. Beverages 2020. [DOI: 10.3390/beverages6020027]
Abstract
In the late 1970s, analysis of facial expressions to unveil emotional states began to grow and flourish along with new technologies and software advances. Researchers have always been able to document what consumers do, but understanding how consumers feel at a specific moment in time is an important part of the product-development puzzle. Because of this, biometric testing methods have been used in numerous studies as researchers have worked to develop a more comprehensive understanding of consumers. Despite the many articles on automated facial expression analysis (AFEA), the literature is limited with regard to food and beverage studies. There are no standards to guide researchers in setting up materials, processing data, or conducting a study, and there are few, if any, compilations of the studies that have been performed to determine whether any methodologies work better than others or what trends have been found. Through a systematic Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) review, 38 articles were found that were relevant to the research goals. The authors identified AFEA study methods that have worked and those that have been less successful, and noted any trends of particular importance. Key takeaways include a listing of commercial AFEA software, the experimental methods used within the PRISMA analysis, and a comprehensive explanation of the critical methods and practices of the studies analyzed. Key information was analyzed and compared to determine effects on study outcomes. Through analyzing the various studies, suggestions and guidance for conducting AFEA experiments and analyzing their data are discussed.
22
Kulke L, Feyerabend D, Schacht A. A Comparison of the Affectiva iMotions Facial Expression Analysis Software With EMG for Identifying Facial Expressions of Emotion. Front Psychol 2020; 11:329. [PMID: 32184749] [PMCID: PMC7058682] [DOI: 10.3389/fpsyg.2020.00329]
Abstract
Human faces express emotions, informing others about their affective states. To measure expressions of emotion, facial electromyography (EMG) has been widely used, requiring electrodes and technical equipment. More recently, emotion recognition software has been developed that detects emotions from video recordings of human faces. However, its validity and comparability to EMG measures are unclear. The aim of the current study was to compare the Affectiva Affdex emotion recognition software by iMotions with EMG measurements of the zygomaticus major and corrugator supercilii muscles, concerning its ability to identify happy, angry, and neutral faces. Twenty participants imitated these facial expressions while videos and EMG were recorded. Happy and angry expressions were detected above chance by both the software and EMG, while neutral expressions were more often falsely identified as negative by EMG than by the software. Overall, EMG and software values correlated highly. In conclusion, the Affectiva Affdex software can identify facial expressions, and its results are comparable to EMG findings.
Affiliation(s)
- Louisa Kulke
- Affective Neuroscience and Psychophysiology Laboratory, University of Göttingen, Göttingen, Germany
- Leibniz ScienceCampus Primate Cognition, Göttingen, Germany
- Dennis Feyerabend
- Affective Neuroscience and Psychophysiology Laboratory, University of Göttingen, Göttingen, Germany
- Annekathrin Schacht
- Affective Neuroscience and Psychophysiology Laboratory, University of Göttingen, Göttingen, Germany
- Leibniz ScienceCampus Primate Cognition, Göttingen, Germany