1
Song Q, Song T, Fei X. Effects of executive functions on consecutive interpreting for Chinese-Japanese unbalanced bilinguals. Front Psychol 2023;14:1236649. PMID: 37727743; PMCID: PMC10506074; DOI: 10.3389/fpsyg.2023.1236649. Received 06/08/2023; accepted 08/14/2023.
Abstract
Introduction: Previous research on interpreting performance has focused primarily on the influence of interpreting experience on executive functions such as shifting, updating, and inhibition. However, little research has explored the reverse direction: the effects of executive functions on interpreting performance. Understanding how different executive functions affect interpreting performance can provide valuable insights for teaching methods. The present study therefore examines the effects of executive functions on comprehension and output performance during bidirectional consecutive interpreting between Chinese and Japanese.
Methods: Forty-eight Chinese advanced learners of Japanese took part. Self-assessment results indicated that all participants were unbalanced bilinguals. All participants performed consecutive interpreting, completed comprehension tests, and underwent executive function tests: the color-shape switching task (shifting), the 1-back task (updating), and the Stroop task (inhibition).
Results: Bayesian linear regression revealed the following. (1) Updating had a significant effect on both Japanese-to-Chinese and Chinese-to-Japanese interpreting: higher updating ability was associated with better interpreting performance. (2) Inhibition had a significant effect on Japanese-to-Chinese interpreting performance but not on Chinese-to-Japanese interpreting. (3) Shifting showed no significant effect in either direction.
Discussion: The results indicate that executive functions affect the interpreting performance of unbalanced bilinguals differently, and that these effects also depend on the direction of the source language. Based on these findings, it is recommended that executive function training be included in interpreter teaching and training programs, with a specific focus on the updating and inhibition functions.
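The abstract does not specify how the Bayesian linear regression was implemented, so the following is only a minimal sketch of a conjugate Bayesian regression on simulated data, not the authors' analysis. The predictor name `updating`, the simulated effect size, and the prior and noise settings are all illustrative assumptions.

```python
import numpy as np

def bayesian_linreg_posterior(X, y, alpha=1.0, sigma2=1.0):
    """Posterior over weights w for y = X @ w + noise, with prior
    w ~ N(0, alpha^-1 I) and known noise variance sigma2.
    Returns the posterior mean and covariance."""
    d = X.shape[1]
    S_inv = alpha * np.eye(d) + (X.T @ X) / sigma2  # posterior precision
    S = np.linalg.inv(S_inv)                        # posterior covariance
    m = S @ (X.T @ y) / sigma2                      # posterior mean
    return m, S

# Hypothetical question: does an "updating" score predict interpreting quality?
rng = np.random.default_rng(0)
n = 48                                  # sample size matching the study
updating = rng.normal(size=n)           # standardized executive-function score
score = 0.6 * updating + rng.normal(scale=0.5, size=n)  # simulated outcome
X = np.column_stack([np.ones(n), updating])             # intercept + predictor
m, S = bayesian_linreg_posterior(X, score, alpha=1.0, sigma2=0.25)
# 95% credible interval for the slope on the updating score
lo, hi = m[1] - 1.96 * np.sqrt(S[1, 1]), m[1] + 1.96 * np.sqrt(S[1, 1])
```

A credible interval for the slope that excludes zero plays the role of the "significant effect" reported in the abstract.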
Affiliation(s)
- Qichao Song
  - Graduate School of Humanities and Social Sciences, Hiroshima University, Hiroshima, Japan
- Ting Song
  - School of Foreign Studies, Jilin University, Changchun, China
- Xiaodong Fei
  - Beijing Center for Japanese Studies, Beijing Foreign Studies University, Beijing, China
2
Vos S, Collignon O, Boets B. The Sound of Emotion: Pinpointing Emotional Voice Processing Via Frequency Tagging EEG. Brain Sci 2023;13:162. PMID: 36831705; PMCID: PMC9954097; DOI: 10.3390/brainsci13020162. Received 12/09/2022; revised 01/13/2023; accepted 01/16/2023.
Abstract
Successfully engaging in social communication requires efficient processing of subtle socio-communicative cues. Voices convey a wealth of social information, such as the gender, identity, and emotional state of the speaker. We tested whether the brain can systematically and automatically differentiate and track a periodic stream of emotional utterances embedded in a series of neutral vocal utterances. We recorded frequency-tagged EEG responses of 20 neurotypical male adults while presenting streams of neutral utterances at a 4 Hz base rate, interleaved with emotional utterances every third stimulus, i.e., at a 1.333 Hz oddball frequency. Four emotions (happiness, sadness, anger, and fear) were presented as separate conditions in separate streams. To control for the impact of low-level acoustic cues, we maximized variability among the stimuli and included a control condition with scrambled utterances; scrambling preserves low-level acoustic characteristics but renders the emotional character unrecognizable. Results revealed significant oddball EEG responses for all conditions, indicating that every emotion category can be discriminated from the neutral stimuli, and every emotional oddball response was significantly higher than the response to the scrambled utterances. These findings demonstrate that emotion discrimination is fast, automatic, and not merely driven by low-level perceptual features. Finally, we present a new database of short emotional utterances for vocal emotion research (EVID), together with an innovative frequency-tagging EEG paradigm for implicit vocal emotion discrimination.
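The oddball logic of the paradigm (a 4 Hz base rate with every third stimulus deviant, hence a 4/3 ≈ 1.333 Hz oddball rate) can be illustrated with a short simulation: a response component at the oddball frequency stands out in the spectrum against the noise floor. This is a sketch on synthetic data, not the authors' EEG pipeline; the sampling rate, recording length, and response amplitudes are assumptions.

```python
import numpy as np

fs = 256.0                        # sampling rate in Hz (assumed)
base, oddball = 4.0, 4.0 / 3.0    # stimulation rates from the paradigm
t = np.arange(0, 60.0, 1.0 / fs)  # 60 s of simulated recording

# Simulated EEG: a response to every stimulus (4 Hz) plus an extra
# response to every third, emotional stimulus (1.333 Hz), plus noise.
rng = np.random.default_rng(1)
eeg = (np.sin(2 * np.pi * base * t)
       + 0.5 * np.sin(2 * np.pi * oddball * t)
       + rng.normal(scale=1.0, size=t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

def amp_at(f):
    """Amplitude of the spectrum at the bin closest to frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

# The oddball response stands out against neighbouring noise bins.
noise = np.median(spectrum[(freqs > 0.5) & (freqs < 3.0)])
snr_oddball = amp_at(oddball) / noise
```

A significant oddball response in the real experiment corresponds here to `snr_oddball` being well above 1.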
Affiliation(s)
- Silke Vos
  - Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, 3000 Leuven, Belgium
  - Leuven Autism Research (LAuRes), KU Leuven, 3000 Leuven, Belgium
  - Leuven Brain Institute (LBI), KU Leuven, 3000 Leuven, Belgium
  - Correspondence: Tel.: +32-16-37-76-83
- Olivier Collignon
  - Institute of Research in Psychology & Institute of Neuroscience, Université Catholique de Louvain, 1348 Louvain-La-Neuve, Belgium
  - School of Health Sciences, HES-SO Valais-Wallis, The Sense Innovation and Research Center, 1007 Lausanne and 1950 Sion, Switzerland
- Bart Boets
  - Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, 3000 Leuven, Belgium
  - Leuven Autism Research (LAuRes), KU Leuven, 3000 Leuven, Belgium
  - Leuven Brain Institute (LBI), KU Leuven, 3000 Leuven, Belgium
3
Straulino E, Scarpazza C, Sartori L. What is missing in the study of emotion expression? Front Psychol 2023;14:1158136. PMID: 37179857; PMCID: PMC10173880; DOI: 10.3389/fpsyg.2023.1158136. Received 02/03/2023; accepted 04/06/2023.
Abstract
As we approach the 150th anniversary of "The Expression of the Emotions in Man and Animals", scientists' conclusions on emotion expression are still debated. Emotion expression has traditionally been anchored to prototypical and mutually exclusive facial expressions (e.g., anger, disgust, fear, happiness, sadness, and surprise). However, people express emotions in nuanced patterns and, crucially, not everything is in the face. In recent decades, considerable work has critiqued this classical view, calling for a more fluid and flexible approach that considers how humans dynamically perform genuine expressions with their bodies in context. A growing body of evidence suggests that each emotional display is a complex, multi-component motoric event. The human face is never static; it continuously acts and reacts to internal and environmental stimuli through the coordinated action of muscles throughout the body. Moreover, two anatomically and functionally distinct neural pathways subserve voluntary and involuntary expressions. An interesting implication is that we have distinct and independent pathways for genuine and posed facial expressions, and different combinations may occur across the vertical facial axis. Investigating the time course of these facial blends, which can be consciously controlled only in part, has recently provided a useful operational test for comparing the predictions of various models of the lateralization of emotions. This concise review identifies shortcomings and new challenges in the study of emotion expression at the face, body, and contextual levels, arguing for a theoretical and methodological shift in the study of emotions. We contend that the most feasible way to address the complex world of emotion expression is to define a new and more complete approach to emotional investigation. This approach can potentially lead us to the roots of emotional displays and to the individual mechanisms underlying their expression (i.e., individual emotional signatures).
Affiliation(s)
- Elisa Straulino (correspondence)
  - Department of General Psychology, University of Padova, Padova, Italy
- Cristina Scarpazza
  - Department of General Psychology, University of Padova, Padova, Italy
  - IRCCS San Camillo Hospital, Venice, Italy
- Luisa Sartori (correspondence)
  - Department of General Psychology, University of Padova, Padova, Italy
  - Padova Neuroscience Center, University of Padova, Padova, Italy
4
Namba S, Nakamura K, Watanabe K. The spatio-temporal features of perceived-as-genuine and deliberate expressions. PLoS One 2022;17:e0271047. PMID: 35839208; PMCID: PMC9286247; DOI: 10.1371/journal.pone.0271047. Received 10/21/2021; accepted 06/22/2022.
Abstract
Reading the genuineness of facial expressions is important for assessing the credibility of information conveyed by faces. However, it remains unclear which spatio-temporal characteristics of facial movements serve as critical cues to the perceived genuineness of facial expressions. This study focused on observable spatio-temporal differences between perceived-as-genuine and deliberate expressions of happiness and anger. In the experiment, 89 Japanese participants judged the perceived genuineness of faces in videos showing expressions of happiness or anger. To identify diagnostic facial cues to perceived genuineness, we analyzed a total of 128 face videos with an automated facial action detection system: moment-to-moment activations of facial action units were annotated, and nonnegative matrix factorization extracted sparse, meaningful components from the action unit data. The results showed that genuineness judgments decreased when more spatial patterns were observed in a facial expression. As for temporal features, perceived-as-deliberate expressions of happiness generally had faster onsets to the peak than perceived-as-genuine expressions of happiness. Moreover, opening the mouth contributed negatively to perceived genuineness, irrespective of the type of facial expression. These findings provide the first evidence for dynamic facial cues to the perceived genuineness of happiness and anger expressions.
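As a rough illustration of the factorization step described above, here is a minimal nonnegative matrix factorization (Lee-Seung multiplicative updates for the Frobenius objective) applied to a hypothetical frames-by-action-units matrix. The matrix sizes, the number of components, and the update rule are assumptions; the abstract does not state which NMF implementation the authors used.

```python
import numpy as np

def nmf(V, k, n_iter=200, seed=0):
    """Factorize nonnegative V (frames x action units) as W @ H with
    W, H >= 0, using multiplicative updates. Returns W (frames x k)
    and H (k x action units)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    eps = 1e-9  # guard against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Hypothetical input: per-frame intensities of 17 facial action units.
rng = np.random.default_rng(2)
V = rng.random((120, 17))   # 120 video frames x 17 AU intensities
W, H = nmf(V, k=5)          # 5 expression components
recon_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Each row of `H` is a spatial pattern over action units, and each column of `W` is that pattern's activation over time, which is the sense in which the components summarize "more spatial patterns" in an expression.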
Affiliation(s)
- Shushi Namba
  - Psychological Process Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan
- Koyo Nakamura
  - Faculty of Psychology, Department of Cognition, Emotion, and Methods in Psychology, University of Vienna, Vienna, Austria
  - Japan Society for the Promotion of Science, Tokyo, Japan
  - Faculty of Science and Engineering, Waseda University, Tokyo, Japan
- Katsumi Watanabe
  - Faculty of Science and Engineering, Waseda University, Tokyo, Japan
5
Namba S, Sato W, Nakamura K, Watanabe K. Computational Process of Sharing Emotion: An Authentic Information Perspective. Front Psychol 2022;13:849499. PMID: 35645906; PMCID: PMC9134197; DOI: 10.3389/fpsyg.2022.849499. Received 01/06/2022; accepted 04/26/2022.
Abstract
Although many psychology studies have shown that sharing emotion supports dyadic interaction, no study has formally examined how the transmission of authentic information from emotional expressions can inform perceivers. In this study, we used computational modeling, specifically a multinomial processing tree, to formally quantify the process of sharing emotion, emphasizing the perception of authentic information about expressers' feeling states from their facial expressions. Results indicated, first, that authentic information about feeling states is more likely to be perceived from happy expressions than from angry expressions. Second, happy facial expressions can activate both emotional elicitation and emotion sharing in perceivers, whereas for angry facial expressions emotional elicitation operates alone, without emotion sharing. Third, parameters for detecting experiences of anger correlated positively with those for happiness. No robust correlation was found between the parameters extracted from the experimental task and questionnaire-measured emotional contagion, empathy, or social anxiety. These results suggest that this new computational approach can contribute to describing the process of emotion sharing.
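A multinomial processing tree expresses response-category probabilities as products of branch parameters, which is what makes the latent processes (perception, sharing, guessing) separately estimable. The tree below is a purely illustrative sketch, not the authors' actual model; the parameters `d`, `s`, and `g` and the four response categories are hypothetical.

```python
# Illustrative multinomial-processing-tree parameters (all hypothetical):
# d = probability of perceiving authentic feeling-state information,
# s = probability of sharing the emotion given that perception,
# g = probability of guessing "felt" when nothing was perceived.
def mpt_probabilities(d, s, g):
    """Category probabilities as products along the tree's branches."""
    return {
        "shared": d * s,                     # perceived and shared
        "perceived_only": d * (1 - s),       # perceived but not shared
        "guess_felt": (1 - d) * g,           # not perceived, guessed "felt"
        "guess_not_felt": (1 - d) * (1 - g), # not perceived, guessed "not felt"
    }

probs = mpt_probabilities(d=0.7, s=0.6, g=0.5)
total = sum(probs.values())  # branch probabilities must sum to 1
```

Fitting such a model means choosing `d`, `s`, and `g` to maximize the multinomial likelihood of the observed category counts; the per-emotion parameter estimates can then be compared, as in the happiness-versus-anger contrast reported above.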
Affiliation(s)
- Shushi Namba (correspondence)
  - Psychological Process Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan
- Wataru Sato
  - Psychological Process Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan
- Koyo Nakamura
  - Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
  - Japan Society for the Promotion of Science, Tokyo, Japan
  - Faculty of Science and Engineering, Waseda University, Tokyo, Japan
- Katsumi Watanabe
  - Faculty of Science and Engineering, Waseda University, Tokyo, Japan
  - Faculty of Arts, Design and Architecture, University of New South Wales, Sydney, NSW, Australia
6
Perrett D. Representations of facial expressions since Darwin. Evolutionary Human Sciences 2022;4:e22. PMID: 37588914; PMCID: PMC10426120; DOI: 10.1017/ehs.2022.10.
Abstract
Darwin's book on the expression of emotion was one of the first publications to include photographs (Darwin, The Expression of the Emotions in Man and Animals, 1872). The inclusion of expression photographs meant that readers could form their own opinions and could, like Darwin, survey others for their interpretations. As such, the images provided an evidence base and an 'open source'. Since Darwin, increases in the representativeness and realism of emotional expressions have come from the use of composite images, colour, multiple views, and dynamic displays. Research on understanding emotional expressions has been aided by the use of computer graphics to interpolate parametrically between different expressions and to extrapolate exaggerations. This review tracks developments in how emotions are illustrated and studied, and considers where to go next.
Affiliation(s)
- David Perrett
  - School of Psychology and Neuroscience, University of St Andrews, St Mary's Quad, St Andrews, Fife KY16 9JP, UK
7
Motion Increases Recognition of Naturalistic Postures but not Facial Expressions. Journal of Nonverbal Behavior 2021. DOI: 10.1007/s10919-021-00372-4.
8
Zhang M, Ihme K, Drewitz U, Jipp M. Understanding the Multidimensional and Dynamic Nature of Facial Expressions Based on Indicators for Appraisal Components as Basis for Measuring Drivers' Fear. Front Psychol 2021;12:622433. PMID: 33679538; PMCID: PMC7930214; DOI: 10.3389/fpsyg.2021.622433. Received 10/28/2020; accepted 01/27/2021.
Abstract
Facial expressions are one of the implicit measurements commonly used for in-vehicle affective computing. However, the time course and the underlying mechanisms of facial expressions have so far received little attention. According to the Component Process Model of emotion, facial expressions result from an individual's appraisals, which are assumed to occur in sequence. Therefore, a multidimensional and dynamic analysis of drivers' fear based on facial expression data can profit from a consideration of these appraisals. A driving simulator experiment with 37 participants was conducted in which fear and relaxation were induced. The facial expression indicators of high-novelty and low-power appraisals were significantly activated after a fear event (high novelty: Z = 2.80, p < 0.01, r_contrast = 0.46; low power: Z = 2.43, p < 0.05, r_contrast = 0.50). Furthermore, after the fear event, the activation of high novelty occurred earlier than that of low power. These results suggest that multidimensional analysis of facial expressions is a suitable approach for the in-vehicle measurement of drivers' emotions, and that a dynamic analysis of drivers' facial expressions that considers the effects of appraisal components can add valuable information for the in-vehicle assessment of emotions.
Affiliation(s)
- Meng Zhang, Klas Ihme, Uwe Drewitz, and Meike Jipp
  - Institute of Transportation Systems, German Aerospace Center/Deutsches Zentrum für Luft- und Raumfahrt (DLR), Braunschweig, Germany
9
Namba S, Matsui H, Zloteanu M. Distinct temporal features of genuine and deliberate facial expressions of surprise. Sci Rep 2021;11:3362. PMID: 33564091; PMCID: PMC7873236; DOI: 10.1038/s41598-021-83077-4. Received 10/28/2020; accepted 01/28/2021.
Abstract
The physical properties of genuine and deliberate facial expressions remain elusive. This study focused on observable dynamic differences between genuine and deliberate expressions of surprise, based on the temporal structure of facial parts during emotional expression. Facial expressions of surprise were elicited using multiple methods and video recorded: some senders were filmed as they experienced genuine surprise in response to a jack-in-the-box (Genuine); other senders were asked to produce deliberate surprise with no preparation (Improvised), by mimicking the expression of another person (External), or by reproducing the surprised face after having first experienced genuine surprise (Rehearsed). A total of 127 videos were analyzed, and moment-to-moment movements of eyelids and eyebrows were annotated with deep-learning-based tracking software. Results showed that all surprise displays were mainly composed of eyebrow- and eyelid-raising movements. Genuine displays included horizontal movement in the left part of the face but also showed the weakest coupling of movements of all conditions. External displays had faster eyebrow and eyelid movements, while Improvised displays showed the strongest coupling of movements. The findings demonstrate the importance of dynamic information in the encoding of genuine and deliberate expressions of surprise, and the importance of the production method employed in research.
Affiliation(s)
- Shushi Namba
  - Psychological Process Team, BZP, Robotics Project, RIKEN, Kyoto 619-0288, Japan
- Hiroshi Matsui
  - Center for Human-Nature, Artificial Intelligence, and Neuroscience, Hokkaido University, Hokkaido 060-0808, Japan
- Mircea Zloteanu
  - Department of Criminology and Sociology, Kingston University London, Kingston Upon Thames KT1 2EE, UK
10
Yoshimura H, Qi H, Kikuchi DM, Matsui Y, Fukushima K, Kudo S, Ban K, Kusano K, Nagano D, Hara M, Sato Y, Takatsu K, Hirata S, Kinoshita K. The relationship between plant-eating and hair evacuation in snow leopards (Panthera uncia). PLoS One 2020;15:e0236635. PMID: 32736376; PMCID: PMC7394552; DOI: 10.1371/journal.pone.0236635. Received 03/25/2020; accepted 07/10/2020.
Abstract
Although most felids have an exclusively carnivorous diet, the presence of plant matter in scat has been reported in various species, suggesting that the conservation of plant-eating behavior in felid evolution may have adaptive significance. Some studies have hypothesized that felids consume plants for self-medication or as a source of nutrients. Plant intake is also thought to help them excrete hairballs; however, no scientific work has confirmed these effects. Thus, the objective of this study was to investigate the relationship between plant intake and hair evacuation in felids. We selected snow leopards (Panthera uncia) as the study species because they have longer and denser hair than other felids. The behavior of 11 captive snow leopards was observed, and scat samples from eight of them and two other captive individuals were analyzed. Snow leopards evacuate hair possibly by vomiting and by excretion in scats. The frequencies of plant-eating and vomiting and the amounts of hair and plant matter in scats were evaluated. We found that the frequency of vomiting was much lower than the frequency of plant-eating. In addition, there was no significant relationship between the amount of plant matter and the amount of hair contained in scats. Contrary to the common assumption, our results indicate that plant intake has little effect on hair evacuation in felids.
Affiliation(s)
- Hiroto Yoshimura (correspondence)
  - Wildlife Research Center, Kyoto University, Kyoto, Japan
- Huiyuan Qi
  - Wildlife Research Center, Kyoto University, Kyoto, Japan
- Dale M. Kikuchi
  - Department of Mechanical Engineering Science, Tokyo Institute of Technology, Tokyo, Japan
- Sai Kudo
  - Sapporo Maruyama Zoo, Sapporo, Hokkaido, Japan
- Keisuke Kusano
  - Kumamoto City Zoological and Botanical Gardens, Kumamoto, Japan
- Daisuke Nagano
  - Kumamoto City Zoological and Botanical Gardens, Kumamoto, Japan
- Mami Hara
  - Nagoya Higashiyama Zoo and Botanical Gardens, Nagoya, Aichi, Japan
- Yasuhiro Sato
  - Nagoya Higashiyama Zoo and Botanical Gardens, Nagoya, Aichi, Japan
- Satoshi Hirata
  - Wildlife Research Center, Kyoto University, Kyoto, Japan
- Kodzue Kinoshita (correspondence)
  - Wildlife Research Center, Kyoto University, Kyoto, Japan
11
Lander K, Butcher NL. Recognizing Genuine From Posed Facial Expressions: Exploring the Role of Dynamic Information and Face Familiarity. Front Psychol 2020;11:1378. PMID: 32719634; PMCID: PMC7347903; DOI: 10.3389/fpsyg.2020.01378. Received 02/05/2020; accepted 05/22/2020.
Abstract
The accurate recognition of emotion is important for interpersonal interaction and when navigating our social world. However, not all facial displays reflect the emotional experience currently being felt by the expresser. Indeed, faces express both genuine and posed displays of emotion. In this article, we summarize the importance of motion for the recognition of face identity before critically outlining the role of dynamic information in determining facial expressions and distinguishing between genuine and posed expressions of emotion. We propose that both dynamic information and face familiarity may modulate our ability to determine whether an expression is genuine or not. Finally, we consider the shared role for dynamic information across different face recognition tasks and the wider impact of face familiarity on determining genuine from posed expressions during real-world interactions.
Affiliation(s)
- Karen Lander
  - Division of Neuroscience and Experimental Psychology, University of Manchester, Manchester, United Kingdom
- Natalie L Butcher
  - School of Social Sciences, Humanities and Law, Teesside University, Middlesbrough, United Kingdom
12
Perusquía-Hernández M, Ayabe-Kanamura S, Suzuki K. Human perception and biosignal-based identification of posed and spontaneous smiles. PLoS One 2019;14:e0226328. PMID: 31830111; PMCID: PMC6907846; DOI: 10.1371/journal.pone.0226328. Received 02/04/2019; accepted 11/25/2019.
Abstract
Facial expressions are behavioural cues that represent an affective state, which makes them an unobtrusive alternative to affective self-report. The perceptual identification of facial expressions can be performed automatically with technological assistance; once the facial expressions have been identified, their interpretation is usually left to a field expert. However, facial expressions do not always represent felt affect; they can also be a communication tool. Facial expression measurements are therefore prone to the same biases as self-report. Hence, the automatic measurement of human affect should also make inferences about the nature of facial expressions instead of only describing facial movements. We present two experiments designed to assess whether such automated inferential judgment could be advantageous, focusing on the differences between posed and spontaneous smiles. The first experiment was designed to elicit both types of expressions; in contrast to other studies, the temporal dynamics of the elicited posed expressions were not constrained by the eliciting instruction. Electromyography (EMG) was used to discriminate between them automatically. Spontaneous smiles were found to differ from posed smiles in magnitude, onset time, and onset and offset speed, independently of the producer's ethnicity. Agreement between the expression type and EMG-based automatic detection reached 94% accuracy. Finally, measurements of agreement between human video coders showed that although agreement on perceptual labels is fairly good, it worsens for inferential labels. A second experiment confirmed that laypersons are poor at distinguishing posed from spontaneous smiles. Therefore, automatic identification of inferential labels would benefit affective assessments and further research on this topic.
Affiliation(s)
- Monica Perusquía-Hernández
  - Communication Science Laboratories, NTT, Atsugi, Kanagawa, Japan
  - Artificial Intelligence Laboratory, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Kenji Suzuki
  - Artificial Intelligence Laboratory, University of Tsukuba, Tsukuba, Ibaraki, Japan
13
Scherer KR, Ellgring H, Dieckmann A, Unfried M, Mortillaro M. Dynamic Facial Expression of Emotion and Observer Inference. Front Psychol 2019;10:508. PMID: 30941073; PMCID: PMC6434775; DOI: 10.3389/fpsyg.2019.00508. Received 10/27/2018; accepted 02/20/2019.
Abstract
Research on facial emotion expression has mostly focused on emotion recognition, assuming that a small number of discrete emotions is elicited and expressed via prototypical facial muscle configurations as captured in still photographs. These are expected to be recognized by observers, presumably via template matching. In contrast, appraisal theories of emotion propose a more dynamic approach, suggesting that specific elements of facial expressions are directly produced by the result of certain appraisals and predicting the facial patterns to be expected for certain appraisal configurations. This approach has recently been extended to emotion perception, claiming that observers first infer individual appraisals and only then make categorical emotion judgments based on the estimated appraisal patterns, using inference rules. Here, we report two related studies to empirically investigate the facial action unit configurations that are used by actors to convey specific emotions in short affect bursts and to examine to what extent observers can infer a person's emotions from the predicted facial expression configurations. The results show that (1) professional actors use many of the predicted facial action unit patterns to enact systematically specified appraisal outcomes in a realistic scenario setting, and (2) naïve observers infer the respective emotions based on highly similar facial movement configurations with a degree of accuracy comparable to earlier research findings. Based on estimates of underlying appraisal criteria for the different emotions we conclude that the patterns of facial action units identified in this research correspond largely to prior predictions and encourage further research on appraisal-driven expression and inference.
Affiliation(s)
- Klaus R Scherer
  - Department of Psychology and Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
- Heiner Ellgring
  - Department of Psychology, University of Würzburg, Würzburg, Germany
- Marcello Mortillaro
  - Department of Psychology and Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland