1
Macinska S, Lindsay S, Jellema T. Visual Attention to Dynamic Emotional Faces in Adults on the Autism Spectrum. J Autism Dev Disord 2024; 54:2211-2223. [PMID: 37079180; PMCID: PMC11143001; DOI: 10.1007/s10803-023-05979-8]
Abstract
Using eye-tracking, we studied the allocation of attention to faces whose emotional expression and eye-gaze changed dynamically in an ecologically valid manner. We tested typically-developed (TD) adults low or high in autistic-like traits (Experiment 1) and adults with high-functioning autism (HFA; Experiment 2). All groups fixated more on the eyes than on any other facial area, regardless of emotion and gaze direction, though the HFA group fixated less on the eyes and more on the nose than TD controls. The sequence of dynamic facial changes affected the groups similarly, reducing attention to the eyes and increasing attention to the mouth. The results suggest that dynamic emotional face scanning patterns are stereotypical and differ only modestly between TD and HFA adults.
Affiliation(s)
- Sylwia Macinska
- Department of Psychology, Faculty of Health Sciences, University of Hull, Cottingham Road, Hull, HU6 7RX, UK
- Shane Lindsay
- Department of Psychology, Faculty of Health Sciences, University of Hull, Cottingham Road, Hull, HU6 7RX, UK
- Tjeerd Jellema
- Department of Psychology, Faculty of Health Sciences, University of Hull, Cottingham Road, Hull, HU6 7RX, UK
2
Diel A, Sato W, Hsu CT, Minato T. Asynchrony enhances uncanniness in human, android, and virtual dynamic facial expressions. BMC Res Notes 2023; 16:368. [PMID: 38082445; PMCID: PMC10714471; DOI: 10.1186/s13104-023-06648-w]
Abstract
OBJECTIVE Uncanniness plays a vital role in interactions with humans and artificial agents. Previous studies have shown that uncanniness is caused by heightened sensitivity to deviation or atypicality in specialized categories, such as faces or facial expressions, that are marked by configural processing. We hypothesized that asynchrony, understood as a temporal deviation in facial expression, could make a facial expression appear uncanny, and that this effect could be disrupted through inversion. RESULTS Sixty-four participants rated the uncanniness of synchronous or asynchronous dynamic facial emotion expressions of human, android, or computer-generated (CG) actors, presented either upright or inverted. Asynchronous (vs. synchronous) expressions increased uncanniness for all upright expressions except CG angry expressions. Inverted presentations produced less evident asynchrony effects than upright ones for human angry and android happy expressions. These results suggest that asynchrony can make dynamic expressions appear uncanny, an effect that is related to configural processing but differs across agents.
Affiliation(s)
- Alexander Diel
- Cardiff University School of Psychology, Cardiff, UK.
- RIKEN Institute, Kyoto, Japan.
- Clinic for Psychosomatic Medicine and Psychotherapy, LVR University Hospital Essen, University of Duisburg-Essen, 45147, Essen, Germany.
- Center for Translational Neuro- and Behavioral Sciences (C-TNBS), University of Duisburg-Essen, 45147, Essen, Germany.
3
Long H, Peluso N, Baker CI, Japee S, Taubert J. A database of heterogeneous faces for studying naturalistic expressions. Sci Rep 2023; 13:5383. [PMID: 37012369; PMCID: PMC10070342; DOI: 10.1038/s41598-023-32659-5]
Abstract
Facial expressions are thought to be complex visual signals, critical for communication between social agents. Most prior work aimed at understanding how facial expressions are recognized has relied on stimulus databases featuring posed facial expressions, designed to represent putative emotional categories (such as 'happy' and 'angry'). Here we use an alternative selection strategy to develop the Wild Faces Database (WFD): a set of one thousand images capturing a diverse range of ambient facial behaviors from outside of the laboratory. We characterized the perceived emotional content in these images using a standard categorization task in which participants were asked to classify the apparent facial expression in each image. In addition, participants were asked to indicate the intensity and genuineness of each expression. While modal scores indicate that the WFD captures a range of different emotional expressions, in comparing the WFD to images taken from other, more conventional databases, we found that participants responded more variably and less specifically to the wild-type faces, perhaps indicating that natural expressions are more multiplexed than a categorical model would predict. We argue that this variability can be employed to explore latent dimensions in our mental representation of facial expressions. Further, images in the WFD were rated as less intense and more genuine than images taken from other databases, suggesting a greater degree of authenticity among WFD images. The strong positive correlation between intensity and genuineness scores demonstrates that even the high arousal states captured in the WFD were perceived as authentic. Collectively, these findings highlight the potential utility of the WFD as a new resource for bridging the gap between the laboratory and real world in studies of expression recognition.
Affiliation(s)
- Houqiu Long
- The School of Psychology, The University of Queensland, St Lucia, QLD, Australia
- Natalie Peluso
- The School of Psychology, The University of Queensland, St Lucia, QLD, Australia
- Chris I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, USA
- Shruti Japee
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, USA
- Jessica Taubert
- The School of Psychology, The University of Queensland, St Lucia, QLD, Australia
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, USA
4
Küster D, Baker M, Krumhuber EG. PDSTD - The Portsmouth Dynamic Spontaneous Tears Database. Behav Res Methods 2022; 54:2678-2692. [PMID: 34918224; PMCID: PMC9729121; DOI: 10.3758/s13428-021-01752-w]
Abstract
The vast majority of research on human emotional tears has relied on posed and static stimulus materials. In this paper, we introduce the Portsmouth Dynamic Spontaneous Tears Database (PDSTD), a free resource comprising video recordings of 24 female encoders depicting a balanced representation of sadness stimuli with and without tears. Encoders watched a neutral film and a self-selected sad film and reported their emotional experience for 9 emotions. Extending this initial validation, we obtained norming data from an independent sample of naïve observers (N = 91, 45 females) who watched videos of the encoders during three time phases (neutral, pre-sadness, sadness), yielding a total of 72 validated recordings. Observers rated the expressions during each phase on 7 discrete emotions, negative and positive valence, arousal, and genuineness. All data were analyzed by means of general linear mixed modelling (GLMM) to account for sources of random variance. Our results confirm the successful elicitation of sadness, and demonstrate the presence of a tear effect, i.e., a substantial increase in perceived sadness for spontaneous dynamic weeping. To our knowledge, the PDSTD is the first database of spontaneously elicited dynamic tears and sadness that is openly available to researchers. The stimuli can be accessed free of charge via OSF from https://osf.io/uyjeg/?view_only=24474ec8d75949ccb9a8243651db0abf
Affiliation(s)
- Dennis Küster
- Department of Mathematics and Computer Science, University of Bremen, Enrique-Schmidt Str. 5, 28359, Bremen, Germany
- Marc Baker
- Department of Psychology, University of Portsmouth, Portsmouth, UK
- Eva G Krumhuber
- Department of Experimental Psychology, University College London, London, UK
5
Namba S, Sato W, Matsui H. Spatio-Temporal Properties of Amused, Embarrassed, and Pained Smiles. J Nonverbal Behav 2022. [DOI: 10.1007/s10919-022-00404-7]
Abstract
Smiles are universal but nuanced facial expressions that are most frequently used in face-to-face communications, typically indicating amusement but sometimes conveying negative emotions such as embarrassment and pain. Although previous studies have suggested that spatial and temporal properties could differ among these various types of smiles, no study has thoroughly analyzed these properties. This study aimed to clarify the spatiotemporal properties of smiles conveying amusement, embarrassment, and pain using a spontaneous facial behavior database. The results regarding spatial patterns revealed that pained smiles showed less eye constriction and more overall facial tension than amused smiles; no spatial differences were identified between embarrassed and amused smiles. Regarding temporal properties, embarrassed and pained smiles remained in a state of higher facial tension than amused smiles. Moreover, embarrassed smiles showed a more gradual change from tension states to the smile state than amused smiles, and pained smiles had lower probabilities of staying in or transitioning to the smile state compared to amused smiles. By comparing the spatiotemporal properties of these three smile types, this study revealed that the probability of transitioning between discrete states could help distinguish amused, embarrassed, and pained smiles.
6
Sato W, Namba S, Yang D, Nishida S, Ishi C, Minato T. An Android for Emotional Interaction: Spatiotemporal Validation of Its Facial Expressions. Front Psychol 2022; 12:800657. [PMID: 35185697; PMCID: PMC8855677; DOI: 10.3389/fpsyg.2021.800657]
Abstract
Android robots capable of emotional interaction with humans have considerable potential for research applications. While several studies have developed androids that can exhibit human-like emotional facial expressions, few have empirically validated these expressions. To investigate this issue, we developed an android head called Nikola based on human psychology and conducted three studies to test the validity of its facial expressions. In Study 1, Nikola produced single facial actions, which were evaluated in accordance with the Facial Action Coding System. The results showed that 17 action units were appropriately produced. In Study 2, Nikola produced the prototypical facial expressions for six basic emotions (anger, disgust, fear, happiness, sadness, and surprise), and naïve participants labeled photographs of the expressions. The recognition accuracy of all emotions was above chance level. In Study 3, Nikola produced dynamic facial expressions for six basic emotions at four different speeds, and naïve participants evaluated the naturalness of the speed of each expression. The effect of speed differed across emotions, as in previous studies of human expressions. These data validate the spatial and temporal patterns of Nikola's emotional facial expressions, and suggest that it may be useful for future psychological studies and real-life applications.
Affiliation(s)
- Wataru Sato
- Psychological Process Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan
- Field Science Education and Research Center, Kyoto University, Kyoto, Japan
- Shushi Namba
- Psychological Process Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan
- Dongsheng Yang
- Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Shin’ya Nishida
- Graduate School of Informatics, Kyoto University, Kyoto, Japan
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Japan
- Carlos Ishi
- Interactive Robot Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan
- Takashi Minato
- Interactive Robot Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan
7
Abstract
With a shift in interest toward dynamic expressions, numerous corpora of dynamic facial stimuli have been developed over the past two decades. The present research aimed to test existing sets of dynamic facial expressions (published between 2000 and 2015) in a cross-corpus validation effort. For this, 14 dynamic databases were selected that featured facial expressions of the basic six emotions (anger, disgust, fear, happiness, sadness, surprise) in posed or spontaneous form. In Study 1, a subset of stimuli from each database (N = 162) was presented to human observers and machine analysis, yielding considerable variance in emotion recognition performance across the databases. Classification accuracy further varied with perceived intensity and naturalness of the displays, with posed expressions being judged more accurately and as more intense, but less natural, compared to spontaneous ones. Study 2 aimed for a full validation of the 14 databases by subjecting the entire stimulus set (N = 3812) to machine analysis. A FACS-based Action Unit (AU) analysis revealed that facial AU configurations were more prototypical in posed than spontaneous expressions. The prototypicality of an expression in turn predicted emotion classification accuracy, with higher performance observed for more prototypical facial behavior. Furthermore, technical features of each database (i.e., duration, face box size, head rotation, and motion) had a significant impact on recognition accuracy. Together, the findings suggest that existing databases vary in their ability to signal specific emotions, thereby facing a trade-off between realism and ecological validity on the one hand, and expression uniformity and comparability on the other.
8
Krumhuber EG, Hyniewska S, Orlowska A. Contextual effects on smile perception and recognition memory. Curr Psychol 2021. [DOI: 10.1007/s12144-021-01910-5]
Abstract
Most past research has focused on the role played by social context information in emotion classification, such as whether a display is perceived as belonging to one emotion category or another. The current study aims to investigate whether the effect of context extends to the interpretation of emotion displays, i.e. smiles that could be judged either as posed or spontaneous readouts of underlying positive emotion. A between-subjects design (N = 93) was used to investigate the perception and recall of posed smiles, presented together with a happy or polite social context scenario. Results showed that smiles seen in a happy context were judged as more spontaneous than the same smiles presented in a polite context. Also, smiles were misremembered as having more of the physical attributes (i.e., Duchenne marker) associated with spontaneous enjoyment when they appeared in the happy than polite context condition. Together, these findings indicate that social context information is routinely encoded during emotion perception, thereby shaping the interpretation and recognition memory of facial expressions.
9
Küster D, Krumhuber EG, Steinert L, Ahuja A, Baker M, Schultz T. Opportunities and Challenges for Using Automatic Human Affect Analysis in Consumer Research. Front Neurosci 2020; 14:400. [PMID: 32410956; PMCID: PMC7199103; DOI: 10.3389/fnins.2020.00400]
Abstract
The ability to automatically assess emotional responses via contact-free video recording taps into a rapidly growing market aimed at predicting consumer choices. If consumer attention and engagement are measurable in a reliable and accessible manner, relevant marketing decisions could be informed by objective data. Although significant advances have been made in automatic affect recognition, several practical and theoretical issues remain largely unresolved. These concern the lack of cross-system validation, a historical emphasis of posed over spontaneous expressions, as well as more fundamental issues regarding the weak association between subjective experience and facial expressions. To address these limitations, the present paper argues that extant commercial and free facial expression classifiers should be rigorously validated in cross-system research. Furthermore, academics and practitioners must better leverage fine-grained emotional response dynamics, with stronger emphasis on understanding naturally occurring spontaneous expressions, and in naturalistic choice settings. We posit that applied consumer research might be better situated to examine facial behavior in socio-emotional contexts rather than in decontextualized laboratory studies, and highlight how automatic human affect analysis (AHAA) can be successfully employed in this context. Also, facial activity should be considered less as a single outcome variable, and more as a starting point for further analyses. Implications of this approach and potential obstacles that need to be overcome are discussed within the context of consumer research.
Affiliation(s)
- Dennis Küster
- Cognitive Systems Lab, Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany
- Department of Psychology and Methods, Jacobs University Bremen, Bremen, Germany
- Eva G Krumhuber
- Department of Experimental Psychology, University College London, London, United Kingdom
- Lars Steinert
- Cognitive Systems Lab, Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany
- Anuj Ahuja
- Maharaja Surajmal Institute of Technology, Guru Gobind Singh Indraprastha University, New Delhi, India
- Marc Baker
- Centre for Situated Action and Communication, Department of Psychology, University of Portsmouth, Portsmouth, United Kingdom
- Tanja Schultz
- Cognitive Systems Lab, Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany