1
Drummond J, Makdani A, Pawling R, Walker SC. Congenital Anosmia and Facial Emotion Recognition. Physiol Behav 2024; 278:114519. PMID: 38490365. DOI: 10.1016/j.physbeh.2024.114519.
Abstract
Major functions of the olfactory system include guiding ingestion and avoidance of environmental hazards. People with anosmia report reliance on others, for example to check the edibility of food, as their primary coping strategy. Facial expressions are a major source of non-verbal social information that can be used to guide approach and avoidance behaviour. Thus, it is of interest to explore whether a life-long absence of the sense of smell heightens sensitivity to others' facial emotions, particularly those depicting threat. In the present online study, 28 people with congenital anosmia (mean age 43.46) and 24 people reporting no olfactory dysfunction (mean age 42.75) completed a facial emotion recognition task in which emotionally neutral faces (6 different identities) morphed, over 40 stages, to express one of 5 basic emotions: anger, disgust, fear, happiness, or sadness. Results showed that, while the groups did not differ in their ability to identify the final, full-strength emotional expressions, nor in the accuracy of their first response, the congenital anosmia group successfully identified the emotions at significantly lower intensity (i.e., an earlier stage of the morph) than the control group. Exploratory analysis showed this main effect was primarily driven by an advantage in detecting anger and disgust. These findings indicate that the absence of a functioning sense of smell during development leads to compensatory changes in visual social cognition. Future work should explore the neural and behavioural basis of this advantage.
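To make the reported analysis concrete, the following minimal sketch (not the authors' code; all values are fabricated) shows how the key comparison could be computed: take each participant's earliest morph stage with a correct identification and compare group means.

```python
# Hedged sketch of the detection-intensity comparison; data are fabricated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical earliest correct morph stage (1-40), participant x emotion
anosmia = rng.normal(loc=22, scale=4, size=(28, 5))
control = rng.normal(loc=25, scale=4, size=(24, 5))

# average across the five emotions for each participant
anosmia_mean = anosmia.mean(axis=1)
control_mean = control.mean(axis=1)

t, p = stats.ttest_ind(anosmia_mean, control_mean)
print(f"group difference in detection stage: t = {t:.2f}, p = {p:.4f}")
```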
Affiliation(s)
- James Drummond: Research Centre for Brain & Behaviour, School of Psychology, Faculty of Health, Liverpool John Moores University, Liverpool, UK
- Adarsh Makdani: Research Centre for Brain & Behaviour, School of Psychology, Faculty of Health, Liverpool John Moores University, Liverpool, UK
- Ralph Pawling: Research Centre for Brain & Behaviour, School of Psychology, Faculty of Health, Liverpool John Moores University, Liverpool, UK
- Susannah C Walker: Research Centre for Brain & Behaviour, School of Psychology, Faculty of Health, Liverpool John Moores University, Liverpool, UK
2
Hu Y, Chen B, Lin J, Wang Y, Wang Y, Mehlman C, Lipson H. Human-robot facial coexpression. Sci Robot 2024; 9:eadi4724. PMID: 38536902. DOI: 10.1126/scirobotics.adi4724.
Abstract
Large language models are enabling rapid progress in robotic verbal communication, but nonverbal communication is not keeping pace. Physical humanoid robots struggle to express and communicate using facial movement, relying primarily on voice. The challenge is twofold: First, the actuation of an expressively versatile robotic face is mechanically challenging. A second challenge is knowing what expression to generate so that the robot appears natural, timely, and genuine. Here, we propose that both barriers can be alleviated by training a robot to anticipate future facial expressions and execute them simultaneously with a human. Whereas delayed facial mimicry looks disingenuous, facial coexpression feels more genuine because it requires correct inference of the human's emotional state for timely execution. We found that a robot can learn to predict a forthcoming smile about 839 milliseconds before the human smiles and, using a learned inverse kinematic facial self-model, coexpress the smile simultaneously with the human. We demonstrated this ability using a robot face comprising 26 degrees of freedom. We believe that the ability to coexpress simultaneous facial expressions could improve human-robot interaction.
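As an illustration of the anticipate-then-actuate pipeline described above, the sketch below uses ridge regressions as stand-ins for the paper's learned models: one predicts facial activations roughly 800 ms ahead from a window of recent frames, the other plays the role of the inverse self-model mapping target activations to motor commands for a 26-degree-of-freedom face. All data, dimensions, and model choices here are assumptions for illustration.

```python
# Illustrative only: ridge regressions stand in for the paper's neural models.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
T, n_au, n_motor, lag = 2000, 17, 26, 24    # 24 frames ~ 800 ms at 30 fps

au = rng.normal(size=(T, n_au)).cumsum(axis=0) * 0.01  # smooth fake AU traces
motors = rng.normal(size=(T, n_motor))                 # fake motor-command log

# (a) anticipation: window of past AU frames -> AUs `lag` frames ahead
window = 10
X = np.stack([au[i - window:i].ravel() for i in range(window, T - lag)])
y = au[window + lag:T]
predictor = Ridge(alpha=1.0).fit(X, y)

# (b) inverse "self-model": desired AU vector -> motor command
inverse_model = Ridge(alpha=1.0).fit(au, motors)

future_au = predictor.predict(X[-1:])        # anticipate the expression
command = inverse_model.predict(future_au)   # actuate it in time
print(command.shape)                         # (1, 26)
```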
Affiliation(s)
- Yuhang Hu: Creative Machines Laboratory, Department of Mechanical Engineering, Columbia University, New York, NY 10027, USA
- Boyuan Chen: Mechanical Engineering and Materials Department; Department of Electrical and Computer Engineering; and Department of Computer Science, Duke University, Durham, NC 27708, USA
- Jiong Lin: Creative Machines Laboratory, Department of Mechanical Engineering, Columbia University, New York, NY 10027, USA
- Yunzhe Wang: Department of Computer Science, Columbia University, New York, NY 10027, USA
- Yingke Wang: Department of Computer Science, Columbia University, New York, NY 10027, USA
- Cameron Mehlman: Creative Machines Laboratory, Department of Mechanical Engineering, Columbia University, New York, NY 10027, USA
- Hod Lipson: Creative Machines Laboratory, Department of Mechanical Engineering, Columbia University, New York, NY 10027, USA; Data Science Institute, Columbia University, New York, NY 10027, USA
3
Yu H, Lin C, Sun S, Cao R, Kar K, Wang S. Multimodal investigations of emotional face processing and social trait judgment of faces. Ann N Y Acad Sci 2024; 1531:29-48. PMID: 37965931. PMCID: PMC10858652. DOI: 10.1111/nyas.15084.
Abstract
Faces are among the most important visual stimuli that humans perceive in everyday life. While extensive literature has examined emotional processing and social evaluations of faces, most studies have examined either topic using unimodal approaches. In this review, we promote the use of multimodal cognitive neuroscience approaches to study these processes, using two lines of research as examples: ambiguity in facial expressions of emotion and social trait judgment of faces. In the first set of studies, we identified an event-related potential that signals emotion ambiguity using electroencephalography, and we found convergent neural responses to emotion ambiguity using functional neuroimaging and single-neuron recordings. In the second set of studies, we discuss how different neuroimaging and personality-dimensional approaches together provide new insights into social trait judgments of faces. In both sets of studies, we provide an in-depth comparison between neurotypicals and people with autism spectrum disorder. We offer a computational account for the behavioral and neural markers of the differences in face processing between the two groups. Finally, we suggest new practices for studying the emotional processing and social evaluations of faces. All data discussed in the case studies of this review are publicly available.
Affiliation(s)
- Hongbo Yu: Department of Psychological & Brain Sciences, University of California Santa Barbara, Santa Barbara, California, USA
- Chujun Lin: Department of Psychology, University of California San Diego, San Diego, California, USA
- Sai Sun: Frontier Research Institute for Interdisciplinary Sciences, Tohoku University, Sendai, Japan; Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
- Runnan Cao: Department of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
- Kohitij Kar: Department of Biology, Centre for Vision Research, York University, Toronto, Ontario, Canada
- Shuo Wang: Department of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
4
Lampi AJ, Brewer R, Bird G, Jaswal VK. Non-autistic adults can recognize posed autistic facial expressions: Implications for internal representations of emotion. Autism Res 2023; 16:1321-1334. PMID: 37172211. DOI: 10.1002/aur.2938.
Abstract
Autistic people report that their emotional expressions are sometimes misunderstood by non-autistic people. One explanation for these misunderstandings could be that the two neurotypes have different internal representations of emotion: Perhaps they have different expectations about what a facial expression showing a particular emotion looks like. In three well-powered studies with non-autistic college students in the United States (total N = 632), we investigated this possibility. In Study 1, participants recognized most facial expressions posed by autistic individuals more accurately than those posed by non-autistic individuals. Study 2 showed that one reason the autistic expressions were recognized more accurately was that they were better and more intense examples of the intended expressions than the non-autistic expressions. In Study 3, we used a set of expressions created by autistic and non-autistic individuals who could see their faces as they made the expressions, which could allow them to explicitly match the expression they produced with their internal representation of that emotional expression. Here, neither autistic nor non-autistic expressions were consistently recognized more accurately. In short, these findings suggest that differences in internal representations of what emotional expressions look like are unlikely to play a major role in explaining why non-autistic people sometimes misunderstand the emotions autistic people are experiencing.
Affiliation(s)
- Andrew J Lampi: Department of Psychology, University of Virginia, Charlottesville, Virginia, USA
- Rebecca Brewer: Department of Psychology, Royal Holloway University of London, Egham, UK
- Geoffrey Bird: Department of Experimental Psychology, Brasenose College, University of Oxford, Oxford, UK
- Vikram K Jaswal: Department of Psychology, University of Virginia, Charlottesville, Virginia, USA
5
Lukac M, Zhambulova G, Abdiyeva K, Lewis M. Study on emotion recognition bias in different regional groups. Sci Rep 2023; 13:8414. PMID: 37225756. PMCID: PMC10209154. DOI: 10.1038/s41598-023-34932-z.
Abstract
Human-machine communication can be substantially enhanced by the inclusion of high-quality real-time recognition of spontaneous human emotional expressions. However, successful recognition of such expressions can be negatively impacted by factors such as sudden variations in lighting or intentional obfuscation. Reliable recognition can be more substantively impeded by the fact that the presentation and meaning of emotional expressions can vary significantly based on the culture of the expressor and the environment within which the emotions are expressed. For example, an emotion recognition model trained on a regionally specific database collected from North America might fail to recognize standard emotional expressions from another region, such as East Asia. To address the problem of regional and cultural bias in emotion recognition from facial expressions, we propose a meta-model that fuses multiple emotional cues and features. The proposed approach integrates image features, action level units, micro-expressions, and macro-expressions into a multi-cues emotion model (MCAM). Each of the facial attributes incorporated into the model represents a specific category: fine-grained content-independent features, facial muscle movements, short-term facial expressions, and high-level facial expressions. The results of the proposed meta-classifier (MCAM) approach show that (a) the successful classification of regional facial expressions is based on non-sympathetic features, (b) learning the emotional facial expressions of some regional groups can confound the recognition of emotional expressions of other regional groups unless training is done from scratch, and (c) certain facial cues and features of the datasets preclude the design of a perfectly unbiased classifier. From these observations we posit that to learn certain regional emotional expressions, other regional expressions first have to be "forgotten".
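To illustrate the fusion idea in generic terms, here is a hedged sketch of a multi-cue meta-classifier: one base learner per cue family, fused by a meta-learner over their out-of-fold class probabilities. The feature blocks are synthetic and the stacking scheme is an assumption; MCAM's actual features and fusion rule may differ.

```python
# Generic stacking sketch; not the MCAM implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n = 600
cues = {                                  # hypothetical per-cue feature blocks
    "image": rng.normal(size=(n, 64)),
    "aus": rng.normal(size=(n, 17)),
    "micro": rng.normal(size=(n, 12)),
    "macro": rng.normal(size=(n, 7)),
}
y = rng.integers(0, 6, size=n)            # six emotion labels

# out-of-fold class probabilities from each cue-specific base model
meta_features = np.column_stack([
    cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                      cv=5, method="predict_proba")
    for X in cues.values()
])
meta = LogisticRegression(max_iter=1000).fit(meta_features, y)
print("meta-model training accuracy:", meta.score(meta_features, y))
```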
Affiliation(s)
- Martin Lukac: Department of Computer Science, Nazarbayev University, Kabanbay Batyr 53, Astana 010000, Kazakhstan
- Gulnaz Zhambulova: Department of Computer Science, Nazarbayev University, Kabanbay Batyr 53, Astana 010000, Kazakhstan
- Kamila Abdiyeva: Department of Computer Science, Nazarbayev University, Kabanbay Batyr 53, Astana 010000, Kazakhstan
- Michael Lewis: Department of Computer Science, Nazarbayev University, Kabanbay Batyr 53, Astana 010000, Kazakhstan
6
Gkikas S, Tsiknakis M. Automatic assessment of pain based on deep learning methods: A systematic review. Comput Methods Programs Biomed 2023; 231:107365. PMID: 36764062. DOI: 10.1016/j.cmpb.2023.107365.
Abstract
BACKGROUND AND OBJECTIVE: The automatic assessment of pain is vital in designing optimal pain management interventions focused on reducing suffering and preventing the functional decline of patients. In recent years, there has been a surge in the adoption of deep learning algorithms by researchers attempting to encode the multidimensional nature of pain into meaningful features. This systematic review discusses the models, the methods, and the types of data employed in establishing the foundation of a deep learning-based automatic pain assessment system.
METHODS: The systematic review was conducted by identifying original studies through searches of digital libraries, namely Scopus, IEEE Xplore, and the ACM Digital Library. Inclusion and exclusion criteria were applied to select studies of interest published up to December 2021.
RESULTS: A total of 110 publications were identified and categorized by the number of information channels used (unimodal versus multimodal approaches) and by whether the temporal dimension was also used.
CONCLUSIONS: This review demonstrates the importance of multimodal approaches for automatic pain estimation, especially in clinical settings, and reveals that significant improvements are observed when the temporal exploitation of modalities is included. It offers suggestions regarding better-performing deep architectures and learning methods, as well as for adopting robust evaluation protocols and interpretation methods that yield objective and comprehensible results. Furthermore, the review presents the limitations of the available pain databases for optimally supporting deep learning model development, validation, and application as decision-support tools in real-life scenarios.
Affiliation(s)
- Stefanos Gkikas: Department of Electrical and Computer Engineering, Hellenic Mediterranean University, Estavromenos, Heraklion 71410, Greece; Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research & Technology-Hellas, Vassilika Vouton, Heraklion 70013, Greece
- Manolis Tsiknakis: Department of Electrical and Computer Engineering, Hellenic Mediterranean University, Estavromenos, Heraklion 71410, Greece; Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research & Technology-Hellas, Vassilika Vouton, Heraklion 70013, Greece
7
Luong T, Lecuyer A, Martin N, Argelaguet F. A Survey on Affective and Cognitive VR. IEEE Trans Vis Comput Graph 2022; 28:5154-5171. PMID: 34495833. DOI: 10.1109/TVCG.2021.3110459.
Abstract
In Virtual Reality (VR), users can be immersed in emotionally intense and cognitively engaging experiences. Yet, despite strong interest from scholars and a large amount of work associating VR and Affective and Cognitive States (ACS), there is a clear lack of a structured and systematic form in which this research can be classified. We define "Affective and Cognitive VR" to relate to works which (1) induce ACS, (2) recognize ACS, or (3) exploit ACS by adapting virtual environments based on ACS measures. This survey clarifies the different models of ACS, presents the methods for measuring them with their respective advantages and drawbacks in VR, and showcases Affective and Cognitive VR studies done in an Immersive Virtual Environment (IVE) in a non-clinical context. Our article covers the main research lines in Affective and Cognitive VR. We provide a comprehensive list of references with an analysis of 63 research articles and summarize directions for future work.
8
Inagaki M, Ito T, Shinozaki T, Fujita I. Convolutional neural networks reveal differences in action units of facial expressions between face image databases developed in different countries. Front Psychol 2022; 13:988302. DOI: 10.3389/fpsyg.2022.988302.
Abstract
Cultural similarities and differences in facial expressions have been a controversial issue in the field of facial communications. A key step in addressing the debate regarding the cultural dependency of emotional expression (and perception) is to characterize the visual features of specific facial expressions in individual cultures. Here we developed an image analysis framework for this purpose using convolutional neural networks (CNNs) that, through training, learned visual features critical for classification. We analyzed photographs of facial expressions derived from two databases, each developed in a different country (Sweden and Japan), in which corresponding emotion labels were available. While the CNNs reached classification rates far above chance after training on each database, they showed many misclassifications when analyzing faces from the database not used for training. These results suggest that the facial features useful for classifying facial expressions differed between the databases. The selectivity of computational units in the CNNs to action units (AUs) of the face varied across the facial expressions. Importantly, the AU selectivity often differed drastically between the CNNs trained with the different databases. Similarity and dissimilarity of these tuning profiles partly explained the pattern of misclassifications, suggesting that AUs are important for characterizing the facial features and that they differ between the two countries. The AU tuning profiles, especially those reduced by principal component analysis, are compact summaries useful for comparisons across databases, and thus might advance our understanding of universality vs. specificity of facial expressions across cultures.
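The cross-database test described above can be sketched as follows; random tensors stand in for the two face databases, and the small network is illustrative rather than the authors' architecture.

```python
# Sketch: train on "database A", evaluate on "database B"; data are synthetic.
import torch
import torch.nn as nn

def make_db(seed, n=256):
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(n, 1, 48, 48, generator=g)    # fake grayscale face images
    y = torch.randint(0, 7, (n,), generator=g)    # 7 expression labels
    return x, y

def small_cnn():
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(32 * 12 * 12, 7),
    )

def train(model, x, y, epochs=3):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

@torch.no_grad()
def accuracy(model, x, y):
    return (model(x).argmax(1) == y).float().mean().item()

xa, ya = make_db(0)   # e.g., database from one country
xb, yb = make_db(1)   # database from another country

model_a = small_cnn()
train(model_a, xa, ya)
print("A->A:", accuracy(model_a, xa, ya), "A->B:", accuracy(model_a, xb, yb))
```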
9
Zeng Y, Liu X, Cheng L. Facial Emotion Perceptual Tendency in Violent and Non-violent Offenders. J Interpers Violence 2022; 37:NP15058-NP15074. PMID: 33480321. DOI: 10.1177/0886260521989848.
Abstract
All three authors share equal authorship in this paper. Emotion perception has a vital influence on social interaction. Previous studies discussed mainly the relationship between facial emotion perception and aggressive behavior from the perspective of hostile attributional bias and the impaired violence inhibition mechanism. The present study aims to provide new evidence of different emotion perception patterns between violent and non-violent criminal samples through a new indicator derived from the facial emotion recognition test, Facial Emotion Perceptual Tendency (FEPT), calculated by counting the number of times a participant recognizes a set of emotional stimuli as a particular emotion, and to further examine the association between aggressive behaviors and FEPT. In Study 1, 101 violent and 171 non-violent offenders, as well as 81 non-offending control participants, were recruited to complete an emotion recognition task with morphed stimuli. In Study 2, we further recruited 62 non-offending healthy male participants to complete the Buss-Perry Aggression Questionnaire (BPAQ) after the emotion recognition task. Both non-violent and violent offenders were significantly lower in overall accuracy of emotion recognition and in disgust FEPT, but higher in happy FEPT, than non-offending healthy controls. Non-violent offenders had significantly lower fear FEPT than violent offenders and higher anger FEPT than non-offending controls. The results also revealed that the level of physical aggression was positively correlated with fear FEPT and negatively correlated with anger FEPT. The current study demonstrates that FEPT is associated with aggressive behavior and underscores the importance of improving the emotion decoding ability of offenders. The concept of FEPT proposed in this study is also of significance for further exploration of how individuals' tendencies to perceive particular emotions relate to social behaviors.
Affiliation(s)
- Yun Zeng: Sun Yat-sen University, Guangzhou, China
- Xilin Liu: Sun Yat-sen University, Guangzhou, China
10
Miyazaki Y, Kamatani M, Suda T, Wakasugi K, Matsunaga K, Kawahara JI. Effects of wearing a transparent face mask on perception of facial expressions. Iperception 2022; 13:20416695221105910. PMID: 35782828. PMCID: PMC9243485. DOI: 10.1177/20416695221105910.
Abstract
Wearing face masks in public has become the norm in many countries post-2020. Although mask-wearing is effective in controlling infection, it has the negative side effect of occluding the mask wearer's facial expressions. The purpose of this study was to investigate the effects of wearing transparent masks on the perception of facial expressions. Participants were required to categorize the perceived facial emotion of female (Experiment 1) and male (Experiment 2) faces with different facial expressions and to rate the perceived emotional intensity of the faces. Depending on the group to which they were assigned, participants viewed faces presented with a surgical mask, with a transparent mask, or without a mask. The results showed that wearing a surgical mask impaired the reading of facial expressions, with respect both to recognition and to perceived intensity of facial emotions. Specifically, the impairments were robustly observed in fear and happy faces for emotion recognition, and in happy faces for perceived intensity of emotion, in Experiments 1 and 2. However, the impairments were moderated when a transparent mask was worn instead of a surgical mask. During the coronavirus disease 2019 (COVID-19) pandemic, the transparent mask can be used in a range of situations where face-to-face communication is important.
Affiliation(s)
- Yuki Miyazaki: Department of Psychology, Fukuyama University, Fukuyama, Japan
- Miki Kamatani: Graduate School of Letters, Hokkaido University, Sapporo, Japan
- Kaori Matsunaga: Global Research & Development Division, Unicharm Corporation, Kanonji, Japan
- Jun I. Kawahara: Graduate School of Letters, Hokkaido University, Sapporo, Japan
11
Kawakami K, Friesen JP, Fang X. Perceiving ingroup and outgroup faces within and across nations. Br J Psychol 2022; 113:551-574. PMID: 35383905. DOI: 10.1111/bjop.12563.
Abstract
The human face is arguably the most important of all social stimuli because it provides so much valuable information about others. Therefore, one critical factor for successful social communication is the ability to process faces. In general, a wide body of social cognitive research has demonstrated that perceivers are better at extracting information from their own-race compared to other-race faces and that these differences can be a barrier to positive cross-race relationships. The primary objective of the present paper was to provide an overview of how people process faces in diverse contexts, focusing on racial ingroup and outgroup members within one nation and across nations. To achieve this goal, we first broadly describe social cognitive research on categorization processes related to ingroups vs. outgroups. Next, we briefly examine two prominent mechanisms (experience and motivation) that have been used to explain differences in recognizing facial identities and identifying emotions when processing ingroup and outgroup racial faces within nations. Then, we explore research in this domain across nations and cultural explanations, such as norms and practices, that supplement the two proposed mechanisms. Finally, we propose future cross-cultural research that has the potential to help us better understand the role of these key mechanisms in processing ingroup and outgroup faces.
Affiliation(s)
- Xia Fang: Zhejiang University, Hangzhou, China
12
A New Perspective on Assessing Cognition in Children through Estimating Shared Intentionality. J Intell 2022; 10:21. PMID: 35466234. PMCID: PMC9036231. DOI: 10.3390/jintelligence10020021.
Abstract
This theoretical article aims to create a conceptual framework for future research on digital methods for assessing cognition in children through estimating shared intentionality, as distinct from assessment through behavioral markers. It presents a new assessment paradigm based directly on the evaluation of parent-child interaction exchanges (protoconversation), allowing early monitoring of children's developmental trajectories. This literature analysis attempts to understand how cognition is related to emotions in interpersonal dynamics and whether assessing these dynamics reveals cognitive abilities in children. The first part discusses infants' unexpected achievements, drawing on the literature on child development. The analysis supposes that, owing to the caregiver's help under emotional arousal, intentionality could appear in newborns even before children are capable of forming intentions on their own. The emotional bond evokes intentionality in neonates; therefore, they can manifest unexpected achievements while performing them with caregivers. This outcome reflects the emergence of protoconversation in adult-child dyads through shared intentionality. The article presents experimental data from other studies that extend our knowledge about human cognition by showing an increase in coordinated neuronal activity and the acquisition of new knowledge by subjects in the absence of sensory cues. This highlights the contribution of interpersonal interaction to gaining cognition, as already discussed by Vygotsky. The current theoretical study hypothesizes that if shared intentionality promotes cognition from the onset, this interaction modality can also facilitate cognition in older children. Therefore, in a second step, the article analyzes empirical data from recent studies that reported meaningful interaction in mother-infant dyads without sensory cues. It discusses whether an unbiased digital assessment of children's interaction ability is possible before the age at which the typical developmental trajectory implies verbal communication. The article develops knowledge for a digital assessment that can measure the extent of children's ability to acquire knowledge through protoconversation. Such an assessment could signal a lack of communication ability in children even before the typical trajectory of peers' development implies verbal communication.
13
Irvin RL, Klein RJ, Robinson MD. Faster, stronger, and more obligatory? A temporal analysis of negative (versus positive) emotional reactions. J Exp Soc Psychol 2022. DOI: 10.1016/j.jesp.2021.104272.
14
Liu M, Duan Y, Ince RAA, Chen C, Garrod OGB, Schyns PG, Jack RE. Facial expressions elicit multiplexed perceptions of emotion categories and dimensions. Curr Biol 2022; 32:200-209.e6. PMID: 34767768. PMCID: PMC8751635. DOI: 10.1016/j.cub.2021.10.035.
Abstract
Human facial expressions are complex, multi-component signals that can communicate rich information about emotions, including specific categories, such as "anger," and broader dimensions, such as "negative valence, high arousal." An enduring question is how this complex signaling is achieved. Communication theory predicts that multi-component signals could transmit each type of emotion information (specific categories and broader dimensions) via the same or different facial signal components, with implications for elucidating the system and ontology of facial expression communication. We addressed this question using a communication-systems-based method that agnostically generates facial expressions and uses the receiver's perceptions to model the specific facial signal components that represent emotion category and dimensional information to them. First, we derived the facial expressions that elicit the perception of emotion categories (i.e., the six classic emotions plus 19 complex emotions) and dimensions (i.e., valence and arousal) separately, in 60 individual participants. Comparison of these facial signals showed that they share subsets of components, suggesting that specific latent signals jointly represent (i.e., multiplex) categorical and dimensional information. Further examination revealed these specific latent signals and the joint information they represent. Our results, based on white Western participants, same-ethnicity face stimuli, and commonly used English emotion terms, show that facial expressions can jointly represent specific emotion categories and broad dimensions to perceivers via multiplexed facial signal components. Our results provide insights into the ontology and system of facial expression communication and a new information-theoretic framework that can characterize its complexities.
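A loose, toy version of the multiplexing question can be phrased in information-theoretic terms: for each facial signal component, estimate how much information it carries about the perceived category and about perceived valence; a component informative about both would be "multiplexed". The binary activations below are synthetic, and the paper's actual modeling is far more sophisticated.

```python
# Toy mutual-information screen for "multiplexed" components; data synthetic.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(3)
n_trials, n_components = 5000, 10
components = rng.integers(0, 2, size=(n_trials, n_components))  # AU on/off
category = rng.integers(0, 6, size=n_trials)   # perceived emotion category
valence = rng.integers(0, 2, size=n_trials)    # perceived valence (binned)

for j in range(n_components):
    mi_cat = mutual_info_score(category, components[:, j])
    mi_val = mutual_info_score(valence, components[:, j])
    print(f"component {j}: MI(category) = {mi_cat:.4f}, "
          f"MI(valence) = {mi_val:.4f}")
```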
Affiliation(s)
- Meng Liu, Yaocong Duan, Robin A A Ince, Chaona Chen, Oliver G B Garrod, Philippe G Schyns, Rachael E Jack: School of Psychology & Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
15
Human face and gaze perception is highly context specific and involves bottom-up and top-down neural processing. Neurosci Biobehav Rev 2021; 132:304-323. PMID: 34861296. DOI: 10.1016/j.neubiorev.2021.11.042.
Abstract
This review summarizes human perception and processing of face and gaze signals, which are important means of non-verbal social communication. The review highlights that: (1) some evidence suggests that the perception and processing of facial information starts in the prenatal period; (2) the perception and processing of face identity, expression, and gaze direction is highly context specific, with the effects of race and culture a case in point: through experiential shaping and social categorization, culture affects the way in which information on face and gaze is collected and perceived; (3) face and gaze processing occurs in the so-called 'social brain'. Accumulating evidence suggests that the processing of facial identity, facial emotional expression, and gaze involves two parallel and interacting pathways: a fast and crude subcortical route and a slower cortical pathway. The flow of information is bi-directional and includes bottom-up and top-down processing. The cortical networks particularly include the fusiform gyrus, superior temporal sulcus (STS), intraparietal sulcus, temporoparietal junction, and medial prefrontal cortex.
16
Namba S, Kabir RS, Matsuda K, Noguchi Y, Kambara K, Kobayashi R, Shigematsu J, Miyatani M, Nakao T. Fantasy Component of Interpersonal Reactivity is Associated with Empathic Accuracy: Findings from Behavioral Experiments with Implications for Applied Settings. Reading Psychology 2021. DOI: 10.1080/02702711.2021.1939823.
Affiliation(s)
- Shushi Namba: Psychological Process Research Team, Guardian Robot Project, RIKEN Information R&D and Strategy Headquarters, Kyoto, Japan; Department of Psychology, Hiroshima University, Hiroshima, Japan
- Russell Sarwar Kabir: Graduate School of Humanities and Social Sciences, Hiroshima University, Hiroshima, Japan
- Kiyoaki Matsuda: Graduate School of Humanities and Social Sciences, Hiroshima University, Hiroshima, Japan
- Yuka Noguchi: Graduate School of Humanities and Social Sciences, Hiroshima University, Hiroshima, Japan
- Kohei Kambara: Department of Psychology, Hiroshima University, Hiroshima, Japan
- Ryota Kobayashi: Graduate School of Humanities and Social Sciences, Hiroshima University, Hiroshima, Japan
- Jun Shigematsu: Graduate School of Humanities and Social Sciences, Hiroshima University, Hiroshima, Japan
- Makoto Miyatani: Department of Psychology, Hiroshima University, Hiroshima, Japan
- Takashi Nakao: Department of Psychology, Hiroshima University, Hiroshima, Japan
17
An evaluation of the Reading the Mind in the Eyes test's psychometric properties and scores in South Africa: cultural implications. Psychol Res 2021; 86:2289-2300. PMID: 34125281. DOI: 10.1007/s00426-021-01539-w.
Abstract
The 'Reading the Mind in the Eyes' test (RMET) has been translated and tested in many cultural settings. Results indicate that items show variability in meeting the original psychometric testing criteria, and individuals from non-Western cultures score differently on the RMET. As such, questions arise as to the cross-cultural validity of the RMET. This study tested the English version of the RMET, which consists almost exclusively of White faces, at a large South African university to determine its validity in a culturally diverse context. A total of 443 students from a range of demographic backgrounds, selected using simple random sampling, completed the instrument. Of the 36 items, 30 continued to show satisfactory psychometric properties. Further evidence shows significant differences based on race and home language in both overall and item-level scores: Black and African home language respondents show lower RMET scores and different item-level perspectives on certain mental states. The current RMET is not inclusive; it requires stimuli reflecting more races and cultures. This lack of diversity is likely biasing results and psychometric properties, and the continued exclusion of stimuli depicting, for example, Black faces perpetuates a systemically discriminatory instrument. These results have cultural implications for how we interpret and use the RMET.
18
Associations between facial affect recognition and neurocognition in subjects at ultra-high risk for psychosis: A case-control study. Psychiatry Res 2020; 290:112969. PMID: 32450415. DOI: 10.1016/j.psychres.2020.112969.
Abstract
The nature of facial affect recognition (FAR) deficits in subjects at ultra-high risk (UHR) for psychosis remains unclear. In schizophrenia, associations between FAR impairment and poor neurocognition have been demonstrated meta-analytically, but this potential link is understudied in the UHR population. Our study investigated a cross-sectional sample of UHR subjects (n = 22) and healthy controls (n = 50) with the Degraded Facial Affect Recognition (DFAR) Task and a neurocognitive test battery. Our primary aims were (1) to examine associations between FAR and neurocognition in UHR subjects and (2) to examine whether these associations differed between cases and controls. The secondary aim was to examine group differences in FAR and neurocognitive performance. In UHR subjects, FAR was significantly associated with working memory, a neurocognitive composite score, and intelligence, and at trend level with most other assessed neurocognitive domains, with moderate to large effect sizes. There were no significant associations in controls. Associations between FAR and working memory and the neurocognitive composite score differed significantly between cases and controls. UHR subjects did not differ from controls on DFAR Task performance but showed significant deficits in three of six neurocognitive domains. Results may suggest that FAR is associated with working memory in UHR subjects, possibly reflecting a neurocognitive compensatory mechanism.
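A standard way to test whether a correlation differs between two independent groups, as reported above, is Fisher's r-to-z comparison; a minimal sketch follows, with placeholder r values rather than the study's.

```python
# Fisher r-to-z comparison of two independent correlations; r's are placeholders.
import numpy as np
from scipy import stats

def compare_correlations(r1, n1, r2, n2):
    z1, z2 = np.arctanh(r1), np.arctanh(r2)      # Fisher transform
    se = np.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z = (z1 - z2) / se
    return z, 2 * stats.norm.sf(abs(z))          # two-sided p value

z, p = compare_correlations(r1=0.60, n1=22, r2=0.05, n2=50)  # hypothetical
print(f"z = {z:.2f}, p = {p:.4f}")
```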
19
Measuring the evolution of facial 'expression' using multi-species FACS. Neurosci Biobehav Rev 2020; 113:1-11. DOI: 10.1016/j.neubiorev.2020.02.031.
20
Gendron M, Hoemann K, Crittenden AN, Mangola SM, Ruark GA, Barrett LF. Emotion Perception in Hadza Hunter-Gatherers. Sci Rep 2020; 10:3867. PMID: 32123191. PMCID: PMC7051983. DOI: 10.1038/s41598-020-60257-2.
Abstract
It has long been claimed that certain configurations of facial movements are universally recognized as emotional expressions because they evolved to signal emotional information in situations that posed fitness challenges for our hunting and gathering hominin ancestors. Experiments from the last decade have called this particular evolutionary hypothesis into doubt by studying emotion perception in a wider sample of small-scale societies with discovery-based research methods. We replicate these newer findings in the Hadza of Northern Tanzania; the Hadza are semi-nomadic hunters and gatherers who live in tight-knit social units and collect wild foods for a large portion of their diet, making them a particularly relevant population for testing evolutionary hypotheses about emotion. Across two studies, we found little evidence of universal emotion perception. Rather, our findings are consistent with the hypothesis that people infer emotional meaning in facial movements using emotion knowledge embrained by cultural learning.
Affiliation(s)
- Maria Gendron: Department of Psychology, Yale University, New Haven, USA
- Katie Hoemann: Department of Psychology, Northeastern University, Boston, USA
- Gregory A Ruark: U.S. Army Research Institute for the Behavioral and Social Sciences, Foundational Science Research Unit (FSRU), Fort Belvoir, USA
- Lisa Feldman Barrett: Department of Psychology, Northeastern University, Boston, USA; Martinos Center for Biomedical Imaging and Department of Psychiatry, Massachusetts General Hospital, Boston, USA
21
Derya D, Kang J, Kwon DY, Wallraven C. Facial Expression Processing Is Not Affected by Parkinson's Disease, but by Age-Related Factors. Front Psychol 2019; 10:2458. PMID: 31798486. PMCID: PMC6868040. DOI: 10.3389/fpsyg.2019.02458.
Abstract
The question of whether facial expression processing is impaired in Parkinson's disease (PD) patients has so far yielded equivocal results: existing studies have focused on testing expression processing in recognition tasks with static images of six standard, emotional facial expressions. Given that non-verbal communication contains both emotional and non-emotional, conversational expressions and that input to the brain is usually dynamic, here we address the question of potential facial expression processing differences in a novel format: we test a range of conversational and emotional, dynamic facial expressions in three groups: PD patients (n = 20), age- and education-matched older healthy controls (n = 20), and younger adult healthy controls (n = 20). This setup allows us to address both effects of PD and age-related differences. We employed a rating task for all groups in which 12 rating dimensions were used to assess evaluative processing of 27 expression videos from six different actors. We found that ratings overall were consistent across groups, with several rating dimensions (such as arousal or outgoingness) strongly correlated with the expressions' motion energy content as measured by optic flow analysis. Most importantly, we found that the PD group did not differ on any rating dimension from the older healthy control group, indicating highly similar evaluative processing. Both older groups, however, did show significant differences on several rating scales in comparison with the younger adult control group. Looking more closely, older participants rated negative expressions as more positive than younger participants did, but also as less natural, persuasive, empathic, and sincere. We interpret these findings in the context of the positivity effect and in-group processing advantages. Overall, our findings do not support strong processing deficits due to PD but rather point to age-related differences in facial expression processing.
Affiliation(s)
- Dilara Derya: Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- June Kang: Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Do-Young Kwon: Department of Neurology, Korea University Ansan Hospital, Korea University College of Medicine, Ansan-si, South Korea
- Christian Wallraven: Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea; Department of Artificial Intelligence, Korea University, Seoul, South Korea
22
23
Bevilacqua F, Engström H, Backlund P. Game-Calibrated and User-Tailored Remote Detection of Stress and Boredom in Games. Sensors 2019; 19:2877. PMID: 31261716. PMCID: PMC6650833. DOI: 10.3390/s19132877.
Abstract
Emotion detection based on computer vision and remote extraction of user signals commonly relies on stimuli in which users have a passive role with limited possibilities for interaction or emotional involvement, e.g., images and videos. Predictive models are also trained at the group level, which potentially excludes or dilutes key individualities of users. We present a non-obtrusive, multifactorial, user-tailored emotion detection method based on remotely estimated psychophysiological signals. A neural network learns the emotional profile of a user during interaction with calibration games, a novel game-based emotion elicitation material designed to induce emotions while accounting for the particularities of individuals. We evaluate our method in two experiments (n = 20 and n = 62) with a mean classification accuracy of 61.6%, which is statistically significantly better than chance-level classification. Our approach and its evaluation present unique circumstances: our model is trained on one dataset (calibration games) and tested on another (evaluation game), while preserving the natural behavior of subjects and using remote acquisition of signals. The results suggest that our method is feasible and represents a step away from questionnaires and physical sensors toward a non-obtrusive, remote solution for detecting emotions in contexts involving more naturalistic user behavior and games.
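The train-on-calibration, test-on-evaluation-game protocol, together with the comparison against chance, can be sketched as follows; the features are synthetic stand-ins for the remotely estimated psychophysiological signals, and the classifier is a generic choice rather than the authors' network.

```python
# Protocol sketch: fit on calibration-game data, test on evaluation-game data.
import numpy as np
from scipy import stats
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
X_calib = rng.normal(size=(300, 8))      # calibration-game signal features
y_calib = rng.integers(0, 2, size=300)   # e.g., stress vs. boredom labels
X_eval = rng.normal(size=(100, 8))       # evaluation-game signal features
y_eval = rng.integers(0, 2, size=100)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                    random_state=0).fit(X_calib, y_calib)
correct = int((clf.predict(X_eval) == y_eval).sum())

# is accuracy better than the 50% chance level?
p = stats.binomtest(correct, n=len(y_eval), p=0.5,
                    alternative="greater").pvalue
print(f"accuracy = {correct / len(y_eval):.3f}, p vs. chance = {p:.4f}")
```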
Affiliation(s)
- Fernando Bevilacqua: Computer Science, Federal University of Fronteira Sul, Chapecó 89802-112, Brazil
- Henrik Engström: School of Informatics, University of Skövde, 541 28 Skövde, Sweden
- Per Backlund: School of Informatics, University of Skövde, 541 28 Skövde, Sweden
24
Wang Y, Zhu Z, Chen B, Fang F. Perceptual learning and recognition confusion reveal the underlying relationships among the six basic emotions. Cogn Emot 2018; 33:754-767. PMID: 29962270. DOI: 10.1080/02699931.2018.1491831.
Abstract
The six basic emotions (disgust, anger, fear, happiness, sadness, and surprise) have long been considered discrete categories that serve as the primary units of the emotion system. Yet recent evidence indicated underlying connections among them. Here we tested the underlying relationships among the six basic emotions using a perceptual learning procedure. This technique has the potential of causally changing participants' emotion detection ability. We found that training on detecting a facial expression improved the performance not only on the trained expression but also on other expressions. Such a transfer effect was consistently demonstrated between disgust and anger detection as well as between fear and surprise detection in two experiments (Experiment 1A, n = 70; Experiment 1B, n = 42). Notably, training on any of the six emotions could improve happiness detection, while sadness detection could only be improved by training on sadness itself, suggesting the uniqueness of happiness and sadness. In an emotion recognition test using a large sample of Chinese participants (n = 1748), the confusion between disgust and anger as well as between fear and surprise was further confirmed. Taken together, our study demonstrates that the "basic" emotions share some common psychological components, which might be the more basic units of the emotion system.
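Pairwise confusion of the kind reported above (disgust/anger, fear/surprise) can be quantified by symmetrizing a confusion matrix of recognition responses; the sketch below uses simulated responses in which errors fall uniformly on the other categories.

```python
# Confusion-index sketch on simulated recognition responses.
import numpy as np
from sklearn.metrics import confusion_matrix

emotions = ["disgust", "anger", "fear", "happiness", "sadness", "surprise"]
rng = np.random.default_rng(5)
true = rng.integers(0, 6, size=2000)
resp = true.copy()
flip = rng.random(2000) < 0.2            # 20% of trials answered incorrectly,
resp[flip] = (true[flip] + rng.integers(1, 6, size=flip.sum())) % 6  # uniformly

cm = confusion_matrix(true, resp, normalize="true")
pair_confusion = (cm + cm.T) / 2         # symmetric confusion index
i, j = emotions.index("disgust"), emotions.index("anger")
print(f"disgust/anger confusion: {pair_confusion[i, j]:.3f}")
```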
Affiliation(s)
- Yingying Wang: Academy for Advanced Interdisciplinary Studies, Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, China; IDG/McGovern Institute for Brain Research, Peking University, Beijing, China
- Zijian Zhu: Academy for Advanced Interdisciplinary Studies, Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, China; IDG/McGovern Institute for Brain Research, Peking University, Beijing, China
- Biqing Chen: Academy for Advanced Interdisciplinary Studies, Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, China; IDG/McGovern Institute for Brain Research, Peking University, Beijing, China
- Fang Fang: Academy for Advanced Interdisciplinary Studies, Peking-Tsinghua Center for Life Sciences; IDG/McGovern Institute for Brain Research; School of Psychological and Cognitive Sciences; Beijing Key Laboratory of Behavior and Mental Health; and Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, China
25
Fiske AP, Seibt B, Schubert T. The Sudden Devotion Emotion: Kama Muta and the Cultural Practices Whose Function Is to Evoke It. Emotion Review 2017. DOI: 10.1177/1754073917723167.
Abstract
When communal sharing relationships (CSRs) suddenly intensify, people experience an emotion that English speakers may label, depending on context, “moved,” “touched,” “heart-warming,” “nostalgia,” “patriotism,” or “rapture” (although sometimes people use each of these terms for other emotions). We call the emotion kama muta (Sanskrit, “moved by love”). Kama muta evokes adaptive motives to devote and commit to the CSRs that are fundamental to social life. It occurs in diverse contexts and appears to be pervasive across cultures and throughout history, while people experience it with reference to its cultural and contextual meanings. Cultures have evolved diverse practices, institutions, roles, narratives, arts, and artifacts whose core function is to evoke kama muta. Kama muta mediates much of human sociality.
Affiliation(s)
- Alan Page Fiske: Department of Anthropology, University of California, Los Angeles, USA; Department of Psychology, University of Oslo, Norway; CIS-IUL, Instituto Universitário de Lisboa (ISCTE-IUL), Portugal
- Beate Seibt: Department of Psychology, University of Oslo, Norway; CIS-IUL, Instituto Universitário de Lisboa (ISCTE-IUL), Portugal
- Thomas Schubert: Department of Psychology, University of Oslo, Norway; CIS-IUL, Instituto Universitário de Lisboa (ISCTE-IUL), Portugal
26
Jack RE, Crivelli C, Wheatley T. Data-Driven Methods to Diversify Knowledge of Human Psychology. Trends Cogn Sci 2017; 22:1-5. PMID: 29126772. DOI: 10.1016/j.tics.2017.10.002.
Abstract
Psychology aims to understand real human behavior. However, cultural biases in the scientific process can constrain knowledge. We describe here how data-driven methods can relax these constraints to reveal new insights that theories can overlook. To advance knowledge we advocate a symbiotic approach that better combines data-driven methods with theory.
Affiliation(s)
- Rachael E Jack: Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK; School of Psychology, University of Glasgow, Glasgow, UK
- Carlos Crivelli: School of Applied Social Sciences, De Montfort University, Leicester, UK
- Thalia Wheatley: Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
27
Feelings and contexts: socioecological influences on the nonverbal expression of emotion. Curr Opin Psychol 2017; 17:170-175. PMID: 28950965. DOI: 10.1016/j.copsyc.2017.07.025.
Abstract
Despite their relative universality, nonverbal displays of emotion are often sources of cross-cultural misunderstandings. The present article considers the relevance of historical and present socio-ecological contexts, such as heterogeneity of long-history migration, pathogen prevalence, and residential mobility for cross-cultural variation in emotional expression. We review recent evidence linking these constructs to psychological processes and discuss how the findings are relevant to the nonverbal communication of emotion. We hold that socioecological variables, because of their specificity and tractability, provide a promising framework for explaining why different cultures developed varying modes of emotional expression.
28
Affiliation(s)
- Rachael E. Jack: Institute of Neuroscience and Psychology, and School of Psychology, University of Glasgow, Glasgow G12 8QB, United Kingdom
- Philippe G. Schyns: Institute of Neuroscience and Psychology, and School of Psychology, University of Glasgow, Glasgow G12 8QB, United Kingdom
29
Abstract
As a highly social species, humans frequently exchange social information to support almost all facets of life. One of the richest and most powerful tools in social communication is the face, from which observers can quickly and easily make a number of inferences - about identity, gender, sex, age, race, ethnicity, sexual orientation, physical health, attractiveness, emotional state, personality traits, pain or physical pleasure, deception, and even social status. With the advent of the digital economy, increasing globalization and cultural integration, understanding precisely which face information supports social communication and which produces misunderstanding is central to the evolving needs of modern society (for example, in the design of socially interactive digital avatars and companion robots). Doing so is challenging, however, because the face can be thought of as comprising a high-dimensional, dynamic information space, and this impacts cognitive science and neuroimaging, and their broader applications in the digital economy. New opportunities to address this challenge are arising from the development of new methods and technologies, coupled with the emergence of a modern scientific culture that embraces cross-disciplinary approaches. Here, we briefly review one such approach that combines state-of-the-art computer graphics, psychophysics and vision science, cultural psychology and social cognition, and highlight the main knowledge advances it has generated. In the light of current developments, we provide a vision of the future directions in the field of human facial communication within and across cultures.
Affiliation(s)
- Rachael E Jack: School of Psychology, University of Glasgow, Scotland G12 8QB, UK; Institute of Neuroscience and Psychology, University of Glasgow, Scotland G12 8QB, UK
- Philippe G Schyns: School of Psychology, University of Glasgow, Scotland G12 8QB, UK; Institute of Neuroscience and Psychology, University of Glasgow, Scotland G12 8QB, UK
30
Jack RE, Garrod OGB, Schyns PG. Dynamic facial expressions of emotion transmit an evolving hierarchy of signals over time. Curr Biol 2014; 24:187-192. PMID: 24388852. DOI: 10.1016/j.cub.2013.11.064.
Abstract
Designed by biological and social evolutionary pressures, facial expressions of emotion comprise specific facial movements that support a near-optimal system of signaling and decoding. Although facial expressions are highly dynamic, little is known about the form and function of their temporal dynamics. Do facial expressions transmit diagnostic signals simultaneously, to optimize categorization of the six classic emotions, or sequentially, to support a more complex communication system of successive categorizations over time? Our data support the latter. Using a combination of perceptual expectation modeling, information theory, and Bayesian classifiers, we show that dynamic facial expressions of emotion transmit an evolving hierarchy of "biologically basic to socially specific" information over time. Early in the signaling dynamics, facial expressions systematically transmit few, biologically rooted face signals supporting the categorization of fewer elementary categories (e.g., approach/avoidance). Later transmissions comprise more complex signals that support categorization of a larger number of socially specific categories (i.e., the six classic emotions). Here, we show that dynamic facial expressions of emotion provide a sophisticated signaling system, questioning the widely accepted notion that emotion communication comprises six basic (i.e., psychologically irreducible) categories, and instead suggesting four.
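The evolving-hierarchy claim can be caricatured with a toy Bayesian decoder that watches action units arrive over time: an early, shared "avoidance" signal narrows the choice only to a coarse cluster, while later emotion-specific signals separate all six categories. The AU likelihoods below are invented for the sketch and do not come from the paper.

```python
# Toy sequential naive Bayes decoder; likelihoods are invented.
import numpy as np

emotions = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
# P(AU active | emotion) for 4 AUs (rows) at an early vs. a late time point
early = np.vstack([[.8, .8, .8, .1, .1, .1],   # coarse avoidance AU only
                   np.full((3, 6), .5)])       # other AUs uninformative yet
late = np.array([[.9, .1, .1, .1, .2, .1],     # emotion-specific AUs
                 [.1, .9, .1, .1, .2, .1],
                 [.1, .1, .9, .1, .1, .8],
                 [.1, .1, .1, .9, .1, .1]])

def posterior(likelihoods, observed):
    # naive Bayes over independent binary AUs, uniform prior
    p = np.prod(np.where(observed[:, None] == 1,
                         likelihoods, 1 - likelihoods), axis=0)
    return p / p.sum()

obs = np.array([1, 0, 0, 0])         # AU pattern on an "anger" trial
for name, lik in [("early", early), ("late", late)]:
    post = posterior(lik, obs)
    entropy = -(post * np.log2(post)).sum()
    print(name, dict(zip(emotions, post.round(2))), f"{entropy:.2f} bits")
```

Running this, the "early" posterior spreads over the anger/disgust/fear cluster (high entropy), while the "late" posterior concentrates on anger (low entropy), mirroring the coarse-to-specific progression described above.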
Affiliation(s)
- Rachael E Jack: Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland G12 8QB, UK
- Oliver G B Garrod: Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland G12 8QB, UK
- Philippe G Schyns: Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland G12 8QB, UK