1. Schirmer A, Croy I, Liebal K, Schweinberger SR. Non-verbal effecting - animal research sheds light on human emotion communication. Biol Rev Camb Philos Soc 2024. PMID: 39262120. DOI: 10.1111/brv.13140.

Abstract
Cracking the non-verbal "code" of human emotions has been a chief interest of generations of scientists. Yet, despite much effort, a dictionary that clearly maps non-verbal behaviours onto meaning remains elusive. We suggest this is due to an over-reliance on language-related concepts and an under-appreciation of the evolutionary context in which a given non-verbal behaviour emerged. Indeed, work in other species emphasizes non-verbal effects (e.g. affiliation) rather than meaning (e.g. happiness) and differentiates between signals, for which communication benefits both sender and receiver, and cues, for which communication does not benefit senders. Against this backdrop, we develop a "non-verbal effecting" perspective for human research. This perspective extends the typical focus on facial expressions to a broadcasting of multisensory signals and cues that emerge from both social and non-social emotions. Moreover, it emphasizes the consequences or effects that signals and cues have for individuals and their social interactions. We believe that re-directing our attention from verbal emotion labels to non-verbal effects is a necessary step to comprehend scientifically how humans share what they feel.

Affiliation(s)
- Annett Schirmer: Department of Psychology, Innsbruck University, Universitaetsstrasse 5-7, Innsbruck, 6020, Austria
- Ilona Croy: Department of Psychology, Friedrich Schiller University Jena, Am Steiger 3, Jena, 07743, Germany; German Center for Mental Health (DZPG), Partner Site Halle-Jena-Magdeburg, Virchowweg 23, Berlin, 10117, Germany
- Katja Liebal: Institute of Biology, Leipzig University, Talstraße 33, Leipzig, 04103, Germany
- Stefan R Schweinberger: Department of Psychology, Friedrich Schiller University Jena, Am Steiger 3, Jena, 07743, Germany

2. Stein T, Gehrer N, Jusyte A, Scheeff J, Schönenberg M. Perception of emotional facial expressions in aggression and psychopathy. Psychol Med 2024:1-9. PMID: 39246290. DOI: 10.1017/s0033291724001417.

Abstract
BACKGROUND Altered affective state recognition is assumed to be a root cause of aggressive behavior, a hallmark of psychopathologies such as psychopathy and antisocial personality disorder. However, the two most influential models make markedly different predictions regarding the underlying mechanism. According to the integrated emotion system theory (IES), aggression reflects impaired processing of social distress cues such as fearful faces. In contrast, the hostile attribution bias (HAB) model explains aggression with a bias to interpret ambiguous expressions as angry. METHODS In a set of four experiments, we measured processing of fearful and angry facial expressions (compared to neutral and other expressions) in a sample of 65 male imprisoned violent offenders rated using the Hare Psychopathy Checklist-Revised (PCL-R, Hare, R. D. (1991). The psychopathy checklist-revised. Toronto, ON: Multi-Health Systems) and in 60 age-matched control participants. RESULTS There was no evidence for a fear deficit in violent offenders or for an association of psychopathy or aggression with impaired processing of fearful faces. Similarly, there was no evidence for a perceptual bias for angry faces linked to psychopathy or aggression. However, when stimuli were highly ambiguous and explicit labeling of emotions was required, violent offenders showed a categorization bias for anger, and this anger bias correlated with self-reported trait aggression (but not with psychopathy). CONCLUSIONS These results add to a growing literature casting doubt on the notion that fear processing is impaired in aggressive individuals and in psychopathy and provide support for the idea that aggression is related to a hostile attribution bias that emerges from later cognitive, post-perceptual processing stages.

Affiliation(s)
- Timo Stein: Department of Psychology, University of Amsterdam, The Netherlands
- Nina Gehrer: Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health (TüCMH), University of Tübingen, Germany; Department of Clinical Psychology and Psychotherapy, University of Tübingen, Germany
- Aiste Jusyte: Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health (TüCMH), University of Tübingen, Germany
- Jonathan Scheeff: Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health (TüCMH), University of Tübingen, Germany; Department of Clinical Psychology and Psychotherapy, University of Tübingen, Germany
- Michael Schönenberg: Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health (TüCMH), University of Tübingen, Germany; Department of Clinical Psychology and Psychotherapy, University of Tübingen, Germany

3. Kim HA, Kaduthodil J, Strong RW, Germine LT, Cohan S, Wilmer JB. Multiracial Reading the Mind in the Eyes Test (MRMET): An inclusive version of an influential measure. Behav Res Methods 2024; 56:5900-5917. PMID: 38630159. PMCID: PMC11335804. DOI: 10.3758/s13428-023-02323-x.

Abstract
Can an inclusive test of face cognition meet or exceed the psychometric properties of a prominent, less inclusive test? Here, we norm and validate an updated version of the influential Reading the Mind in the Eyes Test (RMET), a clinically significant neuropsychiatric paradigm that has long been used to assess theory of mind and social cognition. Unlike the RMET, our Multiracial Reading the Mind in the Eyes Test (MRMET) incorporates racially inclusive stimuli, nongendered answer choices, ground-truth referenced answers, and more accessible vocabulary. We show, via a series of large datasets, that the MRMET meets or exceeds the RMET across major psychometric indices. Moreover, the reliable signal captured by the two tests is statistically indistinguishable, evidence of full interchangeability. We thus present the MRMET as a high-quality, inclusive, normed, and validated alternative to the RMET, and as a case in point that inclusivity in psychometric tests of face cognition is an achievable aim. The MRMET test and our normative and validation data sets are openly available under a CC-BY-SA 4.0 license at osf.io/ahq6n.

Affiliation(s)
- Heesu Ally Kim: Department of Neuroscience, Wellesley College, Wellesley, MA, USA; Division of Depression and Anxiety Disorders, McLean Hospital, Belmont, MA, USA; Institute for Technology in Psychiatry, McLean Hospital, Belmont, MA, USA
- Jasmine Kaduthodil: Department of Neurosciences and Shiley-Marcos Alzheimer's Disease Research Center, University of California San Diego School of Medicine, La Jolla, CA, USA
- Roger W Strong: Institute for Technology in Psychiatry, McLean Hospital, Belmont, MA, USA; Department of Psychiatry, Harvard Medical School, Belmont, MA, USA; The Many Brains Project, Belmont, MA, USA
- Laura T Germine: Division of Depression and Anxiety Disorders, McLean Hospital, Belmont, MA, USA; Institute for Technology in Psychiatry, McLean Hospital, Belmont, MA, USA; Department of Psychiatry, Harvard Medical School, Belmont, MA, USA
- Sarah Cohan: Department of Population Medicine, Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, MA, USA
- Jeremy B Wilmer: Department of Population Medicine, Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, MA, USA; Department of Psychology, Wellesley College, Wellesley, MA, USA

4. Paletz SBF, Golonka EM, Pandža NB, Stanton G, Ryan D, Adams N, Rytting CA, Murauskaite EE, Buntain C, Johns MA, Bradley P. Social media emotions annotation guide (SMEmo): Development and initial validity. Behav Res Methods 2024; 56:4435-4485. PMID: 37697206. DOI: 10.3758/s13428-023-02195-1.

Abstract
The proper measurement of emotion is vital to understanding the relationship between emotional expression in social media and other factors, such as online information sharing. This work develops a standardized annotation scheme for quantifying emotions in social media using recent emotion theory and research. Human annotators assessed both social media posts and their own reactions to the posts' content on scales of 0 to 100 for each of 20 (Study 1) and 23 (Study 2) emotions. For Study 1, we analyzed English-language posts from Twitter (N = 244) and YouTube (N = 50). Associations between emotion ratings and text-based measures (LIWC, VADER, EmoLex, NRC-EIL, Emotionality) demonstrated convergent and discriminant validity. In Study 2, we tested an expanded version of the scheme in-country, in-language, on Polish (N = 3648) and Lithuanian (N = 1934) multimedia Facebook posts. While the correlations were lower than with English, patterns of convergent and discriminant validity with EmoLex and NRC-EIL still held. Coder reliability was strong across samples, with intraclass correlations of .80 or higher for 10 different emotions in Study 1 and 16 different emotions in Study 2. This research improves the measurement of emotions in social media to include more dimensions, multimedia, and context compared to prior schemes.
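
The reliability criterion reported here (intraclass correlations of .80 or higher) is straightforward to verify computationally. As a minimal sketch, assuming a targets-by-raters matrix of 0-100 ratings, the function below computes ICC(2,1), a common two-way random-effects, absolute-agreement formulation; the data are hypothetical, and this is not the authors' analysis code.

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_targets, k_raters) matrix, e.g. 0-100 emotion intensities
    given to n social-media posts by k coders.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)          # per-post means
    col_means = ratings.mean(axis=0)          # per-coder means

    ss_rows = k * ((row_means - grand) ** 2).sum()   # between-posts
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between-coders
    ss_total = ((ratings - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols            # residual

    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))

    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical example: 3 coders rate "anger" (0-100) on 5 posts.
scores = np.array([[80, 75, 85],
                   [10,  5, 15],
                   [60, 55, 65],
                   [30, 35, 25],
                   [90, 95, 85]], dtype=float)
print(f"ICC(2,1) = {icc_2_1(scores):.2f}")
```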

Affiliation(s)
- Susannah B F Paletz: College of Information Studies, University of Maryland, College Park, MD, USA
- Ewa M Golonka: Applied Research Laboratory for Intelligence and Security (ARLIS), University of Maryland, College Park, MD, USA
- Nick B Pandža: Applied Research Laboratory for Intelligence and Security (ARLIS), University of Maryland, College Park, MD, USA; Program in Second Language Acquisition, University of Maryland, College Park, MD, USA
- Grace Stanton: Department of Criminology, University of Maryland, College Park, MD, USA
- David Ryan: Feminist, Gender, and Sexuality Studies, Stanford University, Stanford, CA, USA
- Nikki Adams: Applied Research Laboratory for Intelligence and Security (ARLIS), University of Maryland, College Park, MD, USA
- C Anton Rytting: Applied Research Laboratory for Intelligence and Security (ARLIS), University of Maryland, College Park, MD, USA
- Cody Buntain: College of Information Studies, University of Maryland, College Park, MD, USA
- Michael A Johns: Applied Research Laboratory for Intelligence and Security (ARLIS), University of Maryland, College Park, MD, USA
- Petra Bradley: Applied Research Laboratory for Intelligence and Security (ARLIS), University of Maryland, College Park, MD, USA

5. Gori M, Schiatti L, Faggioni M, Amadeo MB. Lesson learned from the COVID-19 pandemic: toddlers learn earlier to read emotions with face masks. Front Psychol 2024; 15:1386937. PMID: 39021660. PMCID: PMC11253214. DOI: 10.3389/fpsyg.2024.1386937.

Abstract
In a prior study, we demonstrated that face masks impair the human capability to accurately infer emotions conveyed through facial expressions, at all ages. The impairment posed by face covering was notably more pronounced in children aged between three and five years. In the current study, we conducted the same test as a follow-up one year after the onset of the COVID-19 pandemic, when face masks were still required in almost all everyday situations involving social interaction. The results indicate a noteworthy improvement in recognizing facial expressions with face masks among children aged three to five, compared to the pre-pandemic setting. These findings hold significant importance, suggesting that toddlers effectively mitigated the social challenges associated with mask use: they overcame initial environmental limitations and improved their capability to interpret facial expressions even in the absence of visual cues from the lower part of the face.

Affiliation(s)
- Monica Gori: Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genova, Italy
- Lucia Schiatti: Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genova, Italy
- Monica Faggioni: La rotonda dei bambini, Scuola paritaria della coop. S.a.b.a., Genova, Italy
- Maria Bianca Amadeo: Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genova, Italy

6. Murray T, Binetti N, Venkataramaiyer R, Namboodiri V, Cosker D, Viding E, Mareschal I. Expression perceptive fields explain individual differences in the recognition of facial emotions. Commun Psychol 2024; 2:62. PMID: 39242751. PMCID: PMC11332168. DOI: 10.1038/s44271-024-00111-7.

Abstract
Humans can use the facial expressions of another to infer their emotional state, although it remains unknown how this process occurs. Here we propose the presence of perceptive fields within expression space, analogous to the feature-tuned receptive fields of early visual cortex. We developed genetic algorithms to explore a multidimensional space of possible expressions and identify those that individuals associated with different emotions. We next defined perceptive fields as probabilistic maps within expression space, and found that they could predict the emotions that individuals infer from expressions presented in a separate task. We found profound individual variability in their size, location, and specificity, and that individuals with more similar perceptive fields had similar interpretations of the emotion communicated by an expression, providing possible channels for social communication. Modelling perceptive fields therefore provides a predictive framework in which to understand how individuals infer emotions from facial expressions.
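
As a rough illustration of the genetic-algorithm idea described above, the sketch below evolves a population of points in a hypothetical multidimensional expression space toward whatever expression a simulated observer rates as the best match for a target emotion. The fitness function, dimensionality, and GA parameters are all placeholder assumptions, not the study's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)
DIMS, POP, GENERATIONS = 10, 30, 40   # hypothetical expression-space setup

# Stand-in for an observer's judgment: how well a candidate expression
# matches the (unknown) expression they associate with, say, "fear".
hidden_prototype = rng.uniform(-1, 1, DIMS)
def observer_rating(expr: np.ndarray) -> float:
    return -np.linalg.norm(expr - hidden_prototype)   # higher = better match

population = rng.uniform(-1, 1, (POP, DIMS))
for _ in range(GENERATIONS):
    fitness = np.array([observer_rating(e) for e in population])
    parents = population[np.argsort(fitness)[-POP // 2:]]   # keep best half
    # Crossover: average random parent pairs; mutation: small Gaussian noise.
    pairs = rng.integers(0, len(parents), (POP, 2))
    population = (parents[pairs[:, 0]] + parents[pairs[:, 1]]) / 2
    population += rng.normal(0, 0.05, population.shape)

best = population[np.argmax([observer_rating(e) for e in population])]
print("recovered expression vector:", np.round(best, 2))
```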

Affiliation(s)
- Thomas Murray: Department of Psychology, University of Cambridge, Cambridge, UK; Department of Psychology, Queen Mary University of London, London, UK
- Nicola Binetti: Department of Cognitive Neuroscience, International School for Advanced Studies, Trieste, Italy; Dipartimento di Medicina dei Sistemi, Università degli studi di Roma Tor Vergata, Rome, Italy
- Darren Cosker: Department of Computer Science, University of Bath, Bath, UK; Mixed Reality & AI Lab - Cambridge, Microsoft, Cambridge, UK
- Essi Viding: Division of Psychology and Language Sciences, University College London, London, UK

7. Bress KS, Cascio CJ. Sensorimotor regulation of facial expression - An untouched frontier. Neurosci Biobehav Rev 2024; 162:105684. PMID: 38710425. DOI: 10.1016/j.neubiorev.2024.105684.

Abstract
Facial expression is a critical form of nonverbal social communication which promotes emotional exchange and affiliation among humans. Facial expressions are generated via precise contraction of the facial muscles, guided by sensory feedback. While the neural pathways underlying facial motor control are well characterized in humans and primates, it remains unknown how tactile and proprioceptive information reaches these pathways to guide facial muscle contraction. Thus, despite the importance of facial expressions for social functioning, little is known about how they are generated as a unique sensorimotor behavior. In this review, we highlight current knowledge about sensory feedback from the face and how it is distinct from other body regions. We describe connectivity between the facial sensory and motor brain systems, and call attention to the other brain systems which influence facial expression behavior, including vision, gustation, emotion, and interoception. Finally, we petition for more research on the sensory basis of facial expressions, asserting that incomplete understanding of sensorimotor mechanisms is a barrier to addressing atypical facial expressivity in clinical populations.

Affiliation(s)
- Kimberly S Bress: Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Carissa J Cascio: Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA

8. Cowen AS, Brooks JA, Prasad G, Tanaka M, Kamitani Y, Kirilyuk V, Somandepalli K, Jou B, Schroff F, Adam H, Sauter D, Fang X, Manokara K, Tzirakis P, Oh M, Keltner D. How emotion is experienced and expressed in multiple cultures: a large-scale experiment across North America, Europe, and Japan. Front Psychol 2024; 15:1350631. PMID: 38966733. PMCID: PMC11223574. DOI: 10.3389/fpsyg.2024.1350631.

Abstract
Core to understanding emotion are subjective experiences and their expression in facial behavior. Past studies have largely focused on six emotions and prototypical facial poses, reflecting limitations in scale and narrow assumptions about the variety of emotions and their patterns of expression. We examine 45,231 facial reactions to 2,185 evocative videos, largely in North America, Europe, and Japan, collecting participants' self-reported experiences in English or Japanese and manual and automated annotations of facial movement. Guided by Semantic Space Theory, we uncover 21 dimensions of emotion in the self-reported experiences of participants in Japan, the United States, and Western Europe, and considerable cross-cultural similarities in experience. Facial expressions predict at least 12 dimensions of experience, despite massive individual differences in experience. We find considerable cross-cultural convergence in the facial actions involved in the expression of emotion, along with culture-specific display tendencies: many facial movements differ in intensity in Japan compared to the U.S./Canada and Europe but represent similar experiences. These results quantitatively detail how people in dramatically different cultures experience and express emotion in a fashion that is high-dimensional, categorical, and broadly similar, yet complex.

Affiliation(s)
- Alan S. Cowen: Hume AI, New York, NY, United States; Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
- Jeffrey A. Brooks: Hume AI, New York, NY, United States; Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
- Misato Tanaka: Advanced Telecommunications Research Institute, Kyoto, Japan; Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Yukiyasu Kamitani: Advanced Telecommunications Research Institute, Kyoto, Japan; Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Krishna Somandepalli: Google Research, Mountain View, CA, United States; Department of Electrical Engineering, University of Southern California, Los Angeles, CA, United States
- Brendan Jou: Google Research, Mountain View, CA, United States
- Hartwig Adam: Google Research, Mountain View, CA, United States
- Disa Sauter: Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, Netherlands
- Xia Fang: Zhejiang University, Zhejiang, China
- Kunalan Manokara: Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, Netherlands
- Moses Oh: Hume AI, New York, NY, United States
- Dacher Keltner: Hume AI, New York, NY, United States; Department of Psychology, University of California, Berkeley, Berkeley, CA, United States

9. Mastorogianni ME, Konstanti S, Dratsiou I, Bamidis PD. Masked emotions: does children's affective state influence emotion recognition? Front Psychol 2024; 15:1329070. PMID: 38962230. PMCID: PMC11220387. DOI: 10.3389/fpsyg.2024.1329070.

Abstract
Introduction Facial emotion recognition abilities of children have been the focus of attention across various fields, with implications for communication, social interaction, and human behavior. In response to the COVID-19 pandemic, wearing a face mask in public became mandatory in many countries, hindering social information perception and emotion recognition. Given the importance of visual communication for children's social-emotional development, concerns have been raised on whether face masks could impair their ability to recognize emotions and thereby possibly impact their social-emotional development. Methods To this end, a quasi-experimental study was designed with a two-fold objective: firstly, to identify children's accuracy in recognizing basic emotions (anger, happiness, fear, disgust, sadness) and emotional neutrality when presented with faces under two conditions: one with no masks and another with faces partially covered by various types of masks (medical, nonmedical, surgical, or cloth); secondly, to explore any correlation between children's emotion recognition accuracy and their affective state. Sixty-nine elementary school students aged 6-7 years old from Greece were recruited for this purpose. Following the specific requirements of the second phase of the experiment, students were assigned to one of three distinct affective condition groups: Group A-Happiness, Group B-Sadness, and Group C-Emotional Neutrality. Image stimuli were drawn from the FACES Dataset, and students' affective state was registered using the self-reporting emotion-registration tool, AffectLecture app. Results The study's findings indicate that children can accurately recognize emotions even with masks, although recognizing disgust is more challenging. Additionally, both positive and negative affective state priming promoted systematic inaccuracies in emotion recognition. Most significantly, results showed a negative bias for children in a negative affective state and a positive bias for those in a positive affective state. Discussion Children's affective state significantly influenced their emotion recognition abilities; sad affective states led to lower recognition overall and a bias toward recognizing sad expressions, while happy affective states resulted in a positive bias, improving recognition of happiness and affecting how emotional neutrality and sadness were perceived. In conclusion, this study sheds light on the intriguing dynamics of how face masks affect children's emotion recognition and underlines the profound influence of their affective state.

Affiliation(s)
- Maria Eirini Mastorogianni: MSc in Learning Technologies-Education Sciences, School of Early Childhood Education, School of Electrical and Computer Engineering, School of Medicine, Aristotle University of Thessaloniki (AUTH), Thessaloniki, Greece
- Styliani Konstanti: MSc in Learning Technologies-Education Sciences, School of Early Childhood Education, School of Electrical and Computer Engineering, School of Medicine, Aristotle University of Thessaloniki (AUTH), Thessaloniki, Greece
- Ioanna Dratsiou: Medical Physics and Digital Innovation Laboratory, School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki (AUTH), Thessaloniki, Greece
- Panagiotis D. Bamidis: Medical Physics and Digital Innovation Laboratory, School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki (AUTH), Thessaloniki, Greece

10. Kavanagh E, Whitehouse J, Waller BM. Being facially expressive is socially advantageous. Sci Rep 2024; 14:12798. PMID: 38871925. DOI: 10.1038/s41598-024-62902-6.

Abstract
Individuals vary in how they move their faces in everyday social interactions. In a first large-scale study, we measured variation in dynamic facial behaviour during social interaction and examined dyadic outcomes and impression formation. In Study 1, we recorded semi-structured video calls with 52 participants interacting with a confederate across various everyday contexts. Video clips were rated by 176 independent participants. In Study 2, we examined video calls of 1315 participants engaging in unstructured video-call interactions. Facial expressivity indices were extracted using automated Facial Action Coding System (FACS) analysis, and measures of personality and partner impressions were obtained by self-report. Facial expressivity varied considerably across participants, but little across contexts, social partners or time. In Study 1, more facially expressive participants were better liked, more agreeable, and more successful at negotiating (if also more agreeable). Participants who were more facially competent, readable, and perceived as readable were also better liked. In Study 2, we replicated the findings that facial expressivity was associated with agreeableness and liking by their social partner, and additionally found it to be associated with extraversion and neuroticism. Findings suggest that facial behaviour is a stable individual difference that proffers social advantages, pointing towards an affiliative, adaptive function.
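
A minimal sketch of how an expressivity index could be derived from automated FACS output: assuming per-frame action-unit intensity columns in the style of OpenFace (names like AU01_r) in a hypothetical CSV, one simple index is the mean AU intensity; a variability-based index is an alternative. Neither the file nor the exact index definition comes from the paper.

```python
import pandas as pd

# Per-frame AU intensities exported by an automated FACS coder
# (column names follow the OpenFace "AUxx_r" convention; file is hypothetical).
frames = pd.read_csv("participant_01_aus.csv")
au_cols = [c for c in frames.columns if c.startswith("AU") and c.endswith("_r")]

# One simple expressivity index: average AU intensity across frames and AUs.
expressivity = frames[au_cols].to_numpy().mean()

# A variability-based alternative: how much the face moves over time.
variability = frames[au_cols].std().mean()

print(f"mean-intensity index: {expressivity:.3f}")
print(f"variability index:    {variability:.3f}")
```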

Affiliation(s)
- Eithne Kavanagh: Department of Psychology, Nottingham Trent University, Nottingham, UK
- Jamie Whitehouse: Department of Psychology, Nottingham Trent University, Nottingham, UK
- Bridget M Waller: Department of Psychology, Nottingham Trent University, Nottingham, UK

11. Bakir V, Laffer A, McStay A, Miranda D, Urquhart L. On manipulation by emotional AI: UK adults' views and governance implications. Front Sociol 2024; 9:1339834. PMID: 38912311. PMCID: PMC11190365. DOI: 10.3389/fsoc.2024.1339834.

Abstract
With growing commercial, regulatory and scholarly interest in the use of Artificial Intelligence (AI) to profile and interact with human emotion ("emotional AI"), attention is turning to its capacity for manipulating people, relating to factors impacting a person's decisions and behavior. Given prior social disquiet about AI and profiling technologies, surprisingly little is known about people's views on the benefits and harms of emotional AI technologies, especially their capacity for manipulation. This matters because regulators of AI (such as in the European Union and the UK) wish to stimulate AI innovation, minimize harms and build public trust in these systems, but to do so they should understand the public's expectations. Addressing this, we ascertain UK adults' perspectives on the potential of emotional AI technologies for manipulating people through a two-stage study. Stage One (the qualitative phase) uses design fiction principles to generate adequate understanding and informed discussion in 10 focus groups with diverse participants (n = 46) on how emotional AI technologies may be used in a range of mundane, everyday settings. The focus groups primarily flagged concerns about manipulation in two settings: emotion profiling in social media (involving deepfakes, false information and conspiracy theories), and emotion profiling in child-oriented "emotoys" (where the toy responds to the child's facial and verbal expressions). In both settings, participants express concerns that emotion profiling covertly exploits users' cognitive or affective weaknesses and vulnerabilities; additionally, in the social media setting, participants express concerns that emotion profiling damages people's capacity for rational thought and action. To explore these insights at a larger scale, Stage Two (the quantitative phase) conducts a UK-wide, demographically representative national survey (n = 2,068) on attitudes toward emotional AI. Taking care to avoid leading and dystopian framings of emotional AI, we find that large majorities express concern about the potential for being manipulated through social media and emotoys. In addition to signaling the need for civic protections and practical means of ensuring trust in emerging technologies, the research also leads us to provide a policy-friendly subdivision of what is meant by manipulation through emotional AI and related technologies.

Affiliation(s)
- Vian Bakir: School of History, Law and Social Sciences, Bangor University, Bangor, United Kingdom
- Alexander Laffer: School of Media and Film, University of Winchester, Winchester, United Kingdom
- Andrew McStay: School of History, Law and Social Sciences, Bangor University, Bangor, United Kingdom
- Diana Miranda: Faculty of Social Sciences, University of Stirling, Scotland, United Kingdom
- Lachlan Urquhart: Edinburgh Law School, University of Edinburgh, Scotland, United Kingdom

12. Fischer R, Bailey Y, Shankar M, Safaeinili N, Karl JA, Daly A, Johnson FN, Winter T, Arahanga-Doyle H, Fox R, Abubakar A, Zulman DM. Cultural challenges for adapting behavioral intervention frameworks: A critical examination from a cultural psychology perspective. Clin Psychol Rev 2024; 110:102425. PMID: 38614022. DOI: 10.1016/j.cpr.2024.102425.

Abstract
We introduce the bias and equivalence framework to highlight how concepts, methods, and tools from cultural psychology can contribute to successful cultural adaptation and implementation of behavioral interventions. To situate our contribution, we provide a review of recent cultural adaptation research and existing frameworks. We identified 68 different frameworks that have been cited when reporting cultural adaptations and highlight three major adaptation dimensions that can be used to differentiate adaptations. Regarding effectiveness, we found an average effect size of zr = 0.24 (95% CI 0.20, 0.29) in 24 meta-analyses published since 2014, but also substantive differences across domains and unclear effects of the extent of cultural adaptations. To advance cultural adaptation efforts, we outline a framework that integrates key steps from previous cultural adaptation frameworks and highlight how cultural bias and equivalence considerations, in conjunction with community engagement, help with (a) the diagnosis of behavioral or psychological problems, (b) the identification of possible interventions, (c) the selection of specific mechanisms of behavior change, (d) the specification and documentation of dose effects and thresholds for diagnosis, (e) entry and exit points within intervention programs, and (f) cost-benefit-sustainability discussions. We provide guiding questions that may help researchers when adapting interventions to novel cultural contexts.
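
The zr metric here is a Fisher-transformed correlation, so the summary effect and its confidence bounds can be back-transformed to the familiar r scale with the hyperbolic tangent; a quick check:

```python
import math

# Back-transform Fisher z effect sizes (z_r) to correlations: r = tanh(z_r).
for z in (0.20, 0.24, 0.29):   # lower CI bound, point estimate, upper CI bound
    print(f"z_r = {z:.2f}  ->  r = {math.tanh(z):.3f}")
# The summary effect z_r = 0.24 corresponds to r of roughly 0.24 (tanh is
# near-linear for small values), i.e. a small-to-moderate adaptation effect.
```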

Affiliation(s)
- Ronald Fischer: Institute D'Or for Research and Education, Sao Paulo, Brazil; School of Psychology, Victoria University of Wellington, New Zealand
- Megha Shankar: Division of General Internal Medicine, Department of Medicine, University of California San Diego, USA
- Nadia Safaeinili: Division of Primary Care and Population Health, Stanford School of Medicine, USA
- Johannes A Karl: School of Psychology, Dublin City University, Dublin, Ireland; School of Psychology, Victoria University of Wellington, New Zealand
- Adam Daly: School of Psychology, Dublin City University, Dublin, Ireland
- Taylor Winter: School of Mathematics and Statistics, University of Canterbury, New Zealand
- Ririwai Fox: School of Psychology, University of Waikato, Tauranga, New Zealand
- Amina Abubakar: Aga Khan University, Nairobi, Kenya & Kenya Medical Research Institute/Wellcome Trust Research Programme, Kilifi, Kenya
- Donna Michelle Zulman: Division of Primary Care and Population Health at Stanford University & Center for Innovation to Implementation (Ci2i) at VA Palo Alto, USA

13. Lloyd-Esenkaya V, Russell AJ, St Clair MC. Zoti's Social Toolkit: Developing and piloting novel animated tasks to assess emotional understanding and conflict resolution skills in childhood. Br J Dev Psychol 2024; 42:187-214. PMID: 38323720. DOI: 10.1111/bjdp.12475.

Abstract
Current methods used to investigate emotion inference and conflict resolution knowledge are limited in their suitability for use with children with language disorders due to a reliance on language processing. This is problematic, as nearly 8% of the population are estimated to have developmental language disorder (DLD). In this paper, we present 'Zoti's Social Toolkit', a set of animated scenarios that can be used to assess emotion inference and conflict resolution knowledge. All animated scenarios contain interpersonal situations centred around a gender-neutral alien named Zoti. Four studies investigated the face and construct validity of the stimuli. The final stimulus set can be used with children who may or may not have language difficulties, and is openly available for use in research.

14. Crawford MT, Maymon C, Miles NL, Blackburne K, Tooley M, Grimshaw GM. Emotion in motion: perceiving fear in the behaviour of individuals from minimal motion capture displays. Cogn Emot 2024; 38:451-462. PMID: 38354068. DOI: 10.1080/02699931.2023.2300748.

Abstract
The ability to quickly and accurately recognise emotional states is adaptive for numerous social functions. Although body movements are a potentially crucial cue for inferring emotions, few studies have examined the perception of body movements made in naturalistic emotional states. The current research focuses on the use of body movement information in the perception of fear expressed by targets in a virtual heights paradigm. Across three studies, participants made judgments about the emotional states of others based on motion-capture body movement recordings of those individuals actively engaged in walking a virtual plank at ground level or 80 stories above a city street. Results indicated that participants were reliably able to differentiate between height and non-height conditions (Studies 1 & 2), were more likely to spontaneously describe target behaviour in the height condition as fearful (Study 2), and their fear estimates were highly calibrated with the fear ratings from the targets (Studies 1-3). Findings show that VR height scenarios can induce fearful behaviour and that people can perceive fear in minimal representations of body movement.

Affiliation(s)
- Matthew T Crawford: School of Psychology, Victoria University of Wellington, Wellington, New Zealand
- Christopher Maymon: School of Psychology, Victoria University of Wellington, Wellington, New Zealand
- Nicola L Miles: School of Psychology, Victoria University of Wellington, Wellington, New Zealand
- Katie Blackburne: School of Psychology, Victoria University of Wellington, Wellington, New Zealand
- Michael Tooley: School of Psychology, Victoria University of Wellington, Wellington, New Zealand
- Gina M Grimshaw: School of Psychology, Victoria University of Wellington, Wellington, New Zealand

15. Ryan ZJ, Dodd HF, FitzGibbon L. Uncertain world: How children's curiosity and intolerance of uncertainty relate to their behaviour and emotion under uncertainty. Q J Exp Psychol (Hove) 2024. PMID: 38679795. DOI: 10.1177/17470218241252651.

Abstract
Curiosity and intolerance of uncertainty (IU) are both thought to drive information seeking but may have different affective profiles; curiosity is often associated with positive affective responses to uncertainty and improved learning outcomes, whereas IU is associated with negative affective responses and anxiety. Curiosity and IU have not previously been examined together in children but may both play an important role in understanding how children respond to uncertainty. Our research aimed to examine how individual differences in parent-reported curiosity and IU were associated with behavioural and emotional responses to uncertainty. Children aged 8 to 12 (n = 133) completed a game in which they were presented with an array of buttons on the screen that, when clicked, played neutral or aversive sounds. Children pressed buttons (information seeking) and rated their emotions and worry under conditions of high and low uncertainty. Facial expressions were also monitored for affective responses. Analyses revealed that children sought more information under high uncertainty than low uncertainty trials and that more curious children reported feeling happier. Contrary to expectations, IU and curiosity were not related to the number of buttons children pressed, nor to their self-reported emotion or worry. However, exploratory analyses suggest that children who are high in IU may engage in more information seeking that reflects checking or safety-seeking than those who are low in IU. In addition, our findings suggest that there may be age-related change in the effects of IU on worry, with IU more strongly related to worry in uncertain situations for older children than younger children.

Affiliation(s)
- Zoe J Ryan: School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Helen F Dodd: School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK; Children and Young People's Mental Health Research Collaboration, Exeter Medical School, University of Exeter, Exeter, UK
- Lily FitzGibbon: Division of Psychology, Faculty of Natural Sciences, University of Stirling, Stirling, UK

16. Tessier MH, Mazet JP, Gagner E, Marcoux A, Jackson PL. Facial representations of complex affective states combining pain and a negative emotion. Sci Rep 2024; 14:11686. PMID: 38777852. PMCID: PMC11111784. DOI: 10.1038/s41598-024-62423-2.

Abstract
Pain is rarely communicated alone, as it is often accompanied by emotions such as anger or sadness. Communicating these affective states involves shared representations. However, how an individual conceptually represents these combined states must first be established. The objective of this study was to measure the interaction between pain and negative emotions on two types of facial representations of these states, namely visual (i.e., interactive virtual agents; VAs) and sensorimotor (i.e., one's production of facial configurations). Twenty-eight participants (15 women) read short written scenarios involving only pain or a combined experience of pain and a negative emotion (anger, disgust, fear, or sadness). They produced facial configurations representing these experiences on the faces of the VAs and on their own faces (own production or imitation of the VAs). The results suggest that affective states related to a direct threat to the body (i.e., anger, disgust, and pain) share a similar facial representation, while those that present no immediate danger (i.e., fear and sadness) differ. Although visual and sensorimotor representations of these states provide congruent affective information, they are differently influenced by factors associated with the communication cycle. These findings contribute to our understanding of pain communication in different affective contexts.

Affiliation(s)
- Marie-Hélène Tessier: School of Psychology, Université Laval, Québec City, Canada; Centre for Interdisciplinary Research in Rehabilitation and Social Integration (Cirris), Québec City, Canada; CERVO Brain Research Centre, Québec City, Canada
- Jean-Philippe Mazet: Department of Computer Science and Software Engineering, Université Laval, Québec City, Canada
- Elliot Gagner: School of Psychology, Université Laval, Québec City, Canada; Centre for Interdisciplinary Research in Rehabilitation and Social Integration (Cirris), Québec City, Canada; CERVO Brain Research Centre, Québec City, Canada
- Audrey Marcoux: School of Psychology, Université Laval, Québec City, Canada; Centre for Interdisciplinary Research in Rehabilitation and Social Integration (Cirris), Québec City, Canada; CERVO Brain Research Centre, Québec City, Canada
- Philip L Jackson: School of Psychology, Université Laval, Québec City, Canada; Centre for Interdisciplinary Research in Rehabilitation and Social Integration (Cirris), Québec City, Canada; CERVO Brain Research Centre, Québec City, Canada

17. Plate RC, Powell T, Bedford R, Smith TJ, Bamezai A, Wedderburn Q, Broussard A, Soesanto N, Swetlitz C, Waller R, Wagner NJ. Social threat processing in adults and children: Faster orienting to, but shorter dwell time on, angry faces during visual search. Dev Sci 2024; 27:e13461. PMID: 38054265. PMCID: PMC11229010. DOI: 10.1111/desc.13461.

Abstract
Attention to emotional signals conveyed by others is critical for gleaning information about potential social partners and the larger social context. Children appear to detect social threats (e.g., angry faces) faster than non-threatening social signals (e.g., neutral faces). However, methods that rely on behavioral responses alone are limited in identifying different attentional processes involved in threat detection or responding. To address this limitation, we used a visual search paradigm to assess behavioral (i.e., reaction time to select a target image) and attentional (i.e., eye-tracking fixations, saccadic shifts, and dwell time) responses in children (ages 7-10 years old, N = 42) and adults (ages 18-23 years old, N = 46). In doing so, we compared behavioral responding and attentional detection of and engagement with threatening (i.e., angry and fearful faces) and non-threatening (i.e., happy faces) social signals. Overall, children and adults were faster to detect social threats (i.e., angry faces), but spent a smaller proportion of time dwelling on them and had slower behavioral responses. Findings underscore the importance of combining different measures to parse differences between processing versus responding to social signals across development. RESEARCH HIGHLIGHTS: Children and adults are slower to select angry faces when measured by time to mouse-click but faster to detect angry faces when measured by time to first eye fixation. The use of eye-tracking addresses some limitations of prior visual search tasks with children that rely on behavioral responses alone. Results suggest a shorter time to first fixation on, but subsequently a shorter dwell duration on, social threat in children and adults.
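
The dissociation reported here rests on distinct per-trial measures: behavioural reaction time (mouse click), latency to first fixation on the target face, and the proportion of dwell time on it. A minimal sketch of how such indices could be derived from fixation records follows; the field names and example trial are hypothetical, not the authors' pipeline.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    onset_ms: float        # fixation start, relative to array onset
    duration_ms: float
    roi: str               # which face was fixated, e.g. "angry", "happy"

def trial_metrics(fixations: list[Fixation], target_roi: str, rt_ms: float):
    """Attentional and behavioural indices for one visual-search trial."""
    on_target = [f for f in fixations if f.roi == target_roi]
    first_fix = min((f.onset_ms for f in on_target), default=None)
    total_dwell = sum(f.duration_ms for f in fixations) or 1.0
    dwell_prop = sum(f.duration_ms for f in on_target) / total_dwell
    return {"rt_ms": rt_ms,                       # time to mouse-click
            "first_fixation_ms": first_fix,       # detection speed
            "target_dwell_prop": dwell_prop}      # engagement

# Hypothetical trial: angry target found fast but dwelled on briefly.
trial = [Fixation(180, 120, "angry"), Fixation(320, 260, "happy"),
         Fixation(600, 240, "neutral")]
print(trial_metrics(trial, target_roi="angry", rt_ms=1450.0))
```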

Affiliation(s)
- Rista C Plate: Department of Psychology, University of Pennsylvania, Philadelphia, USA
- Tralucia Powell: Institute of Child Development, University of Minnesota, Minneapolis, USA
- Tim J Smith: Creative Computing Institute, University of the Arts London, London, UK
- Ankur Bamezai: Department of Psychological & Brain Sciences, Boston University, Boston, USA
- Quentin Wedderburn: Department of Psychology, University of Pennsylvania, Philadelphia, USA; Department of Psychology, University of South Carolina, Columbia, South Carolina, USA
- Alexis Broussard: Department of Psychology, University of Pennsylvania, Philadelphia, USA; Department of Psychology, Yale University, New Haven, USA
- Natasha Soesanto: Chobanian & Avedisian School of Medicine, Boston University, Boston, USA
- Caroline Swetlitz: Department of Psychological & Brain Sciences, Boston University, Boston, USA
- Rebecca Waller: Department of Psychology, University of Pennsylvania, Philadelphia, USA
- Nicholas J Wagner: Department of Psychological & Brain Sciences, Boston University, Boston, USA

18. Pantoji MV, Ganjekar S, Mehta UM, Chandra PS, Thippeswamy H. Development of a tool for infant facial emotion recognition (InFER) for postpartum mothers with mental illnesses. Infant Ment Health J 2024; 45:318-327. PMID: 38478551. DOI: 10.1002/imhj.22111.

Abstract
Understanding of deficits in recognition of infant emotions in mothers with mental illnesses is limited by the lack of validated instruments. We present the development and content validation of the infant facial emotion recognition tool (InFER) in India to examine the ability of mothers to detect infants' emotions. A total of 164 images of infant faces in various emotional states were gathered from the parents of four infants (two male and two female; up to 12 months old). The infant emotion in each image was identified by the respective mother. Content validation was carried out by 21 experts, and images with ≥70% concordance among experts were selected. The newly developed tool, InFER, consists of 39 infant images representing the six basic emotions. The tool was then administered to mothers during their postpartum period: 10 healthy mothers and 10 mothers who had remitted from a schizophrenia spectrum disorder, bipolar affective disorder, or major depressive disorder. The mean age and mean years of education of the two groups were comparable (age ~25 years, education ~15 years). A significant difference was found between the two groups in their ability to recognize infant emotions (Mann-Whitney U = 12.5; p = 0.004). InFER is a promising tool for understanding maternal recognition of infant emotions in Indian settings.
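
The group comparison reported above (Mann-Whitney U = 12.5, p = 0.004, with 10 mothers per group) is a standard nonparametric test; as a sketch, the same comparison could be run in SciPy on made-up accuracy scores like these:

```python
from scipy.stats import mannwhitneyu

# Hypothetical InFER scores (number of the 39 images labelled correctly).
healthy  = [34, 36, 33, 35, 37, 32, 36, 34, 35, 33]
remitted = [28, 30, 27, 31, 29, 26, 30, 28, 27, 29]

u_stat, p_value = mannwhitneyu(healthy, remitted, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")
```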

Affiliation(s)
- Makarand V Pantoji: Department of Psychiatry, National Institute of Mental Health and Neuro Sciences (NIMHANS), Bengaluru, India
- Sundarnag Ganjekar: Department of Psychiatry, National Institute of Mental Health and Neuro Sciences (NIMHANS), Bengaluru, India
- Urvakhsh Meherwan Mehta: Department of Psychiatry, National Institute of Mental Health and Neuro Sciences (NIMHANS), Bengaluru, India
- Prabha S Chandra: Department of Psychiatry, National Institute of Mental Health and Neuro Sciences (NIMHANS), Bengaluru, India
- Harish Thippeswamy: Department of Psychiatry, National Institute of Mental Health and Neuro Sciences (NIMHANS), Bengaluru, India

19. Avola D, Cinque L, Mambro AD, Fagioli A, Marini MR, Pannone D, Fanini B, Foresti GL. Spatio-Temporal Image-Based Encoded Atlases for EEG Emotion Recognition. Int J Neural Syst 2024; 34:2450024. PMID: 38533631. DOI: 10.1142/s0129065724500242.

Abstract
Emotion recognition plays an essential role in human-human interaction since it is key to understanding the emotional states and reactions of human beings when they are subject to events and engagements in everyday life. Moving towards human-computer interaction, the study of emotions becomes fundamental because it is at the basis of the design of advanced systems to support a broad spectrum of application areas, including forensic, rehabilitative, educational, and many others. An effective method for discriminating emotions is based on ElectroEncephaloGraphy (EEG) data analysis, which is used as input for classification systems. Collecting brain signals on several channels and for a wide range of emotions produces cumbersome datasets that are hard to manage, transmit, and use in varied applications. In this context, the paper introduces the Empátheia system, which explores a different EEG representation by encoding EEG signals into images prior to their classification. In particular, the proposed system extracts spatio-temporal image encodings, or atlases, from EEG data through the Processing and transfeR of Interaction States and Mappings through Image-based eNcoding (PRISMIN) framework, thus obtaining a compact representation of the input signals. The atlases are then classified through the Empátheia architecture, which comprises branches based on convolutional, recurrent, and transformer models designed and tuned to capture the spatial and temporal aspects of emotions. Extensive experiments were conducted on the public Shanghai Jiao Tong University (SJTU) Emotion EEG Dataset (SEED), where the proposed system substantially reduced the input data size while retaining high performance. The results obtained highlight the effectiveness of the proposed approach and suggest new avenues for data representation in emotion recognition from EEG signals.
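
The encoding step at the heart of this approach can be illustrated generically: slice a multichannel EEG recording into windows and render each window's channels-by-time matrix as an 8-bit grayscale image for an image classifier. The sketch below is a minimal stand-in for that idea, not the PRISMIN framework itself; the shapes and parameters are assumptions (62 channels echoes SEED, but the sampling rate and window sizes are arbitrary).

```python
import numpy as np

def eeg_to_images(eeg: np.ndarray, win: int, hop: int) -> np.ndarray:
    """Encode a (channels, samples) EEG recording as a stack of 8-bit
    channels x time grayscale images, one per sliding window."""
    chans, samples = eeg.shape
    images = []
    for start in range(0, samples - win + 1, hop):
        seg = eeg[:, start:start + win]
        # Per-window min-max normalisation to the 0-255 pixel range.
        lo, hi = seg.min(), seg.max()
        pix = (seg - lo) / (hi - lo + 1e-12) * 255.0
        images.append(pix.astype(np.uint8))
    return np.stack(images)            # (n_windows, channels, win)

# Hypothetical recording: 62 channels, 10 s at an assumed 200 Hz.
rng = np.random.default_rng(1)
eeg = rng.standard_normal((62, 2000))
atlas = eeg_to_images(eeg, win=200, hop=100)   # 1 s windows, 50% overlap
print(atlas.shape)   # (19, 62, 200) -> ready for a CNN/transformer classifier
```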

Affiliation(s)
- Danilo Avola: Department of Computer Science, Sapienza University of Rome, Via Salaria 113, Rome 00198, Italy
- Luigi Cinque: Department of Computer Science, Sapienza University of Rome, Via Salaria 113, Rome 00198, Italy
- Angelo Di Mambro: Department of Computer Science, Sapienza University of Rome, Via Salaria 113, Rome 00198, Italy
- Alessio Fagioli: Department of Computer Science, Sapienza University of Rome, Via Salaria 113, Rome 00198, Italy
- Marco Raoul Marini: Department of Computer Science, Sapienza University of Rome, Via Salaria 113, Rome 00198, Italy
- Daniele Pannone: Department of Computer Science, Sapienza University of Rome, Via Salaria 113, Rome 00198, Italy
- Bruno Fanini: Institute of Heritage Science, National Research Council, Area della Ricerca Roma 1, SP35d, 9, Montelibretti 00010, Italy
- Gian Luca Foresti: Department of Computer Science, Mathematics and Physics, University of Udine, Via delle Scienze 206, Udine 33100, Italy

20. Cheng X, Wang S, Wei H, Sun X, Xin L, Li L, Li C, Wang Z. Application of Stereo Digital Image Correlation on Facial Expressions Sensing. Sensors (Basel) 2024; 24:2450. PMID: 38676067. PMCID: PMC11054127. DOI: 10.3390/s24082450.

Abstract
Facial expression is an important reflection of human emotion and represents a dynamic deformation process. Analyzing facial movements is an effective means of understanding expressions. However, methods capable of analyzing the dynamic details of full-field deformation in expressions are currently lacking. In this paper, in order to enable effective dynamic analysis of expressions, a classic optical measuring method called stereo digital image correlation (stereo-DIC or 3D-DIC) is employed to analyze the deformation fields of facial expressions. The forming processes of six basic facial expressions of certain experimental subjects are analyzed through the displacement and strain fields calculated by 3D-DIC. The displacement fields of each expression exhibit strong consistency with the action units (AUs) defined by the classical Facial Action Coding System (FACS). Moreover, it is shown that the gradient of the displacement, i.e., the strain fields, offers special advantages in characterizing facial expressions due to their localized nature, effectively sensing the nuanced dynamics of facial movements. By processing extensive data, this study identifies two characteristic regions in the six basic expressions: one where deformation begins and one where deformation is most severe. Based on these two regions, the temporal evolution of the six basic expressions is discussed. The presented investigations demonstrate the superior performance of 3D-DIC in the quantitative analysis of facial expressions. The proposed analytical strategy might have potential value in objectively characterizing human expressions based on quantitative measurement.
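
The remark that the strain fields are the gradient of the displacement fields translates directly into code: given full-field in-plane displacements u and v on a regular grid, such as DIC output, the small-strain components follow from numerical differentiation. The sketch below is a generic small-strain computation with NumPy, not the authors' DIC software; the displacement fields are synthetic.

```python
import numpy as np

def small_strain_fields(u, v, dx=1.0, dy=1.0):
    """In-plane small-strain components from displacement fields.

    u, v: (ny, nx) arrays of x- and y-displacements on a regular grid
    (e.g. full-field DIC output); dx, dy: grid spacing.
    """
    du_dy, du_dx = np.gradient(u, dy, dx)   # rows vary in y, columns in x
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    exx = du_dx                             # normal strain along x
    eyy = dv_dy                             # normal strain along y
    exy = 0.5 * (du_dy + dv_dx)             # shear strain
    return exx, eyy, exy

# Hypothetical smooth displacement field over a 100 x 100 grid.
y, x = np.mgrid[0:100, 0:100].astype(float)
u = 1e-3 * x * y          # displacement in x
v = 5e-4 * x**2           # displacement in y
exx, eyy, exy = small_strain_fields(u, v)
print(exx.mean(), eyy.mean(), exy.max())
```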

Affiliation(s)
- Xuanshi Cheng: School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Shibin Wang: School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Huixin Wei: School of Civil Engineering and Architecture, Nanchang University, Nanchang 330000, China
- Xin Sun: School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Lipan Xin: School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Linan Li: School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Chuanwei Li: School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Zhiyong Wang: School of Mechanical Engineering, Tianjin University, Tianjin 300350, China

21. Jamali R, Generosi A, Villafan JY, Mengoni M, Pelagalli L, Battista G, Martarelli M, Chiariotti P, Mansi SA, Arnesano M, Castellini P. Facial Expression Recognition for Measuring Jurors' Attention in Acoustic Jury Tests. Sensors (Basel) 2024; 24:2298. PMID: 38610510. PMCID: PMC11014261. DOI: 10.3390/s24072298.

Abstract
The perception of sound greatly impacts users' emotional states, expectations, affective relationships with products, and purchase decisions. Consequently, assessing the perceived quality of sounds through jury testing is crucial in product design. However, the subjective nature of jurors' responses may limit the accuracy and reliability of jury test outcomes. This research explores the utility of facial expression analysis in jury testing to enhance response reliability and mitigate subjectivity. The research hypothesis is validated through several quantitative indicators, such as the correlation between jurors' emotional responses and valence values, the accuracy of the jury tests, and the disparities between jurors' questionnaire responses and the emotions measured by facial expression recognition (FER). Specifically, analysis of attention levels across different states reveals a discernible decrease: 70 percent of jurors exhibited reduced attention in the 'distracted' state and 62 percent in the 'heavy-eyed' state. Regression analysis, in turn, shows that the correlation between jurors' valence and their choices in the jury test increases when only data from attentive jurors are considered. This correlation highlights the potential of facial expression analysis as a reliable tool for assessing juror engagement. The findings suggest that integrating facial expression recognition can enhance the accuracy of jury testing in product design by providing a more dependable assessment of user responses and deeper insights into participants' reactions to auditory stimuli.
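A minimal sketch of the attention-filtered correlation analysis the abstract reports, under assumed data: per-trial FER valence, a binary jury choice, and an attention flag (all column names hypothetical). With a binary choice, Pearson's r is the point-biserial correlation.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical per-trial data for one juror panel.
df = pd.DataFrame({
    "valence":   [0.2, 0.7, -0.1, 0.5, 0.6, -0.3],
    "choice":    [0,   1,    0,   1,   1,   0],        # juror's preferred sound
    "attentive": [True, True, False, True, True, False],
})

r_all, _ = pearsonr(df["valence"], df["choice"])
att = df[df["attentive"]]
r_att, _ = pearsonr(att["valence"], att["choice"])
print(f"all trials: r={r_all:.2f}; attentive trials only: r={r_att:.2f}")
```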
Affiliation(s)
- Reza Jamali: Department of Industrial Engineering and Mathematical Sciences, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
- Andrea Generosi: Department of Industrial Engineering and Mathematical Sciences, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
- Josè Yuri Villafan: Department of Industrial Engineering and Mathematical Sciences, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
- Maura Mengoni: Department of Industrial Engineering and Mathematical Sciences, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
- Leonardo Pelagalli: Department of Industrial Engineering and Mathematical Sciences, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
- Gianmarco Battista: Department of Engineering and Architecture, Università di Parma, Parco Area delle Scienze 181/A, 43124 Parma, Italy
- Milena Martarelli: Department of Industrial Engineering and Mathematical Sciences, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
- Paolo Chiariotti: Department of Mechanical Engineering, Politecnico di Milano, Via Privata Giuseppe La Masa, 1, 20156 Milano, Italy
- Silvia Angela Mansi: Università Telematica eCampus, via Isimbardi 10, 22060 Novedrate, Italy
- Marco Arnesano: Università Telematica eCampus, via Isimbardi 10, 22060 Novedrate, Italy
- Paolo Castellini: Department of Industrial Engineering and Mathematical Sciences, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
22
Robles M, Ramos-Grille I, Hervás A, Duran-Tauleria E, Galiano-Landeira J, Wormwood JB, Falter-Wagner CM, Chanes L. Reduced stereotypicality and spared use of facial expression predictions for social evaluation in autism. Int J Clin Health Psychol 2024; 24:100440. [PMID: 38426036 PMCID: PMC10901834 DOI: 10.1016/j.ijchp.2024.100440] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2023] [Accepted: 01/08/2024] [Indexed: 03/02/2024] Open
Abstract
Background/Objective Autism has been investigated through traditional emotion recognition paradigms that merely assess accuracy, constraining how potential differences between autistic and control individuals may be observed, identified, and described. Moreover, understanding how emotional facial expression information is used for social functioning in autism is relevant to a deeper understanding of the condition. Method Adult autistic individuals (n = 34) and adult control individuals (n = 34) were assessed with a social perception behavioral paradigm probing facial expression predictions and their impact on social evaluation. Results Autistic individuals held less stereotypical predictions than controls. Importantly, despite such differences in predictions, the use of those predictions for social evaluation did not differ significantly between groups: autistic individuals relied on their predictions to evaluate others to the same extent as controls. Conclusions These results help clarify how autistic individuals perceive social stimuli and evaluate others, revealing a deviation from stereotypicality beyond which social evaluation strategies may be intact.
Affiliation(s)
- Marta Robles: Department of Clinical and Health Psychology, Universitat Autònoma de Barcelona, Barcelona, Spain; Department of Psychiatry and Psychotherapy, LMU University Hospital, LMU Munich, Germany
- Irene Ramos-Grille: Department of Clinical and Health Psychology, Universitat Autònoma de Barcelona, Barcelona, Spain; Division of Mental Health, Consorci Sanitari de Terrassa, Terrassa, Catalunya, Spain
- Amaia Hervás: Child and Adolescent Mental Health Service, Hospital Universitari Mútua de Terrassa, Barcelona, Spain; Institut Global d'Atenció Integral del Neurodesenvolupament (IGAIN), Barcelona, Spain
- Enric Duran-Tauleria: Institut Global d'Atenció Integral del Neurodesenvolupament (IGAIN), Barcelona, Spain
- Jordi Galiano-Landeira: Department of Clinical and Health Psychology, Universitat Autònoma de Barcelona, Barcelona, Spain
- Lorena Chanes: Department of Clinical and Health Psychology, Universitat Autònoma de Barcelona, Barcelona, Spain; Institut de Neurociències, Universitat Autònoma de Barcelona, Barcelona, Spain; Serra Húnter Programme, Generalitat de Catalunya, Barcelona, Spain
23
Zhang C, Su L, Li S, Fu Y. Differential Brain Activation for Four Emotions in VR-2D and VR-3D Modes. Brain Sci 2024; 14:326. [PMID: 38671977 PMCID: PMC11048237 DOI: 10.3390/brainsci14040326] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2024] [Revised: 03/10/2024] [Accepted: 03/25/2024] [Indexed: 04/28/2024] Open
Abstract
Similar to traditional imaging, virtual reality (VR) imagery encompasses nonstereoscopic (VR-2D) and stereoscopic (VR-3D) modes. Russell's emotional model has been extensively studied in traditional 2D and VR-3D modes, but comparative research between VR-2D and VR-3D modes remains limited. In this study, we investigated whether Russell's emotional model elicits stronger brain activation in VR-3D mode than in VR-2D mode. In an experiment covering four emotional categories (high arousal-high pleasure (HAHV), high arousal-low pleasure (HALV), low arousal-low pleasure (LALV), and low arousal-high pleasure (LAHV)), EEG signals were collected from 30 healthy undergraduate and graduate students while they watched videos in both VR modes. Power spectral density (PSD) computations first revealed distinct brain activation patterns for the different emotional states across the two modes, with VR-3D videos inducing significantly higher brainwave energy, primarily in the frontal, temporal, and occipital regions. Differential entropy (DE) feature sets, selected via a dual ten-fold cross-validated support vector machine (SVM) classifier, then yielded satisfactory classification accuracy, which was superior in the VR-3D mode. The paper subsequently presents a deep learning-based EEG emotion recognition framework that exploits the frequency, spatial, and temporal information in EEG data to improve recognition accuracy. The contribution of each individual feature to the prediction probabilities is examined through Shapley-value-based machine-learning interpretability. Overall, the study reveals notable differences in brain activation for identical emotions between the two modes, with activation more pronounced in VR-3D mode.
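A minimal sketch of the DE-plus-SVM pipeline named in the abstract, assuming synthetic data in place of the recorded EEG: differential entropy of a band-passed, approximately Gaussian signal reduces to 0.5 ln(2*pi*e*variance), and the features feed a ten-fold cross-validated SVM. Sampling rate, channel count, and bands here are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def de_feature(x, fs, band):
    """Differential entropy of one EEG channel within a frequency band,
    assuming the band-passed signal is approximately Gaussian:
    DE = 0.5 * ln(2 * pi * e * variance)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    xf = filtfilt(b, a, x)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(xf))

# Synthetic stand-in: trials x channels x samples, with binary labels
# (e.g., HAHV vs. LALV).
fs, n_trials, n_ch = 250, 120, 32
rng = np.random.default_rng(0)
eeg = rng.standard_normal((n_trials, n_ch, fs * 4))
y = rng.integers(0, 2, n_trials)

bands = [(4, 8), (8, 13), (13, 30), (30, 45)]  # theta, alpha, beta, gamma
X = np.array([[de_feature(trial[ch], fs, b) for ch in range(n_ch) for b in bands]
              for trial in eeg])

print("10-fold accuracy:", cross_val_score(SVC(kernel="rbf"), X, y, cv=10).mean())
```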
Affiliation(s)
- Lei Su: Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
24
Goel S, Jara-Ettinger J, Ong DC, Gendron M. Face and context integration in emotion inference is limited and variable across categories and individuals. Nat Commun 2024; 15:2443. [PMID: 38499519 PMCID: PMC10948792 DOI: 10.1038/s41467-024-46670-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2022] [Accepted: 03/05/2024] [Indexed: 03/20/2024] Open
Abstract
The ability to make nuanced inferences about other people's emotional states is central to social functioning. While emotion inferences can be sensitive to both facial movements and the situational context in which they occur, relatively little is understood about when these two sources of information are integrated across emotion categories and individuals. In a series of studies using one archival and five empirical datasets, we demonstrate that people are capable of integrating the two, but that emotion inferences are just as well (and sometimes better) captured by knowledge of the situation alone, whereas isolated facial cues are insufficient. Further, people integrate facial cues more for categories for which they most frequently encounter facial expressions in everyday life (e.g., happiness). People are also moderately stable over time in their reliance on situational cues and in their integration of cues, and those who reliably rely more on situational cues also have better situated emotion knowledge. These findings underscore the importance of studying variability in the reliance on and integration of cues.
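One way to make the face-versus-situation comparison concrete is a model comparison like the sketch below (a simplification, not the authors' analysis): predict emotion inferences from face cues alone, situation cues alone, or both, and compare cross-validated fit. All variables are simulated stand-ins.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
face = rng.standard_normal((n, 1))       # stand-in for rated facial cues
situation = rng.standard_normal((n, 1))  # stand-in for rated situational cues
# Simulate inferences driven mostly by the situation.
inference = 0.2 * face[:, 0] + 0.8 * situation[:, 0] + rng.normal(0, 0.5, n)

for name, X in [("face only", face),
                ("situation only", situation),
                ("integrated", np.hstack([face, situation]))]:
    r2 = cross_val_score(LinearRegression(), X, inference, cv=5, scoring="r2").mean()
    print(f"{name:15s} cross-validated R^2 = {r2:.2f}")
```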
Affiliation(s)
- Srishti Goel: Department of Psychology, Yale University, 100 College St, New Haven, CT, USA
- Julian Jara-Ettinger: Department of Psychology, Yale University, 100 College St, New Haven, CT, USA; Wu Tsai Institute, Yale University, 100 College St, New Haven, CT, USA
- Desmond C Ong: Department of Psychology, The University of Texas at Austin, 108 E Dean Keeton St, Austin, TX, USA
- Maria Gendron: Department of Psychology, Yale University, 100 College St, New Haven, CT, USA
25
Brooks JA, Kim L, Opara M, Keltner D, Fang X, Monroy M, Corona R, Tzirakis P, Baird A, Metrick J, Taddesse N, Zegeye K, Cowen AS. Deep learning reveals what facial expressions mean to people in different cultures. iScience 2024; 27:109175. [PMID: 38433918 PMCID: PMC10906517 DOI: 10.1016/j.isci.2024.109175] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2023] [Revised: 09/05/2023] [Accepted: 02/06/2024] [Indexed: 03/05/2024] Open
Abstract
Cross-cultural studies of the meaning of facial expressions have largely focused on judgments of small sets of stereotypical images by small numbers of people. Here, we used large-scale data collection and machine learning to map what facial expressions convey in six countries. Using a mimicry paradigm, 5,833 participants formed facial expressions found in 4,659 naturalistic images, resulting in 423,193 participant-generated facial expressions. In their own language, participants also rated each expression in terms of 48 emotions and mental states. A deep neural network tasked with predicting the culture-specific meanings people attributed to facial movements while ignoring physical appearance and context discovered 28 distinct dimensions of facial expression, with 21 dimensions showing strong evidence of universality and the remainder showing varying degrees of cultural specificity. These results capture the underlying dimensions of the meanings of facial expressions within and across cultures in unprecedented detail.
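A drastically simplified stand-in for the kind of model the abstract describes: a multi-output regressor mapping facial movement descriptors to mean ratings on many emotion scales. Feature counts and data are assumptions for illustration only; the actual study used a deep neural network trained on hundreds of thousands of participant-generated expressions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n, n_moves, n_scales = 2000, 30, 48  # expressions, movement features, rating scales
X = rng.random((n, n_moves))         # stand-in for facial movement annotations
W = rng.standard_normal((n_moves, n_scales))
Y = X @ W + rng.normal(0, 0.1, (n, n_scales))  # stand-in for mean ratings

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X_tr, Y_tr)
print("held-out R^2:", net.score(X_te, Y_te))
```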
Affiliation(s)
- Jeffrey A. Brooks: Research Division, Hume AI, New York, NY 10010, USA; Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
- Lauren Kim: Research Division, Hume AI, New York, NY 10010, USA
- Dacher Keltner: Research Division, Hume AI, New York, NY 10010, USA; Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
- Xia Fang: Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, Zhejiang, China
- Maria Monroy: Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
- Rebecca Corona: Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
- Alice Baird: Research Division, Hume AI, New York, NY 10010, USA
- Alan S. Cowen: Research Division, Hume AI, New York, NY 10010, USA; Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
26
Bruin J, Stuldreher IV, Perone P, Hogenelst K, Naber M, Kamphuis W, Brouwer AM. Detection of arousal and valence from facial expressions and physiological responses evoked by different types of stressors. FRONTIERS IN NEUROERGONOMICS 2024; 5:1338243. [PMID: 38559665 PMCID: PMC10978716 DOI: 10.3389/fnrgo.2024.1338243] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/14/2023] [Accepted: 02/29/2024] [Indexed: 04/04/2024]
Abstract
Automatically detecting mental states such as stress from video images of the face could support evaluating stress responses in applicants for high-risk jobs or contribute to timely stress detection in challenging operational settings (e.g., aircrew, command center operators). Challenges in automatically estimating mental state include generalizing models across contexts and across participants. We here aim to create robust models by training them on data from different contexts and by including physiological features. Fifty-one participants were exposed to different types of stressors (cognitive, social evaluative, and startle) and to baseline variants of the stressors. Video, electrocardiogram (ECG), electrodermal activity (EDA), and self-reports (arousal and valence) were recorded. Logistic regression models classified high versus low arousal and valence across participants, where "high" and "low" were defined relative to the center of the rating scale. We evaluated the accuracy of different models: models trained and tested within a specific context (either a baseline or stressor variant of a task), an intermediate context (baseline and stressor variants of a task), or a general context (all conditions together). For each of these variants, models used only the video data, only the physiological data, or both. We found that all models (video, physiological, and video-physio) could successfully distinguish between high- and low-rated arousal and valence, though performance tended to be better for (1) arousal than valence, (2) the specific context than the intermediate and general contexts, and (3) video-physio data than video or physiological data alone. Automatic feature selection resulted in the inclusion of 3-20 features, and models based on video-physio data usually drew on features from video, ECG, and EDA. Still, the performance of video-only models approached that of video-physio models. Arousal and valence ratings made by three experienced human observers on part of the video data did not match self-reports. In sum, we showed that it is possible to automatically monitor arousal and valence even in relatively general contexts, and better than humans can in the given circumstances, and that non-contact video images of faces capture an important part of the information, which has practical advantages.
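A minimal sketch of the classification setup described above, under assumed data: combined video and physiological features per observation, with labels binarized at the midpoint of the rating scale, classified by cross-validated logistic regression. Feature counts and the generative model are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 51 * 6  # participants x conditions (illustrative)
# Hypothetical features: facial action unit intensities, heart rate, EDA level.
X = rng.standard_normal((n, 22))
# Simulated self-reported arousal on a 1-9 scale, driven by a few features.
rating = np.clip(5 + X[:, :3].sum(axis=1) + rng.normal(0, 1, n), 1, 9)
y = (rating > 5).astype(int)  # "high" vs. "low" relative to the scale midpoint

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print("10-fold accuracy:", cross_val_score(clf, X, y, cv=10).mean())
```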
Affiliation(s)
- Juliette Bruin: TNO Human Factors, Netherlands Organization for Applied Scientific Research, Soesterberg, Netherlands
- Ivo V. Stuldreher: TNO Human Factors, Netherlands Organization for Applied Scientific Research, Soesterberg, Netherlands
- Paola Perone: TNO Human Factors, Netherlands Organization for Applied Scientific Research, Soesterberg, Netherlands
- Koen Hogenelst: TNO Human Factors, Netherlands Organization for Applied Scientific Research, Soesterberg, Netherlands
- Marnix Naber: Experimental Psychology, Helmholtz Institute, Faculty of Social and Behavioral Sciences, Utrecht University, Utrecht, Netherlands
- Wim Kamphuis: TNO Human Factors, Netherlands Organization for Applied Scientific Research, Soesterberg, Netherlands
- Anne-Marie Brouwer: TNO Human Factors, Netherlands Organization for Applied Scientific Research, Soesterberg, Netherlands; Artificial Intelligence, Donders Centre, Faculty of Social Sciences, Radboud University, Nijmegen, Netherlands
27
Santistevan AC, Fiske O, Moadab G, Charbonneau JA, Isaacowitz DM, Bliss-Moreau E. See no evil: Attentional bias toward threat is diminished in aged monkeys. Emotion 2024; 24:303-315. [PMID: 37603001 PMCID: PMC10879459 DOI: 10.1037/emo0001276] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/22/2023]
Abstract
Prior evidence demonstrates that, relative to younger adults, older human adults exhibit attentional biases toward positive and/or away from negative socioaffective stimuli (the age-related positivity effect). Whether the effect is phylogenetically conserved is currently unknown, and its biopsychosocial origins are debated. To address this gap, we evaluated how visual processing of socioaffective stimuli differs in aged, compared to middle-aged, rhesus monkeys (Macaca mulatta) using eye tracking in two experimental designs directly comparable to those historically used to evaluate attentional biases in humans. While younger rhesus monkeys showed robust attentional biases toward threatening pictures of conspecifics' faces, aged animals showed no such bias. Critically, these biases emerged only when threatening faces were paired with neutral, and not with ostensibly "positive", faces, suggesting that social context modifies the effect. Our results suggest that evolutionarily shared mechanisms drive the age-related decline in visual biases toward negative stimuli across primate species.
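A simple illustration of how such an attentional bias is typically quantified from eye-tracking data (a generic sketch, not the authors' code): the proportion of dwell time on the threatening member of each stimulus pair, with all numbers hypothetical.

```python
import numpy as np

# Hypothetical dwell times (ms) for side-by-side face pairs:
# column 0 = threatening face, column 1 = neutral face, one row per trial.
dwell = np.array([[1300.0, 700.0],
                  [1100.0, 900.0],
                  [1500.0, 500.0]])

# A common bias index: share of looking time directed at the threat stimulus.
bias = dwell[:, 0] / dwell.sum(axis=1)
print("mean threat bias:", bias.mean())  # > 0.5 indicates a bias toward threat
```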
Affiliation(s)
- Anthony C. Santistevan: Department of Psychology, University of California, Davis; California National Primate Research Center, University of California, Davis
- Olivia Fiske: Department of Psychology, University of California, Davis; California National Primate Research Center, University of California, Davis
- Gilda Moadab: Department of Psychology, University of California, Davis; California National Primate Research Center, University of California, Davis
- Joey A. Charbonneau: California National Primate Research Center, University of California, Davis; Neuroscience Graduate Group, University of California, Davis
- Eliza Bliss-Moreau: Department of Psychology, University of California, Davis; California National Primate Research Center, University of California, Davis
28
Leshin J, Carter MJ, Doyle CM, Lindquist KA. Language access differentially alters functional connectivity during emotion perception across cultures. Front Psychol 2024; 14:1084059. [PMID: 38425348 PMCID: PMC10901990 DOI: 10.3389/fpsyg.2023.1084059] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2022] [Accepted: 12/15/2023] [Indexed: 03/02/2024] Open
Abstract
Introduction It is often assumed that the ability to recognize the emotions of others is reflexive and automatic, driven only by observable facial muscle configurations. However, research suggests that accumulated emotion concept knowledge shapes the way people perceive the emotional meaning of others' facial muscle movements. Cultural upbringing can shape an individual's concept knowledge, such as expectations about which facial muscle configurations convey anger, disgust, or sadness. Additionally, growing evidence suggests that access to emotion category words, such as "anger," facilitates access to such emotion concept knowledge and in turn facilitates emotion perception. Methods To investigate the impact of cultural influence and emotion concept accessibility on emotion perception, participants from two cultural groups (Chinese and White Americans) completed a functional magnetic resonance imaging scanning session to assess functional connectivity between brain regions during emotion perception. Across four blocks, participants were primed with either English emotion category words ("anger," "disgust") or control text (XXXXXX) before viewing images of White American actors posing facial muscle configurations that are stereotypical of anger and disgust in the United States. Results We found that when primed with "disgust" versus control text prior to seeing disgusted facial expressions, Chinese participants showed a significant decrease in functional connectivity between a region associated with semantic retrieval (the inferior frontal gyrus) and regions associated with semantic processing, visual perception, and social cognition. Priming the word "anger" did not impact functional connectivity for Chinese participants relative to control text, and priming neither "disgust" nor "anger" impacted functional connectivity for White American participants. Discussion These findings provide preliminary evidence that emotion concept accessibility differentially impacts perception based on participants' cultural background.
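Functional connectivity of the kind reported here is commonly computed as the Pearson correlation between region-of-interest (ROI) time series, compared across conditions. The sketch below illustrates that computation only; the ROI set, time series, and seed choice are hypothetical stand-ins, not the study's data.

```python
import numpy as np

# Hypothetical ROI time series (time points x regions) for two conditions,
# e.g., after priming with "disgust" vs. after control text.
rng = np.random.default_rng(3)
ts_prime = rng.standard_normal((200, 4))    # e.g., IFG, fusiform, V1, TPJ
ts_control = rng.standard_normal((200, 4))

# Functional connectivity as pairwise Pearson correlations of time series.
fc_prime = np.corrcoef(ts_prime, rowvar=False)
fc_control = np.corrcoef(ts_control, rowvar=False)

# Connectivity change between the seed (IFG, index 0) and the other regions.
print("seed connectivity change:", fc_prime[0, 1:] - fc_control[0, 1:])
```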
Affiliation(s)
- Joseph Leshin: Department of Psychology and Neuroscience, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Maleah J. Carter: Department of Psychology and Neuroscience, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Cameron M. Doyle: Department of Psychology and Neuroscience, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Kristen A. Lindquist: Department of Psychology and Neuroscience, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States; Biomedical Research Imaging Center, School of Medicine, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
29
Walsh E, Whitby J, Chen YY, Longo MR. No influence of emotional expression on size underestimation of upright faces. PLoS One 2024; 19:e0293920. [PMID: 38300951 PMCID: PMC10833517 DOI: 10.1371/journal.pone.0293920] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2023] [Accepted: 10/20/2023] [Indexed: 02/03/2024] Open
Abstract
Faces are a primary means of conveying social information between humans. One important factor modulating the perception of human faces is emotional expression. Face inversion also affects perception, including judgments of emotional expression, possibly through the disruption of configural processing. One intriguing inversion effect is an illusion whereby faces appear physically smaller when upright than when inverted, an illusion that appears highly selective for faces. In this study, we investigated whether the emotional expression of a face (neutral, happy, afraid, or angry) modulates the magnitude of this size illusion. For all four expressions, there was a clear bias for inverted stimuli to be judged as larger than upright ones. This demonstrates that emotional expression does not influence the size underestimation of upright faces, a surprising result given that recognition of different emotional expressions is known to be affected unevenly by inversion. Results are discussed in light of recent neuroimaging research that used population receptive field (pRF) mapping to investigate the neural mechanisms underlying face perception and that may explain how an upright face comes to appear smaller than an inverted one. Elucidating this effect would lead to a greater understanding of how humans communicate.
Affiliation(s)
- Eamonn Walsh: Department of Basic & Clinical Neuroscience, Institute of Psychiatry, Psychology & Neuroscience, King’s College London, London, United Kingdom; Cultural and Social Neuroscience Research Group, Institute of Psychiatry, Psychology & Neuroscience, King’s College London, London, United Kingdom
- Jack Whitby: Department of Basic & Clinical Neuroscience, Institute of Psychiatry, Psychology & Neuroscience, King’s College London, London, United Kingdom
- Yen-Ya Chen: Department of Basic & Clinical Neuroscience, Institute of Psychiatry, Psychology & Neuroscience, King’s College London, London, United Kingdom
- Matthew R. Longo: Department of Psychological Sciences, Birkbeck, University of London, London, United Kingdom
30
Chiu CD, Lo APK, Mak FKL, Hui KH, Lynn SJ, Cheng SK. Remember walking in their shoes? The relation of self-referential source memory and emotion recognition. Cogn Emot 2024; 38:120-130. [PMID: 37882206 DOI: 10.1080/02699931.2023.2274040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2023] [Accepted: 10/09/2023] [Indexed: 10/27/2023]
Abstract
Deficits in the ability to read the emotions of others have been demonstrated in mental disorders that involve a distorted sense of self, such as dissociation and schizophrenia. This study examined whether weakened self-referential source memory, that is, difficulty remembering whether a piece of information was processed with reference to oneself, is linked to ineffective emotion recognition. In two samples, one from a college and one from the community, we quantified participants' ability to remember the self-generated versus non-self-generated origins of sentences they had previously read or partially generated. We also measured their ability to accurately read others' emotions when viewing photos of people in affect-charged situations. Multinomial processing tree modelling was applied to obtain a measure of self-referential source memory not biased by non-mnemonic factors. Our first experiment, with college participants, revealed a positive correlation between correctly remembering the origins of sentences and accurately recognising the emotions of others. This correlation was replicated in the second experiment with community participants. The study thus offers evidence of a link between self-referential source memory and emotion recognition.
Affiliation(s)
- Chui-De Chiu: Department of Psychology, The Chinese University of Hong Kong, Hong Kong S.A.R., People's Republic of China
- Alfred Pak-Kwan Lo: Department of Psychology, The Chinese University of Hong Kong, Hong Kong S.A.R., People's Republic of China
- Frankie Ka-Lun Mak: Department of Psychology, The Chinese University of Hong Kong, Hong Kong S.A.R., People's Republic of China
- Kam-Hei Hui: Department of Psychology, The Chinese University of Hong Kong, Hong Kong S.A.R., People's Republic of China
- Steven Jay Lynn: Department of Psychology, Binghamton University, Binghamton, NY, USA
- Shih-Kuen Cheng: Institute of Cognitive Neuroscience, National Central University, Taoyuan, Taiwan
31
Gonçalves JL, Fuertes M, Silva S, Lopes-dos-Santos P, Ferreira-Santos F. Differential effects of attachment security on visual fixation to facial expressions of emotion in 14-month-old infants: an eye-tracking study. Front Psychol 2024; 15:1302657. [PMID: 38449748 PMCID: PMC10917067 DOI: 10.3389/fpsyg.2024.1302657] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2023] [Accepted: 01/10/2024] [Indexed: 03/08/2024] Open
Abstract
Introduction Models of attachment and information processing suggest that the attention infants allocate to social information may occur in a schema-driven manner according to their attachment pattern. A major source of social information for infants is facial expressions of emotion. We tested for differences in attention to facial expressions and in emotional discrimination between infants classified as securely attached (B), insecure-avoidant (A), and insecure-resistant (C). Methods Sixty-one 14-month-old infants participated in the Strange Situation Procedure and in an experimental task combining Visual Habituation with a Visual Paired-Comparison (VPC) task. In the habituation phase, a Low-Arousal Happy face (habituation face) was presented, followed by a VPC task of six trials composed of two contrasting emotional faces of the same actress: the one used in habituation (trial old face) and a new one (trial new face) portraying changes in valence (Low-Arousal Angry face), arousal (High-Arousal Happy face), or valence + arousal (High-Arousal Angry face). Measures of fixation time (FT) and fixation count (FC) were obtained with an eye-tracking system for the habituation face, the trial old face, the trial new face, and the difference between the trial old and new faces. Results We found higher FT and FC for the trial new face than for the trial old face, regardless of the emotional condition (valence, arousal, and valence + arousal contrasts), suggesting that 14-month-old infants were able to discriminate between different emotional faces. However, this effect differed according to attachment pattern: insecure-resistant infants (C) had significantly higher FT and FC for the new face than infants with patterns B and A, indicating that they may remain hypervigilant toward emotional change. In contrast, avoidant infants (A) showed significantly longer looking times for the trial old face, suggesting overall avoidance of novel expressions and thus less sensitivity to emotional change. Discussion Overall, these findings corroborate that attachment is associated with infants' social information processing.
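Discrimination in a visual paired-comparison task of this kind is often summarized as a novelty preference score, sketched below with hypothetical fixation times (illustration only, not the authors' analysis).

```python
import numpy as np

# Hypothetical fixation times (s) per trial: old (habituated) vs. new face.
ft_old = np.array([2.1, 1.8, 2.5, 1.9, 2.2, 2.0])
ft_new = np.array([3.4, 2.9, 3.8, 3.1, 3.5, 3.0])

# Novelty preference: share of looking directed at the new expression.
novelty_pref = ft_new / (ft_new + ft_old)
print("mean novelty preference:", novelty_pref.mean())  # > 0.5 = discrimination
```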
Affiliation(s)
- Joana L. Gonçalves: Center for Research in Psychology for Positive Development, Lusíada University, Porto, Portugal
- Marina Fuertes: Center for Psychology at University of Porto, Faculty of Psychology and Education Sciences, University of Porto, Porto, Portugal; Escola Superior de Educação, Instituto Politécnico de Lisboa, Lisboa, Portugal
- Susana Silva: Neurocognition and Language Research Group, Center for Psychology at University of Porto, Faculty of Psychology and Education Sciences, University of Porto, Porto, Portugal
- Pedro Lopes-dos-Santos: Center for Psychology at University of Porto, Faculty of Psychology and Education Sciences, University of Porto, Porto, Portugal; Faculty of Psychology and Education Science, University of Porto, Porto, Portugal
- Fernando Ferreira-Santos: Faculty of Psychology and Education Science, University of Porto, Porto, Portugal; Laboratory of Neuropsychophysiology, Faculty of Psychology and Education Science, University of Porto, Porto, Portugal
32
Hartmann KV, Rubeis G, Primc N. Healthy and Happy? An Ethical Investigation of Emotion Recognition and Regulation Technologies (ERR) within Ambient Assisted Living (AAL). SCIENCE AND ENGINEERING ETHICS 2024; 30:2. [PMID: 38270734 PMCID: PMC10811057 DOI: 10.1007/s11948-024-00470-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/16/2022] [Accepted: 01/15/2024] [Indexed: 01/26/2024]
Abstract
Ambient Assisted Living (AAL) refers to technologies that track daily activities of persons in need of care to enhance their autonomy and minimise their need for assistance. New technological developments show an increasing effort to integrate automated emotion recognition and regulation (ERR) into AAL systems. These technologies aim to recognise emotions via different sensors and, eventually, to regulate emotions defined as "negative" via different forms of intervention. Although these technologies are already implemented in other areas, AAL stands out by its tendency to enable an inconspicuous 24-hour surveillance in the private living space of users who rely on the technology to maintain a certain degree of independence in their daily activities. The combination of both technologies represents a new dimension of emotion recognition in a potentially vulnerable group of users. Our paper aims to provide an ethical contextualisation of the novel combination of both technologies. We discuss different concepts of emotions, namely Basic Emotion Theory (BET) and the Circumplex Model of Affect (CMA), that form the basis of ERR and provide an overview over the current technological developments in AAL. We highlight four ethical issues that specifically arise in the context of ERR in AAL systems, namely concerns regarding (1) the reductionist view of emotions, (2) solutionism as an underlying assumption of these technologies, (3) the privacy and autonomy of users and their emotions, (4) the tendency of machine learning techniques to normalise and generalise human behaviour and emotional reactions.
Affiliation(s)
- Kris Vera Hartmann: Institute for the Study of Christian Social Service (DWI), Theological Faculty, Heidelberg University, Karlstr. 16, 69117, Heidelberg, Germany
- Giovanni Rubeis: Division Biomedical and Public Health Ethics, Karl Landsteiner University of Health Sciences, Dr.-Karl-Dorrek-Str. 30, Krems, 3500, Austria
- Nadia Primc: Institute of History and Ethics of Medicine, Medical Faculty, Heidelberg University, Im Neuenheimer Feld 327, 69120, Heidelberg, Germany
33
Larrouy-Maestri P, Poeppel D, Pell MD. The Sound of Emotional Prosody: Nearly 3 Decades of Research and Future Directions. PERSPECTIVES ON PSYCHOLOGICAL SCIENCE 2024:17456916231217722. [PMID: 38232303 DOI: 10.1177/17456916231217722] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2024]
Abstract
Emotional voices attract considerable attention. A search on any browser using "emotional prosody" as a key phrase leads to more than a million entries. Such interest is evident in the scientific literature as well; readers are reminded in the introductory paragraphs of countless articles of the great importance of prosody and that listeners easily infer the emotional state of speakers through acoustic information. However, despite decades of research on this topic and important achievements, the mapping between acoustics and emotional states is still unclear. In this article, we chart the rich literature on emotional prosody for both newcomers to the field and researchers seeking updates. We also summarize problems revealed by a sample of the literature of the last decades and propose concrete research directions for addressing them, ultimately to satisfy the need for more mechanistic knowledge of emotional prosody.
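The acoustic-to-emotion mapping the review discusses is usually approached by extracting prosodic descriptors such as fundamental frequency (F0) and energy. Below is a minimal, hedged sketch using the librosa library; the audio file path is a hypothetical placeholder, and these three descriptors are only a small subset of the features used in the literature.

```python
import numpy as np
import librosa

# Hypothetical recording of an emotional utterance.
y, sr = librosa.load("utterance.wav", sr=None)

# Fundamental frequency (F0) via probabilistic YIN; unvoiced frames are NaN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)

# Simple prosodic descriptors: pitch level, pitch variability, loudness.
rms = librosa.feature.rms(y=y)[0]
print("mean F0 (Hz):", np.nanmean(f0))
print("F0 SD (Hz):", np.nanstd(f0))
print("mean RMS energy:", rms.mean())
```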
Affiliation(s)
- Pauline Larrouy-Maestri: Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; School of Communication Sciences and Disorders, McGill University; Max Planck-NYU Center for Language, Music, and Emotion, New York, New York
- David Poeppel: Max Planck-NYU Center for Language, Music, and Emotion, New York, New York; Department of Psychology and Center for Neural Science, New York University; Ernst Strüngmann Institute for Neuroscience, Frankfurt, Germany
- Marc D Pell: School of Communication Sciences and Disorders, McGill University; Centre for Research on Brain, Language, and Music, Montreal, Quebec, Canada
34
Chen C, Messinger DS, Chen C, Yan H, Duan Y, Ince RAA, Garrod OGB, Schyns PG, Jack RE. Cultural facial expressions dynamically convey emotion category and intensity information. Curr Biol 2024; 34:213-223.e5. [PMID: 38141619 PMCID: PMC10831323 DOI: 10.1016/j.cub.2023.12.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2023] [Revised: 10/27/2023] [Accepted: 12/01/2023] [Indexed: 12/25/2023]
Abstract
Communicating emotional intensity plays a vital ecological role because it provides valuable information about the nature and likelihood of the sender's behavior [1, 2, 3]. For example, attack often follows signals of intense aggression if receivers fail to retreat [4, 5]. Humans regularly use facial expressions to communicate such information [6, 7, 8, 9, 10, 11]. Yet how this complex signaling task is achieved remains unknown. We addressed this question using a perception-based, data-driven method to mathematically model the specific facial movements that receivers use to classify the six basic emotions ("happy," "surprise," "fear," "disgust," "anger," and "sad") and judge their intensity in two distinct cultures (East Asian, Western European; total n = 120). In both cultures, receivers expected facial expressions to represent emotion category and intensity information dynamically over time, using a multi-component compositional signaling structure. Specifically, emotion intensifiers peaked earlier or later than emotion classifiers and represented intensity using amplitude variations. Emotion intensifiers were also more similar across emotions than classifiers were, suggesting a latent broad-plus-specific signaling structure. Cross-cultural analysis further revealed similarities and differences in expectations that could impact cross-cultural communication. Specifically, East Asian and Western European receivers had similar expectations about which facial movements represent high intensity for threat-related emotions, such as "anger," "disgust," and "fear," but differed on those that represent low-threat emotions, such as happiness and sadness. Together, our results provide new insights into the intricate processes by which facial expressions achieve complex dynamic signaling tasks, revealing the rich information embedded in facial expressions.
Affiliation(s)
- Chaona Chen: School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Daniel S Messinger: Departments of Psychology, Pediatrics, and Electrical & Computer Engineering, University of Miami, 5665 Ponce De Leon Blvd, Coral Gables, FL 33146, USA
- Cheng Chen: Foreign Language Department, Teaching Centre for General Courses, Chengdu Medical College, 601 Tianhui Street, Chengdu 610083, China
- Hongmei Yan: The MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, North Jianshe Road, Chengdu 611731, China
- Yaocong Duan: School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Robin A A Ince: School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Oliver G B Garrod: School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Philippe G Schyns: School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Rachael E Jack: School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
35
Zhou Y, Suzuki K, Kumano S. State-Aware Deep Item Response Theory using student facial features. Front Artif Intell 2024; 6:1324279. [PMID: 38239499 PMCID: PMC10794588 DOI: 10.3389/frai.2023.1324279] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2023] [Accepted: 12/07/2023] [Indexed: 01/22/2024] Open
Abstract
This paper introduces a novel approach to Item Response Theory (IRT) that incorporates deep learning to analyze student facial expressions, enhancing the prediction and understanding of student responses to test items. The approach rests on the assertion that students' facial expressions offer crucial insights into their cognitive and affective states during testing, which in turn influence their item responses. The proposed State-Aware Deep Item Response Theory (SAD-IRT) model introduces a new parameter, the student state parameter, which can be viewed as a relative subjective difficulty parameter. It is regressed as a latent variable from students' facial features while they solve test items, using state-of-the-art deep learning techniques. In an experiment with 20 students, SAD-IRT improved the prediction of students' responses compared to prior models without the student state parameter, including standard IRT and its deep neural network implementation, while maintaining consistent estimates of student ability and item difficulty. The research further illustrates the model's ability to predict a student's response before the student has answered. This study holds substantial implications for educational assessment, laying the groundwork for more personalized and effective learning and assessment strategies that consider students' emotional and cognitive states.
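To make the "state as relative subjective difficulty" idea concrete, here is a toy sketch of a two-parameter logistic (2PL) IRT response function with an added state term shifting effective difficulty. This only illustrates the response-function idea; in SAD-IRT the state term is regressed from facial features by a deep network, and all numbers below are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def p_correct_2pl(theta, a, b):
    """Standard 2PL IRT: ability theta, item discrimination a, difficulty b."""
    return sigmoid(a * (theta - b))

def p_correct_state(theta, a, b, s):
    """SAD-IRT-style sketch: a per-attempt state term s (in the paper,
    latent-regressed from facial features) shifts effective difficulty."""
    return sigmoid(a * (theta - (b + s)))

theta, a, b = 0.5, 1.2, 0.0
print("baseline 2PL:", round(p_correct_2pl(theta, a, b), 2))
for s in (-0.5, 0.0, 0.5):  # e.g., engaged, neutral, strained (illustrative)
    print(f"state {s:+.1f}: P(correct) = {p_correct_state(theta, a, b, s):.2f}")
```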
Affiliation(s)
- Yan Zhou: Artificial Intelligence Laboratory, University of Tsukuba, Tsukuba, Japan
- Kenji Suzuki: Artificial Intelligence Laboratory, University of Tsukuba, Tsukuba, Japan
- Shiro Kumano: Artificial Intelligence Laboratory, University of Tsukuba, Tsukuba, Japan; NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Japan
36
Yu H, Lin C, Sun S, Cao R, Kar K, Wang S. Multimodal investigations of emotional face processing and social trait judgment of faces. Ann N Y Acad Sci 2024; 1531:29-48. [PMID: 37965931 PMCID: PMC10858652 DOI: 10.1111/nyas.15084] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2023]
Abstract
Faces are among the most important visual stimuli that humans perceive in everyday life. While extensive literature has examined emotional processing and social evaluations of faces, most studies have examined either topic using unimodal approaches. In this review, we promote the use of multimodal cognitive neuroscience approaches to study these processes, using two lines of research as examples: ambiguity in facial expressions of emotion and social trait judgment of faces. In the first set of studies, we identified an event-related potential that signals emotion ambiguity using electroencephalography and we found convergent neural responses to emotion ambiguity using functional neuroimaging and single-neuron recordings. In the second set of studies, we discuss how different neuroimaging and personality-dimensional approaches together provide new insights into social trait judgments of faces. In both sets of studies, we provide an in-depth comparison between neurotypicals and people with autism spectrum disorder. We offer a computational account for the behavioral and neural markers of the different facial processing between the two groups. Finally, we suggest new practices for studying the emotional processing and social evaluations of faces. All data discussed in the case studies of this review are publicly available.
Affiliation(s)
- Hongbo Yu: Department of Psychological & Brain Sciences, University of California Santa Barbara, Santa Barbara, California, USA
- Chujun Lin: Department of Psychology, University of California San Diego, San Diego, California, USA
- Sai Sun: Frontier Research Institute for Interdisciplinary Sciences, Tohoku University, Sendai, Japan; Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
- Runnan Cao: Department of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
- Kohitij Kar: Department of Biology, Centre for Vision Research, York University, Toronto, Ontario, Canada
- Shuo Wang: Department of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
37
Bian Y, Küster D, Liu H, Krumhuber EG. Understanding Naturalistic Facial Expressions with Deep Learning and Multimodal Large Language Models. SENSORS (BASEL, SWITZERLAND) 2023; 24:126. [PMID: 38202988 PMCID: PMC10781259 DOI: 10.3390/s24010126] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/13/2023] [Revised: 11/30/2023] [Accepted: 12/21/2023] [Indexed: 01/12/2024]
Abstract
This paper provides a comprehensive overview of affective computing systems for facial expression recognition (FER) research in naturalistic contexts. The first section presents an updated account of user-friendly FER toolboxes incorporating state-of-the-art deep learning models and elaborates on their neural architectures, datasets, and performances across domains. These sophisticated FER toolboxes can robustly address a variety of challenges encountered in the wild such as variations in illumination and head pose, which may otherwise impact recognition accuracy. The second section of this paper discusses multimodal large language models (MLLMs) and their potential applications in affective science. MLLMs exhibit human-level capabilities for FER and enable the quantification of various contextual variables to provide context-aware emotion inferences. These advancements have the potential to revolutionize current methodological approaches for studying the contextual influences on emotions, leading to the development of contextualized emotion models.
Affiliation(s)
- Yifan Bian: Department of Experimental Psychology, University College London, London WC1H 0AP, UK
- Dennis Küster: Department of Mathematics and Computer Science, University of Bremen, 28359 Bremen, Germany
- Hui Liu: Department of Mathematics and Computer Science, University of Bremen, 28359 Bremen, Germany
- Eva G. Krumhuber: Department of Experimental Psychology, University College London, London WC1H 0AP, UK
38
Li Z, Lu H, Liu D, Yu ANC, Gendron M. Emotional event perception is related to lexical complexity and emotion knowledge. COMMUNICATIONS PSYCHOLOGY 2023; 1:45. [PMID: 39242918 PMCID: PMC11332234 DOI: 10.1038/s44271-023-00039-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/18/2023] [Accepted: 11/23/2023] [Indexed: 09/09/2024]
Abstract
Inferring emotion is a critical skill that supports social functioning. Emotion inferences are typically studied in simplistic paradigms by asking people to categorize isolated and static cues like frowning faces. Yet emotions are complex events that unfold over time. Here, across three samples (Study 1 N = 222; Study 2 N = 261; Study 3 N = 101), we present the Emotion Segmentation Paradigm to examine inferences about complex emotional events by extending cognitive paradigms examining event perception. Participants were asked to indicate when there were changes in the emotions of target individuals within continuous streams of activity in narrative film (Study 1) and documentary clips (Study 2, preregistered, and Study 3 test-retest sample). This Emotion Segmentation Paradigm revealed robust and reliable individual differences across multiple metrics. We also tested the constructionist prediction that emotion labels constrain emotion inference, which is traditionally studied by introducing emotion labels. We demonstrate that individual differences in active emotion vocabulary (i.e., readily accessible emotion words) correlate with emotion segmentation performance.
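Event segmentation studies commonly score individuals by correlating their binned boundary presses with the group norm; the sketch below shows that generic metric under assumed press times and bin width, and is not the paradigm's published scoring code.

```python
import numpy as np

def segmentation_agreement(press_times, group_press_times, duration, bin_s=1.0):
    """Correlate one viewer's binned boundary presses with pooled group presses;
    a common individual-difference metric in event segmentation research."""
    bins = np.arange(0.0, duration + bin_s, bin_s)
    individual = (np.histogram(press_times, bins)[0] > 0).astype(float)
    group = np.histogram(group_press_times, bins)[0].astype(float)
    return np.corrcoef(individual, group)[0, 1]

# Hypothetical data: one participant vs. pooled presses from everyone else,
# for a 120-second clip.
me = [12.3, 44.0, 71.5]
others = [11.9, 12.6, 30.2, 43.1, 44.8, 70.9, 72.0, 95.5]
print("agreement:", segmentation_agreement(me, others, duration=120))
```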
Affiliation(s)
- Zhimeng Li: Department of Psychology, Yale University, New Haven, Connecticut, USA
- Hanxiao Lu: Department of Psychology, New York University, New York, NY, USA
- Di Liu: Department of Psychology, Johns Hopkins University, Baltimore, MD, USA
- Alessandra N C Yu: Nash Family Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Maria Gendron: Department of Psychology, Yale University, New Haven, Connecticut, USA
39
Namba S, Sato W, Namba S, Nomiya H, Shimokawa K, Osumi M. Development of the RIKEN database for dynamic facial expressions with multiple angles. Sci Rep 2023; 13:21785. [PMID: 38066065 PMCID: PMC10709572 DOI: 10.1038/s41598-023-49209-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2023] [Accepted: 12/05/2023] [Indexed: 12/18/2023] Open
Abstract
Research on sensing facial expressions is progressing in multidisciplinary fields such as psychology, affective computing, and cognitive science. Previous facial datasets have not simultaneously dealt with multiple theoretical views of emotion, individualized context, or multi-angle/depth information. We developed a new facial database (the RIKEN facial expression database) that includes multiple theoretical views of emotion and expressers' individualized events, with multi-angle and depth information. The RIKEN facial expression database contains recordings of 48 Japanese participants captured using ten Kinect cameras across 25 events. This study identified several valence-related facial patterns that are consistent with previous research investigating the coherence between facial movements and internal states. The database represents an advance toward developing new sensing systems, conducting psychological experiments, and understanding the complexity of emotional events.
Affiliation(s)
- Shushi Namba: RIKEN, Psychological Process Research Team, Guardian Robot Project, Kyoto, 6190288, Japan; Department of Psychology, Hiroshima University, Hiroshima, 7398524, Japan
- Wataru Sato: RIKEN, Psychological Process Research Team, Guardian Robot Project, Kyoto, 6190288, Japan
- Saori Namba: Department of Psychology, Hiroshima University, Hiroshima, 7398524, Japan
- Hiroki Nomiya: Faculty of Information and Human Sciences, Kyoto Institute of Technology, Kyoto, 6068585, Japan
- Koh Shimokawa: KOHINATA Limited Liability Company, Osaka, 5560020, Japan
- Masaki Osumi: KOHINATA Limited Liability Company, Osaka, 5560020, Japan
40
Li YT, Yeh SL, Huang TR. The cross-race effect in automatic facial expression recognition violates measurement invariance. Front Psychol 2023; 14:1201145. [PMID: 38130968 PMCID: PMC10733503 DOI: 10.3389/fpsyg.2023.1201145] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2023] [Accepted: 10/30/2023] [Indexed: 12/23/2023] Open
Abstract
Emotion has been the subject of intensive research in psychology and cognitive neuroscience for decades. Recently, more and more studies of emotion have adopted automatic rather than manual methods of facial emotion recognition to analyze images or videos of human faces. Compared to manual methods, these computer-vision-based, automatic methods can analyze large amounts of data objectively and rapidly, and they have been validated and are widely believed to be accurate in their judgments. However, these automatic methods often rely on statistical learning models (e.g., deep neural networks), which are intrinsically inductive and thus suffer from problems of induction. Specifically, models trained primarily on Western faces may not generalize well enough to judge Eastern faces accurately, which can jeopardize the measurement invariance of emotions in cross-cultural studies. To demonstrate this possibility, the present study carries out a cross-racial validation of two popular facial emotion recognition systems, FaceReader and DeepFace, using two Western and two Eastern face datasets. Although both systems achieved overall high accuracy in judging emotion category on the Western datasets, they performed relatively poorly on the Eastern datasets, especially in recognizing negative emotions. While these results caution against the use of these automatic methods of emotion recognition on non-Western faces, they also suggest that the happiness measurements output by these methods are accurate and invariant across races and hence can still be utilized in cross-cultural studies of positive psychology.
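DeepFace, one of the two systems validated here, is an openly available Python library. A minimal usage sketch follows; the image path is a placeholder, and the exact return structure varies somewhat across library versions.

```python
from deepface import DeepFace

# Analyze the emotion of a face image; "face.jpg" is a hypothetical path.
result = DeepFace.analyze(img_path="face.jpg", actions=["emotion"])

# Recent DeepFace versions return a list with one dict per detected face.
face = result[0] if isinstance(result, list) else result
print(face["dominant_emotion"])  # e.g., "happy"
print(face["emotion"])           # per-category confidence scores
```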
Affiliation(s)
- Yen-Ting Li: Department of Psychology, National Taiwan University, Taipei City, Taiwan
- Su-Ling Yeh: Department of Psychology, National Taiwan University, Taipei City, Taiwan; Graduate Institute of Brain and Mind Sciences, National Taiwan University, Taipei City, Taiwan; Neurobiology and Cognitive Science Center, National Taiwan University, Taipei City, Taiwan; Center for Artificial Intelligence and Advanced Robotics, National Taiwan University, Taipei City, Taiwan
- Tsung-Ren Huang: Department of Psychology, National Taiwan University, Taipei City, Taiwan; Graduate Institute of Brain and Mind Sciences, National Taiwan University, Taipei City, Taiwan; Neurobiology and Cognitive Science Center, National Taiwan University, Taipei City, Taiwan; Center for Artificial Intelligence and Advanced Robotics, National Taiwan University, Taipei City, Taiwan
41
Plate RC, Woodard K, Pollak SD. Category Flexibility in Emotion Learning. AFFECTIVE SCIENCE 2023; 4:722-730. [PMID: 38156248 PMCID: PMC10751277 DOI: 10.1007/s42761-023-00192-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/08/2022] [Accepted: 05/12/2023] [Indexed: 12/30/2023]
Abstract
Learners flexibly update category boundaries to adjust to the range of experiences they encounter. However, little is known about whether the degree of flexibility is consistent across domains. We examined whether categorization of social input, specifically emotions, is afforded more flexibility as compared to other biological input. To address this question, children (6-12 years; 32 female, 37 male; 7 Hispanic or Latino, 62 not Hispanic or Latino; 8 Black or African American, 14 multiracial, 46 White, 1 selected "other") categorized faces morphed from calm to upset and animals morphed from a horse to a cow across task phases that differed in the distribution of stimuli presented. Learners flexibly adjusted both emotion and animal category boundaries according to distributional information, yet children showed more flexibility when updating their category boundaries for emotions. These results provide support for the idea that children-who must adjust to the vast and varied emotional signals of their social partners-respond to social signals dynamically in order to make predictions about the internal states and future behaviors of others.
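Category boundaries in morph-continuum tasks of this kind are typically estimated by fitting a psychometric function to categorization responses; comparing the fitted boundary across task phases indexes flexibility. The sketch below illustrates that generic fit with hypothetical data, not the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

# Morph level 0 = fully calm, 1 = fully upset; p_upset = proportion of
# "upset" categorizations at each level (hypothetical data for one phase).
morph = np.linspace(0, 1, 9)
p_upset = np.array([0.02, 0.05, 0.10, 0.30, 0.55, 0.80, 0.92, 0.97, 0.99])

def logistic(x, boundary, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

(boundary, slope), _ = curve_fit(logistic, morph, p_upset, p0=[0.5, 10.0])
print(f"category boundary at morph level {boundary:.2f}")
# Refitting per phase and comparing boundary estimates quantifies how far
# a learner shifts the category with the stimulus distribution.
```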
Collapse
Affiliation(s)
- Rista C. Plate
- Department of Psychology, University of Pennsylvania, 3720 Walnut St, Philadelphia, PA 19104 USA
| | - Kristina Woodard
- Department of Psychology, University of Wisconsin-Madison, 1202 West Johnson Street, Madison, WI 53706 USA
| | - Seth D. Pollak
- Department of Psychology, University of Wisconsin-Madison, 1202 West Johnson Street, Madison, WI 53706 USA
| |
Collapse
|
42
|
Cheong JH, Jolly E, Xie T, Byrne S, Kenney M, Chang LJ. Py-Feat: Python Facial Expression Analysis Toolbox. AFFECTIVE SCIENCE 2023; 4:781-796. [PMID: 38156250 PMCID: PMC10751270 DOI: 10.1007/s42761-023-00191-4] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/08/2021] [Accepted: 05/07/2023] [Indexed: 12/30/2023]
Abstract
Studying facial expressions is a notoriously difficult endeavor. Recent advances in the field of affective computing have yielded impressive progress in automatically detecting facial expressions from pictures and videos. However, much of this work has yet to be widely disseminated in social science domains such as psychology. Current state-of-the-art models require considerable domain expertise that is not traditionally incorporated into social science training programs. Furthermore, there is a notable absence of user-friendly, open-source software that provides a comprehensive set of tools and functions to support facial expression research. In this paper, we introduce Py-Feat, an open-source Python toolbox that provides support for detecting, preprocessing, analyzing, and visualizing facial expression data. Py-Feat makes it easy for domain experts to disseminate and benchmark computer vision models, and for end users to quickly process, analyze, and visualize facial expression data. We hope this platform will facilitate increased use of facial expression data in human behavior research. Supplementary Information: The online version contains supplementary material available at 10.1007/s42761-023-00191-4.
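A minimal usage sketch following the toolbox's documented Detector workflow appears below; the image filename is a placeholder, and method names may vary slightly across Py-Feat versions.

```python
# A minimal usage sketch of Py-Feat's documented Detector workflow
# (pip install py-feat). The image path is a placeholder.
from feat import Detector

detector = Detector()                    # loads default face, AU, and emotion models
fex = detector.detect_image("face.jpg")  # returns a Fex data frame of detections

print(fex.emotions)                      # per-face emotion probabilities
print(fex.aus)                           # facial action unit activations
fex.plot_detections()                    # visualize boxes, landmarks, AUs, emotions
```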
Collapse
Affiliation(s)
- Jin Hyun Cheong
- Computational Social and Affective Neuroscience Laboratory, Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH 03755 USA
| | - Eshin Jolly
- Computational Social and Affective Neuroscience Laboratory, Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH 03755 USA
| | - Tiankang Xie
- Computational Social and Affective Neuroscience Laboratory, Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH 03755 USA
- Department of Quantitative Biomedical Sciences, Geisel School of Medicine, Dartmouth College, Hanover, NH 03755 USA
| | - Sophie Byrne
- Computational Social and Affective Neuroscience Laboratory, Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH 03755 USA
| | - Matthew Kenney
- Computational Social and Affective Neuroscience Laboratory, Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH 03755 USA
| | - Luke J. Chang
- Computational Social and Affective Neuroscience Laboratory, Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH 03755 USA
- Department of Quantitative Biomedical Sciences, Geisel School of Medicine, Dartmouth College, Hanover, NH 03755 USA
| |
Collapse
|
43
|
Wessler J, van der Schalk J, Hansen J, Klackl J, Jonas E, Fons M, Doosje B, Fischer A. Existential threat and responses to emotional displays of ingroup and outgroup members. GROUP PROCESSES & INTERGROUP RELATIONS 2023; 26:1866-1887. [PMID: 38021316 PMCID: PMC10665133 DOI: 10.1177/13684302221128229] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2019] [Accepted: 08/31/2022] [Indexed: 12/01/2023]
Abstract
The present research investigates how emotional displays shape reactions to ingroup and outgroup members when people are reminded of death. We hypothesized that under mortality salience, emotions that signal social distance promote worldview defense (i.e., increased ingroup favoritism and outgroup derogation), whereas emotions that signal affiliation promote affiliation need (i.e., reduced ingroup favoritism and outgroup derogation). In three studies, participants viewed emotional displays of ingroup and/or outgroup members after a mortality salience or control manipulation. Results revealed that under mortality salience, anger increased ingroup favoritism and outgroup derogation (Study 1), enhanced perceived overlap with the ingroup (Study 3), and increased positive facial behavior to ingroup displays, measured via the Facial Action Coding System (Studies 1 and 2) and electromyography of the zygomaticus major muscle (Study 3). In contrast, happiness decreased ingroup favoritism and outgroup derogation (Study 2) and increased positive facial behavior towards outgroup members (Study 3). The findings suggest that, in times of threat, emotional displays can determine whether people move away from unfamiliar others or try to form as many friendly relations as possible.
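The EMG measure reported in Study 3 rests on a standard preprocessing chain. The sketch below shows one common way to reduce a zygomaticus major recording to a positivity index; the sampling rate and signal are placeholders, and this is not necessarily the authors' exact pipeline.

```python
# One common way to reduce a zygomaticus major recording to a positivity
# index: band-pass filter, rectify, smooth, then average the envelope over
# the stimulus window. Sampling rate and signal are placeholder assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def emg_amplitude(signal: np.ndarray, fs: float) -> float:
    b, a = butter(4, [20, 400], btype="bandpass", fs=fs)  # typical surface-EMG band
    filtered = filtfilt(b, a, signal)
    rectified = np.abs(filtered)
    b_lp, a_lp = butter(4, 10, btype="lowpass", fs=fs)    # 10 Hz smoothing envelope
    return float(filtfilt(b_lp, a_lp, rectified).mean())

fs = 1000.0                                               # assumed sampling rate in Hz
emg = np.random.default_rng(0).normal(size=int(5 * fs))   # placeholder 5 s recording
print(emg_amplitude(emg, fs))
```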
Collapse
Affiliation(s)
- Janet Wessler
- German Research Center for Artificial Intelligence, Germany
| | | | | | | | | | | | | | | |
Collapse
|
44
|
Guntinas-Lichius O, Trentzsch V, Mueller N, Heinrich M, Kuttenreich AM, Dobel C, Volk GF, Graßme R, Anders C. High-resolution surface electromyographic activities of facial muscles during the six basic emotional expressions in healthy adults: a prospective observational study. Sci Rep 2023; 13:19214. [PMID: 37932337 PMCID: PMC10628297 DOI: 10.1038/s41598-023-45779-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2023] [Accepted: 10/24/2023] [Indexed: 11/08/2023] Open
Abstract
High-resolution facial surface electromyography (HR-sEMG) is suited to discriminating between different facial movements. Whether HR-sEMG also allows discrimination among the six basic emotional facial expressions is unclear. Thirty-six healthy participants (53% female, 18-67 years) were included for four sessions. Electromyograms were recorded from both sides of the face simultaneously, using a muscle-position-oriented electrode application (Fridlund scheme) and a landmark-oriented, muscle-unrelated symmetrical electrode arrangement (Kuramoto scheme). In each session, participants expressed the six basic emotions in response to standardized facial images showing the corresponding emotions. This was repeated once on the same day, and both sessions were repeated two weeks later to assess repetition effects. HR-sEMG characteristics showed systematic regional distribution patterns of emotional muscle activation for both schemes, with very low interindividual variability. Statistical discrimination between the different HR-sEMG patterns was good for both schemes for most, but not all, basic emotions (ranging from p > 0.05 to mostly p < 0.001) when HR-sEMG of the entire face was used. When only information from the lower face was used, the Kuramoto scheme allowed a more reliable discrimination of all six emotions (all p < 0.001). A landmark-oriented HR-sEMG recording thus allows specific discrimination of facial muscle activity patterns during basic emotional expressions.
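To make the idea of discriminating emotions from sEMG patterns concrete, the following sketch trains a generic classifier on simulated per-channel amplitude features; the feature matrix, trial counts, and the use of a random forest are illustrative assumptions rather than the study's statistical procedure.

```python
# A hedged sketch of emotion discrimination from multi-channel sEMG
# patterns using a generic classifier. Feature values, trial counts, and
# the random forest itself are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels = 240, 48                  # e.g. 40 trials x 6 emotions
X = rng.normal(size=(n_trials, n_channels))     # per-channel amplitude features
y = np.repeat(np.arange(6), n_trials // 6)      # labels for the six basic emotions

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")  # chance is ~0.17
```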
Collapse
Affiliation(s)
- Orlando Guntinas-Lichius
- Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747, Jena, Germany.
- Facial-Nerve-Center Jena, Jena University Hospital, Jena, Germany.
- Center for Rare Diseases, Jena University Hospital, Jena, Germany.
| | - Vanessa Trentzsch
- Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747, Jena, Germany
- Division Motor Research, Pathophysiology and Biomechanics, Department of Trauma, Hand and Reconstructive Surgery, Jena University Hospital, Friedrich-Schiller-University Jena, Jena, Germany
| | - Nadiya Mueller
- Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747, Jena, Germany
- Division Motor Research, Pathophysiology and Biomechanics, Department of Trauma, Hand and Reconstructive Surgery, Jena University Hospital, Friedrich-Schiller-University Jena, Jena, Germany
| | - Martin Heinrich
- Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747, Jena, Germany
- Facial-Nerve-Center Jena, Jena University Hospital, Jena, Germany
- Center for Rare Diseases, Jena University Hospital, Jena, Germany
| | - Anna-Maria Kuttenreich
- Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747, Jena, Germany
- Facial-Nerve-Center Jena, Jena University Hospital, Jena, Germany
- Center for Rare Diseases, Jena University Hospital, Jena, Germany
| | - Christian Dobel
- Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747, Jena, Germany
- Facial-Nerve-Center Jena, Jena University Hospital, Jena, Germany
| | - Gerd Fabian Volk
- Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747, Jena, Germany
- Facial-Nerve-Center Jena, Jena University Hospital, Jena, Germany
- Center for Rare Diseases, Jena University Hospital, Jena, Germany
| | - Roland Graßme
- Division Motor Research, Pathophysiology and Biomechanics, Department of Trauma, Hand and Reconstructive Surgery, Jena University Hospital, Friedrich-Schiller-University Jena, Jena, Germany
- Department of Prevention, Biomechanics, German Social Accident Insurance Institution for the Foodstuffs and Catering Industry, Erfurt, Germany
| | - Christoph Anders
- Division Motor Research, Pathophysiology and Biomechanics, Department of Trauma, Hand and Reconstructive Surgery, Jena University Hospital, Friedrich-Schiller-University Jena, Jena, Germany
| |
Collapse
|
45
|
Casarrubea M, Di Giovanni G, Aiello S, Crescimanno G. The hole-board apparatus in the study of anxiety. Physiol Behav 2023; 271:114346. [PMID: 37690695 DOI: 10.1016/j.physbeh.2023.114346] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2023] [Revised: 09/07/2023] [Accepted: 09/08/2023] [Indexed: 09/12/2023]
Abstract
Anxiety disorders pose a significant challenge in contemporary society, and their social and economic burden is enormous. Behavioral research conducted on animal subjects is crucial for comprehending these disorders and, from a translational standpoint, for introducing innovative therapeutic approaches. In this context, the Hole-Board apparatus has emerged as a widely used test for studying anxiety-related behaviors in rodents. Although a substantial body of literature underscores the utility and reliability of the Hole-Board in anxiety research, recent decades have witnessed a range of studies that have led to uncertainties and misinterpretations regarding the validity of this behavioral assay. The objective of this review is twofold: to underscore the utility and reliability of the Hole-Board assay, and to examine the underlying factors contributing to potential misconceptions surrounding its use in the study of anxiety and anxiety-related behaviors. We present results from both conventional quantitative analyses and multivariate approaches, referencing a comprehensive collection of studies conducted with the Hole-Board.
Collapse
Affiliation(s)
- Maurizio Casarrubea
- Laboratory of Behavioural Physiology, Department of Biomedicine, Neuroscience and Advanced Diagnostics (Bi.N.D.), Human Physiology Section "Giuseppe Pagano", University of Palermo, Corso Tukory n.129, Palermo 90134, Italy.
| | - Giuseppe Di Giovanni
- Laboratory of Neurophysiology, Department of Physiology and Biochemistry, Faculty of Medicine and Surgery, University of Malta, Msida, Malta; Neuroscience Division, School of Biosciences, Cardiff University, Cardiff, United Kingdom
| | - Stefania Aiello
- Laboratory of Behavioural Physiology, Department of Biomedicine, Neuroscience and Advanced Diagnostics (Bi.N.D.), Human Physiology Section "Giuseppe Pagano", University of Palermo, Corso Tukory n.129, Palermo 90134, Italy
| | - Giuseppe Crescimanno
- Laboratory of Behavioural Physiology, Department of Biomedicine, Neuroscience and Advanced Diagnostics (Bi.N.D.), Human Physiology Section "Giuseppe Pagano", University of Palermo, Corso Tukory n.129, Palermo 90134, Italy
| |
Collapse
|
46
|
Patterson ML, Fridlund AJ, Crivelli C. Four Misconceptions About Nonverbal Communication. PERSPECTIVES ON PSYCHOLOGICAL SCIENCE 2023; 18:1388-1411. [PMID: 36791676 PMCID: PMC10623623 DOI: 10.1177/17456916221148142] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/17/2023]
Abstract
Research and theory in nonverbal communication have made great advances toward understanding the patterns and functions of nonverbal behavior in social settings. Progress has been hindered, we argue, by presumptions about nonverbal behavior that follow from both received wisdom and faulty evidence. In this article, we document four persistent misconceptions about nonverbal communication: that people communicate using decodable body language; that they have a stable personal space by which they regulate contact with others; that they express emotion using universal, evolved, iconic, categorical facial expressions; and that they can deceive and detect deception using dependable telltale clues. We show how these misconceptions permeate research as well as the practices of popular behavior experts, with consequences that extend from intimate relationships to the boardroom and courtroom, and even to the arena of international security. Notwithstanding these misconceptions, existing frameworks of nonverbal communication are being challenged by more comprehensive systems approaches and by virtual technologies that ambiguate the roles and identities of interactants and the contexts of interaction.
Collapse
Affiliation(s)
| | - Alan J. Fridlund
- Department of Psychological and Brain Sciences, University of California, Santa Barbara
| | | |
Collapse
|
47
|
Cheong JH, Molani Z, Sadhukha S, Chang LJ. Synchronized affect in shared experiences strengthens social connection. Commun Biol 2023; 6:1099. [PMID: 37898664 PMCID: PMC10613250 DOI: 10.1038/s42003-023-05461-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2020] [Accepted: 10/13/2023] [Indexed: 10/30/2023] Open
Abstract
People structure their days to experience events with others. We gather to eat meals, watch TV, and attend concerts together. What constitutes a shared experience, and how does it manifest in dyadic behavior? The present study investigates how shared experiences, measured through emotional, motoric, physiological, and cognitive alignment, promote social bonding. We recorded the facial expressions and electrodermal activity (EDA) of participants as they watched four episodes of a TV show, for a total of 4 h, with another participant. Participants displayed temporally synchronized and spatially aligned emotional facial expressions, and the degree of synchronization predicted the self-reported social connection ratings between viewing partners. We observed a similar pattern of results for dyadic physiological synchrony measured via EDA and for their cognitive impressions of the characters. All four of these factors (temporal synchrony of positive facial expressions, spatial alignment of expressions, EDA synchrony, and character impression similarity) contributed to a latent shared-experience factor that predicted social connection. Our findings suggest that interpersonal affiliation in shared experiences emerges from shared affective experiences comprising synchronous processes, and they demonstrate that these complex interpersonal processes can be studied in a holistic, multi-modal framework leveraging naturalistic experimental designs.
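One common way to quantify the dyadic synchrony described here is to correlate two viewers' expression time series and compare the result against a circular-shift null distribution; the sketch below does this on simulated data and is not the authors' exact analysis.

```python
# A sketch of one standard dyadic synchrony measure: correlate two viewers'
# positive-expression time series and compare the observed correlation with
# a circular-shift null. Time series are simulated placeholders.
import numpy as np

rng = np.random.default_rng(1)
n = 3600                                         # e.g. 1 Hz samples over one hour
shared = rng.normal(size=n)                      # stimulus-driven component
viewer_a = shared + rng.normal(size=n)
viewer_b = shared + rng.normal(size=n)

observed = np.corrcoef(viewer_a, viewer_b)[0, 1]
null = [np.corrcoef(viewer_a, np.roll(viewer_b, rng.integers(60, n - 60)))[0, 1]
        for _ in range(1000)]
p = (np.sum(np.abs(null) >= abs(observed)) + 1) / (len(null) + 1)
print(f"synchrony r = {observed:.2f}, permutation p = {p:.3f}")
```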
Collapse
Affiliation(s)
- Jin Hyun Cheong
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
| | - Zainab Molani
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
| | - Sushmita Sadhukha
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
| | - Luke J Chang
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA.
| |
Collapse
|
48
|
Rincon AV, Waller BM, Duboscq J, Mielke A, Pérez C, Clark PR, Micheletta J. Higher social tolerance is associated with more complex facial behavior in macaques. eLife 2023; 12:RP87008. [PMID: 37787008 PMCID: PMC10547472 DOI: 10.7554/elife.87008] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/04/2023] Open
Abstract
The social complexity hypothesis for communicative complexity posits that animal societies with more complex social systems require more complex communication systems. We tested the social complexity hypothesis on three macaque species that vary in their degree of social tolerance and complexity. We coded facial behavior in >3000 social interactions across three social contexts (aggressive, submissive, affiliative) in 389 animals, using the Facial Action Coding System for macaques (MaqFACS). We quantified communicative complexity using three measures of uncertainty: entropy, specificity, and prediction error. We found that the relative entropy of facial behavior was higher for the more tolerant crested macaques than for the less tolerant Barbary and rhesus macaques across all social contexts, indicating that crested macaques use a greater diversity of facial behavior more frequently. The context specificity of facial behavior was higher in rhesus than in Barbary and crested macaques, demonstrating that Barbary and crested macaques used facial behavior more flexibly across different social contexts. Finally, a random forest classifier predicted social context from facial behavior with the highest accuracy for rhesus macaques and the lowest for crested macaques, indicating greater uncertainty and complexity in the facial behavior of crested macaques. Overall, our results support the social complexity hypothesis.
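Two of the measures named above can be illustrated compactly: a normalized Shannon entropy over facial action unit usage, and a random forest that tries to decode social context from facial behavior. The sketch below uses invented counts and features, so it shows the computation rather than the study's results.

```python
# A sketch of two complexity measures: normalized Shannon entropy of facial
# action unit (AU) usage, and a random forest decoding social context from
# facial behavior. Counts and features are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def relative_entropy(counts: np.ndarray) -> float:
    """Shannon entropy of AU usage, normalized by its maximum (log2 of k)."""
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / np.log2(len(counts)))

au_counts = np.array([120, 80, 40, 30, 10, 5])   # hypothetical AU tallies
print(f"relative entropy of AU usage: {relative_entropy(au_counts):.2f}")

rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(600, 20))   # AU presence/absence per interaction
y = rng.integers(0, 3, size=600)         # aggressive / submissive / affiliative
acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(f"context decoding accuracy: {acc.mean():.2f}")  # ~chance with random data
```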
Collapse
Affiliation(s)
- Alan V Rincon
- Department of Psychology, Centre for Comparative and Evolutionary Psychology, University of Portsmouth, Portsmouth, United Kingdom
| | - Bridget M Waller
- Centre for Interdisciplinary Research on Social Interaction, Department of Psychology, Nottingham Trent University, Nottingham, United Kingdom
| | | | - Alexander Mielke
- School of Biological and Behavioural Sciences, Queen Mary University of London, London, United Kingdom
| | - Claire Pérez
- Department of Psychology, Centre for Comparative and Evolutionary Psychology, University of Portsmouth, Portsmouth, United Kingdom
| | - Peter R Clark
- Department of Psychology, Centre for Comparative and Evolutionary Psychology, University of Portsmouth, Portsmouth, United Kingdom
- School of Psychology, University of Lincoln, Lincoln, United Kingdom
| | - Jérôme Micheletta
- Department of Psychology, Centre for Comparative and Evolutionary Psychology, University of Portsmouth, Portsmouth, United Kingdom
| |
Collapse
|
49
|
Kunz M, Chen JI, Lautenbacher S, Rainville P. Brain mechanisms associated with facial encoding of affective states. COGNITIVE, AFFECTIVE & BEHAVIORAL NEUROSCIENCE 2023; 23:1281-1290. [PMID: 37349604 PMCID: PMC10545577 DOI: 10.3758/s13415-023-01114-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 05/15/2023] [Indexed: 06/24/2023]
Abstract
Affective states are typically accompanied by facial expressions, but these behavioral manifestations are highly variable. Even highly arousing, negatively valenced experiences, such as pain, show great instability in facial affect encoding. The present study investigated which neural mechanisms are associated with variations in facial affect encoding, focusing on facial encoding of sustained pain experiences. Facial expressions, pain ratings, and brain activity (BOLD-fMRI) during tonic heat pain were recorded in 27 healthy participants. We analyzed facial expressions using the Facial Action Coding System (FACS) and examined brain activations during epochs of painful stimulation that were accompanied by facial expressions of pain. Such epochs were coupled with increased activity in motor areas (M1, premotor, and SMA) as well as in areas involved in nociceptive processing, including the primary and secondary somatosensory cortices, the posterior and anterior insula, and the anterior part of the mid-cingulate cortex. In contrast, prefrontal structures (ventrolateral and medial prefrontal cortex) were less activated during instances of facial expression, consistent with a role in down-regulating facial displays. These results indicate that instances of facial encoding of pain reflect activity within nociceptive pathways interacting, or possibly competing, with prefrontal inhibitory systems that gate the level of expressiveness.
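The epoch analysis described here presupposes a standard modeling step: turning expression epochs into a BOLD regressor. The sketch below builds such a regressor by convolving a boxcar with a canonical double-gamma hemodynamic response function; the onsets, durations, and HRF parameters are placeholder assumptions that simplify the authors' actual GLM.

```python
# A sketch of a standard fMRI modeling step: build a BOLD regressor for
# expression epochs by convolving a boxcar with a canonical double-gamma
# hemodynamic response function. Timings are placeholder assumptions.
import numpy as np
from scipy.stats import gamma

tr, n_scans = 2.0, 300                          # assumed repetition time and run length
t = np.arange(0, 32, tr)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0  # canonical double-gamma shape
hrf /= hrf.max()

boxcar = np.zeros(n_scans)
for onset in [30, 95, 180, 240]:                # scans with facial expressions of pain
    boxcar[onset:onset + 5] = 1.0               # ~10 s expression epochs

regressor = np.convolve(boxcar, hrf)[:n_scans]  # enters the GLM as a column
print(regressor.round(2)[:40])
```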
Collapse
Affiliation(s)
- Miriam Kunz
- Department of Medical Psychology and Sociology, University of Augsburg, Augsburg, Germany.
- Bamberger Living Lab Dementia (BamLiD), University of Bamberg, Bamberg, Germany.
| | - Jen-I Chen
- Centre de recherche de l'Institut universitaire de gériatrie de Montréal (CRIUGM), Université de Montréal, Montréal, Canada
- Department de stomatologie, Faculté de médecine dentaire, Université de Montréal, Montréal, Canada
| | - Stefan Lautenbacher
- Bamberger Living Lab Dementia (BamLiD), University of Bamberg, Bamberg, Germany
| | - Pierre Rainville
- Centre de recherche de l'Institut universitaire de gériatrie de Montréal (CRIUGM), Université de Montréal, Montréal, Canada
- Department de stomatologie, Faculté de médecine dentaire, Université de Montréal, Montréal, Canada
| |
Collapse
|
50
|
Abstract
People have a unique ability to represent other people's internal thoughts and feelings: their mental states. Mental state knowledge has a rich conceptual structure, organized along key dimensions such as valence, and people use this structure to guide social interactions. How do people acquire their understanding of this structure? Here we investigate an underexplored contributor to this process: observation of mental state dynamics. Mental states, including both emotions and cognitive states, are not static. Rather, the transitions from one state to another are systematic and predictable. Drawing on prior cognitive science, we hypothesize that these transition dynamics may shape the conceptual structure that people learn to apply to mental states. Across nine behavioral experiments (N = 1,439), we tested whether the transition probabilities between mental states causally shape people's conceptual judgments of those states. In each study, we found that observing frequent transitions between mental states caused people to judge them to be conceptually similar. Computational modeling indicated that people translated mental state dynamics into concepts by embedding the states as points within a geometric space: the closer two states are within this space, the greater the likelihood of transitions between them. In three neural network experiments, we trained artificial neural networks to predict real human mental state dynamics. The networks spontaneously learned the same conceptual dimensions that people use to understand mental states. Together these results indicate that mental state dynamics, and the goal of predicting them, shape the structure of mental state concepts.
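The geometric account can be illustrated with a toy computation: convert a mental state transition matrix into dissimilarities and embed the states with multidimensional scaling, so that frequently linked states land near one another. The transition matrix, state names, and the choice of classical MDS below are illustrative assumptions, not the paper's fitted model.

```python
# A toy version of the geometric account: embed mental states so that
# states with high transition probability sit close together, via MDS on
# -log transition probabilities. The matrix is invented for illustration.
import numpy as np
from sklearn.manifold import MDS

states = ["calm", "happy", "anxious", "angry"]
T = np.array([[0.70, 0.20, 0.08, 0.02],   # invented row-stochastic transitions
              [0.25, 0.65, 0.05, 0.05],
              [0.10, 0.05, 0.60, 0.25],
              [0.05, 0.05, 0.30, 0.60]])

P = (T + T.T) / 2.0          # symmetrize before embedding
D = -np.log(P)               # high transition probability -> small distance
np.fill_diagonal(D, 0.0)

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
for state, (x, y) in zip(states, coords):
    print(f"{state:8s} ({x:+.2f}, {y:+.2f})")
# Nearby points correspond to states that frequently follow one another.
```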
Collapse
Affiliation(s)
- Mark A. Thornton
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755
| | - Milena Rmus
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720
| | - Amisha D. Vyas
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755
| | - Diana I. Tamir
- Department of Psychology, Princeton University, Princeton, NJ 08540
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540
| |
Collapse
|