1. Christensen JF, Fernández A, Smith RA, Michalareas G, Yazdi SHN, Farahi F, Schmidt EM, Bahmanian N, Roig G. EMOKINE: A software package and computational framework for scaling up the creation of highly controlled emotional full-body movement datasets. Behav Res Methods 2024; 56:7498-7542. PMID: 38918315; DOI: 10.3758/s13428-024-02433-0.
Abstract
EMOKINE is a software package and dataset creation suite for emotional full-body movement research in experimental psychology, affective neuroscience, and computer vision. A computational framework, comprehensive instructions, a pilot dataset, observer ratings, and kinematic feature extraction code are provided to facilitate the creation of future datasets at scale. In addition, the EMOKINE framework outlines how complex sequences of movements may advance emotion research. Such research has traditionally relied on emotional 'action' stimuli, such as hand-waving or walking motions. Here, instead, a pilot dataset is provided with short dance choreographies, repeated several times by a dancer who expressed a different emotional intention at each repetition: anger, contentment, fear, joy, neutrality, and sadness. Each sequence was simultaneously filmed professionally and recorded using XSENS® motion-capture technology (17 sensors, 240 frames/second). Thirty-two statistics from 12 kinematic features were extracted offline, for the first time in one single dataset: speed, acceleration, angular speed, angular acceleration, limb contraction, distance to center of mass, quantity of motion, dimensionless jerk (integral), head angle (with regard to the vertical axis and to the back), and space (convex hull 2D and 3D). Average, median absolute deviation (MAD), and maximum value were computed as applicable. The EMOKINE software is applicable to other motion-capture systems and is openly available on the Zenodo repository. Releases on GitHub include (i) the code to extract the 32 statistics, (ii) a Python rigging plugin for converting MVNX files (the XSENS® output format) to Blender format, and (iii) custom Python-based software to assist with blurring faces; the latter two are released under GPLv3 licenses.
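As a rough illustration of the kind of per-joint statistics the abstract describes (speed, acceleration, and their average, MAD, and maximum), the following minimal Python sketch operates on a generic motion-capture array; the function name and shapes are hypothetical and do not reproduce the EMOKINE package's actual API.

```python
# Hypothetical sketch of EMOKINE-style kinematic statistics; the real
# package's API, feature definitions, and preprocessing may differ.
import numpy as np

def kinematic_stats(positions, fps=240):
    """positions: (n_frames, n_joints, 3) motion-capture coordinates."""
    dt = 1.0 / fps
    velocity = np.gradient(positions, dt, axis=0)          # per-joint velocity
    speed = np.linalg.norm(velocity, axis=-1)              # (n_frames, n_joints)
    acceleration = np.linalg.norm(np.gradient(velocity, dt, axis=0), axis=-1)

    def summarize(x):
        mad = np.median(np.abs(x - np.median(x)))          # median absolute deviation
        return {"average": float(x.mean()), "mad": float(mad), "max": float(x.max())}

    return {"speed": summarize(speed), "acceleration": summarize(acceleration)}

# Toy example standing in for one 10-s, 17-sensor recording at 240 fps:
stats = kinematic_stats(np.random.rand(2400, 17, 3))
```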
Affiliation(s)
- Julia F Christensen
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Andrés Fernández
- Methods of Machine Learning, University of Tübingen, Tübingen, Germany
- International Max Planck Research School for Intelligent Systems, Tübingen, Germany
- Rebecca A Smith
- Department of Psychology, University of Glasgow, Glasgow, Scotland
- Georgios Michalareas
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Eva-Madeleine Schmidt
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Max Planck School of Cognition, Leipzig, Germany
- Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany
- Nasimeh Bahmanian
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Department of Modern Languages, Goethe University, Frankfurt/M, Germany
- Gemma Roig
- Computer Science Department, Goethe University, Frankfurt/M, Germany
- The Hessian Center for Artificial Intelligence (hessian.AI), Darmstadt, Germany
2. Liu S, He W, Zhang M, Li Y, Ren J, Guan Y, Fan C, Li S, Gu R, Luo W. Emotional concepts shape the perceptual representation of body expressions. Hum Brain Mapp 2024; 45:e26789. PMID: 39185719; PMCID: PMC11345699; DOI: 10.1002/hbm.26789.
Abstract
Emotion perception interacts with how we think and speak, including our concepts of emotion. Body expression is an important channel of emotion communication, but whether and how its perception is modulated by conceptual knowledge is unknown. In this study, we employed representational similarity analysis across three experiments that combined a semantic similarity task, a mouse-tracking task, and a one-back behavioral task with electroencephalography and functional magnetic resonance imaging. The results show that conceptual knowledge predicted the perceptual representation of body expressions, and that this prediction effect occurred at approximately 170 ms post-stimulus. The neural encoding of body expressions in the fusiform gyrus and lingual gyrus was impacted by emotion concept knowledge. Taken together, our results indicate that conceptual knowledge of emotion categories shapes the configural representation of body expressions in the ventral visual cortex, offering compelling evidence for the constructed emotion theory.
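The core logic of the representational similarity analysis mentioned above can be sketched in a few lines: build a dissimilarity matrix per information source and correlate the two. The following Python sketch uses random stand-in features; the study's actual data, distance measures, and statistics are more involved.

```python
# Minimal RSA sketch: does a conceptual (semantic) dissimilarity structure
# predict a perceptual one? Features below are random placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

n_expressions = 20
semantic_features = np.random.rand(n_expressions, 50)    # e.g., semantic ratings
perceptual_features = np.random.rand(n_expressions, 50)  # e.g., behavioral/EEG patterns

# Condition-by-condition dissimilarities, vectorized upper triangles
semantic_rdm = pdist(semantic_features, metric="correlation")
perceptual_rdm = pdist(perceptual_features, metric="correlation")

# Second-order correlation between the two representational structures
rho, p = spearmanr(semantic_rdm, perceptual_rdm)
print(f"RSA: rho = {rho:.3f}, p = {p:.3f}")
```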
Affiliation(s)
- Shuaicheng Liu
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Weiqi He
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Mingming Zhang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Yiwen Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Jie Ren
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Yuanhao Guan
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Cong Fan
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Shuaixia Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Ruolei Gu
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Wenbo Luo
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
3. Du B, Jia S, Zhou X, Zhang M, He W. The priming effect of emotional words on body expressions: Two ERP studies. Int J Psychophysiol 2024; 202:112370. PMID: 38802049; DOI: 10.1016/j.ijpsycho.2024.112370.
Abstract
The impact of emotional words on the recognition of body expressions and the underlying neurodynamic mechanisms remain poorly understood. This study used a classic supraliminal priming paradigm and event-related potentials (ERPs) to investigate the effect of emotion-label words (Experiment 1) and emotional verbs (Experiment 2) on the recognition of body expressions. The behavioral results revealed that individuals exhibited higher accuracy in recognizing happy expressions under the happy-label word condition, in contrast to neutral expressions. Furthermore, the accuracy of recognizing happy body expressions was reduced under angry verb priming compared to happy and neutral priming conditions, whereas the accuracy of recognizing angry body expressions was higher under angry verb priming than under happy and neutral priming. The ERP results showed that, in the recognition of happy body expressions, the P300 amplitude elicited by angry-label words was more positive, and a congruent verb-expression condition elicited a more positive P300 amplitude than an incongruent condition in the left hemisphere and midline. In the recognition of angry body expressions, however, the N400 amplitude elicited by a congruent verb-expression condition was smaller than that elicited by an incongruent condition. These results suggest that both abstract emotion-label words and specific emotional verbs influence the recognition of body expressions. In addition, the integration of happy semantic context and body expression may occur at the P300 stage, whereas the integration of angry semantic context and body expression may occur at the N400 stage. These findings provide novel evidence for the critical role of emotional context in emotion recognition.
Affiliation(s)
- Bixuan Du
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China; Hangzhou Fuyang Chunjiang Central Elementary School, Hangzhou 311421, China
- Shuxin Jia
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Xing Zhou
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Mingming Zhang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Weiqi He
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
4. Trujillo-Llano C, Sainz-Ballesteros A, Suarez-Ardila F, Gonzalez-Gadea ML, Ibáñez A, Herrera E, Baez S. Neuroanatomical markers of social cognition in neglected adolescents. Neurobiol Stress 2024; 31:100642. PMID: 38800539; PMCID: PMC11127280; DOI: 10.1016/j.ynstr.2024.100642.
Abstract
Growing up in neglectful households can impact multiple aspects of social cognition. However, research on the effects of neglect on social cognition processes and their neuroanatomical correlates during adolescence is scarce. Here, we aimed to comprehensively assess social cognition processes (recognition of basic and contextual emotions, theory of mind, the experience of envy and Schadenfreude, and empathy for pain) and their structural brain correlates in adolescents with legal neglect records within family-based care. First, we compared neglected adolescents (n = 27) with control participants (n = 25) on context-sensitive social cognition tasks while controlling for physical and emotional abuse and executive and intellectual functioning. Additionally, we explored the grey matter correlates of these domains through voxel-based morphometry. Compared to controls, neglected adolescents exhibited lower performance in contextual emotion recognition and theory of mind, higher levels of envy and Schadenfreude, and diminished empathy. Physical and emotional abuse and executive or intellectual functioning did not explain these effects. Moreover, social cognition scores correlated with brain volumes in regions subserving social cognition and emotional processing. Our results underscore the potential impact of neglect on different aspects of social cognition during adolescence, emphasizing the need for preventive and intervention strategies to address these deficits in this population.
Affiliation(s)
- Catalina Trujillo-Llano
- Department of Neurology, Universitätsmedizin Greifswald, Greifswald, Germany
- Facultad de Psicología, Universidad Del Valle, Cali, Colombia
- Agustín Sainz-Ballesteros
- Department of Psychology, University of Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience, Tübingen, Germany
- Department for High-Field Magnetic Resonance, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- María Luz Gonzalez-Gadea
- Cognitive Neuroscience Center, Universidad de San Andres, Buenos Aires, Argentina
- National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina
- Agustín Ibáñez
- Cognitive Neuroscience Center, Universidad de San Andres, Buenos Aires, Argentina
- Latin American Brain Health (BrainLat), Universidad Adolfo Ibáñez, Santiago, Chile
- Global Brain Health Institute, University of California-San Francisco, San Francisco, CA, United States
- Trinity College Dublin, Dublin, Ireland
- Eduar Herrera
- Universidad Icesi, Departamento de Estudios Psicológicos, Cali, Colombia
- Sandra Baez
- Global Brain Health Institute, University of California-San Francisco, San Francisco, CA, United States
- Trinity College Dublin, Dublin, Ireland
- Universidad de Los Andes, Bogotá, Colombia
5. Trub LR. The elephant in the Zoom: will psychoanalysis survive the screen? Am J Psychoanal 2024; 84:203-228. PMID: 38866957; DOI: 10.1057/s11231-024-09457-7.
Abstract
While screen-mediated analysis long predated the pandemic, it was largely seen as non-equivalent to in-person treatment by analysts and patients alike. When COVID forced us to move our entire practices to the screen, our concerns about its limitations were replaced by relief; we could continue doing analytic work during a terrifying and challenging time. Three years later, many have chosen to continue practicing remotely for reasons that are no longer driven by fears of exposure. We mostly minimize or deny our earlier concerns about the limitations of screen work. Have we chosen convenience, ease, and a personal sense of safety over togetherness, while ignoring the underbelly of remote work? This paper identifies the convergence of several forces underlying our decision to stay remote, including guilt and anxiety about privileging our own self-interest, unmourned losses and collective PTSD, fear of the future and existential anxiety about living in a techno-culture that threatens to replace us. Our denial of these powerful forces makes it easy to rationalize a decision to embrace remote work and disavow the threat it poses to our field.
Affiliation(s)
- Leora R Trub
- Department of Psychology, Pace University, 52 Broadway, 4th floor, New York, NY, 10004, USA.
6. Abo Foul Y, Arkadir D, Demikhovskaya A, Noyman Y, Linetsky E, Abu Snineh M, Aviezer H, Eitan R. Perception of emotionally incongruent cues: evidence for overreliance on body vs. face expressions in Parkinson's disease. Front Psychol 2024; 15:1287952. PMID: 38770252; PMCID: PMC11103677; DOI: 10.3389/fpsyg.2024.1287952.
Abstract
Individuals with Parkinson's disease (PD) may exhibit impaired emotion perception. However, research demonstrating this decline has been based almost entirely on the recognition of isolated emotional cues. In real life, emotional cues such as expressive faces are typically encountered alongside expressive bodies. The current study investigated emotion perception in individuals with PD (n = 37) using emotionally incongruent composite displays of facial and body expressions, as well as isolated face and body expressions, with congruent composite displays as a baseline. In addition to a group of healthy controls (HC; n = 50), we included a control group of individuals with schizophrenia (SZ; n = 30), who, like individuals with PD, display similar motor symptomatology and decreased emotion perception abilities. The results show that individuals with PD tended to categorize incongruent face-body combinations in line with the body emotion, whereas HC participants tended to classify them in line with the facial emotion. No consistent pattern of prioritizing the face or body was found in individuals with SZ. These results were not explained by emotion recognition of the isolated cues, cognitive status, depression, or the motor symptoms of individuals with PD and SZ. As real-life expressions may include inconsistent cues in the body and face, these findings may have implications for how individuals with PD and SZ interpret the emotions of others.
Affiliation(s)
- Yasmin Abo Foul
- Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel
- Brain Division, Hadassah-Hebrew University Medical Center, Jerusalem, Israel
- David Arkadir
- Brain Division, Hadassah-Hebrew University Medical Center, Jerusalem, Israel
- Anastasia Demikhovskaya
- Neuropsychiatry Unit, Jerusalem Mental Health Center, Hebrew University of Jerusalem, Jerusalem, Israel
- Yehuda Noyman
- Neuropsychiatry Unit, Jerusalem Mental Health Center, Hebrew University of Jerusalem, Jerusalem, Israel
- Eduard Linetsky
- Brain Division, Hadassah-Hebrew University Medical Center, Jerusalem, Israel
- Muneer Abu Snineh
- Brain Division, Hadassah-Hebrew University Medical Center, Jerusalem, Israel
- Hillel Aviezer
- Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel
- Renana Eitan
- Brain Division, Hadassah-Hebrew University Medical Center, Jerusalem, Israel
- Neuropsychiatry Unit, Jerusalem Mental Health Center, Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Medical Neurobiology (Physiology), Institute for Medical Research Israel-Canada, Hebrew University-Hadassah Medical School, Jerusalem, Israel
- Department of Psychiatry, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States
7. Liu P, Zhang Y, Xiong Z, Gao Y. The Chinese customers and service staff interactive affective system (CCSIAS): introduction to a multimodal stimulus dataset. Front Psychol 2024; 15:1302253. PMID: 38765835; PMCID: PMC11099906; DOI: 10.3389/fpsyg.2024.1302253.
Abstract
Research on the emotional interaction between customers and service staff has typically used single-modality stimuli to activate participants' emotions, while more efficient multimodal emotion stimuli are often neglected. This study constructs a multimodal emotion stimulus database (CCSIAS) from video recordings of the real work status of 29 service staff and audio clips of customer-staff interactions, collected by setting up wide-angle cameras and searching the company's Ocean Engine records for 15 consecutive days. First, in Study 1, we developed a tool to assess the emotional states of customers and service staff. Second, in Study 2, 40 master's and PhD students assessed the audio and video data to evaluate the emotional states of customers and service staff, using the tool developed in Study 1. Third, in Study 3, 118 participants were recruited to verify the results of Study 2 and ensure the stability of the derived data. The results yielded 139 sets of stable emotional audio and video data (26 high, 59 medium, and 54 low). The amount of emotional information matters for effectively activating participants' emotional states, and the degree of emotional activation was significantly higher for the video data than for the audio data. Overall, the findings show that studying emotional interaction phenomena requires a multimodal dataset. The CCSIAS (https://osf.io/muc86/) can extend the depth and breadth of emotional interaction research and can be applied to activate different emotional states between customers and service staff in the fields of organizational behavior and psychology.
Affiliation(s)
- Yi Zhang
- School of Business, Sichuan University, Chengdu, China
8. Goel S, Jara-Ettinger J, Ong DC, Gendron M. Face and context integration in emotion inference is limited and variable across categories and individuals. Nat Commun 2024; 15:2443. PMID: 38499519; PMCID: PMC10948792; DOI: 10.1038/s41467-024-46670-5.
Abstract
The ability to make nuanced inferences about other people's emotional states is central to social functioning. While emotion inferences can be sensitive to both facial movements and the situational context in which they occur, relatively little is understood about when these two sources of information are integrated across emotion categories and individuals. In a series of studies, we use one archival and five empirical datasets to demonstrate that people are capable of integrating the two sources, but that emotion inferences are just as well (and sometimes better) captured by knowledge of the situation alone, while isolated facial cues are insufficient. Further, people integrate facial cues more for categories for which they most frequently encounter facial expressions in everyday life (e.g., happiness). People are also moderately stable over time in their reliance on situational cues and their integration of cues, and those who reliably rely more on situational cues also have better situated emotion knowledge. These findings underscore the importance of studying variability in the reliance on and integration of cues.
Affiliation(s)
- Srishti Goel
- Department of Psychology, Yale University, 100 College St, New Haven, CT, USA
- Julian Jara-Ettinger
- Department of Psychology, Yale University, 100 College St, New Haven, CT, USA
- Wu Tsai Institute, Yale University, 100 College St, New Haven, CT, USA
- Desmond C Ong
- Department of Psychology, The University of Texas at Austin, 108 E Dean Keeton St, Austin, TX, USA
- Maria Gendron
- Department of Psychology, Yale University, 100 College St, New Haven, CT, USA
9. Liu J, Liu Y, Jiang H, Zhao J, Ding X. Facial feedback manipulation influences the automatic detection of unexpected emotional body expressions. Neuropsychologia 2024; 195:108802. PMID: 38266669; DOI: 10.1016/j.neuropsychologia.2024.108802.
Abstract
Unexpected or changing facial expressions are known to engage more automatic processing than frequently occurring facial expressions, thereby inducing a differential neural response known as expression mismatch negativity (EMMN). Recent studies have shown that EMMN can be modulated by the observer's facial feedback (i.e., feedback from their own facial movements). A similar EMMN activity has been discovered for body expressions, but thus far only a few emotion types have been investigated, and it is unknown whether EMMNs evoked by body expressions can be influenced by facial feedback. To explore this question, we recorded EEG activity from 29 participants in a reverse oddball paradigm. Two previously unexamined categories of body expressions, happy and sad, were presented in two paired stimulus sequences: in one, the happy body was presented with a probability of 80% (standards) and the sad body with a probability of 20% (deviants); in the other, the probabilities were reversed. Facial feedback was manipulated through different pen-holding conditions (i.e., participants holding a pen with the teeth, lips, or nondominant hand). A nonparametric cluster permutation test revealed significant happy and sad body-related EMMN (bEMMN) activities. The happy bEMMN was more negative than the sad bEMMN within the 100-150 ms range. Additionally, the bEMMN amplitude for both emotions was modulated by the facial feedback conditions. These results expand the range of emotion types applicable to bEMMN and provide evidence for the validity of the facial feedback hypothesis across emotional carriers.
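A mismatch response of this kind is conventionally computed as a deviant-minus-standard difference wave. The sketch below shows that step for a single window of interest; the sampling rate, channel count, and window are illustrative, not the study's parameters.

```python
# Hedged sketch: derive a bEMMN-like difference wave and its mean amplitude
# in the 100-150 ms window from trial-averaged ERPs (random stand-ins here).
import numpy as np

fs = 500                                    # assumed sampling rate (Hz)
times = np.arange(-0.1, 0.5, 1 / fs)        # epoch: -100 to 500 ms

# Same stimulus averaged when it served as deviant (20%) vs. standard (80%)
erp_deviant = np.random.randn(64, times.size)
erp_standard = np.random.randn(64, times.size)

emmn = erp_deviant - erp_standard           # mismatch difference wave

window = (times >= 0.10) & (times <= 0.15)
mean_amplitude = emmn[:, window].mean(axis=1)   # one value per channel
```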
Affiliation(s)
- Jianyi Liu
- School of Psychology, Shaanxi Normal University and Shaanxi Provincial Key Laboratory of Behavior and Cognitive Neuroscience, Xi'an, China
- Yang Liu
- School of Psychology, Northwest Normal University, Lanzhou, China
- Heng Jiang
- School of Psychology, Northwest Normal University, Lanzhou, China
- Jingjing Zhao
- School of Psychology, Shaanxi Normal University and Shaanxi Provincial Key Laboratory of Behavior and Cognitive Neuroscience, Xi'an, China
- Xiaobin Ding
- School of Psychology, Northwest Normal University, Lanzhou, China
10. Wu S, Zhou L, Hu Z, Liu J. Hierarchical context-based emotion recognition with scene graphs. IEEE Trans Neural Netw Learn Syst 2024; 35:3725-3739. PMID: 36018874; DOI: 10.1109/tnnls.2022.3196831.
Abstract
To better infer intentions, we often try to figure out the emotional states of other people in social communication. Many studies in affective computing have sought to infer emotions by perceiving human states, i.e., facial expression and body posture. Such methods work well in controlled environments but often misestimate emotions in unconstrained circumstances, where effective inputs are deficient; this is where context-aware emotion recognition comes in. Taking inspiration from the advanced reasoning pattern humans apply in perceived emotion recognition, we propose a hierarchical context-based emotion recognition method using scene graphs. We extract three contexts from the image: the entity context, the global context, and the scene context. The scene context contains abstract information about entity labels and their relationships, similar to the information processing of the human visual sensing mechanism. These contexts are then fused to perform emotion recognition. We carried out experiments on widely used context-aware emotion datasets, i.e., CAER-S, EMOTIC, and the BOdy Language Dataset (BoLD). We demonstrate that hierarchical contexts benefit emotion recognition, improving the state-of-the-art (SOTA) accuracy from 84.82% to 90.83% on CAER-S. Ablation experiments show that the hierarchical contexts provide complementary information. Our method improves the SOTA F1 score from 29.33% to 30.24% (C-F1) on EMOTIC. We also build an image-based emotion recognition task with BoLD-Img, derived from BoLD, and obtain a better emotion recognition score (ERS) of 0.2153.
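The fusion step described above can be pictured as a late concatenation of separately extracted context features. The PyTorch sketch below is schematic: the dimensions and classifier are placeholders, not the paper's architecture.

```python
# Schematic late-fusion sketch: entity, global, and scene-graph context
# features are concatenated before emotion classification.
import torch
import torch.nn as nn

class HierarchicalContextFusion(nn.Module):
    def __init__(self, d_entity=512, d_global=512, d_scene=256, n_emotions=26):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(d_entity + d_global + d_scene, 512),
            nn.ReLU(),
            nn.Linear(512, n_emotions),
        )

    def forward(self, f_entity, f_global, f_scene):
        fused = torch.cat([f_entity, f_global, f_scene], dim=-1)
        return self.classifier(fused)

model = HierarchicalContextFusion()
logits = model(torch.randn(8, 512), torch.randn(8, 512), torch.randn(8, 256))
```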
11. Williams ACDC, Buono R, Gold N, Olugbade T, Bianchi-Berthouze N. Guarding and flow in the movements of people with chronic pain: A qualitative study of physiotherapists' observations. Eur J Pain 2024; 28:454-463. PMID: 37934512; DOI: 10.1002/ejp.2195.
Abstract
BACKGROUND: Among the adaptations of movement consistently associated with disability in chronic pain, guarding is common. Building on previous work, we sought to better understand the constituents of guarding; we also used the concept of flow to explore the descriptions of un/naturalness that emerged from physiotherapists' accounts of movement in chronic pain. The aim was to inform the design of technical systems to support people with chronic pain in everyday activities.
METHODS: Sixteen physiotherapists, experts in chronic pain, were interviewed while repeatedly watching short video clips of people with chronic low back pain performing simple movements; the physiotherapists described the movements, particularly in relation to guarding and flow. The transcribed interviews were analysed thematically to elaborate these constructs.
RESULTS: Moderate agreement emerged on the extent of guarding in the videos, with good agreement that guarding conveyed caution about movement, distinct from the biomechanical variables of stiffness or slow speed. Physiotherapists' comments on flow showed slightly better agreement and described the overall movement in terms of restriction (where there was no flow or only some flow), of the tempo of the entire movement, and of naturalness (distinguished from normality of movement).
CONCLUSIONS: These qualities of movement may be useful in designing technical systems to support self-management of chronic pain.
SIGNIFICANCE: Drawing on expert physiotherapists' descriptions of the movements of people with chronic low back pain in response to standard stimuli, two key concepts were elaborated. Guarding was distinguished from stiffness (a physical limitation) and slowness as being motivated by fear or worry about movement. Flow served to describe harmonious and continuous movement, even when adapted around the restrictions of pain. Movement behaviours associated with pain are better understood in terms of their particular function than aggregated without reference to function.
Affiliation(s)
- Amanda C de C Williams
- Research Department of Clinical, Educational & Health Psychology, University College London, London, UK
- Raffaele Buono
- Department of Anthropology, University College London, London, UK
- Nicolas Gold
- Computer Science, University College London, London, UK
- Temitayo Olugbade
- UCL Interaction Centre (UCLIC), University College London, London, UK
12. Vinton LC, Preston C, de la Rosa S, Mackie G, Tipper SP, Barraclough NE. Four fundamental dimensions underlie the perception of human actions. Atten Percept Psychophys 2024; 86:536-558. PMID: 37188862; PMCID: PMC10185378; DOI: 10.3758/s13414-023-02709-1.
Abstract
We evaluate the actions of other individuals based upon a variety of movements that reveal critical information to guide decision making and behavioural responses. These signals convey a range of information about the actor, including their goals, intentions, and internal mental states. Although progress has been made in identifying the cortical regions involved in action processing, the organising principles underlying our representation of actions remain unclear. In this paper we investigated the conceptual space that underlies action perception by assessing which qualities are fundamental to the perception of human actions. We recorded 240 different actions using motion capture and used these data to animate a volumetric avatar that performed the different actions. A total of 230 participants then viewed these actions and rated the extent to which each action demonstrated 23 different action characteristics (e.g., avoiding-approaching, pulling-pushing, weak-powerful). We analysed these data using exploratory factor analysis to examine the latent factors underlying visual action perception. The best-fitting model was a four-dimensional model with oblique rotation. We named the factors: friendly-unfriendly, formidable-feeble, planned-unplanned, and abduction-adduction. The first two factors, friendliness and formidableness, each explained approximately 22% of the variance, compared with planning and abduction, which each explained approximately 7-8% of the variance; as such, we interpret this representation of action space as having 2 + 2 dimensions. A closer examination of the first two factors suggests a similarity to the principal factors underlying our evaluation of facial traits and emotions, whilst the last two factors of planning and abduction appear unique to actions.
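For readers unfamiliar with the method, an exploratory factor analysis with an oblique rotation of this kind can be run in a few lines with the factor_analyzer package; the random ratings below merely stand in for the 240-action by 23-characteristic data, and the authors' exact software and settings may differ.

```python
# Sketch of EFA with an oblique (oblimin) rotation, as the abstract reports.
import numpy as np
from factor_analyzer import FactorAnalyzer

ratings = np.random.rand(240, 23)    # 240 actions rated on 23 bipolar scales

efa = FactorAnalyzer(n_factors=4, rotation="oblimin")   # oblique rotation
efa.fit(ratings)

loadings = efa.loadings_                          # 23 x 4 pattern matrix
variance, prop_var, cum_var = efa.get_factor_variance()
```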
Affiliation(s)
- Laura C Vinton
- Department of Psychology, University of York, Heslington, York, YO10 5DD, UK
- Catherine Preston
- Department of Psychology, University of York, Heslington, York, YO10 5DD, UK
- Stephan de la Rosa
- Department of Social Sciences, IU University of Applied Sciences, Juri-Gagarin-Ring 152, 99084, Erfurt, Germany
- Gabriel Mackie
- Department of Psychology, University of York, Heslington, York, YO10 5DD, UK
- Steven P Tipper
- Department of Psychology, University of York, Heslington, York, YO10 5DD, UK
- Nick E Barraclough
- Department of Psychology, University of York, Heslington, York, YO10 5DD, UK
13. Guralnik T, Moulder RG, Merom D, Zilcha-Mano S. A multi-modality and multi-dyad approach to measuring flexibility in psychotherapy. Psychother Res 2024:1-17. PMID: 38252916; DOI: 10.1080/10503307.2023.2292746.
Abstract
INTRODUCTION: Flexibility, the ability of an individual to adapt to environmental changes in ways that facilitate goal attainment, has been proposed as a potential mechanism underlying psychopathology and psychotherapy. In psychotherapy, most findings are based on self-report measures that have important limitations. We propose a multimodal, multi-dyad approach based on a nonlinear dynamical systems framework to capture the complexity of this concept.
METHOD: A new research paradigm was designed to explore the validity of the proposed conceptual model. The paradigm includes a psychotherapy-like social interaction, during which body movement and facial expressiveness data were collected. We analyzed the data using Hankel Alternative View of Koopman (HAVOK) analysis to reconstruct attractors of the observed behaviors and compare them.
RESULTS: The patterns of behavior in the two cases differ, and differences in the reconstructed attractors correspond with differences in self-report measures and behavior in the interactions.
CONCLUSIONS: The case studies show that the information provided by a single modality is not enough to give the full picture, and multiple modalities are needed. These observations can serve as initial support for our claim that a multimodal and multi-dyad approach to flexibility can address some of the measurement issues in the field.
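The first step of a HAVOK analysis, delay-embedding a time series into a Hankel matrix and taking its SVD to obtain coordinates for attractor reconstruction, can be sketched as follows; the toy signal merely stands in for a movement or expressiveness time series.

```python
# Minimal sketch of the Hankel-matrix/SVD step underlying HAVOK analysis.
import numpy as np

def hankel_svd(signal, n_delays=100):
    n = signal.size - n_delays + 1
    H = np.stack([signal[i:i + n] for i in range(n_delays)])  # (n_delays, n)
    U, S, Vt = np.linalg.svd(H, full_matrices=False)
    return U, S, Vt    # rows of Vt: eigen-time-delay coordinate series

t = np.linspace(0, 50, 5000)
movement = np.sin(t) + 0.5 * np.sin(2.3 * t)    # toy quasi-periodic signal
U, S, Vt = hankel_svd(movement)
attractor = Vt[:3].T    # first three coordinates approximate the attractor
```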
Affiliation(s)
- Timur Guralnik
- The Department of Psychology, University of Haifa, Haifa, Israel
- Robert G Moulder
- Institute of Cognitive Science, University of Colorado, Boulder, CO, USA
- Daniel Merom
- The Department of Psychology, University of Haifa, Haifa, Israel
14. Hwang J, Lee Y, Kim SH. The relative contribution of facial and body information to the perception of cuteness. Behav Sci (Basel) 2024; 14:68. PMID: 38275351; PMCID: PMC10813407; DOI: 10.3390/bs14010068.
Abstract
Faces and bodies both provide cues to age and cuteness, but little work has explored their interaction in cuteness perception. This study examines the interplay of facial and bodily cues in the perception of cuteness, particularly when these cues convey conflicting age information. Participants rated the cuteness of face-body composites that combined either a child or adult face with an age-congruent or incongruent body alongside manipulations of the head-to-body height ratio (HBR). The findings from two experiments indicated that child-like facial features enhanced the perceived cuteness of adult bodies, while child-like bodily features generally had negative impacts. Furthermore, the results showed that an increased head size significantly boosted the perceived cuteness for child faces more than for adult faces. Lastly, the influence of the HBR was more pronounced when the outline of a body's silhouette was the only available information compared to when detailed facial and bodily features were presented. This study suggests that body proportion information, derived from the body's outline, and facial and bodily features, derived from the interior surface, are integrated to form a unitary representation of a whole person in cuteness perception. Our findings highlight the dominance of facial features over bodily information in cuteness perception, with facial attributes serving as key references for evaluating face-body relationships and body proportions. This research offers significant insights into social cognition and character design, particularly in how people perceive entities with mixed features of different social categories, underlining the importance of congruency in perceptual elements.
Affiliation(s)
- Sung-Ho Kim
- Department of Psychology, Ewha Womans University, 52 Ewhayeodae-gil, Seodaemun-gu, Seoul 03760, Republic of Korea
15. Auer U, Kelemen Z, Vogl C, von Ritgen S, Haddad R, Torres Borda L, Gabmaier C, Breteler J, Jenner F. Development, refinement, and validation of an equine musculoskeletal pain scale. Front Pain Res 2024; 4:1292299. PMID: 38312997; PMCID: PMC10837853; DOI: 10.3389/fpain.2023.1292299.
Abstract
Musculoskeletal disease is a common cause of chronic pain that is often overlooked and inadequately treated, impacting the quality of life of humans and horses alike. Lameness due to musculoskeletal pain is prevalent in horses, but owners' perception of pain is low compared with veterinary diagnosis. This study therefore aimed to establish and validate a pain scale for chronic equine orthopaedic pain that is user-friendly for horse owners and veterinarians, to facilitate the identification and monitoring of pain in horses. The newly developed musculoskeletal pain scale (MPS) was applied to 154 horses (mean age 20 ± 6.4 years) housed at an equine sanctuary, of which 128 (83%) suffered from chronic orthopaedic disease. To complete the MPS, the horses were observed and videotaped from a distance while at rest in their box or enclosure. In addition, they received a complete clinical and orthopaedic exam. The need for veterinary intervention to address pain (assessed and carried out by the sanctuary independently of this study) was used as a longitudinal health outcome to determine the MPS's predictive validity. To determine interrater agreement, the MPS was scored for a randomly selected subset of 30 horses by six additional blinded raters: three equine veterinary practitioners and three experienced equestrians. An iterative process was used to refine the tool based on improvements in the MPS's correlation with lameness evaluated at the walk and trot, predictive validity for longitudinal health outcomes, and interrater agreement. The intraclass correlation improved from 0.77 for the original MPS to 0.88 for the refined version (95% confidence interval: 0.8-0.94). The refined MPS correlated significantly with lameness at the walk (r = 0.44, p = 0.001) and trot (r = 0.5, p < 0.0001). The refined MPS also differed significantly between horses that needed veterinary intervention (mean MPS = 8.6) and those that did not (mean MPS = 5.0, p = 0.0007). In summary, the MPS showed good interrater repeatability between expert and lay scorers, significant correlation with lameness at the walk and trot, and good predictive validity for longitudinal health outcomes, confirming its ability to identify horses with orthopaedic health problems.
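An interrater-reliability check of the kind reported (an intraclass correlation across six raters scoring a subset of 30 horses) can be sketched with the pingouin package; the scores below are random placeholders, not study data.

```python
# Sketch of an ICC computation for 6 raters x 30 subjects using pingouin.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "horse": np.repeat(np.arange(30), 6),
    "rater": np.tile(np.arange(6), 30),
    "score": rng.integers(0, 20, size=180),   # stand-in MPS total scores
})

icc = pg.intraclass_corr(data=df, targets="horse", raters="rater",
                         ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```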
Affiliation(s)
- Ulrike Auer
- Anaesthesiology and Perioperative Intensive Care Medicine Unit, Department of Companion Animals and Horses, University of Veterinary Medicine Vienna, Vienna, Austria
- Zsofia Kelemen
- Equine Surgery Unit, Department of Companion Animals and Horses, University Equine Hospital, University of Veterinary Medicine Vienna, Vienna, Austria
- Claus Vogl
- Department of Biomedical Sciences, Institute of Animal Breeding and Genetics, University of Veterinary Medicine Vienna, Vienna, Austria
- Stephanie von Ritgen
- Anaesthesiology and Perioperative Intensive Care Medicine Unit, Department of Companion Animals and Horses, University of Veterinary Medicine Vienna, Vienna, Austria
- Rabea Haddad
- Equine Surgery Unit, Department of Companion Animals and Horses, University Equine Hospital, University of Veterinary Medicine Vienna, Vienna, Austria
- Laura Torres Borda
- Equine Surgery Unit, Department of Companion Animals and Horses, University Equine Hospital, University of Veterinary Medicine Vienna, Vienna, Austria
- Christopher Gabmaier
- Anaesthesiology and Perioperative Intensive Care Medicine Unit, Department of Companion Animals and Horses, University of Veterinary Medicine Vienna, Vienna, Austria
- John Breteler
- Equine Surgery Unit, Department of Companion Animals and Horses, University Equine Hospital, University of Veterinary Medicine Vienna, Vienna, Austria
- Florien Jenner
- Equine Surgery Unit, Department of Companion Animals and Horses, University Equine Hospital, University of Veterinary Medicine Vienna, Vienna, Austria
16. Chen C, Messinger DS, Chen C, Yan H, Duan Y, Ince RAA, Garrod OGB, Schyns PG, Jack RE. Cultural facial expressions dynamically convey emotion category and intensity information. Curr Biol 2024; 34:213-223.e5. PMID: 38141619; PMCID: PMC10831323; DOI: 10.1016/j.cub.2023.12.001.
Abstract
Communicating emotional intensity plays a vital ecological role because it provides valuable information about the nature and likelihood of the sender's behavior [1,2,3]. For example, attack often follows signals of intense aggression if receivers fail to retreat [4,5]. Humans regularly use facial expressions to communicate such information [6,7,8,9,10,11]. Yet how this complex signaling task is achieved remains unknown. We addressed this question using a perception-based, data-driven method to mathematically model the specific facial movements that receivers use to classify the six basic emotions-"happy," "surprise," "fear," "disgust," "anger," and "sad"-and judge their intensity in two distinct cultures (East Asian, Western European; total n = 120). In both cultures, receivers expected facial expressions to dynamically represent emotion category and intensity information over time, using a multi-component compositional signaling structure. Specifically, emotion intensifiers peaked earlier or later than emotion classifiers and represented intensity using amplitude variations. Emotion intensifiers are also more similar across emotions than classifiers are, suggesting a latent broad-plus-specific signaling structure. Cross-cultural analysis further revealed similarities and differences in expectations that could impact cross-cultural communication. Specifically, East Asian and Western European receivers have similar expectations about which facial movements represent high intensity for threat-related emotions, such as "anger," "disgust," and "fear," but differ on those that represent low-threat emotions, such as happiness and sadness. Together, our results provide new insights into the intricate processes by which facial expressions achieve complex dynamic signaling tasks, revealing the rich information embedded in facial expressions.
Affiliation(s)
- Chaona Chen
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Daniel S Messinger
- Departments of Psychology, Pediatrics, and Electrical & Computer Engineering, University of Miami, 5665 Ponce De Leon Blvd, Coral Gables, FL 33146, USA
- Cheng Chen
- Foreign Language Department, Teaching Centre for General Courses, Chengdu Medical College, 601 Tianhui Street, Chengdu 610083, China
- Hongmei Yan
- The MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, North Jianshe Road, Chengdu 611731, China
- Yaocong Duan
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Robin A A Ince
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Oliver G B Garrod
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Philippe G Schyns
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Rachael E Jack
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
17. Yu H, Lin C, Sun S, Cao R, Kar K, Wang S. Multimodal investigations of emotional face processing and social trait judgment of faces. Ann N Y Acad Sci 2024; 1531:29-48. PMID: 37965931; PMCID: PMC10858652; DOI: 10.1111/nyas.15084.
Abstract
Faces are among the most important visual stimuli that humans perceive in everyday life. While an extensive literature has examined emotional processing and social evaluations of faces, most studies have examined either topic using unimodal approaches. In this review, we promote the use of multimodal cognitive neuroscience approaches to study these processes, using two lines of research as examples: ambiguity in facial expressions of emotion, and social trait judgment of faces. In the first set of studies, we identified an event-related potential that signals emotion ambiguity using electroencephalography, and we found convergent neural responses to emotion ambiguity using functional neuroimaging and single-neuron recordings. In the second set of studies, we discuss how different neuroimaging and personality-dimensional approaches together provide new insights into social trait judgments of faces. In both sets of studies, we provide an in-depth comparison between neurotypicals and people with autism spectrum disorder. We offer a computational account of the behavioral and neural markers of the differences in facial processing between the two groups. Finally, we suggest new practices for studying the emotional processing and social evaluations of faces. All data discussed in the case studies of this review are publicly available.
Affiliation(s)
- Hongbo Yu
- Department of Psychological & Brain Sciences, University of California Santa Barbara, Santa Barbara, California, USA
- Chujun Lin
- Department of Psychology, University of California San Diego, San Diego, California, USA
- Sai Sun
- Frontier Research Institute for Interdisciplinary Sciences, Tohoku University, Sendai, Japan
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
- Runnan Cao
- Department of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
- Kohitij Kar
- Department of Biology, Centre for Vision Research, York University, Toronto, Ontario, Canada
- Shuo Wang
- Department of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
18. Li Z, Lu H, Liu D, Yu ANC, Gendron M. Emotional event perception is related to lexical complexity and emotion knowledge. Commun Psychol 2023; 1:45. PMID: 39242918; PMCID: PMC11332234; DOI: 10.1038/s44271-023-00039-4.
Abstract
Inferring emotion is a critical skill that supports social functioning. Emotion inferences are typically studied in simplistic paradigms by asking people to categorize isolated and static cues like frowning faces. Yet emotions are complex events that unfold over time. Here, across three samples (Study 1, N = 222; Study 2, N = 261; Study 3, N = 101), we present the Emotion Segmentation Paradigm, which examines inferences about complex emotional events by extending cognitive paradigms of event perception. Participants indicated when the emotions of target individuals changed within continuous streams of activity in narrative film (Study 1) and documentary clips (Study 2, preregistered, and the Study 3 test-retest sample). The paradigm revealed robust and reliable individual differences across multiple metrics. We also tested the constructionist prediction that emotion labels constrain emotion inference, which is traditionally studied by introducing emotion labels. We demonstrate that individual differences in active emotion vocabulary (i.e., readily accessible emotion words) correlate with emotion segmentation performance.
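One plausible way to quantify segmentation performance in such a paradigm, offered here purely as an illustration and not as the authors' metric, is to bin each participant's button presses and correlate the individual pattern with the group consensus:

```python
# Hypothetical segmentation-agreement score: correlate one viewer's binned
# change reports with the group's pooled reports.
import numpy as np

def segmentation_agreement(press_times, all_press_times, duration, bin_s=1.0):
    bins = np.arange(0, duration + bin_s, bin_s)
    individual = (np.histogram(press_times, bins=bins)[0] > 0).astype(float)
    consensus = np.histogram(np.concatenate(all_press_times), bins=bins)[0]
    return np.corrcoef(individual, consensus)[0, 1]

# Toy example: three viewers segmenting a 60-s clip
viewers = [np.array([5.1, 20.3, 41.0]), np.array([5.4, 22.0, 40.2]),
           np.array([6.0, 19.8, 42.5])]
score = segmentation_agreement(viewers[0], viewers, duration=60.0)
```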
Affiliation(s)
- Zhimeng Li
- Department of Psychology, Yale University, New Haven, Connecticut, USA
- Hanxiao Lu
- Department of Psychology, New York University, New York, NY, USA
- Di Liu
- Department of Psychology, Johns Hopkins University, Baltimore, MD, USA
- Alessandra N C Yu
- Nash Family Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Maria Gendron
- Department of Psychology, Yale University, New Haven, Connecticut, USA
19. Maxwell JW, Sanchez DN, Ruthruff E. Infrequent facial expressions of emotion do not bias attention. Psychol Res 2023; 87:2449-2459. PMID: 37258662; DOI: 10.1007/s00426-023-01844-6.
Abstract
Despite the obvious importance of facial expressions of emotion, most studies have found that they do not bias attention. A critical limitation, however, is that these studies generally present face distractors on all trials of the experiment. For other kinds of emotional stimuli, such as emotional scenes, infrequently presented stimuli elicit greater attentional bias than frequently presented stimuli, perhaps due to suppression or habituation. The goal of the current study was therefore to test whether such modulation of attentional bias by distractor frequency generalizes to facial expressions of emotion. In Experiment 1, neither angry nor happy faces biased attention, despite being infrequently presented. Even when the location of these face cues was made less predictable, appearing in one of two possible locations, no attentional bias was observed (Experiment 2). Moreover, there was no bottom-up influence of angry and happy faces shown under high or low perceptual load (Experiment 3). We conclude that task-irrelevant posed facial expressions of emotion cannot bias attention even when presented infrequently.
Affiliation(s)
- Joshua W Maxwell
- Department of Psychology, 1 University of New Mexico, Albuquerque, NM, 87131, USA
- Danielle N Sanchez
- Department of Psychology, 1 University of New Mexico, Albuquerque, NM, 87131, USA
- Eric Ruthruff
- Department of Psychology, 1 University of New Mexico, Albuquerque, NM, 87131, USA
20. Wu C, Davaasuren D, Shafir T, Tsachor R, Wang JZ. Bodily expressed emotion understanding through integrating Laban movement analysis. Patterns (N Y) 2023; 4:100816. PMID: 37876902; PMCID: PMC10591137; DOI: 10.1016/j.patter.2023.100816.
Abstract
Bodily expressed emotion understanding (BEEU) aims to automatically recognize human emotional expressions from body movements. Psychological research has demonstrated that people often move using specific motor elements to convey emotions. This work takes three steps to integrate human motor elements into the study of BEEU. First, we introduce BoME (body motor elements), a highly precise dataset of human motor elements. Second, we apply baseline models to estimate these elements on BoME, showing that deep learning methods are capable of learning effective representations of human movement. Finally, we propose a dual-source solution that enhances the BEEU model with the BoME dataset: it trains on both motor-element and emotion labels and simultaneously produces predictions for both. Through experiments on the BoLD in-the-wild emotion understanding benchmark, we demonstrate the significant benefit of our approach. These results may inspire further research utilizing human motor elements for emotion understanding and mental health analysis.
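The dual-source idea, one backbone trained jointly on motor-element and emotion labels, can be pictured with a schematic two-head model; the shapes, losses, and weighting below are placeholders rather than the paper's implementation.

```python
# Schematic dual-source sketch: shared backbone, two multi-label heads.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualSourceModel(nn.Module):
    def __init__(self, d_in=512, d_feat=256, n_motor=10, n_emotions=26):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(d_in, d_feat), nn.ReLU())
        self.motor_head = nn.Linear(d_feat, n_motor)        # BoME-style labels
        self.emotion_head = nn.Linear(d_feat, n_emotions)   # BoLD-style labels

    def forward(self, x):
        h = self.backbone(x)
        return self.motor_head(h), self.emotion_head(h)

model = DualSourceModel()
x = torch.randn(8, 512)                         # stand-in pose features
motor_logits, emotion_logits = model(x)
loss = (F.binary_cross_entropy_with_logits(motor_logits, torch.rand(8, 10).round())
        + F.binary_cross_entropy_with_logits(emotion_logits, torch.rand(8, 26).round()))
loss.backward()
```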
Affiliation(s)
- Chenyan Wu
- Data Science and Artificial Intelligence Area, College of Information Sciences and Technology, The Pennsylvania State University, University Park, PA 16802, USA
- Dolzodmaa Davaasuren
- Data Science and Artificial Intelligence Area, College of Information Sciences and Technology, The Pennsylvania State University, University Park, PA 16802, USA
- Tal Shafir
- The Emili Sagol Creative Arts Therapies Research Center, University of Haifa, Haifa 3498838, Israel
- Rachelle Tsachor
- School of Theatre and Music, University of Illinois, Chicago, IL 60607, USA
- James Z. Wang
- Data Science and Artificial Intelligence Area, College of Information Sciences and Technology, The Pennsylvania State University, University Park, PA 16802, USA
- Human-Computer Interaction Area, College of Information Sciences and Technology, The Pennsylvania State University, University Park, PA 16802, USA
21. Wang JZ, Zhao S, Wu C, Adams RB, Newman MG, Shafir T, Tsachor R. Unlocking the emotional world of visual media: An overview of the science, research, and impact of understanding emotion. Proc IEEE 2023; 111:1236-1286. PMID: 37859667; PMCID: PMC10586271; DOI: 10.1109/jproc.2023.3273517.
Abstract
The emergence of artificial emotional intelligence technology is revolutionizing the fields of computers and robotics, allowing for a new level of communication and understanding of human behavior that was once thought impossible. While recent advancements in deep learning have transformed the field of computer vision, automated understanding of evoked or expressed emotions in visual media remains in its infancy. This foundering stems from the absence of a universally accepted definition of "emotion," coupled with the inherently subjective nature of emotions and their intricate nuances. In this article, we provide a comprehensive, multidisciplinary overview of the field of emotion analysis in visual media, drawing on insights from psychology, engineering, and the arts. We begin by exploring the psychological foundations of emotion and the computational principles that underpin the understanding of emotions from images and videos. We then review the latest research and systems within the field, accentuating the most promising approaches. We also discuss the current technological challenges and limitations of emotion analysis, underscoring the necessity for continued investigation and innovation. We contend that this represents a "Holy Grail" research problem in computing and delineate pivotal directions for future inquiry. Finally, we examine the ethical ramifications of emotion-understanding technologies and contemplate their potential societal impacts. Overall, this article endeavors to equip readers with a deeper understanding of the domain of emotion analysis in visual media and to inspire further research and development in this captivating and rapidly evolving field.
Collapse
Affiliation(s)
- James Z Wang
- College of Information Sciences and Technology, The Pennsylvania State University, University Park, PA 16802 USA
| | - Sicheng Zhao
- Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing 100084, China
| | - Chenyan Wu
- College of Information Sciences and Technology, The Pennsylvania State University, University Park, PA 16802 USA
| | - Reginald B Adams
- Department of Psychology, The Pennsylvania State University, University Park, PA 16802 USA
| | - Michelle G Newman
- Department of Psychology, The Pennsylvania State University, University Park, PA 16802 USA
| | - Tal Shafir
- Emili Sagol Creative Arts Therapies Research Center, University of Haifa, Haifa 3498838, Israel
| | - Rachelle Tsachor
- School of Theatre and Music, University of Illinois at Chicago, Chicago, IL 60607 USA
| |
Collapse
|
22
|
Eiserbeck A, Maier M, Baum J, Abdel Rahman R. Deepfake smiles matter less-the psychological and neural impact of presumed AI-generated faces. Sci Rep 2023; 13:16111. [PMID: 37752242 PMCID: PMC10522659 DOI: 10.1038/s41598-023-42802-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2023] [Accepted: 09/14/2023] [Indexed: 09/28/2023] Open
Abstract
High-quality AI-generated portraits ("deepfakes") are becoming increasingly prevalent. Understanding the responses they evoke in perceivers is crucial in assessing their societal implications. Here we investigate the impact of the belief that depicted persons are real or deepfakes on psychological and neural measures of human face perception. Using EEG, we tracked participants' (N = 30) brain responses to real faces showing positive, neutral, and negative expressions, after being informed that they are either real or fake. Smiling faces marked as fake appeared less positive, as reflected in expression ratings, and induced slower evaluations. Whereas presumed real smiles elicited canonical emotion effects with differences relative to neutral faces in the P1 and N170 components (markers of early visual perception) and in the EPN component (indicative of reflexive emotional processing), presumed deepfake smiles showed none of these effects. Additionally, only smiles presumed as fake showed enhanced LPP activity compared to neutral faces, suggesting more effortful evaluation. Negative expressions induced typical emotion effects, whether considered real or fake. Our findings demonstrate a dampening effect on perceptual, emotional, and evaluative processing of presumed deepfake smiles, but not angry expressions, adding new specificity to the debate on the societal impact of AI-generated content.
Collapse
Affiliation(s)
- Anna Eiserbeck
- Department of Psychology, Faculty of Life Sciences, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099, Berlin, Germany.
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin, Berlin, Germany.
| | - Martin Maier
- Department of Psychology, Faculty of Life Sciences, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099, Berlin, Germany.
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin, Berlin, Germany.
| | - Julia Baum
- Department of Psychology, Faculty of Life Sciences, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099, Berlin, Germany
| | - Rasha Abdel Rahman
- Department of Psychology, Faculty of Life Sciences, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099, Berlin, Germany
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin, Berlin, Germany
| |
Collapse
|
23
|
Lin C, Bulls LS, Tepfer LJ, Vyas AD, Thornton MA. Advancing Naturalistic Affective Science with Deep Learning. Affect Sci 2023; 4:550-562. [PMID: 37744976 PMCID: PMC10514024 DOI: 10.1007/s42761-023-00215-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/11/2023] [Accepted: 08/03/2023] [Indexed: 09/26/2023]
Abstract
People express their own emotions and perceive others' emotions via a variety of channels, including facial movements, body gestures, vocal prosody, and language. Studying these channels of affective behavior offers insight into both the experience and perception of emotion. Prior research has predominantly focused on studying individual channels of affective behavior in isolation using tightly controlled, non-naturalistic experiments. This approach limits our understanding of emotion in more naturalistic contexts where different channels of information tend to interact. Traditional methods struggle to address this limitation: manually annotating behavior is time-consuming, making it infeasible to do at large scale; manually selecting and manipulating stimuli based on hypotheses may neglect unanticipated features, potentially generating biased conclusions; and common linear modeling approaches cannot fully capture the complex, nonlinear, and interactive nature of real-life affective processes. In this methodology review, we describe how deep learning can be applied to address these challenges to advance a more naturalistic affective science. First, we describe current practices in affective research and explain why existing methods face challenges in revealing a more naturalistic understanding of emotion. Second, we introduce deep learning approaches and explain how they can be applied to tackle three main challenges: quantifying naturalistic behaviors, selecting and manipulating naturalistic stimuli, and modeling naturalistic affective processes. Finally, we describe the limitations of these deep learning methods, and how these limitations might be avoided or mitigated. By detailing the promise and the peril of deep learning, this review aims to pave the way for a more naturalistic affective science.
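One of the three challenges named above, quantifying naturalistic behavior at scale, can be illustrated with a small sketch: a pretrained vision backbone replaces manual frame-by-frame annotation with automatically extracted features. This is a generic illustration (torchvision >= 0.13 API assumed; the frame file names are hypothetical), not a pipeline taken from the review.

```python
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT
backbone = models.resnet18(weights=weights)
backbone.fc = torch.nn.Identity()          # keep 512-d features, drop the classifier
backbone.eval()
preprocess = weights.transforms()          # the preprocessing matching these weights

@torch.no_grad()
def frame_features(paths):
    """Return an (n_frames, 512) feature matrix for a list of frame images."""
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
    return backbone(batch)

# features = frame_features(["frame_0001.jpg", "frame_0002.jpg"])  # hypothetical files
```

Downstream models (e.g., for predicting moment-to-moment affect ratings) can then be fit on such features instead of hand-coded annotations.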
Collapse
Affiliation(s)
- Chujun Lin
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH USA
| | - Landry S. Bulls
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH USA
| | - Lindsey J. Tepfer
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH USA
| | - Amisha D. Vyas
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH USA
| | - Mark A. Thornton
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH USA
| |
Collapse
|
24
|
Zhu X, Gong Y, Xu T, Lian W, Xu S, Fan L. Incongruent gestures slow the processing of facial expressions in university students with social anxiety. Front Psychol 2023; 14:1199537. [PMID: 37674750 PMCID: PMC10478090 DOI: 10.3389/fpsyg.2023.1199537] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2023] [Accepted: 08/08/2023] [Indexed: 09/08/2023] Open
Abstract
In recent years, an increasing number of studies have examined the mechanisms underlying nonverbal emotional information processing in people with high social anxiety (HSA). However, most of these studies have focused on the processing of facial expressions, and research on gesture processing, or on combined face-gesture processing, in HSA individuals remains scarce. The present study explored the processing characteristics and mechanism of the interaction between gestures and facial expressions in people with HSA and low social anxiety (LSA). We recruited university students as participants and used Liebowitz Social Anxiety Scale scores to distinguish the HSA and LSA groups. We used a 2 (group: HSA and LSA) × 2 (emotion valence: positive, negative) × 2 (task: face, gesture) multifactor mixed design, and videos of a single face or gesture and combined face-gesture cues were used as stimuli. We found that (1) there is a distinction in the processing of faces and gestures, with individuals recognizing gestures faster than faces; (2) there is an attentional enhancement in the processing of gestures, particularly for negative gestures; and (3) when the emotional valence of faces and gestures is aligned, recognition of both is facilitated. However, incongruent gestures have a stronger impact on the processing of facial expressions than facial expressions themselves, suggesting that the processing of facial emotion is more influenced by the environmental cues provided by gestures. These findings indicate that gestures play an important role in emotional processing, and that facial emotional processing depends heavily on the environmental cues derived from gestures, which helps to clarify the reasons for biases in the interpretation of emotional information in people with HSA.
Collapse
Affiliation(s)
- Xinyi Zhu
- Department of Psychology, School of Education, Wenzhou University, Wenzhou, China
- Department of Psychology, Jing Hengyi School of Education, Hangzhou Normal University, Hangzhou, China
| | - Yan Gong
- Department of Psychology, School of Education, Wenzhou University, Wenzhou, China
| | - Tingting Xu
- Department of Psychology, School of Education, Wenzhou University, Wenzhou, China
| | - Wen Lian
- Department of Psychology, School of Education, Wenzhou University, Wenzhou, China
| | - Shuhui Xu
- Department of Psychology, School of Education, Wenzhou University, Wenzhou, China
| | - Lu Fan
- Department of Psychology, School of Education, Wenzhou University, Wenzhou, China
| |
Collapse
|
25
|
Wu Y, Ying H. The background assimilation effect: Facial emotional perception is affected by surrounding stimuli. Iperception 2023; 14:20416695231190254. [PMID: 37654695 PMCID: PMC10467198 DOI: 10.1177/20416695231190254] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2022] [Accepted: 07/10/2023] [Indexed: 09/02/2023] Open
Abstract
The perception of facial emotion is not only determined by the physical features of the face itself but is also influenced by emotional information from the background or surrounding stimuli. However, the details of this effect are not fully understood. Here, the authors tested the perceived emotion of a target face surrounded by stimuli with different levels of emotional valence. In Experiment 1, four types of objects were divided into three groups (negative: unpleasant flowers and unpleasant animals; mildly negative (neutral): houses; positive: pleasant flowers). In Experiment 2, three groups of surrounding faces with different social-emotional valence (negative, neutral, and positive) were formed using memory of affective personal knowledge. The data from the two experiments showed that the perception of facial emotion can be influenced and modulated by the emotional valence of the surrounding stimuli, which can be explained by assimilation: positive stimuli increased the perceived valence of a target face, while negative stimuli decreased it. Furthermore, neutral stimuli also increased the valence of the target, which could be explained by the social positive effect. Therefore, the process of assimilation is likely to reflect high-level emotional cognition rather than low-level visual perception. The results of this study may help us better understand face perception in realistic scenarios.
Collapse
Affiliation(s)
- Yujie Wu
- Department of Psychology, Soochow University, Suzhou, China
| | - Haojiang Ying
- Department of Psychology, Soochow University, Suzhou, China
| |
Collapse
|
26
|
Boch M, Wagner IC, Karl S, Huber L, Lamm C. Functionally analogous body- and animacy-responsive areas are present in the dog (Canis familiaris) and human occipito-temporal lobe. Commun Biol 2023; 6:645. [PMID: 37369804 PMCID: PMC10300132 DOI: 10.1038/s42003-023-05014-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2022] [Accepted: 06/05/2023] [Indexed: 06/29/2023] Open
Abstract
Comparing the neural correlates of socio-cognitive skills across species provides insights into the evolution of the social brain and has revealed face- and body-sensitive regions in the primate temporal lobe. Although from a different lineage, dogs share convergent visuo-cognitive skills with humans and a temporal lobe which evolved independently in carnivorans. We investigated the neural correlates of face and body perception in dogs (N = 15) and humans (N = 40) using functional MRI. Combining univariate and multivariate analysis approaches, we found functionally analogous occipito-temporal regions involved in the perception of animate entities and bodies in both species and face-sensitive regions in humans. Though unpredicted, we also observed neural representations of faces compared to inanimate objects, and dog compared to human bodies in dog olfactory regions. These findings shed light on the evolutionary foundations of human and dog social cognition and the predominant role of the temporal lobe.
Collapse
Affiliation(s)
- Magdalena Boch
- Social, Cognitive and Affective Neuroscience Unit, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria.
- Department of Cognitive Biology, Faculty of Life Sciences, University of Vienna, Vienna, Austria.
| | - Isabella C Wagner
- Social, Cognitive and Affective Neuroscience Unit, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
- Centre for Microbiology and Environmental Systems Science, University of Vienna, Vienna, Austria
| | - Sabrina Karl
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna and University of Vienna, Vienna, Austria
| | - Ludwig Huber
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna and University of Vienna, Vienna, Austria
| | - Claus Lamm
- Social, Cognitive and Affective Neuroscience Unit, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
| |
Collapse
|
27
|
Preißler L, Keck J, Krüger B, Munzert J, Schwarzer G. Recognition of emotional body language from dyadic and monadic point-light displays in 5-year-old children and adults. J Exp Child Psychol 2023; 235:105713. [PMID: 37331307 DOI: 10.1016/j.jecp.2023.105713] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2022] [Revised: 04/13/2023] [Accepted: 05/16/2023] [Indexed: 06/20/2023]
Abstract
Most child studies on emotion perception used faces and speech as emotion stimuli, but little is known about children's perception of emotions conveyed by body movements, that is, emotional body language (EBL). This study aimed to investigate whether processing advantages for positive emotions in children and negative emotions in adults found in studies on emotional face and term perception also occur in EBL perception. We also aimed to uncover which specific movement features of EBL contribute to emotion perception from interactive dyads compared with noninteractive monads in children and adults. We asked 5-year-old children and adults to categorize happy and angry point-light displays (PLDs), presented as pairs (dyads) and single actors (monads), in a button-press task. By applying representational similarity analyses, we determined intra- and interpersonal movement features of the PLDs and their relation to the participants' emotional categorizations. Results showed significantly higher recognition of happy PLDs in 5-year-olds and of angry PLDs in adults in monads but not in dyads. In both age groups, emotion recognition depended significantly on kinematic and postural movement features such as limb contraction and vertical movement in monads and dyads, whereas in dyads recognition also relied on interpersonal proximity measures such as interpersonal distance. Thus, EBL processing in monads seems to undergo a similar developmental shift from a positivity bias to a negativity bias, as was previously found for emotional faces and terms. Despite these age-specific processing biases, children and adults seem to use similar movement features in EBL processing.
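The representational similarity logic used in this study can be sketched in a few lines: build a dissimilarity structure from movement features, another from behavioral responses, and correlate the two. The data below are simulated placeholders, and the feature and response definitions are assumptions for illustration only.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
feat = rng.normal(size=(40, 5))   # 40 stimuli x 5 movement features (e.g., limb contraction)
resp = rng.normal(size=(40, 1))   # per-stimulus behavior (e.g., proportion "angry" responses)

feat_rdm = pdist(feat)            # condensed pairwise stimulus dissimilarities
resp_rdm = pdist(resp)
rho, p = spearmanr(feat_rdm, resp_rdm)   # second-order (representational) similarity
print(f"feature-behavior RDM correlation: rho={rho:.2f}, p={p:.3f}")
```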
Collapse
Affiliation(s)
- Lucie Preißler
- Department of Developmental Psychology, Justus Liebig University Giessen, 35394 Gießen, Germany.
| | - Johannes Keck
- Neuromotor Behavior Lab, Department of Sport Science, Justus Liebig University Giessen, 35394 Gießen, Germany
| | - Britta Krüger
- Neuromotor Behavior Lab, Department of Sport Science, Justus Liebig University Giessen, 35394 Gießen, Germany
| | - Jörn Munzert
- Neuromotor Behavior Lab, Department of Sport Science, Justus Liebig University Giessen, 35394 Gießen, Germany
| | - Gudrun Schwarzer
- Department of Developmental Psychology, Justus Liebig University Giessen, 35394 Gießen, Germany
| |
Collapse
|
28
|
Dildine TC, Amir CM, Parsons J, Atlas LY. How Pain-Related Facial Expressions Are Evaluated in Relation to Gender, Race, and Emotion. Affect Sci 2023; 4:350-369. [PMID: 37293681 PMCID: PMC9982800 DOI: 10.1007/s42761-023-00181-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Accepted: 01/24/2023] [Indexed: 03/06/2023]
Abstract
Inequities in pain assessment are well-documented; however, the psychological mechanisms underlying such biases are poorly understood. We investigated potential perceptual biases in the judgments of faces displaying pain-related movements. Across five online studies, 956 adult participants viewed images of computer-generated faces ("targets") that varied in features related to race (Black and White) and gender (women and men). Target identity was manipulated across participants, and each target had equivalent facial movements that displayed varying intensities of movement in facial action-units related to pain (Studies 1-4) or pain and emotion (Study 5). On each trial, participants provided categorical judgments as to whether a target was in pain (Studies 1-4) or which expression the target displayed (Study 5) and then rated the perceived intensity of the expression. Meta-analyses of Studies 1-4 revealed that movement intensity was positively associated with both categorizing a trial as painful and perceived pain intensity. Target race and gender did not consistently affect pain-related judgments, contrary to well-documented clinical inequities. In Study 5, in which pain was equally likely relative to other emotions, pain was the least frequently selected emotion (5%). Our results suggest that perceivers can utilize facial movements to evaluate pain in other individuals, but perceiving pain may depend on contextual factors. Furthermore, assessments of computer-generated, pain-related facial movements online do not replicate sociocultural biases observed in the clinic. These findings provide a foundation for future studies comparing CGI and real images of pain and emphasize the need for further work on the relationship between pain and emotion. Supplementary Information The online version contains supplementary material available at 10.1007/s42761-023-00181-6.
Collapse
Affiliation(s)
- Troy C. Dildine
- National Center for Complementary and Integrative Health, National Institutes of Health, 10 Center Drive, Bethesda, MD 20892, USA
- Department of Clinical Neuroscience, Karolinska Institute, 171 77 Solna, Sweden
| | - Carolyn M. Amir
- National Center for Complementary and Integrative Health, National Institutes of Health, 10 Center Drive, Bethesda, MD 20892, USA
| | - Julie Parsons
- National Center for Complementary and Integrative Health, National Institutes of Health, 10 Center Drive, Bethesda, MD 20892, USA
| | - Lauren Y. Atlas
- National Center for Complementary and Integrative Health, National Institutes of Health, 10 Center Drive, Bethesda, MD 20892, USA
- National Institute of Mental Health, National Institutes of Health, Bethesda, MD 20892 USA
- National Institute on Drug Abuse, National Institutes of Health, Baltimore, MD 21224 USA
| |
Collapse
|
29
|
Christensen JF, Bruhn L, Schmidt EM, Bahmanian N, Yazdi SHN, Farahi F, Sancho-Escanero L, Menninghaus W. A 5-emotions stimuli set for emotion perception research with full-body dance movements. Sci Rep 2023; 13:8757. [PMID: 37253770 DOI: 10.1038/s41598-023-33656-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Accepted: 04/17/2023] [Indexed: 06/01/2023] Open
Abstract
Ekman famously contended that there are different channels of emotional expression (face, voice, body), and that emotion recognition ability confers an adaptive advantage to the individual. Yet, still today, much emotion perception research is focussed on emotion recognition from the face, and few validated emotionally expressive full-body stimuli sets are available. Based on research on emotional speech perception, we created a new, highly controlled full-body stimuli set. We used the same-sequence approach, and not emotional actions (e.g., jumping for joy, recoiling in fear): One professional dancer danced 30 sequences of (dance) movements five times each, expressing joy, anger, fear, sadness or a neutral state, one at each repetition. We outline the creation of a total of 150 such video stimuli, each 6 s long, showing the dancer as a white silhouette on a black background. Ratings from 90 participants (emotion recognition, aesthetic judgment) showed that the intended emotion was recognized above chance (chance: 20%; joy: 45%, anger: 48%, fear: 37%, sadness: 50%, neutral state: 51%), and that aesthetic judgment was sensitive to the intended emotion (beauty ratings: joy > anger > fear > neutral state, and sad > fear > neutral state). The stimuli set, normative values and code are available for download.
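As a worked example of the above-chance claim: with five response options, chance is 20%, and an exact binomial test can check whether an observed recognition rate exceeds it. The trial count below is invented for illustration, not the study's actual cell size (scipy >= 1.7 assumed for `binomtest`).

```python
from scipy.stats import binomtest

n_trials = 150                 # hypothetical number of judgments for one emotion
hits = int(0.45 * n_trials)    # e.g., 45% correct for "joy" vs. 20% chance
result = binomtest(hits, n_trials, p=0.20, alternative="greater")
print(f"{hits}/{n_trials} correct, one-sided p = {result.pvalue:.2g}")
```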
Collapse
Affiliation(s)
- Julia F Christensen
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany.
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany.
| | - Laura Bruhn
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
| | - Eva-Madeleine Schmidt
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Max Planck School of Cognition, Leipzig, Germany
| | - Nasimeh Bahmanian
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Department of Modern Languages, Goethe University, Frankfurt, Germany
| | | | | | | | - Winfried Menninghaus
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
| |
Collapse
|
30
|
O’Toole AJ, Hu Y. First impressions from faces in the real world: Commentary on Sutherland and Young (2022). Br J Psychol 2023; 114:508-510. [PMID: 36519182 PMCID: PMC10443674 DOI: 10.1111/bjop.12621] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Accepted: 11/21/2022] [Indexed: 12/23/2022]
Abstract
The study of first impressions from faces now emphasizes the need to understand trait inferences made to naturalistic face images (British Journal of Psychology, 113, 2022, 1056). Face recognition algorithms based on deep convolutional neural networks simultaneously represent invariant, changeable and environmental variables in face images. Therefore, we suggest them as a comprehensive 'face space' model of first impressions of naturalistic faces. We also suggest that to understand trait inferences in the real world, a logical next step is to consider trait inferences made to whole people (faces and bodies). On the role of cultural contributions to trait perception, we think it is important for the field to begin to consider the way in which trait inferences motivate (or not) behaviour in independent and interdependent cultures.
Collapse
Affiliation(s)
| | - Ying Hu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
| |
Collapse
|
31
|
Kelly SD, Ngo Tran QA. Exploring the Emotional Functions of Co-Speech Hand Gesture in Language and Communication. Top Cogn Sci 2023. [PMID: 37115518 DOI: 10.1111/tops.12657] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Revised: 04/05/2023] [Accepted: 04/06/2023] [Indexed: 04/29/2023]
Abstract
Research over the past four decades has built a convincing case that co-speech hand gestures play a powerful role in human cognition. However, this recent focus on the cognitive function of gesture has, to a large extent, overlooked its emotional role-a role that was once central to research on bodily expression. In the present review, we first give a brief summary of the wealth of research demonstrating the cognitive function of co-speech gestures in language acquisition, learning, and thinking. Building on this foundation, we revisit the emotional function of gesture across a wide range of communicative contexts, from clinical to artistic to educational, and spanning diverse fields, from cognitive neuroscience to linguistics to affective science. Bridging the cognitive and emotional functions of gesture highlights promising avenues of research that have varied practical and theoretical implications for human-machine interactions, therapeutic interventions, language evolution, embodied cognition, and more.
Collapse
Affiliation(s)
- Spencer D Kelly
- Department of Psychological and Brain Sciences, Center for Language and Brain, Colgate University, 13 Oak Dr., Hamilton, NY, 13346, United States
| | - Quang-Anh Ngo Tran
- Department of Psychological and Brain Sciences, Indiana University, 1101 E. 10th St., Bloomington, IN, 47405, United States
| |
Collapse
|
32
|
Ward IL, Raven EP, de la Rosa S, Jones DK, Teufel C, von dem Hagen E. White matter microstructure in face and body networks predicts facial expression and body posture perception across development. Hum Brain Mapp 2023; 44:2307-2322. [PMID: 36661194 PMCID: PMC10028674 DOI: 10.1002/hbm.26211] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Revised: 12/05/2022] [Accepted: 01/07/2023] [Indexed: 01/21/2023] Open
Abstract
Facial expression and body posture recognition have protracted developmental trajectories. Interactions between face and body perception, such as the influence of body posture on facial expression perception, also change with development. While the brain regions underpinning face and body processing are well-defined, little is known about how white-matter tracts linking these regions relate to perceptual development. Here, we obtained complementary diffusion magnetic resonance imaging (MRI) measures (fractional anisotropy [FA], spherical mean Ṧμ), and a quantitative MRI myelin-proxy measure (R1), within white-matter tracts of face- and body-selective networks in children and adolescents and related these to perceptual development. In tracts linking occipital and fusiform face areas, facial expression perception was predicted by age-related maturation, as measured by Ṧμ and R1, as well as age-independent individual differences in microstructure, captured by FA and R1. Tract microstructure measures linking posterior superior temporal sulcus body region with anterior temporal lobe (ATL) were related to the influence of body on facial expression perception, supporting ATL as a site of face and body network convergence. Overall, our results highlight age-dependent and age-independent constraints that white-matter microstructure poses on perceptual abilities during development and the importance of complementary microstructural measures in linking brain structure and behaviour.
Collapse
Affiliation(s)
- Isobel L. Ward
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, Cardiff, UK
| | - Erika P. Raven
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, Cardiff, UK
- Center for Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, New York, USA
| | | | - Derek K. Jones
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, Cardiff, UK
| | - Christoph Teufel
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, Cardiff, UK
| | - Elisabeth von dem Hagen
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, Cardiff, UK
| |
Collapse
|
33
|
Long H, Peluso N, Baker CI, Japee S, Taubert J. A database of heterogeneous faces for studying naturalistic expressions. Sci Rep 2023; 13:5383. [PMID: 37012369 PMCID: PMC10070342 DOI: 10.1038/s41598-023-32659-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2022] [Accepted: 03/30/2023] [Indexed: 04/05/2023] Open
Abstract
Facial expressions are thought to be complex visual signals, critical for communication between social agents. Most prior work aimed at understanding how facial expressions are recognized has relied on stimulus databases featuring posed facial expressions, designed to represent putative emotional categories (such as 'happy' and 'angry'). Here we use an alternative selection strategy to develop the Wild Faces Database (WFD): a set of one thousand images capturing a diverse range of ambient facial behaviors from outside of the laboratory. We characterized the perceived emotional content in these images using a standard categorization task in which participants were asked to classify the apparent facial expression in each image. In addition, participants were asked to indicate the intensity and genuineness of each expression. While modal scores indicate that the WFD captures a range of different emotional expressions, in comparing the WFD to images taken from other, more conventional databases, we found that participants responded more variably and less specifically to the wild-type faces, perhaps indicating that natural expressions are more multiplexed than a categorical model would predict. We argue that this variability can be employed to explore latent dimensions in our mental representation of facial expressions. Further, images in the WFD were rated as less intense and more genuine than images taken from other databases, suggesting a greater degree of authenticity among WFD images. The strong positive correlation between intensity and genuineness scores demonstrates that even the high arousal states captured in the WFD were perceived as authentic. Collectively, these findings highlight the potential utility of the WFD as a new resource for bridging the gap between the laboratory and real world in studies of expression recognition.
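Two of the measures described above, the modal expression label per image and the intensity-genuineness correlation, reduce to a few lines of analysis code. The ratings below are invented placeholders, not WFD data.

```python
import numpy as np
from scipy.stats import pearsonr

# Categorical judgments for one image: the modal label summarizes them.
labels = ["happy", "happy", "surprise", "happy", "fear"]
modal = max(set(labels), key=labels.count)

# Per-image mean intensity and genuineness ratings across a small image set.
intensity = np.array([3.2, 4.1, 2.8, 3.9, 3.5])
genuineness = np.array([4.0, 4.4, 3.1, 4.2, 3.8])
r, p = pearsonr(intensity, genuineness)   # a positive r echoes the reported link
print(modal, f"r={r:.2f}", f"p={p:.3f}")
```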
Collapse
Affiliation(s)
- Houqiu Long
- The School of Psychology, The University of Queensland, St Lucia, QLD, Australia
| | - Natalie Peluso
- The School of Psychology, The University of Queensland, St Lucia, QLD, Australia
| | - Chris I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, USA
| | - Shruti Japee
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, USA
| | - Jessica Taubert
- The School of Psychology, The University of Queensland, St Lucia, QLD, Australia.
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, USA.
| |
Collapse
|
34
|
Ventura-Bort C, Panza D, Weymar M. Words matter when inferring emotions: a conceptual replication and extension. Cogn Emot 2023:1-15. [PMID: 36856025 DOI: 10.1080/02699931.2023.2183491] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/02/2023]
Abstract
It has long been known that facial configurations play a critical role when inferring mental and emotional states from others. Nevertheless, there is still a scientific debate on how we infer emotions from facial configurations. The theory of constructed emotion (TCE) suggests that we may infer different emotions from the same facial configuration, depending on the context (e.g. provided by visual and lexical cues) in which they are perceived. For instance, a recent study found that participants were more accurate in inferring mental and emotional states across three different datasets (i.e. RMET, static and dynamic emojis) when words were provided (i.e. forced-choice task), compared to when they were not (i.e. free-labelling task), suggesting that words serve as contexts that modulate the inference from facial configurations. The goal of the current within-subject study was to replicate and extend these findings by adding a fourth dataset (KDEF-dyn), consisting of morphed human faces (to increase ecological validity). Replicating previous findings, we observed that words increased accuracy across the three (previously used) datasets, an effect that was also observed for the morphed facial stimuli. Our findings are in line with the TCE, providing support for the importance of contextual verbal cues in emotion perception.
Collapse
Affiliation(s)
- C Ventura-Bort
- Department of Biological Psychology and Affective Science, Faculty of Human Sciences, University of Potsdam, Potsdam, Germany
| | - D Panza
- Department of Biological Psychology and Affective Science, Faculty of Human Sciences, University of Potsdam, Potsdam, Germany
| | - M Weymar
- Department of Biological Psychology and Affective Science, Faculty of Human Sciences, University of Potsdam, Potsdam, Germany
- Research Focus Cognitive Sciences, University of Potsdam, Potsdam, Germany
- Faculty of Health Sciences Brandenburg, University of Potsdam, Potsdam, Germany
| |
Collapse
|
35
|
Chen H, Shi H, Liu X, Li X, Zhao G. SMG: A Micro-gesture Dataset Towards Spontaneous Body Gestures for Emotional Stress State Analysis. Int J Comput Vis 2023. [DOI: 10.1007/s11263-023-01761-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/18/2023]
Abstract
We explore using body gestures for hidden emotional state analysis. As an important non-verbal communication channel, human body gestures are capable of conveying emotional information during social communication. In previous works, efforts have focused mainly on facial expressions, speech, or expressive body gestures to interpret classical expressive emotions. In contrast, we focus on a specific group of body gestures, called micro-gestures (MGs), used in the psychology research field to interpret inner human feelings. MGs are subtle and spontaneous body movements that are proven, together with micro-expressions, to be more reliable than normal facial expressions for conveying hidden emotional information. In this work, a comprehensive study of MGs is presented from a computer vision perspective, including a novel spontaneous micro-gesture (SMG) dataset with two emotional stress states and a comprehensive statistical analysis indicating the correlations between MGs and emotional states. Novel frameworks are further presented together with various state-of-the-art methods as benchmarks for automatic classification, online recognition of MGs, and emotional stress state recognition. The dataset and methods presented could inspire a new way of utilizing body gestures for human emotion understanding and bring a new direction to the emotion AI community. The source code and dataset are made available: https://github.com/mikecheninoulu/SMG.
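The online-recognition setting mentioned above can be sketched as sliding-window spotting over a pose stream: score fixed-length windows and report those flagged as containing a micro-gesture. The window size, stride, and thresholding "classifier" below are placeholder assumptions, not the SMG benchmark's actual models or settings.

```python
import numpy as np

def classify(window):
    """Stand-in for a trained micro-gesture classifier: flags high motion energy."""
    return float(np.linalg.norm(np.diff(window, axis=0))) > 5.0

def spot(stream, win=60, stride=15):
    """Slide a window over the frame stream; return (start, end) of flagged spans."""
    return [(s, s + win)
            for s in range(0, len(stream) - win + 1, stride)
            if classify(stream[s:s + win])]

stream = np.random.default_rng(1).normal(size=(600, 17, 3))  # 600 frames x 17 joints x xyz
print(spot(stream)[:3])
```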
Collapse
|
36
|
Hu Y, O'Toole AJ. First impressions: Integrating faces and bodies in personality trait perception. Cognition 2023; 231:105309. [PMID: 36347653 DOI: 10.1016/j.cognition.2022.105309] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2021] [Revised: 10/05/2022] [Accepted: 10/12/2022] [Indexed: 11/07/2022]
Abstract
Faces and bodies spontaneously elicit personality trait judgments (e.g., trustworthy, dominant, lazy). We examined how trait information from the face and body combine to form first impressions of the whole person and whether trait judgments from the face and body are affected by seeing the whole person. Consistent with the trait-dependence hypothesis, Experiment 1 showed that the relative contribution of the face and body to whole-person perception varied with the trait judged. Agreeableness traits (e.g., warm, aggressive, sympathetic, trustworthy) were inferred primarily from the face, conscientiousness traits (e.g., dependable, careless) from the body, and extraversion traits (e.g., dominant, quiet, confident) from the whole person. A control experiment showed that both clothing and body shape contributed to whole-person judgments. In Experiment 2, we found that a face (body) rated in the whole person elicited a different rating than when it was rated in isolation. Specifically, when trait ratings differed for an isolated face and body of the same identity, the whole-person context biased in-context ratings of the faces and bodies towards the ratings of the context. These results showed that face and body trait perception interact more than previously assumed. We combine current and established findings to propose a novel framework to account for face-body integration in trait perception. This framework incorporates basic elements such as perceptual determinants, nonperceptual determinants, trait formation, and integration, as well as predictive factors such as the rater, the person rated, and the situation.
Collapse
Affiliation(s)
- Ying Hu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
| | | |
Collapse
|
37
|
Emotion is perceived accurately from isolated body parts, especially hands. Cognition 2023; 230:105260. [PMID: 36058103 DOI: 10.1016/j.cognition.2022.105260] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Revised: 08/16/2022] [Accepted: 08/17/2022] [Indexed: 11/21/2022]
Abstract
Body posture and configuration provide important visual cues about the emotion states of other people. We know that bodily form is processed holistically; however, emotion recognition may depend on different mechanisms: certain body parts, such as the hands, may be especially important for perceiving emotion. This study therefore compared participants' emotion recognition performance when shown images of full bodies or of isolated hands, arms, heads and torsos. Across three experiments, emotion recognition accuracy was above chance for all body parts. While emotions were recognized most accurately from full bodies, recognition performance from the hands was more accurate than for other body parts. Representational similarity analysis further showed that the pattern of errors for the hands was related to that for full bodies. Performance was reduced when stimuli were inverted, showing a clear body inversion effect. The high performance for hands was not due only to the fact that there are two hands, as performance remained well above chance even when just one hand was shown. These results demonstrate that emotions can be decoded from body parts. Furthermore, certain features, such as the hands, are more important to emotion perception than others. STATEMENT OF RELEVANCE: Successful social interaction relies on accurately perceiving emotional information from others. Bodies provide an abundance of emotion cues; however, the way in which emotional bodies and body parts are perceived is unclear. We investigated this perceptual process by comparing emotion recognition for body parts with that for full bodies. Crucially, we found that while emotions were most accurately recognized from full bodies, emotions were also classified accurately when images of isolated hands, arms, heads and torsos were seen. Of the body parts shown, emotion recognition from the hands was most accurate. Furthermore, shared patterns of emotion classification for hands and full bodies suggested that emotion recognition mechanisms are shared for full bodies and body parts. That the hands are key to emotion perception is important evidence in its own right. It could also be applied to interventions for individuals who find it difficult to read emotions from faces and bodies.
Collapse
|
38
|
Smith RA, Cross ES. The McNorm library: creating and validating a new library of emotionally expressive whole body dance movements. Psychol Res 2023; 87:484-508. [PMID: 35385989 PMCID: PMC8985749 DOI: 10.1007/s00426-022-01669-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2021] [Accepted: 02/23/2022] [Indexed: 11/28/2022]
Abstract
The ability to exchange affective cues with others plays a key role in our ability to create and maintain meaningful social relationships. We express our emotions through a variety of socially salient cues, including facial expressions, the voice, and body movement. While significant advances have been made in our understanding of verbal and facial communication, to date, understanding of the role played by human body movement in our social interactions remains incomplete. To this end, here we describe the creation and validation of a new set of emotionally expressive whole-body dance movement stimuli, named the Motion Capture Norming (McNorm) Library, which was designed to address a number of limitations associated with previous movement stimuli. This library comprises a series of point-light representations of a dancer's movements, which were performed to communicate to observers neutrality, happiness, sadness, anger, and fear. Based on results from two validation experiments, participants could reliably discriminate the intended emotion expressed in the clips in this stimulus set, with accuracy rates up to 60% (chance = 20%). We further explored the impact of dance experience and trait empathy on emotion recognition and found that neither significantly impacted emotion discrimination. As all materials for presenting and analysing this movement library are openly available, we hope this resource will aid other researchers in further exploration of affective communication expressed by human bodily movement.
Collapse
Affiliation(s)
- Rebecca A. Smith
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland
| | - Emily S. Cross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland
- Department of Cognitive Science, Macquarie University, Sydney, Australia
| |
Collapse
|
39
|
Straulino E, Scarpazza C, Sartori L. What is missing in the study of emotion expression? Front Psychol 2023; 14:1158136. [PMID: 37179857 PMCID: PMC10173880 DOI: 10.3389/fpsyg.2023.1158136] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2023] [Accepted: 04/06/2023] [Indexed: 05/15/2023] Open
Abstract
As we approach celebrations marking 150 years of "The Expression of the Emotions in Man and Animals", scientists' conclusions on emotion expression are still debated. Emotion expression has been traditionally anchored to prototypical and mutually exclusive facial expressions (e.g., anger, disgust, fear, happiness, sadness, and surprise). However, people express emotions in nuanced patterns and - crucially - not everything is in the face. In recent decades considerable work has critiqued this classical view, calling for a more fluid and flexible approach that considers how humans dynamically perform genuine expressions with their bodies in context. A growing body of evidence suggests that each emotional display is a complex, multi-component, motoric event. The human face is never static, but continuously acts and reacts to internal and environmental stimuli, with the coordinated action of muscles throughout the body. Moreover, two anatomically and functionally different neural pathways sub-serve voluntary and involuntary expressions. An interesting implication is that we have distinct and independent pathways for genuine and posed facial expressions, and different combinations may occur across the vertical facial axis. Investigating the time course of these facial blends, which can be controlled consciously only in part, has recently provided a useful operational test for comparing the different predictions of various models on the lateralization of emotions. This concise review will identify shortcomings and new challenges regarding the study of emotion expression at the face, body, and contextual levels, eventually resulting in a theoretical and methodological shift in the study of emotions. We contend that the most feasible solution to address the complex world of emotion expression is defining a completely new and more complete approach to emotional investigation. This approach can potentially lead us to the roots of emotional display, and to the individual mechanisms underlying their expression (i.e., individual emotional signatures).
Collapse
Affiliation(s)
- Elisa Straulino
- Department of General Psychology, University of Padova, Padova, Italy
| | - Cristina Scarpazza
- Department of General Psychology, University of Padova, Padova, Italy
- IRCCS San Camillo Hospital, Venice, Italy
| | - Luisa Sartori
- Department of General Psychology, University of Padova, Padova, Italy
- Padova Neuroscience Center, University of Padova, Padova, Italy
| |
Collapse
|
40
|
Taubert J, Japee S, Patterson A, Wild H, Goyal S, Yu D, Ungerleider LG. A broadly tuned network for affective body language in the macaque brain. Sci Adv 2022; 8:eadd6865. [PMID: 36427322 PMCID: PMC9699662 DOI: 10.1126/sciadv.add6865] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/27/2022] [Accepted: 10/27/2022] [Indexed: 06/16/2023]
Abstract
Body language is a powerful tool that we use to communicate how we feel, but it is unclear whether other primates also communicate in this way. Here, we use functional magnetic resonance imaging to show that the body-selective patches in macaques are activated by affective body language. Unexpectedly, we found these regions to be tolerant of naturalistic variation in posture as well as species; the bodies of macaques, humans, and domestic cats all evoked a stronger response when they conveyed fear than when they conveyed no affect. Multivariate analyses confirmed that the neural representation of fear-related body expressions was species-invariant. Collectively, these findings demonstrate that, like humans, macaques have body-selective brain regions in the ventral visual pathway for processing affective body language. These data also indicate that representations of body stimuli in these regions are built on the basis of emergent properties, such as socio-affective meaning, and not just putative image properties.
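The species-invariance test reported above follows standard cross-decoding logic: train a classifier on response patterns to one species' fearful vs. neutral bodies, then test it on patterns evoked by another species'. Below is a generic, simulated sketch of that logic (random data, arbitrary sizes), not the study's actual analysis code.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
X_human = rng.normal(size=(40, 100))       # 40 patterns x 100 voxels (simulated)
y_human = np.repeat([0, 1], 20)            # 0 = no affect, 1 = fear
X_cat = rng.normal(size=(20, 100))         # patterns evoked by another species
y_cat = np.repeat([0, 1], 10)

clf = LinearSVC().fit(X_human, y_human)    # train on one species...
print("cross-species accuracy:", clf.score(X_cat, y_cat))  # ...test on the other
```

Above-chance cross-species accuracy on real data would indicate a species-invariant representation of the affective content.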
Collapse
Affiliation(s)
- Jessica Taubert
- Section on Neurocircuitry, Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD 20892, USA
- School of Psychology, The University of Queensland, Brisbane, QLD 4072, Australia
| | - Shruti Japee
- Section on Neurocircuitry, Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD 20892, USA
| | - Amanda Patterson
- Section on Neurocircuitry, Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD 20892, USA
| | - Hannah Wild
- Section on Neurocircuitry, Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD 20892, USA
| | - Shivani Goyal
- Section on Neurocircuitry, Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD 20892, USA
| | - David Yu
- Section on Neurocircuitry, Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD 20892, USA
| | - Leslie G. Ungerleider
- Section on Neurocircuitry, Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD 20892, USA
| |
Collapse
|
41
|
Zhang M, Li P, Yu L, Ren J, Jia S, Wang C, He W, Luo W. Emotional body expressions facilitate working memory: Evidence from the n‐back task. Psych J 2022; 12:178-184. [PMID: 36403986 DOI: 10.1002/pchj.616] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2022] [Accepted: 10/10/2022] [Indexed: 11/22/2022]
Abstract
In daily life, individuals need to recognize and update emotional information from others' changing body expressions. However, whether emotional bodies can enhance working memory (WM) remains unknown. In the present study, participants completed a modified n-back task, in which they were required to indicate whether a presented image of an emotional body matched an item displayed before each block (0-back) or two positions previously (2-back). Each block comprised only one expression category: fear, happiness, or neutral. We found that in the 0-back trials, when compared with neutral body expressions, participants took less time and showed comparable ceiling-level accuracy for happy bodies, followed by fearful bodies. When WM load increased to 2-back, both fearful and happy bodies significantly facilitated WM performance (i.e., faster reaction times and higher accuracy) relative to neutral conditions. In summary, the current findings reveal the enhancement effect of emotional body expressions on WM and highlight the importance of emotional action information in WM.
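For readers unfamiliar with the paradigm, the 2-back manipulation above can be sketched in a few lines: a trial is a "match" when the current item equals the item two positions back (in the study's 0-back condition the reference is instead a fixed item shown before the block). Stimulus labels and parameters below are placeholders, not the study's materials.

```python
import random

def make_block(items, n_trials=20, n=2, p_match=0.3, seed=0):
    """Generate an n-back sequence with roughly p_match planted matches."""
    rng = random.Random(seed)
    seq = [rng.choice(items) for _ in range(n)]
    for _ in range(n_trials - n):
        seq.append(seq[-n] if rng.random() < p_match else rng.choice(items))
    return seq

def targets(seq, n=2):
    """True where the current item matches the item n positions back."""
    return [i >= n and seq[i] == seq[i - n] for i in range(len(seq))]

block = make_block(["fearful", "happy", "neutral"])
print(sum(targets(block)), "matches in", len(block), "trials")
```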
Collapse
Affiliation(s)
- Mingming Zhang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Dalian, China
| | - Ping Li
- School of Literature and Journalism, North Minzu University, Yinchuan, China
| | - Lu Yu
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Dalian, China
| | - Jie Ren
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Dalian, China
| | - Shuxin Jia
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Dalian, China
| | - Chaolun Wang
- Department of Psychology, Sun Yat‐Sen University, Guangzhou, China
| | - Weiqi He
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Dalian, China
| | - Wenbo Luo
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Dalian, China
| |
Collapse
|
42
|
Liu P, Zhang Y, Xiong Z, Wang Y, Qing L. Judging the emotional states of customer service staff in the workplace: A multimodal dataset analysis. Front Psychol 2022; 13:1001885. [PMID: 36438381 PMCID: PMC9691964 DOI: 10.3389/fpsyg.2022.1001885] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2022] [Accepted: 10/31/2022] [Indexed: 10/26/2023] Open
Abstract
Background: Emotions play a decisive and central role in the workplace, especially in service-oriented enterprises. Because the service process is highly participatory and interactive, employees' emotions are usually highly volatile during service delivery, which can have a negative impact on business performance. It is therefore important to judge the emotional states of customer service staff effectively.
Methods: We collected data on real-life work situations of call center employees in a large company. Three consecutive studies were conducted: first, the emotional states of 29 customer service staff were videotaped with wide-angle cameras. In Study 1, we constructed scoring criteria and auxiliary picture-type scales through a free association test. In Study 2, two groups of experts were invited to evaluate the emotional states of the customer service staff. In Study 3, based on the results of Study 2 and a multimodal emotion recognition method, a multimodal dataset was constructed to explore how each modality conveys the emotions of customer service staff in the workplace.
Results: Through scoring by two groups of experts and one group of volunteers, we first developed a set of scoring criteria and picture-type scales, used in combination with the SAM scale, for judging the emotional state of customer service staff. We then constructed 99 (out of 297) stable multimodal emotion datasets. Comparing these datasets, we found that voice conveys emotional valence in the workplace more strongly, whereas facial expressions show a more prominent connection with emotional arousal.
Conclusion: Theoretically, this study enriches the ways in which emotion data can be collected and provides a basis for the subsequent development of multimodal emotion datasets. Practically, it offers guidance for the effective judgment of employee emotions in the workplace.
Affiliation(s)
- Ping Liu
- School of Business, Sichuan University, Chengdu, China
| | - Yi Zhang
- School of Business, Sichuan University, Chengdu, China
| | - Ziyue Xiong
- School of Business, Sichuan University, Chengdu, China
| | - Yijie Wang
- School of Business and Tourism Management, Yunnan University, Kunming, China
| | - Linbo Qing
- School of Electronic and Information Engineering, Sichuan University, Chengdu, China
| |
43
Calbi M, Montalti M, Pederzani C, Arcuri E, Umiltà MA, Gallese V, Mirabella G. Emotional body postures affect inhibitory control only when task-relevant. Front Psychol 2022; 13:1035328. [PMID: 36405118 PMCID: PMC9669573 DOI: 10.3389/fpsyg.2022.1035328] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2022] [Accepted: 10/10/2022] [Indexed: 08/05/2023] Open
Abstract
A classical theoretical frame for interpreting motor reactions to emotional stimuli holds that such stimuli, particularly threat-related ones, are processed preferentially, i.e., they capture and grab attention automatically. Recent research has challenged this view, showing that the task relevance of emotional stimuli is crucial for obtaining a reliable behavioral effect. Such evidence indicates that emotional facial expressions do not automatically influence motor responses in healthy young adults; they do so only when intrinsically pertinent to the subject's ongoing goals. Given the theoretical relevance of these findings, it is essential to assess their generalizability to different, socially relevant emotional stimuli such as emotional body postures. To address this issue, we compared the performance of 36 right-handed participants in two versions of a Go/No-go task. In the Emotional Discrimination task, participants were required to withhold their responses when emotional body postures (fearful or happy) were displayed and to move when neutral postures were presented. In the control task, the same images were shown, but participants had to respond according to the color of the actor's/actress's t-shirt, disregarding the emotional content. Participants made more commission errors (instances in which they moved even though the No-go signal was presented) for happy than for fearful body postures in the Emotional Discrimination task, but this difference disappeared in the control task. Such evidence indicates that, like emotional facial expressions, emotional body expressions do not influence motor control automatically, but only when they are task-relevant.
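As a worked illustration of the dependent measure, the sketch below computes commission-error rates per task and posture from hypothetical trial-level records. It is not the authors' analysis code, and every column name is an assumption.

```python
import pandas as pd

# Hypothetical No-go trials: one row per trial; responded = 1 marks
# a commission error (the participant moved despite the No-go signal).
trials = pd.DataFrame({
    "task":      ["emotion"] * 4 + ["control"] * 4,
    "posture":   ["happy", "happy", "fearful", "fearful"] * 2,
    "responded": [1, 1, 0, 1, 0, 1, 1, 0],
})

# Commission-error rate = mean of the 0/1 response flag per cell.
print(trials.groupby(["task", "posture"])["responded"].mean())
```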
Affiliation(s)
- Marta Calbi
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Parma, Italy
- Lab Neuroscience & Humanities, University of Parma, Parma, Italy
- Department of Philosophy, State University of Milan, Milan, Italy
| | - Martina Montalti
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Parma, Italy
- Lab Neuroscience & Humanities, University of Parma, Parma, Italy
- Department of Clinical and Experimental Sciences, University of Brescia, Brescia, Italy
| | - Carlotta Pederzani
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Parma, Italy
| | - Edoardo Arcuri
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Parma, Italy
- Lab Neuroscience & Humanities, University of Parma, Parma, Italy
| | - Maria Alessandra Umiltà
- Lab Neuroscience & Humanities, University of Parma, Parma, Italy
- Department of Food and Drug Sciences, University of Parma, Parma, Italy
| | - Vittorio Gallese
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Parma, Italy
- Lab Neuroscience & Humanities, University of Parma, Parma, Italy
| | - Giovanni Mirabella
- Department of Clinical and Experimental Sciences, University of Brescia, Brescia, Italy
- IRCCS Neuromed, Pozzilli, Italy
| |
44
Broda MD, de Haas B. Individual differences in looking at persons in scenes. J Vis 2022; 22:9. [PMID: 36342691 PMCID: PMC9652713 DOI: 10.1167/jov.22.12.9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2022] [Accepted: 09/23/2022] [Indexed: 11/09/2022] Open
Abstract
Individuals freely viewing complex scenes vary in their fixation behavior. The most prominent and reliable dimension of such individual differences is the tendency to fixate faces. However, much less is known about how observers distribute fixations across the other body parts of persons in scenes and how individuals vary in this regard. Here, we aimed to close this gap. We expanded a popular annotated stimulus set (Xu, Jiang, Wang, Kankanhalli, & Zhao, 2014) with 6,365 hand-delineated pixel masks for the body parts of 1,136 persons embedded in 700 complex scenes, which we publish with this article (https://osf.io/ynujz/). This resource allowed us to analyze the person-directed fixations of 103 participants freely viewing these scenes. We found large and reliable individual differences in the distribution of fixations across person features. Individual fixation tendencies formed two anticorrelated clusters, one for the eyes, head, and inner face and one for body features (torsi, arms, legs, and hands). Interestingly, the tendency to fixate mouths was independent of the face cluster. Finally, our results show that observers who tend to avoid fixating persons in general do so particularly for the face region. These findings underscore the role of individual differences in fixation behavior and reveal its underlying dimensions. They are also in line with a recently proposed push-pull relationship between cortical tuning for faces and bodies, and they may aid the comparison of special populations to general variation.
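The kind of mask-based fixation scoring such a resource enables can be sketched as follows. This is a hypothetical illustration, not the authors' pipeline; the data layout and part names are assumptions.

```python
import numpy as np

def fixation_proportions(fixations, masks):
    """Proportion of fixations landing on each body-part mask.

    fixations: (n, 2) integer array of (row, col) coordinates.
    masks: dict mapping part name -> boolean pixel mask of shape (H, W).
    Fixations outside every mask count as 'background'.
    """
    counts = dict.fromkeys(masks, 0)
    counts["background"] = 0
    for r, c in fixations:
        for part, mask in masks.items():
            if mask[r, c]:
                counts[part] += 1
                break
        else:
            counts["background"] += 1
    total = max(len(fixations), 1)
    return {part: n / total for part, n in counts.items()}

# Toy example: a 100x100 scene with a 'face' mask and two fixations.
face = np.zeros((100, 100), dtype=bool)
face[10:30, 40:60] = True
print(fixation_proportions(np.array([[15, 50], [80, 80]]), {"face": face}))
```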
Affiliation(s)
- Maximilian Davide Broda
- Experimental Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
| | - Benjamin de Haas
- Experimental Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
| |
45
Viola M. Seeing through the shades of situated affectivity. Sunglasses as a socio-affective artifact. Philos Psychol 2022. [DOI: 10.1080/09515089.2022.2118574] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
Affiliation(s)
- Marco Viola
- Department of Philosophy, Communication, and Performing Arts, Roma Tre University, Rome, Italy
| |
46
Keck J, Zabicki A, Bachmann J, Munzert J, Krüger B. Decoding spatiotemporal features of emotional body language in social interactions. Sci Rep 2022; 12:15088. [PMID: 36064559 PMCID: PMC9445068 DOI: 10.1038/s41598-022-19267-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Accepted: 08/26/2022] [Indexed: 11/11/2022] Open
Abstract
How are emotions perceived through human body language in social interactions? This study used point-light displays of human interactions portraying emotional scenes (1) to examine quantitative intrapersonal kinematic and postural body configurations, (2) to calculate interaction-specific parameters of these interactions, and (3) to analyze the extent to which both contribute to the perception of an emotion category (i.e., anger, sadness, happiness, or affection) as well as to the perception of emotional valence. Using ANOVA and classification trees, we investigated emotion-specific differences in the calculated parameters. We further applied representational similarity analyses to determine how perceptual ratings relate to intra- and interpersonal features of the observed scene. Results showed that, within an interaction, intrapersonal kinematic cues corresponded to emotion category ratings, whereas postural cues reflected valence ratings. Perception of emotion category was also driven by interpersonal orientation, proxemics, the time spent in the personal space of the counterpart, and the motion–energy balance between the interacting people. Motion–energy balance and orientation also related to valence ratings. Thus, features of emotional body language are connected with the emotional content of an observed scene, and people use the observed expressive body language and interpersonal coordination to infer the emotional content of interactions.
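The representational similarity logic can be sketched schematically: build one dissimilarity matrix from the movement parameters and one from the perceptual ratings, then rank-correlate them. The feature dimensions, metrics, and data below are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Toy data: 20 interaction scenes, each described by a kinematic
# feature vector and by mean perceptual ratings (e.g., per category).
rng = np.random.default_rng(1)
kinematics = rng.normal(size=(20, 6))
ratings = rng.normal(size=(20, 4))

# Condensed dissimilarity vectors (upper triangle of each RDM).
rdm_kin = pdist(kinematics, metric="euclidean")
rdm_rat = pdist(ratings, metric="euclidean")

# Representational similarity = rank correlation between the RDMs.
rho, p = spearmanr(rdm_kin, rdm_rat)
print(f"RSA Spearman rho = {rho:.2f} (p = {p:.3f})")
```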
Affiliation(s)
- Johannes Keck
- Neuromotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Kugelberg 62, 35394 Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), Universities of Marburg and Giessen, Marburg, Germany
| | - Adam Zabicki
- Neuromotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Kugelberg 62, 35394, Giessen, Germany
| | - Julia Bachmann
- Neuromotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Kugelberg 62, 35394, Giessen, Germany
| | - Jörn Munzert
- Neuromotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Kugelberg 62, 35394 Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), Universities of Marburg and Giessen, Marburg, Germany
| | - Britta Krüger
- Neuromotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Kugelberg 62, 35394, Giessen, Germany
| |
47
Rychlowska M, McKeown GJ, Sneddon I, Curran W. The Role of Contextual Information in Classifying Spontaneous Social Laughter. J Nonverbal Behav 2022. [DOI: 10.1007/s10919-022-00412-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
Laughter is a ubiquitous and important social signal, but its nature is yet to be fully explored. One open empirical question concerns the role of context in the interpretation of laughter. Can laughs presented on their own convey specific feelings and social motives? How influential is social context when a person tries to understand the meaning of a laugh? Here we test the extent to which the classification of laughs produced in different situations is guided by knowing the context within which these laughs were produced. The stimuli were spontaneous laughs recorded in social situations engineered to elicit amusement, embarrassment, and schadenfreude. In a between-subjects design, participants assigned to one of four experimental conditions classified these laughs: audio only, audio-visual, side-by-side videos of two interactants, and side-by-side videos accompanied by a brief vignette. Participants' task was to label each laugh as an instance of amusement, embarrassment, or schadenfreude, or as "other." Laughs produced in situations inducing embarrassment were classified more accurately than laughs produced in other situations. Most importantly, eliminating information about the social settings in which laughs were produced decreased participants' classification accuracy, such that accuracy was no better than chance in the conditions providing minimal contextual information. Our findings demonstrate the importance of context in the interpretation of laughter and highlight the complexity of experimental investigations of schadenfreude displays.
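The chance-level comparison mentioned above can be illustrated with an exact binomial test against the four-alternative guessing rate; the counts below are invented for the example and are not the study's data.

```python
from scipy.stats import binomtest

# Hypothetical: 48 of 160 laughs labeled correctly in an audio-only
# condition; chance is 1/4 with four response options.
result = binomtest(k=48, n=160, p=0.25, alternative="greater")
print(f"accuracy = {48 / 160:.2f}, p vs. chance = {result.pvalue:.3f}")
```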
48
EmBody/EmFace as a new open tool to assess emotion recognition from body and face expressions. Sci Rep 2022; 12:14165. [PMID: 35986068 PMCID: PMC9391359 DOI: 10.1038/s41598-022-17866-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Accepted: 08/02/2022] [Indexed: 01/29/2023] Open
Abstract
Nonverbal expressions contribute substantially to social interaction by providing information on another person's intentions and feelings. While emotion recognition from dynamic facial expressions has been widely studied, dynamic body expressions and the interplay of emotion recognition from facial and body expressions have attracted less attention, as suitable diagnostic tools are scarce. Here, we provide validation data on a new open-source paradigm enabling the assessment of emotion recognition from both 3D-animated emotional body expressions (Task 1: EmBody) and emotionally corresponding dynamic faces (Task 2: EmFace). Both tasks use visually standardized items depicting three emotional states (angry, happy, neutral) and can be used alone or together. We demonstrate successful psychometric matching of the EmBody/EmFace items in a sample of 217 healthy subjects, with excellent retest reliability and validity (correlations with the Reading-the-Mind-in-the-Eyes Test and the Autism-Spectrum Quotient, no correlations with intelligence, and confirmed factorial validity). Taken together, the EmBody/EmFace is a novel, efficient (< 5 min per task), highly standardized, and reliably precise tool for sensitively assessing and comparing emotion recognition from body and face stimuli. It has a wide range of potential applications in affective, cognitive, and social neuroscience, and in clinical research studying face- and body-specific emotion recognition in patient populations with social interaction deficits such as autism, schizophrenia, or social anxiety.
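Retest reliability of the kind reported here is typically a simple correlation of first- and second-session scores; a minimal sketch with simulated data (not the validation dataset) follows.

```python
import numpy as np
from scipy.stats import pearsonr

# Simulated test and retest scores for n = 217 participants,
# generated so the measured trait is stable across sessions.
rng = np.random.default_rng(2)
session1 = rng.normal(size=217)
session2 = 0.8 * session1 + 0.6 * rng.normal(size=217)

r, p = pearsonr(session1, session2)
print(f"retest reliability r = {r:.2f}")
```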
49
Dawel A, Miller EJ, Horsburgh A, Ford P. A systematic survey of face stimuli used in psychological research 2000-2020. Behav Res Methods 2022; 54:1889-1901. [PMID: 34731426 DOI: 10.3758/s13428-021-01705-3] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/07/2021] [Indexed: 12/16/2022]
Abstract
For decades, psychology has relied on highly standardized images to understand how people respond to faces. Many of these stimuli are rigorously generated and supported by excellent normative data; as such, they have played an important role in the development of face science. However, there is now clear evidence that testing with ambient images (i.e., naturalistic images "in the wild") and including spontaneous expressions can lead to new and important insights. To precisely quantify the extent to which our current knowledge base has relied on standardized and posed stimuli, we systematically surveyed the face stimuli used in 12 key journals in this field across 2000-2020 (N = 3374 articles). Although a small number of posed expression databases continue to dominate the literature, the use of spontaneous expressions seems to be increasing. However, there has been no increase in the use of ambient or dynamic stimuli over time. The vast majority of articles have used highly standardized and nonmoving pictures of faces. An emerging trend is that virtual faces are being used as stand-ins for human faces in research. Overall, the results of the present survey highlight a significant imbalance in favor of standardized face stimuli. We argue that psychology would benefit from a more balanced approach, because ambient and spontaneous stimuli have much to offer. We advocate a cognitive ethological approach that involves studying face processing in natural settings as well as in the lab, incorporating more stimuli from "the wild".
Affiliation(s)
- Amy Dawel
- Research School of Psychology (building 39), The Australian National University, Canberra, ACT 2600, Australia.
| | - Elizabeth J Miller
- Research School of Psychology (building 39), The Australian National University, Canberra, ACT 2600, Australia
| | - Annabel Horsburgh
- Research School of Psychology (building 39), The Australian National University, Canberra, ACT 2600, Australia
| | - Patrice Ford
- Research School of Psychology (building 39), The Australian National University, Canberra, ACT 2600, Australia
| |
50
Shimizu I, Matsuyama Y, Duvivier R, van der Vleuten C. Perceived positive social interdependence in online versus face-to-face team-based learning styles of collaborative learning: a randomized, controlled, mixed-methods study. BMC Med Educ 2022; 22:567. [PMID: 35869477 PMCID: PMC9307427 DOI: 10.1186/s12909-022-03633-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/24/2022] [Accepted: 07/18/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND Collaborative learning is a group learning approach in which positive social interdependence within a group is key to better learning performance and to future attitudes toward team practice. Recent attempts have been made to replace face-to-face environments with online ones using information communication technology. However, this raises the concern that online collaborative learning (OCL) may reduce positive social interdependence. This study therefore aimed to compare the degree of social interdependence in online and face-to-face environments and to clarify the aspects that affect social interdependence in OCL. METHODS We conducted a crossover study comparing online and face-to-face collaborative learning environments in a clinical reasoning class using team-based learning for medical students (n = 124) in 2021. Participants were randomly assigned to two cohorts: Cohort A began in an online environment, while Cohort B began in a face-to-face environment. At the study's midpoint, the two cohorts swapped environments, with a washout in between. Participants completed surveys using the Social Interdependence in Collaborative Learning Scale (SOCS) to measure their perceived positive social interdependence before and after the class. Changes in mean SOCS scores were compared using paired t-tests. Qualitative data on the characteristics of the online environment were obtained from focus groups and coded using thematic analysis. RESULTS The matched-pair tests showed significant increases in SOCS scores between the pre- and post-program assessments in both the online and face-to-face groups, with no significant differences in overall scores between the two groups. Sub-analysis by subcategory showed significant improvement in boundary interdependence (discontinuities among individuals) and means interdependence (resources, roles, and tasks) in both groups, whereas outcome interdependence (goals and rewards) improved significantly only in the online group. Qualitative analysis revealed four major themes affecting social interdependence in OCL: communication, the task-sharing process, perception of other groups, and working facilities. CONCLUSIONS Students communicate differently in face-to-face and online environments, yet the combined influences identified here yield comparable social interdependence across the two settings.
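A minimal sketch of the matched-pair comparison described in the Methods, run on simulated pre/post scores; the sample size, scale range, and effect size are assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_rel

# Simulated pre/post SOCS scores for one cohort (n = 62),
# with a small average gain added after the class.
rng = np.random.default_rng(3)
pre = rng.normal(loc=3.5, scale=0.5, size=62)
post = pre + rng.normal(loc=0.2, scale=0.3, size=62)

t, p = ttest_rel(post, pre)
print(f"paired t = {t:.2f}, p = {p:.4f}")
```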
Affiliation(s)
- Ikuo Shimizu
- Center for Medical Education and Clinical Training, Shinshu University, 3-1-1 Asahi, Matsumoto 390-8621, Japan
| | - Yasushi Matsuyama
- Medical Education Centre, Jichi Medical University, 3311-1 Yakushiji, Shimotsuke-shi, Tochigi, Japan
| | - Robbert Duvivier
- Center for Educational Development and Research in Health Sciences (CEDAR), University Medical Center Groningen, Antonius Deusinglaan 1, 9713 AV Groningen, The Netherlands
| | - Cees van der Vleuten
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, The Netherlands
| |