1. Becker C, Conduit R, Chouinard PA, Laycock R. Can deepfakes be used to study emotion perception? A comparison of dynamic face stimuli. Behav Res Methods 2024; 56:7674-7690. PMID: 38834812; PMCID: PMC11362322; DOI: 10.3758/s13428-024-02443-y.
Abstract
Video recordings accurately capture facial expression movements; however, they are difficult for face perception researchers to standardise and manipulate. For this reason, dynamic morphs of photographs are often used, despite their lack of naturalistic facial motion. This study investigated how humans perceive emotions from faces using real videos and two approaches to artificially generating dynamic expressions: dynamic morphs and AI-synthesised deepfakes. Our participants perceived dynamic morphed expressions as less intense than videos (all emotions) and deepfakes (fearful, happy, sad), whereas videos and deepfakes were perceived similarly. They also perceived morphed happiness and sadness, but not morphed anger or fear, as less genuine than the other formats. Our findings support previous research indicating that social responses to morphed emotions are not representative of those to video recordings, and suggest that deepfakes may offer a more suitable standardised stimulus type than morphs. Additionally, qualitative data were collected from participants and analysed using ChatGPT, a large language model, which identified themes in the data consistent with those identified by an independent human researcher. According to this analysis, our participants perceived dynamic morphs as less natural than videos and deepfakes. That participants perceived deepfakes and videos similarly suggests that deepfakes effectively replicate natural facial movements, making them a promising alternative for face perception research. The study contributes to the growing body of research exploring the usefulness of generative artificial intelligence for advancing the study of human perception.
2. Christensen JF, Fernández A, Smith RA, Michalareas G, Yazdi SHN, Farahi F, Schmidt EM, Bahmanian N, Roig G. EMOKINE: A software package and computational framework for scaling up the creation of highly controlled emotional full-body movement datasets. Behav Res Methods 2024; 56:7498-7542. PMID: 38918315; DOI: 10.3758/s13428-024-02433-0.
Abstract
EMOKINE is a software package and dataset creation suite for emotional full-body movement research in experimental psychology, affective neuroscience, and computer vision. A computational framework, comprehensive instructions, a pilot dataset, observer ratings, and kinematic feature extraction code are provided to facilitate future dataset creation at scale. In addition, the EMOKINE framework outlines how complex sequences of movements may advance emotion research. Such research has traditionally used emotional 'action'-based stimuli, such as hand-waving or walking motions. Here, instead, a pilot dataset is provided with short dance choreographies, repeated several times by a dancer who expressed a different emotional intention at each repetition: anger, contentment, fear, joy, neutrality, and sadness. The dataset was professionally filmed and simultaneously recorded using XSENS® motion capture technology (17 sensors, 240 frames/second). Thirty-two statistics from 12 kinematic features were extracted offline, for the first time in a single dataset: speed, acceleration, angular speed, angular acceleration, limb contraction, distance to center of mass, quantity of motion, dimensionless jerk (integral), head angle (with regard to the vertical axis and to the back), and space (convex hull 2D and 3D). Average, median absolute deviation (MAD), and maximum value were computed as applicable. The EMOKINE software is applicable to other motion-capture systems and is openly available on the Zenodo Repository. Releases on GitHub include: (i) the code to extract the 32 statistics, (ii) a rigging plugin for Python for MVNX file conversion to Blender format (MVNX is the output file format of the XSENS® system), and (iii) Python-script-powered custom software to assist with blurring faces; the latter two are under GPLv3 licenses.
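As an illustration of the kinematic features listed in this abstract, the following sketch (not code from the EMOKINE package; the array layout, the crude centre-of-mass estimate, and the function names are assumptions) computes speed, acceleration, and a simple limb-contraction measure from 3D joint trajectories, summarised with the same three statistics (average, MAD, maximum):

```python
import numpy as np

def kinematic_features(positions, fps=240):
    """Toy per-recording kinematic features from motion-capture data.

    positions: array of shape (n_frames, n_joints, 3) with 3D joint
    coordinates, e.g. from 17 sensors sampled at 240 frames/second.
    """
    dt = 1.0 / fps
    # Speed: magnitude of the frame-to-frame velocity of each joint.
    velocity = np.diff(positions, axis=0) / dt            # (n_frames-1, n_joints, 3)
    speed = np.linalg.norm(velocity, axis=-1)             # (n_frames-1, n_joints)
    # Acceleration: magnitude of the rate of change of the velocity vector.
    accel = np.linalg.norm(np.diff(velocity, axis=0) / dt, axis=-1)
    # Limb contraction (one common proxy): mean distance of all joints
    # to a crude per-frame centre of mass (the joint average).
    com = positions.mean(axis=1, keepdims=True)           # (n_frames, 1, 3)
    dist_to_com = np.linalg.norm(positions - com, axis=-1).mean(axis=1)

    def summarise(x):
        # Average, median absolute deviation (MAD), and maximum, as in the text.
        med = np.median(x)
        return {"mean": float(np.mean(x)),
                "mad": float(np.median(np.abs(x - med))),
                "max": float(np.max(x))}

    return {"speed": summarise(speed),
            "acceleration": summarise(accel),
            "limb_contraction": summarise(dist_to_com)}

# Example: 2 seconds of synthetic random-walk motion for 17 joints.
rng = np.random.default_rng(0)
demo = np.cumsum(rng.normal(0, 0.001, size=(480, 17, 3)), axis=0)
feats = kinematic_features(demo)
print(sorted(feats))  # ['acceleration', 'limb_contraction', 'speed']
```

The remaining features in the list (angular speed, quantity of motion, dimensionless jerk, head angles, convex hulls) follow the same pattern: a per-frame quantity reduced to average, MAD, and maximum.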
Affiliation(s)
- Julia F Christensen
  - Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Andrés Fernández
  - Methods of Machine Learning, University of Tübingen, Tübingen, Germany
  - International Max Planck Research School for Intelligent Systems, Tübingen, Germany
- Rebecca A Smith
  - Department of Psychology, University of Glasgow, Glasgow, Scotland
- Georgios Michalareas
  - Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Eva-Madeleine Schmidt
  - Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
  - Max Planck School of Cognition, Leipzig, Germany
  - Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany
- Nasimeh Bahmanian
  - Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
  - Department of Modern Languages, Goethe University, Frankfurt/M, Germany
- Gemma Roig
  - Computer Science Department, Goethe University, Frankfurt/M, Germany
  - The Hessian Center for Artificial Intelligence (hessian.AI), Darmstadt, Germany
3. Oswald F, Samra SK. A scoping review and index of body stimuli in psychological science. Behav Res Methods 2024; 56:5434-5455. PMID: 38030921; DOI: 10.3758/s13428-023-02278-z.
Abstract
Naturalistic body stimuli are necessary for understanding many aspects of human psychology, yet there are no centralized databases of body stimuli. Furthermore, many independently developed stimulus sets lack standardization and reproducibility potential, and the field shows a general lack of organization, contributing to issues of both replicability and generalizability in body-related research. We conducted a comprehensive scoping review to index and explore existing naturalistic whole-body stimuli. Our research questions were as follows: (1) What sets of naturalistic human whole-body stimuli are present in the literature? And (2) On what factors (e.g., demographics, emotion expression) do these stimuli vary? To be included, stimulus sets had to (1) include human bodies as stimuli; (2) be photographs, videos, or other depictions of real human bodies (not computer generated, drawn, etc.); (3) include the whole body (defined as torso, arms, and legs); and (4) remain recognizable as human bodies, although edited images were permitted. We identified a relatively large number of existing stimulus sets (N = 79), which offered reasonable variability in terms of the main manipulated factors and the degree of visual information included (i.e., inclusion of heads and/or faces). However, stimulus sets were demographically homogeneous, skewed towards White, young adult, and female bodies. We identified significant issues in reporting and availability practices, posing a challenge to the generalizability, reliability, and reproducibility of body-related research. Accordingly, we urge researchers to adopt transparent and accessible practices and to take steps to diversify body stimuli.
Affiliation(s)
- Flora Oswald
  - Department of Psychology, The Pennsylvania State University, University Park, PA, USA
  - Department of Women's, Gender, and Sexuality Studies, The Pennsylvania State University, University Park, PA, USA
  - Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA
  - Department of Psychology, University of Denver, Denver, CO, USA
4. Keck J, Honekamp C, Gebhardt K, Nolte S, Linka M, de Haas B, Munzert J, Krüger K, Krüger B. Exercise-induced inflammation alters the perception and visual exploration of emotional interactions. Brain Behav Immun Health 2024; 39:100806. PMID: 38974339; PMCID: PMC11225855; DOI: 10.1016/j.bbih.2024.100806.
Abstract
Introduction The study aimed to investigate whether an exercise-induced pro-inflammatory response alters the perception and visual exploration of emotional body language in social interactions. Methods In a within-subject design, 19 male, healthy adults aged between 19 and 33 years performed a downhill run for 45 min at 70% of their VO2max on a treadmill to induce maximal myokine blood elevations, leading to a pro-inflammatory status. Two control conditions were selected: a control run with no decline and a rest condition without physical exercise. Blood samples were taken before (T0), directly after (T1), 3 h after (T3), and 24 h after (T24) each exercise session to analyze the inflammatory response. Three hours after exercise, participants observed point-light displays (PLDs) of human interactions portraying four emotions (happiness, affection, sadness, and anger). Participants categorized the emotional content, assessed the emotional intensity of the stimuli, and indicated their confidence in their ratings. Eye movements during the entire paradigm and self-reported current mood were also recorded. Results The downhill exercise condition resulted in significant elevations of the measured inflammatory markers (IL-6, CRP, MCP-1) and of a marker of muscle damage (myoglobin) compared with the control running condition, indicating a pro-inflammatory state after the downhill run. Emotion recognition rates decreased significantly after the downhill run, whereas no such effect was observed after control running. Participants' sensitivity to emotion-specific cues also declined. However, the downhill run had no effect on perceived emotional intensity or on subjective confidence in the given ratings. Visual scanning behavior was also affected: after the downhill run, participants fixated more on sad stimuli, in contrast to the control conditions, where participants exhibited more fixations while observing happy stimuli.
Conclusion Our study demonstrates that inflammation, induced through a downhill running model, impairs emotion perception and recognition abilities. Specifically, inflammation leads to decreased recognition rates for the emotional content of social interactions, attributable to diminished discrimination capabilities across all emotional categories. Additionally, we observed alterations in visual exploration behavior. This confirms that inflammation significantly affects an individual's responsiveness to social and affective stimuli.
Affiliation(s)
- Johannes Keck
  - Neuromotor Behavior Lab, Institute of Sports Science, Justus-Liebig-University Giessen, Giessen, Germany
  - Center for Mind, Brain and Behavior (CMBB), Philipps University of Marburg and Justus-Liebig-University Giessen, Germany
- Celine Honekamp
  - Sensorimotor Control and Learning, Centre for Cognitive Science, Technical University of Darmstadt, Germany
- Kristina Gebhardt
  - Department of Exercise Physiology and Sports Therapy, Institute of Sports Science, Justus-Liebig-University Giessen, Giessen, Germany
- Svenja Nolte
  - Department of Exercise Physiology and Sports Therapy, Institute of Sports Science, Justus-Liebig-University Giessen, Giessen, Germany
- Marcel Linka
  - Department of Experimental Psychology, Justus-Liebig-University Giessen, Germany
- Benjamin de Haas
  - Department of Experimental Psychology, Justus-Liebig-University Giessen, Germany
  - Center for Mind, Brain and Behavior (CMBB), Philipps University of Marburg and Justus-Liebig-University Giessen, Germany
- Jörn Munzert
  - Neuromotor Behavior Lab, Institute of Sports Science, Justus-Liebig-University Giessen, Giessen, Germany
  - Center for Mind, Brain and Behavior (CMBB), Philipps University of Marburg and Justus-Liebig-University Giessen, Germany
- Karsten Krüger
  - Department of Exercise Physiology and Sports Therapy, Institute of Sports Science, Justus-Liebig-University Giessen, Giessen, Germany
- Britta Krüger
  - Neuromotor Behavior Lab, Institute of Sports Science, Justus-Liebig-University Giessen, Giessen, Germany
5. Chouinard B, Pesquita A, Enns JT, Chapman CS. Processing of visual social-communication cues during a social-perception of action task in autistic and non-autistic observers. Neuropsychologia 2024; 198:108880. PMID: 38555063; DOI: 10.1016/j.neuropsychologia.2024.108880.
Abstract
Social perception and communication differ between those with and without autism, even when verbal fluency and intellectual ability are equated. Previous work found that observers responded more quickly to an actor's points if the actor had chosen by themselves where to point instead of being directed where to point. Notably, this 'choice-advantage' effect decreased across non-autistic participants as the number of autistic-like traits and tendencies increased (Pesquita et al., 2016). Here, we build on that work using the same task to study individuals over a broader range of the spectrum, from autistic to non-autistic, measuring both response initiation and mouse movement times, and considering the response to each actor separately. Autistic and non-autistic observers viewed videos of three different actors pointing to one of two locations, without knowing that the actors were sometimes freely choosing where to point and at other times being directed where to point. All observers exhibited a choice-advantage overall, meaning they responded more rapidly when actors were freely choosing than when they were directed, indicating a sensitivity to the actors' postural cues and movements. Our fine-grained analyses found a more robust choice-advantage for some actors than others, with autistic observers showing a choice-advantage in response to only one of the actors, suggesting that both actor and observer characteristics influence the overall effect. We briefly explore actor characteristics that may have contributed to this effect, finding that both the duration of exposure to pre-movement cues and the actors' kinematic cues likely influence the choice-advantage to different degrees across the groups. Altogether, the evidence suggested that both autistic and non-autistic individuals could detect the choice-advantage signal, but that for autistic observers the choice-advantage was actor specific. Notably, the influence of the signal, when present, was detected early for all actors by non-autistic observers, but later and only for one actor by autistic observers. We therefore characterize social perception in autistic individuals as intact, while highlighting that signal detection is likely delayed or distributed compared with non-autistic observers, and that the actor characteristics influencing the detection and use of social-perception signals warrant further investigation.
Affiliation(s)
- J T Enns
  - University of British Columbia, Canada
6. Vikhanova A, Tibber MS, Mareschal I. Post-migration living difficulties and poor mental health associated with increased interpretation bias for threat. Q J Exp Psychol (Hove) 2024; 77:1154-1168. PMID: 37477179; PMCID: PMC11103921; DOI: 10.1177/17470218231191442.
Abstract
Previous research has found associations between mental health difficulties and interpretation biases, including heightened interpretation of threat from neutral or ambiguous stimuli. Building on this research, we explored associations between interpretation biases (positive and negative) and three constructs that have been linked to migrant experience: mental health symptoms (Global Severity Index [GSI]), Post-Migration Living Difficulties (PMLD), and the Perceived Ethnic Discrimination Questionnaire (PEDQ). Two hundred thirty students who identified as first- (n = 94) or second-generation ethnic minority migrants (n = 68), or as first-generation White migrants (n = 68), completed measures of GSI, PEDQ, and PMLD. They also performed an interpretation bias task using point-light walkers (PLWs), dynamic stimuli with reduced visual input that are easily perceived as humans performing an action. Five categories of PLW were used: four that clearly depicted human forms undertaking positive, neutral, negative, or ambiguous actions, and a fifth that involved scrambled animations with no clear action or form. Participants were asked to imagine interacting with the stimuli and to rate their friendliness (positive interpretation bias) and aggressiveness (interpretation bias for threat). We found that the three groups differed on PEDQ and PMLD, with no significant differences in GSI, and that the three measures were positively correlated. Poorer mental health and increased PMLD were associated with a heightened interpretation of threat from the scrambled animations only. These findings have implications for understanding the role of threat biases in mental health and the migrant experience.
Affiliation(s)
- Anastasia Vikhanova
  - Department of Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, London, UK
- Marc S Tibber
  - Research Department of Clinical, Educational and Health Psychology, University College London, London, UK
- Isabelle Mareschal
  - Department of Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, London, UK
7. Crawford MT, Maymon C, Miles NL, Blackburne K, Tooley M, Grimshaw GM. Emotion in motion: perceiving fear in the behaviour of individuals from minimal motion capture displays. Cogn Emot 2024; 38:451-462. PMID: 38354068; DOI: 10.1080/02699931.2023.2300748.
Abstract
The ability to quickly and accurately recognise emotional states is adaptive for numerous social functions. Although body movements are a potentially crucial cue for inferring emotions, few studies have examined the perception of body movements produced in naturalistic emotional states. The current research focuses on the use of body movement information in the perception of fear expressed by targets in a virtual heights paradigm. Across three studies, participants made judgments about the emotional states of others based on motion-capture body movement recordings of individuals actively engaged in walking a virtual plank either at ground level or 80 stories above a city street. Results indicated that participants were reliably able to differentiate between height and non-height conditions (Studies 1 & 2), were more likely to spontaneously describe target behaviour in the height condition as fearful (Study 2), and gave fear estimates that were highly calibrated with the fear ratings from the targets themselves (Studies 1-3). The findings show that VR height scenarios can induce fearful behaviour and that people can perceive fear in minimal representations of body movement.
Affiliation(s)
- Matthew T Crawford
  - School of Psychology, Victoria University of Wellington, Wellington, New Zealand
- Christopher Maymon
  - School of Psychology, Victoria University of Wellington, Wellington, New Zealand
- Nicola L Miles
  - School of Psychology, Victoria University of Wellington, Wellington, New Zealand
- Katie Blackburne
  - School of Psychology, Victoria University of Wellington, Wellington, New Zealand
- Michael Tooley
  - School of Psychology, Victoria University of Wellington, Wellington, New Zealand
- Gina M Grimshaw
  - School of Psychology, Victoria University of Wellington, Wellington, New Zealand
8. Nam SM, Park HY, Kim MJ. Exploring the experiences of dancers who have achieved peak performance: on-stage, pre-stage, and post-stage. Front Psychol 2024; 15:1392242. PMID: 38855308; PMCID: PMC11162116; DOI: 10.3389/fpsyg.2024.1392242.
Abstract
The aim of this study was to identify and classify the attributes that contribute to peak performance among professional dancers, and to understand how these attributes change over time. We conducted an inductive content analysis of open-ended survey data collected from 42 formally trained professional dancers. Additionally, we analyzed interview data from seven of the survey participants who had demonstrated outstanding achievements in the field. The main themes that emerged related to the temporal phases of the peak performance experience: pre-stage, on-stage, and post-stage. During the on-stage phase, peak performance was shaped by both internal and external factors. During the pre-stage phase, emphasis was placed on technical, cognitive, and artistic strategies during practice, whereas just before going on stage, attention shifted to psychological and physical strategies. During the post-stage phase, dancers reported immediate changes in their psychological and physical states following the peak performance experience, which thereafter influenced psychological, technical, and cognitive aspects of their work. These findings provide valuable insights into the key characteristics that emerge throughout a series of peak performance experiences and are consistent with previous research.
Affiliation(s)
- Soo Mi Nam
  - Division of Sports Science, Hanyang University, Ansan, Republic of Korea
- Hye Youn Park
  - Institute of Sports Science, Seoul National University, Seoul, Republic of Korea
- Min Joo Kim
  - Division of Sports and Exercise Science, Kunsan National University, Gunsan-si, Republic of Korea
9. Roberti E, Turati C, Actis-Grosso R. Single point motion kinematics convey emotional signals in children and adults. PLoS One 2024; 19:e0301896. PMID: 38598520; PMCID: PMC11006184; DOI: 10.1371/journal.pone.0301896.
Abstract
This study investigates whether humans recognize different emotions conveyed only by the kinematics of a single moving geometrical shape, and how this competence unfolds during development, from childhood to adulthood. To this aim, animations in which a shape moved according to happy, fearful, or neutral cartoons were shown, in a forced-choice paradigm, to 7- and 10-year-old children and adults. Accuracy and response times were recorded, and the movement of the mouse while participants selected a response was tracked. Results showed that 10-year-old children and adults recognize happiness and fear when conveyed solely by different kinematics, with an advantage for fearful stimuli. Fearful stimuli, together with neutral stimuli, were also accurately identified by 7-year-olds, whereas at this age accuracy for happiness was not significantly different from chance. Overall, the results demonstrate that emotions can be identified from single-point motion alone during both childhood and adulthood. Moreover, motion contributes in varying measure to the comprehension of emotions, with fear recognized earlier in development and more readily even later on, when all emotions are accurately labeled.
Affiliation(s)
- Elisa Roberti
  - Psychology Department, University of Milano–Bicocca, Milan, Italy
  - Neuromi, Milan Center for Neuroscience, Milan, Italy
- Chiara Turati
  - Psychology Department, University of Milano–Bicocca, Milan, Italy
  - Neuromi, Milan Center for Neuroscience, Milan, Italy
- Rossana Actis-Grosso
  - Psychology Department, University of Milano–Bicocca, Milan, Italy
  - Neuromi, Milan Center for Neuroscience, Milan, Italy
10. Richer R, Koch V, Abel L, Hauck F, Kurz M, Ringgold V, Müller V, Küderle A, Schindler-Gmelch L, Eskofier BM, Rohleder N. Machine learning-based detection of acute psychosocial stress from body posture and movements. Sci Rep 2024; 14:8251. PMID: 38589504; PMCID: PMC11375162; DOI: 10.1038/s41598-024-59043-1.
Abstract
Investigating acute stress responses is crucial to understanding the underlying mechanisms of stress. Current stress assessment methods include self-reports, which can be biased, and biomarkers, which are often based on complex laboratory procedures. A promising additional modality for stress assessment is the observation of body movements, which are affected by negative emotions and threatening situations. In this paper, we investigated the relationship between acute psychosocial stress induction and body posture and movements. We collected motion data from N = 59 individuals over two studies (Pilot Study: N = 20, Main Study: N = 39) using inertial measurement unit (IMU)-based motion capture suits. In both studies, individuals underwent the Trier Social Stress Test (TSST) and a stress-free control condition (friendly-TSST; f-TSST) in randomized order. Our results show that acute stress induction leads to reproducible freezing behavior, characterized by less overall motion as well as more and longer periods of no movement. Based on these data, we trained machine learning pipelines to detect acute stress solely from movement information, achieving an accuracy of 75.0 ± 17.7% (Pilot Study) and 73.4 ± 7.7% (Main Study). This, for the first time, suggests that body posture and movements can be used to detect whether individuals are exposed to acute psychosocial stress. While more studies are needed to further validate our approach, we are convinced that motion information can be a valuable extension to existing biomarkers and can help obtain a more holistic picture of the human stress response. Our work is the first to systematically explore the use of full-body posture and movement to gain novel insights into the human stress response and its effects on body and mind.
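The freezing measures this abstract describes (less overall motion, more and longer periods of no movement) can be sketched as feature extraction over a per-frame whole-body speed signal. This is a hypothetical reconstruction, not the authors' pipeline; the stillness threshold, sampling rate, and function name are assumptions for illustration:

```python
import numpy as np

def freezing_features(speed, fps=60.0, still_thresh=0.02):
    """Summarise movement from a 1-D per-frame mean joint speed signal (m/s).

    still_thresh: speed below which a frame counts as 'no movement'
    (an assumed value, chosen only for this sketch).
    """
    still = speed < still_thresh
    # Find contiguous runs of still frames via 0/1 edge detection.
    edges = np.diff(still.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if still[0]:
        starts = np.r_[0, starts]          # recording begins in a still period
    if still[-1]:
        ends = np.r_[ends, still.size]     # recording ends in a still period
    durations = (ends - starts) / fps      # still-period lengths in seconds
    return {
        "overall_motion": float(speed.mean()),
        "n_still_periods": int(len(durations)),
        "mean_still_duration": float(durations.mean()) if len(durations) else 0.0,
    }

# A 'stressed' recording should show less motion and longer still periods.
calm = np.abs(np.sin(np.linspace(0, 20, 600))) * 0.5
stressed = np.r_[np.zeros(300), calm[:300] * 0.3]
print(freezing_features(stressed)["overall_motion"]
      < freezing_features(calm)["overall_motion"])  # True
```

Feature vectors of this kind, one per condition and participant, could then feed any standard classifier to separate TSST from f-TSST recordings, in the spirit of the pipelines reported here.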
Affiliation(s)
- Robert Richer
  - Machine Learning and Data Analytics Lab (MaD Lab), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052, Erlangen, Germany
- Veronika Koch
  - Fraunhofer Institute for Integrated Circuits IIS, 91058, Erlangen, Germany
- Luca Abel
  - Machine Learning and Data Analytics Lab (MaD Lab), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052, Erlangen, Germany
- Felicitas Hauck
  - Chair of Health Psychology, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052, Erlangen, Germany
- Miriam Kurz
  - Chair of Health Psychology, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052, Erlangen, Germany
- Veronika Ringgold
  - Machine Learning and Data Analytics Lab (MaD Lab), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052, Erlangen, Germany
  - Chair of Health Psychology, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052, Erlangen, Germany
- Victoria Müller
  - Machine Learning and Data Analytics Lab (MaD Lab), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052, Erlangen, Germany
- Arne Küderle
  - Machine Learning and Data Analytics Lab (MaD Lab), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052, Erlangen, Germany
- Lena Schindler-Gmelch
  - Chair of Clinical Psychology and Psychotherapy, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052, Erlangen, Germany
- Bjoern M Eskofier
  - Machine Learning and Data Analytics Lab (MaD Lab), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052, Erlangen, Germany
  - Translational Digital Health Group, Institute of AI for Health, Helmholtz Zentrum München - German Research Center for Environmental Health, 85764, Neuherberg, Germany
- Nicolas Rohleder
  - Chair of Health Psychology, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052, Erlangen, Germany
11. Smekal V, Poyo Solanas M, Fraats EIC, de Gelder B. Differential contributions of body form, motion, and temporal information to subjective action understanding in naturalistic stimuli. Front Integr Neurosci 2024; 18:1302960. PMID: 38533314; PMCID: PMC10963482; DOI: 10.3389/fnint.2024.1302960.
Abstract
Introduction We investigated the factors underlying naturalistic action recognition and understanding, as well as the errors occurring during recognition failures. Methods Participants saw full-light stimuli of ten different whole-body actions presented in three different conditions: as normal videos, as videos with the temporal order of the frames scrambled, and as single static representative frames. After each stimulus presentation, participants completed one of two tasks: a forced-choice task in which they were given the ten potential action labels as options, or a free description task in which they could describe the action performed in each stimulus in their own words. Results While a combination of form, motion, and temporal information generally led to the highest action understanding, for some actions form information alone was sufficient, and adding motion and temporal information did not increase recognition accuracy. We also analyzed errors in action recognition and found primarily two types. Discussion One type of error occurred at the semantic level, while the other consisted of reverting to the kinematic level of body-part processing without any attribution of semantics. We elaborate on these results in the context of naturalistic action perception.
Affiliation(s)
- Vojtěch Smekal
- Brain and Emotion Lab, Department of Cognitive Neuroscience, Maastricht Brain Imaging Centre, Maastricht University, Maastricht, Netherlands
12
Schmidt EM, Smith RA, Fernández A, Emmermann B, Christensen JF. Mood induction through imitation of full-body movements with different affective intentions. Br J Psychol 2024; 115:148-180. [PMID: 37740117 DOI: 10.1111/bjop.12681]
Abstract
Theories of human emotion, including some emotion embodiment theories, suggest that our moods and affective states are reflected in the movements of our bodies. We used the reverse process for mood regulation: modulating body movements to regulate mood. Dancing is a type of full-body movement characterized by affective expressivity and hence offers the possibility to express different affective states through the same movement sequences. We tested whether the repeated imitation of a dancer performing two simple full-body dance movement sequences with different affective expressivity (happy or sad) could change mood states. Computer-based systems using avatars as dance models to imitate offer a series of advantages, such as independence from physical contact and location. Therefore, we compared mood induction effects in two conditions: participants were asked to imitate dance movements from one of two avatars, shown either as (a) videos of a human dancer model or (b) videos of a robot dancer model. The mood induction was successful for both happy and sad imitations, regardless of condition (human vs. robot avatar dance model). Moreover, the magnitude of happy mood induction and how much participants liked the task predicted work-related motivation after the mood induction. We conclude that mood regulation through dance movements is possible and beneficial in the work context.
Affiliation(s)
- Eva-Madeleine Schmidt
- Department of Cognitive Neuropsychology, Max-Planck-Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Department of Language and Literature, Max-Planck-Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Max Planck School of Cognition, Leipzig, Germany
- Rebecca A Smith
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
- Andrés Fernández
- Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, UK
- Birte Emmermann
- Chair of Ergonomics, Technical University of Munich, Munich, Germany
- Julia F Christensen
- Department of Cognitive Neuropsychology, Max-Planck-Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Department of Language and Literature, Max-Planck-Institute for Empirical Aesthetics, Frankfurt am Main, Germany
13
Pisanu E, Arbula S, Rumiati RI. Agreeableness modulates mental state decoding: Electrophysiological evidence. Hum Brain Mapp 2024; 45:e26593. [PMID: 38339901 PMCID: PMC10826893 DOI: 10.1002/hbm.26593]
Abstract
Agreeableness, one of the five major personality traits, is associated with theory of mind (ToM) abilities. One of the critical processes involved in ToM is the decoding of emotional cues. In the present study, we investigated whether this process is modulated by agreeableness using electroencephalography (EEG), while taking into account task complexity and sex differences that are expected to moderate the relationship between emotional decoding and agreeableness. This approach allowed us to identify at which stage of neural processing agreeableness kicks in, in order to distinguish its impact on early, perceptual processes from slower, inferential processing. Two tasks were administered to 62 participants during EEG recording: the reading the mind in the eyes (RME) task, requiring the decoding of complex mental states from eye expressions, and the biological (e)motion task, involving the perception of basic emotional actions through point-light body stimuli. Event-related potential (ERP) results showed a significant correlation between agreeableness and the contrast between emotional and non-emotional trials in a late time window, only during the RME task. Specifically, higher levels of agreeableness were associated with deeper neural processing of emotional versus non-emotional trials in the whole and male samples, whereas the modulation in females was negligible. Source analysis highlighted that this ERP-agreeableness association engages the ventromedial prefrontal cortex. Our findings expand previous research on personality and social processing and confirm that sex modulates this relationship.
Affiliation(s)
- Raffaella Ida Rumiati
- Neuroscience Area, SISSA, Trieste, Italy
- Dipartimento di Medicina dei Sistemi, Università degli Studi di Roma “Tor Vergata”, Rome, Italy
14
Christensen A, Taubert N, Huis in ’t Veld EM, de Gelder B, Giese MA. Perceptual encoding of emotions in interactive bodily expressions. iScience 2024; 27:108548. [PMID: 38161419 PMCID: PMC10755352 DOI: 10.1016/j.isci.2023.108548]
Abstract
For social species, e.g., primates, the perceptual analysis of social interactions is an essential skill for survival, emerging already early during development. While real-life emotional behavior includes predominantly interactions between conspecifics, research on the perception of emotional body expressions has primarily focused on perception of single individuals. While previous studies using point-light or video stimuli of interacting people suggest an influence of social context on the perception and neural encoding of interacting bodies, it remains entirely unknown how emotions of multiple interacting agents are perceptually integrated. We studied this question using computer animation by creating scenes with two interacting avatars whose emotional style was independently controlled. While participants had to report the emotional style of a single agent, we found a systematic influence of the emotion expressed by the other, which was consistent with the social interaction context. The emotional styles of interacting individuals are thus jointly encoded.
Affiliation(s)
- Andrea Christensen
- Section Computational Sensomotorics, Department of Cognitive Neurology, Hertie-Institute for Clinical Brain Research, Centre for Integrative Neuroscience, University Clinic Tübingen, Germany
- Nick Taubert
- Section Computational Sensomotorics, Department of Cognitive Neurology, Hertie-Institute for Clinical Brain Research, Centre for Integrative Neuroscience, University Clinic Tübingen, Germany
- Elisabeth M.J. Huis in ’t Veld
- Department of Medical and Clinical Psychology, School of Social and Behavioral Sciences, Tilburg University, Tilburg, the Netherlands
- Beatrice de Gelder
- Brain and Emotion Laboratory, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, EV Maastricht 6229, the Netherlands
- Martin A. Giese
- Section Computational Sensomotorics, Department of Cognitive Neurology, Hertie-Institute for Clinical Brain Research, Centre for Integrative Neuroscience, University Clinic Tübingen, Germany
15
Lohaus T, Reckelkamm S, Thoma P. Treating social cognition impairment with the online therapy 'SoCoBo': A randomized controlled trial including traumatic brain injury patients. PLoS One 2024; 19:e0294767. [PMID: 38198450 PMCID: PMC10781160 DOI: 10.1371/journal.pone.0294767]
Abstract
OBJECTIVE Acquired brain injuries (ABIs), such as traumatic brain injuries (TBIs), often entail impairments of general cognition (e.g., memory, attention, or executive functions) and social cognition (e.g., emotion recognition, theory of mind [ToM], social problem-solving). The availability of fully computerized interventions targeting sociocognitive deficits specifically in neurologically impaired patients is extremely limited. Therefore, the Treatment Program for Deficits in Social Cognition and Social Competencies of the Ruhr University Bochum (SoCoBo), a fully computerized online therapy designed for ABI patients, was evaluated in a randomized controlled trial involving TBI patients. METHOD Sixty-four patients with TBI were randomly assigned to two groups; 43 patients fully completed either SoCoBo (N = 27) or a commercially available computerized program for cognitive rehabilitation (RehaCom®, N = 16). All participants underwent comprehensive pre-post online neuropsychological assessment and worked with their respective rehabilitation programs four days a week over a scheduled period of 12 weeks. RESULTS After treatment, the SoCoBo group, but not the RehaCom® group, showed significant improvements in facial emotion recognition and self-rated empathy. Moreover, in the SoCoBo group, an increase in empathy was also associated with increased life satisfaction after treatment. There were no improvements in ToM or social problem-solving. Furthermore, general cognition did not improve in either group. CONCLUSIONS SoCoBo represents an effective new online therapy for the amelioration of deficits in key domains of social cognition. Its implementation in clinical practice will serve as a meaningful addition to the existing fully computerized approaches specifically in neurological patient groups.
Affiliation(s)
- Tobias Lohaus
- Neuropsychological Therapy Centre (NTC), Faculty of Psychology, Ruhr-University Bochum, Bochum, North Rhine-Westphalia, Germany
- Sally Reckelkamm
- Neuropsychological Therapy Centre (NTC), Faculty of Psychology, Ruhr-University Bochum, Bochum, North Rhine-Westphalia, Germany
- Patrizia Thoma
- Neuropsychological Therapy Centre (NTC), Faculty of Psychology, Ruhr-University Bochum, Bochum, North Rhine-Westphalia, Germany
16
Richoz AR, Stacchi L, Schaller P, Lao J, Papinutto M, Ticcinelli V, Caldara R. Recognizing facial expressions of emotion amid noise: A dynamic advantage. J Vis 2024; 24:7. [PMID: 38197738 PMCID: PMC10790674 DOI: 10.1167/jov.24.1.7]
Abstract
Humans communicate internal states through complex facial movements shaped by biological and evolutionary constraints. Although real-life social interactions are flooded with dynamic signals, current knowledge on facial expression recognition mainly arises from studies using static face images. This experimental bias might stem from previous studies consistently reporting that young adults benefit minimally from the richer dynamic over static information, whereas children, the elderly, and clinical populations very strongly do (Richoz, Jack, Garrod, Schyns, & Caldara, 2015, 2018b). These observations point to a near-optimal facial expression decoding system in young adults, almost insensitive to the advantage of dynamic over static cues. Surprisingly, no study has yet tested the idea that such evidence might be rooted in a ceiling effect. To this aim, we asked 70 healthy young adults to perform static and dynamic facial expression recognition of the six basic expressions while parametrically and randomly varying the low-level normalized phase and contrast signal (0%-100%) of the faces. As predicted, when 100% face signals were presented, static and dynamic expressions were recognized with equal efficiency, with the exception of those with the most informative dynamics (i.e., happiness and surprise). However, when less signal was available, dynamic expressions were all better recognized than their static counterparts (peaking at ∼20%). Our data show that facial movements increase our ability to efficiently identify the emotional states of others under the suboptimal visual conditions that can occur in everyday life. Dynamic signals are more effective and sensitive than static ones for decoding all facial expressions of emotion for all human observers.
Affiliation(s)
- Anne-Raphaëlle Richoz
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Lisa Stacchi
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Pauline Schaller
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Junpeng Lao
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Michael Papinutto
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Valentina Ticcinelli
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Roberto Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
17
Pan H, Chen Z, Jospe K, Gao Q, Sheng J, Gao Z, Perry A. Mood congruency affects physiological synchrony but not empathic accuracy in a naturalistic empathy task. Biol Psychol 2023; 184:108720. [PMID: 37952694 DOI: 10.1016/j.biopsycho.2023.108720]
Abstract
Empathy is a crucial aspect of our daily lives, as it enhances our wellbeing and is a proxy for prosocial behavior. It encompasses two related but partially distinct components: cognitive and affective empathy. Both are susceptible to context, biases, and an individual's physiological state. Few studies have explored the effects of a person's mood on these empathy components, and results are mixed. The current study takes advantage of an ecological, naturalistic empathy task - the empathic accuracy (EA) task - in combination with physiological measurements to examine and differentiate between the effects of one's mood on both empathy components. Participants were induced with a positive or negative mood and presented with videos of targets narrating autobiographical negative stories, selected from a Chinese empathy dataset that we developed (now publicly available). The stories were conveyed in audio-only, visual-only, and full-video formats. Participants rated the target's emotional state while watching or listening to their stories, and physiological measures were taken throughout the process. Importantly, similar measures were taken from the targets when they narrated the stories, allowing a comparison between participants' and targets' measures. We found that in the audio-only and visual-only conditions, participants whose moods were congruent with the target's showed higher physiological synchrony than those with an incongruent mood, implying a mood-congruency effect on affective empathy. However, there was no mood effect on empathic accuracy (reflecting cognitive empathy), suggesting a different influence of mood on the two empathy components.
Affiliation(s)
- Hanxi Pan
- Department of Psychology and Behavioral Sciences, Zhejiang University, China
- Zhiyun Chen
- Department of Psychology and Behavioral Sciences, Zhejiang University, China
- Karine Jospe
- Department of Psychology, The Hebrew University of Jerusalem, Israel; Department of Psychology, Tel-Aviv University, Israel
- Qi Gao
- Department of Psychology and Behavioral Sciences, Zhejiang University, China
- Jinyou Sheng
- Department of Psychology and Behavioral Sciences, Zhejiang University, China
- Zaifeng Gao
- Department of Psychology and Behavioral Sciences, Zhejiang University, China
- Anat Perry
- Department of Psychology, The Hebrew University of Jerusalem, Israel
18
Wang JZ, Zhao S, Wu C, Adams RB, Newman MG, Shafir T, Tsachor R. Unlocking the Emotional World of Visual Media: An Overview of the Science, Research, and Impact of Understanding Emotion. Proc IEEE 2023; 111:1236-1286. [PMID: 37859667 PMCID: PMC10586271 DOI: 10.1109/jproc.2023.3273517]
Abstract
The emergence of artificial emotional intelligence technology is revolutionizing the fields of computers and robotics, allowing for a new level of communication and understanding of human behavior that was once thought impossible. While recent advancements in deep learning have transformed the field of computer vision, automated understanding of evoked or expressed emotions in visual media remains in its infancy. This slow progress stems from the absence of a universally accepted definition of "emotion," coupled with the inherently subjective nature of emotions and their intricate nuances. In this article, we provide a comprehensive, multidisciplinary overview of the field of emotion analysis in visual media, drawing on insights from psychology, engineering, and the arts. We begin by exploring the psychological foundations of emotion and the computational principles that underpin the understanding of emotions from images and videos. We then review the latest research and systems within the field, accentuating the most promising approaches. We also discuss the current technological challenges and limitations of emotion analysis, underscoring the necessity for continued investigation and innovation. We contend that this represents a "Holy Grail" research problem in computing and delineate pivotal directions for future inquiry. Finally, we examine the ethical ramifications of emotion-understanding technologies and contemplate their potential societal impacts. Overall, this article endeavors to equip readers with a deeper understanding of the domain of emotion analysis in visual media and to inspire further research and development in this captivating and rapidly evolving field.
Affiliation(s)
- James Z Wang
- College of Information Sciences and Technology, The Pennsylvania State University, University Park, PA 16802 USA
- Sicheng Zhao
- Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing 100084, China
- Chenyan Wu
- College of Information Sciences and Technology, The Pennsylvania State University, University Park, PA 16802 USA
- Reginald B Adams
- Department of Psychology, The Pennsylvania State University, University Park, PA 16802 USA
- Michelle G Newman
- Department of Psychology, The Pennsylvania State University, University Park, PA 16802 USA
- Tal Shafir
- Emily Sagol Creative Arts Therapies Research Center, University of Haifa, Haifa 3498838, Israel
- Rachelle Tsachor
- School of Theatre and Music, University of Illinois at Chicago, Chicago, IL 60607 USA
19
Zhang M, Zhou Y, Xu X, Ren Z, Zhang Y, Liu S, Luo W. Multi-view emotional expressions dataset using 2D pose estimation. Sci Data 2023; 10:649. [PMID: 37739952 PMCID: PMC10516935 DOI: 10.1038/s41597-023-02551-y]
Abstract
Human body expressions convey emotional shifts and intentions of action and, in some cases, are even more effective than other emotion models. Although many datasets of body expressions incorporating motion capture are available, widely distributed datasets of naturalistic body expressions based on 2D video are lacking. In this paper, therefore, we report the multi-view emotional expressions dataset (MEED), built using 2D pose estimation. Twenty-two actors presented six emotional (anger, disgust, fear, happiness, sadness, surprise) and neutral body movements from three viewpoints (left, front, right). A total of 4102 videos were captured. MEED consists of the corresponding pose estimation results (i.e., 397,809 PNG files and 397,809 JSON files) and exceeds 150 GB in size. We believe this dataset will benefit research in various fields, including affective computing, human-computer interaction, social neuroscience, and psychiatry.
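As a rough illustration of how per-frame 2D pose-estimation JSON of this general kind is typically consumed, the sketch below parses an OpenPose-style flat keypoint list into (x, y, confidence) triples. The file layout and key names here are assumptions for illustration, not the published MEED schema.

```python
import json

# Hypothetical single-frame pose file in an OpenPose-style layout:
# a flat [x1, y1, c1, x2, y2, c2, ...] list per detected person.
# The actual MEED JSON schema may differ.
sample = json.dumps({
    "people": [
        {"pose_keypoints_2d": [120.5, 80.2, 0.91,
                               118.0, 95.7, 0.88]}
    ]
})

def load_keypoints(raw):
    """Group each person's flat keypoint list into (x, y, confidence) triples."""
    frame = json.loads(raw)
    people = []
    for person in frame["people"]:
        flat = person["pose_keypoints_2d"]
        people.append([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    return people

poses = load_keypoints(sample)
print(len(poses), len(poses[0]))  # number of people, keypoints per person
```

A consumer of such a dataset would apply this per frame, pairing each JSON file with its rendered PNG counterpart.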
Affiliation(s)
- Mingming Zhang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Yanan Zhou
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Xinye Xu
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Ziwei Ren
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Yihan Zhang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Shenglan Liu
- School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian, 116024, Liaoning, China
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, Liaoning, China
- Wenbo Luo
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
20
Kim H, Küster D, Girard JM, Krumhuber EG. Human and machine recognition of dynamic and static facial expressions: prototypicality, ambiguity, and complexity. Front Psychol 2023; 14:1221081. [PMID: 37794914 PMCID: PMC10546417 DOI: 10.3389/fpsyg.2023.1221081]
Abstract
A growing body of research suggests that movement aids facial expression recognition. However, less is known about the conditions under which the dynamic advantage occurs. The aim of this research was to test emotion recognition in static and dynamic facial expressions, thereby exploring the role of three featural parameters (prototypicality, ambiguity, and complexity) in human and machine analysis. In two studies, facial expression videos and corresponding images depicting the peak of the target and non-target emotion were presented to human observers and a machine classifier (FACET). Results revealed higher recognition rates for dynamic stimuli compared to non-target images. This benefit disappeared in the context of target-emotion images, which were recognized similarly well as (or even better than) videos, and were more prototypical, less ambiguous, and more complex in appearance than non-target images. While prototypicality and ambiguity exerted more predictive power on machine performance, complexity was more indicative of human emotion recognition. Interestingly, recognition performance by the machine was found to be superior to that of humans for both target and non-target images. Together, the findings point towards a compensatory role of dynamic information, particularly when static stimuli lack relevant features of the target emotion. Implications for research using automatic facial expression analysis (AFEA) are discussed.
Affiliation(s)
- Hyunwoo Kim
- Department of Experimental Psychology, University College London, London, United Kingdom
- Dennis Küster
- Cognitive Systems Lab, Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany
- Jeffrey M. Girard
- Department of Psychology, University of Kansas, Lawrence, KS, United States
- Eva G. Krumhuber
- Department of Experimental Psychology, University College London, London, United Kingdom
21
Lin C, Bulls LS, Tepfer LJ, Vyas AD, Thornton MA. Advancing Naturalistic Affective Science with Deep Learning. Affect Sci 2023; 4:550-562. [PMID: 37744976 PMCID: PMC10514024 DOI: 10.1007/s42761-023-00215-z]
Abstract
People express their own emotions and perceive others' emotions via a variety of channels, including facial movements, body gestures, vocal prosody, and language. Studying these channels of affective behavior offers insight into both the experience and perception of emotion. Prior research has predominantly focused on studying individual channels of affective behavior in isolation using tightly controlled, non-naturalistic experiments. This approach limits our understanding of emotion in more naturalistic contexts where different channels of information tend to interact. Traditional methods struggle to address this limitation: manually annotating behavior is time-consuming, making it infeasible to do at large scale; manually selecting and manipulating stimuli based on hypotheses may neglect unanticipated features, potentially generating biased conclusions; and common linear modeling approaches cannot fully capture the complex, nonlinear, and interactive nature of real-life affective processes. In this methodology review, we describe how deep learning can be applied to address these challenges to advance a more naturalistic affective science. First, we describe current practices in affective research and explain why existing methods face challenges in revealing a more naturalistic understanding of emotion. Second, we introduce deep learning approaches and explain how they can be applied to tackle three main challenges: quantifying naturalistic behaviors, selecting and manipulating naturalistic stimuli, and modeling naturalistic affective processes. Finally, we describe the limitations of these deep learning methods, and how these limitations might be avoided or mitigated. By detailing the promise and the peril of deep learning, this review aims to pave the way for a more naturalistic affective science.
Affiliation(s)
- Chujun Lin
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH USA
- Landry S. Bulls
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH USA
- Lindsey J. Tepfer
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH USA
- Amisha D. Vyas
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH USA
- Mark A. Thornton
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH USA
22
Goettker A, Borgerding N, Leeske L, Gegenfurtner KR. Cues for predictive eye movements in naturalistic scenes. J Vis 2023; 23:12. [PMID: 37728915 PMCID: PMC10516764 DOI: 10.1167/jov.23.10.12]
Abstract
We previously compared following of the same trajectories with eye movements, presented either as isolated targets or embedded in a naturalistic scene - in this case, the movement of a puck in an ice hockey game. We observed that the oculomotor system was able to leverage the contextual cues available in the naturalistic scene to produce predictive eye movements. In this study, we wanted to assess which factors are critical for achieving this predictive advantage by manipulating four factors: the expertise of the viewers, the amount of available peripheral information, and positional and kinematic cues. The more peripheral information became available (by manipulating the area of the video that was visible), the better the predictions of all observers. However, expert ice hockey fans were consistently better at predicting than novices and used peripheral information more effectively for predictive saccades. Artificial cues about player positions did not lead to a predictive advantage, whereas impairing the causal structure of kinematic cues by playing the video in reverse led to a severe impairment. When videos were flipped vertically to introduce more difficult kinematic cues, predictive behavior was comparable to watching the original videos. Together, these results demonstrate that, when contextual information is available in naturalistic scenes, the oculomotor system successfully integrates it and does not rely only on low-level information about the target trajectory. Critical factors for successful prediction seem to be the amount of available information, experience with the stimuli, and the availability of intact kinematic cues for player movements.
Affiliation(s)
- Alexander Goettker
- Justus Liebig Universität Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior, University of Marburg and Justus Liebig University, Giessen, Germany
- Linus Leeske
- Justus Liebig Universität Giessen, Giessen, Germany
- Karl R Gegenfurtner
- Justus Liebig Universität Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior, University of Marburg and Justus Liebig University, Giessen, Germany
23
Fakheir Y, Khalil R. The effects of abnormal visual experience on neurodevelopmental disorders. Dev Psychobiol 2023; 65:e22408. [PMID: 37607893 DOI: 10.1002/dev.22408]
Abstract
Normal visual development is supported by intrinsic neurobiological mechanisms and by appropriate stimulation from the environment, both of which facilitate the maturation of visual functions. However, an offset of this balance can give rise to visual disorders. Therefore, understanding the factors that support normal vision during development and in the mature brain is important, as vision guides movement, enables social interaction, and allows children to recognize and understand their environment. In this paper, we review fundamental mechanisms that support the maturation of visual functions and discuss and draw links between the perceptual and neurobiological impairments in autism spectrum disorder (ASD) and schizophrenia. We aim to explore how this is evident in the case of ASD, and how perceptual and neurobiological deficits further degrade social ability. Furthermore, we describe the altered perceptual experience of those with schizophrenia and evaluate theories of the underlying neural deficits that alter perception.
Affiliation(s)
- Yara Fakheir: Department of Biology, Chemistry, and Environmental Sciences, American University of Sharjah, Sharjah, UAE
- Reem Khalil: Department of Biology, Chemistry, and Environmental Sciences, American University of Sharjah, Sharjah, UAE

24
Asalıoğlu EN, Göksun T. The role of hand gestures in emotion communication: Do type and size of gestures matter? Psychological Research 2023; 87:1880-1898. [PMID: 36436110 DOI: 10.1007/s00426-022-01774-9]
Abstract
We communicate emotions multimodally, yet non-verbal emotion communication is a relatively understudied area of research. In three experiments, we investigated the role of gesture characteristics (e.g., type, size in space) in individuals' processing of emotional content. In Experiment 1, participants rated the emotional intensity of emotional narratives in videoclips containing either iconic or beat gestures. Participants in the iconic gesture condition rated the emotional intensity higher than participants in the beat gesture condition. In Experiment 2, the size of gestures and its interaction with gesture type were investigated in a within-subjects design. Participants again rated the emotional intensity of emotional narratives in the videoclips. Although individuals overall rated narrow gestures as more emotionally intense than wider gestures, no effect of gesture type, or of the interaction between gesture size and type, was found. Experiment 3 was conducted to check whether the findings of Experiment 2 were due to viewing gestures in all videoclips. We compared the gesture and no-gesture (i.e., speech only) conditions and found no difference between them in emotional ratings. However, we could not replicate the gesture-size findings of Experiment 2. Overall, these findings underscore the importance of examining gestures' role in emotional contexts and indicate that gesture characteristics such as size should be considered in nonverbal communication.
Affiliation(s)
- Esma Nur Asalıoğlu: Department of Psychology, Koç University, Rumelifeneri Yolu, Sariyer, 34450, Istanbul, Turkey
- Tilbe Göksun: Department of Psychology, Koç University, Rumelifeneri Yolu, Sariyer, 34450, Istanbul, Turkey

25
Zhu X, Gong Y, Xu T, Lian W, Xu S, Fan L. Incongruent gestures slow the processing of facial expressions in university students with social anxiety. Front Psychol 2023; 14:1199537. [PMID: 37674750 PMCID: PMC10478090 DOI: 10.3389/fpsyg.2023.1199537]
Abstract
In recent years, an increasing number of studies have examined the mechanisms underlying nonverbal emotional information processing in people with high social anxiety (HSA). However, most of these studies have focused on the processing of facial expressions, and research on gesture processing, let alone combined face-gesture processing, in HSA individuals is scarce. The present study explored the processing characteristics and mechanism of the interaction between gestures and facial expressions in people with HSA and low social anxiety (LSA). We recruited university students as participants and used Liebowitz Social Anxiety Scale scores to distinguish the HSA and LSA groups. We used a 2 (group: HSA, LSA) × 2 (emotion valence: positive, negative) × 2 (task: face, gesture) multifactor mixed design, with videos of a single face or gesture and of combined face-gesture cues as stimuli. We found that (1) faces and gestures are processed distinctly, with gestures recognized faster than faces; (2) gesture processing shows an attentional enhancement, particularly for negative gestures; and (3) when the emotional valence of faces and gestures aligns, recognition of both is facilitated. However, incongruent gestures affected the processing of facial expressions more strongly than the facial expressions themselves did, suggesting that the processing of facial emotions depends heavily on the environmental cues provided by gestures. These findings indicate that gestures play an important role in emotional processing and that facial emotional processing is strongly dependent on the environmental cues derived from gestures, which helps to clarify the biases in the interpretation of emotional information seen in people with HSA.
Affiliation(s)
- Xinyi Zhu: Department of Psychology, School of Education, Wenzhou University, Wenzhou, China; Department of Psychology, Jing Hengyi School of Education, Hangzhou Normal University, Hangzhou, China
- Yan Gong: Department of Psychology, School of Education, Wenzhou University, Wenzhou, China
- Tingting Xu: Department of Psychology, School of Education, Wenzhou University, Wenzhou, China
- Wen Lian: Department of Psychology, School of Education, Wenzhou University, Wenzhou, China
- Shuhui Xu: Department of Psychology, School of Education, Wenzhou University, Wenzhou, China
- Lu Fan: Department of Psychology, School of Education, Wenzhou University, Wenzhou, China

26
Zhang M, Yu L, Zhang K, Du B, Zhan B, Jia S, Chen S, Han F, Li Y, Liu S, Yi X, Liu S, Luo W. Construction and validation of the Dalian emotional movement open-source set (DEMOS). Behav Res Methods 2023; 55:2353-2366. [PMID: 35931937 DOI: 10.3758/s13428-022-01887-4]
Abstract
Human body movements are important for emotion recognition and social communication and have received extensive attention from researchers. In this field, emotional biological motion stimuli, as depicted by point-light displays, are widely used. However, the number of stimuli in existing material libraries is small, and standardized indicators are lacking, which limits experimental design and execution. Therefore, based on our prior kinematic dataset, we constructed the Dalian Emotional Movement Open-source Set (DEMOS) using computational modeling. The DEMOS includes three views (frontal 0°, left 45°, and left 90°) and comprises 2664 high-quality videos of emotional biological motion, each depicting one of six states: happiness, sadness, anger, fear, disgust, or neutral. All stimuli were validated in terms of recognition accuracy, emotional intensity, and subjective movement. The objective movement for each expression was also calculated. The DEMOS can be downloaded for free from https://osf.io/83fst/. To our knowledge, this is the largest multi-view emotional biological motion set based on the whole body. The DEMOS can be applied in many fields, including affective computing, social cognition, and psychiatry.
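Multiple camera views of a point-light display, like the 0°/45°/90° views in DEMOS, amount to rotating the 3-D marker data about the vertical axis and then projecting orthographically to 2-D dot positions. A minimal sketch of that idea (the function name and array layout are illustrative assumptions, not code from the DEMOS pipeline):

```python
import numpy as np

def project_view(joints_3d, view_deg=0.0):
    """Rotate (n_frames, n_joints, 3) marker trajectories about the vertical
    (y) axis by view_deg and drop depth, yielding the 2-D dot positions of a
    point-light display. view_deg = 0, 45, 90 mimic frontal/oblique/profile views."""
    a = np.deg2rad(view_deg)
    rot = np.array([[np.cos(a), 0.0, np.sin(a)],
                    [0.0, 1.0, 0.0],
                    [-np.sin(a), 0.0, np.cos(a)]])
    return (joints_3d @ rot.T)[..., :2]  # keep x (horizontal) and y (vertical)

# One frame with a single marker off to the right (x = 1): in the 90° profile
# view it rotates into the depth axis, so its projected x collapses to 0.
frame = np.array([[[1.0, 0.0, 0.0]]])
print(project_view(frame, 0.0))   # frontal view: marker stays at x = 1
print(project_view(frame, 90.0))  # profile view: marker projects to x ≈ 0
```

The same rotation applied to a full marker trajectory produces the dot coordinates for every frame of the chosen view.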
Affiliation(s)
- Mingming Zhang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Lu Yu: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Keye Zhang: School of Social and Behavioral Sciences, Nanjing University, Nanjing, 210023, China
- Bixuan Du: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Bin Zhan: State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Shuxin Jia: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Shaohua Chen: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Fengxu Han: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Yiwen Li: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Shuaicheng Liu: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Xi Yi: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Shenglan Liu: School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian, 116024, China; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, China
- Wenbo Luo: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China

27
Wang R, Lu X, Jiang Y. Distributed and hierarchical neural encoding of multidimensional biological motion attributes in the human brain. Cereb Cortex 2023; 33:8510-8522. [PMID: 37118887 PMCID: PMC10786095 DOI: 10.1093/cercor/bhad136]
Abstract
The human visual system can efficiently extract distinct physical, biological, and social attributes (e.g., facing direction, gender, and emotional state) from biological motion (BM), but how these attributes are encoded in the brain remains largely unknown. In the current study, we used functional magnetic resonance imaging to investigate this issue while participants viewed multidimensional BM stimuli. Using multiple regression representational similarity analysis, we identified distributed brain areas related, respectively, to the processing of facing direction, gender, and emotional state conveyed by BM. These brain areas are governed by a hierarchical structure in which the respective neural encodings of facing direction, gender, and emotional state modulate one another in descending order. We further revealed that a portion of the brain areas identified in the representational similarity analysis was specific to the neural encoding of each attribute and correlated with the corresponding behavioral results. These findings unravel the brain networks for encoding BM attributes in light of their interactions, and highlight that the processing of multidimensional BM attributes is recurrently interactive.
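The multiple-regression variant of representational similarity analysis named here can be sketched in a few lines: build one model representational dissimilarity matrix (RDM) per stimulus attribute, then regress the neural RDM onto all model RDMs at once, so each beta reflects an attribute's unique contribution. The toy data, attribute codes, and function names below are illustrative assumptions, not the authors' code:

```python
import numpy as np
from scipy.spatial.distance import pdist

def model_rdm(values):
    """Condensed pairwise-dissimilarity vector for one stimulus attribute."""
    return pdist(np.asarray(values, float).reshape(-1, 1), metric="euclidean")

def multiple_regression_rsa(neural_rdm, model_rdms):
    """Regress the neural RDM onto z-scored model RDMs; the betas index each
    attribute's unique contribution to the neural dissimilarity structure."""
    X = np.column_stack([(m - m.mean()) / m.std() for m in model_rdms])
    X = np.column_stack([np.ones(len(neural_rdm)), X])  # add intercept
    betas, *_ = np.linalg.lstsq(X, neural_rdm, rcond=None)
    return betas[1:]  # drop the intercept term

# Toy example: 8 biological-motion stimuli coded for facing direction, gender,
# and emotion (hypothetical codes), plus a simulated "neural" RDM that is
# driven mainly by facing direction.
direction = [0, 0, 0, 0, 1, 1, 1, 1]
gender    = [0, 1, 0, 1, 0, 1, 0, 1]
emotion   = [0, 1, 2, 0, 1, 2, 0, 1]
rdms = [model_rdm(direction), model_rdm(gender), model_rdm(emotion)]
rng = np.random.default_rng(0)
neural = 2.0 * rdms[0] + 0.3 * rdms[1] + rng.normal(0.0, 0.1, rdms[0].size)
betas = multiple_regression_rsa(neural, rdms)
print(betas)  # largest beta for the facing-direction model RDM
```

Fitting all model RDMs jointly, rather than correlating each with the neural RDM separately, is what lets the analysis separate attributes whose model RDMs are themselves correlated.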
Affiliation(s)
- Ruidi Wang: State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing 100049, China; Chinese Institute for Brain Research, 26 Science Park Road, Beijing 102206, China
- Xiqian Lu: State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing 100049, China; Chinese Institute for Brain Research, 26 Science Park Road, Beijing 102206, China
- Yi Jiang: State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing 100049, China; Chinese Institute for Brain Research, 26 Science Park Road, Beijing 102206, China

28
Preißler L, Keck J, Krüger B, Munzert J, Schwarzer G. Recognition of emotional body language from dyadic and monadic point-light displays in 5-year-old children and adults. J Exp Child Psychol 2023; 235:105713. [PMID: 37331307 DOI: 10.1016/j.jecp.2023.105713]
Abstract
Most child studies on emotion perception used faces and speech as emotion stimuli, but little is known about children's perception of emotions conveyed by body movements, that is, emotional body language (EBL). This study aimed to investigate whether processing advantages for positive emotions in children and negative emotions in adults found in studies on emotional face and term perception also occur in EBL perception. We also aimed to uncover which specific movement features of EBL contribute to emotion perception from interactive dyads compared with noninteractive monads in children and adults. We asked 5-year-old children and adults to categorize happy and angry point-light displays (PLDs), presented as pairs (dyads) and single actors (monads), in a button-press task. By applying representational similarity analyses, we determined intra- and interpersonal movement features of the PLDs and their relation to the participants' emotional categorizations. Results showed significantly higher recognition of happy PLDs in 5-year-olds and of angry PLDs in adults in monads but not in dyads. In both age groups, emotion recognition depended significantly on kinematic and postural movement features such as limb contraction and vertical movement in monads and dyads, whereas in dyads recognition also relied on interpersonal proximity measures such as interpersonal distance. Thus, EBL processing in monads seems to undergo a similar developmental shift from a positivity bias to a negativity bias, as was previously found for emotional faces and terms. Despite these age-specific processing biases, children and adults seem to use similar movement features in EBL processing.
Affiliation(s)
- Lucie Preißler: Department of Developmental Psychology, Justus Liebig University Giessen, 35394 Gießen, Germany
- Johannes Keck: Neuromotor Behavior Lab, Department of Sport Science, Justus Liebig University Giessen, 35394 Gießen, Germany
- Britta Krüger: Neuromotor Behavior Lab, Department of Sport Science, Justus Liebig University Giessen, 35394 Gießen, Germany
- Jörn Munzert: Neuromotor Behavior Lab, Department of Sport Science, Justus Liebig University Giessen, 35394 Gießen, Germany
- Gudrun Schwarzer: Department of Developmental Psychology, Justus Liebig University Giessen, 35394 Gießen, Germany

29
Zhang N, Hu HL, Tso SH, Liu C. To switch or not? Effects of spokes-character urgency during the social app loading process and app type on user switching intention. Front Psychol 2023; 14:1110808. [PMID: 37384167 PMCID: PMC10299737 DOI: 10.3389/fpsyg.2023.1110808]
Abstract
Users of mobile phone applications (apps) often have to wait for app pages to load, a process that substantially affects user experience. Based on the Attentional Gate Model and Emotional Contagion Theory, this paper explores, through two studies, how the urgency expressed by a spokes-character's movement on the loading page of a social app and the app type jointly affect users' switching intention. In Study 1 (N = 173), the results demonstrated that for a hedonic-orientated app, a high-urgency (vs. low-urgency) spokes-character resulted in a lower switching intention, whereas the opposite occurred for a utilitarian-orientated app. We adopted a similar methodology in Study 2 (N = 182), and the results showed that perceived waiting time mediated the interaction effect demonstrated in Study 1. Specifically, for the hedonic-orientated (vs. utilitarian-orientated) social app, the high-urgency (vs. low-urgency) spokes-character led participants to estimate a shorter waiting time, which in turn induced a lower switching intention. This paper contributes to the literature on emotion, spokes-characters, and human-computer interaction, offering a deeper understanding of users' perceptions during the loading process and informing the design of spokes-characters for app loading pages.
Affiliation(s)
- Ning Zhang: College of Management, Shenzhen University, Shenzhen, China
- Hsin-Li Hu: School of Communication, Hang Seng University of Hong Kong, Hong Kong, China
- Scarlet H. Tso: School of Communication, Hang Seng University of Hong Kong, Hong Kong, China
- Chunqun Liu: School of Hotel and Tourism Management, The Chinese University of Hong Kong, Hong Kong, China

30
Christensen JF, Bruhn L, Schmidt EM, Bahmanian N, Yazdi SHN, Farahi F, Sancho-Escanero L, Menninghaus W. A 5-emotions stimuli set for emotion perception research with full-body dance movements. Sci Rep 2023; 13:8757. [PMID: 37253770 DOI: 10.1038/s41598-023-33656-4]
Abstract
Ekman famously contended that there are different channels of emotional expression (face, voice, body), and that emotion recognition ability confers an adaptive advantage to the individual. Yet, still today, much emotion perception research is focussed on emotion recognition from the face, and few validated emotionally expressive full-body stimulus sets are available. Based on research on emotional speech perception, we created a new, highly controlled full-body stimulus set. We used the same-sequence approach rather than emotional actions (e.g., jumping for joy, recoiling in fear): one professional dancer danced 30 sequences of (dance) movements five times each, expressing joy, anger, fear, sadness or a neutral state, one at each repetition. We outline the creation of a total of 150 such 6-s-long video stimuli, which show the dancer as a white silhouette on a black background. Ratings from 90 participants (emotion recognition, aesthetic judgment) showed that the intended emotion was recognized above chance (chance: 20%; joy: 45%, anger: 48%, fear: 37%, sadness: 50%, neutral state: 51%), and that aesthetic judgment was sensitive to the intended emotion (beauty ratings: joy > anger > fear > neutral state, and sadness > fear > neutral state). The stimulus set, normative values and code are available for download.
Affiliation(s)
- Julia F Christensen: Department of Language and Literature, Max-Planck-Institute for Empirical Aesthetics, Frankfurt/M, Germany; Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Laura Bruhn: Department of Language and Literature, Max-Planck-Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Eva-Madeleine Schmidt: Department of Language and Literature, Max-Planck-Institute for Empirical Aesthetics, Frankfurt/M, Germany; Max Planck School of Cognition, Max Planck Institute, Leipzig, Germany
- Nasimeh Bahmanian: Department of Language and Literature, Max-Planck-Institute for Empirical Aesthetics, Frankfurt/M, Germany; Department of Modern Languages, Goethe University, Frankfurt, Germany
- Winfried Menninghaus: Department of Language and Literature, Max-Planck-Institute for Empirical Aesthetics, Frankfurt/M, Germany

31
Bieńkiewicz MMN, Janaqi S, Jean P, Bardy BG. Impact of emotion-laden acoustic stimuli on group synchronisation performance. Sci Rep 2023; 13:7094. [PMID: 37127737 PMCID: PMC10150690 DOI: 10.1038/s41598-023-34406-2]
Abstract
The ability to synchronise with other people is a core socio-motor competence acquired during human development. In this study we aimed to understand the impact of individual emotional arousal on joint action performance. We asked 15 mixed-gender groups (of 4 individuals each) to participate in a digital, four-way movement synchronisation task. Participants shared the same physical space, but could not see each other during the task. In each trial run, every participant was induced with an emotion-laden acoustic stimulus (pre-selected from the second version of the International Affective Digitized Sounds). Our data demonstrated that the human ability to synchronise is overall robust to fluctuations in individual emotional arousal, but that performance varies in quality and movement speed depending on the valence of the emotional induction (at both the individual and the group level). We found that three negative inductions per group per trial led to a drop in overall group synchronisation performance, measured as the median and standard deviation of the Kuramoto order parameter (an index of the strength of synchrony between oscillators; here, the players), in the 15 s post-induction. We report that negatively-valenced inductions led to slower oscillations, whilst positive inductions afforded faster oscillations. At the individual level of synchronisation performance, we found an effect of empathetic disposition (higher empathic competence was linked to better performance in the negative induction condition) and of participant's sex (males displayed better synchronisation performance with others). We believe this work is a blueprint for exploring the frontiers of the inextricably bound worlds of emotion and joint action, be it physical or digital.
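The synchrony index used here, the Kuramoto order parameter, reduces the N players' movement phases at each instant to a single number r ∈ [0, 1], with r = 1 for perfect phase alignment. A minimal sketch (the array layout is an illustrative assumption):

```python
import numpy as np

def kuramoto_r(phases):
    """Kuramoto order parameter per timepoint.

    phases: (n_timepoints, n_oscillators) array of movement phases in radians.
    Returns r(t) = |mean_j exp(i * theta_j(t))|, a value in [0, 1].
    """
    return np.abs(np.exp(1j * phases).mean(axis=1))

t = np.linspace(0.0, 4.0 * np.pi, 200)
# Four perfectly aligned players vs. four players evenly spread in phase
in_sync = np.stack([t, t, t, t], axis=1)
spread = np.stack([t, t + np.pi / 2, t + np.pi, t + 3 * np.pi / 2], axis=1)
print(kuramoto_r(in_sync).mean())  # ≈ 1: perfect synchrony
print(kuramoto_r(spread).mean())   # ≈ 0: phases cancel out

# The study summarises performance as the median and SD of r over a window:
r = kuramoto_r(in_sync)
print(np.median(r), r.std())
```

Real movement data first needs a phase estimate per player (e.g., from the oscillatory movement signal via a Hilbert transform) before r can be computed.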
Affiliation(s)
- Marta M N Bieńkiewicz: EuroMov Digital Health in Motion, Univ. Montpellier IMT Mines Ales, Hérault, Montpellier, 34090, France
- Stefan Janaqi: EuroMov Digital Health in Motion, Univ. Montpellier IMT Mines Ales, Hérault, Montpellier, 34090, France
- Pierre Jean: EuroMov Digital Health in Motion, Univ. Montpellier IMT Mines Ales, Hérault, Montpellier, 34090, France
- Benoît G Bardy: EuroMov Digital Health in Motion, Univ. Montpellier IMT Mines Ales, Hérault, Montpellier, 34090, France

32
Francisco V, Decatoire A, Bidet-Ildei C. Action observation and motor learning: The role of action observation in learning judo techniques. Eur J Sport Sci 2023; 23:319-329. [PMID: 35098899 DOI: 10.1080/17461391.2022.2036816]
Abstract
Within the theoretical framework of embodied cognition, several experiments have shown the existence of links between action observation and motor learning. Our aim was to assess the effectiveness of an observational learning protocol (action observation training: AOT) based on point-light displays (PLDs) in judokas. Twenty participants were given 7 days to learn Go-No-Sen. During this period, all participants received conventional kata training consisting of Uchi-komi and Nage-komi (repetition of techniques) on tatami. In addition to this conventional learning, the experimental group watched 5 min of PLD video representing the different kata techniques, whereas the control group watched neutral videos for the same duration. After the learning period, both the qualitative and biomechanical performances on the kata and the transfer abilities were assessed. The results showed better biomechanical performance and transfer ability in the experimental group than in the control group. This first experiment therefore suggests that observational learning from PLDs may be beneficial for the acquisition of judo techniques. Future experiments will be needed to specify the mechanisms involved in this effect.
Affiliation(s)
- Victor Francisco: Centre de Recherches sur la Cognition et l'Apprentissage (UMR CNRS 7295), Centre National de la Recherche Scientifique (CNRS), Université de Poitiers, Université de Tours, Poitiers, France
- Arnaud Decatoire: Centre National de la Recherche Scientifique, Institut PPRIME (UPR CNRS 3346), Université de Poitiers, Poitiers, France
- Christel Bidet-Ildei: Centre de Recherches sur la Cognition et l'Apprentissage (UMR CNRS 7295), Centre National de la Recherche Scientifique (CNRS), Université de Poitiers, Université de Tours, Poitiers, France

33
Bidet-Ildei C, Francisco V, Decatoire A, Pylouster J, Blandin Y. PLAViMoP database: A new continuously assessed and collaborative 3D point-light display dataset. Behav Res Methods 2023; 55:694-715. [PMID: 35441360 DOI: 10.3758/s13428-022-01850-3]
Abstract
It was more than 45 years ago that Gunnar Johansson invented the point-light display technique. This showed for the first time that kinematics is crucial for action recognition, and that humans are very sensitive to their conspecifics' movements. As a result, many of today's researchers use point-light displays to better understand the mechanisms behind this recognition ability. In this paper, we propose PLAViMoP, a new database of 3D point-light displays representing everyday human actions (global and fine-motor control movements), sports movements, facial expressions, interactions, and robotic movements. Access to the database is free, at https://plavimop.prd.fr/en/motions . Moreover, it incorporates a search engine to facilitate action retrieval. In this paper, we describe the construction, functioning, and assessment of the PLAViMoP database. Each sequence was analyzed according to four parameters: type of movement, movement label, sex of the actor, and age of the actor. We provide both the mean scores for each assessment of each point-light display, and the comparisons between the different categories of sequences. Our results are discussed in the light of the literature and the suitability of our stimuli for research and applications.
Affiliation(s)
- Christel Bidet-Ildei: Centre de Recherches sur la Cognition et l'Apprentissage (UMR CNRS 7295), Université de Poitiers, Université de Tours, Centre National de la Recherche Scientifique, Poitiers, France; MSHS, Bâtiment A5, 5 rue Théodore Lefebvre TSA 21103, 86073, Poitiers, Cedex 9, France
- Victor Francisco: Centre de Recherches sur la Cognition et l'Apprentissage (UMR CNRS 7295), Université de Poitiers, Université de Tours, Centre National de la Recherche Scientifique, Poitiers, France
- Arnaud Decatoire: Institut PPRIME (UPR CNRS 3346), Université de Poitiers, Centre National de la Recherche Scientifique, Poitiers, France
- Jean Pylouster: Centre de Recherches sur la Cognition et l'Apprentissage (UMR CNRS 7295), Université de Poitiers, Université de Tours, Centre National de la Recherche Scientifique, Poitiers, France
- Yannick Blandin: Centre de Recherches sur la Cognition et l'Apprentissage (UMR CNRS 7295), Université de Poitiers, Université de Tours, Centre National de la Recherche Scientifique, Poitiers, France

34
Moura N, Fonseca P, Goethel M, Oliveira-Silva P, Vilas-Boas JP, Serra S. The impact of visual display of human motion on observers' perception of music performance. PLoS One 2023; 18:e0281755. [PMID: 36888588 PMCID: PMC9994732 DOI: 10.1371/journal.pone.0281755]
Abstract
In investigating the influence of body movement on multimodal perception, human motion displays are frequently used as a means of visual standardization and control of external confounders. However, no principle is established regarding the selection of an adequate display for specific study purposes. The aim of this study was to evaluate the effects of adopting 4 visual displays (point-light, stick figure, body mass, skeleton) on observers' perception of music performances in 2 expressive conditions (immobile, projected expressiveness). Two hundred eleven participants rated 8 audio-visual samples on expressiveness, match between movement and music, and overall evaluation. The results revealed significant isolated main effects of visual display and expressive condition on the observers' ratings (both p < 0.001), and interaction effects between the two factors (p < 0.001). Displays closer to a human form (mostly the skeleton, sometimes the body mass) amplified the evaluations of expressiveness and music-movement match in the projected expressiveness condition, and of overall evaluation in the immobile condition; the opposite trend occurred with the simplified motion display (stick figure). Projected expressiveness performances were rated higher than immobile performances. Although the expressive conditions remained distinguishable across displays, the more complex displays potentiated the attribution of subjective qualities. We underline the importance of considering the display as an influencing factor in perceptual studies.
Affiliation(s)
- Nádia Moura
- School of Arts, Research Centre in Science and Technology of the Arts, Universidade Católica Portuguesa, Porto, Portugal
- Pedro Fonseca
- Porto Biomechanics Laboratory, Faculty of Sport, University of Porto, Porto, Portugal
- Márcio Goethel
- Porto Biomechanics Laboratory, Faculty of Sport, University of Porto, Porto, Portugal
- Centre of Research, Education, Innovation and Intervention in Sport, Faculty of Sport, University of Porto, Porto, Portugal
- Patrícia Oliveira-Silva
- Human Neurobehavioral Laboratory, Research Centre for Human Development, Universidade Católica Portuguesa, Porto, Portugal
- João Paulo Vilas-Boas
- Porto Biomechanics Laboratory, Faculty of Sport, University of Porto, Porto, Portugal
- Centre of Research, Education, Innovation and Intervention in Sport, Faculty of Sport, University of Porto, Porto, Portugal
- Sofia Serra
- School of Arts, Research Centre in Science and Technology of the Arts, Universidade Católica Portuguesa, Porto, Portugal

35
Smith RA, Cross ES. The McNorm library: creating and validating a new library of emotionally expressive whole body dance movements. PSYCHOLOGICAL RESEARCH 2023; 87:484-508. [PMID: 35385989 PMCID: PMC8985749 DOI: 10.1007/s00426-022-01669-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2021] [Accepted: 02/23/2022] [Indexed: 11/28/2022]
Abstract
The ability to exchange affective cues with others plays a key role in our ability to create and maintain meaningful social relationships. We express our emotions through a variety of socially salient cues, including facial expressions, the voice, and body movement. While significant advances have been made in our understanding of verbal and facial communication, to date, understanding of the role played by human body movement in our social interactions remains incomplete. To this end, here we describe the creation and validation of a new set of emotionally expressive whole-body dance movement stimuli, named the Motion Capture Norming (McNorm) Library, which was designed to reconcile a number of limitations associated with previous movement stimuli. This library comprises a series of point-light representations of a dancer's movements, which were performed to communicate to observers neutrality, happiness, sadness, anger, and fear. Based on results from two validation experiments, participants could reliably discriminate the intended emotion expressed in the clips in this stimulus set, with accuracy rates up to 60% (chance = 20%). We further explored the impact of dance experience and trait empathy on emotion recognition and found that neither significantly impacted emotion discrimination. As all materials for presenting and analysing this movement library are openly available, we hope this resource will aid other researchers in further exploration of affective communication expressed by human bodily movement.
Affiliation(s)
- Rebecca A. Smith
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland
- Emily S. Cross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland; Department of Cognitive Science, Macquarie University, Sydney, Australia

36
Straulino E, Scarpazza C, Sartori L. What is missing in the study of emotion expression? Front Psychol 2023; 14:1158136. [PMID: 37179857 PMCID: PMC10173880 DOI: 10.3389/fpsyg.2023.1158136] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2023] [Accepted: 04/06/2023] [Indexed: 05/15/2023] Open
Abstract
As celebrations approach for the 150th anniversary of "The Expression of the Emotions in Man and Animals", scientists' conclusions on emotion expression are still debated. Emotion expression has traditionally been anchored to prototypical and mutually exclusive facial expressions (e.g., anger, disgust, fear, happiness, sadness, and surprise). However, people express emotions in nuanced patterns and, crucially, not everything is in the face. In recent decades considerable work has critiqued this classical view, calling for a more fluid and flexible approach that considers how humans dynamically perform genuine expressions with their bodies in context. A growing body of evidence suggests that each emotional display is a complex, multi-component, motoric event. The human face is never static but continuously acts and reacts to internal and environmental stimuli, with the coordinated action of muscles throughout the body. Moreover, two anatomically and functionally different neural pathways subserve voluntary and involuntary expressions. An interesting implication is that we have distinct and independent pathways for genuine and posed facial expressions, and different combinations may occur across the vertical facial axis. Investigating the time course of these facial blends, which can be consciously controlled only in part, has recently provided a useful operational test for comparing the predictions of various models on the lateralization of emotions. This concise review identifies shortcomings and new challenges in the study of emotion expression at the face, body, and contextual levels, eventually resulting in a theoretical and methodological shift in the study of emotions. We contend that the most feasible way to address the complex world of emotion expression is to define a completely new and more complete approach to emotional investigation. This approach can potentially lead us to the roots of emotional displays and to the individual mechanisms underlying their expression (i.e., individual emotional signatures).
Affiliation(s)
- Elisa Straulino
- Department of General Psychology, University of Padova, Padova, Italy
- Cristina Scarpazza
- Department of General Psychology, University of Padova, Padova, Italy
- IRCCS San Camillo Hospital, Venice, Italy
- Luisa Sartori
- Department of General Psychology, University of Padova, Padova, Italy
- Padova Neuroscience Center, University of Padova, Padova, Italy

37
Cracco E, Oomen D, Papeo L, Wiersema JR. Using EEG movement tagging to isolate brain responses coupled to biological movements. Neuropsychologia 2022; 177:108395. [PMID: 36272677 DOI: 10.1016/j.neuropsychologia.2022.108395] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2022] [Revised: 09/27/2022] [Accepted: 10/09/2022] [Indexed: 11/06/2022]
Abstract
Detecting biological motion is essential for adaptive social behavior. Previous research has revealed the brain processes underlying this ability. However, brain activity during biological motion perception captures a multitude of processes. As a result, it is often unclear which processes reflect movement processing and which processes reflect secondary processes that build on movement processing. To address this issue, we developed a new approach to measure brain responses directly coupled to observed movements. Specifically, we showed 30 male and female adults a point-light walker moving at a pace of 2.4 Hz and used EEG frequency tagging to measure the brain response coupled to that pace ('movement tagging'). The results revealed a reliable response at the walking frequency that was reduced by two manipulations known to disrupt biological motion perception: phase scrambling and inversion. Interestingly, we also identified a brain response at half the walking frequency (i.e., 1.2 Hz), corresponding to the rate at which the individual dots completed a cycle. In contrast to the 2.4 Hz response, the response at 1.2 Hz was increased for scrambled (vs. unscrambled) walkers. These results show that frequency tagging can be used to capture the visual processing of biological movements and can dissociate between global (2.4 Hz) and local (1.2 Hz) processes involved in biological motion perception, at different frequencies of the brain signal.
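The frequency-tagging logic used here can be illustrated with a minimal sketch (an assumption for illustration, not the authors' analysis pipeline): a brain response coupled to a periodic stimulus appears as a peak in the EEG amplitude spectrum at the stimulation frequency, so the spectrum can simply be read out at the tagged frequencies (the 2.4 Hz walking pace and its 1.2 Hz per-dot cycle rate). All signal parameters below are invented.

```python
import numpy as np

fs = 250.0                       # sampling rate in Hz (assumed)
duration = 40.0                  # seconds of synthetic "EEG"
t = np.arange(0, duration, 1 / fs)

# Synthetic signal: a global response at the 2.4 Hz walking pace plus a
# local response at the 1.2 Hz dot-cycle rate, buried in broadband noise.
rng = np.random.default_rng(0)
eeg = (0.8 * np.sin(2 * np.pi * 2.4 * t)
       + 0.5 * np.sin(2 * np.pi * 1.2 * t)
       + rng.normal(scale=2.0, size=t.size))

# Amplitude spectrum; a 40 s window gives 0.025 Hz resolution, so both
# 1.2 Hz and 2.4 Hz fall exactly on frequency bins.
spectrum = np.abs(np.fft.rfft(eeg)) / (t.size / 2)
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp_at(f_target):
    # Amplitude at the bin closest to the target frequency.
    return spectrum[np.argmin(np.abs(freqs - f_target))]

# The tagged frequencies stand out above neighboring noise-only bins.
print(amp_at(2.4), amp_at(1.2), amp_at(3.17))
```

Condition effects (e.g., scrambling or inversion reducing the 2.4 Hz response) would then be tested by comparing these per-condition amplitudes across participants.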
Affiliation(s)
- Emiel Cracco
- Department of Experimental Clinical and Health Psychology, Ghent University, Belgium
- Danna Oomen
- Department of Experimental Clinical and Health Psychology, Ghent University, Belgium
- Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de La Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, 69675 Bron, France
- Jan R Wiersema
- Department of Experimental Clinical and Health Psychology, Ghent University, Belgium

38
Exposure to multisensory and visual static or moving stimuli enhances processing of nonoptimal visual rhythms. Atten Percept Psychophys 2022; 84:2655-2669. [PMID: 36241841 PMCID: PMC9630188 DOI: 10.3758/s13414-022-02569-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/05/2022] [Indexed: 11/25/2022]
Abstract
Research has shown that moving visual and multisensory stimuli can efficiently convey rhythmic information. It is possible, therefore, that the previously reported auditory dominance in rhythm perception is due to the use of nonoptimal visual stimuli. Yet it remains unknown whether exposure to multisensory or visual-moving rhythms would benefit the processing of rhythms consisting of nonoptimal static visual stimuli. Using a perceptual learning paradigm, we tested whether the visual component of the multisensory training pair can affect the processing of metric-simple, two-integer-ratio nonoptimal visual rhythms. Participants were trained with static (AVstat), moving-inanimate (AVinan), or moving-animate (AVan) visual stimuli along with auditory tones and a regular beat. In the pre- and posttraining tasks, participants responded whether two static-visual rhythms differed or not. Results showed improved posttraining performance for all training groups irrespective of the type of visual stimulation. To assess whether this benefit was auditory driven, we introduced visual-only training with a moving (Vinan) or static (Vstat) stimulus and a regular beat. Comparisons between Vinan and Vstat showed that, even in the absence of auditory information, training with visual-only moving or static stimuli resulted in enhanced posttraining performance. Overall, our findings suggest that audiovisual and visual static or moving training can benefit the processing of nonoptimal visual rhythms.
39
Pavlova MA, Romagnano V, Kubon J, Isernia S, Fallgatter AJ, Sokolov AN. Ties between reading faces, bodies, eyes, and autistic traits. Front Neurosci 2022; 16:997263. [PMID: 36248653 PMCID: PMC9554539 DOI: 10.3389/fnins.2022.997263] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2022] [Accepted: 08/12/2022] [Indexed: 11/29/2022] Open
Abstract
When faces are covered with masks, as during the COVID-19 pandemic, efficient social interaction requires combining information from other sources such as the eyes and the body. This may be challenging for individuals with neuropsychiatric conditions, in particular autism spectrum disorders. Here we examined whether the reading of dynamic faces, bodies, and eyes is tied together in a gender-specific way, and how these capabilities relate to the expression of autistic traits. Female and male participants completed a task with point-light faces and a task with point-light body locomotion, both portraying different emotional expressions, and had to infer the emotional content of the displays. In addition, participants were administered a modified Reading the Mind in the Eyes Test and the Autism Spectrum Quotient questionnaire. The findings show that only in females is inferring emotions from dynamic bodies and faces firmly linked, whereas in males, reading the eyes is tied to face reading. Strikingly, in neurotypical males only, the accuracy of face, body, and eye reading was negatively associated with autistic traits. The outcome points to gender-specific modes of social cognition: females rely primarily on dynamic cues when reading faces and bodies, whereas males most likely rely on configural information. The findings are of value for examining face and body language reading in neuropsychiatric conditions, in particular autism, most of which are gender/sex-specific. This work suggests that if male individuals with autistic traits have difficulty reading masked faces, these deficits are unlikely to be compensated by reading (even dynamic) bodies and faces. By contrast, in females, reading masked faces and reading the language of dynamic bodies and faces are not necessarily connected to autistic traits, sparing them the high costs of maladaptive social interaction.
Affiliation(s)
- Marina A. Pavlova
- Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health (TüCMH), Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany
- Valentina Romagnano
- Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health (TüCMH), Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany
- Julian Kubon
- Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health (TüCMH), Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany
- Sara Isernia
- IRCCS Fondazione Don Carlo Gnocchi ONLUS, Milan, Italy
- Andreas J. Fallgatter
- Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health (TüCMH), Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany
- Alexander N. Sokolov
- Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health (TüCMH), Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany

40
Perceiving Assertiveness and Anger from Gesturing Speed in Different Contexts. JOURNAL OF NONVERBAL BEHAVIOR 2022. [DOI: 10.1007/s10919-022-00418-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
41
Keck J, Zabicki A, Bachmann J, Munzert J, Krüger B. Decoding spatiotemporal features of emotional body language in social interactions. Sci Rep 2022; 12:15088. [PMID: 36064559 PMCID: PMC9445068 DOI: 10.1038/s41598-022-19267-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Accepted: 08/26/2022] [Indexed: 11/11/2022] Open
Abstract
How are emotions perceived through human body language in social interactions? This study used point-light displays of human interactions portraying emotional scenes (1) to examine quantitative intrapersonal kinematic and postural body configurations, (2) to calculate interaction-specific parameters of these interactions, and (3) to analyze the extent to which both contribute to the perception of an emotion category (i.e., anger, sadness, happiness, or affection) as well as to the perception of emotional valence. Using ANOVA and classification trees, we investigated emotion-specific differences in the calculated parameters. We further applied representational similarity analyses to determine how perceptual ratings relate to intra- and interpersonal features of the observed scene. Results showed that within an interaction, intrapersonal kinematic cues corresponded to emotion category ratings, whereas postural cues reflected valence ratings. Perception of emotion category was also driven by interpersonal orientation, proxemics, the time spent in the personal space of the counterpart, and the motion-energy balance between the interacting people. Furthermore, motion-energy balance and orientation related to valence ratings. Thus, features of emotional body language are connected with the emotional content of an observed scene, and people make use of observed emotionally expressive body language and interpersonal coordination to infer the emotional content of interactions.
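Representational similarity analysis, as applied here, compares the geometry of two feature spaces rather than the features themselves: each space is reduced to a matrix of pairwise dissimilarities between scenes, and the two matrices are rank-correlated. A minimal sketch with invented toy data (the scene counts, feature names, and values are illustrative assumptions, not the authors' stimuli):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Toy data: 8 interaction scenes described by 3 kinematic/interpersonal
# features (e.g., velocity, proxemics, motion-energy balance) and by 2
# perceptual rating dimensions derived from (part of) those features.
kinematics = rng.normal(size=(8, 3))
ratings = kinematics[:, :2] + rng.normal(scale=0.3, size=(8, 2))

# Representational dissimilarity matrices (RDMs): pairwise distances
# between scenes in each space, as condensed vectors of 8*7/2 = 28 pairs.
rdm_kin = pdist(kinematics, metric="euclidean")
rdm_rat = pdist(ratings, metric="euclidean")

# Second-order similarity: rank-correlate the two RDMs.
rho, p = spearmanr(rdm_kin, rdm_rat)
print(f"RSA correlation: rho={rho:.2f}, p={p:.3f}")
```

A rank correlation is standard here because RDM distances from different spaces are not on a common scale; only their orderings are compared.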
Affiliation(s)
- Johannes Keck
- Neuromotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Kugelberg 62, 35394 Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), Universities Marburg and Giessen, Marburg, Germany
- Adam Zabicki
- Neuromotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Kugelberg 62, 35394 Giessen, Germany
- Julia Bachmann
- Neuromotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Kugelberg 62, 35394 Giessen, Germany
- Jörn Munzert
- Neuromotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Kugelberg 62, 35394 Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), Universities Marburg and Giessen, Marburg, Germany
- Britta Krüger
- Neuromotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Kugelberg 62, 35394 Giessen, Germany

42
EmBody/EmFace as a new open tool to assess emotion recognition from body and face expressions. Sci Rep 2022; 12:14165. [PMID: 35986068 PMCID: PMC9391359 DOI: 10.1038/s41598-022-17866-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Accepted: 08/02/2022] [Indexed: 01/29/2023] Open
Abstract
Nonverbal expressions contribute substantially to social interaction by providing information on another person’s intentions and feelings. While emotion recognition from dynamic facial expressions has been widely studied, dynamic body expressions and the interplay of emotion recognition from facial and body expressions have attracted less attention, as suitable diagnostic tools are scarce. Here, we provide validation data on a new open source paradigm enabling the assessment of emotion recognition from both 3D-animated emotional body expressions (Task 1: EmBody) and emotionally corresponding dynamic faces (Task 2: EmFace). Both tasks use visually standardized items depicting three emotional states (angry, happy, neutral), and can be used alone or together. We here demonstrate successful psychometric matching of the EmBody/EmFace items in a sample of 217 healthy subjects with excellent retest reliability and validity (correlations with the Reading-the-Mind-in-the-Eyes-Test and Autism-Spectrum Quotient, no correlations with intelligence, and given factorial validity). Taken together, the EmBody/EmFace is a novel, effective (< 5 min per task), highly standardized and reliably precise tool to sensitively assess and compare emotion recognition from body and face stimuli. The EmBody/EmFace has a wide range of potential applications in affective, cognitive and social neuroscience, and in clinical research studying face- and body-specific emotion recognition in patient populations suffering from social interaction deficits such as autism, schizophrenia, or social anxiety.
43
Strang CC, Harris A, Moody EJ, Reed CL. Peak frequency of the sensorimotor mu rhythm varies with autism-spectrum traits. Front Neurosci 2022; 16:950539. [PMID: 35992926 PMCID: PMC9389406 DOI: 10.3389/fnins.2022.950539] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Accepted: 07/18/2022] [Indexed: 11/17/2022] Open
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental syndrome characterized by impairments in social perception and communication. Growing evidence suggests that the relationship between deficits in social perception and ASD may extend into the neurotypical population. In electroencephalography (EEG), high autism-spectrum traits in both ASD and neurotypical samples are associated with changes to the mu rhythm, an alpha-band (8–12 Hz) oscillation measured over sensorimotor cortex which typically shows reductions in spectral power during both one’s own movements and observation of others’ actions. This mu suppression is thought to reflect integration of perceptual and motor representations for understanding of others’ mental states, which may be disrupted in individuals with autism-spectrum traits. However, because spectral power is usually quantified at the group level, it has limited usefulness for characterizing individual variation in the mu rhythm, particularly with respect to autism-spectrum traits. Instead, individual peak frequency may provide a better measure of mu rhythm variability across participants. Previous developmental studies have linked ASD to slowing of individual peak frequency in the alpha band, or peak alpha frequency (PAF), predominantly associated with selective attention. Yet individual variability in the peak mu frequency (PMF) remains largely unexplored, particularly with respect to autism-spectrum traits. Here we quantified peak frequency of occipitoparietal alpha and sensorimotor mu rhythms across neurotypical individuals as a function of autism-spectrum traits. High-density 128-channel EEG data were collected from 60 participants while they completed two tasks previously reported to reliably index the sensorimotor mu rhythm: motor execution (bimanual finger tapping) and action observation (viewing of whole-body human movements). 
We found that individual measurements of the peak oscillatory frequency of the mu rhythm were highly reliable within participants, were not driven by resting vs. task states, and correlated well across the action execution and observation tasks. Within our neurotypical sample, higher autism-spectrum traits were associated with slowing of the PMF, as predicted. This effect was not likely explained by volume conduction of the occipitoparietal PAF associated with attention. Together, these data support individual peak oscillatory alpha-band frequency as a correlate of autism-spectrum traits, warranting further research with larger samples and clinical populations.
Affiliation(s)
- Alison Harris
- Department of Psychological Science, Claremont McKenna College, Claremont, CA, United States
- Eric J. Moody
- Wyoming Institute for Disabilities (WIND), University of Wyoming, Laramie, WY, United States
- Catherine L. Reed
- Department of Psychological Science, Claremont McKenna College, Claremont, CA, United States

44
Ciardo F, De Tommaso D, Wykowska A. Human-like behavioral variability blurs the distinction between a human and a machine in a nonverbal Turing test. Sci Robot 2022; 7:eabo1241. [PMID: 35895925 DOI: 10.1126/scirobotics.abo1241] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
Abstract
Variability is a property of biological systems, and in animals (including humans), behavioral variability is characterized by certain features, such as the range of variability and the shape of its distribution. Nevertheless, only a few studies have investigated whether and how variability features contribute to the ascription of humanness to robots in a human-robot interaction setting. Here, we tested whether two aspects of behavioral variability, namely, the standard deviation and the shape of distribution of reaction times, affect the ascription of humanness to robots during a joint action scenario. We designed an interactive task in which pairs of participants performed a joint Simon task with an iCub robot placed by their side. Either iCub could perform the task in a preprogrammed manner, or its button presses could be teleoperated by the other member of the pair, seated in the other room. Under the preprogrammed condition, the iCub pressed buttons with reaction times falling within the range of human variability. However, the distribution of the reaction times did not resemble a human-like shape. Participants were sensitive to humanness, because they correctly detected the human agent above chance level. When the iCub was controlled by the computer program, it passed our variation of a nonverbal Turing test. Together, our results suggest that hints of humanness, such as the range of behavioral variability, might be used by observers to ascribe humanness to a humanoid robot.
Affiliation(s)
- F Ciardo
- Social Cognition in Human-Robot Interaction, Italian Institute of Technology, Genoa, Italy
- D De Tommaso
- Social Cognition in Human-Robot Interaction, Italian Institute of Technology, Genoa, Italy
- A Wykowska
- Social Cognition in Human-Robot Interaction, Italian Institute of Technology, Genoa, Italy

45
Urquiza-Haas EG, Kotrschal K. Human-Animal Similarity and the Imageability of Mental State Concepts for Mentalizing Animals. JOURNAL OF COGNITION AND CULTURE 2022. [DOI: 10.1163/15685373-12340133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
The attribution of mental states (MS) to other species typically follows a scala naturae pattern. However, “simple” mental states, including emotions, sensing, and feelings are attributed to a wider range of animals as compared to the so-called “higher” cognitive abilities. We propose that such attributions are based on the perceptual quality (i.e. imageability) of mental representations related to MS concepts. We hypothesized that the attribution of highly imaginable MS is more dependent on the familiarity of participants with animals when compared to the attribution of MS low in imageability. In addition, we also assessed how animal agreeableness, familiarity with animals, and the type of human-animal interaction related to the judged similarity of animals to humans. Sixty-one participants (19 females, 42 males) with a rural (n = 20) and urban (n = 41) background rated twenty-six wild and domestic animals for their perceived similarity with humans and ability to experience a set of MS: (1) Highly imageable MS: joy, anger, and fear, and (2) MS low in imageability: capacity to plan and deceive. Results show that more agreeable and familiar animals were considered more human-like. Primates, followed by carnivores, suines, ungulates, and rodents were rated more human-like than xenarthrans, birds, arthropods, and reptiles. Higher MS ratings were given to more similar animals and more so if the MS attributed were high in imageability. Familiarity with animals was only relevant for the attribution of the MS high in imageability.
Affiliation(s)
- Esmeralda G. Urquiza-Haas
- PhD candidate, Department of Cognitive Biology and Department of Behavioural Biology, University of Vienna, Vienna, Austria
- Kurt Kotrschal
- Retired Professor, Department of Behavioural Biology, University of Vienna, Vienna, Austria

46
Ross P, George E. Are Face Masks a Problem for Emotion Recognition? Not When the Whole Body Is Visible. Front Neurosci 2022; 16:915927. [PMID: 35924222 PMCID: PMC9339646 DOI: 10.3389/fnins.2022.915927] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2022] [Accepted: 06/23/2022] [Indexed: 01/10/2023] Open
Abstract
The rise of the novel COVID-19 virus has made face masks commonplace items around the globe. Recent research found that face masks significantly impair emotion recognition on isolated faces. However, faces are rarely seen in isolation and the body is also a key cue for emotional portrayal. Here, therefore, we investigated the impact of face masks on emotion recognition when surveying the full body. Stimuli expressing anger, happiness, sadness, and fear were selected from the BEAST stimuli set. Masks were added to these images and participants were asked to recognize the emotion and give a confidence level for that decision for both the masked and unmasked stimuli. We found that, contrary to some work viewing faces in isolation, emotion recognition was generally not impaired by face masks when the whole body is present. We did, however, find that when viewing masked faces, only the recognition of happiness significantly decreased when the whole body was present. In contrast to actual performance, confidence levels were found to decline during the Mask condition across all emotional conditions. This research suggests that the impact of masks on emotion recognition may not be as pronounced as previously thought, as long as the whole body is also visible.
Affiliation(s)
- Paddy Ross
- Department of Psychology, Durham University, Durham, United Kingdom

47
Ke H, Vuong QC, Geangu E. Three- and six-year-old children are sensitive to natural body expressions of emotion: An event-related potential emotional priming study. J Exp Child Psychol 2022; 224:105497. [PMID: 35850023 DOI: 10.1016/j.jecp.2022.105497] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2021] [Revised: 03/23/2022] [Accepted: 06/06/2022] [Indexed: 12/01/2022]
Abstract
Body movements provide a rich source of emotional information during social interactions. Although the ability to perceive biological motion cues related to those movements begins to develop during infancy, processing those cues to identify emotions likely continues to develop into childhood. Previous studies used posed or exaggerated body movements, which might not reflect the kind of body expressions children experience. The current study used an event-related potential (ERP) priming paradigm to investigate the development of emotion recognition from more naturalistic body movements. Point-light displays (PLDs) of male adult bodies expressing happy or angry emotional movements while narrating a story were used as prime stimuli, whereas audio recordings of the words "happy" and "angry" spoken with an emotionally neutral prosody were used as targets. We recorded the ERPs time-locked to the onset of the auditory target from 3- and 6-year-old children, and we compared amplitude and latency of the N300 and N400 responses between the two age groups in the different prime-target conditions. There was an overall effect of prime for the N300 amplitude, with more negative-going responses for happy PLDs compared with angry PLDs. There was also an interaction between prime and target for the N300 latency, suggesting that all children were sensitive to the emotional congruency between body movements and words. For the N400 component, there was only an interaction among age, prime, and target for latency, suggesting an age-dependent modulation of this component when prime and target did not match in emotional information. Overall, our results suggest that the emergence of more complex emotion processing of body expressions occurs around 6 years of age, but it is not fully developed at this point in ontogeny.
Affiliation(s)
- Han Ke
- Department of Psychology, Lancaster University, Lancaster LA1 4YF, UK.
- Quoc C Vuong
- Biosciences Institute, Newcastle University, Newcastle upon Tyne NE2 4HH, UK
- Elena Geangu
- Department of Psychology, University of York, York YO10 5DD, UK
48
Presti P, Ruzzon D, Galasso GM, Avanzini P, Caruana F, Vecchiato G. The Avatar's Gist: How to Transfer Affective Components From Dynamic Walking to Static Body Postures. Front Neurosci 2022; 16:842433. [PMID: 35784850 PMCID: PMC9240741 DOI: 10.3389/fnins.2022.842433] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2021] [Accepted: 04/27/2022] [Indexed: 11/30/2022] Open
Abstract
Dynamic virtual representations of the human being can communicate a broad range of affective states through body movements, making them an effective tool for studying emotion perception. However, the ability to model static body postures that preserve affective information remains fundamental to a broad spectrum of experimental settings exploring time-locked cognitive processes. We propose a novel automatic method for creating virtual affective body postures from kinematics data. Exploiting body features related to postural cues and movement velocity, we transferred the affective components of dynamic walking to static body postures of male and female virtual avatars. The results of two online experiments showed that participants coherently judged different valence and arousal levels from the avatars' body postures, demonstrating the reliability of the proposed methodology. In addition, esthetic and postural cues made the female avatars more emotionally expressive than the male ones. Overall, we provide a valid methodology for creating affective body postures of virtual avatars, which can be used within different virtual scenarios to better understand the way we perceive the affective states of others.
Affiliation(s)
- Paolo Presti
- Institute of Neuroscience, National Research Council of Italy, Parma, Italy
- Department of Medicine and Surgery, University of Parma, Parma, Italy
- Davide Ruzzon
- TUNED, Lombardini22, Milan, Italy
- Dipartimento Culture del Progetto, University IUAV, Venice, Italy
- Pietro Avanzini
- Institute of Neuroscience, National Research Council of Italy, Parma, Italy
- Fausto Caruana
- Institute of Neuroscience, National Research Council of Italy, Parma, Italy
- Giovanni Vecchiato
- Institute of Neuroscience, National Research Council of Italy, Parma, Italy
49
Ito K, Ong CW. Perception of emotional tears with body postures, visual scenes, and written scenarios. Asian J Soc Psychol 2022. [DOI: 10.1111/ajsp.12544] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Affiliation(s)
- Kenichi Ito
- Department of Psychology, University of Lethbridge, Lethbridge, Alberta, Canada
- Chew Wei Ong
- School of Social Sciences, Nanyang Technological University, Singapore City, Singapore
50
Rossion B. Twenty years of investigation with the case of prosopagnosia PS to understand human face identity recognition. Part I: Function. Neuropsychologia 2022; 173:108278. [DOI: 10.1016/j.neuropsychologia.2022.108278] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Revised: 03/28/2022] [Accepted: 05/25/2022] [Indexed: 10/18/2022]