1. Lanzilotto M, Dal Monte O, Diano M, Panormita M, Battaglia S, Celeghin A, Bonini L, Tamietto M. Learning to fear novel stimuli by observing others in the social affordance framework. Neurosci Biobehav Rev 2025; 169:106006. PMID: 39788170. DOI: 10.1016/j.neubiorev.2025.106006.
Abstract
Fear responses to novel stimuli can be learned directly, through personal experiences (Fear Conditioning, FC), or indirectly, by observing conspecific reactions to a stimulus (Social Fear Learning, SFL). Although substantial knowledge exists about FC and SFL in humans and other species, they are typically conceived as mechanisms that engage separate neural networks and operate at different levels of complexity. Here, we propose a broader framework that links these two fear learning modes by supporting the view that social signals may act as unconditioned stimuli during SFL. In this context, we highlight the potential role of subcortical structures of ancient evolutionary origin in encoding social signals and argue that they play a pivotal function in transforming observed emotional expressions into adaptive behavioural responses. This perspective extends the social affordance hypothesis to subcortical circuits underlying vicarious learning in social contexts. Recognising the interplay between these two modes of fear learning paves the way for new empirical studies focusing on interspecies comparisons and broadens the boundaries of our knowledge of fear acquisition.
Affiliation(s)
- M Lanzilotto: Department of Medicine and Surgery, University of Parma, Parma, Italy; Department of Psychology, University of Turin, Turin, Italy
- O Dal Monte: Department of Psychology, University of Turin, Turin, Italy; Department of Psychology, Yale University, New Haven, USA
- M Diano: Department of Psychology, University of Turin, Turin, Italy
- M Panormita: Department of Psychology, University of Turin, Turin, Italy; Department of Neuroscience, KU Leuven, Leuven, Belgium
- S Battaglia: Department of Psychology, University of Turin, Turin, Italy; Department of Psychology, University of Bologna, Cesena, Italy
- A Celeghin: Department of Psychology, University of Turin, Turin, Italy
- L Bonini: Department of Medicine and Surgery, University of Parma, Parma, Italy
- M Tamietto: Department of Psychology, University of Turin, Turin, Italy; Department of Medical and Clinical Psychology, Tilburg University, Netherlands; Centro Linceo Interdisciplinare "Beniamino Segre", Accademia Nazionale dei Lincei, Roma, Italy
2. Ikeda E, Destler N, Feldman J. The role of dynamic shape cues in the recognition of emotion from naturalistic body motion. Atten Percept Psychophys 2025. PMID: 39821558. DOI: 10.3758/s13414-024-02990-8.
Abstract
Human observers can often judge emotional or affective states from bodily motion, even in the absence of facial information, but the mechanisms underlying this inference are not completely understood. Important clues come from the literature on "biological motion" using point-light displays (PLDs), which convey human action, and possibly emotion, apparently on the basis of body movements alone. However, most studies have used simplified and often exaggerated displays chosen to convey emotions as clearly as possible. In the current study we examine emotion interpretation using more naturalistic stimuli, drawn from narrative films, security footage, and other sources not created for experimental purposes. We use modern algorithmic methods to extract joint positions, from which we create three display types intended to probe the nature of the cues observers use to interpret emotions: PLDs; stick figures, which convey "skeletal" information more overtly; and a control condition in which joint positions are connected in an anatomically incorrect manner. The videos depicted a range of emotions, including fear, joy, nurturing, anger, sadness, and determination. Subjects were able to estimate the depicted emotion with a high degree of reliability and accuracy, most effectively from stick figures, somewhat less so from PLDs, and least from the control condition. These results confirm that people can interpret emotion from naturalistic body movements alone, and suggest that the mechanisms underlying this interpretation rely heavily on skeletal representations of dynamic shape.
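The three display types described in this abstract can be sketched in code. Assuming pose estimation has already produced per-frame joint coordinates, the conditions differ only in which geometric primitives are rendered. The joint names and the skeleton edge list below are illustrative assumptions, not the authors' actual stimulus specification.

```python
# Sketch: turning extracted joint positions into the three display types.
# Joint names and the skeleton edge list are illustrative assumptions.
import random

JOINTS = ["head", "l_shoulder", "r_shoulder", "l_elbow", "r_elbow",
          "l_wrist", "r_wrist", "hip", "l_knee", "r_knee", "l_ankle", "r_ankle"]

# Anatomically correct connections, used for the stick-figure condition.
SKELETON = [("head", "l_shoulder"), ("head", "r_shoulder"),
            ("l_shoulder", "l_elbow"), ("l_elbow", "l_wrist"),
            ("r_shoulder", "r_elbow"), ("r_elbow", "r_wrist"),
            ("l_shoulder", "hip"), ("r_shoulder", "hip"),
            ("hip", "l_knee"), ("l_knee", "l_ankle"),
            ("hip", "r_knee"), ("r_knee", "r_ankle")]

def point_light_display(frame):
    """PLD: keep only the joint positions (dots), no connections."""
    return [frame[j] for j in JOINTS]

def stick_figure(frame, edges=SKELETON):
    """Stick figure: joint positions plus anatomically correct line segments."""
    return [(frame[a], frame[b]) for a, b in edges]

def scrambled_figure(frame, seed=0):
    """Control: same joints, connected in an anatomically incorrect way
    by randomly re-pairing the skeleton's endpoints."""
    rng = random.Random(seed)
    targets = [b for _, b in SKELETON]
    rng.shuffle(targets)
    return [(frame[a], frame[b]) for (a, _), b in zip(SKELETON, targets)]

# One toy frame: every joint at a dummy (x, y) coordinate.
frame = {j: (i * 1.0, i * 2.0) for i, j in enumerate(JOINTS)}
print(len(point_light_display(frame)))  # 12 dots
print(len(stick_figure(frame)))         # 12 segments
```

All three conditions thus carry identical joint-position information; only the connectivity shown to the observer changes, which is what lets the study isolate the contribution of skeletal structure.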
Affiliation(s)
- Erika Ikeda: Department of Psychology, Rutgers University - New Brunswick, 152 Frelinghuysen Rd, Piscataway, NJ 08854, USA; Department of Psychology, Georgetown University, Washington, DC 20057, USA
- Nathan Destler: Department of Psychology, Rutgers University - New Brunswick, 152 Frelinghuysen Rd, Piscataway, NJ 08854, USA
- Jacob Feldman: Department of Psychology, Rutgers University - New Brunswick, 152 Frelinghuysen Rd, Piscataway, NJ 08854, USA
3. Ren J, Zhang M, Liu S, He W, Luo W. Maintenance of Bodily Expressions Modulates Functional Connectivity Between Prefrontal Cortex and Extrastriate Body Area During Working Memory Processing. Brain Sci 2024; 14:1172. PMID: 39766371. PMCID: PMC11674776. DOI: 10.3390/brainsci14121172.
Abstract
Background/Objectives: As a form of visual input, bodily expressions can be maintained and manipulated in visual working memory (VWM) over a short period of time. While the prefrontal cortex (PFC) plays an indispensable role in top-down control, it remains largely unclear whether this region also modulates the VWM storage of bodily expressions during a delay period. The two primary goals of this study were therefore to examine whether emotional bodies elicit heightened activity in areas such as the PFC and extrastriate body area (EBA), and whether these emotional effects subsequently modulate functional connectivity patterns supporting active maintenance during delay periods. Methods: During functional magnetic resonance imaging (fMRI) scanning, participants performed a delayed-response task in which they were instructed to view and maintain a body stimulus in working memory before categorizing its emotion (happiness, anger, or neutral). If processing happy and angry bodies imposes increased cognitive demands, stronger PFC activation and stronger functional connectivity between the PFC and perceptual areas would be expected. Results: Univariate and multivariate analyses of the data collected during stimulus presentation revealed enhanced processing in the left PFC and left EBA. Importantly, subsequent functional connectivity analyses of delay-period data using a psychophysiological interaction model indicated that functional connectivity between the PFC and EBA increased for happy and angry bodies compared to neutral bodies. Conclusions: The emotion-modulated coupling between the PFC and EBA during maintenance deepens our understanding of the functional organization underlying the VWM processing of bodily information.
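The psychophysiological interaction (PPI) analysis mentioned in this abstract boils down to a regression in which a target region's time course is modeled from a seed region's time course, the psychological condition, and their product; a significant interaction term indicates condition-dependent connectivity. A minimal sketch on synthetic data follows (variable names, the noise model, and the omission of HRF convolution are simplifying assumptions, not the authors' pipeline):

```python
# Sketch of a psychophysiological interaction (PPI) regression on synthetic data.
# seed = PFC time course, task = emotion condition (1 = emotional, 0 = neutral);
# the interaction regressor seed*task tests condition-dependent coupling.
import numpy as np

rng = np.random.default_rng(42)
n = 200                                  # number of fMRI volumes
seed = rng.standard_normal(n)            # seed (e.g., PFC) time course
task = (np.arange(n) // 20) % 2          # alternating condition blocks
# Simulate a target (e.g., EBA) whose coupling with the seed strengthens
# under the emotional condition: the 0.8*seed*task term is the PPI effect.
target = (0.8 * seed + 0.8 * seed * task + 0.3 * task
          + 0.1 * rng.standard_normal(n))

# Design matrix: intercept, physiological, psychological, interaction.
X = np.column_stack([np.ones(n), seed, task, seed * task])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
print(beta.round(2))  # interaction beta (last entry) should be near 0.8
```

A real fMRI PPI would convolve the psychological regressor with a hemodynamic response function and include nuisance regressors; the interaction-term logic is the same.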
Affiliation(s)
- Jie Ren: Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China; Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China
- Mingming Zhang: Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China; Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China
- Shuaicheng Liu: Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China; Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China
- Weiqi He: Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China; Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China
- Wenbo Luo: Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China; Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China
4. Huang L, Du F, Huang W, Ren H, Qiu W, Zhang J, Wang Y. Three-stage Dynamic Brain-cognitive Model of Understanding Action Intention Displayed by Human Body Movements. Brain Topogr 2024; 37:1055-1067. PMID: 38874853. DOI: 10.1007/s10548-024-01061-3.
Abstract
The ability to comprehend the intention conveyed by human body movements is crucial for effective interpersonal interaction. Without access to the intention behind other individuals' isolated or interactive actions, those actions become meaningless to an observer. Psychologists have investigated the cognitive processes and neural representations involved in understanding action intention, yet a cohesive theoretical explanation remains elusive. Here, we review the existing literature on the neural correlates of action intention and propose a putative Three-stage Dynamic Brain-cognitive Model of understanding action intention, comprising body perception, action identification, and intention understanding. Specifically, in the first stage, body parts and shapes are processed by brain regions such as the extrastriate and fusiform body areas. In the second stage, differentiating observed actions relies on configuring the relationships between body parts, facilitated by activation of the Mirror Neuron System. The last stage involves identifying various intention categories, recruiting the Mentalizing System, with activation patterns that differ according to the nature of the intentions being processed. Finally, we discuss clinical applications, such as model-based intervention training for individuals with autism spectrum disorder who encounter difficulties in interpersonal communication.
Affiliation(s)
- Liang Huang: Fujian Key Laboratory of Applied Cognition and Personality, Minnan Normal University, Zhangzhou, China; Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Fangyuan Du: Fuzhou University of International Studies and Trade, Fuzhou, China
- Wenxin Huang: Fujian Key Laboratory of Applied Cognition and Personality, Minnan Normal University, Zhangzhou, China; School of Management, Zhejiang University of Technology, Hangzhou, China
- Hanlin Ren: Third People's Hospital of Zhongshan, Zhongshan, China
- Wenzhen Qiu: Fujian Key Laboratory of Applied Cognition and Personality, Minnan Normal University, Zhangzhou, China
- Jiayi Zhang: Fujian Key Laboratory of Applied Cognition and Personality, Minnan Normal University, Zhangzhou, China
- Yiwen Wang: School of Economics and Management, Fuzhou University, Fuzhou, China
5. Bouret S, Paradis E, Prat S, Castro L, Perez P, Gilissen E, Garcia C. Linking the evolution of two prefrontal brain regions to social and foraging challenges in primates. eLife 2024; 12:RP87780. PMID: 39468920. PMCID: PMC11521368. DOI: 10.7554/elife.87780.
Abstract
The diversity of cognitive skills across primates remains both a fascinating and a controversial issue. Recent comparative studies have provided conflicting results regarding the contribution of social vs ecological constraints to the evolution of cognition. Here, we used an interdisciplinary approach combining comparative cognitive neuroscience and behavioral ecology. Using brain imaging data from 16 primate species, we measured the size of two prefrontal brain regions, the frontal pole (FP) and the dorsolateral prefrontal cortex (DLPFC), involved in metacognition and working memory, respectively, and examined their relation to a combination of socio-ecological variables. The size of these prefrontal regions, as well as of the whole brain, was best explained by three variables: body mass, daily traveled distance (an index of ecological constraints), and population density (an index of social constraints). The strong influence of ecological constraints on FP and DLPFC volumes suggests that both metacognition and working memory are critical for foraging in primates. Interestingly, FP volume was much more sensitive to social constraints than DLPFC volume, in line with laboratory studies showing involvement of the FP in complex social interactions. Thus, our data highlight the relative weight of social vs ecological constraints on the evolution of specific prefrontal brain regions, and of their associated cognitive operations, in primates.
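The core analysis described in this abstract, relating region size to body mass plus ecological and social variables across species, is essentially a multiple regression on species-level data (comparative studies like this one additionally control for phylogenetic relatedness, e.g., with PGLS). A minimal non-phylogenetic sketch on synthetic data, with all variable ranges and coefficients invented for illustration:

```python
# Sketch: regressing log region volume on log body mass, daily traveled
# distance, and population density across synthetic "species".
# A real comparative analysis would also model phylogenetic covariance (PGLS).
import numpy as np

rng = np.random.default_rng(0)
n_species = 16
log_mass = rng.uniform(0, 4, n_species)    # log body mass (invented units)
travel = rng.uniform(0, 10, n_species)     # daily traveled distance (km)
density = rng.uniform(0, 100, n_species)   # population density

# Invented "true" effects: allometric mass scaling plus ecological
# (travel) and social (density) terms, with a little noise.
log_volume = (1.0 + 0.6 * log_mass + 0.15 * travel + 0.01 * density
              + 0.05 * rng.standard_normal(n_species))

X = np.column_stack([np.ones(n_species), log_mass, travel, density])
beta, *_ = np.linalg.lstsq(X, log_volume, rcond=None)
print(dict(zip(["intercept", "mass", "travel", "density"], beta.round(2))))
```

Comparing such fits across candidate predictor sets (e.g., by AIC) is how one asks whether ecological or social variables best explain a region's volume.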
Affiliation(s)
- Sebastien Bouret: Team Motivation Brain & Behavior, ICM – Brain and Spine Institute, Paris, France
- Sandrine Prat: UMR 7194 (HNHP), MNHN/CNRS/UPVD, Musée de l'Homme, Paris, France
- Laurie Castro: UMR 7194 (HNHP), MNHN/CNRS/UPVD, Musée de l'Homme, Paris, France; UMR 7206 Eco-anthropologie, CNRS – MNHN – Univ. Paris Cité, Musée de l'Homme, Paris, France
- Pauline Perez: Team Motivation Brain & Behavior, ICM – Brain and Spine Institute, Paris, France
- Emmanuel Gilissen: Department of African Zoology, Royal Museum for Central Africa, Tervuren, Belgium; Université Libre de Bruxelles, Laboratory of Histology and Neuropathology, Brussels, Belgium
- Cecile Garcia: UMR 7206 Eco-anthropologie, CNRS – MNHN – Univ. Paris Cité, Musée de l'Homme, Paris, France
6. Abassi E, Bognár A, de Gelder B, Giese M, Isik L, Lappe A, Mukovskiy A, Solanas MP, Taubert J, Vogels R. Neural Encoding of Bodies for Primate Social Perception. J Neurosci 2024; 44:e1221242024. PMID: 39358024. PMCID: PMC11450534. DOI: 10.1523/jneurosci.1221-24.2024.
Abstract
Primates, as social beings, have evolved complex brain mechanisms to navigate intricate social environments. This review explores the neural bases of body perception in both human and nonhuman primates, emphasizing the processing of social signals conveyed by body postures, movements, and interactions. Early studies identified selective neural responses to body stimuli in macaques, particularly within and ventral to the superior temporal sulcus (STS). These regions, known as body patches, represent visual features that are present in bodies but do not appear to be semantic body detectors; they provide information about the posture and viewpoint of the body. Recent research using dynamic stimuli has expanded the understanding of the body-selective network, highlighting its complexity and the interplay between static and dynamic processing. In humans, body-selective areas such as the extrastriate body area (EBA) and fusiform body area (FBA) have been implicated in the perception of bodies and their interactions. Moreover, studies on social interactions reveal that regions in the human STS are also tuned to the perception of dyadic interactions, suggesting a specialized social lateral pathway. Computational work has developed models of body recognition and social interaction, providing insights into the underlying neural mechanisms. Despite these advances, significant gaps remain in understanding the neural mechanisms of body perception and social interaction. Overall, this review underscores the importance of integrating findings across species to comprehensively understand the neural foundations of body perception, and of the interaction between computational modeling and neural recording.
Affiliation(s)
- Etienne Abassi: The Neuro, Montreal Neurological Institute-Hospital, McGill University, Montréal, QC H3A 2B4, Canada
- Anna Bognár: Department of Neuroscience, KU Leuven, Leuven 3000, Belgium; Leuven Brain Institute, KU Leuven, Leuven 3000, Belgium
- Bea de Gelder: Cognitive Neuroscience, Maastricht University, Maastricht 6229 EV, Netherlands
- Martin Giese: Section Computational Sensomotorics, Hertie Institute for Clinical Brain Research & Centre for Integrative Neuroscience, University Clinic Tuebingen, Tuebingen D-72076, Germany
- Leyla Isik: Cognitive Science, Johns Hopkins University, Baltimore, Maryland 21218, USA
- Alexander Lappe: Section Computational Sensomotorics, Hertie Institute for Clinical Brain Research & Centre for Integrative Neuroscience, University Clinic Tuebingen, Tuebingen D-72076, Germany
- Albert Mukovskiy: Section Computational Sensomotorics, Hertie Institute for Clinical Brain Research & Centre for Integrative Neuroscience, University Clinic Tuebingen, Tuebingen D-72076, Germany
- Marta Poyo Solanas: Cognitive Neuroscience, Maastricht University, Maastricht 6229 EV, Netherlands
- Jessica Taubert: School of Psychology, University of Queensland, St Lucia, QLD 4072, Australia
- Rufin Vogels: Department of Neuroscience, KU Leuven, Leuven 3000, Belgium; Leuven Brain Institute, KU Leuven, Leuven 3000, Belgium
7. Brady N, Leonard S, Choisdealbha ÁN. Visual perspective taking and action understanding. Acta Psychol (Amst) 2024; 249:104467. PMID: 39173344. DOI: 10.1016/j.actpsy.2024.104467.
Abstract
Understanding what others are doing is a fundamental aspect of social cognition and a skill that is arguably linked to visuospatial perspective taking (VPT), the ability to apprehend the spatial layout of a scene from another's perspective. Yet, with few and notable exceptions, action understanding and VPT are rarely studied together. Participants (43 females, 37 males) made judgements about the spatial layout of objects in a scene from the perspective of an avatar who was positioned at 0°, 90°, 270°, or 180° relative to the participant. In a variant of a traditional VPT task, the avatar either interacted with the objects in the scene, by pointing to or reaching for them, or was present but did not engage with the objects. Although the task was identical across all conditions - to say whether a target object is to the right or left of a control object - we show that the avatar's actions modulate performance. Specifically, participants were more accurate when the avatar engaged with the target object and, correspondingly, less accurate and slower when the avatar interacted with the control objects. As these effects were independent of the angular disparity between participant and avatar perspectives, we conclude that action understanding and VPT are likely linked via the rapid deployment of two separate cognitive mechanisms. All participants provided a measure of self-reported empathy, and we show that response times decrease with increasing empathy scores for female but not for male participants. However, within the range of 'typical' empathy scores, defined here as the interquartile range where 50% of the data lie, females were faster than males. These findings lend further insight into the relationship between spatial and social perspective taking.
Affiliation(s)
- Nuala Brady: School of Psychology, University College Dublin, Belfield, Dublin 4, Ireland
- Sophie Leonard: School of Psychology, University College Dublin, Belfield, Dublin 4, Ireland
8. Liu S, He W, Zhang M, Li Y, Ren J, Guan Y, Fan C, Li S, Gu R, Luo W. Emotional concepts shape the perceptual representation of body expressions. Hum Brain Mapp 2024; 45:e26789. PMID: 39185719. PMCID: PMC11345699. DOI: 10.1002/hbm.26789.
Abstract
Emotion perception interacts with how we think and speak, including our concepts of emotion. Body expression is an important channel of emotion communication, but it is unknown whether and how its perception is modulated by conceptual knowledge. In this study, we employed representational similarity analysis and conducted three experiments combining a semantic similarity task, a mouse-tracking task, and a one-back behavioral task with electroencephalography and functional magnetic resonance imaging; the results show that conceptual knowledge predicted the perceptual representation of body expressions. Further, this prediction effect occurred at approximately 170 ms post-stimulus. The neural encoding of body expressions in the fusiform gyrus and lingual gyrus was impacted by emotion concept knowledge. Taken together, our results indicate that conceptual knowledge of emotion categories shapes the configural representation of body expressions in the ventral visual cortex, which offers compelling evidence for the constructed emotion theory.
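Representational similarity analysis (RSA), as used in this study, compares a model dissimilarity matrix built from conceptual similarity with a neural dissimilarity matrix built from response patterns, typically by correlating their lower triangles. A toy sketch with hand-made patterns (the conditions, "voxel" values, and category structure are all invented for illustration, not the study's data):

```python
# Sketch of representational similarity analysis (RSA): correlate the lower
# triangles of a conceptual RDM and a neural RDM. All values are made up.
import numpy as np
from itertools import combinations

def pattern_rdm(patterns):
    """Dissimilarity (1 - Pearson r) between every pair of condition patterns."""
    n = len(patterns)
    rdm = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        r = np.corrcoef(patterns[i], patterns[j])[0, 1]
        rdm[i, j] = rdm[j, i] = 1 - r
    return rdm

def rsa_correlation(rdm_a, rdm_b):
    """Correlate the lower triangles of two RDMs (Pearson here; studies
    often use Spearman to avoid assuming a linear relation)."""
    tri = np.tril_indices_from(rdm_a, k=-1)
    return np.corrcoef(rdm_a[tri], rdm_b[tri])[0, 1]

# Toy "neural" patterns for four conditions (six voxels each); the first
# two and the last two conditions resemble each other.
patterns = np.array([
    [1, 1, 0, 0, 1, 0],   # condition A1
    [1, 1, 0, 0, 0, 1],   # condition A2 (similar to A1)
    [0, 0, 1, 1, 1, 0],   # condition B1
    [0, 0, 1, 1, 0, 1],   # condition B2 (similar to B1)
], dtype=float)
neural_rdm = pattern_rdm(patterns)

# Conceptual RDM: 0 = same emotion category, 1 = different category.
concept_rdm = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [1, 1, 0, 0],
    [1, 1, 0, 0],
], dtype=float)

print(round(rsa_correlation(neural_rdm, concept_rdm), 2))  # 0.87
```

The positive correlation means that conditions treated as conceptually similar also evoke similar "neural" patterns, which is the sense in which conceptual knowledge can be said to predict perceptual representation.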
Affiliation(s)
- Shuaicheng Liu: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Weiqi He: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Mingming Zhang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Yiwen Li: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Jie Ren: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Yuanhao Guan: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Cong Fan: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Shuaixia Li: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Ruolei Gu: Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Wenbo Luo: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
9. Kroczek LOH, Lingnau A, Schwind V, Wolff C, Mühlberger A. Observers predict actions from facial emotional expressions during real-time social interactions. Behav Brain Res 2024; 471:115126. PMID: 38950784. DOI: 10.1016/j.bbr.2024.115126.
Abstract
In face-to-face social interactions, emotional expressions provide insights into the mental state of an interactive partner. This information can be crucial for inferring action intentions and reacting to another person's actions. Here we investigate how facial emotional expressions impact subjective experience and physiological and behavioral responses to social actions during real-time interactions. Thirty-two participants interacted with virtual agents while fully immersed in Virtual Reality. Agents displayed an angry or happy facial expression before they directed an appetitive (fist bump) or aversive (punch) social action towards the participant. Participants responded to these actions, either by reciprocating the fist bump or by defending against the punch. For all interactions, subjective experience was measured using ratings. In addition, physiological responses (electrodermal activity, electrocardiogram) and participants' response times were recorded. Aversive actions were judged to be more arousing and less pleasant than appetitive actions. In addition, angry expressions increased heart rate relative to happy expressions. Crucially, interaction effects between facial emotional expression and action were observed. Angry expressions reduced pleasantness more strongly for appetitive than for aversive actions. Furthermore, skin conductance responses to aversive actions were larger following happy than angry expressions, and reaction times to aversive actions were faster than to appetitive actions when agents showed an angry expression. These results indicate that observers used facial emotional expressions to generate expectations about particular actions. The present study thus demonstrates that observers integrate information from facial emotional expressions with actions during social interactions.
Affiliation(s)
- Leon O H Kroczek: Department of Psychology, Clinical Psychology and Psychotherapy, University of Regensburg, Regensburg, Germany
- Angelika Lingnau: Department of Psychology, Cognitive Neuroscience, University of Regensburg, Regensburg, Germany
- Valentin Schwind: Human Computer Interaction, University of Applied Sciences in Frankfurt a. M., Frankfurt a. M., Germany; Department of Media Informatics, University of Regensburg, Regensburg, Germany
- Christian Wolff: Department of Media Informatics, University of Regensburg, Regensburg, Germany
- Andreas Mühlberger: Department of Psychology, Clinical Psychology and Psychotherapy, University of Regensburg, Regensburg, Germany
10. Smekal V, Poyo Solanas M, Fraats EIC, de Gelder B. Differential contributions of body form, motion, and temporal information to subjective action understanding in naturalistic stimuli. Front Integr Neurosci 2024; 18:1302960. PMID: 38533314. PMCID: PMC10963482. DOI: 10.3389/fnint.2024.1302960.
Abstract
Introduction: We investigated the factors underlying naturalistic action recognition and understanding, as well as the errors occurring during recognition failures.
Methods: Participants saw full-light stimuli of ten different whole-body actions presented in three different conditions: as normal videos, as videos with the temporal order of the frames scrambled, and as single static representative frames. After each stimulus presentation participants completed one of two tasks: a forced choice task, where they were given the ten potential action labels as options, or a free description task, where they could describe the action performed in each stimulus in their own words.
Results: While, in general, a combination of form, motion, and temporal information led to the highest action understanding, for some actions form information was sufficient, and adding motion and temporal information did not increase recognition accuracy. We also analyzed errors in action recognition and found primarily two different types.
Discussion: One type of error was on the semantic level, while the other consisted of reverting to the kinematic level of body part processing without any attribution of semantics. We elaborate on these results in the context of naturalistic action perception.
Affiliation(s)
- Vojtěch Smekal: Brain and Emotion Lab, Department of Cognitive Neuroscience, Maastricht Brain Imaging Centre, Maastricht University, Maastricht, Netherlands
11. Brady N, Gough P, Leonard S, Allan P, McManus C, Foley T, O'Leary A, McGovern DP. Actions are characterized by 'canonical moments' in a sequence of movements. Cognition 2024; 242:105652. PMID: 37866178. DOI: 10.1016/j.cognition.2023.105652.
Abstract
Understanding what others are doing is an essential aspect of social cognition that depends on our ability to quickly recognize and categorize their actions. To study action recognition effectively, we need to understand how actions are bounded, where they start and where they end. Here we borrow a conceptual approach - the notion of 'canonicality' - introduced by Palmer and colleagues in their study of object recognition, and apply it to the study of action recognition. Using a set of 50 video clips sourced from stock photography sites, we show that many everyday actions - transitive and intransitive, social and non-social, communicative - are characterized by 'canonical moments' in a sequence of movements that are agreed by participants to 'best represent' a named action, as indicated in a forced choice (Exp 1, n = 142) and a free choice (Exp 2, n = 125) paradigm. In Exp 3 (n = 102) we confirm that canonical moments from action sequences are more readily named as depicting specific actions and, mirroring research in object recognition, that such canonical moments are privileged in memory (Exp 4, n = 95). We suggest that 'canonical moments', being those that convey maximal information about human actions, are integral to the representation of human action.
Affiliation(s)
- Nuala Brady: School of Psychology, University College Dublin, Belfield, Dublin 4, Ireland
- Patricia Gough: School of Psychology, University College Dublin, Belfield, Dublin 4, Ireland
- Sophie Leonard: School of Psychology, University College Dublin, Belfield, Dublin 4, Ireland
- Paul Allan: School of Psychology, University College Dublin, Belfield, Dublin 4, Ireland
- Caoimhe McManus: School of Psychology, University College Dublin, Belfield, Dublin 4, Ireland
- Tomas Foley: School of Psychology, University College Dublin, Belfield, Dublin 4, Ireland
- Aoife O'Leary: School of Psychology, University College Dublin, Belfield, Dublin 4, Ireland
- David P McGovern: School of Psychology, Dublin City University, Glasnevin Campus, Dublin 9, Ireland
12
Vaessen M, Van der Heijden K, de Gelder B. Modality-specific brain representations during automatic processing of face, voice and body expressions. Front Neurosci 2023; 17:1132088. [PMID: 37869514] [PMCID: PMC10587395] [DOI: 10.3389/fnins.2023.1132088] [Received: 12/26/2022] [Accepted: 09/05/2023] [Indexed: 10/24/2023]
Abstract
A central question in affective science, and one that is relevant for its clinical applications, is how emotions provided by different stimuli are experienced and represented in the brain. On the traditional view, emotional signals are recognized with the help of emotion concepts that are typically used in descriptions of mental states and emotional experiences, irrespective of the sensory modality. This perspective motivated the search for abstract representations of emotions in the brain, shared across variations in stimulus type (face, body, voice) and sensory origin (visual, auditory). On the other hand, emotion signals, for example an aggressive gesture, trigger rapid automatic behavioral responses, and this may take place before, or independently of, full abstract representation of the emotion. This pleads in favor of specific emotion signals that may trigger rapid adaptive behavior only by mobilizing modality- and stimulus-specific brain representations, without relying on higher-order abstract emotion categories. To test this hypothesis, we presented participants with naturalistic dynamic emotion expressions of the face, the whole body, or the voice in a functional magnetic resonance imaging (fMRI) study. To focus on automatic emotion processing and sidestep explicit concept-based emotion recognition, participants performed an unrelated target detection task presented in a different sensory modality than the stimulus. By using multivariate analyses to assess neural activity patterns in response to the different stimulus types, we reveal a stimulus-category- and modality-specific brain organization of affective signals. Our findings are consistent with the notion that under ecological conditions emotion expressions of the face, body, and voice may have different functional roles in triggering rapid adaptive behavior, even if, when viewed from an abstract conceptual vantage point, they may all exemplify the same emotion. This has implications for a neuroethologically grounded emotion research program that should start from detailed behavioral observations of how face, body, and voice expressions function in naturalistic contexts.
13
Zhang M, Zhou Y, Xu X, Ren Z, Zhang Y, Liu S, Luo W. Multi-view emotional expressions dataset using 2D pose estimation. Sci Data 2023; 10:649. [PMID: 37739952] [PMCID: PMC10516935] [DOI: 10.1038/s41597-023-02551-y] [Received: 04/03/2023] [Accepted: 09/07/2023] [Indexed: 09/24/2023]
Abstract
Human body expressions convey emotional shifts and intentions of action and, in some cases, are even more effective than other emotion models. Although many datasets of body expressions incorporating motion capture are available, widely distributed datasets of naturalized body expressions based on 2D video are lacking. In this paper, therefore, we report the multi-view emotional expressions dataset (MEED) using 2D pose estimation. Twenty-two actors presented six emotional (anger, disgust, fear, happiness, sadness, surprise) and neutral body movements from three viewpoints (left, front, right). A total of 4102 videos were captured. The MEED consists of the corresponding pose estimation results (i.e., 397,809 PNG files and 397,809 JSON files). The size of MEED exceeds 150 GB. We believe this dataset will benefit research in various fields, including affective computing, human-computer interaction, social neuroscience, and psychiatry.
Affiliation(s)
- Mingming Zhang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Yanan Zhou: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Xinye Xu: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Ziwei Ren: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Yihan Zhang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Shenglan Liu: School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian, 116024, Liaoning, China; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, Liaoning, China
- Wenbo Luo: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
14
Romero V, Paxton A. Stage 2: Visual information and communication context as modulators of interpersonal coordination in face-to-face and videoconference-based interactions. Acta Psychol (Amst) 2023; 239:103992. [PMID: 37536011] [DOI: 10.1016/j.actpsy.2023.103992] [Received: 02/14/2023] [Revised: 06/23/2023] [Accepted: 07/21/2023] [Indexed: 08/05/2023]
Abstract
Interpersonal coordination of body movement (that is, similarity in the patterning and timing of body movement between interaction partners) is well documented in face-to-face (FTF) conversation. Here, we investigated the degree to which interpersonal coordination is impacted by the amount of visual information available and the type of interaction conversation partners are having. To do so within a naturalistic context, we took advantage of the increased familiarity with videoconferencing (VC) platforms and with limited visual information in FTF conversation due to the COVID-19 pandemic. Pairs of participants communicated in one of three ways: FTF in a laboratory setting while socially distanced and wearing face masks; VC in a laboratory setting with a view of one another's full movements; or VC in a remote setting with a view of one another's face and shoulders. Each pair held three conversations: affiliative, argumentative, and cooperative task-based. We quantified interpersonal coordination as the relationship between the two participants' overall body movement using nonlinear time series analyses. Coordination changed as a function of the contextual constraints, and these constraints interacted with coordination patterns to affect subjective conversation outcomes. Importantly, we found patterns of results that were distinct from previous research; we hypothesize that these differences may be due to changes in the broader social context from COVID-19. Taken together, our results are consistent with a dynamical systems view of social phenomena, with interpersonal coordination emerging from the interaction between components, constraints, and history of the system.
Affiliation(s)
- Veronica Romero: Psychology Department, Colby College, Waterville, ME, USA; Davis Institute for Artificial Intelligence, Colby College, Waterville, ME, USA
- Alexandra Paxton: Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA; Center for the Ecological Study of Perception and Action, University of Connecticut, Storrs, CT, USA
15
Zhang M, Yu L, Zhang K, Du B, Zhan B, Jia S, Chen S, Han F, Li Y, Liu S, Yi X, Liu S, Luo W. Construction and validation of the Dalian emotional movement open-source set (DEMOS). Behav Res Methods 2023; 55:2353-2366. [PMID: 35931937] [DOI: 10.3758/s13428-022-01887-4] [Accepted: 05/24/2022] [Indexed: 11/08/2022]
Abstract
Human body movements are important for emotion recognition and social communication and have received extensive attention from researchers. In this field, emotional biological motion stimuli, as depicted by point-light displays, are widely used. However, the number of stimuli in existing material libraries is small, and standardized indicators are lacking, which limits experimental design and conduct. Therefore, based on our prior kinematic dataset, we constructed the Dalian Emotional Movement Open-source Set (DEMOS) using computational modeling. The DEMOS has three views (i.e., frontal 0°, left 45°, and left 90°) and in total comprises 2664 high-quality videos of emotional biological motion, each displaying happiness, sadness, anger, fear, disgust, or a neutral state. All stimuli were validated in terms of recognition accuracy, emotional intensity, and subjective movement. The objective movement for each expression was also calculated. The DEMOS can be downloaded for free from https://osf.io/83fst/. To our knowledge, this is the largest multi-view emotional biological motion set based on the whole body. The DEMOS can be applied in many fields, including affective computing, social cognition, and psychiatry.
Affiliation(s)
- Mingming Zhang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Lu Yu: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Keye Zhang: School of Social and Behavioral Sciences, Nanjing University, Nanjing, 210023, China
- Bixuan Du: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Bin Zhan: State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Shuxin Jia: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Shaohua Chen: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Fengxu Han: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Yiwen Li: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Shuaicheng Liu: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Xi Yi: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Shenglan Liu: School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian, 116024, China; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, China
- Wenbo Luo: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
16
Marrazzo G, De Martino F, Lage-Castellanos A, Vaessen MJ, de Gelder B. Voxelwise encoding models of body stimuli reveal a representational gradient from low-level visual features to postural features in occipitotemporal cortex. Neuroimage 2023:120240. [PMID: 37348622] [DOI: 10.1016/j.neuroimage.2023.120240] [Received: 03/14/2023] [Revised: 06/16/2023] [Accepted: 06/19/2023] [Indexed: 06/24/2023]
Abstract
Research on body representation in the brain has focused on category-specific representation, using fMRI to investigate response patterns to body stimuli in occipitotemporal cortex without so far addressing the specific computations performed in body-selective regions, which are defined only by higher-order category selectivity. This study used ultra-high field fMRI and banded ridge regression to investigate the coding of body images by comparing the performance of three encoding models in predicting brain activity in occipitotemporal cortex, and specifically in the extrastriate body area (EBA). Our results suggest that bodies are encoded in occipitotemporal cortex and in the EBA according to a combination of low-level visual features and postural features.
Affiliation(s)
- Giuseppe Marrazzo: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Limburg 6200 MD, Maastricht, The Netherlands
- Federico De Martino: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Limburg 6200 MD, Maastricht, The Netherlands; Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN 55455, United States and Department of NeuroInformatics
- Agustin Lage-Castellanos: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Limburg 6200 MD, Maastricht, The Netherlands; Cuban Center for Neuroscience, Street 190 e/25 and 27 Cubanacán Playa Havana, CP 11600, Cuba
- Maarten J Vaessen: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Limburg 6200 MD, Maastricht, The Netherlands
- Beatrice de Gelder: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Limburg 6200 MD, Maastricht, The Netherlands
17
Zhang N, Hu HL, Tso SH, Liu C. To switch or not? Effects of spokes-character urgency during the social app loading process and app type on user switching intention. Front Psychol 2023; 14:1110808. [PMID: 37384167] [PMCID: PMC10299737] [DOI: 10.3389/fpsyg.2023.1110808] [Received: 11/29/2022] [Accepted: 01/20/2023] [Indexed: 06/30/2023]
Abstract
Users of mobile phone applications (apps) often have to wait for the pages of apps to load, a process that substantially affects user experience. Based on the Attentional Gate Model and Emotional Contagion Theory, this paper explores, through two studies, the effects of the urgency expressed by a spokes-character's movement on the loading page of a social app and of the app type on users' switching intention. In Study 1 (N = 173), the results demonstrated that for a hedonic-orientated app, a high-urgency (vs. low-urgency) spokes-character resulted in a lower switching intention, whereas the opposite occurred for a utilitarian-orientated app. We adopted a similar methodology in Study 2 (N = 182), and the results showed that perceived waiting time mediated the interaction effect demonstrated in Study 1. Specifically, for the hedonic-orientated (vs. utilitarian-orientated) social app, the high-urgency (vs. low-urgency) spokes-character led participants to estimate a shorter waiting time, which induced a lower switching intention. This paper contributes to the literature on emotion, spokes-characters, and human-computer interaction, offers an enhanced understanding of users' perceptions during the loading process, and informs the design of spokes-characters for the loading pages of apps.
Affiliation(s)
- Ning Zhang: College of Management, Shenzhen University, Shenzhen, China
- Hsin-Li Hu: School of Communication, Hang Seng University of Hong Kong, Hong Kong, China
- Scarlet H. Tso: School of Communication, Hang Seng University of Hong Kong, Hong Kong, China
- Chunqun Liu: School of Hotel and Tourism Management, The Chinese University of Hong Kong, Hong Kong, China
18
de Gelder B. Social affordances, mirror neurons, and how to understand the social brain. Trends Cogn Sci 2023; 27:218-219. [PMID: 36635183] [DOI: 10.1016/j.tics.2022.11.011] [Received: 10/31/2022] [Accepted: 11/16/2022] [Indexed: 01/12/2023]
Affiliation(s)
- Beatrice de Gelder: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands; Department of Computer Science, University College London, London, UK
19
Bonini L, Rotunno C, Arcuri E, Gallese V. The mirror mechanism: linking perception and social interaction. Trends Cogn Sci 2023; 27:220-221. [PMID: 36635182] [DOI: 10.1016/j.tics.2022.12.010] [Received: 12/11/2022] [Accepted: 12/15/2022] [Indexed: 01/12/2023]
Affiliation(s)
- Luca Bonini: Department of Medicine and Surgery, University of Parma, Parma, Italy
- Cristina Rotunno: Department of Medicine and Surgery, University of Parma, Parma, Italy
- Edoardo Arcuri: Department of Medicine and Surgery, University of Parma, Parma, Italy
- Vittorio Gallese: Department of Medicine and Surgery, University of Parma, Parma, Italy
20
Li B, Solanas MP, Marrazzo G, Raman R, Taubert N, Giese M, Vogels R, de Gelder B. A large-scale brain network of species-specific dynamic human body perception. Prog Neurobiol 2023; 221:102398. [PMID: 36565985] [DOI: 10.1016/j.pneurobio.2022.102398] [Received: 07/27/2022] [Revised: 11/25/2022] [Accepted: 12/19/2022] [Indexed: 12/24/2022]
Abstract
This ultrahigh field 7 T fMRI study addressed the question of whether there exists a core network of brain areas at the service of different aspects of body perception. Participants viewed naturalistic videos of monkey and human faces, bodies, and objects along with mosaic-scrambled videos for control of low-level features. Independent component analysis (ICA) based network analysis was conducted to find body and species modulations at both the voxel and the network levels. Among the body areas, the highest species selectivity was found in the middle frontal gyrus and amygdala. Two large-scale networks were highly selective to bodies, dominated by the lateral occipital cortex and right superior temporal sulcus (STS) respectively. The right STS network showed high species selectivity, and its significant human body-induced node connectivity was focused around the extrastriate body area (EBA), STS, temporoparietal junction (TPJ), premotor cortex, and inferior frontal gyrus (IFG). The human body-specific network discovered here may serve as a brain-wide internal model of the human body serving as an entry point for a variety of processes relying on body descriptions as part of their more specific categorization, action, or expression recognition functions.
Affiliation(s)
- Baichen Li: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6200 MD, the Netherlands
- Marta Poyo Solanas: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6200 MD, the Netherlands
- Giuseppe Marrazzo: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6200 MD, the Netherlands
- Rajani Raman: Laboratory for Neuro- and Psychophysiology, Department of Neurosciences, KU Leuven Medical School, Leuven 3000, Belgium; Leuven Brain Institute, KU Leuven, Leuven 3000, Belgium
- Nick Taubert: Section for Computational Sensomotorics, Centre for Integrative Neuroscience & Hertie Institute for Clinical Brain Research, University Clinic Tübingen, Tübingen 72076, Germany
- Martin Giese: Section for Computational Sensomotorics, Centre for Integrative Neuroscience & Hertie Institute for Clinical Brain Research, University Clinic Tübingen, Tübingen 72076, Germany
- Rufin Vogels: Laboratory for Neuro- and Psychophysiology, Department of Neurosciences, KU Leuven Medical School, Leuven 3000, Belgium; Leuven Brain Institute, KU Leuven, Leuven 3000, Belgium
- Beatrice de Gelder: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6200 MD, the Netherlands; Department of Computer Science, University College London, London WC1E 6BT, UK
21
Emotion is perceived accurately from isolated body parts, especially hands. Cognition 2023; 230:105260. [PMID: 36058103] [DOI: 10.1016/j.cognition.2022.105260] [Received: 01/06/2022] [Revised: 08/16/2022] [Accepted: 08/17/2022] [Indexed: 11/21/2022]
Abstract
Body posture and configuration provide important visual cues about the emotional states of other people. Bodily form is known to be processed holistically; however, emotion recognition may depend on different mechanisms, and certain body parts, such as the hands, may be especially important for perceiving emotion. This study therefore compared participants' emotion recognition performance when shown images of full bodies or of isolated hands, arms, heads, and torsos. Across three experiments, emotion recognition accuracy was above chance for all body parts. While emotions were recognized most accurately from full bodies, recognition performance for the hands was more accurate than for other body parts. Representational similarity analysis further showed that the pattern of errors for the hands was related to that for full bodies. Performance was reduced when stimuli were inverted, showing a clear body inversion effect. The high performance for hands was not due only to the fact that there are two hands, as performance remained well above chance even when just one hand was shown. These results demonstrate that emotions can be decoded from body parts. Furthermore, certain features, such as the hands, are more important to emotion perception than others. STATEMENT OF RELEVANCE: Successful social interaction relies on accurately perceiving emotional information from others. Bodies provide an abundance of emotion cues; however, the way in which emotional bodies and body parts are perceived is unclear. We investigated this perceptual process by comparing emotion recognition for body parts with that for full bodies. Crucially, we found that while emotions were most accurately recognized from full bodies, emotions were also classified accurately when images of isolated hands, arms, heads, and torsos were seen. Of the body parts shown, emotion recognition from the hands was most accurate.
Furthermore, shared patterns of emotion classification for hands and full bodies suggested that emotion recognition mechanisms are shared for full bodies and body parts. That the hands are key to emotion perception is important evidence in its own right. It could also be applied to interventions for individuals who find it difficult to read emotions from faces and bodies.
22
Zhang M, Li P, Yu L, Ren J, Jia S, Wang C, He W, Luo W. Emotional body expressions facilitate working memory: Evidence from the n-back task. Psych J 2022; 12:178-184. [PMID: 36403986] [DOI: 10.1002/pchj.616] [Received: 01/16/2022] [Accepted: 10/10/2022] [Indexed: 11/22/2022]
Abstract
In daily life, individuals need to recognize and update emotional information from others' changing body expressions. However, whether emotional bodies can enhance working memory (WM) remains unknown. In the present study, participants completed a modified n-back task in which they were required to indicate whether a presented image of an emotional body matched an item displayed before each block (0-back) or two positions previously (2-back). Each block comprised only fearful, happy, or neutral bodies. We found that in the 0-back trials, compared with neutral body expressions, participants responded faster to happy bodies, followed by fearful bodies, while accuracy showed comparable ceiling effects. When WM load increased to 2-back, both fearful and happy bodies significantly facilitated WM performance (i.e., faster reaction times and higher accuracy) relative to the neutral condition. In summary, the current findings reveal the enhancing effect of emotional body expressions on WM and highlight the importance of emotional action information in WM.
Affiliation(s)
- Mingming Zhang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, China
- Ping Li: School of Literature and Journalism, North Minzu University, Yinchuan, China
- Lu Yu: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, China
- Jie Ren: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, China
- Shuxin Jia: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, China
- Chaolun Wang: Department of Psychology, Sun Yat-Sen University, Guangzhou, China
- Weiqi He: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, China
- Wenbo Luo: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, China
23
Lillywhite A, Nijhof D, Glowinski D, Giordano BL, Camurri A, Cross I, Pollick FE. A functional magnetic resonance imaging examination of audiovisual observation of a point-light string quartet using intersubject correlation and physical feature analysis. Front Neurosci 2022; 16:921489. [PMID: 36148146] [PMCID: PMC9486104] [DOI: 10.3389/fnins.2022.921489] [Received: 04/15/2022] [Accepted: 08/05/2022] [Indexed: 11/13/2022]
Abstract
We use functional Magnetic Resonance Imaging (fMRI) to explore synchronized neural responses between observers of audiovisual presentation of a string quartet performance during free viewing. Audio presentation was accompanied by visual presentation of the string quartet as stick figures observed from a static viewpoint. Brain data from 18 musical novices were obtained during audiovisual presentation of a 116 s performance of the allegro of String Quartet, No. 14 in D minor by Schubert played by the 'Quartetto di Cremona.' These data were analyzed using intersubject correlation (ISC). Results showed extensive ISC in auditory and visual areas as well as parietal cortex, frontal cortex and subcortical areas including the medial geniculate and basal ganglia (putamen). These results from a single fixed viewpoint of multiple musicians are greater than previous reports of ISC from unstructured group activity but are broadly consistent with related research that used ISC to explore listening to music or watching solo dance. A feature analysis examining the relationship between brain activity and physical features of the auditory and visual signals yielded findings of a large proportion of activity related to auditory and visual processing, particularly in the superior temporal gyrus (STG) as well as midbrain areas. Motor areas were also involved, potentially as a result of watching motion from the stick figure display of musicians in the string quartet. These results reveal involvement of areas such as the putamen in processing complex musical performance and highlight the potential of using brief naturalistic stimuli to localize distinct brain areas and elucidate potential mechanisms underlying multisensory integration.
Affiliation(s)
- Amanda Lillywhite: School of Psychology & Neuroscience, University of Glasgow, Glasgow, United Kingdom; Department of Psychology, University of Bath, Bath, United Kingdom
- Dewy Nijhof: School of Psychology & Neuroscience, University of Glasgow, Glasgow, United Kingdom; Institute of Health & Wellbeing, University of Glasgow, Glasgow, United Kingdom
- Donald Glowinski: La Source School of Nursing, Institut et Haute Ecole de la Santé La Source (HES-SO), Lausanne, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
- Bruno L. Giordano: Institut de Neurosciences de la Timone, UMR 7289, CNRS, Aix-Marseille University, Marseille, France
- Antonio Camurri: Casa Paganini-InfoMus, DIBRIS, University of Genoa, Genoa, Italy
- Ian Cross: Centre for Music and Science, Faculty of Music, School of Arts and Humanities, University of Cambridge, Cambridge, United Kingdom
- Frank E. Pollick: School of Psychology & Neuroscience, University of Glasgow, Glasgow, United Kingdom
Collapse
|
24
|
Keck J, Zabicki A, Bachmann J, Munzert J, Krüger B. Decoding spatiotemporal features of emotional body language in social interactions. Sci Rep 2022; 12:15088. PMID: 36064559; PMCID: PMC9445068; DOI: 10.1038/s41598-022-19267-5.
Abstract
How are emotions perceived through human body language in social interactions? This study used point-light displays of human interactions portraying emotional scenes (1) to examine quantitative intrapersonal kinematic and postural body configurations, (2) to calculate interaction-specific parameters of these interactions, and (3) to analyze to what extent both contribute to the perception of an emotion category (i.e., anger, sadness, happiness, or affection) as well as to the perception of emotional valence. Using ANOVA and classification trees, we investigated emotion-specific differences in the calculated parameters. We further applied representational similarity analyses to determine how perceptual ratings relate to intra- and interpersonal features of the observed scene. Results showed that, within an interaction, intrapersonal kinematic cues corresponded to emotion category ratings, whereas postural cues reflected valence ratings. Perception of emotion category was also driven by interpersonal orientation, proxemics, the time spent in the personal space of the counterpart, and the motion-energy balance between interacting people. Furthermore, motion-energy balance and orientation related to valence ratings. Thus, features of emotional body language are connected with the emotional content of an observed scene, and people make use of the observed emotionally expressive body language and interpersonal coordination to infer the emotional content of interactions.
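The representational similarity logic can be sketched in a few lines: build one representational dissimilarity matrix (RDM) from movement features and one from perceptual ratings, then rank-correlate them. All data below are invented, and Euclidean distance with a Spearman correlation is one common choice, not necessarily the authors' exact settings:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_scenes = 12

# Invented per-scene movement features (e.g., velocity, posture, proxemics).
kinematics = rng.standard_normal((n_scenes, 5))
# Invented perceptual ratings, loosely driven by the kinematic features.
ratings = kinematics @ rng.standard_normal((5, 3)) \
          + 0.1 * rng.standard_normal((n_scenes, 3))

# Condensed RDMs: pairwise dissimilarities between scenes in each space.
rdm_kinematics = pdist(kinematics, metric="euclidean")
rdm_ratings = pdist(ratings, metric="euclidean")

# Second-order comparison: rank-correlate the two RDMs.
rho, p_value = spearmanr(rdm_kinematics, rdm_ratings)
```

A high rho would indicate that scenes that move alike are also rated alike, which is the kind of relation the study tested between observed features and perceived emotion.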
Affiliation(s)
- Johannes Keck: Neuromotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Kugelberg 62, 35394 Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), Universities Marburg and Giessen, Marburg, Germany
- Adam Zabicki: Neuromotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Kugelberg 62, 35394 Giessen, Germany
- Julia Bachmann: Neuromotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Kugelberg 62, 35394 Giessen, Germany
- Jörn Munzert: Neuromotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Kugelberg 62, 35394 Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), Universities Marburg and Giessen, Marburg, Germany
- Britta Krüger: Neuromotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Kugelberg 62, 35394 Giessen, Germany

25
Mirror neurons 30 years later: implications and applications. Trends Cogn Sci 2022; 26:767-781. PMID: 35803832; DOI: 10.1016/j.tics.2022.06.003.
Abstract
Mirror neurons (MNs) were first described in a seminal 1992 paper as a class of monkey premotor cells that discharge during both action execution and action observation. Despite their debated origin and function, recent studies in several species, from birds to humans, have revealed that, beyond MNs proper, a variety of cell types distributed among multiple motor, sensory, and emotional brain areas form a 'mirror mechanism' more complex and flexible than originally thought, one with an evolutionarily conserved role in social interaction. Here, we trace the current limits and envisage the future trends of this discovery, showing that it has inspired translational research and the development of new neurorehabilitation approaches, and that it constitutes a point of no return in social and affective neuroscience.
26
Abstract
Visual representations of bodies, in addition to those of faces, contribute to the recognition of conspecifics and heterospecifics, to action recognition, and to nonverbal communication. Despite its importance, the neural basis of the visual analysis of bodies has been studied less than that of faces. In this article, I review what is known about the neural processing of bodies, focusing on the macaque temporal visual cortex. Early single-unit recording work suggested that the temporal visual cortex contains representations of body parts and bodies, with the dorsal bank of the superior temporal sulcus representing bodily actions. Subsequent functional magnetic resonance imaging studies in both humans and monkeys showed several temporal cortical regions that are strongly activated by bodies. Single-unit recordings in the macaque body patches suggest that these represent mainly body shape features. More anterior patches show a greater viewpoint-tolerant selectivity for body features, which may reflect a processing principle shared with other object categories, including faces.
Affiliation(s)
- Rufin Vogels: Laboratorium voor Neuro- en Psychofysiologie, KU Leuven, Belgium; Leuven Brain Institute, KU Leuven, Belgium

27
Berry M, Lewin S, Brown S. Correlated expression of the body, face, and voice during character portrayal in actors. Sci Rep 2022; 12:8253. PMID: 35585175; PMCID: PMC9117657; DOI: 10.1038/s41598-022-12184-7.
Abstract
Actors are required to engage in multimodal modulations of their body, face, and voice in order to create a holistic portrayal of a character during performance. We present here the first trimodal analysis, to our knowledge, of the process of character portrayal in professional actors. The actors portrayed a series of stock characters (e.g., king, bully) that were organized according to a predictive scheme based on the two orthogonal personality dimensions of assertiveness and cooperativeness. We used 3D motion capture technology to analyze the relative expansion/contraction of 6 body segments across the head, torso, arms, and hands. We compared this with previous results for these portrayals for 4 segments of facial expression and the vocal parameters of pitch and loudness. The results demonstrated significant cross-modal correlations for character assertiveness (but not cooperativeness), as manifested collectively in a straightening of the head and torso, expansion of the arms and hands, lowering of the jaw, and a rise in vocal pitch and loudness. These results demonstrate what communication theorists refer to as "multichannel reinforcement". We discuss this reinforcement in light of both acting theories and theories of human communication more generally.
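The cross-modal finding boils down to correlating expressive measures across modalities over the set of portrayed characters. A toy sketch with invented per-character scores (the study's real measures are 3D motion-capture segment expansion and acoustic pitch/loudness):

```python
import numpy as np

# Invented scores for 8 stock characters, ordered by increasing assertiveness.
arm_expansion = np.array([0.2, 0.5, 0.9, 1.4, 1.8, 2.1, 2.6, 3.0])  # arbitrary units
mean_pitch_hz = np.array([110, 118, 131, 140, 152, 160, 171, 180])  # mean F0 in Hz

# A high correlation across characters is what "multichannel reinforcement"
# predicts: the body and the voice scale together along the same dimension.
r = np.corrcoef(arm_expansion, mean_pitch_hz)[0, 1]
```

With real data, one such correlation would be computed per pair of modality measures, and the paper's result corresponds to these correlations being significant for assertiveness but not cooperativeness.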
Affiliation(s)
- Matthew Berry: Department of Psychology, Neuroscience & Behaviour, McMaster University, 1280 Main St. West, Hamilton, ON L8S 4K1, Canada
- Sarah Lewin: Department of Psychology, Neuroscience & Behaviour, McMaster University, 1280 Main St. West, Hamilton, ON L8S 4K1, Canada
- Steven Brown: Department of Psychology, Neuroscience & Behaviour, McMaster University, 1280 Main St. West, Hamilton, ON L8S 4K1, Canada

28
Zhuang T, Lingnau A. The characterization of actions at the superordinate, basic and subordinate level. Psychological Research 2021; 86:1871-1891. PMID: 34907466; PMCID: PMC9363348; DOI: 10.1007/s00426-021-01624-0.
Abstract
Objects can be categorized at different levels of abstraction, ranging from the superordinate (e.g., fruit) and the basic (e.g., apple) to the subordinate level (e.g., golden delicious). The basic level is assumed to play a key role in categorization, e.g., in terms of the number of features used to describe members of a category and the speed of processing. To what degree do these principles also apply to the categorization of observed actions? To address this question, we first selected a range of actions at the superordinate (e.g., locomotion), basic (e.g., to swim) and subordinate level (e.g., to swim breaststroke), using verbal material (Experiments 1-3). Experiments 4-6 aimed to determine the characteristics of these actions across the three taxonomic levels. Using a feature-listing paradigm (Experiment 4), we determined the number of features that were provided by at least six out of twenty participants (common features), separately for the three levels. In addition, we examined the number of shared features (i.e., provided for more than one category) and distinct features (i.e., provided for one category only). Participants produced the highest number of common features for actions at the basic level. Actions at the subordinate level shared more features with other actions at the same level than those at the superordinate level did. Actions at the superordinate and basic levels were described with more distinct features than those at the subordinate level. Using an auditory priming paradigm (Experiment 5), we observed that participants responded faster to action images preceded by a matching auditory cue at the basic and subordinate levels, but not at the superordinate level, suggesting that the basic level is the most abstract level at which verbal cues facilitate the processing of an upcoming action.
Using a category verification task (Experiment 6), we found that participants were faster and more accurate at verifying action categories (depicted as images) at the basic and subordinate levels than at the superordinate level. Together, in line with the object categorization literature, our results suggest that information about action categories is maximized at the basic level.
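The "common feature" criterion described above (a feature counts as common if at least six of twenty participants produce it) is a straightforward tally. A sketch with invented listings from eight mock participants, keeping the threshold of six:

```python
from collections import Counter

def common_features(listings, threshold=6):
    """Return the features named by at least `threshold` participants.
    listings: one list of feature strings per participant; duplicates
    within a single participant's list are counted once."""
    counts = Counter(f for features in listings for f in set(features))
    return {f for f, n in counts.items() if n >= threshold}

# Invented listings for the basic-level action "to swim"
# (the study itself collected listings from twenty participants).
listings = [
    ["water", "arms", "legs"], ["water", "movement"],
    ["water", "arms"],         ["pool", "water"],
    ["water", "legs", "arms"], ["arms", "breathing"],
    ["water", "arms"],         ["water", "strokes"],
]
common = common_features(listings)  # only "water" reaches the threshold (7 of 8)
```

Shared and distinct features follow the same counting logic, but pooled across categories rather than across participants.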
Affiliation(s)
- Tonghe Zhuang: Chair of Cognitive Neuroscience, Faculty of Human Sciences, Institute of Psychology, University of Regensburg, Universitätsstrasse 31, 93053 Regensburg, Germany
- Angelika Lingnau: Chair of Cognitive Neuroscience, Faculty of Human Sciences, Institute of Psychology, University of Regensburg, Universitätsstrasse 31, 93053 Regensburg, Germany

29
Marrazzo G, Vaessen MJ, de Gelder B. Decoding the difference between explicit and implicit body expression representation in high level visual, prefrontal and inferior parietal cortex. Neuroimage 2021; 243:118545. PMID: 34478822; DOI: 10.1016/j.neuroimage.2021.118545.
Abstract
Recent studies provide an increasing understanding of how visual object categories such as faces or bodies are represented in the brain, and they have also raised the question of whether category-based models or more dynamic, network-inspired models are more powerful. Two important and so far sidestepped issues in this debate are, first, how major category attributes like emotional expression directly influence category representation and, second, whether category and attribute representation are sensitive to task demands. This study investigated the impact of a crucial category attribute, emotional expression, on category-area activity and whether this varies with the participants' task. Using functional magnetic resonance imaging (fMRI), we measured BOLD responses while participants viewed whole-body expressions and performed either an explicit (emotion) or an implicit (shape) recognition task. Our results, based on multivariate methods, show that the type of task is the strongest determinant of brain activity and can be decoded in the extrastriate body area (EBA), ventrolateral prefrontal cortex (VLPFC), and inferior parietal lobule (IPL). Brain activity was higher for the explicit task condition in VLPFC and was not emotion specific. This pattern suggests that during explicit recognition of the body expression, body category representation may be strengthened while emotion- and action-related activity is suppressed. Taken together, these results stress the importance of the task and of the role of category attributes for understanding the functional organization of high-level visual cortex.
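Decoding "which task" from activity patterns can be illustrated with a minimal leave-one-out nearest-centroid classifier on synthetic data. This is a generic stand-in for multivariate decoding; the paper's actual classifiers, preprocessing, and region-of-interest definitions will differ:

```python
import numpy as np

def loo_nearest_centroid_accuracy(patterns, labels):
    """Leave-one-out decoding: assign each left-out trial to the
    class whose training-set centroid is nearest in Euclidean distance."""
    classes = np.unique(labels)
    correct = 0
    for i in range(len(labels)):
        keep = np.arange(len(labels)) != i
        centroids = [patterns[keep & (labels == c)].mean(axis=0) for c in classes]
        dists = [np.linalg.norm(patterns[i] - c) for c in centroids]
        correct += int(classes[int(np.argmin(dists))] == labels[i])
    return correct / len(labels)

# Synthetic "trials": 60 patterns over 40 voxels; the explicit-task
# trials carry a small additive signal in the first 10 voxels.
rng = np.random.default_rng(2)
labels = np.repeat([0, 1], 30)            # 0 = implicit (shape), 1 = explicit (emotion)
patterns = rng.standard_normal((60, 40))
patterns[labels == 1, :10] += 0.8

accuracy = loo_nearest_centroid_accuracy(patterns, labels)
```

Above-chance accuracy (here, above 0.5) is what licenses the claim that a condition "can be decoded" from a region's activity patterns.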
Affiliation(s)
- Giuseppe Marrazzo: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, the Netherlands
- Maarten J Vaessen: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, the Netherlands
- Beatrice de Gelder: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, the Netherlands; Department of Computer Science, University College London, London WC1E 6BT, United Kingdom

30
Bieńkiewicz MMN, Smykovskyi AP, Olugbade T, Janaqi S, Camurri A, Bianchi-Berthouze N, Björkman M, Bardy BG. Bridging the gap between emotion and joint action. Neurosci Biobehav Rev 2021; 131:806-833. PMID: 34418437; DOI: 10.1016/j.neubiorev.2021.08.014.
Abstract
Our daily human life is filled with a myriad of joint action moments, be it children playing, adults working together (e.g., team sports), or strangers navigating through a crowd. Joint action brings individuals (and the embodiment of their emotions) together, in space and in time. Yet little is known about how individual emotions propagate through embodied presence in a group, and how joint action changes individual emotion. In fact, the multi-agent component is largely missing from neuroscience-based approaches to emotion, and, conversely, joint action research has not yet found a way to include emotion as one of the key parameters in models of socio-motor interaction. In this review, we first identify this gap and then assemble evidence from various branches of science showing a strong entanglement between emotion and acting together. We propose an integrative approach to bridge the gap, highlight five research avenues for doing so in behavioral neuroscience and the digital sciences, and address some of the key challenges in the area faced by modern societies.
Affiliation(s)
- Marta M N Bieńkiewicz: EuroMov Digital Health in Motion, Univ. Montpellier IMT Mines Ales, Montpellier, France
- Andrii P Smykovskyi: EuroMov Digital Health in Motion, Univ. Montpellier IMT Mines Ales, Montpellier, France
- Stefan Janaqi: EuroMov Digital Health in Motion, Univ. Montpellier IMT Mines Ales, Montpellier, France
- Benoît G Bardy: EuroMov Digital Health in Motion, Univ. Montpellier IMT Mines Ales, Montpellier, France