1
Lin C, Keles U, Thornton MA, Adolphs R. How trait impressions of faces shape subsequent mental state inferences. Nat Hum Behav 2025; 9:208-226. PMID: 39622977. DOI: 10.1038/s41562-024-02059-4.
Abstract
People form impressions of one another in a split second from faces. However, people also infer others' momentary mental states on the basis of context: for example, one might infer that somebody feels encouraged from the fact that they are receiving constructive feedback. How do trait judgements of faces influence these context-based mental state inferences? In this Registered Report, we asked participants to infer the mental states of unfamiliar people, identified by their neutral faces, under specific contexts. To increase generalizability, we representatively sampled all stimuli from inclusive sets using computational methods. We tested four hypotheses: that trait impressions of faces (1) are correlated with subsequent mental state inferences in a range of contexts, (2) alter the dimensional space that underlies mental state inferences, (3) are associated with specific mental state dimensions in this space and (4) causally influence mental state inferences. We found evidence in support of all hypotheses.
Affiliation(s)
- Chujun Lin: Department of Psychology, University of California, San Diego, San Diego, CA, USA
- Umit Keles: Division of the Humanities and Social Sciences, California Institute of Technology, Pasadena, CA, USA
- Mark A Thornton: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Ralph Adolphs: Division of the Humanities and Social Sciences, California Institute of Technology, Pasadena, CA, USA
2
Cheng M, Tseng CH, Fujiwara K, Higashiyama S, Weng A, Kitamura Y. Toward an Asian-based bodily movement database for emotional communication. Behav Res Methods 2024; 57:10. PMID: 39656347. PMCID: PMC11632091. DOI: 10.3758/s13428-024-02558-2.
Abstract
Most current databases of bodily emotion expression are created in Western countries, resulting in culturally skewed representations. To address the obvious risk this bias poses to academic comprehension, we attempted to expand the current repertoire of human bodily emotions by recruiting Asian professional performers to wear whole-body suits with 57 retroreflective markers attached to major joints and body segments, and to express seven basic emotions with whole-body movements in a motion-capture lab. For each emotion, actors performed three self-created scenarios that covered a broad range of real-life events to elicit the target emotion within 2-5 seconds. Subsequently, a separate group of participants was invited to judge the perceived emotional category from the extracted biological motions (point-light displays with 18 or 57 markers). The results demonstrated that emotion discrimination accuracy was comparable to that of Western databases containing standardized performance scenarios. This work is a significant step toward establishing a database using a novel emotion-induction approach based on personalized scenarios, and the database will contribute to a more comprehensive understanding of emotional expression across diverse contexts.
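As an editor's illustration of how the two display conditions described here relate, the sketch below subsamples a 57-marker motion-capture take down to an 18-marker point-light display. The array layout and the joint index list are assumptions for illustration, not the database's actual marker map.

```python
import numpy as np

def to_point_light(markers: np.ndarray, keep_idx: list[int]) -> np.ndarray:
    """markers: (frames, 57, 3) array of x/y/z positions; returns the reduced display."""
    return markers[:, keep_idx, :]

# Hypothetical indices approximating 18 major joints out of the 57 markers
MAJOR_JOINTS = [0, 2, 4, 6, 8, 10, 12, 14, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52]

take = np.random.rand(500, 57, 3)          # stand-in for a 2-5 s performance clip
pld = to_point_light(take, MAJOR_JOINTS)   # (500, 18, 3) point-light version
print(pld.shape)
```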
Affiliation(s)
- Miao Cheng: Research Institute of Electrical Communication, Tohoku University, Sendai, Japan; Interdisciplinary ICT Research Center for Cyber and Real Spaces, Tohoku University, Sendai, Japan
- Chia-Huei Tseng: Research Institute of Electrical Communication, Tohoku University, Sendai, Japan; Interdisciplinary ICT Research Center for Cyber and Real Spaces, Tohoku University, Sendai, Japan
- Ken Fujiwara: Department of Psychology, National Chung Cheng University, Chiayi, Taiwan
- Shoi Higashiyama: Graduate School of Information Sciences, Tohoku University, Sendai, Japan
- Abby Weng: Saïd Business School, University of Oxford, Oxford, UK
- Yoshifumi Kitamura: Research Institute of Electrical Communication, Tohoku University, Sendai, Japan; Interdisciplinary ICT Research Center for Cyber and Real Spaces, Tohoku University, Sendai, Japan
3
Ran D, Zhang Y, Hao B, Li S. Emotional Evaluations from Partners and Opponents Differentially Influence the Perception of Ambiguous Faces. Behav Sci (Basel) 2024; 14:1168. PMID: 39767309. PMCID: PMC11673254. DOI: 10.3390/bs14121168.
Abstract
The influence of contextual valence and interpersonal distance on facial expression perception remains unclear despite their significant role in shaping social perceptions. In this event-related potential (ERP) study, we investigated the temporal dynamics underlying the processing of surprised faces across different interpersonal distances (partner, opponent, or stranger) and contextual valences (positive, neutral, or negative). Thirty-five participants rated the valence of surprised faces. An advanced mass univariate statistical approach was utilized to analyze the ERP data. Behaviorally, surprised faces in partner-related negative contexts were rated more negatively than those in opponent- and stranger-related contexts. The ERP results revealed an increased P1 amplitude for surprised faces in negative relative to neutral contexts. Both the early posterior negativity (EPN) and the late positive potential (LPP) were also modulated by contextual valence, with larger amplitudes for faces in positive relative to neutral and negative contexts. Additionally, when compared to stranger-related contexts, faces in partner-related contexts elicited enhanced P1 and EPN responses, while those in opponent-related contexts showed amplified LPP responses. Taken together, these findings elucidate the modulation of intricate social contexts on the perception and interpretation of ambiguous facial expressions, thereby enhancing our understanding of nonverbal communication and emotional cognition.
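For readers unfamiliar with the mass univariate approach mentioned here, below is a minimal sketch of one common variant: a max-statistic permutation test over all electrode-by-time points. It is an illustration of the general technique, not the authors' exact pipeline.

```python
import numpy as np

def mass_univariate(diff: np.ndarray, n_perm: int = 2000, seed: int = 0):
    """diff: (subjects, electrodes, timepoints) condition-difference waveforms."""
    rng = np.random.default_rng(seed)
    n = diff.shape[0]

    def tmap(x: np.ndarray) -> np.ndarray:
        return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(n))

    t_obs = tmap(diff)
    t_max = np.empty(n_perm)
    for i in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=(n, 1, 1))  # sign-flip subjects
        t_max[i] = np.abs(tmap(diff * flips)).max()      # max over all tests
    crit = np.quantile(t_max, 0.95)  # threshold corrected across electrodes x time
    return t_obs, np.abs(t_obs) > crit
```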
Affiliation(s)
- Danyang Ran: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Yihan Zhang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Bin Hao: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Shuaixia Li: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
4
Zhao M, Wang J. Consistent social information perceived in animated backgrounds improves ensemble perception of facial expressions. Perception 2024; 53:563-578. PMID: 38725355. DOI: 10.1177/03010066241253073.
Abstract
Observers can rapidly extract the mean emotion from a set of faces with remarkable precision, an ability known as ensemble coding. Previous studies have demonstrated that matched physical backgrounds improve the precision of ongoing ensemble tasks. However, it remains unknown whether this facilitation effect still occurs when matched social information is perceived from the backgrounds. In two experiments, participants decided whether the test face in the retrieving phase appeared more disgusted or more neutral than the mean emotion of the face set in the encoding phase. Both phases were paired with task-irrelevant animated backgrounds, which included either a forward movement trajectory carrying "cooperatively chasing" information or a backward movement trajectory conveying no such chasing information. The backgrounds in the encoding and retrieving phases were either mismatched (i.e., forward and backward replays of the same trajectory) or matched (i.e., two identical forward movement trajectories in Experiment 1, or two different forward movement trajectories in Experiment 2). Participants in both experiments showed higher ensemble precision and better discrimination sensitivity when the backgrounds matched. The findings suggest that consistent social information perceived from memory-related context exerts a context-matching facilitation effect on ensemble coding and, more importantly, that this effect is independent of consistent physical information.
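The discrimination sensitivity reported in such tasks is conventionally computed as d'. A minimal sketch with hypothetical trial counts (not the paper's data):

```python
from scipy.stats import norm

def dprime(hits: int, misses: int, fas: int, crs: int) -> float:
    # log-linear correction guards against hit/false-alarm rates of 0 or 1
    h = (hits + 0.5) / (hits + misses + 1)
    f = (fas + 0.5) / (fas + crs + 1)
    return norm.ppf(h) - norm.ppf(f)

print(dprime(hits=42, misses=18, fas=15, crs=45))  # e.g., matched backgrounds
print(dprime(hits=35, misses=25, fas=21, crs=39))  # e.g., mismatched backgrounds
```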
Affiliation(s)
- Mengfei Zhao: School of Psychology, Zhejiang Normal University, Jinhua, PR China
- Jun Wang: School of Psychology, Zhejiang Normal University, Jinhua, PR China; Zhejiang Philosophy and Social Science Laboratory for the Mental Health and Crisis Intervention of Children and Adolescents, Zhejiang Normal University, Jinhua, PR China
5
Li Z, Lu H, Liu D, Yu ANC, Gendron M. Emotional event perception is related to lexical complexity and emotion knowledge. Communications Psychology 2023; 1:45. PMID: 39242918. PMCID: PMC11332234. DOI: 10.1038/s44271-023-00039-4.
Abstract
Inferring emotion is a critical skill that supports social functioning. Emotion inferences are typically studied in simplistic paradigms by asking people to categorize isolated and static cues like frowning faces. Yet emotions are complex events that unfold over time. Here, across three samples (Study 1 N = 222; Study 2 N = 261; Study 3 N = 101), we present the Emotion Segmentation Paradigm to examine inferences about complex emotional events by extending cognitive paradigms examining event perception. Participants were asked to indicate when there were changes in the emotions of target individuals within continuous streams of activity in narrative film (Study 1) and documentary clips (Study 2, preregistered, and Study 3 test-retest sample). This Emotion Segmentation Paradigm revealed robust and reliable individual differences across multiple metrics. We also tested the constructionist prediction that emotion labels constrain emotion inference, which is traditionally studied by introducing emotion labels. We demonstrate that individual differences in active emotion vocabulary (i.e., readily accessible emotion words) correlate with emotion segmentation performance.
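One plausible way to score such a segmentation task is to correlate an individual's binned boundary presses with the rest of the group's consensus time course. The sketch below illustrates that idea with assumed bin settings; the paper reports several metrics, not necessarily this one.

```python
import numpy as np

def agreement(presses: np.ndarray, group: np.ndarray,
              duration_s: float, bin_s: float = 1.0) -> float:
    """presses: boundary times (s) for one viewer;
    group: (viewers, n_bins) binned counts from the other viewers,
    binned with the same edges used below."""
    edges = np.arange(0.0, duration_s + bin_s, bin_s)
    individual, _ = np.histogram(presses, bins=edges)
    consensus = group.mean(axis=0)
    return float(np.corrcoef(individual, consensus)[0, 1])
```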
Affiliation(s)
- Zhimeng Li: Department of Psychology, Yale University, New Haven, CT, USA
- Hanxiao Lu: Department of Psychology, New York University, New York, NY, USA
- Di Liu: Department of Psychology, Johns Hopkins University, Baltimore, MD, USA
- Alessandra N C Yu: Nash Family Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Maria Gendron: Department of Psychology, Yale University, New Haven, CT, USA
6
Namba S, Sato W, Namba S, Nomiya H, Shimokawa K, Osumi M. Development of the RIKEN database for dynamic facial expressions with multiple angles. Sci Rep 2023; 13:21785. PMID: 38066065. PMCID: PMC10709572. DOI: 10.1038/s41598-023-49209-8.
Abstract
The development of facial expression databases with sensing information is progressing in multidisciplinary fields, such as psychology, affective computing, and cognitive science. Previous facial datasets have not simultaneously dealt with multiple theoretical views of emotion, individualized context, or multi-angle/depth information. We developed a new facial database (the RIKEN facial expression database) that includes multiple theoretical views of emotions and expressers' individualized events, with multi-angle and depth information. The RIKEN facial expression database contains recordings of 48 Japanese participants captured using ten Kinect cameras at 25 events. This study identified several valence-related facial patterns and found them consistent with previous research investigating the coherence between facial movements and internal states. This database represents an advancement toward developing new sensing systems, conducting psychological experiments, and understanding the complexity of emotional events.
Affiliation(s)
- Shushi Namba: RIKEN, Psychological Process Research Team, Guardian Robot Project, Kyoto 6190288, Japan; Department of Psychology, Hiroshima University, Hiroshima 7398524, Japan
- Wataru Sato: RIKEN, Psychological Process Research Team, Guardian Robot Project, Kyoto 6190288, Japan
- Saori Namba: Department of Psychology, Hiroshima University, Hiroshima 7398524, Japan
- Hiroki Nomiya: Faculty of Information and Human Sciences, Kyoto Institute of Technology, Kyoto 6068585, Japan
- Koh Shimokawa: KOHINATA Limited Liability Company, Osaka 5560020, Japan
- Masaki Osumi: KOHINATA Limited Liability Company, Osaka 5560020, Japan
7
Zhang M, Zhou Y, Xu X, Ren Z, Zhang Y, Liu S, Luo W. Multi-view emotional expressions dataset using 2D pose estimation. Sci Data 2023; 10:649. PMID: 37739952. PMCID: PMC10516935. DOI: 10.1038/s41597-023-02551-y.
Abstract
Human body expressions convey emotional shifts and intentions of action and, in some cases, are even more effective than other channels of emotional expression. Although many body-expression datasets incorporate motion capture, widely distributed datasets of naturalistic body expressions based on 2D video remain scarce. In this paper, therefore, we report the multi-view emotional expressions dataset (MEED) using 2D pose estimation. Twenty-two actors presented six emotional (anger, disgust, fear, happiness, sadness, surprise) and neutral body movements from three viewpoints (left, front, right). A total of 4102 videos were captured. MEED consists of the corresponding pose estimation results (i.e., 397,809 PNG files and 397,809 JSON files) and exceeds 150 GB in size. We believe this dataset will benefit research in various fields, including affective computing, human-computer interaction, social neuroscience, and psychiatry.
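A minimal sketch of reading one frame of such pose-estimation output, assuming an OpenPose-style JSON schema; the actual MEED field names may differ, so consult the dataset documentation.

```python
import json
import numpy as np

def load_pose(path: str) -> np.ndarray:
    """Return an (n_keypoints, 3) array of x, y, confidence for one frame."""
    with open(path) as f:
        frame = json.load(f)
    flat = frame["people"][0]["pose_keypoints_2d"]  # assumed schema
    return np.asarray(flat, dtype=float).reshape(-1, 3)

# Usage (hypothetical file name):
# keypoints = load_pose("actor01_anger_front_000001_keypoints.json")
```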
Affiliation(s)
- Mingming Zhang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Yanan Zhou: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Xinye Xu: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Ziwei Ren: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Yihan Zhang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Shenglan Liu: School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian 116024, Liaoning, China; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, Liaoning, China
- Wenbo Luo: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
8
Houlihan SD, Kleiman-Weiner M, Hewitt LB, Tenenbaum JB, Saxe R. Emotion prediction as computation over a generative theory of mind. Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences 2023; 381:20220047. PMID: 37271174. PMCID: PMC10239682. DOI: 10.1098/rsta.2022.0047.
Abstract
From sparse descriptions of events, observers can make systematic and nuanced predictions of what emotions the people involved will experience. We propose a formal model of emotion prediction in the context of a public high-stakes social dilemma. This model uses inverse planning to infer a person's beliefs and preferences, including social preferences for equity and for maintaining a good reputation. The model then combines these inferred mental contents with the event to compute 'appraisals': whether the situation conformed to the expectations and fulfilled the preferences. We learn functions mapping computed appraisals to emotion labels, allowing the model to match human observers' quantitative predictions of 20 emotions, including joy, relief, guilt and envy. Model comparison indicates that inferred monetary preferences are not sufficient to explain observers' emotion predictions; inferred social preferences are factored into predictions for nearly every emotion. Human observers and the model both use minimal individualizing information to adjust predictions of how different people will respond to the same event. Thus, our framework integrates inverse planning, event appraisals and emotion concepts in a single computational model to reverse-engineer people's intuitive theory of emotions. This article is part of a discussion meeting issue 'Cognitive artificial intelligence'.
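In miniature, the recipe described here is: infer mental contents via inverse planning, compute appraisals from them, then map appraisals to emotion intensities with learned weights. The toy sketch below shows only the appraisal-to-emotion mapping; the features, weights, and emotion labels are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

def appraisals(utility: float, expected_utility: float) -> np.ndarray:
    """Two simple appraisal features plus a bias term."""
    prediction_error = utility - expected_utility  # better/worse than expected
    return np.array([1.0, utility, prediction_error])

# Toy learned mapping from appraisal features to two emotion intensities
W = np.array([[0.1, 0.5, 0.9],    # "relief": driven mostly by prediction error
              [0.2, 0.8, 0.1]])   # "joy": driven mostly by realized utility

print(W @ appraisals(utility=0.7, expected_utility=0.2))
```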
Affiliation(s)
- Sean Dae Houlihan: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Max Kleiman-Weiner: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Psychology, Harvard University, Cambridge, MA, USA
- Luke B. Hewitt: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Joshua B. Tenenbaum: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Rebecca Saxe: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
9
Ortega J, Chen Z, Whitney D. Inferential Emotion Tracking reveals impaired context-based emotion processing in individuals with high Autism Quotient scores. Sci Rep 2023; 13:8093. PMID: 37208368. DOI: 10.1038/s41598-023-35371-6.
Abstract
Emotion perception is essential for successful social interactions and maintaining long-term relationships with friends and family. Individuals with autism spectrum disorder (ASD) experience social communication deficits and have reported difficulties in facial expression recognition. However, emotion recognition depends on more than just processing facial expression; context is critically important to correctly infer the emotions of others. Whether context-based emotion processing is impacted in ASD remains unclear. Here, we used a recently developed context-based emotion perception task, called Inferential Emotion Tracking (IET), and investigated whether individuals who scored high on the Autism Spectrum Quotient (AQ) had deficits in context-based emotion perception. Using 34 videos (including Hollywood movies, home videos, and documentaries), we tested 102 participants as they continuously tracked the affect (valence and arousal) of a blurred-out, invisible character. We found that individual differences in AQ scores were more strongly correlated with IET task accuracy than with traditional face emotion perception tasks. This correlation remained significant even when controlling for potential covarying factors, general intelligence, and performance on traditional face perception tasks. These findings suggest that individuals with ASD may have impaired perception of contextual information; they also reveal the importance of developing ecologically relevant emotion perception tasks to better assess and treat ASD, and they provide a new direction for further research on context-based emotion perception deficits in ASD.
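The control analysis described here can be implemented as a partial correlation by residualizing both variables on the covariates. A minimal sketch, with hypothetical variable names:

```python
import numpy as np

def partial_corr(x: np.ndarray, y: np.ndarray, covars: np.ndarray) -> float:
    """Correlate x and y after regressing out covars (shape (n,) or (n, m))."""
    Z = np.column_stack([np.ones(len(x)), covars])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

# Usage (simulated stand-ins for AQ, IET accuracy, and face-task scores):
rng = np.random.default_rng(0)
aq, iet, face = rng.normal(size=(3, 102))
print(partial_corr(aq, iet, face))
```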
Affiliation(s)
- Jefferson Ortega: Department of Psychology, University of California, Berkeley, CA 94720, USA
- Zhimin Chen: Department of Psychology, University of California, Berkeley, CA 94720, USA
- David Whitney: Department of Psychology, University of California, Berkeley, CA 94720, USA; Vision Science Program, University of California, Berkeley, CA 94720, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94720, USA
10
Klingner CM, Guntinas-Lichius O. Facial expression and emotion. Laryngorhinootologie 2023; 102:S115-S125. PMID: 37130535. PMCID: PMC10171334. DOI: 10.1055/a-2003-5687.
Abstract
Human facial expressions are unique in their ability to express our emotions and communicate them to others. The mimic expression of basic emotions is very similar across different cultures and also has many features in common with other mammals. This suggests a common genetic origin of the association between facial expressions and emotion. However, recent studies also show cultural influences and differences. The recognition of emotions from facial expressions, as well as the process of expressing one's emotions facially, occurs within an extremely complex cerebral network. Due to the complexity of the cerebral processing system, a variety of neurological and psychiatric disorders can significantly disrupt the coupling of facial expressions and emotions. Wearing masks also limits our ability to convey and recognize emotions through facial expressions. Facial expressions, however, can convey not only "real" emotions but also acted ones. Thus, facial expressions open up the possibility of faking socially desired expressions and of consciously faking emotions. However, these pretenses are mostly imperfect and can be accompanied by short-term facial movements that indicate the emotions actually present (microexpressions). These microexpressions are of very short duration and often barely perceptible to humans, but they are an ideal application area for computer-aided analysis. The automatic identification of microexpressions has not only received scientific attention in recent years; its use is also being tested in security-related areas. This article summarizes the current state of knowledge of facial expressions and emotions.
Affiliation(s)
- Carsten M Klingner: Hans Berger Department of Neurology, Jena University Hospital, Germany; Biomagnetic Center, Jena University Hospital, Germany
11
Irwantoro K, Nimsha Nilakshi Lennon N, Mareschal I, Miflah Hussain Ismail A. Contextualising facial expressions: The effect of temporal context and individual differences on classification. Q J Exp Psychol (Hove) 2023; 76:450-459. PMID: 35360991. PMCID: PMC9896254. DOI: 10.1177/17470218221094296.
Abstract
The influence of context on facial expression classification is most often investigated using simple cues in static faces portraying basic expressions at a fixed emotional intensity. We examined (1) whether a perceptually rich, dynamic audiovisual context, presented in the form of movie clips (to achieve closer resemblance to real life), affected the subsequent classification of dynamic basic (happy) and non-basic (sarcastic) facial expressions and (2) whether people's susceptibility to contextual cues was related to their ability to classify facial expressions viewed in isolation. Participants classified facial expressions (gradually progressing from neutral to happy/sarcastic in increasing intensity) that followed the movie clips. Classification was more accurate and faster when the preceding context predicted the upcoming expression than when it did not. Speeded classifications suggested that predictive contexts reduced the emotional intensity required for accurate classification. More importantly, we show for the first time that participants' accuracy in classifying expressions without an informative context correlated with the magnitude of the contextual effects they experienced: poor classifiers of isolated expressions were more susceptible to a predictive context. Our findings support the emerging view that contextual cues and individual differences must be considered when explaining the mechanisms underlying facial expression classification.
Affiliation(s)
- Kinenoita Irwantoro: School of Psychology, University of Nottingham Malaysia, Semenyih, Malaysia
- Isabelle Mareschal: School of Biological and Behavioural Sciences, Queen Mary University of London, London, UK
12
Biró B, Cserjési R, Kocsel N, Galambos A, Gecse K, Kovács LN, Baksa D, Juhász G, Kökönyei G. The neural correlates of context driven changes in the emotional response: An fMRI study. PLoS One 2022; 17:e0279823. PMID: 36584048. PMCID: PMC9803168. DOI: 10.1371/journal.pone.0279823.
Abstract
Emotional flexibility reflects the ability to adjust the emotional response to the changing environmental context. To understand how context can trigger a change in emotional response, i.e., how it can upregulate the initial emotional response or trigger a shift in the valence of emotional response, we used a task consisting of picture pairs during functional magnetic resonance imaging sessions. In each pair, the first picture was a smaller detail (a decontextualized photograph depicting emotions using primarily facial and postural expressions) from the second (contextualized) picture, and the neural response to a decontextualized picture was compared with the same picture in a context. Thirty-one healthy participants (18 females; mean age: 24.44 ± 3.4) were involved in the study. In general, context (vs. pictures without context) increased activation in areas involved in facial emotional processing (e.g., middle temporal gyrus, fusiform gyrus, and temporal pole) and affective mentalizing (e.g., precuneus, temporoparietal junction). After excluding the general effect of context by using an exclusive mask with activation to context vs. no-context, the automatic shift from positive to negative valence induced by the context was associated with increased activation in the thalamus, caudate, medial frontal gyrus and lateral orbitofrontal cortex. When the meaning changed from negative to positive, it resulted in a less widespread activation pattern, mainly in the precuneus, middle temporal gyrus, and occipital lobe. Providing context cues to facial information recruited brain areas that induced changes in the emotional responses and interpretation of the emotional situations automatically to support emotional flexibility.
Affiliation(s)
- Brigitte Biró: NAP3.0-SE Neuropsychopharmacology Research Group, Hungarian Brain Research Program, Semmelweis University, Budapest, Hungary; Doctoral School of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary; Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Renáta Cserjési: Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Natália Kocsel: Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Attila Galambos: Doctoral School of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary; Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Kinga Gecse: NAP3.0-SE Neuropsychopharmacology Research Group, Hungarian Brain Research Program, Semmelweis University, Budapest, Hungary; Faculty of Pharmacy, Department of Pharmacodynamics, Semmelweis University, Budapest, Hungary
- Lilla Nóra Kovács: Doctoral School of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary; Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Dániel Baksa: NAP3.0-SE Neuropsychopharmacology Research Group, Hungarian Brain Research Program, Semmelweis University, Budapest, Hungary; Faculty of Pharmacy, Department of Pharmacodynamics, Semmelweis University, Budapest, Hungary
- Gabriella Juhász: NAP3.0-SE Neuropsychopharmacology Research Group, Hungarian Brain Research Program, Semmelweis University, Budapest, Hungary; Faculty of Pharmacy, Department of Pharmacodynamics, Semmelweis University, Budapest, Hungary
- Gyöngyi Kökönyei: NAP3.0-SE Neuropsychopharmacology Research Group, Hungarian Brain Research Program, Semmelweis University, Budapest, Hungary; Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary; Faculty of Pharmacy, Department of Pharmacodynamics, Semmelweis University, Budapest, Hungary
13
Chen Y, Xu Q, Fan C, Wang Y, Jiang Y. Eye gaze direction modulates nonconscious affective contextual effect. Conscious Cogn 2022; 102:103336. DOI: 10.1016/j.concog.2022.103336.
14
Faustmann LL, Eckhardt L, Hamann PS, Altgassen M. The Effects of Separate Facial Areas on Emotion Recognition in Different Adult Age Groups: A Laboratory and a Naturalistic Study. Front Psychol 2022; 13:859464. PMID: 35846682. PMCID: PMC9281501. DOI: 10.3389/fpsyg.2022.859464.
Abstract
The identification of facial expressions is critical for social interaction. The ability to recognize facial emotional expressions declines with age, and these age effects have been associated with differential age-related looking patterns. The present research project set out to systematically test the role of specific facial areas for emotion recognition across the adult lifespan. Study 1 investigated the impact of displaying only separate facial areas versus the full face on emotion recognition in 62 younger (20-24 years) and 65 middle-aged adults (40-65 years). Study 2 examined whether wearing face masks differentially compromises younger (18-33 years, N = 71) versus middle-aged to older adults' (51-83 years, N = 73) ability to identify different emotional expressions. Results of Study 1 suggested no general decrease in emotion recognition across the lifespan; instead, age-related performance seems to depend on the specific emotion and presented face area. Similarly, Study 2 observed deficits only in the identification of angry, fearful, and neutral expressions in older adults, but no age-related differences with regard to happy, sad, and disgusted expressions. Overall, face masks reduced participants' emotion recognition; however, there were no differential age effects. Results are discussed in light of current models of age-related changes in emotion recognition.
Affiliation(s)
- Mareike Altgassen: Department of Psychology, Johannes Gutenberg University Mainz, Mainz, Germany
15
Monteith S, Glenn T, Geddes J, Whybrow PC, Bauer M. Commercial Use of Emotion Artificial Intelligence (AI): Implications for Psychiatry. Curr Psychiatry Rep 2022; 24:203-211. PMID: 35212918. DOI: 10.1007/s11920-022-01330-7.
Abstract
Purpose of review: Emotion artificial intelligence (AI) is technology for emotion detection and recognition. Emotion AI is expanding rapidly in commercial and government settings outside of medicine, and will increasingly become a routine part of daily life. The goal of this narrative review is to increase awareness both of the widespread use of emotion AI, and of the concerns with commercial use of emotion AI in relation to people with mental illness.
Recent findings: This paper discusses emotion AI fundamentals, a general overview of commercial emotion AI outside of medicine, and examples of the use of emotion AI in employee hiring and workplace monitoring. The successful re-integration of patients with mental illness into society must recognize the increasing commercial use of emotion AI. There are concerns that commercial use of emotion AI will increase stigma and discrimination, and have negative consequences in daily life for people with mental illness. Commercial emotion AI algorithm predictions about mental illness should not be treated as medical fact.
Affiliation(s)
- Scott Monteith: Michigan State University College of Human Medicine, Traverse City Campus, 1400 Medical Campus Drive, Traverse City, MI 49684, USA
- Tasha Glenn: ChronoRecord Association, Fullerton, CA, USA
- John Geddes: Department of Psychiatry, University of Oxford, Warneford Hospital, Oxford, UK
- Peter C Whybrow: Department of Psychiatry and Biobehavioral Sciences, Semel Institute for Neuroscience and Human Behavior, University of California Los Angeles (UCLA), Los Angeles, CA, USA
- Michael Bauer: Department of Psychiatry and Psychotherapy, University Hospital Carl Gustav Carus Medical Faculty, Technische Universität Dresden, Dresden, Germany
16
Diagnosis of Depressive Disorder Model on Facial Expression Based on Fast R-CNN. Diagnostics (Basel) 2022; 12:317. PMID: 35204407. PMCID: PMC8871079. DOI: 10.3390/diagnostics12020317.
Abstract
This study examines related literature to propose a model based on artificial intelligence (AI) that can assist in the diagnosis of depressive disorder. Depressive disorder can be diagnosed through a self-report questionnaire, but it is necessary to check the mood and confirm the consistency of subjective and objective descriptions. Smartphone-based assistance in diagnosing depressive disorders can quickly lead to their identification and provide data for intervention provision. Using Fast R-CNN (fast region-based convolutional neural network), a deep learning method that recognizes vector-based information, a model to assist in the diagnosis of depressive disorder can be devised by tracking positional changes of the eyes and lips and estimating emotions from photographs accumulated as participants repeatedly take part in the diagnostic procedure.
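As a hedged illustration of the building block named here: torchvision ships Faster R-CNN, the direct successor of Fast R-CNN, which is used below as a stand-in detector that could supply the regions on which the proposed eye- and lip-position tracking would operate. The weights choice and confidence cutoff are illustrative assumptions, not the paper's configuration.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
image = torch.rand(3, 480, 640)          # stand-in for one smartphone frame, RGB in [0, 1]
with torch.no_grad():
    out = model([image])[0]              # dict with "boxes", "labels", "scores"
keep = out["scores"] > 0.8               # assumed confidence cutoff
print(out["boxes"][keep])                # candidate regions for downstream tracking
```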
17
Reschke PJ, Walle EA. The Unique and Interactive Effects of Faces, Postures, and Scenes on Emotion Categorization. Affective Science 2021; 2:468-483. PMID: 36046211. PMCID: PMC9382938. DOI: 10.1007/s42761-021-00061-x.
Abstract
There is ongoing debate as to whether emotion perception is determined by facial expressions or context (i.e., non-facial cues). The present investigation examined the independent and interactive effects of six emotions (anger, disgust, fear, joy, sadness, neutral) conveyed by combinations of facial expressions, bodily postures, and background scenes in a fully crossed design. Participants viewed each face-posture-scene (FPS) combination for 5 s and were then asked to categorize the emotion depicted in the image. Four key findings emerged from the analyses: (1) for fully incongruent FPS combinations, participants categorized images using the face in 61% of instances and the posture and scene in 18% and 11% of instances, respectively; (2) postures (with neutral scenes) and scenes (with neutral postures) exerted differential influences on emotion categorizations when combined with incongruent facial expressions; (3) contextual asymmetries were observed for some incongruent face-posture pairings and their inverse (e.g., anger-fear vs. fear-anger), but not for face-scene pairings; (4) finally, scenes exhibited a boosting effect of posture when combined with a congruent posture and attenuated the effect of posture when combined with a congruent face. Overall, these findings highlight independent and interactional roles of posture and scene in emotion face perception. Theoretical implications for the study of emotions in context are discussed.
Supplementary information: The online version contains supplementary material available at 10.1007/s42761-021-00061-x.
Affiliation(s)
- Peter J. Reschke: School of Family Life, Brigham Young University, Provo, UT 84602, USA
18
Fountain JE. The moon, the ghetto and artificial intelligence: Reducing systemic racism in computational algorithms. Government Information Quarterly 2021. DOI: 10.1016/j.giq.2021.101645.
19
Garcia-Cairasco N, Podolsky-Gondim G, Tejada J. Searching for a paradigm shift in the research on the epilepsies and associated neuropsychiatric comorbidities. From ancient historical knowledge to the challenge of contemporary systems complexity and emergent functions. Epilepsy Behav 2021; 121:107930. PMID: 33836959. DOI: 10.1016/j.yebeh.2021.107930.
Abstract
In this review, we will discuss in four scenarios our challenges to offer possible solutions for the puzzle associated with the epilepsies and neuropsychiatric comorbidities. We need to recognize that (1) since quite old times, human wisdom was linked to the plural (distinct global places/cultures) perception of the Universe we are in, with deep respect for earth and nature. Plural ancestral knowledge was added with the scientific methods; however, their joint efforts are the ideal scenario; (2) human behavior is not different than animal behavior, in essence the product of Darwinian natural selection; knowledge of animal and human behavior are complementary; (3) the expression of human behavior follows the same rules that complex systems with emergent properties, therefore, we can measure events in human, clinical, neurobiological situations with complexity systems' tools; (4) we can use the semiology of epilepsies and comorbidities, their neural substrates, and potential treatments (including experimental/computational modeling, neurosurgical interventions), as a source and collection of integrated big data to predict with them (e.g.: machine/deep learning) diagnosis/prognosis, individualized solutions (precision medicine), basic underlying mechanisms and molecular targets. Once the group of symptoms/signals (with a myriad of changing definitions and interpretations over time) and their specific sequences are determined, in epileptology research and clinical settings, the use of modern and contemporary techniques such as neuroanatomical maps, surface electroencephalogram and stereoelectroencephalography (SEEG) and imaging (MRI, BOLD, DTI, SPECT/PET), neuropsychological testing, among others, are auxiliary in the determination of the best electroclinical hypothesis, and help design a specific treatment, usually as the first attempt, with available pharmacological resources. On top of ancient knowledge, currently known and potentially new antiepileptic drugs, alternative treatments and mechanisms are usually produced as a consequence of the hard, multidisciplinary, and integrated studies of clinicians, surgeons, and basic scientists, all over the world. The existence of pharmacoresistant patients, calls for search of other solutions, being along the decades the surgeries the most common interventions, such as resective procedures (i.e., selective or standard lobectomy, lesionectomy), callosotomy, hemispherectomy and hemispherotomy, added by vagus nerve stimulation (VNS), deep brain stimulation (DBS), neuromodulation, and more recently focal minimal or noninvasive ablation. What is critical when we consider the pharmacoresistance aspect with the potential solution through surgery, is still the pursuit of localization-dependent regions (e.g.: epileptogenic zone (EZ)), in order to decide, no matter how sophisticated are the brain mapping tools (EEG and MRI), the size and location of the tissue to be removed. Mimicking the semiology and studying potential neural mechanisms and molecular targets - by means of experimental and computational modeling - are fundamental steps of the whole process. 
Concluding, with the conjunction of ancient knowledge, coupled with critical and creative contemporary scientific (not dogmatic) clinical/surgical and experimental/computational contributions, a better world and improved quality of life can be offered to people with epilepsy and neuropsychiatric comorbidities, who are still waiting (as are the scientists) for a paradigm shift in epileptology in the basic science, computational, clinical, and neurosurgical arenas. This article is part of the Special Issue "NEWroscience 2018".
Affiliation(s)
- Norberto Garcia-Cairasco: Laboratório de Neurofisiologia e Neuroetologia Experimental, Departamento de Fisiologia, Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo, Ribeirão Preto, Brazil; Departamento de Neurociências e Ciências do Comportamento, Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo, Ribeirão Preto, Brazil
- Guilherme Podolsky-Gondim: Departamento de Neurociências e Ciências do Comportamento, Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo, Ribeirão Preto, Brazil
- Julian Tejada: Departamento de Psicologia, Universidade Federal de Sergipe, Brazil
20
Chen Z, Whitney D. Inferential affective tracking reveals the remarkable speed of context-based emotion perception. Cognition 2020; 208:104549. PMID: 33340812. DOI: 10.1016/j.cognition.2020.104549.
Abstract
Understanding the emotional states of others is important for social functioning. Recent studies show that context plays an essential role in emotion recognition. However, it remains unclear whether emotion inference from visual scene context is as efficient as emotion recognition from faces. Here, we measured the speed of context-based emotion perception, using Inferential Affective Tracking (IAT) with naturalistic and dynamic videos. Using cross-correlation analyses, we found that inferring affect based on visual context alone is just as fast as tracking affect with all available information including face and body. We further demonstrated that this approach has high precision and sensitivity to sub-second lags. Our results suggest that emotion recognition from dynamic contextual information might be automatic and immediate. Seemingly complex context-based emotion perception is far more efficient than previously assumed.
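The lag analysis described here rests on locating the peak of a cross-correlation between two affect time series (e.g., context-only inferences versus fully informed ratings). A minimal sketch, with an assumed 30 Hz rating rate:

```python
import numpy as np

def peak_lag(a: np.ndarray, b: np.ndarray, fs: float) -> float:
    """Lag (s) at the cross-correlation peak; positive means a lags b."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    xc = np.correlate(a, b, mode="full")
    lag_samples = xc.argmax() - (len(b) - 1)
    return lag_samples / fs

fs = 30.0                      # assumed resampled rating rate
t = np.arange(0, 60, 1 / fs)
truth = np.sin(0.2 * np.pi * t)
inferred = np.roll(truth, 9)   # copy delayed by 9 samples = 0.3 s
print(peak_lag(inferred, truth, fs))  # ~0.3
```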
Affiliation(s)
- Zhimin Chen: Department of Psychology, University of California, Berkeley, CA 94720, USA
- David Whitney: Department of Psychology, University of California, Berkeley, CA 94720, USA; Vision Science Program, University of California, Berkeley, CA 94720, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94720, USA
21
Namba S, Rychlowska M, Orlowska A, Aviezer H, Krumhuber EG. Social context and culture influence judgments of non-Duchenne smiles. Journal of Cultural Cognitive Science 2020. DOI: 10.1007/s41809-020-00066-1.
Abstract
Extant evidence points toward the role of contextual information and related cross-cultural variations in emotion perception, but most of the work to date has focused on judgments of basic emotions. The current research examines how culture and situational context affect the interpretation of emotion displays, i.e. judgments of the extent to which ambiguous smiles communicate happiness versus polite intentions. We hypothesized that smiles associated with contexts implying happiness would be judged as conveying more positive feelings compared to smiles paired with contexts implying politeness or smiles presented without context. In line with existing research on cross-cultural variation in contextual influences, we also expected these effects to be larger in Japan than in the UK. In Study 1, British participants viewed non-Duchenne smiles presented on their own or paired with background scenes implying happiness or the need to be polite. Compared to face-only stimuli, happy contexts made smiles appear more genuine, whereas polite contexts led smiles to be seen as less genuine. Study 2 replicated this result using verbal vignettes, showing a similar pattern of contextual effects among British and Japanese participants. However, while the effect of vignettes describing happy situations was comparable in both cultures, the influence of vignettes describing polite situations was stronger in Japan than in the UK. Together, the findings document the importance of context information in judging smile expressions and highlight the need to investigate how culture moderates such influences.
22
Zhang M, Yu L, Zhang K, Du B, Zhan B, Chen S, Jiang X, Guo S, Zhao J, Wang Y, Wang B, Liu S, Luo W. Kinematic dataset of actors expressing emotions. Sci Data 2020; 7:292. PMID: 32901035. PMCID: PMC7478954. DOI: 10.1038/s41597-020-00635-7.
Abstract
Human body movements can convey a variety of emotions and even create advantages in some special life situations. However, how emotion is encoded in body movements has remained unclear. One reason is the lack of a public human body kinematic dataset covering the expression of various emotions. Therefore, we aimed to produce a comprehensive dataset to assist in recognizing cues from all parts of the body that indicate six basic emotions (happiness, sadness, anger, fear, disgust, surprise) and neutral expression. The present dataset was created using a portable wireless motion capture system. Twenty-two semi-professional actors (half male) completed performances according to standardized guidance and preferred daily events. A total of 1402 recordings at 125 Hz were collected, consisting of the position and rotation data of 72 anatomical nodes. To our knowledge, this is now the largest emotional kinematic dataset of the human body. We hope this dataset will contribute to multiple fields of research and practice, including social neuroscience, psychiatry, computer vision, and biometric and information forensics.
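A typical first step with such recordings is computing per-node speed from the 125 Hz position traces. A minimal sketch, assuming a (frames, nodes, xyz) array layout; the published file format may differ.

```python
import numpy as np

FS = 125.0  # sampling rate reported for the dataset

def node_speed(pos: np.ndarray) -> np.ndarray:
    """pos: (frames, 72, 3) positions -> (frames - 1, 72) speeds in units/s."""
    return np.linalg.norm(np.diff(pos, axis=0), axis=2) * FS

rec = np.cumsum(np.random.randn(500, 72, 3), axis=0) * 0.001  # toy random walk
print(node_speed(rec).mean(axis=0).shape)  # mean speed per node: (72,)
```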
Affiliation(s)
- Mingming Zhang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, Liaoning, China
- Lu Yu: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, Liaoning, China
- Keye Zhang: School of Social and Behavioral Sciences, Nanjing University, Nanjing 210023, Jiangsu, China
- Bixuan Du: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, Liaoning, China
- Bin Zhan: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, Liaoning, China
- Shaohua Chen: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, Liaoning, China
- Xiuhao Jiang: School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian 116024, Liaoning, China; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, Liaoning, China
- Shuai Guo: School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian 116024, Liaoning, China; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, Liaoning, China
- Jiafeng Zhao: School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian 116024, Liaoning, China; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, Liaoning, China
- Yang Wang: School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian 116024, Liaoning, China; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, Liaoning, China
- Bin Wang: School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian 116024, Liaoning, China; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, Liaoning, China
- Shenglan Liu: School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian 116024, Liaoning, China; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, Liaoning, China
- Wenbo Luo: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, Liaoning, China
23
Alwis Y, Haberman JM. Emotional judgments of scenes are influenced by unintentional averaging. Cognitive Research: Principles and Implications 2020; 5:28. PMID: 32529469. PMCID: PMC7290017. DOI: 10.1186/s41235-020-00228-3.
Abstract
Background: The visual system uses ensemble perception to summarize visual input across a variety of domains. This heuristic operates at multiple levels of vision, compressing information as basic as oriented lines or as complex as emotional faces. Given its pervasiveness, the ensemble unsurprisingly can influence how an individual item is perceived, and vice versa.
Methods: In the current experiments, we tested whether the perceived emotional valence of a single scene could be influenced by surrounding, simultaneously presented scenes. Observers first rated the emotional valence of a series of individual scenes. They then saw ensembles of the original images, presented in sets of four, and were cued to rate one of the four for a second time.
Results: The perceived emotional valence of the cued image was pulled toward the mean emotion of the surrounding ensemble on the majority of trials, even though the ensemble was task-irrelevant. Control experiments and analyses confirmed that the pull was driven by high-level, ensemble information.
Conclusion: We conclude that high-level ensemble information can influence how we perceive individual items in a crowd, even when working memory demands are low and the ensemble information is not directly task-relevant.
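One way to quantify the reported pull is to regress the rating change (second minus first viewing) on the distance between the ensemble mean and the first rating; a slope above zero indicates attraction toward the mean. The sketch below uses simulated data for illustration, not the paper's dataset.

```python
import numpy as np

rng = np.random.default_rng(1)
first = rng.uniform(-1, 1, 200)            # initial valence ratings
ens_mean = rng.uniform(-1, 1, 200)         # mean valence of the surrounding set
second = first + 0.3 * (ens_mean - first) + rng.normal(0, 0.1, 200)

x = ens_mean - first                       # how far the ensemble sat from rating 1
slope = np.polyfit(x, second - first, 1)[0]
print(round(slope, 2))                     # ~0.30: pull toward the ensemble mean
```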
Affiliation(s)
- Yavin Alwis: Department of Psychology, Rhodes College, 2000 N Parkway, Memphis, TN, USA
- Jason M Haberman: Department of Psychology, Rhodes College, 2000 N Parkway, Memphis, TN, USA
24
25
Fan X, Wang F, Shao H, Zhang P, He S. The bottom-up and top-down processing of faces in the human occipitotemporal cortex. eLife 2020; 9:e48764. PMID: 31934855. PMCID: PMC7000216. DOI: 10.7554/eLife.48764.
Abstract
Although face processing has been studied extensively, the dynamics of how face-selective cortical areas are engaged remains unclear. Here, we uncovered the timing of activation in core face-selective regions using functional Magnetic Resonance Imaging and Magnetoencephalography in humans. Processing of normal faces started in the posterior occipital areas and then proceeded to anterior regions. This bottom-up processing sequence was also observed even when internal facial features were misarranged. However, processing of two-tone Mooney faces lacking explicit prototypical facial features engaged top-down projection from the right posterior fusiform face area to right occipital face area. Further, face-specific responses elicited by contextual cues alone emerged simultaneously in the right ventral face-selective regions, suggesting parallel contextual facilitation. Together, our findings chronicle the precise timing of bottom-up, top-down, as well as context-facilitated processing sequences in the occipital-temporal face network, highlighting the importance of the top-down operations especially when faced with incomplete or ambiguous input.
Collapse
Affiliation(s)
- Xiaoxu Fan
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
| | - Fan Wang
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
| | - Hanyu Shao
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
| | - Peng Zhang
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
| | - Sheng He
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Minnesota, Minneapolis, United States
| |
Collapse
|
26
|
Olenina AH, Amazeen EL, Eckard B, Papenfuss J. Embodied Cognition in Performance: The Impact of Michael Chekhov's Acting Exercises on Affect and Height Perception. Front Psychol 2019; 10:2277. [PMID: 31649594 PMCID: PMC6794455 DOI: 10.3389/fpsyg.2019.02277] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2019] [Accepted: 09/23/2019] [Indexed: 11/13/2022] Open
Abstract
Modern embodied approaches to cognitive science overlap with ideas long explored in theater. Performance coaches such as Michael Chekhov have emphasized proprioceptive awareness of movement as a path to attaining psychological states relevant for embodying characters and inhabiting fictional spaces. Yet the psychology of performance remains scientifically understudied. The experiments presented in this paper investigated the effects of three sets of exercises adapted from Chekhov's influential techniques for actor training. Following continuous physical demonstration and verbal prompts by the actress Bonnie Eckard, 29 participants enacted neutral, expanding, and contracting gestures and attitudes in space. After each set of exercises, the participants' affect (pleasantness and arousal) and self-perceived height were measured. Within the limitations of the study, the exercises had a significant impact on affect: pleasantness increased by 50% after 15 min of expanding exercises, and arousal increased by 15% after 15 min of contracting exercises, each relative to the other exercise. Although the exercises produced statistically non-significant changes in perceived height, there was a significant relation between perceived height and affect: perceived height increased with increases in either pleasantness or arousal. These findings provide preliminary support for Chekhov's intuition that expanding and contracting physical actions exert opposite effects on practitioners' psychological experience. Further studies are needed to consider a wider range of factors at work in Chekhov's method and in the embodied experience of acting more generally.
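The reported height-affect relation is, in essence, a multiple regression of perceived height on pleasantness and arousal. The sketch below fits that model to synthetic stand-in data; the variable ranges and effect sizes are invented for illustration and are not taken from the study.

```python
import numpy as np

# Synthetic stand-in data for n = 29 participants; the ranges and
# effect sizes below are invented for illustration only.
rng = np.random.default_rng(0)
n = 29
pleasantness = rng.uniform(1, 9, n)
arousal = rng.uniform(1, 9, n)
height_cm = 170 + 0.5 * pleasantness + 0.3 * arousal + rng.normal(0, 1, n)

# Ordinary least squares: perceived height ~ pleasantness + arousal.
X = np.column_stack([np.ones(n), pleasantness, arousal])
beta, *_ = np.linalg.lstsq(X, height_cm, rcond=None)
print(beta)  # intercept plus positive slopes for both affect measures
```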
Collapse
Affiliation(s)
- Ana Hedberg Olenina
- School of International Letters and Cultures, Arizona State University, Tempe, AZ, United States
| | - Eric L. Amazeen
- Department of Psychology, Arizona State University, Tempe, AZ, United States
| | - Bonnie Eckard
- Herberger Institute for Design and the Arts, Arizona State University, Tempe, AZ, United States
| | - Jason Papenfuss
- School of Sustainability, Arizona State University, Tempe, AZ, United States
| |
Collapse
|
27
|
Cowen A, Sauter D, Tracy JL, Keltner D. Mapping the Passions: Toward a High-Dimensional Taxonomy of Emotional Experience and Expression. Psychol Sci Public Interest 2019; 20:69-90. [PMID: 31313637 PMCID: PMC6675572 DOI: 10.1177/1529100619850176] [Citation(s) in RCA: 51] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
What would a comprehensive atlas of human emotions include? For 50 years, scientists have sought to map emotion-related experience, expression, physiology, and recognition in terms of the "basic six": anger, disgust, fear, happiness, sadness, and surprise. Claims about the relationships between these six emotions and prototypical facial configurations have provided the basis for a long-standing debate over the diagnostic value of expression (for a review and the latest installment in this debate, see Barrett et al., p. 1). Building on recent empirical findings and methodologies, we offer an alternative conceptual and methodological approach that reveals a richer taxonomy of emotion. Dozens of distinct varieties of emotion are reliably distinguished by language, evoked in distinct circumstances, and perceived in distinct expressions of the face, body, and voice. Traditional models, both the basic six and the affective-circumplex model (valence and arousal), capture only a fraction of the systematic variability in emotional response. In contrast, emotion-related responses (e.g., the smile of embarrassment, triumphant postures, sympathetic vocalizations, blends of distinct expressions) can be explained by richer models of emotion. Given these developments, we discuss why tests of a basic-six model of emotion are not tests of the diagnostic value of facial expression more generally. Determining the full extent of what facial expressions can tell us, marginally and in conjunction with other behavioral and contextual cues, will require mapping the high-dimensional, continuous space of facial, bodily, and vocal signals onto richly multifaceted experiences using large-scale statistical modeling and machine-learning methods.
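The proposed mapping approach rests on estimating how many dimensions underlie large rating matrices. Below is a toy Python sketch of that step, using PCA via the singular value decomposition on synthetic ratings; the category count and latent dimensionality are illustrative assumptions, not results from the paper.

```python
import numpy as np

def n_dimensions(ratings, var_threshold=0.95):
    """Count the principal components needed to explain the given
    fraction of variance in a stimuli x categories rating matrix.
    A toy stand-in for the large-scale modeling the review calls for."""
    centered = ratings - ratings.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)
    explained = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(explained, var_threshold)) + 1

# Synthetic ratings: 200 stimuli x 28 emotion categories built from
# 10 latent dimensions plus noise (all numbers are illustrative).
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 10))
loadings = rng.normal(size=(10, 28))
ratings = latent @ loadings + 0.1 * rng.normal(size=(200, 28))
print(n_dimensions(ratings))  # close to 10
```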
Collapse
Affiliation(s)
- Alan Cowen
- Department of Psychology, University of California, Berkeley
| | - Disa Sauter
- Faculty of Social and Behavioural Sciences, University of Amsterdam
| | | | - Dacher Keltner
- Department of Psychology, University of California, Berkeley
| |
Collapse
|