1
He S, Zhao R, Li C, Hui L, Dong S, Xu C, Cui L. Scene effects on disgusted facial expression detection in individuals with social anxiety: The role of emotional intensity. Biol Psychol 2024; 192:108863. [PMID: 39270922] [DOI: 10.1016/j.biopsycho.2024.108863]
Abstract
Individuals with high social anxiety (HSA) typically have difficulty identifying threatening stimuli of varying intensity across different social scenes, which ultimately affects their social interactions. However, it is not well understood how social scenes, emotional intensity, and their interaction influence the recognition of threat stimuli among HSA individuals (HSAs). To address this issue, a face recognition task was administered to 20 HSA participants and 22 individuals with low social anxiety (LSA). Results indicated that during the scene presentation stage, HSAs produced larger P2 amplitudes than LSA individuals (LSAs) regardless of scene valence. During the face recognition stage, HSAs showed smaller N170 amplitudes than LSAs and recognized 2% disgusted faces faster than LSAs did. Furthermore, consistency between scenes and faces led to faster recognition of disgusted faces in HSAs, but not in LSAs. Our findings suggest that HSAs exhibit a distinctive pattern of cognitive processing in social scenes, marked by increased attention to scenes and decreased attention to faces. In addition, emotional congruence between scene and face facilitated face recognition in HSAs.
Affiliation(s)
- Siyu He, Beijing Key Laboratory of Learning and Cognition and School of Psychology, Capital Normal University, Beijing, PR China
- Ruonan Zhao, Beijing Key Laboratory of Learning and Cognition and School of Psychology, Capital Normal University, Beijing, PR China
- Chieh Li, Department of Applied Psychology, Northeastern University, Boston, MA, USA
- Lihong Hui, Beijing Key Laboratory of Learning and Cognition and School of Psychology, Capital Normal University, Beijing, PR China
- Shubo Dong, Beijing Key Laboratory of Learning and Cognition and School of Psychology, Capital Normal University, Beijing, PR China
- Cai Xu, Beijing Key Laboratory of Learning and Cognition and School of Psychology, Capital Normal University, Beijing, PR China
- Lixia Cui, Beijing Key Laboratory of Learning and Cognition and School of Psychology, Capital Normal University, Beijing, PR China
2
Wang J, Jia Y, Shao X, Wang C, Wang W. Pure Emotion-loaded Materials in the International Affective Digitized Sounds (IADS): A Study on Intensity Ratings in Chinese University Students. Curr Psychiatry Res Rev 2019. [DOI: 10.2174/1573400515666190822110933]
Abstract
Background: Materials loaded with pure emotion are essential for basic and clinical research on sounds. The International Affective Digitized Sounds (IADS) is one of the most widely used emotional tools, but its materials are not clearly labeled with specific emotions. We hypothesized that the IADS contains pure vectors of at least disgust, erotica (or erotism), fear, happiness, sadness, and neutral emotions.
Methods: We selected 48 IADS sounds with saturated emotions and invited 271 male and 353 female university students to rate the intensity of the emotions conveyed in each sound. The ratings were then analyzed with exploratory and confirmatory factor analyses.
Results: Five factors were observed: erotica, fear-sadness, happiness, neutrality, and disgust. The fear-sadness sounds were later separated into two facets. Thirty sounds under six facets were finally retained, with good model-fit indices and satisfactory factor internal reliabilities. Moreover, males scored significantly higher on erotica than females did.
Conclusion: Our study purified a series of emotion-loaded IADS sounds, which might help clarify the pure effects of sound emotion in future research and the clinical management of affective disorders.
Affiliation(s)
- Jiawei Wang, Department of Clinical Psychology and Psychiatry, School of Public Health, Zhejiang University College of Medicine, Hangzhou, China
- Yanli Jia, Department of Clinical Psychology and Psychiatry, School of Public Health, Zhejiang University College of Medicine, Hangzhou, China
- Xu Shao, Department of Clinical Psychology and Psychiatry, School of Public Health, Zhejiang University College of Medicine, Hangzhou, China
- Chu Wang, Department of Clinical Psychology and Psychiatry, School of Public Health, Zhejiang University College of Medicine, Hangzhou, China
- Wei Wang, Department of Clinical Psychology and Psychiatry, School of Public Health, Zhejiang University College of Medicine, Hangzhou, China
3
Mishra MV, Srinivasan N. Exogenous attention intensifies perceived emotion expressions. Neurosci Conscious 2017; 2017:nix022. [PMID: 30042853] [PMCID: PMC6007186] [DOI: 10.1093/nc/nix022]
Abstract
Spatial attention not only enhances early visual processing and improves performance but also alters the phenomenology of basic perceptual features. However, despite extensive research on attention altering appearance, it is still unknown whether attention also intensifies perceived facial emotional expressions. We investigated the effect of exogenous attention on two categories of emotion, one positive (happy) and one negative (sad), in separate sessions. Exogenous attention was manipulated using peripheral cues followed by two faces varying in emotional intensity, presented on either side of fixation. Participants were asked to report the location of the face displaying the higher intensity of emotion. At a short cue-to-target interval (CTI; 60 ms, Experiment 1), participants reported the cued emotional face as more intense in expression than the uncued face. However, at a longer CTI (500 ms, Experiment 2), this effect was absent. The results show that exogenous attention enhances the appearance of higher-level features, such as emotional intensity, irrespective of valence. Two further experiments investigated the mediating role of facial contrast as a possible underlying mechanism for the observed effect. Although the results show that higher-contrast faces are judged as higher in emotional intensity, spatial attention effects appear to depend on task instructions. Possible mechanisms underlying the attentional effects on emotion intensity are discussed.
Affiliation(s)
- Maruti V Mishra, Centre of Behavioural and Cognitive Sciences, University of Allahabad, Allahabad, Uttar Pradesh, India
- Narayanan Srinivasan, Centre of Behavioural and Cognitive Sciences, University of Allahabad, Allahabad, Uttar Pradesh, India
4
Shibata K, Watanabe T, Kawato M, Sasaki Y. Differential Activation Patterns in the Same Brain Region Led to Opposite Emotional States. PLoS Biol 2016; 14:e1002546. [PMID: 27608359] [PMCID: PMC5015828] [DOI: 10.1371/journal.pbio.1002546]
Abstract
In human studies, how averaged activation in a brain region relates to behavior has been extensively investigated. This approach has led to the finding that positive and negative facial preferences are represented by different brain regions. However, using a functional magnetic resonance imaging (fMRI) decoded neurofeedback (DecNef) method, we found that different patterns of neural activation within the cingulate cortex (CC) represent opposite directions of facial preference. In the present study, while neutrally preferred faces were presented, multi-voxel activation patterns in the CC corresponding to higher (or lower) preference were repeatedly induced by fMRI DecNef. As a result, previously neutral faces became more (or less) preferred. We conclude that a different activation pattern within the CC, rather than averaged activation in a different area, represents, and suffices to determine, positive or negative facial preference. This approach may reveal the importance of activation patterns within a brain region for many cognitive functions. Although it is well studied how averaged activation of a brain region relates to behavior, it is still unclear whether specific patterns of activation within a region also relate to cognitive function. In recent years, several methods have been developed for manipulating brain activity in humans. Real-time fMRI DecNef allows the induction of specific patterns of brain activity by measuring the current pattern, comparing it to the pattern to be induced, and giving subjects feedback on how close the two patterns of neural activity are.
Using fMRI DecNef, we manipulated the pattern of activation in the cingulate cortex, a part of the cerebral cortex involved in preferences for different categories including faces and everyday items, and tested whether we could change these preferences. In the experiment, a specific activation pattern in the cingulate cortex corresponding to higher (or lower) preference was induced by fMRI DecNef while subjects viewed a neutrally preferred face. As a result, these faces became more (or less) preferred. Our findings suggest that different patterns of activation in the cingulate cortex represent, and are sufficient to determine, different emotional states. This new approach using fMRI DecNef may reveal the importance of activation patterns within a brain region, rather than activation of the whole region, in many cognitive functions.
Affiliation(s)
- Kazuhisa Shibata, Brain Information Communication Research Laboratory Group, Advanced Telecommunications Research Institute International, 2-2-2 Hikaridai, Keihanna Science City, Kyoto, Japan; Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, Rhode Island, United States of America
- Takeo Watanabe, Brain Information Communication Research Laboratory Group, Advanced Telecommunications Research Institute International, 2-2-2 Hikaridai, Keihanna Science City, Kyoto, Japan; Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, Rhode Island, United States of America
- Mitsuo Kawato, Brain Information Communication Research Laboratory Group, Advanced Telecommunications Research Institute International, 2-2-2 Hikaridai, Keihanna Science City, Kyoto, Japan
- Yuka Sasaki, Brain Information Communication Research Laboratory Group, Advanced Telecommunications Research Institute International, 2-2-2 Hikaridai, Keihanna Science City, Kyoto, Japan; Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, Rhode Island, United States of America
5
Lavan N, Lima CF, Harvey H, Scott SK, McGettigan C. I thought that I heard you laughing: Contextual facial expressions modulate the perception of authentic laughter and crying. Cogn Emot 2014; 29:935-44. [DOI: 10.1080/02699931.2014.957656]
6
Auditory rhythms are systemically associated with spatial-frequency and density information in visual scenes. Psychon Bull Rev 2014; 20:740-6. [PMID: 23423817] [DOI: 10.3758/s13423-013-0399-y]
Abstract
A variety of perceptual correspondences between auditory and visual features have been reported, but few studies have investigated how rhythm, an auditory feature defined purely by dynamics relevant to speech and music, interacts with visual features. Here, we demonstrate a novel crossmodal association between auditory rhythm and visual clutter. Participants were shown a variety of visual scenes from diverse categories and asked to report the auditory rhythm that perceptually matched each scene by adjusting the rate of amplitude modulation (AM) of a sound. Participants matched each scene to a specific AM rate with surprising consistency. A spatial-frequency analysis showed that scenes with greater contrast energy in midrange spatial frequencies were matched to faster AM rates. Bandpass-filtering the scenes indicated that greater contrast energy in this spatial-frequency range was associated with an abundance of object boundaries and contours, suggesting that participants matched more cluttered scenes to faster AM rates. Consistent with this hypothesis, AM-rate matches were strongly correlated with perceived clutter. Additional results indicated that both AM-rate matches and perceived clutter depend on object-based (cycles per object) rather than retinal (cycles per degree of visual angle) spatial frequency. Taken together, these results suggest a systematic crossmodal association between auditory rhythm, representing density in the temporal domain, and visual clutter, representing object-based density in the spatial domain. This association may allow for the use of auditory rhythm to influence how visual clutter is perceived and attended.
7
Schwager S, Rothermund K. Counter-regulation triggered by emotions: positive/negative affective states elicit opposite valence biases in affective processing. Cogn Emot 2012; 27:839-55. [PMID: 23237331] [DOI: 10.1080/02699931.2012.750599]
Abstract
The present study investigated whether counter-regulation in affective processing is triggered by emotions. Automatic attention allocation to valent stimuli was measured in the context of positive and negative affective states. Valence biases were assessed by comparing the detection of positive versus negative words in a visual search task (Experiment 1) or by comparing interference effects of positive and negative distractor words in an emotional Stroop task (Experiment 2). Imagining a hypothetical emotional situation (Experiment 1) or watching romantic versus depressing movie clips (Experiment 2) increased attention allocation to stimuli that were opposite in valence to the current emotional state. Counter-regulation is assumed to reflect a basic mechanism underlying implicit emotion regulation.
Affiliation(s)
- Susanne Schwager, Institut für Psychologie, Friedrich-Schiller-Universität Jena, Jena, Germany
8
Wieser MJ, Brosch T. Faces in context: a review and systematization of contextual influences on affective face processing. Front Psychol 2012; 3:471. [PMID: 23130011] [PMCID: PMC3487423] [DOI: 10.3389/fpsyg.2012.00471]
Abstract
Facial expressions are of eminent importance for social interaction, as they convey information about other individuals' emotions and social intentions. According to the predominant "basic emotion" approach, the perception of emotion in faces is based on the rapid, automatic categorization of prototypical, universal expressions. Consequently, the perception of facial expressions has typically been investigated using isolated, de-contextualized, static pictures of facial expressions that maximize the distinction between categories. However, in everyday life, an individual's face is not perceived in isolation, but almost always appears within a situational context, which may arise from other people, the physical environment surrounding the face, as well as multichannel information from the sender. Furthermore, situational context may be provided by the perceiver, including already present social information gained from affective learning and implicit processing biases such as race bias. Thus, the perception of facial expressions is presumably always influenced by contextual variables. In this comprehensive review, we aim to (1) systematize the contextual variables that may influence the perception of facial expressions and (2) summarize the experimental paradigms and findings that have been used to investigate these influences. The studies reviewed here demonstrate that the perception and neural processing of facial expressions are substantially modified by contextual information, including verbal, visual, and auditory information presented together with the face, as well as knowledge or processing biases already present in the observer. These findings further challenge the assumption of automatic, hardwired categorical emotion extraction mechanisms predicted by basic emotion theories. Taking into account a recent model of face processing, we discuss where and when these different contextual influences may take place, thus outlining potential avenues for future research.
9
Sounds exaggerate visual shape. Cognition 2012; 124:194-200. [PMID: 22633004] [DOI: 10.1016/j.cognition.2012.04.009]
Abstract
While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes to the perception of 3D space, objects, and faces. Hearing a /woo/ sound increases the apparent vertical elongation of a shape, whereas hearing a /wee/ sound increases the apparent horizontal elongation. We further demonstrate that these sounds influence aspect-ratio coding. Viewing and adapting to a tall (or flat) shape makes a subsequently presented symmetric shape appear flat (or tall). These aspect-ratio aftereffects are enhanced when the associated speech sounds are presented during the adaptation period, suggesting that the sounds influence visual population coding of aspect ratio. Taken together, these results extend previous demonstrations that visual information constrains auditory perception by showing the converse: speech sounds influence the visual perception of a basic geometric feature.