1. Ziereis A, Schacht A. Gender congruence and emotion effects in cross-modal associative learning: Insights from ERPs and pupillary responses. Psychophysiology 2023; 60:e14380. [PMID: 37387451] [DOI: 10.1111/psyp.14380]
Abstract
Social and emotional cues from faces and voices are highly relevant and have reliably been shown to attract attention involuntarily. However, findings are mixed as to what degree the association of emotional valence with faces occurs automatically. In the present study, we tested whether inherently neutral faces gain additional relevance by being conditioned with either positive, negative, or neutral vocal affect bursts. During learning, participants performed a gender-matching task on face-voice pairs without explicitly judging the emotion of the voices. In the test session on a subsequent day, only the previously associated faces were presented and had to be categorized by gender. We analyzed event-related potentials (ERPs), pupil diameter, and response times (RTs) of N = 32 subjects. Emotion effects were found in auditory ERPs and RTs during the learning session, suggesting that task-irrelevant emotion was processed automatically. However, ERPs time-locked to the conditioned faces were mainly modulated by the task-relevant information, that is, the gender congruence of face and voice, but not by emotion. Importantly, these ERP and RT effects of learned congruence were not limited to learning but extended to the test session, that is, after removal of the auditory stimuli. These findings indicate successful associative learning in our paradigm, although it did not extend to the task-irrelevant dimension of emotional relevance. Cross-modal associations of emotional relevance may therefore not be completely automatic, even though the emotion was processed in the voice.
Affiliation(s)
- Annika Ziereis: Department for Cognition, Emotion and Behavior, Affective Neuroscience and Psychophysiology Laboratory, Institute of Psychology, Georg-August-University of Göttingen, Göttingen, Germany
- Anne Schacht: Department for Cognition, Emotion and Behavior, Affective Neuroscience and Psychophysiology Laboratory, Institute of Psychology, Georg-August-University of Göttingen, Göttingen, Germany
2. Gan S, Li W. Aberrant neural correlates of multisensory processing of audiovisual social cues related to social anxiety: An electrophysiological study. Front Psychiatry 2023; 14:1020812. [PMID: 36761870] [PMCID: PMC9902659] [DOI: 10.3389/fpsyt.2023.1020812]
Abstract
BACKGROUND Social anxiety disorder (SAD) is characterized by abnormal fear of social cues. Although unisensory processing of social stimuli in relation to social anxiety (SA) has been well described, how multisensory processing relates to SA remains to be clarified. Using electroencephalography (EEG), we investigated the neural correlates of multisensory processing and the related temporal dynamics in SAD. METHODS Twenty-five SAD participants and 23 healthy control (HC) participants were presented with angry and neutral faces, voices, and their emotionally congruent combinations, and completed an emotional categorization task. RESULTS We found that face-voice combinations facilitated auditory processing at multiple stages, indicated by an acceleration of auditory N1 latency, attenuation of auditory N1 and P250 amplitudes, and a decrease in theta power. In addition, bimodal inputs elicited cross-modal integrative activity, indicated by enhanced visual P1, N170, and P3/LPP amplitudes and a superadditive response of P1 and P3/LPP. More importantly, excessively greater integrative activity (at P3/LPP amplitude) was found in SAD participants, and this abnormal integrative activity in both early and late temporal stages was related to a larger interpretation bias of miscategorizing neutral face-voice combinations as angry. CONCLUSION The study revealed that the neural correlates of multisensory processing are aberrant in SAD and related to an interpretation bias for multimodal social cues across multiple processing stages. Our findings suggest that deficient multisensory processing might be an important factor in the psychopathology of SA.
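The superadditivity criterion invoked above (a bimodal response exceeding the sum of the unimodal responses) can be illustrated with a short sketch. The following is a minimal, hypothetical Python example, not the authors' analysis code; the array names, subject count, and P1 window are illustrative assumptions.

```python
# Minimal sketch of a superadditivity test on ERP amplitudes:
# does the audiovisual (AV) response exceed the sum of the
# unisensory auditory (A) and visual (V) responses?
# All data below are random placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_times = 25, 500            # 500 samples at 1000 Hz (0-499 ms)
times = np.arange(n_times) / 1000.0      # seconds

# Per-subject ERPs at one electrode (subjects x time)
erp_av = rng.normal(size=(n_subjects, n_times))
erp_a = rng.normal(size=(n_subjects, n_times))
erp_v = rng.normal(size=(n_subjects, n_times))

# Mean amplitude in an a priori P1 window (assumed 80-120 ms)
win = (times >= 0.080) & (times <= 0.120)
av = erp_av[:, win].mean(axis=1)
a_plus_v = (erp_a + erp_v)[:, win].mean(axis=1)

# Paired comparison: AV > A+V indicates a superadditive response
t, p = stats.ttest_rel(av, a_plus_v)
print(f"AV vs. A+V: t({n_subjects - 1}) = {t:.2f}, p = {p:.3f}")
```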
Affiliation(s)
- Shuzhen Gan: Shanghai Changning Mental Health Center, Shanghai, China; Shanghai Mental Health Center, Shanghai, China
- Weijun Li: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, Liaoning, China
3. Audiovisual Emotional Congruency Modulates the Stimulus-Driven Cross-Modal Spread of Attention. Brain Sci 2022; 12:1229. [PMID: 36138965] [PMCID: PMC9497153] [DOI: 10.3390/brainsci12091229]
Abstract
It has been reported that attention to stimuli in the visual modality can spread to task-irrelevant but synchronously presented stimuli in the auditory modality, a phenomenon termed the cross-modal spread of attention. This spread can be either stimulus-driven or representation-driven, depending on whether the visual constituent of an audiovisual object is further selected based on the object representation. The stimulus-driven spread of attention occurs whenever a task-irrelevant sound synchronizes with an attended visual stimulus, regardless of cross-modal semantic congruency. The present study recorded event-related potentials (ERPs) to investigate whether the stimulus-driven cross-modal spread of attention can be modulated by audiovisual emotional congruency in a visual oddball task in which emotion (positive/negative) was task-irrelevant. The results first demonstrated a prominent stimulus-driven spread of attention regardless of audiovisual emotional congruency: for all audiovisual pairs, the extracted ERPs to the auditory constituents of audiovisual stimuli within the 200-300 ms time window were significantly larger than the ERPs to the same auditory stimuli delivered alone. However, the amplitude of this stimulus-driven auditory Nd component during 200-300 ms was significantly larger for emotionally incongruent than congruent audiovisual stimuli when the visual constituents' emotional valence was negative. Moreover, the Nd was sustained during 300-400 ms only for incongruent audiovisual stimuli with emotionally negative visual constituents. These findings suggest that although the occurrence of the stimulus-driven cross-modal spread of attention is independent of audiovisual emotional congruency, its magnitude is nevertheless modulated even when emotion is task-irrelevant.
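The difference-wave logic behind the "extracted ERPs to the auditory constituents" can be sketched as follows. This is a hypothetical illustration, not the study's code; the channel data and time windows are placeholders.

```python
# Sketch of the difference-wave logic: the auditory constituent of an
# audiovisual (AV) response is estimated as AV - V, and the Nd is the
# difference between that extract and the response to the sound alone.
# All signals below are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
fs, n_times = 1000, 600                   # 1 kHz sampling, 0-599 ms epoch
times = np.arange(n_times) / fs

erp_av = rng.normal(size=n_times)         # grand-average AV ERP, one channel
erp_v = rng.normal(size=n_times)          # visual-only ERP
erp_a_alone = rng.normal(size=n_times)    # auditory-only ERP

auditory_extract = erp_av - erp_v         # estimated auditory constituent
nd = auditory_extract - erp_a_alone       # negative difference (Nd) wave

win = (times >= 0.200) & (times <= 0.300) # 200-300 ms window from the abstract
print(f"Mean Nd, 200-300 ms: {nd[win].mean():.3f} (arbitrary units)")
```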
4. Wang Z, Goerlich KS, Xu P, Luo Y, Aleman A. Perceptive and Affective Impairments in Emotive Eye-Region Processing in Alexithymia. Soc Cogn Affect Neurosci 2022; 17:912-922. [PMID: 35277722] [PMCID: PMC9527467] [DOI: 10.1093/scan/nsac013]
Abstract
Alexithymia is characterized by impairments in emotion processing, frequently linked to facial expressions of emotion. The eye-region conveys information necessary for emotion processing. It has been demonstrated that alexithymia is associated with reduced attention to the eyes, but little is known regarding the cognitive and electrophysiological mechanisms underlying emotive eye-region processing in alexithymia. Here, we recorded behavioral and electrophysiological responses of individuals with alexithymia (ALEX; n = 25) and individuals without alexithymia (NonALEX; n = 23) while they viewed intact and eyeless faces with angry and sad expressions during a dual-target rapid serial visual presentation task. Results showed different eye-region focuses and differentiating N1 responses between intact and eyeless faces to anger and sadness in NonALEX, but not in ALEX, suggesting deficient perceptual processing of the eye-region in alexithymia. Reduced eye-region focus and smaller differences in frontal alpha asymmetry in response to sadness between intact and eyeless faces were observed in ALEX compared to NonALEX, indicative of impaired affective processing of the eye-region in alexithymia. These findings highlight perceptual and affective abnormalities of emotive eye-region processing in alexithymia. Our results contribute to understanding the neuropsychopathology of alexithymia and alexithymia-related disorders.
Affiliation(s)
- Zhihao Wang: Shenzhen Key Laboratory of Affective and Social Neuroscience, Magnetic Resonance Imaging Center, Center for Brain Disorders and Cognitive Sciences, Shenzhen University, Shenzhen 518060, China; Department of Biomedical Sciences of Cells & Systems, Section Cognitive Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Katharina S Goerlich: Department of Biomedical Sciences of Cells & Systems, Section Cognitive Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Pengfei Xu: Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (BNU), Faculty of Psychology, Beijing Normal University, Beijing 100875, China; Center for Neuroimaging, Shenzhen Institute of Neuroscience, Shenzhen 518106, China
- Yuejia Luo: Shenzhen Key Laboratory of Affective and Social Neuroscience, Magnetic Resonance Imaging Center, Center for Brain Disorders and Cognitive Sciences, Shenzhen University, Shenzhen 518060, China; College of Teacher Education, Qilu Normal University, Jinan 250200, China; The Research Center of Brain Science and Visual Cognition, Medical School, Kunming University of Science and Technology, Kunming 650031, China
- André Aleman: Shenzhen Key Laboratory of Affective and Social Neuroscience, Magnetic Resonance Imaging Center, Center for Brain Disorders and Cognitive Sciences, Shenzhen University, Shenzhen 518060, China; Department of Biomedical Sciences of Cells & Systems, Section Cognitive Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
5. Pell MD, Sethi S, Rigoulot S, Rothermich K, Liu P, Jiang X. Emotional voices modulate perception and predictions about an upcoming face. Cortex 2022; 149:148-164. [DOI: 10.1016/j.cortex.2021.12.017]
6. Effects of integration of facial expression and emotional voice on inhibition of return. Acta Psychologica Sinica 2022. [DOI: 10.3724/sp.j.1041.2022.00331]
7. Barabanschikov V, Suvorova E. Part-Whole Perception of Audiovideoimages of Multimodal Emotional States of a Person. Experimental Psychology (Russia) 2022. [DOI: 10.17759/exppsy.2022150401]
Abstract
The patterns of perception of part and whole multimodal emotional dynamic states of people unfamiliar to observers were studied. Audio-video clips of fourteen key emotional states expressed by specially trained actors were randomly presented to two groups of observers. In one group (N=96, mean age 34 years, SD 9.4), each audio-video clip was shown in full; in the other (N=78, mean age 25 years, SD 9.6), it was divided into two parts of equal duration, from the beginning to the conditional middle (a short phonetic pause) and from the middle to the end of the exposure. The stimulus material contained facial expressions, gestures, head and eye movements, and changes in body position of the sitters, who voiced pseudolinguistic statements accompanied by affective intonations. The accuracy of identification and the structure of categorical fields were evaluated depending on the modality and form (whole/part) of the exposure of affective states. After the exposure of each audio-video clip, observers were required to choose, from the presented list of emotions, the one that best corresponded to what they saw. According to the data obtained, the accuracy of identifying the emotions of the initial and final fragments practically coincides, but is significantly lower than with full exposure. Functional differences in the perception of fragmented audio-video images of the same emotional states are revealed. The modes of transition from the initial stage to the final one, and the conditions affecting the relative speed of the perceptual process, are shown. The uneven formation of the information basis of multimodal expressions and the heterochronous perceptogenesis of the actors' emotional states are demonstrated.
8. Liang J, Li Y, Zhang Z, Luo W. Sound gaps boost emotional audiovisual integration independent of attention: Evidence from an ERP study. Biol Psychol 2021; 168:108246. [PMID: 34968556] [DOI: 10.1016/j.biopsycho.2021.108246]
Abstract
The emotion discrimination paradigm was adopted to study the effect of interrupted sound on visual emotional processing under different attentional states in two experiments: judging facial expressions (Experiment 1, explicit task) and judging the position of a bar (Experiment 2, implicit task). In Experiment 1, ERP results showed that a sound gap accelerated the P1 only for neutral faces. In Experiment 2, this accelerating effect on P1 existed regardless of the emotional condition. Across the two experiments, the P1 findings suggest that a sound gap enhances bottom-up attention. The N170 and late positive component (LPC) were modulated by facial emotion in both experiments, with fearful faces eliciting larger responses than neutral ones. Comparing the two experiments, the explicit task induced a larger LPC than the implicit task. Overall, sound gaps boosted audiovisual integration via bottom-up attention at early integration stages, while cognitive expectations led to top-down attention at late stages.
Affiliation(s)
- Junyu Liang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian 116029, Liaoning Province, China
- Yuchen Li: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian 116029, Liaoning Province, China
- Zhao Zhang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Institute of Psychology, Weifang Medical University, Weifang 216053, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian 116029, Liaoning Province, China
- Wenbo Luo: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian 116029, Liaoning Province, China
9. Begau A, Klatt LI, Wascher E, Schneider D, Getzmann S. Do congruent lip movements facilitate speech processing in a dynamic audiovisual multi-talker scenario? An ERP study with older and younger adults. Behav Brain Res 2021; 412:113436. [PMID: 34175355] [DOI: 10.1016/j.bbr.2021.113436]
Abstract
In natural conversations, visible mouth and lip movements play an important role in speech comprehension. There is evidence that visual speech information improves speech comprehension, especially for older adults and under difficult listening conditions. However, the neurocognitive basis is still poorly understood. The present EEG experiment investigated the benefits of audiovisual speech in a dynamic cocktail-party scenario with 22 younger (aged 20-34 years) and 20 older (aged 55-74 years) participants. We presented three simultaneously talking faces with a varying amount of visual speech input (still faces, visually unspecific, and audiovisually congruent). In a two-alternative forced-choice task, participants had to discriminate target words ("yes" or "no") among two distractors (one-digit number words). In half of the experimental blocks, the target was always presented from a central position; in the other half, occasional switches to a lateral position could occur. We investigated behavioral and electrophysiological modulations due to age, location switches, and the content of visual information, analyzing response times and accuracy as well as the P1, N1, P2, and N2 event-related potentials (ERPs) and the contingent negative variation (CNV) in the EEG. We found that audiovisually congruent speech information improved performance and modulated ERP amplitudes in both age groups, suggesting enhanced preparation and integration of the subsequent auditory input. In early phases of processing (P1-N1), amplitudes were larger in the older group overall but reduced in response to audiovisually congruent stimuli. In later processing phases (P2-N2), we found decreased amplitudes in the older group, while an amplitude reduction for audiovisually congruent compared to visually unspecific stimuli was still observable. However, these benefits were observed only as long as no location switches occurred; switches led to enhanced amplitudes in the later processing phases (P2-N2). To conclude, meaningful visual information in a multi-talker setting, when presented from the expected location, is shown to be beneficial for both younger and older adults.
Affiliation(s)
- Alexandra Begau: Leibniz Research Centre for Working Environment and Human Factors, TU Dortmund, Germany
- Laura-Isabelle Klatt: Leibniz Research Centre for Working Environment and Human Factors, TU Dortmund, Germany
- Edmund Wascher: Leibniz Research Centre for Working Environment and Human Factors, TU Dortmund, Germany
- Daniel Schneider: Leibniz Research Centre for Working Environment and Human Factors, TU Dortmund, Germany
- Stephan Getzmann: Leibniz Research Centre for Working Environment and Human Factors, TU Dortmund, Germany
10. The N400 and late occipital positivity in processing dynamic facial expressions with natural emotional voice. Neuroreport 2021; 32:858-863. [PMID: 34029292] [DOI: 10.1097/wnr.0000000000001669]
Abstract
People require multimodal emotional interactions to live in a social environment. Several studies using dynamic facial expressions and emotional voices have reported that multimodal emotional incongruency evokes an early sensory component of event-related potentials (ERPs), while others have found a late cognitive component; how these two sets of results can be integrated remains unclear. We speculate that it is semantic analysis, within a multimodal integration framework, that evokes the late ERP component. An electrophysiological experiment was conducted using emotionally congruent or incongruent dynamic faces and natural voices to promote semantic analysis. To investigate top-down modulation of the ERP components, attention was manipulated via two tasks that directed participants to attend to facial versus vocal expressions. Our results revealed interactions between facial and vocal emotional expressions, manifested as modulations of auditory N400 amplitudes but not N1 and P2 amplitudes, for incongruent emotional face-voice combinations only in the face-attentive task. A late occipital positive potential emerged only during the voice-attentive task. Overall, these findings support the idea that semantic analysis is a key factor in evoking the late cognitive component. The task effect for these ERPs suggests that top-down attention alters not only the amplitude of an ERP component but also which component is elicited. Our results implicate a principle of emotional face-voice processing in the brain that may underlie complex audiovisual interactions in everyday communication.
11. Wang Z, Chen M, Goerlich KS, Aleman A, Xu P, Luo Y. Deficient auditory emotion processing but intact emotional multisensory integration in alexithymia. Psychophysiology 2021; 58:e13806. [PMID: 33742708] [PMCID: PMC9285530] [DOI: 10.1111/psyp.13806]
Abstract
Alexithymia has been associated with emotion recognition deficits in both auditory and visual domains. Although emotions are inherently multimodal in daily life, little is known regarding abnormalities of emotional multisensory integration (eMSI) in relation to alexithymia. Here, we employed an emotional Stroop-like audiovisual task while recording event-related potentials (ERPs) in individuals with high alexithymia levels (HA) and low alexithymia levels (LA). During the task, participants had to indicate whether a voice was spoken in a sad or angry prosody while ignoring the simultaneously presented static face, which could be either emotionally congruent or incongruent to the voice. We found that HA performed worse and showed higher P2 amplitudes than LA, independent of emotion congruency. Furthermore, difficulties in identifying and describing feelings were positively correlated with the P2 component, and P2 correlated negatively with behavioral performance. Bayesian statistics showed no group differences in eMSI or in classical integration-related ERP components (N1 and N2). Although individuals with alexithymia indeed showed deficits in auditory emotion recognition, as indexed by decreased performance and higher P2 amplitudes, the present findings suggest an intact capacity to integrate emotional information from multiple channels in alexithymia. Our work provides valuable insights into the relationship between alexithymia and the neuropsychological mechanisms of emotional multisensory integration. With high ecological validity, these findings are of particular importance given that humans are constantly exposed to competing, complex audiovisual emotional information in social interaction contexts. Our work has important implications for the psychophysiology of alexithymia and emotional processing.
Affiliation(s)
- Zhihao Wang: Shenzhen Key Laboratory of Affective and Social Neuroscience, Magnetic Resonance Imaging, Center for Brain Disorders and Cognitive Sciences, Shenzhen University, Shenzhen, China; Department of Biomedical Sciences of Cells & Systems, Section Cognitive Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Mai Chen: School of Psychology, Shenzhen University, Shenzhen, China
- Katharina S Goerlich: Department of Biomedical Sciences of Cells & Systems, Section Cognitive Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- André Aleman: Shenzhen Key Laboratory of Affective and Social Neuroscience, Magnetic Resonance Imaging, Center for Brain Disorders and Cognitive Sciences, Shenzhen University, Shenzhen, China; Department of Biomedical Sciences of Cells & Systems, Section Cognitive Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Pengfei Xu: State Key Laboratory of Cognitive Neuroscience and Learning, Faculty of Psychology, Beijing Normal University, Beijing, China; Center for Neuroimaging, Shenzhen Institute of Neuroscience, Shenzhen, China; Guangdong-Hong Kong-Macao Greater Bay Area Research Institute for Neuroscience and Neurotechnologies, Kwun Tong, Hong Kong, China
- Yuejia Luo: Shenzhen Key Laboratory of Affective and Social Neuroscience, Magnetic Resonance Imaging, Center for Brain Disorders and Cognitive Sciences, Shenzhen University, Shenzhen, China; State Key Laboratory of Cognitive Neuroscience and Learning, Faculty of Psychology, Beijing Normal University, Beijing, China; Department of Psychology, Southern Medical University, Guangzhou, China; The Research Center of Brain Science and Visual Cognition, Medical School, Kunming University of Science and Technology, Kunming, China; Center for Neuroimaging, Shenzhen Institute of Neuroscience, Shenzhen, China
12. Dercksen TT, Stuckenberg MV, Schröger E, Wetzel N, Widmann A. Cross-modal predictive processing depends on context rather than local contingencies. Psychophysiology 2021; 58:e13811. [PMID: 33723870] [DOI: 10.1111/psyp.13811]
Abstract
Visual symbols or events may provide predictive information about to-be-expected sound events. When the perceived sound does not confirm the visual prediction, the incongruency response (IR), a prediction error signal of the event-related brain potentials, is elicited. It is unclear whether predictions are derived from lower-level local contingencies (e.g., recent events or repetitions) or from higher-level global rules applied top-down. In a recent study, sound pitch was predicted by a preceding note symbol. IR elicitation was confined to the condition in which one of two sounds was presented more frequently, and was absent when both sounds were equally probable. These findings suggest that local repetitions support predictive cross-modal processing. On the other hand, the IR has also been observed with equal stimulus probabilities, where visual patterns predicted the upcoming sound sequence, suggesting the application of global rules. Here, we investigated the influence of stimulus repetition on the elicitation of the IR by presenting identical trial trains of a particular visual note symbol cueing a particular sound, resulting in either a congruent or an incongruent pair. Trains of four different lengths (1, 2, 4, or 7 trials) were presented. The IR was observed already after a single presentation of a congruent visual-cue-sound combination and did not change in amplitude as train length increased. We conclude that higher-level associations applied in a top-down manner are involved in the elicitation of the prediction error signal reflected by the IR, independent of local contingencies.
Affiliation(s)
- Tjerk T Dercksen: Institute of Psychology, Leipzig University, Leipzig, Germany; Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany
- Maria V Stuckenberg: Institute of Psychology, Leipzig University, Leipzig, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Erich Schröger: Institute of Psychology, Leipzig University, Leipzig, Germany
- Nicole Wetzel: Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany
- Andreas Widmann: Institute of Psychology, Leipzig University, Leipzig, Germany; Leibniz Institute for Neurobiology, Magdeburg, Germany
13. Do infants represent human actions cross-modally? An ERP visual-auditory priming study. Biol Psychol 2021; 160:108047. [PMID: 33596461] [DOI: 10.1016/j.biopsycho.2021.108047]
Abstract
Recent findings indicate that 7-month-old infants perceive and represent the sounds inherent to moving human bodies. However, it is not known whether infants integrate auditory and visual information in representations of specific human actions. To address this issue, we used ERPs to investigate infants' neural sensitivity to the correspondence between the sounds and images of human actions. In a cross-modal priming paradigm, 7-month-olds were presented with the sounds generated by two types of human body movement, walking and handclapping, after watching the kinematics of those actions in either a congruent or incongruent manner. ERPs recorded from frontal, central, and parietal electrodes in response to action sounds indicate that 7-month-old infants perceptually link the visual and auditory cues of human actions. However, at this age these percepts do not seem to be integrated into cognitive multimodal representations of human actions.
14. Lu T, Yang J, Zhang X, Guo Z, Li S, Yang W, Chen Y, Wu N. Crossmodal Audiovisual Emotional Integration in Depression: An Event-Related Potential Study. Front Psychiatry 2021; 12:694665. [PMID: 34354614] [PMCID: PMC8329241] [DOI: 10.3389/fpsyt.2021.694665]
Abstract
Depression is associated with deficits in emotion processing, and emotional processing in everyday life is crossmodal. This article investigates whether audiovisual emotional integration differs between a depression group and a normal control group, using the high-resolution event-related potential (ERP) technique. We designed a visual and/or auditory detection task. The behavioral results showed that responses to bimodal audiovisual stimuli were faster than those to unimodal auditory or visual stimuli, indicating that crossmodal integration of emotional information occurred in both the depression and normal groups. The ERP results showed that the N2 amplitude induced by sadness was significantly higher than that induced by happiness. Participants in the depression group showed larger N1 and P2 amplitudes, and the average amplitude of the LPP evoked at frontocentral sites in the depression group was significantly lower than that in the normal group. The results indicate that audiovisual emotional processing mechanisms differ between depressed and non-depressed college students.
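As a rough illustration of the behavioral facilitation logic described above (bimodal responses faster than unimodal ones), here is a minimal, hypothetical Python sketch; it is not the study's analysis, and the participant count and RT values are invented placeholders.

```python
# Sketch: is the mean audiovisual (AV) RT faster than the faster of the
# two unimodal conditions? A simple signature of crossmodal facilitation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 30                                  # hypothetical number of participants
rt_a = rng.normal(520, 40, n)           # auditory-only mean RTs (ms)
rt_v = rng.normal(500, 40, n)           # visual-only mean RTs (ms)
rt_av = rng.normal(460, 40, n)          # audiovisual mean RTs (ms)

fastest_unimodal = np.minimum(rt_a, rt_v)   # per-participant best unimodal RT
t, p = stats.ttest_rel(rt_av, fastest_unimodal)
print(f"AV vs. fastest unimodal: t({n - 1}) = {t:.2f}, p = {p:.3f}")
```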
Affiliation(s)
- Ting Lu: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Jingjing Yang: School of Artificial Intelligence, Changchun University of Science and Technology, Changchun, China
- Xinyu Zhang: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Zihan Guo: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Shengnan Li: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Weiping Yang: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Ying Chen: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Nannan Wu: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
15. de Boer MJ, Jürgens T, Cornelissen FW, Başkent D. Degraded visual and auditory input individually impair audiovisual emotion recognition from speech-like stimuli, but no evidence for an exacerbated effect from combined degradation. Vision Res 2020; 180:51-62. [PMID: 33360918] [DOI: 10.1016/j.visres.2020.12.002]
Abstract
Emotion recognition requires optimal integration of the multisensory signals from vision and hearing. A sensory loss in either or both modalities can lead to changes in integration and related perceptual strategies. To investigate potential acute effects of combined impairments due to sensory information loss alone, we degraded the visual and auditory information in audiovisual video recordings and presented these to a group of healthy young volunteers. These degradations were intended to approximate some aspects of vision and hearing impairment in simulation; other aspects, related to advanced age, potential health issues, and long-term adaptation and cognitive compensation strategies, were not simulated. Besides accuracy of emotion recognition, eye movements were recorded to capture perceptual strategies. Our data show that emotion recognition performance decreases when degraded visual and auditory information are presented in isolation, but simultaneously degrading both modalities does not exacerbate these isolated effects. Moreover, degrading the visual information strongly impacts both recognition performance and viewing behavior. In contrast, degrading auditory information alongside normal or degraded video had little additional effect on performance or gaze. Nevertheless, our results hold promise for visually impaired individuals, because the addition of any audio to any video greatly facilitated performance, even though adding audio did not completely compensate for the negative effects of video degradation. Additionally, observers modified their viewing behavior to degraded video in order to maximize their performance. Therefore, optimizing the hearing of visually impaired individuals and teaching them such optimized viewing behavior could be worthwhile endeavors for improving emotion recognition.
Affiliation(s)
- Minke J de Boer: Research School of Behavioural and Cognitive Neuroscience (BCN), University of Groningen, Groningen, The Netherlands; Laboratory of Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Department of Otorhinolaryngology - Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Tim Jürgens: Institute of Acoustics, Technische Hochschule Lübeck, Lübeck, Germany
- Frans W Cornelissen: Research School of Behavioural and Cognitive Neuroscience (BCN), University of Groningen, Groningen, The Netherlands; Laboratory of Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Deniz Başkent: Research School of Behavioural and Cognitive Neuroscience (BCN), University of Groningen, Groningen, The Netherlands; Department of Otorhinolaryngology - Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
16. Nonverbal auditory communication - Evidence for integrated neural systems for voice signal production and perception. Prog Neurobiol 2020; 199:101948. [PMID: 33189782] [DOI: 10.1016/j.pneurobio.2020.101948]
Abstract
While humans have developed a sophisticated and unique system of verbal auditory communication, they also share a more common and evolutionarily important nonverbal channel of voice signaling with many other mammalian and vertebrate species. This nonverbal communication is mediated and modulated by the acoustic properties of a voice signal, and is a powerful - yet often neglected - means of sending and perceiving socially relevant information. From the viewpoint of dyadic (involving a sender and a signal receiver) voice signal communication, we discuss the integrated neural dynamics in primate nonverbal voice signal production and perception. Most previous neurobiological models of voice communication modelled these neural dynamics from the limited perspective of either voice production or perception, largely disregarding the neural and cognitive commonalities of both functions. Taking a dyadic perspective on nonverbal communication, however, it turns out that the neural systems for voice production and perception are surprisingly similar. Based on the interdependence of both production and perception functions in communication, we first propose a re-grouping of the neural mechanisms of communication into auditory, limbic, and paramotor systems, with special consideration for a subsidiary basal-ganglia-centered system. Second, we propose that the similarity in the neural systems involved in voice signal production and perception is the result of the co-evolution of nonverbal voice production and perception systems promoted by their strong interdependence in dyadic interactions.
17. Zhang Z, Sun L, He W, Yang S, Luo W. The influence of the cross-modal emotional pre-preparation effect on audiovisual integration. Neuroreport 2020; 31:1161-1166. [PMID: 32991523] [DOI: 10.1097/wnr.0000000000001530]
Abstract
Previous studies have shown that the cross-modal pre-preparation effect is an important factor in audiovisual integration. However, the facilitating influence of the pre-preparation effect on the integration of emotional cues remains unclear. Therefore, this study examined the emotional pre-preparation effect during the multistage process of audiovisual integration. Event-related potentials (ERPs) were recorded while participants performed a synchronous or asynchronous integration task with fearful or neutral stimuli. The results indicated that, compared with the sum of the unisensory presentations of visual (V) and auditory (A) stimuli (A+V), only fearful audiovisual stimuli induced a decreased N1 and an enhanced P2; this was not found for the neutral stimuli. Moreover, the fearful stimuli triggered a larger P2 than the neutral stimuli in the audiovisual condition, but not in the summed (A+V) waveforms. Our findings imply that, in the early perceptual processing stage and the perceptual fine-processing stage, fear improves the processing efficiency of emotional audiovisual integration. In the final cognitive-assessment stage, fearful audiovisual stimuli induced a larger late positive component (LPC) than neutral audiovisual stimuli. Moreover, asynchronous audiovisual stimuli induced a greater LPC than synchronous ones during the 400-550 ms period. The different integration effects between the fearful and neutral stimuli may reflect distinct pre-preparation mechanisms along the emotional dimension. In light of these results, we propose a cross-modal emotional pre-preparation effect involving three phases of emotional audiovisual integration.
Affiliation(s)
- Zhang Zhao: Institute of Psychology, Weifang Medical University, Weifang; Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, Liaoning Province
- Sun Lei: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, Liaoning Province
- He Weiqi: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, Liaoning Province
- Yang Suyong: School of Psychology, Shanghai University of Sport, Shanghai, China
- Luo Wenbo: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, Liaoning Province
18. A crowd of emotional voices influences the perception of emotional faces: Using adaptation, stimulus salience, and attention to probe audio-visual interactions for emotional stimuli. Atten Percept Psychophys 2020; 82:3973-3992. [PMID: 32935292] [DOI: 10.3758/s13414-020-02104-0]
Abstract
Correctly assessing the emotional state of others is a crucial part of social interaction. While facial expressions provide much information, faces are often not viewed in isolation, but occur with concurrent sounds, usually voices, which also provide information about the emotion being portrayed. Many studies have examined the crossmodal processing of faces and sounds, but results have been mixed across paradigms. Using a psychophysical adaptation paradigm, we carried out a series of four experiments to determine whether there is a perceptual advantage when faces and voices match in emotion (congruent) versus when they do not match (incongruent). We presented a single face and a crowd of voices, a crowd of faces and a crowd of voices, and a single face of reduced salience and a crowd of voices, and tested this last condition with and without attention directed to the emotion in the face. While we observed aftereffects in the hypothesized direction (adaptation to faces conveying positive emotion yielded negative, contrastive, perceptual aftereffects), we only found a congruence advantage (stronger adaptation effects) when faces were attended and of reduced salience, in line with the theory of inverse effectiveness.
19. Neural indices of orienting, discrimination, and conflict monitoring after contextual fear and safety learning. Cogn Affect Behav Neurosci 2020; 20:917-927. [PMID: 32720204] [DOI: 10.3758/s13415-020-00810-8]
Abstract
Investigations of fear conditioning have recently begun to evaluate contextual factors that affect attention-related processes. However, much of the extant literature does not evaluate how contextual fear learning influences neural indicators of attentional processes during goal-directed activity. The current study evaluated how early attention for task-relevant stimuli and conflict monitoring were affected when presented within task-irrelevant safety and threat contexts after fear learning. Participants (N = 72) completed a Flanker task with modified context before and after context-dependent fear learning. Flanker stimuli were presented in the same threat and safety contexts utilized in the fear learning task while EEG was collected. Results indicated increased early attention (N1) to flankers appearing in threat contexts and later increased neural indicators (P2) of attention to flankers appearing in safety contexts. Results of this study indicate that contextual fear learning modulates early attentional processes for task-relevant stimuli that appear in the context of safety and threat. Theoretical and clinical implications are discussed.
20. Topalidis P, Zinchenko A, Gädeke JC, Föcker J. The role of spatial selective attention in the processing of affective prosodies in congenitally blind adults: An ERP study. Brain Res 2020; 1739:146819. [PMID: 32251662] [DOI: 10.1016/j.brainres.2020.146819]
Abstract
The question whether spatial selective attention is necessary in order to process vocal affective prosody has been controversially discussed in sighted individuals: whereas some studies argue that attention is required in order to process emotions, other studies conclude that vocal prosody can be processed even outside the focus of spatial selective attention. Here, we asked whether spatial selective attention is necessary for the processing of affective prosodies after visual deprivation from birth. For this purpose, pseudowords spoken in happy, neutral, fearful or threatening prosodies were presented at the left or right loudspeaker. Congenitally blind individuals (N = 8) and sighted controls (N = 13) had to attend to one of the loudspeakers and detect rare pseudowords presented at the attended loudspeaker during EEG recording. Emotional prosody of the syllables was task-irrelevant. Blind individuals outperformed sighted controls by being more efficient in detecting deviant syllables at the attended loudspeaker. A higher auditory N1 amplitude was observed in blind individuals compared to sighted controls. Additionally, sighted controls showed enhanced attention-related ERP amplitudes in response to fearful and threatening voices during the time range of the N1. By contrast, blind individuals revealed enhanced ERP amplitudes in attended relative to unattended locations irrespective of the affective valence in all time windows (110-350 ms). These effects were mainly observed at posterior electrodes. The results provide evidence for "emotion-general" auditory spatial selective attention effects in congenitally blind individuals and suggest a potential reorganization of the voice processing brain system following visual deprivation from birth.
Affiliation(s)
- Pavlos Topalidis: Department of Psychology and Educational Sciences, Ludwig Maximilian University, Munich, Germany
- Artyom Zinchenko: Department of Psychology and Educational Sciences, Ludwig Maximilian University, Munich, Germany
- Julia C Gädeke: Biological Psychology and Neuropsychology, University of Hamburg, Germany
- Julia Föcker: Biological Psychology and Neuropsychology, University of Hamburg, Germany; School of Social Sciences, University of Lincoln, United Kingdom
21. de Boer MJ, Başkent D, Cornelissen FW. Eyes on Emotion: Dynamic Gaze Allocation During Emotion Perception From Speech-Like Stimuli. Multisens Res 2020; 34:17-47. [PMID: 33706278] [DOI: 10.1163/22134808-bja10029]
Abstract
The majority of emotional expressions used in daily communication are multimodal and dynamic in nature. Consequently, one would expect that human observers utilize specific perceptual strategies to process emotions and to handle their multimodal and dynamic nature. However, our present knowledge of these strategies is scarce, primarily because most studies on emotion perception have not fully covered this variation, using instead static and/or unimodal stimuli with few emotion categories. To resolve this knowledge gap, the present study examined how dynamic emotional auditory and visual information is integrated into a unified percept. Since there is a broad spectrum of possible forms of integration, both eye movements and accuracy of emotion identification were evaluated while observers performed an emotion identification task in one of three conditions: audio-only, visual-only video, or audiovisual video. In terms of adaptations of perceptual strategies, the eye movement results showed a shift in fixations toward the eyes and away from the nose and mouth when audio was added. Notably, in terms of task performance, audio-only performance was mostly significantly worse than video-only and audiovisual performance, but performance in the latter two conditions was often not different. These results suggest that individuals flexibly and momentarily adapt their perceptual strategies to changes in the available information for emotion recognition, and these changes can be comprehensively quantified with eye tracking.
Affiliation(s)
- Minke J de Boer: Research School of Behavioural and Cognitive Neurosciences (BCN), University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Department of Otorhinolaryngology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Deniz Başkent: Research School of Behavioural and Cognitive Neurosciences (BCN), University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Department of Otorhinolaryngology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Frans W Cornelissen: Research School of Behavioural and Cognitive Neurosciences (BCN), University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
22. Creupelandt C, D'Hondt F, de Timary P, Falagiarda F, Collignon O, Maurage P. Selective visual and crossmodal impairment in the discrimination of anger and fear expressions in severe alcohol use disorder. Drug Alcohol Depend 2020; 213:108079. [PMID: 32554170] [DOI: 10.1016/j.drugalcdep.2020.108079]
Abstract
BACKGROUND Severe alcohol use disorder (SAUD) is associated with impaired discrimination of emotional expressions. This deficit appears increased in crossmodal settings, when simultaneous inputs from different sensory modalities are presented. However, studies exploring emotional crossmodal processing in SAUD have so far relied on static faces and unmatched face/voice pairs, offering limited ecological validity. Our aim was therefore to assess emotional processing using a validated and ecological paradigm relying on dynamic audio-visual stimuli, manipulating the amount of emotional information available. METHOD Thirty individuals with SAUD and 30 matched healthy controls performed an emotional discrimination task requiring them to identify five emotions (anger, disgust, fear, happiness, sadness) expressed as visual, auditory, or auditory-visual segments of varying length. Sensitivity indices (d') were computed to obtain an unbiased measure of emotional discrimination and entered into a generalized linear mixed model. Incorrect emotional attributions were also scrutinized through confusion matrices. RESULTS Discrimination levels varied across sensory modalities and emotions, and increased with stimulus duration. Crucially, performance also improved from unimodal to crossmodal conditions in both groups, but discrimination of anger crossmodal stimuli and fear crossmodal/visual stimuli remained selectively impaired in SAUD. These deficits were not influenced by stimulus duration, suggesting that they were not modulated by the amount of emotional information available. Moreover, they were not associated with systematic emotional error patterns reflecting specific confusions between emotions. CONCLUSIONS These results clarify the nature and extent of crossmodal impairments in SAUD and converge with earlier findings to ascribe a specific role to anger and fear in this pathology.
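The sensitivity index d' mentioned above has a standard closed form: d' = z(hit rate) - z(false-alarm rate). The sketch below is a generic Python illustration, not the authors' analysis; the counts and the log-linear correction are assumptions for the example.

```python
# Generic d' (sensitivity index) computation: the difference between the
# z-transformed hit rate and false-alarm rate, with a log-linear
# correction (add 0.5 to each cell) to avoid infinite z-scores.
from scipy.stats import norm

def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Made-up counts for one emotion in one modality condition
print(f"d' = {d_prime(hits=38, misses=12, false_alarms=9, correct_rejections=41):.2f}")
```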
Affiliation(s)
- Coralie Creupelandt: Louvain Experimental Psychopathology Research Group (UCLEP), Psychological Sciences Research Institute (IPSY) and Institute of Neuroscience (IoNS), UCLouvain, B-1348, Louvain-la-Neuve, Belgium
- Fabien D'Hondt: Univ. Lille, Inserm, CHU Lille, U1172, Lille Neuroscience & Cognition, F-59000, Lille, France; CHU Lille, Clinique de Psychiatrie, CURE, F-59000, Lille, France; Centre National de Ressources et de Résilience Lille-Paris (CN2R), F-59000, Lille, France
- Philippe de Timary: Louvain Experimental Psychopathology Research Group (UCLEP), Psychological Sciences Research Institute (IPSY) and Institute of Neuroscience (IoNS), UCLouvain, B-1348, Louvain-la-Neuve, Belgium; Department of Adult Psychiatry, Saint-Luc Academic Hospital, B-1200, Brussels, Belgium
- Federica Falagiarda: Crossmodal Perception and Plasticity Laboratory (CPP-Lab), Psychological Sciences Research Institute (IPSY) and Institute of Neuroscience (IoNS), UCLouvain, B-1348, Louvain-la-Neuve, Belgium
- Olivier Collignon: Crossmodal Perception and Plasticity Laboratory (CPP-Lab), Psychological Sciences Research Institute (IPSY) and Institute of Neuroscience (IoNS), UCLouvain, B-1348, Louvain-la-Neuve, Belgium; Centre for Mind/Brain Studies, University of Trento, Trento, Italy
- Pierre Maurage: Louvain Experimental Psychopathology Research Group (UCLEP), Psychological Sciences Research Institute (IPSY) and Institute of Neuroscience (IoNS), UCLouvain, B-1348, Louvain-la-Neuve, Belgium
23. Barabanschikov V, Korolkova O. Perception of “Live” Facial Expressions. Experimental Psychology (Russia) 2020. [DOI: 10.17759/exppsy.2020130305]
Abstract
The article provides a review of experimental studies of interpersonal perception based on static and dynamic facial expressions as a unique source of information about a person's inner world. The focus is on the patterns of perception of a moving face as part of communication and joint activity (an alternative to the most commonly studied perception of static images of a person outside of a behavioral context). The review covers four interrelated topics: facial statics and dynamics in the recognition of emotional expressions; the specificity of perception of moving facial expressions; multimodal integration of emotional cues; and the generation and perception of facial expressions in communication. The analysis identifies the most promising areas of research on the face in motion. We show that the static and dynamic modes of facial perception complement each other, and describe the role of qualitative features of facial expression dynamics in assessing a person's emotional state. Facial expression is considered part of a holistic multimodal manifestation of emotion. The importance of facial movements as an instrument of social interaction is emphasized.
24
Zinchenko A, Kotz SA, Schröger E, Kanske P. Moving towards dynamics: Emotional modulation of cognitive and emotional control. Int J Psychophysiol 2020; 147:193-201. [DOI: 10.1016/j.ijpsycho.2019.10.018] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2019] [Revised: 10/18/2019] [Accepted: 10/23/2019] [Indexed: 12/13/2022]
25
Aydin S. Deep Learning Classification of Neuro-Emotional Phase Domain Complexity Levels Induced by Affective Video Film Clips. IEEE J Biomed Health Inform 2019; 24:1695-1702. [PMID: 31841425 DOI: 10.1109/jbhi.2019.2959843] [Citation(s) in RCA: 46] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
In the present article, a novel emotional complexity marker is proposed for the classification of discrete emotions induced by affective video film clips. Principal Component Analysis (PCA) is applied to the full-band phase space trajectory matrix (PSTM) extracted from short emotional EEG segments of 6 s, and the first principal component is used to measure the level of local neuronal complexity. In addition, the Phase Locking Value (PLV) between the right and left hemispheres is estimated in order to compare local neuronal complexity estimation with regional neuro-cortical connectivity measurements in clustering nine discrete emotions (fear, anger, happiness, sadness, amusement, surprise, excitement, calmness, disgust), using Long Short-Term Memory networks as deep learning applications. In tests, two groups (healthy females and males aged between 22 and 33 years) are classified with accuracy levels of [Formula: see text] and [Formula: see text] through the proposed emotional complexity markers and through connectivity levels in terms of PLV in amusement. The groups are found to be statistically different ( p << 0.5) in amusement with respect to both metrics, even though gender difference does not lead to different neuro-cortical functions in any of the other discrete emotional states. A high deep learning classification accuracy of [Formula: see text] is commonly obtained for the discrimination of positive emotions from negative emotions through the proposed new complexity markers. Besides, considerably useful classification performance is obtained in discriminating mixed emotions from each other through full-band connectivity features. The results reveal that emotion formation is mostly influenced by individual experiences rather than gender. In detail, local neuronal complexity is mostly sensitive to the affective valence rating, while regional neuro-cortical connectivity levels are mostly sensitive to the affective arousal ratings.
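For readers unfamiliar with the PLV metric used above, the following is a minimal sketch (our own, with synthetic signals and an assumed sampling rate; it does not reproduce the paper's pipeline) of inter-hemispheric phase locking computed from the analytic signal:

```python
# Minimal sketch: phase locking value (PLV) between two EEG channels.
import numpy as np
from scipy.signal import hilbert

def plv(x_left, x_right):
    # Instantaneous phases from the analytic (Hilbert-transformed) signals;
    # inputs are assumed to be band-pass filtered already.
    phase_diff = np.angle(hilbert(x_left)) - np.angle(hilbert(x_right))
    return np.abs(np.mean(np.exp(1j * phase_diff)))  # 1 = locked, 0 = random

fs = 250                                   # assumed sampling rate (Hz)
t = np.arange(0, 6, 1 / fs)                # 6-s segment, as in the study
left = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)
right = np.sin(2 * np.pi * 6 * t + 0.8) + 0.5 * np.random.randn(t.size)
print(plv(left, right))
```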
26
Gao C, Weber CE, Shinkareva SV. The brain basis of audiovisual affective processing: Evidence from a coordinate-based activation likelihood estimation meta-analysis. Cortex 2019; 120:66-77. [DOI: 10.1016/j.cortex.2019.05.016] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2019] [Revised: 05/03/2019] [Accepted: 05/28/2019] [Indexed: 01/19/2023]
27
Jiang X, Gossack-Keenan K, Pell MD. To believe or not to believe? How voice and accent information in speech alter listener impressions of trust. Q J Exp Psychol (Hove) 2019; 73:55-79. [DOI: 10.1177/1747021819865833] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
Abstract
Our decision to believe what another person says can be influenced by vocally expressed confidence in speech and by whether speaker and listener are members of the same social group. The dynamic effects of these two information sources on the neurocognitive processes that promote believability impressions from vocal cues are unclear. Here, English Canadian listeners were presented with personal statements ("She has access to the building") produced in a confident or doubtful voice by speakers of their own dialect (in-group) or by speakers from two different "out-groups" (regional or foreign-accented English). Participants rated how believable the speaker was for each statement, and event-related potentials (ERPs) were analysed from utterance onset. Believability decisions were modulated by both the speaker's vocal confidence level and their perceived in-group status. For in-group speakers, ERP effects revealed an early differentiation of vocally expressed confidence (i.e., N100, P200), highlighting the motivational significance of doubtful voices for drawing believability inferences. These early effects on vocal confidence perception were qualitatively different or absent when speakers had an accent; evaluating out-group voices was associated with increased demands on contextual integration and re-analysis of a non-native representation of believability (i.e., increased N400, late negativity response). Accent intelligibility and experience with particular out-group accents each influenced how vocal confidence was processed for out-group speakers. The N100 amplitude was sensitive to out-group attitudes and predicted actual believability decisions for certain out-group speakers. We propose a neurocognitive model in which vocal identity information (social categorization) dynamically influences how vocal expressions are decoded and used to derive social inferences during person perception.
Affiliation(s)
- Xiaoming Jiang
- School of Communication Sciences and Disorders, McGill University, Montréal, Québec, Canada
- Department of Psychology, Tongji University, Shanghai, China
- Kira Gossack-Keenan
- School of Communication Sciences and Disorders, McGill University, Montréal, Québec, Canada
- Marc D Pell
- School of Communication Sciences and Disorders, McGill University, Montréal, Québec, Canada
28
Izen SC, Lapp HE, Harris DA, Hunter RG, Ciaramitaro VM. Seeing a Face in a Crowd of Emotional Voices: Changes in Perception and Cortisol in Response to Emotional Information across the Senses. Brain Sci 2019; 9:brainsci9080176. [PMID: 31349644 PMCID: PMC6721384 DOI: 10.3390/brainsci9080176] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2019] [Revised: 07/01/2019] [Accepted: 07/24/2019] [Indexed: 11/17/2022] Open
Abstract
One source of information we glean from everyday experience, and which guides social interaction, is the emotional state of others. Emotional state can be expressed through several modalities: body posture or movement, body odor, touch, facial expression, or the intonation of a voice. Much research has examined emotional processing within one sensory modality or the transfer of emotional processing from one modality to another. Yet less is known about interactions across modalities when perceiving emotions, despite our common experience of seeing emotion in a face while hearing the corresponding emotion in a voice. Our study examined whether visual and auditory emotions of matched valence (congruent) conferred stronger perceptual and physiological effects than visual and auditory emotions of unmatched valence (incongruent). We quantified how exposure to emotional faces and/or voices altered perception, using psychophysics, and how it altered a physiological proxy for stress or arousal, using salivary cortisol. While we found no significant advantage of congruent over incongruent emotions, we found that changes in cortisol were associated with perceptual changes. Following exposure to negative emotional content, larger decreases in cortisol, indicative of less stress, correlated with more positive perceptual after-effects, indicative of stronger biases to see neutral faces as happier.
Affiliation(s)
- Sarah C Izen
- Department of Psychology, Developmental and Brain Sciences, University of Massachusetts Boston, Boston, MA 02125, USA
- Hannah E Lapp
- Department of Psychology, Developmental and Brain Sciences, University of Massachusetts Boston, Boston, MA 02125, USA
- Daniel A Harris
- Division of Epidemiology, Dalla Lana School of Public Health, University of Toronto, Toronto, ON M5T 3M7, Canada
- Richard G Hunter
- Department of Psychology, Developmental and Brain Sciences, University of Massachusetts Boston, Boston, MA 02125, USA
- Vivian M Ciaramitaro
- Department of Psychology, Developmental and Brain Sciences, University of Massachusetts Boston, Boston, MA 02125, USA.
29
Lin H, Liang J. Contextual effects of angry vocal expressions on the encoding and recognition of emotional faces: An event-related potential (ERP) study. Neuropsychologia 2019; 132:107147. [PMID: 31325481 DOI: 10.1016/j.neuropsychologia.2019.107147] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2019] [Revised: 07/15/2019] [Accepted: 07/15/2019] [Indexed: 11/30/2022]
Abstract
It has been shown that stimulus memory (e.g., encoding and recognition) is influenced by emotion. In terms of face memory, event-related potential (ERP) studies have shown that the encoding of emotional faces is influenced by the emotion of the concomitant context when the contextual stimuli are presented in the visual modality. Behavioral studies have also investigated the effect of contextual emotion on the subsequent recognition of neutral faces. However, no studies appear to have investigated context effects on face encoding and recognition when the contextual stimuli come from another sensory modality (e.g., audition), and the neural mechanisms underlying context effects on the recognition of emotional faces remain unclear. Therefore, the present study used vocal expressions as contexts to investigate whether contextual emotion influences ERP responses during face encoding and recognition. To this end, participants were asked to memorize angry and neutral faces. The faces were presented together with either angry or neutral vocal expressions. Subsequently, participants performed an old/new recognition task in which only faces were presented. In the encoding phase, ERP results showed that, compared to neutral vocal expressions, angry vocal expressions led to smaller P1 and N170 responses to both angry and neutral faces. For angry faces, however, late positive potential (LPP) responses were increased in the angry voice condition. In the later recognition phase, N170 responses were larger for neutral-encoded faces that had been presented with angry compared to neutral vocal expressions. A preceding angry vocal expression increased FN400 and LPP responses to both neutral-encoded and angry-encoded faces when the faces showed the encoded expression. Therefore, the present study indicates that contextual emotion in vocal expressions influences neural responses during face encoding and subsequent recognition.
Affiliation(s)
- Huiyan Lin
- Institute of Applied Psychology, School of Public Administration, Guangdong University of Finance, 510521, Guangzhou, China; Laboratory for Behavioral and Regional Finance, Guangdong University of Finance, 510521, Guangzhou, China.
- Jiafeng Liang
- School of Education, Guangdong University of Education, 510303, Guangzhou, China
30
Föcker J, Röder B. Event-Related Potentials Reveal Evidence for Late Integration of Emotional Prosody and Facial Expression in Dynamic Stimuli: An ERP Study. Multisens Res 2019; 32:473-497. [DOI: 10.1163/22134808-20191332] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2018] [Accepted: 04/01/2019] [Indexed: 11/19/2022]
Abstract
The aim of the present study was to test whether multisensory interactions of emotional signals are modulated by intermodal attention and emotional valence. Faces, voices, and bimodal emotionally congruent or incongruent face–voice pairs were randomly presented. The EEG was recorded while participants were instructed to detect sad emotional expressions in either faces or voices, while ignoring all stimuli with another emotional expression and sad stimuli in the task-irrelevant modality. Participants processed congruent sad face–voice pairs more efficiently than sad stimuli paired with an incongruent emotion, and performance was higher in congruent bimodal compared to unimodal trials, irrespective of which modality was task-relevant. Event-related potentials (ERPs) to congruent emotional face–voice pairs started to differ from ERPs to incongruent emotional face–voice pairs at 180 ms after stimulus onset: irrespective of which modality was task-relevant, ERPs revealed a more pronounced positivity (180 ms post-stimulus) to emotionally congruent trials compared to emotionally incongruent trials if the angry emotion was presented in the attended modality. A larger negativity to incongruent compared to congruent trials was observed in the time range of 400–550 ms (N400) for all emotions (happy, neutral, angry), irrespective of whether faces or voices were task-relevant. These results suggest an automatic interaction of emotion-related information.
Affiliation(s)
- Julia Föcker
- Biological Psychology and Neuropsychology, University of Hamburg, Germany
- School of Psychology, College of Social Science, University of Lincoln, United Kingdom
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Germany
31
Zinchenko A, Kanske P, Obermeier C, Schröger E, Villringer A, Kotz SA. Modulation of Cognitive and Emotional Control in Age-Related Mild-to-Moderate Hearing Loss. Front Neurol 2018; 9:783. [PMID: 30283398 PMCID: PMC6156531 DOI: 10.3389/fneur.2018.00783] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2018] [Accepted: 08/30/2018] [Indexed: 12/12/2022] Open
Abstract
Progressive hearing loss is a common phenomenon in healthy aging and may affect the perception of emotions expressed in speech. Elderly individuals with mild-to-moderate hearing loss often rate emotional expressions as less emotional and display reduced activity in emotion-sensitive brain areas (e.g., the amygdala). However, it is not clear how hearing loss affects the cognitive and emotional control mechanisms engaged in multimodal speech processing. In previous work, we showed that negative, task-relevant and -irrelevant emotion modulates the two types of control in younger and older adults without hearing loss. To further explore how reduced hearing capacity affects emotional and cognitive control, we tested whether moderate hearing loss (>30 dB) at frequencies relevant for speech impacts cognitive and emotional control. We tested two groups of older adults, with hearing loss (HL; N = 21; mean age = 70.5) and without (NH; N = 21; mean age = 68.4). In two EEG experiments, participants observed multimodal video clips and categorized either the pronounced vowels (cognitive conflict) or their emotions (emotional conflict). Importantly, the facial expressions were either matched or mismatched with the corresponding vocalizations. In both conflict tasks, we found that negative stimuli modulated behavioral conflict processing in the NH but not the HL group, while the HL group performed at chance level in the emotional conflict task. Further, we found that the amplitude difference between congruent and incongruent stimuli was larger in negative relative to neutral N100 responses across tasks and groups. Lastly, in the emotional conflict task, neutral stimuli elicited a smaller N200 response than emotional stimuli, primarily in the HL group. Consequently, age-related hearing loss not only affects the processing of emotional acoustic cues but also alters the behavioral benefits of emotional stimuli on cognitive and emotional control, despite preserved early neural responses. The resulting difficulties in the multimodal integration of incongruent emotional stimuli may lead to problems in processing complex social information (irony, sarcasm) and impact emotion processing in the limbic network. This could be related to the social isolation and depression observed in the elderly with age-related hearing loss.
Affiliation(s)
- Artyom Zinchenko
- International Max Planck Research School on Neuroscience of Communication (IMPRS NeuroCom), Leipzig, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department Psychologie, Ludwig-Maximilians-Universität München, Munich, Germany
- Philipp Kanske
- Chair of Clinical Psychology and Behavioral Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Department of Social Neuroscience, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Christian Obermeier
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Erich Schröger
- Institute of Psychology, University of Leipzig, Leipzig, Germany
- Arno Villringer
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Sonja A Kotz
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
32
Casado-Aranda LA, Van der Laan LN, Sánchez-Fernández J. Neural correlates of gender congruence in audiovisual commercials for gender-targeted products: An fMRI study. Hum Brain Mapp 2018; 39:4360-4372. [PMID: 29964348 DOI: 10.1002/hbm.24276] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2018] [Revised: 05/08/2018] [Accepted: 05/31/2018] [Indexed: 11/07/2022] Open
Abstract
This article explores neural and self-report responses to gender congruence in product-voice combinations in commercials. An fMRI study was carried out in which participants (n = 30) were presented with gender-targeted pictures of characteristically male or female products accompanied by either gender-congruent or -incongruent voices. The findings show that attitudes are more positive toward commercials with gender-congruent than with gender-incongruent product-voice combinations. fMRI analyses revealed that primary visual brain areas, namely the calcarine cortex and cuneus, responded more strongly to congruent than to incongruent combinations, suggesting that participants enhanced their endogenous attention toward congruent commercials. Incongruent combinations, by contrast, elicited stronger activation in areas related to the perception of conflicts in information processing and to error monitoring, such as the supramarginal and inferior parietal gyri and the superior and middle temporal gyri. Interestingly, increased activation in the posterior cingulate cortex (an area related to value encoding) predicted more positive attitudes toward congruent commercials. Together, these results advance our understanding of the neural correlates of processing congruent and incongruent audiovisual stimuli. They may also advise advertising professionals in designing successful campaigns for everyday products, namely by using congruent instead of incongruent product-voice combinations.
Affiliation(s)
- Luis-Alberto Casado-Aranda
- Department of Marketing and Market Research, University of Granada, Campus Universitario la Cartuja, Granada, Spain
- Laura Nynke Van der Laan
- University of Amsterdam, Amsterdam School of Communication Research (ASCoR), NG Amsterdam, The Netherlands
- Juan Sánchez-Fernández
- Department of Marketing and Market Research, University of Granada, Campus Universitario la Cartuja, Granada, Spain
33
Garrido-Vásquez P, Pell MD, Paulmann S, Kotz SA. Dynamic Facial Expressions Prime the Processing of Emotional Prosody. Front Hum Neurosci 2018; 12:244. [PMID: 29946247 PMCID: PMC6007283 DOI: 10.3389/fnhum.2018.00244] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2018] [Accepted: 05/28/2018] [Indexed: 11/29/2022] Open
Abstract
Evidence suggests that emotion is represented supramodally in the human brain. Emotional facial expressions, which often precede vocally expressed emotion in real life, can modulate event-related potentials (N100 and P200) during emotional prosody processing. To investigate these cross-modal emotional interactions, two lines of research have been put forward: cross-modal integration and cross-modal priming. In cross-modal integration studies, visual and auditory channels are temporally aligned, while in priming studies they are presented consecutively. Here we used cross-modal emotional priming to study the interaction of dynamic visual and auditory emotional information. Specifically, we presented dynamic facial expressions (angry, happy, neutral) as primes and emotionally-intoned pseudo-speech sentences (angry, happy) as targets. We were interested in how prime-target congruency would affect early auditory event-related potentials, i.e., N100 and P200, in order to shed more light on how dynamic facial information is used in cross-modal emotional prediction. Results showed enhanced N100 amplitudes for incongruently primed compared to congruently and neutrally primed emotional prosody, while the latter two conditions did not significantly differ. However, N100 peak latency was significantly delayed in the neutral condition compared to the other two conditions. Source reconstruction revealed that the right parahippocampal gyrus was activated in incongruent compared to congruent trials in the N100 time window. No significant ERP effects were observed in the P200 range. Our results indicate that dynamic facial expressions influence vocal emotion processing at an early point in time, and that an emotional mismatch between a facial expression and its ensuing vocal emotional signal induces additional processing costs in the brain, potentially because the cross-modal emotional prediction mechanism is violated in case of emotional prime-target incongruency.
Affiliation(s)
- Patricia Garrido-Vásquez
- Department of Experimental Psychology and Cognitive Science, Justus Liebig University Giessen, Giessen, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Marc D Pell
- School of Communication Sciences and Disorders, McGill University, Montreal, QC, Canada
- Silke Paulmann
- Department of Psychology, University of Essex, Colchester, United Kingdom
- Sonja A Kotz
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Neuropsychology and Psychopharmacology, University of Maastricht, Maastricht, Netherlands
34
Zinchenko A, Obermeier C, Kanske P, Schröger E, Villringer A, Kotz SA. The Influence of Negative Emotion on Cognitive and Emotional Control Remains Intact in Aging. Front Aging Neurosci 2017; 9:349. [PMID: 29163132 PMCID: PMC5671981 DOI: 10.3389/fnagi.2017.00349] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2017] [Accepted: 10/16/2017] [Indexed: 02/06/2023] Open
Abstract
Healthy aging is characterized by a gradual decline in cognitive control and in the inhibition of interference, while emotional control is either preserved or facilitated. Emotional control regulates the processing of emotional conflicts, such as irony in speech, and cognitive control resolves conflict between non-affective tendencies. While negative emotion can trigger control processes and speed up the resolution of both cognitive and emotional conflicts, we know little about how aging affects the interaction of emotion and control. In two EEG experiments, we compared the influence of negative emotion on cognitive and emotional conflict processing in groups of younger adults (mean age = 25.2 years) and older adults (mean age = 69.4 years). Participants viewed short video clips and categorized either spoken vowels (cognitive conflict) or their emotional valence (emotional conflict), while the visual facial information was congruent or incongruent. Results show that negative emotion modulates both cognitive and emotional conflict processing in younger and older adults, as indicated by reduced response times and/or enhanced event-related potentials (ERPs). In emotional conflict processing, we observed a valence-specific N100 ERP component in both age groups. In cognitive conflict processing, we observed an interaction of emotion by congruence in the N100 responses in both age groups, and a main effect of congruence in the P200 and N200. Thus, the influence of emotion on conflict processing remains intact in aging, despite a marked decline in cognitive control. Older adults may prioritize emotional wellbeing and preserve the role of emotion in cognitive and emotional control.
Affiliation(s)
- Artyom Zinchenko
- International Max Planck Research School on Neuroscience of Communication, Leipzig, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department Psychologie, Ludwig-Maximilians-Universität München, Munich, Germany
- Christian Obermeier
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Philipp Kanske
- Department of Social Neuroscience, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Institute of Clinical Psychology and Psychotherapy, Department of Psychology, Technische Universität Dresden, Dresden, Germany
- Erich Schröger
- Institute of Psychology, University of Leipzig, Leipzig, Germany
- Arno Villringer
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Sonja A Kotz
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
35
Scheumann M, Hasting AS, Zimmermann E, Kotz SA. Human Novelty Response to Emotional Animal Vocalizations: Effects of Phylogeny and Familiarity. Front Behav Neurosci 2017; 11:204. [PMID: 29114210 PMCID: PMC5660701 DOI: 10.3389/fnbeh.2017.00204] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2017] [Accepted: 10/06/2017] [Indexed: 11/13/2022] Open
Abstract
Darwin (1872) postulated that emotional expressions contain universals that are retained across species. We recently showed that human rating responses were strongly affected by a listener's familiarity with vocalization types, whereas evidence for universal cross-taxa emotion recognition was limited. To disentangle the impact of evolutionarily retained mechanisms (phylogeny) and experience-driven cognitive processes (familiarity), we compared the temporal unfolding of event-related potentials (ERPs) in response to agonistic and affiliative vocalizations expressed by humans and three animal species. Using an auditory oddball novelty paradigm, ERPs were recorded in response to task-irrelevant novel sounds comprising vocalizations that varied in their degree of phylogenetic relationship and familiarity to humans. Vocalizations were recorded in affiliative and agonistic contexts. Offline, participants rated the vocalizations for valence, arousal, and familiarity. Correlation analyses revealed a significant correlation between a posteriorly distributed early negativity and arousal ratings. More specifically, a contextual category effect on this negativity was observed for human infant and chimpanzee vocalizations but was absent for the other species' vocalizations. Further, a significant correlation between the later, more posteriorly distributed P3a and P3b responses and familiarity ratings indicates a link between familiarity and attentional processing. A contextual category effect on the P3b was observed for the less familiar chimpanzee and tree shrew vocalizations. Taken together, these findings suggest that early negative ERP responses to agonistic and affiliative vocalizations may be influenced by evolutionarily retained mechanisms, whereas the later orienting of attention (positive ERPs) may mainly be modulated by prior experience.
Affiliation(s)
- Marina Scheumann
- Institute of Zoology, University of Veterinary Medicine Hannover, Hannover, Germany
- Anna S. Hasting
- Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Day Clinic for Cognitive Neurology, University Hospital Leipzig, Leipzig, Germany
- Elke Zimmermann
- Institute of Zoology, University of Veterinary Medicine Hannover, Hannover, Germany
- Sonja A. Kotz
- Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
36
Pan Z, Liu X, Luo Y, Chen X. Emotional Intensity Modulates the Integration of Bimodal Angry Expressions: ERP Evidence. Front Neurosci 2017; 11:349. [PMID: 28680388 PMCID: PMC5478688 DOI: 10.3389/fnins.2017.00349] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2016] [Accepted: 06/06/2017] [Indexed: 11/18/2022] Open
Abstract
Integration of information from face and voice plays a central role in social interactions. The present study investigated how emotional intensity modulates the integration of facial and vocal emotional cues by recording the EEG while participants performed an emotion identification task on facial, vocal, and bimodal angry expressions varying in emotional intensity. Behavioral results showed that anger identification rates and response speed increased with emotional intensity across modalities. Critically, P2 amplitudes were larger for bimodal expressions than for the sum of facial and vocal expressions for low-intensity stimuli, but not for middle- and high-intensity stimuli. These findings suggest that emotional intensity modulates the integration of facial-vocal angry expressions, following the principle of Inverse Effectiveness (IE) in multimodal sensory integration.
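The superadditivity test described above compares the bimodal response against the sum of the unimodal responses at each intensity level. A minimal sketch of that comparison (with simulated amplitudes, not the study's data) might look as follows:

```python
# Minimal sketch: testing AV > A + V (superadditivity) per intensity level.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n = 20                                     # hypothetical sample size
for intensity, av_mean in [("low", 5.5), ("middle", 4.6), ("high", 4.5)]:
    a = rng.normal(2.0, 0.5, n)            # auditory-only P2 amplitude (uV)
    v = rng.normal(2.5, 0.5, n)            # visual-only P2 amplitude
    av = rng.normal(av_mean, 0.5, n)       # bimodal P2 amplitude
    t, p = ttest_rel(av, a + v)            # paired test of AV vs. A + V
    print(f"{intensity}: t = {t:.2f}, p = {p:.4f}")
```

Under inverse effectiveness, the AV advantage over A + V should be largest when the unimodal signals are weakest, i.e., at low intensity.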
Affiliation(s)
- Zhihui Pan
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an, China
- Xi Liu
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, School of Brain Cognitive Science, Beijing Normal University, Beijing, China
- Yangmei Luo
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an, China
- Xuhai Chen
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an, China
37
Zinchenko A, Obermeier C, Kanske P, Schröger E, Kotz SA. Positive emotion impedes emotional but not cognitive conflict processing. COGNITIVE, AFFECTIVE & BEHAVIORAL NEUROSCIENCE 2017; 17:665-677. [PMID: 28321705 PMCID: PMC5403863 DOI: 10.3758/s13415-017-0504-1] [Citation(s) in RCA: 31] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Cognitive control enables successful goal-directed behavior by resolving a conflict between opposing action tendencies, while emotional control arises as a consequence of emotional conflict processing such as in irony. While negative emotion facilitates both cognitive and emotional conflict processing, it is unclear how emotional conflict processing is affected by positive emotion (e.g., humor). In 2 EEG experiments, we investigated the role of positive audiovisual target stimuli in cognitive and emotional conflict processing. Participants categorized either spoken vowels (cognitive task) or their emotional valence (emotional task) and ignored the visual stimulus dimension. Behaviorally, a positive target showed no influence on cognitive conflict processing, but impeded emotional conflict processing. In the emotional task, response time conflict costs were higher for positive than for neutral targets. In the EEG, we observed an interaction of emotion by congruence in the P200 and N200 ERP components in emotional but not in cognitive conflict processing. In the emotional conflict task, the P200 and N200 conflict effect was larger for emotional than neutral targets. Thus, our results show that emotion affects conflict processing differently as a function of conflict type and emotional valence. This suggests that there are conflict- and valence-specific mechanisms modulating executive control.
Affiliation(s)
- Artyom Zinchenko
- International Max Planck Research School on Neuroscience of Communication (IMPRS NeuroCom), Leipzig, Germany
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, 04103, Leipzig, Germany
- Christian Obermeier
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, 04103, Leipzig, Germany
- Philipp Kanske
- Department of Social Neuroscience, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Erich Schröger
- Institute of Psychology, University of Leipzig, Leipzig, Germany
- Sonja A Kotz
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, 04103, Leipzig, Germany.
- Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, The Netherlands.
38
Gao C, Wedell DH, Kim J, Weber CE, Shinkareva SV. Modelling audiovisual integration of affect from videos and music. Cogn Emot 2017; 32:516-529. [DOI: 10.1080/02699931.2017.1320979] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Chuanji Gao
- Department of Psychology, University of South Carolina, Columbia, SC, USA
- Douglas H. Wedell
- Department of Psychology, University of South Carolina, Columbia, SC, USA
- Jongwan Kim
- Department of Psychology, University of South Carolina, Columbia, SC, USA
- Christine E. Weber
- Department of Psychology, University of South Carolina, Columbia, SC, USA
39
Affiliation(s)
- Stefan R. Schweinberger
- Department of General Psychology, Friedrich Schiller University and DFG Research Unit Person Perception, Jena, Germany
- David M.C. Robertson
- Department of General Psychology, Friedrich Schiller University and DFG Research Unit Person Perception, Jena, Germany
40
Kokinous J, Tavano A, Kotz SA, Schröger E. Perceptual integration of faces and voices depends on the interaction of emotional content and spatial frequency. Biol Psychol 2017; 123:155-165. [DOI: 10.1016/j.biopsycho.2016.12.007] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2016] [Revised: 10/11/2016] [Accepted: 12/11/2016] [Indexed: 10/20/2022]
41
Symons AE, El-Deredy W, Schwartze M, Kotz SA. The Functional Role of Neural Oscillations in Non-Verbal Emotional Communication. Front Hum Neurosci 2016; 10:239. [PMID: 27252638 PMCID: PMC4879141 DOI: 10.3389/fnhum.2016.00239] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2016] [Accepted: 05/09/2016] [Indexed: 12/18/2022] Open
Abstract
Effective interpersonal communication depends on the ability to perceive and interpret nonverbal emotional expressions from multiple sensory modalities. Current theoretical models propose that visual and auditory emotion perception involves a network of brain regions including the primary sensory cortices, the superior temporal sulcus (STS), and orbitofrontal cortex (OFC). However, relatively little is known about how the dynamic interplay between these regions gives rise to the perception of emotions. In recent years, there has been increasing recognition of the importance of neural oscillations in mediating neural communication within and between functional neural networks. Here we review studies investigating changes in oscillatory activity during the perception of visual, auditory, and audiovisual emotional expressions, and aim to characterize the functional role of neural oscillations in nonverbal emotion perception. Findings from the reviewed literature suggest that theta-band oscillations most consistently differentiate between emotional and neutral expressions. While early theta synchronization appears to reflect the initial encoding of emotionally salient sensory information, later fronto-central theta synchronization may reflect the further integration of sensory information with internal representations. Additionally, gamma synchronization reflects facilitated sensory binding of emotional expressions within regions such as the OFC, STS, and, potentially, the amygdala. However, the evidence is more ambiguous when it comes to the role of oscillations within the alpha and beta frequencies, which vary as a function of modality (or modalities), the presence or absence of predictive information, and attentional or task demands. Thus, the synchronization of neural oscillations within specific frequency bands mediates the rapid detection, integration, and evaluation of emotional expressions. Moreover, the functional coupling of oscillatory activity across multiple frequency bands supports a predictive coding model of multisensory emotion perception in which emotional facial and body expressions facilitate the processing of emotional vocalizations.
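As an illustration of the kind of band-limited measure this review discusses, here is a minimal sketch (assumed filter order and band edges; not taken from any reviewed study) that isolates theta-band (4-8 Hz) activity and its power envelope from a single EEG channel:

```python
# Minimal sketch: theta-band power envelope via band-pass filter + Hilbert.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_power(eeg, fs, low=4.0, high=8.0):
    b, a = butter(4, [low, high], btype="bandpass", fs=fs)
    theta = filtfilt(b, a, eeg)            # zero-phase band-pass filtering
    return np.abs(hilbert(theta)) ** 2     # instantaneous theta power

fs = 500
eeg = np.random.randn(2 * fs)              # 2 s of synthetic "EEG"
print(theta_power(eeg, fs).mean())
```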
Affiliation(s)
- Ashley E. Symons
- School of Psychological Sciences, University of Manchester, Manchester, UK
- Wael El-Deredy
- School of Psychological Sciences, University of Manchester, Manchester, UK
- School of Biomedical Engineering, Universidad de Valparaiso, Valparaiso, Chile
- Michael Schwartze
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, Netherlands
- Sonja A. Kotz
- School of Psychological Sciences, University of Manchester, Manchester, UK
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, Netherlands
42
Taffou M, Ondřej J, O'Sullivan C, Warusfel O, Dubal S, Viaud-Delmon I. Multisensory aversive stimuli differentially modulate negative feelings in near and far space. PSYCHOLOGICAL RESEARCH 2016; 81:764-776. [PMID: 27150637 DOI: 10.1007/s00426-016-0774-1] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2015] [Accepted: 04/25/2016] [Indexed: 12/19/2022]
Abstract
Affect, space, and multisensory integration are closely linked processes. However, it is unclear whether the spatial location of emotional stimuli interacts with multisensory presentation to influence the emotional experience they induce in the perceiver. In this study, we used the unique advantages of virtual reality techniques to present potentially aversive crowd stimuli embedded in a natural context and to control their display in terms of sensory and spatial presentation. Individuals high in crowdphobic fear navigated an auditory-visual virtual environment in which they encountered virtual crowds presented through the visual channel, the auditory channel, or both. They reported the intensity of their negative emotional experience at a far distance and at a close distance from the crowd stimuli. Whereas auditory-visual presentation of close feared stimuli amplified negative feelings, auditory-visual presentation of distant feared stimuli did not. This suggests that spatial closeness allows multisensory processes to modulate the intensity of the emotional experience induced by aversive stimuli. Nevertheless, the specific role of auditory stimulation must be investigated to better understand this interaction between multisensory, affective, and spatial representation processes. This phenomenon may support the implementation of defensive behaviors in response to aversive stimuli positioned to threaten an individual's feeling of security.
Affiliation(s)
- Marine Taffou
- Sciences et Technologies de la Musique et du Son, CNRS UMR 9912, IRCAM, Sorbonne Universités, UPMC Univ Paris 06, 1 place Igor Stravinsky, 75004, Paris, France.
- Social and Affective Neuroscience (SAN) Laboratory, Institut du Cerveau et de la Moelle épinière, ICM, Inserm, U 1127, CNRS UMR 7225, Sorbonne Universités, UPMC Univ Paris 06 UMR S 1127, 47 boulevard de l'hôpital, 75013, Paris, France.
- Jan Ondřej
- School of Computer Science and Statistics, Trinity College Dublin, Dublin 2, Ireland
- Carol O'Sullivan
- School of Computer Science and Statistics, Trinity College Dublin, Dublin 2, Ireland
- Olivier Warusfel
- Sciences et Technologies de la Musique et du Son, CNRS UMR 9912, IRCAM, Sorbonne Universités, UPMC Univ Paris 06, 1 place Igor Stravinsky, 75004, Paris, France
- Stéphanie Dubal
- Social and Affective Neuroscience (SAN) Laboratory, Institut du Cerveau et de la Moelle épinière, ICM, Inserm, U 1127, CNRS UMR 7225, Sorbonne Universités, UPMC Univ Paris 06 UMR S 1127, 47 boulevard de l'hôpital, 75013, Paris, France
- Isabelle Viaud-Delmon
- Sciences et Technologies de la Musique et du Son, CNRS UMR 9912, IRCAM, Sorbonne Universités, UPMC Univ Paris 06, 1 place Igor Stravinsky, 75004, Paris, France
43
Hemodynamic (fNIRS) and EEG (N200) correlates of emotional inter-species interactions modulated by visual and auditory stimulation. Sci Rep 2016; 6:23083. [PMID: 26976052 PMCID: PMC4791677 DOI: 10.1038/srep23083] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2015] [Accepted: 03/01/2016] [Indexed: 11/15/2022] Open
Abstract
Brain activity, considered in its hemodynamic (optical imaging: functional near-infrared spectroscopy, fNIRS) and electrophysiological (event-related potentials, ERPs; N200) components, was monitored while subjects observed (visual stimulation, V) or observed and heard (visual + auditory stimulation, VU) situations representing inter-species (human-animal) interactions with emotionally positive (cooperative) or negative (uncooperative) content. In addition, the cortical lateralization effect (more left- or right-sided dorsolateral prefrontal cortex, DLPFC) was explored. Both ERPs and fNIRS showed significant effects of the emotional interactions, which were discussed in light of cross-modal integration effects. The significance of the inter-species effect for emotional behavior was considered. In addition, the consonant hemodynamic and EEG results and their value as integrated measures were discussed in light of the valence effect.
44
Schirmer A, Ng T, Escoffier N, Penney TB. Emotional Voices Distort Time: Behavioral and Neural Correlates. TIMING & TIME PERCEPTION 2016. [DOI: 10.1163/22134468-00002058] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/12/2023]
Abstract
The present study explored the effect of vocally expressed emotions on duration perception. Recordings of the syllable ‘ah’ spoken in a disgusted (negative), surprised (positive), and neutral voice were subjected to a compression/stretching algorithm producing seven durations ranging from 300 to 1200 ms. The resulting stimuli served in a duration bisection procedure in which participants indicated whether a stimulus was more similar in duration to a previously studied 300 ms (short) or 1200 ms (long) 440 Hz tone. Behavioural results indicate that disgusted expressions were perceived as shorter than surprised expressions in both men and women and this effect was related to perceived valence. Additionally, both emotional expressions were perceived as shorter than neutral expressions in women only and this effect was related to perceived arousal. Event-related potentials showed an influence of emotion and rate of acoustic change (fast for compressed/short and slow for stretched/long stimuli) on stimulus encoding in women only. Based on these findings, we suggest that emotions interfere with temporal processes and facilitate the influence of contextual information (e.g., rate of acoustic change, attention) on duration judgements. Because women are more sensitive than men to unattended vocal emotions, their temporal judgements are more strongly distorted.
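The bisection point implied by such a procedure (the duration judged "long" on half of the trials) is typically estimated by fitting a psychometric function to the proportion of "long" responses. A minimal sketch with hypothetical proportions (not the study's data):

```python
# Minimal sketch: logistic fit to duration-bisection data to estimate the
# point of subjective equality (PSE), i.e., the bisection point.
import numpy as np
from scipy.optimize import curve_fit

def logistic(d, pse, slope):
    return 1.0 / (1.0 + np.exp(-(d - pse) / slope))

durations = np.array([300, 450, 600, 750, 900, 1050, 1200])    # ms
p_long = np.array([0.05, 0.15, 0.35, 0.55, 0.80, 0.92, 0.97])  # hypothetical
(pse, slope), _ = curve_fit(logistic, durations, p_long, p0=[700, 100])
print(f"bisection point ~ {pse:.0f} ms")
```

A leftward PSE shift for an emotion would indicate that its expressions are judged "long" at shorter physical durations, i.e., perceived as lasting longer.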
Affiliation(s)
- Annett Schirmer
- National University of Singapore, Singapore; Duke/NUS Graduate Medical School, Singapore
- Tabitha Ng
- National University of Singapore, Singapore
- Nicolas Escoffier
- National University of Singapore, Singapore
- Trevor B. Penney
- National University of Singapore, Singapore
45
Chen X, Pan Z, Wang P, Yang X, Liu P, You X, Yuan J. The integration of facial and vocal cues during emotional change perception: EEG markers. Soc Cogn Affect Neurosci 2015; 11:1152-61. [PMID: 26130820 DOI: 10.1093/scan/nsv083] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2015] [Accepted: 06/24/2015] [Indexed: 11/13/2022] Open
Abstract
The ability to detect emotional changes is of primary importance for social living. Though emotional signals are often conveyed by multiple modalities, how emotional changes in the vocal and facial modalities integrate into a unified percept has yet to be directly investigated. To address this issue, we asked participants to detect emotional changes delivered by facial, vocal, and facial-vocal expressions while behavioral responses and the electroencephalogram were recorded. Behavioral results showed that bimodal emotional changes were detected with higher accuracy and shorter response latencies compared with each unimodal condition. Moreover, the detection of emotional change, regardless of modality, was associated with enhanced amplitudes of the N2 and P3 components, as well as greater theta synchronization. More importantly, the P3 amplitudes and theta synchronization were larger for the bimodal emotional change condition than for the sum of the two unimodal conditions. The superadditive responses in P3 amplitudes and theta synchronization were both positively correlated with the magnitude of the bimodal superadditivity in accuracy. These behavioral and electrophysiological data consistently illustrate an effect of audiovisual integration during the detection of emotional changes, which is most likely mediated by the P3 activity and theta oscillations in brain responses.
Affiliation(s)
- Xuhai Chen
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an 710062, China; Key Laboratory of Modern Teaching Technology, Ministry of Education, Shaanxi Normal University, Xi'an 710062, China
- Zhihui Pan
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an 710062, China
- Ping Wang
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an 710062, China
- Xiaohong Yang
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Peng Liu
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an 710062, China
- Xuqun You
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an 710062, China
- Jiajin Yuan
- Key Laboratory of Cognition and Personality of Ministry of Education, School of Psychology, Southwest University, Chongqing 400715, China
46
Zinchenko A, Kanske P, Obermeier C, Schröger E, Kotz SA. Emotion and goal-directed behavior: ERP evidence on cognitive and emotional conflict. Soc Cogn Affect Neurosci 2015; 10:1577-87. [PMID: 25925271 DOI: 10.1093/scan/nsv050] [Citation(s) in RCA: 74] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2014] [Accepted: 04/24/2015] [Indexed: 12/29/2022] Open
Abstract
Cognitive control supports goal-directed behavior by resolving conflict among opposing action tendencies. Emotion can trigger cognitive control processes, thus speeding up conflict processing when the target dimension of stimuli is emotional. However, it is unclear what role emotionality of the target dimension plays in the processing of emotional conflict (e.g. in irony). In two EEG experiments, we compared the influence of emotional valence of the target (emotional, neutral) in cognitive and emotional conflict processing. To maximally approximate real-life communication, we used audiovisual stimuli. Participants either categorized spoken vowels (cognitive conflict) or their emotional valence (emotional conflict), while visual information was congruent or incongruent. Emotional target dimension facilitated both cognitive and emotional conflict processing, as shown in a reduced reaction time conflict effect. In contrast, the N100 in the event-related potentials showed a conflict-specific reversal: the conflict effect was larger for emotional compared with neutral trials in cognitive conflict and smaller in emotional conflict. Additionally, domain-general conflict effects were observed in the P200 and N200 responses. The current findings confirm that emotions have a strong influence on cognitive and emotional conflict processing. They also highlight the complexity and heterogeneity of the interaction of emotion with different types of conflict.
Affiliation(s)
- Artyom Zinchenko
- International Max Planck Research School on Neuroscience of Communication (IMPRS NeuroCom), Leipzig, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Philipp Kanske
- Department of Social Neuroscience, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Christian Obermeier
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Sonja A Kotz
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; School of Psychological Sciences, University of Manchester, Manchester M13 9PL, UK
47
Schröger E, Marzecová A, SanMiguel I. Attention and prediction in human audition: a lesson from cognitive psychophysiology. Eur J Neurosci 2015; 41:641-64. [PMID: 25728182 PMCID: PMC4402002 DOI: 10.1111/ejn.12816] [Citation(s) in RCA: 147] [Impact Index Per Article: 16.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2014] [Revised: 11/27/2014] [Accepted: 12/01/2014] [Indexed: 11/30/2022]
Abstract
Attention is a hypothetical mechanism in the service of perception that facilitates the processing of relevant information and inhibits the processing of irrelevant information. Prediction is a hypothetical mechanism in the service of perception that considers prior information when interpreting the sensory input. Although both attention and prediction aid perception, they are rarely considered together. Auditory attention typically yields enhanced brain activity, whereas auditory prediction often results in attenuated brain responses. However, when strongly predicted sounds are omitted, brain responses to silence resemble those elicited by sounds. Studies jointly investigating attention and prediction revealed that these different mechanisms may interact, e.g., attention may magnify the processing differences between predicted and unpredicted sounds. Following predictive coding theory, we suggest that prediction relates to predictions sent down from predictive models housed in higher levels of the processing hierarchy to lower levels, whereas attention refers to gain modulation of the prediction error signal sent up to the higher level. As predictions encode the contents of and confidence in the sensory data, and as gain can be modulated by the intention of the listener and by the predictability of the input, various possibilities for interactions between attention and prediction unfold. From this perspective, the traditional distinction between bottom-up/exogenous and top-down/endogenous attention can be revisited, and the classic concepts of attentional gain and attentional trace can be integrated.
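The proposed division of labor (prediction as a top-down signal, attention as gain on the upward prediction error) can be made concrete in a toy update rule. The following sketch is purely illustrative (our own simplification, not the authors' model):

```python
# Toy sketch: one predictive-coding step where attention scales the gain
# of the prediction error passed up the hierarchy. Numbers are illustrative.
def update_prediction(prediction, sensory_input, gain, learning_rate=0.1):
    error = sensory_input - prediction     # prediction error at the lower level
    weighted_error = gain * error          # attention modulates error gain
    return prediction + learning_rate * weighted_error, weighted_error

prediction = 0.0
for x in [1.0, 1.0, 1.0, 0.2]:             # repeated standards, then a deviant
    prediction, err = update_prediction(prediction, x, gain=2.0)  # attended
    print(round(prediction, 3), round(err, 3))
```

With a higher gain (attended input), the same physical deviant produces a larger weighted error, mirroring the idea that attention magnifies differences between predicted and unpredicted sounds.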
Affiliation(s)
- Erich Schröger
- Institute for Psychology, BioCog - Cognitive and Biological Psychology, University of Leipzig, Neumarkt 9-19, D-04109 Leipzig, Germany
- Anna Marzecová
- Institute for Psychology, BioCog - Cognitive and Biological Psychology, University of Leipzig, Neumarkt 9-19, D-04109 Leipzig, Germany
- Iria SanMiguel
- Institute for Psychology, BioCog - Cognitive and Biological Psychology, University of Leipzig, Neumarkt 9-19, D-04109 Leipzig, Germany