1
Scheliga S, Kellermann T, Lampert A, Rolke R, Spehr M, Habel U. Neural correlates of multisensory integration in the human brain: an ALE meta-analysis. Rev Neurosci 2023; 34:223-245. [PMID: 36084305] [DOI: 10.1515/revneuro-2022-0065]
Abstract
Previous fMRI research has identified the superior temporal sulcus as a central integration area for audiovisual stimuli. However, less is known about a general multisensory integration network across the senses. We therefore conducted an activation likelihood estimation (ALE) meta-analysis spanning multiple sensory modalities to identify a common brain network. We included 49 studies covering all Aristotelian senses, i.e., auditory, visual, tactile, gustatory, and olfactory stimuli. The analysis revealed significant activation in bilateral superior temporal gyrus, middle temporal gyrus, thalamus, right insula, and left inferior frontal gyrus. We assume these regions form a general multisensory integration network with distinct functional roles: the thalamus operates as a first subcortical relay projecting sensory information to higher cortical integration centers in the superior temporal gyrus/sulcus, while conflict-processing regions such as the insula and inferior frontal gyrus facilitate the integration of incongruent information. We additionally performed meta-analytic connectivity modelling and found that each brain region showed co-activations within the identified multisensory integration network. By including multiple sensory modalities, our meta-analysis may therefore provide evidence for a common brain network that supports different functional roles for multisensory integration.
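For orientation, the core ALE computation can be sketched as follows: each study's reported foci are modeled as 3D Gaussian probability blobs, combined into a per-study modeled activation (MA) map, and the ALE map is the voxel-wise union across studies. This is a minimal illustration, not the study's pipeline; the grid size, kernel width, and foci are invented toy values, and the real procedure adds sample-size-dependent kernels and permutation-based thresholding.

```python
import numpy as np

def gaussian_kernel_3d(sigma_vox, radius):
    """3D Gaussian blob normalized to sum to 1."""
    ax = np.arange(-radius, radius + 1)
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    g = np.exp(-(x**2 + y**2 + z**2) / (2 * sigma_vox**2))
    return g / g.sum()

def modeled_activation_map(foci, shape, sigma_vox=2.0):
    """Per-study MA map: probability that a focus lies in each voxel,
    taking the voxel-wise maximum across the study's foci."""
    ma = np.zeros(shape)
    radius = int(3 * sigma_vox)
    kern = gaussian_kernel_3d(sigma_vox, radius)
    for fx, fy, fz in foci:
        blob = np.zeros(shape)
        # place the kernel around the focus (edge clipping ignored for brevity)
        blob[fx - radius:fx + radius + 1,
             fy - radius:fy + radius + 1,
             fz - radius:fz + radius + 1] = kern
        ma = np.maximum(ma, blob)
    return ma

def ale_map(studies, shape):
    """ALE = union of per-study MA maps: 1 - prod(1 - MA_i)."""
    ale = np.zeros(shape)
    for foci in studies:
        ale = 1 - (1 - ale) * (1 - modeled_activation_map(foci, shape))
    return ale

# toy example: two studies reporting foci in voxel coordinates
studies = [[(20, 25, 18)], [(21, 24, 18), (40, 30, 22)]]
print(ale_map(studies, shape=(64, 64, 48)).max())
```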
Affiliation(s)
- Sebastian Scheliga
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Thilo Kellermann
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany; JARA-Institute Brain Structure Function Relationship, Pauwelsstraße 30, 52074 Aachen, Germany
- Angelika Lampert
- Institute of Physiology, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Roman Rolke
- Department of Palliative Medicine, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Marc Spehr
- Department of Chemosensation, RWTH Aachen University, Institute for Biology, Worringerweg 3, 52074 Aachen, Germany
- Ute Habel
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany; JARA-Institute Brain Structure Function Relationship, Pauwelsstraße 30, 52074 Aachen, Germany
2
Thakral PP, Bottary R, Kensinger EA. Representing the Good and Bad: fMRI signatures during the encoding of multisensory positive, negative, and neutral events. Cortex 2022; 151:240-258. [PMID: 35462202] [PMCID: PMC9124690] [DOI: 10.1016/j.cortex.2022.02.014]
Abstract
Few studies have examined how multisensory emotional experiences are processed and encoded into memory. Here, we aimed to determine whether, at encoding, activity within functionally defined visual- and auditory-processing brain regions discriminated the emotional category (i.e., positive, negative, or neutral) of multisensory (audio-visual) events. Participants incidentally encoded positive, negative, and neutral multisensory stimuli during event-related functional magnetic resonance imaging (fMRI). Following a 3-h post-encoding delay, their memory for the studied stimuli was tested, allowing us to identify emotion-category-specific subsequent-memory effects, focusing on medial temporal lobe regions (i.e., amygdala, hippocampus) and visual- and auditory-processing regions. We used a combination of univariate and multivoxel pattern fMRI analyses (MVPA) to examine emotion-category-specificity in mean activity levels and neural patterning, respectively. Univariate analyses revealed many more visual regions showing negative-category-specificity than positive-category-specificity, and auditory regions showed only negative-category-specificity. These results suggest that negative emotion is more closely tied to information contained within sensory regions, a conclusion supported by the MVPA analyses. Functional connectivity analyses further revealed that the visual amplification of category-selective processing is driven, in part, by mean signal from the amygdala. Interestingly, while stronger representations in visuo-auditory regions were related to subsequent memory for neutral multisensory stimuli, they were related to subsequent forgetting of positive and negative stimuli. Neural patterning in the hippocampus and amygdala was related to memory for negative multisensory stimuli. These results provide new evidence that negative emotional stimuli are processed with increased engagement of visuosensory regions, but that this sensory engagement, which generalizes across the entire emotion category, is not the type of sensory encoding that is most beneficial for later retrieval.
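A minimal sketch of the MVPA step described above: a linear classifier is cross-validated across scanner runs to test whether ROI activity patterns discriminate the three emotion categories. The data, ROI size, and run structure below are simulated placeholders, not the study's.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# hypothetical data: 120 trials x 500 voxels in a sensory ROI;
# labels 0/1/2 = positive/negative/neutral, grouped by scanner run
X = rng.standard_normal((120, 500))
y = np.repeat([0, 1, 2], 40)
runs = np.tile(np.arange(6), 20)

clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
scores = []
for train, test in LeaveOneGroupOut().split(X, y, groups=runs):
    clf.fit(X[train], y[train])          # train on 5 runs
    scores.append(clf.score(X[test], y[test]))  # test on held-out run
print(f"mean cross-validated accuracy: {np.mean(scores):.2f}")  # ~0.33 for noise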
Affiliation(s)
- Ryan Bottary
- Department of Psychology and Neuroscience, Boston College, MA, USA; Division of Sleep Medicine, Harvard Medical School, MA, USA
3
Li C, Kovács G, Trapp S. Visual short-term memory load modulates repetition related fMRI signal adaptation. Biol Psychol 2021; 166:108199. [PMID: 34634432] [DOI: 10.1016/j.biopsycho.2021.108199]
Abstract
While several computational models have suggested how predictive coding could be implemented at an algorithmic level, reference to cognitive processes remains rather sparse. A crucial process might be the elevation of relevant prior information from long-term memory to render it highly accessible for subsequent comparison with sensory input. In many models, visual short-term memory (VSTM) is considered information from long-term memory in a state of elevated activity. We measured the BOLD signal in face-specific cortical areas using a repetition suppression (RS) paradigm; RS has been associated with predictive processing in previous studies. We show that RS within the fusiform face area is significantly attenuated when VSTM is loaded with other, non-facial visual information. Although an unequivocal inference is not possible, the data indicate a role of VSTM in predictive processes, as indexed by expectation-related RS.
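A sketch of the repetition suppression logic in such a design: RS is the response difference between unrepeated and repeated faces, and the question is whether that difference shrinks under VSTM load. The subject-level beta values below are simulated stand-ins; the actual analysis was performed on fMRI estimates from the fusiform face area.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 24  # hypothetical number of subjects
# simulated FFA betas for alternating (novel) vs. repeated faces,
# under low and high VSTM load
alt_low, rep_low = rng.normal(1.0, 0.3, n), rng.normal(0.6, 0.3, n)
alt_high, rep_high = rng.normal(1.0, 0.3, n), rng.normal(0.9, 0.3, n)

rs_low = alt_low - rep_low      # repetition suppression = novel minus repeated
rs_high = alt_high - rep_high
t, p = stats.ttest_rel(rs_low, rs_high)  # load x repetition interaction
print(f"RS low load {rs_low.mean():.2f}, high load {rs_high.mean():.2f}, "
      f"t={t:.2f}, p={p:.3f}")
```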
Affiliation(s)
- Chenglin Li
- Department of Biological Psychology and Cognitive Neurosciences, Institute of Psychology, University of Jena, Jena, Germany
- Gyula Kovács
- Department of Biological Psychology and Cognitive Neurosciences, Institute of Psychology, University of Jena, Jena, Germany
- Sabrina Trapp
- Department of Sport Science, University of Bielefeld, Bielefeld, Germany; Department of Psychology, University of Leipzig, Leipzig, Germany
4
Xu J, Dong H, Li N, Wang Z, Guo F, Wei J, Dang J. Weighted RSA: An Improved Framework on the Perception of Audio-visual Affective Speech in Left Insula and Superior Temporal Gyrus. Neuroscience 2021; 469:46-58. [PMID: 34119576] [DOI: 10.1016/j.neuroscience.2021.06.002]
Abstract
The ability to accurately perceive the emotion expressed by another person's face or voice is critical to successful social interaction. However, only a few studies have examined multimodal interactions in speech emotion, and findings on speech emotion perception are inconsistent. It remains unclear how the human brain perceives speech emotion of different valence from multimodal stimuli. In this paper, we conducted a functional magnetic resonance imaging (fMRI) study with an event-related design, using dynamic facial expressions and emotional speech stimuli to express different emotions, in order to explore the perception mechanism of speech emotion in the audio-visual modality. Representational similarity analysis (RSA), whole-brain searchlight analysis, and conjunction analysis of emotion were used to interpret the representation of speech emotion from different angles. Notably, a weighted RSA approach was proposed to evaluate the contribution of each candidate model to the best-fitting model, providing a supplement to standard RSA. The weighted RSA results indicated that the fitted models were superior to all candidate models and that the weights can be used to explain the representation in regions of interest. The bilateral amygdala was associated with the processing of positive and negative, but not neutral, emotions. The results indicate that the left posterior insula and the left anterior superior temporal gyrus (STG) play important roles in the perception of multimodal speech emotion.
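The weighted RSA idea can be sketched as fitting non-negative weights that combine candidate model RDMs into a best-fitting predictor of the neural RDM; the weights then index each candidate's contribution. The RDMs below are random toys, and plain non-negative least squares is an illustrative stand-in for the authors' exact fitting procedure.

```python
import numpy as np
from scipy.optimize import nnls

def upper(rdm):
    """Vectorize the upper triangle of an RDM (excluding the diagonal)."""
    return rdm[np.triu_indices_from(rdm, k=1)]

rng = np.random.default_rng(2)
n_cond = 12  # e.g., hypothetical 4 emotions x 3 stimulus conditions
models = [rng.random((n_cond, n_cond)) for _ in range(3)]
models = [(m + m.T) / 2 for m in models]  # symmetric candidate RDMs
# toy "neural" RDM built mostly from models 0 and 2, plus noise
neural = 0.7 * models[0] + 0.2 * models[2] + 0.1 * rng.random((n_cond, n_cond))

X = np.column_stack([upper(m) for m in models])
w, resid = nnls(X, upper(neural))  # non-negative weight per candidate model
print("fitted weights:", np.round(w / w.sum(), 2))
```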
Affiliation(s)
- Junhai Xu
- College of Intelligence and Computing, Tianjin Key Lab of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Haibin Dong
- College of Intelligence and Computing, Tianjin Key Lab of Cognitive Computing and Application, Tianjin University, Tianjin, China; State Grid Tianjin Electric Power Company, China
- Na Li
- College of Intelligence and Computing, Tianjin Key Lab of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Zeyu Wang
- College of Intelligence and Computing, Tianjin Key Lab of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Fei Guo
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Jianguo Wei
- College of Intelligence and Computing, Tianjin Key Lab of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Jianwu Dang
- College of Intelligence and Computing, Tianjin Key Lab of Cognitive Computing and Application, Tianjin University, Tianjin, China; School of Information Science, Japan Advanced Institute of Science and Technology, Japan
5
Csonka M, Mardmomen N, Webster PJ, Brefczynski-Lewis JA, Frum C, Lewis JW. Meta-Analyses Support a Taxonomic Model for Representations of Different Categories of Audio-Visual Interaction Events in the Human Brain. Cereb Cortex Commun 2021; 2:tgab002. [PMID: 33718874] [PMCID: PMC7941256] [DOI: 10.1093/texcom/tgab002]
Abstract
Our ability to perceive meaningful action events involving objects, people, and other animate agents is characterized in part by an interplay of visual and auditory sensory processing and their cross-modal interactions. However, this multisensory ability can be altered or dysfunctional in some hearing and sighted individuals, and in some clinical populations. The present meta-analysis sought to test current hypotheses regarding neurobiological architectures that may mediate audio-visual multisensory processing. Reported coordinates from 82 neuroimaging studies (137 experiments) that revealed some form of audio-visual interaction in discrete brain regions were compiled, converted to a common coordinate space, and then organized along specific categorical dimensions to generate activation likelihood estimate (ALE) brain maps and various contrasts of those derived maps. The results revealed brain regions (cortical "hubs") preferentially involved in multisensory processing along different stimulus category dimensions, including 1) living versus nonliving audio-visual events, 2) audio-visual events involving vocalizations versus actions by living sources, 3) emotionally valent events, and 4) dynamic-visual versus static-visual audio-visual stimuli. These meta-analysis results are discussed in the context of neurocomputational theories of semantic knowledge representations and perception, and the brain volumes of interest are available for download to facilitate data interpretation for future neuroimaging studies.
Affiliation(s)
- Matt Csonka
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Nadia Mardmomen
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Paula J Webster
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Julie A Brefczynski-Lewis
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Chris Frum
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- James W Lewis
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
6
Boasen J, Giroux F, Duchesneau MO, Sénécal S, Léger PM, Ménard JF. High-fidelity vibrokinetic stimulation induces sustained changes in intercortical coherence during a cinematic experience. J Neural Eng 2020; 17:046046. [PMID: 32756020] [DOI: 10.1088/1741-2552/abaca2]
Abstract
OBJECTIVE High-fidelity vibrokinetic (HFVK) technology is widely used to enhance the immersiveness of audiovisual (AV) entertainment experiences. However, despite evidence that HFVK technology subjectively enhances AV immersion, the underlying mechanism has not been clarified. Neurophysiological studies could provide important evidence to illuminate this mechanism, thereby benefiting HFVK stimulus design and facilitating expansion of HFVK technology. APPROACH We conducted a between-subjects (VK, N = 11; Control, N = 9) exploratory study to measure the effect of HFVK stimulation, delivered through an HFVK seat, on electroencephalographic cortical activity during an AV cinematic experience. Subjective appreciation of the experience was assessed and incorporated into statistical models exploring the effects of HFVK stimulation across cortical brain areas. We separately analyzed alpha-band (8-12 Hz) and theta-band (5-7 Hz) activities as indices of engagement and sensory processing, respectively. We also performed theta-band (5-7 Hz) coherence analyses using cortical seed areas identified from the theta activity analysis. MAIN RESULTS The right fusiform gyrus, inferotemporal gyrus, and supramarginal gyrus, known for emotion, AV-spatial, and vestibular processing, were identified as seeds from the theta analyses. Coherence from these areas was uniformly enhanced in HFVK subjects in right motor areas, albeit predominantly in those who were appreciative. Meanwhile, compared to control subjects, HFVK subjects exhibited uniform interhemispheric decoherence with the left insula, which is important for self-processing. SIGNIFICANCE The results collectively point to sustained decoherence between sensory and self-processing as a possible mechanism for how HFVK increases immersion, and suggest that coordination of emotional, spatial, and vestibular processing hubs with the motor system may be required for appreciation of the HFVK-enhanced experience. Overall, this study offers the first demonstration that HFVK stimulation has a real and sustained effect on brain activity during a cinematic experience.
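A minimal sketch of the kind of spectral coherence computation underlying such analyses: magnitude-squared coherence between two signals, averaged over the theta band. The sampling rate, signal construction, and seed/target labels are assumptions for illustration; the study worked with source-reconstructed EEG, not raw channels.

```python
import numpy as np
from scipy.signal import coherence

fs = 250.0  # sampling rate in Hz (assumed)
rng = np.random.default_rng(3)
t = np.arange(0, 60, 1 / fs)  # one minute of toy data
shared = np.sin(2 * np.pi * 6 * t)                    # common 6 Hz (theta) drive
seed = shared + rng.standard_normal(t.size)           # e.g., a fusiform source
target = 0.8 * shared + rng.standard_normal(t.size)   # e.g., a motor source

f, coh = coherence(seed, target, fs=fs, nperseg=int(4 * fs))
theta = (f >= 5) & (f <= 7)
print(f"mean theta-band (5-7 Hz) coherence: {coh[theta].mean():.2f}")
```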
Affiliation(s)
- J Boasen
- Tech3Lab, HEC Montréal, Montréal, Canada; Faculty of Health Sciences, Hokkaido University, Sapporo, Japan
7
Creupelandt C, D'Hondt F, de Timary P, Falagiarda F, Collignon O, Maurage P. Selective visual and crossmodal impairment in the discrimination of anger and fear expressions in severe alcohol use disorder. Drug Alcohol Depend 2020; 213:108079. [PMID: 32554170] [DOI: 10.1016/j.drugalcdep.2020.108079]
Abstract
BACKGROUND Severe alcohol use disorder (SAUD) is associated with impaired discrimination of emotional expressions. This deficit appears increased in crossmodal settings, when simultaneous inputs from different sensory modalities are presented. However, studies exploring emotional crossmodal processing in SAUD have so far relied on static faces and unmatched face/voice pairs, offering limited ecological validity. Our aim was therefore to assess emotional processing using a validated and ecological paradigm relying on dynamic audio-visual stimuli, manipulating the amount of emotional information available. METHOD Thirty individuals with SAUD and 30 matched healthy controls performed an emotional discrimination task requiring them to identify five emotions (anger, disgust, fear, happiness, sadness) expressed as visual, auditory, or auditory-visual segments of varying length. Sensitivity indices (d') were computed to obtain an unbiased measure of emotional discrimination and entered into a generalized linear mixed model. Incorrect emotional attributions were also scrutinized through confusion matrices. RESULTS Discrimination levels varied across sensory modalities and emotions and increased with stimulus duration. Crucially, performance also improved from unimodal to crossmodal conditions in both groups, but discrimination of crossmodal anger stimuli and of crossmodal/visual fear stimuli remained selectively impaired in SAUD. These deficits were not influenced by stimulus duration, suggesting that they were not modulated by the amount of emotional information available. Moreover, they were not associated with systematic error patterns reflecting specific confusions between emotions. CONCLUSIONS These results clarify the nature and extent of crossmodal impairments in SAUD and converge with earlier findings to ascribe a specific role to anger and fear in this pathology.
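The sensitivity index d' mentioned above is the difference between the z-transformed hit and false-alarm rates. A small sketch follows; the log-linear correction used to keep rates away from 0 and 1 is one of several common conventions and is an assumption here, as are the example counts.

```python
from scipy.stats import norm

def d_prime(hits, misses, fas, crs):
    """Sensitivity index d' = Z(hit rate) - Z(false-alarm rate),
    with a log-linear correction to avoid rates of exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (fas + 0.5) / (fas + crs + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# hypothetical counts for discriminating "anger" from the other emotions:
# 22 hits, 8 misses, 12 false alarms, 108 correct rejections
print(f"d' = {d_prime(22, 8, 12, 108):.2f}")
```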
Affiliation(s)
- Coralie Creupelandt
- Louvain Experimental Psychopathology Research Group (UCLEP), Psychological Sciences Research Institute (IPSY) and Institute of Neuroscience (IoNS), UCLouvain, B-1348, Louvain-la-Neuve, Belgium.
- Fabien D'Hondt
- Univ. Lille, Inserm, CHU Lille, U1172, Lille Neuroscience & Cognition, F-59000, Lille, France; CHU Lille, Clinique de Psychiatrie, CURE, F-59000, Lille, France; Centre National de Ressources et de Résilience Lille-Paris (CN2R), F-59000, Lille, France
- Philippe de Timary
- Louvain Experimental Psychopathology Research Group (UCLEP), Psychological Sciences Research Institute (IPSY) and Institute of Neuroscience (IoNS), UCLouvain, B-1348, Louvain-la-Neuve, Belgium; Department of Adult Psychiatry, Saint-Luc Academic Hospital, B-1200, Brussels, Belgium
- Federica Falagiarda
- Crossmodal Perception and Plasticity laboratory (CPP-Lab), Psychological Sciences Research Institute (IPSY) and Institute of Neuroscience (IoNS), UCLouvain, B-1348, Louvain-la-Neuve, Belgium
- Olivier Collignon
- Crossmodal Perception and Plasticity laboratory (CPP-Lab), Psychological Sciences Research Institute (IPSY) and Institute of Neuroscience (IoNS), UCLouvain, B-1348, Louvain-la-Neuve, Belgium; Centre for Mind/Brain Studies, University of Trento, Trento, Italy
- Pierre Maurage
- Louvain Experimental Psychopathology Research Group (UCLEP), Psychological Sciences Research Institute (IPSY) and Institute of Neuroscience (IoNS), UCLouvain, B-1348, Louvain-la-Neuve, Belgium
8
Johnson LR, Battle AR, Martinac B. Remembering Mechanosensitivity of NMDA Receptors. Front Cell Neurosci 2019; 13:533. [PMID: 31866826] [PMCID: PMC6906178] [DOI: 10.3389/fncel.2019.00533]
Abstract
An increase in post-synaptic Ca2+ conductance through activation of the ionotropic N-methyl-D-aspartate receptor (NMDAR), and the concomitant structural changes, are essential for the initiation of long-term potentiation (LTP) and memory formation. Memories can be initiated by coincident events, as occurs in classical conditioning, where the NMDAR can act as a molecular coincidence detector. Binding of glutamate and glycine, together with depolarization of the postsynaptic cell membrane to remove the Mg2+ channel pore block, results in NMDAR opening for Ca2+ conductance. Accumulating evidence has implicated both force-from-lipids and protein-tethering mechanisms in NMDAR mechanosensory transduction, as demonstrated by both membrane stretch and application of amphipathic molecules such as arachidonic acid (AA). The contribution of mechanosensitivity to memory formation and consolidation may be to increase NMDAR activity, thereby facilitating memory formation. In this review we look back at the progress made toward understanding the physiological and pathological roles of NMDA receptor channels in the mechanobiology of the nervous system and consider these findings in light of their potential functional implications for memory formation. We examine recent studies identifying mechanisms of both NMDAR and other mechanosensitive channels and discuss functional implications, including gain control of NMDAR opening probability. Mechanobiology is a rapidly growing area of biology with many important implications for understanding form, function, and pathology in the nervous system.
Affiliation(s)
- Luke R Johnson
- Victor Chang Cardiac Research Institute, Darlinghurst, NSW, Australia; St. Vincent's Clinical School, University of New South Wales, Darlinghurst, NSW, Australia; Division of Psychology, School of Medicine, University of Tasmania, Launceston, TAS, Australia; Department of Psychiatry, Center for the Study of Traumatic Stress, Uniformed Services University of the Health Sciences, Bethesda, MD, United States; School of Biomedical Sciences, Institute of Health and Biomedical Innovation (IHBI), Queensland University of Technology, Brisbane, QLD, Australia
- Andrew R Battle
- School of Biomedical Sciences, Institute of Health and Biomedical Innovation (IHBI), Queensland University of Technology, Brisbane, QLD, Australia; Prince Charles Hospital Northside Clinical Unit, School of Clinical Medicine, The University of Queensland, Brisbane, QLD, Australia; Translational Research Institute, Woolloongabba, QLD, Australia
- Boris Martinac
- Victor Chang Cardiac Research Institute, Darlinghurst, NSW, Australia; St. Vincent's Clinical School, University of New South Wales, Darlinghurst, NSW, Australia
9
Gao C, Weber CE, Shinkareva SV. The brain basis of audiovisual affective processing: Evidence from a coordinate-based activation likelihood estimation meta-analysis. Cortex 2019; 120:66-77. [DOI: 10.1016/j.cortex.2019.05.016]
10
Watson R, de Gelder B. The representation and plasticity of body emotion expression. Psychol Res 2019; 84:1400-1406. [PMID: 30603865] [DOI: 10.1007/s00426-018-1133-1]
Abstract
Emotions are expressed by the face, the voice, and the whole body. Research on the face and the voice has demonstrated not only that emotions are perceived categorically, but also that this perception can be manipulated. The purpose of this study was to investigate, in two separate experiments using adaptation and multisensory techniques, whether the perception of body emotion expressions also shows categorical effects and plasticity. We used an approach developed for studies of face and voice emotion perception and created novel morphed affective body stimuli, which varied in small incremental steps between emotions. Participants were instructed to categorise these morphed bodies after adaptation to bodies conveying different expressions (Experiment 1), or while simultaneously hearing affective voices (Experiment 2). We show not only that body expression is perceived categorically, but also that both adaptation to affective body expressions and concurrent presentation of vocal affective information can shift the categorical boundary between body expressions, specifically for angry body expressions. Overall, our findings provide significant new insights into emotional body categorisation, which may prove important for a deeper understanding of body expression perception in everyday social situations.
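Categorical boundary shifts of the kind reported here are typically quantified by fitting a psychometric function to categorization responses along the morph continuum and comparing its midpoint across conditions. A sketch under that assumption, with invented response proportions:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Psychometric function: proportion of 'angry' responses; x0 is the boundary."""
    return 1 / (1 + np.exp(-k * (x - x0)))

morph = np.linspace(0, 1, 7)  # 0 = fully happy body, 1 = fully angry body
p_baseline = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.95, 0.99])
p_adapted = np.array([0.01, 0.03, 0.10, 0.30, 0.70, 0.90, 0.98])  # after anger adaptation

(b0, _), _ = curve_fit(logistic, morph, p_baseline, p0=[0.5, 10])
(b1, _), _ = curve_fit(logistic, morph, p_adapted, p0=[0.5, 10])
print(f"boundary shift: {b1 - b0:+.2f} morph units toward 'angry'")
```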
Affiliation(s)
- Rebecca Watson
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV, Maastricht, The Netherlands
- Beatrice de Gelder
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV, Maastricht, The Netherlands.
11
Gao C, Wedell DH, Green JJ, Jia X, Mao X, Guo C, Shinkareva SV. Temporal dynamics of audiovisual affective processing. Biol Psychol 2018; 139:59-72. [DOI: 10.1016/j.biopsycho.2018.10.001]
12
Cao L, Xu J, Yang X, Li X, Liu B. Abstract Representations of Emotions Perceived From the Face, Body, and Whole-Person Expressions in the Left Postcentral Gyrus. Front Hum Neurosci 2018; 12:419. [PMID: 30405375] [PMCID: PMC6200969] [DOI: 10.3389/fnhum.2018.00419]
Abstract
Emotions can be perceived from the face, the body, and the whole person, yet previous studies on abstract representations of emotions have focused only on emotions of the face and body. It remains unclear whether emotions can be represented at an abstract level in specific brain regions regardless of all three sensory cues. In this study, we used representational similarity analysis (RSA) to test the hypothesis that emotion category is independent of the three stimulus types and can be decoded from the activity patterns elicited by different emotions. Functional magnetic resonance imaging (fMRI) data were collected while participants classified emotions (angry, fearful, and happy) expressed in videos of faces, bodies, and whole persons. An abstract emotion model was defined to estimate the neural representational structure in a whole-brain RSA; it assumed that neural patterns were significantly correlated within emotions, ignoring stimulus type, but uncorrelated between emotions. A neural representational dissimilarity matrix (RDM) for each voxel was then compared to the abstract emotion model to examine whether specific clusters identified an abstract representation of emotions that generalized across stimulus types. Significantly positive correlations between neural RDMs and the model indicated that the abstract representation of emotions was successfully captured by the representational space of specific clusters. The whole-brain RSA revealed an emotion-specific but stimulus-category-independent neural representation in the left postcentral gyrus, left inferior parietal lobe (IPL), and right superior temporal sulcus (STS). Further cluster-based MVPA with cross-modal classification revealed that only the left postcentral gyrus could distinguish the three emotions for two stimulus-type pairs (face-body and body-whole person), and happy versus angry/fearful (i.e., positive versus negative) for all three stimulus-type pairs. Our study suggests that abstract representations of the three emotions (angry, fearful, and happy) extend from face and body stimuli to whole-person stimuli, and the findings provide support for abstract representations of emotions in the left postcentral gyrus.
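The abstract emotion model described above can be sketched as a binary model RDM (dissimilar between emotions, similar within, ignoring stimulus type) that is rank-correlated with each searchlight's neural RDM. The condition counts and voxel patterns below are toy values, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

# 9 conditions: 3 emotions (angry/fearful/happy) x 3 types (face/body/whole person)
emotions = np.repeat([0, 1, 2], 3)
# model RDM: 0 within the same emotion, 1 between emotions, regardless of type
model = (emotions[:, None] != emotions[None, :]).astype(float)

rng = np.random.default_rng(4)
patterns = rng.standard_normal((9, 200)) + emotions[:, None]  # toy voxel patterns
neural = 1 - np.corrcoef(patterns)  # correlation-distance neural RDM

iu = np.triu_indices(9, k=1)
rho, p = spearmanr(model[iu], neural[iu])
print(f"model-neural RDM correlation: rho={rho:.2f}, p={p:.3f}")
```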
Affiliation(s)
- Linjing Cao
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Junhai Xu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Xiaoli Yang
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Xianglin Li
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, China
- Baolin Liu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China; State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology, Tsinghua University, Beijing, China
13
Garrido-Vásquez P, Pell MD, Paulmann S, Kotz SA. Dynamic Facial Expressions Prime the Processing of Emotional Prosody. Front Hum Neurosci 2018; 12:244. [PMID: 29946247] [PMCID: PMC6007283] [DOI: 10.3389/fnhum.2018.00244]
Abstract
Evidence suggests that emotion is represented supramodally in the human brain. Emotional facial expressions, which often precede vocally expressed emotion in real life, can modulate event-related potentials (N100 and P200) during emotional prosody processing. To investigate these cross-modal emotional interactions, two lines of research have been put forward: cross-modal integration and cross-modal priming. In cross-modal integration studies, visual and auditory channels are temporally aligned, while in priming studies they are presented consecutively. Here we used cross-modal emotional priming to study the interaction of dynamic visual and auditory emotional information. Specifically, we presented dynamic facial expressions (angry, happy, neutral) as primes and emotionally intoned pseudo-speech sentences (angry, happy) as targets. We were interested in how prime-target congruency would affect early auditory event-related potentials, i.e., the N100 and P200, in order to shed more light on how dynamic facial information is used in cross-modal emotional prediction. Results showed enhanced N100 amplitudes for incongruently primed compared to congruently and neutrally primed emotional prosody, while the latter two conditions did not differ significantly. However, N100 peak latency was significantly delayed in the neutral condition compared to the other two conditions. Source reconstruction revealed that the right parahippocampal gyrus was more strongly activated in incongruent than congruent trials in the N100 time window. No significant ERP effects were observed in the P200 range. Our results indicate that dynamic facial expressions influence vocal emotion processing at an early point in time, and that an emotional mismatch between a facial expression and its ensuing vocal emotional signal induces additional processing costs in the brain, potentially because the cross-modal emotional prediction mechanism is violated in the case of prime-target incongruency.
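As a sketch of the ERP measures involved, the N100 peak amplitude and latency can be extracted as the most negative deflection of the averaged waveform in an early search window. The 80-150 ms window, sampling rate, and simulated waveform below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

fs = 500  # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.5, 1 / fs)  # epoch from -100 to 500 ms

def n100_peak(erp, times, tmin=0.08, tmax=0.15):
    """Return N100 peak amplitude and latency: the most negative
    deflection within the 80-150 ms search window."""
    win = (times >= tmin) & (times <= tmax)
    idx = np.argmin(erp[win])
    return erp[win][idx], times[win][idx]

# toy ERP: a negative deflection around 110 ms plus noise (values in volts)
rng = np.random.default_rng(5)
erp = (-3e-6 * np.exp(-((times - 0.11) ** 2) / (2 * 0.01 ** 2))
       + 1e-7 * rng.standard_normal(times.size))
amp, lat = n100_peak(erp, times)
print(f"N100: {amp * 1e6:.1f} uV at {lat * 1000:.0f} ms")
```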
Affiliation(s)
- Patricia Garrido-Vásquez
- Department of Experimental Psychology and Cognitive Science, Justus Liebig University Giessen, Giessen, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Marc D Pell
- School of Communication Sciences and Disorders, McGill University, Montreal, QC, Canada
- Silke Paulmann
- Department of Psychology, University of Essex, Colchester, United Kingdom
- Sonja A Kotz
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Neuropsychology and Psychopharmacology, University of Maastricht, Maastricht, Netherlands
14
Bailey HD, Mullaney AB, Gibney KD, Kwakye LD. Audiovisual Integration Varies With Target and Environment Richness in Immersive Virtual Reality. Multisens Res 2018; 31:689-713. [PMID: 31264608] [DOI: 10.1163/22134808-20181301]
Abstract
We are continually bombarded by information arriving at each of our senses; however, the brain seems to effortlessly integrate this separate information into a unified percept. Although multisensory integration has been researched extensively using simple computer tasks and stimuli, much less is known about how multisensory integration functions in real-world contexts. Additionally, several recent studies have demonstrated that multisensory integration varies tremendously across naturalistic stimuli. Virtual reality can be used to study multisensory integration in realistic settings because it combines realism with precise control over the environment and stimulus presentation. In the current study, we investigated whether multisensory integration, as measured by the redundant signals effect (RSE), is observable in naturalistic environments using virtual reality, and whether it differs as a function of target and/or environment cue-richness. Participants detected auditory, visual, and audiovisual targets that varied in cue-richness within three distinct virtual worlds that also varied in cue-richness. We demonstrated integrative effects in each environment-by-target pairing and further showed a modest effect of target cue-richness on multisensory integration, but only in the cue-rich environment. Our study is the first to definitively show that minimal and more naturalistic tasks elicit comparable redundant signals effects. Our results also suggest that multisensory integration may function differently depending on the features of the environment. These findings have important implications for the design of virtual multisensory environments, which are currently used for training, educational, and entertainment purposes.
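The RSE and its race model test can be sketched as follows: multisensory responses exceed what a race between independent unisensory processes allows when the audiovisual RT distribution outruns the sum of the unisensory distributions (Miller's bound) at some point in time. The RT values and time grid below are simulated.

```python
import numpy as np

def ecdf(rts, grid):
    """Empirical cumulative RT distribution evaluated on a time grid."""
    return np.searchsorted(np.sort(rts), grid, side="right") / len(rts)

rng = np.random.default_rng(6)
rt_a = rng.normal(420, 60, 200)   # auditory-only RTs in ms (simulated)
rt_v = rng.normal(440, 60, 200)   # visual-only
rt_av = rng.normal(370, 55, 200)  # audiovisual

grid = np.linspace(250, 600, 50)
bound = np.minimum(ecdf(rt_a, grid) + ecdf(rt_v, grid), 1.0)  # race model bound
violation = ecdf(rt_av, grid) - bound
print(f"max violation: {violation.max():.3f}")  # > 0 implies integration beyond a race
```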
Affiliation(s)
- Kyla D Gibney
- Department of Neuroscience, Oberlin College, Oberlin, OH, USA
15
Wang X, Zhou X, Dai Q, Ji B, Feng Z. The Role of Motivation in Cognitive Reappraisal for Depressed Patients. Front Hum Neurosci 2017; 11:516. [PMID: 29163097] [PMCID: PMC5671608] [DOI: 10.3389/fnhum.2017.00516]
Abstract
Background: People engage in emotion regulation in service of motive goals (typically, to approach a desired emotional goal or avoid an undesired one). However, how motives (goals) in emotion regulation operate to shape the regulation of emotion is largely unknown. Furthermore, the modulatory role of motivation in the impaired reappraisal capacity and neural abnormalities typical of depressed patients is unclear. Our hypotheses were that (1) approach and avoidance motivation modulate emotion regulation and its underlying neural substrates; and (2) approach/avoidance motivation modulates the neural abnormalities of emotion regulation in depressed patients. Methods: Twelve drug-free depressed patients and fifteen matched healthy controls reappraised emotional pictures with approach/avoidance strategies and rated their emotional intensity during fMRI scans. Approach/avoidance motivation was measured using the Behavioral Inhibition System and Behavioral Activation System (BIS/BAS) Scales. We conducted whole-brain analyses and correlation analyses on regions of interest to identify alterations in regulatory prefrontal-amygdala circuits that were modulated by motivation. Results: Depressed patients had a higher level of BIS and lower levels of BAS-reward responsiveness and BAS-drive. BIS scores were positively correlated with depression severity. We found a main effect of motivation, as well as an interactive effect of motivation and group, on the neural correlates of emotion regulation. Hypoactivation of the IFG underlying group differences in motivation-related neural correlates during reappraisal may be partially explained by the interaction between group and reappraisal. Consistent with our prediction, dlPFC and vmPFC activation differed between groups and was modulated by motivation: avoidance motivation in depressed patients predicted right dlPFC activation while decreasing positive emotion, whereas approach motivation in healthy individuals predicted right vmPFC activation while decreasing negative emotion. Notably, striatal regions were implicated in the main effect of motivation (lentiform nucleus) and in the interaction between motivation and group (midbrain). Conclusions: Our findings highlight the modulatory role of approach and avoidance motivation in cognitive reappraisal, which is dysfunctional in depressed patients. These results could inform cognitive behavioral therapy directed at remedying motivational deficits in the cognitive regulation of emotion.
Affiliation(s)
- Xiaoxia Wang
- Department of Basic Psychology, School of Psychology, Third Military Medical University, Chongqing, China
- Xiaoyan Zhou
- Department of Clinical Psychology, Chongqing City Mental Health Center, Chongqing, China
- Qin Dai
- Department of Psychological Nursing, School of Nursing, Third Military Medical University, Chongqing, China
- Bing Ji
- Department of Radiology, Southwest Hospital, Third Military Medical University, Chongqing, China
- Zhengzhi Feng
- Department of Behavioral Medicine, School of Psychology, Third Military Medical University, Chongqing, China
16
Convergence of semantics and emotional expression within the IFG pars orbitalis. Neuroimage 2017; 156:240-248. [DOI: 10.1016/j.neuroimage.2017.04.020]
17
Gibney KD, Aligbe E, Eggleston BA, Nunes SR, Kerkhoff WG, Dean CL, Kwakye LD. Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity. Front Integr Neurosci 2017; 11:1. [PMID: 28163675] [PMCID: PMC5247431] [DOI: 10.3389/fnint.2017.00001]
Abstract
The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and a McGurk task under various perceptual load conditions: no load (multisensory task with visual distractors present), low load (multisensory task while detecting the presence of a yellow letter among the visual distractors), and high load (multisensory task while detecting the presence of a number among the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than on unisensory trials. However, the increase in multisensory response times violated the race model under the no- and low-perceptual-load conditions only. Additionally, a geometric measure of Miller's inequality showed a decrease in multisensory integration in the speeded detection task with increasing perceptual load. Surprisingly, participants who did not show integration in the no-load condition showed diverging changes with increasing load: no change in integration for the McGurk task, but increases in integration for the detection task. These results indicate that attention plays a crucial role in multisensory integration for both highly complex and simple multisensory tasks, and that attention may interact differently with multisensory processing in individuals who do not strongly integrate multisensory information.
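The "geometric measure of Miller's inequality" mentioned above can be understood as the area of the positive part of the race-model violation, accumulated over time. A sketch with simulated RTs; the wrapper name and all values are hypothetical, not the authors' implementation.

```python
import numpy as np

def miller_violation_area(rt_a, rt_v, rt_av, grid):
    """Geometric race-model measure: area where the audiovisual CDF
    exceeds the summed unisensory CDFs (Miller's bound)."""
    def ecdf(rts):
        return np.searchsorted(np.sort(rts), grid, side="right") / len(rts)
    bound = np.minimum(ecdf(rt_a) + ecdf(rt_v), 1.0)
    excess = np.clip(ecdf(rt_av) - bound, 0, None)  # keep positive violations only
    return np.sum(excess) * (grid[1] - grid[0])     # rectangle-rule integral over time

rng = np.random.default_rng(7)
area = miller_violation_area(rng.normal(420, 60, 200), rng.normal(440, 60, 200),
                             rng.normal(360, 50, 200), np.linspace(250, 600, 200))
print(f"violation area: {area:.1f} ms")  # larger area = stronger integration
```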
Affiliation(s)
- Kyla D Gibney
- Department of Neuroscience, Oberlin College, Oberlin, OH, USA
- Sarah R Nunes
- Department of Neuroscience, Oberlin College, Oberlin, OH, USA
- Leslie D Kwakye
- Department of Neuroscience, Oberlin College, Oberlin, OH, USA
18
Kokinous J, Tavano A, Kotz SA, Schröger E. Perceptual integration of faces and voices depends on the interaction of emotional content and spatial frequency. Biol Psychol 2017; 123:155-165. [DOI: 10.1016/j.biopsycho.2016.12.007]
19
Trapp S, Kotz SA. Predicting Affective Information - An Evaluation of Repetition Suppression Effects. Front Psychol 2016; 7:1365. [PMID: 27667980] [PMCID: PMC5016514] [DOI: 10.3389/fpsyg.2016.01365]
Abstract
Both theoretical proposals and empirical studies suggest that the brain interprets sensory input based on expectations to mitigate computational burden. For social beings, however, much of this sensory input is affectively loaded: the smile of a partner, the critical voice of a boss, or the welcoming gesture of a friend. Given that affective information is highly complex and often ambiguous, building up expectations of upcoming affective sensory input may greatly contribute to its rapid and efficient processing. This review points to the role of affective information in the context of the 'predictive brain'. It focuses in particular on repetition suppression (RS) effects, which have recently been linked to prediction processes. The findings are interpreted as evidence for more pronounced prediction processes with affective material. Importantly, it is argued that bottom-up attention inflates the neural RS effect; because affective stimuli tend to attract more bottom-up attention, the magnitude of RS effects for this information is particularly obscured. Finally, anxiety disorders, such as social phobia, are briefly discussed as manifestations of altered affective prediction.
Affiliation(s)
- Sabrina Trapp
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, Israel
- Sonja A Kotz
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
20
Symons AE, El-Deredy W, Schwartze M, Kotz SA. The Functional Role of Neural Oscillations in Non-Verbal Emotional Communication. Front Hum Neurosci 2016; 10:239. [PMID: 27252638] [PMCID: PMC4879141] [DOI: 10.3389/fnhum.2016.00239]
Abstract
Effective interpersonal communication depends on the ability to perceive and interpret nonverbal emotional expressions from multiple sensory modalities. Current theoretical models propose that visual and auditory emotion perception involves a network of brain regions including the primary sensory cortices, the superior temporal sulcus (STS), and the orbitofrontal cortex (OFC). However, relatively little is known about how the dynamic interplay between these regions gives rise to the perception of emotions. In recent years, there has been increasing recognition of the importance of neural oscillations in mediating neural communication within and between functional neural networks. Here we review studies investigating changes in oscillatory activity during the perception of visual, auditory, and audiovisual emotional expressions, and aim to characterize the functional role of neural oscillations in nonverbal emotion perception. Findings from the reviewed literature suggest that theta-band oscillations most consistently differentiate between emotional and neutral expressions. While early theta synchronization appears to reflect the initial encoding of emotionally salient sensory information, later fronto-central theta synchronization may reflect the further integration of sensory information with internal representations. Additionally, gamma synchronization reflects facilitated sensory binding of emotional expressions within regions such as the OFC, STS, and, potentially, the amygdala. However, the evidence is more ambiguous for oscillations in the alpha and beta frequencies, which vary as a function of modality (or modalities), the presence or absence of predictive information, and attentional or task demands. Thus, the synchronization of neural oscillations within specific frequency bands mediates the rapid detection, integration, and evaluation of emotional expressions. Moreover, the functional coupling of oscillatory activity across multiple frequency bands supports a predictive coding model of multisensory emotion perception in which emotional facial and body expressions facilitate the processing of emotional vocalizations.
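As a sketch of the band-specific measures this review surveys, band-limited power can be computed by bandpass filtering and taking the squared Hilbert envelope. The filter order, band edges, and toy signal are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power(signal, fs, lo, hi, order=4):
    """Band-limited instantaneous power via bandpass filter + Hilbert envelope."""
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    analytic = hilbert(filtfilt(b, a, signal))
    return np.abs(analytic) ** 2

fs = 250  # sampling rate in Hz (assumed)
rng = np.random.default_rng(8)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)  # toy signal

theta = band_power(eeg, fs, 5, 7)    # theta band, as discussed in the review
gamma = band_power(eeg, fs, 30, 45)  # lower gamma, illustrative bounds
print(f"theta/gamma power ratio: {theta.mean() / gamma.mean():.1f}")
```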
Affiliation(s)
- Ashley E. Symons
- School of Psychological Sciences, University of Manchester, Manchester, UK
- Wael El-Deredy
- School of Psychological Sciences, University of Manchester, Manchester, UK; School of Biomedical Engineering, Universidad de Valparaiso, Valparaiso, Chile
- Michael Schwartze
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, Netherlands
- Sonja A. Kotz
- School of Psychological Sciences, University of Manchester, Manchester, UK; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, Netherlands