1
Kiyokawa H, Hayashi R. Commonalities and variations in emotion representation across modalities and brain regions. Sci Rep 2024; 14:20992. [PMID: 39251743] [PMCID: PMC11385795] [DOI: 10.1038/s41598-024-71690-y]
Abstract
Humans express emotions through various modalities such as facial expressions and natural language. However, the relationships between emotions expressed through different modalities and their correlations with neural activities remain uncertain. Here, we aimed to unveil some of these uncertainties by investigating the similarity of emotion representations across modalities and brain regions. First, we represented various emotion categories as multi-dimensional vectors derived from visual (face), linguistic, and visio-linguistic data, and used representational similarity analysis to compare these modalities. Second, we examined the linear transferability of emotion representation from other modalities to the visual modality. Third, we compared the representational structure derived in the first step with those from brain activities across 360 regions. Our findings revealed that emotion representations share commonalities across modalities with modality-type-dependent variations, and that they can be linearly mapped from other modalities to the visual modality. Additionally, unimodal emotion representations showed relatively higher similarity to specific brain regions, while the multi-modal emotion representation was most similar to representations spanning the entire brain. These findings suggest that emotional experiences are represented differently across various brain regions, with varying degrees of similarity to different modality types, and that they may be multi-modally conveyable in visual and linguistic domains.
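As a schematic illustration of the representational similarity analysis (RSA) step this abstract describes, the sketch below compares emotion representations from two modalities; the vector dimensions and toy data are hypothetical stand-ins, not the authors' actual features.

```python
# A minimal RSA sketch: build a representational dissimilarity matrix (RDM)
# per modality, then compare RDMs with a rank correlation. Toy data only.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_emotions = 27                                      # emotion categories (assumed)
visual = rng.standard_normal((n_emotions, 512))      # e.g., face-image embeddings
linguistic = rng.standard_normal((n_emotions, 300))  # e.g., word embeddings

def rdm(x):
    """Condensed RDM: pairwise correlation distance between category vectors."""
    return pdist(x, metric="correlation")

# Second-order similarity: Spearman correlation between the two RDMs.
rho, p = spearmanr(rdm(visual), rdm(linguistic))
print(f"RSA similarity (visual vs. linguistic): rho={rho:.3f}, p={p:.3g}")
```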
Affiliation(s)
- Hiroaki Kiyokawa
- Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki, Japan
- Graduate School of Science and Engineering, Saitama University, Saitama, Japan
- Ryusuke Hayashi
- Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki, Japan.
2
Park C, Kim J. Taste the music: Modality-general representation of affective states derived from auditory and gustatory stimuli. Cognition 2024; 249:105830. [PMID: 38810426] [DOI: 10.1016/j.cognition.2024.105830]
Abstract
Prior studies have extensively examined modality-general representation of affect across various sensory modalities, particularly focusing on auditory and visual stimuli. However, little research has explored the modality-general representation of affect between gustatory and other sensory modalities. This study aimed to investigate whether the affective responses induced by tastes and musical pieces could be predicted within and across modalities. For each modality, eight stimuli were chosen based on four basic taste conditions (sweet, bitter, sour, and salty). Participants rated their responses to each stimulus using both taste and emotion scales. Multivariate analyses, including multidimensional scaling and classification, were performed. The findings revealed that auditory and gustatory stimuli in the sweet category were associated with positive valence, whereas those from the other taste categories were linked to negative valence. Additionally, auditory and gustatory stimuli in the sour taste category were linked to high arousal, whereas stimuli in the bitter taste category were associated with low arousal. This study revealed the potential mapping of gustatory and auditory stimuli onto core affect space in everyday experiences. Moreover, it demonstrated that emotions evoked by taste and music could be predicted across modalities, supporting a modality-general representation of affect.
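A minimal sketch of the cross-modal prediction idea summarized above, assuming hypothetical rating matrices and four taste-category labels; the actual scales and classifier used in the paper may differ.

```python
# Train a classifier on one modality's affect ratings and test on the other:
# above-chance transfer would indicate a modality-general affect representation.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_trials, n_scales = 80, 10   # trials per modality x rating scales (assumed)
music = rng.standard_normal((n_trials, n_scales))   # ratings of musical pieces
taste = rng.standard_normal((n_trials, n_scales))   # ratings of taste stimuli
labels = rng.integers(0, 4, n_trials)               # 4 taste categories

clf = SVC(kernel="linear").fit(music, labels)
cross_modal_acc = clf.score(taste, labels)
print(f"music -> taste decoding accuracy: {cross_modal_acc:.2f} (chance = 0.25)")
```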
Affiliation(s)
- Chaery Park
- Jeonbuk National University, 567 Baekje-daero, Deokjin-gu, Jeonju-si, Jeollabuk-do 54896, Republic of Korea
- Jongwan Kim
- Jeonbuk National University, 567 Baekje-daero, Deokjin-gu, Jeonju-si, Jeollabuk-do 54896, Republic of Korea.
3
Lee J, Park S. Multi-modal Representation of the Size of Space in the Human Brain. J Cogn Neurosci 2024; 36:340-361. [PMID: 38010320] [DOI: 10.1162/jocn_a_02092]
Abstract
To estimate the size of an indoor space, we must analyze the visual boundaries that limit the spatial extent and acoustic cues from reflected interior surfaces. We used fMRI to examine how the brain processes the geometric size of indoor scenes when various types of sensory cues are presented individually or together. Specifically, we asked whether the size of space is represented in a modality-specific way or in an integrative way that combines multimodal cues. In a block-design study, images or sounds that depict small- and large-sized indoor spaces were presented. Visual stimuli were real-world pictures of empty spaces that were small or large. Auditory stimuli were sounds convolved with different reverberations. By using a multivoxel pattern classifier, we asked whether the two sizes of space can be classified in visual, auditory, and visual-auditory combined conditions. We identified both sensory-specific and multimodal representations of the size of space. To further investigate the nature of the multimodal region, we specifically examined whether it contained multimodal information in a coexistent or integrated form. We found that angular gyrus and the right medial frontal gyrus had modality-integrated representation, displaying sensitivity to the match in the spatial size information conveyed through image and sound. Background functional connectivity analysis further demonstrated that the connection between sensory-specific regions and modality-integrated regions increases in the multimodal condition compared with single modality conditions. Our results suggest that spatial size perception relies on both sensory-specific and multimodal representations, as well as their interplay during multimodal perception.
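As a rough illustration of the multivoxel pattern classification described above, here is a cross-validated decoder on hypothetical ROI patterns; the data shapes and classifier choice are assumptions, not the study's pipeline.

```python
# Cross-validated decoding of spatial size (small vs. large) from toy
# multivoxel patterns within one condition (visual, auditory, or audiovisual).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_blocks, n_voxels = 40, 200
patterns = rng.standard_normal((n_blocks, n_voxels))  # ROI pattern per block
size_label = rng.integers(0, 2, n_blocks)             # 0 = small, 1 = large space

acc = cross_val_score(LogisticRegression(max_iter=1000),
                      patterns, size_label, cv=5).mean()
print(f"small vs. large decoding accuracy: {acc:.2f} (chance = 0.50)")
```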
4
Vaessen M, Van der Heijden K, de Gelder B. Modality-specific brain representations during automatic processing of face, voice and body expressions. Front Neurosci 2023; 17:1132088. [PMID: 37869514] [PMCID: PMC10587395] [DOI: 10.3389/fnins.2023.1132088]
Abstract
A central question in affective science, and one that is relevant for its clinical applications, is how emotions provided by different stimuli are experienced and represented in the brain. Following the traditional view, emotional signals are recognized with the help of emotion concepts that are typically used in descriptions of mental states and emotional experiences, irrespective of the sensory modality. This perspective motivated the search for abstract representations of emotions in the brain, shared across variations in stimulus type (face, body, voice) and sensory origin (visual, auditory). On the other hand, emotion signals, for example an aggressive gesture, trigger rapid automatic behavioral responses, and this may take place before or independently of a full abstract representation of the emotion. This pleads in favor of specific emotion signals that may trigger rapid adaptive behavior only by mobilizing modality- and stimulus-specific brain representations, without relying on higher-order abstract emotion categories. To test this hypothesis, we presented participants with naturalistic dynamic emotion expressions of the face, the whole body, or the voice in a functional magnetic resonance imaging (fMRI) study. To focus on automatic emotion processing and sidestep explicit concept-based emotion recognition, participants performed an unrelated target detection task presented in a different sensory modality than the stimulus. By using multivariate analyses to assess neural activity patterns in response to the different stimulus types, we reveal a stimulus-category- and modality-specific brain organization of affective signals. Our findings are consistent with the notion that under ecological conditions, emotion expressions of the face, body and voice may have different functional roles in triggering rapid adaptive behavior, even if, when viewed from an abstract conceptual vantage point, they may all exemplify the same emotion. This has implications for a neuroethologically grounded emotion research program that should start from detailed behavioral observations of how face, body, and voice expressions function in naturalistic contexts.
5
Lee J, Park S. Multi-modal representation of the size of space in the human brain. bioRxiv [Preprint] 2023:2023.07.24.550343. [PMID: 37546991] [PMCID: PMC10402083] [DOI: 10.1101/2023.07.24.550343]
Abstract
To estimate the size of an indoor space, we must analyze the visual boundaries that limit the spatial extent and acoustic cues from reflected interior surfaces. We used fMRI to examine how the brain processes the geometric size of indoor scenes when various types of sensory cues are presented individually or together. Specifically, we asked whether the size of space is represented in a modality-specific way or in an integrative way that combines multimodal cues. In a block-design study, images or sounds that depict small- and large-sized indoor spaces were presented. Visual stimuli were real-world pictures of empty spaces that were small or large. Auditory stimuli were sounds convolved with different reverberations. By using a multi-voxel pattern classifier, we asked whether the two sizes of space can be classified in visual, auditory, and visual-auditory combined conditions. We identified both sensory-specific and multimodal representations of the size of space. To further investigate the nature of the multimodal region, we specifically examined whether it contained multimodal information in a coexistent or integrated form. We found that the angular gyrus (AG) and the right inferior frontal gyrus (IFG) pars opercularis had modality-integrated representation, displaying sensitivity to the match in the spatial size information conveyed through image and sound. Background functional connectivity analysis further demonstrated that the connection between sensory-specific regions and modality-integrated regions increases in the multimodal condition compared to single-modality conditions. Our results suggest that spatial size perception relies on both sensory-specific and multimodal representations, as well as their interplay during multimodal perception.
Affiliation(s)
- Jaeeun Lee
- Department of Psychology, University of Minnesota, Minneapolis, MN
- Soojin Park
- Department of Psychology, Yonsei University, Seoul, South Korea
6
The EEG microstate representation of discrete emotions. Int J Psychophysiol 2023; 186:33-41. [PMID: 36773887] [DOI: 10.1016/j.ijpsycho.2023.02.002]
Abstract
Understanding how human emotions are represented in our brain is a central question in the field of affective neuroscience. While previous studies have mainly adopted a modular and static perspective on the neural representation of emotions, emerging research suggests that emotions may rely on a distributed and dynamic representation. The present study aimed to explore the EEG microstate representations of nine discrete emotions (Anger, Disgust, Fear, Sadness, Neutral, Amusement, Inspiration, Joy and Tenderness). Seventy-eight participants were recruited to watch emotion-eliciting videos while their EEG was recorded. Multivariate analysis revealed that different emotions had distinct EEG microstate features. Using the EEG microstate features in the Neutral condition as the reference, the coverage of microstate C, the duration of microstate C, and the occurrence of microstate B were found to be the top-contributing features for the discrete positive and negative emotions. The emotions of Disgust, Fear and Joy were found to be most effectively represented by EEG microstates. The present study provides the first evidence of EEG microstate representations for discrete emotions, highlighting a whole-brain, dynamic representation of human emotions.
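The microstate features named above (coverage, duration, occurrence) can be computed from a sequence of per-sample microstate labels; the sketch below uses a hypothetical label sequence and sampling rate, not the study's data.

```python
# Toy computation of microstate coverage, mean duration, and occurrence
# from a label sequence (one microstate label A-D per EEG sample).
import numpy as np

fs = 250                                                 # sampling rate in Hz (assumed)
labels = np.random.default_rng(3).integers(0, 4, 5000)   # microstates A-D per sample

def microstate_features(labels, state, fs):
    """Coverage (fraction of time), mean duration (s), occurrence (per s)."""
    runs, run_len = [], 0
    for l in labels:
        if l == state:
            run_len += 1
        elif run_len:
            runs.append(run_len)
            run_len = 0
    if run_len:
        runs.append(run_len)
    coverage = (labels == state).mean()
    duration = np.mean(runs) / fs if runs else 0.0
    occurrence = len(runs) / (labels.size / fs)
    return coverage, duration, occurrence

cov, dur, occ = microstate_features(labels, state=2, fs=fs)  # e.g., microstate C
print(f"coverage={cov:.2f}, duration={dur*1000:.0f} ms, occurrence={occ:.2f}/s")
```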
7
Scheliga S, Kellermann T, Lampert A, Rolke R, Spehr M, Habel U. Neural correlates of multisensory integration in the human brain: an ALE meta-analysis. Rev Neurosci 2023; 34:223-245. [PMID: 36084305] [DOI: 10.1515/revneuro-2022-0065]
Abstract
Previous fMRI research identified the superior temporal sulcus as a central integration area for audiovisual stimuli. However, less is known about a general multisensory integration network across the senses. Therefore, we conducted an activation likelihood estimation (ALE) meta-analysis with multiple sensory modalities to identify a common brain network. We included 49 studies covering all Aristotelian senses, i.e., auditory, visual, tactile, gustatory, and olfactory stimuli. The analysis revealed significant activation in the bilateral superior temporal gyrus, middle temporal gyrus, thalamus, right insula, and left inferior frontal gyrus. We assume these regions to be part of a general multisensory integration network comprising different functional roles. Here, the thalamus operates as a first subcortical relay, projecting sensory information to higher cortical integration centers in the superior temporal gyrus/sulcus, while conflict-processing brain regions such as the insula and inferior frontal gyrus facilitate the integration of incongruent information. We additionally performed meta-analytic connectivity modelling and found that each brain region showed co-activations within the identified multisensory integration network. Therefore, by including multiple sensory modalities in our meta-analysis, the results may provide evidence for a common brain network that supports different functional roles for multisensory integration.
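To make the ALE idea concrete, here is a toy sketch in which each study's reported foci are blurred with a Gaussian and the per-study maps are combined as a probabilistic union across studies; the grid size, smoothing width, and foci are invented for illustration and omit the null-distribution thresholding a real ALE uses.

```python
# Toy ALE: per-study modeled-activation maps from Gaussian-blurred foci,
# combined voxelwise as a probabilistic union across studies.
import numpy as np
from scipy.ndimage import gaussian_filter

shape = (30, 30, 30)                                          # toy brain grid (voxels)
studies_foci = [[(10, 12, 15)], [(11, 13, 15), (25, 5, 8)]]   # per-study foci

def modeled_activation(foci, shape, sigma=2.0):
    """Per-study map: Gaussian-blurred foci, normalized and capped at 1."""
    m = np.zeros(shape)
    for f in foci:
        m[f] = 1.0
    m = gaussian_filter(m, sigma)
    return np.clip(m / m.max(), 0, 1)

maps = [modeled_activation(f, shape) for f in studies_foci]
ale = 1.0 - np.prod([1.0 - m for m in maps], axis=0)          # union of probabilities
print(f"peak ALE value: {ale.max():.3f}")
```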
Affiliation(s)
- Sebastian Scheliga
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Thilo Kellermann
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany; JARA-Institute Brain Structure Function Relationship, Pauwelsstraße 30, 52074 Aachen, Germany
- Angelika Lampert
- Institute of Physiology, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Roman Rolke
- Department of Palliative Medicine, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Marc Spehr
- Department of Chemosensation, RWTH Aachen University, Institute for Biology, Worringerweg 3, 52074 Aachen, Germany
- Ute Habel
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany; JARA-Institute Brain Structure Function Relationship, Pauwelsstraße 30, 52074 Aachen, Germany
8
Gan S, Li W. Aberrant neural correlates of multisensory processing of audiovisual social cues related to social anxiety: An electrophysiological study. Front Psychiatry 2023; 14:1020812. [PMID: 36761870] [PMCID: PMC9902659] [DOI: 10.3389/fpsyt.2023.1020812]
Abstract
BACKGROUND Social anxiety disorder (SAD) is characterized by abnormal fear of social cues. Although unisensory processing of social stimuli associated with social anxiety (SA) has been well described, how multisensory processing relates to SA is still open to clarification. Using electroencephalography (EEG), we investigated the neural correlates of multisensory processing and the related temporal dynamics in SAD. METHODS Twenty-five SAD participants and 23 healthy control (HC) participants were presented with angry and neutral faces, voices, and their combinations with congruent emotions, and completed an emotional categorization task. RESULTS We found that face-voice combinations facilitated auditory processing in multiple stages, indicated by the acceleration of auditory N1 latency, attenuation of auditory N1 and P250 amplitudes, and a decrease in theta power. In addition, bimodal inputs elicited cross-modal integrative activity, indicated by the enhancement of visual P1, N170, and P3/LPP amplitudes and a superadditive response of P1 and P3/LPP. More importantly, excessively greater integrative activity (at P3/LPP amplitude) was found in SAD participants, and this abnormal integrative activity in both early and late temporal stages was related to a larger interpretation bias of miscategorizing neutral face-voice combinations as angry. CONCLUSION The study revealed that the neural correlates of multisensory processing were aberrant in SAD and related to the interpretation bias toward multimodal social cues in multiple processing stages. Our findings suggest that a deficit in multisensory processing might be an important factor in the psychopathology of SA.
Affiliation(s)
- Shuzhen Gan
- Shanghai Changning Mental Health Center, Shanghai, China; Shanghai Mental Health Center, Shanghai, China
- Weijun Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, Liaoning, China
9
Dong H, Li N, Fan L, Wei J, Xu J. Integrative interaction of emotional speech in audio-visual modality. Front Neurosci 2022; 16:797277. [PMID: 36440282] [PMCID: PMC9695733] [DOI: 10.3389/fnins.2022.797277]
Abstract
Emotional cues are expressed in many ways in our daily life, and the emotional information we receive is often represented by multiple modalities. Successful social interactions require a combination of multisensory cues to accurately determine the emotions of others. The integration mechanism of multimodal emotional information has been widely investigated: different brain-activity measurement methods have been used to locate the brain regions involved in the audio-visual integration of emotional information, mainly in the bilateral superior temporal regions. However, the methods adopted in these studies are relatively simple, and their materials rarely contain speech information. The integration mechanism of emotional speech in the human brain still needs further examination. In this paper, a functional magnetic resonance imaging (fMRI) study was conducted using an event-related design to explore the audio-visual integration mechanism of emotional speech in the human brain, using dynamic facial expressions and emotional speech to express emotions of different valences. Representational similarity analysis (RSA) based on regions of interest (ROIs), whole-brain searchlight analysis, modality conjunction analysis, and supra-additive analysis were used to analyze and verify the role of relevant brain regions. Meanwhile, a weighted RSA method was used to evaluate the contribution of each candidate model to the best-fitted model of the ROIs. The results showed that only the left insula was detected by all methods, suggesting that the left insula plays an important role in the audio-visual integration of emotional speech. Whole-brain searchlight, modality conjunction, and supra-additive analyses together revealed that the bilateral middle temporal gyrus (MTG), right inferior parietal lobule, and bilateral precuneus might also be involved in the audio-visual integration of emotional speech.
Affiliation(s)
- Haibin Dong
- Tianjin Key Lab of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China
- Na Li
- Tianjin Key Lab of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China
- Lingzhong Fan
- Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Jianguo Wei
- Tianjin Key Lab of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China
- Junhai Xu
- Tianjin Key Lab of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China
- Correspondence: Junhai Xu
10
Correlates of individual voice and face preferential responses during resting state. Sci Rep 2022; 12:7117. [PMID: 35505233] [PMCID: PMC9065073] [DOI: 10.1038/s41598-022-11367-6]
Abstract
Human nonverbal social signals are transmitted to a large extent by vocal and facial cues. The prominent importance of these cues is reflected in specialized cerebral regions which preferentially respond to these stimuli, e.g. the temporal voice area (TVA) for human voices and the fusiform face area (FFA) for human faces. However, it has remained unknown whether there are respective specializations during resting state, i.e. in the absence of any cues, and if so, whether these representations share neural substrates across sensory modalities. In the present study, resting state functional connectivity (RSFC) as well as voice- and face-preferential activations were analysed from functional magnetic resonance imaging (fMRI) data sets of 60 healthy individuals. Data analysis comprised seed-based analyses using the TVA and FFA as regions of interest (ROIs) as well as multivoxel pattern analyses (MVPA). Using the face- and voice-preferential responses of the FFA and TVA as regressors, we identified several correlating clusters during resting state, spread across frontal, temporal, parietal and occipital regions. Using these regions as seeds, characteristic and distinct network patterns were apparent, with a predominantly convergent pattern for the bilateral TVAs whereas a largely divergent pattern was observed for the bilateral FFAs. One region in the anterior medial frontal cortex displayed a maximum of supramodal convergence of informative connectivity patterns reflecting voice- and face-preferential responses of both TVAs and the right FFA, pointing to shared neural resources in supramodal voice and face processing. The association of individual voice- and face-preferential neural activity with resting state connectivity patterns may support the perspective of a network function of the brain beyond an activation of specialized regions.
11
Saarimäki H, Glerean E, Smirnov D, Mynttinen H, Jääskeläinen IP, Sams M, Nummenmaa L. Classification of emotion categories based on functional connectivity patterns of the human brain. Neuroimage 2021; 247:118800. [PMID: 34896586] [PMCID: PMC8803541] [DOI: 10.1016/j.neuroimage.2021.118800]
Abstract
Neurophysiological and psychological models posit that emotions depend on connections across widespread corticolimbic circuits. While previous studies using pattern recognition on neuroimaging data have shown differences between various discrete emotions in brain activity patterns, less is known about the differences in functional connectivity. Thus, we employed multivariate pattern analysis on functional magnetic resonance imaging (fMRI) data (i) to develop a pipeline for applying pattern recognition to functional connectivity data, and (ii) to test whether connectivity patterns differ across emotion categories. Six emotions (anger, fear, disgust, happiness, sadness, and surprise) and a neutral state were induced in 16 participants using one-minute-long emotional narratives with natural prosody while brain activity was measured with fMRI. We computed emotion-wise connectivity matrices both for whole-brain connections and for 10 previously defined functionally connected brain subnetworks, and trained an across-participant classifier to categorize the emotional states based on whole-brain data and on each subnetwork separately. The whole-brain classifier performed above chance level for all emotions except sadness, suggesting that different emotions are characterized by differences in large-scale connectivity patterns. When focusing on the connectivity within the 10 subnetworks, classification was successful for all emotions within the default mode system. We thus show preliminary evidence for consistently different sustained functional connectivity patterns for instances of emotion categories, particularly within the default mode system.
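A compact sketch of the connectivity-pattern classification pipeline described above, using hypothetical region time series; the paper's across-participant (leave-one-out) scheme is simplified here to plain cross-validation.

```python
# Classify emotion conditions from vectorized functional connectivity matrices.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_samples, n_regions, n_tp = 96, 30, 120   # condition blocks x regions x timepoints

def fc_vector(ts):
    """Vectorized upper triangle of the region-by-region correlation matrix."""
    c = np.corrcoef(ts)
    iu = np.triu_indices_from(c, k=1)
    return c[iu]

X = np.stack([fc_vector(rng.standard_normal((n_regions, n_tp)))
              for _ in range(n_samples)])
y = rng.integers(0, 6, n_samples)          # six emotion conditions (toy labels)

acc = cross_val_score(LinearSVC(dual=False), X, y, cv=6).mean()
print(f"emotion decoding from FC patterns: {acc:.2f} (chance ~ 0.17)")
```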
Affiliation(s)
- Heini Saarimäki
- Faculty of Social Sciences, Tampere University, FI-33014 Tampere University, Tampere, Finland; Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, Espoo, Finland.
- Enrico Glerean
- Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, Espoo, Finland; Advanced Magnetic Imaging (AMI) Centre, Aalto NeuroImaging, School of Science, Aalto University, Espoo, Finland; Turku PET Centre and Department of Psychology, University of Turku, Turku, Finland; Department of Computer Science, School of Science, Aalto University, Espoo, Finland; International Laboratory of Social Neurobiology, Institute for Cognitive Neuroscience, HSE University, Moscow, Russian Federation
- Dmitry Smirnov
- Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, Espoo, Finland
- Henri Mynttinen
- Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, Espoo, Finland
- Iiro P Jääskeläinen
- Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, Espoo, Finland; International Laboratory of Social Neurobiology, Institute for Cognitive Neuroscience, HSE University, Moscow, Russian Federation
- Mikko Sams
- Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, Espoo, Finland; Department of Computer Science, School of Science, Aalto University, Espoo, Finland
- Lauri Nummenmaa
- Turku PET Centre and Department of Psychology, University of Turku, Turku, Finland
12
Liu P, Sutherland M, Pollick FE. Incongruence effects in cross-modal emotional processing in autistic traits: An fMRI study. Neuropsychologia 2021; 161:107997. [PMID: 34425144] [DOI: 10.1016/j.neuropsychologia.2021.107997]
Abstract
In everyday life, emotional information is often conveyed by both the face and the voice. Consequently, information presented by one source can alter the way in which information from the other source is perceived, leading to emotional incongruence. Here, we used functional magnetic resonance imaging (fMRI) to examine neural correlates of two different types of emotional incongruence in audiovisual processing, namely incongruence of emotion-valence and incongruence of emotion-presence. Participants were in two groups, one with a low Autism Quotient score (LAQ) and one with a high score (HAQ). Each participant experienced emotional (happy, fearful) or neutral faces or voices while concurrently being exposed to emotional (happy, fearful) or neutral voices or faces. They were instructed to attend to either the visual or the auditory track. The incongruence effect of emotion-valence was characterized by activation in a wide range of brain regions in both hemispheres, involving the inferior frontal gyrus, cuneus, superior temporal gyrus, and middle frontal gyrus. The incongruence effect of emotion-presence was characterized by activation in a set of temporal and occipital regions in both hemispheres, including the middle occipital gyrus, middle temporal gyrus and inferior temporal gyrus. In addition, the present study identified greater recruitment of the right inferior parietal lobule in perceiving audio-visual emotional expressions in HAQ individuals, as compared to LAQ individuals. Depending on whether the face or the voice was to be attended, different patterns of emotional incongruence were found between the two groups. Specifically, the HAQ group tended to show more incidental processing of visual information whilst the LAQ group tended to show more incidental processing of auditory information during crossmodal emotional incongruence decoding. These differences might be attributed to different attentional demands and different processing strategies between the two groups.
Affiliation(s)
- Peipei Liu
- Department of Psychology, Sun Yat-Sen University, Guangzhou, 510006, China; School of Psychology, University of Glasgow, Glasgow, G12 8QB, UK; School of Education, University of Glasgow, Glasgow, G3 6NH, UK
- Frank E Pollick
- School of Psychology, University of Glasgow, Glasgow, G12 8QB, UK.
13
Suslow T, Kersting A. Beyond Face and Voice: A Review of Alexithymia and Emotion Perception in Music, Odor, Taste, and Touch. Front Psychol 2021; 12:707599. [PMID: 34393944] [PMCID: PMC8362879] [DOI: 10.3389/fpsyg.2021.707599]
Abstract
Alexithymia is a clinically relevant personality trait characterized by deficits in recognizing and verbalizing one's emotions. It has been shown that alexithymia is related to an impaired perception of external emotional stimuli, but previous research has focused on emotion perception from faces and voices. Since sensory modalities represent rather distinct input channels, it is important to know whether alexithymia also affects emotion perception in other modalities and expressive domains. The objective of our review was to summarize and systematically assess the literature on the impact of alexithymia on the perception of emotional (or hedonic) stimuli in music, odor, taste, and touch. Eleven relevant studies were identified. On the basis of the reviewed research, it can be preliminarily concluded that alexithymia might be associated with deficits in the perception of primarily negative but also positive emotions in music, and with a reduced perception of aversive taste. The data available on olfaction and touch are inconsistent or ambiguous and do not allow conclusions to be drawn. Future investigations would benefit from a multimethod assessment of alexithymia and control of negative affect. Multimodal research seems necessary to advance our understanding of emotion perception deficits in alexithymia and to clarify the contribution of modality-specific and supramodal processing impairments.
Affiliation(s)
- Thomas Suslow
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig Medical Center, Leipzig, Germany
- Anette Kersting
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig Medical Center, Leipzig, Germany
14
Liang P, Jiang J, Chen J, Wei L. Affective Face Processing Modified by Different Tastes. Front Psychol 2021; 12:644704. [PMID: 33790842] [PMCID: PMC8006344] [DOI: 10.3389/fpsyg.2021.644704]
Abstract
Facial emotion recognition is something we use often in our daily lives. How does the brain process the face search? Can taste modify such a process? This study employed two tastes (sweet and acidic) to investigate the cross-modal interaction between taste and emotional face recognition. Behavioral responses (reaction times and correct-response ratios) and event-related potentials (ERPs) were used to analyze the interaction between taste and face processing. The behavioral data showed that when detecting a negative target face with a positive face as a distractor, participants performed the task faster with an acidic taste than with a sweet one. No interaction effect was observed in the correct-response-ratio analysis. In the ERP results, sweet and acidic tastes modulated the early (P1, N170) and mid-stage (early posterior negativity, EPN) components during the affective face search, whereas no interaction effect was observed in the late-stage (LPP) component. Our data extend the understanding of the cross-modal mechanism and provide electrophysiological evidence that affective facial processing can be influenced by sweet and acidic tastes.
Affiliation(s)
- Pei Liang
- Department of Psychology, Faculty of Education, Hubei University, Hubei, China; Brain and Cognition Research Center (BCRC), Faculty of Education, Hubei University, Hubei, China
- Jiayu Jiang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Liaoning, China; School of Fundamental Sciences, China Medical University, Shenyang, China
- Jie Chen
- Department of Psychology, Faculty of Education, Hubei University, Hubei, China
- Liuqing Wei
- Department of Psychology, Faculty of Education, Hubei University, Hubei, China; Brain and Cognition Research Center (BCRC), Faculty of Education, Hubei University, Hubei, China
15
Lu T, Yang J, Zhang X, Guo Z, Li S, Yang W, Chen Y, Wu N. Crossmodal Audiovisual Emotional Integration in Depression: An Event-Related Potential Study. Front Psychiatry 2021; 12:694665. [PMID: 34354614] [PMCID: PMC8329241] [DOI: 10.3389/fpsyt.2021.694665]
Abstract
Depression is related to deficits in emotion processing, and emotional processing is crossmodal. This article aims to investigate whether there is a difference in audiovisual emotional integration between a depression group and a normal group using a high-resolution event-related potential (ERP) technique. We designed a visual and/or auditory detection task. The behavioral results showed that responses to bimodal audiovisual stimuli were faster than those to unimodal auditory or visual stimuli, indicating that crossmodal integration of emotional information occurred in both the depression and normal groups. The ERP results showed that the N2 amplitude induced by sadness was significantly higher than that induced by happiness. Participants in the depression group showed larger N1 and P2 amplitudes, and the average amplitude of the LPP evoked in the frontocentral lobe in the depression group was significantly lower than that in the normal group. The results indicate that audiovisual emotional processing mechanisms differ between depressed and non-depressed college students.
Affiliation(s)
- Ting Lu
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Jingjing Yang
- School of Artificial Intelligence, Changchun University of Science and Technology, Changchun, China
- Xinyu Zhang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Zihan Guo
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Shengnan Li
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Weiping Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Ying Chen
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Nannan Wu
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
16
Shinkareva SV, Gao C, Wedell D. Audiovisual Representations of Valence: a Cross-study Perspective. Affect Sci 2020; 1:237-246. [DOI: 10.1007/s42761-020-00023-9]
17
Sonderfeld M, Mathiak K, Häring GS, Schmidt S, Habel U, Gur R, Klasen M. Supramodal neural networks support top-down processing of social signals. Hum Brain Mapp 2020; 42:676-689. [PMID: 33073911] [PMCID: PMC7814753] [DOI: 10.1002/hbm.25252]
Abstract
The perception of facial and vocal stimuli is driven by sensory input and cognitive top-down influences. Important top-down influences are attentional focus and supramodal social memory representations. The present study investigated the neural networks underlying these top-down processes and their role in social stimulus classification. In a neuroimaging study with 45 healthy participants, we employed a social adaptation of the Implicit Association Test. Attentional focus was modified via the classification task, which compared two domains of social perception (emotion and gender) using exactly the same stimulus set. Supramodal memory representations were addressed via congruency of the target categories for the classification of auditory and visual social stimuli (voices and faces). Functional magnetic resonance imaging identified attention-specific and supramodal networks. Emotion classification networks included bilateral anterior insula, pre-supplementary motor area, and right inferior frontal gyrus. They were purely attention-driven and independent of stimulus modality or congruency of the target concepts. No neural contribution of supramodal memory representations could be revealed for emotion classification. In contrast, gender classification relied on supramodal memory representations in rostral anterior cingulate and ventromedial prefrontal cortices. In summary, different domains of social perception involve different top-down processes which take place in clearly distinguishable neural networks.
Affiliation(s)
- Melina Sonderfeld
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany; JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Klaus Mathiak
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany; JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Gianna S Häring
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany; JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Sarah Schmidt
- Life & Brain - Institute for Experimental Epileptology and Cognition Research, Bonn, Germany
- Ute Habel
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany; JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Raquel Gur
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Martin Klasen
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany; JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany; Interdisciplinary Training Centre for Medical Education and Patient Safety - AIXTRA, Medical Faculty, RWTH Aachen University, Aachen, Germany
18
Gao C, Weber CE, Wedell DH, Shinkareva SV. An fMRI Study of Affective Congruence across Visual and Auditory Modalities. J Cogn Neurosci 2020; 32:1251-1262. [DOI: 10.1162/jocn_a_01553]
Abstract
Evaluating multisensory emotional content is a part of normal day-to-day interactions. We used fMRI to examine brain areas sensitive to congruence of audiovisual valence and their overlap with areas sensitive to valence. Twenty-one participants watched audiovisual clips with either congruent or incongruent valence across visual and auditory modalities. We showed that affective congruence versus incongruence across visual and auditory modalities is identifiable on a trial-by-trial basis across participants. Representations of affective congruence were widely distributed with some overlap with the areas sensitive to valence. Regions of overlap included bilateral superior temporal cortex and right pregenual anterior cingulate. The overlap between the regions identified here and in the emotion congruence literature lends support to the idea that valence may be a key determinant of affective congruence processing across a variety of discrete emotions.
19
Pegado F, Hendriks MH, Amelynck S, Daniels N, Steyaert J, Boets B, Op de Beeck H. Adults with high functioning autism display idiosyncratic behavioral patterns, neural representations and connectivity of the ‘Voice Area’ while judging the appropriateness of emotional vocal reactions. Cortex 2020; 125:90-108. [DOI: 10.1016/j.cortex.2019.11.008]
20
Chan HY, Smidts A, Schoots VC, Sanfey AG, Boksem MAS. Decoding dynamic affective responses to naturalistic videos with shared neural patterns. Neuroimage 2020; 216:116618. [PMID: 32036021] [DOI: 10.1016/j.neuroimage.2020.116618]
Abstract
This study explored the feasibility of using shared neural patterns from brief affective episodes (viewing affective pictures) to decode extended, dynamic affective sequences in a naturalistic experience (watching movie trailers). Twenty-eight participants viewed pictures from the International Affective Picture System (IAPS) and, in a separate session, watched various movie trailers. We first located voxels in bilateral lateral occipital cortex (LOC) responsive to affective picture categories by GLM analysis, then performed between-subject hyperalignment on the LOC voxels based on their responses during movie-trailer watching. After hyperalignment, we trained between-subject machine learning classifiers on the affective pictures, and used the classifiers to decode the affective states of an out-of-sample participant both during picture viewing and during movie-trailer watching. Within participants, neural classifiers identified the valence and arousal categories of pictures and tracked self-reported valence and arousal during video watching. In aggregate, neural classifiers produced valence and arousal time series that tracked the dynamic ratings of the movie trailers obtained from a separate sample. Our findings provide further support for the possibility of using pre-trained neural representations to decode dynamic affective responses during a naturalistic experience.
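As a toy illustration of between-subject hyperalignment, the sketch below learns an orthogonal Procrustes transform mapping one subject's response matrix into a reference space; the data, noise level, and single-transform setup are fabricated for demonstration and simplify the iterative procedure used in practice.

```python
# Hyperalignment sketch: learn an orthogonal map from a subject's voxel space
# to a common/reference space using responses to a shared stimulus (a movie).
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(5)
n_tp, n_voxels = 300, 50                   # movie timepoints x ROI voxels
reference = rng.standard_normal((n_tp, n_voxels))       # template / first subject
rotation = np.linalg.qr(rng.standard_normal((n_voxels, n_voxels)))[0]
subject = reference @ rotation + 0.1 * rng.standard_normal((n_tp, n_voxels))

# Fit the transform on the shared movie data, then it could be applied to the
# same subject's picture-viewing data before training cross-subject classifiers.
R, _ = orthogonal_procrustes(subject, reference)
aligned = subject @ R
r = np.corrcoef(aligned.ravel(), reference.ravel())[0, 1]
print(f"alignment correlation with reference: {r:.3f}")
```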
Affiliation(s)
- Hang-Yee Chan
- Department of Marketing Management, Rotterdam School of Management, Erasmus University Rotterdam, the Netherlands.
- Ale Smidts
- Department of Marketing Management, Rotterdam School of Management, Erasmus University Rotterdam, the Netherlands
- Vincent C Schoots
- Department of Marketing Management, Rotterdam School of Management, Erasmus University Rotterdam, the Netherlands
- Alan G Sanfey
- Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Maarten A S Boksem
- Department of Marketing Management, Rotterdam School of Management, Erasmus University Rotterdam, the Netherlands
21
Gao C, Weber CE, Shinkareva SV. The brain basis of audiovisual affective processing: Evidence from a coordinate-based activation likelihood estimation meta-analysis. Cortex 2019; 120:66-77. [DOI: 10.1016/j.cortex.2019.05.016]
22
Gu J, Cao L, Liu B. Modality-general representations of valences perceived from visual and auditory modalities. Neuroimage 2019; 203:116199. [PMID: 31536804] [DOI: 10.1016/j.neuroimage.2019.116199]
Abstract
Valence is a dimension of emotion and can be positive, negative, or neutral. Valence can be expressed through the visual and auditory modalities, and the valence in each modality can be conveyed by different types of stimuli (face, body, voice or music). This study focused on modality-general representations of valence, that is, valence information that is shared not only across the visual and auditory modalities but also across different types of stimuli within each modality. Functional magnetic resonance imaging (fMRI) data were collected while subjects made affective judgments on silent videos (face and body) and audio clips (voice and music). A searchlight analysis located four areas that might be sensitive to modality-general valence representations: the bilateral postcentral gyrus, left middle temporal gyrus (MTG) and right middle frontal gyrus (MFG). Further cross-modal classification based on multivoxel pattern analysis (MVPA) was performed as a validation analysis, which suggested that only the left postcentral gyrus could successfully distinguish the three valences (positive versus negative versus neutral) across the different types of stimuli (face, body, voice or music), while classification was also successful in the left MTG across the stimulus types of face and body. Univariate analysis further found valence-specific activation differences across stimulus types in the MTG. Our study shows that the left postcentral gyrus is informative for valence representation, and extends research on valence representation by demonstrating modality-general representations of valence not only across the visual and auditory modalities but also across different types of stimuli within each modality.
Affiliation(s)
- Jin Gu
- College of Intelligence and Computing, Tianjin University, Tianjin, 300350, PR China
- Linjing Cao
- College of Intelligence and Computing, Tianjin University, Tianjin, 300350, PR China
- Baolin Liu
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, PR China.
23
Rutherford HJ, Xu J, Worhunsky PD, Zhang R, Yip SW, Morie KP, Calhoun VD, Kim S, Strathearn L, Mayes LC, Potenza MN. Gradient theories of brain activation: A novel application to studying the parental brain. Curr Behav Neurosci Rep 2019; 6:119-125. [PMID: 32154064] [PMCID: PMC7062306] [DOI: 10.1007/s40473-019-00182-5]
Abstract
PURPOSE OF REVIEW Parental brain research primarily employs general-linear-model-based (GLM-based) analyses to assess blood-oxygenation-level-dependent responses to infant auditory and visual cues, reporting common responses in shared cortical and subcortical structures. However, this approach does not reveal intermixed neural substrates related to different sensory modalities. We consider this notion in studying the parental brain. RECENT FINDINGS Spatial independent component analysis (sICA) has been used to separate mixed source signals from overlapping functional networks. We explore relative differences between GLM-based analysis and sICA as applied to an fMRI dataset acquired from women while they listened to infant cries or viewed infant sad faces. SUMMARY There is growing appreciation for the value of moving beyond GLM-based analyses to consider brain functional organization as continuous, distributive, and overlapping gradients of neural substrates related to different sensory modalities. Preliminary findings suggest sICA can be applied to the study of the parental brain.
Affiliation(s)
- Helena J.V. Rutherford
- Child Study Center, Yale University School of Medicine, New Haven, CT 06510, United States
- Jiansong Xu
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06510, United States
- Patrick D. Worhunsky
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06510, United States
- Rubin Zhang
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06510, United States
- Sarah W. Yip
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06510, United States
- Kristen P. Morie
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06510, United States
- Vince D. Calhoun
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06510, United States
- The Mind Research Network, Albuquerque, NM 87131, United States
- Dept of Electrical and Computer Engineering, The University of New Mexico, Albuquerque, NM, 87131, United States
- Sohye Kim
- Department of Obstetrics and Gynecology, Baylor College of Medicine
- Department of Pediatrics and Menninger Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine
- Center for Reproductive Psychiatry, Pavilion for Women, Texas Children’s Hospital
- Lane Strathearn
- Department of Pediatrics and Menninger Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine
- Stead Family Department of Pediatrics, University of Iowa Carver College of Medicine
- Linda C. Mayes
- Child Study Center, Yale University School of Medicine, New Haven, CT 06510, United States
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06510, United States
- Marc N. Potenza
- Child Study Center, Yale University School of Medicine, New Haven, CT 06510, United States
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06510, United States
- Department of Neuroscience, Yale University School of Medicine, New Haven, CT 06510, United States
- The Connecticut Council on Problem Gambling, Wethersfield, CT 06109, United States
- The Connecticut Mental Health Center, New Haven, CT 06519, United States
24
Aryani A, Hsu CT, Jacobs AM. Affective iconic words benefit from additional sound-meaning integration in the left amygdala. Hum Brain Mapp 2019; 40:5289-5300. [PMID: 31444898] [PMCID: PMC6864889] [DOI: 10.1002/hbm.24772]
Abstract
Recent studies have shown that a similarity between the sound and meaning of a word (i.e., iconicity) can help one more readily access the meaning of that word, but the neural mechanisms underlying this beneficial role of iconicity in semantic processing remain largely unknown. In an fMRI study, we focused on the affective domain and examined whether affective iconic words (e.g., high arousal in both sound and meaning) activate additional brain regions that integrate emotional information from different domains (i.e., sound and meaning). In line with our hypothesis, affective iconic words, compared to their non-iconic counterparts, elicited additional BOLD responses in the left amygdala, known for its role in the multimodal representation of emotions. Functional connectivity analyses revealed that the observed amygdalar activity was modulated by an interaction of iconic condition and activations in two hubs representative of processing the sound (left superior temporal gyrus) and meaning (left inferior frontal gyrus) of words. These results provide a neural explanation for the facilitative role of iconicity in language processing and indicate that language users are sensitive to the interaction between the sound and meaning aspects of words, suggesting the existence of iconicity as a general property of human language.
Affiliation(s)
- Arash Aryani
- Department of Experimental and Neurocognitive Psychology, Freie Universität Berlin, Germany
- Chun-Ting Hsu
- Kokoro Research Center, Kyoto University, Kyoto, Japan
- Arthur M Jacobs
- Department of Experimental and Neurocognitive Psychology, Freie Universität Berlin, Germany; Centre for Cognitive Neuroscience Berlin (CCNB), Berlin, Germany
25
Domínguez-Borràs J, Guex R, Méndez-Bértolo C, Legendre G, Spinelli L, Moratti S, Frühholz S, Mégevand P, Arnal L, Strange B, Seeck M, Vuilleumier P. Human amygdala response to unisensory and multisensory emotion input: No evidence for superadditivity from intracranial recordings. Neuropsychologia 2019; 131:9-24. [PMID: 31158367] [DOI: 10.1016/j.neuropsychologia.2019.05.027]
Abstract
The amygdala is crucially implicated in processing emotional information from various sensory modalities. However, there is a dearth of knowledge concerning the integration and relative time-course of its responses across different channels, i.e., for auditory, visual, and audiovisual input. Functional neuroimaging data in humans point to a possible role of this region in the multimodal integration of emotional signals, but direct evidence for anatomical and temporal overlap of unisensory and multisensory-evoked responses in the amygdala is still lacking. We recorded event-related potentials (ERPs) and oscillatory activity from 9 amygdalae using intracranial electroencephalography (iEEG) in patients prior to epilepsy surgery, and compared electrophysiological responses to fearful, happy, or neutral stimuli presented either in voices alone, faces alone, or voices and faces delivered simultaneously. Results showed differential amygdala responses to fearful stimuli, in comparison to neutral, reaching significance 100-200 ms post-onset for auditory, visual and audiovisual stimuli. At later latencies, ∼400 ms post-onset, amygdala response to audiovisual information was also amplified in comparison to auditory or visual stimuli alone. Importantly, however, we found no evidence for either super- or subadditivity effects in any of the bimodal responses. These results suggest, first, that emotion processing in the amygdala occurs at globally similar early stages of perceptual processing for auditory, visual, and audiovisual inputs; second, that overall larger responses to multisensory information occur at later stages only; and third, that the underlying mechanisms of this multisensory gain may reflect a purely additive response to concomitant visual and auditory inputs. Our findings provide novel insights into emotion processing across the sensory pathways, and their convergence within the limbic system.
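The superadditivity question tested here reduces to comparing the bimodal response against the sum of the unimodal responses; below is a minimal sketch with simulated amplitudes, not the authors' analysis:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Hypothetical single-trial response amplitudes (e.g., mean iEEG
    # voltage in a post-onset window) for one amygdala contact.
    auditory = rng.normal(1.0, 0.5, 60)
    visual = rng.normal(1.2, 0.5, 60)
    audiovisual = rng.normal(2.1, 0.5, 60)

    # Additive model: the bimodal response is predicted by A + V.
    # Superadditivity would mean AV > A + V; subadditivity AV < A + V.
    predicted_sum = auditory.mean() + visual.mean()
    t, p = stats.ttest_1samp(audiovisual, predicted_sum)
    print(f"AV = {audiovisual.mean():.2f}, A+V = {predicted_sum:.2f}, "
          f"t = {t:.2f}, p = {p:.3f}")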
Affiliation(s)
- Judith Domínguez-Borràs
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland.
| | - Raphaël Guex
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland.
| | | | - Guillaume Legendre
- Campus Biotech, Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland.
| | - Laurent Spinelli
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland.
| | - Stephan Moratti
- Department of Experimental Psychology, Complutense University of Madrid, Spain; Laboratory for Clinical Neuroscience, Centre for Biomedical Technology, Universidad Politécnica de Madrid, Spain.
| | - Sascha Frühholz
- Department of Psychology, University of Zurich, Switzerland.
| | - Pierre Mégevand
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland.
| | - Luc Arnal
- Campus Biotech, Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland.
| | - Bryan Strange
- Laboratory for Clinical Neuroscience, Centre for Biomedical Technology, Universidad Politécnica de Madrid, Spain; Department of Neuroimaging, Alzheimer's Disease Research Centre, Reina Sofia-CIEN Foundation, Madrid, Spain.
| | - Margitta Seeck
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland.
| | - Patrik Vuilleumier
- Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland.
26
Picó-Pérez M, Ipser J, Taylor P, Alonso P, López-Solà C, Real E, Segalàs C, Roos A, Menchón JM, Stein DJ, Soriano-Mas C. Intrinsic functional and structural connectivity of emotion regulation networks in obsessive-compulsive disorder. Depress Anxiety 2019; 36:110-120. [PMID: 30253000 PMCID: PMC8980996 DOI: 10.1002/da.22845] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/22/2018] [Revised: 08/18/2018] [Accepted: 09/02/2018] [Indexed: 01/20/2023] Open
Abstract
Despite emotion regulation being altered in patients with obsessive-compulsive disorder (OCD), no studies have investigated its relation to multimodal amygdala connectivity. We compared corticolimbic functional and structural connectivity between OCD patients and healthy controls (HCs), and correlated this with the dispositional use of emotion regulation strategies and with OCD severity. OCD patients (n = 73) and HCs (n = 42) were assessed for suppression and reappraisal strategies using the Emotion Regulation Questionnaire (ERQ) and for OCD severity using the Yale-Brown Obsessive-Compulsive Scale. Resting-state functional magnetic resonance imaging (rs-fMRI) connectivity maps were generated using subject-specific left amygdala (LA) and right amygdala (RA) masks. We identified between-group differences in amygdala whole-brain connectivity, and evaluated the moderating effect of ERQ strategies. Significant regions and amygdala seeds were used as targets in probabilistic tractography analysis. Patients scored higher in suppression and lower in reappraisal. We observed higher rs-fMRI RA-right postcentral gyrus (PCG) connectivity in HCs, and in patients this connectivity was correlated with symptom severity. Reappraisal scores were associated with higher negative LA-left insula connectivity in HCs, and suppression scores were negatively associated with LA-precuneus and angular gyri connectivity in OCD. Structurally, patients showed higher mean diffusivity in tracts connecting the amygdala with the other targets. RA-PCG connectivity is diminished in patients, while disrupted emotion regulation is related to altered amygdala connectivity with the insula and posterior brain regions. Our results are the first to show, from a multimodal perspective, the association between amygdala connectivity and specific emotional processing domains, emphasizing the importance of amygdala connectivity in OCD pathophysiology.
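Seed-based rs-fMRI connectivity maps of the kind generated here typically follow a correlate-then-Fisher-z recipe; a minimal, hypothetical sketch with simulated data:

    import numpy as np

    rng = np.random.default_rng(2)
    n_scans, n_voxels = 240, 5000

    # Hypothetical preprocessed rs-fMRI data (scans x voxels) and a seed
    # time course averaged over a subject-specific amygdala mask.
    data = rng.standard_normal((n_scans, n_voxels))
    seed = data[:, :30].mean(axis=1)  # stand-in for the amygdala mask

    # Seed-based connectivity: Pearson correlation of the seed with every
    # voxel, then Fisher r-to-z so maps can be averaged and compared
    # across groups.
    data_z = (data - data.mean(0)) / data.std(0)
    seed_z = (seed - seed.mean()) / seed.std()
    r_map = data_z.T @ seed_z / n_scans
    z_map = np.arctanh(np.clip(r_map, -0.999, 0.999))
    print(z_map.shape, float(z_map.max()))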
Affiliation(s)
- Maria Picó-Pérez
- Department of Psychiatry, Bellvitge University Hospital-IDIBELL, Barcelona, Spain
- Department of Clinical Sciences, School of Medicine, University of Barcelona, Barcelona, Spain
| | - Jonathan Ipser
- Department of Psychiatry and Mental Health, University of Cape Town, J-Block Groote Schuur Hospital, Observatory, 7925, South Africa
| | - Paul Taylor
- MRC/UCT Medical Imaging Research Unit, Department of Human Biology, University of Cape Town, South Africa
- African Institute for Mathematical Sciences, South Africa
- Scientific and Statistical Computing Core, National Institute of Mental Health, Bethesda, MD, USA
| | - Pino Alonso
- Department of Psychiatry, Bellvitge University Hospital-IDIBELL, Barcelona, Spain
- Department of Clinical Sciences, School of Medicine, University of Barcelona, Barcelona, Spain
- CIBER Salud Mental (CIBERSam), Instituto Salud Carlos III (ISCIII), Barcelona, Spain
| | - Clara López-Solà
- Adult Mental Health Unit, Parc Taulí University Hospital, Sabadell, Spain
| | - Eva Real
- Department of Psychiatry, Bellvitge University Hospital-IDIBELL, Barcelona, Spain
- CIBER Salud Mental (CIBERSam), Instituto Salud Carlos III (ISCIII), Barcelona, Spain
| | - Cinto Segalàs
- Department of Psychiatry, Bellvitge University Hospital-IDIBELL, Barcelona, Spain
- CIBER Salud Mental (CIBERSam), Instituto Salud Carlos III (ISCIII), Barcelona, Spain
| | - Annerine Roos
- SU/UCT MRC Unit on Risk and Resilience in Mental Disorders, Department of Psychiatry, Stellenbosch University, PO Box 241, Cape Town 8000, South Africa
| | - José M. Menchón
- Department of Psychiatry, Bellvitge University Hospital-IDIBELL, Barcelona, Spain
- Department of Clinical Sciences, School of Medicine, University of Barcelona, Barcelona, Spain
- CIBER Salud Mental (CIBERSam), Instituto Salud Carlos III (ISCIII), Barcelona, Spain
| | - Dan J. Stein
- Department of Psychiatry and Mental Health, University of Cape Town, J-Block Groote Schuur Hospital, Observatory, 7925, South Africa
- SU/UCT MRC Unit on Risk and Resilience in Mental Disorders, Department of Psychiatry, Stellenbosch University, PO Box 241, Cape Town 8000, South Africa
| | - Carles Soriano-Mas
- Department of Psychiatry, Bellvitge University Hospital-IDIBELL, Barcelona, Spain
- CIBER Salud Mental (CIBERSam), Instituto Salud Carlos III (ISCIII), Barcelona, Spain
- Department of Psychobiology and Methodology in Health Sciences, Universitat Autònoma de Barcelona, Barcelona, Spain
- Corresponding author: Carles Soriano-Mas, PhD, Department of Psychiatry, Bellvitge University Hospital, Bellvitge Biomedical Research Institute-IDIBELL, Feixa Llarga s/n, 08907 L’Hospitalet de Llobregat, Barcelona, Spain
27
Föcker J, Röder B. Event-Related Potentials Reveal Evidence for Late Integration of Emotional Prosody and Facial Expression in Dynamic Stimuli: An ERP Study. Multisens Res 2019; 32:473-497. [DOI: 10.1163/22134808-20191332] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2018] [Accepted: 04/01/2019] [Indexed: 11/19/2022]
Abstract
The aim of the present study was to test whether multisensory interactions of emotional signals are modulated by intermodal attention and emotional valence. Faces, voices and bimodal emotionally congruent or incongruent face–voice pairs were randomly presented. The EEG was recorded while participants were instructed to detect sad emotional expressions in either faces or voices, while ignoring all stimuli with another emotional expression and sad stimuli of the task-irrelevant modality. Participants processed congruent sad face–voice pairs more efficiently than sad stimuli paired with an incongruent emotion, and performance was higher in congruent bimodal compared to unimodal trials, irrespective of which modality was task-relevant. Event-related potentials (ERPs) to congruent emotional face–voice pairs started to differ from ERPs to incongruent emotional face–voice pairs at 180 ms after stimulus onset: irrespective of which modality was task-relevant, ERPs revealed a more pronounced positivity (180 ms post-stimulus) to emotionally congruent trials compared to emotionally incongruent trials if the angry emotion was presented in the attended modality. A larger negativity to incongruent compared to congruent trials was observed in the time range of 400–550 ms (N400) for all emotions (happy, neutral, angry), irrespective of whether faces or voices were task-relevant. These results suggest an automatic interaction of emotion-related information.
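The congruency effects reported here rest on comparing mean ERP amplitudes in a time window across conditions; a minimal sketch of the 400–550 ms analysis, with simulated subject averages:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n_subjects, n_times = 24, 700   # ~1 ms steps from -100 to 600 ms
    times = np.linspace(-0.1, 0.6, n_times)

    # Hypothetical subject-level ERP averages (subjects x time) for
    # congruent and incongruent face-voice pairs at one electrode.
    erp_congruent = rng.standard_normal((n_subjects, n_times))
    erp_incongruent = erp_congruent - 0.3 \
        + 0.1 * rng.standard_normal((n_subjects, n_times))

    # Mean amplitude in the 400-550 ms window, then a paired t-test for
    # the incongruent-minus-congruent negativity (N400-like effect).
    win = (times >= 0.400) & (times <= 0.550)
    cong = erp_congruent[:, win].mean(axis=1)
    incong = erp_incongruent[:, win].mean(axis=1)
    t, p = stats.ttest_rel(incong, cong)
    print(f"t({n_subjects - 1}) = {t:.2f}, p = {p:.4f}")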
Affiliation(s)
- Julia Föcker
- Biological Psychology and Neuropsychology, University of Hamburg, Germany
- School of Psychology, College of Social Science, University of Lincoln, United Kingdom
| | - Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Germany
28
Cao L, Xu J, Yang X, Li X, Liu B. Abstract Representations of Emotions Perceived From the Face, Body, and Whole-Person Expressions in the Left Postcentral Gyrus. Front Hum Neurosci 2018; 12:419. [PMID: 30405375 PMCID: PMC6200969 DOI: 10.3389/fnhum.2018.00419] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2018] [Accepted: 09/27/2018] [Indexed: 12/03/2022] Open
Abstract
Emotions can be perceived through the face, body, and whole person, but previous studies on abstract representations of emotions focused only on emotions of the face and body. It remains unclear whether specific brain regions represent emotions at an abstract level regardless of which of the three sensory cues conveys them. In this study, we used representational similarity analysis (RSA) to explore the hypothesis that the emotion category is independent of all three stimulus types and can be decoded based on the activity patterns elicited by different emotions. Functional magnetic resonance imaging (fMRI) data were collected while participants classified emotions (angry, fearful, and happy) expressed in videos of faces, bodies, and whole persons. An abstract emotion model was defined to estimate the neural representational structure in the whole-brain RSA; it assumed that neural patterns were correlated within emotion conditions, ignoring stimulus type, but uncorrelated between emotion conditions. A neural representational dissimilarity matrix (RDM) for each voxel was then compared to the abstract emotion model to examine whether specific clusters could identify an abstract representation of emotions that generalized across stimulus types. Significantly positive correlations between neural RDMs and the model indicated that the abstract representation of emotions was successfully captured by the representational space of specific clusters. The whole-brain RSA revealed an emotion-specific but stimulus category-independent neural representation in the left postcentral gyrus, left inferior parietal lobe (IPL) and right superior temporal sulcus (STS). In the cross-modal classification analysis, further cluster-based MVPA revealed that only the left postcentral gyrus could successfully distinguish the three emotions for two stimulus-type pairs (face-body and body-whole person), and happy versus angry/fearful (in effect, positive versus negative) for all three stimulus-type pairs. Our study suggests that abstract representations of three emotions (angry, fearful, and happy) extend from face and body stimuli to whole-person stimuli, and the findings provide support for abstract representations of emotions in the left postcentral gyrus.
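The whole-brain RSA logic, an abstract emotion model RDM compared against neural RDMs, can be sketched as follows (simulated patterns; not the authors' code):

    import numpy as np
    from scipy import stats
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(4)

    # 9 conditions: 3 emotions (angry, fearful, happy) x 3 stimulus types
    # (face, body, whole person); hypothetical activity patterns from one
    # cluster (conditions x voxels), simulated with an emotion signal.
    emotion = np.repeat([0, 1, 2], 3)
    patterns = rng.standard_normal((9, 150)) + emotion[:, None] * 0.4

    # Abstract emotion model RDM: dissimilar (1) between emotions,
    # similar (0) within an emotion regardless of stimulus type.
    model_rdm = pdist(emotion[:, None], metric="hamming")

    # Neural RDM (correlation distance), compared to the model with a
    # Spearman correlation over the RDMs' lower triangles.
    neural_rdm = pdist(patterns, metric="correlation")
    rho, p = stats.spearmanr(neural_rdm, model_rdm)
    print(f"model fit: rho = {rho:.2f}, p = {p:.4f}")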
Affiliation(s)
- Linjing Cao
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
| | - Junhai Xu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
| | - Xiaoli Yang
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
| | - Xianglin Li
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, China
| | - Baolin Liu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China.,State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology, Tsinghua University, Beijing, China
29
Chen T, Becker B, Camilleri J, Wang L, Yu S, Eickhoff SB, Feng C. A domain-general brain network underlying emotional and cognitive interference processing: evidence from coordinate-based and functional connectivity meta-analyses. Brain Struct Funct 2018; 223:3813-3840. [PMID: 30083997 DOI: 10.1007/s00429-018-1727-9] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2018] [Accepted: 07/31/2018] [Indexed: 02/05/2023]
Abstract
The inability to control or inhibit emotional distractors characterizes a range of psychiatric disorders. Despite the use of a variety of task paradigms to determine the mechanisms underlying the control of emotional interference, a precise characterization of the brain regions and networks that support emotional interference processing remains elusive. Here, we performed coordinate-based and functional connectivity meta-analyses to determine the brain networks underlying emotional interference. Paradigms addressing interference processing in the cognitive or emotional domain were included in the meta-analyses, particularly the Stroop, Flanker, and Simon tasks. Our results revealed a consistent involvement of the bilateral dorsal anterior cingulate cortex, anterior insula, left inferior frontal gyrus, and superior parietal lobule during emotional interference. Follow-up conjunction analyses identified correspondence in these regions between emotional and cognitive interference processing. Finally, the patterns of functional connectivity of these regions were examined using resting-state functional connectivity and meta-analytic connectivity modeling. These regions were strongly connected as a distributed system, primarily mapping onto fronto-parietal control, ventral attention, and dorsal attention networks. Together, the present findings indicate that a domain-general neural system is engaged across multiple types of interference processing and that regulating emotional and cognitive interference depends on interactions between large-scale distributed brain networks.
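As a deliberately simplified, toy illustration of the convergence logic behind coordinate-based meta-analysis (not the ALE algorithm itself; the coordinates below are invented):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Hypothetical reported activation foci from several studies, placed
    # in voxel coordinates of a common template grid (a toy stand-in for
    # MNI space).
    grid = np.zeros((40, 48, 40))
    foci = [(20, 30, 22), (21, 29, 23), (10, 12, 18), (20, 31, 21)]
    for x, y, z in foci:
        grid[x, y, z] += 1.0

    # Smooth each focus with a Gaussian kernel and read off where the
    # evidence converges across studies.
    density = gaussian_filter(grid, sigma=2.0)
    peak = np.unravel_index(density.argmax(), density.shape)
    print("peak convergence at voxel", peak)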
Affiliation(s)
- Taolin Chen
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
| | - Benjamin Becker
- Clinical Hospital of the Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China
| | - Julia Camilleri
- Institute of Systems Neuroscience, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany.,Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7), Research Centre Jülich, Jülich, Germany
| | - Li Wang
- Collaborative Innovation Center of Assessment Toward Basic Education Quality, Beijing Normal University, Beijing, China
| | - Shuqi Yu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
| | - Simon B Eickhoff
- Institute of Systems Neuroscience, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany.,Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7), Research Centre Jülich, Jülich, Germany
| | - Chunliang Feng
- College of Information Science and Technology, Beijing Normal University, Beijing, China.,State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China.
30
Chavez RS, Heatherton TF, Wagner DD. Neural Population Decoding Reveals the Intrinsic Positivity of the Self. Cereb Cortex 2018; 27:5222-5229. [PMID: 27664966 DOI: 10.1093/cercor/bhw302] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2016] [Accepted: 09/01/2016] [Indexed: 11/15/2022] Open
Abstract
People are motivated to hold favorable views of themselves, which manifests as a positivity bias when evaluating their own performance and abilities. However, it remains an open question whether positive affect is an essential component of people's self-concept. Prior functional neuroimaging research demonstrated that similar regions of the brain support positive affect and self-referential processing, although a direct test of their shared representation had yet to be performed. Here we use functional magnetic resonance imaging in conjunction with multivariate pattern analysis in a cross-domain neural population decoding paradigm. We found that a multivariate pattern classifier trained to dissociate neural responses to viewing positively and negatively valenced images can dissociate thinking about oneself from thinking about a close friend during a lexical trait-judgment task commonly used in the study of self-referential processing. Cross-domain classification accuracy was found to be highest in the ventral medial prefrontal cortex (vMPFC), a region previously implicated in both self-referential processing and positive affect. These results show that brain responses during self-referential processing can be decoded from multi-voxel activation patterns in the vMPFC when viewing positively valenced material, thereby providing evidence that positive affect may be a central component of the mental representation of the self.
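Cross-domain decoding of this kind trains a classifier in one domain and tests it in the other; a minimal sketch under simulated data (hypothetical names, not the authors' pipeline):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(5)
    n_trials, n_voxels = 80, 300

    # Hypothetical vMPFC patterns. Training domain: viewing positive (1)
    # vs negative (0) images; test domain: self (1) vs friend (0)
    # trait judgments. A shared "positivity axis" links the two domains.
    shared_axis = rng.standard_normal(n_voxels)
    y_affect = rng.integers(0, 2, n_trials)
    y_target = rng.integers(0, 2, n_trials)
    X_affect = rng.standard_normal((n_trials, n_voxels)) \
        + np.outer(y_affect, shared_axis)
    X_selfref = rng.standard_normal((n_trials, n_voxels)) \
        + np.outer(y_target, shared_axis)

    # Train on affect, test on self/other; above-chance accuracy implies
    # a shared positivity code across domains.
    clf = LogisticRegression(max_iter=1000).fit(X_affect, y_affect)
    print(f"cross-domain accuracy: {clf.score(X_selfref, y_target):.2f}")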
Affiliation(s)
- Robert S Chavez
- The Ohio State University, Department of Psychology, Columbus, OH 43210, USA
| | - Todd F Heatherton
- Dartmouth College, Department of Psychological and Brain Sciences, Hanover, NH 03755, USA
| | - Dylan D Wagner
- The Ohio State University, Department of Psychology, Columbus, OH 43210, USA
31
Garrido-Vásquez P, Pell MD, Paulmann S, Kotz SA. Dynamic Facial Expressions Prime the Processing of Emotional Prosody. Front Hum Neurosci 2018; 12:244. [PMID: 29946247 PMCID: PMC6007283 DOI: 10.3389/fnhum.2018.00244] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2018] [Accepted: 05/28/2018] [Indexed: 11/29/2022] Open
Abstract
Evidence suggests that emotion is represented supramodally in the human brain. Emotional facial expressions, which often precede vocally expressed emotion in real life, can modulate event-related potentials (N100 and P200) during emotional prosody processing. To investigate these cross-modal emotional interactions, two lines of research have been put forward: cross-modal integration and cross-modal priming. In cross-modal integration studies, visual and auditory channels are temporally aligned, while in priming studies they are presented consecutively. Here we used cross-modal emotional priming to study the interaction of dynamic visual and auditory emotional information. Specifically, we presented dynamic facial expressions (angry, happy, neutral) as primes and emotionally-intoned pseudo-speech sentences (angry, happy) as targets. We were interested in how prime-target congruency would affect early auditory event-related potentials, i.e., N100 and P200, in order to shed more light on how dynamic facial information is used in cross-modal emotional prediction. Results showed enhanced N100 amplitudes for incongruently primed compared to congruently and neutrally primed emotional prosody, while the latter two conditions did not significantly differ. However, N100 peak latency was significantly delayed in the neutral condition compared to the other two conditions. Source reconstruction revealed that the right parahippocampal gyrus was activated in incongruent compared to congruent trials in the N100 time window. No significant ERP effects were observed in the P200 range. Our results indicate that dynamic facial expressions influence vocal emotion processing at an early point in time, and that an emotional mismatch between a facial expression and its ensuing vocal emotional signal induces additional processing costs in the brain, potentially because the cross-modal emotional prediction mechanism is violated in case of emotional prime-target incongruency.
Affiliation(s)
- Patricia Garrido-Vásquez
- Department of Experimental Psychology and Cognitive Science, Justus Liebig University Giessen, Giessen, Germany.,Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Marc D Pell
- School of Communication Sciences and Disorders, McGill University, Montreal, QC, Canada
| | - Silke Paulmann
- Department of Psychology, University of Essex, Colchester, United Kingdom
| | - Sonja A Kotz
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.,Department of Neuropsychology and Psychopharmacology, University of Maastricht, Maastricht, Netherlands
32
Baart M, Vroomen J. Recalibration of vocal affect by a dynamic face. Exp Brain Res 2018; 236:1911-1918. [PMID: 29696314 PMCID: PMC6010487 DOI: 10.1007/s00221-018-5270-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2018] [Accepted: 04/20/2018] [Indexed: 11/04/2022]
Abstract
Perception of vocal affect is influenced by the concurrent sight of an emotional face. We demonstrate that the sight of an emotional face can also induce recalibration of vocal affect. Participants were exposed to videos of a ‘happy’ or ‘fearful’ face in combination with a slightly incongruous sentence with ambiguous prosody. After this exposure, ambiguous test sentences were rated as more ‘happy’ when the exposure phase contained ‘happy’ rather than ‘fearful’ faces. This auditory shift likely reflects recalibration induced by error minimization of the inter-sensory discrepancy. In line with this view, when the prosody of the exposure sentence was non-ambiguous and congruent with the face (without audiovisual discrepancy), aftereffects went in the opposite direction, likely reflecting adaptation. Our results demonstrate, for the first time, that perception of vocal affect is flexible and can be recalibrated by slightly discrepant visual information.
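The recalibration aftereffect reduces to a shift in the proportion of 'happy' responses between exposure conditions; a toy sketch with simulated ratings:

    import numpy as np

    rng = np.random.default_rng(6)

    # Hypothetical ratings of ambiguous test sentences (1 = 'happy',
    # 0 = 'fearful'), collected after two kinds of exposure phases.
    after_happy_face = rng.binomial(1, 0.65, 40)    # happy-face exposure
    after_fearful_face = rng.binomial(1, 0.45, 40)  # fearful-face exposure

    # Recalibration aftereffect: shift in the proportion of 'happy'
    # responses toward the visually conveyed emotion of the exposure.
    aftereffect = after_happy_face.mean() - after_fearful_face.mean()
    print(f"aftereffect = {aftereffect:+.2f} (positive = recalibration)")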
Affiliation(s)
- Martijn Baart
- Department of Cognitive Neuropsychology, Tilburg University, P.O. Box 90153, 5000 LE, Tilburg, The Netherlands.,BCBL, Basque Center on Cognition, Brain and Language, Donostia, Spain.
| | - Jean Vroomen
- Department of Cognitive Neuropsychology, Tilburg University, P.O. Box 90153, 5000 LE, Tilburg, The Netherlands.
33
Pegado F, Hendriks MHA, Amelynck S, Daniels N, Bulthé J, Lee Masson H, Boets B, Op de Beeck H. A Multitude of Neural Representations Behind Multisensory "Social Norm" Processing. Front Hum Neurosci 2018; 12:153. [PMID: 29740297 PMCID: PMC5924771 DOI: 10.3389/fnhum.2018.00153] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2017] [Accepted: 04/05/2018] [Indexed: 01/10/2023] Open
Abstract
Humans show a unique capacity to process complex information from multiple sources. Social perception in a natural environment provides a good example of this capacity, as it typically requires the integration of information from different sensory systems and also from different levels of sensory processing. Here, instead of studying one isolated system and level of representation, we focused on a neuroimaging paradigm that allows multiple brain representations to be captured simultaneously, i.e., low- and high-level processing in two different sensory systems, as well as abstract cognitive processing of congruency. Subjects made social decisions based on the congruency between auditory and visual processing. Using multivoxel pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data, we probed a wide variety of representations. Our results confirmed the expected representations at each level and system according to the literature. Further, beyond the hierarchical organization of the visual, auditory and higher-order neural systems, we provide a more nuanced picture of the brain's functional architecture. Brain regions of the same neural system show similarity in their representations, but they also share information with regions from other systems. Moreover, the strength of neural information varied considerably across domains in a way that was not obviously related to task relevance. For instance, selectivity for the task-irrelevant animacy of visual input was very strong. The present approach represents a new way to explore the richness of co-activated brain representations underlying the natural complexity of human cognition.
Affiliation(s)
- Felipe Pegado
- Department of Brain and Cognition, KU Leuven, Leuven, Belgium.,Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, Leuven, Belgium.,Leuven Autism Research Consortium, KU Leuven, Leuven, Belgium
| | - Michelle H A Hendriks
- Department of Brain and Cognition, KU Leuven, Leuven, Belgium.,Leuven Autism Research Consortium, KU Leuven, Leuven, Belgium
| | | | - Nicky Daniels
- Department of Brain and Cognition, KU Leuven, Leuven, Belgium
| | - Jessica Bulthé
- Department of Brain and Cognition, KU Leuven, Leuven, Belgium
| | | | - Bart Boets
- Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, Leuven, Belgium.,Leuven Autism Research Consortium, KU Leuven, Leuven, Belgium
34
Klasen M, von Marschall C, Isman G, Zvyagintsev M, Gur RC, Mathiak K. Prosody production networks are modulated by sensory cues and social context. Soc Cogn Affect Neurosci 2018. [PMID: 29514331 PMCID: PMC5928400 DOI: 10.1093/scan/nsy015] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022] Open
Abstract
The neurobiology of emotional prosody production is not well investigated. In particular, the effects of cues and social context are not known. The present study sought to differentiate cued from free emotion generation and to assess the effect of social feedback from a human listener. Online speech filtering enabled functional magnetic resonance imaging during prosodic communication in 30 participants. Emotional vocalizations were (i) free, (ii) auditorily cued, (iii) visually cued or (iv) produced with interactive feedback. In addition to distributed language networks, cued emotions increased activity in auditory cortex and, in the case of visual cues, in visual cortex. Responses were larger in the right posterior superior temporal gyrus and the ventral striatum when participants were listened to and received feedback from the experimenter. Sensory, language and reward networks contributed to prosody production and were modulated by cues and social context. The right posterior superior temporal gyrus is a central hub for communication in social interactions, in particular for the interpersonal evaluation of vocal emotions.
Affiliation(s)
- Martin Klasen
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany.,JARA - Translational Brain Medicine, 52074 Aachen, Germany
| | - Clara von Marschall
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany.,JARA - Translational Brain Medicine, 52074 Aachen, Germany
| | - Güldehen Isman
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany
| | - Mikhail Zvyagintsev
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany.,JARA - Translational Brain Medicine, 52074 Aachen, Germany
| | - Ruben C Gur
- Department of Psychiatry, University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
| | - Klaus Mathiak
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany.,JARA - Translational Brain Medicine, 52074 Aachen, Germany
35
Wang Y, Zhou W, Cheng Y, Bian X. Gaze Patterns in Auditory-Visual Perception of Emotion by Children with Hearing Aids and Hearing Children. Front Psychol 2017; 8:2281. [PMID: 29312104 PMCID: PMC5743909 DOI: 10.3389/fpsyg.2017.02281] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2017] [Accepted: 12/14/2017] [Indexed: 12/30/2022] Open
Abstract
This study investigated eye-movement patterns during emotion perception in children with hearing aids and hearing children. Seventy-eight participants aged 3 to 7 years were asked to watch videos with a facial expression followed by an oral statement, and these two cues were either congruent or incongruent in emotional valence. Results showed that while hearing children paid more attention to the upper part of the face, children with hearing aids paid more attention to the lower part of the face after the oral statement was presented, especially in the neutral facial expression/neutral oral statement condition. These results suggest that children with hearing aids have an altered pattern of eye contact with others and difficulty matching visual and voice cues in emotion perception. Early rehabilitation for hearing-impaired children with assistive devices should aim to prevent the negative causes and effects of these gaze patterns.
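Gaze analyses like this one boil down to the proportion of fixation samples falling in each area of interest; a minimal sketch with simulated gaze coordinates:

    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical gaze samples for one trial: vertical coordinates on
    # the face, with the eyes region above and the mouth region below a
    # boundary taken from the stimulus layout (toy pixel values).
    gaze_y = rng.uniform(0, 400, 500)   # 500 gaze samples
    upper_face = gaze_y < 200           # eyes region
    lower_face = gaze_y >= 200          # mouth region

    # Proportion of looking time per area of interest (AOI), the measure
    # typically compared between groups and conditions.
    print(f"upper-face proportion: {upper_face.mean():.2f}")
    print(f"lower-face proportion: {lower_face.mean():.2f}")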
Affiliation(s)
- Yifang Wang
- School of Psychology, Capital Normal University, Beijing, China
| | - Wei Zhou
- School of Psychology, Capital Normal University, Beijing, China
| | | | - Xiaoying Bian
- School of Psychology, Capital Normal University, Beijing, China
36
Zinchenko A, Obermeier C, Kanske P, Schröger E, Villringer A, Kotz SA. The Influence of Negative Emotion on Cognitive and Emotional Control Remains Intact in Aging. Front Aging Neurosci 2017; 9:349. [PMID: 29163132 PMCID: PMC5671981 DOI: 10.3389/fnagi.2017.00349] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2017] [Accepted: 10/16/2017] [Indexed: 02/06/2023] Open
Abstract
Healthy aging is characterized by a gradual decline in cognitive control and the inhibition of interference, while emotional control is either preserved or facilitated. Emotional control regulates the processing of emotional conflicts, such as irony in speech, whereas cognitive control resolves conflict between non-affective tendencies. While negative emotion can trigger control processes and speed up the resolution of both cognitive and emotional conflicts, we know little about how aging affects the interaction of emotion and control. In two EEG experiments, we compared the influence of negative emotion on cognitive and emotional conflict processing in groups of younger adults (mean age = 25.2 years) and older adults (mean age = 69.4 years). Participants viewed short video clips and either categorized spoken vowels (cognitive conflict) or their emotional valence (emotional conflict), while the visual facial information was congruent or incongruent. Results show that negative emotion modulates both cognitive and emotional conflict processing in younger and older adults, as indicated by reduced response times and/or enhanced event-related potentials (ERPs). In emotional conflict processing, we observed a valence-specific N100 ERP component in both age groups. In cognitive conflict processing, we observed an interaction of emotion by congruence in the N100 responses in both age groups, and a main effect of congruence in the P200 and N200. Thus, the influence of emotion on conflict processing remains intact in aging, despite a marked decline in cognitive control. Older adults may prioritize emotional wellbeing and preserve the role of emotion in cognitive and emotional control.
Affiliation(s)
- Artyom Zinchenko
- International Max Planck Research School on Neuroscience of Communication, Leipzig, Germany.,Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.,Department Psychologie, Ludwig-Maximilians-Universität München, Munich, Germany
| | - Christian Obermeier
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Philipp Kanske
- Department of Social Neuroscience, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.,Institute of Clinical Psychology and Psychotherapy, Department of Psychology, Technische Universität Dresden, Dresden, Germany
| | - Erich Schröger
- Institute of Psychology, University of Leipzig, Leipzig, Germany
| | - Arno Villringer
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Sonja A Kotz
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.,Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
37
Neural correlates of ambient thermal sensation: An fMRI study. Sci Rep 2017; 7:11279. [PMID: 28900235 PMCID: PMC5595885 DOI: 10.1038/s41598-017-11802-z] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2017] [Accepted: 08/30/2017] [Indexed: 11/18/2022] Open
Abstract
An increasing number of biometeorological and psychological studies have demonstrated the importance and complexity of the processes involved in environmental thermal perception in humans. However, extant functional imaging data on thermal perception have yet to fully reveal the neural mechanisms underlying these processes because most studies were performed using local thermal stimulation and did not dissociate thermal sensation from comfort. Thus, for the first time, the present study employed functional magnetic resonance imaging (fMRI) and manipulated ambient temperature during brain measurement to independently explore the neural correlates of thermal sensation and comfort. There were significant correlations between the sensation of a lower temperature and activation in the left dorsal posterior insula, putamen, amygdala, and bilateral retrosplenial cortices but no significant correlations were observed between brain activation and thermal comfort. The dorsal posterior insula corresponds to the phylogenetically new thermosensory cortex whereas the limbic structures (i.e., amygdala and retrosplenial cortex) and dorsal striatum may be associated with supramodal emotional representations and the behavioral motivation to obtain heat, respectively. The co-involvement of these phylogenetically new and old systems may explain the psychological processes underlying the flexible psychological and behavioral thermo-environmental adaptations that are unique to humans.
38
Clark CN, Nicholas JM, Agustus JL, Hardy CJD, Russell LL, Brotherhood EV, Dick KM, Marshall CR, Mummery CJ, Rohrer JD, Warren JD. Auditory conflict and congruence in frontotemporal dementia. Neuropsychologia 2017; 104:144-156. [PMID: 28811257 PMCID: PMC5637159 DOI: 10.1016/j.neuropsychologia.2017.08.009] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2017] [Revised: 07/31/2017] [Accepted: 08/05/2017] [Indexed: 12/14/2022]
Abstract
Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias; however, the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n = 19) and semantic dementia (SD; n = 10) relative to healthy older individuals (n = 20). We created auditory scenes in which the semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence. Neuroanatomical correlates of auditory congruity processing were assessed using voxel-based morphometry. Relative to healthy controls, both the bvFTD and SD groups had impaired semantic and emotional congruity processing (after taking auditory control task performance into account) and reduced affective integration of sounds into scenes. Grey matter correlates of auditory semantic congruity processing were identified in distributed regions encompassing prefrontal, parieto-temporal and insular areas, and correlates of auditory emotional congruity in partly overlapping temporal, insular and striatal regions. Our findings suggest that decoding of auditory signal relatedness may probe a generic cognitive mechanism and neural architecture underpinning frontotemporal dementia syndromes.
Affiliation(s)
- Camilla N Clark
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
| | - Jennifer M Nicholas
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom; London School of Hygiene and Tropical Medicine, University of London, London, United Kingdom
| | - Jennifer L Agustus
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
| | - Christopher J D Hardy
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
| | - Lucy L Russell
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
| | - Emilie V Brotherhood
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
| | - Katrina M Dick
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
| | - Charles R Marshall
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
| | - Catherine J Mummery
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
| | - Jonathan D Rohrer
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
| | - Jason D Warren
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom.
39
Gao C, Wedell DH, Kim J, Weber CE, Shinkareva SV. Modelling audiovisual integration of affect from videos and music. Cogn Emot 2017; 32:516-529. [DOI: 10.1080/02699931.2017.1320979] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Chuanji Gao
- Department of Psychology, University of South Carolina, Columbia, SC, USA
| | - Douglas H. Wedell
- Department of Psychology, University of South Carolina, Columbia, SC, USA
| | - Jongwan Kim
- Department of Psychology, University of South Carolina, Columbia, SC, USA
| | - Christine E. Weber
- Department of Psychology, University of South Carolina, Columbia, SC, USA
40
Kim J, Shinkareva SV, Wedell DH. Representations of modality-general valence for videos and music derived from fMRI data. Neuroimage 2017; 148:42-54. [DOI: 10.1016/j.neuroimage.2017.01.002] [Citation(s) in RCA: 41] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2016] [Revised: 12/13/2016] [Accepted: 01/01/2017] [Indexed: 11/28/2022] Open
41
Hölig C, Föcker J, Best A, Röder B, Büchel C. Activation in the angular gyrus and in the pSTS is modulated by face primes during voice recognition. Hum Brain Mapp 2017; 38:2553-2565. [PMID: 28218433 DOI: 10.1002/hbm.23540] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2016] [Revised: 12/23/2016] [Accepted: 02/06/2017] [Indexed: 11/08/2022] Open
Abstract
The aim of the present study was to better understand the interaction of face and voice processing when identifying people. In an S1-S2 crossmodal priming fMRI experiment, the target (S2) was a disyllabic voice stimulus, whereas the modality of the prime (S1) was manipulated blockwise and consisted of the silent video of a speaking face in the crossmodal condition or of a voice stimulus in the unimodal condition. Primes and targets were from the same speaker (person-congruent) or from two different speakers (person-incongruent). Participants had to classify the S2 as either an old or a young person. Response times were shorter after a congruent than after an incongruent face prime. The right posterior superior temporal sulcus (pSTS) and the right angular gyrus showed a significant person identity effect (person-incongruent > person-congruent) in the crossmodal condition but not in the unimodal condition. In the unimodal condition, a person identity effect was observed in the bilateral inferior frontal gyrus. Our data suggest that both priming with a voice and priming with a face result in a preactivated voice representation of the respective person, which eventually facilitates (person-congruent trials) or hampers (person-incongruent trials) the processing of the identity of a subsequent voice. This process involves activation in the right pSTS and in the right angular gyrus for voices primed by faces, but not for voices primed by voices.
Affiliation(s)
- Cordula Hölig
- Biological Psychology and Neuropsychology, University of Hamburg, Germany.,Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Germany
| | - Julia Föcker
- Department of Psychology, Ludwig Maximilian University, Munich, Germany
| | - Anna Best
- Biological Psychology and Neuropsychology, University of Hamburg, Germany
| | - Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Germany
| | - Christian Büchel
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Germany
42
Schirmer A, Adolphs R. Emotion Perception from Face, Voice, and Touch: Comparisons and Convergence. Trends Cogn Sci 2017; 21:216-228. [PMID: 28173998 DOI: 10.1016/j.tics.2017.01.001] [Citation(s) in RCA: 140] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2016] [Revised: 12/23/2016] [Accepted: 01/03/2017] [Indexed: 11/30/2022]
Abstract
Historically, research on emotion perception has focused on facial expressions, and findings from this modality have come to dominate our thinking about other modalities. Here we examine emotion perception through a wider lens by comparing facial with vocal and tactile processing. We review stimulus characteristics and ensuing behavioral and brain responses and show that audition and touch do not simply duplicate visual mechanisms. Each modality provides a distinct input channel and engages partly nonoverlapping neuroanatomical systems with different processing specializations (e.g., specific emotions versus affect). Moreover, processing of signals across the different modalities converges, first into multi- and later into amodal representations that enable holistic emotion judgments.
Affiliation(s)
- Annett Schirmer
- Chinese University of Hong Kong, Hong Kong; Max Planck Institute for Human Cognitive and Brain Sciences, Germany; National University of Singapore, Singapore.
| | - Ralph Adolphs
- California Institute of Technology, Pasadena, CA, USA.
43
Eisner P, Klasen M, Wolf D, Zerres K, Eggermann T, Eisert A, Zvyagintsev M, Sarkheil P, Mathiak KA, Zepf F, Mathiak K. Cortico-limbic connectivity in MAOA-L carriers is vulnerable to acute tryptophan depletion. Hum Brain Mapp 2016; 38:1622-1635. [PMID: 27935229 DOI: 10.1002/hbm.23475] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2016] [Revised: 11/09/2016] [Accepted: 11/14/2016] [Indexed: 01/12/2023] Open
Abstract
INTRODUCTION A gene-environment interaction between expression genotypes of the monoamine oxidase A (MAOA) gene and adverse childhood experience increases the risk of antisocial behavior. However, the neural underpinnings of this interaction remain uninvestigated. A cortico-limbic circuit involving the prefrontal cortex (PFC) and the amygdala is central to the suppression of aggressive impulses and is modulated by serotonin (5-HT). MAOA genotypes may modulate the vulnerability of this circuit and increase the risk for emotion regulation deficits after specific life events. Acute tryptophan depletion (ATD) challenges 5-HT regulation and may identify vulnerable neuronal circuits, contributing to the gene-environment interaction. METHODS Functional magnetic resonance imaging measured resting-state activity in 64 healthy males in a double-blind, placebo-controlled study. Cortical maps of amygdala correlation identified the impact of ATD and its interaction with low- (MAOA-L) and high-expression variants (MAOA-H) of MAOA on cortico-limbic connectivity. RESULTS Across all regions of interest (ROIs) exhibiting an ATD effect on cortico-limbic connectivity, MAOA-L carriers were more susceptible to ATD than MAOA-H carriers. In particular, the MAOA-L group exhibited a larger reduction of amygdala connectivity with the right prefrontal cortex and a larger increase of amygdala connectivity with the insula and dorsal PCC. CONCLUSION MAOA-L carriers were more susceptible to a central 5-HT challenge in cortico-limbic networks. Such vulnerability of the cortical serotonergic system may contribute to the emergence of antisocial behavior after systemic challenges, observed as a gene-environment interaction.
Affiliation(s)
- Patrick Eisner
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany.,JARA-Translational Brain Medicine, Aachen, Germany
| | - Martin Klasen
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany.,JARA-Translational Brain Medicine, Aachen, Germany
| | - Dhana Wolf
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany.,JARA-Translational Brain Medicine, Aachen, Germany
| | - Klaus Zerres
- Department of Human Genetics, Medical School, RWTH Aachen University, Aachen, Germany
| | - Thomas Eggermann
- Department of Human Genetics, Medical School, RWTH Aachen University, Aachen, Germany
| | - Albrecht Eisert
- Department of Pharmacy, RWTH Aachen University, Aachen, Germany
| | - Mikhail Zvyagintsev
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany.,JARA-Translational Brain Medicine, Aachen, Germany
| | - Pegah Sarkheil
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany.,JARA-Translational Brain Medicine, Aachen, Germany
| | - Krystyna A Mathiak
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany.,JARA-Translational Brain Medicine, Aachen, Germany
| | - Florian Zepf
- Department of Child and Adolescent Psychiatry, School of Psychiatry and Clinical Neurosciences and School of Pediatrics and Child Health; Faculty of Medicine, Dentistry and Health Sciences; The University of Western Australia (M561), Perth, Australia.,Department of Health in Western Australia, Specialized Child and Adolescent Mental Health Services (CAMHS), Perth, Australia
| | - Klaus Mathiak
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany.,JARA-Translational Brain Medicine, Aachen, Germany
44
Mitchell RLC, Jazdzyk A, Stets M, Kotz SA. Recruitment of Language-, Emotion- and Speech-Timing Associated Brain Regions for Expressing Emotional Prosody: Investigation of Functional Neuroanatomy with fMRI. Front Hum Neurosci 2016; 10:518. [PMID: 27803656 PMCID: PMC5067951 DOI: 10.3389/fnhum.2016.00518] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2016] [Accepted: 09/29/2016] [Indexed: 12/02/2022] Open
Abstract
We aimed to progress understanding of prosodic emotion expression by establishing the brain regions active when expressing specific emotions, those activated irrespective of the target emotion, and those whose activation intensity varied depending on individual performance. BOLD contrast data were acquired whilst participants spoke nonsense words in happy, angry or neutral tones, or performed jaw-movements. Emotion-specific analyses demonstrated that when expressing angry prosody, activated brain regions included the inferior frontal and superior temporal gyri, the insula, and the basal ganglia. When expressing happy prosody, the activated brain regions also included the superior temporal gyrus, insula, and basal ganglia, with additional activation in the anterior cingulate. Conjunction analysis confirmed that the superior temporal gyrus and basal ganglia were activated regardless of the specific emotion concerned. Nevertheless, disjunctive comparisons between the expression of angry and happy prosody established that anterior cingulate activity was significantly higher for angry prosody than for happy prosody production. The degree of inferior frontal gyrus activity correlated with the ability to express the target emotion through prosody. We conclude that expressing prosodic emotions (vs. neutral intonation) requires generic brain regions involved in comprehending numerous aspects of language, in emotion-related processes such as experiencing emotions, and in the time-critical integration of speech information.
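The conjunction analysis used here can be expressed as a minimum-statistic test across contrast maps; a toy sketch with simulated t-maps:

    import numpy as np

    rng = np.random.default_rng(8)

    # Hypothetical voxelwise t-maps for two contrasts: angry > neutral
    # and happy > neutral prosody production (toy 1-D "brain").
    t_angry = rng.standard_normal(1000) + 0.5
    t_happy = rng.standard_normal(1000) + 0.5
    threshold = 1.64   # illustrative uncorrected cutoff

    # Minimum-statistic conjunction: a voxel counts as shared only if it
    # survives the threshold in BOTH contrasts.
    conjunction = np.minimum(t_angry, t_happy) > threshold
    print(f"{conjunction.sum()} voxels active for both emotions")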
Affiliation(s)
- Rachel L C Mitchell
- Centre for Affective Disorders, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
| | | | - Manuela Stets
- Department of Psychology, University of Essex, Colchester, UK
| | - Sonja A Kotz
- Section of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, Netherlands
45
Kim J, Wang J, Wedell DH, Shinkareva SV. Identifying Core Affect in Individuals from fMRI Responses to Dynamic Naturalistic Audiovisual Stimuli. PLoS One 2016; 11:e0161589. [PMID: 27598534 PMCID: PMC5012606 DOI: 10.1371/journal.pone.0161589] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2016] [Accepted: 08/08/2016] [Indexed: 01/19/2023] Open
Abstract
Recent research has demonstrated that affective states elicited by viewing pictures varying in valence and arousal are identifiable from whole brain activation patterns observed with functional magnetic resonance imaging (fMRI). Identification of affective states from more naturalistic stimuli has clinical relevance, but the feasibility of identifying these states on an individual trial basis from fMRI data elicited by dynamic multimodal stimuli is unclear. The goal of this study was to determine whether affective states can be similarly identified when participants view dynamic naturalistic audiovisual stimuli. Eleven participants viewed 5s audiovisual clips in a passive viewing task in the scanner. Valence and arousal for individual trials were identified both within and across participants based on distributed patterns of activity in areas selectively responsive to audiovisual naturalistic stimuli while controlling for lower level features of the stimuli. In addition, the brain regions identified by searchlight analyses to represent valence and arousal were consistent with previously identified regions associated with emotion processing. These findings extend previous results on the distributed representation of affect to multimodal dynamic stimuli.
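Across-participant identification of affective states is commonly implemented as leave-one-participant-out cross-validation; a minimal sketch with simulated trial patterns (hypothetical names, not the authors' pipeline):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

    rng = np.random.default_rng(9)
    n_participants, trials_per_sub, n_voxels = 11, 20, 400

    # Hypothetical trial-level patterns with a valence label (0/1) shared
    # across participants, plus participant IDs as grouping variable.
    subj = np.repeat(np.arange(n_participants), trials_per_sub)
    valence = rng.integers(0, 2, subj.size)
    axis = rng.standard_normal(n_voxels)
    X = rng.standard_normal((subj.size, n_voxels)) + np.outer(valence, axis)

    # Across-participant identification: train on all-but-one participant,
    # test on the held-out one (leave-one-participant-out).
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, valence,
                             groups=subj, cv=LeaveOneGroupOut())
    print(f"mean accuracy: {scores.mean():.2f}")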
Collapse
Affiliation(s)
- Jongwan Kim
- Department of Psychology, University of South Carolina, Columbia, South Carolina, United States of America
| | - Jing Wang
- Department of Psychology, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
| | - Douglas H. Wedell
- Department of Psychology, University of South Carolina, Columbia, South Carolina, United States of America
| | - Svetlana V. Shinkareva
- Department of Psychology, University of South Carolina, Columbia, South Carolina, United States of America
| |
Collapse
|
46
|
Ebisch SJH, Salone A, Martinotti G, Carlucci L, Mantini D, Perrucci MG, Saggino A, Romani GL, Di Giannantonio M, Northoff G, Gallese V. Integrative Processing of Touch and Affect in Social Perception: An fMRI Study. Front Hum Neurosci 2016; 10:209. [PMID: 27242474 PMCID: PMC4861868 DOI: 10.3389/fnhum.2016.00209] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2016] [Accepted: 04/25/2016] [Indexed: 11/13/2022] Open
Abstract
Social perception commonly employs multiple sources of information. The present study aimed at investigating the integrative processing of affective social signals. Task-related and task-free functional magnetic resonance imaging was performed in 26 healthy adult participants during a social perception task concerning dynamic visual stimuli simultaneously depicting facial expressions of emotion and tactile sensations that could be either congruent or incongruent. Confounding effects due to affective valence, inhibitory top-down influences, cross-modal integration, and conflict processing were minimized. The results showed that the perception of congruent, compared to incongruent stimuli, elicited enhanced neural activity in a set of brain regions including left amygdala, bilateral posterior cingulate cortex (PCC), and left superior parietal cortex. These congruency effects did not differ as a function of emotion or sensation. A complementary task-related functional interaction analysis preliminarily suggested that amygdala activity depended on previous processing stages in fusiform gyrus and PCC. The findings provide support for the integrative processing of social information about others' feelings from manifold bodily sources (sensory-affective information) in amygdala and PCC. Given that the congruent stimuli were also judged as being more self-related and more familiar in terms of personal experience in an independent sample of participants, we speculate that such integrative processing might be mediated by the linking of external stimuli with self-experience. Finally, the prediction of task-related responses in amygdala by intrinsic functional connectivity between amygdala and PCC during a task-free state implies a neuro-functional basis for an individual predisposition for the integrative processing of social stimulus content.
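The closing claim, that intrinsic amygdala-PCC coupling during a task-free state predicts task-related amygdala responses, boils down to an across-participant brain-behavior correlation. The sketch below simulates that logic; every time course and effect size is invented for illustration.

```python
# Per participant: correlate task-free amygdala and PCC time courses to get
# intrinsic functional connectivity (FC), then test whether intrinsic FC
# predicts the task-related amygdala congruency effect across participants.
# All data below are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subjects, n_timepoints = 26, 200

intrinsic_fc = np.empty(n_subjects)
for s in range(n_subjects):
    amygdala_ts = rng.normal(size=n_timepoints)            # task-free ROI signal
    pcc_ts = 0.4 * amygdala_ts + rng.normal(size=n_timepoints)
    intrinsic_fc[s] = np.corrcoef(amygdala_ts, pcc_ts)[0, 1]

# Simulated task effect: amygdala response to congruent minus incongruent
# stimuli, constructed here to depend partly on intrinsic FC.
task_effect = 0.5 * intrinsic_fc + rng.normal(scale=0.2, size=n_subjects)

r, p = stats.pearsonr(intrinsic_fc, task_effect)
print(f"intrinsic FC vs. task congruency effect: r = {r:.2f}, p = {p:.3f}")
```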
Collapse
Affiliation(s)
- Sjoerd J H Ebisch
- Department of Neuroscience, Imaging and Clinical Sciences and Institute of Advanced Biomedical Technologies, G. d'Annunzio University of Chieti-Pescara, Chieti, Italy
| | - Anatolia Salone
- Department of Neuroscience, Imaging and Clinical Sciences and Institute of Advanced Biomedical Technologies, G. d'Annunzio University of Chieti-Pescara, Chieti, Italy
| | - Giovanni Martinotti
- Department of Neuroscience, Imaging and Clinical Sciences and Institute of Advanced Biomedical Technologies, G. d'Annunzio University of Chieti-Pescara, Chieti, Italy
| | - Leonardo Carlucci
- Department of Psychological, Health and Territorial Sciences, School of Medicine and Health Sciences, G. d'Annunzio University of Chieti-Pescara, Chieti, Italy
| | - Dante Mantini
- Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland; Department of Experimental Psychology, University of Oxford, Oxford, UK; Research Center for Motor Control and Neuroplasticity, KU Leuven, Leuven, Belgium
| | - Mauro G Perrucci
- Department of Neuroscience, Imaging and Clinical Sciences and Institute of Advanced Biomedical Technologies, G. d'Annunzio University of Chieti-Pescara, Chieti, Italy
| | - Aristide Saggino
- Department of Psychological, Health and Territorial Sciences, School of Medicine and Health Sciences, G. d'Annunzio University of Chieti-Pescara, Chieti, Italy
| | - Gian Luca Romani
- Department of Neuroscience, Imaging and Clinical Sciences and Institute of Advanced Biomedical Technologies, G. d'Annunzio University of Chieti-Pescara, Chieti, Italy
| | - Massimo Di Giannantonio
- Department of Neuroscience, Imaging and Clinical Sciences and Institute of Advanced Biomedical Technologies, G. d'Annunzio University of Chieti-Pescara, Chieti, Italy
| | - Georg Northoff
- The Royal's Institute of Mental Health Research & University of Ottawa Brain and Mind Research Institute, Centre for Neural Dynamics, Faculty of Medicine, University of Ottawa, Ottawa, ON, Canada
| | - Vittorio Gallese
- Section of Physiology, Department of Neuroscience, University of Parma, Parma, Italy; Institute of Philosophy, School of Advanced Study, University of London, London, UK
| |
Collapse
|
47
|
Kim J, Wedell DH. Comparison of physiological responses to affect eliciting pictures and music. Int J Psychophysiol 2016; 101:9-17. [DOI: 10.1016/j.ijpsycho.2015.12.011] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2015] [Revised: 12/14/2015] [Accepted: 12/30/2015] [Indexed: 11/27/2022]
|
48
|
Flaisch T, Imhof M, Schmälzle R, Wentz KU, Ibach B, Schupp HT. Implicit and Explicit Attention to Pictures and Words: An fMRI-Study of Concurrent Emotional Stimulus Processing. Front Psychol 2015; 6:1861. [PMID: 26733895 PMCID: PMC4683193 DOI: 10.3389/fpsyg.2015.01861] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2015] [Accepted: 11/17/2015] [Indexed: 11/25/2022] Open
Abstract
The present study utilized functional magnetic resonance imaging (fMRI) to examine the neural processing of concurrently presented emotional stimuli under varying explicit and implicit attention demands. Specifically, in separate trials, participants indicated the category of either pictures or words. The words were placed over the center of the pictures, and the picture-word compound stimuli were presented for 1500 ms in a rapid event-related design. The results reveal pronounced main effects of task and emotion: the picture categorization task prompted strong activations in visual, parietal, temporal, frontal, and subcortical regions; the word categorization task evoked increased activation only in left extrastriate cortex. Furthermore, beyond replicating key findings regarding emotional picture and word processing, the results point to a dissociation of semantic-affective and sensory-perceptual processes for words: while emotional words engaged semantic-affective networks of the left hemisphere regardless of task, the increased activity in left extrastriate cortex associated with explicitly attending to words was diminished when the word was overlaid on an erotic image. Finally, we observed a significant interaction between Picture Category and Task within dorsal visual-associative regions, the inferior parietal cortex, and the dorsolateral and medial prefrontal cortices: during the word categorization task, activation was increased in these regions when the words were overlaid on erotic as compared to romantic pictures; during the picture categorization task, activity in these areas was relatively decreased when categorizing erotic as compared to romantic pictures. Thus, the emotional intensity of the pictures strongly affected brain regions devoted to the control of task-related word or picture processing. These findings are discussed with respect to the interplay of obligatory stimulus processing with task-related attentional control mechanisms.
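At the region-of-interest level, the Picture Category × Task interaction reported here is a difference of differences across the four conditions. The sketch below illustrates that test on simulated per-participant beta estimates; the sample size and all values are assumptions, not data from the study.

```python
# Picture Category x Task interaction as a difference of differences on
# region-averaged beta estimates, tested across participants with a
# one-sample t-test. All values are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_subjects = 24  # illustrative sample size, not from the study

# Per-participant mean betas in a region of interest, one array per condition.
word_erotic   = rng.normal(loc=1.0, size=n_subjects)
word_romantic = rng.normal(loc=0.5, size=n_subjects)
pic_erotic    = rng.normal(loc=0.4, size=n_subjects)
pic_romantic  = rng.normal(loc=0.7, size=n_subjects)

# Interaction: the erotic-minus-romantic difference under the word task,
# minus the same difference under the picture task.
interaction = (word_erotic - word_romantic) - (pic_erotic - pic_romantic)
t, p = stats.ttest_1samp(interaction, popmean=0.0)
print(f"interaction: t({n_subjects - 1}) = {t:.2f}, p = {p:.3f}")
```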
Collapse
Affiliation(s)
- Tobias Flaisch
- Department of Psychology, University of Konstanz, Konstanz, Germany
| | - Martin Imhof
- Department of Psychology, University of Konstanz, Konstanz, Germany
| | - Ralf Schmälzle
- Department of Psychology, University of Konstanz, Konstanz, Germany
| | - Klaus-Ulrich Wentz
- Department of Radiology, Kantonsspital Münsterlingen, Münsterlingen, Switzerland
| | - Bernd Ibach
- Department of Psychiatry, Psychiatrische Dienste Thurgau, Münsterlingen, Switzerland
| | - Harald T Schupp
- Department of Psychology, University of Konstanz, Konstanz, Germany
| |
Collapse
|
49
|
The P300 component wave reveals differences in subclinical anxious-depressive states during bimodal oddball tasks: An effect of stimulus congruence. Clin Neurophysiol 2015; 126:2108-23. [DOI: 10.1016/j.clinph.2015.01.012] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2014] [Revised: 01/13/2015] [Accepted: 01/18/2015] [Indexed: 12/30/2022]
|
50
|
Novak LR, Gitelman DR, Schuyler B, Li W. Olfactory-visual integration facilitates perception of subthreshold negative emotion. Neuropsychologia 2015; 77:288-97. [PMID: 26359718 DOI: 10.1016/j.neuropsychologia.2015.09.005] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2015] [Revised: 08/01/2015] [Accepted: 09/04/2015] [Indexed: 12/19/2022]
Abstract
A fast-growing literature on multisensory emotion integration notwithstanding, the chemical senses, intimately associated with emotion, have been largely overlooked. Moreover, an ecologically highly relevant principle of "inverse effectiveness", which renders integration maximally effective when sensory input is impoverished, remains to be assessed in emotion integration. Presenting minute, subthreshold negative (vs. neutral) cues in faces and odors, we demonstrated olfactory-visual emotion integration in the form of improved emotion detection (especially among individuals with weaker perception of unimodal negative cues) and response enhancement in the amygdala. Moreover, while perceptual gain for visual negative emotion involved the posterior superior temporal sulcus/pSTS, perceptual gain for olfactory negative emotion engaged both the associative olfactory (orbitofrontal) cortex and the amygdala. Dynamic causal modeling (DCM) analysis of fMRI time series further revealed connectivity strengthening among these areas during crossmodal emotion integration. That multisensory (but not low-level unisensory) areas exhibited both enhanced responses and region-to-region coupling favors a top-down (vs. bottom-up) account of olfactory-visual emotion integration. The current findings thus confirm the involvement of multisensory convergence areas, while highlighting unique characteristics of olfaction-related integration. Furthermore, the successful crossmodal binding of subthreshold aversive cues not only supports the principle of "inverse effectiveness" in emotion integration but also accentuates the automatic, unconscious quality of crossmodal emotion synthesis.
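The inverse-effectiveness prediction, that integration gains are largest where unimodal input is weakest, can be checked as a negative across-participant correlation between unimodal sensitivity and multisensory gain. The sketch below simulates that test; the measures, sample size, and scores are invented assumptions.

```python
# Inverse effectiveness as an across-participant correlation: participants
# with weaker unimodal (face-only) emotion detection should show a larger
# bimodal (face + odor) gain. All scores are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_subjects = 30  # illustrative sample size

unimodal_dprime = rng.normal(loc=0.8, scale=0.3, size=n_subjects)  # face-only d'
# Simulated bimodal gain, constructed to be larger for weaker perceivers.
bimodal_gain = 0.6 - 0.5 * unimodal_dprime + rng.normal(scale=0.1, size=n_subjects)

r, p = stats.pearsonr(unimodal_dprime, bimodal_gain)
print(f"unimodal sensitivity vs. multisensory gain: r = {r:.2f}, p = {p:.3f}")
```

A reliably negative r here would be the signature of inverse effectiveness.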
Collapse
Affiliation(s)
- Lucas R Novak
- Department of Psychology, Florida State University, 1107 W. Call St., Tallahassee, FL 32304, USA.
| | - Darren R Gitelman
- Department of Neurology, Northwestern University Feinberg School of Medicine, USA
| | - Brianna Schuyler
- Waisman Center for Brain Imaging and Behavior, University of Wisconsin-Madison, USA
| | - Wen Li
- Department of Psychology, Florida State University, 1107 W. Call St., Tallahassee, FL 32304, USA.
| |
Collapse
|