1. Chen Y, Zhang L, Yin H. Different emotion regulation strategies mediate the relations of corresponding connections within the default-mode network to sleep quality. Brain Imaging Behav 2024; 18:302-314. PMID: 38057650. DOI: 10.1007/s11682-023-00828-9.
Abstract
Despite a long history of interest in the relation of emotion regulation to sleep quality, how different regulation strategies link with sleep quality at the neural level is still poorly understood. We therefore used the process model of emotion regulation as an organizing framework for examining the neural underpinnings of the links between two emotion regulation strategies and sleep quality. A total of 183 young adults (51.7% female, mean age = 22.16) underwent MRI scans and then completed the Pittsburgh Sleep Quality Index (PSQI) and the Emotion Regulation Questionnaire (ERQ), which comprises two dimensions: cognitive reappraisal and expressive suppression. Results showed that emotion regulation mediated the association between functional connectivity within the intrinsic default-mode network (DMN) and sleep quality. Specifically, resting-state functional connectivity (rsFC) analysis showed that cognitive reappraisal was positively correlated with rsFC within the DMN, including left superior temporal gyrus (lSTG)-left lateral occipital cortex (lLOC), lSTG-left anterior cingulate gyrus (lACG), right lateral occipital cortex (rLOC)-left middle frontal gyrus (lMFG), and rLOC-lSTG connectivity. Mediation analysis further indicated a mediating role of cognitive reappraisal in the links between these four DMN connectivities and sleep quality. In addition, expressive suppression was positively correlated with rsFC within the DMN, including left precuneus cortex (lPrcu)-right temporal pole (rTP) and lPrcu-lSTG connectivity, and mediation analysis indicated a mediating role of expressive suppression in the links between these two DMN connectivities and sleep quality. Overall, these findings support the process model of emotion regulation: reappraisal and suppression engage distinct neural circuits, which in turn shape each strategy's relation to sleep quality.
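Mediation analyses of the kind reported above decompose the total rsFC-sleep association into a direct path and an indirect path through the regulation strategy. A minimal regression-based sketch with simulated data (all variable names and values are hypothetical, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: X = rsFC strength, M = reappraisal score,
# Y = PSQI score; all values are simulated for illustration.
n = 183
X = rng.normal(size=n)
M = 0.5 * X + rng.normal(scale=0.8, size=n)            # a-path structure
Y = 0.4 * M + 0.1 * X + rng.normal(scale=0.8, size=n)  # b-path structure

def ols(y, predictors):
    """OLS coefficients for y ~ intercept + predictors."""
    A = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

a = ols(M, [X])[1]            # path a: X -> M
b = ols(Y, [M, X])[1]         # path b: M -> Y, controlling for X
indirect = a * b              # mediated (indirect) effect
direct = ols(Y, [M, X])[2]    # direct effect c'
total = ols(Y, [X])[1]        # total effect c

# For linear OLS mediation, c = c' + a*b holds exactly.
```

In practice the indirect effect is usually tested with bootstrap confidence intervals rather than read off directly.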
Affiliation(s)
- Yang Chen
- Department of Psychology, School of Education Science, Hunan Normal University, 36 Lushan Road, Changsha, Hunan, 410081, China
- Centre for Mind & Brain Science, Hunan Normal University, Changsha, China
- Li Zhang
- Department of Psychology, School of Education Science, Hunan Normal University, 36 Lushan Road, Changsha, Hunan, 410081, China
- Centre for Mind & Brain Science, Hunan Normal University, Changsha, China
- Huazhan Yin
- Department of Psychology, School of Education Science, Hunan Normal University, 36 Lushan Road, Changsha, Hunan, 410081, China
- Centre for Mind & Brain Science, Hunan Normal University, Changsha, China
2. Lee SW, Kim S, Lee S, Seo HS, Cha H, Chang Y, Lee SJ. Neural mechanisms of acceptance-commitment therapy for obsessive-compulsive disorder: a resting-state and task-based fMRI study. Psychol Med 2024; 54:374-384. PMID: 37427558. DOI: 10.1017/s0033291723001769.
Abstract
BACKGROUND There is growing evidence for the use of acceptance-commitment therapy (ACT) in the treatment of obsessive-compulsive disorder (OCD). However, few studies of fully implemented ACT have examined the neural mechanisms underlying its effect on OCD. This study therefore aimed to elucidate the neural correlates of ACT in patients with OCD using task-based and resting-state functional magnetic resonance imaging (fMRI). METHODS Patients with OCD were randomly assigned to the ACT group (n = 21) or the wait-list control group (n = 21). An 8-week group-format ACT program was provided to the ACT group. All participants underwent an fMRI scan and psychological measurements before and after the 8 weeks. RESULTS Patients with OCD showed significantly increased activation in the bilateral insula and superior temporal gyri (STG), induced by the thought-action fusion task, after the ACT intervention. Psycho-physiological interaction (PPI) analyses with these regions as seeds further revealed that left insula-left inferior frontal gyrus (IFG) connectivity was strengthened in the ACT group after treatment. Increased resting-state functional connectivity was also found in the posterior cingulate cortex (PCC), precuneus, and lingual gyrus after the ACT intervention. Most of these regions showed significant correlations with ACT process measures, while only the right insula was correlated with the obsessive-compulsive symptom measure. CONCLUSIONS These findings suggest that the therapeutic effect of ACT on OCD may involve salience and interoception processes (insula), multisensory integration (STG), language (IFG), and self-referential processes (PCC and precuneus). These areas or their interactions could be important for understanding how ACT works psychologically.
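A PPI analysis of the kind used above asks whether seed-target coupling changes with the psychological context. A simplified sketch with simulated time courses (region names, effect sizes, and scan counts are invented; a full fMRI PPI pipeline also deconvolves the hemodynamic response before forming the interaction term):

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated task design and seed-region time course (hypothetical).
n_scans = 200
task = np.repeat([0.0, 1.0], n_scans // 2)   # psychological regressor
seed = rng.normal(size=n_scans)              # physiological regressor
ppi = seed * (task - task.mean())            # mean-centered interaction

# Target region coupled to the seed only during the task blocks.
target = 0.8 * seed * task + 0.2 * rng.normal(size=n_scans)

# GLM with intercept, task, seed, and PPI regressors.
design = np.column_stack([np.ones(n_scans), task, seed, ppi])
beta, *_ = np.linalg.lstsq(design, target, rcond=None)
ppi_beta = beta[3]  # positive -> coupling increases under the task
```

A significantly positive PPI coefficient is the evidence of "strengthened connectivity" under the condition of interest.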
Affiliation(s)
- Sang Won Lee
- Department of Psychiatry, Kyungpook National University Chilgok Hospital, Daegu, Korea
- Department of Psychiatry, School of Medicine, Kyungpook National University, Daegu, Korea
- Seungho Kim
- Department of Medical & Biological Engineering, Kyungpook National University, Daegu, Korea
- Sangyeol Lee
- Department of Medical & Biological Engineering, Kyungpook National University, Daegu, Korea
- Ho Seok Seo
- Department of Psychiatry, Kyungpook National University Hospital, Daegu, Korea
- Hyunsil Cha
- Institute of Biomedical Engineering Research, Kyungpook National University, Daegu, Korea
- Yongmin Chang
- Department of Molecular Medicine, School of Medicine, Kyungpook National University, Daegu, Korea
- Department of Radiology, Kyungpook National University Hospital, Daegu, Korea
- Seung Jae Lee
- Department of Psychiatry, School of Medicine, Kyungpook National University, Daegu, Korea
- Department of Psychiatry, Kyungpook National University Hospital, Daegu, Korea
3. Vaessen M, Van der Heijden K, de Gelder B. Modality-specific brain representations during automatic processing of face, voice and body expressions. Front Neurosci 2023; 17:1132088. PMID: 37869514. PMCID: PMC10587395. DOI: 10.3389/fnins.2023.1132088.
Abstract
A central question in affective science, and one that is relevant for its clinical applications, is how emotions provided by different stimuli are experienced and represented in the brain. On the traditional view, emotional signals are recognized with the help of emotion concepts that are typically used in descriptions of mental states and emotional experiences, irrespective of the sensory modality. This perspective motivated the search for abstract representations of emotions in the brain, shared across variations in stimulus type (face, body, voice) and sensory origin (visual, auditory). On the other hand, emotion signals such as an aggressive gesture trigger rapid automatic behavioral responses, and this may take place before, or independently of, a full abstract representation of the emotion. This argues that specific emotion signals may trigger rapid adaptive behavior solely by mobilizing modality- and stimulus-specific brain representations, without relying on higher-order abstract emotion categories. To test this hypothesis, we presented participants with naturalistic dynamic emotion expressions of the face, the whole body, or the voice in a functional magnetic resonance imaging (fMRI) study. To focus on automatic emotion processing and sidestep explicit concept-based emotion recognition, participants performed an unrelated target detection task presented in a different sensory modality than the stimulus. Using multivariate analyses to assess neural activity patterns in response to the different stimulus types, we reveal a stimulus-category- and modality-specific brain organization of affective signals. Our findings are consistent with the notion that under ecological conditions face, body, and voice emotion expressions may have different functional roles in triggering rapid adaptive behavior, even if, when viewed from an abstract conceptual vantage point, they may all exemplify the same emotion. This has implications for a neuroethologically grounded emotion research program, which should start from detailed behavioral observations of how face, body, and voice expressions function in naturalistic contexts.
4. Gan S, Li W. Aberrant neural correlates of multisensory processing of audiovisual social cues related to social anxiety: An electrophysiological study. Front Psychiatry 2023; 14:1020812. PMID: 36761870. PMCID: PMC9902659. DOI: 10.3389/fpsyt.2023.1020812.
Abstract
BACKGROUND Social anxiety disorder (SAD) is characterized by abnormal fear of social cues. Although unisensory processing of social stimuli associated with social anxiety (SA) has been well described, how multisensory processing relates to SA is still open to clarification. Using electroencephalography (EEG), we investigated the neural correlates of multisensory processing and the related temporal dynamics in SAD. METHODS Twenty-five SAD participants and 23 healthy control (HC) participants were presented with angry and neutral faces, voices, and their emotionally congruent combinations while completing an emotional categorization task. RESULTS Face-voice combinations facilitated auditory processing at multiple stages, indicated by an acceleration of auditory N1 latency, attenuation of auditory N1 and P250 amplitudes, and a decrease in theta power. In addition, bimodal inputs elicited cross-modal integrative activity, indicated by enhanced visual P1, N170, and P3/LPP amplitudes and a superadditive response of P1 and P3/LPP. More importantly, excessively greater integrative activity (at P3/LPP amplitude) was found in SAD participants, and this abnormal integrative activity in both early and late temporal stages was related to a larger interpretation bias of miscategorizing neutral face-voice combinations as angry. CONCLUSION The study revealed that the neural correlates of multisensory processing are aberrant in SAD and are related to an interpretation bias toward multimodal social cues across multiple processing stages. Our findings suggest that deficits in multisensory processing might be an important factor in the psychopathology of SA.
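The "superadditive response" criterion mentioned above compares the bimodal response against the sum of the unimodal responses. A minimal sketch with invented per-participant ERP amplitudes (all numbers hypothetical):

```python
import statistics

# Hypothetical per-participant P1 amplitudes (microvolts) under
# auditory (A), visual (V), and audiovisual (AV) conditions.
A  = [1.1, 1.3, 0.9, 1.4, 1.2]
V  = [2.1, 1.8, 2.2, 1.9, 2.0]
AV = [3.8, 3.9, 3.6, 4.1, 3.7]

# The additive model predicts AV = A + V; positive residuals indicate
# a superadditive (integrative) cross-modal response.
residuals = [av - (a + v) for a, v, av in zip(A, V, AV)]
mean_superadditivity = statistics.mean(residuals)
```

In a real analysis the residuals would be tested against zero across participants (e.g. with a paired test) rather than only averaged.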
Affiliation(s)
- Shuzhen Gan
- Shanghai Changning Mental Health Center, Shanghai, China; Shanghai Mental Health Center, Shanghai, China
- Weijun Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, Liaoning, China
5. Dong H, Li N, Fan L, Wei J, Xu J. Integrative interaction of emotional speech in audio-visual modality. Front Neurosci 2022; 16:797277. PMID: 36440282. PMCID: PMC9695733. DOI: 10.3389/fnins.2022.797277.
Abstract
Emotional cues are expressed in many ways in daily life, and the emotional information we receive is often conveyed through multiple modalities. Successful social interaction requires combining multisensory cues to accurately determine the emotions of others. The integration mechanism of multimodal emotional information has been widely investigated: different brain activity measurement methods have localized the regions involved in the audio-visual integration of emotional information, mainly to the bilateral superior temporal regions. However, the methods adopted in these studies are relatively simple, and the study materials rarely contain speech information; the integration mechanism of emotional speech in the human brain therefore needs further examination. In this paper, a functional magnetic resonance imaging (fMRI) study with an event-related design was conducted to explore the audio-visual integration mechanism of emotional speech in the human brain, using dynamic facial expressions and emotional speech to express emotions of different valences. Representational similarity analysis (RSA) based on regions of interest (ROIs), whole-brain searchlight analysis, modality conjunction analysis, and supra-additive analysis were used to analyze and verify the roles of the relevant brain regions. In addition, a weighted RSA method was used to evaluate the contribution of each candidate model to the best-fitting model for each ROI. The results showed that only the left insula was detected by all methods, suggesting that the left insula plays an important role in the audio-visual integration of emotional speech. Whole-brain searchlight, modality conjunction, and supra-additive analyses together revealed that the bilateral middle temporal gyrus (MTG), right inferior parietal lobule, and bilateral precuneus might also be involved in the audio-visual integration of emotional speech.
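ROI-based RSA of the kind used here characterizes a region by the pairwise dissimilarity structure of its condition-wise activity patterns and correlates that structure with a candidate model. A minimal sketch on simulated patterns (condition counts, voxel counts, and the model are all invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical multi-voxel patterns: 4 conditions x 50 voxels; both the
# "brain" ROI data and the candidate model here are simulated.
n_cond, n_vox = 4, 50
patterns = rng.normal(size=(n_cond, n_vox))

def rdm(data):
    """Representational dissimilarity matrix: 1 - Pearson r per pair."""
    return 1.0 - np.corrcoef(data)

def upper(m):
    """Vectorize the unique (upper-triangle) pairwise dissimilarities."""
    i, j = np.triu_indices(m.shape[0], k=1)
    return m[i, j]

brain_rdm = rdm(patterns)
model_rdm = rdm(rng.normal(size=(n_cond, n_vox)))

# RSA statistic: second-order correlation between the two RDMs
# (Spearman rank correlation is common in practice; Pearson keeps the
# sketch short).
rsa_r = np.corrcoef(upper(brain_rdm), upper(model_rdm))[0, 1]
```

The searchlight variant simply repeats this computation inside a small sphere centered on every voxel.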
Affiliation(s)
- Haibin Dong
- Tianjin Key Lab of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China
- Na Li
- Tianjin Key Lab of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China
- Lingzhong Fan
- Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Jianguo Wei
- Tianjin Key Lab of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China
- Junhai Xu
- Tianjin Key Lab of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China
- Correspondence: Junhai Xu
6. The Role of the Interaction between the Inferior Parietal Lobule and Superior Temporal Gyrus in the Multisensory Go/No-go Task. Neuroimage 2022; 254:119140. PMID: 35342002. DOI: 10.1016/j.neuroimage.2022.119140.
Abstract
Information from multiple sensory modalities interacts. Using functional magnetic resonance imaging (fMRI), we aimed to identify the neural structures underlying how a co-occurring sound modulates visual motor response execution. The reaction time (RT) to audiovisual stimuli was significantly faster than the RT to visual stimuli. Signal detection analyses showed no significant difference in perceptual sensitivity (d') between audiovisual and visual stimuli, while the response criterion (β or c) was lower for audiovisual than for visual stimuli. Functional connectivity between the left inferior parietal lobule (IPL) and bilateral superior temporal gyrus (STG) was enhanced in Go processing compared with No-go processing of audiovisual stimuli. Furthermore, the left precentral gyrus (PreCG) showed enhanced functional connectivity with the bilateral STG and other areas of the ventral stream in Go compared with No-go processing of audiovisual stimuli. These results reveal the neuronal network underlying the modulation of motor response execution when visual stimuli are accompanied by a co-occurring sound in a multisensory Go/No-go task, comprising the left IPL, left PreCG, bilateral STG, and areas of the ventral stream. The role of the interaction between the IPL and STG in transforming audiovisual information into motor behavior is discussed. The current study provides a new perspective for exploring the brain mechanisms underlying how humans execute appropriate behaviors on the basis of multisensory information.
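The signal detection indices above are computed from hit and false-alarm rates via the inverse normal CDF. A stdlib-only sketch with invented rates, illustrating the pattern reported here (similar d', lower i.e. more liberal criterion for audiovisual stimuli):

```python
from statistics import NormalDist

Z = NormalDist().inv_cdf  # inverse standard-normal CDF (z-transform)

def sdt_indices(hit_rate, fa_rate):
    """Perceptual sensitivity d' and response criterion c."""
    d_prime = Z(hit_rate) - Z(fa_rate)
    criterion = -0.5 * (Z(hit_rate) + Z(fa_rate))
    return d_prime, criterion

# Hypothetical hit/false-alarm rates for the two stimulus conditions.
d_v,  c_v  = sdt_indices(0.95, 0.10)   # visual-only
d_av, c_av = sdt_indices(0.97, 0.15)   # audiovisual
```

With these numbers d' barely changes while c drops, i.e. participants respond more readily to audiovisual stimuli without any gain in sensitivity.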
7. Huang P, Luan XH, Xie Z, Li MT, Chen SD, Liu J, Jia XZ, Cao L, Zhou HY. Altered Local Brain Amplitude of Fluctuations in Patients With Myotonic Dystrophy Type 1. Front Aging Neurosci 2021; 13:790632. PMID: 34955817. PMCID: PMC8703136. DOI: 10.3389/fnagi.2021.790632.
Abstract
This study aimed to investigate the characteristics of spontaneous brain activity in patients with myotonic dystrophy type 1 (DM1). A total of 18 patients with DM1 and 18 healthy controls (HCs) were examined by resting-state functional MRI. Several complementary measures were applied: the amplitude of low-frequency fluctuations (ALFF), the fractional amplitude of low-frequency fluctuations (fALFF), wavelet-transform-based ALFF (Wavelet-ALFF) with standardization, and the percent amplitude of fluctuation (PerAF) with and without standardization. Compared with HCs, patients with DM1 showed decreased ALFF and Wavelet-ALFF in the bilateral precuneus (PCUN), angular gyrus (ANG), inferior parietal lobule (IPL), posterior cingulate gyrus (PCG), medial superior frontal gyrus (SFGmed), and middle occipital gyrus (MOG), regions mainly distributed within the default mode network (DMN). Decreased ALFF and Wavelet-ALFF were also seen in the bilateral middle frontal gyrus (MFG) and the opercular part of the inferior frontal gyrus (IFGoperc), main components of the executive control network (ECN). Patients with DM1 further showed decreased fALFF in the right SFGmed, the right anterior cingulate and paracingulate gyri (ACG), and the bilateral MFG, as well as reduced standardized PerAF in the bilateral PCUN, ANG, PCG, MOG, and left IPL, and reduced non-standardized PerAF in the right PCUN and bilateral PCG. In conclusion, patients with DM1 had decreased activity in the DMN and ECN, together with increased fluctuations in the temporal cortex and cerebellum. Decreased brain activity in the DMN was the most repeatable and reliable finding, with the PCUN and PCG being the most specific imaging biomarkers of brain dysfunction in patients with DM1.
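ALFF-type measures quantify the amplitude of a voxel's time series in the low-frequency band (conventionally 0.01-0.08 Hz), and fALFF expresses that as a fraction of the amplitude over the whole spectrum. A single-voxel sketch on synthetic data (TR, scan count, and the signal itself are assumptions for illustration):

```python
import numpy as np

# Hypothetical acquisition: TR = 2 s, 240 volumes.
TR, n_vols = 2.0, 240
t = np.arange(n_vols) * TR
rng = np.random.default_rng(1)
# Synthetic voxel time series: a slow 0.05 Hz fluctuation (inside the
# conventional band) plus white noise.
ts = np.sin(2 * np.pi * 0.05 * t) + 0.3 * rng.normal(size=n_vols)

freqs = np.fft.rfftfreq(n_vols, d=TR)
amp = np.abs(np.fft.rfft(ts - ts.mean())) / n_vols

band = (freqs >= 0.01) & (freqs <= 0.08)
alff = amp[band].sum()          # ALFF: summed low-frequency amplitude
falff = alff / amp[1:].sum()    # fALFF: fraction of total non-DC amplitude
peak_freq = freqs[1:][np.argmax(amp[1:])]
```

In whole-brain pipelines these per-voxel values are then standardized (e.g. z-scored or divided by the global mean), which is the "with standardization" step mentioned in the abstract.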
Affiliation(s)
- Pei Huang
- Department of Neurology and Institute of Neurology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xing-Hua Luan
- Department of Neurology and Institute of Neurology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Department of Neurology, Shanghai Jiao Tong University Affiliated Sixth People’s Hospital, Shanghai, China
- Zhou Xie
- School of Information and Electronics Technology, Jiamusi University, Jiamusi, China
- Meng-Ting Li
- Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, China
- Sheng-Di Chen
- Department of Neurology and Institute of Neurology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jun Liu
- Department of Neurology and Institute of Neurology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xi-Ze Jia
- Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, China
- Li Cao
- Department of Neurology and Institute of Neurology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Department of Neurology, Shanghai Jiao Tong University Affiliated Sixth People’s Hospital, Shanghai, China
- Hai-Yan Zhou
- Department of Neurology and Institute of Neurology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
8. Wang D, Liang S. Dynamic Causal Modeling on the Identification of Interacting Networks in the Brain: A Systematic Review. IEEE Trans Neural Syst Rehabil Eng 2021; 29:2299-2311. PMID: 34714747. DOI: 10.1109/tnsre.2021.3123964.
Abstract
Dynamic causal modeling (DCM) has long been used to characterize effective connectivity within networks of distributed neuronal responses. Previous reviews have illuminated the conceptual basis of DCM and its variants from different angles. However, no detailed summary or classification of task-related effective connectivity across brain regions has been formally available so far, and there is also a lack of analysis of DCM applications to hemodynamic and electrophysiological measurements. This review analyzes the effective connectivity of different brain regions identified with DCM across different measurement types. We found that, in general, most studies focused on networks between cortical regions; research on networks involving deep subcortical nuclei, or between them and the cerebral cortex, is receiving increasing attention but remains far smaller in scale. Our analysis also reveals a clear bias towards certain task types. Based on these results, we identify and discuss several promising research directions that may help the community attain a clearer understanding of brain network interactions under different tasks.
9. Xu J, Dong H, Li N, Wang Z, Guo F, Wei J, Dang J. Weighted RSA: An Improved Framework on the Perception of Audio-visual Affective Speech in Left Insula and Superior Temporal Gyrus. Neuroscience 2021; 469:46-58. PMID: 34119576. DOI: 10.1016/j.neuroscience.2021.06.002.
Abstract
Being able to accurately perceive the emotion expressed by facial or verbal expressions is critical to successful social interaction. However, only a few studies have examined multimodal interactions in speech emotion, and their findings on speech emotion perception are inconsistent. It remains unclear how speech emotion of different valences is perceived by the human brain under multimodal stimulation. In this paper, we conducted a functional magnetic resonance imaging (fMRI) study with an event-related design, using dynamic facial expressions and emotional speech stimuli to express different emotions, in order to explore the perception mechanism of speech emotion in the audio-visual modality. Representational similarity analysis (RSA), whole-brain searchlight analysis, and conjunction analysis of emotion were used to characterize the representation of speech emotion from different perspectives. Notably, a weighted RSA approach was proposed to evaluate the contribution of each candidate model to the best-fitting model, providing a supplement to standard RSA. The weighted RSA results indicated that the fitted models were superior to all candidate models and that the weights could be used to explain the representations in the ROIs. The bilateral amygdala was shown to be associated with the processing of both positive and negative, but not neutral, emotion. The results indicate that the left posterior insula and the left anterior superior temporal gyrus (STG) play important roles in the perception of multimodal speech emotion.
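The core idea of weighted RSA, as described above, is to fit a region's RDM as a weighted combination of candidate model RDMs and read the weights as contributions. A noiseless toy sketch (model contents, condition count, and the "true" weights are invented; this is the general least-squares idea, not necessarily the authors' exact estimator):

```python
import numpy as np

rng = np.random.default_rng(3)

def upper(m):
    """Vectorize the unique (upper-triangle) RDM entries."""
    i, j = np.triu_indices(m.shape[0], k=1)
    return m[i, j]

# Three hypothetical candidate model RDMs over 6 conditions
# (e.g. emotion, modality, low-level acoustics); all simulated.
n_cond = 6
models = []
for _ in range(3):
    m = rng.random((n_cond, n_cond))
    m = (m + m.T) / 2.0       # RDMs are symmetric
    np.fill_diagonal(m, 0.0)  # zero self-dissimilarity
    models.append(m)

# Noiseless "ROI" RDM built as a known weighted mixture.
true_w = np.array([0.7, 0.2, 0.1])
roi_rdm = sum(w * m for w, m in zip(true_w, models))

# Weighted RSA: least-squares weights of each candidate model in the
# best-fitting combination for this ROI.
X = np.column_stack([upper(m) for m in models])
weights, *_ = np.linalg.lstsq(X, upper(roi_rdm), rcond=None)
```

Because the toy ROI RDM is an exact mixture, the weights are recovered exactly; with real, noisy RDMs the fit quality and weight uncertainty would also need to be assessed.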
Affiliation(s)
- Junhai Xu
- College of Intelligence and Computing, Tianjin Key Lab of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Haibin Dong
- College of Intelligence and Computing, Tianjin Key Lab of Cognitive Computing and Application, Tianjin University, Tianjin, China; State Grid Tianjin Electric Power Company, China
- Na Li
- College of Intelligence and Computing, Tianjin Key Lab of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Zeyu Wang
- College of Intelligence and Computing, Tianjin Key Lab of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Fei Guo
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Jianguo Wei
- College of Intelligence and Computing, Tianjin Key Lab of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Jianwu Dang
- College of Intelligence and Computing, Tianjin Key Lab of Cognitive Computing and Application, Tianjin University, Tianjin, China; School of Information Science, Japan Advanced Institute of Science and Technology, Japan
10. Csonka M, Mardmomen N, Webster PJ, Brefczynski-Lewis JA, Frum C, Lewis JW. Meta-Analyses Support a Taxonomic Model for Representations of Different Categories of Audio-Visual Interaction Events in the Human Brain. Cereb Cortex Commun 2021; 2:tgab002. PMID: 33718874. PMCID: PMC7941256. DOI: 10.1093/texcom/tgab002.
Abstract
Our ability to perceive meaningful action events involving objects, people, and other animate agents is characterized in part by an interplay of visual and auditory sensory processing and their cross-modal interactions. However, this multisensory ability can be altered or dysfunctional in some hearing and sighted individuals, and in some clinical populations. The present meta-analysis sought to test current hypotheses regarding neurobiological architectures that may mediate audio-visual multisensory processing. Reported coordinates from 82 neuroimaging studies (137 experiments) that revealed some form of audio-visual interaction in discrete brain regions were compiled, converted to a common coordinate space, and then organized along specific categorical dimensions to generate activation likelihood estimate (ALE) brain maps and various contrasts of those derived maps. The results revealed brain regions (cortical "hubs") preferentially involved in multisensory processing along different stimulus category dimensions, including 1) living versus nonliving audio-visual events, 2) audio-visual events involving vocalizations versus actions by living sources, 3) emotionally valent events, and 4) dynamic-visual versus static-visual audio-visual stimuli. These meta-analysis results are discussed in the context of neurocomputational theories of semantic knowledge representations and perception, and the brain volumes of interest are available for download to facilitate data interpretation for future neuroimaging studies.
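Activation likelihood estimation (ALE), the method behind the maps described above, blurs each study's reported peak with a Gaussian "modeled activation" map and combines the maps voxel-wise as ALE = 1 - prod(1 - MA_i). A toy one-dimensional sketch (real ALE works on 3-D MNI volumes with sample-size-dependent kernels; the coordinates here are invented):

```python
import numpy as np

# Toy 1-D "brain" and three studies' reported peak coordinates.
grid = np.arange(0.0, 100.0)     # voxel positions (arbitrary units)
foci = [30.0, 32.0, 70.5]        # two overlapping foci and one lone focus
sigma = 3.0                      # Gaussian blur width (hypothetical)

# Per-study modeled activation (MA) maps.
ma_maps = [np.exp(-0.5 * ((grid - f) / sigma) ** 2) for f in foci]

# ALE: probability that at least one study "activates" each voxel.
ale = 1.0 - np.prod([1.0 - ma for ma in ma_maps], axis=0)
```

The union-style combination rewards convergence: the voxel between the two overlapping foci ends up with a higher ALE score than the voxel under the isolated focus.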
Affiliation(s)
- Matt Csonka
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Nadia Mardmomen
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Paula J Webster
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Julie A Brefczynski-Lewis
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Chris Frum
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- James W Lewis
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
11. Creupelandt C, D'Hondt F, de Timary P, Falagiarda F, Collignon O, Maurage P. Selective visual and crossmodal impairment in the discrimination of anger and fear expressions in severe alcohol use disorder. Drug Alcohol Depend 2020; 213:108079. PMID: 32554170. DOI: 10.1016/j.drugalcdep.2020.108079.
Abstract
BACKGROUND Severe alcohol use disorder (SAUD) is associated with impaired discrimination of emotional expressions. This deficit appears increased in crossmodal settings, when simultaneous inputs from different sensory modalities are presented. So far, however, studies exploring emotional crossmodal processing in SAUD have relied on static faces and unmatched face/voice pairs, offering limited ecological validity. Our aim was therefore to assess emotional processing using a validated and ecological paradigm relying on dynamic audio-visual stimuli, manipulating the amount of emotional information available. METHOD Thirty individuals with SAUD and 30 matched healthy controls performed an emotional discrimination task requiring them to identify five emotions (anger, disgust, fear, happiness, sadness) expressed as visual, auditory, or auditory-visual segments of varying length. Sensitivity indices (d') were computed to obtain an unbiased measure of emotional discrimination and entered into a generalized linear mixed model. Incorrect emotional attributions were also scrutinized through confusion matrices. RESULTS Discrimination levels varied across sensory modalities and emotions and increased with stimulus duration. Crucially, performance also improved from unimodal to crossmodal conditions in both groups, but discrimination of anger crossmodal stimuli and fear crossmodal/visual stimuli remained selectively impaired in SAUD. These deficits were not influenced by stimulus duration, suggesting that they were not modulated by the amount of emotional information available. Moreover, they were not associated with systematic error patterns reflecting specific confusions between emotions. CONCLUSIONS These results clarify the nature and extent of crossmodal impairments in SAUD and converge with earlier findings to ascribe a specific role to anger and fear in this pathology.
Affiliation(s)
- Coralie Creupelandt
- Louvain Experimental Psychopathology Research Group (UCLEP), Psychological Sciences Research Institute (IPSY) and Institute of Neuroscience (IoNS), UCLouvain, B-1348, Louvain-la-Neuve, Belgium
- Fabien D'Hondt
- Univ. Lille, Inserm, CHU Lille, U1172, Lille Neuroscience & Cognition, F-59000 Lille, France; CHU Lille, Clinique de Psychiatrie, CURE, F-59000, Lille, France; Centre National de Ressources et de Résilience Lille-Paris (CN2R), F-59000, Lille, France
- Philippe de Timary
- Louvain Experimental Psychopathology Research Group (UCLEP), Psychological Sciences Research Institute (IPSY) and Institute of Neuroscience (IoNS), UCLouvain, B-1348, Louvain-la-Neuve, Belgium; Department of Adult Psychiatry, Saint-Luc Academic Hospital, B-1200, Brussels, Belgium
- Federica Falagiarda
- Crossmodal Perception and Plasticity laboratory (CPP-Lab), Psychological Sciences Research Institute (IPSY) and Institute of Neuroscience (IoNS), UCLouvain, B-1348, Louvain-la-Neuve, Belgium
- Olivier Collignon
- Crossmodal Perception and Plasticity laboratory (CPP-Lab), Psychological Sciences Research Institute (IPSY) and Institute of Neuroscience (IoNS), UCLouvain, B-1348, Louvain-la-Neuve, Belgium; Centre for Mind/Brain Studies, University of Trento, Trento, Italy
- Pierre Maurage
- Louvain Experimental Psychopathology Research Group (UCLEP), Psychological Sciences Research Institute (IPSY) and Institute of Neuroscience (IoNS), UCLouvain, B-1348, Louvain-la-Neuve, Belgium
12. Li Y, Wang F, Chen Y, Cichocki A, Sejnowski T. The Effects of Audiovisual Inputs on Solving the Cocktail Party Problem in the Human Brain: An fMRI Study. Cereb Cortex 2019; 28:3623-3637. [PMID: 29029039] [DOI: 10.1093/cercor/bhx235]
Abstract
At cocktail parties, our brains often simultaneously receive visual and auditory information. Although the cocktail party problem has been widely investigated under auditory-only settings, the effects of audiovisual inputs have not. This study explored the effects of audiovisual inputs in a simulated cocktail party. In our fMRI experiment, each congruent audiovisual stimulus was a synthesis of 2 facial movie clips, each of which could be classified into 1 of 2 emotion categories (crying and laughing). Visual-only (faces) and auditory-only stimuli (voices) were created by extracting the visual and auditory contents from the synthesized audiovisual stimuli. Subjects were instructed to selectively attend to 1 of the 2 objects contained in each stimulus and to judge its emotion category in the visual-only, auditory-only, and audiovisual conditions. The neural representations of the emotion features were assessed by calculating decoding accuracy and brain pattern-related reproducibility index based on the fMRI data. We compared the audiovisual condition with the visual-only and auditory-only conditions and found that audiovisual inputs enhanced the neural representations of emotion features of the attended objects instead of the unattended objects. This enhancement might partially explain the benefits of audiovisual inputs for the brain to solve the cocktail party problem.
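Decoding accuracy of the kind reported in this study is generally estimated by cross-validating a classifier on voxel activity patterns. The authors' pipeline is not detailed here, so the following is a hypothetical minimal version using a nearest-class-mean rule with leave-one-out cross-validation on synthetic two-class data.

```python
import numpy as np

def loo_decoding_accuracy(patterns, labels):
    """Leave-one-out decoding accuracy with a nearest-class-mean classifier.

    patterns: (n_trials, n_voxels) array; labels: (n_trials,) class ids.
    Each held-out trial is assigned the label of the closest class mean
    computed from the remaining trials.
    """
    n = len(labels)
    correct = 0
    for i in range(n):
        train = np.ones(n, dtype=bool)
        train[i] = False  # hold out trial i
        means = {c: patterns[train & (labels == c)].mean(axis=0)
                 for c in np.unique(labels[train])}
        pred = min(means, key=lambda c: np.linalg.norm(patterns[i] - means[c]))
        correct += int(pred == labels[i])
    return correct / n

rng = np.random.default_rng(0)
# Two toy "emotion" classes with separable mean patterns plus noise.
a = rng.normal(0.0, 1.0, (20, 50)) + 1.5
b = rng.normal(0.0, 1.0, (20, 50)) - 1.5
X = np.vstack([a, b])
y = np.array([0] * 20 + [1] * 20)
print(loo_decoding_accuracy(X, y))  # well-separated classes decode near 1.0
```

Above-chance accuracy on held-out trials is what licenses the claim that a region carries information about the attended object's emotion category.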
Affiliation(s)
- Yuanqing Li
- Center for Brain Computer Interfaces and Brain Information Processing, South China University of Technology, Guangzhou, China; Guangzhou Key Laboratory of Brain Computer Interaction and Applications, Guangzhou, China
- Fangyi Wang
- Center for Brain Computer Interfaces and Brain Information Processing, South China University of Technology, Guangzhou, China; Guangzhou Key Laboratory of Brain Computer Interaction and Applications, Guangzhou, China
- Yongbin Chen
- Center for Brain Computer Interfaces and Brain Information Processing, South China University of Technology, Guangzhou, China; Guangzhou Key Laboratory of Brain Computer Interaction and Applications, Guangzhou, China
- Andrzej Cichocki
- Riken Brain Science Institute, Wako-shi, Japan; Skolkovo Institute of Science and Technology (SKOLTECH), Moscow, Russia
- Terrence Sejnowski
- Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA, USA
13. Gao C, Weber CE, Shinkareva SV. The brain basis of audiovisual affective processing: Evidence from a coordinate-based activation likelihood estimation meta-analysis. Cortex 2019; 120:66-77. [DOI: 10.1016/j.cortex.2019.05.016]
14. Aryani A, Hsu CT, Jacobs AM. Affective iconic words benefit from additional sound-meaning integration in the left amygdala. Hum Brain Mapp 2019; 40:5289-5300. [PMID: 31444898] [PMCID: PMC6864889] [DOI: 10.1002/hbm.24772]
Abstract
Recent studies have shown that similarity between the sound and the meaning of a word (i.e., iconicity) can help listeners access that word's meaning more readily, but the neural mechanisms underlying this beneficial role of iconicity in semantic processing remain largely unknown. In an fMRI study, we focused on the affective domain and examined whether affective iconic words (e.g., high arousal in both sound and meaning) activate additional brain regions that integrate emotional information from different domains (i.e., sound and meaning). In line with our hypothesis, affective iconic words, compared to their non-iconic counterparts, elicited additional BOLD responses in the left amygdala, known for its role in the multimodal representation of emotions. Functional connectivity analyses revealed that the observed amygdalar activity was modulated by an interaction of iconic condition and activations in two hubs representative of processing the sound (left superior temporal gyrus) and meaning (left inferior frontal gyrus) of words. These results provide a neural explanation for the facilitative role of iconicity in language processing and indicate that language users are sensitive to the interaction between the sound and meaning aspects of words, suggesting that iconicity is a general property of human language.
Affiliation(s)
- Arash Aryani
- Department of Experimental and Neurocognitive Psychology, Freie Universität Berlin, Germany
- Chun-Ting Hsu
- Kokoro Research Center, Kyoto University, Kyoto, Japan
- Arthur M Jacobs
- Department of Experimental and Neurocognitive Psychology, Freie Universität Berlin, Germany; Centre for Cognitive Neuroscience Berlin (CCNB), Berlin, Germany
15. Domínguez-Borràs J, Guex R, Méndez-Bértolo C, Legendre G, Spinelli L, Moratti S, Frühholz S, Mégevand P, Arnal L, Strange B, Seeck M, Vuilleumier P. Human amygdala response to unisensory and multisensory emotion input: No evidence for superadditivity from intracranial recordings. Neuropsychologia 2019; 131:9-24. [PMID: 31158367] [DOI: 10.1016/j.neuropsychologia.2019.05.027]
Abstract
The amygdala is crucially implicated in processing emotional information from various sensory modalities. However, there is a dearth of knowledge concerning the integration and relative time-course of its responses across different channels, i.e., for auditory, visual, and audiovisual input. Functional neuroimaging data in humans point to a possible role of this region in the multimodal integration of emotional signals, but direct evidence for anatomical and temporal overlap of unisensory and multisensory-evoked responses in the amygdala is still lacking. We recorded event-related potentials (ERPs) and oscillatory activity from 9 amygdalae using intracranial electroencephalography (iEEG) in patients prior to epilepsy surgery, and compared electrophysiological responses to fearful, happy, or neutral stimuli presented as voices alone, faces alone, or voices and faces delivered simultaneously. Results showed differential amygdala responses to fearful stimuli, in comparison to neutral, reaching significance 100-200 ms post-onset for auditory, visual and audiovisual stimuli. At later latencies, ∼400 ms post-onset, the amygdala response to audiovisual information was also amplified in comparison to auditory or visual stimuli alone. Importantly, however, we found no evidence for either super- or subadditivity effects in any of the bimodal responses. These results suggest, first, that emotion processing in the amygdala occurs at globally similar early stages of perceptual processing for auditory, visual, and audiovisual inputs; second, that overall larger responses to multisensory information occur only at later stages; and third, that the underlying mechanisms of this multisensory gain may reflect a purely additive response to concomitant visual and auditory inputs. Our findings provide novel insights into emotion processing across the sensory pathways, and their convergence within the limbic system.
Affiliation(s)
- Judith Domínguez-Borràs
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland.
- Raphaël Guex
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland.
- Guillaume Legendre
- Campus Biotech, Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland.
- Laurent Spinelli
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland.
- Stephan Moratti
- Department of Experimental Psychology, Complutense University of Madrid, Spain; Laboratory for Clinical Neuroscience, Centre for Biomedical Technology, Universidad Politécnica de Madrid, Spain.
- Sascha Frühholz
- Department of Psychology, University of Zurich, Switzerland.
- Pierre Mégevand
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland.
- Luc Arnal
- Campus Biotech, Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland.
- Bryan Strange
- Laboratory for Clinical Neuroscience, Centre for Biomedical Technology, Universidad Politécnica de Madrid, Spain; Department of Neuroimaging, Alzheimer's Disease Research Centre, Reina Sofia-CIEN Foundation, Madrid, Spain.
- Margitta Seeck
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland.
- Patrik Vuilleumier
- Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland.
16. Vanneste S, To WT, De Ridder D. Tinnitus and neuropathic pain share a common neural substrate in the form of specific brain connectivity and microstate profiles. Prog Neuropsychopharmacol Biol Psychiatry 2019; 88:388-400. [PMID: 30142355] [DOI: 10.1016/j.pnpbp.2018.08.015]
Abstract
Tinnitus and neuropathic pain share similar pathophysiological, clinical, and treatment characteristics. In this EEG study, a group of tinnitus (n = 100) and neuropathic pain (n = 100) patients are compared to each other and to a healthy control group (n = 100). Spectral analysis demonstrates gamma band activity within the primary auditory and somatosensory cortices in patients with tinnitus and neuropathic pain, respectively. A conjunction analysis further demonstrates an overlap of tinnitus- and pain-related activity in the anterior and posterior cingulate cortex as well as in the dorsolateral prefrontal cortex in comparison to healthy controls. Further analysis reveals that similar microstates characterize tinnitus and neuropathic pain patients: two of these microstates differ from the healthy group and two are shared with it. Both pain and tinnitus patients spend half of the time in one specific microstate. Seed-based functional connectivity with the source within the predominant microstate shows delta, alpha1, and gamma lagged phase synchronization overlap with multiple brain areas between pain and tinnitus. These data suggest that auditory and somatosensory phantom perceptions share an overlapping brain network with common activation and connectivity patterns and are differentiated by specific sensory cortex gamma activation.
Affiliation(s)
- Sven Vanneste
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, USA.
- Wing Ting To
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, USA.
- Dirk De Ridder
- Department of Surgical Sciences, Dunedin School of Medicine, University of Otago, New Zealand.
17. Föcker J, Röder B. Event-Related Potentials Reveal Evidence for Late Integration of Emotional Prosody and Facial Expression in Dynamic Stimuli: An ERP Study. Multisens Res 2019; 32:473-497. [DOI: 10.1163/22134808-20191332]
Abstract
The aim of the present study was to test whether multisensory interactions of emotional signals are modulated by intermodal attention and emotional valence. Faces, voices and bimodal emotionally congruent or incongruent face–voice pairs were randomly presented. The EEG was recorded while participants were instructed to detect sad emotional expressions in either faces or voices, while ignoring all stimuli with another emotional expression and sad stimuli of the task-irrelevant modality. Participants processed congruent sad face–voice pairs more efficiently than sad stimuli paired with an incongruent emotion, and performance was higher in congruent bimodal than in unimodal trials, irrespective of which modality was task-relevant. Event-related potentials (ERPs) to congruent emotional face–voice pairs started to differ from ERPs to incongruent emotional face–voice pairs at 180 ms after stimulus onset: irrespective of which modality was task-relevant, ERPs revealed a more pronounced positivity (180 ms post-stimulus) to emotionally congruent trials compared to emotionally incongruent trials if the angry emotion was presented in the attended modality. A larger negativity to incongruent compared to congruent trials was observed in the time range of 400–550 ms (N400) for all emotions (happy, neutral, angry), irrespective of whether faces or voices were task-relevant. These results suggest an automatic interaction of emotion-related information.
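Congruency effects like the N400 difference described here are computed by averaging trials per condition and comparing mean amplitudes within a fixed time window. The following is a toy sketch of that computation on simulated single-channel epochs (illustrative only, not the authors' analysis; window bounds and amplitudes are arbitrary):

```python
import numpy as np

def window_mean_amplitude(trials, times, t_start, t_end):
    """Mean ERP amplitude: average across trials, then across window samples.

    trials: (n_trials, n_samples) single-channel epochs in microvolts;
    times: (n_samples,) sample times in seconds.
    """
    erp = np.asarray(trials).mean(axis=0)          # trial-averaged ERP
    mask = (times >= t_start) & (times <= t_end)   # e.g. a 400-550 ms window
    return erp[mask].mean()

rng = np.random.default_rng(1)
times = np.linspace(-0.1, 0.8, 451)                # 2 ms sampling
noise = lambda: rng.normal(0.0, 2.0, (30, times.size))
# Simulate a more negative deflection for incongruent trials at 400-550 ms.
n400 = -4.0 * ((times >= 0.4) & (times <= 0.55))
congruent = noise()
incongruent = noise() + n400
effect = (window_mean_amplitude(incongruent, times, 0.4, 0.55)
          - window_mean_amplitude(congruent, times, 0.4, 0.55))
print(effect)  # close to the simulated -4 microvolt effect
```

Averaging over trials and window samples suppresses noise, which is why even a modest per-trial effect becomes reliable at the group level.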
Affiliation(s)
- Julia Föcker
- Biological Psychology and Neuropsychology, University of Hamburg, Germany
- School of Psychology, College of Social Science, University of Lincoln, United Kingdom
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Germany
18. Gao C, Wedell DH, Green JJ, Jia X, Mao X, Guo C, Shinkareva SV. Temporal dynamics of audiovisual affective processing. Biol Psychol 2018; 139:59-72. [DOI: 10.1016/j.biopsycho.2018.10.001]
19. Straube B, Wroblewski A, Jansen A, He Y. The connectivity signature of co-speech gesture integration: The superior temporal sulcus modulates connectivity between areas related to visual gesture and auditory speech processing. Neuroimage 2018; 181:539-549. [PMID: 30025854] [DOI: 10.1016/j.neuroimage.2018.07.037]
Abstract
Humans integrate information communicated by speech and gestures. Functional magnetic resonance imaging (fMRI) studies suggest that the posterior superior temporal sulcus (STS) and adjacent gyri are relevant for multisensory integration. However, a connectivity model representing this essential combinatory process is still missing. Here, we used dynamic causal modeling for fMRI to analyze the effective connectivity pattern between the middle temporal gyrus (MTG), occipital cortex (OC) and STS, associated with auditory verbal, visual gesture-related, and integrative processing, respectively, to unveil the neural mechanisms underlying integration of intrinsically meaningful gestures (e.g., a "thumbs-up" gesture) and corresponding speech. Twenty participants were presented with videos of an actor either performing intrinsically meaningful gestures in the context of German or Russian sentences, or speaking a German sentence without gesture, while performing a content judgment task. The connectivity analyses resulted in a winning model that included bidirectional intrinsic connectivity between all areas. Furthermore, the model included modulations of both connections to the STS (OC→STS; MTG→STS), and non-linear modulatory effects of the STS on bidirectional connections between MTG and OC. Coupling strength in the occipital pathway (OC→STS) correlated with gesture-related advantages in task performance, whereas the temporal pathway (MTG→STS) correlated with performance in the speech-only condition. Coupling between MTG and OC correlated negatively with subsequent memory performance for sentences of the gesture-German condition. Our model provides a first step towards a better understanding of speech-gesture integration at the network level. It corroborates the importance of the STS during audio-visual integration by showing that this region inhibits direct auditory-visual coupling.
Affiliation(s)
- Benjamin Straube
- Translational Neuroimaging Marburg (TNM), Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Germany; Center for Mind, Brain and Behavior - CMBB, Philipps-University Marburg, Germany.
- Adrian Wroblewski
- Translational Neuroimaging Marburg (TNM), Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Germany; Center for Mind, Brain and Behavior - CMBB, Philipps-University Marburg, Germany.
- Andreas Jansen
- Laboratory for Multimodal Neuroimaging (LMN), Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Germany; Center for Mind, Brain and Behavior - CMBB, Philipps-University Marburg, Germany; Core-Facility Brainimaging, Faculty of Medicine, University of Marburg, Germany.
- Yifei He
- Translational Neuroimaging Marburg (TNM), Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Germany; Center for Mind, Brain and Behavior - CMBB, Philipps-University Marburg, Germany.
20. Modality-Independent Coding of Scene Categories in Prefrontal Cortex. J Neurosci 2018; 38:5969-5981. [PMID: 29858483] [DOI: 10.1523/jneurosci.0272-18.2018]
Abstract
Natural environments convey information through multiple sensory modalities, all of which contribute to people's percepts. Although it has been shown that visual or auditory content of scene categories can be decoded from brain activity, it remains unclear how humans represent scene information beyond a specific sensory modality domain. To address this question, we investigated how categories of scene images and sounds are represented in several brain regions. A group of healthy human subjects (both sexes) participated in the present study, where their brain activity was measured with fMRI while viewing images or listening to sounds of different real-world environments. We found that both visual and auditory scene categories can be decoded not only from modality-specific areas, but also from several brain regions in the temporal, parietal, and prefrontal cortex (PFC). Intriguingly, only in the PFC, but not in any other regions, categories of scene images and sounds appear to be represented in similar activation patterns, suggesting that scene representations in PFC are modality-independent. Furthermore, the error patterns of neural decoders indicate that category-specific neural activity patterns in the middle and superior frontal gyri are tightly linked to categorization behavior. Our findings demonstrate that complex scene information is represented at an abstract level in the PFC, regardless of the sensory modality of the stimulus.
SIGNIFICANCE STATEMENT Our experience in daily life includes multiple sensory inputs, such as images, sounds, or scents from the surroundings, which all contribute to our understanding of the environment. Here, for the first time, we investigated where and how in the brain information about the natural environment from multiple senses is merged to form modality-independent representations of scene categories. We show direct decoding of scene categories across sensory modalities from patterns of neural activity in the prefrontal cortex (PFC). We also conclusively tie these neural representations to human categorization behavior by comparing patterns of errors between a neural decoder and behavior. Our findings suggest that PFC is a central hub for integrating sensory information and computing modality-independent representations of scene categories.
21. Stevenson RA, Sheffield SW, Butera IM, Gifford RH, Wallace MT. Multisensory Integration in Cochlear Implant Recipients. Ear Hear 2018; 38:521-538. [PMID: 28399064] [DOI: 10.1097/aud.0000000000000435]
Abstract
Speech perception is inherently a multisensory process involving integration of auditory and visual cues. Multisensory integration in cochlear implant (CI) recipients is a unique circumstance in that the integration occurs after auditory deprivation and the provision of hearing via the CI. Despite the clear importance of multisensory cues for perception in general, and for speech intelligibility specifically, the topic of multisensory perceptual benefits in CI users has only recently begun to emerge as an area of inquiry. We review the research that has been conducted on multisensory integration in CI users to date and suggest a number of areas needing further research. The overall pattern of results indicates that many CI recipients show at least some perceptual gain that can be attributed to multisensory integration. The extent of this gain, however, varies based on a number of factors, including age of implantation and the specific task being assessed (e.g., stimulus detection, phoneme perception, word recognition). Although both children and adults with CIs obtain audiovisual benefits for phoneme, word, and sentence stimuli, neither group shows demonstrable gain for suprasegmental feature perception. Additionally, only early-implanted children and the highest-performing adults obtain audiovisual integration benefits similar to individuals with normal hearing. Increasing age of implantation in children is associated with poorer gains resulting from audiovisual integration, suggesting a sensitive period in development for the brain networks that subserve these integrative functions, as well as a role for length of auditory experience. This finding highlights the need for early detection of and intervention for hearing loss, not only in terms of auditory perception, but also in terms of the behavioral and perceptual benefits of audiovisual processing. Importantly, patterns of auditory, visual, and audiovisual responses suggest that underlying integrative processes may be fundamentally different between CI users and typical-hearing listeners. Future research, particularly in low-level processing tasks such as signal detection, will help to further assess mechanisms of multisensory integration for individuals with hearing loss, both with and without CIs.
Affiliation(s)
- Ryan A Stevenson
- Department of Psychology, University of Western Ontario, London, Ontario, Canada; Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada; Walter Reed National Military Medical Center, Audiology and Speech Pathology Center, Bethesda, Maryland; Vanderbilt Brain Institute, Nashville, Tennessee; Vanderbilt Kennedy Center, Nashville, Tennessee; Department of Psychology, Vanderbilt University, Nashville, Tennessee; Department of Psychiatry, Vanderbilt University Medical Center, Nashville, Tennessee; and Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
22. Pehrs C, Zaki J, Schlochtermeier LH, Jacobs AM, Kuchinke L, Koelsch S. The Temporal Pole Top-Down Modulates the Ventral Visual Stream During Social Cognition. Cereb Cortex 2018; 27:777-792. [PMID: 26604273] [DOI: 10.1093/cercor/bhv226]
Abstract
The temporal pole (TP) has been associated with diverse functions of social cognition and emotion processing. Although the underlying mechanism remains elusive, one possibility is that TP acts as domain-general hub integrating socioemotional information. To test this, 26 participants were presented with 60 empathy-evoking film clips during fMRI scanning. The film clips were preceded by a linguistic sad or neutral context and half of the clips were accompanied by sad music. In line with its hypothesized role, TP was involved in the processing of sad context and furthermore tracked participants' empathic concern. To examine the neuromodulatory impact of TP, we applied nonlinear dynamic causal modeling to a multisensory integration network from previous work consisting of superior temporal gyrus (STG), fusiform gyrus (FG), and amygdala, which was extended by an additional node in the TP. Bayesian model comparison revealed a gating of STG and TP on fusiform-amygdalar coupling and an increase of TP to FG connectivity during the integration of contextual information. Moreover, these backward projections were strengthened by emotional music. The findings indicate that during social cognition, TP integrates information from different modalities and top-down modulates lower-level perceptual areas in the ventral visual stream as a function of integration demands.
Affiliation(s)
- Corinna Pehrs
- Cluster of Excellence "Languages of Emotion", 14195 Berlin, Germany; Department of Education and Psychology, Freie Universität Berlin, 14195 Berlin, Germany; Dahlem Institute for Neuroimaging of Emotion, 14195 Berlin, Germany
- Jamil Zaki
- Department of Psychology, Stanford University, Stanford, CA 94305, USA
- Lorna H Schlochtermeier
- Cluster of Excellence "Languages of Emotion", 14195 Berlin, Germany; Department of Education and Psychology, Freie Universität Berlin, 14195 Berlin, Germany; Dahlem Institute for Neuroimaging of Emotion, 14195 Berlin, Germany
- Arthur M Jacobs
- Cluster of Excellence "Languages of Emotion", 14195 Berlin, Germany; Department of Education and Psychology, Freie Universität Berlin, 14195 Berlin, Germany; Dahlem Institute for Neuroimaging of Emotion, 14195 Berlin, Germany
- Lars Kuchinke
- Cluster of Excellence "Languages of Emotion", 14195 Berlin, Germany; Department of Education and Psychology, Freie Universität Berlin, 14195 Berlin, Germany; Dahlem Institute for Neuroimaging of Emotion, 14195 Berlin, Germany; Department of Psychology, Experimental Psychology and Methods, Ruhr-Universität Bochum, 44801 Bochum, Germany
- Stefan Koelsch
- Department of Biological and Medical Psychology, University of Bergen, 5009 Bergen, Norway
23. Ran G, Cao X, Chen X. Emotional prediction: An ALE meta-analysis and MACM analysis. Conscious Cogn 2017; 58:158-169. [PMID: 29128283] [DOI: 10.1016/j.concog.2017.10.019]
Abstract
The prediction of emotion has been explored in a variety of functional brain imaging and neurophysiological studies. However, an overall picture of the areas involved in this process is still lacking. Here, we quantitatively summarized the published functional magnetic resonance imaging (fMRI) literature on emotional prediction using activation likelihood estimation (ALE). Furthermore, the current study employed meta-analytic connectivity modeling (MACM) to map the meta-analytic coactivation maps of regions of interest (ROIs). Our ALE analysis revealed significant convergent activations in several brain areas involved in emotional prediction, including the dorsolateral prefrontal cortex (DLPFC), ventrolateral prefrontal cortex (VLPFC), orbitofrontal cortex (OFC) and medial prefrontal cortex (MPFC). For the MACM analysis, we identified the DLPFC, VLPFC and OFC as the core areas in the coactivation network of emotional prediction. Overall, the results of the ALE and MACM analyses indicate that prefrontal brain areas play critical roles in emotional prediction.
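At its core, ALE smooths each reported activation focus with a Gaussian, takes the per-experiment maximum to form a modeled-activation (MA) map, and combines experiments by probabilistic union. The sketch below illustrates only this core step on a toy grid; the grid size, sigma, and foci are arbitrary, and this is not the full ALE implementation (which uses sample-size-dependent kernels and permutation-based significance testing):

```python
import numpy as np

def modeled_activation(shape, foci, sigma):
    """Per-experiment MA map: voxel-wise max over Gaussian-smoothed foci."""
    grid = np.indices(shape).reshape(len(shape), -1).T  # voxel coordinates
    ma = np.zeros(int(np.prod(shape)))
    for focus in foci:
        d2 = ((grid - np.asarray(focus)) ** 2).sum(axis=1)
        ma = np.maximum(ma, np.exp(-d2 / (2.0 * sigma ** 2)))
    return ma.reshape(shape)

def ale(ma_maps):
    """ALE score: probabilistic union, 1 - prod(1 - MA_i) over experiments."""
    out = np.ones_like(ma_maps[0])
    for ma in ma_maps:
        out *= 1.0 - ma
    return 1.0 - out

shape = (20, 20, 20)
exp1 = modeled_activation(shape, [(10, 10, 10)], sigma=2.0)
exp2 = modeled_activation(shape, [(10, 10, 9), (3, 3, 3)], sigma=2.0)
scores = ale([exp1, exp2])
# Voxels where experiments converge score higher than isolated foci.
print(scores[11, 10, 10] > scores[4, 3, 3])  # prints True
```

The union formula rewards spatial convergence across experiments, which is exactly the evidence a coordinate-based meta-analysis is after.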
Affiliation(s)
- Guangming Ran
- Department of Psychology, Institute of Education, China West Normal University, Nanchong 637002, China.
- Xiaojun Cao
- Department of Psychology, Institute of Education, China West Normal University, Nanchong 637002, China.
- Xu Chen
- Faculty of Psychology, Southwest University, Chongqing 400715, China.
24. Brion M, D'Hondt F, Lannoy S, Pitel AL, Davidoff DA, Maurage P. Crossmodal processing of emotions in alcohol-dependence and Korsakoff syndrome. Cogn Neuropsychiatry 2017; 22:436-451. [PMID: 28885888] [DOI: 10.1080/13546805.2017.1373639]
Abstract
INTRODUCTION Decoding emotional information from faces and voices is crucial for efficient interpersonal communication. Emotional decoding deficits have been found in alcohol-dependence (ALC), particularly in crossmodal situations (with simultaneous stimulation from different modalities), but are still underexplored in Korsakoff syndrome (KS). The aim of this study was to determine whether the continuity hypothesis, postulating a gradual worsening of cognitive and brain impairments from ALC to KS, is valid for emotional crossmodal processing. METHODS Sixteen KS patients, 17 ALC patients and 19 matched healthy controls (CP) had to detect the emotion (anger or happiness) displayed by auditory, visual or crossmodal auditory-visual stimuli. Crossmodal stimuli were either emotionally congruent (leading to a facilitation effect, i.e. enhanced performance for the crossmodal condition compared to unimodal ones) or incongruent (leading to an interference effect, i.e. decreased performance for the crossmodal condition due to discordant information across modalities). Reaction times and accuracy were recorded. RESULTS Crossmodal integration of congruent information was dampened only in ALC, while both ALC and KS demonstrated, compared to CP, decreased performance for decoding emotional facial expressions in the incongruent condition. CONCLUSIONS Crossmodal integration appears impaired in ALC but preserved in KS. Both alcohol-related disorders present an increased interference effect. These results highlight the value of more ecological designs, using crossmodal stimuli, to explore emotional decoding in alcohol-related disorders. They also suggest that the continuum hypothesis cannot be generalised to emotional decoding abilities.
Affiliation(s)
- Mélanie Brion
- Laboratory for Experimental Psychopathology, Psychological Sciences Research Institute, Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Fabien D'Hondt
- Univ. Lille, CNRS, UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, Lille, France; CHU Lille, Clinique de Psychiatrie, CURE, Lille, France
- Séverine Lannoy
- Laboratory for Experimental Psychopathology, Psychological Sciences Research Institute, Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Anne-Lise Pitel
- INSERM, École Pratique des Hautes Études, Université de Caen-Basse Normandie, Unité U1077, GIP Cyceron, CHU Caen, Caen, France
- Donald A Davidoff
- Harvard Medical School, Boston, MA, USA; Department of Neuropsychology, McLean Hospital, Belmont, USA
- Pierre Maurage
- Laboratory for Experimental Psychopathology, Psychological Sciences Research Institute, Université catholique de Louvain, Louvain-la-Neuve, Belgium
25
Yaple ZA, Vakhrushev R. Investigating Emotional Top Down Modulation of Ambiguous Faces by Single Pulse TMS on Early Visual Cortices. Front Neurosci 2016; 10:305. [PMID: 27445674 PMCID: PMC4928532 DOI: 10.3389/fnins.2016.00305]
Abstract
Top-down processing is a mechanism by which memory, context and expectation shape the perception of stimuli. In this study we investigated how emotional content, induced by music mood, influences the perception of happy and sad emoticons. Using single-pulse TMS we stimulated the right occipital face area (rOFA), primary visual cortex (V1) and the vertex while subjects performed a face-detection task and listened to happy and sad music. At baseline, incongruent audio-visual pairings decreased performance, demonstrating that emotional context influences the perception of ambiguous faces. Face-identification performance decreased during rOFA stimulation regardless of emotional content, whereas no effects were found for vertex (Cz) or V1 stimulation. These results suggest that the rOFA is important for processing faces regardless of emotion, and that early visual cortex activity may not integrate emotional auditory information with visual information during top-down emotional modulation of faces.
Affiliation(s)
- Zachary A Yaple
- Centre for Cognition and Decision Making, National Research University Higher School of Economics, Moscow, Russia; Department of Experimental Psychology, University of Groningen, Groningen, Netherlands
- Roman Vakhrushev
- Department of Psychology, National Research University Higher School of Economics, Moscow, Russia
26
Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention. Sci Rep 2016; 6:18914. [PMID: 26759193 PMCID: PMC4725371 DOI: 10.1038/srep18914]
Abstract
An audiovisual object may contain multiple semantic features, such as the gender and emotional features of a speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object; meanwhile, the brain integrates semantic information from the visual and auditory modalities. How these two brain functions interact, however, remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, subjects were presented with visual-only, auditory-only, or audiovisual dynamic facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distributed set of areas, including heteromodal areas and brain areas encoding the attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flow from heteromodal areas to the brain areas encoding those features.
27
The P300 component wave reveals differences in subclinical anxious-depressive states during bimodal oddball tasks: An effect of stimulus congruence. Clin Neurophysiol 2015; 126:2108-23. [DOI: 10.1016/j.clinph.2015.01.012]
28
Järvinen A, Ng R, Crivelli D, Arnold AJ, Woo-VonHoogenstyn N, Bellugi U. Relations between social-perceptual ability in multi- and unisensory contexts, autonomic reactivity, and social functioning in individuals with Williams syndrome. Neuropsychologia 2015; 73:127-40. [PMID: 26002754 DOI: 10.1016/j.neuropsychologia.2015.04.035]
Abstract
Compromised social-perceptual ability has been proposed to contribute to social dysfunction in neurodevelopmental disorders. While such impairments have been identified in Williams syndrome (WS), little is known about emotion processing in auditory and multisensory contexts. Employing a multidimensional approach, individuals with WS and typically developing (TD) individuals were tested on emotion identification across fearful, happy, and angry multisensory and unisensory face and voice stimuli. Autonomic responses were monitored in response to unimodal emotion, and the WS group was administered an inventory of social functioning. Behaviorally, individuals with WS, relative to TD, demonstrated impaired processing of unimodal vocalizations and emotionally incongruent audiovisual compounds, reflecting a generalized deficit in social-auditory processing in WS. The TD group outperformed their counterparts with WS in identifying negative (fearful and angry) emotions, with similar between-group performance for happy stimuli. Mirroring this pattern, electrodermal activity (EDA) responses to the emotional content of the stimuli indicated that whereas those with WS showed the highest arousal to happy and the lowest arousal to fearful stimuli, the TD participants demonstrated the opposite pattern. In WS, more typical social functioning was related to higher autonomic arousal to facial expressions. Implications for the underlying neural architecture and emotional functions are discussed.
Affiliation(s)
- Anna Järvinen
- Laboratory for Cognitive Neuroscience, The Salk Institute for Biological Studies, La Jolla, CA, USA
- Rowena Ng
- Laboratory for Cognitive Neuroscience, The Salk Institute for Biological Studies, La Jolla, CA, USA; Institute of Child Development, University of Minnesota, Twin Cities, MN, USA
- Davide Crivelli
- Department of Psychology, Catholic University of the Sacred Heart, Milan, Italy
- Andrew J Arnold
- Laboratory for Cognitive Neuroscience, The Salk Institute for Biological Studies, La Jolla, CA, USA
- Ursula Bellugi
- Laboratory for Cognitive Neuroscience, The Salk Institute for Biological Studies, La Jolla, CA, USA
29
High-frequency electroencephalographic activity in left temporal area is associated with pleasant emotion induced by video clips. Comput Intell Neurosci 2015; 2015:762769. [PMID: 25883640 PMCID: PMC4391494 DOI: 10.1155/2015/762769]
Abstract
Recent findings suggest that specific neural correlates for the key elements of basic emotions do exist and can be identified with neuroimaging techniques. In this paper, electroencephalography (EEG) is used to explore markers of video-induced emotions. The problem is approached from a classifier perspective: the features that perform best in classifying a person's valence and arousal while watching video clips with audiovisual emotional content are sought within a large feature set constructed from the EEG spectral powers of single channels as well as power differences between specific channel pairs. Feature selection is carried out using a sequential forward floating search method, separately for the classification of valence and arousal, both derived from the emotional keyword the subject chose after seeing each clip. The proposed classifier-based approach reveals a clear association between increased high-frequency (15–32 Hz) activity in the left temporal area and clips described as “pleasant” on the valence scale and “medium arousal” on the arousal scale. These clips represent the emotional keywords amusement and joy/happiness. The finding suggests the occurrence of a specific neural activation during video-induced pleasant emotion, and the possibility of detecting it from the left temporal area using EEG.
30
Luherne-du Boullay V, Plaza M, Perrault A, Capelle L, Chaby L. Atypical crossmodal emotional integration in patients with gliomas. Brain Cogn 2014; 92C:92-100. [DOI: 10.1016/j.bandc.2014.10.003]
31
Kogler L, Gur RC, Derntl B. Sex differences in cognitive regulation of psychosocial achievement stress: brain and behavior. Hum Brain Mapp 2014; 36:1028-42. [PMID: 25376429 DOI: 10.1002/hbm.22683]
Abstract
Although cognitive regulation of emotion has been extensively examined, studies assessing cognitive regulation in stressful achievement situations are lacking. This study used functional magnetic resonance imaging in 23 females and 20 males to investigate cognitive downregulation of negative, stressful sensations during a frequently used psychosocial stress task. Additionally, subjective responses, cognitive regulation strategies, salivary cortisol, and skin conductance responses were assessed. Subjective responses supported the experimental manipulation, showing higher anger and negative-affect ratings after stress regulation than after mere exposure to stress. On the neural level, the right middle frontal gyrus (MFG) and right superior temporal gyrus (STG) were more strongly activated during regulation than nonregulation, whereas the hippocampus was less activated during regulation. Sex differences were evident: after regulation, females reported higher subjective stress ratings than males, and these ratings were associated with right hippocampal activation. In the nonregulation block, females showed greater activation of the left amygdala and the right STG during stress than males, while males recruited the putamen more robustly in this condition. Thus, cognitive regulation of stressful achievement situations seems to induce additional stress, to recruit regions implicated in attention integration and working memory, and to deactivate memory retrieval. Stress itself is associated with greater activation of limbic and attention areas in females than in males. Additionally, activation of the memory system during cognitive regulation of stress is associated with greater perceived stress in females. Sex differences in cognitive regulation strategies merit further investigation, which can guide sex-sensitive interventions for stress-associated disorders.
Affiliation(s)
- Lydia Kogler
- Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen University, Aachen, Germany; Translational Brain Medicine, Jülich-Aachen Research Alliance, Jülich/Aachen, Germany
32
Cao H, Cooper DG, Keutmann MK, Gur RC, Nenkova A, Verma R. CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset. IEEE Trans Affect Comput 2014; 5:377-390. [PMID: 25653738 PMCID: PMC4313618 DOI: 10.1109/taffc.2014.2336244]
Abstract
People convey their emotional state in their face and voice. We present an audio-visual data set uniquely suited to the study of multi-modal emotion expression and perception. The data set consists of facial and vocal emotional expressions in sentences spoken in a range of basic emotional states (happy, sad, anger, fear, disgust, and neutral). 7,442 clips of 91 actors with diverse ethnic backgrounds were rated by multiple raters in three modalities: audio, visual, and audio-visual. Categorical emotion labels and real-valued intensity values for the perceived emotion were collected through crowd-sourcing from 2,443 raters. Human recognition of the intended emotion is 40.9% for audio-only, 58.2% for visual-only, and 63.6% for audio-visual data. Recognition rates are highest for neutral, followed by happy, anger, disgust, fear, and sad. Average intensity levels of emotion are rated highest for visual-only perception. Accurate recognition of disgust and fear requires simultaneous audio-visual cues, while anger and happiness can be well recognized from a single modality. The large data set we introduce can be used to probe other questions concerning the audio-visual perception of emotion.
Affiliation(s)
- Houwei Cao
- Radiology Department at the University of Pennsylvania, 3600 Market Street, Suite 380, Philadelphia, PA 19104
- David G Cooper
- Math and Computer Science Department at Ursinus College, 601 E. Main Street, Collegeville, PA 19426
- Michael K Keutmann
- Department of Psychology at the University of Illinois at Chicago, 1007 West Harrison Street, M/C 285, Chicago, IL 60607
- Ruben C Gur
- Neuropsychiatry section of the Psychiatry Department of the University of Pennsylvania, 3400 Spruce Street, 10th Floor, Gates Bldg., and the Philadelphia Veterans Administration Medical Center, Philadelphia, PA 19104
- Ani Nenkova
- Department of Computer and Information Science, University of Pennsylvania, 3330 Walnut Street, Philadelphia, PA 19104
- Ragini Verma
- Radiology Department at the University of Pennsylvania, 3600 Market Street, Suite 380, Philadelphia, PA 19104
33
Müller VI, Cieslik EC, Kellermann TS, Eickhoff SB. Crossmodal emotional integration in major depression. Soc Cogn Affect Neurosci 2014; 9:839-48. [PMID: 23576809 PMCID: PMC4040101 DOI: 10.1093/scan/nst057]
Abstract
Major depression is accompanied by affective and social-cognitive deficits. Most research on affective deficits in depression has, however, focused only on unimodal emotion processing, whereas in daily life emotional perception often depends heavily on the evaluation of multimodal inputs. We therefore investigated emotional audiovisual integration in patients with depression and healthy subjects. Subjects rated the expression of happy, neutral and fearful faces while concurrently being exposed to emotional or neutral sounds. Results demonstrated group differences in the left inferior frontal gyrus and inferior parietal cortex when comparing incongruent to congruent happy facial conditions, mainly due to a failure of patients to deactivate these regions in response to congruent stimulus pairs. Moreover, healthy subjects decreased activation in the right posterior superior temporal gyrus/sulcus and midcingulate cortex when an emotional stimulus was paired with a neutral rather than another emotional one. In contrast, patients did not show such deactivation when neutral stimuli were integrated. These results demonstrate an aberrant neural response during audiovisual processing in depression, indicated by a failure to deactivate regions involved in inhibition and salience processing when congruent and neutral audiovisual stimulus pairs are integrated, suggesting a possible mechanism for the constant arousal and readiness to act in this patient group.
Affiliation(s)
- Veronika I Müller
- Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, Heinrich Heine University, D-40225 Düsseldorf, Germany; Department of Neuroscience and Medicine, INM-1, Research Center Jülich, D-52428 Jülich, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, D-52074 Aachen, Germany; JARA-Brain, Translational Brain Medicine, Jülich/Aachen, Germany
- Edna C Cieslik
- Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, Heinrich Heine University, D-40225 Düsseldorf, Germany; Department of Neuroscience and Medicine, INM-1, Research Center Jülich, D-52428 Jülich, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, D-52074 Aachen, Germany; JARA-Brain, Translational Brain Medicine, Jülich/Aachen, Germany
- Tanja S Kellermann
- Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, Heinrich Heine University, D-40225 Düsseldorf, Germany; Department of Neuroscience and Medicine, INM-1, Research Center Jülich, D-52428 Jülich, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, D-52074 Aachen, Germany; JARA-Brain, Translational Brain Medicine, Jülich/Aachen, Germany
- Simon B Eickhoff
- Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, Heinrich Heine University, D-40225 Düsseldorf, Germany; Department of Neuroscience and Medicine, INM-1, Research Center Jülich, D-52428 Jülich, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, D-52074 Aachen, Germany; JARA-Brain, Translational Brain Medicine, Jülich/Aachen, Germany
34
Milesi V, Cekic S, Péron J, Frühholz S, Cristinzio C, Seeck M, Grandjean D. Multimodal emotion perception after anterior temporal lobectomy (ATL). Front Hum Neurosci 2014; 8:275. [PMID: 24839437 PMCID: PMC4017134 DOI: 10.3389/fnhum.2014.00275]
Abstract
In the context of emotion information processing, several studies have demonstrated the involvement of the amygdala in emotion perception, for unimodal and multimodal stimuli. However, it seems that not only the amygdala, but several regions around it, may also play a major role in multimodal emotional integration. In order to investigate the contribution of these regions to multimodal emotion perception, five patients who had undergone unilateral anterior temporal lobe resection were exposed to both unimodal (vocal or visual) and audiovisual emotional and neutral stimuli. In a classic paradigm, participants were asked to rate the emotional intensity of angry, fearful, joyful, and neutral stimuli on visual analog scales. Compared with matched controls, patients exhibited impaired categorization of joyful expressions, whether the stimuli were auditory, visual, or audiovisual. Patients confused joyful faces with neutral faces, and joyful prosody with surprise. In the case of fear, unlike matched controls, patients provided lower intensity ratings for visual stimuli than for vocal and audiovisual ones. Fearful faces were frequently confused with surprised ones. When we controlled for lesion size, we no longer observed any overall difference between patients and controls in their ratings of emotional intensity on the target scales. Lesion size had the greatest effect on intensity perceptions and accuracy in the visual modality, irrespective of the type of emotion. These new findings suggest that a damaged amygdala, or a disrupted bundle between the amygdala and the ventral part of the occipital lobe, has a greater impact on emotion perception in the visual modality than it does in either the vocal or audiovisual one. We can surmise that patients are able to use the auditory information contained in multimodal stimuli to compensate for difficulty processing visually conveyed emotion.
Affiliation(s)
- Valérie Milesi
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Sezen Cekic
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Julie Péron
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Sascha Frühholz
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Chiara Cristinzio
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland; Laboratory for Neurology and Imaging of Cognition, Department of Neurology and Department of Neuroscience, Medical School, University of Geneva, Geneva, Switzerland
- Margitta Seeck
- Epilepsy Unit, Department of Neurology, Geneva University Hospital, Geneva, Switzerland
- Didier Grandjean
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
35
Aging and response conflict solution: behavioural and functional connectivity changes. Brain Struct Funct 2014; 220:1739-57. [PMID: 24718622 DOI: 10.1007/s00429-014-0758-0]
Abstract
Healthy aging has been found to be associated with less efficient response-conflict solution, but the underlying cognitive and neural mechanisms have remained elusive. In a two-experiment study, we first examined the behavioural consequences of this putative age-related decline for conflicts induced by spatial stimulus-response incompatibility. We then used resting-state functional magnetic resonance imaging data from a large, independent sample of adults (n = 399; 18-85 years) to investigate age differences in functional connectivity between the nodes of a network previously found to be associated with incompatibility-induced response conflicts in the very same paradigm. As expected, overcoming interference from conflicting response tendencies took longer in older adults, even after accounting for potential mediator variables (general response speed and accuracy, motor speed, visuomotor coordination ability, and cognitive flexibility). Experiment 2 revealed selective age-related decreases in functional connectivity between the bilateral anterior insula, pre-supplementary motor area, and right dorsolateral prefrontal cortex. Importantly, these age effects persisted after controlling for regional grey-matter atrophy assessed by voxel-based morphometry. Meta-analytic functional profiling using the BrainMap database showed these age-sensitive nodes to be more strongly linked to highly abstract cognition than the remaining network nodes, which were more strongly linked to action-related processing. These findings indicate age-related changes in interregional coupling among task-relevant network nodes that are not specifically associated with conflict resolution per se. Rather, our behavioural and neural data jointly suggest that healthy aging is associated with difficulties in properly activating non-dominant but relevant task schemata necessary to exert efficient cognitive control over action.
36
Diwadkar VA, Bakshi N, Gupta G, Pruitt P, White R, Eickhoff SB. Dysfunction and Dysconnection in Cortical-Striatal Networks during Sustained Attention: Genetic Risk for Schizophrenia or Bipolar Disorder and its Impact on Brain Network Function. Front Psychiatry 2014; 5:50. [PMID: 24847286 PMCID: PMC4023040 DOI: 10.3389/fpsyt.2014.00050]
Abstract
Abnormalities in the brain's attention network may represent early identifiable neurobiological impairments in individuals at increased risk for schizophrenia or bipolar disorder. Here, we provide evidence of dysfunctional regional and network function in adolescents at higher genetic risk for schizophrenia or bipolar disorder [henceforth, higher risk (HGR)]. During fMRI, participants engaged in a sustained attention task with variable demands. The task alternated between attention (120 s), visual control (passive viewing; 120 s), and rest (20 s) epochs. Low- and high-demand attention conditions were created using the rapid presentation of two- or three-digit numbers, and subjects were required to detect repeated presentations of numbers. We demonstrate that the recruitment of cortical and striatal regions is disordered in HGR: relative to typical controls (TC), HGR showed lower recruitment of the dorsal prefrontal cortex but higher recruitment of the superior parietal cortex. This imbalance was more dramatic in the basal ganglia, where a group-by-task-demand interaction was observed, such that increased attention demand led to increased engagement in TC but disengagement in HGR. These activation analyses were complemented by network analyses using dynamic causal modeling. Competing model architectures were assessed across a network of cortical-striatal regions and distinguished at a second level using random-effects Bayesian model selection. In the winning architecture, HGR were characterized by significant reductions in coupling across both frontal-striatal and frontal-parietal pathways. The effective connectivity analyses indicate emergent network dysconnection, consistent with findings in patients with schizophrenia. Emergent patterns of regional dysfunction and dysconnection in cortical-striatal pathways may provide functional biological signatures in the adolescent risk state for psychiatric illness.
Affiliation(s)
- Vaibhav A Diwadkar
- Department of Psychiatry and Behavioral Neurosciences, Wayne State University, Detroit, MI, USA
- Neil Bakshi
- Department of Psychiatry and Behavioral Neurosciences, Wayne State University, Detroit, MI, USA
- Gita Gupta
- Department of Psychiatry and Behavioral Neurosciences, Wayne State University, Detroit, MI, USA
- Patrick Pruitt
- Department of Psychiatry and Behavioral Neurosciences, Wayne State University, Detroit, MI, USA
- Richard White
- Department of Psychiatry and Behavioral Neurosciences, Wayne State University, Detroit, MI, USA
- Simon B Eickhoff
- Institute of Clinical Neuroscience and Medical Psychology, Heinrich-Heine University Düsseldorf, Düsseldorf, Germany; Institute of Neuroscience and Medicine (INM-1), Research Center Jülich, Jülich, Germany
Collapse
37
Freiherr J, Lundström JN, Habel U, Reetz K. Multisensory integration mechanisms during aging. Front Hum Neurosci 2013; 7:863. [PMID: 24379773 PMCID: PMC3861780 DOI: 10.3389/fnhum.2013.00863] [Citation(s) in RCA: 97] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2013] [Accepted: 11/26/2013] [Indexed: 11/25/2022] Open
Abstract
The rapid demographic shift occurring in our society implies that understanding healthy aging and age-related diseases is one of our major future challenges. Sensory impairments have an enormous impact on our lives and are closely linked to cognitive functioning. Because sensory perception is inherently complex, we are commonly presented with complex multisensory stimulation, and the brain integrates the information from the individual sensory channels into a unique and holistic percept. The cerebral processes involved are essential for our perception of sensory stimuli and become especially important during the perception of emotional content. Despite ongoing deterioration of the individual sensory systems during aging, there is evidence for an increase in, or maintenance of, multisensory integration processing in aging individuals. In this comprehensive literature review on multisensory integration, we aim to highlight basic mechanisms and potential compensatory strategies the human brain utilizes to maintain multisensory integration capabilities during healthy aging, thereby facilitating a broader understanding of age-related pathological conditions. A further goal is to identify where additional research is needed.
Affiliation(s)
- Jessica Freiherr
- Diagnostic and Interventional Neuroradiology, RWTH Aachen University, Aachen, Germany
- Johan N Lundström
- Department of Clinical Neuroscience, Karolinska Institute, Stockholm, Sweden; Monell Chemical Senses Center, Philadelphia, PA, USA; Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- Ute Habel
- Department of Psychiatry, Psychotherapy, and Psychosomatics, RWTH Aachen University, Aachen, Germany; JARA BRAIN - Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Kathrin Reetz
- JARA BRAIN - Translational Brain Medicine, RWTH Aachen University, Aachen, Germany; Department of Neurology, RWTH Aachen University, Aachen, Germany; Institute of Neuroscience and Medicine (INM-4), Research Center Jülich, Jülich, Germany
38
Pehrs C, Deserno L, Bakels JH, Schlochtermeier LH, Kappelhoff H, Jacobs AM, Fritz TH, Koelsch S, Kuchinke L. How music alters a kiss: superior temporal gyrus controls fusiform-amygdalar effective connectivity. Soc Cogn Affect Neurosci 2013; 9:1770-8. [PMID: 24298171 DOI: 10.1093/scan/nst169] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
While watching movies, the brain integrates the visual information and the musical soundtrack into a coherent percept. Multisensory integration can lead to emotion elicitation, on which soundtrack valence may have a modulatory impact. Here, dynamic kissing scenes from romantic comedies were presented to 22 participants (13 female) during functional magnetic resonance imaging scanning. The kissing scenes were accompanied by happy music, sad music, or no music. Evidence from cross-modal studies motivated a predefined three-region network for multisensory integration of emotion, consisting of the fusiform gyrus (FG), amygdala (AMY), and anterior superior temporal gyrus (aSTG). The interactions in this network were investigated using dynamic causal models of effective connectivity. This revealed bilinear modulations by happy and sad music, with suppression effects on the connectivity from FG and AMY to aSTG. Non-linear dynamic causal modeling showed a suppressive gating effect of the aSTG on fusiform-amygdalar connectivity. In conclusion, fusiform-to-amygdala coupling strength is modulated via feedback through the aSTG as a region for multisensory integration of emotional material. This mechanism was emotion-specific and more pronounced for sad music. Soundtrack valence may therefore modulate emotion elicitation in movies by differentially changing the preprocessed visual information conveyed to the amygdala.
Affiliation(s)
- Corinna Pehrs, Lorenz Deserno, Jan-Hendrik Bakels, Lorna H Schlochtermeier, Hermann Kappelhoff, Arthur M Jacobs, Thomas Hans Fritz, Stefan Koelsch, Lars Kuchinke
- Affiliations listed jointly for all authors: Cluster of Excellence 'Languages of Emotion', Freie Universität Berlin, Habelschwerdter Allee 45, 14195 Berlin, Germany; Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Dahlem Institute for Neuroimaging of Emotion, Freie Universität Berlin, Berlin, Germany; Department of Psychology, Stanford University, Stanford, CA 94305, USA; Department of Psychiatry and Psychotherapy, Campus Charité Mitte, Charité-Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1a, 04103 Leipzig, Germany; Department of Philosophy and Humanities, Freie Universität Berlin, Berlin, Germany; Department of Psychology, Princeton University, Princeton, NJ 08540, USA; Department of Nuclear Medicine, University of Leipzig, Liebigstrasse 18, 04103 Leipzig, Germany; Institute for Psychoacoustics and Electronic Music (IPEM), Blandijnberg 2, B-9000 Ghent, Belgium; Department of Psychology, Experimental Psychology and Methods, Ruhr-Universität Bochum, Universitätsstraße 150, 44801 Bochum, Germany
39
Kohn N, Eickhoff SB, Scheller M, Laird AR, Fox PT, Habel U. Neural network of cognitive emotion regulation--an ALE meta-analysis and MACM analysis. Neuroimage 2013; 87:345-55. [PMID: 24220041 DOI: 10.1016/j.neuroimage.2013.11.001] [Citation(s) in RCA: 621] [Impact Index Per Article: 56.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2013] [Revised: 10/31/2013] [Accepted: 11/01/2013] [Indexed: 01/17/2023] Open
Abstract
Cognitive regulation of emotions is a fundamental prerequisite for intact social functioning, which affects both well-being and psychopathology. The neural underpinnings of this process have been studied intensively in recent years, without, however, reaching a general consensus. Here we quantitatively summarize the published fMRI and PET literature on cognitive emotion regulation (23 studies/479 subjects) using activation likelihood estimation. In addition, we assessed the particular functional contribution of the identified regions and their interactions using quantitative functional inference and meta-analytic connectivity modeling, respectively. In doing so, we developed a model of the core brain network involved in the regulation of emotional reactivity. According to this model, the superior temporal gyrus, angular gyrus, and (pre-)supplementary motor area should be involved in the execution of regulation initiated by frontal areas. The dorsolateral prefrontal cortex may be related to the regulation of cognitive processes such as attention, while the ventrolateral prefrontal cortex may not reflect the regulatory process per se, but rather signal salience and therefore the need to regulate. We also identified a cluster in the anterior middle cingulate cortex, a region that is anatomically and functionally in an ideal position to influence behavior and the subcortical structures related to affect generation. Hence this area may play a central, integrative role in emotion regulation. By focusing on regions commonly active across multiple studies, this proposed model should provide important a priori information for assessing dysregulated emotion regulation in psychiatric disorders.
Affiliation(s)
- N Kohn
- Department of Psychiatry, Psychotherapy and Psychosomatic Medicine, RWTH Aachen University, Aachen, Germany; JARA Brain, Translational Brain Medicine, Jülich/Aachen, Germany
- S B Eickhoff
- Institute of Neuroscience and Medicine (INM-1), Research Center Jülich, Jülich, Germany; Institute for Clinical Neuroscience and Medical Psychology, Heinrich-Heine University, Düsseldorf, Germany
- M Scheller
- Department of Psychiatry, Psychotherapy and Psychosomatic Medicine, RWTH Aachen University, Aachen, Germany; JARA Brain, Translational Brain Medicine, Jülich/Aachen, Germany
- A R Laird
- Department of Physics, Florida International University, Miami, FL, USA
- P T Fox
- Research Imaging Institute, University of Texas Health Science Center, San Antonio, TX, USA; Audie L. Murphy South Texas Veterans Administration Medical Center, San Antonio, TX, USA
- U Habel
- Department of Psychiatry, Psychotherapy and Psychosomatic Medicine, RWTH Aachen University, Aachen, Germany; JARA Brain, Translational Brain Medicine, Jülich/Aachen, Germany
40
Gerdes ABM, Wieser MJ, Bublatzky F, Kusay A, Plichta MM, Alpers GW. Emotional sounds modulate early neural processing of emotional pictures. Front Psychol 2013; 4:741. [PMID: 24151476 PMCID: PMC3799293 DOI: 10.3389/fpsyg.2013.00741] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2013] [Accepted: 09/24/2013] [Indexed: 11/13/2022] Open
Abstract
In our natural environment, emotional information is conveyed by converging visual and auditory input, so multimodal integration is of utmost importance. In the laboratory, however, emotion researchers have mostly examined unimodal stimuli. The few existing studies on multimodal emotion processing have focused on human communication, such as the integration of facial and vocal expressions. Extending the concept of multimodality, the current study examines how the neural processing of emotional pictures is influenced by simultaneously presented sounds. Twenty pleasant, unpleasant, and neutral pictures of complex scenes were presented to 22 healthy participants. On the critical trials, these pictures were paired with pleasant, unpleasant, and neutral sounds. Sound presentation started 500 ms before picture onset, and each stimulus presentation lasted 2 s. EEG was recorded from 64 channels, and ERP analyses focused on the picture onset. In addition, valence and arousal ratings were obtained. Previous findings on the neural processing of emotional pictures were replicated: unpleasant compared to neutral pictures were associated with an increased parietal P200 and a more pronounced centroparietal late positive potential (LPP), independent of the accompanying sound valence. For audiovisual stimulation, increased parietal P100 and P200 were found in response to all pictures accompanied by unpleasant or pleasant sounds compared to pictures with neutral sounds. Most importantly, incongruent audiovisual pairs of unpleasant pictures and pleasant sounds enhanced the parietal P100 and P200 compared to pairings with congruent sounds. Taken together, the present findings indicate that emotional sounds modulate early stages of visual processing and may therefore provide an avenue by which multimodal experience enhances perception.
Affiliation(s)
- Antje B M Gerdes
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
41
Doi H, Shinohara K. Unconscious presentation of fearful face modulates electrophysiological responses to emotional prosody. Cereb Cortex 2013; 25:817-32. [PMID: 24108801 DOI: 10.1093/cercor/bht282] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Cross-modal integration of visual and auditory emotional cues is thought to be advantageous for the accurate recognition of emotional signals. However, the neural locus of cross-modal integration between affective prosody and unconsciously presented facial expressions in the neurologically intact population remains elusive. The present study examined the influence of unconsciously presented facial expressions on event-related potentials (ERPs) during emotional prosody recognition. In the experiment, fearful, happy, and neutral faces were presented without awareness, using continuous flash suppression, simultaneously with voices containing laughter or a fearful shout. Conventional peak analysis revealed that the ERPs were modulated interactively by emotional prosody and facial expression at multiple latency ranges, indicating that audio-visual integration of emotional signals takes place automatically, without conscious awareness. In addition, the global field power during the late-latency range was larger for the shout than for laughter only when a fearful face was presented unconsciously. The neural locus of this effect was localized to the left posterior fusiform gyrus, supporting the view that this cortical region, traditionally considered a unisensory visual area, functions as a locus of audiovisual integration of emotional signals.
Affiliation(s)
- Hirokazu Doi
- Graduate School of Biomedical Sciences, Nagasaki University, Nagasaki City, Nagasaki, Japan
- Kazuyuki Shinohara
- Graduate School of Biomedical Sciences, Nagasaki University, Nagasaki City, Nagasaki, Japan
42
Ethofer T, Bretscher J, Wiethoff S, Bisch J, Schlipf S, Wildgruber D, Kreifelts B. Functional responses and structural connections of cortical areas for processing faces and voices in the superior temporal sulcus. Neuroimage 2013; 76:45-56. [DOI: 10.1016/j.neuroimage.2013.02.064] [Citation(s) in RCA: 54] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2012] [Revised: 01/17/2013] [Accepted: 02/26/2013] [Indexed: 10/27/2022] Open
43
Furl N, Coppola R, Averbeck BB, Weinberger DR. Cross-frequency power coupling between hierarchically organized face-selective areas. Cereb Cortex 2013; 24:2409-20. [PMID: 23588186 DOI: 10.1093/cercor/bht097] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Neural oscillations are linked to perception and behavior and may reflect mechanisms for long-range communication between brain areas. We developed a causal model of oscillatory dynamics in the face perception network using magnetoencephalographic data from 51 normal volunteers. This model predicted induced responses to faces by estimating oscillatory power coupling between source locations corresponding to bilateral occipital and fusiform face areas (OFA and FFA) and the right superior temporal sulcus (STS). These sources showed increased alpha and theta and decreased beta power as well as selective responses to fearful facial expressions. We then used Bayesian model comparison to compare hypothetical models, which were motivated by previous connectivity data and a well-known theory of temporal lobe function. We confirmed this theory in detail by showing that the OFA bifurcated into 2 independent, hierarchical, feedforward pathways, with fearful expressions modulating power coupling only in the more dorsal (STS) pathway. The power coupling parameters showed a common pattern over connections. Low-frequency bands showed same-frequency power coupling, which, in the dorsal pathway, was modulated by fearful faces. Also, theta power showed a cross-frequency suppression of beta power. This combination of linear and nonlinear mechanisms could reflect computational mechanisms in hierarchical feedforward networks.
Affiliation(s)
- Nicholas Furl, Laboratory of Neuropsychology, NIMH/NIH MRC Cognition and Brain Sciences Unit, Cambridge, CB2 7EF, UK
- Daniel R Weinberger, Genes, Cognition and Psychosis Program, Clinical Brain Disorders Branch NIMH/NIH, Bethesda MD, 20892, USA
44
Jomori I, Hoshiyama M, Uemura JI, Nakagawa Y, Hoshino A, Iwamoto Y. Effects of emotional music on visual processes in inferior temporal area. Cogn Neurosci 2013; 4:21-30. [DOI: 10.1080/17588928.2012.751366] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
45
Reading without the left ventral occipito-temporal cortex. Neuropsychologia 2012; 50:3621-35. [PMID: 23017598 PMCID: PMC3524457 DOI: 10.1016/j.neuropsychologia.2012.09.030] [Citation(s) in RCA: 49] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2012] [Revised: 07/27/2012] [Accepted: 09/17/2012] [Indexed: 11/23/2022]
Abstract
The left ventral occipito-temporal cortex (LvOT) is thought to be essential for the rapid parallel letter processing that is required for skilled reading. Here we investigate whether rapid written word identification in skilled readers can be supported by neural pathways that do not involve LvOT. Hypotheses were derived from a stroke patient who acquired dyslexia following extensive LvOT damage. The patient followed a reading trajectory typical of pure alexia, regaining the ability to read aloud many words, although performance declined as word length increased. Using functional MRI and dynamic causal modelling (DCM), we found that, when short (three to five letter) familiar words were read successfully, visual inputs to the patient's occipital cortex were connected to left motor and premotor regions via activity in a central part of the left superior temporal sulcus (STS). The patient analysis therefore implied a left hemisphere "reading-without-LvOT" pathway that involved STS. We then investigated whether the same reading-without-LvOT pathway could be identified in 29 skilled readers and whether there was inter-subject variability in the degree to which skilled reading engaged LvOT. We found that functional connectivity in the reading-without-LvOT pathway was strongest in individuals who had the weakest functional connectivity in the LvOT pathway. This observation validates the findings of our patient's case study. Our findings highlight the contribution of a left hemisphere reading pathway that is activated during the rapid identification of short familiar written words, particularly when LvOT is not involved. Preservation and use of this pathway may explain how patients are still able to read short words accurately when LvOT has been damaged.
46
Greimel E, Nehrkorn B, Schulte-Rüther M, Fink GR, Nickl-Jockschat T, Herpertz-Dahlmann B, Konrad K, Eickhoff SB. Changes in grey matter development in autism spectrum disorder. Brain Struct Funct 2012; 218:929-42. [PMID: 22777602 PMCID: PMC3695319 DOI: 10.1007/s00429-012-0439-9] [Citation(s) in RCA: 82] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2011] [Accepted: 06/11/2012] [Indexed: 10/28/2022]
Abstract
Results on grey matter (GM) structural alterations in autism spectrum disorder (ASD) are inconclusive. Moreover, little is known about age effects on brain-structure abnormalities in ASD beyond childhood. Here, we aimed to examine regional GM volumes in a large sample of children, adolescents, and adults with ASD. Magnetic resonance imaging scans were obtained in 47 male ASD subjects and 51 matched healthy controls aged 8-50 years. We used whole-brain voxel-based morphometry to first assess group differences in regional GM volume across age. Moreover, taking a cross-sectional approach, group differences in age effects on regional GM volume were investigated. Compared to controls, ASD subjects showed reduced GM volumes in the anterior cingulate cortex, posterior superior temporal sulcus, and middle temporal gyrus. Investigation of group differences in age effects on regional GM volume revealed complex, region-specific alterations in ASD. While GM volumes in the amygdala, temporoparietal junction, septal nucleus and middle cingulate cortex increased in a negative quadratic fashion in both groups, data indicated that GM volume curves in ASD subjects were shifted to the left along the age axis. Moreover, while GM volume in the right precentral gyrus decreased linearly with age in ASD individuals, GM volume development in controls followed a U-shaped pattern. Based on a large sample, our voxel-based morphometry results on group differences in regional GM volumes help to resolve inconclusive findings from previous studies in ASD. Results on age-related changes of regional GM volumes suggest that ASD is characterized by complex alterations in lifetime trajectories of several brain regions that underpin social-cognitive and motor functions.
Affiliation(s)
- Ellen Greimel, Child Neuropsychology Section, Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, University Hospital of the RWTH Aachen, Aachen, Germany