1
Trost W, Trevor C, Fernandez N, Steiner F, Frühholz S. Live music stimulates the affective brain and emotionally entrains listeners in real time. Proc Natl Acad Sci U S A 2024; 121:e2316306121. [PMID: 38408255] [DOI: 10.1073/pnas.2316306121]
Abstract
Music is powerful in conveying emotions and triggering affective brain mechanisms. Affective brain responses in previous studies were, however, rather inconsistent, potentially because of the non-adaptive nature of the recorded music used so far. Live music, by contrast, can be dynamic and adaptive, and is often modulated in response to audience feedback to maximize emotional responses in listeners. Here, we introduce a closed-loop neurofeedback setup for studying emotional responses to live music. This setup linked live performances by musicians to neural processing in listeners, with listeners' amygdala activity displayed to musicians in real time. Brain activity was measured using functional MRI, and amygdala activity in particular was quantified in real time for the neurofeedback signal. Live pleasant and unpleasant piano music performed in response to amygdala neurofeedback from listeners was acoustically very different from comparable recorded music and elicited significantly higher and more consistent amygdala activity. Higher activity was also found in a broader neural network for emotion processing during live compared with recorded music. This included a predominance of aversive coding in the ventral striatum while listening to unpleasant music, and involvement of the thalamic pulvinar nucleus, presumably for regulating attentional and cortical flow mechanisms. Live music also stimulated a dense functional neural network with the amygdala as a central node influencing other brain systems. Finally, only live music showed a strong and positive coupling between features of the musical performance and brain activity in listeners, pointing to real-time and dynamic entrainment processes.
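The core of such a closed-loop design is reducing each fMRI volume, as it arrives, to a single feedback value. A minimal sketch of that step, with hypothetical names and a simple baseline z-scoring scheme assumed for illustration rather than taken from the study:

```python
from statistics import mean

def roi_feedback(volume, roi_voxels, baseline_mean, baseline_sd):
    """Reduce one fMRI volume to a single neurofeedback value.

    `volume` maps voxel coordinates to intensities; the mean signal
    over the region-of-interest voxels (e.g. the amygdala) is z-scored
    against a baseline period, one common way to derive a real-time
    feedback signal. Names and the z-scoring scheme are illustrative
    assumptions, not the authors' implementation.
    """
    roi_mean = mean(volume[v] for v in roi_voxels)
    return (roi_mean - baseline_mean) / baseline_sd

# Toy volume: background voxels at 100, two "amygdala" voxels elevated.
volume = {(x, y, z): 100.0 for x in range(4) for y in range(4) for z in range(4)}
roi = [(0, 0, 0), (0, 0, 1)]
for v in roi:
    volume[v] = 105.0
print(roi_feedback(volume, roi, baseline_mean=100.0, baseline_sd=1.0))  # 5.0
```

In a real-time pipeline this value would be recomputed every repetition time and streamed to the performers' display.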
Affiliation(s)
- Wiebke Trost
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Caitlyn Trevor
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Natalia Fernandez
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Florence Steiner
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Sascha Frühholz
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich 8057, Switzerland
- Department of Psychology, University of Oslo, Oslo 0373, Norway
2
Jia G, Bai S, Lin Y, Wang X, Zhu L, Lyu C, Sun G, An K, Roe AW, Li X, Gao L. Representation of conspecific vocalizations in amygdala of awake marmosets. Natl Sci Rev 2023; 10:nwad194. [PMID: 37818111] [PMCID: PMC10561708] [DOI: 10.1093/nsr/nwad194]
Abstract
Human speech and animal vocalizations are important for social communication and animal survival. Neurons in the auditory pathway respond to a range of sounds, from elementary acoustic features to complex vocalizations. For social communication, responses to distinct vocalization patterns are, in some species, highly specific to an individual conspecific call, including both the sound pattern and the embedded biological information. We conducted single-unit recordings in the amygdala of awake marmosets and presented calls used in marmoset communication, calls of other species, and calls from specific marmoset individuals. We found that some neurons (47/262) in the amygdala distinguished 'Phee' calls from vocalizations of other animals and from other types of marmoset vocalizations. Interestingly, a subset of Phee-responsive neurons (22/47) was also selective for one of the three Phees from two different 'caller' marmosets. Our findings suggest that, while the amygdala has traditionally been considered the key structure in the limbic system, it also represents a critical stage of socially relevant auditory perceptual processing.
Affiliation(s)
- Guoqiang Jia
- Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China
- Siyi Bai
- Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China
- Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Yingxu Lin
- Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China
- Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Xiaohui Wang
- Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China
- Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Lin Zhu
- Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China
- Chenfei Lyu
- Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China
- Guanglong Sun
- Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China
- Kang An
- College of Information, Mechanical and Electrical Engineering, Shanghai Normal University, Shanghai 201418, China
- Anna Wang Roe
- Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China
- MOE Frontier Science Center for Brain Science and Brain-Machine Integration, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou 310058, China
- Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Xinjian Li
- Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China
- MOE Frontier Science Center for Brain Science and Brain-Machine Integration, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou 310058, China
- Key Laboratory of Medical Neurobiology of Zhejiang Province, Zhejiang University School of Medicine, Hangzhou 310020, China
- Lixia Gao
- Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China
- MOE Frontier Science Center for Brain Science and Brain-Machine Integration, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou 310058, China
- Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
3
Leipold S, Abrams DA, Karraker S, Menon V. Neural decoding of emotional prosody in voice-sensitive auditory cortex predicts social communication abilities in children. Cereb Cortex 2023; 33:709-728. [PMID: 35296892] [PMCID: PMC9890475] [DOI: 10.1093/cercor/bhac095]
Abstract
During social interactions, speakers signal information about their emotional state through their voice, which is known as emotional prosody. Little is known regarding the precise brain systems underlying emotional prosody decoding in children and whether accurate neural decoding of these vocal cues is linked to social skills. Here, we address critical gaps in the developmental literature by investigating neural representations of prosody and their links to behavior in children. Multivariate pattern analysis revealed that representations in the bilateral middle and posterior superior temporal sulcus (STS) divisions of voice-sensitive auditory cortex decode emotional prosody information in children. Crucially, emotional prosody decoding in middle STS was correlated with standardized measures of social communication abilities; more accurate decoding of prosody stimuli in the STS was predictive of greater social communication abilities in children. Moreover, social communication abilities were specifically related to decoding sadness, highlighting the importance of tuning in to negative emotional vocal cues for strengthening social responsiveness and functioning. Findings bridge an important theoretical gap by showing that the ability of the voice-sensitive cortex to detect emotional cues in speech is predictive of a child's social skills, including the ability to relate and interact with others.
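Multivariate pattern analyses of this kind classify which stimulus category a spatial voxel pattern belongs to. A minimal sketch on synthetic patterns, assuming a correlation-based nearest-centroid decoder (one common MVPA choice, not necessarily the classifier used in this study):

```python
import math

def pearson(x, y):
    """Pearson correlation between two equally long vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def decode(train, test_pattern):
    """Correlation-based nearest-centroid MVPA decoder: correlate the
    test voxel pattern with each class centroid and pick the winner."""
    best, best_r = None, -2.0
    for label, patterns in train.items():
        centroid = [sum(col) / len(col) for col in zip(*patterns)]
        r = pearson(test_pattern, centroid)
        if r > best_r:
            best, best_r = label, r
    return best

# Synthetic 4-voxel "patterns" for two hypothetical prosody classes.
train = {
    "happy": [[1.0, 0.9, 0.1, 0.0], [0.9, 1.1, 0.0, 0.2]],
    "sad":   [[0.1, 0.0, 1.0, 0.9], [0.0, 0.2, 0.9, 1.1]],
}
print(decode(train, [1.0, 1.0, 0.1, 0.1]))  # happy
```

Decoding accuracy in such analyses is typically estimated with cross-validation over stimulus presentations, leaving one run or trial out at a time.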
Affiliation(s)
- Simon Leipold
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Daniel A Abrams
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Shelby Karraker
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Vinod Menon
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA, USA
- Stanford Neurosciences Institute, Stanford University, Stanford, CA, USA
4
Steiner F, Fernandez N, Dietziker J, Stämpfli SP, Seifritz E, Rey A, Frühholz FS. Affective speech modulates a cortico-limbic network in real time. Prog Neurobiol 2022; 214:102278. [DOI: 10.1016/j.pneurobio.2022.102278]
5
Murray T, O’Brien J, Sagiv N, Kumari V. Changes in functional connectivity associated with facial expression processing over the working adult lifespan. Cortex 2022; 151:211-223. [DOI: 10.1016/j.cortex.2022.03.005]
6
Trait anxiety predicts amygdalar responses during direct processing of threat-related pictures. Sci Rep 2021; 11:18469. [PMID: 34531518] [PMCID: PMC8446049] [DOI: 10.1038/s41598-021-98023-7]
Abstract
Previous studies on the associations between trait anxiety and amygdalar responses to threat stimuli have yielded mixed findings, possibly due to differences in sample characteristics, tasks, and analytical methods. The present functional magnetic resonance imaging (fMRI) study investigated linear and non-linear associations between trait anxiety and amygdalar responses in a sample of participants with low, medium, and high trait anxiety scores. During scanning, participants were presented with threat-related or neutral pictures and had to solve either an emotional task or an emotion-unrelated distraction task. Results showed that only during the explicit task was trait anxiety associated with right amygdalar responses to threat-related as compared to neutral pictures. The best-fitting model was a cubic model, with increased amygdala responses for very low and medium trait anxiety values but decreased amygdala activation for very high trait anxiety values. The findings imply a non-linear relation between trait anxiety and amygdala activation that depends on task conditions.
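The linear-versus-cubic model comparison can be illustrated on synthetic data; the values below are invented for illustration and do not reproduce the study's measurements:

```python
import numpy as np

# Hypothetical amygdala response as a cubic function of trait anxiety:
# rises for low/medium anxiety, then drops at the high end.
anxiety = np.linspace(0.0, 1.0, 21)
response = 1.0 + 4.0 * anxiety - 2.0 * anxiety**2 - 3.0 * anxiety**3

def sse(degree):
    """Sum of squared residuals for a least-squares polynomial fit."""
    coeffs = np.polyfit(anxiety, response, degree)
    fitted = np.polyval(coeffs, anxiety)
    return float(((response - fitted) ** 2).sum())

# A cubic model captures this non-monotonic shape; a line cannot.
print(sse(1) > sse(3))  # True
```

In practice such model comparisons would also penalize the extra parameters, e.g. via an information criterion, rather than comparing raw residuals alone.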
7
Oxytocinergic Modulation of Threat-Specific Amygdala Sensitization in Humans Is Critically Mediated by Serotonergic Mechanisms. Biol Psychiatry Cogn Neurosci Neuroimaging 2021; 6:1081-1089. [PMID: 33894423] [DOI: 10.1016/j.bpsc.2021.04.009]
Abstract
BACKGROUND: Overarching conceptualizations propose that the complex social-emotional effects of oxytocin (OXT) in humans are partly mediated by interactions with other neurotransmitter systems. Recent animal models suggest that the anxiolytic effects of OXT are critically mediated by the serotonin (5-HT) system, yet direct evidence in humans is lacking.
METHODS: To determine the role of 5-HT in OXT-induced attenuation of amygdala threat reactivity and sensitization/desensitization, we conducted a parallel-group, randomized, placebo-controlled, double-blind experiment in which 121 healthy subjects underwent a transient decrease in 5-HT signaling via acute tryptophan depletion, or the corresponding placebo-control protocol, before the administration of intranasal OXT or placebo spray, respectively. Mean and repetition-dependent changes in threat-specific amygdala reactivity toward threatening stimuli (angry faces), as assessed by functional magnetic resonance imaging, served as the primary outcome.
RESULTS: No main or interaction effects of treatment on amygdala threat reactivity were observed, yet OXT switched bilateral amygdala threat sensitization to desensitization, and this effect was significantly attenuated when central 5-HT signaling was decreased via pretreatment with acute tryptophan depletion.
CONCLUSIONS: The present findings provide the first evidence for a role of OXT in threat-specific amygdala desensitization in humans and suggest that these effects are critically mediated by the 5-HT system. OXT may have therapeutic potential to facilitate amygdala desensitization, and adjunct upregulation of 5-HT neurotransmission may facilitate OXT's anxiolytic potential.
8
Frühholz S, Dietziker J, Staib M, Trost W. Neurocognitive processing efficiency for discriminating human non-alarm rather than alarm scream calls. PLoS Biol 2021; 19:e3000751. [PMID: 33848299] [PMCID: PMC8043411] [DOI: 10.1371/journal.pbio.3000751]
Abstract
Across many species, scream calls signal the affective significance of events to other agents. Scream calls have often been thought to be of a generic alarming and fearful nature, signaling potential threats and recognized instantaneously, involuntarily, and accurately by perceivers. However, scream calls are more diverse in their affective signaling than fearful threat alarms alone, and the broader sociobiological relevance of various scream types is thus unclear. Here we used 4 different psychoacoustic, perceptual decision-making, and neuroimaging experiments in humans to demonstrate the existence of at least 6 psychoacoustically distinctive types of scream calls of both alarming and non-alarming nature, rather than only screams caused by fear or aggression. Second, based on perceptual and processing sensitivity measures for decision-making during scream recognition, we found that alarm screams (with some exceptions) were overall discriminated the worst, were responded to the slowest, and were associated with a lower perceptual sensitivity for their recognition than non-alarm screams. Third, the neural processing of alarm compared with non-alarm screams during an implicit processing task elicited only minimal neural signal and connectivity in perceivers, contrary to the frequent assumption of a threat-processing bias in the primate neural system. These findings show that scream calls are more diverse in their signaling and communicative nature in humans than previously assumed. In contrast to the commonly observed threat-processing bias in perceptual discrimination and neural processing, non-alarm screams, and positive screams in particular, appear to be processed more efficiently in speeded discriminations and in the implicit neural processing of various scream types.
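Perceptual sensitivity in such recognition tasks is conventionally quantified as d' from signal detection theory, the difference of the z-transformed hit and false-alarm rates. A minimal sketch with hypothetical rates (not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Perceptual sensitivity d' from signal detection theory:
    z(hit rate) minus z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical pattern: non-alarm screams discriminated more accurately.
print(round(d_prime(0.90, 0.10), 2))  # non-alarm: 2.56
print(round(d_prime(0.70, 0.30), 2))  # alarm: 1.05
```

Extreme rates of 0 or 1 are usually corrected (e.g. with a log-linear adjustment) before the z-transform, since the inverse CDF is undefined there.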
Affiliation(s)
- Sascha Frühholz
- Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Department of Psychology, University of Oslo, Oslo, Norway
- Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Zurich, Switzerland
- Joris Dietziker
- Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland
- Matthias Staib
- Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland
- Wiebke Trost
- Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland
9
Liu T, Ke J, Qi R, Zhang L, Zhang Z, Xu Q, Zhong Y, Lu G, Chen F. Altered functional connectivity of the amygdala and its subregions in typhoon-related post-traumatic stress disorder. Brain Behav 2021; 11:e01952. [PMID: 33205889] [PMCID: PMC7821579] [DOI: 10.1002/brb3.1952]
Abstract
BACKGROUND: New evidence suggests that the centromedial amygdala (CMA) and the basolateral amygdala (BLA) play different roles in threat processing. Our study aimed to investigate the effects of trauma and post-traumatic stress disorder (PTSD) on the functional connectivity (FC) of the amygdala and its subregions.
METHODS: Twenty-seven patients with typhoon-related PTSD, 33 trauma-exposed controls (TEC), and 30 healthy controls (HC) were scanned with a 3-Tesla magnetic resonance imaging scanner. The FCs of the BLA, the CMA, and the amygdala as a whole were examined using a seed-based approach, and analysis of variance was used to compare the groups.
RESULTS: The BLA had stronger connectivity with the prefrontal cortices (PFCs) and angular gyrus in the PTSD group than in the TEC group. Additionally, compared with the PTSD and HC groups, the TEC group exhibited decreased BLA FC with the ventromedial PFC (vmPFC) and increased BLA FC with the postcentral gyrus (PoCG). Furthermore, the PTSD group showed abnormal FC between the salience network and both the default-mode network and the executive control network. Compared with the HC group, both the TEC and PTSD groups showed decreased BLA FC with the superior temporal gyrus (STG). Finally, the FC between the bilateral amygdala (as a whole) and the vmPFC, and between the BLA and the vmPFC, correlated negatively with PTSD severity.
CONCLUSIONS: Decreased BLA-vmPFC FC and increased BLA-PoCG FC may reflect PTSD resilience factors. Trauma leads to decreased connectivity between the BLA and the STG, which may be further aggravated by PTSD.
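Seed-based FC analyses of this kind correlate a seed region's time course with target time courses, typically Fisher z-transforming the correlations before group statistics. A minimal sketch on toy data, with illustrative names rather than the study's actual pipeline:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equally long time courses."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def fisher_z(r):
    """Fisher z-transform, commonly applied before group statistics."""
    return math.atanh(r)

# Toy seed-based FC: a "seed" (e.g. BLA) time course vs. one target
# voxel's time course over five volumes.
seed = [1.0, 2.0, 3.0, 4.0, 5.0]
target = [1.1, 1.9, 3.2, 3.8, 5.1]
r = pearson(seed, target)
print(r > 0.99)                      # strongly coupled toy pair: True
print(round(fisher_z(0.5), 3))       # 0.549
```

The z-transformed maps, one per subject and seed, are what would then enter the analysis of variance across the PTSD, TEC, and HC groups.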
Affiliation(s)
- Tao Liu
- Department of Neurology, Hainan General Hospital (Hainan Hospital Affiliated to Hainan Medical College), Haikou, Hainan Province, China
- Jun Ke
- Department of Medical Imaging, Jinling Hospital, Medical School of Nanjing University, Nanjing, Jiangsu Province, China
- Rongfeng Qi
- Department of Medical Imaging, Jinling Hospital, Medical School of Nanjing University, Nanjing, Jiangsu Province, China
- Li Zhang
- Key Laboratory of Psychiatry and Mental Health of Hunan Province, Mental Health Institute, the Second Xiangya Hospital, National Technology Institute of Psychiatry, Central South University, Changsha, Hunan Province, China
- Zhiqiang Zhang
- Department of Medical Imaging, Jinling Hospital, Medical School of Nanjing University, Nanjing, Jiangsu Province, China
- Qiang Xu
- Department of Medical Imaging, Jinling Hospital, Medical School of Nanjing University, Nanjing, Jiangsu Province, China
- Yuan Zhong
- School of Psychology, Nanjing Normal University, Nanjing, Jiangsu Province, China
- Guangming Lu
- Department of Medical Imaging, Jinling Hospital, Medical School of Nanjing University, Nanjing, Jiangsu Province, China
- Feng Chen
- Department of Radiology, Hainan General Hospital (Hainan Hospital Affiliated to Hainan Medical College), Haikou, Hainan Province, China
10
Swanborough H, Staib M, Frühholz S. Neurocognitive dynamics of near-threshold voice signal detection and affective voice evaluation. Sci Adv 2020; 6:eabb3884. [PMID: 33310844] [PMCID: PMC7732184] [DOI: 10.1126/sciadv.abb3884]
Abstract
Communication and voice signal detection in noisy environments are universal tasks for many species. The fundamental problem of detecting voice signals in noise (VIN) is underinvestigated, especially in its temporal dynamic properties. We investigated VIN as a dynamic signal-to-noise ratio (SNR) problem to determine the neurocognitive dynamics of subthreshold evidence accrual and near-threshold voice signal detection. Experiment 1 showed that detection in dynamic VIN conditions, involving a varying SNR and subthreshold sensory evidence accrual, is superior to similar conditions with nondynamic SNRs or with acoustically matched sounds. Furthermore, voice signals with affective meaning have a detection advantage during VIN. Experiment 2 demonstrated that VIN detection is driven by effective neural integration in an auditory cortical-limbic network at and beyond the near-threshold detection point, preceded by activity in subcortical auditory nuclei. This demonstrates the superior recognition advantage of communication signals in dynamic noise contexts, especially when they carry socio-affective meaning.
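The signal-to-noise ratio manipulated in such designs is conventionally expressed in decibels as the log ratio of signal power to noise power. A minimal sketch on toy samples:

```python
import math

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels from two sample sequences:
    10 * log10 of the ratio of mean signal power to mean noise power."""
    p_sig = sum(s * s for s in signal) / len(signal)
    p_noise = sum(n * n for n in noise) / len(noise)
    return 10.0 * math.log10(p_sig / p_noise)

# Toy example: a voice-like signal at twice the noise amplitude has
# four times the power, i.e. about +6 dB SNR.
signal = [2.0, -2.0, 2.0, -2.0]
noise = [1.0, -1.0, 1.0, -1.0]
print(round(snr_db(signal, noise), 2))  # 6.02
```

A dynamic-SNR stimulus would simply vary this ratio over time, e.g. ramping the voice level up toward the detection threshold while the noise level stays fixed.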
Affiliation(s)
- Huw Swanborough
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Matthias Staib
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Sascha Frühholz
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Department of Psychology, University of Oslo, Oslo, Norway
11
Nonverbal auditory communication - Evidence for integrated neural systems for voice signal production and perception. Prog Neurobiol 2020; 199:101948. [PMID: 33189782] [DOI: 10.1016/j.pneurobio.2020.101948]
Abstract
While humans have developed a sophisticated and unique system of verbal auditory communication, they also share a more common and evolutionarily important nonverbal channel of voice signaling with many other mammalian and vertebrate species. This nonverbal communication is mediated and modulated by the acoustic properties of a voice signal, and is a powerful - yet often neglected - means of sending and perceiving socially relevant information. From the viewpoint of dyadic (involving a sender and a signal receiver) voice signal communication, we discuss the integrated neural dynamics in primate nonverbal voice signal production and perception. Most previous neurobiological models of voice communication modelled these neural dynamics from the limited perspective of either voice production or perception, largely disregarding the neural and cognitive commonalities of both functions. Taking a dyadic perspective on nonverbal communication, however, it turns out that the neural systems for voice production and perception are surprisingly similar. Based on the interdependence of both production and perception functions in communication, we first propose a re-grouping of the neural mechanisms of communication into auditory, limbic, and paramotor systems, with special consideration for a subsidiary basal-ganglia-centered system. Second, we propose that the similarity in the neural systems involved in voice signal production and perception is the result of the co-evolution of nonverbal voice production and perception systems promoted by their strong interdependence in dyadic interactions.
12
Guldner S, Nees F, McGettigan C. Vocomotor and Social Brain Networks Work Together to Express Social Traits in Voices. Cereb Cortex 2020; 30:6004-6020. [PMID: 32577719] [DOI: 10.1093/cercor/bhaa175]
Abstract
Voice modulation is important when navigating social interactions: the tone of voice in a business negotiation is very different from that used to comfort an upset child. While voluntary vocal behavior relies on a cortical vocomotor network, social voice modulation may require additional social cognitive processing. Using functional magnetic resonance imaging, we investigated the neural basis of social vocal control and whether it involves an interplay of vocal control and social processing networks. Twenty-four healthy adult participants modulated their voice to express social traits along the dimensions of the social trait space (affiliation and competence) or to express body size (a control for vocal flexibility). Naïve listener ratings showed that the vocal modulations were effective in evoking social trait ratings along the two primary dimensions of the social trait space. Whereas basic vocal modulation engaged the vocomotor network, social voice modulation specifically engaged social processing regions including the medial prefrontal cortex, superior temporal sulcus, and precuneus. Moreover, these regions showed task-relevant modulations in functional connectivity with the left inferior frontal gyrus, a core vocomotor control network area. These findings highlight the importance of integrating vocal motor control and social information processing for socially meaningful voice modulation.
Affiliation(s)
- Stella Guldner
- Department of Cognitive and Clinical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim 68159, Germany
- Graduate School of Economic and Social Sciences, University of Mannheim, Mannheim 68159, Germany
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Frauke Nees
- Department of Cognitive and Clinical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim 68159, Germany
- Institute of Medical Psychology and Medical Sociology, University Medical Center Schleswig Holstein, Kiel University, Kiel 24105, Germany
- Carolyn McGettigan
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
13
Dricu M, Frühholz S. A neurocognitive model of perceptual decision-making on emotional signals. Hum Brain Mapp 2020; 41:1532-1556. [PMID: 31868310] [PMCID: PMC7267943] [DOI: 10.1002/hbm.24893]
Abstract
Humans make various kinds of decisions about which emotions they perceive in others. Although it might seem like a split-second phenomenon, deliberating over which emotions we perceive unfolds across several stages of decisional processing. Neurocognitive models of general perception postulate that our brain first extracts sensory information about the world, then integrates these data into a percept, and lastly interprets it. The aim of the present study was to build an evidence-based neurocognitive model of perceptual decision-making on others' emotions. We conducted a series of meta-analyses of neuroimaging data spanning 30 years on explicit evaluations of others' emotional expressions. We find that emotion perception is rather an umbrella term for various perception paradigms, each with distinct neural structures that underlie task-related cognitive demands. Furthermore, the left amygdala was responsive across all classes of decisional paradigms, regardless of task-related demands. Based on these observations, we propose a neurocognitive model that outlines the information flow in the brain needed for a successful evaluation of, and decisions on, other individuals' emotions.
HIGHLIGHTS:
- Emotion classification involves heterogeneous perception and decision-making tasks
- Decision-making processes on emotions are rarely covered by existing emotion theories
- We propose an evidence-based neurocognitive model of decision-making on emotions
- Bilateral brain processes for nonverbal decisions, left-brain processes for verbal decisions
- The left amygdala is involved in any kind of decision on emotions
Affiliation(s)
- Mihai Dricu
- Department of Psychology, University of Bern, Bern, Switzerland
- Sascha Frühholz
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich (ZNZ), University of Zurich and ETH Zurich, Zurich, Switzerland
- Center for Integrative Human Physiology (ZIHP), University of Zurich, Zurich, Switzerland
14
Abstract
The processing of emotional nonlinguistic information in speech is defined as emotional prosody. This auditory nonlinguistic information is essential in the decoding of social interactions and in our capacity to adapt and react adequately by taking into account contextual information. An integrated model is proposed at the functional and brain levels, encompassing 5 main systems that involve cortical and subcortical neural networks relevant for the processing of emotional prosody in its major dimensions, including perception and sound organization; related action tendencies; and associated values that integrate complex social contexts and ambiguous situations.
Affiliation(s)
- Didier Grandjean
- Department of Psychology and Educational Sciences and Swiss Center for Affective Sciences, University of Geneva, Switzerland
15
Lin H, Müller-Bardorff M, Gathmann B, Brieke J, Mothes-Lasch M, Bruchmann M, Miltner WHR, Straube T. Stimulus arousal drives amygdalar responses to emotional expressions across sensory modalities. Sci Rep 2020; 10:1898. [PMID: 32024891 PMCID: PMC7002496 DOI: 10.1038/s41598-020-58839-1] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2019] [Accepted: 12/23/2019] [Indexed: 11/08/2022] Open
Abstract
The factors that drive amygdalar responses to emotionally significant stimuli are still a matter of debate; in particular, the amygdala's proneness to respond to negatively valenced stimuli has been discussed controversially. Furthermore, it is uncertain whether the amygdala responds in a modality-general fashion or whether modality-specific idiosyncrasies exist. Therefore, the present functional magnetic resonance imaging (fMRI) study systematically investigated amygdalar responses to the stimulus valence and arousal of emotional expressions across visual and auditory modalities. During scanning, participants performed a gender judgment task while prosodic and facial emotional expressions were presented. The stimuli varied in valence and arousal, comprising neutral, happy, and angry expressions of high and low emotional intensity. Results demonstrate amygdalar activation as a function of stimulus arousal and the associated emotional intensity, regardless of stimulus valence. Furthermore, arousal-driven amygdalar responding did not depend on whether the emotional expressions were visual or auditory. Thus, the current results are consistent with the notion that the amygdala codes general stimulus relevance across visual and auditory modalities, irrespective of valence. In addition, whole-brain analyses revealed that effects in visual and auditory areas were driven mainly by highly intense emotional facial and vocal stimuli, respectively, suggesting modality-specific representations of emotional expressions in auditory and visual cortices.
Affiliation(s)
- Huiyan Lin
- Institute of Applied Psychology, School of Public Administration, Guangdong University of Finance, 510521, Guangzhou, China.
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149, Muenster, Germany.
- Miriam Müller-Bardorff
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149, Muenster, Germany
- Bettina Gathmann
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149, Muenster, Germany
- Jaqueline Brieke
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149, Muenster, Germany
- Martin Mothes-Lasch
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149, Muenster, Germany
- Maximilian Bruchmann
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149, Muenster, Germany
- Wolfgang H R Miltner
- Department of Clinical Psychology, Friedrich Schiller University of Jena, 07743, Jena, Germany
- Thomas Straube
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149, Muenster, Germany
16
Domínguez-Borràs J, Guex R, Méndez-Bértolo C, Legendre G, Spinelli L, Moratti S, Frühholz S, Mégevand P, Arnal L, Strange B, Seeck M, Vuilleumier P. Human amygdala response to unisensory and multisensory emotion input: No evidence for superadditivity from intracranial recordings. Neuropsychologia 2019; 131:9-24. [PMID: 31158367 DOI: 10.1016/j.neuropsychologia.2019.05.027] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2018] [Revised: 05/15/2019] [Accepted: 05/28/2019] [Indexed: 12/14/2022]
Abstract
The amygdala is crucially implicated in processing emotional information from various sensory modalities. However, there is a dearth of knowledge concerning the integration and relative time-course of its responses across different channels, i.e., for auditory, visual, and audiovisual input. Functional neuroimaging data in humans point to a possible role of this region in the multimodal integration of emotional signals, but direct evidence for anatomical and temporal overlap of unisensory and multisensory-evoked responses in the amygdala is still lacking. We recorded event-related potentials (ERPs) and oscillatory activity from 9 amygdalae using intracranial electroencephalography (iEEG) in patients prior to epilepsy surgery, and compared electrophysiological responses to fearful, happy, or neutral stimuli presented as voices alone, faces alone, or voices and faces delivered simultaneously. Results showed differential amygdala responses to fearful stimuli, in comparison to neutral, reaching significance 100-200 ms post-onset for auditory, visual and audiovisual stimuli. At later latencies, ∼400 ms post-onset, the amygdala response to audiovisual information was also amplified in comparison to auditory or visual stimuli alone. Importantly, however, we found no evidence for either super- or subadditivity effects in any of the bimodal responses. These results suggest, first, that emotion processing in the amygdala occurs at globally similar early stages of perceptual processing for auditory, visual, and audiovisual inputs; second, that overall larger responses to multisensory information occur at later stages only; and third, that the underlying mechanisms of this multisensory gain may reflect a purely additive response to concomitant visual and auditory inputs. Our findings provide novel insights into emotion processing across the sensory pathways, and their convergence within the limbic system.
Affiliation(s)
- Judith Domínguez-Borràs
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland.
- Raphaël Guex
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland.
- Guillaume Legendre
- Campus Biotech, Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland.
- Laurent Spinelli
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland.
- Stephan Moratti
- Department of Experimental Psychology, Complutense University of Madrid, Spain; Laboratory for Clinical Neuroscience, Centre for Biomedical Technology, Universidad Politécnica de Madrid, Spain.
- Sascha Frühholz
- Department of Psychology, University of Zurich, Switzerland.
- Pierre Mégevand
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland.
- Luc Arnal
- Campus Biotech, Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland.
- Bryan Strange
- Laboratory for Clinical Neuroscience, Centre for Biomedical Technology, Universidad Politécnica de Madrid, Spain; Department of Neuroimaging, Alzheimer's Disease Research Centre, Reina Sofia-CIEN Foundation, Madrid, Spain.
- Margitta Seeck
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland.
- Patrik Vuilleumier
- Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland.
17
Calvo N, Abrevaya S, Martínez Cuitiño M, Steeb B, Zamora D, Sedeño L, Ibáñez A, García AM. Rethinking the Neural Basis of Prosody and Non-literal Language: Spared Pragmatics and Cognitive Compensation in a Bilingual With Extensive Right-Hemisphere Damage. Front Psychol 2019; 10:570. [PMID: 30941077 PMCID: PMC6433823 DOI: 10.3389/fpsyg.2019.00570] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2018] [Accepted: 02/28/2019] [Indexed: 11/13/2022] Open
Abstract
Above and beyond the critical contributions of left perisylvian regions to language, the neural networks supporting pragmatic aspects of verbal communication in native and non-native languages (L1 and L2, respectively) have often been ascribed to the right hemisphere (RH). However, several reports have shown that left-hemisphere activity associated with pragmatic domains (e.g., prosody, indirect speech, figurative language) is comparable to or even greater than that observed in the RH, challenging the putative role of the latter in these domains. Against this background, we report on an adult bilingual patient showing preservation of pragmatic verbal skills in both languages (L1: Spanish, L2: English) despite bilateral damage mainly focused on the RH. After two strokes, the patient sustained lesions in several regions previously implicated in pragmatic functions (vast portions of the right fronto-insulo-temporal cortices, the bilateral amygdalae and insular cortices, and the left putamen). Yet, comparison of linguistic and pragmatic skills with matched controls revealed spared performance on multiple relevant tasks in both her L1 and L2. Despite mild difficulties in some aspects of L2 prosody, she showed no deficits in comprehending metaphors and idioms, or in understanding indirect speech acts in either language. Basic verbal skills were also preserved in both languages, including verbal auditory discrimination, repetition of words and pseudo-words, cognate processing, grammaticality judgments, equivalent recognition, and word and sentence translation. Taken together, the evidence shows that multiple functions of verbal communication can be widely spared despite extensive damage to the RH, and that claims for a putative relation between pragmatics and the RH may have been overemphasized in the monolingual and bilingual literature. We further discuss the case in light of previous reports of pragmatic and linguistic deficits following brain lesions and address its relation to cognitive compensation in bilingual patients.
Affiliation(s)
- Noelia Calvo
- Laboratory of Experimental Psychology and Neuroscience, Institute of Cognitive and Translational Neuroscience, INECO Foundation, Favaloro University, Buenos Aires, Argentina; National Scientific and Technical Research Council, Buenos Aires, Argentina; Faculty of Psychology, National University of Córdoba, Córdoba, Argentina
- Sofía Abrevaya
- Laboratory of Experimental Psychology and Neuroscience, Institute of Cognitive and Translational Neuroscience, INECO Foundation, Favaloro University, Buenos Aires, Argentina; National Scientific and Technical Research Council, Buenos Aires, Argentina
- Macarena Martínez Cuitiño
- Faculty of Psychology, National University of Córdoba, Córdoba, Argentina; Laboratory of Language Research (LILEN), Institute of Cognitive and Translational Neuroscience (INCYT), Buenos Aires, Argentina
- Brenda Steeb
- Laboratory of Language Research (LILEN), Institute of Cognitive and Translational Neuroscience (INCYT), Buenos Aires, Argentina
- Dolores Zamora
- Laboratory of Language Research (LILEN), Institute of Cognitive and Translational Neuroscience (INCYT), Buenos Aires, Argentina
- Lucas Sedeño
- Laboratory of Experimental Psychology and Neuroscience, Institute of Cognitive and Translational Neuroscience, INECO Foundation, Favaloro University, Buenos Aires, Argentina; National Scientific and Technical Research Council, Buenos Aires, Argentina
- Agustín Ibáñez
- Laboratory of Experimental Psychology and Neuroscience, Institute of Cognitive and Translational Neuroscience, INECO Foundation, Favaloro University, Buenos Aires, Argentina; National Scientific and Technical Research Council, Buenos Aires, Argentina; Universidad Autónoma del Caribe, Barranquilla, Colombia; Department of Psychology, Universidad Adolfo Ibáñez, Santiago, Chile; Centre of Excellence in Cognition and Its Disorders, Australian Research Council, Sydney, NSW, Australia
- Adolfo M García
- Laboratory of Experimental Psychology and Neuroscience, Institute of Cognitive and Translational Neuroscience, INECO Foundation, Favaloro University, Buenos Aires, Argentina; National Scientific and Technical Research Council, Buenos Aires, Argentina; Faculty of Education, National University of Cuyo, Mendoza, Argentina
18
Hellbernd N, Sammler D. Neural bases of social communicative intentions in speech. Soc Cogn Affect Neurosci 2019; 13:604-615. [PMID: 29771359 PMCID: PMC6022564 DOI: 10.1093/scan/nsy034] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2017] [Accepted: 05/13/2018] [Indexed: 11/15/2022] Open
Abstract
Our ability to understand others’ communicative intentions in speech is key to successful social interaction. Indeed, misunderstanding an ‘excuse me’ as an apology when it is meant as criticism may have important consequences. Recent behavioural studies have provided evidence that prosody, that is, vocal tone, is an important indicator of speakers’ intentions. Using a novel audio-morphing paradigm, the present functional magnetic resonance imaging study examined the neurocognitive mechanisms that allow listeners to ‘read’ speakers’ intents from vocal prosodic patterns. Participants categorized prosodic expressions that gradually varied in their acoustics between criticism, doubt, and suggestion. Categorizing typical exemplars of the three intentions induced activations along the ventral auditory stream, complemented by the amygdala and the mentalizing system. These findings likely depict the stepwise conversion of external perceptual information into abstract prosodic categories and internal social semantic concepts, including the speaker’s mental state. Ambiguous tokens, in turn, involved cingulo-opercular areas known to assist decision-making in the case of conflicting cues. Auditory and decision-making processes were flexibly coupled with the amygdala, depending on prosodic typicality, indicating enhanced categorization efficiency for overtly relevant, meaningful prosodic signals. Altogether, the results point to a model in which auditory prosodic categorization and socio-inferential conceptualization cooperate to translate perceived vocal tone into a coherent representation of the speaker’s intent.
Affiliation(s)
- Nele Hellbernd
- Otto Hahn Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, D-04103 Leipzig, Germany
- Daniela Sammler
- Otto Hahn Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, D-04103 Leipzig, Germany
19
Gabard-Durnam LJ, O'Muircheartaigh J, Dirks H, Dean DC, Tottenham N, Deoni S. Human amygdala functional network development: A cross-sectional study from 3 months to 5 years of age. Dev Cogn Neurosci 2018; 34:63-74. [PMID: 30075348 PMCID: PMC6252269 DOI: 10.1016/j.dcn.2018.06.004] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2018] [Revised: 06/11/2018] [Accepted: 06/12/2018] [Indexed: 01/10/2023] Open
Abstract
Although the amygdala's role in shaping social behavior is especially important during early post-natal development, very little is known about amygdala functional development before childhood. To address this gap, this study uses resting-state fMRI to examine early amygdalar functional network development in a cross-sectional sample of 80 children from 3 months to 5 years of age. Whole-brain functional connectivity with the amygdala, and with its laterobasal and superficial sub-regions, was largely similar to that seen in older children and adults. Functional distinctions between sub-region networks were already established. These patterns suggest that many amygdala functional circuits are intact from infancy, especially those that are part of motor, visual, auditory, and subcortical networks. Developmental changes in connectivity were observed between the laterobasal nucleus and bilateral ventral temporal and motor cortex, as well as between the superficial nuclei and medial thalamus, occipital cortex, and a different region of motor cortex. These results show that amygdala-subcortical and sensory-cortex connectivity begins refinement prior to childhood, though connectivity changes with associative and frontal cortical areas, seen after early childhood, were not evident in this age range. These findings represent early steps in understanding amygdala network dynamics across infancy through early childhood, an important period of emotional and cognitive development.
Affiliation(s)
- L J Gabard-Durnam
- Division of Developmental Medicine, Boston Children's Hospital, Harvard University, Boston, MA, 02115, USA
- J O'Muircheartaigh
- Department of Forensic and Neurodevelopmental Sciences & Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK; Centre for the Developing Brain, Department of Perinatal Imaging and Health, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK.
- H Dirks
- Advanced Baby Imaging Lab, Brown University School of Engineering, Providence, USA
- D C Dean
- Waisman Center, University of Wisconsin-Madison, Madison, WI, 53702, USA; Center for Healthy Minds, University of Wisconsin-Madison, Madison, WI, 53702, USA
- N Tottenham
- Department of Psychology, Columbia University, New York, NY, 10027, USA
- S Deoni
- Department of Pediatrics, Warren Alpert Medical School, Brown University, Providence, USA
20
Hemispheric specialization of the basal ganglia during vocal emotion decoding: Evidence from asymmetric Parkinson's disease and 18FDG PET. Neuropsychologia 2018; 119:1-11. [DOI: 10.1016/j.neuropsychologia.2018.07.023] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2018] [Revised: 07/10/2018] [Accepted: 07/19/2018] [Indexed: 11/15/2022]
21
Tully J, Gabay AS, Brown D, Murphy DGM, Blackwood N. The effect of intranasal oxytocin on neural response to facial emotions in healthy adults as measured by functional MRI: A systematic review. Psychiatry Res 2018; 272:17-29. [PMID: 29272737 PMCID: PMC6562202 DOI: 10.1016/j.pscychresns.2017.11.017] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/16/2017] [Revised: 11/24/2017] [Accepted: 11/25/2017] [Indexed: 12/28/2022]
Abstract
Abnormalities in responses to human facial emotions are associated with a range of psychiatric disorders. Addressing these abnormalities may therefore have significant clinical applications. Previous meta-analyses have demonstrated effects of the neuropeptide oxytocin on behavioural responses to facial emotions, as well as effects on the brain as measured by functional MRI. Evidence suggests that these effects may be mediated by sex and by the role of eye gaze. However, the specific effect of oxytocin on brain responses to facial emotions in healthy adults has not been systematically analysed. To address this question, we conducted the present systematic review. Twenty-two studies met our inclusion criteria. In men, oxytocin consistently attenuated brain activity in response to negative emotional faces, particularly fear, compared with placebo, while in women, oxytocin enhanced activity. Brain regions consistently involved included the amygdala, fusiform gyrus and anterior cingulate cortex. In some studies, oxytocin increased fixation changes towards the eyes, with enhanced amygdala and/or fusiform gyrus activation. By enhancing understanding of emotion processing in healthy subjects, these pharmacoimaging studies provide a theoretical basis for studying deficits in clinical populations. However, progress to date has been limited by low statistical power, methodological heterogeneity, and a lack of multimodal studies.
Affiliation(s)
- John Tully
- Department of Forensic and Neurodevelopmental Sciences, Institute of Psychiatry, Psychology and Neuroscience, Kings College London, London, United Kingdom.
- Anthony S Gabay
- Department of Neuroimaging, Kings College London, London, United Kingdom
- Danielle Brown
- Institute of Psychiatry, Psychology and Neuroscience, Kings College London, London, United Kingdom
- Declan G M Murphy
- Department of Forensic and Neurodevelopmental Sciences, Institute of Psychiatry, Psychology and Neuroscience, Kings College London, London, United Kingdom
- Nigel Blackwood
- Department of Forensic and Neurodevelopmental Sciences, Institute of Psychiatry, Psychology and Neuroscience, Kings College London, London, United Kingdom
22
Kedo O, Zilles K, Palomero-Gallagher N, Schleicher A, Mohlberg H, Bludau S, Amunts K. Receptor-driven, multimodal mapping of the human amygdala. Brain Struct Funct 2017; 223:1637-1666. [PMID: 29188378 DOI: 10.1007/s00429-017-1577-x] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2017] [Accepted: 11/20/2017] [Indexed: 12/23/2022]
Abstract
The human amygdala consists of subdivisions contributing to various functions. However, principles of structural organization at the cellular and molecular level are not well understood. Thus, we re-analyzed the cytoarchitecture of the amygdala and generated cytoarchitectonic probabilistic maps of ten subdivisions in stereotaxic space based on novel workflows and mapping tools. This parcellation was then used as a basis for analyzing the receptor expression for 15 receptor types. Receptor fingerprints, i.e., the characteristic balance between densities of all receptor types, were generated in each subdivision to comprehensively visualize differences and similarities in receptor architecture between the subdivisions. Fingerprints of the central and medial nuclei and the anterior amygdaloid area were highly similar. Fingerprints of the lateral, basolateral and basomedial nuclei were also similar to each other, while those of the remaining nuclei were distinct in shape. Similarities were further investigated by a hierarchical cluster analysis: a two-cluster solution subdivided the phylogenetically older part (central, medial nuclei, anterior amygdaloid area) from the remaining parts of the amygdala. A more fine-grained three-cluster solution replicated our previous parcellation including a laterobasal, superficial and centromedial group. Furthermore, it helped to better characterize the paralaminar nucleus with a molecular organization in-between the laterobasal and the superficial group. The multimodal cyto- and receptor-architectonic analysis of the human amygdala provides new insights into its microstructural organization, intersubject variability, localization in stereotaxic space and principles of receptor-based neurochemical differences.
Affiliation(s)
- Olga Kedo
- Institute of Neuroscience and Medicine, INM-1, Research Centre Jülich, Jülich, Germany.
- Karl Zilles
- Institute of Neuroscience and Medicine, INM-1, Research Centre Jülich, Jülich, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen University, Aachen, Germany; JARA-BRAIN, Jülich-Aachen Research Alliance, Aachen, Germany
- Axel Schleicher
- Institute of Neuroscience and Medicine, INM-1, Research Centre Jülich, Jülich, Germany
- Hartmut Mohlberg
- Institute of Neuroscience and Medicine, INM-1, Research Centre Jülich, Jülich, Germany
- Sebastian Bludau
- Institute of Neuroscience and Medicine, INM-1, Research Centre Jülich, Jülich, Germany
- Katrin Amunts
- Institute of Neuroscience and Medicine, INM-1, Research Centre Jülich, Jülich, Germany; JARA-BRAIN, Jülich-Aachen Research Alliance, Aachen, Germany; C. & O. Vogt Institute for Brain Research, University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
23
Simon D, Becker M, Mothes-Lasch M, Miltner WHR, Straube T. Loud and angry: sound intensity modulates amygdala activation to angry voices in social anxiety disorder. Soc Cogn Affect Neurosci 2017; 12:409-416. [PMID: 27651541 PMCID: PMC5390751 DOI: 10.1093/scan/nsw131] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2015] [Accepted: 09/06/2016] [Indexed: 11/12/2022] Open
Abstract
Angry expressions of both voices and faces represent disorder-relevant stimuli in social anxiety disorder (SAD). Although individuals with SAD show greater amygdala activation to angry faces, previous work has failed to find comparable effects for angry voices. Here, we investigated whether voice sound-intensity, a modulator of a voice's threat-relevance, affects brain responses to angry prosody in SAD. We used event-related functional magnetic resonance imaging to explore brain responses to voices varying in sound intensity and emotional prosody in SAD patients and healthy controls (HCs). Angry and neutral voices were presented either with normal or high sound amplitude, while participants had to decide upon the speaker's gender. Loud vs normal voices induced greater insula activation, and angry vs neutral prosody greater orbitofrontal cortex activation in SAD as compared with HC subjects. Importantly, an interaction of sound intensity, prosody and group was found in the insula and the amygdala. In particular, the amygdala showed greater activation to loud angry voices in SAD as compared with HC subjects. This finding demonstrates a modulating role of voice sound-intensity on amygdalar hyperresponsivity to angry prosody in SAD and suggests that abnormal processing of interpersonal threat signals in amygdala extends beyond facial expressions in SAD.
Affiliation(s)
- Doerte Simon
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Von-Esmarch-Str. 52, D-48149 Münster, Germany
- Michael Becker
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Von-Esmarch-Str. 52, D-48149 Münster, Germany
- Martin Mothes-Lasch
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Von-Esmarch-Str. 52, D-48149 Münster, Germany
- Wolfgang H R Miltner
- Department of Biological and Clinical Psychology, Friedrich Schiller University, Jena
- Thomas Straube
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Von-Esmarch-Str. 52, D-48149 Münster, Germany
24
Frühholz S, Schlegel K, Grandjean D. Amygdala structure and core dimensions of the affective personality. Brain Struct Funct 2017; 222:3915-3925. [PMID: 28512686 DOI: 10.1007/s00429-017-1444-9] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2017] [Accepted: 05/11/2017] [Indexed: 11/26/2022]
Abstract
While biological models of human personality propose that socio-affective traits and skills are rooted in the structure of the amygdala, empirical evidence remains sparse and inconsistent. Here, we used a comprehensive assessment of the affective personality and tested its association with global, local, and laterality measures of the amygdala structure. Results revealed three broad dimensions of the affective personality that were differentially related to bilateral amygdala structures. Dysfunctional and maladaptive affective traits were associated with a global size and local volume reduction of the amygdala, whereas adaptive emotional skills were linked to an increased size of the left amygdala. Furthermore, reduced asymmetry in the bilateral global amygdala volume was linked to higher affective instability and might be a potential precursor of psychiatric disorders. This study demonstrates that structural amygdala measures provide a neural basis for all major dimensions of the human personality related to adaptive and maladaptive socio-affective functioning.
Affiliation(s)
- Sascha Frühholz
- Department of Psychology, University of Zurich, Binzmühlestrasse 14/18, 8050, Zurich, Switzerland.
- Neuroscience Center Zurich, University of Zurich and ETH Zurich, 8057, Zurich, Switzerland.
- Center for Integrative Human Physiology (ZIHP), University of Zurich, 8057, Zurich, Switzerland.
- Swiss Center for Affective Sciences, University of Geneva, 1202, Geneva, Switzerland.
- Katja Schlegel
- Swiss Center for Affective Sciences, University of Geneva, 1202, Geneva, Switzerland
- Institute for Psychology, University of Bern, 3012, Bern, Switzerland
- Didier Grandjean
- Swiss Center for Affective Sciences, University of Geneva, 1202, Geneva, Switzerland
25
Jiang X, Sanford R, Pell MD. Neural systems for evaluating speaker (Un)believability. Hum Brain Mapp 2017; 38:3732-3749. [PMID: 28462535 DOI: 10.1002/hbm.23630] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2017] [Revised: 04/13/2017] [Accepted: 04/17/2017] [Indexed: 12/11/2022] Open
Abstract
Our voice provides salient cues about how confident we sound, which promotes inferences about how believable we are. However, the neural mechanisms involved in these social inferences are largely unknown. Employing functional magnetic resonance imaging, we examined the brain networks and individual differences underlying the evaluation of speaker believability from vocal expressions. Participants (n = 26) listened to statements produced in a confident, unconfident, or "prosodically unmarked" (neutral) voice, and judged how believable the speaker was on a 4-point scale. We found that frontal-temporal networks were activated for different levels of confidence, with the left superior and inferior frontal gyri more activated for confident statements, the right superior temporal gyrus for unconfident expressions, and the bilateral cerebellum for statements in a neutral voice. Based on listeners' believability judgments, we observed increased activation in the right superior parietal lobule (SPL) associated with higher believability, while increased activation in the left posterior central gyrus (PoCG) was associated with lower believability. A psychophysiological interaction analysis found that the anterior cingulate cortex and bilateral caudate were connected to the right SPL when higher believability judgments were made, while the supplementary motor area was connected with the left PoCG when lower believability judgments were made. Personal characteristics, such as interpersonal reactivity and the individual tendency to trust others, modulated the brain activations and the functional connectivity when making believability judgments. In sum, our data pinpoint neural mechanisms that are involved when inferring one's believability from a speaker's voice and establish ways that these mechanisms are modulated by individual characteristics of a listener.
Affiliation(s)
- Xiaoming Jiang
- School of Communication Sciences and Disorders, McGill University, Montréal, Canada
- Ryan Sanford
- McConnell Brain Imaging Center, Montréal Neurological Institute, McGill University, Montréal, Canada
- Marc D Pell
- School of Communication Sciences and Disorders, McGill University, Montréal, Canada; McConnell Brain Imaging Center, Montréal Neurological Institute, McGill University, Montréal, Canada
26
Gruber T, Grandjean D. A comparative neurological approach to emotional expressions in primate vocalizations. Neurosci Biobehav Rev 2016; 73:182-190. [PMID: 27993605] [DOI: 10.1016/j.neubiorev.2016.12.004]
Abstract
Different approaches from different research domains have crystallized debate over primate emotional processing and vocalizations in recent decades. On one side, researchers disagree about whether emotional states or processes in animals truly compare to those in humans. On the other, a long-held assumption is that primate vocalizations are innate communicative signals over which nonhuman primates have limited control, and that these calls mirror the emotional state of the individuals producing them, despite growing evidence of intentional production for some vocalizations. Our goal is to connect both sides of the discussion by deciphering how the emotional content of primate calls compares with emotional vocal signals in humans. We focus particularly on the neural bases of primate emotions and vocalizations to identify cerebral structures underlying emotion, vocal production, and comprehension in primates, and discuss whether particular structures or neuronal networks evolved solely for specific functions in the human brain. Finally, we propose a model that classifies emotional vocalizations in primates along four dimensions (learning, control, emotional, meaning), allowing calls to be compared across species.
Affiliation(s)
- Thibaud Gruber
- Swiss Center for Affective Sciences and Department of Psychology and Sciences of Education, University of Geneva, Geneva, Switzerland.
- Didier Grandjean
- Swiss Center for Affective Sciences and Department of Psychology and Sciences of Education, University of Geneva, Geneva, Switzerland
27
Perceiving emotional expressions in others: Activation likelihood estimation meta-analyses of explicit evaluation, passive perception and incidental perception of emotions. Neurosci Biobehav Rev 2016; 71:810-828. [DOI: 10.1016/j.neubiorev.2016.10.020]
28
Bowman C, Yamauchi T. Processing emotions in sounds: cross-domain aftereffects of vocal utterances and musical sounds. Cogn Emot 2016; 31:1610-1626. [PMID: 27848281] [DOI: 10.1080/02699931.2016.1255588]
Abstract
Nonlinguistic signals in the voice and musical instruments play a critical role in communicating emotion. Although previous research suggests a common mechanism for emotion processing in music and speech, the precise relationship between the two domains is unclear due to the paucity of direct evidence. By applying the adaptation paradigm developed by Bestelmeyer, Rouger, DeBruine, and Belin [2010. Auditory adaptation in vocal affect perception. Cognition, 117(2), 217-223. doi:10.1016/j.cognition.2010.08.008], this study shows cross-domain aftereffects from vocal to musical sounds. Participants heard an angry or fearful sound four times, followed by a test sound, and judged whether the test sound was angry or fearful. Results show cross-domain aftereffects in one direction only, from vocal utterances to musical sounds, and not vice versa. This effect occurred primarily for angry vocal sounds. It is argued that there is a unidirectional relationship between vocal and musical sounds, in which emotion processing of vocal sounds encompasses musical sounds but not vice versa.
Affiliation(s)
- Casady Bowman
- Department of Psychology, Texas A&M University, College Station, TX, USA
- Takashi Yamauchi
- Department of Psychology, Texas A&M University, College Station, TX, USA
29
Liu J, Fang J, Wang Z, Rong P, Hong Y, Fan Y, Wang X, Park J, Jin Y, Liu C, Zhu B, Kong J. Transcutaneous vagus nerve stimulation modulates amygdala functional connectivity in patients with depression. J Affect Disord 2016; 205:319-326. [PMID: 27559632] [DOI: 10.1016/j.jad.2016.08.003]
Abstract
BACKGROUND: The amygdala is a key region in emotion processing, and studies have suggested that amygdala-frontal functional connectivity deficits can be modulated by antidepressants in major depressive disorder (MDD). Transcutaneous vagus nerve stimulation (tVNS), a non-invasive, peripheral neuromodulation method at the ear, has shown promising results in treating MDD in several pilot studies. However, the neural mechanism underlying tVNS treatment of depression has not been fully investigated. In this study, we investigated how tVNS modulates resting state functional connectivity (rsFC) of the amygdala-lateral prefrontal network in patients with mild or moderate MDD. METHODS: Forty-nine MDD patients were recruited and received tVNS or sham tVNS treatments for four weeks. Resting state fMRI scans were acquired before and after treatment. RESULTS: After one month of treatment, 24-item Hamilton Depression Rating Scale (HAMD) scores were reduced significantly in the tVNS group as compared with the sham tVNS group. rsFC between the right amygdala and the left dorsolateral prefrontal cortex was increased in the tVNS group compared with sham tVNS. These rsFC increases were associated with HAMD reduction as well as with reductions in the anxiety and retardation HAMD subscales. CONCLUSIONS: tVNS can significantly modulate amygdala-lateral prefrontal rsFC in MDD patients; our results provide insight into the brain mechanism of tVNS treatment for MDD.
Affiliation(s)
- Jun Liu
- Guang'anmen Hospital, China Academy of Chinese Medical Sciences, Beijing 100053, China; Department of Psychiatry, Massachusetts General Hospital / Harvard Medical School, Charlestown, MA 02129, USA
- Jiliang Fang
- Guang'anmen Hospital, China Academy of Chinese Medical Sciences, Beijing 100053, China
- Zengjian Wang
- Department of Psychiatry, Massachusetts General Hospital / Harvard Medical School, Charlestown, MA 02129, USA; Center for the Study of Applied Psychology, Key Laboratory of Mental Health and Cognitive Science of Guangdong Province, School of Psychology, South China Normal University, Guangzhou, China
- Peijing Rong
- Institute of Acupuncture & Moxibustion, China Academy of Chinese Medical Sciences, Beijing 100700, China
- Yang Hong
- Guang'anmen Hospital, China Academy of Chinese Medical Sciences, Beijing 100053, China
- Yangyang Fan
- Guang'anmen Hospital, China Academy of Chinese Medical Sciences, Beijing 100053, China
- Xiaoling Wang
- Guang'anmen Hospital, China Academy of Chinese Medical Sciences, Beijing 100053, China
- Joel Park
- Department of Psychiatry, Massachusetts General Hospital / Harvard Medical School, Charlestown, MA 02129, USA
- Yu Jin
- Department of Psychiatry, Massachusetts General Hospital / Harvard Medical School, Charlestown, MA 02129, USA
- Chunhong Liu
- Beijing Key Laboratory of Mental Disorders, Department of Radiology and Psychiatry, Beijing Anding Hospital, Capital Medical University, Beijing 100088, China
- Bing Zhu
- Institute of Acupuncture & Moxibustion, China Academy of Chinese Medical Sciences, Beijing 100700, China
- Jian Kong
- Department of Psychiatry, Massachusetts General Hospital / Harvard Medical School, Charlestown, MA 02129, USA
30
Amygdala and auditory cortex exhibit distinct sensitivity to relevant acoustic features of auditory emotions. Cortex 2016; 85:116-125. [PMID: 27855282] [DOI: 10.1016/j.cortex.2016.10.013]
Abstract
Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions.
31
Rabellino D, Densmore M, Frewen PA, Théberge J, McKinnon MC, Lanius RA. Aberrant Functional Connectivity of the Amygdala Complexes in PTSD during Conscious and Subconscious Processing of Trauma-Related Stimuli. PLoS One 2016; 11:e0163097. [PMID: 27631496] [PMCID: PMC5025207] [DOI: 10.1371/journal.pone.0163097]
Abstract
Post-traumatic stress disorder (PTSD) is characterized by altered functional connectivity of the amygdala complexes at rest. However, amygdala complex connectivity during conscious and subconscious threat processing remains to be elucidated. Here, we investigate specific connectivity of the centromedial amygdala (CMA) and basolateral amygdala (BLA) during conscious and subconscious processing of trauma-related words among individuals with PTSD (n = 26) as compared to non-trauma-exposed controls (n = 20). Psycho-physiological interaction analyses were performed using the right and left amygdala complexes as regions of interest during conscious and subconscious trauma word processing. These analyses revealed differential, context-dependent responses of each amygdala seed during trauma processing in PTSD. Specifically, relative to controls, during subconscious processing, individuals with PTSD demonstrated increased connectivity of the CMA with the superior frontal gyrus, accompanied by a pattern of decreased connectivity between the BLA and the superior colliculus. During conscious processing, relative to controls, individuals with PTSD showed increased connectivity between the CMA and the pulvinar. These findings demonstrate alterations in amygdala subregion functional connectivity in PTSD and highlight the disruption of the innate alarm network during both conscious and subconscious trauma processing in this disorder.
Affiliation(s)
- Daniela Rabellino
- Department of Psychiatry, University of Western Ontario, London, ON, Canada
- Maria Densmore
- Department of Psychiatry, University of Western Ontario, London, ON, Canada
- Imaging Division, Lawson Health Research Institute, London, ON, Canada
- Paul A. Frewen
- Department of Psychiatry, University of Western Ontario, London, ON, Canada
- Department of Psychology, University of Western Ontario, London, ON, Canada
- Department of Neuroscience, University of Western Ontario, London, ON, Canada
- Jean Théberge
- Department of Psychiatry, University of Western Ontario, London, ON, Canada
- Imaging Division, Lawson Health Research Institute, London, ON, Canada
- Department of Medical Biophysics, University of Western Ontario, London, ON, Canada
- Margaret C. McKinnon
- Mood Disorders Program, St. Joseph's Healthcare, Hamilton, ON, Canada
- Department of Psychiatry and Behavioural Neurosciences, McMaster University, Hamilton, ON, Canada
- Homewood Research Institute, Guelph, ON, Canada
- Ruth A. Lanius
- Department of Psychiatry, University of Western Ontario, London, ON, Canada
- Imaging Division, Lawson Health Research Institute, London, ON, Canada
- Department of Neuroscience, University of Western Ontario, London, ON, Canada
32
Symons AE, El-Deredy W, Schwartze M, Kotz SA. The Functional Role of Neural Oscillations in Non-Verbal Emotional Communication. Front Hum Neurosci 2016; 10:239. [PMID: 27252638] [PMCID: PMC4879141] [DOI: 10.3389/fnhum.2016.00239]
Abstract
Effective interpersonal communication depends on the ability to perceive and interpret nonverbal emotional expressions from multiple sensory modalities. Current theoretical models propose that visual and auditory emotion perception involves a network of brain regions including the primary sensory cortices, the superior temporal sulcus (STS), and orbitofrontal cortex (OFC). However, relatively little is known about how the dynamic interplay between these regions gives rise to the perception of emotions. In recent years, there has been increasing recognition of the importance of neural oscillations in mediating neural communication within and between functional neural networks. Here we review studies investigating changes in oscillatory activity during the perception of visual, auditory, and audiovisual emotional expressions, and aim to characterize the functional role of neural oscillations in nonverbal emotion perception. Findings from the reviewed literature suggest that theta band oscillations most consistently differentiate between emotional and neutral expressions. While early theta synchronization appears to reflect the initial encoding of emotionally salient sensory information, later fronto-central theta synchronization may reflect the further integration of sensory information with internal representations. Additionally, gamma synchronization reflects facilitated sensory binding of emotional expressions within regions such as the OFC, STS, and, potentially, the amygdala. However, the evidence is more ambiguous when it comes to the role of oscillations within the alpha and beta frequencies, which vary as a function of modality (or modalities), presence or absence of predictive information, and attentional or task demands. Thus, the synchronization of neural oscillations within specific frequency bands mediates the rapid detection, integration, and evaluation of emotional expressions. Moreover, the functional coupling of oscillatory activity across multiple frequency bands supports a predictive coding model of multisensory emotion perception in which emotional facial and body expressions facilitate the processing of emotional vocalizations.
Affiliation(s)
- Ashley E. Symons
- School of Psychological Sciences, University of Manchester, Manchester, UK
- Wael El-Deredy
- School of Psychological Sciences, University of Manchester, Manchester, UK
- School of Biomedical Engineering, Universidad de Valparaiso, Valparaiso, Chile
- Michael Schwartze
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, Netherlands
- Sonja A. Kotz
- School of Psychological Sciences, University of Manchester, Manchester, UK
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, Netherlands
33
The sound of emotions-Towards a unifying neural network perspective of affective sound processing. Neurosci Biobehav Rev 2016; 68:96-110. [PMID: 27189782] [DOI: 10.1016/j.neubiorev.2016.05.002]
Abstract
Affective sounds are an integral part of the natural and social environment that shape and influence behavior across a multitude of species. In human primates, these affective sounds span a repertoire of environmental and human sounds when we vocalize or produce music. In terms of neural processing, cortical and subcortical brain areas constitute a distributed network that supports our listening experience to these affective sounds. Taking an exhaustive cross-domain view, we accordingly suggest a common neural network that facilitates the decoding of the emotional meaning from a wide source of sounds rather than a traditional view that postulates distinct neural systems for specific affective sound types. This new integrative neural network view unifies the decoding of affective valence in sounds, and ascribes differential as well as complementary functional roles to specific nodes within a common neural network. It also highlights the importance of an extended brain network beyond the central limbic and auditory brain systems engaged in the processing of affective sounds.
34
Frühholz S, van der Zwaag W, Saenz M, Belin P, Schobert AK, Vuilleumier P, Grandjean D. Neural decoding of discriminative auditory object features depends on their socio-affective valence. Soc Cogn Affect Neurosci 2016; 11:1638-49. [PMID: 27217117] [DOI: 10.1093/scan/nsw066]
Abstract
Human voices consist of specific patterns of acoustic features that are considerably enhanced during affective vocalizations. These acoustic features are presumably used by listeners to accurately discriminate between acoustically or emotionally similar vocalizations. Here we used high-field 7T functional magnetic resonance imaging in human listeners together with a so-called experimental 'feature elimination approach' to investigate neural decoding of three important voice features of two affective valence categories (i.e. aggressive and joyful vocalizations). We found a valence-dependent sensitivity to vocal pitch (f0) dynamics and to spectral high-frequency cues already at the level of the auditory thalamus. Furthermore, pitch dynamics and harmonics-to-noise ratio (HNR) showed overlapping, but again valence-dependent sensitivity in tonotopic cortical fields during the neural decoding of aggressive and joyful vocalizations, respectively. For joyful vocalizations we also revealed sensitivity in the inferior frontal cortex (IFC) to the HNR and pitch dynamics. The data thus indicate that several auditory regions were sensitive to multiple, rather than single, discriminative voice features. Furthermore, some regions partly showed a valence-dependent hypersensitivity to certain features, such as pitch dynamic sensitivity in core auditory regions and in the IFC for aggressive vocalizations, and sensitivity to high-frequency cues in auditory belt and parabelt regions for joyful vocalizations.
Affiliation(s)
- Sascha Frühholz
- Department of Psychology, University of Zurich, 8050 Zurich, Switzerland; Swiss Center for Affective Sciences, University of Geneva, 1202 Geneva, Switzerland
- Wietske van der Zwaag
- Center for Biomedical Imaging, Ecole Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
- Melissa Saenz
- Laboratoire de Recherche en Neuroimagerie, Department of Clinical Neurosciences, CHUV, 1011 Lausanne, Switzerland; Institute of Bioengineering, Ecole Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
- Pascal Belin
- Department of Psychology, University of Glasgow, Glasgow G12 8QQ, UK
- Anne-Kathrin Schobert
- Laboratory for Neurology and Imaging of Cognition, Department of Neurology and Department of Neuroscience, Medical School, University of Geneva, 1211 Geneva, Switzerland
- Patrik Vuilleumier
- Swiss Center for Affective Sciences, University of Geneva, 1202 Geneva, Switzerland; Laboratory for Neurology and Imaging of Cognition, Department of Neurology and Department of Neuroscience, Medical School, University of Geneva, 1211 Geneva, Switzerland
- Didier Grandjean
- Swiss Center for Affective Sciences, University of Geneva, 1202 Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, University of Geneva, Geneva 1205, Switzerland
35
Heitmann CY, Feldker K, Neumeister P, Zepp BM, Peterburs J, Zwitserlood P, Straube T. Abnormal brain activation and connectivity to standardized disorder-related visual scenes in social anxiety disorder. Hum Brain Mapp 2016; 37:1559-72. [PMID: 26806013] [PMCID: PMC6867294] [DOI: 10.1002/hbm.23120]
Abstract
Our understanding of altered emotional processing in social anxiety disorder (SAD) is hampered by a heterogeneity of findings, which is probably due to the vastly different methods and materials used so far. This is why the present functional magnetic resonance imaging (fMRI) study investigated immediate disorder-related threat processing in 30 SAD patients and 30 healthy controls (HC) with a novel, standardized set of highly ecologically valid, disorder-related complex visual scenes. SAD patients rated disorder-related as compared with neutral scenes as more unpleasant, arousing and anxiety-inducing than HC. On the neural level, disorder-related as compared with neutral scenes evoked differential responses in SAD patients in a widespread emotion processing network including (para-)limbic structures (e.g. amygdala, insula, thalamus, globus pallidus) and cortical regions (e.g. dorsomedial prefrontal cortex (dmPFC), posterior cingulate cortex (PCC), and precuneus). Functional connectivity analysis yielded an altered interplay between PCC/precuneus and paralimbic (insula) as well as cortical regions (dmPFC, precuneus) in SAD patients, which emphasizes a central role for PCC/precuneus in disorder-related scene processing. Hyperconnectivity of globus pallidus with amygdala, anterior cingulate cortex (ACC) and medial prefrontal cortex (mPFC) additionally underlines the relevance of this region in socially anxious threat processing. Our findings stress the importance of specific disorder-related stimuli for the investigation of altered emotion processing in SAD. Disorder-related threat processing in SAD reveals anomalies at multiple stages of emotion processing which may be linked to increased anxiety and to dysfunctionally elevated levels of self-referential processing reported in previous studies.
Affiliation(s)
- Carina Yvonne Heitmann
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Muenster, Germany
- Katharina Feldker
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Muenster, Germany
- Paula Neumeister
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Muenster, Germany
- Britta Maria Zepp
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Muenster, Germany
- Jutta Peterburs
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Muenster, Germany
- Thomas Straube
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Muenster, Germany
36
Hrybouski S, Aghamohammadi-Sereshki A, Madan CR, Shafer AT, Baron CA, Seres P, Beaulieu C, Olsen F, Malykhin NV. Amygdala subnuclei response and connectivity during emotional processing. Neuroimage 2016; 133:98-110. [PMID: 26926791] [DOI: 10.1016/j.neuroimage.2016.02.056]
Abstract
The involvement of the human amygdala in emotion-related processing has been studied using functional magnetic resonance imaging (fMRI) for many years. However, despite the amygdala comprising several subnuclei, most studies have investigated the role of the entire amygdala in the processing of emotions. Here we combined a novel anatomical tracing protocol with event-related high-resolution fMRI acquisition to study the responsiveness of the amygdala subnuclei to negative emotional stimuli and to examine intra-amygdala functional connectivity. The greatest sensitivity to negative emotional stimuli was observed in the centromedial amygdala, where the hemodynamic response elicited by negative emotional stimuli was of greater amplitude and peaked later than for neutral stimuli. Connectivity patterns converge with extant findings in animals, such that the centromedial amygdala was more strongly connected with the nuclei of the basal amygdala than with the lateral amygdala. Current findings provide evidence of functional specialization within the human amygdala.
Affiliation(s)
- Stanislau Hrybouski
- Neuroscience and Mental Health Institute, University of Alberta, Edmonton, AB T6G 2E1, Canada
- Christopher R Madan
- Department of Psychology, University of Alberta, Edmonton, AB T6G 2E9, Canada; Department of Psychology, Boston College, Chestnut Hill, MA 02467, USA
- Andrea T Shafer
- Neuroscience and Mental Health Institute, University of Alberta, Edmonton, AB T6G 2E1, Canada; Institute of Gerontology, Wayne State University, Detroit, MI 48202, USA
- Corey A Baron
- Department of Biomedical Engineering, University of Alberta, Edmonton, AB T6G 2V2, Canada
- Peter Seres
- Department of Biomedical Engineering, University of Alberta, Edmonton, AB T6G 2V2, Canada
- Christian Beaulieu
- Department of Biomedical Engineering, University of Alberta, Edmonton, AB T6G 2V2, Canada
- Fraser Olsen
- Neuroscience and Mental Health Institute, University of Alberta, Edmonton, AB T6G 2E1, Canada; Department of Biomedical Engineering, University of Alberta, Edmonton, AB T6G 2V2, Canada
- Nikolai V Malykhin
- Neuroscience and Mental Health Institute, University of Alberta, Edmonton, AB T6G 2E1, Canada; Department of Biomedical Engineering, University of Alberta, Edmonton, AB T6G 2V2, Canada; Department of Psychiatry, University of Alberta, Edmonton, AB T6G 2B7, Canada
37
Korb S, Frühholz S, Grandjean D. Reappraising the voices of wrath. Soc Cogn Affect Neurosci 2015; 10:1644-60. [PMID: 25964502] [PMCID: PMC4666101] [DOI: 10.1093/scan/nsv051]
Abstract
Cognitive reappraisal recruits prefrontal and parietal cortical areas. Because of the near exclusive usage in past research of visual stimuli to elicit emotions, it is unknown whether the same neural substrates underlie the reappraisal of emotions induced through other sensory modalities. Here, participants reappraised their emotions in order to increase or decrease their emotional response to angry prosody, or maintained their attention to it in a control condition. Neural activity was monitored with fMRI, and connectivity was investigated by using psychophysiological interaction analyses. A right-sided network encompassing the superior temporal gyrus, the superior temporal sulcus and the inferior frontal gyrus was found to underlie the processing of angry prosody. During reappraisal to increase emotional response, the left superior frontal gyrus showed increased activity and became functionally coupled to right auditory cortices. During reappraisal to decrease emotional response, a network that included the medial frontal gyrus and posterior parietal areas showed increased activation and greater functional connectivity with bilateral auditory regions. Activations pertaining to this network were more extended on the right side of the brain. Although directionality cannot be inferred from PPI analyses, the findings suggest a similar frontoparietal network for the reappraisal of visually and auditorily induced negative emotions.
Affiliation(s)
- Sebastian Korb
- International School for Advanced Studies (SISSA), Trieste, Italy
- Sascha Frühholz
- Swiss Center for Affective Sciences, Geneva, Switzerland, and Department of Psychology and Educational Sciences, University of Geneva, Switzerland
- Didier Grandjean
- Swiss Center for Affective Sciences, Geneva, Switzerland, and Department of Psychology and Educational Sciences, University of Geneva, Switzerland
38
Pannese A, Grandjean D, Frühholz S. Subcortical processing in auditory communication. Hear Res 2015; 328:67-77. [DOI: 10.1016/j.heares.2015.07.003]
39
Péron J, Frühholz S, Ceravolo L, Grandjean D. Structural and functional connectivity of the subthalamic nucleus during vocal emotion decoding. Soc Cogn Affect Neurosci 2015; 11:349-56. [PMID: 26400857] [DOI: 10.1093/scan/nsv118]
Abstract
Our understanding of the role played by the subthalamic nucleus (STN) in human emotion has recently advanced with STN deep brain stimulation, a neurosurgical treatment for Parkinson's disease and obsessive-compulsive disorder. However, the potential presence of several confounds related to pathological models raises the question of how much they affect the relevance of observations regarding the physiological function of the STN itself. This underscores the crucial importance of obtaining evidence from healthy participants. In this study, we tested the structural and functional connectivity between the STN and other brain regions related to vocal emotion in a healthy population by combining diffusion tensor imaging and psychophysiological interaction analysis from a high-resolution functional magnetic resonance imaging study. As expected, we showed that the STN is functionally connected to the structures involved in emotional prosody decoding, notably the orbitofrontal cortex, inferior frontal gyrus, auditory cortex, pallidum and amygdala. These functional results were corroborated by probabilistic fiber tracking, which revealed that the left STN is structurally connected to the amygdala and the orbitofrontal cortex. These results confirm, in healthy participants, the role played by the STN in human emotion and its structural and functional connectivity with the brain network involved in vocal emotions.
Affiliation(s)
- Julie Péron, Sascha Frühholz, Leonardo Ceravolo, Didier Grandjean
- Neuroscience of Emotion and Affective Dynamics laboratory, Department of Psychology and Swiss Centre for Affective Sciences, Campus Biotech, University of Geneva, Switzerland

40
Lei Y, Shao Y, Wang L, Ye E, Jin X, Zou F, Zhai T, Li W, Yang Z. Altered superficial amygdala-cortical functional link in resting state after 36 hours of total sleep deprivation. J Neurosci Res 2015; 93:1795-803. [DOI: 10.1002/jnr.23601]
Affiliation(s)
- Yu Lei, Yongcong Shao, Lubin Wang, Enmao Ye, Xiao Jin, Feng Zou, Tianye Zhai
- Cognitive and Mental Health Research Center, Beijing Institute of Basic Medical Sciences, Beijing, People's Republic of China
- Wuju Li
- Beijing Institute of Basic Medical Sciences, Beijing, People's Republic of China
- Zheng Yang
- Cognitive and Mental Health Research Center, Beijing Institute of Basic Medical Sciences, Beijing, People's Republic of China

41
Pernet CR, McAleer P, Latinus M, Gorgolewski KJ, Charest I, Bestelmeyer PEG, Watson RH, Fleming D, Crabbe F, Valdes-Sosa M, Belin P. The human voice areas: Spatial organization and inter-individual variability in temporal and extra-temporal cortices. Neuroimage 2015; 119:164-74. [PMID: 26116964 PMCID: PMC4768083 DOI: 10.1016/j.neuroimage.2015.06.050]
Abstract
fMRI studies increasingly examine functions and properties of non-primary areas of human auditory cortex. However, there is currently no standardized localization procedure to reliably identify specific areas across individuals, such as the standard 'localizers' available in the visual domain. Here we present an fMRI 'voice localizer' scan allowing rapid and reliable localization of the voice-sensitive 'temporal voice areas' (TVA) of human auditory cortex. We describe results obtained using this standardized localizer scan in a large cohort of normal adult subjects. Most participants (94%) showed bilateral patches of significantly greater response to vocal than non-vocal sounds along the superior temporal sulcus/gyrus (STS/STG). Individual activation patterns, although reproducible, showed high inter-individual variability in precise anatomical location. Cluster analysis of individual peaks from the large cohort highlighted three bilateral clusters of voice sensitivity, or "voice patches", along posterior (TVAp), mid (TVAm), and anterior (TVAa) STS/STG, respectively. A series of extra-temporal areas, including bilateral inferior prefrontal cortex and the amygdalae, showed small but reliable voice sensitivity as part of a large-scale cerebral voice network. Stimuli for the voice localizer scan and probabilistic maps in MNI space are available for download.
Highlights: three "voice patches" along human superior temporal gyrus/sulcus; anatomical location reproducible within- but variable between-individuals; extended voice-processing network includes amygdala and prefrontal cortex; stimulus material for the "voice localizer" scan available for download.
Affiliation(s)
- Cyril R Pernet
- Centre for Clinical Brain Sciences, Neuroimaging Sciences, The University of Edinburgh, United Kingdom
- Phil McAleer
- Institute of Neuroscience and Psychology, University of Glasgow, United Kingdom
- Marianne Latinus
- Institut des Neurosciences de La Timone, UMR 7289, CNRS & Université Aix-Marseille, France
- Ian Charest
- Cognition and Brain Sciences Unit, Medical Research Council, Cambridge, United Kingdom
- Rebecca H Watson
- Faculty of Psychology and Neuroscience, Maastricht University, The Netherlands
- David Fleming
- Institute of Neuroscience and Psychology, University of Glasgow, United Kingdom
- Frances Crabbe
- Institute of Neuroscience and Psychology, University of Glasgow, United Kingdom
- Pascal Belin
- Institute of Neuroscience and Psychology, University of Glasgow, United Kingdom; Institut des Neurosciences de La Timone, UMR 7289, CNRS & Université Aix-Marseille, France; Département de Psychologie, Université de Montréal, Canada

42
Asymmetrical effects of unilateral right or left amygdala damage on auditory cortical processing of vocal emotions. Proc Natl Acad Sci U S A 2015; 112:1583-8. [PMID: 25605886 DOI: 10.1073/pnas.1411315112]
Abstract
We tested whether human amygdala lesions impair vocal processing in intact cortical networks. In two functional MRI experiments, patients with unilateral amygdala resection either listened to voices and nonvocal sounds or heard binaural vocalizations with attention directed toward or away from emotional information on one side. In experiment 1, all patients showed reduced activation to voices in the ipsilesional auditory cortex. In experiment 2, emotional voices evoked increased activity in both the auditory cortex and the intact amygdala for right-damaged patients, whereas no such effects were found for left-damaged amygdala patients. Furthermore, the left inferior frontal cortex was functionally connected with the intact amygdala in right-damaged patients, but only with homologous right frontal areas and not with the amygdala in left-damaged patients. Thus, unilateral amygdala damage leads to globally reduced ipsilesional cortical voice processing, but only left amygdala lesions are sufficient to suppress the enhanced auditory cortical processing of vocal emotions.
43
Abstract
Accents provide information about the speaker's geographical, socio-economic, and ethnic background. Research in applied psychology and sociolinguistics suggests that we generally prefer our own accent to other varieties of our native language and attribute more positive traits to it. Despite the widespread influence of accents on social interactions and on educational and work settings, the neural underpinnings of this social bias toward our own accent, and what may drive it, are unexplored. We measured brain activity while participants from two different geographical backgrounds listened passively to three English accent types embedded in an adaptation design. Cerebral activity in several regions, including the bilateral amygdalae, revealed a significant interaction between the participants' own accent and the accent they listened to: whereas repetition of own accents elicited an enhanced neural response, repetition of the other group's accent resulted in reduced responses classically associated with adaptation. Our findings suggest that increased social relevance of, or greater emotional sensitivity to, in-group accents may underlie the own-accent bias. Our results provide a neural marker for the bias associated with accents and show, for the first time, that the neural response to speech is partly shaped by the geographical background of the listener.
Affiliation(s)
- Pascal Belin
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK; International Laboratories for Brain, Music and Sound Research, Université de Montréal & McGill University, Montréal, Canada; Institut des Neurosciences de La Timone, UMR 7289, CNRS & Aix-Marseille Université, Marseille, France
- D Robert Ladd
- School of Philosophy, Psychology and Language Sciences, University of Edinburgh, UK

44
The functional profile of the human amygdala in affective processing: Insights from intracranial recordings. Cortex 2014; 60:10-33. [DOI: 10.1016/j.cortex.2014.06.010]
45
Frühholz S, Trost W, Grandjean D. The role of the medial temporal limbic system in processing emotions in voice and music. Prog Neurobiol 2014; 123:1-17. [PMID: 25291405 DOI: 10.1016/j.pneurobio.2014.09.003]
Abstract
Subcortical brain structures of the limbic system, such as the amygdala, are thought to decode the emotional value of sensory information. Recent neuroimaging studies, as well as lesion studies in patients, have shown that the amygdala is sensitive to emotions in voice and music. Similarly, the hippocampus, another part of the temporal limbic system (TLS), is responsive to vocal and musical emotions, but its specific roles in emotional processing from music and especially from voices have been largely neglected. Here we review recent research on vocal and musical emotions, and outline commonalities and differences in the neural processing of emotions in the TLS in terms of emotional valence, emotional intensity and arousal, as well as in terms of acoustic and structural features of voices and music. We summarize the findings in a neural framework including several subcortical and cortical functional pathways between the auditory system and the TLS. This framework proposes that some vocal expressions might already receive a fast emotional evaluation via a subcortical pathway to the amygdala, whereas cortical pathways to the TLS are thought to be equally used for vocal and musical emotions. While the amygdala might be specifically involved in a coarse decoding of the emotional value of voices and music, the hippocampus might process more complex vocal and musical emotions, and might have an important role especially for the decoding of musical emotions by providing memory-based and contextual associations.
Affiliation(s)
- Sascha Frühholz, Wiebke Trost, Didier Grandjean
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology, University of Geneva, Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland

46
Sensory contribution to vocal emotion deficit in Parkinson's disease after subthalamic stimulation. Cortex 2014; 63:172-83. [PMID: 25282055 DOI: 10.1016/j.cortex.2014.08.023]
Abstract
Subthalamic nucleus (STN) deep brain stimulation in Parkinson's disease induces modifications in the recognition of emotion from voices (or emotional prosody). Nevertheless, the underlying mechanisms are still only poorly understood, and the role of acoustic features in these deficits has yet to be elucidated. Our aim was to identify the influence of acoustic features on changes in emotional prosody recognition following STN stimulation in Parkinson's disease. To this end, we analysed the performances of patients on vocal emotion recognition in pre- versus post-operative groups, as well as of matched controls, entering the acoustic features of the stimuli into our statistical models. Analyses revealed that the post-operative biased ratings on the Fear scale when patients listened to happy stimuli were correlated with loudness, while the biased ratings on the Sadness scale when they listened to happiness were correlated with fundamental frequency (F0). Furthermore, disturbed ratings on the Happiness scale when the post-operative patients listened to sadness were found to be correlated with F0. These results suggest that inadequate use of acoustic features following subthalamic stimulation has a significant impact on emotional prosody recognition in patients with Parkinson's disease, affecting the extraction and integration of acoustic cues during emotion perception.
47
Milesi V, Cekic S, Péron J, Frühholz S, Cristinzio C, Seeck M, Grandjean D. Multimodal emotion perception after anterior temporal lobectomy (ATL). Front Hum Neurosci 2014; 8:275. [PMID: 24839437 PMCID: PMC4017134 DOI: 10.3389/fnhum.2014.00275]
Abstract
In the context of emotion information processing, several studies have demonstrated the involvement of the amygdala in emotion perception, for unimodal and multimodal stimuli. However, it seems that not only the amygdala, but several regions around it, may also play a major role in multimodal emotional integration. In order to investigate the contribution of these regions to multimodal emotion perception, five patients who had undergone unilateral anterior temporal lobe resection were exposed to both unimodal (vocal or visual) and audiovisual emotional and neutral stimuli. In a classic paradigm, participants were asked to rate the emotional intensity of angry, fearful, joyful, and neutral stimuli on visual analog scales. Compared with matched controls, patients exhibited impaired categorization of joyful expressions, whether the stimuli were auditory, visual, or audiovisual. Patients confused joyful faces with neutral faces, and joyful prosody with surprise. In the case of fear, unlike matched controls, patients provided lower intensity ratings for visual stimuli than for vocal and audiovisual ones. Fearful faces were frequently confused with surprised ones. When we controlled for lesion size, we no longer observed any overall difference between patients and controls in their ratings of emotional intensity on the target scales. Lesion size had the greatest effect on intensity perceptions and accuracy in the visual modality, irrespective of the type of emotion. These new findings suggest that a damaged amygdala, or a disrupted bundle between the amygdala and the ventral part of the occipital lobe, has a greater impact on emotion perception in the visual modality than it does in either the vocal or audiovisual one. We can surmise that patients are able to use the auditory information contained in multimodal stimuli to compensate for difficulty processing visually conveyed emotion.
Collapse
Affiliation(s)
- Valérie Milesi
- Swiss Center for Affective Sciences, University of Geneva Geneva, Switzerland ; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva Geneva, Switzerland
| | - Sezen Cekic
- Swiss Center for Affective Sciences, University of Geneva Geneva, Switzerland ; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva Geneva, Switzerland
| | - Julie Péron
- Swiss Center for Affective Sciences, University of Geneva Geneva, Switzerland ; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva Geneva, Switzerland
| | - Sascha Frühholz
- Swiss Center for Affective Sciences, University of Geneva Geneva, Switzerland ; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva Geneva, Switzerland
| | - Chiara Cristinzio
- Swiss Center for Affective Sciences, University of Geneva Geneva, Switzerland ; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva Geneva, Switzerland ; Laboratory for Neurology and Imaging of Cognition, Department of Neurology and Department of Neuroscience, Medical School, University of Geneva Geneva, Switzerland
| | - Margitta Seeck
- Epilepsy Unit, Department of Neurology, Geneva University Hospital Geneva, Switzerland
| | - Didier Grandjean
- Swiss Center for Affective Sciences, University of Geneva Geneva, Switzerland ; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva Geneva, Switzerland
| |
Collapse
|
48
|
Frühholz S, Klaas HS, Patel S, Grandjean D. Talking in Fury: The Cortico-Subcortical Network Underlying Angry Vocalizations. Cereb Cortex 2014; 25:2752-62. [PMID: 24735671 DOI: 10.1093/cercor/bhu074]
Abstract
Although the neural basis for the perception of vocal emotions has been described extensively, the neural basis for the expression of vocal emotions is almost unknown. Here, we asked participants both to repeat and to express high-arousing angry vocalizations on command (i.e., evoked expressions). First, repeated expressions elicited activity in the left middle superior temporal gyrus (STG), pointing to a short auditory memory trace for the repetition of vocal expressions. Evoked expressions activated the left hippocampus, suggesting the retrieval of long-term stored scripts. Second, angry compared with neutral expressions elicited activity in the inferior frontal cortex (IFC) and the dorsal basal ganglia (BG), specifically during evoked expressions. Angry expressions also activated the amygdala and anterior cingulate cortex (ACC), and the latter correlated with pupil size as an indicator of bodily arousal during emotional output behavior. Though uncorrelated, both ACC activity and pupil diameter were also increased during repetition trials, indicating increased control demands during the more constrained task of precisely repeating prosodic intonations. Finally, different acoustic measures of angry expressions were associated with activity in the left STG, bilateral inferior frontal gyrus, and dorsal BG.
Affiliation(s)
- Sascha Frühholz
- Neuroscience of Emotion and Affective Dynamics Laboratory (NEAD), Department of Psychology, University of Geneva, Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
- Hannah S Klaas
- Neuroscience of Emotion and Affective Dynamics Laboratory (NEAD), Department of Psychology, University of Geneva, Geneva, Switzerland
- Sona Patel
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
- Didier Grandjean
- Neuroscience of Emotion and Affective Dynamics Laboratory (NEAD), Department of Psychology, University of Geneva, Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland

49
Abrams DA, Lynch CJ, Cheng KM, Phillips J, Supekar K, Ryali S, Uddin LQ, Menon V. Underconnectivity between voice-selective cortex and reward circuitry in children with autism. Proc Natl Acad Sci U S A 2013; 110:12060-5. [PMID: 23776244 PMCID: PMC3718181 DOI: 10.1073/pnas.1302982110]
Abstract
Individuals with autism spectrum disorders (ASDs) often show insensitivity to the human voice, a deficit that is thought to play a key role in communication deficits in this population. The social motivation theory of ASD predicts that impaired function of reward and emotional systems impedes children with ASD from actively engaging with speech. Here we explore this theory by investigating distributed brain systems underlying human voice perception in children with ASD. Using resting-state functional MRI data acquired from 20 children with ASD and 19 age- and intelligence quotient-matched typically developing children, we examined intrinsic functional connectivity of voice-selective bilateral posterior superior temporal sulcus (pSTS). Children with ASD showed a striking pattern of underconnectivity between left-hemisphere pSTS and distributed nodes of the dopaminergic reward pathway, including bilateral ventral tegmental areas and nucleus accumbens, left-hemisphere insula, orbitofrontal cortex, and ventromedial prefrontal cortex. Children with ASD also showed underconnectivity between right-hemisphere pSTS, a region known for processing speech prosody, and the orbitofrontal cortex and amygdala, brain regions critical for emotion-related associative learning. The degree of underconnectivity between voice-selective cortex and reward pathways predicted symptom severity for communication deficits in children with ASD. Our results suggest that weak connectivity of voice-selective cortex and brain structures involved in reward and emotion may impair the ability of children with ASD to experience speech as a pleasurable stimulus, thereby impacting language and social skill development in this population. Our study provides support for the social motivation theory of ASD.
Affiliation(s)
- Vinod Menon
- Departments of Psychiatry and Behavioral Sciences; Neurology and Neurological Sciences; Program in Neuroscience; and Stanford Institute for Neuro-Innovation and Translational Neurosciences, Stanford University School of Medicine, Palo Alto, CA 94304

50
Bach DR, Hurlemann R, Dolan RJ. Unimpaired discrimination of fearful prosody after amygdala lesion. Neuropsychologia 2013; 51:2070-4. [PMID: 23871880 PMCID: PMC3819998 DOI: 10.1016/j.neuropsychologia.2013.07.005]
Abstract
Prosody (i.e. speech melody) is an important cue to infer an interlocutor's emotional state, complementing information from face expression and body posture. Inferring fear from face expression is reported as impaired after amygdala lesions. It remains unclear whether this deficit is specific to face expression, or is a more global fear recognition deficit. Here, we report data from two twins with bilateral amygdala lesions due to Urbach-Wiethe syndrome and show they are unimpaired in a multinomial emotional prosody classification task. In a two-alternative forced choice task, they demonstrate increased ability to discriminate fearful and neutral prosody, the opposite of what would be expected under a hypothesis of a global role for the amygdala in fear recognition. Hence, we provide evidence that the amygdala is not required for recognition of fearful prosody.
Highlights: prosody recognition is assessed in two twin sisters with amygdala lesions due to Urbach-Wiethe syndrome; in a multinomial classification task, there is no impairment; in a two-alternative forced choice task, patients discriminate fearful and neutral prosody better than a control sample; this study provides evidence that the amygdala has no general role in fear recognition.
Affiliation(s)
- Dominik R Bach
- Wellcome Trust Centre for Neuroimaging, University College London, UK; Zurich University Hospital of Psychiatry, Switzerland