1
Ghasemahmad Z, Mrvelj A, Panditi R, Sharma B, Perumal KD, Wenstrup JJ. Emotional vocalizations alter behaviors and neurochemical release into the amygdala. eLife 2024; 12:RP88838. [PMID: 39008352; PMCID: PMC11249735; DOI: 10.7554/elife.88838]
Abstract
The basolateral amygdala (BLA), a brain center of emotional expression, contributes to acoustic communication by first interpreting the meaning of social sounds in the context of the listener's internal state, then organizing the appropriate behavioral responses. We propose that modulatory neurochemicals such as acetylcholine (ACh) and dopamine (DA) provide internal-state signals to the BLA while an animal listens to social vocalizations. We tested this hypothesis in a vocal playback experiment using highly affective vocal sequences associated with either mating or restraint, then sampled and analyzed fluids within the BLA for a broad range of neurochemicals and observed the behavioral responses of adult male and female mice. In male mice, playback of restraint vocalizations increased ACh release and usually decreased DA release, while playback of mating sequences evoked the opposite neurochemical release patterns. In non-estrus female mice, patterns of ACh and DA release with mating playback were similar to males. Estrus females, however, showed increased ACh, associated with vigilance, as well as increased DA, associated with reward-seeking. Experimental groups that showed increased ACh release also showed the largest increases in an aversive behavior. These neurochemical release patterns and several behavioral responses depended on a single prior experience with the mating and restraint behaviors. Our results support a model in which ACh and DA provide contextual information to sound-analyzing BLA neurons that modulate their output to downstream brain regions controlling behavioral responses to social vocalizations.
Affiliation(s)
- Zahra Ghasemahmad
- Department of Anatomy and Neurobiology and Hearing Research Group, Northeast Ohio Medical University, Rootstown, United States
- School of Biomedical Sciences, Kent State University, Kent, United States
- Brain Health Research Institute, Kent State University, Kent, United States
- Aaron Mrvelj
- Department of Anatomy and Neurobiology and Hearing Research Group, Northeast Ohio Medical University, Rootstown, United States
- Rishitha Panditi
- Department of Anatomy and Neurobiology and Hearing Research Group, Northeast Ohio Medical University, Rootstown, United States
- Bhavya Sharma
- Department of Anatomy and Neurobiology and Hearing Research Group, Northeast Ohio Medical University, Rootstown, United States
- Karthic Drishna Perumal
- Department of Anatomy and Neurobiology and Hearing Research Group, Northeast Ohio Medical University, Rootstown, United States
- Jeffrey J Wenstrup
- Department of Anatomy and Neurobiology and Hearing Research Group, Northeast Ohio Medical University, Rootstown, United States
- School of Biomedical Sciences, Kent State University, Kent, United States
- Brain Health Research Institute, Kent State University, Kent, United States
2
Talwar S, Barbero FM, Calce RP, Collignon O. Automatic Brain Categorization of Discrete Auditory Emotion Expressions. Brain Topogr 2023; 36:854-869. [PMID: 37639111; PMCID: PMC10522533; DOI: 10.1007/s10548-023-00983-8]
Abstract
Seamlessly extracting emotional information from voices is crucial for efficient interpersonal communication. However, it remains unclear how the brain categorizes vocal expressions of emotion beyond the processing of their acoustic features. In our study, we developed a new approach combining electroencephalographic recordings (EEG) in humans with a frequency-tagging paradigm to 'tag' automatic neural responses to specific categories of emotion expressions. Participants were presented with a periodic stream of heterogeneous non-verbal emotional vocalizations belonging to five emotion categories: anger, disgust, fear, happiness and sadness at 2.5 Hz (stimuli length of 350 ms with a 50 ms silent gap between stimuli). Importantly, unknown to the participant, a specific emotion category appeared at a target presentation rate of 0.83 Hz that would elicit an additional response in the EEG spectrum only if the brain discriminates the target emotion category from other emotion categories and generalizes across heterogeneous exemplars of the target emotion category. Stimuli were matched across emotion categories for harmonicity-to-noise ratio, spectral center of gravity and pitch. Additionally, participants were presented with a scrambled version of the stimuli with identical spectral content and periodicity but disrupted intelligibility. Both types of sequences had comparable envelopes and early auditory peripheral processing computed via the simulation of the cochlear response. We observed that in addition to the responses at the general presentation frequency (2.5 Hz) in both intact and scrambled sequences, a greater peak in the EEG spectrum at the target emotion presentation rate (0.83 Hz) and its harmonics emerged in the intact sequence in comparison to the scrambled sequence. 
The greater response at the target frequency in the intact sequence, together with our stimulus-matching procedure, suggests that the categorical brain response elicited by a specific emotion is at least partially independent from the low-level acoustic features of the sounds. Moreover, responses at the fearful and happy vocalization presentation rates elicited different topographies and different temporal dynamics, suggesting that different discrete emotions are represented differently in the brain. Our paradigm revealed the brain's ability to automatically categorize non-verbal vocal emotion expressions objectively (at a predefined frequency of interest), behavior-free, rapidly (in a few minutes of recording time) and robustly (with a high signal-to-noise ratio), making it a useful tool to study vocal emotion processing and auditory categorization in general and in populations where behavioral assessments are more challenging.
Affiliation(s)
- Siddharth Talwar
- Institute for Research in Psychology (IPSY) & Neuroscience (IoNS), Louvain Bionics, University of Louvain (UCLouvain), Louvain, Belgium
- Francesca M Barbero
- Institute for Research in Psychology (IPSY) & Neuroscience (IoNS), Louvain Bionics, University of Louvain (UCLouvain), Louvain, Belgium
- Roberta P Calce
- Institute for Research in Psychology (IPSY) & Neuroscience (IoNS), Louvain Bionics, University of Louvain (UCLouvain), Louvain, Belgium
- Olivier Collignon
- Institute for Research in Psychology (IPSY) & Neuroscience (IoNS), Louvain Bionics, University of Louvain (UCLouvain), Louvain, Belgium
- School of Health Sciences, HES-SO Valais-Wallis, The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
3
Disentangling emotional signals in the brain: an ALE meta-analysis of vocal affect perception. Cogn Affect Behav Neurosci 2023; 23:17-29. [PMID: 35945478; DOI: 10.3758/s13415-022-01030-y]
Abstract
Recent advances in neuroimaging research on vocal emotion perception have revealed voice-sensitive areas specialized in processing affect. Experimental data on this subject are varied, investigating a wide range of emotions through different vocal signals and task demands. The present meta-analysis was designed to disentangle this diversity of results by summarizing neuroimaging data in the vocal emotion perception literature. Data from 44 experiments contrasting emotional and neutral voices were analyzed to assess brain areas involved in vocal affect perception in general, as well as depending on the type of voice signal (speech prosody or vocalizations), the task demands (implicit or explicit attention to emotions), and the specific emotion perceived. Results reassessed a consistent bilateral network of Emotional Voice Areas consisting of the superior temporal cortex (STC) and primary auditory regions. Specific activations and lateralization of these regions, as well as additional areas (insula, middle temporal gyrus), were further modulated by signal type and task demands. Exploring the sparser data on single emotions also suggested the recruitment of other regions (insula, inferior frontal gyrus, frontal operculum) for specific aspects of each emotion. These novel meta-analytic results suggest that while the bulk of vocal affect processing is localized in the STC, the complexity and variety of such vocal signals entail functional specificities in complex and varied cortical (and potentially subcortical) response pathways.
4
Cheng L, Chiu Y, Lin Y, Li W, Hong T, Yang C, Shih C, Yeh T, Tseng WI, Yu H, Hsieh J, Chen L. Long-term musical training induces white matter plasticity in emotion and language networks. Hum Brain Mapp 2022; 44:5-17. [PMID: 36005832; PMCID: PMC9783470; DOI: 10.1002/hbm.26054]
Abstract
Numerous studies have reported that long-term musical training can affect brain functionality and induce structural alterations in the brain. Singing is a form of vocal musical expression with an unparalleled capacity for communicating emotion; however, there has been relatively little research on neuroplasticity at the network level in vocalists (i.e., noninstrumental musicians). Our objective in this study was to elucidate changes in the neural network architecture following long-term training in the musical arts. We employed a framework based on graph theory to depict the connectivity and efficiency of structural networks in the brain, based on diffusion-weighted images obtained from 35 vocalists, 27 pianists, and 33 nonmusicians. Our results revealed that musical training (both voice and piano) could enhance connectivity among emotion-related regions of the brain, such as the amygdala. We also discovered that voice training reshaped the architecture of experience-dependent networks, such as those involved in vocal motor control, sensory feedback, and language processing. It appears that vocal-related changes in areas such as the insula, paracentral lobule, supramarginal gyrus, and putamen are associated with functional segregation, multisensory integration, and enhanced network interconnectivity. These results suggest that long-term musical training can strengthen or prune white matter connectivity networks in an experience-dependent manner.
Affiliation(s)
- Li-Kai Cheng
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan
- Yu-Hsien Chiu
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan
- Ying-Chia Lin
- Center for Advanced Imaging Innovation and Research (CAIR), NYU Grossman School of Medicine, New York, New York, USA; Center for Biomedical Imaging, Department of Radiology, NYU Grossman School of Medicine, New York, New York, USA
- Wei-Chi Li
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan
- Tzu-Yi Hong
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan
- Ching-Ju Yang
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan
- Chung-Heng Shih
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan
- Tzu-Chen Yeh
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan
- Wen-Yih Isaac Tseng
- Institute of Medical Device and Imaging, National Taiwan University College of Medicine, Taipei, Taiwan
- Hsin-Yen Yu
- Graduate Institute of Arts and Humanities Education, Taipei National University of the Arts, Taipei, Taiwan
- Jen-Chuen Hsieh
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan; Brain Research Center, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Biological Science and Technology, College of Biological Science and Technology, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Li-Fen Chen
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan; Brain Research Center, National Yang Ming Chiao Tung University, Taipei, Taiwan
5
Steiner F, Fernandez N, Dietziker J, Stämpfli SP, Seifritz E, Rey A, Frühholz FS. Affective speech modulates a cortico-limbic network in real time. Prog Neurobiol 2022; 214:102278. [DOI: 10.1016/j.pneurobio.2022.102278]
6
Bertram T, Hoffmann Ayala D, Huber M, Brandl F, Starke G, Sorg C, Mulej Bratec S. Human threat circuits: Threats of pain, aggressive conspecific, and predator elicit distinct BOLD activations in the amygdala and hypothalamus. Front Psychiatry 2022; 13:1063238. [PMID: 36733415; PMCID: PMC9887727; DOI: 10.3389/fpsyt.2022.1063238]
Abstract
INTRODUCTION: Threat processing, enabled by threat circuits, is supported by a remarkably conserved neural architecture across mammals. Threatening stimuli relevant for most species include the threat of being attacked by a predator or an aggressive conspecific and the threat of pain. Extensive studies in rodents have associated the threats of pain, predator attack and aggressive conspecific attack with distinct neural circuits in subregions of the amygdala, the hypothalamus and the periaqueductal gray. Bearing in mind the considerable conservation of both the anatomy of these regions and defensive behaviors across mammalian species, we hypothesized that distinct brain activity corresponding to the threats of pain, predator attack and aggressive conspecific attack would also exist in human subcortical brain regions.
METHODS: Forty healthy female subjects underwent fMRI scanning during aversive classical conditioning. In close analogy to rodent studies, threat stimuli consisted of painful electric shocks, a short video clip of an attacking bear and a short video clip of an attacking man. Threat processing was conceptualized as the expectation of the aversive stimulus during the presentation of the conditioned stimulus.
RESULTS: Our results demonstrate differential brain activations in the left and right amygdala as well as in the left hypothalamus for the threats of pain, predator attack and aggressive conspecific attack, for the first time showing distinct threat-related brain activity within the human subcortical brain. Specifically, the threat of pain showed an increase of activity in the left and right amygdala and the left hypothalamus compared to the threat of conspecific attack (pain > conspecific), and increased activity in the left amygdala compared to the threat of predator attack (pain > predator). Threat of conspecific attack revealed heightened activity in the right amygdala, both in comparison to threat of pain (conspecific > pain) and threat of predator attack (conspecific > predator). Finally, for the condition threat of predator attack we found increased activity in the bilateral amygdala and the hypothalamus when compared to threat of conspecific attack (predator > conspecific). No significant clusters were found for the contrast predator attack > pain.
CONCLUSION: Results suggest that threat type-specific circuits identified in rodents might be conserved in the human brain.
Affiliation(s)
- Teresa Bertram
- Department of Neuroradiology, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany; TUM-NIC Neuroimaging Center, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany; Department of Psychiatry and Psychotherapy, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany
- Daniel Hoffmann Ayala
- Department of Neuroradiology, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany; TUM-NIC Neuroimaging Center, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany; Department of Neurosurgery, Klinikum Großhadern, Ludwig-Maximilians-University, Munich, Germany
- Maria Huber
- Department of Neuroradiology, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany; TUM-NIC Neuroimaging Center, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany
- Felix Brandl
- Department of Neuroradiology, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany; TUM-NIC Neuroimaging Center, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany; Department of Psychiatry and Psychotherapy, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany
- Georg Starke
- Department of Neuroradiology, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany; TUM-NIC Neuroimaging Center, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany; College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Christian Sorg
- Department of Neuroradiology, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany; TUM-NIC Neuroimaging Center, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany; Department of Psychiatry and Psychotherapy, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany
- Satja Mulej Bratec
- Department of Neuroradiology, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany; TUM-NIC Neuroimaging Center, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany; Department of Psychology, Faculty of Arts, University of Maribor, Maribor, Slovenia
7
Olfactory learning and memory in the greater short-nosed fruit bat Cynopterus sphinx: the influence of conspecifics distress calls. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2021; 207:667-679. [PMID: 34426872; DOI: 10.1007/s00359-021-01505-2]
Abstract
This study was designed to test whether Cynopterus sphinx distress calls influence olfactory learning and memory in conspecifics. Bats were exposed to distress calls/playbacks (PBs) of distress calls/modified calls and were then trained to novel odors. Bats exposed to distress calls/PBs of distress calls made significantly fewer feeding attempts and bouts than bats exposed to modified calls, and this exposure significantly induced the expression of c-Fos in the caudomedial neostriatum (NCM) and the amygdala compared to bats exposed to modified calls and trained controls. However, the expression of c-Fos in the hippocampus was not significantly different between the experimental groups. Further, protein phosphatase-1 (PP-1) expression was significantly lower, and the expression levels of the E1A homologue of CREB-binding protein (CBP) (P300), brain-derived neurotrophic factor (BDNF) and its tyrosine kinase B1 (TrkB1) receptor were significantly higher, in the hippocampus of controls/bats exposed to modified calls compared to bats exposed to distress calls/PBs of distress calls. Exposure to the call possibly alters the reciprocal interaction between the amygdala and the hippocampus, accordingly regulating the expression levels of PP-1, P300 and BDNF and its receptor TrkB1 following training to the novel odor. Thus, the learning and memory consolidation processes were disrupted, as reflected in fewer feeding attempts and bouts. This model may be helpful for understanding the contributions of stressful social communication to human disorders.
8
Arantes ME, Cendes F. In Search of a New Paradigm for Functional Magnetic Resonance Experimentation With Language. Front Neurol 2020; 11:588. [PMID: 32670188; PMCID: PMC7326770; DOI: 10.3389/fneur.2020.00588]
Abstract
Human language can convey a broad range of entities and relationships through processes that are highly complex and structured. All of these processes happen somewhere inside our brains, and one way of pinpointing their locations is through the use of functional magnetic resonance imaging (fMRI). The great obstacle when experimenting with complex processes, however, is the need to control them while still obtaining data that are representative of reality. When it comes to language, a phenomenon that is interactional in nature and integrates a wide range of processes, a question emerges concerning how compatible it is with the current experimental methodology, and how much of it is lost in order to fit the controlled experimental environment. Because of its particularities, the fMRI technique imposes several limitations on the expression of language during experimentation. This paper discusses the different conceptions of language as a research object, the difficulties of combining this object with the requirements of fMRI, and the current perspectives for this field of research.
Affiliation(s)
- Fernando Cendes
- Laboratory of Neuroimaging, Department of Neurology, University of Campinas—UNICAMP, Campinas, Brazil
9
Abstract
The processing of emotional nonlinguistic information in speech is defined as emotional prosody. This auditory nonlinguistic information is essential in the decoding of social interactions and in our capacity to adapt and react adequately by taking into account contextual information. An integrated model is proposed at the functional and brain levels, encompassing 5 main systems that involve cortical and subcortical neural networks relevant for the processing of emotional prosody in its major dimensions, including perception and sound organization; related action tendencies; and associated values that integrate complex social contexts and ambiguous situations.
Affiliation(s)
- Didier Grandjean
- Department of Psychology and Educational Sciences and Swiss Center for Affective Sciences, University of Geneva, Switzerland
10
Lin H, Müller-Bardorff M, Gathmann B, Brieke J, Mothes-Lasch M, Bruchmann M, Miltner WHR, Straube T. Stimulus arousal drives amygdalar responses to emotional expressions across sensory modalities. Sci Rep 2020; 10:1898. [PMID: 32024891; PMCID: PMC7002496; DOI: 10.1038/s41598-020-58839-1]
Abstract
The factors that drive amygdalar responses to emotionally significant stimuli are still a matter of debate; in particular, the proneness of the amygdala to respond to negatively-valenced stimuli has been discussed controversially. Furthermore, it is uncertain whether the amygdala responds in a modality-general fashion or whether modality-specific idiosyncrasies exist. Therefore, the present functional magnetic resonance imaging (fMRI) study systematically investigated amygdalar responses to the valence and arousal of emotional expressions across visual and auditory modalities. During scanning, participants performed a gender judgment task while prosodic and facial emotional expressions were presented. The stimuli varied in valence and arousal, comprising neutral, happy and angry expressions of high and low emotional intensity. Results demonstrate amygdalar activation as a function of stimulus arousal and the associated emotional intensity, regardless of stimulus valence. Furthermore, arousal-driven amygdalar responding did not depend on the visual or auditory modality of the emotional expressions. Thus, the current results are consistent with the notion that the amygdala codes general stimulus relevance across visual and auditory modalities, irrespective of valence. In addition, whole-brain analyses revealed that effects in visual and auditory areas were driven mainly by highly intense emotional facial and vocal stimuli, respectively, suggesting modality-specific representations of emotional expressions in auditory and visual cortices.
Affiliation(s)
- Huiyan Lin
- Institute of Applied Psychology, School of Public Administration, Guangdong University of Finance, 510521, Guangzhou, China
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149, Muenster, Germany
- Miriam Müller-Bardorff
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149, Muenster, Germany
- Bettina Gathmann
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149, Muenster, Germany
- Jaqueline Brieke
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149, Muenster, Germany
- Martin Mothes-Lasch
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149, Muenster, Germany
- Maximilian Bruchmann
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149, Muenster, Germany
- Wolfgang H R Miltner
- Department of Clinical Psychology, Friedrich Schiller University of Jena, 07743, Jena, Germany
- Thomas Straube
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149, Muenster, Germany
11
Grisendi T, Reynaud O, Clarke S, Da Costa S. Processing pathways for emotional vocalizations. Brain Struct Funct 2019; 224:2487-2504. [DOI: 10.1007/s00429-019-01912-x]
12
Domínguez-Borràs J, Guex R, Méndez-Bértolo C, Legendre G, Spinelli L, Moratti S, Frühholz S, Mégevand P, Arnal L, Strange B, Seeck M, Vuilleumier P. Human amygdala response to unisensory and multisensory emotion input: No evidence for superadditivity from intracranial recordings. Neuropsychologia 2019; 131:9-24. [PMID: 31158367; DOI: 10.1016/j.neuropsychologia.2019.05.027]
Abstract
The amygdala is crucially implicated in processing emotional information from various sensory modalities. However, there is a dearth of knowledge concerning the integration and relative time-course of its responses across different channels, i.e., for auditory, visual, and audiovisual input. Functional neuroimaging data in humans point to a possible role of this region in the multimodal integration of emotional signals, but direct evidence for anatomical and temporal overlap of unisensory and multisensory-evoked responses in the amygdala is still lacking. We recorded event-related potentials (ERPs) and oscillatory activity from 9 amygdalae using intracranial electroencephalography (iEEG) in patients prior to epilepsy surgery, and compared electrophysiological responses to fearful, happy, or neutral stimuli presented either as voices alone, faces alone, or voices and faces delivered simultaneously. Results showed differential amygdala responses to fearful stimuli, in comparison to neutral, reaching significance 100-200 ms post-onset for auditory, visual and audiovisual stimuli. At later latencies, ∼400 ms post-onset, the amygdala response to audiovisual information was also amplified in comparison to auditory or visual stimuli alone. Importantly, however, we found no evidence for either super- or subadditivity effects in any of the bimodal responses. These results suggest, first, that emotion processing in the amygdala occurs at globally similar early stages of perceptual processing for auditory, visual, and audiovisual inputs; second, that overall larger responses to multisensory information occur at later stages only; and third, that the underlying mechanisms of this multisensory gain may reflect a purely additive response to concomitant visual and auditory inputs. Our findings provide novel insights into emotion processing across the sensory pathways, and their convergence within the limbic system.
Affiliation(s)
- Judith Domínguez-Borràs
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland
- Raphaël Guex
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland
- Guillaume Legendre
- Campus Biotech, Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland
- Laurent Spinelli
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland
- Stephan Moratti
- Department of Experimental Psychology, Complutense University of Madrid, Spain; Laboratory for Clinical Neuroscience, Centre for Biomedical Technology, Universidad Politécnica de Madrid, Spain
- Sascha Frühholz
- Department of Psychology, University of Zurich, Switzerland
- Pierre Mégevand
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland
- Luc Arnal
- Campus Biotech, Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland
- Bryan Strange
- Laboratory for Clinical Neuroscience, Centre for Biomedical Technology, Universidad Politécnica de Madrid, Spain; Department of Neuroimaging, Alzheimer's Disease Research Centre, Reina Sofia-CIEN Foundation, Madrid, Spain
- Margitta Seeck
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland
- Patrik Vuilleumier
- Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland
13
Koch K, Stegmaier S, Schwarz L, Erb M, Thomas M, Scheffler K, Wildgruber D, Nieratschker V, Ethofer T. CACNA1C risk variant affects microstructural connectivity of the amygdala. Neuroimage Clin 2019; 22:101774. [PMID: 30909026 PMCID: PMC6434179 DOI: 10.1016/j.nicl.2019.101774] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2018] [Revised: 01/29/2019] [Accepted: 03/10/2019] [Indexed: 11/28/2022]
Abstract
Deficits in perception of emotional prosody have been described in patients with affective disorders at the behavioral and neural levels. In the current study, we use an imaging genetics approach to examine the impact of CACNA1C, one of the most promising genetic risk factors for psychiatric disorders, on prosody processing at a behavioral, functional, and microstructural level. Using functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI), we examined key areas involved in prosody processing, i.e., the amygdala and voice areas, in a healthy population. We found stronger activation to emotional than neutral prosody in the voice areas and the amygdala, but CACNA1C rs1006737 genotype had no influence on fMRI activity. However, significant microstructural differences (i.e., mean diffusivity) between CACNA1C rs1006737 risk allele carriers and non-carriers were found in the amygdala, but not the voice areas. These modifications in brain architecture associated with CACNA1C might reflect a neurobiological marker predisposing to affective disorders and concomitant alterations in emotion perception.
Affiliation(s)
- Katharina Koch
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany.
- Sophia Stegmaier
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany.
- Lena Schwarz
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany.
- Michael Erb
- Department of Biomedical Resonance, University of Tuebingen, Tuebingen, Germany.
- Mara Thomas
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany.
- Klaus Scheffler
- Department of Biomedical Resonance, University of Tuebingen, Tuebingen, Germany; Max-Planck-Institute for Biological Cybernetics, University of Tuebingen, Tuebingen, Germany.
- Dirk Wildgruber
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany.
- Vanessa Nieratschker
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany; Werner Reichardt Center for Integrative Neuroscience, University of Tuebingen, Tuebingen, Germany.
- Thomas Ethofer
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany; Department of Biomedical Resonance, University of Tuebingen, Tuebingen, Germany.
14
Hellbernd N, Sammler D. Neural bases of social communicative intentions in speech. Soc Cogn Affect Neurosci 2019; 13:604-615. [PMID: 29771359 PMCID: PMC6022564 DOI: 10.1093/scan/nsy034] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2017] [Accepted: 05/13/2018] [Indexed: 11/15/2022] Open
Abstract
Our ability to understand others’ communicative intentions in speech is key to successful social interaction. Indeed, misunderstanding an ‘excuse me’ as an apology when it is meant as criticism may have important consequences. Recent behavioural studies have provided evidence that prosody, that is, vocal tone, is an important indicator of speakers’ intentions. Using a novel audio-morphing paradigm, the present functional magnetic resonance imaging study examined the neurocognitive mechanisms that allow listeners to ‘read’ speakers’ intents from vocal prosodic patterns. Participants categorized prosodic expressions that gradually varied in their acoustics between criticism, doubt, and suggestion. Categorizing typical exemplars of the three intentions induced activations along the ventral auditory stream, complemented by the amygdala and the mentalizing system. These findings likely depict the stepwise conversion of external perceptual information into abstract prosodic categories and internal social semantic concepts, including the speaker’s mental state. Ambiguous tokens, in turn, involved cingulo-opercular areas known to assist decision-making in case of conflicting cues. Auditory and decision-making processes were flexibly coupled with the amygdala, depending on prosodic typicality, indicating enhanced categorization efficiency of overtly relevant, meaningful prosodic signals. Altogether, the results point to a model in which auditory prosodic categorization and socio-inferential conceptualization cooperate to translate perceived vocal tone into a coherent representation of the speaker’s intent.
Affiliation(s)
- Nele Hellbernd
- Otto Hahn Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, D-04103 Leipzig, Germany.
- Daniela Sammler
- Otto Hahn Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, D-04103 Leipzig, Germany.
15
Agnew ZK, Banissy MJ, McGettigan C, Walsh V, Scott SK. Investigating the Neural Basis of Theta Burst Stimulation to Premotor Cortex on Emotional Vocalization Perception: A Combined TMS-fMRI Study. Front Hum Neurosci 2018; 12:150. [PMID: 29867402 PMCID: PMC5962765 DOI: 10.3389/fnhum.2018.00150] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2017] [Accepted: 04/04/2018] [Indexed: 12/01/2022] Open
Abstract
Previous studies have established a role for premotor cortex in the processing of auditory emotional vocalizations. Inhibitory continuous theta burst transcranial magnetic stimulation (cTBS) applied to right premotor cortex selectively increases the reaction time in a same-different task, implying a causal role for right ventral premotor cortex (PMv) in the processing of emotional sounds. However, little is known about the functional networks to which PMv contributes across the cortical hemispheres. In light of these data, the present study aimed to investigate how and where in the brain cTBS affects activity during the processing of auditory emotional vocalizations. Using functional neuroimaging, we report that inhibitory cTBS applied to the right premotor cortex (compared to a vertex control site) results in three distinct response profiles: following stimulation of PMv, widespread frontoparietal cortices, including a site close to the target site, and the parahippocampal gyrus displayed an increase in activity, whereas the reverse response profile was apparent in a set of midline structures and the right inferior frontal gyrus (IFG). A third response profile was seen in the left supramarginal gyrus, in which activity was greater post-stimulation at both stimulation sites. Finally, whilst previous studies have shown a condition-specific behavioral effect following cTBS to premotor cortex, we did not find a condition-specific neural change in BOLD response. These data demonstrate a complex relationship between cTBS and activity in widespread neural networks and are discussed in relation to both emotional processing and the neural basis of cTBS.
Affiliation(s)
- Zarinah K Agnew
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom; Otolaryngology-Head & Neck Surgery Clinic, University of California, San Francisco, San Francisco, CA, United States.
- Michael J Banissy
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom; Department of Psychology, Goldsmiths, University of London, London, United Kingdom.
- Vincent Walsh
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom.
- Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom.
16
Koch K, Stegmaier S, Schwarz L, Erb M, Reinl M, Scheffler K, Wildgruber D, Ethofer T. Neural correlates of processing emotional prosody in unipolar depression. Hum Brain Mapp 2018; 39:3419-3427. [PMID: 29682814 DOI: 10.1002/hbm.24185] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2017] [Revised: 03/15/2018] [Accepted: 04/09/2018] [Indexed: 12/11/2022] Open
Abstract
Major depressive disorder (MDD) is characterized by biased emotion perception. In the auditory domain, MDD patients have been shown to exhibit attenuated processing of positive emotions expressed by speech melody (prosody). So far, no neuroimaging studies examining the neural basis of altered processing of emotional prosody in MDD are available. In this study, we addressed this issue by examining the emotion bias in MDD during evaluation of happy, neutral, and angry prosodic stimuli on a five-point Likert scale during functional magnetic resonance imaging (fMRI). As expected, MDD patients rated happy prosody as less intense than healthy controls (HC) did. At the neural level, stronger activation in the middle superior temporal gyrus (STG) and the amygdala was found in all participants when processing emotional as compared to neutral prosody. MDD patients exhibited increased activation of the amygdala during prosody processing irrespective of valence, while no significant differences between groups were found for the STG, indicating that altered processing of prosodic emotions in MDD occurs within the amygdala rather than in auditory areas. Concurring with the valence-specific behavioral effect of attenuated evaluation of positive prosodic stimuli, activation within the left amygdala of MDD patients correlated with ratings of happy, but not neutral or angry, prosody. Our study provides first insights into the neural basis of a reduced experience of positive information and an abnormally increased amygdala activity during prosody processing.
Affiliation(s)
- Katharina Koch
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany.
- Sophia Stegmaier
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany.
- Lena Schwarz
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany.
- Michael Erb
- Department of Biomedical Resonance, University of Tuebingen, Tuebingen, Germany.
- Maren Reinl
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany.
- Klaus Scheffler
- Department of Biomedical Resonance, University of Tuebingen, Tuebingen, Germany; Max-Planck-Institute for Biological Cybernetics, University of Tuebingen, Tuebingen, Germany.
- Dirk Wildgruber
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany.
- Thomas Ethofer
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany; Department of Biomedical Resonance, University of Tuebingen, Tuebingen, Germany.
17
Gruber T, Grandjean D. A comparative neurological approach to emotional expressions in primate vocalizations. Neurosci Biobehav Rev 2016; 73:182-190. [PMID: 27993605 DOI: 10.1016/j.neubiorev.2016.12.004] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2016] [Revised: 12/01/2016] [Accepted: 12/03/2016] [Indexed: 12/20/2022]
Abstract
Different approaches from different research domains have crystallized debate over primate emotional processing and vocalizations in recent decades. On one side, researchers disagree about whether emotional states or processes in animals truly compare to those in humans. On the other, a long-held assumption is that primate vocalizations are innate communicative signals over which nonhuman primates have limited control, and that they mirror the emotional state of the individuals producing them, despite growing evidence of intentional production for some vocalizations. Our goal is to connect both sides of the discussion by deciphering how the emotional content of primate calls compares with emotional vocal signals in humans. We focus particularly on the neural bases of primate emotions and vocalizations to identify the cerebral structures underlying emotion, vocal production, and comprehension in primates, and discuss whether particular structures or neuronal networks evolved solely for specific functions in the human brain. Finally, we propose a model to classify emotional vocalizations in primates according to four dimensions (learning, control, emotional, meaning) to allow calls to be compared across species.
Affiliation(s)
- Thibaud Gruber
- Swiss Center for Affective Sciences and Department of Psychology and Sciences of Education, University of Geneva, Geneva, Switzerland.
- Didier Grandjean
- Swiss Center for Affective Sciences and Department of Psychology and Sciences of Education, University of Geneva, Geneva, Switzerland.
18
Amygdala and auditory cortex exhibit distinct sensitivity to relevant acoustic features of auditory emotions. Cortex 2016; 85:116-125. [PMID: 27855282 DOI: 10.1016/j.cortex.2016.10.013] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2016] [Revised: 09/19/2016] [Accepted: 10/19/2016] [Indexed: 11/23/2022]
Abstract
Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions.
19
Korb S, Frühholz S, Grandjean D. Reappraising the voices of wrath. Soc Cogn Affect Neurosci 2015; 10:1644-60. [PMID: 25964502 PMCID: PMC4666101 DOI: 10.1093/scan/nsv051] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2014] [Revised: 04/08/2015] [Accepted: 05/07/2015] [Indexed: 11/12/2022] Open
Abstract
Cognitive reappraisal recruits prefrontal and parietal cortical areas. Because past research has relied almost exclusively on visual stimuli to elicit emotions, it is unknown whether the same neural substrates underlie the reappraisal of emotions induced through other sensory modalities. Here, participants reappraised their emotions in order to increase or decrease their emotional response to angry prosody, or maintained their attention to it in a control condition. Neural activity was monitored with fMRI, and connectivity was investigated using psychophysiological interaction (PPI) analyses. A right-sided network encompassing the superior temporal gyrus, the superior temporal sulcus, and the inferior frontal gyrus was found to underlie the processing of angry prosody. During reappraisal to increase emotional response, the left superior frontal gyrus showed increased activity and became functionally coupled to right auditory cortices. During reappraisal to decrease emotional response, a network that included the medial frontal gyrus and posterior parietal areas showed increased activation and greater functional connectivity with bilateral auditory regions. Activations pertaining to this network were more extended on the right side of the brain. Although directionality cannot be inferred from PPI analyses, the findings suggest a similar frontoparietal network for the reappraisal of visually and auditorily induced negative emotions.
Affiliation(s)
- Sebastian Korb
- International School for Advanced Studies (SISSA), Trieste, Italy.
- Sascha Frühholz
- Swiss Center for Affective Sciences, Geneva, Switzerland, and Department of Psychology and Educational Sciences, University of Geneva, Switzerland.
- Didier Grandjean
- Swiss Center for Affective Sciences, Geneva, Switzerland, and Department of Psychology and Educational Sciences, University of Geneva, Switzerland.
20
Pannese A, Grandjean D, Frühholz S. Subcortical processing in auditory communication. Hear Res 2015; 328:67-77. [DOI: 10.1016/j.heares.2015.07.003] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/03/2015] [Revised: 06/23/2015] [Accepted: 07/01/2015] [Indexed: 12/21/2022]
21
Pattyn T, Van Den Eede F, Vanneste S, Cassiers L, Veltman DJ, Van De Heyning P, Sabbe BCG. Tinnitus and anxiety disorders: A review. Hear Res 2015; 333:255-265. [PMID: 26342399 DOI: 10.1016/j.heares.2015.08.014] [Citation(s) in RCA: 129] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/17/2015] [Revised: 05/06/2015] [Accepted: 08/27/2015] [Indexed: 12/28/2022]
Abstract
BACKGROUND The most common form of tinnitus is a subjective, auditory, and distressing phantom phenomenon. Comorbidity with depression is high, but other important psychiatric disorders, such as anxiety disorders, have received less attention. The current paper reviews the literature on the associations between tinnitus and anxiety disorders and the underlying pathophysiology, and discusses the clinical implications. METHODOLOGY PubMed and Web of Science were searched for all articles published up until October 2014 using combinations of the following search strings: "Tinnitus", "Anxiety disorder", "Panic Disorder", "Generalized Anxiety Disorder", "Post traumatic stress disorder", "PTSD", "Social Phobia", "Phobia Disorder", "Obsessive Compulsive Disorder", and "Agoraphobia". RESULTS A total of 117 relevant papers were included. A 45% lifetime prevalence of anxiety disorders is reported in tinnitus populations, and an important overlap is suggested in the (sub)cortical brain areas and cortico-subcortical networks involved in attention, distress, and memory functions. A disturbed hypothalamic-pituitary-adrenal axis function can be found in tinnitus and in anxiety disorders but, in comorbidity, the direction of the dysfunction is unclear. CONCLUSION Comorbidity is high, and screening for and treatment of anxiety disorders are recommended in moderate to severe tinnitus. Given the overlap in the structural and functional brain circuitries involved, their management could theoretically improve (subjective) levels of tinnitus, although further empirical research on this topic is required.
Affiliation(s)
- T Pattyn
- University of Antwerp, Collaborative Antwerp Psychiatric Research Institute (CAPRI), Antwerp, Belgium; University Department of Psychiatry, Campus Antwerp University Hospital, Antwerp, Belgium.
- F Van Den Eede
- University Department of Psychiatry, Campus Antwerp University Hospital, Antwerp, Belgium; University of Antwerp, Collaborative Antwerp Psychiatric Research Institute (CAPRI), Antwerp, Belgium.
- S Vanneste
- University of Antwerp, Department of Translational Neuroscience, Faculty of Medicine, Antwerp, Belgium; University of Texas, School of Behavioral and Brain Sciences, Dallas, Richardson, TX, United States.
- L Cassiers
- University of Antwerp, Collaborative Antwerp Psychiatric Research Institute (CAPRI), Antwerp, Belgium; University Department of Psychiatry, Campus Antwerp University Hospital, Antwerp, Belgium.
- D J Veltman
- VU University Medical Centre, Department of Psychiatry and EMGO Institute of Health and Care Research and Neuroscience Campus Amsterdam, Amsterdam, The Netherlands.
- P Van De Heyning
- University of Antwerp, Department of Translational Neuroscience, Faculty of Medicine, Antwerp, Belgium; Department of Otorhinolaryngology and Head & Neck Surgery, Antwerp University Hospital, Antwerp, Belgium.
- B C G Sabbe
- University of Antwerp, Collaborative Antwerp Psychiatric Research Institute (CAPRI), Antwerp, Belgium; University Department of Psychiatry, Campus Psychiatric Hospital Duffel, Duffel, Belgium.
22
Asymmetrical effects of unilateral right or left amygdala damage on auditory cortical processing of vocal emotions. Proc Natl Acad Sci U S A 2015; 112:1583-8. [PMID: 25605886 DOI: 10.1073/pnas.1411315112] [Citation(s) in RCA: 45] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
We tested whether human amygdala lesions impair vocal processing in intact cortical networks. In two functional MRI experiments, patients with unilateral amygdala resection either listened to voices and nonvocal sounds or heard binaural vocalizations with attention directed toward or away from emotional information on one side. In experiment 1, all patients showed reduced activation to voices in the ipsilesional auditory cortex. In experiment 2, emotional voices evoked increased activity in both the auditory cortex and the intact amygdala for right-damaged patients, whereas no such effects were found for left-damaged amygdala patients. Furthermore, the left inferior frontal cortex was functionally connected with the intact amygdala in right-damaged patients, but only with homologous right frontal areas and not with the amygdala in left-damaged patients. Thus, unilateral amygdala damage leads to globally reduced ipsilesional cortical voice processing, but only left amygdala lesions are sufficient to suppress the enhanced auditory cortical processing of vocal emotions.
23
Jessen S, Kotz SA. Affect differentially modulates brain activation in uni- and multisensory body-voice perception. Neuropsychologia 2015; 66:134-43. [DOI: 10.1016/j.neuropsychologia.2014.10.038] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2014] [Revised: 09/22/2014] [Accepted: 10/30/2014] [Indexed: 10/24/2022]
24
Abstract
Accents provide information about the speaker's geographical, socio-economic, and ethnic background. Research in applied psychology and sociolinguistics suggests that we generally prefer our own accent to other varieties of our native language and attribute more positive traits to it. Despite the widespread influence of accents on social interactions and on educational and work settings, the neural underpinnings of this social bias toward our own accent, and what may drive it, are unexplored. We measured brain activity while participants from two different geographical backgrounds listened passively to three English accent types embedded in an adaptation design. Cerebral activity in several regions, including the bilateral amygdalae, revealed a significant interaction between the participants' own accent and the accent they listened to: while repetition of own accents elicited an enhanced neural response, repetition of the other group's accent resulted in reduced responses classically associated with adaptation. Our findings suggest that increased social relevance of, or greater emotional sensitivity to, in-group accents may underlie the own-accent bias. Our results provide a neural marker for the bias associated with accents and show, for the first time, that the neural response to speech is partly shaped by the geographical background of the listener.
Affiliation(s)
- Pascal Belin
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK; International Laboratories for Brain, Music and Sound Research, Université de Montréal & McGill University, Montréal, Canada; Institut des Neurosciences de La Timone, UMR 7289, CNRS & Aix-Marseille Université, Marseille, France.
- D Robert Ladd
- School of Philosophy, Psychology and Language Sciences, University of Edinburgh, UK.
25
Gebauer L, Skewes J, Hørlyck L, Vuust P. Atypical perception of affective prosody in Autism Spectrum Disorder. NEUROIMAGE-CLINICAL 2014; 6:370-8. [PMID: 25379450 PMCID: PMC4218934 DOI: 10.1016/j.nicl.2014.08.025] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/17/2014] [Revised: 08/12/2014] [Accepted: 08/31/2014] [Indexed: 11/30/2022]
Abstract
Autism Spectrum Disorder (ASD) is characterized by impairments in language and social–emotional cognition. Yet, findings of emotion recognition from affective prosody in individuals with ASD are inconsistent. This study investigated emotion recognition and neural processing of affective prosody in high-functioning adults with ASD relative to neurotypical (NT) adults. Individuals with ASD showed mostly typical brain activation of the fronto-temporal and subcortical brain regions in response to affective prosody. Yet, the ASD group showed a trend towards increased activation of the right caudate during processing of affective prosody and rated the emotional intensity lower than NT individuals. This is likely associated with increased attentional task demands in this group, which might contribute to social–emotional impairments.
Highlights:
- This study investigated processing of affective prosody in ASD.
- It included 19 high-functioning adults with ASD and 20 neurotypical (NT) adults.
- Behavioral measures of emotion recognition and fMRI were used.
- Individuals with ASD showed lower ratings of emotional intensity than NT adults.
- A trend towards increased activation of the caudate during affective prosody was found in ASD.
Affiliation(s)
- Line Gebauer
- Center of Functionally Integrative Neuroscience, Aarhus University, Building 10G, 5th Floor, Noerrebrogade 44, Aarhus C 8000, Denmark; Interacting Minds Centre, Aarhus University, Building 1483, 3rd Floor, Jens Chr. Skous Vej 4, Aarhus C 8000, Denmark.
- Joshua Skewes
- Interacting Minds Centre, Aarhus University, Building 1483, 3rd Floor, Jens Chr. Skous Vej 4, Aarhus C 8000, Denmark.
- Lone Hørlyck
- Center of Functionally Integrative Neuroscience, Aarhus University, Building 10G, 5th Floor, Noerrebrogade 44, Aarhus C 8000, Denmark.
- Peter Vuust
- Center of Functionally Integrative Neuroscience, Aarhus University, Building 10G, 5th Floor, Noerrebrogade 44, Aarhus C 8000, Denmark; Royal Academy of Music, Skovgaardsgade 2C, Aarhus C 8000, Denmark.
26
Frühholz S, Trost W, Grandjean D. The role of the medial temporal limbic system in processing emotions in voice and music. Prog Neurobiol 2014; 123:1-17. [PMID: 25291405 DOI: 10.1016/j.pneurobio.2014.09.003] [Citation(s) in RCA: 83] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2014] [Revised: 09/16/2014] [Accepted: 09/29/2014] [Indexed: 01/15/2023]
Abstract
Subcortical brain structures of the limbic system, such as the amygdala, are thought to decode the emotional value of sensory information. Recent neuroimaging studies, as well as lesion studies in patients, have shown that the amygdala is sensitive to emotions in voice and music. Similarly, the hippocampus, another part of the temporal limbic system (TLS), is responsive to vocal and musical emotions, but its specific roles in emotional processing from music and especially from voices have been largely neglected. Here we review recent research on vocal and musical emotions, and outline commonalities and differences in the neural processing of emotions in the TLS in terms of emotional valence, emotional intensity and arousal, as well as in terms of acoustic and structural features of voices and music. We summarize the findings in a neural framework including several subcortical and cortical functional pathways between the auditory system and the TLS. This framework proposes that some vocal expressions might already receive a fast emotional evaluation via a subcortical pathway to the amygdala, whereas cortical pathways to the TLS are thought to be equally used for vocal and musical emotions. While the amygdala might be specifically involved in a coarse decoding of the emotional value of voices and music, the hippocampus might process more complex vocal and musical emotions, and might have an important role especially for the decoding of musical emotions by providing memory-based and contextual associations.
Affiliation(s)
- Sascha Frühholz
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology, University of Geneva, Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland.
- Wiebke Trost
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology, University of Geneva, Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland.
- Didier Grandjean
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology, University of Geneva, Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland.
27
Schmidt AT, Hanten G, Li X, Wilde EA, Ibarra AP, Chu ZD, Helbling AR, Shah S, Levin HS. Emotional prosody and diffusion tensor imaging in children after traumatic brain injury. Brain Inj 2014; 27:1528-35. [PMID: 24266795 DOI: 10.3109/02699052.2013.828851] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
PRIMARY OBJECTIVE Brain structures and their white matter connections that may contribute to emotion processing and may be vulnerable to disruption by a traumatic brain injury (TBI) occurring in childhood have not been thoroughly explored. RESEARCH DESIGN AND METHODS The current investigation examines the relationship between diffusion tensor imaging (DTI) metrics, including fractional anisotropy (FA) and apparent diffusion coefficient (ADC), and 3-month post-injury performance on a task of emotional prosody recognition and a control task of phonological discrimination in a group of 91 children who sustained either a moderate-to-severe TBI (n = 45) or orthopaedic injury (OI) (n = 46). MAIN OUTCOMES AND RESULTS Brain-behaviour findings within OI participants confirmed relationships between emotional prosody performance and several white matter tracts (i.e. the cingulum bundle, genu of the corpus callosum, inferior longitudinal fasciculus (ILF), and the inferior fronto-occipital fasciculus (IFOF)). The cingulum and genu were also related to phonological discrimination performance. The TBI group demonstrated few strong brain-behaviour relationships, with significant findings emerging only in the cingulum bundle for Emotional Prosody and the genu for Phonological Processing. CONCLUSION The lack of clear relationships in the TBI group is discussed in terms of the likely disruption to cortical networks secondary to significant brain injuries.
Affiliation(s)
- Adam T Schmidt
- Department of Psychology and Philosophy, Sam Houston State University , Huntsville, TX , USA
28
Milesi V, Cekic S, Péron J, Frühholz S, Cristinzio C, Seeck M, Grandjean D. Multimodal emotion perception after anterior temporal lobectomy (ATL). Front Hum Neurosci 2014; 8:275. [PMID: 24839437 PMCID: PMC4017134 DOI: 10.3389/fnhum.2014.00275] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2013] [Accepted: 04/14/2014] [Indexed: 11/30/2022] Open
Abstract
In the context of emotion information processing, several studies have demonstrated the involvement of the amygdala in emotion perception, for unimodal and multimodal stimuli. However, it seems that not only the amygdala, but several regions around it, may also play a major role in multimodal emotional integration. In order to investigate the contribution of these regions to multimodal emotion perception, five patients who had undergone unilateral anterior temporal lobe resection were exposed to both unimodal (vocal or visual) and audiovisual emotional and neutral stimuli. In a classic paradigm, participants were asked to rate the emotional intensity of angry, fearful, joyful, and neutral stimuli on visual analog scales. Compared with matched controls, patients exhibited impaired categorization of joyful expressions, whether the stimuli were auditory, visual, or audiovisual. Patients confused joyful faces with neutral faces, and joyful prosody with surprise. In the case of fear, unlike matched controls, patients provided lower intensity ratings for visual stimuli than for vocal and audiovisual ones. Fearful faces were frequently confused with surprised ones. When we controlled for lesion size, we no longer observed any overall difference between patients and controls in their ratings of emotional intensity on the target scales. Lesion size had the greatest effect on intensity perceptions and accuracy in the visual modality, irrespective of the type of emotion. These new findings suggest that a damaged amygdala, or a disrupted bundle between the amygdala and the ventral part of the occipital lobe, has a greater impact on emotion perception in the visual modality than it does in either the vocal or audiovisual one. We can surmise that patients are able to use the auditory information contained in multimodal stimuli to compensate for difficulty processing visually conveyed emotion.
Affiliation(s)
- Valérie Milesi
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Sezen Cekic
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Julie Péron
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Sascha Frühholz
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Chiara Cristinzio
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland; Laboratory for Neurology and Imaging of Cognition, Department of Neurology and Department of Neuroscience, Medical School, University of Geneva, Geneva, Switzerland
- Margitta Seeck
- Epilepsy Unit, Department of Neurology, Geneva University Hospital, Geneva, Switzerland
- Didier Grandjean
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
29
Frühholz S, Klaas HS, Patel S, Grandjean D. Talking in Fury: The Cortico-Subcortical Network Underlying Angry Vocalizations. Cereb Cortex 2014; 25:2752-62. [PMID: 24735671 DOI: 10.1093/cercor/bhu074] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022] Open
Abstract
Although the neural basis for the perception of vocal emotions has been described extensively, the neural basis for the expression of vocal emotions is almost unknown. Here, we asked participants both to repeat and to express high-arousing angry vocalizations to command (i.e., evoked expressions). First, repeated expressions elicited activity in the left middle superior temporal gyrus (STG), pointing to a short auditory memory trace for the repetition of vocal expressions. Evoked expressions activated the left hippocampus, suggesting the retrieval of long-term stored scripts. Second, angry compared with neutral expressions elicited activity in the inferior frontal cortex (IFC) and the dorsal basal ganglia (BG), specifically during evoked expressions. Angry expressions also activated the amygdala and anterior cingulate cortex (ACC), and the latter correlated with pupil size as an indicator of bodily arousal during emotional output behavior. Though uncorrelated, both ACC activity and pupil diameter were also increased during repetition trials, indicating increased control demands during the more constrained production mode of precisely repeating prosodic intonations. Finally, different acoustic measures of angry expressions were associated with activity in the left STG, bilateral inferior frontal gyrus, and dorsal BG.
Affiliation(s)
- Sascha Frühholz
- Neuroscience of Emotion and Affective Dynamics Laboratory (NEAD), Department of Psychology, University of Geneva, Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
- Hannah S Klaas
- Neuroscience of Emotion and Affective Dynamics Laboratory (NEAD), Department of Psychology, University of Geneva, Geneva, Switzerland
- Sona Patel
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
- Didier Grandjean
- Neuroscience of Emotion and Affective Dynamics Laboratory (NEAD), Department of Psychology, University of Geneva, Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
30
Zimmer U, Koschutnig K, Ebner F, Ischebeck A. Successful contextual integration of loose mental associations as evidenced by emotional conflict-processing. PLoS One 2014; 9:e91470. [PMID: 24618674 PMCID: PMC3950074 DOI: 10.1371/journal.pone.0091470] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2013] [Accepted: 02/03/2014] [Indexed: 12/01/2022] Open
Abstract
Often we cannot resist emotional distraction, because emotions capture our attention. For example, in TV-commercials, tempting emotional voices add an emotional expression to a formerly neutral product. Here, we used a Stroop-like conflict paradigm as a tool to investigate whether emotional capture results in contextual integration of loose mental associations. Specifically, we tested whether the associatively connected meaning of an ignored auditory emotion with a non-emotional neutral visual target would yield a modulation of activation sensitive to emotional conflict in the brain. In an fMRI-study, nineteen participants detected the presence or absence of a little worm hidden in the picture of an apple, while ignoring a voice with an emotional sound of taste (delicious/disgusting). Our results indicate a modulation due to emotional conflict, pronounced most strongly when processing conflict in the context of disgust (conflict: disgust/no-worm vs. no conflict: disgust/worm). For conflict in the context of disgust, insula activity was increased, with activity correlating positively with reaction time in the conflict case. Conflict in the context of deliciousness resulted in increased amygdala activation, possibly due to the resulting “negative” emotion in incongruent versus congruent combinations. These results indicate that our associative stimulus-combinations showed a conflict-dependent modulation of activity in emotional brain areas. This shows that the emotional sounds were successfully contextually integrated with the loosely associated neutral pictures.
Affiliation(s)
- Ulrike Zimmer
- Department of Psychology, University of Graz, Graz, Austria
- Karl Koschutnig
- Department of Radiology, Medical University of Graz, Graz, Austria
- Franz Ebner
- Department of Radiology, Medical University of Graz, Graz, Austria
- Anja Ischebeck
- Department of Psychology, University of Graz, Graz, Austria
31
Brück C, Kreifelts B, Gößling-Arnold C, Wertheimer J, Wildgruber D. 'Inner voices': the cerebral representation of emotional voice cues described in literary texts. Soc Cogn Affect Neurosci 2014; 9:1819-27. [PMID: 24396008 DOI: 10.1093/scan/nst180] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
While non-verbal affective voice cues are generally recognized as a crucial behavioral guide in any day-to-day conversation, their role as a powerful source of information may extend well beyond close-up personal interactions and include other modes of communication, such as written discourse or literature. Building on the assumption that similarities between the different 'modes' of voice cues may not be limited to their functional role but may also include the cerebral mechanisms engaged in the decoding process, the present functional magnetic resonance imaging study aimed at exploring brain responses associated with processing emotional voice signals described in literary texts. Emphasis was placed on evaluating 'voice'-sensitive as well as task- and emotion-related modulations of brain activation frequently associated with the decoding of acoustic vocal cues. Obtained findings suggest several similarities with the perception of acoustic voice signals: results identify the superior temporal, lateral and medial frontal cortex as well as the posterior cingulate cortex and cerebellum as contributing to the decoding process, with similarities to acoustic voice perception reflected in a 'voice'-cue preference of temporal voice areas as well as an emotion-related modulation of the medial frontal cortex and a task-modulated response of the lateral frontal cortex.
Affiliation(s)
- Carolin Brück
- Department of Psychiatry and Psychotherapy, Eberhard Karls University, Tübingen 72076, Germany, Werner Reichardt Centre for Integrative Neuroscience (CIN), Tübingen 72076, Germany and Department of Comparative Literature, Eberhard Karls University, Tübingen 72074, Germany
- Benjamin Kreifelts
- Department of Psychiatry and Psychotherapy, Eberhard Karls University, Tübingen 72076, Germany, Werner Reichardt Centre for Integrative Neuroscience (CIN), Tübingen 72076, Germany and Department of Comparative Literature, Eberhard Karls University, Tübingen 72074, Germany
- Christina Gößling-Arnold
- Department of Psychiatry and Psychotherapy, Eberhard Karls University, Tübingen 72076, Germany, Werner Reichardt Centre for Integrative Neuroscience (CIN), Tübingen 72076, Germany and Department of Comparative Literature, Eberhard Karls University, Tübingen 72074, Germany
- Jürgen Wertheimer
- Department of Psychiatry and Psychotherapy, Eberhard Karls University, Tübingen 72076, Germany, Werner Reichardt Centre for Integrative Neuroscience (CIN), Tübingen 72076, Germany and Department of Comparative Literature, Eberhard Karls University, Tübingen 72074, Germany
- Dirk Wildgruber
- Department of Psychiatry and Psychotherapy, Eberhard Karls University, Tübingen 72076, Germany, Werner Reichardt Centre for Integrative Neuroscience (CIN), Tübingen 72076, Germany and Department of Comparative Literature, Eberhard Karls University, Tübingen 72074, Germany
32
Bach DR, Hurlemann R, Dolan RJ. Unimpaired discrimination of fearful prosody after amygdala lesion. Neuropsychologia 2013; 51:2070-4. [PMID: 23871880 PMCID: PMC3819998 DOI: 10.1016/j.neuropsychologia.2013.07.005] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2013] [Revised: 07/02/2013] [Accepted: 07/08/2013] [Indexed: 11/28/2022]
Abstract
Prosody (i.e. speech melody) is an important cue to infer an interlocutor's emotional state, complementing information from face expression and body posture. Inferring fear from face expression is reported as impaired after amygdala lesions. It remains unclear whether this deficit is specific to face expression, or reflects a more global fear recognition deficit. Here, we report data from two twins with bilateral amygdala lesions due to Urbach-Wiethe syndrome and show that they are unimpaired in a multinomial emotional prosody classification task. In a two-alternative forced choice task, they demonstrate an increased ability to discriminate fearful and neutral prosody, the opposite of what would be expected under a hypothesis of a global role for the amygdala in fear recognition. Hence, we provide evidence that the amygdala is not required for recognition of fearful prosody.
Affiliation(s)
- Dominik R Bach
- Wellcome Trust Centre for Neuroimaging, University College London, UK; Zurich University Hospital of Psychiatry, Switzerland.
33
Poremba A, Bigelow J, Rossi B. Processing of communication sounds: contributions of learning, memory, and experience. Hear Res 2013; 305:31-44. [PMID: 23792078 DOI: 10.1016/j.heares.2013.06.005] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/19/2012] [Revised: 05/09/2013] [Accepted: 06/10/2013] [Indexed: 11/17/2022]
Abstract
Abundant evidence from both field and lab studies has established that conspecific vocalizations (CVs) are of critical ecological significance for a wide variety of species, including humans, non-human primates, rodents, and other mammals and birds. Correspondingly, a number of experiments have demonstrated behavioral processing advantages for CVs, such as in discrimination and memory tasks. Further, a wide range of experiments have described brain regions in many species that appear to be specialized for processing CVs. For example, several neural regions have been described in both mammals and birds wherein greater neural responses are elicited by CVs than by comparison stimuli such as heterospecific vocalizations, nonvocal complex sounds, and artificial stimuli. These observations raise the question of whether these regions reflect domain-specific neural mechanisms dedicated to processing CVs, or alternatively, if these regions reflect domain-general neural mechanisms for representing complex sounds of learned significance. Inasmuch as CVs can be viewed as complex combinations of basic spectrotemporal features, the plausibility of the latter position is supported by a large body of literature describing modulated cortical and subcortical representation of a variety of acoustic features that have been experimentally associated with stimuli of natural behavioral significance (such as food rewards). Herein, we review a relatively small body of existing literature describing the roles of experience, learning, and memory in the emergence of species-typical neural representations of CVs and auditory system plasticity. In both songbirds and mammals, manipulations of auditory experience as well as specific learning paradigms are shown to modulate neural responses evoked by CVs, either in terms of overall firing rate or temporal firing patterns. In some cases, CV-sensitive neural regions gradually acquire representation of non-CV stimuli with which subjects have training and experience. These results parallel literature in humans describing modulation of responses in face-sensitive neural regions through learning and experience. Thus, although many questions remain, the available evidence is consistent with the notion that CVs may acquire distinct neural representation through domain-general mechanisms for representing complex auditory objects that are of learned importance to the animal. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".
Affiliation(s)
- Amy Poremba
- University of Iowa, Dept. of Psychology, Div. Behavioral & Cognitive Neuroscience, E11 SSH, Iowa City, IA 52242, USA; University of Iowa, Neuroscience Program, Iowa City, IA 52242, USA.
34
Goerlich-Dobre KS, Witteman J, Schiller NO, van Heuven VJP, Aleman A, Martens S. Blunted feelings: alexithymia is associated with a diminished neural response to speech prosody. Soc Cogn Affect Neurosci 2013; 9:1108-17. [PMID: 23681887 DOI: 10.1093/scan/nst075] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/12/2023] Open
Abstract
How we perceive emotional signals from our environment depends on our personality. Alexithymia, a personality trait characterized by difficulties in emotion regulation, has been linked to aberrant brain activity for visual emotional processing. Whether alexithymia also affects the brain's perception of emotional speech prosody is currently unknown. We used functional magnetic resonance imaging to investigate the impact of alexithymia on hemodynamic activity of three a priori regions of the prosody network: the superior temporal gyrus (STG), the inferior frontal gyrus and the amygdala. Twenty-two subjects performed an explicit task (emotional prosody categorization) and an implicit task (metrical stress evaluation) on the same prosodic stimuli. Irrespective of task, alexithymia was associated with a blunted response of the right STG and the bilateral amygdalae to angry, surprised and neutral prosody. Individuals with difficulty describing feelings deactivated the left STG and the bilateral amygdalae to a lesser extent in response to angry compared with neutral prosody, suggesting that they perceived angry prosody as relatively more salient than neutral prosody. In conclusion, alexithymia may be associated with a generally blunted neural response to speech prosody. Such restricted prosodic processing may contribute to problems in social communication associated with this personality trait.
Affiliation(s)
- Katharina Sophia Goerlich-Dobre
- Neuroimaging Center, Department of Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands, Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany, LIBC Leiden Institute for Brain and Cognition, Leiden University, Leiden, The Netherlands, LUCL Leiden University Centre for Linguistics, Leiden University, Leiden, The Netherlands, and Department of Psychology, University of Groningen, Groningen, The Netherlands
- Jurriaan Witteman
- Neuroimaging Center, Department of Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands, Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany, LIBC Leiden Institute for Brain and Cognition, Leiden University, Leiden, The Netherlands, LUCL Leiden University Centre for Linguistics, Leiden University, Leiden, The Netherlands, and Department of Psychology, University of Groningen, Groningen, The Netherlands
- Niels O Schiller
- Neuroimaging Center, Department of Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands, Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany, LIBC Leiden Institute for Brain and Cognition, Leiden University, Leiden, The Netherlands, LUCL Leiden University Centre for Linguistics, Leiden University, Leiden, The Netherlands, and Department of Psychology, University of Groningen, Groningen, The Netherlands
- Vincent J P van Heuven
- Neuroimaging Center, Department of Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands, Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany, LIBC Leiden Institute for Brain and Cognition, Leiden University, Leiden, The Netherlands, LUCL Leiden University Centre for Linguistics, Leiden University, Leiden, The Netherlands, and Department of Psychology, University of Groningen, Groningen, The Netherlands
- André Aleman
- Neuroimaging Center, Department of Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands, Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany, LIBC Leiden Institute for Brain and Cognition, Leiden University, Leiden, The Netherlands, LUCL Leiden University Centre for Linguistics, Leiden University, Leiden, The Netherlands, and Department of Psychology, University of Groningen, Groningen, The Netherlands
- Sander Martens
- Neuroimaging Center, Department of Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands, Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany, LIBC Leiden Institute for Brain and Cognition, Leiden University, Leiden, The Netherlands, LUCL Leiden University Centre for Linguistics, Leiden University, Leiden, The Netherlands, and Department of Psychology, University of Groningen, Groningen, The Netherlands
35
Moreno-Torres I, Berthier ML, Mar Cid MD, Green C, Gutiérrez A, García-Casares N, Froudist Walsh S, Nabrozidis A, Sidorova J, Dávila G, Carnero-Pardo C. Foreign accent syndrome: A multimodal evaluation in the search of neuroscience-driven treatments. Neuropsychologia 2013. [DOI: 10.1016/j.neuropsychologia.2012.11.010] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
36
Abstract
PURPOSE OF REVIEW Tinnitus is the sensation of hearing a sound when no external auditory stimulus is present. Most individuals experience tinnitus for brief, unobtrusive periods. However, chronic sensation of tinnitus affects approximately 17% (44 million people) of the general US population. Tinnitus, usually a benign symptom, can be constant, loud and annoying to the point that it causes significant emotional distress, poor sleep, less efficient activities of daily living, anxiety, depression and suicidal ideation/attempts. Tinnitus remains a major challenge to physicians because its pathophysiology is poorly understood and there are few management options to offer to patients. The purpose of this article is to describe the current understanding of central neural mechanisms in tinnitus and to summarize recent developments in clinical approaches to tinnitus patients. RECENT FINDINGS Recently developed animal models of tinnitus provide the possibility to determine neuronal mechanisms of tinnitus generation and to test the effects of various treatments. The latest research using animal models has identified a number of abnormal changes, in both auditory and nonauditory brain regions, that underlie tinnitus. Furthermore, this research sheds light on cellular mechanisms that are responsible for the development of these abnormal changes. SUMMARY Tinnitus remains a challenging disorder for patients, physicians, audiologists and scientists studying tinnitus-related brain changes. This article reviews recent findings of brain changes in animal models associated with tinnitus and briefly reviews the clinical approach to tinnitus patients.
37
Frühholz S, Grandjean D. Amygdala subregions differentially respond and rapidly adapt to threatening voices. Cortex 2012; 49:1394-403. [PMID: 22938844 DOI: 10.1016/j.cortex.2012.08.003] [Citation(s) in RCA: 50] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2012] [Revised: 04/27/2012] [Accepted: 08/03/2012] [Indexed: 10/28/2022]
Abstract
Emotional states can influence the human voice during speech utterances. Here, we tested the sensitivity and signal adaptation of functional activity located in amygdala subregions to threatening voices during high-resolution functional magnetic resonance imaging. Bilateral superficial (SF) complex and the right laterobasal (LB) complex of the amygdala were generally sensitive to emotional cues from speech prosody. Activity was stronger, however, when listeners directly focused on the emotional prosody of the voice instead of attending to a nonemotional feature. Explicit attention to prosody especially elicited activity in the right LB complex. Furthermore, the right SF specifically showed an effect of sensitization indicated by a significant signal increase in response to emotional voices which were preceded by neutral events. The bilateral SF showed signal habituation to repeated emotional voices indicated by a significant signal decrease for an emotional event preceded by another emotional event. The right SF and LB finally showed an effect of desensitization after the processing of emotional voices indicated by a signal decrease for neutral events that followed emotional events. Thus, different amygdala subregions are sensitive to threatening emotional voices, and their activity depends on the attentional focus as well as on the proximal temporal context of other neutral and emotional events.
Affiliation(s)
- Sascha Frühholz
- Neuroscience of Emotion and Affective Dynamics (NEAD) Laboratory, Department of Psychology, University of Geneva, Geneva, Switzerland.
38
Witteman J, Van Heuven VJP, Schiller NO. Hearing feelings: a quantitative meta-analysis on the neuroimaging literature of emotional prosody perception. Neuropsychologia 2012; 50:2752-2763. [PMID: 22841991] [DOI: 10.1016/j.neuropsychologia.2012.07.026]
Abstract
With the advent of neuroimaging considerable progress has been made in uncovering the neural network involved in the perception of emotional prosody. However, the exact neuroanatomical underpinnings of the emotional prosody perception process remain unclear. Furthermore, it is unclear what the intrahemispheric basis might be of the relative right-hemispheric specialization for emotional prosody perception that has been found previously in the lesion literature. In an attempt to shed light on these issues, quantitative meta-analyses of the neuroimaging literature were performed to investigate which brain areas are robustly associated with stimulus-driven and task-dependent perception of emotional prosody. Also, lateralization analyses were performed to investigate whether statistically reliable hemispheric specialization across studies can be found in these networks. A bilateral temporofrontal network was found to be implicated in emotional prosody perception, generally supporting previously proposed models of emotional prosody perception. Right-lateralized convergence across studies was found in (early) auditory processing areas, suggesting that the right hemispheric specialization for emotional prosody perception reported previously in the lesion literature might be driven by hemispheric specialization for non-prosody-specific fundamental acoustic dimensions of the speech signal.
Affiliation(s)
- Jurriaan Witteman
- Leiden Institute for Brain and Cognition, Leiden University, The Netherlands; Leiden University Centre for Linguistics, Leiden University, The Netherlands.
- Vincent J P Van Heuven
- Leiden Institute for Brain and Cognition, Leiden University, The Netherlands; Leiden University Centre for Linguistics, Leiden University, The Netherlands.
- Niels O Schiller
- Leiden Institute for Brain and Cognition, Leiden University, The Netherlands; Leiden University Centre for Linguistics, Leiden University, The Netherlands.
39
Cerebral integration of verbal and nonverbal emotional cues: impact of individual nonverbal dominance. Neuroimage 2012; 61:738-47. [DOI: 10.1016/j.neuroimage.2012.03.085]
40
Peterson DC, Wenstrup JJ. Selectivity and persistent firing responses to social vocalizations in the basolateral amygdala. Neuroscience 2012; 217:154-71. [PMID: 22569154] [DOI: 10.1016/j.neuroscience.2012.04.069]
Abstract
This study examined responsiveness to acoustic stimuli among neurons of the basolateral amygdala. While recording from single neurons in awake mustached bats (Pteronotus parnellii), we presented a wide range of acoustic stimuli including tonal, noise, and vocal signals. While many neurons displayed phasic or sustained responses locked to effective auditory stimuli, the majority of neurons (n=58) displayed a persistent excitatory discharge that lasted well beyond stimulus duration and filled the interval between successive stimuli. Persistent firing usually began seconds (median value, 5.4 s) after the initiation of a train of repeated stimuli and lasted, in the majority of neurons, for at least 2 min after the end of the stimulus train. Auditory-responsive amygdalar neurons were generally excited by one stimulus or very few stimuli. Most neurons did not respond well to synthetic stimuli including tones, noise bursts or frequency-modulated sweeps, but instead responded only to vocal stimuli (82 of 87 neurons). Furthermore, most neurons were highly selective among vocal stimuli. On average, neurons responded to 1.7 of 15 different syllables or syllable sequences. The largest percentage of neurons responded to a hiss-like rectangular broadband noise burst (rBNB) call associated with aggressive interactions. Responsiveness to effective vocal stimuli was reduced or eliminated when the spectrotemporal features of the stimuli were altered in a subset of neurons. Chemical activation of the medial geniculate body (MG) increased both background and evoked firing. Among 39 histologically localized recording sites, we saw no evidence of topographic organization in terms of temporal response pattern, habituation, or the affect of calls to which neurons responded. Overall, these studies demonstrate that amygdalar neurons in the mustached bat show high selectivity to vocal stimuli, and suggest that persistent firing may be an important feature of amygdalar responses to social vocalizations.
Affiliation(s)
- D C Peterson
- Department of Anatomy and Neurobiology, Northeast Ohio Medical University, 4209 State Route 44, Rootstown, Ohio 44272-0095, USA.
41
Hervé PY, Razafimandimby A, Vigneau M, Mazoyer B, Tzourio-Mazoyer N. Disentangling the brain networks supporting affective speech comprehension. Neuroimage 2012; 61:1255-67. [PMID: 22507230] [DOI: 10.1016/j.neuroimage.2012.03.073]
Abstract
Areas involved in social cognition, such as the medial prefrontal cortex (mPFC) and the left temporo-parietal junction (TPJ), appear to be active during the classification of sentences according to emotional criteria (happy, angry, or sad; [Beaucousin et al., 2007]). These two regions are frequently co-activated in studies of theory of mind (ToM). To confirm that these regions constitute a coherent network during affective speech comprehension, new event-related functional magnetic resonance imaging data were acquired, using the emotional and grammatical-person sentence classification tasks on a larger sample of 51 participants. The comparison of the emotional and grammatical tasks confirmed the previous findings. Functional connectivity analyses established a clear demarcation between a "Medial" network, including the mPFC and TPJ regions, and a bilateral "Language" network, which gathered inferior frontal and temporal areas. These findings suggest that emotional speech comprehension results from interactions between language, ToM, and emotion processing networks. The language network, active during both tasks, would be involved in the extraction of lexical and prosodic emotional cues, while the medial network, active only during the emotional task, would drive the making of inferences about the sentences' emotional content, based on their meanings. The left and right amygdalae displayed a stronger response during the emotional condition, but were seldom correlated with the other regions, and thus formed a third entity. Finally, distinct regions belonging to the Language and Medial networks were found in the left angular gyrus, where these two systems could interface.
Affiliation(s)
- Pierre-Yves Hervé
- Univ. Bordeaux, Groupe d'Imagerie Neurofonctionnelle, UMR 5296, F-33000 Bordeaux, France.
42
Brain 'talks over' boring quotes: top-down activation of voice-selective areas while listening to monotonous direct speech quotations. Neuroimage 2012; 60:1832-42. [PMID: 22306805] [DOI: 10.1016/j.neuroimage.2012.01.111]
Abstract
In human communication, direct speech (e.g., Mary said, "I'm hungry") is perceived as more vivid than indirect speech (e.g., Mary said that she was hungry). This vividness distinction has previously been found to underlie silent reading of quotations: Using functional magnetic resonance imaging (fMRI), we found that direct speech elicited higher brain activity in the temporal voice areas (TVA) of the auditory cortex than indirect speech, consistent with an "inner voice" experience in reading direct speech. Here we show that listening to monotonously spoken direct versus indirect speech quotations also engenders differential TVA activity. This suggests that individuals engage in top-down simulations or imagery of enriched supra-segmental acoustic representations while listening to monotonous direct speech. The findings shed new light on the acoustic nature of the "inner voice" in understanding direct speech.
43
Tracy DK, Ho DK, O'Daly O, Michalopoulou P, Lloyd LC, Dimond E, Matsumoto K, Shergill SS. It's not what you say but the way that you say it: an fMRI study of differential lexical and non-lexical prosodic pitch processing. BMC Neurosci 2011; 12:128. [PMID: 22185438] [PMCID: PMC3258233] [DOI: 10.1186/1471-2202-12-128]
Abstract
BACKGROUND This study aims to identify the neural substrate involved in prosodic pitch processing. Functional magnetic resonance imaging (fMRI) was used to test the premise that prosodic pitch processing is primarily subserved by the right cortical hemisphere. Two experimental paradigms were used: first, pairs of spoken sentences in which the only variation was a single internal phrase pitch change, and second, a matched condition utilizing pitch changes within analogous tone-sequence phrases, which removed the potential confounder of lexical evaluation. fMRI images were obtained using both paradigms. RESULTS Activation was significantly greater within the right frontal and temporal cortices during the tone-sequence stimuli relative to the sentence stimuli. CONCLUSION This study showed that pitch changes, stripped of lexical information, are mainly processed by the right cerebral hemisphere, whilst the processing of analogous, matched, lexical pitch change is preferentially left-sided. These findings, showing hemispheric differentiation of processing based on stimulus complexity, are in accord with a 'task-dependent' hypothesis of pitch processing.
Affiliation(s)
- Derek K Tracy
- CSI Lab, Institute of Psychiatry, King's College London, UK
- David K Ho
- CSI Lab, Institute of Psychiatry, King's College London, UK
- Owen O'Daly
- CSI Lab, Institute of Psychiatry, King's College London, UK
- Lisa C Lloyd
- CSI Lab, Institute of Psychiatry, King's College London, UK
- Eleanor Dimond
- CSI Lab, Institute of Psychiatry, King's College London, UK
- Kazunori Matsumoto
- Department of Psychiatry, Tohoku University School of Medicine, Sendai, Japan
44
Hearing others' pain: neural activity related to empathy. Cogn Affect Behav Neurosci 2011; 11:386-95. [PMID: 21533882] [DOI: 10.3758/s13415-011-0035-0]
Abstract
The human voice is one of the principal conveyers of social and affective communication. Recent neuroimaging studies have suggested that observing pain in others activates neural representations similar to those from the first-hand experience of pain; however, studies on pain expressions in the auditory channel are lacking. We conducted a functional magnetic resonance imaging study to examine brain responses to emotional exclamations of others' pain. The control condition comprised positive (e.g., laughing) or negative (e.g., snoring) stimuli of the human voice that were not associated with pain and suffering. Compared to these control stimuli, pain-related exclamations elicited increased activation in the superior and middle temporal gyri, left insula, secondary somatosensory cortices, thalamus, and right cerebellum, as well as deactivation in the anterior cingulate cortex. The left anterior insular and thalamic activations correlated significantly with the Empathic Concern subscale of the Interpersonal Reactivity Index. Thus, the brain regions involved in hearing others' pain are similar to those activated in the empathic processing of visual stimuli. Additionally, the findings emphasise the modulating role of interindividual differences in affective empathy.
45
Brück C, Kreifelts B, Wildgruber D. Emotional voices in context: a neurobiological model of multimodal affective information processing. Phys Life Rev 2011; 8:383-403. [DOI: 10.1016/j.plrev.2011.10.002]
46
Leitman DI, Wolf DH, Laukka P, Ragland JD, Valdez JN, Turetsky BI, Gur RE, Gur RC. Not pitch perfect: sensory contributions to affective communication impairment in schizophrenia. Biol Psychiatry 2011; 70:611-8. [PMID: 21762876] [DOI: 10.1016/j.biopsych.2011.05.032]
Abstract
BACKGROUND Schizophrenia patients have vocal affect (prosody) deficits that are treatment-resistant and associated with negative symptoms and poor outcome. The neural correlates of this dysfunction are unclear. Prior work has suggested that vocal affect perception deficits in schizophrenia stem from an inability to use acoustic cues, notably pitch, in decoding emotion. METHODS Functional magnetic resonance imaging was performed in 24 schizophrenia patients and 28 healthy control subjects during the performance of a four-choice (happiness, fear, anger, neutral) vocal affect identification task in which items for each emotion varied parametrically in the level of affectively salient acoustic cues. RESULTS We observed that parametric increases in cue levels failed to produce the same increases in identification rate in schizophrenia patients as in control subjects. These deficits correlated with diminished reciprocal activation changes in the superior temporal and inferior frontal gyri and reduced temporo-frontal connectivity. Task activation also correlated with independent measures of pitch perception and negative symptom severity. CONCLUSIONS These findings illustrate the interplay between sensory and higher-order cognitive dysfunction in schizophrenia. Sensory contributions to vocal affect deficits also suggest that this neurobehavioral marker could be targeted by pharmacological or behavioral remediation of acoustic feature discrimination.
Affiliation(s)
- David I Leitman
- Department of Psychiatry-Neuropsychiatry Program, Brain Behavior Laboratory, University of Pennsylvania School of Medicine, Philadelphia, Pennsylvania 19104-4283, USA.
47
Jessen S, Kotz SA. The temporal dynamics of processing emotions from vocal, facial, and bodily expressions. Neuroimage 2011; 58:665-74. [PMID: 21718792] [DOI: 10.1016/j.neuroimage.2011.06.035]
Affiliation(s)
- S Jessen
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1A, 04107 Leipzig, Germany.
48
Viinikainen M, Kätsyri J, Sams M. Representation of perceived sound valence in the human brain. Hum Brain Mapp 2011; 33:2295-305. [PMID: 21826759] [DOI: 10.1002/hbm.21362]
Abstract
Perceived emotional valence of sensory stimuli influences their processing in various cortical and subcortical structures. Recent evidence suggests that negative and positive valences are processed separately, not along a single linear continuum. Here, we examined how the brain is activated when subjects listen to auditory stimuli varying parametrically in perceived valence (very unpleasant to neutral to very pleasant). Seventeen healthy volunteers were scanned at 3 Tesla while listening to International Affective Digital Sounds (IADS-2) in a block-design paradigm. We found a strong quadratic, U-shaped relationship between valence and blood oxygen level-dependent (BOLD) signal strength in the medial prefrontal cortex, auditory cortex, and amygdala. Signals were weakest for neutral stimuli and increased progressively for more unpleasant or pleasant stimuli. The results strengthen the view that valence is a crucial factor in the neural processing of emotions. An alternative explanation is salience, which increases with both negative and positive valence.
Affiliation(s)
- Mikko Viinikainen
- Mind and Brain Laboratory, Department of Biomedical Engineering and Computational Science, Aalto University School of Science, Finland.
49
Frühholz S, Ceravolo L, Grandjean D. Specific brain networks during explicit and implicit decoding of emotional prosody. Cereb Cortex 2011; 22:1107-17. [DOI: 10.1093/cercor/bhr184]
50
Mothes-Lasch M, Mentzel HJ, Miltner WHR, Straube T. Visual attention modulates brain activation to angry voices. J Neurosci 2011; 31:9594-8. [PMID: 21715624] [PMCID: PMC6623173] [DOI: 10.1523/jneurosci.6665-10.2011]
Abstract
In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.
Affiliation(s)
- Martin Mothes-Lasch
- Department of Biological and Clinical Psychology, Friedrich Schiller University, D-07743 Jena, Germany
- Hans-Joachim Mentzel
- Institute of Diagnostic and Interventional Radiology, Friedrich Schiller University, D-07740 Jena, Germany
- Wolfgang H. R. Miltner
- Department of Biological and Clinical Psychology, Friedrich Schiller University, D-07743 Jena, Germany
- Thomas Straube
- Department of Biological and Clinical Psychology, Friedrich Schiller University, D-07743 Jena, Germany