1
Papanicolaou AC. Non-Invasive Mapping of the Neuronal Networks of Language. Brain Sci 2023; 13:1457. [PMID: 37891824] [PMCID: PMC10605023] [DOI: 10.3390/brainsci13101457]
Abstract
This review consists of three main sections. In the first, the Introduction, the main theories of the neuronal mediation of linguistic operations, derived mostly from studies of the effects of focal lesions on linguistic performance, are summarized. These models furnish the conceptual framework on which the design of subsequent functional neuroimaging investigations is based. In the second section, the methods of functional neuroimaging, especially those of functional Magnetic Resonance Imaging (fMRI) and of Magnetoencephalography (MEG), are detailed along with the specific activation tasks employed in presurgical functional mapping. The reliability of these non-invasive methods and their validity, judged against the results of the invasive methods, namely the "Wada" procedure and Cortical Stimulation Mapping (CSM), are assessed and their use in presurgical mapping is justified. In the third and final section, the applications of fMRI and MEG in basic research are surveyed in six sub-sections, each dealing with the neuronal networks for (1) acoustic and phonological, (2) semantic, (3) syntactic, and (4) prosodic operations, (5) sign language, and (6) the operations of reading and the mechanisms of dyslexia.
Affiliation(s)
- Andrew C Papanicolaou
- Department of Pediatrics, Division of Pediatric Neurology, College of Medicine, University of Tennessee Health Science Center, Memphis, TN 38013, USA
2
Dang Q, Ma F, Yuan Q, Fu Y, Chen K, Zhang Z, Lu C, Guo T. Processing negative emotion in two languages of bilinguals: Accommodation and assimilation of the neural pathways based on a meta-analysis. Cereb Cortex 2023:7133665. [PMID: 37083264] [DOI: 10.1093/cercor/bhad121]
Abstract
Numerous functional magnetic resonance imaging (fMRI) studies have examined the neural mechanisms of negative emotional words, but scarce evidence is available for the interactions among related brain regions from the functional brain connectivity perspective. Moreover, few studies have addressed the neural networks for negative word processing in bilinguals. To fill this gap, the current study examined the brain networks for processing negative words in the first language (L1) and the second language (L2) in Chinese-English bilinguals. To identify objective indicators associated with negative word processing, we first conducted a coordinate-based meta-analysis on contrasts between negative and neutral words (including 32 contrasts from 1589 participants) using the activation likelihood estimation method. Results showed that the left medial prefrontal cortex (mPFC), the left inferior frontal gyrus (IFG), the left posterior cingulate cortex (PCC), the left amygdala, the left inferior temporal gyrus (ITG), and the left thalamus were involved in processing negative words. Next, these six clusters were used as regions of interest in effective connectivity analyses using extended unified structural equation modeling to pinpoint the brain networks for bilingual negative word processing. Brain network results revealed two pathways for negative word processing in L1: a dorsal pathway consisting of the left IFG, the left mPFC, and the left PCC, and a ventral pathway involving the left amygdala, the left ITG, and the left thalamus. We further investigated the similarities and differences between the brain networks for negative word processing in L1 and L2. The findings revealed similarities in the dorsal pathway, as well as differences primarily in the ventral pathway, indicating both neural assimilation and accommodation in the processing of negative emotion across the two languages of bilinguals.
Affiliation(s)
- Qinpu Dang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Fengyang Ma
- School of Education, University of Cincinnati, Cincinnati, OH 45219, USA
- Qiming Yuan
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Yongben Fu
- The Psychological Education and Counseling Center, Huazhong Agricultural University, Wuhan 430070, China
- Keyue Chen
- Division of Psychology and Language Sciences, University College London, London WC1E 6BT, UK
- Zhaoqi Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Chunming Lu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Center for Collaboration and Innovation in Brain and Learning Sciences, Beijing Normal University, Beijing 100875, China
- Taomei Guo
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Center for Collaboration and Innovation in Brain and Learning Sciences, Beijing Normal University, Beijing 100875, China
3
Disentangling emotional signals in the brain: an ALE meta-analysis of vocal affect perception. Cogn Affect Behav Neurosci 2023; 23:17-29. [PMID: 35945478] [DOI: 10.3758/s13415-022-01030-y]
Abstract
Recent advances in neuroimaging research on vocal emotion perception have revealed voice-sensitive areas specialized in processing affect. Experimental data on this subject are varied, investigating a wide range of emotions through different vocal signals and task demands. The present meta-analysis was designed to disentangle this diversity of results by summarizing neuroimaging data in the vocal emotion perception literature. Data from 44 experiments contrasting emotional and neutral voices were analyzed to assess the brain areas involved in vocal affect perception in general, as well as depending on the type of voice signal (speech prosody or vocalizations), the task demands (implicit or explicit attention to emotions), and the specific emotion perceived. Results confirmed a consistent bilateral network of emotional voice areas consisting of the superior temporal cortex (STC) and primary auditory regions. Specific activations and lateralization of these regions, as well as additional areas (insula, middle temporal gyrus), were further modulated by signal type and task demands. Exploring the sparser data on single emotions also suggested the recruitment of other regions (insula, inferior frontal gyrus, frontal operculum) for specific aspects of each emotion. These meta-analytic results suggest that while the bulk of vocal affect processing is localized in the STC, the complexity and variety of such vocal signals entail functional specificities in complex and varied cortical (and potentially subcortical) response pathways.
4
Tomasello R, Grisoni L, Boux I, Sammler D, Pulvermüller F. Instantaneous neural processing of communicative functions conveyed by speech prosody. Cereb Cortex 2022; 32:4885-4901. [PMID: 35136980] [PMCID: PMC9626830] [DOI: 10.1093/cercor/bhab522]
Abstract
During conversations, speech prosody provides important clues about the speaker's communicative intentions. In many languages, a rising vocal pitch at the end of a sentence typically expresses a question, whereas a falling pitch suggests a statement. Here, the neurophysiological basis of intonation and speech act understanding was investigated with high-density electroencephalography (EEG) to determine whether prosodic features are reflected at the neurophysiological level. As early as approximately 100 ms after the onset of the sentence-final word carrying the prosodic difference, questions and statements expressed with the same sentences led to different neurophysiological activity in the event-related potential. Interestingly, low-pass filtered sentences and acoustically matched nonvocal musical signals failed to show any neurophysiological dissociations, suggesting that the physical intonation alone cannot explain this modulation. Our results show rapid neurophysiological indexes of prosodic communicative information processing that emerge only when pragmatic and lexico-semantic information are fully expressed. The early enhancement of question-related activity compared with statements was due to sources in the articulatory-motor region, which may reflect the richer action knowledge immanent to questions, namely the expectation of the partner's action of answering the question. The present findings demonstrate a neurophysiological correlate of prosodic communicative information processing, which enables humans to rapidly detect and understand speaker intentions in linguistic interactions.
Affiliation(s)
- Rosario Tomasello
- Brain Language Laboratory, Department of Philosophy and Humanities, WE4, Freie Universität Berlin, Habelschwerdter Allee 45, 14195 Berlin, Germany (corresponding author)
- Luigi Grisoni
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, 14195 Berlin, Germany
- Cluster of Excellence ‘Matters of Activity. Image Space Material’, Humboldt Universität zu Berlin, 10099 Berlin, Germany
- Isabella Boux
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, 14195 Berlin, Germany
- Berlin School of Mind and Brain, Humboldt Universität zu Berlin, 10117 Berlin, Germany
- Einstein Center for Neurosciences, 10117 Berlin, Germany
- Daniela Sammler
- Research Group ‘Neurocognition of Music and Language’, Max Planck Institute for Empirical Aesthetics, 60322 Frankfurt am Main, Germany
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany
- Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, 14195 Berlin, Germany
- Cluster of Excellence ‘Matters of Activity. Image Space Material’, Humboldt Universität zu Berlin, 10099 Berlin, Germany
- Berlin School of Mind and Brain, Humboldt Universität zu Berlin, 10117 Berlin, Germany
- Einstein Center for Neurosciences, 10117 Berlin, Germany
5
Pierce JE, Péron JA. Reward-Based Learning and Emotional Habit Formation in the Cerebellum. Adv Exp Med Biol 2022; 1378:125-140. [DOI: 10.1007/978-3-030-99550-8_9]
6
Sihvonen AJ, Sammler D, Ripollés P, Leo V, Rodríguez-Fornells A, Soinila S, Särkämö T. Right ventral stream damage underlies both poststroke aprosodia and amusia. Eur J Neurol 2021; 29:873-882. [PMID: 34661326] [DOI: 10.1111/ene.15148]
Abstract
BACKGROUND AND PURPOSE This study was undertaken to determine and compare the lesion patterns and structural dysconnectivity underlying poststroke aprosodia and amusia, using a data-driven multimodal neuroimaging approach. METHODS Thirty-nine patients with right or left hemisphere stroke were enrolled in a cohort study and tested for linguistic and affective prosody perception and musical pitch and rhythm perception at the subacute and 3-month poststroke stages. Participants listened to words spoken with different prosodic stress that changed their meaning, and to words spoken with six different emotions, and chose which meaning or emotion was expressed. In the music tasks, participants judged pairs of short melodies as the same or different in terms of pitch or rhythm. Structural magnetic resonance imaging data were acquired at both stages, and machine learning-based lesion-symptom mapping and deterministic tractography were used to identify the lesion patterns and damaged white matter pathways giving rise to aprosodia and amusia. RESULTS Aprosodia and amusia were strongly correlated behaviorally and were associated with similar lesion patterns in right frontoinsular and striatal areas. In multiple regression models, reduced fractional anisotropy and lower tract volume of the right inferior fronto-occipital fasciculus were the strongest predictors of both disorders over time. CONCLUSIONS These results highlight a common origin of aprosodia and amusia, both arising from damage and disconnection of the right ventral auditory stream, which integrates rhythmic-melodic acoustic information in prosody and music. Comorbidity of these disabilities may worsen the prognosis and affect rehabilitation success.
Affiliation(s)
- Aleksi J Sihvonen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Centre for Clinical Research, University of Queensland, Brisbane, Queensland, Australia
- Daniela Sammler
- Research Group "Neurocognition of Music and Language", Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Pablo Ripollés
- Department of Psychology, New York University, New York, NY, USA
- Vera Leo
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Antoni Rodríguez-Fornells
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute, L'Hospitalet de Llobregat, Spain
- Department of Cognition, Development, and Education Psychology, University of Barcelona, Barcelona, Spain
- Catalan Institution for Research and Advanced Studies, Barcelona, Spain
- Seppo Soinila
- Neurocenter, Turku University Hospital and Division of Clinical Neurosciences, University of Turku, Turku, Finland
- Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
7
Multiple prosodic meanings are conveyed through separate pitch ranges: Evidence from perception of focus and surprise in Mandarin Chinese. Cogn Affect Behav Neurosci 2021; 21:1164-1175. [PMID: 34331268] [DOI: 10.3758/s13415-021-00930-9]
Abstract
F0 variation is a crucial feature of speech prosody, which can convey linguistic information such as focus and paralinguistic meanings such as surprise. How can multiple layers of information be represented with F0 in speech: are they divided into discrete layers of pitch, or do they overlap without clear divisions? We investigated this question by assessing pitch perception of focus and surprise in Mandarin Chinese. Seventeen native Mandarin listeners rated the strength of focus and surprise conveyed by the same set of synthetically manipulated sentences. An fMRI experiment was conducted to assess the neural correlates of the listeners' perceptual responses to the stimuli. Behaviourally, the perceptual threshold for focus was 3 semitones above the baseline and that for surprise was 5 semitones above the baseline. Moreover, the pitch range of 5-12 semitones above the baseline signalled both focus and surprise, suggesting a considerable overlap between the two types of prosodic information within this range. The neuroimaging data correlated positively with the variations in the behavioural data. A ceiling effect was also found: no significant behavioural differences or neural activities emerged beyond a certain pitch level for the perception of focus and surprise, respectively. Together, the results suggest that different layers of prosodic information are represented in F0 through different pitch ranges: paralinguistic information is represented at a pitch range beyond that used by linguistic information. Meanwhile, the representation of paralinguistic information is achieved without obscuring linguistic prosody, thus allowing F0 to represent the two layers of information in parallel.
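The semitone scale behind these thresholds is a simple log-frequency transform: an excursion of s semitones above a baseline F0 corresponds to a frequency of baseline × 2^(s/12). A minimal sketch of the conversion (the 200 Hz baseline is an illustrative value, not taken from the study):

```python
import math

def semitones_above(f_hz, baseline_hz):
    """Pitch excursion in semitones of f_hz relative to a baseline F0."""
    return 12 * math.log2(f_hz / baseline_hz)

def f0_for_semitones(baseline_hz, st):
    """F0 (Hz) that sits `st` semitones above the baseline."""
    return baseline_hz * 2 ** (st / 12)

# With an illustrative 200 Hz baseline, the reported thresholds of
# +3 st (focus) and +5 st (surprise) correspond to roughly 238 Hz and
# 267 Hz, and the shared 5-12 st range spans about 267-400 Hz.
print(round(f0_for_semitones(200, 3)),
      round(f0_for_semitones(200, 5)),
      round(f0_for_semitones(200, 12)))
```

Because the scale is logarithmic, the same semitone excursion corresponds to a larger absolute frequency change for higher baselines, which is why such studies manipulate pitch in semitones rather than Hz.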
8
Chan HL, Low I, Chen LF, Chen YS, Chu IT, Hsieh JC. A novel beamformer-based imaging of phase-amplitude coupling (BIPAC) unveiling the inter-regional connectivity of emotional prosody processing in women with primary dysmenorrhea. J Neural Eng 2021; 18. [PMID: 33691295] [DOI: 10.1088/1741-2552/abed83]
Abstract
Objective. Neural communication, i.e., the interactions of brain regions, plays a key role in the formation of functional neural networks. One form of neural communication can be measured as phase-amplitude coupling (PAC), the coupling between the phase of low-frequency oscillations and the amplitude of high-frequency oscillations. This paper presents a beamformer-based imaging method, beamformer-based imaging of PAC (BIPAC), to quantify the strength of PAC between a seed region and other brain regions. Approach. A dipole is used to model the ensemble of neural activity within a group of nearby neurons and represents a mixture of multiple source components of cortical activity. From the ensemble activity at each brain location, the source component with the strongest coupling to the seed activity is extracted, while unrelated components are suppressed to enhance the sensitivity of coupled-source estimation. Main results. In evaluations using simulated data sets, BIPAC proved advantageous with regard to estimation accuracy in source localization, orientation, and coupling strength. BIPAC was also applied to the analysis of magnetoencephalographic signals recorded from women with primary dysmenorrhea in an implicit emotional prosody experiment. In response to negative emotional prosody, auditory areas revealed strong PAC with the ventral auditory stream and occipitoparietal areas in the theta-gamma and alpha-gamma bands, which may respectively indicate the recruitment of auditory sensory memory and attention reorientation. Moreover, patients with more severe pain experience appeared to have stronger coupling between auditory areas and temporoparietal regions. Significance. Our findings indicate that the implicit processing of emotional prosody is altered by menstrual pain experience. The proposed BIPAC is feasible and applicable to imaging inter-regional connectivity based on cross-frequency coupling estimates. The experimental results also demonstrate that BIPAC is capable of revealing autonomous brain processing and neurodynamics, which are more subtle than active and attended task-driven processing.
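BIPAC itself estimates coupling between beamformer-reconstructed sources, but the underlying phase-amplitude coupling quantity defined above can be illustrated on a single signal. A minimal sketch using the Hilbert-transform mean-vector-length approach on synthetic theta-gamma data (function names, band limits, and the synthetic signal are illustrative; this is a generic PAC estimate, not the BIPAC algorithm):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def pac_mvl(x, fs, phase_band=(4, 8), amp_band=(30, 80)):
    """Mean-vector-length PAC: how strongly the amplitude envelope of a
    fast oscillation is locked to the phase of a slow oscillation."""
    def bandpass(sig, lo, hi):
        b, a = butter(4, [lo, hi], btype="band", fs=fs)
        return filtfilt(b, a, sig)
    phase = np.angle(hilbert(bandpass(x, *phase_band)))  # slow-band phase
    amp = np.abs(hilbert(bandpass(x, *amp_band)))        # fast-band envelope
    # Length of the mean complex vector: 0 = no coupling, larger = stronger
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# Synthetic check: a 50 Hz oscillation whose amplitude rides on 6 Hz phase
fs = 500
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
coupled = (1 + theta) * np.sin(2 * np.pi * 50 * t)   # envelope tied to theta
uncoupled = np.sin(2 * np.pi * 50 * t)               # constant-envelope gamma
noise = 0.1 * np.random.default_rng(0).standard_normal(t.size)
print(pac_mvl(theta + coupled + noise, fs),
      pac_mvl(theta + uncoupled + noise, fs))
```

The coupled signal yields a markedly larger coupling value than the uncoupled one; BIPAC extends this idea by extracting, at every brain location, the source component whose coupling to the seed is strongest.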
Affiliation(s)
- Hui-Ling Chan
- Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Intan Low
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan
- Li-Fen Chen
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan
- Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Yong-Sheng Chen
- Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Ian-Ting Chu
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Jen-Chuen Hsieh
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan
9
Durfee AZ, Sheppard SM, Meier EL, Bunker L, Cui E, Crainiceanu C, Hillis AE. Explicit Training to Improve Affective Prosody Recognition in Adults with Acute Right Hemisphere Stroke. Brain Sci 2021; 11:667. [PMID: 34065453] [PMCID: PMC8161405] [DOI: 10.3390/brainsci11050667]
Abstract
Difficulty recognizing affective prosody (receptive aprosodia) can occur following right hemisphere damage (RHD). Not all individuals spontaneously recover their ability to recognize affective prosody, warranting behavioral intervention. However, there is a dearth of evidence-based receptive aprosodia treatment research in this clinical population. The purpose of the current study was to investigate an explicit training protocol targeting affective prosody recognition in adults with RHD and receptive aprosodia. Eighteen adults with receptive aprosodia due to acute RHD completed affective prosody recognition tasks before and after a short training session that targeted the proposed underlying perceptual and conceptual processes. Behavioral impairment and lesion characteristics were investigated as possible influences on training effectiveness. Affective prosody recognition improved following training, and recognition accuracy was higher for pseudo- vs. real-word sentences. Perceptual deficits were associated with the most posterior infarcts, conceptual deficits were associated with frontal infarcts, and a combination of perceptual-conceptual deficits was related to temporoparietal and subcortical infarcts. Several right hemisphere ventral stream regions and pathways, along with frontal and parietal hypoperfusion, predicted training effectiveness. Explicit acoustic-prosodic-emotion training improves affective prosody recognition, but it may not be appropriate for everyone. Factors such as linguistic context and lesion location should be considered when planning prosody training.
Affiliation(s)
- Alexandra Zezinka Durfee
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Shannon M. Sheppard
- Department of Communication Sciences and Disorders, Chapman University, Irvine, CA 92618, USA
- Erin L. Meier
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA 02115, USA
- Lisa Bunker
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Erjia Cui
- Department of Biostatistics, Johns Hopkins University, Baltimore, MD 21205, USA
- Ciprian Crainiceanu
- Department of Biostatistics, Johns Hopkins University, Baltimore, MD 21205, USA
- Argye E. Hillis
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Department of Physical Medicine and Rehabilitation, Johns Hopkins University, Baltimore, MD 21287, USA
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, USA
10
Basal ganglia and cerebellum contributions to vocal emotion processing as revealed by high-resolution fMRI. Sci Rep 2021; 11:10645. [PMID: 34017050] [PMCID: PMC8138027] [DOI: 10.1038/s41598-021-90222-6]
Abstract
Until recently, research on the brain networks underlying the decoding and processing of emotional voice prosody focused on modulations in primary and secondary auditory, ventral frontal and prefrontal cortices, and the amygdala. Growing interest in a specific role of the basal ganglia and cerebellum has recently brought these regions into the spotlight. In the present study, we aimed to characterize the role of these subcortical brain regions in vocal emotion processing, at the level of both brain activation and functional and effective connectivity, using high-resolution functional magnetic resonance imaging. Variance explained by low-level acoustic parameters (fundamental frequency, voice energy) was also modelled. Whole-brain data revealed the expected contributions of the temporal and frontal cortices, basal ganglia, and cerebellum to vocal emotion processing, while functional connectivity analyses highlighted correlations between the basal ganglia and cerebellum, especially for angry voices. Seed-to-seed and seed-to-voxel effective connectivity revealed direct connections within the basal ganglia (especially between the putamen and external globus pallidus) and between the subthalamic nucleus and the cerebellum. Our results speak in favour of crucial contributions of the basal ganglia, especially the putamen, external globus pallidus, and subthalamic nucleus, and of several cerebellar lobules and nuclei to an efficient decoding of and response to vocal emotions.
11
Leung JH, Purdy SC, Corballis PM. Improving Emotion Perception in Children with Autism Spectrum Disorder with Computer-Based Training and Hearing Amplification. Brain Sci 2021; 11:469. [PMID: 33917776] [PMCID: PMC8068114] [DOI: 10.3390/brainsci11040469]
Abstract
Individuals with Autism Spectrum Disorder (ASD) experience challenges with social communication, often involving emotional elements of language. This may stem from underlying auditory processing difficulties, especially when incoming speech is nuanced or complex. This study explored the effects of auditory training on social perception abilities of children with ASD. The training combined use of a remote-microphone hearing system and computerized emotion perception training. At baseline, children with ASD had poorer social communication scores and delayed mismatch negativity (MMN) compared to typically developing children. Behavioral results, measured pre- and post-intervention, revealed increased social perception scores in children with ASD to the extent that they outperformed their typically developing peers post-intervention. Electrophysiology results revealed changes in neural responses to emotional speech stimuli. Post-intervention, mismatch responses of children with ASD more closely resembled their neurotypical peers, with shorter MMN latencies, a significantly heightened P2 wave, and greater differentiation of emotional stimuli, consistent with their improved behavioral results. This study sets the foundation for further investigation into connections between auditory processing difficulties and social perception and communication for individuals with ASD, and provides a promising indication that combining amplified hearing and computer-based targeted social perception training using emotional speech stimuli may have neuro-rehabilitative benefits.
12
O'Connell K, Marsh AA, Edwards DF, Dromerick AW, Seydell-Greenwald A. Emotion recognition impairments and social well-being following right-hemisphere stroke. Neuropsychol Rehabil 2021; 32:1337-1355. [PMID: 33615994] [PMCID: PMC8379297] [DOI: 10.1080/09602011.2021.1888756]
Abstract
Accurately recognizing and responding to the emotions of others is essential for proper social communication and helps build the strong relationships that are particularly important for stroke survivors. Emotion recognition typically engages cortical areas that are predominantly right-lateralized, including the superior temporal and inferior frontal gyri, regions frequently impacted by right-hemisphere stroke. Since prior work already links right-hemisphere stroke to deficits in emotion recognition, this research aims to extend these findings by determining whether impaired emotion recognition after right-hemisphere stroke is associated with worse social well-being outcomes. Eighteen right-hemisphere stroke patients (≥6 months post-stroke) and 21 neurologically healthy controls completed a multimodal emotion recognition test (Geneva Emotion Recognition Test - Short) and reported engagement in social/non-social activities and levels of social support. Right-hemisphere stroke was associated with worse emotion recognition accuracy, though not all patients exhibited impairment. In line with hypotheses, emotion recognition impairments were associated with greater loss of social activities after stroke, an effect that could not be attributed to stroke severity or loss of non-social activities. Impairments were also linked to reduced patient-reported social support. These results implicate emotion recognition difficulties as a potential antecedent of social withdrawal after stroke and warrant future research testing emotion recognition training post-stroke.
Affiliation(s)
- Katherine O'Connell
- Interdisciplinary Program in Neuroscience, Georgetown University, Washington, DC, USA
- Abigail A Marsh
- Department of Psychology, Georgetown University, Washington, DC, USA
- Dorothy Farrar Edwards
- Department of Kinesiology and Medicine, University of Wisconsin-Madison, Madison, WI, USA
- Alexander W Dromerick
- MedStar National Rehabilitation Hospital, Washington, DC, USA
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, USA
- Anna Seydell-Greenwald
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, USA
13
He Y, Steines M, Sammer G, Nagels A, Kircher T, Straube B. Modality-specific dysfunctional neural processing of social-abstract and non-social-concrete information in schizophrenia. Neuroimage Clin 2021; 29:102568. [PMID: 33524805] [PMCID: PMC7851842] [DOI: 10.1016/j.nicl.2021.102568]
Abstract
Schizophrenia is characterized by marked communication dysfunctions encompassing potential impairments in the processing of social-abstract and non-social-concrete information, especially in everyday situations where multiple modalities are present in the form of speech and gesture. To date, the neurobiological basis of these deficits remains elusive. In a functional magnetic resonance imaging (fMRI) study, 17 patients with schizophrenia or schizoaffective disorder and 18 matched controls watched videos of an actor speaking, gesturing (unimodal), and both speaking and gesturing (bimodal) about social or non-social events in a naturalistic way. Participants were asked to judge whether each video contained person-related (social) or object-related (non-social) information. When processing social-abstract content, patients showed reduced activation in the medial prefrontal cortex (mPFC) in the gesture but not in the speech condition. For non-social-concrete content, remarkably, patients showed reduced activation in the left postcentral gyrus and the right insula only in the speech condition. Moreover, in the bimodal conditions, patients displayed improved task performance and activation comparable to controls for both social and non-social content. To conclude, patients with schizophrenia displayed modality-specific aberrant neural processing of social and non-social information, which was absent in the bimodal conditions. This finding provides novel insights into dysfunctional multimodal communication in schizophrenia and may have therapeutic implications.
Collapse
Affiliation(s)
- Yifei He
- Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Marburg, Germany; Center for Mind, Brain and Behavior - CMBB, Hans-Meerwein-Straße 6, 35032 Marburg, Germany.
| | - Miriam Steines
- Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Marburg, Germany; Center for Mind, Brain and Behavior - CMBB, Hans-Meerwein-Straße 6, 35032 Marburg, Germany
| | - Gebhard Sammer
- Cognitive Neuroscience at Centre for Psychiatry, Justus-Liebig University Giessen, Giessen, Germany
| | - Arne Nagels
- Department of General Linguistics, Johannes-Gutenberg University Mainz, Mainz, Germany
| | - Tilo Kircher
- Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Marburg, Germany; Center for Mind, Brain and Behavior - CMBB, Hans-Meerwein-Straße 6, 35032 Marburg, Germany
| | - Benjamin Straube
- Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Marburg, Germany; Center for Mind, Brain and Behavior - CMBB, Hans-Meerwein-Straße 6, 35032 Marburg, Germany
| |
Collapse
|
14
|
Bello UM, Kranz GS, Winser SJ, Chan CCH. Neural Processes Underlying Mirror-Induced Visual Illusion: An Activation Likelihood Estimation Meta-Analysis. Front Hum Neurosci 2020; 14:276. [PMID: 32848663 PMCID: PMC7412952 DOI: 10.3389/fnhum.2020.00276] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2020] [Accepted: 06/18/2020] [Indexed: 12/02/2022] Open
Abstract
Introduction: Neuroimaging studies of the neural processes associated with mirror-induced visual illusion (MVI) are growing in number. Previous systematic reviews of these studies used qualitative approaches. Objective: The present study conducted an activation likelihood estimation (ALE) meta-analysis to locate the brain areas involved and thereby unfold the neural processes associated with the MVI. Method: We searched the CINAHL, MEDLINE, Scopus, and PubMed databases and identified eight studies (with 14 experiments) that met the inclusion criteria. Results: When contrasted with a rest condition, strong convergence in the bilateral primary motor and premotor areas and the inferior parietal lobule suggested top-down motor planning and execution. In addition, convergence was identified in the ipsilateral precuneus, cerebellum, superior frontal gyrus, and superior parietal lobule; these clusters correspond to the static hidden hand and indicate self-processing operations, somatosensory processing, and motor control. When contrasted with an active movement condition, additional substantial convergence was revealed in visual areas, such as the ipsilateral cuneus, fusiform gyrus, middle occipital gyrus (visual area V2), and lingual gyrus, which mediate basic visual processing. Conclusions: To the best of our knowledge, the current meta-analysis is the first to reveal the visualization, mental rehearsal, and motor-related processes underpinning the MVI, and it offers theoretical support for using MVI as a clinical intervention for post-stroke patients.
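The ALE approach used here can be sketched computationally: each reported activation focus is modeled as a 3D Gaussian probability blob, per-experiment modeled-activation (MA) maps are formed, and the ALE score at each voxel is the probabilistic union across experiments. This is a minimal toy sketch of the general idea on a small grid (the kernel scaling and the absence of a null-distribution/thresholding step are simplifications, not the authors' actual pipeline):

```python
import numpy as np

def gaussian_kernel_map(shape, focus, fwhm):
    """Modeled activation (MA) map: a 3D Gaussian centered at one reported focus."""
    sigma = fwhm / 2.355  # convert FWHM to standard deviation
    grids = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    d2 = sum((g - c) ** 2 for g, c in zip(grids, focus))
    ma = np.exp(-d2 / (2 * sigma ** 2))
    return ma / ma.max()  # peak voxel gets probability 1 (toy choice)

def ale_map(shape, experiments, fwhm=8.0):
    """ALE = probabilistic union of per-experiment MA maps: 1 - prod(1 - MA_i)."""
    ale = np.zeros(shape)
    for foci in experiments:
        # one MA map per experiment: voxelwise max across that experiment's foci
        ma = np.max([gaussian_kernel_map(shape, f, fwhm) for f in foci], axis=0)
        ale = 1.0 - (1.0 - ale) * (1.0 - ma)
    return ale

# two toy "experiments" reporting adjacent foci on a 20^3 grid
experiments = [[(10, 10, 10)], [(11, 10, 10)]]
ale = ale_map((20, 20, 20), experiments)
print(np.unravel_index(ale.argmax(), ale.shape))  # → (10, 10, 10): convergence peak
```

Real ALE implementations additionally use sample-size-dependent kernels and permutation-based null distributions to assess which convergence clusters are above chance.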
Collapse
Affiliation(s)
- Umar Muhammad Bello
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong, China.,Department of Physiotherapy, Yobe State University Teaching Hospital, Damaturu, Nigeria
| | - Georg S Kranz
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong, China.,Department of Psychiatry and Psychotherapy, Medical University of Vienna, Vienna, Austria
| | - Stanley John Winser
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong, China
| | - Chetwyn C H Chan
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong, China.,Applied Cognitive Neuroscience Laboratory, The Hong Kong Polytechnic University, Hong Kong, China.,University Research Facility in Behavioral and Systems Neuroscience, The Hong Kong Polytechnic University, Hong Kong, China
| |
Collapse
|
15
|
Weed E, Fusaroli R. Acoustic Measures of Prosody in Right-Hemisphere Damage: A Systematic Review and Meta-Analysis. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2020; 63:1762-1775. [PMID: 32432947 DOI: 10.1044/2020_jslhr-19-00241] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Purpose The aim of the study was to use systematic review and meta-analysis to quantitatively assess the currently available acoustic evidence for prosodic production impairments resulting from right-hemisphere damage (RHD) and to develop methodological recommendations for future studies. Method We systematically reviewed papers reporting acoustic features of prosodic production in RHD in order to identify shortcomings in the literature and make recommendations for future studies. To estimate the meta-analytic effect size of the acoustic features, we extracted standardized mean differences from 16 papers and estimated aggregated effect sizes using hierarchical Bayesian regression models. Results Speakers with RHD did show reduced fundamental frequency variation, but this trait was shared with left-hemisphere damage. There was also evidence of increased pause duration in RHD. No meta-analytic evidence for an effect of prosody type (emotional vs. linguistic) was found. Conclusions Taken together, the currently available acoustic data show only a weak specific effect of RHD on prosody production. However, the results are not definitive, as more reliable analyses are hindered by small sample sizes, lack of detail on lesion location, and divergent measurement techniques. To overcome these issues, we recommend cumulative science practices (e.g., open data and code sharing), more nuanced speech signal processing techniques, and the integration of acoustic measures with perceptual judgments to more effectively investigate prosody in RHD.
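The effect-size aggregation described above can be illustrated with a simpler frequentist analogue. The authors used hierarchical Bayesian regression; the sketch below instead shows the core steps — a standardized mean difference (Hedges' g) per study and DerSimonian-Laird random-effects pooling with a between-study variance term. All study numbers are invented for illustration:

```python
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with small-sample correction; returns (g, variance)."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # Hedges' correction factor
    v = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))  # sampling variance of d
    return j * d, (j**2) * v

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled effect size and its standard error."""
    e = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v
    fixed = np.sum(w * e) / np.sum(w)                # fixed-effect estimate
    q = np.sum(w * (e - fixed) ** 2)                 # heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(e) - 1)) / c)          # between-study variance
    w_star = 1.0 / (v + tau2)
    pooled = np.sum(w_star * e) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se

# invented per-study (mean, sd, n) for an F0-variation measure: RHD vs. control
studies = [((2.1, 0.9, 12), (2.8, 1.0, 14)),
           ((1.8, 0.7, 10), (2.5, 0.8, 11)),
           ((2.4, 1.1, 15), (2.6, 1.0, 15))]
gs, vs = zip(*(hedges_g(*rhd, *ctl) for rhd, ctl in studies))
pooled, se = random_effects_pool(gs, vs)  # negative pooled g: RHD below controls
```

A hierarchical Bayesian model replaces the closed-form tau-squared estimate with a full posterior over the between-study variance, which is better behaved with the small study counts noted in the abstract.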
Collapse
Affiliation(s)
- Ethan Weed
- School of Communication and Culture, Aarhus University, Denmark
| | | |
Collapse
|
16
|
Proverbio AM, Santoni S, Adorni R. ERP Markers of Valence Coding in Emotional Speech Processing. iScience 2020; 23:100933. [PMID: 32151976 PMCID: PMC7063241 DOI: 10.1016/j.isci.2020.100933] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2019] [Revised: 12/20/2019] [Accepted: 02/19/2020] [Indexed: 11/01/2022] Open
Abstract
How is auditory emotional information processed? The aim of the study was to compare cerebral responses to emotionally positive versus negative spoken phrases matched for structure and content. Twenty participants listened to 198 vocal stimuli while detecting filler phrases containing first names. EEG was recorded from 128 sites. Three event-related potential (ERP) components were quantified and found to be sensitive to emotional valence from 350 ms of latency onward. The P450 and late positivity were enhanced by positive content, whereas an anterior negativity was larger to negative content. A similar set of markers (P300, N400, LP) was found previously for the processing of positive versus negative affective vocalizations, prosody, and music, which suggests a common neural mechanism for extracting the emotional content of auditory information. SwLORETA applied to potentials recorded between 350 and 550 ms showed that negative speech activated the right temporo/parietal areas (BA40, BA20/21), whereas positive speech activated the homologous left areas and inferior frontal areas.
Collapse
Affiliation(s)
- Alice Mado Proverbio
- Milan Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Milan, Italy.
| | - Sacha Santoni
- Milan Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Milan, Italy
| | - Roberta Adorni
- Milan Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Milan, Italy
| |
Collapse
|
17
|
What you say versus how you say it: Comparing sentence comprehension and emotional prosody processing using fMRI. Neuroimage 2019; 209:116509. [PMID: 31899288 DOI: 10.1016/j.neuroimage.2019.116509] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2019] [Revised: 12/23/2019] [Accepted: 12/26/2019] [Indexed: 11/24/2022] Open
Abstract
While language processing is often described as lateralized to the left hemisphere (LH), the processing of emotion carried by vocal intonation is typically attributed to the right hemisphere (RH) and, more specifically, to areas mirroring the LH language areas. However, the evidence base for this hypothesis is inconsistent, with some studies supporting right-lateralization but others favoring bilateral involvement in emotional prosody processing. Here we compared fMRI activations for an emotional prosody task with those for a sentence comprehension task in 20 neurologically healthy adults, quantifying lateralization using a lateralization index. We observed right-lateralized frontotemporal activations for emotional prosody that roughly mirrored the left-lateralized activations for sentence comprehension. In addition, emotional prosody also evoked bilateral activation in pars orbitalis (BA47), the amygdala, and the anterior insula. These findings are consistent with the idea that analysis of the auditory speech signal is split between the hemispheres, possibly according to their preferred temporal resolution, with the left preferentially encoding phonetic information and the right encoding prosodic information. Once processed, emotional prosody information is fed to domain-general emotion processing areas and integrated with semantic information, resulting in additional bilateral activations.
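A lateralization index of the kind used in this study is typically computed as LI = (L − R) / (L + R) over suprathreshold voxels in the left and right hemispheres, so that +1 means fully left-lateralized and −1 fully right-lateralized. A minimal count-based sketch on a simulated volume (the threshold choice and count-based weighting are assumptions; published LI toolboxes offer more robust variants such as bootstrapped threshold curves):

```python
import numpy as np

def lateralization_index(stat_map, mask_left, mask_right, threshold=0.0):
    """LI = (L - R) / (L + R) over suprathreshold voxel counts.

    +1 = fully left-lateralized, -1 = fully right-lateralized.
    """
    left = np.sum(stat_map[mask_left] > threshold)
    right = np.sum(stat_map[mask_right] > threshold)
    if left + right == 0:
        return 0.0
    return (left - right) / (left + right)

# toy 10x10x10 statistical map: x < 5 is "left hemisphere", x >= 5 is "right"
rng = np.random.default_rng(0)
vol = rng.normal(0, 1, (10, 10, 10))
vol[:5] += 2.0  # simulate a left-lateralized response (e.g., sentence task)
x = np.arange(10)[:, None, None]
li = lateralization_index(vol,
                          np.broadcast_to(x < 5, vol.shape),
                          np.broadcast_to(x >= 5, vol.shape),
                          threshold=1.0)
# li is strongly positive, as expected for a left-lateralized effect
```

For the prosody task in this study the same computation would yield a negative LI, reflecting the right-lateralized frontotemporal activations reported.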
Collapse
|
18
|
Manno FAM, Lau C, Fernandez-Ruiz J, Manno SHC, Cheng SH, Barrios FA. The human amygdala disconnecting from auditory cortex preferentially discriminates musical sound of uncertain emotion by altering hemispheric weighting. Sci Rep 2019; 9:14787. [PMID: 31615998 PMCID: PMC6794305 DOI: 10.1038/s41598-019-50042-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2019] [Accepted: 08/24/2019] [Indexed: 02/06/2023] Open
Abstract
How do humans discriminate emotion from non-emotion? The specific psychophysical cues and neural responses involved in resolving emotional information in sound are unknown. In this study we used a discrimination psychophysical-fMRI sparse sampling paradigm to locate threshold responses to happy and sad acoustic stimuli. The fine structure and envelope of the auditory signals were covaried to manipulate emotional certainty. We report that emotion identification at threshold in music utilizes fine structure cues. The auditory cortex was activated, but its activation did not vary with emotional uncertainty. Amygdala activation was modulated by emotion identification and was absent when emotional stimuli were identifiable only at chance, especially in the left hemisphere. The right amygdala was considerably more deactivated in response to uncertain emotion. The threshold of emotion was signified by right amygdala deactivation and a change in the left amygdala that exceeded right amygdala activation. Functional sex differences were noted during binaural presentations of emotionally uncertain stimuli, with the right amygdala showing larger activation in females. Negative control experiments (silent stimuli) investigated sparse sampling of silence to ensure that the modulation effects were inherent to emotional resolvability. No functional modulation of Heschl's gyrus occurred during silence; however, during rest the amygdala baseline state was asymmetrically lateralized. The evidence indicates that changing patterns of activation and deactivation between the left and right amygdala are a hallmark feature of discriminating emotion from non-emotion in music.
Collapse
Affiliation(s)
- Francis A M Manno
- School of Biomedical Engineering, Faculty of Engineering, The University of Sydney, Sydney, New South Wales, Australia.
- Department of Physics, City University of Hong Kong, HKSAR, China.
| | - Condon Lau
- Department of Physics, City University of Hong Kong, HKSAR, China.
| | - Juan Fernandez-Ruiz
- Departamento de Fisiología, Facultad de Medicina, Universidad Nacional Autónoma de México, México City, 04510, Mexico
| | | | - Shuk Han Cheng
- Department of Biomedical Sciences, City University of Hong Kong, HKSAR, China
| | - Fernando A Barrios
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Juriquilla, Querétaro, Mexico.
| |
Collapse
|
19
|
Zhao C, Chronaki G, Schiessl I, Wan MW, Abel KM. Is infant neural sensitivity to vocal emotion associated with mother-infant relational experience? PLoS One 2019; 14:e0212205. [PMID: 30811431 PMCID: PMC6392422 DOI: 10.1371/journal.pone.0212205] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2018] [Accepted: 01/29/2019] [Indexed: 12/20/2022] Open
Abstract
An early understanding of others' vocal emotions provides infants with a distinct advantage for eliciting appropriate care from caregivers and for navigating their social world. Consistent with this notion, an emerging literature suggests that a temporal cortical response to the prosody of emotional speech is observable in the first year of life. Furthermore, neural specialisation to vocal emotion in infancy may vary according to early experience. Neural sensitivity to emotional non-speech vocalisations was investigated in 29 six-month-old infants using functional near-infrared spectroscopy (fNIRS). Both angry and happy vocalisations evoked increased activation in the temporal cortices (relative to neutral and angry vocalisations, respectively), and the strength of the angry-minus-neutral effect was positively associated with the degree of directiveness in the mothers' play interactions with their infant. This first fNIRS study of infant vocal emotion processing implicates bilateral temporal mechanisms similar to those found in adults and suggests that infants who experience more directive caregiving or social play may more strongly or preferentially process vocal anger by six months of age.
Collapse
Affiliation(s)
- Chen Zhao
- Centre for Women’s Mental Health, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, United Kingdom
| | - Georgia Chronaki
- Developmental Cognitive Neuroscience (DCN) Laboratory, School of Psychology, University of Central Lancashire, Preston, United Kingdom
- Division of Neuroscience & Experimental Psychology, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, United Kingdom
- Developmental Brain-Behaviour Laboratory, Psychology, University of Southampton, United Kingdom
| | - Ingo Schiessl
- Division of Neuroscience & Experimental Psychology, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, United Kingdom
| | - Ming Wai Wan
- Centre for Women’s Mental Health, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, United Kingdom
| | - Kathryn M. Abel
- Centre for Women’s Mental Health, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, United Kingdom
- Greater Manchester Mental Health NHS Foundation Trust, Manchester, United Kingdom
| |
Collapse
|
20
|
Zhang D, Chen Y, Hou X, Wu YJ. Near-infrared spectroscopy reveals neural perception of vocal emotions in human neonates. Hum Brain Mapp 2019; 40:2434-2448. [PMID: 30697881 DOI: 10.1002/hbm.24534] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2018] [Revised: 01/19/2019] [Accepted: 01/20/2019] [Indexed: 12/20/2022] Open
Abstract
Processing affective prosody, that is, the emotional tone of a speaker, is fundamental to human communication and adaptive behaviors. Previous studies have mainly focused on adults and infants; thus, the neural mechanisms underlying the processing of affective prosody in newborns remain unclear. Here, we used near-infrared spectroscopy to examine the ability of 0-to-4-day-old neonates to discriminate emotions conveyed by speech prosody in their maternal language and a foreign language. Happy, fearful, and angry prosodies enhanced neural activation in the right superior temporal gyrus relative to neutral prosody in the maternal but not the foreign language. Happy prosody elicited greater activation than negative prosody in the left superior frontal gyrus and the left angular gyrus, regions that have not been associated with affective prosody processing in infants or adults. These findings suggest that sensitivity to affective prosody is formed through prenatal exposure to vocal stimuli of the maternal language. Furthermore, the sensitive neural correlates appeared more distributed in neonates than in infants, indicating a high degree of neural specialization between the neonatal stage and early infancy. Finally, neonates showed preferential neural responses to positive over negative prosody, which is contrary to the "negativity bias" phenomenon established in adult and infant studies.
Collapse
Affiliation(s)
- Dandan Zhang
- College of Psychology and Sociology, Shenzhen University, Shenzhen, China.,Shenzhen Key Laboratory of Affective and Social Cognitive Science, Shenzhen University, Shenzhen, China
| | - Yu Chen
- College of Psychology and Sociology, Shenzhen University, Shenzhen, China
| | - Xinlin Hou
- Department of Pediatrics, Peking University First Hospital, Beijing, China
| | - Yan Jing Wu
- Faculty of Foreign Languages, Ningbo University, Ningbo, China
| |
Collapse
|
21
|
Witteman J, Van IJzendoorn MH, Rilling JK, Bos PA, Schiller NO, Bakermans-Kranenburg MJ. Towards a neural model of infant cry perception. Neurosci Biobehav Rev 2019; 99:23-32. [PMID: 30710581 DOI: 10.1016/j.neubiorev.2019.01.026] [Citation(s) in RCA: 41] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2018] [Revised: 01/25/2019] [Accepted: 01/26/2019] [Indexed: 01/23/2023]
Abstract
Previous work suggests that infant cry perception is supported by an evolutionarily old neural network consisting of the auditory system, the thalamocingulate circuit, the frontoinsular system, the reward pathway, and the medial prefrontal cortex. Furthermore, gender and parenthood have been proposed to modulate the processing of infant cries. The present meta-analysis (N = 350) confirmed involvement of the auditory system, the thalamocingulate circuit, the dorsal anterior insula, the pre-supplementary motor area, the dorsomedial prefrontal cortex, and the inferior frontal gyrus in infant cry perception, but not of the reward pathway. Structures related to motoric processing, possibly supporting the preparation of a parenting response, were also involved. Finally, females (more than males) and parents (more than non-parents) recruited a cortico-limbic sensorimotor integration network, offering a neural explanation for the previously observed enhanced processing of infant cries in these sub-groups. Based on these results, an updated neural model of infant cry perception is presented.
Collapse
Affiliation(s)
- J Witteman
- Leiden Institute for Brain and Cognition / Leiden University Centre for Linguistics, Leiden University, Van Wijkplaats 2, r2.02b, 2311 BV Leiden, the Netherlands.
| | - M H Van IJzendoorn
- Capital Normal University, No. 83 Xi San Huan Bei Lu, Haidian, Beijing 100089, China; Erasmus University Rotterdam, Mandeville Building, Room T15-10, P.O. Box 1738, 3000 DR Rotterdam, the Netherlands
| | - J K Rilling
- Emory College of Arts and Sciences, Dept. of Anthropology, 1462 Clifton Rd, GA 30329, Atlanta, United States of America
| | - P A Bos
- Utrecht University, Faculty of Social Science, Martinus J. Langeveldgebouw, Heidelberglaan 1, 3584 CS Utrecht, the Netherlands
| | - N O Schiller
- Leiden Institute for Brain and Cognition / Leiden University Centre for Linguistics, Leiden University, Van Wijkplaats 2, r2.02b, 2311 BV Leiden, the Netherlands
| | - M J Bakermans-Kranenburg
- Leiden Institute for Brain and Cognition / Leiden University Centre for Linguistics, Leiden University, Van Wijkplaats 2, r2.02b, 2311 BV Leiden, the Netherlands; Clinical Child & Family Studies, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081 BT Amsterdam, the Netherlands
| |
Collapse
|
22
|
Hemispheric specialization of the basal ganglia during vocal emotion decoding: Evidence from asymmetric Parkinson's disease and 18FDG PET. Neuropsychologia 2018; 119:1-11. [DOI: 10.1016/j.neuropsychologia.2018.07.023] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2018] [Revised: 07/10/2018] [Accepted: 07/19/2018] [Indexed: 11/15/2022]
|
23
|
Lindström R, Lepistö-Paisley T, Makkonen T, Reinvall O, Nieminen-von Wendt T, Alén R, Kujala T. Atypical perceptual and neural processing of emotional prosodic changes in children with autism spectrum disorders. Clin Neurophysiol 2018; 129:2411-2420. [PMID: 30278390 DOI: 10.1016/j.clinph.2018.08.018] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2018] [Revised: 07/20/2018] [Accepted: 08/22/2018] [Indexed: 11/29/2022]
Abstract
OBJECTIVE The present study explored the processing of emotional speech prosody in school-aged children with autism spectrum disorders (ASD) but without marked language impairments (children with ASD [no LI]). METHODS The mismatch negativity (MMN)/late discriminative negativity (LDN), reflecting pre-attentive auditory discrimination processes, and the P3a, indexing involuntary orienting to attention-catching changes, were recorded to natural word stimuli uttered with different emotional connotations (neutral, sad, scornful, and commanding). Perceptual prosody discrimination was addressed with a behavioral sound-discrimination test. RESULTS Overall, children with ASD (no LI) were slower than typically developing control children in behaviorally discriminating the prosodic features of speech stimuli. Further, children with ASD (no LI) showed smaller standard-stimulus event-related potentials (ERPs) and MMN/LDNs than controls. In addition, the amplitude of the P3a was diminished and differentially distributed on the scalp in children with ASD (no LI) compared with control children. CONCLUSIONS Processing of words and changes in emotional speech prosody is impaired at various levels of information processing in school-aged children with ASD (no LI). SIGNIFICANCE The results suggest that low-level speech sound discrimination and orienting deficits might contribute to the emotional speech prosody processing impairments observed in ASD.
Collapse
Affiliation(s)
- R Lindström
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland.
| | - T Lepistö-Paisley
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland; Department of Pediatric Neurology, Helsinki University Hospital, Helsinki, Finland
| | - T Makkonen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
| | - O Reinvall
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland; Department of Pediatric Neurology, Helsinki University Hospital, Helsinki, Finland
| | - T Nieminen-von Wendt
- Neuropsychiatric Rehabilitation and Medical Centre NeuroMental, Helsinki, Finland
| | - R Alén
- Department of Child Neurology, Central Finland Central Hospital, Jyväskylä, Finland
| | - T Kujala
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
| |
Collapse
|
24
|
Sammler D, Cunitz K, Gierhan SME, Anwander A, Adermann J, Meixensberger J, Friederici AD. White matter pathways for prosodic structure building: A case study. BRAIN AND LANGUAGE 2018; 183:1-10. [PMID: 29758365 DOI: 10.1016/j.bandl.2018.05.001] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/03/2017] [Revised: 03/14/2018] [Accepted: 05/03/2018] [Indexed: 06/08/2023]
Abstract
The relevance of left dorsal and ventral fiber pathways for syntactic and semantic comprehension is well established, while pathways for prosody are little explored. The present study examined linguistic prosodic structure building in a patient whose right arcuate/superior longitudinal fascicles and posterior corpus callosum were transiently compromised by a vasogenic peritumoral edema. Compared to ten matched healthy controls, the patient's ability to detect irregular prosodic structure significantly improved between pre- and post-surgical assessment. This recovery was accompanied by an increase in average fractional anisotropy (FA) in right dorsal and posterior transcallosal fiber tracts. Neither general cognitive abilities nor (non-prosodic) syntactic comprehension nor FA in right ventral and left dorsal fiber tracts showed a similar pre-post increase. Together, these findings suggest a contribution of right dorsal and inter-hemispheric pathways to prosody perception, including the right-dorsal tracking and structuring of prosodic pitch contours that is transcallosally informed by concurrent syntactic information.
Collapse
Affiliation(s)
- Daniela Sammler
- Otto Hahn Group "Neural Bases of Intonation in Speech and Music", Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany.
| | - Katrin Cunitz
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany; Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital Ulm, Steinhövelstraße 5, 89075 Ulm, Germany
| | - Sarah M E Gierhan
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany; Berlin School of Mind and Brain, Humboldt University Berlin, Unter den Linden 6, 10099 Berlin, Germany
| | - Alfred Anwander
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany
| | - Jens Adermann
- University Hospital Leipzig, Clinic and Policlinic for Neurosurgery, Liebigstraße 20, 04103 Leipzig, Germany
| | - Jürgen Meixensberger
- University Hospital Leipzig, Clinic and Policlinic for Neurosurgery, Liebigstraße 20, 04103 Leipzig, Germany
| | - Angela D Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany; Berlin School of Mind and Brain, Humboldt University Berlin, Unter den Linden 6, 10099 Berlin, Germany
| |
Collapse
|
25
|
Liang B, Du Y. The Functional Neuroanatomy of Lexical Tone Perception: An Activation Likelihood Estimation Meta-Analysis. Front Neurosci 2018; 12:495. [PMID: 30087589 PMCID: PMC6066585 DOI: 10.3389/fnins.2018.00495] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2018] [Accepted: 07/02/2018] [Indexed: 11/13/2022] Open
Abstract
In tonal languages such as Chinese, lexical tone serves as a phonemic feature in determining word meaning. Meanwhile, it is close to prosody in terms of suprasegmental pitch variations and larynx-based articulation. The important yet mixed nature of lexical tone has evoked considerable study, but no consensus has been reached on its functional neuroanatomy. This meta-analysis aimed at uncovering the neural network of lexical tone perception in comparison with those of phoneme and prosody perception in a unified framework. Independent Activation Likelihood Estimation meta-analyses were conducted for different linguistic elements: lexical tone perceived by native tonal language speakers, lexical tone perceived by non-tonal language speakers, phoneme, word-level prosody, and sentence-level prosody. Results showed that lexical tone and prosody studies demonstrated more extensive activations in the right than in the left auditory cortex, whereas the opposite pattern was found for phoneme studies. Only tonal language speakers consistently recruited the left anterior superior temporal gyrus (STG) for processing lexical tone, an area implicated in phoneme processing and word-form recognition. Moreover, an anterior-lateral to posterior-medial gradient of activation as a function of element timescale was revealed in the right STG, in which the activation for lexical tone lay between those for phoneme and prosody. Another topological pattern was shown in the left precentral gyrus (preCG), with the activation for lexical tone overlapping with that for prosody but lying ventral to that for phoneme. These findings provide evidence that the neural network for lexical tone perception is hybrid with those for phoneme and prosody. That is, resembling prosody, lexical tone perception, regardless of language experience, involved the right auditory cortex, with activation localized between the sites engaged by phonemic and prosodic processing, suggesting a hierarchical organization of representations in the right auditory cortex. For tonal language speakers, lexical tone additionally engaged the left STG lexical mapping network, consistent with its phonemic representation. Similarly, when processing lexical tone, only tonal language speakers engaged the left preCG site implicated in prosody perception, consistent with tonal language speakers having stronger articulatory representations for lexical tone in the laryngeal sensorimotor network. A dynamic dual-stream model of lexical tone perception is proposed and discussed.
Collapse
Affiliation(s)
- Baishen Liang
- CAS Key Laboratory of Behavioral Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
| | - Yi Du
- CAS Key Laboratory of Behavioral Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
| |
Collapse
|
26
|
Morningstar M, Nelson EE, Dirks MA. Maturation of vocal emotion recognition: Insights from the developmental and neuroimaging literature. Neurosci Biobehav Rev 2018; 90:221-230. [DOI: 10.1016/j.neubiorev.2018.04.019] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2017] [Revised: 03/16/2018] [Accepted: 04/24/2018] [Indexed: 01/05/2023]
|
27
|
Klasen M, von Marschall C, Isman G, Zvyagintsev M, Gur RC, Mathiak K. Prosody production networks are modulated by sensory cues and social context. Soc Cogn Affect Neurosci 2018. [PMID: 29514331 PMCID: PMC5928400 DOI: 10.1093/scan/nsy015] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022] Open
Abstract
The neurobiology of emotional prosody production is not well investigated. In particular, the effects of cues and social context are not known. The present study sought to differentiate cued from free emotion generation and the effect of social feedback from a human listener. Online speech filtering enabled functional magnetic resonance imaging during prosodic communication in 30 participants. Emotional vocalizations were (i) free, (ii) auditorily cued, (iii) visually cued or (iv) accompanied by interactive feedback. In addition to distributed language networks, cued emotions increased activity in auditory cortex and, in the case of visual stimuli, visual cortex. Responses were larger in the posterior superior temporal gyrus of the right hemisphere and in the ventral striatum when participants were listened to and received feedback from the experimenter. Sensory, language and reward networks contributed to prosody production and were modulated by cues and social context. The right posterior superior temporal gyrus is a central hub for communication in social interactions, in particular for the interpersonal evaluation of vocal emotions.
Collapse
Affiliation(s)
- Martin Klasen
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany; JARA - Translational Brain Medicine, 52074 Aachen, Germany
| | - Clara von Marschall
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany; JARA - Translational Brain Medicine, 52074 Aachen, Germany
| | - Güldehen Isman
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany
| | - Mikhail Zvyagintsev
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany; JARA - Translational Brain Medicine, 52074 Aachen, Germany
| | - Ruben C Gur
- Department of Psychiatry, University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
| | - Klaus Mathiak
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany; JARA - Translational Brain Medicine, 52074 Aachen, Germany
| |
Collapse
|
28
|
Speech Prosodies of Different Emotional Categories Activate Different Brain Regions in Adult Cortex: an fNIRS Study. Sci Rep 2018; 8:218. [PMID: 29317758 PMCID: PMC5760650 DOI: 10.1038/s41598-017-18683-2] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2017] [Accepted: 12/14/2017] [Indexed: 11/12/2022] Open
Abstract
Emotional expressions of others embedded in speech prosodies are important for social interactions. This study used functional near-infrared spectroscopy to investigate how speech prosodies of different emotional categories are processed in the cortex. The results demonstrated several cerebral areas critical for emotional prosody processing. We confirmed that the superior temporal cortex, especially the right middle and posterior parts of the superior temporal gyrus (BA 22/42), primarily works to discriminate between emotional and neutral prosodies. Furthermore, the results suggested that categorization of emotions occurs within a high-level brain region, the frontal cortex, since the brain activation patterns were distinct when positive (happy) prosody was contrasted with negative (fearful and angry) prosody in the left middle part of the inferior frontal gyrus (BA 45) and the frontal eye field (BA 8), and when angry prosody was contrasted with neutral prosody in bilateral orbital frontal regions (BA 10/11). These findings verified and extended previous fMRI findings in the adult brain and also provided a "developed version" of brain activation for our following neonatal study.
Collapse
|
29
|
Simon D, Becker M, Mothes-Lasch M, Miltner WHR, Straube T. Loud and angry: sound intensity modulates amygdala activation to angry voices in social anxiety disorder. Soc Cogn Affect Neurosci 2017; 12:409-416. [PMID: 27651541 PMCID: PMC5390751 DOI: 10.1093/scan/nsw131] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2015] [Accepted: 09/06/2016] [Indexed: 11/12/2022] Open
Abstract
Angry expressions of both voices and faces represent disorder-relevant stimuli in social anxiety disorder (SAD). Although individuals with SAD show greater amygdala activation to angry faces, previous work has failed to find comparable effects for angry voices. Here, we investigated whether voice sound-intensity, a modulator of a voice's threat-relevance, affects brain responses to angry prosody in SAD. We used event-related functional magnetic resonance imaging to explore brain responses to voices varying in sound intensity and emotional prosody in SAD patients and healthy controls (HCs). Angry and neutral voices were presented either with normal or high sound amplitude, while participants had to decide upon the speaker's gender. Loud vs normal voices induced greater insula activation, and angry vs neutral prosody greater orbitofrontal cortex activation in SAD as compared with HC subjects. Importantly, an interaction of sound intensity, prosody and group was found in the insula and the amygdala. In particular, the amygdala showed greater activation to loud angry voices in SAD as compared with HC subjects. This finding demonstrates a modulating role of voice sound-intensity on amygdalar hyperresponsivity to angry prosody in SAD and suggests that abnormal processing of interpersonal threat signals in amygdala extends beyond facial expressions in SAD.
Collapse
Affiliation(s)
- Doerte Simon
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Von-Esmarch-Str. 52, D-48149 Münster, Germany
| | - Michael Becker
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Von-Esmarch-Str. 52, D-48149 Münster, Germany
| | - Martin Mothes-Lasch
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Von-Esmarch-Str. 52, D-48149 Münster, Germany
| | - Wolfgang H R Miltner
- Department of Biological and Clinical Psychology, Friedrich Schiller University, Jena, Germany
| | - Thomas Straube
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Von-Esmarch-Str. 52, D-48149 Münster, Germany
| |
Collapse
|
30
|
Convergence of semantics and emotional expression within the IFG pars orbitalis. Neuroimage 2017; 156:240-248. [DOI: 10.1016/j.neuroimage.2017.04.020] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2017] [Revised: 03/16/2017] [Accepted: 04/07/2017] [Indexed: 10/19/2022] Open
|
31
|
Mattavelli G, Pisoni A, Casarotti A, Comi A, Sera G, Riva M, Bizzi A, Rossi M, Bello L, Papagno C. Consequences of brain tumour resection on emotion recognition. J Neuropsychol 2017; 13:1-21. [PMID: 28700143 DOI: 10.1111/jnp.12130] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2017] [Revised: 04/13/2017] [Indexed: 11/27/2022]
Abstract
Emotion processing impairments are common in patients undergoing brain surgery for fronto-temporal tumour resection, with potential consequences for social interactions. However, evidence is controversial concerning the side and site of lesions causing such deficits. This study investigates visual and auditory emotion recognition in brain tumour patients with the aim of clarifying which lesion sites are related to impairments in emotion processing from different modalities. Thirty-four patients were evaluated, before and after surgery, on facial expression and emotional prosody recognition; voxel-based lesion-symptom mapping (VLSM) analyses were performed on patients' post-surgery MRI images. Results showed that patients' performance decreased after surgery in both visual and auditory modalities but, in general, recovered 3 months after surgery. In facial expression recognition, left brain-damaged patients showed greater post-surgery deterioration than right brain-damaged ones, whose performance specifically decreased for sadness and fear. VLSM analysis revealed two segregated areas in the left hemisphere accounting for post-surgery scores for happy (fronto-temporo-insular region) and surprised (middle frontal gyrus and inferior fronto-occipital fasciculus) facial expressions. Our findings demonstrate that surgical removal of tumours in the fronto-temporal region produces impairment in facial emotion recognition with overall recovery at 3 months, suggesting a partially different representation of positive and negative emotions in the left and right hemispheres for visually, but not auditorily, presented emotions; moreover, we show that deficits in recognition of specific expressions are associated with discrete lesion locations.
Collapse
Affiliation(s)
- Giulia Mattavelli
- Department of Psychology, University of Milano-Bicocca, Italy; NeuroMi-Milan Center for Neuroscience, Italy
| | - Alberto Pisoni
- Department of Psychology, University of Milano-Bicocca, Italy; NeuroMi-Milan Center for Neuroscience, Italy
| | | | - Alessandro Comi
- Unit of Oncological Neurosurgery, Humanitas Research Hospital, Rozzano, Italy
| | - Giada Sera
- Department of Psychology, University of Milano-Bicocca, Italy
| | - Marco Riva
- Unit of Oncological Neurosurgery, Humanitas Research Hospital, Rozzano, Italy
| | - Alberto Bizzi
- Neuroradiology Department, IRCCS Foundation Neurological Institute Carlo Besta, Milan, Italy
| | - Marco Rossi
- Unit of Oncological Neurosurgery, Humanitas Research Hospital, Rozzano, Italy
| | - Lorenzo Bello
- Unit of Oncological Neurosurgery, Humanitas Research Hospital, Rozzano, Italy; Department of Medical Biotechnology and Translational Medicine, University of Milan, Italy
| | - Costanza Papagno
- Department of Psychology, University of Milano-Bicocca, Italy; CIMeC and CeRiN, University of Trento, Rovereto, Italy
| |
Collapse
|
32
|
Péron J, Renaud O, Haegelen C, Tamarit L, Milesi V, Houvenaghel JF, Dondaine T, Vérin M, Sauleau P, Grandjean D. Vocal emotion decoding in the subthalamic nucleus: An intracranial ERP study in Parkinson's disease. Brain Lang 2017; 168:1-11. [PMID: 28088666 DOI: 10.1016/j.bandl.2016.12.003] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/27/2016] [Revised: 11/22/2016] [Accepted: 12/12/2016] [Indexed: 05/13/2023]
Abstract
Using intracranial local field potential (LFP) recordings in patients with Parkinson's disease (PD) undergoing deep brain stimulation (DBS), we explored the electrophysiological activity of the subthalamic nucleus (STN) in response to emotional stimuli in the auditory modality. Previous studies focused on the influence of visual stimuli. To this end, we recorded LFPs within the STN in response to angry, happy, and neutral prosodies in 13 patients with PD who had just undergone implantation of DBS electrodes. We observed specific modulation of the right STN in response to anger and happiness, as opposed to neutral prosody, occurring at around 200-300 ms post-onset, and later at around 850-950 ms post-onset for anger and around 3250-3350 ms post-onset for happiness. Taken together with previous reports of modulated STN activity in response to emotional visual stimuli, the present results appear to confirm that the STN is involved in emotion processing irrespective of stimulus valence and sensory modality.
Collapse
Affiliation(s)
- Julie Péron
- 'Neuroscience of Emotion and Affective Dynamics' Laboratory, Department of Psychology & Swiss Center for Affective Sciences, University of Geneva, 40 bd du Pont d'Arve, 1205 Geneva, Switzerland; Neuropsychology Unit, Department of Neurology, University Hospitals of Geneva, Rue Gabrielle-Perret-Gentil 4, 1205 Geneva, Switzerland.
| | - Olivier Renaud
- Methodology and Data Analysis Unit, Department of Psychology, University of Geneva, 40 bd du Pont d'Arve, 1205 Geneva, Switzerland
| | - Claire Haegelen
- Neurosurgery Department, Pontchaillou Hospital, Rennes University Hospital, rue Henri Le Guilloux, 35033 Rennes, France; INSERM, LTSI U1099, Faculty of Medicine, CS 34317, University of Rennes I, F-35042 Rennes, France
| | - Lucas Tamarit
- Neuropsychology Unit, Department of Neurology, University Hospitals of Geneva, Rue Gabrielle-Perret-Gentil 4, 1205 Geneva, Switzerland
| | - Valérie Milesi
- 'Neuroscience of Emotion and Affective Dynamics' Laboratory, Department of Psychology & Swiss Center for Affective Sciences, University of Geneva, 40 bd du Pont d'Arve, 1205 Geneva, Switzerland; Neuropsychology Unit, Department of Neurology, University Hospitals of Geneva, Rue Gabrielle-Perret-Gentil 4, 1205 Geneva, Switzerland
| | - Jean-François Houvenaghel
- 'Behavior and Basal Ganglia' Research Unit (EA 4712), University of Rennes 1, Rennes University Hospital, rue Henri Le Guilloux, 35033 Rennes, France; Neurology Department, Pontchaillou Hospital, Rennes University Hospital, rue Henri Le Guilloux, 35033 Rennes, France
| | - Thibaut Dondaine
- 'Behavior and Basal Ganglia' Research Unit (EA 4712), University of Rennes 1, Rennes University Hospital, rue Henri Le Guilloux, 35033 Rennes, France; Neurology Department, Pontchaillou Hospital, Rennes University Hospital, rue Henri Le Guilloux, 35033 Rennes, France; Adult Psychiatry Department, Guillaume Régnier Hospital, 108 avenue du Général Leclerc, 35703 Rennes, France
| | - Marc Vérin
- 'Behavior and Basal Ganglia' Research Unit (EA 4712), University of Rennes 1, Rennes University Hospital, rue Henri Le Guilloux, 35033 Rennes, France; Neurology Department, Pontchaillou Hospital, Rennes University Hospital, rue Henri Le Guilloux, 35033 Rennes, France
| | - Paul Sauleau
- 'Behavior and Basal Ganglia' Research Unit (EA 4712), University of Rennes 1, Rennes University Hospital, rue Henri Le Guilloux, 35033 Rennes, France; Physiology Department, Pontchaillou Hospital, Rennes University Hospital, rue Henri Le Guilloux, 35033 Rennes, France
| | - Didier Grandjean
- 'Neuroscience of Emotion and Affective Dynamics' Laboratory, Department of Psychology & Swiss Center for Affective Sciences, University of Geneva, 40 bd du Pont d'Arve, 1205 Geneva, Switzerland; Neuropsychology Unit, Department of Neurology, University Hospitals of Geneva, Rue Gabrielle-Perret-Gentil 4, 1205 Geneva, Switzerland
| |
Collapse
|
33
|
Jin Y, Mao Z, Ling Z, Xu X, Xie G, Yu X. Altered emotional prosody processing in patients with Parkinson's disease after subthalamic nucleus stimulation. Neuropsychiatr Dis Treat 2017; 13:2965-2975. [PMID: 29270014 PMCID: PMC5729839 DOI: 10.2147/ndt.s153505] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/18/2023] Open
Abstract
BACKGROUND: Patients with Parkinson's disease (PD) exhibit deficits in recognizing and expressing vocal emotional prosody. The aim of this study was to explore emotional prosody processing in patients with PD shortly after subthalamic nucleus (STN) deep brain stimulation (DBS).
METHODS: Two groups of patients with PD (pre-DBS and post-DBS) and one healthy control (HC) group were recruited as participants. All participants (PD and HC) were assessed using the Montreal Affective Voices database 50 Voices Recognition test. All participants were asked to nonverbally express five basic emotions (happiness, anger, fear, sadness, and neutral) to test emotional prosody expression. Fifteen native Chinese speakers were recruited as raters. We recorded the accuracy rate, reaction time, confidence level, and two acoustic parameters (mean pitch and mean intensity).
RESULTS: The PD groups scored lower than the HC group in recognizing and expressing emotional prosody. STN DBS had no significant effect on the recognition of emotional prosody but had a significant effect on fear prosody expression. Pearson's correlation analysis revealed significant correlations between performance on emotional prosody recognition tests and performance on emotional prosody expression tests in both the pre-DBS PD and post-DBS PD groups.
CONCLUSION: Shortly after STN DBS, the ability to recognize emotional prosody was not altered, but fear expression was impaired. We identified associations between abnormalities in emotional prosody recognition and expression deficits both before and after STN DBS, indicating that the processes involved in recognizing and expressing emotional prosody may share a common system.
Collapse
Affiliation(s)
- Yazhou Jin
- Department of Neurosurgery, People's Liberation Army General Hospital, Beijing, People's Republic of China
| | - Zhiqi Mao
- Department of Neurosurgery, People's Liberation Army General Hospital, Beijing, People's Republic of China
| | - Zhipei Ling
- Department of Neurosurgery, People's Liberation Army General Hospital, Beijing, People's Republic of China
| | - Xin Xu
- Department of Neurosurgery, People's Liberation Army General Hospital, Beijing, People's Republic of China
| | - Guang Xie
- Department of Neurosurgery, People's Liberation Army General Hospital, Beijing, People's Republic of China
| | - Xinguang Yu
- Department of Neurosurgery, People's Liberation Army General Hospital, Beijing, People's Republic of China
| |
Collapse
|
34
|
Castelluccio BC, Myers EB, Schuh JM, Eigsti IM. Neural Substrates of Processing Anger in Language: Contributions of Prosody and Semantics. J Psycholinguist Res 2016; 45:1359-1367. [PMID: 26645465 DOI: 10.1007/s10936-015-9405-z] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Emotions are conveyed primarily through two channels in language: semantics and prosody. While many studies confirm the role of a left hemisphere network in processing semantic emotion, there has been debate over the role of the right hemisphere in processing prosodic emotion. Some evidence suggests a preferential role for the right hemisphere, and other evidence supports a bilateral model. The relative contributions of semantics and prosody to the overall processing of affect in language are largely unexplored. The present work used functional magnetic resonance imaging to elucidate the neural bases of processing anger conveyed by prosody or semantic content. Results showed a robust, distributed, bilateral network for processing angry prosody and a more modest left hemisphere network for processing angry semantics when compared to emotionally neutral stimuli. Findings suggest the nervous system may be more responsive to prosodic cues in speech than to the semantic content of speech.
Collapse
Affiliation(s)
- Brian C Castelluccio
- Department of Psychological Sciences, University of Connecticut, 406 Babbidge Road, Unit 1020, Storrs, CT, 06269-1020, USA.
| | - Emily B Myers
- Department of Psychological Sciences, University of Connecticut, 406 Babbidge Road, Unit 1020, Storrs, CT, 06269-1020, USA
- Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, CT, USA
- Haskins Laboratories, New Haven, CT, USA
| | - Jillian M Schuh
- Department of Neurology, Division of Neuropsychology, Medical College of Wisconsin, Milwaukee, WI, USA
| | - Inge-Marie Eigsti
- Department of Psychological Sciences, University of Connecticut, 406 Babbidge Road, Unit 1020, Storrs, CT, 06269-1020, USA
- Haskins Laboratories, New Haven, CT, USA
| |
Collapse
|
35
|
Tseng HH, Roiser JP, Modinos G, Falkenberg I, Samson C, McGuire P, Allen P. Corticolimbic dysfunction during facial and prosodic emotional recognition in first-episode psychosis patients and individuals at ultra-high risk. Neuroimage Clin 2016; 12:645-654. [PMID: 27747152 PMCID: PMC5053033 DOI: 10.1016/j.nicl.2016.09.006] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2016] [Revised: 08/22/2016] [Accepted: 09/06/2016] [Indexed: 01/17/2023]
Abstract
Emotional processing dysfunction is widely reported in patients with chronic schizophrenia and first-episode psychosis (FEP), and has been linked to functional abnormalities of corticolimbic regions. However, corticolimbic dysfunction is less studied in people at ultra-high risk for psychosis (UHR), particularly during the processing of prosodic voices. We examined corticolimbic response during an emotion recognition task in 18 UHR participants and compared them with 18 FEP patients and 21 healthy controls (HC). Emotion recognition accuracy and corticolimbic response were measured during functional magnetic resonance imaging (fMRI) using emotional dynamic facial and prosodic voice stimuli. Relative to HC, both UHR and FEP groups showed impaired overall emotion recognition accuracy. During face trials, neither the UHR nor the FEP group showed significant differences in brain activation relative to HC, whereas during voice trials, FEP patients showed reduced activation across corticolimbic networks including the amygdala. UHR participants showed a trend toward increased response in the caudate nucleus during the processing of emotionally valenced prosodic voices relative to HC. The results indicate that the corticolimbic dysfunction seen in FEP patients is also present, albeit to a lesser extent, in an UHR cohort, and may represent a neural substrate for emotional processing difficulties prior to the onset of florid psychosis.
Collapse
Affiliation(s)
- Huai-Hsuan Tseng
- Institute of Psychiatry, King's College London, United Kingdom
- Department of Psychiatry, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
| | - Jonathan P. Roiser
- Institute of Cognitive Neuroscience, University College London, United Kingdom
| | - Gemma Modinos
- Institute of Psychiatry, King's College London, United Kingdom
| | - Irina Falkenberg
- Institute of Psychiatry, King's College London, United Kingdom
- Philipps-University Marburg, Marburg, Germany
| | - Carly Samson
- Institute of Psychiatry, King's College London, United Kingdom
| | - Philip McGuire
- Institute of Psychiatry, King's College London, United Kingdom
| | - Paul Allen
- Institute of Psychiatry, King's College London, United Kingdom
- Department of Psychology, University of Roehampton, London, United Kingdom
| |
Collapse
|
36
|
Frühholz S, van der Zwaag W, Saenz M, Belin P, Schobert AK, Vuilleumier P, Grandjean D. Neural decoding of discriminative auditory object features depends on their socio-affective valence. Soc Cogn Affect Neurosci 2016; 11:1638-49. [PMID: 27217117 DOI: 10.1093/scan/nsw066] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2015] [Accepted: 05/11/2016] [Indexed: 11/12/2022] Open
Abstract
Human voices consist of specific patterns of acoustic features that are considerably enhanced during affective vocalizations. These acoustic features are presumably used by listeners to accurately discriminate between acoustically or emotionally similar vocalizations. Here we used high-field 7T functional magnetic resonance imaging in human listeners together with a so-called experimental 'feature elimination approach' to investigate neural decoding of three important voice features of two affective valence categories (i.e. aggressive and joyful vocalizations). We found a valence-dependent sensitivity to vocal pitch (f0) dynamics and to spectral high-frequency cues already at the level of the auditory thalamus. Furthermore, pitch dynamics and harmonics-to-noise ratio (HNR) showed overlapping, but again valence-dependent sensitivity in tonotopic cortical fields during the neural decoding of aggressive and joyful vocalizations, respectively. For joyful vocalizations we also revealed sensitivity in the inferior frontal cortex (IFC) to the HNR and pitch dynamics. The data thus indicate that several auditory regions were sensitive to multiple, rather than single, discriminative voice features. Furthermore, some regions partly showed a valence-dependent hypersensitivity to certain features, such as pitch dynamic sensitivity in core auditory regions and in the IFC for aggressive vocalizations, and sensitivity to high-frequency cues in auditory belt and parabelt regions for joyful vocalizations.
Collapse
Affiliation(s)
- Sascha Frühholz
- Department of Psychology, University of Zurich, 8050 Zurich, Switzerland; Swiss Center for Affective Sciences, University of Geneva, 1202 Geneva, Switzerland
| | - Wietske van der Zwaag
- Center for Biomedical Imaging, Ecole Polytechnique Fédérale de Lausanne 1015 Lausanne, Switzerland
| | - Melissa Saenz
- Laboratoire de Recherche en Neuroimagerie, Department of Clinical Neurosciences, CHUV, 1011 Lausanne, Switzerland; Institute of Bioengineering, Ecole Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
| | - Pascal Belin
- Department of Psychology, University of Glasgow, Glasgow G12 8QQ, UK
| | - Anne-Kathrin Schobert
- Laboratory for Neurology and Imaging of Cognition, Department of Neurology and Department of Neuroscience, Medical School, University of Geneva, 1211 Geneva, Switzerland
| | - Patrik Vuilleumier
- Swiss Center for Affective Sciences, University of Geneva, 1202 Geneva, Switzerland; Laboratory for Neurology and Imaging of Cognition, Department of Neurology and Department of Neuroscience, Medical School, University of Geneva, 1211 Geneva, Switzerland
| | - Didier Grandjean
- Swiss Center for Affective Sciences, University of Geneva, 1202 Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, University of Geneva, 1205 Geneva, Switzerland
| |
Collapse
|
37
|
Ceravolo L, Frühholz S, Grandjean D. Modulation of Auditory Spatial Attention by Angry Prosody: An fMRI Auditory Dot-Probe Study. Front Neurosci 2016; 10:216. [PMID: 27242420 PMCID: PMC4864064 DOI: 10.3389/fnins.2016.00216] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2015] [Accepted: 04/29/2016] [Indexed: 11/16/2022] Open
Abstract
Emotional stimuli have been shown to modulate attentional orienting through signals sent by subcortical brain regions that modulate visual perception at early stages of processing. Fewer studies, however, have investigated a similar effect of emotional stimuli on attentional orienting in the auditory domain together with an investigation of the brain regions underlying such attentional modulation, which is the general aim of the present study. Therefore, we used an original auditory dot-probe paradigm involving simultaneously presented neutral and angry non-speech vocal utterances lateralized to either the left or the right auditory space, immediately followed by a short and lateralized single sine wave tone presented in the same space as the preceding angry voice (valid trial) or in the opposite space (invalid trial). Behavioral results showed the expected facilitation effect for target detection during valid trials, while functional data showed greater activation in the middle and posterior superior temporal sulci (STS) and in the medial frontal cortex for valid vs. invalid trials. Using reaction time facilitation [absolute value of the Z-score of valid-(invalid+neutral)] as a group covariate additionally revealed enhanced activity in the amygdalae, auditory thalamus, and visual cortex. Taken together, our results suggest the involvement of a large and distributed network of regions, among which the STS, thalamus, and amygdala are crucial for the decoding of angry prosody, as well as for orienting and maintaining attention within an auditory space that was previously primed by a vocal emotional event.
Collapse
Affiliation(s)
- Leonardo Ceravolo
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology, University of Geneva, Geneva, Switzerland; Department of Psychology, Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
| | - Sascha Frühholz
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology, University of Geneva, Geneva, Switzerland; Department of Psychology, Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Department of Psychology, University of Zurich, Zurich, Switzerland
| | - Didier Grandjean
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology, University of Geneva, Geneva, Switzerland; Department of Psychology, Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
| |
Collapse
|
38
|
Ceravolo L, Frühholz S, Grandjean D. Proximal vocal threat recruits the right voice-sensitive auditory cortex. Soc Cogn Affect Neurosci 2016; 11:793-802. [PMID: 26746180 DOI: 10.1093/scan/nsw004] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2015] [Accepted: 01/04/2016] [Indexed: 11/14/2022] Open
Abstract
The accurate estimation of the proximity of threat is important for biological survival and to assess relevant events of everyday life. We addressed the question of whether proximal as compared with distal vocal threat would lead to a perceptual advantage for the perceiver. Accordingly, we sought to highlight the neural mechanisms underlying the perception of proximal vs distal threatening vocal signals by the use of functional magnetic resonance imaging. Although we found that the inferior parietal and superior temporal cortex of human listeners generally decoded the spatial proximity of auditory vocalizations, activity in the right voice-sensitive auditory cortex was specifically enhanced for proximal aggressive relative to distal aggressive voices as compared with neutral voices. Our results shed new light on the processing of imminent danger signaled by proximal vocal threat and show the crucial involvement of the right mid voice-sensitive auditory cortex in such processing.
Affiliation(s)
- Leonardo Ceravolo: Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology, Swiss Center for Affective Sciences, University of Geneva, CH-1202 Geneva, Switzerland
- Sascha Frühholz: Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology, Swiss Center for Affective Sciences, University of Geneva, CH-1202 Geneva, Switzerland, and Department of Psychology, University of Zurich, 8050 Zurich, Switzerland
- Didier Grandjean: Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology, Swiss Center for Affective Sciences, University of Geneva, CH-1202 Geneva, Switzerland

39
Korb S, Frühholz S, Grandjean D. Reappraising the voices of wrath. Soc Cogn Affect Neurosci 2015; 10:1644-60. [PMID: 25964502 PMCID: PMC4666101 DOI: 10.1093/scan/nsv051] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2014] [Revised: 04/08/2015] [Accepted: 05/07/2015] [Indexed: 11/12/2022] Open
Abstract
Cognitive reappraisal recruits prefrontal and parietal cortical areas. Because past research has relied almost exclusively on visual stimuli to elicit emotions, it is unknown whether the same neural substrates underlie the reappraisal of emotions induced through other sensory modalities. Here, participants reappraised their emotions in order to increase or decrease their emotional response to angry prosody, or maintained their attention to it in a control condition. Neural activity was monitored with fMRI, and connectivity was investigated using psychophysiological interaction (PPI) analyses. A right-sided network encompassing the superior temporal gyrus, the superior temporal sulcus and the inferior frontal gyrus was found to underlie the processing of angry prosody. During reappraisal to increase the emotional response, the left superior frontal gyrus showed increased activity and became functionally coupled to right auditory cortices. During reappraisal to decrease the emotional response, a network that included the medial frontal gyrus and posterior parietal areas showed increased activation and greater functional connectivity with bilateral auditory regions. Activations pertaining to this network were more extended in the right hemisphere. Although directionality cannot be inferred from PPI analyses, the findings suggest a similar frontoparietal network for the reappraisal of visually and auditorily induced negative emotions.
Affiliation(s)
- Sebastian Korb: International School for Advanced Studies (SISSA), Trieste, Italy
- Sascha Frühholz: Swiss Center for Affective Sciences, Geneva, Switzerland, and Department of Psychology and Educational Sciences, University of Geneva, Switzerland
- Didier Grandjean: Swiss Center for Affective Sciences, Geneva, Switzerland, and Department of Psychology and Educational Sciences, University of Geneva, Switzerland

40
Neural Processing of Emotional Prosody across the Adult Lifespan. BIOMED RESEARCH INTERNATIONAL 2015; 2015:590216. [PMID: 26583118 PMCID: PMC4637042 DOI: 10.1155/2015/590216] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/16/2015] [Revised: 08/17/2015] [Accepted: 08/30/2015] [Indexed: 11/17/2022]
Abstract
Emotion recognition deficits emerge with increasing age, in particular a decline in the identification of sadness. However, little is known about age-related changes in emotion processing in sensory, affective, and executive brain areas. This functional magnetic resonance imaging (fMRI) study investigated the neural correlates of auditory prosody processing across the adult lifespan. Unattended detection of emotional prosody changes was assessed in 21 young (age range: 18-35 years), 19 middle-aged (age range: 36-55 years), and 15 older (age range: 56-75 years) adults. Pseudowords uttered with neutral prosody served as standards in an oddball paradigm with angry, sad, happy, and gender deviants (20% deviants in total). Changes in emotional prosody and voice gender elicited bilateral superior temporal gyri (STG) responses reflecting automatic encoding of prosody. In the right STG, responses to sad deviants decreased linearly with age, whereas responses to happy deviants exhibited a nonlinear relationship. In contrast to the behavioral data, no age-by-sex interaction emerged at the neural level. The age-related decline in the processing of emotional prosodic cues thus emerges already at an early, automatic stage of information processing at the level of the auditory cortex. However, top-down modulation may introduce an additional perceptual bias, for example towards positive stimuli, and may depend on context factors such as the listener's sex.
41
Péron J, Frühholz S, Ceravolo L, Grandjean D. Structural and functional connectivity of the subthalamic nucleus during vocal emotion decoding. Soc Cogn Affect Neurosci 2015; 11:349-56. [PMID: 26400857 DOI: 10.1093/scan/nsv118] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2015] [Accepted: 09/17/2015] [Indexed: 11/13/2022] Open
Abstract
Our understanding of the role played by the subthalamic nucleus (STN) in human emotion has recently advanced with STN deep brain stimulation, a neurosurgical treatment for Parkinson's disease and obsessive-compulsive disorder. However, the potential presence of several confounds related to pathological models raises the question of how much they affect the relevance of observations regarding the physiological function of the STN itself. This underscores the crucial importance of obtaining evidence from healthy participants. In this study, we tested the structural and functional connectivity between the STN and other brain regions related to vocal emotion in a healthy population by combining diffusion tensor imaging and psychophysiological interaction analysis from a high-resolution functional magnetic resonance imaging study. As expected, we showed that the STN is functionally connected to the structures involved in emotional prosody decoding, notably the orbitofrontal cortex, inferior frontal gyrus, auditory cortex, pallidum and amygdala. These functional results were corroborated by probabilistic fiber tracking, which revealed that the left STN is structurally connected to the amygdala and the orbitofrontal cortex. These results confirm, in healthy participants, the role played by the STN in human emotion and its structural and functional connectivity with the brain network involved in vocal emotions.
Affiliation(s)
- Julie Péron: Neuroscience of Emotion and Affective Dynamics laboratory, Department of Psychology and Swiss Centre for Affective Sciences, Campus Biotech, University of Geneva, Switzerland
- Sascha Frühholz: Neuroscience of Emotion and Affective Dynamics laboratory, Department of Psychology and Swiss Centre for Affective Sciences, Campus Biotech, University of Geneva, Switzerland
- Leonardo Ceravolo: Neuroscience of Emotion and Affective Dynamics laboratory, Department of Psychology and Swiss Centre for Affective Sciences, Campus Biotech, University of Geneva, Switzerland
- Didier Grandjean: Neuroscience of Emotion and Affective Dynamics laboratory, Department of Psychology and Swiss Centre for Affective Sciences, Campus Biotech, University of Geneva, Switzerland

42
Abstract
Individuals with schizophrenia exhibit impaired social cognition, which manifests as difficulties in identifying emotions, feeling connected to others, inferring people's thoughts and reacting emotionally to others. These social cognitive impairments interfere with social connections and are strong determinants of the degree of impaired daily functioning in such individuals. Here, we review recent findings from the fields of social cognition and social neuroscience and identify the social processes that are impaired in schizophrenia. We also consider empathy as an example of a complex social cognitive function that integrates several social processes and is impaired in schizophrenia. This information may guide interventions to improve social cognition in patients with this disorder.
43
Asaridou SS, Takashima A, Dediu D, Hagoort P, McQueen JM. Repetition Suppression in the Left Inferior Frontal Gyrus Predicts Tone Learning Performance. Cereb Cortex 2015; 26:2728-42. [PMID: 26113631 DOI: 10.1093/cercor/bhv126] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
Do individuals differ in how efficiently they process non-native sounds? To what extent do these differences relate to individual variability in sound-learning aptitude? We addressed these questions by assessing the sound-learning abilities of Dutch native speakers as they were trained on non-native tone contrasts. We used fMRI repetition suppression to the non-native tones to measure participants' neuronal processing efficiency before and after training. Although all participants improved in tone identification with training, there was large individual variability in learning performance. A repetition suppression effect to tone was found in the bilateral inferior frontal gyri (IFGs) before training. No whole-brain effect was found after training; a region-of-interest analysis, however, showed that, after training, repetition suppression to tone in the left IFG correlated positively with learning. That is, individuals who were better in learning the non-native tones showed larger repetition suppression in this area. Crucially, this was true even before training. These findings add to existing evidence that the left IFG plays an important role in sound learning and indicate that individual differences in learning aptitude stem from differences in the neuronal efficiency with which non-native sounds are processed.
Affiliation(s)
- Salomi S Asaridou: Neurobiology of Language, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour
- Atsuko Takashima: Donders Institute for Brain, Cognition and Behaviour; Behavioural Science Institute, Radboud University Nijmegen, Nijmegen, The Netherlands
- Dan Dediu: Neurobiology of Language, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour
- Peter Hagoort: Neurobiology of Language, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour
- James M McQueen: Neurobiology of Language, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour; Behavioural Science Institute, Radboud University Nijmegen, Nijmegen, The Netherlands

44
Belyk M, Brown S. Pitch underlies activation of the vocal system during affective vocalization. Soc Cogn Affect Neurosci 2015; 11:1078-88. [PMID: 26078385 DOI: 10.1093/scan/nsv074] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2014] [Accepted: 06/04/2015] [Indexed: 11/12/2022] Open
Abstract
Affective prosody is that aspect of speech that conveys a speaker's emotional state through modulations in various vocal parameters, most prominently pitch. While a large body of research implicates the cingulate vocalization area in controlling affective vocalizations in monkeys, no systematic test of functional homology for this area has yet been reported in humans. In this study, we used functional magnetic resonance imaging to compare brain activations when subjects produced affective vocalizations in the form of exclamations vs non-affective vocalizations with similar pitch contours. We also examined the perception of affective vocalizations by having participants make judgments about either the emotions being conveyed by recorded affective vocalizations or the pitch contours of the same vocalizations. Production of affective vocalizations and matched pitch contours activated a highly overlapping set of brain areas, including the larynx-phonation area of the primary motor cortex and a region of the anterior cingulate cortex that is consistent with the macro-anatomical position of the cingulate vocalization area. This overlap contradicts the dominant view that these areas form two distinct vocal pathways with dissociable functions. Instead, we propose that these brain areas are nodes in a single vocal network, with an emphasis on pitch modulation as a vehicle for affective expression.
Affiliation(s)
- Michel Belyk: Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada
- Steven Brown: Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada

45
Gainotti G. Is the difference between right and left ATLs due to the distinction between general and social cognition or between verbal and non-verbal representations? Neurosci Biobehav Rev 2015; 51:296-312. [DOI: 10.1016/j.neubiorev.2015.02.004] [Citation(s) in RCA: 53] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2014] [Revised: 12/24/2014] [Accepted: 02/07/2015] [Indexed: 01/16/2023]
46
Park M, Gutyrchik E, Welker L, Carl P, Pöppel E, Zaytseva Y, Meindl T, Blautzik J, Reiser M, Bao Y. Sadness is unique: neural processing of emotions in speech prosody in musicians and non-musicians. Front Hum Neurosci 2015; 8:1049. [PMID: 25688196 PMCID: PMC4311618 DOI: 10.3389/fnhum.2014.01049] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2014] [Accepted: 12/15/2014] [Indexed: 01/30/2023] Open
Abstract
Musical training has been shown to have positive effects on several aspects of speech processing; however, the effects of musical training on the neural processing of speech prosody conveying distinct emotions are yet to be fully understood. We used functional magnetic resonance imaging (fMRI) to investigate whether the neural responses to speech prosody conveying happiness, sadness, and fear differ between musicians and non-musicians. Differences in the processing of emotional speech prosody between the two groups were only observed when sadness was expressed. Musicians showed increased activation in the middle frontal gyrus, the anterior medial prefrontal cortex, the posterior cingulate cortex and the retrosplenial cortex. Our results suggest an increased sensitivity of emotional processing in musicians with respect to sadness expressed in speech, possibly reflecting empathic processes.
Affiliation(s)
- Mona Park: Institute of Medical Psychology, Ludwig-Maximilians-Universität Munich, Germany; Human Science Center, Ludwig-Maximilians-Universität Munich, Germany; Parmenides Center for Art and Science, Pullach, Germany
- Evgeny Gutyrchik: Institute of Medical Psychology, Ludwig-Maximilians-Universität Munich, Germany; Human Science Center, Ludwig-Maximilians-Universität Munich, Germany; Parmenides Center for Art and Science, Pullach, Germany
- Lorenz Welker: Human Science Center, Ludwig-Maximilians-Universität Munich, Germany; Institute of Musicology, Ludwig-Maximilians-Universität Munich, Germany
- Petra Carl: Institute of Medical Psychology, Ludwig-Maximilians-Universität Munich, Germany; Human Science Center, Ludwig-Maximilians-Universität Munich, Germany
- Ernst Pöppel: Institute of Medical Psychology, Ludwig-Maximilians-Universität Munich, Germany; Human Science Center, Ludwig-Maximilians-Universität Munich, Germany; Parmenides Center for Art and Science, Pullach, Germany; Department of Psychology and Key Laboratory of Machine Perception (MoE), Peking University, Beijing, China; Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Yuliya Zaytseva: Institute of Medical Psychology, Ludwig-Maximilians-Universität Munich, Germany; Human Science Center, Ludwig-Maximilians-Universität Munich, Germany; Parmenides Center for Art and Science, Pullach, Germany; Moscow Research Institute of Psychiatry, Moscow, Russia; Prague Psychiatric Centre, 3rd Faculty of Medicine, Charles University in Prague, Prague, Czech Republic
- Thomas Meindl: Institute of Clinical Radiology, Ludwig-Maximilians-Universität Munich, Germany
- Janusch Blautzik: Institute of Clinical Radiology, Ludwig-Maximilians-Universität Munich, Germany
- Maximilian Reiser: Institute of Clinical Radiology, Ludwig-Maximilians-Universität Munich, Germany
- Yan Bao: Institute of Medical Psychology, Ludwig-Maximilians-Universität Munich, Germany; Human Science Center, Ludwig-Maximilians-Universität Munich, Germany; Parmenides Center for Art and Science, Pullach, Germany; Department of Psychology and Key Laboratory of Machine Perception (MoE), Peking University, Beijing, China

47
Pinheiro AP, Vasconcelos M, Dias M, Arrais N, Gonçalves ÓF. The music of language: an ERP investigation of the effects of musical training on emotional prosody processing. BRAIN AND LANGUAGE 2015; 140:24-34. [PMID: 25461917 DOI: 10.1016/j.bandl.2014.10.009] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/04/2014] [Revised: 09/30/2014] [Accepted: 10/22/2014] [Indexed: 06/04/2023]
Abstract
Recent studies have demonstrated positive effects of musical training on the perception of vocally expressed emotion. This study investigated the effects of musical training on event-related potential (ERP) correlates of emotional prosody processing. Fourteen musicians and fourteen control subjects listened to 228 sentences with neutral semantic content, differing in prosody (one third neutral, one third happy, and one third angry intonation), presented with intelligible semantic content (semantic content condition, SCC) or unintelligible semantic content (pure prosody condition, PPC). Reduced P50 amplitude was found in musicians. A difference between the SCC and PPC conditions was found in P50 and N100 amplitude in non-musicians only, and in P200 amplitude in musicians only. Furthermore, musicians were more accurate in recognizing angry prosody in PPC sentences. These findings suggest that the auditory expertise acquired through extensive musical training may affect different stages of vocal emotional processing.
Affiliation(s)
- Ana P Pinheiro: Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal; Cognitive Neuroscience Lab, Department of Psychiatry, Harvard Medical School, Boston, MA, USA
- Margarida Vasconcelos: Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Marcelo Dias: Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Nuno Arrais: Music Department, Institute of Arts and Human Sciences, University of Minho, Braga, Portugal
- Óscar F Gonçalves: Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal; Spaulding Center of Neuromodulation, Department of Physical Medicine & Rehabilitation, Spaulding Rehabilitation Hospital and Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA

48
Abstract
Accents provide information about a speaker's geographical, socio-economic, and ethnic background. Research in applied psychology and sociolinguistics suggests that we generally prefer our own accent to other varieties of our native language and attribute more positive traits to it. Despite the widespread influence of accents on social interactions and on educational and work settings, the neural underpinnings of this social bias toward our own accent, and what may drive it, are unexplored. We measured brain activity while participants from two different geographical backgrounds listened passively to three English accent types embedded in an adaptation design. Cerebral activity in several regions, including the bilateral amygdalae, revealed a significant interaction between the participants' own accent and the accent they listened to: whereas repetition of own accents elicited an enhanced neural response, repetition of the other group's accent resulted in reduced responses classically associated with adaptation. Our findings suggest that increased social relevance of, or greater emotional sensitivity to, in-group accents may underlie the own-accent bias. Our results provide a neural marker for the bias associated with accents and show, for the first time, that the neural response to speech is partly shaped by the geographical background of the listener.
Affiliation(s)
- Pascal Belin: Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK; International Laboratories for Brain, Music and Sound Research, Université de Montréal & McGill University, Montréal, Canada; Institut des Neurosciences de La Timone, UMR 7289, CNRS & Aix-Marseille Université, Marseille, France
- D Robert Ladd: School of Philosophy, Psychology and Language Sciences, University of Edinburgh, UK

49
Sensory contribution to vocal emotion deficit in Parkinson's disease after subthalamic stimulation. Cortex 2014; 63:172-83. [PMID: 25282055 DOI: 10.1016/j.cortex.2014.08.023] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2014] [Revised: 07/20/2014] [Accepted: 08/23/2014] [Indexed: 11/21/2022]
Abstract
Subthalamic nucleus (STN) deep brain stimulation in Parkinson's disease induces modifications in the recognition of emotion from voices (emotional prosody). Nevertheless, the underlying mechanisms are still poorly understood, and the role of acoustic features in these deficits has yet to be elucidated. Our aim was to identify the influence of acoustic features on changes in emotional prosody recognition following STN stimulation in Parkinson's disease. To this end, we analysed the vocal emotion recognition performance of pre- versus post-operative patient groups, as well as of matched controls, entering the acoustic features of the stimuli into our statistical models. Analyses revealed that post-operative patients' biased ratings on the Fear scale when listening to happy stimuli were correlated with loudness, while their biased ratings on the Sadness scale when listening to happiness were correlated with fundamental frequency (F0). Furthermore, post-operative patients' disturbed ratings on the Happiness scale when listening to sadness were correlated with F0. These results suggest that inadequate use of acoustic features following subthalamic stimulation has a significant impact on emotional prosody recognition in patients with Parkinson's disease, affecting the extraction and integration of acoustic cues during emotion perception.
50
Jacob H, Brück C, Plewnia C, Wildgruber D. Cerebral processing of prosodic emotional signals: evaluation of a network model using rTMS. PLoS One 2014; 9:e105509. [PMID: 25171220 PMCID: PMC4149421 DOI: 10.1371/journal.pone.0105509] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2014] [Accepted: 07/24/2014] [Indexed: 11/20/2022] Open
Abstract
A great number of functional imaging studies have contributed to the development of a cerebral network model of prosody processing in the brain. According to this model, the processing of prosodic emotional signals is divided into three main steps, each related to different brain areas. The present study sought to evaluate parts of this model by applying low-frequency repetitive transcranial magnetic stimulation (rTMS) over two important brain regions identified by the model: the superior temporal cortex (Experiment 1) and the inferior frontal cortex (Experiment 2). The aim of both experiments was to reduce cortical activity in the respective brain areas and to evaluate whether these reductions lead to measurable behavioral effects during prosody processing. However, the results revealed no rTMS effects on the acquired behavioral data. Possible explanations for these findings are discussed in the paper.
Affiliation(s)
- Heike Jacob: Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany; Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Carolin Brück: Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany
- Christian Plewnia: Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany
- Dirk Wildgruber: Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany