1
Lo CW, Meyer L. Chunk boundaries disrupt dependency processing in an AG: Reconciling incremental processing and discrete sampling. PLoS One 2024; 19:e0305333. [PMID: 38889141] [PMCID: PMC11185458] [DOI: 10.1371/journal.pone.0305333]
Abstract
Language is rooted in our ability to compose: We link words together, fusing their meanings. Links are not limited to neighboring words but often span intervening words. The ability to process these non-adjacent dependencies (NADs) conflicts with the brain's sampling of speech: We consume speech in chunks that are limited in time, containing only a limited number of words. It is unknown how we link together words that belong to separate chunks. Here, we report that we cannot, or at least not well. In our electroencephalography (EEG) study, 37 human listeners learned chunks and dependencies from an artificial grammar (AG) composed of syllables. The multi-syllable chunks to be learned were equal-sized, allowing us to employ a frequency-tagging approach. On top of the chunks, the syllable streams contained NADs that were either confined to a single chunk or crossed a chunk boundary. Frequency analyses of the EEG revealed a spectral peak at the chunk rate, showing that participants learned the chunks. NADs that crossed boundaries were associated with smaller electrophysiological responses than within-chunk NADs. This shows that NADs are processed readily when they are confined to a single chunk, but less well when they cross a chunk boundary. Our findings help to reconcile the classical notion that language is processed incrementally with recent evidence for discrete perceptual sampling of speech. This has implications for language acquisition and processing, as well as for the general view of syntax in human language.
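The frequency-tagging logic this abstract relies on can be illustrated in a few lines: if chunks arrive at a fixed rate and listeners learn them, the EEG spectrum should show a peak at that rate. A minimal sketch with synthetic data (NumPy only; the 1.25 Hz chunk rate and all other parameters are invented for the example and are not taken from the study):

```python
import numpy as np

def frequency_tagging_spectrum(eeg, sfreq):
    """Trial-averaged amplitude spectrum of one EEG channel.

    eeg: array of shape (n_trials, n_samples); sfreq: sampling rate in Hz.
    Returns (freqs, amplitude); a peak at the chunk presentation rate
    indicates that the stream was grouped into chunks.
    """
    n = eeg.shape[1]
    demeaned = eeg - eeg.mean(axis=1, keepdims=True)   # drop per-trial DC
    spec = np.abs(np.fft.rfft(demeaned, axis=1))
    return np.fft.rfftfreq(n, 1.0 / sfreq), spec.mean(axis=0)

# Synthetic check: a 1.25 Hz "chunk-rate" oscillation buried in noise
rng = np.random.default_rng(0)
sfreq, dur = 250.0, 8.0
t = np.arange(int(sfreq * dur)) / sfreq
trials = np.sin(2 * np.pi * 1.25 * t) + rng.normal(0, 1, (40, t.size))
freqs, amp = frequency_tagging_spectrum(trials, sfreq)
peak = freqs[np.argmax(amp)]
```

With 8 s epochs the frequency resolution is 0.125 Hz, so the chunk rate falls exactly on a bin and the peak stands out even at this noise level.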
Affiliation(s)
- Chia-Wen Lo
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Lars Meyer
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- University Clinic Münster, Münster, Germany
2
Kao C, Zhang Y. Detecting Emotional Prosody in Real Words: Electrophysiological Evidence From a Modified Multifeature Oddball Paradigm. J Speech Lang Hear Res 2023; 66:2988-2998. [PMID: 37379567] [DOI: 10.1044/2023_jslhr-22-00652]
Abstract
PURPOSE Emotional voices convey important social cues that demand listeners' attention and timely processing. This event-related potential study investigated the feasibility of a multifeature oddball paradigm for examining adult listeners' neural responses to emotional prosody changes in nonrepeating, naturally spoken words. METHOD Thirty-three adult listeners completed the experiment by passively listening to words spoken in a neutral tone and three alternating emotions while watching a silent movie. Previous research has documented preattentive change-detection electrophysiological responses (e.g., mismatch negativity [MMN], P3a) to emotions carried by fixed syllables or words. Given that the MMN and P3a have also been shown to reflect the extraction of abstract regularities over repetitive acoustic patterns, this study employed a multifeature oddball paradigm to compare listeners' MMN and P3a to emotional prosody changes from neutral to angry, happy, and sad delivered with hundreds of nonrepeating words in a single recording session. RESULTS Both the MMN and P3a were successfully elicited by emotional prosodic change over the varying linguistic context. Angry prosody elicited the strongest MMN compared with happy and sad prosodies. Happy prosody elicited the strongest P3a at the centro-frontal electrodes, and angry prosody elicited the smallest P3a. CONCLUSIONS The results demonstrate that listeners were able to extract the acoustic patterns of each emotional prosody category over constantly changing spoken words. The findings confirm the feasibility of the multifeature oddball paradigm for investigating emotional speech processing beyond simple acoustic change detection, which may potentially be applied to pediatric and clinical populations.
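The core measurement behind oddball studies like this one is the deviant-minus-standard difference wave, summarized by its mean amplitude in a post-stimulus window. A minimal single-channel sketch (NumPy; the 100-250 ms window and the synthetic data are illustrative assumptions, not the study's parameters):

```python
import numpy as np

def mismatch_negativity(standard, deviant, times, window=(0.10, 0.25)):
    """Deviant-minus-standard difference wave for one channel, plus the
    mean amplitude of that wave inside a post-stimulus window (seconds).
    `standard` and `deviant` are (n_trials, n_samples) arrays."""
    diff = deviant.mean(axis=0) - standard.mean(axis=0)
    mask = (times >= window[0]) & (times <= window[1])
    return diff, float(diff[mask].mean())

# Synthetic demo: deviants carry a negative deflection around 170 ms
rng = np.random.default_rng(1)
times = np.arange(-0.1, 0.5, 0.002)           # 500 Hz sampling
bump = -2.0 * np.exp(-((times - 0.17) ** 2) / (2 * 0.03 ** 2))
standard = rng.normal(0, 0.5, (120, times.size))
deviant = bump + rng.normal(0, 0.5, (120, times.size))
diff_wave, mmn_amp = mismatch_negativity(standard, deviant, times)
```

Averaging over 120 trials suppresses the noise enough that the negative window amplitude is recovered reliably; real pipelines would add filtering, baseline correction, and artifact rejection first.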
Affiliation(s)
- Chieh Kao
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Twin Cities
- Center for Cognitive Sciences, University of Minnesota, Twin Cities
- Yang Zhang
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Twin Cities
- Masonic Institute for the Developing Brain, University of Minnesota, Twin Cities
3
Mauchand M, Pell MD. Listen to my feelings! How prosody and accent drive the empathic relevance of complaining speech. Neuropsychologia 2022; 175:108356. [PMID: 36037914] [DOI: 10.1016/j.neuropsychologia.2022.108356]
Abstract
Interpersonal communication often involves sharing our feelings with others; complaining, for example, aims to elicit empathy in listeners by vocally expressing a speaker's suffering. Despite the growing neuroscientific interest in the phenomenon of empathy, few studies have investigated how it is elicited in real time by vocal signals (prosody), and how this might be affected by interpersonal factors, such as a speaker's cultural background (based on their accent). To investigate the neural processes at play when hearing spoken complaints, twenty-six French participants listened to complaining and neutral utterances produced by in-group French and out-group Québécois (i.e., French-Canadian) speakers. Participants rated how hurt the speaker felt while their cerebral activity was monitored with electroencephalography (EEG). Principal Component Analysis of Event-Related Potentials (ERPs) taken at utterance onset showed culture-dependent time courses of emotive prosody processing. The high motivational relevance of in-group complaints increased the P200 response compared to all other utterance types; in contrast, out-group complaints selectively elicited an early posterior negativity in the same time window, followed by an increased N400 (due to ongoing effort to derive affective meaning from out-group voices). In-group neutral utterances evoked a late negativity, which may reflect re-analysis of emotively less salient but culturally relevant in-group speech. The results highlight the time course of the neurocognitive responses that contribute to emotive speech processing for complaints, establishing the critical role of prosody as well as social-relational factors (i.e., cultural identity) in how listeners are likely to "empathize" with a speaker.
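Temporal PCA of ERPs, as used in this study, treats each waveform as an observation and time points as variables, so components capture recurring deflections such as a P200 or N400. A generic SVD-based sketch (NumPy; the synthetic data and component count are illustrative, not the authors' pipeline):

```python
import numpy as np

def erp_pca(erps, n_components=3):
    """Temporal PCA: rows of `erps` are waveforms (e.g., one per
    subject/condition/electrode), columns are time points. Returns the
    component time courses (loadings) and each waveform's score on them."""
    centered = erps - erps.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    loadings = vt[:n_components]          # orthonormal time courses
    scores = centered @ loadings.T        # amplitude of each component
    return loadings, scores

# Synthetic demo: one shared deflection with varying amplitude per waveform
rng = np.random.default_rng(2)
t = np.linspace(0, 0.6, 300)
component = np.exp(-((t - 0.2) ** 2) / (2 * 0.04 ** 2))   # a P200-like bump
weights = rng.normal(1.0, 0.5, 60)
data = np.outer(weights, component) + rng.normal(0, 0.05, (60, t.size))
loadings, scores = erp_pca(data)
r = np.corrcoef(loadings[0], component)[0, 1]
```

The first component recovers the shared deflection up to sign; condition effects would then be tested on the `scores` rather than on raw voltages.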
Affiliation(s)
- Maël Mauchand
- McGill University, School of Communication Sciences and Disorders, Montréal, Québec, Canada.
- Marc D Pell
- McGill University, School of Communication Sciences and Disorders, Montréal, Québec, Canada
4
Durfee AZ, Sheppard SM, Blake ML, Hillis AE. Lesion loci of impaired affective prosody: A systematic review of evidence from stroke. Brain Cogn 2021; 152:105759. [PMID: 34118500] [PMCID: PMC8324538] [DOI: 10.1016/j.bandc.2021.105759]
Abstract
Affective prosody, or the changes in rate, rhythm, pitch, and loudness that convey emotion, has long been implicated as a function of the right hemisphere (RH), yet there is a dearth of literature identifying the specific neural regions associated with its processing. The current systematic review aimed to evaluate the evidence on affective prosody localization in the RH. One hundred and ninety articles from 1970 to February 2020 investigating affective prosody comprehension and production in patients with focal brain damage were identified via database searches. Eleven articles met inclusion criteria, passed quality reviews, and were analyzed for affective prosody localization. Acute, subacute, and chronic lesions demonstrated similar profile characteristics. Damage to localized right antero-superior (i.e., dorsal stream) regions contributed to affective prosody production impairments, whereas damage to more postero-lateral (i.e., ventral stream) regions resulted in affective prosody comprehension deficits. This review provides support for the view that distinct RH regions are vital for affective prosody comprehension and production, in line with literature reporting RH activation during affective prosody processing in healthy adults. The impact of study design on the resulting interpretations is discussed.
Affiliation(s)
- Alexandra Zezinka Durfee
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States.
- Shannon M Sheppard
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States; Department of Communication Sciences and Disorders, Chapman University Crean College of Health and Behavioral Sciences, Irvine, CA 92618, United States
- Margaret L Blake
- Department of Communication Sciences and Disorders, University of Houston College of Liberal Arts and Social Sciences, Houston, TX 77204, United States
- Argye E Hillis
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States; Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States; Department of Cognitive Science, Krieger School of Arts and Sciences, Johns Hopkins University, Baltimore, MD 21218, United States
5
Chan HL, Low I, Chen LF, Chen YS, Chu IT, Hsieh JC. A novel beamformer-based imaging of phase-amplitude coupling (BIPAC) unveiling the inter-regional connectivity of emotional prosody processing in women with primary dysmenorrhea. J Neural Eng 2021; 18. [PMID: 33691295] [DOI: 10.1088/1741-2552/abed83]
Abstract
Objective. Neural communication, or the interaction of brain regions, plays a key role in the formation of functional neural networks. One type of neural communication can be measured in the form of phase-amplitude coupling (PAC), which is the coupling between the phase of low-frequency oscillations and the amplitude of high-frequency oscillations. This paper presents a beamformer-based imaging method, beamformer-based imaging of PAC (BIPAC), to quantify the strength of PAC between a seed region and other brain regions. Approach. A dipole is used to model the ensemble of neural activity within a group of nearby neurons and represents a mixture of multiple source components of cortical activity. From the ensemble activity at each brain location, the source component with the strongest coupling to the seed activity is extracted, while unrelated components are suppressed to enhance the sensitivity of coupled-source estimation. Main results. In evaluations using simulated data sets, BIPAC proved advantageous with regard to estimation accuracy in source localization, orientation, and coupling strength. BIPAC was also applied to the analysis of magnetoencephalographic signals recorded from women with primary dysmenorrhea in an implicit emotional prosody experiment. In response to negative emotional prosody, auditory areas revealed strong PAC with the ventral auditory stream and occipitoparietal areas in the theta-gamma and alpha-gamma bands, which may respectively indicate the recruitment of auditory sensory memory and attention reorientation. Moreover, patients with more severe pain experience appeared to have stronger coupling between auditory areas and temporoparietal regions. Significance. Our findings indicate that the implicit processing of emotional prosody is altered by menstrual pain experience. The proposed BIPAC is feasible and applicable to imaging inter-regional connectivity based on cross-frequency coupling estimates. The experimental results also demonstrate that BIPAC is capable of revealing autonomous brain processing and neurodynamics, which are more subtle than active, attended task-driven processing.
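The quantity BIPAC estimates, phase-amplitude coupling, is commonly summarized for a single channel by a mean-vector-length index in the style of Canolty et al.: take the phase of the low band, the amplitude envelope of the high band, and measure how non-uniformly the envelope is distributed over phase. A minimal sensor-level sketch (SciPy/NumPy; the band limits and synthetic signal are illustrative, and this is not the beamformer source-imaging method of the paper):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def pac_mvl(x, fs, phase_band=(4, 8), amp_band=(30, 60)):
    """Mean vector length of the high-band envelope over low-band phase,
    normalized by the mean envelope; values near 0 mean no coupling."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return float(np.abs(np.mean(amp * np.exp(1j * phase))) / amp.mean())

# Synthetic demo: gamma amplitude locked to theta phase vs. constant gamma
fs = 500.0
t = np.arange(0, 20, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
coupled = theta + (1 + theta) * 0.5 * np.sin(2 * np.pi * 40 * t)
uncoupled = theta + 0.5 * np.sin(2 * np.pi * 40 * t)
pac_c = pac_mvl(coupled, fs)
pac_u = pac_mvl(uncoupled, fs)
```

For the phase-locked signal the index comes out large, while the constant-envelope signal yields a value near zero; surrogate (time-shifted) data would normally be used to assess significance.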
Affiliation(s)
- Hui-Ling Chan
- Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Intan Low
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan
- Li-Fen Chen
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan; Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Yong-Sheng Chen
- Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Ian-Ting Chu
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Jen-Chuen Hsieh
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan
6
Sonderfeld M, Mathiak K, Häring GS, Schmidt S, Habel U, Gur R, Klasen M. Supramodal neural networks support top-down processing of social signals. Hum Brain Mapp 2020; 42:676-689. [PMID: 33073911] [PMCID: PMC7814753] [DOI: 10.1002/hbm.25252]
Abstract
The perception of facial and vocal stimuli is driven by sensory input and cognitive top-down influences. Important top-down influences are attentional focus and supramodal social memory representations. The present study investigated the neural networks underlying these top-down processes and their role in social stimulus classification. In a neuroimaging study with 45 healthy participants, we employed a social adaptation of the Implicit Association Test. Attentional focus was modified via the classification task, which compared two domains of social perception (emotion and gender) using the exact same stimulus set. Supramodal memory representations were addressed via congruency of the target categories for the classification of auditory and visual social stimuli (voices and faces). Functional magnetic resonance imaging identified attention-specific and supramodal networks. Emotion classification networks included bilateral anterior insula, pre-supplementary motor area, and right inferior frontal gyrus. They were purely attention-driven and independent of stimulus modality or congruency of the target concepts. No neural contribution of supramodal memory representations could be revealed for emotion classification. In contrast, gender classification relied on supramodal memory representations in rostral anterior cingulate and ventromedial prefrontal cortices. In summary, different domains of social perception involve different top-down processes, which take place in clearly distinguishable neural networks.
Affiliation(s)
- Melina Sonderfeld
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany; JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Klaus Mathiak
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany; JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Gianna S Häring
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany; JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Sarah Schmidt
- Life & Brain - Institute for Experimental Epileptology and Cognition Research, Bonn, Germany
- Ute Habel
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany; JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Raquel Gur
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Martin Klasen
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany; JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany; Interdisciplinary Training Centre for Medical Education and Patient Safety - AIXTRA, Medical Faculty, RWTH Aachen University, Aachen, Germany
7
Sun Q, Fang Y, Peng X, Shi Y, Chen J, Wang L, Tan L. Hyper-Activated Brain Resting-State Network and Mismatch Negativity Deficit in Schizophrenia With Auditory Verbal Hallucination Revealed by an Event-Related Potential Evidence. Front Psychiatry 2020; 11:765. [PMID: 32903707] [PMCID: PMC7438905] [DOI: 10.3389/fpsyt.2020.00765]
Abstract
Schizophrenia is a severe mental disorder with an unclear mechanism and high heterogeneity. Studying auditory verbal hallucination (AVH) may help in understanding schizophrenia from the perspective of individual symptoms. This study aimed to investigate the activity of resting-state networks (RSN) in the electroencephalogram (EEG) and the mismatch negativity (MMN) in the task-related state of schizophrenia patients with AVH. We recruited 30 schizophrenia patients who had been unmedicated for more than 4 weeks (15 AVH patients and 15 non-AVH patients) and 15 healthy controls. We recorded the participants' resting-state EEG for 7 min and event-related potential (ERP) data under an auditory oddball paradigm. In the resting-state EEG network, AVH patients exhibited a higher clustering coefficient than non-AVH patients and healthy controls in the delta and beta bands, and a shorter characteristic path length than non-AVH patients and healthy controls in all frequency bands. In the ERP data, AVH patients showed a lower MMN amplitude than healthy controls (p = 0.017) and non-AVH patients (p = 0.033). Moreover, in AVH patients, MMN amplitude was positively correlated with the clustering coefficient and negatively correlated with the characteristic path length in the delta, theta, beta, and gamma bands. Our results indicate that AVH patients show hyper-activity in the resting state and may have impaired higher-order auditory expectations in the task-related state compared with healthy controls and non-AVH patients. It therefore seems plausible that the formation of AVH occupies brain resources and competes with external auditory stimuli for them.
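The two resting-state network measures reported here, the clustering coefficient and the characteristic path length, are standard graph metrics computed on a (typically thresholded) connectivity matrix. A small self-contained implementation for a binary undirected graph (pure NumPy/stdlib; the example graphs are illustrative, not EEG-derived):

```python
import numpy as np
from collections import deque

def graph_metrics(adj):
    """Average clustering coefficient and characteristic path length of a
    binary undirected graph given as a boolean adjacency matrix. Path
    length is averaged over reachable pairs via breadth-first search."""
    n = adj.shape[0]
    cc = []
    for i in range(n):
        nb = np.flatnonzero(adj[i])
        k = nb.size
        if k < 2:
            cc.append(0.0)
            continue
        links = adj[np.ix_(nb, nb)].sum() / 2        # edges among neighbours
        cc.append(links / (k * (k - 1) / 2))
    dists = []
    for s in range(n):
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(adj[u]):
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(int(v))
        dists += [x for node, x in d.items() if node != s]
    return float(np.mean(cc)), float(np.mean(dists))

# Complete graph K4: fully clustered, every pair one step apart
k4 = np.ones((4, 4), bool)
np.fill_diagonal(k4, False)
# 4-cycle C4: no triangles, mean distance (1 + 2 + 1) / 3
c4 = np.zeros((4, 4), bool)
for i in range(4):
    c4[i, (i + 1) % 4] = c4[(i + 1) % 4, i] = True
cc_k4, pl_k4 = graph_metrics(k4)
cc_c4, pl_c4 = graph_metrics(c4)
```

A higher clustering coefficient with a shorter path length, as reported for the AVH group, corresponds to a more densely and efficiently interconnected network.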
Affiliation(s)
- Qiaoling Sun
- Department of Psychiatry, Mental Health Institute of the Second Xiangya Hospital, Central South University, Changsha, China
- Yehua Fang
- Department of Clinical Psychology, Zhuzhou Central Hospital, Zhuzhou, China
- Xuemei Peng
- Department of Psychology, Xiangtan Central Hospital, Xiangtan, China
- Yongyan Shi
- Department of Psychiatry, Mental Health Institute of the Second Xiangya Hospital, Central South University, Changsha, China
- Jinhong Chen
- Department of Sleeping Disorders & Neurosis, Brain Hospital of Hunan Province, Changsha, China
- Lifeng Wang
- Department of Clinical Psychology, The Third Xiangya Hospital of Central South University, Changsha, China
- Liwen Tan
- Department of Psychiatry, Mental Health Institute of the Second Xiangya Hospital, Central South University, Changsha, China
8
Chen C, Chan CW, Cheng Y. Test-Retest Reliability of Mismatch Negativity (MMN) to Emotional Voices. Front Hum Neurosci 2018; 12:453. [PMID: 30498437] [PMCID: PMC6249375] [DOI: 10.3389/fnhum.2018.00453]
Abstract
A voice from one's own species conveys indispensable social and affective signals that are unique from both phylogenetic and ontogenetic standpoints. Beyond low-level acoustic features, emotional voices engage a processing chain that proceeds from the auditory pathway to brain structures implicated in cognition and emotion. Using a passive auditory oddball paradigm with emotional voices, this study investigated the test-retest reliability of the emotional mismatch negativity (MMN), showing that positively (happily) and negatively (angrily) spoken deviant syllables, compared with neutral standards, trigger an MMN reflecting the automatic discrimination of emotional salience. The neurophysiological estimates of the MMN to positive and negative deviants appear to be highly reproducible, irrespective of the subjects' attentional disposition: whether they watch a silent movie or perform a working memory task. Moreover, a negativity bias was evident: threatening vocalizations consistently induced larger MMN amplitudes than positive ones, regardless of the day and the time of day. The present findings provide evidence that the emotional MMN offers a stable platform for detecting subtle emotional shifts.
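Test-retest reliability of an ERP measure such as the emotional MMN is typically quantified with an intraclass correlation. Below is a compact ICC(2,1) (two-way random effects, absolute agreement, single measurement, in the Shrout and Fleiss convention) over a subjects-by-sessions matrix; the formula is standard, but this is a sketch rather than the authors' analysis code, and the example amplitudes are invented:

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1) for an (n_subjects, k_sessions) score matrix."""
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()   # subjects
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()   # sessions
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                                    # mean squares
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return float((msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n))

# Perfectly reproduced scores give ICC = 1; session noise lowers it
rng = np.random.default_rng(3)
amps = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]])
noisy = amps + rng.normal(0, 0.1, amps.shape)
icc_perfect = icc_2_1(amps)
icc_noisy = icc_2_1(noisy)
```

Because ICC(2,1) penalizes both random error and systematic session differences, it is the usual choice when asking whether a single measurement would agree across days.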
Affiliation(s)
- Chenyi Chen
- Department of Physical Medicine and Rehabilitation, National Yang-Ming University Hospital, Yilan, Taiwan; Graduate Institute of Injury Prevention and Control, Taipei Medical University, Taipei, Taiwan; Institute of Humanities in Medicine, Taipei Medical University, Taipei, Taiwan; Research Center of Brain and Consciousness, Shuang Ho Hospital, Taipei Medical University, Taipei, Taiwan
- Chia-Wen Chan
- Graduate Institute of Injury Prevention and Control, Taipei Medical University, Taipei, Taiwan
- Yawei Cheng
- Department of Physical Medicine and Rehabilitation, National Yang-Ming University Hospital, Yilan, Taiwan; Institute of Neuroscience and Brain Research Center, National Yang-Ming University, Taipei, Taiwan; Department of Research and Education, Taipei City Hospital, Taipei, Taiwan
9
Charpentier J, Kovarski K, Houy-Durand E, Malvy J, Saby A, Bonnet-Brilhault F, Latinus M, Gomot M. Emotional prosodic change detection in autism spectrum disorder: an electrophysiological investigation in children and adults. J Neurodev Disord 2018; 10:28. [PMID: 30227832] [PMCID: PMC6145332] [DOI: 10.1186/s11689-018-9246-9]
Abstract
Background Autism spectrum disorder (ASD) is characterized by atypical behaviors in social environments and in reaction to changing events. While this dyad of symptoms is at the core of the pathology, along with atypical sensory behaviors, most studies have investigated only one dimension. A focus on the sameness dimension has shown that intolerance to change is related to atypical pre-attentional detection of irregularity. In the present study, we addressed the same process in response to emotional change, in order to evaluate the interplay between alterations of change detection and socio-emotional processing in children and adults with autism. Methods Brain responses to neutral and emotional prosodic deviancies (mismatch negativity (MMN) and P3a, reflecting change detection and orientation of attention toward change, respectively) were recorded in children and adults with autism and in controls. Comparison of neutral and emotional conditions allowed us to distinguish between general deviancy and emotional deviancy effects. Moreover, brain responses to the same neutral and emotional stimuli were recorded when they were not deviants, to evaluate the sensory processing of these vocal stimuli. Results In controls, change detection was modulated by prosody: in children, this was characterized by a lateralization of the emotional MMN to the right hemisphere, and in adults, by an earlier MMN for emotional than for neutral deviancy. In ASD, overall atypical change detection was observed, with an earlier MMN and a larger P3a compared to controls, suggesting an unusual pre-attentional orientation toward any change in the auditory environment. Moreover, in children with autism, deviancy detection was associated with reduced MMN amplitude. In addition, in children with autism, contrary to adults with autism, the MMN was not modulated by prosody, and the sensory processing of both neutral and emotional vocal stimuli appeared atypical. Conclusions Overall, change detection remains altered in people with autism. However, differences between children and adults with ASD suggest a trend toward normalization of vocal processing and of the automatic detection of emotional deviancy with age.
Affiliation(s)
- K Kovarski
- UMR1253, INSERM, Université de Tours, Tours, France
- E Houy-Durand
- UMR1253, INSERM, Université de Tours, Tours, France; Centre Universitaire de Pédopsychiatrie, CHRU de Tours, Tours, France
- J Malvy
- UMR1253, INSERM, Université de Tours, Tours, France; Centre Universitaire de Pédopsychiatrie, CHRU de Tours, Tours, France
- A Saby
- Centre Universitaire de Pédopsychiatrie, CHRU de Tours, Tours, France
- F Bonnet-Brilhault
- UMR1253, INSERM, Université de Tours, Tours, France; Centre Universitaire de Pédopsychiatrie, CHRU de Tours, Tours, France
- M Latinus
- UMR1253, INSERM, Université de Tours, Tours, France
- M Gomot
- UMR1253, INSERM, Université de Tours, Tours, France
10
Brain mechanisms involved in angry prosody change detection in school-age children and adults, revealed by electrophysiology. Cogn Affect Behav Neurosci 2018; 18:748-763. [DOI: 10.3758/s13415-018-0602-8]
11
Spengler FB, Scheele D, Marsh N, Kofferath C, Flach A, Schwarz S, Stoffel-Wagner B, Maier W, Hurlemann R. Oxytocin facilitates reciprocity in social communication. Soc Cogn Affect Neurosci 2018; 12:1325-1333. [PMID: 28444316] [PMCID: PMC5597889] [DOI: 10.1093/scan/nsx061]
Abstract
Synchrony in social groups may confer significant evolutionary advantages by improving group cohesion and social interaction. However, the neurobiological mechanisms translating social synchrony into refined social information transmission between interacting individuals are still elusive. In two successively conducted experiments involving a total of 306 healthy volunteers, we explored the involvement of the neuropeptide oxytocin (OXT) in reciprocal social interaction. First, we show that synchronous social interactions evoke heightened endogenous OXT release in dyadic partners. In a second step, we examined the consequences of elevated OXT concentrations on emotion transmission by intranasally administering synthetic OXT before recording emotional expressions. Intriguingly, our data demonstrate that the subjects’ facial and vocal expressiveness of fear and happiness is enhanced after OXT compared with placebo administration. Collectively, our findings point to a central role of social synchrony in facilitating reciprocal communication between individuals via heightened OXT signaling. Elevated OXT concentrations among synchronized individuals seem to augment the partners’ emotional expressiveness, thereby contributing to improved transmission of emotional information in social communication.
Affiliation(s)
- Franny B Spengler
- Department of Psychiatry; Division of Medical Psychology, University of Bonn, 53105 Bonn, Germany
- Dirk Scheele
- Department of Psychiatry; Division of Medical Psychology, University of Bonn, 53105 Bonn, Germany
- Nina Marsh
- Department of Psychiatry; Division of Medical Psychology, University of Bonn, 53105 Bonn, Germany
- Charlotte Kofferath
- Department of Psychiatry; Division of Medical Psychology, University of Bonn, 53105 Bonn, Germany
- Aileen Flach
- Department of Psychiatry; Division of Medical Psychology, University of Bonn, 53105 Bonn, Germany
- Sarah Schwarz
- Department of Psychiatry; Division of Medical Psychology, University of Bonn, 53105 Bonn, Germany
- Birgit Stoffel-Wagner
- Department of Clinical Chemistry and Clinical Pharmacology, University of Bonn, 53105 Bonn, Germany
- Wolfgang Maier
- Department of Psychiatry; Division of Medical Psychology, University of Bonn, 53105 Bonn, Germany; German Center for Neurodegenerative Diseases (DZNE), 53175 Bonn, Germany
- René Hurlemann
- Department of Psychiatry; Division of Medical Psychology, University of Bonn, 53105 Bonn, Germany
12
Klasen M, von Marschall C, Isman G, Zvyagintsev M, Gur RC, Mathiak K. Prosody production networks are modulated by sensory cues and social context. Soc Cogn Affect Neurosci 2018. [PMID: 29514331] [PMCID: PMC5928400] [DOI: 10.1093/scan/nsy015]
Abstract
The neurobiology of emotional prosody production is not well investigated. In particular, the effects of cues and social context are not known. The present study sought to differentiate cued from free emotion generation and to assess the effect of social feedback from a human listener. Online speech filtering enabled functional magnetic resonance imaging during prosodic communication in 30 participants. Emotional vocalizations were (i) free, (ii) auditorily cued, (iii) visually cued or (iv) produced with interactive feedback. In addition to distributed language networks, cued emotions increased activity in auditory cortex and, in the case of visual cues, visual cortex. Responses were larger in the right posterior superior temporal gyrus and the ventral striatum when participants were listened to and received feedback from the experimenter. Sensory, language, and reward networks contributed to prosody production and were modulated by cues and social context. The right posterior superior temporal gyrus is a central hub for communication in social interactions, in particular for the interpersonal evaluation of vocal emotions.
Affiliation(s)
- Martin Klasen
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany; JARA - Translational Brain Medicine, 52074 Aachen, Germany
- Clara von Marschall
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany; JARA - Translational Brain Medicine, 52074 Aachen, Germany
- Güldehen Isman
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany
- Mikhail Zvyagintsev
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany; JARA - Translational Brain Medicine, 52074 Aachen, Germany
- Ruben C Gur
- Department of Psychiatry, University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
- Klaus Mathiak
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany; JARA - Translational Brain Medicine, 52074 Aachen, Germany

13
Carminati M, Fiori-Duharcourt N, Isel F. Neurophysiological differentiation between preattentive and attentive processing of emotional expressions on French vowels. Biol Psychol 2017; 132:55-63. [PMID: 29102707 DOI: 10.1016/j.biopsycho.2017.10.013]
Abstract
The present electrophysiological study investigated the processing of emotional prosody by minimizing as much as possible the effect of emotional information conveyed by the lexical-semantic context. Emotionally colored French vowels (i.e., happiness, sadness, fear, and neutral) were presented in a mismatch negativity (MMN) oddball paradigm. Both the MMN, i.e., an event-related potential (ERP) component thought to reflect preattentive change detection, and the P3a, i.e., an ERP marker of involuntary orientation of attention toward deviant stimuli, were significantly modulated by the emotional deviants compared to the neutral ones. Critically, the largest amplitude (MMN, P3a) and the shortest peak latency (MMN) were observed for fear deviants, all other things being equal. Taken together, the present findings lend support to a sequential neurocognitive model of emotion processing (Scherer, 2001) which postulates, among other checks, a first stage of automatic emotion detection (MMN) followed by a second stage of subjective evaluation of the stimulus or event (P3a). Consistent with previous studies, our data suggest that among the six universal emotions, fear could have a special status, probably because of its adaptive role in the evolution of the human species.
Affiliation(s)
- Mathilde Carminati
- Laboratory Vision Action Cognition - EA 7326, Institute of Psychology, Paris Descartes University - Sorbonne Paris Cité, France
- Nicole Fiori-Duharcourt
- Laboratory Vision Action Cognition - EA 7326, Institute of Psychology, Paris Descartes University - Sorbonne Paris Cité, France
- Frédéric Isel
- University Paris Nanterre - Paris Lumières, CNRS, UMR 7114 Models, Dynamics, Corpora, France

14
Abstract
It is of the utmost importance for an organism to rapidly detect and react to changes in its environment. The oddball paradigm has repeatedly been used to explore the underlying cognitive and neurophysiological components of change detection. It is also used to investigate the special role of emotional content in perception and attention (emotional oddball paradigm; EOP). In this article, the EOP is systematically reviewed. The EOP is used, for instance, as a tool to address questions such as the degree to which emotional deviant stimuli trigger orientation reactions, the role the emotional context plays in the processing of deviant information, and how the processing of emotional deviant information differs between individuals (including clinical populations). Two main variants with regard to the emotionality of standards and deviants are defined. Most of the identified EOP studies report EEG data, but an overview of behavioral data is also provided in this review. We integrate evidence from 99 EOP experiments and outline the EOP's theoretical background in light of the mechanisms and theories of related paradigms.
15
Is laughter a better vocal change detector than a growl? Cortex 2017; 92:233-248. [DOI: 10.1016/j.cortex.2017.03.018]
16
Chen X, Han L, Pan Z, Luo Y, Wang P. Influence of attention on bimodal integration during emotional change decoding: ERP evidence. Int J Psychophysiol 2016; 106:14-20. [PMID: 27238075 DOI: 10.1016/j.ijpsycho.2016.05.009]
Abstract
Recent findings on audiovisual emotional interactions suggest that selective attention affects cross-sensory interaction from an early processing stage. However, the influence of attention manipulation on facial-vocal integration during emotional change perception remains elusive. To address this issue, we asked participants to detect emotional changes conveyed by prosodies (vocal task) or facial expressions (facial task) while facial, vocal, and facial-vocal expressions were presented. At the same time, behavioral responses and electroencephalogram (EEG) were recorded. Behavioral results showed that bimodal emotional changes were detected with shorter response latencies compared to each unimodal condition, suggesting that bimodal emotional cues facilitated emotional change detection. Moreover, while the P3 amplitudes were larger for the bimodal change condition than for the sum of the two unimodal conditions regardless of attention direction, the N1 amplitudes were larger for the bimodal emotional change condition than for the sum of the two unimodal conditions under the attend-voice condition, but not under the attend-face condition. These findings suggest that selective attention modulates facial-vocal integration during emotional change perception in early sensory processing, but not in late cognitive processing stages.
Affiliation(s)
- Xuhai Chen
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an 710062, China; Key Laboratory of Modern Teaching Technology, Ministry of Education, Shaanxi Normal University, Xi'an 710062, China
- Lingzi Han
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an 710062, China
- Zhihui Pan
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an 710062, China
- Yangmei Luo
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an 710062, China
- Ping Wang
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an 710062, China

17
Abstract
Deficits in auditory emotion recognition (AER) are a core feature of schizophrenia and a key component of social cognitive impairment. AER deficits are tied behaviorally to impaired ability to interpret tonal ("prosodic") features of speech that normally convey emotion, such as modulations in base pitch (F0M) and pitch variability (F0SD). These modulations can be recreated using synthetic frequency modulated (FM) tones that mimic the prosodic contours of specific emotional stimuli. The present study investigates neural mechanisms underlying impaired AER using a combined event-related potential/resting-state functional connectivity (rsfMRI) approach in 84 schizophrenia/schizoaffective disorder patients and 66 healthy comparison subjects. Mismatch negativity (MMN) to FM tones was assessed in 43 patients/36 controls. rsfMRI between auditory cortex and medial temporal (insula) regions was assessed in 55 patients/51 controls. The relationship between AER, MMN to FM tones, and rsfMRI was assessed in the subset who performed all assessments (14 patients, 21 controls). As predicted, patients showed robust reductions in MMN across FM stimulus type (p = 0.005), particularly to modulations in F0M, along with impairments in AER and FM tone discrimination. MMN source analysis indicated dipoles in both auditory cortex and anterior insula, whereas rsfMRI analyses showed reduced auditory-insula connectivity. MMN to FM tones and functional connectivity together accounted for ∼50% of the variance in AER performance across individuals. These findings demonstrate that impaired preattentive processing of tonal information and reduced auditory-insula connectivity are critical determinants of social cognitive dysfunction in schizophrenia, and thus represent key targets for future research and clinical intervention.
SIGNIFICANCE STATEMENT Schizophrenia patients show deficits in the ability to infer emotion based upon tone of voice [auditory emotion recognition (AER)] that drive impairments in social cognition and global functional outcome. This study evaluated neural substrates of impaired AER in schizophrenia using a combined event-related potential/resting-state fMRI approach. Patients showed impaired mismatch negativity response to emotionally relevant frequency modulated tones along with impaired functional connectivity between auditory and medial temporal (anterior insula) cortex. These deficits contributed in parallel to impaired AER and accounted for ∼50% of variance in AER performance. Overall, these findings demonstrate the importance of both auditory-level dysfunction and impaired auditory/insula connectivity in the pathophysiology of social cognitive dysfunction in schizophrenia.
18
Schirmer A, Escoffier N, Cheng X, Feng Y, Penney TB. Detecting Temporal Change in Dynamic Sounds: On the Role of Stimulus Duration, Speed, and Emotion. Front Psychol 2016; 6:2055. [PMID: 26793161 PMCID: PMC4710701 DOI: 10.3389/fpsyg.2015.02055]
Abstract
For dynamic sounds, such as vocal expressions, duration often varies alongside speed. Compared to longer sounds, shorter sounds unfold more quickly. Here, we asked whether listeners implicitly use this confound when representing temporal regularities in their environment. In addition, we explored the role of emotions in this process. Using a mismatch negativity (MMN) paradigm, we asked participants to watch a silent movie while passively listening to a stream of task-irrelevant sounds. In Experiment 1, one surprised and one neutral vocalization were compressed and stretched to create stimuli of 378 and 600 ms duration. Stimuli were presented in four blocks, two of which used surprised and two of which used neutral expressions. In one surprised and one neutral block, short and long stimuli served as standards and deviants, respectively. In the other two blocks, the assignment of standards and deviants was reversed. We observed a climbing MMN-like negativity shortly after deviant onset, which suggests that listeners implicitly track sound speed and detect speed changes. Additionally, this MMN-like effect emerged earlier and was larger for long than short deviants, suggesting greater sensitivity to duration increments or slowing down than to decrements or speeding up. Last, deviance detection was facilitated in surprised relative to neutral blocks, indicating that emotion enhances temporal processing. Experiment 2 was comparable to Experiment 1 with the exception that sounds were spectrally rotated to remove vocal emotional content. This abolished the emotional processing benefit, but preserved the other effects. Together, these results provide insights into listener sensitivity to sound speed and raise the possibility that speed biases duration judgements implicitly in a feed-forward manner. Moreover, this bias may be amplified for duration increments relative to decrements and within an emotional relative to a neutral stimulus context.
Affiliation(s)
- Annett Schirmer
- Department of Psychology, National University of Singapore, Singapore; Life Sciences Institute Programme in Neurobiology and Ageing, National University of Singapore, Singapore; Duke-NUS Graduate Medical School, Singapore
- Nicolas Escoffier
- Department of Psychology, National University of Singapore, Singapore; Life Sciences Institute Programme in Neurobiology and Ageing, National University of Singapore, Singapore
- Xiaoqin Cheng
- Life Sciences Institute Programme in Neurobiology and Ageing, National University of Singapore, Singapore; Graduate School for Integrative Sciences and Engineering, National University of Singapore, Singapore
- Yenju Feng
- Life Sciences Institute Programme in Neurobiology and Ageing, National University of Singapore, Singapore; Graduate School for Integrative Sciences and Engineering, National University of Singapore, Singapore
- Trevor B Penney
- Department of Psychology, National University of Singapore, Singapore; Life Sciences Institute Programme in Neurobiology and Ageing, National University of Singapore, Singapore

19
Folyi T, Liesefeld HR, Wentura D. Attentional enhancement for positive and negative tones at an early stage of auditory processing. Biol Psychol 2015; 114:23-32. [PMID: 26678665 DOI: 10.1016/j.biopsycho.2015.12.001]
Abstract
We report an event-related potential (ERP) study based on the hypothesis that valenced (i.e., positive and/or negative) tones are prioritized over neutral ones at an early, perceptual stage of auditory processing. In order to avoid perceptual confounds, we induced valence experimentally during a learning phase by assigning positive, negative, and neutral valences to tone-frequencies in a balanced design. In a subsequent test phase, EEG was recorded while these tones were entirely task-irrelevant. The amplitude of the auditory N1 was increased for valenced compared with neutral tones, indicating enhanced attention. While behavioral results of the learning phase, and both implicit and explicit measures of tone evaluation indicated differentiation between positive and negative valence, there was no such differentiation on the N1 amplitude. Our results suggest that it is the general relevance of the valenced tones that governs early attentional processes.
Affiliation(s)
- Tímea Folyi
- Department of Psychology, Saarland University, Saarbrücken, Germany
- Dirk Wentura
- Department of Psychology, Saarland University, Saarbrücken, Germany

20
Neural Processing of Emotional Prosody across the Adult Lifespan. Biomed Res Int 2015; 2015:590216. [PMID: 26583118 PMCID: PMC4637042 DOI: 10.1155/2015/590216]
Abstract
Emotion recognition deficits emerge with increasing age, in particular a decline in the identification of sadness. However, little is known about the age-related changes of emotion processing in sensory, affective, and executive brain areas. This functional magnetic resonance imaging (fMRI) study investigated neural correlates of auditory processing of prosody across the adult lifespan. Unattended detection of emotional prosody changes was assessed in 21 young (age range: 18-35 years), 19 middle-aged (age range: 36-55 years), and 15 older (age range: 56-75 years) adults. Pseudowords uttered with neutral prosody served as standards in an oddball paradigm with angry, sad, happy, and gender deviants (20% deviants in total). Changes in emotional prosody and voice gender elicited bilateral superior temporal gyri (STG) responses reflecting automatic encoding of prosody. At the right STG, responses to sad deviants decreased linearly with age, whereas happy events exhibited a nonlinear relationship. In contrast to the behavioral data, no age-by-sex interaction emerged in the neural networks. The age-related decline in the processing of prosodic emotional cues thus emerges already at an early, automatic stage of information processing at the level of the auditory cortex. However, top-down modulation may introduce an additional perceptual bias, for example towards positive stimuli, and may depend on context factors such as the listener's sex.
21
What Do You Mean by That?! An Electrophysiological Study of Emotional and Attitudinal Prosody. PLoS One 2015; 10:e0132947. [PMID: 26176622 PMCID: PMC4503638 DOI: 10.1371/journal.pone.0132947]
Abstract
The use of prosody during verbal communication is pervasive in everyday language and whilst there is a wealth of research examining the prosodic processing of emotional information, much less is known about the prosodic processing of attitudinal information. The current study investigated the online neural processes underlying the prosodic processing of non-verbal emotional and attitudinal components of speech via the analysis of event-related brain potentials related to the processing of anger and sarcasm. To examine these, sentences with prosodic expectancy violations created by cross-splicing a prosodically neutral head (‘he has’) and a prosodically neutral, angry, or sarcastic ending (e.g., ‘a serious face’) were used. Task demands were also manipulated, with participants in one experiment performing prosodic classification and participants in another performing probe-verification. Overall, whilst minor differences were found across the tasks, the results suggest that angry and sarcastic prosodic expectancy violations follow a similar processing time-course underpinned by similar neural resources.
22
Chen X, Pan Z, Wang P, Yang X, Liu P, You X, Yuan J. The integration of facial and vocal cues during emotional change perception: EEG markers. Soc Cogn Affect Neurosci 2015; 11:1152-61. [PMID: 26130820 DOI: 10.1093/scan/nsv083]
Abstract
The ability to detect emotional changes is of primary importance for social living. Though emotional signals are often conveyed by multiple modalities, how emotional changes in vocal and facial modalities integrate into a unified percept has yet to be directly investigated. To address this issue, we asked participants to detect emotional changes delivered by facial, vocal and facial-vocal expressions while behavioral responses and electroencephalogram were recorded. Behavioral results showed that bimodal emotional changes were detected with higher accuracy and shorter response latencies compared with each unimodal condition. Moreover, the detection of emotional change, regardless of modality, was associated with enhanced amplitudes of the N2 and P3 components, as well as greater theta synchronization. More importantly, the P3 amplitudes and theta synchronization were larger for the bimodal emotional change condition than for the sum of the two unimodal conditions. The superadditive responses in P3 amplitudes and theta synchronization were both positively correlated with the magnitude of the bimodal superadditivity in accuracy. These behavioral and electrophysiological data consistently illustrated an effect of audiovisual integration during the detection of emotional changes, which is most likely mediated by the P3 activity and theta oscillations in brain responses.
Affiliation(s)
- Xuhai Chen
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an 710062, China; Key Laboratory of Modern Teaching Technology, Ministry of Education, Shaanxi Normal University, Xi'an 710062, China
- Zhihui Pan
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an 710062, China
- Ping Wang
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an 710062, China
- Xiaohong Yang
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Peng Liu
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an 710062, China
- Xuqun You
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an 710062, China
- Jiajin Yuan
- Key Laboratory of Cognition and Personality of Ministry of Education, School of Psychology, Southwest University, Chongqing 400715, China

23
Pinheiro AP, Vasconcelos M, Dias M, Arrais N, Gonçalves ÓF. The music of language: an ERP investigation of the effects of musical training on emotional prosody processing. Brain Lang 2015; 140:24-34. [PMID: 25461917 DOI: 10.1016/j.bandl.2014.10.009]
Abstract
Recent studies have demonstrated the positive effects of musical training on the perception of vocally expressed emotion. This study investigated the effects of musical training on event-related potential (ERP) correlates of emotional prosody processing. Fourteen musicians and fourteen control subjects listened to 228 sentences with neutral semantic content, differing in prosody (one third with neutral, one third with happy and one third with angry intonation), with intelligible semantic content (semantic content condition--SCC) and unintelligible semantic content (pure prosody condition--PPC). Reduced P50 amplitude was found in musicians. A difference between SCC and PPC conditions was found in P50 and N100 amplitude in non-musicians only, and in P200 amplitude in musicians only. Furthermore, musicians were more accurate in recognizing angry prosody in PPC sentences. These findings suggest that auditory expertise characterizing extensive musical training may impact different stages of vocal emotional processing.
Affiliation(s)
- Ana P Pinheiro
- Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal; Cognitive Neuroscience Lab, Department of Psychiatry, Harvard Medical School, Boston, MA, USA
- Margarida Vasconcelos
- Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Marcelo Dias
- Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Nuno Arrais
- Music Department, Institute of Arts and Human Sciences, University of Minho, Braga, Portugal
- Óscar F Gonçalves
- Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal; Spaulding Center of Neuromodulation, Department of Physical Medicine & Rehabilitation, Spaulding Rehabilitation Hospital and Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA

24
Chen X, Pan Z, Wang P, Zhang L, Yuan J. EEG oscillations reflect task effects for the change detection in vocal emotion. Cogn Neurodyn 2014; 9:351-8. [PMID: 25972983 DOI: 10.1007/s11571-014-9326-9]
Abstract
How task focus affects the recognition of change in vocal emotion remains under debate. In this study, we investigated the role of task focus in change detection for emotional prosody by measuring changes in event-related electroencephalogram (EEG) power. EEG was recorded for prosodies with and without emotion change while subjects performed an emotion change detection task (explicit) or a visual probe detection task (implicit). We found that vocal emotion change induced theta event-related synchronization during 100-600 ms regardless of task focus. More importantly, vocal emotion change induced significant beta event-related desynchronization during 400-750 ms under the explicit but not the implicit task condition. These findings suggest that the detection of emotional changes is independent of task focus, while the task focus effect in the neural processing of vocal emotion change is specific to the integration of emotional deviations.
Affiliation(s)
- Xuhai Chen
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, 199# South Chang'an Road, Xi'an 710062, China; Key Laboratory of Modern Teaching Technology, Ministry of Education, Shaanxi Normal University, Xi'an 710062, China
- Zhihui Pan
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, 199# South Chang'an Road, Xi'an 710062, China
- Ping Wang
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, 199# South Chang'an Road, Xi'an 710062, China
- Lijie Zhang
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, 199# South Chang'an Road, Xi'an 710062, China
- Jiajin Yuan
- Key Laboratory of Cognition and Personality of Ministry of Education, School of Psychology, Southwest University, Chongqing 400715, China

25
Sokka L, Huotilainen M, Leinikka M, Korpela J, Henelius A, Alain C, Müller K, Pakarinen S. Alterations in attention capture to auditory emotional stimuli in job burnout: An event-related potential study. Int J Psychophysiol 2014; 94:427-36. [PMID: 25448269 DOI: 10.1016/j.ijpsycho.2014.11.001]
Affiliation(s)
- Laura Sokka
- Finnish Institute of Occupational Health, Topeliuksenkatu 41 a A, 00250 Helsinki, Finland
- Minna Huotilainen
- Finnish Institute of Occupational Health, Topeliuksenkatu 41 a A, 00250 Helsinki, Finland
- Marianne Leinikka
- Finnish Institute of Occupational Health, Topeliuksenkatu 41 a A, 00250 Helsinki, Finland
- Jussi Korpela
- Finnish Institute of Occupational Health, Topeliuksenkatu 41 a A, 00250 Helsinki, Finland
- Andreas Henelius
- Finnish Institute of Occupational Health, Topeliuksenkatu 41 a A, 00250 Helsinki, Finland
- Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care, 3560 Bathurst Street, Toronto, Ontario, Canada M6A 2E1; Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Kiti Müller
- Finnish Institute of Occupational Health, Topeliuksenkatu 41 a A, 00250 Helsinki, Finland
- Satu Pakarinen
- Finnish Institute of Occupational Health, Topeliuksenkatu 41 a A, 00250 Helsinki, Finland

26
Chen C, Lee YH, Cheng Y. Anterior insular cortex activity to emotional salience of voices in a passive oddball paradigm. Front Hum Neurosci 2014; 8:743. [PMID: 25346670 PMCID: PMC4193252 DOI: 10.3389/fnhum.2014.00743]
Abstract
The human voice, which has a pivotal role in communication, is processed in specialized brain regions. Although a general consensus holds that the anterior insular cortex (AIC) plays a critical role in negative emotional experience, previous studies have not observed AIC activation in response to hearing disgust in voices. We used magnetoencephalography to measure the magnetic counterparts of mismatch negativity (MMNm) and P3a (P3am) in healthy adults while the emotionally meaningless syllables dada, spoken as neutral, happy, or disgusted prosodies, along with acoustically matched simple and complex tones, were presented in a passive oddball paradigm. The results revealed that disgusted relative to happy syllables elicited stronger MMNm-related cortical activities in the right AIC and precentral gyrus along with the left posterior insular cortex, supramarginal cortex, transverse temporal cortex, and upper bank of superior temporal cortex. The AIC activity specific to disgusted syllables (corrected p < 0.05) was associated with the hit rate of the emotional categorization task. These findings may clarify the neural correlates of emotional MMNm and lend support to the role of AIC in the processing of emotional salience already at the preattentive level.
Affiliation(s)
- Chenyi Chen
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan
- Yu-Hsuan Lee
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan
- Yawei Cheng
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan; Department of Rehabilitation, National Yang-Ming University, Yilan, Taiwan; Department of Education and Research, Taipei City Hospital, Taipei, Taiwan

27
Upregulation of the rostral anterior cingulate cortex can alter the perception of emotions: fMRI-based neurofeedback at 3 and 7 T. Brain Topogr 2014; 28:197-207. [PMID: 25087073 DOI: 10.1007/s10548-014-0384-4]
Abstract
Recent advances in real-time functional magnetic resonance imaging (rt-fMRI) techniques enable online feedback about momentary brain activity from a localized region of interest. The anterior cingulate cortex (ACC) is a central hub for cognitive and emotional networks, and its modulation has been suggested to elicit mood changes. In the present real-time fMRI neurofeedback experiment, conducted at 3 T and 7 T scanners, we enabled participants to regulate ACC activity within one training session. The session consisted of three training runs of 8.5 min in which subjects received online feedback about their current ACC activity. Before and after each run we presented emotional prosody. Subjects rated these stimuli according to their emotional valence and arousal, which served as an implicit mood measure. We found increases in ACC activation at 3 T (n = 15) and at 7 T (n = 9), with higher activation success for the 3 T group. fMRI signal control of the rostral ACC depended on signal quality and predicted a valence bias in the rating of emotional prosody. Real-time fMRI neurofeedback of the ACC is thus feasible at different magnetic field strengths and can modulate localized ACC activity and emotion perception. It promises non-invasive therapeutic approaches for psychiatric disorders characterized by impaired self-regulation.
28
Jiang A, Yang J, Yang Y. MMN responses during implicit processing of changes in emotional prosody: an ERP study using Chinese pseudo-syllables. Cogn Neurodyn 2014; 8:499-508. [PMID: 26396648] [DOI: 10.1007/s11571-014-9303-3] [Citation(s) in RCA: 12]
Abstract
In this study, we examined the mechanisms underlying early emotional prosody perception, in particular whether change detection in an oddball paradigm is driven by emotional category, physical properties, or both. Using implicit oddball paradigms, the current study manipulated the cues for detecting deviant stimuli from standards in three conditions: simultaneous changes in emotional category and physical properties (EP condition), change in emotional category alone (E condition), and change in physical properties alone (P condition). ERP results revealed that physical property change increased brain responses to deviant stimuli in the EP condition relative to the E condition at an early stage (90-160 ms), suggesting that physical property changes in emotional sounds can be detected at this early stage. At a later stage (160-260 ms), both the simultaneous and the separate changes in emotional category and physical properties were reliably detected, and the sum of the brain responses to the corresponding changes in the E and P conditions was equal to the brain response to the simultaneous changes in the EP condition. Source analysis further revealed that stimulus-driven regions (inferior parietal lobule) and temporal and frontal cortices were activated at the early stage, whereas only frontal cortices subserving higher cognitive processing were activated at the later stage. These findings suggest that changes in the physical properties and emotional category of prosody are perceived as domain-general change information during emotional prosody perception.
Affiliation(s)
- Aishi Jiang
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Chaoyang District, Beijing 100101, China; Graduate University of Chinese Academy of Sciences, Beijing 100049, China
- Jianfeng Yang
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Chaoyang District, Beijing 100101, China
- Yufang Yang
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Chaoyang District, Beijing 100101, China
29
Pang X, Xu J, Chang Y, Tang D, Zheng Y, Liu Y, Sun Y. Mismatch negativity of sad syllables is absent in patients with major depressive disorder. PLoS One 2014; 9:e91995. [PMID: 24658084] [PMCID: PMC3962367] [DOI: 10.1371/journal.pone.0091995] [Citation(s) in RCA: 20]
Abstract
BACKGROUND Major depressive disorder (MDD) is a highly prevalent mental disorder characterized by anhedonia and a lack of interest in everyday activities. Additionally, patients with MDD appear to have deficits in various cognitive abilities. Although studies investigating the central auditory processing of low-level sound features in patients with MDD have demonstrated impairments in automatic processing, the automatic processing of emotional voices has yet to be addressed. To explore the automatic processing of emotional prosodies in patients with MDD, we analyzed automatic change detection using event-related potentials (ERPs). METHOD This study included 18 patients with MDD and 22 age- and sex-matched healthy controls. Subjects were instructed to watch a silent movie and to ignore the acoustic emotional prosodies presented to both ears while continuous electroencephalographic activity was recorded. Prosodies consisted of meaningless syllables, such as "dada", spoken in happy, angry, sad, or neutral tones. The mean amplitudes of the ERPs elicited by emotional stimuli and the peak latencies of the emotional differential waveforms were analyzed. RESULTS The sad MMN was absent in patients with MDD, whereas the happy and angry MMN components were similar across groups. The abnormal sad MMN component was not significantly correlated with either HRSD-17 or HAMA scores. CONCLUSION The data indicate that patients with MDD are impaired in their ability to automatically process sad prosody, whereas their processing of happy and angry prosodies remains normal. Because the dysfunctional sad emotion-related MMN was not correlated with depression symptoms, the blunted MMN to sad prosody could be considered a trait marker of MDD.
Affiliation(s)
- Xiaomei Pang
- Department of Neurology and Psychiatry, First Affiliated Hospital, Dalian Medical University, Liaoning Province, China
- Research Institute of Integrated Traditional and Western Medicine, Dalian Medical University, Liaoning Province, China
- Jing Xu
- Department of Neurology and Psychiatry, First Affiliated Hospital, Dalian Medical University, Liaoning Province, China
- Research Institute of Integrated Traditional and Western Medicine, Dalian Medical University, Liaoning Province, China
- Yi Chang
- Department of Neurology and Psychiatry, First Affiliated Hospital, Dalian Medical University, Liaoning Province, China
- Di Tang
- Department of Neurology and Psychiatry, First Affiliated Hospital, Dalian Medical University, Liaoning Province, China
- Research Institute of Integrated Traditional and Western Medicine, Dalian Medical University, Liaoning Province, China
- Ya Zheng
- Department of Psychology, Dalian Medical University, Liaoning Province, China
- Yanhua Liu
- Department of Neurology and Psychiatry, First Affiliated Hospital, Dalian Medical University, Liaoning Province, China
- Yiming Sun
- Department of Neurology and Psychiatry, First Affiliated Hospital, Dalian Medical University, Liaoning Province, China
30
Demenescu LR, Mathiak KA, Mathiak K. Age- and gender-related variations of emotion recognition in pseudowords and faces. Exp Aging Res 2014; 40:187-207. [DOI: 10.1080/0361073x.2014.882210] [Citation(s) in RCA: 32]
31
Activation of auditory cortex by anticipating and hearing emotional sounds: an MEG study. PLoS One 2013; 8:e80284. [PMID: 24278270] [PMCID: PMC3835909] [DOI: 10.1371/journal.pone.0080284] [Citation(s) in RCA: 4]
Abstract
To study how auditory cortical processing is affected by the anticipation and hearing of long emotional sounds, we recorded auditory evoked magnetic fields with a whole-scalp MEG device from 15 healthy adults who were listening to emotional or neutral sounds. Pleasant, unpleasant, or neutral sounds, each lasting 6 s, were played in random order, each preceded by a 100-ms cue tone (0.5, 1, or 2 kHz) presented 2 s before sound onset. The cue tones, which indicated the valence of the upcoming emotional sound, evoked typical transient N100m responses in the auditory cortex. During the rest of the anticipation period (until the beginning of the emotional sound), the auditory cortices of both hemispheres generated slow shifts of the same polarity as the N100m. The relative strengths of these auditory-cortex signals depended on the upcoming sound: towards the end of the anticipation period, activity became stronger when the subject was anticipating emotional rather than neutral sounds. During the actual emotional and neutral sounds, sustained fields were predominant in the left hemisphere for all sounds. The DC MEG signals measured during both anticipation and hearing of emotional sounds imply that, once the cue has indicated the valence of the upcoming sound, auditory-cortex activity is modulated by the upcoming sound category throughout the anticipation period.
32
Fast parametric evaluation of central speech-sound processing with mismatch negativity (MMN). Int J Psychophysiol 2013. [DOI: 10.1016/j.ijpsycho.2012.11.010] [Citation(s) in RCA: 16]
33
Lindström R, Lepistö T, Makkonen T, Kujala T. Processing of prosodic changes in natural speech stimuli in school-age children. Int J Psychophysiol 2012; 86:229-37. [DOI: 10.1016/j.ijpsycho.2012.09.010] [Citation(s) in RCA: 7]
34
Regenbogen C, Schneider DA, Finkelmeyer A, Kohn N, Derntl B, Kellermann T, Gur RE, Schneider F, Habel U. The differential contribution of facial expressions, prosody, and speech content to empathy. Cogn Emot 2012; 26:995-1014. [DOI: 10.1080/02699931.2011.631296] [Citation(s) in RCA: 36]
35
Goerlich KS, Aleman A, Martens S. The sound of feelings: electrophysiological responses to emotional speech in alexithymia. PLoS One 2012; 7:e36951. [PMID: 22615853] [PMCID: PMC3352858] [DOI: 10.1371/journal.pone.0036951] [Citation(s) in RCA: 18]
Abstract
BACKGROUND Alexithymia is a personality trait characterized by difficulties in the cognitive processing of emotions (cognitive dimension) and in the experience of emotions (affective dimension). Previous research has focused mainly on visual emotional processing within the cognitive alexithymia dimension. We investigated the impact of both alexithymia dimensions on electrophysiological responses to emotional speech in 60 female subjects. METHODOLOGY During unattended processing, subjects watched a movie while an emotional prosody oddball paradigm was presented in the background. During attended processing, subjects detected deviants in emotional prosody. The cognitive alexithymia dimension was associated with a left-hemisphere bias during early stages of unattended emotional speech processing, and with generally reduced amplitudes of the late P3 component during attended processing. In contrast, the affective dimension did not modulate unattended emotional prosody perception, but was associated with reduced P3 amplitudes during attended processing, particularly for emotional prosody spoken with high intensity. CONCLUSIONS Our results provide evidence for a dissociable impact of the two alexithymia dimensions on electrophysiological responses during attended and unattended processing of emotional prosody. The observed electrophysiological modulations are indicative of a reduced sensitivity to the emotional qualities of speech, which may contribute to the problems in interpersonal communication associated with alexithymia.
Affiliation(s)
- Katharina Sophia Goerlich
- Neuroimaging Center, Department of Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands.
36
It's special the way you say it: an ERP investigation on the temporal dynamics of two types of prosody. Neuropsychologia 2012; 50:1609-20. [PMID: 22465251] [DOI: 10.1016/j.neuropsychologia.2012.03.014] [Citation(s) in RCA: 23]
Abstract
Sentence prosody has long been known to serve both linguistic functions (e.g. differentiating questions from statements) and emotional functions (e.g. signaling the emotional state of a speaker). These different functions of prosodic information need to be encoded rapidly during sentence comprehension to ensure successful speech communication. However, systematic investigations comparing the two functions, i.e. whether they are independent or interdependent, are sparse. The question at hand is whether the two prosodic functions engage a similar neural network and follow a similar time-course. To this aim, we investigated in an event-related brain potential (ERP) experiment whether emotional and linguistic prosody are processed independently or dependently. We merged a prosodically neutral head of a sentence with a second half that differed in emotional and/or linguistic prosody. In a within-subjects design, two tasks were administered: in the "emotion task", participants judged whether the sentence they had just heard was spoken in a neutral tone of voice or not; in the "linguistic task", participants decided whether the sentence was a declarative sentence or not. As predicted, the previously reported prosodic expectancy positivity (PEP) was elicited by both linguistic and emotional prosodic expectancy violations. However, the latency and distribution of the ERP component differed: whereas responses to emotional prosodic expectancy violations arose shortly after the violation (∼470 ms post splicing-point) and were most prominent at posterior electrode sites, the positivity in response to linguistic prosody had a later onset (∼620 ms post splicing-point) and a more frontal distribution. Interestingly, combined (linguistic and emotional) expectancy violations elicited a broadly distributed positivity with an onset of ∼170 ms post violation. These effects were found irrespective of the task setting. Given the differences in latency and distribution, we conclude that the processing of emotional and linguistic prosody relies at least partly on differing neural mechanisms and that emotional prosodic aspects of language are processed in a prioritized processing stream.
37
Rigoulot S, Pell MD. Seeing emotion with your ears: emotional prosody implicitly guides visual attention to faces. PLoS One 2012; 7:e30740. [PMID: 22303454] [PMCID: PMC3268762] [DOI: 10.1371/journal.pone.0030740] [Citation(s) in RCA: 34]
Abstract
Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality), which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally inflected pseudo-utterance ("Someone migged the pazing") uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the faces they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0–1250 ms], [1250–2500 ms], [2500–5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with the greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.
Affiliation(s)
- Simon Rigoulot
- McGill University, Faculty of Medicine, School of Communication Sciences and Disorders, Montreal, Quebec, Canada.
38
Abstract
Supramodal representation of emotion and its neural substrates have recently attracted attention as markers of social cognition. However, the question of whether perceptual integration of facial and vocal emotions takes place in primary sensory areas, multimodal cortices, or affective structures remains unanswered. Using novel computer-generated stimuli, we combined emotional faces and voices in congruent and incongruent ways and acquired functional brain data (fMRI) during an emotional classification task. Both congruent and incongruent audiovisual stimuli evoked larger responses in the thalamus and superior temporal regions compared with unimodal conditions. Congruent emotions were characterized by activation in the amygdala, insula, ventral posterior cingulate cortex (vPCC), and temporo-occipital and auditory cortices; incongruent emotions activated a frontoparietal network and the bilateral caudate nucleus, indicating a greater processing load in working memory and emotion-encoding areas. The vPCC alone exhibited differential reactions to congruency and incongruency across all emotion categories and can thus be considered a central structure for the supramodal representation of complex emotional information. Moreover, the left amygdala reflected supramodal representation of happy stimuli. These findings indicate that emotional information does not merge at the perceptual audiovisual integration level in unimodal or multimodal areas, but rather in the vPCC and amygdala.
39
Leitman DI, Sehatpour P, Garidis C, Gomez-Ramirez M, Javitt DC. Preliminary evidence of pre-attentive distinctions of frequency-modulated tones that convey affect. Front Hum Neurosci 2011; 5:96. [PMID: 22053152] [PMCID: PMC3205480] [DOI: 10.3389/fnhum.2011.00096] [Citation(s) in RCA: 10]
Abstract
Recognizing emotion is an evolutionary imperative. An early stage of auditory scene analysis involves the perceptual grouping of acoustic features, which can be based on both temporal coincidence and spectral features such as perceived pitch. Perceived pitch, or fundamental frequency (F0), is an especially salient cue for differentiating affective intent through speech intonation (prosody). We hypothesized (1) that simple frequency-modulated (FM) tone abstractions, based on the parameters of actual prosodic stimuli, would be reliably classified as representing differing emotional categories; and (2) that such differences would yield significant mismatch negativities (MMNs), an index of pre-attentive deviance detection within the auditory environment. We constructed a set of FM tones that approximated the F0 mean and variation of reliably recognized happy and neutral prosodic stimuli. These stimuli were presented to 13 subjects in a passive listening oddball paradigm. As control conditions, we additionally included stimuli with no frequency modulation and FM tones with identical carrier frequencies but differing modulation depths. Following electrophysiological recording, subjects were asked to identify the sounds they heard as happy, sad, angry, or neutral. We observed that FM tones abstracted from happy and no-expression speech stimuli elicited MMNs. Post hoc behavioral testing revealed that subjects identified the FM tones in a consistent manner. Finally, we also observed that FM tones and no-FM tones elicited equivalent MMNs. MMNs to FM tones that differentiate affect suggest that these abstractions may be sufficient to characterize prosodic distinctions, and that these distinctions can be represented in pre-attentive auditory sensory memory.
Affiliation(s)
- David I Leitman
- Neuropsychiatry Section, Department of Psychiatry, University of Pennsylvania School of Medicine, Philadelphia, PA, USA
40
Straube T, Mothes-Lasch M, Miltner WHR. Neural mechanisms of the automatic processing of emotional information from faces and voices. Br J Psychol 2011; 102:830-48. [DOI: 10.1111/j.2044-8295.2011.02056.x] [Citation(s) in RCA: 23]
41
Föcker J, Gondan M, Röder B. Preattentive processing of audio-visual emotional signals. Acta Psychol (Amst) 2011; 137:36-47. [PMID: 21397889] [DOI: 10.1016/j.actpsy.2011.02.004] [Citation(s) in RCA: 37]
Abstract
Previous research has shown that redundant information in faces and voices leads to faster emotional categorization compared to incongruent emotional information even when attending to only one modality. The aim of the present study was to test whether these crossmodal effects are predominantly due to a response conflict rather than interference at earlier, e.g. perceptual processing stages. In Experiment 1, participants had to categorize the valence and rate the intensity of happy, sad, angry and neutral unimodal or bimodal face-voice stimuli. They were asked to rate either the facial or vocal expression and ignore the emotion expressed in the other modality. Participants responded faster and more precisely to emotionally congruent compared to incongruent face-voice pairs in both the Attend Face and in the Attend Voice condition. Moreover, when attending to faces, emotionally congruent bimodal stimuli were more efficiently processed than unimodal visual stimuli. To study the role of a possible response conflict, Experiment 2 used a modified paradigm in which emotional and response conflicts were disentangled. Incongruency effects were significant even in the absence of response conflicts. The results suggest that emotional signals available through different sensory channels are automatically combined prior to response selection.
42
Chen X, Zhao L, Jiang A, Yang Y. Event-related potential correlates of the expectancy violation effect during emotional prosody processing. Biol Psychol 2010; 86:158-67. [PMID: 21093531] [DOI: 10.1016/j.biopsycho.2010.11.004] [Citation(s) in RCA: 23]
Abstract
The present study investigated the expectancy violation effects evoked by deviations in sentential emotional prosody (EP) and their association with the deviation patterns. Event-related potentials (ERPs) were recorded for mismatching EPs with different patterns of deviation and for matching control EPs while subjects performed an emotional congruousness judgment task in Experiment 1 and a visual probe detection task in Experiment 2. In the control experiment, EPs and acoustically matched non-emotional materials were presented, and ERPs were recorded while participants judged sound intensity congruousness. An early negativity, whose peak latency varied with deviation pattern, was elicited by mismatching EPs relative to matching ones, irrespective of task relevance. A late positivity was specifically induced by mismatching EPs and was modulated by both deviation pattern and task relevance. Moreover, these effects cannot simply be attributed to changes in non-emotional acoustic properties. These findings suggest that the brain detects EP deviations rapidly and then integrates them with context for comprehension, during which emotionality serves to speed up perception and enhance vigilance.
Affiliation(s)
- Xuhai Chen
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Chaoyang District, Beijing 100101, China
43
Mathiak K, Junghöfer M, Pantev C, Rockstroh B. [Magnetoencephalography in psychiatry]. Nervenarzt 2010; 81:7-15. [PMID: 20024527] [DOI: 10.1007/s00115-009-2829-7] [Citation(s) in RCA: 0]
Abstract
Neuropsychiatric disorders usually come with only subtle structural changes. Functional imaging can point to specific disturbances of information processing in neural networks. Besides imaging of receptor and metabolic functions with PET and fMRI, electromagnetic methods such as electroencephalography (EEG) and magnetoencephalography (MEG) allow imaging of dynamic dysfunctions. Compared with EEG, MEG has a shorter history and is less common, despite offering considerable advantages in temporospatial resolution and in sensitivity to impaired signal processing and network functioning, which renders it particularly interesting for psychiatric applications. Disturbed processing in the auditory and visual domains, as it emerges in schizophrenic, affective, and anxiety disorders, can be detected with high sensitivity. Moreover, neuromagnetic baseline activity allows conclusions to be drawn regarding neural network functions. Owing to its high sensitivity to specific deficits in information processing and to pharmacological effects, MEG will achieve clinical significance in specific areas.
Affiliation(s)
- K Mathiak
- Klinik für Psychiatrie und Psychotherapie, Universitätsklinikum Aachen, RWTH Aachen, Pauwelsstrasse 30, 52074 Aachen.