1. Ringer H, Rösch SA, Roeber U, Deller J, Escera C, Grimm S. That sounds awful! Does sound unpleasantness modulate the mismatch negativity and its habituation? Psychophysiology 2024; 61:e14450. PMID: 37779371. DOI: 10.1111/psyp.14450.
Abstract
There are sounds that most people perceive as highly unpleasant, for instance, the sound of rubbing pieces of polystyrene together. Previous research showed larger physiological and neural responses to such aversive sounds compared to neutral ones. Hitherto, it has remained unclear whether habituation, i.e., diminished responses to repeated stimulus presentation, which is typically reported for neutral sounds, occurs to the same extent for aversive stimuli. We measured the mismatch negativity (MMN) in response to rare occurrences of aversive or neutral deviant sounds within an auditory oddball sequence in 24 healthy participants while they performed a demanding visual distractor task. Deviants occurred as single events (i.e., between two standards) or as double deviants (i.e., repeating the identical deviant sound in two consecutive trials). All deviants elicited a clear MMN, and amplitudes were larger for aversive than for neutral deviants (irrespective of their position within a deviant pair). This supports the claim of preattentive emotion evaluation during early auditory processing. In contrast to our expectations, MMN amplitudes did not show habituation but increased in response to deviant repetition, similarly for aversive and neutral deviants. A more fine-grained analysis of individual MMN amplitudes in relation to individual arousal and valence ratings of each sound item revealed that stimulus-specific MMN amplitudes were best predicted by the interaction of deviant position and perceived arousal, but not by valence. Deviants with higher perceived arousal elicited larger MMN amplitudes only at the first deviant position, indicating that the MMN reflects preattentive processing of the emotional content of sounds.
Affiliation(s)
- Hanna Ringer
- Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
- International Max Planck Research School on Neuroscience of Communication, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Sarah Alica Rösch
- Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
- International Max Planck Research School on Neuroscience of Communication, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Integrated Research and Treatment Center (IFB) Adiposity Diseases, Behavioral Medicine Research Unit, Leipzig University Medical Center, Leipzig, Germany
- Urte Roeber
- Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
- Julia Deller
- Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig, Leipzig, Germany
- Carles Escera
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, Faculty of Psychology, University of Barcelona, Barcelona, Spain
- Institute of Neurosciences, University of Barcelona, Barcelona, Spain
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Spain
- Sabine Grimm
- Physics of Cognition Lab, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
2. Kao C, Zhang Y. Detecting Emotional Prosody in Real Words: Electrophysiological Evidence From a Modified Multifeature Oddball Paradigm. J Speech Lang Hear Res 2023; 66:2988-2998. PMID: 37379567. DOI: 10.1044/2023_jslhr-22-00652.
Abstract
PURPOSE Emotional voice conveys important social cues that demand listeners' attention and timely processing. This event-related potential study investigated the feasibility of a multifeature oddball paradigm to examine adult listeners' neural responses to detecting emotional prosody changes in nonrepeating naturally spoken words. METHOD Thirty-three adult listeners completed the experiment by passively listening to the words in neutral and three alternating emotions while watching a silent movie. Previous research documented preattentive change-detection electrophysiological responses (e.g., mismatch negativity [MMN], P3a) to emotions carried by fixed syllables or words. Given that the MMN and P3a have also been shown to reflect extraction of abstract regularities over repetitive acoustic patterns, this study employed a multifeature oddball paradigm to compare listeners' MMN and P3a to emotional prosody change from neutral to angry, happy, and sad emotions delivered with hundreds of nonrepeating words in a single recording session. RESULTS Both MMN and P3a were successfully elicited by the emotional prosodic change over the varying linguistic context. Angry prosody elicited the strongest MMN compared with happy and sad prosodies. Happy prosody elicited the strongest P3a in the centro-frontal electrodes, and angry prosody elicited the smallest P3a. CONCLUSIONS The results demonstrated that listeners were able to extract the acoustic patterns for each emotional prosody category over constantly changing spoken words. The findings confirm the feasibility of the multifeature oddball paradigm in investigating emotional speech processing beyond simple acoustic change detection, which may potentially be applied to pediatric and clinical populations.
Affiliation(s)
- Chieh Kao
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Twin Cities
- Center for Cognitive Sciences, University of Minnesota, Twin Cities
- Yang Zhang
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Twin Cities
- Masonic Institute for the Developing Brain, University of Minnesota, Twin Cities
3. Mauchand M, Pell MD. Listen to my feelings! How prosody and accent drive the empathic relevance of complaining speech. Neuropsychologia 2022; 175:108356. PMID: 36037914. DOI: 10.1016/j.neuropsychologia.2022.108356.
Abstract
Interpersonal communication often involves sharing our feelings with others; complaining, for example, aims to elicit empathy in listeners by vocally expressing a speaker's suffering. Despite the growing neuroscientific interest in the phenomenon of empathy, few have investigated how it is elicited in real time by vocal signals (prosody), and how this might be affected by interpersonal factors, such as a speaker's cultural background (based on their accent). To investigate the neural processes at play when hearing spoken complaints, twenty-six French participants listened to complaining and neutral utterances produced by in-group French and out-group Québécois (i.e., French-Canadian) speakers. Participants rated how hurt the speaker felt while their cerebral activity was monitored with electroencephalography (EEG). Principal Component Analysis of Event-Related Potentials (ERPs) taken at utterance onset showed culture-dependent time courses of emotive prosody processing. The high motivational relevance of ingroup complaints increased the P200 response compared to all other utterance types; in contrast, outgroup complaints selectively elicited an early posterior negativity in the same time window, followed by an increased N400 (due to ongoing effort to derive affective meaning from outgroup voices). Ingroup neutral utterances evoked a late negativity which may reflect re-analysis of emotively less salient, but culturally relevant ingroup speech. Results highlight the time-course of neurocognitive responses that contribute to emotive speech processing for complaints, establishing the critical role of prosody as well as social-relational factors (i.e., cultural identity) on how listeners are likely to "empathize" with a speaker.
Affiliation(s)
- Maël Mauchand
- McGill University, School of Communication Sciences and Disorders, Montréal, Québec, Canada
- Marc D Pell
- McGill University, School of Communication Sciences and Disorders, Montréal, Québec, Canada
4. Zora H, Csépe V. Perception of Prosodic Modulations of Linguistic and Paralinguistic Origin: Evidence From Early Auditory Event-Related Potentials. Front Neurosci 2022; 15:797487. PMID: 35002610. PMCID: PMC8733303. DOI: 10.3389/fnins.2021.797487.
Abstract
How listeners handle prosodic cues of linguistic and paralinguistic origin is a central question for spoken communication. In the present EEG study, we addressed this question by examining neural responses to variations in pitch accent (linguistic) and affective (paralinguistic) prosody in Swedish words, using a passive auditory oddball paradigm. The results indicated that changes in pitch accent and affective prosody elicited mismatch negativity (MMN) responses at around 200 ms, confirming the brain’s pre-attentive response to any prosodic modulation. The MMN amplitude was, however, statistically larger for the deviation in affective prosody than for the combined deviation in pitch accent and affective prosody, in line with previous research indicating not only a larger MMN response to affective than to neutral prosody but also a smaller MMN response to multidimensional deviants than to unidimensional ones. The results further showed a significant P3a response to the affective prosody change, in comparison to the pitch accent change, at around 300 ms, in accordance with previous findings of an enhanced positive response to emotional stimuli. The present findings provide evidence for distinct neural processing of different prosodic cues and statistically confirm the intrinsic perceptual and motivational salience of paralinguistic information in spoken communication.
Affiliation(s)
- Hatice Zora
- Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Valéria Csépe
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, Hungary
5. Li J, Yang J, Qin Y, Zhang Y. Expert and Novice Goalkeepers' Perceptions of Changes During Open Play Soccer. Percept Mot Skills 2021; 128:2725-2744. PMID: 34459301. DOI: 10.1177/00315125211040750.
Abstract
In the present study we investigated expert and novice football (i.e., soccer) goalkeepers' three stages of perceiving changes in open play situations, namely detection, localization, and identification, with and without time constraints. We adopted the continual cycling flicker paradigm to investigate goalkeepers' perceptions when provided with sufficient time (Experiment 1), and we utilized the limited display one-shot change detection paradigm to study their perceptions under time constraints (Experiment 2). Images of goalkeepers' first-person views of open play soccer scenes were used as stimuli. Semantic or non-semantic changes in these scenes were produced by modifying one element in each image. Separate groups of expert and novice goalkeepers were required to detect, localize, and identify the scene changes. We found that expert goalkeepers detected scene changes more quickly than novices under both time allowances. Furthermore, compared to novices, experts localized the changes more accurately under time constraints and identified the changes more quickly when given sufficient time. Additionally, semantic changes were detected more quickly, and localized and identified more accurately, than non-semantic changes when there was sufficient time. Under time constraints, expert goalkeepers' greater efficiency was likely due to pre-attentive processing; with sufficient time, they were able to focus attention on extracting detailed information for identification.
Affiliation(s)
- Jie Li
- Center for Cognition and Brain Disorders, the Affiliated Hospital, Hangzhou Normal University, Hangzhou, China
- Institutes of Psychological Sciences, Hangzhou Normal University, Hangzhou, China
- Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou Normal University, Hangzhou, China
- School of Psychology, Beijing Sport University, Beijing, China
- Jing Yang
- School of Psychology, Beijing Sport University, Beijing, China
- Beijing Jianhua Experimental Etown School, Beijing, China
- Yue Qin
- School of Psychology, Beijing Sport University, Beijing, China
- Yu Zhang
- School of Psychology, Beijing Sport University, Beijing, China
6. Rachman L, Dubal S, Aucouturier JJ. Happy you, happy me: expressive changes on a stranger's voice recruit faster implicit processes than self-produced expressions. Soc Cogn Affect Neurosci 2020; 14:559-568. PMID: 31044241. PMCID: PMC6545538. DOI: 10.1093/scan/nsz030.
Abstract
In social interactions, people have to pay attention to both the ‘what’ and the ‘who’. In particular, expressive changes heard in speech signals have to be integrated with speaker identity, differentiating, e.g., self- and other-produced signals. While previous research has shown that processing of self-related visual information is facilitated compared to non-self stimuli, evidence in the auditory modality remains mixed. Here, we compared electroencephalography (EEG) responses to expressive changes in sequences of self- or other-produced speech sounds using a mismatch negativity (MMN) passive oddball paradigm. Critically, to control for speaker differences, we used programmable acoustic transformations to create voice deviants that differed from standards in exactly the same manner, making EEG responses to such deviations comparable between sequences. Our results indicate that expressive changes on a stranger’s voice are highly prioritized in auditory processing compared to identical changes on the self-voice. Other-voice deviants generated earlier MMN onset responses and involved stronger cortical activations in a left motor and somatosensory network, suggestive of an increased recruitment of resources for less internally predictable, and therefore perhaps more socially relevant, signals.
Affiliation(s)
- Laura Rachman
- Inserm U, CNRS UMR, Sorbonne Université UMR S, Institut du Cerveau et de la Moelle épinière, Social and Affective Neuroscience Lab, Paris, France
- Science & Technology of Music and Sound, UMR (CNRS/IRCAM/Sorbonne Université), Paris, France
- Stéphanie Dubal
- Inserm U, CNRS UMR, Sorbonne Université UMR S, Institut du Cerveau et de la Moelle épinière, Social and Affective Neuroscience Lab, Paris, France
- Jean-Julien Aucouturier
- Science & Technology of Music and Sound, UMR (CNRS/IRCAM/Sorbonne Université), Paris, France
7. Zora H, Rudner M, Montell Magnusson AK. Concurrent affective and linguistic prosody with the same emotional valence elicits a late positive ERP response. Eur J Neurosci 2019; 51:2236-2249. PMID: 31872480. PMCID: PMC7383972. DOI: 10.1111/ejn.14658.
Abstract
Change in linguistic prosody generates a mismatch negativity response (MMN), indicating neural representation of linguistic prosody, while change in affective prosody generates a positive response (P3a), reflecting its motivational salience. However, the neural response to concurrent affective and linguistic prosody is unknown. The present paper investigates the integration of these two prosodic features in the brain by examining the neural responses to separate and concurrent processing using electroencephalography (EEG). A spoken pair of Swedish words—[ˈfɑ́ːsɛn] phase and [ˈfɑ̀ːsɛn] damn—that differed in emotional semantics due to linguistic prosody was presented to 16 subjects with angry and neutral affective prosody using a passive auditory oddball paradigm. Acoustically matched pseudowords—[ˈvɑ́ːsɛm] and [ˈvɑ̀ːsɛm]—were used as controls. Following the constructionist concept of emotions, which accentuates the conceptualization of emotions based on language, it was hypothesized that concurrent affective and linguistic prosody with the same valence—angry [ˈfɑ̀ːsɛn] damn—would elicit a unique late EEG signature, reflecting the temporal integration of affective voice with emotional semantics of prosodic origin. In accordance, linguistic prosody elicited an MMN at 300–350 ms, and affective prosody evoked a P3a at 350–400 ms, irrespective of semantics. Beyond these responses, concurrent affective and linguistic prosody evoked a late positive component (LPC) at 820–870 ms in frontal areas, indicating the conceptualization of affective prosody based on linguistic prosody. This study provides evidence that the brain not only distinguishes between these two functions of prosody but also integrates them based on language and experience.
Affiliation(s)
- Hatice Zora
- Department of Linguistics, Stockholm University, Stockholm, Sweden
- Mary Rudner
- Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Anna K Montell Magnusson
- Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Department of Clinical Science, Intervention, and Technology, Karolinska Institutet, Stockholm, Sweden
- Department of Biomedical and Clinical Sciences, Linköping University, Linköping, Sweden
8. Insensitivity of auditory mismatch negativity to classical fear conditioning and extinction in healthy humans. Neuroreport 2019; 30:468-472. PMID: 30817683. DOI: 10.1097/wnr.0000000000001221.
Abstract
A relationship between auditory mismatch negativity (MMN) and the neural cognitive processes of fear has been suggested in both healthy participants and patients with fear-related mental disorders such as post-traumatic stress disorder and panic disorder. The present study sought to confirm whether the MMN is affected by classical fear conditioning in healthy participants. MMN amplitude, N1 amplitude, and skin conductance level (SCL) were recorded in 20 healthy volunteers during a fear-conditioning paradigm consisting of three phases (habituation, fear acquisition, and fear extinction). Red and blue light signals were presented as the conditioned stimuli CS+ (threat cue) and CS- (safety cue), respectively. In addition, an aversive electrical stimulus was delivered as the unconditioned stimulus together with CS+ in the fear-acquisition phase. No MMN amplitude changes were observed between the CS types during the three phases. In the acquisition phase, the mean SCL during CS+ was significantly higher than that during CS-. The MMN amplitude and deviant N1 amplitude in the extinction phase were significantly lower than those in the other phases, regardless of CS type. Despite the clear difference in SCL between CS types in the acquisition phase, no significant differences in MMN were observed. The decreased MMN and deviant N1 in the fear-extinction phase were considered to be mainly due to a decreased arousal or attention level. The results indicate that auditory MMN amplitude was not affected by fear conditioned through other sensory modalities.