1. Thiffault F, Cinq-Mars J, Brisson B, Blanchette I. Hearing fearful prosody impairs visual working memory maintenance. Int J Psychophysiol 2024;199:112338. [PMID: 38552908] [DOI: 10.1016/j.ijpsycho.2024.112338]
Abstract
Distractor interference has repeatedly been associated with diminished visual and auditory working memory (WM) performance, and negative emotional distractors in particular have detrimental effects on WM. However, these associations have only been observed when the distractors and the items to be maintained in WM belong to the same sensory modality. In this study, we investigated cross-modal interference on WM. Twenty participants completed a visual change-detection task assessing visual WM (VWM) while hearing emotional (fearful) and neutral auditory distractors. Electrophysiological activity was recorded to measure the contralateral delay activity (CDA) and the auditory P2 event-related potential (ERP), indexing WM maintenance and distractor salience, respectively. At the behavioral level, fearful prosody did not significantly decrease WM accuracy compared to neutral prosody. Regarding ERPs, fearful distractors evoked a greater P2 amplitude than neutral distractors. Correlations between the two ERP components indicated that the P2 amplitude difference between the two types of prosody was associated with the difference in CDA amplitude between fearful and neutral trials. This association suggests that the cognitive resources required to process fearful prosody detrimentally impact VWM maintenance. The result provides additional evidence that negative emotional stimuli produce greater interference than neutral stimuli and that the cognitive resources used to process stimuli from different modalities come from a common pool.
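For readers unfamiliar with the CDA measure used here, the sketch below shows how it is commonly quantified from epoched EEG: the mean ipsilateral waveform is subtracted from the contralateral one within the retention interval. This is a minimal illustration with hypothetical array shapes, channel indices, and time window, not the authors' pipeline.

```python
import numpy as np

def cda_amplitude(epochs, contra_idx, ipsi_idx, times, window=(0.4, 1.0)):
    """Mean contralateral-minus-ipsilateral amplitude in a retention window.

    epochs     : array (n_trials, n_channels, n_samples) of epoched EEG
    contra_idx : indices of channels contralateral to the memorized side
    ipsi_idx   : indices of channels ipsilateral to the memorized side
    times      : array (n_samples,) of sample times in seconds
    """
    mask = (times >= window[0]) & (times <= window[1])
    contra = epochs[:, contra_idx][:, :, mask].mean()
    ipsi = epochs[:, ipsi_idx][:, :, mask].mean()
    return contra - ipsi  # the CDA is typically a negative-going difference
```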
Affiliation(s)
- François Thiffault
- CogNAC Research Group (Cognition, Neurosciences, Affect et Comportement), Québec, Canada; Département de Psychologie, Université du Québec à Trois-Rivières, Québec, Canada.
- Justine Cinq-Mars
- CogNAC Research Group (Cognition, Neurosciences, Affect et Comportement), Québec, Canada; Département de Psychologie, Université du Québec à Trois-Rivières, Québec, Canada.
- Benoît Brisson
- CogNAC Research Group (Cognition, Neurosciences, Affect et Comportement), Québec, Canada; Département de Psychologie, Université du Québec à Trois-Rivières, Québec, Canada.
- Isabelle Blanchette
- CogNAC Research Group (Cognition, Neurosciences, Affect et Comportement), Québec, Canada; École de Psychologie, Université Laval, Québec, Québec, Canada; CERVO Brain Research Center, Québec, Québec, Canada.
2. Herbert C. Brain-computer interfaces and human factors: the role of language and cultural differences - still a missing gap? Front Hum Neurosci 2024;18:1305445. [PMID: 38665897] [PMCID: PMC11043545] [DOI: 10.3389/fnhum.2024.1305445]
Abstract
Brain-computer interfaces (BCIs) aim to non-invasively measure brain activity in order to support users' communication and interaction with their environment by means of brain-machine assisted technologies. Despite technological progress and promising research aimed at understanding the influence of human factors on BCI effectiveness, some topics remain unexplored. The aim of this article is to discuss why future BCI research should consider the language of the user, its embodied grounding in perception, action and emotion, and its interaction with cultural differences in information processing. Based on evidence from recent studies, it is proposed that the detection of language abilities and language training are two main topics of enquiry for future BCI studies, so as to extend communication among vulnerable and healthy BCI users from bench to bedside and real-world applications. In addition, cultural differences shape perception, action, cognition, language and emotion subjectively, behaviorally, and neuronally. BCI applications should therefore account for cultural differences in information processing, develop culture- and language-sensitive applications for different user groups, and investigate the linguistic and cultural contexts in which the BCI will be used.
Affiliation(s)
- Cornelia Herbert
- Applied Emotion and Motivation Psychology, Institute of Psychology and Education, Ulm University, Ulm, Germany
3. Paquette S, Gouin S, Lehmann A. Improving emotion perception in cochlear implant users: insights from machine learning analysis of EEG signals. BMC Neurol 2024;24:115. [PMID: 38589815] [PMCID: PMC11000345] [DOI: 10.1186/s12883-024-03616-0]
Abstract
BACKGROUND Although cochlear implants can restore auditory inputs to deafferented auditory cortices, the quality of the sound signal transmitted to the brain is severely degraded, limiting functional outcomes in terms of speech and emotion perception. The latter deficit negatively impacts cochlear implant users' social integration and quality of life; however, emotion perception is not currently part of rehabilitation. Developing rehabilitation programs that incorporate emotional cognition requires a deeper understanding of cochlear implant users' residual emotion perception abilities. METHODS To identify the neural underpinnings of these residual abilities, we investigated whether machine learning techniques could identify emotion-specific patterns of neural activity in cochlear implant users. Using existing electroencephalography data from 22 cochlear implant users, we employed a random forest classifier to establish whether we could model, and subsequently predict from participants' brain responses, the auditory emotions (vocal and musical) presented to them. RESULTS Our findings suggest that consistent emotion-specific biomarkers exist in cochlear implant users, which could be used to develop effective rehabilitation programs that incorporate emotion perception training. CONCLUSIONS This study highlights the potential of machine learning techniques to improve outcomes for cochlear implant users, particularly in terms of emotion perception.
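A minimal sketch of the kind of decoding analysis described above: a random forest classifier cross-validated on single-trial EEG features to predict the stimulus emotion. The data shapes, placeholder features, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Hypothetical inputs: X holds flattened single-trial ERP features
# (n_trials, n_channels * n_timepoints); y holds the emotion label per trial.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64 * 100))  # placeholder EEG features
y = rng.integers(0, 4, size=200)          # e.g., 4 emotion categories

clf = RandomForestClassifier(n_estimators=500, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```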
Affiliation(s)
- Sebastien Paquette
- Psychology Department, Faculty of Arts and Science, Trent University, Peterborough, ON, Canada.
- Research Institute of the McGill University Health Centre (RI-MUHC), Montreal, QC, Canada.
- Centre for Research On Brain, Language, and Music (CRBLM), International Laboratory for Brain, Music & Sound Research (BRAMS), Psychology Department, University of Montreal, Montreal, QC, Canada.
- Samir Gouin
- Centre for Research On Brain, Language, and Music (CRBLM), International Laboratory for Brain, Music & Sound Research (BRAMS), Psychology Department, University of Montreal, Montreal, QC, Canada.
- Faculty of Medicine and Health Sciences, Department of Otolaryngology-Head and Neck Surgery, McGill University, Montreal, QC, Canada.
- Alexandre Lehmann
- Research Institute of the McGill University Health Centre (RI-MUHC), Montreal, QC, Canada.
- Centre for Research On Brain, Language, and Music (CRBLM), International Laboratory for Brain, Music & Sound Research (BRAMS), Psychology Department, University of Montreal, Montreal, QC, Canada.
- Faculty of Medicine and Health Sciences, Department of Otolaryngology-Head and Neck Surgery, McGill University, Montreal, QC, Canada.
4. Day TC, Malik I, Boateng S, Hauschild KM, Lerner MD. Vocal Emotion Recognition in Autism: Behavioral Performance and Event-Related Potential (ERP) Response. J Autism Dev Disord 2024;54:1235-1248. [PMID: 36694007] [DOI: 10.1007/s10803-023-05898-8]
Abstract
Autistic youth display difficulties in emotion recognition, yet little research has examined behavioral and neural indices of vocal emotion recognition (VER). The current study examines behavioral and event-related potential (N100, P200, Late Positive Potential [LPP]) indices of VER in autistic and non-autistic youth. Participants (N = 164) completed an emotion recognition task, the Diagnostic Analysis of Nonverbal Accuracy (DANVA-2), which included VER, during EEG recording. The LPP amplitude was larger in response to high-intensity VER, and social cognition predicted VER errors. Verbal IQ, not autism, was related to VER errors. An interaction between VER intensity and social communication impairments revealed that these impairments were related to larger LPP amplitudes during low-intensity VER. Taken together, differences in VER may be due to higher-order cognitive processes rather than basic, early perception (N100, P200), and verbal cognitive abilities may underlie behavioral differences in VER processing while obscuring neural ones.
Affiliation(s)
- Talena C Day
- Psychology Department, Stony Brook University, Stony Brook, Psychology B-354, Stony Brook, NY, 11794-2500, USA
- Isha Malik
- Psychology Department, Stony Brook University, Stony Brook, Psychology B-354, Stony Brook, NY, 11794-2500, USA
- Sydney Boateng
- Psychology Department, Stony Brook University, Stony Brook, Psychology B-354, Stony Brook, NY, 11794-2500, USA
- Matthew D Lerner
- Psychology Department, Stony Brook University, Stony Brook, Psychology B-354, Stony Brook, NY, 11794-2500, USA.
5. Duville MM, Alonso-Valerdi LM, Ibarra-Zarate DI. Improved emotion differentiation under reduced acoustic variability of speech in autism. BMC Med 2024;22:121. [PMID: 38486293] [PMCID: PMC10941423] [DOI: 10.1186/s12916-024-03341-y]
Abstract
BACKGROUND Socio-emotional impairments are among the diagnostic criteria for autism spectrum disorder (ASD), but current knowledge supports both altered and intact recognition of emotional prosody. Here, a Bayesian framework of perception is considered, suggesting that oversampling of sensory evidence impairs perception in highly variable environments, whereas reliable hierarchical structures for spectral and temporal cues would foster emotion discrimination by autistic individuals. METHODS Event-related spectral perturbations (ERSP) extracted from electroencephalographic (EEG) data indexed the perception of anger, disgust, fear, happiness, neutral, and sadness prosodies while participants listened to speech uttered by (a) human or (b) synthesized voices characterized by reduced volatility and variability of the acoustic environment. The assessment of perceptual mechanisms was extended to the visual domain by analyzing behavioral accuracy in a non-social task that emphasized the dynamics of precision weighting between bottom-up evidence and top-down inferences. Eighty children (mean age 9.7 years; standard deviation 1.8) volunteered, including 40 autistic children. Symptomatology was assessed at the time of the study via the Autism Diagnostic Observation Schedule, Second Edition, and parents' responses on the Autism Spectrum Rating Scales. A mixed within-between analysis of variance assessed the effects of group (autism versus typical development), voice, emotion, and their interactions. A Bayesian analysis quantified the evidence in favor of the null hypothesis in case of non-significance, and post hoc comparisons were corrected for multiple testing. RESULTS Autistic children showed impaired emotion differentiation while listening to speech uttered by human voices, which improved when the acoustic volatility and variability of the voices were reduced. Divergent neural patterns were observed between neurotypical and autistic children, emphasizing different perceptual mechanisms. Accordingly, behavioral measurements on the visual task were consistent with over-precision ascribed to environmental variability (sensory processing), which weakened performance. Unlike autistic children, neurotypicals could differentiate the emotions induced by all voices. CONCLUSIONS This study outlines behavioral and neurophysiological mechanisms that underpin responses to sensory variability. Neurobiological insights into the processing of emotional prosody emphasize the potential of acoustically modified emotional prosody to improve emotion differentiation by autistic individuals. TRIAL REGISTRATION BioMed Central ISRCTN Registry, ISRCTN18117434. Registered on September 20, 2020.
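A minimal sketch of the statistical logic described in the METHODS, assuming hypothetical column names and placeholder data: a mixed within-between ANOVA (here via the pingouin library), followed by a Bayes-factor t-test to quantify evidence for the null on a non-significant contrast. This is not the authors' analysis script.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Placeholder long-format data: 80 children x 2 voice conditions.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "id": np.repeat(np.arange(80), 2),
    "group": np.repeat(["ASD"] * 40 + ["TD"] * 40, 2),
    "voice": ["human", "synthesized"] * 80,
    "accuracy": rng.uniform(0.3, 0.9, 160),
})

# Mixed ANOVA: group is the between factor, voice the within factor.
aov = pg.mixed_anova(data=df, dv="accuracy", within="voice",
                     subject="id", between="group")
print(aov)

# Quantify evidence for the null on one contrast via a Bayes-factor t-test.
synth = df[df.voice == "synthesized"]
t = pg.ttest(synth[synth.group == "ASD"].accuracy,
             synth[synth.group == "TD"].accuracy)
print(t["BF10"])  # BF10 well below 1 is read as support for the null
```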
Affiliation(s)
- Mathilde Marie Duville
- Escuela de Ingeniería y Ciencias, Tecnologico de Monterrey, Ave. Eugenio Garza Sada 2501 Sur, Col: Tecnológico, Monterrey, N.L, 64700, México.
- Luz María Alonso-Valerdi
- Escuela de Ingeniería y Ciencias, Tecnologico de Monterrey, Ave. Eugenio Garza Sada 2501 Sur, Col: Tecnológico, Monterrey, N.L, 64700, México.
- David I Ibarra-Zarate
- Escuela de Ingeniería y Ciencias, Tecnologico de Monterrey, Ave. Eugenio Garza Sada 2501 Sur, Col: Tecnológico, Monterrey, N.L, 64700, México.
6. Sarzedas J, Lima CF, Roberto MS, Scott SK, Pinheiro AP, Conde T. Blindness influences emotional authenticity perception in voices: Behavioral and ERP evidence. Cortex 2024;172:254-270. [PMID: 38123404] [DOI: 10.1016/j.cortex.2023.11.005]
Abstract
The ability to distinguish spontaneous from volitional emotional expressions is an important social skill. How do blind individuals perceive emotional authenticity? Unlike sighted individuals, they cannot rely on facial and body language cues, relying instead on vocal cues alone. Here, we combined behavioral and ERP measures to investigate authenticity perception in laughter and crying in individuals with early- or late-blindness onset. Early-blind, late-blind, and sighted control participants (n = 17 per group, N = 51) completed authenticity and emotion discrimination tasks while EEG data were recorded. The stimuli consisted of laughs and cries that were either spontaneous or volitional. The ERP analysis focused on the N1, P2, and late positive potential (LPP). Behaviorally, early-blind participants showed intact authenticity perception, but late-blind participants performed worse than controls. There were no group differences in the emotion discrimination task. In brain responses, all groups were sensitive to laughter authenticity at the P2 stage, and to crying authenticity at the early LPP stage. Nevertheless, only early-blind participants were sensitive to crying authenticity at the N1 and middle LPP stages, and to laughter authenticity at the early LPP stage. Furthermore, early-blind and sighted participants were more sensitive than late-blind ones to crying authenticity at the P2 and late LPP stages. Altogether, these findings suggest that early blindness relates to facilitated brain processing of authenticity in voices, both at early sensory and late cognitive-evaluative stages. Late-onset blindness, in contrast, relates to decreased sensitivity to authenticity at behavioral and brain levels.
Affiliation(s)
- João Sarzedas
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- César F Lima
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal; Institute of Cognitive Neuroscience, University College London, London, UK
- Magda S Roberto
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, London, UK
- Ana P Pinheiro
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal.
- Tatiana Conde
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal.
7. Nussbaum C, Schirmer A, Schweinberger SR. Electrophysiological Correlates of Vocal Emotional Processing in Musicians and Non-Musicians. Brain Sci 2023;13:1563. [PMID: 38002523] [PMCID: PMC10670383] [DOI: 10.3390/brainsci13111563]
Abstract
Musicians outperform non-musicians in vocal emotion recognition, but the underlying mechanisms are still debated. Behavioral measures highlight the importance of auditory sensitivity towards emotional voice cues. However, it remains unclear whether and how this group difference is reflected at the brain level. Here, we compared event-related potentials (ERPs) to acoustically manipulated voices between musicians (n = 39) and non-musicians (n = 39). We used parameter-specific voice morphing to create and present vocal stimuli that conveyed happiness, fear, pleasure, or sadness, either in all acoustic cues or selectively in either pitch contour (F0) or timbre. Although the fronto-central P200 (150-250 ms) and N400 (300-500 ms) components were modulated by pitch and timbre, differences between musicians and non-musicians appeared only for a centro-parietal late positive potential (500-1000 ms). Thus, this study does not support an early auditory specialization in musicians but suggests instead that musicality affects the manner in which listeners use acoustic voice cues during later, controlled aspects of emotion evaluation.
Affiliation(s)
- Christine Nussbaum
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, 07743 Jena, Germany
- Voice Research Unit, Friedrich Schiller University, 07743 Jena, Germany
- Annett Schirmer
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, 07743 Jena, Germany
- Institute of Psychology, University of Innsbruck, 6020 Innsbruck, Austria
- Stefan R. Schweinberger
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, 07743 Jena, Germany
- Voice Research Unit, Friedrich Schiller University, 07743 Jena, Germany
- Swiss Center for Affective Sciences, University of Geneva, 1202 Geneva, Switzerland
8. Talwar S, Barbero FM, Calce RP, Collignon O. Automatic Brain Categorization of Discrete Auditory Emotion Expressions. Brain Topogr 2023;36:854-869. [PMID: 37639111] [PMCID: PMC10522533] [DOI: 10.1007/s10548-023-00983-8]
Abstract
Seamlessly extracting emotional information from voices is crucial for efficient interpersonal communication. However, it remains unclear how the brain categorizes vocal expressions of emotion beyond the processing of their acoustic features. In our study, we developed a new approach combining electroencephalographic recordings (EEG) in humans with a frequency-tagging paradigm to 'tag' automatic neural responses to specific categories of emotion expressions. Participants were presented with a periodic stream of heterogeneous non-verbal emotional vocalizations belonging to five emotion categories (anger, disgust, fear, happiness and sadness) at 2.5 Hz (stimulus length of 350 ms with a 50 ms silent gap between stimuli). Importantly, unknown to the participant, a specific emotion category appeared at a target presentation rate of 0.83 Hz that would elicit an additional response in the EEG spectrum only if the brain discriminates the target emotion category from the other categories and generalizes across heterogeneous exemplars of the target category. Stimuli were matched across emotion categories for harmonicity-to-noise ratio, spectral center of gravity, and pitch. Additionally, participants were presented with a scrambled version of the stimuli with identical spectral content and periodicity but disrupted intelligibility. Both types of sequences had comparable envelopes and early auditory peripheral processing, computed via simulation of the cochlear response. We observed that, in addition to the responses at the general presentation frequency (2.5 Hz) in both intact and scrambled sequences, a greater peak in the EEG spectrum at the target emotion presentation rate (0.83 Hz) and its harmonics emerged in the intact sequence compared to the scrambled sequence. The greater response at the target frequency in the intact sequence, together with our stimulus matching procedure, suggests that the categorical brain response elicited by a specific emotion is at least partially independent of the low-level acoustic features of the sounds. Moreover, responses at the fearful and happy vocalization presentation rates elicited different topographies and temporal dynamics, suggesting that different discrete emotions are represented differently in the brain. Our paradigm reveals the brain's ability to automatically categorize non-verbal vocal emotion expressions objectively (at a predefined frequency of interest), behavior-free, rapidly (within a few minutes of recording time), and robustly (with a high signal-to-noise ratio), making it a useful tool to study vocal emotion processing and auditory categorization in general, and in populations where behavioral assessments are more challenging.
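A minimal sketch of the frequency-tagging logic: the EEG amplitude spectrum is inspected at the target rate (0.83 Hz) and its harmonics, and expressed relative to neighboring frequency bins as a signal-to-noise ratio. The recording length, neighbor-bin counts, and placeholder data are assumptions, not the authors' exact parameters.

```python
import numpy as np

def bin_snr(spectrum, freqs, f_target, n_neighbors=10, skip=1):
    """Amplitude at f_target divided by the mean of surrounding bins."""
    idx = np.argmin(np.abs(freqs - f_target))
    neighbors = np.r_[idx - skip - n_neighbors : idx - skip,
                      idx + skip + 1 : idx + skip + 1 + n_neighbors]
    return spectrum[idx] / spectrum[neighbors].mean()

fs = 512.0
eeg = np.random.randn(int(fs * 60))          # placeholder 60-s recording
spectrum = np.abs(np.fft.rfft(eeg)) / eeg.size
freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)
for f in (0.83, 1.66, 2.5):                  # target rate, harmonic, base rate
    print(f, bin_snr(spectrum, freqs, f))    # SNR >> 1 indicates tagging
```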
Affiliation(s)
- Siddharth Talwar
- Institute for Research in Psychology (IPSY) & Neuroscience (IoNS), Louvain Bionics, University of Louvain (UCLouvain), Louvain, Belgium.
- Francesca M Barbero
- Institute for Research in Psychology (IPSY) & Neuroscience (IoNS), Louvain Bionics, University of Louvain (UCLouvain), Louvain, Belgium
- Roberta P Calce
- Institute for Research in Psychology (IPSY) & Neuroscience (IoNS), Louvain Bionics, University of Louvain (UCLouvain), Louvain, Belgium
- Olivier Collignon
- Institute for Research in Psychology (IPSY) & Neuroscience (IoNS), Louvain Bionics, University of Louvain (UCLouvain), Louvain, Belgium.
- School of Health Sciences, HES-SO Valais-Wallis, The Sense Innovation and Research Center, Lausanne and Sion, Switzerland.
9. Duville MM, Ibarra-Zarate DI, Alonso-Valerdi LM. Autistic traits shape neuronal oscillations during emotion perception under attentional load modulation. Sci Rep 2023;13:8178. [PMID: 37210415] [DOI: 10.1038/s41598-023-35013-x]
Abstract
Emotional content is particularly salient, but situational factors such as cognitive load may disturb the attentional prioritization of affective stimuli and interfere with their processing. In this study, 31 autistic and 31 typically developed children volunteered for an assessment of their perception of affective prosodies, via event-related spectral perturbations of neuronal oscillations recorded by electroencephalography, under attentional load modulations induced by Multiple Object Tracking or neutral images. Although intermediate load optimized emotion processing in typically developed children, load and emotion did not interact in children with autism. Results also outlined impaired emotional integration, emphasized in theta, alpha and beta oscillations at early and late stages, and lower attentional ability, indexed by tracking capacity. Furthermore, both tracking capacity and neuronal patterns of emotion perception during the task were predicted by daily-life autistic behaviors. These findings highlight that intermediate load may encourage emotion processing in typically developed children, whereas autism was associated with impaired affective processing and selective attention, both insensitive to load modulations. Results were discussed within a Bayesian perspective suggesting atypical updating of precision between sensations and hidden states, leading to poor contextual evaluations. For the first time, implicit emotion perception assessed by neuronal markers was integrated with environmental demands to characterize autism.
Affiliation(s)
- Mathilde Marie Duville
- Tecnologico de Monterrey, Escuela de Ingeniería y Ciencias, Ave. Eugenio Garza Sada 2501, 64849, Monterrey, NL, México.
- David I Ibarra-Zarate
- Tecnologico de Monterrey, Escuela de Ingeniería y Ciencias, Ave. Eugenio Garza Sada 2501, 64849, Monterrey, NL, México
- Luz María Alonso-Valerdi
- Tecnologico de Monterrey, Escuela de Ingeniería y Ciencias, Ave. Eugenio Garza Sada 2501, 64849, Monterrey, NL, México
10. Pinheiro AP, Sarzedas J, Roberto MS, Kotz SA. Attention and emotion shape self-voice prioritization in speech processing. Cortex 2023;158:83-95. [PMID: 36473276] [DOI: 10.1016/j.cortex.2022.10.006]
Abstract
Both the self-voice and emotional speech are salient signals that are prioritized in perception. Surprisingly, self-voice perception has been investigated far less than self-face perception. It therefore remains to be clarified whether self-voice prioritization is boosted by emotion, and whether self-relevance and emotion interact differently when attention is focused on who is speaking vs. what is being said. Thirty participants listened to 210 prerecorded words, spoken in their own or an unfamiliar voice and differing in emotional valence, in two tasks that manipulated the attentional focus on either speaker identity or speech emotion. Event-related potentials (ERP) of the electroencephalogram (EEG) informed on the temporal dynamics of self-relevance, emotion, and attention effects. Words spoken in one's own voice elicited a larger N1 and Late Positive Potential (LPP), but a smaller N400. Identity and emotion interactively modulated the P2 (self-positivity bias) and LPP (self-negativity bias). Attention to speaker identity more strongly modulated ERP responses within 600 ms post-word onset (N1, P2, N400), whereas attention to speech emotion altered the late component (LPP). However, attention did not modulate the interaction of self-relevance and emotion. These findings suggest that the self-voice is prioritized for neural processing at early sensory stages, and that both emotion and attention shape self-voice prioritization in speech processing. They also confirm involuntary processing of salient signals (self-relevance and emotion) even when attention is deliberately directed away from those cues. These findings have important implications for a better understanding of symptoms thought to arise from aberrant self-voice monitoring, such as auditory verbal hallucinations.
Affiliation(s)
- Ana P Pinheiro
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal; Basic and Applied NeuroDynamics Lab, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands.
- João Sarzedas
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Magda S Roberto
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Sonja A Kotz
- Basic and Applied NeuroDynamics Lab, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
11. Lin Y, Fan X, Chen Y, Zhang H, Chen F, Zhang H, Ding H, Zhang Y. Neurocognitive Dynamics of Prosodic Salience over Semantics during Explicit and Implicit Processing of Basic Emotions in Spoken Words. Brain Sci 2022;12:1706. [PMID: 36552167] [PMCID: PMC9776349] [DOI: 10.3390/brainsci12121706]
Abstract
How language mediates emotional perception and experience is poorly understood. The present event-related potential (ERP) study examined the explicit and implicit processing of emotional speech to differentiate the relative influences of communication channel, emotion category, and task type in the prosodic salience effect. Thirty participants (15 women) were presented with spoken words denoting happiness, sadness and neutrality in either the prosodic or the semantic channel. They were asked to judge the emotional content (explicit task) and the speaker's gender (implicit task) of the stimuli. Results indicated that emotional prosody (relative to semantics) triggered larger N100, P200 and N400 amplitudes, with greater delta, theta and alpha inter-trial phase coherence (ITPC) and event-related spectral perturbation (ERSP) values in the corresponding early time windows, and continued to produce larger LPC amplitudes and faster responses during late stages of higher-order cognitive processing. The relative salience of prosody and semantics was modulated by emotion and task, though such modulatory effects varied across processing stages. The prosodic salience effect was reduced for sadness processing and in the implicit task during early auditory processing and decision-making, but reduced for happiness processing in the explicit task during conscious emotion processing. Additionally, across-trial synchronization in the delta, theta and alpha bands predicted the ERP components, with higher ITPC and ERSP values significantly associated with stronger N100, P200, N400 and LPC enhancement. These findings reveal the neurocognitive dynamics of emotional speech processing, with prosodic salience tied to stage-dependent emotion- and task-specific effects, and can offer insights into language and emotion processing from cross-linguistic/cultural and clinical perspectives.
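A minimal sketch of the ITPC measure referenced above: single trials are convolved with a complex Morlet wavelet, the resulting phases are normalized to unit vectors, and the magnitude of their across-trial mean gives phase coherence. Wavelet parameters and array shapes are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def morlet_wavelet(fs, freq, n_cycles=5.0):
    """Complex Morlet wavelet centered on `freq` Hz at sampling rate `fs`."""
    sigma_t = n_cycles / (2.0 * np.pi * freq)        # temporal width in s
    t = np.arange(-3 * sigma_t, 3 * sigma_t, 1.0 / fs)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))

def itpc(trials, fs, freq, n_cycles=5.0):
    """Inter-trial phase coherence time course for one channel.

    trials : array (n_trials, n_samples) of single-trial EEG.
    Returns values in [0, 1]; 1 means perfect phase alignment across trials.
    """
    w = morlet_wavelet(fs, freq, n_cycles)
    analytic = np.array([np.convolve(tr, w, mode="same") for tr in trials])
    unit_phases = np.exp(1j * np.angle(analytic))    # keep phase, drop power
    return np.abs(unit_phases.mean(axis=0))
```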
Affiliation(s)
- Yi Lin
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China
- Xinran Fan
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China
- Yueqi Chen
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China
- Hao Zhang
- School of Foreign Languages and Literature, Shandong University, Jinan 250100, China
- Fei Chen
- School of Foreign Languages, Hunan University, Changsha 410012, China
- Hui Zhang
- School of International Education, Shandong University, Jinan 250100, China
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China
- Correspondence: (H.D.); (Y.Z.); Tel.: +86-213-420-5664 (H.D.); +1-612-624-7818 (Y.Z.)
- Yang Zhang
- Department of Speech-Language-Hearing Science & Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis, MN 55455, USA
- Correspondence: (H.D.); (Y.Z.); Tel.: +86-213-420-5664 (H.D.); +1-612-624-7818 (Y.Z.)
12. Proverbio AM, Tacchini M, Jiang K. Event-related brain potential markers of visual and auditory perception: A useful tool for brain computer interface systems. Front Behav Neurosci 2022;16:1025870. [PMID: 36523756] [PMCID: PMC9744781] [DOI: 10.3389/fnbeh.2022.1025870]
Abstract
OBJECTIVE A majority of BCI systems enabling communication with patients with locked-in syndrome are based on electroencephalogram (EEG) frequency analysis (e.g., linked to motor imagery) or P300 detection. Only recently has the use of event-related brain potentials (ERPs) received much attention, especially for face or music recognition, but neuro-engineering research into this new approach has not yet been carried out. The aim of this study was to provide a variety of reliable ERP markers of visual and auditory perception for the development of new and more complex mind-reading systems for reconstructing mental content from brain activity. METHODS A total of 30 participants were shown 280 color pictures (adult, infant, and animal faces; human bodies; written words; checkerboards; and objects) and 120 auditory files (speech, music, and affective vocalizations). This paradigm did not involve target selection, to avoid artifactual waves linked to decision-making and response preparation (e.g., P300 and motor potentials) masking the neural signature of semantic representation. Overall, 12,000 ERP waveforms × 126 electrode channels (1,512,000 ERP waveforms) were processed and artifact-rejected. RESULTS Clear and distinct category-dependent markers of perceptual and cognitive processing were identified through statistical analyses, some of which were novel to the literature. Results are discussed in view of current knowledge of ERP functional properties and with respect to machine learning classification methods previously applied to similar data. CONCLUSION The data showed a high level of accuracy (p ≤ .01) in discriminating the perceptual categories eliciting the various electrical potentials. Therefore, the ERP markers identified in this study could be significant tools for optimizing BCI systems [pattern recognition or artificial intelligence (AI) algorithms] applied to EEG/ERP signals.
Affiliation(s)
- Alice Mado Proverbio
- Laboratory of Cognitive Electrophysiology, Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Marta Tacchini
- Laboratory of Cognitive Electrophysiology, Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Kaijun Jiang
- Laboratory of Cognitive Electrophysiology, Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
13. Duville MM, Alonso-Valerdi LM, Ibarra-Zarate DI. Neuronal and behavioral affective perceptions of human and naturalness-reduced emotional prosodies. Front Comput Neurosci 2022. [DOI: 10.3389/fncom.2022.1022787]
Abstract
Artificial voices are nowadays embedded in our daily lives, with the latest neural voices approaching human voice consistency (naturalness). Nevertheless, the behavioral and neuronal correlates of the perception of less naturalistic emotional prosodies remain poorly understood. In this study, we explored the acoustic tendencies that define naturalness from human to synthesized voices. We then created naturalness-reduced emotional utterances by acoustic editing of human voices. Finally, we used event-related potentials (ERP) to assess the time dynamics of emotional integration when listening to both human and synthesized voices in a healthy adult sample. Additionally, listeners rated their perceptions of valence, arousal, discrete emotions, naturalness, and intelligibility. Synthesized voices were characterized by less lexical stress (i.e., a reduced difference between stressed and unstressed syllables within words) with regard to duration and median pitch modulations. In addition, spectral content was attenuated toward lower F2 and F3 frequencies and lower intensities for harmonics 1 and 4. Both psychometric and neuronal correlates were sensitive to the naturalness reduction. (1) Naturalness and intelligibility ratings dropped with the synthetization of emotional utterances; (2) discrete emotion recognition was impaired as naturalness declined, consistent with the P200 and Late Positive Potentials (LPP) being less sensitive to emotional differentiation at lower naturalness; and (3) relative P200 and LPP amplitudes between prosodies were modulated by synthetization. Nevertheless, (4) valence and arousal perceptions were preserved at lower naturalness; (5) valence (arousal) ratings correlated negatively (positively) with Higuchi's fractal dimension extracted from neuronal data under all naturalness perturbations; and (6) inter-trial phase coherence (ITPC) and standard deviation measurements revealed high inter-individual heterogeneity in emotion perception that is preserved as naturalness is reduced. Notably, partial between-participant synchrony (low ITPC), along with high amplitude dispersion of ERPs at both early and late stages, emphasized heterogeneous emotional responses among subjects. In this study, we highlighted for the first time both the behavioral and the neuronal basis of emotional perception under acoustic naturalness alterations. Partial dependencies between ecological relevance and emotion understanding indicate that synthetization modulates, but does not abolish, emotional integration.
14. Martins I, Lima CF, Pinheiro AP. Enhanced salience of musical sounds in singers and instrumentalists. Cogn Affect Behav Neurosci 2022;22:1044-1062. [PMID: 35501427] [DOI: 10.3758/s13415-022-01007-x]
Abstract
Music training has been linked to facilitated processing of emotional sounds. However, most studies have focused on speech, and less is known about musicians' brain responses to other emotional sounds and in relation to instrument-specific experience. The current study combined behavioral and EEG methods to address two novel questions related to the perception of auditory emotional cues: whether and how long-term music training relates to a distinct emotional processing of nonverbal vocalizations and music; and whether distinct training profiles (vocal vs. instrumental) modulate brain responses to emotional sounds from early to late processing stages. Fifty-eight participants completed an EEG implicit emotional processing task, in which musical and vocal sounds differing in valence were presented as nontarget stimuli. After this task, participants explicitly evaluated the same sounds regarding the emotion being expressed, their valence, and arousal. Compared with nonmusicians, musicians displayed enhanced salience detection (P2), attention orienting (P3), and elaborative processing (Late Positive Potential) of musical (vs. vocal) sounds in event-related potential (ERP) data. The explicit evaluation of musical sounds also was distinct in musicians: accuracy in the emotional recognition of musical sounds was similar across valence types in musicians, who also judged musical sounds to be more pleasant and more arousing than nonmusicians. Specific profiles of music training (singers vs. instrumentalists) did not relate to differences in the processing of vocal vs. musical sounds. Together, these findings reveal that music has a privileged status in the auditory system of long-term musically trained listeners, irrespective of their instrument-specific experience.
Affiliation(s)
- Inês Martins
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, 1649-013, Lisbon, Portugal
- César F Lima
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisbon, Portugal
- Ana P Pinheiro
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, 1649-013, Lisbon, Portugal.
15. Maltezou-Papastylianou C, Russo R, Wallace D, Harmsworth C, Paulmann S. Different stages of emotional prosody processing in healthy ageing - evidence from behavioural responses, ERPs, tDCS, and tRNS. PLoS One 2022;17:e0270934. [PMID: 35862317] [PMCID: PMC9302842] [DOI: 10.1371/journal.pone.0270934]
Abstract
Past research suggests that the ability to recognise the emotional intent of a speaker decreases as a function of age, yet few studies have systematically examined the underlying cause of this effect. This paper builds on the view that emotional prosody perception is a multi-stage process and explores which step of the recognition processing line is impaired in healthy ageing, using time-sensitive event-related brain potentials (ERPs). Results suggest that early processes linked to salience detection, as reflected in the P200 component, and the initial build-up of emotional representation, as linked to a subsequent negative ERP component, are largely unaffected in healthy ageing. The two groups differed, however, in emotional prosody recognition: older participants recognised the emotional intentions of speakers less well than younger participants did. These findings were followed up by two neuro-stimulation studies specifically targeting the inferior frontal cortex to test whether recognition improves during active stimulation relative to sham. Overall, results suggest that neither tDCS nor high-frequency tRNS stimulation at 2 mA for 30 minutes facilitates emotional prosody recognition rates in healthy older adults.
Affiliation(s)
- Riccardo Russo
- Department of Psychology and Centre for Brain Science, University of Essex, Colchester, United Kingdom
- Department of Brain and Behavioural Sciences, Università di Pavia, Pavia, Italy
- Denise Wallace
- Department of Psychology and Centre for Brain Science, University of Essex, Colchester, United Kingdom
- Chelsea Harmsworth
- Department of Psychology and Centre for Brain Science, University of Essex, Colchester, United Kingdom
- Silke Paulmann
- Department of Psychology and Centre for Brain Science, University of Essex, Colchester, United Kingdom
16. Weyers I, Mueller J. A Special Role of Syllables, But Not Vowels or Consonants, for Nonadjacent Dependency Learning. J Cogn Neurosci 2022;34:1467-1487. [PMID: 35604359] [DOI: 10.1162/jocn_a_01874]
Abstract
Successful language processing entails tracking (morpho)syntactic relationships between distant units of speech, so-called nonadjacent dependencies (NADs). Many cues to such dependency relations have been identified, yet the linguistic elements encoding them have received little attention. In the present investigation, we tested whether and how these elements, here syllables, consonants, and vowels, affect behavioral learning success as well as learning-related changes in neural activity in relation to item-specific NAD learning. In a set of two EEG studies with adults, we compared learning under conditions where either all segment types (Experiment 1) or only one segment type (Experiment 2) was informative. The collected behavioral and ERP data indicate that, when all three segment types are available, participants mainly rely on the syllable for NAD learning. With only one segment type available for learning, adults also perform most successfully with syllable-based dependencies. Although we find no evidence for successful learning across vowels in Experiment 2, dependencies between consonants seem to be identified at least passively at the phonetic-feature level. Together, these results suggest that successful item-specific NAD learning may depend on the availability of syllabic information. Furthermore, they highlight consonants' distinctive power to support lexical processes. Although syllables show a clear facilitatory function for NAD learning, the underlying mechanisms of this advantage require further research.
17. Nussbaum C, Schirmer A, Schweinberger SR. Contributions of fundamental frequency and timbre to vocal emotion perception and their electrophysiological correlates. Soc Cogn Affect Neurosci 2022;17:1145-1154. [PMID: 35522247] [PMCID: PMC9714422] [DOI: 10.1093/scan/nsac033]
Abstract
Our ability to infer a speaker's emotional state depends on the processing of acoustic parameters such as fundamental frequency (F0) and timbre. Yet, how these parameters are processed and integrated to inform emotion perception remains largely unknown. Here we pursued this issue using a novel parameter-specific voice morphing technique to create stimuli with emotion modulations in only F0 or only timbre. We used these stimuli together with fully modulated vocal stimuli in an event-related potential (ERP) study in which participants listened to and identified stimulus emotion. ERPs (P200 and N400) and behavioral data converged in showing that both F0 and timbre support emotion processing but do so differently for different emotions: Whereas F0 was most relevant for responses to happy, fearful and sad voices, timbre was most relevant for responses to voices expressing pleasure. Together, these findings offer original insights into the relative significance of different acoustic parameters for early neuronal representations of speaker emotion and show that such representations are predictive of subsequent evaluative judgments.
Affiliation(s)
- Christine Nussbaum
- Correspondence should be addressed to Christine Nussbaum, Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Leutragraben 1, Jena 07743, Germany.
- Annett Schirmer
- Department of Psychology, The Chinese University of Hong Kong, Shatin 999077, Hong Kong SAR; Brain and Mind Institute, The Chinese University of Hong Kong, Shatin 999077, Hong Kong SAR; Center for Cognition and Brain Studies, The Chinese University of Hong Kong, Shatin 999077, Hong Kong SAR
- Stefan R Schweinberger
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, Jena 07743, Germany; Voice Research Unit, Friedrich Schiller University, Jena 07743, Germany; Swiss Center for Affective Sciences, University of Geneva, Geneva 1202, Switzerland
18. Shi Z, Groechel TR, Jain S, Chima K, Rudovic O, Matarić MJ. Toward Personalized Affect-Aware Socially Assistive Robot Tutors for Long-Term Interventions with Children with Autism. ACM Trans Hum-Robot Interact 2022. [DOI: 10.1145/3526111]
Abstract
Affect-aware socially assistive robotics (SAR) has shown great potential for augmenting interventions for children with autism spectrum disorders (ASD). However, current SAR cannot yet perceive the unique and diverse set of atypical cognitive-affective behaviors from children with ASD in an automatic and personalized fashion in long-term (multi-session) real-world interactions. To bridge this gap, this work designed and validated personalized models of arousal and valence for children with ASD using a multi-session in-home dataset of SAR interventions. By training machine learning (ML) algorithms with supervised domain adaptation (s-DA), the personalized models were able to trade off between the limited individual data and the more abundant, less personal data pooled from other study participants. We evaluated the effects of personalization on a long-term multimodal dataset consisting of 4 children with ASD with a total of 19 sessions, and derived inter-rater reliability (IR) scores for binary arousal (IR = 83%) and valence (IR = 81%) labels between human annotators. Our results show that personalized Gradient Boosted Decision Tree (XGBoost) models with s-DA outperformed two non-personalized individualized and generic model baselines not only on the weighted average of all sessions, but also statistically (p < .05) across individual sessions. This work paves the way for the development of personalized autonomous SAR systems tailored toward individuals with atypical cognitive-affective and socio-emotional needs.
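A minimal sketch of the personalization idea described above, assuming hypothetical feature arrays and an illustrative weighting scheme: data pooled from other participants are down-weighted relative to the target child's own labeled samples when fitting a gradient-boosted tree model (XGBoost). The weights and shapes are assumptions, not the paper's configuration.

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
# Placeholder features/labels: pooled data from other participants vs. the
# target child's own (smaller) labeled dataset.
X_pool, y_pool = rng.standard_normal((900, 20)), rng.integers(0, 2, 900)
X_target, y_target = rng.standard_normal((60, 20)), rng.integers(0, 2, 60)

X = np.vstack([X_pool, X_target])
y = np.concatenate([y_pool, y_target])
w = np.concatenate([np.full(len(y_pool), 0.3),     # down-weight other children
                    np.full(len(y_target), 1.0)])  # emphasize the target child

model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X, y, sample_weight=w)  # personalized model for the target child
```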
19. Wang M, Tokimoto S, Song G, Ueno T, Koizumi M, Kiyama S. Different Neural Responses for Unfinished Sentence as a Conventional Indirect Refusal Between Native and Non-native Speakers: An Event-Related Potential Study. Front Psychol 2022;13:806023. [PMID: 35310221] [PMCID: PMC8929272] [DOI: 10.3389/fpsyg.2022.806023]
Abstract
Refusal is considered a face-threatening act (FTA), since it contradicts the inviter's expectations. In Japanese, native speakers (NS) are known to prefer to leave sentences unfinished for a conventional indirect refusal. Successful comprehension of this indirect refusal depends on whether the addressee is sufficiently conventionalized to the preference for syntactic unfinishedness to identify the true intention of the refusal. Non-native speakers (NNS) who are not fully accustomed to the convention may thus be confused by the indirect style. In the present study, we used event-related potentials (ERPs) of the electroencephalogram to differentiate the neural substrates by which NS and NNS perceive unfinished sentences in a conventionalized indirect refusal as an FTA, in terms of the unfinishedness and indirectness of the critical sentence. In addition, we examined the effects of individual differences in mentalization, or theory of mind, the ability to infer the mental states of others. We found several ERP effects that differed between NS and NNS. NNS showed stronger P600 effects for the unfinishedness of refusal sentences, suggesting perceived syntactic anomaly; this effect was not evoked in NS. NNS also showed N400 and P300 effects for the indirectness of refusal sentences, which can be interpreted as an increased load for pragmatic processing in an unfamiliar contextual flow. We further found that NNS's individual mentalizing ability correlated with the N400 effect mentioned above, indicating that lower mentalizers evoke a higher N400 for indirect refusal. NS, in contrast, did not show these effects reflecting increased pragmatic processing load. Instead, they showed earlier ERP effects of early posterior negativity (EPN) and P200, both known as indices of emotional processing, for finished refusal sentences compared with unfinished ones. We interpreted these effects as reflecting NS's dispreference for finished sentences when realizing an FTA, given that unfinished sentences are considered more polite and more conventionalized in Japanese social encounters. Overall, these findings provide evidence that a syntactic anomaly inherent in a cultural convention, as well as individual mentalizing ability, plays an important role in understanding the indirect speech act of face-threatening refusal.
Affiliation(s)
- Min Wang
- Department of Linguistics, Graduate School of Arts and Letters, Tohoku University, Sendai, Japan
- Shingo Tokimoto
- Department of English Language Studies, Mejiro University, Tokyo, Japan
- Ge Song
- Department of Linguistics, Graduate School of Arts and Letters, Tohoku University, Sendai, Japan
- Takashi Ueno
- Department of Social Welfare, Faculty of Comprehensive Welfare, Tohoku Fukushi University, Sendai, Japan
- Masatoshi Koizumi
- Department of Linguistics, Graduate School of Arts and Letters, Tohoku University, Sendai, Japan
- Sachiko Kiyama
- Department of Linguistics, Graduate School of Arts and Letters, Tohoku University, Sendai, Japan
20. The Time Course of Emotional Authenticity Detection in Nonverbal Vocalizations. Cortex 2022;151:116-132. [DOI: 10.1016/j.cortex.2022.02.016]
21. Shi H, Li M, Shangguan C, Lu J. Collective self-referential processing evoked by different national symbols: an event-related potential study. Neurosci Lett 2022;773:136496. [PMID: 35121057] [DOI: 10.1016/j.neulet.2022.136496]
Abstract
The collective self is an important representation of self-concept, especially for people in collectivist cultures. However, it is not clear whether different collective self-relevant stimuli produce different self-reference effects. The present study aimed to explore the temporal characteristics of collective self-referential processing evoked by polarized and unpolarized national symbols. Event-related potentials (ERPs) were recorded for pictures of national symbols and self-irrelevant pictures while 25 female participants performed a three-stimulus oddball task. The results indicate that, compared to self-irrelevant pictures, both types of national symbols elicited collective self-reference effects on N2, P3, and LPP amplitudes. Polarized and unpolarized national symbols differed in N2 and P3 amplitudes. Moreover, national identity level was correlated with the N2 and P3 amplitudes elicited by unpolarized symbols, and with the early LPP amplitudes elicited by both types of symbols. These results suggest greater recruitment of resources to process national symbols, and inconsistent time courses for processing different national symbols. Polarized symbols may consume more resources because of the internal complexity of their self-representations. The present study extends research on the collective self and its self-referential effect in women, and provides insight into the internal factors that influence the strength of the self-reference effect.
Affiliation(s)
- Huqing Shi
- Department of Psychology, Shanghai Normal University, Shanghai, China
- Mingping Li
- Department of Psychology, Shanghai Normal University, Shanghai, China
- Chenyu Shangguan
- College of Education Science and Technology, Nanjing University of Posts and Telecommunications, Nanjing, China.
- Jiamei Lu
- Department of Psychology, Shanghai Normal University, Shanghai, China.
22
|
Zora H, Csépe V. Perception of Prosodic Modulations of Linguistic and Paralinguistic Origin: Evidence From Early Auditory Event-Related Potentials. Front Neurosci 2022; 15:797487. [PMID: 35002610 PMCID: PMC8733303 DOI: 10.3389/fnins.2021.797487] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2021] [Accepted: 11/29/2021] [Indexed: 11/13/2022] Open
Abstract
How listeners handle prosodic cues of linguistic and paralinguistic origin is a central question for spoken communication. In the present EEG study, we addressed this question by examining neural responses to variations in pitch accent (linguistic) and affective (paralinguistic) prosody in Swedish words, using a passive auditory oddball paradigm. The results indicated that changes in pitch accent and affective prosody elicited mismatch negativity (MMN) responses at around 200 ms, confirming the brain's pre-attentive response to any prosodic modulation. The MMN amplitude was, however, statistically larger in response to the deviation in affective prosody than to the combined deviation in pitch accent and affective prosody, which is in line with previous research indicating not only a larger MMN response to affective than to neutral prosody but also a smaller MMN response to multidimensional deviants than to unidimensional ones. The results further showed a significant P3a response to the affective prosody change, in comparison to the pitch accent change, at around 300 ms, in accordance with previous findings of an enhanced positive response to emotional stimuli. The present findings provide evidence for distinct neural processing of different prosodic cues and statistically confirm the intrinsic perceptual and motivational salience of paralinguistic information in spoken communication.
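To make the oddball logic above concrete, here is a minimal Python sketch of the deviant-minus-standard computation that yields an MMN difference wave; the arrays, sampling rate, and 150-250 ms measurement window are illustrative placeholders, not the study's data or parameters.

```python
# Minimal sketch of the deviant-minus-standard difference wave used to
# quantify MMN in a passive oddball design. Shapes and the sampling rate
# are assumptions; real pipelines typically use MNE-Python epochs.
import numpy as np

fs = 500                          # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.5, 1 / fs)  # epoch time axis: -100 to 500 ms

# Baseline-corrected single-trial EEG at a fronto-central site:
# (n_trials, n_samples); random placeholder data here.
standard_epochs = np.random.randn(400, t.size)
deviant_epochs = np.random.randn(80, t.size)

# Average across trials to obtain the two ERPs, then subtract.
mmn_wave = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

# Mean amplitude in a 150-250 ms window around the ~200 ms MMN reported above.
win = (t >= 0.15) & (t <= 0.25)
print(f"MMN mean amplitude (150-250 ms): {mmn_wave[win].mean():.2f} µV")
```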
Collapse
Affiliation(s)
- Hatice Zora
- Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
| | - Valéria Csépe
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, Hungary
| |
Collapse
|
23
|
Thompson L, White B. Neuropsychological correlates of evocative multimodal speech: The combined roles of fearful prosody, visuospatial attention, cortisol response, and anxiety. Behav Brain Res 2022; 416:113560. [PMID: 34461163 DOI: 10.1016/j.bbr.2021.113560] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2021] [Revised: 08/04/2021] [Accepted: 08/24/2021] [Indexed: 12/21/2022]
Abstract
Past research reveals left-hemisphere dominance for linguistic processing and right-hemisphere dominance for emotional prosody processing during auditory language comprehension, a pattern also found in visuospatial attention studies where listeners are presented with a view of the talker's face. Is this lateralization pattern for visuospatial attention and language processing upheld when listeners are experiencing a stress response? To investigate this question, participants completed the Trier Social Stress Test (TSST) between administrations of a visuospatial attention and language comprehension dual-task paradigm. Subjective anxiety, cardiovascular, and saliva cortisol measures were taken before and after the TSST. Higher language comprehension scores in the post-TSST neutral prosody condition were associated with lower cortisol responses, differences in blood pressure, and less subjective anxiety. In this challenging task, visuospatial attention was most focused at the mouth region, both prior to and after stress induction. Greater visuospatial attention on the left side of the face image, compared to the right side, indicated greater right hemisphere activation. In the Fear, but not the Neutral, prosody condition, greater cortisol response was associated with greater visuospatial attention to the left side of the face image. Results are placed into theoretical context, and can be applied to situations where stressed listeners must interpret emotionally evocative language.
Collapse
Affiliation(s)
- Laura Thompson
- Clinical Psychology Program, Fielding Graduate University, United States.
| | - Bryan White
- Department of Psychology, New Mexico State University, United States
| |
Collapse
|
24
|
Abstract
Human emotion recognition is an active research area in artificial intelligence and has made substantial progress over the past few years. Many recent works focus mainly on facial regions to infer human affect, while the surrounding context information is not effectively utilized. In this paper, we propose a new deep network that effectively recognizes human emotions using a novel global-local attention mechanism. Our network is designed to extract features from the facial and context regions independently, then learn them together using the attention module. In this way, both facial and contextual information is used to infer human emotions, thereby enhancing the discriminative power of the classifier. Extensive experiments show that our method surpasses current state-of-the-art methods on recent emotion datasets by a fair margin. Qualitatively, our global-local attention module extracts more meaningful attention maps than previous methods. The source code and trained model of our network are available at https://github.com/minhnhatvt/glamor-net.
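As a hedged sketch of the idea (not the authors' published architecture, which is available at the repository above), the following PyTorch module encodes face and context streams independently and fuses them with learned attention weights; all layer sizes and the fusion rule are assumptions.

```python
# Illustrative global-local fusion in the spirit of the paper's attention
# mechanism; layer sizes and fusion rule are assumptions, not GLAMOR-Net.
import torch
import torch.nn as nn

class GlobalLocalFusion(nn.Module):
    def __init__(self, feat_dim=256, n_classes=7):
        super().__init__()
        # Independent encoders for the face crop (local) and scene (global).
        self.face_enc = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(32, feat_dim))
        self.ctx_enc = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                     nn.Linear(32, feat_dim))
        # Attention over the two streams, conditioned on both of them.
        self.attn = nn.Sequential(nn.Linear(2 * feat_dim, 2), nn.Softmax(dim=-1))
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, face, context):
        f, c = self.face_enc(face), self.ctx_enc(context)
        w = self.attn(torch.cat([f, c], dim=-1))  # (batch, 2) stream weights
        fused = w[:, :1] * f + w[:, 1:] * c       # weighted sum of streams
        return self.classifier(fused)

model = GlobalLocalFusion()
logits = model(torch.randn(4, 3, 96, 96), torch.randn(4, 3, 96, 96))
print(logits.shape)  # torch.Size([4, 7])
```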
Collapse
|
25
|
The neural basis of authenticity recognition in laughter and crying. Sci Rep 2021; 11:23750. [PMID: 34887461 PMCID: PMC8660868 DOI: 10.1038/s41598-021-03131-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2021] [Accepted: 11/22/2021] [Indexed: 01/28/2023] Open
Abstract
Deciding whether others' emotions are genuine is essential for successful communication and social relationships. While previous fMRI studies suggested that differentiating authentic from acted emotional expressions involves higher-order brain areas, the time course of authenticity discrimination is still unknown. To address this gap, we tested the impact of authenticity discrimination on event-related potentials (ERPs) related to emotion, motivational salience, and higher-order cognitive processing (N100, P200, and the late positive complex, LPC), using vocalised non-verbal expressions of sadness (crying) and happiness (laughter) in a 32-participant, within-subject study. Using a repeated-measures 2-factor (authenticity, emotion) ANOVA, we show that the N100 amplitude was larger in response to authentic than acted vocalisations, particularly for cries, while the P200 was larger in response to acted vocalisations, particularly for laughs. We suggest these results point to two different mechanisms: (1) a larger N100 in response to authentic vocalisations is consistent with its link to emotional content and arousal (putatively larger amplitude for genuine emotional expressions); (2) a larger P200 in response to acted ones is in line with evidence relating it to motivational salience (putatively larger for ambiguous emotional expressions). Complementarily, a significant main effect of emotion was found on P200 and LPC amplitudes, both being larger for laughs than cries regardless of authenticity. Overall, we provide the first electroencephalographic examination of authenticity discrimination and propose that authenticity processing of others' vocalisations is initiated early, alongside that of their emotional content or category, attesting to its evolutionary relevance for trust and bond formation.
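A minimal sketch of the 2 x 2 repeated-measures ANOVA described above, using statsmodels on a long-format table; the amplitude values are simulated stand-ins for per-participant ERP means, not the study's data.

```python
# Sketch of a 2 (authenticity) x 2 (emotion) repeated-measures ANOVA on
# per-participant ERP mean amplitudes (e.g., N100); data are made up.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subj in range(32):                      # 32 participants, as in the study
    for auth in ("authentic", "acted"):
        for emo in ("laughter", "crying"):
            rows.append({"subject": subj, "authenticity": auth,
                         "emotion": emo, "amplitude": rng.normal()})
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="amplitude", subject="subject",
              within=["authenticity", "emotion"]).fit()
print(res.anova_table)  # F and p for main effects and the interaction
```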
Collapse
|
26
|
Hugentobler KG, Lüdtke J. Micropoetry Meets Neurocognitive Poetics: Influence of Associations on the Reception of Poetry. Front Psychol 2021; 12:737756. [PMID: 34744908 PMCID: PMC8563571 DOI: 10.3389/fpsyg.2021.737756] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Accepted: 09/14/2021] [Indexed: 12/01/2022] Open
Abstract
Reading and understanding poetic texts is often described as an interactive process influenced by the words and phrases building the poems and by all the associations and images they induce in the reader's mind. Iser, for example, described the understanding process as the closing of a good Gestalt promoted by mental images. Here, we investigate the effect that semantic cohesion, that is, the internal connection of a list of words, has on the understanding and appreciation of poetic texts. To do this, word lists were presented to participants as modern micropoems, and the (ease of) extraction of underlying concepts as well as the affective and aesthetic responses were implicitly and explicitly measured. We found that a unifying concept is found more easily, and unifying concepts vary significantly less between participants, when the words composing a micropoem are semantically related. Moreover, such items are liked better and understood more easily. Our study provides evidence for the assumed relationship between building spontaneous associations, forming mental imagery, and the understanding and appreciation of poetic texts. In addition, we introduce a new method well suited to manipulating backgrounding features independently of foregrounding features, which makes it possible to disentangle the effects of both on poetry reception.
Collapse
Affiliation(s)
- Katharina Gloria Hugentobler
- Department of Education and Psychology, Experimental and Neurocognitive Psychology, Freie Universität Berlin, Berlin, Germany
| | - Jana Lüdtke
- Department of Education and Psychology, Experimental and Neurocognitive Psychology, Freie Universität Berlin, Berlin, Germany
| |
Collapse
|
27
|
Döllinger L, Laukka P, Högman LB, Bänziger T, Makower I, Fischer H, Hau S. Training Emotion Recognition Accuracy: Results for Multimodal Expressions and Facial Micro Expressions. Front Psychol 2021; 12:708867. [PMID: 34475841 PMCID: PMC8406528 DOI: 10.3389/fpsyg.2021.708867] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2021] [Accepted: 07/09/2021] [Indexed: 12/22/2022] Open
Abstract
Nonverbal emotion recognition accuracy (ERA) is a central feature of successful communication and interaction, and is of importance for many professions. We developed and evaluated two ERA training programs: one focusing on dynamic multimodal expressions (audio, video, audio-video) and one focusing on facial micro expressions. Sixty-seven subjects were randomized to one of two experimental groups (multimodal, micro expression) or an active control group (emotional working memory task). Participants trained once weekly with a brief computerized training program for three consecutive weeks. Pre-post outcome measures consisted of a multimodal ERA task, a micro expression recognition task, and a task assessing recognition of patients' emotional cues. The post measurement took place approximately a week after the last training session. Non-parametric mixed analyses of variance using the Aligned Rank Transform were used to evaluate the effectiveness of the training programs. Results showed that multimodal training was significantly more effective in improving multimodal ERA than micro expression training or the control training, and micro expression training was significantly more effective in improving micro expression ERA than the other two training conditions. Both pre-post effects can be interpreted as large. No group differences were found for the measure assessing recognition of patients' emotional cues. There were no transfer effects of the training programs: participants improved significantly only on the specific facet of ERA that they had trained on. Further, low baseline ERA was associated with larger ERA improvements. Results are discussed with regard to methodological and conceptual aspects, and practical implications and future directions are explored.
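The headline analysis here is the Aligned Rank Transform (ART). As a rough orientation, the sketch below shows the simpler rank-then-ANOVA idea for a single between-group factor; full ART additionally "aligns" the data for each effect before ranking, which is what licenses interaction tests in mixed designs. Group sizes and improvement scores are invented.

```python
# Simplified rank-based logic behind the analysis: rank all pre-to-post
# improvement scores jointly, then run a standard one-way ANOVA on the
# ranks across the three training groups. Data are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
improvement = {                       # post minus pre ERA score per group
    "multimodal": rng.normal(0.8, 0.5, 23),
    "micro":      rng.normal(0.3, 0.5, 22),
    "control":    rng.normal(0.0, 0.5, 22),
}
all_scores = np.concatenate(list(improvement.values()))
ranks = stats.rankdata(all_scores)
splits = np.cumsum([len(v) for v in improvement.values()])[:-1]
rank_groups = np.split(ranks, splits)

f, p = stats.f_oneway(*rank_groups)   # parametric ANOVA on the ranks
print(f"F = {f:.2f}, p = {p:.3f}")
```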
Collapse
Affiliation(s)
- Lillian Döllinger
- Department of Psychology, Faculty of Social Sciences, Stockholm University, Stockholm, Sweden
| | - Petri Laukka
- Department of Psychology, Faculty of Social Sciences, Stockholm University, Stockholm, Sweden
| | - Lennart Björn Högman
- Department of Psychology, Faculty of Social Sciences, Stockholm University, Stockholm, Sweden
| | - Tanja Bänziger
- Department of Psychology and Social Work, Mid Sweden University, Sundsvall, Sweden
| | | | - Håkan Fischer
- Department of Psychology, Faculty of Social Sciences, Stockholm University, Stockholm, Sweden
| | - Stephan Hau
- Department of Psychology, Faculty of Social Sciences, Stockholm University, Stockholm, Sweden
| |
Collapse
|
28
|
Caballero JA, Mauchand M, Jiang X, Pell MD. Cortical processing of speaker politeness: Tracking the dynamic effects of voice tone and politeness markers. Soc Neurosci 2021; 16:423-438. [PMID: 34102955 DOI: 10.1080/17470919.2021.1938667] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
Information in the tone of voice alters social impressions and underlying brain activity as listeners evaluate the interpersonal relevance of utterances. Here, we presented requests that expressed politeness distinctions through the voice (polite/rude) and explicit linguistic markers (half of the requests began with Please). Thirty participants performed a social perception task (rating friendliness) while their electroencephalogram was recorded. Behaviorally, vocal politeness strategies had a much stronger influence on the perceived friendliness than the linguistic marker. Event-related potentials revealed rapid effects of (im)polite voices on cortical activity prior to ~300 ms; P200 amplitudes increased for polite versus rude voices, suggesting that the speaker's polite stance was registered as more salient in our task. At later stages, politeness distinctions encoded by the speaker's voice and their use of Please interacted, modulating activity in the N400 (300-500 ms) and late positivity (600-800 ms) time windows. Patterns of results suggest that initial attention deployment to politeness cues is rapidly influenced by the motivational significance of a speaker's voice. At later stages, processes for integrating vocal and lexical information resulted in increased cognitive effort to reevaluate utterances with ambiguous/contradictory cues. The potential influence of social anxiety on the P200 effect is also discussed.
Collapse
Affiliation(s)
- Jonathan A Caballero
- School of Communication Sciences and Disorders, 2001 McGill College, McGill University, Montréal, Québec, Canada
| | - Maël Mauchand
- School of Communication Sciences and Disorders, 2001 McGill College, McGill University, Montréal, Québec, Canada
| | - Xiaoming Jiang
- Shanghai International Studies University, Institute of Linguistics (IoL), Shanghai, China
| | - Marc D Pell
- School of Communication Sciences and Disorders, 2001 McGill College, McGill University, Montréal, Québec, Canada
| |
Collapse
|
29
|
Duville MM, Alonso-Valerdi LM, Ibarra-Zarate DI. Electroencephalographic Correlate of Mexican Spanish Emotional Speech Processing in Autism Spectrum Disorder: To a Social Story and Robot-Based Intervention. Front Hum Neurosci 2021; 15:626146. [PMID: 33716696 PMCID: PMC7952538 DOI: 10.3389/fnhum.2021.626146] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2020] [Accepted: 02/08/2021] [Indexed: 12/04/2022] Open
Abstract
Socio-emotional impairments are key symptoms of Autism Spectrum Disorders. This work proposes to analyze the neural activity related to the discrimination of emotional prosodies in autistic children (aged 9 to 11 years) as follows. First, a database of single words uttered in Mexican Spanish by males, females, and children will be created. Optimal acoustic features for emotion characterization will then be extracted, followed by a cubic kernel function Support Vector Machine (SVM) used to validate the speech corpus. As a result, human-specific acoustic properties of emotional voice signals will be identified. Second, those identified acoustic properties will be modified to synthesize the recorded human emotional voices. Third, both human and synthesized utterances will be used to study the electroencephalographic correlate of affective prosody processing in typically developing and autistic children. Finally, on the basis of the outcomes, synthesized voice-enhanced environments will be created to develop an intervention based on a social robot and Social Story™ for autistic children to improve the discrimination of affective prosodies. This protocol has been registered at BioMed Central under the number ISRCTN18117434.
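A minimal sketch of the corpus-validation step named in the protocol: a degree-3 (cubic) polynomial-kernel SVM classifying emotion categories from acoustic features. The feature matrix, number of classes, and cross-validation setup are illustrative assumptions, not the protocol's specification.

```python
# Cubic-kernel SVM validating an emotional speech corpus. The random
# feature matrix stands in for prosodic/spectral descriptors extracted
# from the recorded utterances; labels and class count are assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 40))        # 300 utterances x 40 acoustic features
y = rng.integers(0, 4, size=300)      # four emotion categories (assumed)

clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3))
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean CV accuracy: {scores.mean():.2f}")  # ~chance on random data
```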
Collapse
Affiliation(s)
- Mathilde Marie Duville
- Neuroengineering and Neuroacoustics Research Group, Tecnologico de Monterrey, Escuela de Ingeniería y Ciencias, Monterrey, Mexico
| | - Luz Maria Alonso-Valerdi
- Neuroengineering and Neuroacoustics Research Group, Tecnologico de Monterrey, Escuela de Ingeniería y Ciencias, Monterrey, Mexico
| | - David I Ibarra-Zarate
- Neuroengineering and Neuroacoustics Research Group, Tecnologico de Monterrey, Escuela de Ingeniería y Ciencias, Monterrey, Mexico
| |
Collapse
|
30
|
Acoustic salience in emotional voice perception and its relationship with hallucination proneness. Cogn Affect Behav Neurosci 2021; 21:412-425. [DOI: 10.3758/s13415-021-00864-2] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 12/23/2020] [Indexed: 01/01/2023]
|
31
|
Do infants represent human actions cross-modally? An ERP visual-auditory priming study. Biol Psychol 2021; 160:108047. [PMID: 33596461 DOI: 10.1016/j.biopsycho.2021.108047] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2020] [Revised: 01/15/2021] [Accepted: 02/08/2021] [Indexed: 12/27/2022]
Abstract
Recent findings indicate that 7-month-old infants perceive and represent the sounds inherent to moving human bodies. However, it is not known whether infants integrate auditory and visual information into representations of specific human actions. To address this issue, we used ERPs to investigate infants' neural sensitivity to the correspondence between sounds and images of human actions. In a cross-modal priming paradigm, 7-month-olds were presented with the sounds generated by two types of human body movement, walking and handclapping, after watching the kinematics of those actions in either a congruent or incongruent manner. ERPs recorded from frontal, central, and parietal electrodes in response to action sounds indicate that 7-month-old infants perceptually link the visual and auditory cues of human actions. However, at this age these percepts do not yet seem to be integrated into cognitive multimodal representations of human actions.
Collapse
|
32
|
Pinheiro AP, Schwartze M, Kotz SA. Cerebellar circuitry and auditory verbal hallucinations: An integrative synthesis and perspective. Neurosci Biobehav Rev 2020; 118:485-503. [DOI: 10.1016/j.neubiorev.2020.08.004] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2020] [Revised: 06/30/2020] [Accepted: 08/07/2020] [Indexed: 02/06/2023]
|
33
|
The Relationship of Symmetry, Complexity, and Shape in Mobile Interface Aesthetics, from an Emotional Perspective—A Case Study of the Smartwatch. Symmetry (Basel) 2020. [DOI: 10.3390/sym12091403] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Products with interactive interfaces are now ubiquitous, and the aesthetics of product interface design has begun to receive wide attention. Consumers' perceptions of product interfaces arise from their own emotions, and emotion plays a significant role in interface design aesthetics; in other words, an interface must meet users' emotional and aesthetic requirements. We therefore need to better understand aesthetic design criteria and how they stimulate specific emotional responses. This study takes the dial interface of smartwatches as its experimental sample and explores how the interaction of screen shape (square vs. round) with the symmetry type and complexity type of the interface design influences users' emotional arousal and valence. In addition, it analyzes the individual effects of symmetry type, complexity type, and screen shape on users' arousal and valence. The results show that these attributes of interface design aesthetics (symmetry-asymmetry, complexity-simplicity, and square-round) affect users' emotional responses, and that interface shape is one of the important factors in the emotional response to an interface design. Building on previous research, this paper provides theoretical support for the literature on interface design aesthetics and users' emotional states, and may serve as a reference for designers and developers who wish to implement emotional user interfaces that more effectively appeal to users' emotions.
Collapse
|
34
|
Paquette S, Rigoulot S, Grunewald K, Lehmann A. Temporal decoding of vocal and musical emotions: Same code, different timecourse? Brain Res 2020; 1741:146887. [PMID: 32422128 DOI: 10.1016/j.brainres.2020.146887] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2020] [Revised: 04/22/2020] [Accepted: 05/12/2020] [Indexed: 11/24/2022]
Abstract
From a baby's cry to a piece of music, we perceive emotions from our auditory environment every day. Many theories bring forward the concept of common neural substrates for the perception of vocal and musical emotions. It has been proposed that, for us to perceive emotions, music recruits emotional circuits that evolved for the processing of biologically relevant vocalizations (e.g., screams, laughs). Although some studies have found similarities between voice and instrumental music in terms of acoustic cues and neural correlates, little is known about their processing timecourse. To further understand how vocal and instrumental emotional sounds are perceived, we used EEG to compare the neural processing timecourse of both stimulus types expressed at varying degrees of complexity (vocal/musical affect bursts and emotion-embedded speech/music). Vocal stimuli in general, as well as vocal and musical bursts, were associated with a more concise sensory trace at initial stages of analysis (smaller N1), although vocal bursts had shorter latencies than musical ones. As for the P2, vocal affect bursts and emotion-embedded musical stimuli were associated with earlier responses. These results support the idea that emotional vocal stimuli are differentiated early from other sources and provide insight into the common neurobiological underpinnings of auditory emotions.
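As an illustration of how component latencies of the kind compared here are typically read off an average ERP, this sketch finds the N1 and P2 peaks within component-typical search windows on a synthetic waveform; the windows and data are assumptions, not the study's parameters.

```python
# Peak-latency extraction for N1 (negative) and P2 (positive) components
# from a synthetic average ERP; search windows are typical, not the study's.
import numpy as np

fs = 250
t = np.arange(-0.1, 0.6, 1 / fs)
# Toy waveform: a negative bump near 100 ms and a positive bump near 200 ms.
erp = -2 * np.exp(-((t - 0.1) / 0.03) ** 2) + 3 * np.exp(-((t - 0.2) / 0.04) ** 2)

def peak_latency(erp, t, t_min, t_max, polarity):
    """Latency (s) of the most negative/positive point within [t_min, t_max]."""
    win = (t >= t_min) & (t <= t_max)
    idx = np.argmin(erp[win]) if polarity == "neg" else np.argmax(erp[win])
    return t[win][idx]

print("N1 latency:", peak_latency(erp, t, 0.07, 0.15, "neg"))  # ~0.10 s
print("P2 latency:", peak_latency(erp, t, 0.15, 0.28, "pos"))  # ~0.20 s
```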
Collapse
Affiliation(s)
- S Paquette
- Department of Otolaryngology - Head and Neck Surgery, McGill University, Montreal, Canada; Center for Research on Brain, Language, and Music, McGill University, Montreal, Canada; International Laboratory for Brain, Music, and Sound Research, Université de Montréal, Montreal, Canada.
| | - S Rigoulot
- Center for Research on Brain, Language, and Music, McGill University, Montreal, Canada; Department of Psychology, Université du Québec à Trois-Rivières, Trois-Rivières, Canada; International Laboratory for Brain, Music, and Sound Research, Université de Montréal, Montreal, Canada
| | - K Grunewald
- Center for Research on Brain, Language, and Music, McGill University, Montreal, Canada; International Laboratory for Brain, Music, and Sound Research, Université de Montréal, Montreal, Canada
| | - A Lehmann
- Department of Otolaryngology - Head and Neck Surgery, McGill University, Montreal, Canada; Center for Research on Brain, Language, and Music, McGill University, Montreal, Canada; International Laboratory for Brain, Music, and Sound Research, Université de Montréal, Montreal, Canada
| |
Collapse
|
35
|
Vergis N, Jiang X, Pell MD. Neural responses to interpersonal requests: Effects of imposition and vocally-expressed stance. Brain Res 2020; 1740:146855. [DOI: 10.1016/j.brainres.2020.146855] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2019] [Revised: 04/02/2020] [Accepted: 04/23/2020] [Indexed: 02/07/2023]
|
36
|
Rigoulot S, Jiang X, Vergis N, Pell MD. Neurophysiological correlates of sexually evocative speech. Biol Psychol 2020; 154:107909. [DOI: 10.1016/j.biopsycho.2020.107909] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2019] [Revised: 05/14/2020] [Accepted: 05/20/2020] [Indexed: 12/11/2022]
|
37
|
Steber S, König N, Stephan F, Rossi S. Uncovering electrophysiological and vascular signatures of implicit emotional prosody. Sci Rep 2020; 10:5807. [PMID: 32242032 PMCID: PMC7118077 DOI: 10.1038/s41598-020-62761-x] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2019] [Accepted: 03/18/2020] [Indexed: 11/13/2022] Open
Abstract
The capability of differentiating between various emotional states in speech is a crucial prerequisite for successful social interactions. The aim of the present study was to investigate the neural processes underlying this ability by applying a simultaneous neuroscientific approach to obtain both electrophysiological (via electroencephalography, EEG) and vascular (via functional near-infrared spectroscopy, fNIRS) responses. Pseudowords spoken with angry, happy, and neutral prosody were presented acoustically to participants in a passive listening paradigm in order to capture implicit mechanisms of emotional prosody processing. Event-related brain potentials (ERPs) revealed a larger P200 and an increased late positive potential (LPP) for happy prosody, as well as larger negativities for angry and neutral prosody compared to happy prosody around 500 ms. fNIRS results showed increased activations for angry prosody at right fronto-temporal areas. A correlation between the negativity in the EEG and the activation in fNIRS for angry prosody suggests analogous underlying processes resembling a negativity bias. Overall, the results indicate that mechanisms of emotional and phonological encoding (P200), emotional evaluation (increased negativities), and emotional arousal and relevance (LPP) are present during implicit processing of emotional prosody.
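A minimal sketch of the kind of EEG-fNIRS link reported above: across participants, correlate the ERP negativity for angry prosody with the fNIRS activation at right fronto-temporal channels. The sample size and all values are simulated placeholders.

```python
# Across-participant correlation between an ERP measure (mean amplitude of
# the ~500 ms negativity, in µV) and an fNIRS activation estimate (e.g.,
# an HbO beta weight). All numbers are simulated, not the study's data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n = 30                                         # hypothetical sample size
erp_negativity = rng.normal(-1.5, 0.6, n)
fnirs_activation = 0.4 * erp_negativity + rng.normal(0, 0.5, n)

r, p = pearsonr(erp_negativity, fnirs_activation)
print(f"r = {r:.2f}, p = {p:.3f}")
```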
Collapse
Affiliation(s)
- Sarah Steber
- ICONE - Innsbruck Cognitive Neuroscience, Department for Hearing, Speech, and Voice Disorders, Medical University of Innsbruck, 6020, Innsbruck, Austria
- Department of Psychology, University of Innsbruck, 6020, Innsbruck, Austria
| | - Nicola König
- ICONE - Innsbruck Cognitive Neuroscience, Department for Hearing, Speech, and Voice Disorders, Medical University of Innsbruck, 6020, Innsbruck, Austria
- Department of Psychology, University of Innsbruck, 6020, Innsbruck, Austria
| | - Franziska Stephan
- ICONE - Innsbruck Cognitive Neuroscience, Department for Hearing, Speech, and Voice Disorders, Medical University of Innsbruck, 6020, Innsbruck, Austria
- Department of Educational Psychology, Faculty of Education, University of Leipzig, 04109, Leipzig, Germany
| | - Sonja Rossi
- ICONE - Innsbruck Cognitive Neuroscience, Department for Hearing, Speech, and Voice Disorders, Medical University of Innsbruck, 6020, Innsbruck, Austria.
| |
Collapse
|
38
|
Proverbio AM, Santoni S, Adorni R. ERP Markers of Valence Coding in Emotional Speech Processing. iScience 2020; 23:100933. [PMID: 32151976 PMCID: PMC7063241 DOI: 10.1016/j.isci.2020.100933] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2019] [Revised: 12/20/2019] [Accepted: 02/19/2020] [Indexed: 11/01/2022] Open
Abstract
How is auditory emotional information processed? The study's aim was to compare cerebral responses to emotionally positive and negative spoken phrases matched for structure and content. Twenty participants listened to 198 vocal stimuli while detecting filler phrases containing first names. EEG was recorded from 128 sites. Three event-related potential (ERP) components were quantified and found to be sensitive to emotional valence from 350 ms of latency onward. The P450 and late positivity were enhanced by positive content, whereas an anterior negativity was larger for negative content. A similar set of markers (P300, N400, LP) was found previously for the processing of positive versus negative affective vocalizations, prosody, and music, which suggests a common neural mechanism for extracting the emotional content of auditory information. swLORETA applied to potentials recorded between 350 and 550 ms showed that negative speech activated right temporo-parietal areas (BA40, BA20/21), whereas positive speech activated the left homologous and inferior frontal areas.
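For concreteness, a small sketch of window-based ERP quantification of the kind used here: mean amplitude per condition in the 350-550 ms window, averaged over a channel cluster. Channel indices, shapes, and data are placeholders, not the study's montage or recordings.

```python
# Mean amplitude per condition in a 350-550 ms window over an assumed
# anterior channel cluster; 128-channel shapes with placeholder data.
import numpy as np

fs = 512
t = np.arange(-0.1, 0.9, 1 / fs)
# evoked[condition] : (n_channels, n_samples) grand-average data
evoked = {"positive": np.random.randn(128, t.size),
          "negative": np.random.randn(128, t.size)}

win = np.flatnonzero((t >= 0.35) & (t <= 0.55))
cluster = [10, 11, 12, 45, 46]        # hypothetical anterior cluster indices
for cond, data in evoked.items():
    amp = data[np.ix_(cluster, win)].mean()
    print(f"{cond}: mean 350-550 ms amplitude = {amp:.2f} µV")
```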
Collapse
Affiliation(s)
- Alice Mado Proverbio
- Milan Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Milan, Italy.
| | - Sacha Santoni
- Milan Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Milan, Italy
| | - Roberta Adorni
- Milan Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Milan, Italy
| |
Collapse
|
39
|
Neurophysiological Differences in Emotional Processing by Cochlear Implant Users, Extending Beyond the Realm of Speech. Ear Hear 2020; 40:1197-1209. [PMID: 30762600 DOI: 10.1097/aud.0000000000000701] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
Abstract
OBJECTIVE: Cochlear implants (CIs) restore a sense of hearing in deaf individuals. However, they do not transmit the acoustic signal with sufficient fidelity, leading to difficulties in recognizing emotions in voice and in music. The study aimed to explore the neurophysiological bases of these limitations. DESIGN: Twenty-two adults (18 to 70 years old) with CIs and 22 age-matched controls with normal hearing participated. Event-related potentials (ERPs) were recorded in response to emotional bursts (happy, sad, or neutral) produced in each modality (voice or music) that were for the most part correctly identified behaviorally. RESULTS: Compared to controls, the N1 and P2 components were attenuated and prolonged in CI users. To a smaller degree, N1 and P2 were also attenuated and prolonged in music compared to voice, in both populations. The N1-P2 complex was emotion-dependent (e.g., reduced and prolonged response to sadness), but this was also true in both populations. In contrast, the later portion of the response, between 600 and 850 ms, differentiated happy and sad from neutral stimuli in normal-hearing but not in CI listeners. CONCLUSIONS: The early portion of the ERP waveform reflected primarily the general reduction in sensory encoding by CI users (largely due to CI processing itself), whereas altered emotional processing by CI users could be found in the later portion of the ERP and extended beyond the realm of speech.
Collapse
|
40
|
Event-related potential and behavioural differences in affective self-referential processing in long-term meditators versus controls. Cogn Affect Behav Neurosci 2020; 20:326-339. [DOI: 10.3758/s13415-020-00771-y] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
|
41
|
Lin SY, Lee CC, Chen YS, Kuo LW. Investigation of functional brain network reconfiguration during vocal emotional processing using graph-theoretical analysis. Soc Cogn Affect Neurosci 2020; 14:529-538. [PMID: 31157395 PMCID: PMC6545541 DOI: 10.1093/scan/nsz025] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2018] [Revised: 03/11/2019] [Accepted: 04/02/2019] [Indexed: 12/12/2022] Open
Abstract
Vocal expression is essential for conveying emotion during social interaction. Although vocal emotion has been explored in previous studies, little is known about how the perception of different vocal emotional expressions modulates functional brain network topology. In this study, we aimed to investigate the functional brain networks under different attributes of vocal emotion using graph-theoretical network analysis. Functional magnetic resonance imaging (fMRI) experiments were performed on 36 healthy participants. We utilized the Power-264 functional brain atlas to calculate the interregional functional connectivity (FC) from fMRI data during resting state and under vocal stimuli at different arousal and valence levels. The orthogonal minimal spanning trees method was used for topological filtering. Paired-sample t-tests with Bonferroni correction across all regions and arousal-valence levels were used for statistical comparisons. Our results show that the brain network exhibits significantly altered attributes at the FC, nodal, and global levels, especially under high-arousal or negative-valence vocal emotional stimuli. Alterations within and between well-known large-scale functional networks were also investigated. Through the present study, we have gained more insight into how comprehending emotional speech modulates brain networks. These findings may shed light on how the human brain processes emotional speech and how it distinguishes different emotional conditions.
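An illustrative sketch of the graph pipeline described above: build a correlation FC matrix from ROI time series, keep a spanning-tree backbone (a single maximum spanning tree, a simplification of the orthogonal-MST filtering used in the paper), and compute a global metric with networkx. The time series are random placeholders.

```python
# Correlation-based FC matrix over the 264 Power-atlas ROIs, filtered to a
# maximum spanning tree backbone (simplified vs. the paper's orthogonal
# MSTs), then a global graph metric. ROI time series are random data.
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)
ts = rng.normal(size=(264, 200))      # 264 ROIs x 200 volumes (placeholder)
fc = np.corrcoef(ts)                  # functional connectivity matrix
np.fill_diagonal(fc, 0)

G = nx.from_numpy_array(np.abs(fc))   # weighted graph on |r| values
backbone = nx.maximum_spanning_tree(G, weight="weight")

print(backbone.number_of_edges())     # n - 1 = 263 edges
print(nx.global_efficiency(backbone)) # unweighted global efficiency
```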
Collapse
Affiliation(s)
- Shih-Yen Lin
- Institute of Biomedical Engineering and Nanomedicine, National Health Research Institutes, Miaoli, Taiwan; Department of Computer Science, National Chiao Tung University, Hsinchu, Taiwan
| | - Chi-Chun Lee
- Department of Electrical Engineering, National Tsing Hua University, Hsinchu, Taiwan
| | - Yong-Sheng Chen
- Department of Computer Science, National Chiao Tung University, Hsinchu, Taiwan
| | - Li-Wei Kuo
- Institute of Biomedical Engineering and Nanomedicine, National Health Research Institutes, Miaoli, Taiwan; Institute of Medical Device and Imaging, National Taiwan University College of Medicine, Taipei, Taiwan
| |
Collapse
|
42
|
Proverbio AM, Benedetto F, Guazzone M. Shared neural mechanisms for processing emotions in music and vocalizations. Eur J Neurosci 2019; 51:1987-2007. [DOI: 10.1111/ejn.14650] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2019] [Revised: 11/21/2019] [Accepted: 12/05/2019] [Indexed: 12/21/2022]
Affiliation(s)
- Alice Mado Proverbio
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Milan Center for Neuroscience, Milan, Italy
| | - Francesco Benedetto
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Milan Center for Neuroscience, Milan, Italy
| | - Martina Guazzone
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Milan Center for Neuroscience, Milan, Italy
| |
Collapse
|
43
|
How Therapeutic Tapping Can Alter Neural Correlates of Emotional Prosody Processing in Anxiety. Brain Sci 2019; 9:brainsci9080206. [PMID: 31430984 PMCID: PMC6721443 DOI: 10.3390/brainsci9080206] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2019] [Revised: 08/09/2019] [Accepted: 08/12/2019] [Indexed: 11/17/2022] Open
Abstract
Anxiety disorders are the most common psychological disorders worldwide, resulting in great demand for adequate and cost-effective treatment. New short-term interventions can be used as an effective adjunct or alternative to pharmaco- and psychotherapy. One of these approaches is therapeutic tapping, which combines somatic stimulation of acupressure points with elements of Cognitive Behavioral Therapy (CBT). Tapping reduces anxiety symptoms after only one session. Anxiety is associated with deficient emotion regulation for threatening stimuli, deficits that are compensated for, e.g., by CBT. Whether Tapping can elicit similar modulations, and which dynamic neural correlates are affected, was the subject of this study. Anxiety patients were assessed while listening to pseudowords with different emotional prosody (happy, angry, fearful, and neutral) before and after one Tapping session. The emotion-related component, the Late Positive Potential (LPP), was investigated via electroencephalography. Progressive Muscle Relaxation (PMR) served as the control intervention. Results showed LPP reductions for negative stimuli after the interventions. Interestingly, PMR influenced responses to fearful prosody, whereas Tapping altered responses to angry prosody. While PMR generally reduced arousal for fearful prosody, Tapping specifically affected fear-eliciting, angry stimuli and might thus be able to reduce anxiety symptoms. The findings highlight the efficacy of Tapping and its impact on neural correlates of emotion regulation.
Collapse
|
44
|
Jiang X, Gossack-Keenan K, Pell MD. To believe or not to believe? How voice and accent information in speech alter listener impressions of trust. Q J Exp Psychol (Hove) 2019; 73:55-79. [DOI: 10.1177/1747021819865833] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
Abstract
Our decision to believe what another person says can be influenced by vocally expressed confidence in speech and by whether the speaker and listener are members of the same social group. The dynamic effects of these two information sources on the neurocognitive processes that promote believability impressions from vocal cues are unclear. Here, English Canadian listeners were presented with personal statements (She has access to the building) produced in a confident or doubtful voice by speakers of their own dialect (in-group) or speakers from two different "out-groups" (regional or foreign-accented English). Participants rated how believable the speaker was for each statement, and event-related potentials (ERPs) were analysed from utterance onset. Believability decisions were modulated by both the speaker's vocal confidence level and their perceived in-group status. For in-group speakers, ERP effects revealed an early differentiation of vocally expressed confidence (i.e., N100, P200), highlighting the motivational significance of doubtful voices for drawing believability inferences. These early effects on vocal confidence perception were qualitatively different or absent when speakers had an accent; evaluating out-group voices was associated with increased demands on contextual integration and re-analysis of a non-native representation of believability (i.e., increased N400, late negativity response). Accent intelligibility and experience with particular out-group accents each influenced how vocal confidence was processed for out-group speakers. The N100 amplitude was sensitive to out-group attitudes and predicted actual believability decisions for certain out-group speakers. We propose a neurocognitive model in which vocal identity information (social categorization) dynamically influences how vocal expressions are decoded and used to derive social inferences during person perception.
Collapse
Affiliation(s)
- Xiaoming Jiang
- School of Communication Sciences and Disorders, McGill University, Montréal, Québec, Canada
- Department of Psychology, Tongji University, Shanghai, China
| | - Kira Gossack-Keenan
- School of Communication Sciences and Disorders, McGill University, Montréal, Québec, Canada
| | - Marc D Pell
- School of Communication Sciences and Disorders, McGill University, Montréal, Québec, Canada
| |
Collapse
|
45
|
Emotional prosody Stroop effect in Hindi: An event related potential study. Prog Brain Res 2019. [PMID: 31196434 DOI: 10.1016/bs.pbr.2019.04.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register]
Abstract
Prosody processing is an important aspect of language comprehension. Previous research on emotional word-prosody conflict has shown that participants perform worse when emotional prosody and word meaning are incongruent. Event-related potential studies have shown a congruency effect in the N400 component. There has been no study of emotional processing in Hindi in the context of conflict between emotional word meaning and prosody. We used happy and angry words spoken with happy and angry prosody. Participants had to identify whether a word had a happy or angry meaning. The results showed a congruency effect, with worse performance on incongruent trials, indicating an emotional Stroop effect in Hindi. The ERP results showed that prosody information is detected very early, as seen in the N1 component. In addition, there was a congruency effect in the N400. The results show that prosody is processed very early and that an emotional meaning-prosody congruency effect is obtained in Hindi. Further studies are needed to investigate similarities and differences in the cognitive control associated with language processing.
Collapse
|
46
|
Tian Y, Li L, Yin H, Huang X. Gender Differences in the Effect of Facial Attractiveness on Perception of Time. Front Psychol 2019; 10:1292. [PMID: 31231284 PMCID: PMC6558225 DOI: 10.3389/fpsyg.2019.01292] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2018] [Accepted: 05/16/2019] [Indexed: 11/17/2022] Open
Abstract
Time perception plays a fundamental role in human social activities, and it can be influenced in social situations by various factors, including facial attractiveness. However, the attractiveness of a face varies in the eyes of observers of different genders. The current study aimed to explore whether gender modulates the effect of facial attractiveness on time perception. To account for individual differences in esthetic standards, the critical stimuli presented to each participant were selected from an image pool based on the participant's own attractiveness judgments. In Experiment 1, men and women performed a stimulus selection task followed by a temporal reproduction task to measure their time perception for faces of different attractiveness levels and genders. To control for the potential influence of task order, Experiment 2 flipped the order of the selection and temporal tasks. Taken together, the experiments showed that both men and women exhibited longer reproduced durations for attractive opposite-sex faces than for unattractive opposite-sex faces; in the same-sex condition, women still exhibited longer reproduced durations for attractive faces than for unattractive faces, whereas the effect of facial attractiveness on time perception among men tended to be smaller or even failed to reach significance. These results suggest that gender differences play an important role in the effect of facial attractiveness on time perception.
Collapse
Affiliation(s)
- Yu Tian
- School of Psychology, Southwest University, Chongqing, China; Key Research Base of Humanities and Social Sciences, Southwest University, Chongqing, China
| | - Lingjing Li
- The Experimental Middle School Attached to Yunnan Normal University, Kunming, China
| | - Huazhan Yin
- Cognition and Human Behavior Key Laboratory of Hunan Province, Hunan Normal University, Changsha, China
| | - Xiting Huang
- School of Psychology, Southwest University, Chongqing, China; Key Research Base of Humanities and Social Sciences, Southwest University, Chongqing, China
| |
Collapse
|
47
|
Emotionality of Turkish language and primary adaptation of affective English norms for Turkish. CURRENT PSYCHOLOGY 2019. [DOI: 10.1007/s12144-018-0119-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
48
|
Pavlov YG, Kotchoubey B. Classical conditioning in oddball paradigm: A comparison between aversive and name conditioning. Psychophysiology 2019; 56:e13370. [DOI: 10.1111/psyp.13370] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2018] [Revised: 02/24/2019] [Accepted: 03/02/2019] [Indexed: 12/12/2022]
Affiliation(s)
- Yuri G. Pavlov
- Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Tübingen, Germany
- Department of Psychology, Ural Federal University, Ekaterinburg, Russian Federation
| | - Boris Kotchoubey
- Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Tübingen, Germany
| |
Collapse
|
49
|
Paulmann S, Weinstein N, Zougkou K. Now listen to this! Evidence from a cross-spliced experimental design contrasting pressuring and supportive communications. Neuropsychologia 2019; 124:192-201. [DOI: 10.1016/j.neuropsychologia.2018.12.011] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2018] [Revised: 12/13/2018] [Accepted: 12/14/2018] [Indexed: 02/04/2023]
|
50
|
Burra N, Kerzel D, Munoz Tord D, Grandjean D, Ceravolo L. Early spatial attention deployment toward and away from aggressive voices. Soc Cogn Affect Neurosci 2019; 14:73-80. [PMID: 30418635 PMCID: PMC6318470 DOI: 10.1093/scan/nsy100] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2018] [Accepted: 11/07/2018] [Indexed: 01/29/2023] Open
Abstract
Salient vocalizations, especially aggressive voices, are believed to attract attention due to an automatic threat detection system. However, studies assessing the temporal dynamics of auditory spatial attention to aggressive voices have been lacking. Using event-related potential markers of auditory spatial attention (N2ac and LPCpc), we show that attentional processing of threatening vocal signals is enhanced at two different stages of auditory processing. As early as 200 ms post-stimulus onset, attentional orienting/engagement is enhanced for threatening as compared to happy vocal signals. Subsequently, from around 400 ms post-stimulus onset, the reorienting of auditory attention to the center of the screen (or disengagement from the target) is enhanced. This latter effect is consistent with the need to optimize perception by balancing the intake of stimulation from the left and right auditory space. Our results extend the scope of theories from the visual to the auditory modality by showing that threatening stimuli also bias early spatial attention in audition. Attentional enhancement was present only in female, not male, participants.
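A hedged sketch of how lateralized components such as the N2ac are derived: average the contralateral-minus-ipsilateral signal at a symmetric electrode pair, relative to the side of the target voice, then measure a post-stimulus window. All data below are simulated and the window is only indicative of the ~200 ms effect described above.

```python
# Contra-minus-ipsi difference wave for a lateralized component (N2ac-style),
# computed from a symmetric left/right electrode pair; simulated data.
import numpy as np

fs = 500
t = np.arange(-0.1, 0.8, 1 / fs)
n_trials = 100
rng = np.random.default_rng(5)

# Single-trial data at the left/right electrode of a symmetric pair.
left_ch = rng.normal(size=(n_trials, t.size))
right_ch = rng.normal(size=(n_trials, t.size))
target_side = rng.choice(["left", "right"], n_trials)

is_left = target_side[:, None] == "left"
contra = np.where(is_left, right_ch, left_ch)   # opposite the target side
ipsi = np.where(is_left, left_ch, right_ch)     # same side as the target
n2ac_wave = (contra - ipsi).mean(axis=0)

win = (t >= 0.2) & (t <= 0.3)                   # indicative 200-300 ms window
print(f"N2ac mean amplitude: {n2ac_wave[win].mean():.2f} µV")
```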
Collapse
Affiliation(s)
- Nicolas Burra
- Faculté de Psychologie et des Sciences de l'Education, University of Geneva, Geneva, Switzerland
| | - Dirk Kerzel
- Faculté de Psychologie et des Sciences de l'Education, University of Geneva, Geneva, Switzerland
| | - David Munoz Tord
- Faculté de Psychologie et des Sciences de l'Education, University of Geneva, Geneva, Switzerland
| | - Didier Grandjean
- Faculté de Psychologie et des Sciences de l'Education, University of Geneva, Geneva, Switzerland.,Neuroscience of Emotion and Affective Dynamics Lab, University of Geneva, Geneva, Switzerland.,Swiss Center for Affective Sciences, University of Geneva, Geneva, Swizerland
| | - Leonardo Ceravolo
- Faculté de Psychologie et des Sciences de l'Education, University of Geneva, Geneva, Switzerland.,Neuroscience of Emotion and Affective Dynamics Lab, University of Geneva, Geneva, Switzerland.,Swiss Center for Affective Sciences, University of Geneva, Geneva, Swizerland
| |
Collapse
|