1. Nussbaum C, Schirmer A, Schweinberger SR. Musicality - Tuned to the melody of vocal emotions. Br J Psychol 2024; 115:206-225. [PMID: 37851369] [DOI: 10.1111/bjop.12684]
Abstract
Musicians outperform non-musicians in vocal emotion perception, likely because of increased sensitivity to acoustic cues, such as fundamental frequency (F0) and timbre. Yet, how musicians make use of these acoustic cues to perceive emotions, and how they might differ from non-musicians, is unclear. To address these points, we created vocal stimuli that conveyed happiness, fear, pleasure or sadness, either in all acoustic cues, or selectively in either F0 or timbre only. We then compared vocal emotion perception performance between professional/semi-professional musicians (N = 39) and non-musicians (N = 38), all socialized in Western music culture. Compared to non-musicians, musicians classified vocal emotions more accurately. This advantage was seen in the full and F0-modulated conditions, but was absent in the timbre-modulated condition, indicating that musicians excel at perceiving the melody (F0), but not the timbre, of vocal emotions. Further, F0 seemed more important than timbre for the recognition of all emotional categories. Additional exploratory analyses revealed a link between time-varying F0 perception in music and voices that was independent of musical training. Together, these findings suggest that musicians are particularly tuned to the melody of vocal emotions, presumably due to a natural predisposition to exploit melodic patterns.
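The F0-only stimuli described above were produced with parameter-specific voice morphing using dedicated software; as a rough illustration of the underlying idea (changing the melody of a voice while leaving other cues untouched), the sketch below shifts only the F0 contour of a recording via the parselmouth interface to Praat. The file name and shift factor are assumptions, and a uniform shift is far simpler than the emotion-specific morphing used in the study.

```python
import parselmouth
from parselmouth.praat import call

def shift_f0(sound, factor):
    """Resynthesize a sound with its F0 contour multiplied by `factor`,
    leaving duration and spectral envelope (timbre) largely untouched."""
    manipulation = call(sound, "To Manipulation", 0.01, 75, 600)
    pitch_tier = call(manipulation, "Extract pitch tier")
    call(pitch_tier, "Multiply frequencies", sound.xmin, sound.xmax, factor)
    call([pitch_tier, manipulation], "Replace pitch tier")
    return call(manipulation, "Get resynthesis (overlap-add)")

snd = parselmouth.Sound("speaker_utterance.wav")  # hypothetical recording
shift_f0(snd, 1.2).save("utterance_f0_up.wav", "WAV")
```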
Affiliation(s)
- Christine Nussbaum: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, Jena, Germany; Voice Research Unit, Friedrich Schiller University, Jena, Germany
- Annett Schirmer: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, Jena, Germany; Institute of Psychology, University of Innsbruck, Innsbruck, Austria
- Stefan R Schweinberger: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, Jena, Germany; Voice Research Unit, Friedrich Schiller University, Jena, Germany; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
2. Nussbaum C, Schirmer A, Schweinberger SR. Electrophysiological Correlates of Vocal Emotional Processing in Musicians and Non-Musicians. Brain Sci 2023; 13:1563. [PMID: 38002523] [PMCID: PMC10670383] [DOI: 10.3390/brainsci13111563]
Abstract
Musicians outperform non-musicians in vocal emotion recognition, but the underlying mechanisms are still debated. Behavioral measures highlight the importance of auditory sensitivity towards emotional voice cues. However, it remains unclear whether and how this group difference is reflected at the brain level. Here, we compared event-related potentials (ERPs) to acoustically manipulated voices between musicians (n = 39) and non-musicians (n = 39). We used parameter-specific voice morphing to create and present vocal stimuli that conveyed happiness, fear, pleasure, or sadness, either in all acoustic cues or selectively in either pitch contour (F0) or timbre. Although the fronto-central P200 (150-250 ms) and N400 (300-500 ms) components were modulated by pitch and timbre, differences between musicians and non-musicians appeared only for a centro-parietal late positive potential (500-1000 ms). Thus, this study does not support an early auditory specialization in musicians but suggests instead that musicality affects the manner in which listeners use acoustic voice cues during later, controlled aspects of emotion evaluation.
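For readers who want to see how component amplitudes in such analyses are typically quantified, the sketch below computes mean voltages in the latency windows reported above (P200: 150-250 ms; N400: 300-500 ms; LPP: 500-1000 ms). It is a minimal illustration with synthetic data, not the authors' pipeline; the sampling rate, epoch limits, and single-channel layout are assumptions.

```python
import numpy as np

# Assumed setup: 40 epochs from one channel, sampled at 500 Hz,
# spanning -200 to 1200 ms around voice onset (synthetic data here).
fs, t_min = 500, -0.2
rng = np.random.default_rng(1)
epochs = rng.normal(size=(40, 700))  # (n_trials, n_samples)

def mean_amplitude(epochs, fs, t_min, window):
    """Mean voltage in a latency window (in s), averaged across trials."""
    start = int(round((window[0] - t_min) * fs))
    stop = int(round((window[1] - t_min) * fs))
    return epochs[:, start:stop].mean()

# Windows follow the component definitions quoted in the abstract.
for name, window in [("P200", (0.150, 0.250)),
                     ("N400", (0.300, 0.500)),
                     ("LPP", (0.500, 1.000))]:
    print(f"{name}: {mean_amplitude(epochs, fs, t_min, window):+.3f} (a.u.)")
```

Group differences would then be tested on such per-condition means, for example with musicianship as a between-subjects factor.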
Affiliation(s)
- Christine Nussbaum: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, 07743 Jena, Germany; Voice Research Unit, Friedrich Schiller University, 07743 Jena, Germany
- Annett Schirmer: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, 07743 Jena, Germany; Institute of Psychology, University of Innsbruck, 6020 Innsbruck, Austria
- Stefan R. Schweinberger: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, 07743 Jena, Germany; Voice Research Unit, Friedrich Schiller University, 07743 Jena, Germany; Swiss Center for Affective Sciences, University of Geneva, 1202 Geneva, Switzerland
3. Duville MM, Alonso-Valerdi LM, Ibarra-Zarate DI. Neuronal and behavioral affective perceptions of human and naturalness-reduced emotional prosodies. Front Comput Neurosci 2022; 16:1022787. [DOI: 10.3389/fncom.2022.1022787]
Abstract
Artificial voices are nowadays embedded in our daily lives, with the latest neural voices approaching human voice consistency (naturalness). Nevertheless, the behavioral and neuronal correlates of the perception of less naturalistic emotional prosodies remain poorly understood. In this study, we explored the acoustic tendencies that define naturalness from human to synthesized voices. Then, we created naturalness-reduced emotional utterances by acoustically editing human voices. Finally, we used Event-Related Potentials (ERPs) to assess the time dynamics of emotional integration when listening to both human and synthesized voices in a healthy adult sample. Additionally, listeners rated their perceptions of valence, arousal, discrete emotions, naturalness, and intelligibility. Synthesized voices were characterized by less lexical stress (i.e., a reduced difference between stressed and unstressed syllables within words) in terms of duration and median pitch modulations. In addition, spectral content was attenuated toward lower F2 and F3 frequencies, with lower intensities for harmonics 1 and 4. Both psychometric and neuronal correlates were sensitive to naturalness reduction. (1) Naturalness and intelligibility ratings dropped with the synthesis of emotional utterances; (2) discrete emotion recognition was impaired as naturalness declined, consistent with the P200 and Late Positive Potentials (LPP) being less sensitive to emotional differentiation at lower naturalness; and (3) relative P200 and LPP amplitudes between prosodies were modulated by synthesis. Nevertheless, (4) valence and arousal perceptions were preserved at lower naturalness; (5) valence (arousal) ratings correlated negatively (positively) with Higuchi's fractal dimension extracted from the neuronal data under all naturalness perturbations; and (6) Inter-Trial Phase Coherence (ITPC) and standard deviation measurements revealed high inter-individual heterogeneity in emotion perception that was preserved as naturalness decreased. Notably, partial between-participant synchrony (low ITPC), along with high amplitude dispersion in ERPs at both early and late stages, emphasized heterogeneous emotional responses among subjects. In this study, we highlight for the first time both the behavioral and neuronal bases of emotional perception under acoustic naturalness alterations. Partial dependencies between ecological relevance and emotion understanding indicate that synthesis modulates, but does not abolish, emotional integration.
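Higuchi's fractal dimension, used in finding (5) above, is straightforward to compute. The sketch below is a minimal implementation of the standard Higuchi (1988) algorithm on a single signal; the k_max value and the synthetic inputs are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def higuchi_fd(x, k_max=10):
    """Higuchi's fractal dimension of a 1-D signal.

    For each scale k, the curve length L(k) is averaged over the k
    possible starting offsets; the fractal dimension is the slope of
    log(L(k)) against log(1/k).
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    mean_lengths = []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)   # subsampled curve x[m::k]
            n_int = idx.size - 1       # number of intervals
            dist = np.abs(np.diff(x[idx])).sum()
            # Normalize by interval count and scale, per Higuchi (1988).
            lengths.append(dist * (n - 1) / (n_int * k) / k)
        mean_lengths.append(np.mean(lengths))
    ks = np.arange(1, k_max + 1)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(mean_lengths), 1)
    return slope

# Sanity checks: white noise comes out near 2, a smooth ramp near 1.
rng = np.random.default_rng(0)
print(higuchi_fd(rng.normal(size=1000)))    # ~2
print(higuchi_fd(np.linspace(0, 1, 1000)))  # ~1
```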
4. Martins I, Lima CF, Pinheiro AP. Enhanced salience of musical sounds in singers and instrumentalists. Cogn Affect Behav Neurosci 2022; 22:1044-1062. [PMID: 35501427] [DOI: 10.3758/s13415-022-01007-x]
Abstract
Music training has been linked to facilitated processing of emotional sounds. However, most studies have focused on speech, and less is known about musicians' brain responses to other emotional sounds and in relation to instrument-specific experience. The current study combined behavioral and EEG methods to address two novel questions related to the perception of auditory emotional cues: whether and how long-term music training relates to a distinct emotional processing of nonverbal vocalizations and music; and whether distinct training profiles (vocal vs. instrumental) modulate brain responses to emotional sounds from early to late processing stages. Fifty-eight participants completed an EEG implicit emotional processing task, in which musical and vocal sounds differing in valence were presented as nontarget stimuli. After this task, participants explicitly evaluated the same sounds regarding the emotion being expressed, their valence, and arousal. Compared with nonmusicians, musicians displayed enhanced salience detection (P2), attention orienting (P3), and elaborative processing (Late Positive Potential) of musical (vs. vocal) sounds in event-related potential (ERP) data. The explicit evaluation of musical sounds also was distinct in musicians: accuracy in the emotional recognition of musical sounds was similar across valence types in musicians, who also judged musical sounds to be more pleasant and more arousing than nonmusicians. Specific profiles of music training (singers vs. instrumentalists) did not relate to differences in the processing of vocal vs. musical sounds. Together, these findings reveal that music has a privileged status in the auditory system of long-term musically trained listeners, irrespective of their instrument-specific experience.
Affiliation(s)
- Inês Martins: CICPSI, Faculdade de Psicologia, Universidade de Lisboa, 1649-013 Lisbon, Portugal
- César F Lima: Instituto Universitário de Lisboa (ISCTE-IUL), Lisbon, Portugal
- Ana P Pinheiro: CICPSI, Faculdade de Psicologia, Universidade de Lisboa, 1649-013 Lisbon, Portugal
5. Maltezou-Papastylianou C, Russo R, Wallace D, Harmsworth C, Paulmann S. Different stages of emotional prosody processing in healthy ageing - evidence from behavioural responses, ERPs, tDCS, and tRNS. PLoS One 2022; 17:e0270934. [PMID: 35862317] [PMCID: PMC9302842] [DOI: 10.1371/journal.pone.0270934]
Abstract
Past research suggests that the ability to recognise the emotional intent of a speaker decreases as a function of age. Yet, few studies have looked at the underlying cause of this effect in a systematic way. This paper builds on the view that emotional prosody perception is a multi-stage process and explores which step of the recognition process is impaired in healthy ageing, using time-sensitive event-related brain potentials (ERPs). Results suggest that early processes linked to salience detection, as reflected in the P200 component, and the initial build-up of emotional representation, as linked to a subsequent negative ERP component, are largely unaffected in healthy ageing. The two groups show, however, differences in emotional prosody recognition: older participants recognise the emotional intentions of speakers less well than younger participants do. These findings were followed up by two neuro-stimulation studies specifically targeting the inferior frontal cortex to test whether recognition improves during active stimulation relative to sham. Overall, results suggest that neither tDCS nor high-frequency tRNS stimulation at 2 mA for 30 minutes facilitates emotional prosody recognition rates in healthy older adults.
Affiliation(s)
- Riccardo Russo: Department of Psychology and Centre for Brain Science, University of Essex, Colchester, United Kingdom; Department of Brain and Behavioural Sciences, Università di Pavia, Pavia, Italy
- Denise Wallace: Department of Psychology and Centre for Brain Science, University of Essex, Colchester, United Kingdom
- Chelsea Harmsworth: Department of Psychology and Centre for Brain Science, University of Essex, Colchester, United Kingdom
- Silke Paulmann: Department of Psychology and Centre for Brain Science, University of Essex, Colchester, United Kingdom
6. Sung YW, Kiyama S, Choi US, Ogawa S. Involvement of the intrinsic functional network of the red nucleus in complex behavioral processing. Cereb Cortex Commun 2022; 3:tgac037. [PMID: 36159204] [PMCID: PMC9491841] [DOI: 10.1093/texcom/tgac037]
Abstract
Previous studies have suggested that the red nucleus (RN) is involved in cognitive functions beyond motor control per se, even though such functions have yet to be clarified. We investigated the activation of the RN during several tasks and its intrinsic functional network associated with social cognition and musical practice. The tasks included finger tapping, n-back, and memory recall tasks. A region of interest for the RN was identified through those tasks, anatomical information about the RN, and a brain atlas. The intrinsic functional network of the RN was identified by analyzing connectivity between the RN and regions typically involved in seven known resting-state functional networks, with the RN used as the seed region. Associating the RN network with a psychological trait, the interpersonal reactivity index, and with years of musical training revealed subnetworks that included empathy-related regions or music-practice-related regions. These social and highly coordinated motor activities represent the most complex functions yet known to involve the RN, adding further evidence for its multifunctional roles. These discoveries may open new directions of investigation to clarify probable novel roles for the RN in high-level human behavior.
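As a rough illustration of the seed-based analysis described here, the sketch below correlates a seed time series (a stand-in for the RN-mask average) with every voxel time course and converts the map to Fisher z for group statistics. All names, dimensions, and the synthetic data are assumptions for illustration; the study's actual pipeline (preprocessing, atlas masking, network assignment) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
n_vols, n_voxels = 200, 5000        # assumed scan length and mask size
voxels = rng.normal(size=(n_vols, n_voxels))
seed = voxels[:, :10].mean(axis=1)  # stand-in for the RN seed average

def seed_connectivity(seed, voxels):
    """Pearson correlation of the seed with every voxel time series."""
    s = (seed - seed.mean()) / seed.std()
    v = (voxels - voxels.mean(axis=0)) / voxels.std(axis=0)
    return (s @ v) / len(s)         # shape: (n_voxels,)

r_map = seed_connectivity(seed, voxels)
z_map = np.arctanh(r_map)           # Fisher z, for second-level tests
print(z_map.shape, z_map[:3])
```

Subnetworks would then be obtained by thresholding such maps within each canonical resting-state network and relating them to trait scores.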
Affiliation(s)
- Yul-Wan Sung: Kansei Fukushi Research Institute, Tohoku Fukushi University, Sendai, Miyagi 9893201, Japan
- Sachiko Kiyama: Department of Linguistics, Tohoku University, Sendai, Miyagi 9800862, Japan
- Uk-Su Choi: Medical Device Development Center, Daegu-Gyeongbuk Medical Innovation Foundation, Daegu 41061, Republic of Korea
- Seiji Ogawa: Kansei Fukushi Research Institute, Tohoku Fukushi University, Sendai, Miyagi 9893201, Japan
7. Smit EA, Milne AJ, Escudero P. Music Perception Abilities and Ambiguous Word Learning: Is There Cross-Domain Transfer in Nonmusicians? Front Psychol 2022; 13:801263. [PMID: 35401340] [PMCID: PMC8984940] [DOI: 10.3389/fpsyg.2022.801263]
Abstract
Perception of music and speech is based on similar auditory skills, and it is often suggested that those with enhanced music perception skills may perceive and learn novel words more easily. The current study tested whether music perception abilities are associated with novel word learning in an ambiguous learning scenario. Using a cross-situational word learning (CSWL) task, nonmusician adults were exposed to word-object pairings between eight novel words and visual referents. Novel words were either non-minimal pairs differing in all sounds or minimal pairs differing in their initial consonant or vowel. In order to be successful in this task, learners need to be able to correctly encode the phonological details of the novel words and have sufficient auditory working memory to remember the correct word-object pairings. Using the Mistuning Perception Test (MPT) and the Melodic Discrimination Test (MDT), we measured learners’ pitch perception and auditory working memory. We predicted that those with higher MPT and MDT values would perform better in the CSWL task and in particular for novel words with high phonological overlap (i.e., minimal pairs). We found that higher musical perception skills led to higher accuracy for non-minimal pairs and minimal pairs differing in their initial consonant. Interestingly, this was not the case for vowel minimal pairs. We discuss the results in relation to theories of second language word learning such as the Second Language Perception model (L2LP).
Affiliation(s)
- Eline A. Smit: The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia; ARC Centre of Excellence for the Dynamics of Language, Canberra, ACT, Australia
- Andrew J. Milne: The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
- Paola Escudero: The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia; ARC Centre of Excellence for the Dynamics of Language, Canberra, ACT, Australia
8. Elmer S, Valizadeh SA, Cunillera T, Rodriguez-Fornells A. Statistical learning and prosodic bootstrapping differentially affect neural synchronization during speech segmentation. Neuroimage 2021; 235:118051. [PMID: 33848624] [DOI: 10.1016/j.neuroimage.2021.118051]
Abstract
Neural oscillations constitute an intrinsic property of functional brain organization that facilitates the tracking of linguistic units at multiple time scales through brain-to-stimulus alignment. This ubiquitous neural principle has been shown to facilitate speech segmentation and word learning based on statistical regularities. However, there is no common agreement yet on whether speech segmentation is mediated by a transition of neural synchronization from syllable to word rate, or whether the two time scales are concurrently tracked. Furthermore, it is currently unknown whether syllable transition probability contributes to speech segmentation when lexical stress cues can be directly used to extract word forms. Using Inter-Trial Coherence (ITC) analyses in combination with Event-Related Potentials (ERPs), we showed that speech segmentation based on both statistical regularities and lexical stress cues was accompanied by concurrent neural synchronization to syllables and words. In particular, ITC at the word rate was generally higher in structured compared to random sequences, and this effect was particularly pronounced in the flat condition. Furthermore, ITC at the syllable rate dynamically increased across the blocks of the flat condition, whereas a similar modulation was not observed in the stressed condition. Notably, in the flat condition ITC at both time scales correlated with each other, and changes in neural synchronization were accompanied by a rapid reconfiguration of the P200 and N400 components, with a close relationship between ITC and ERPs. These results highlight distinct computational principles governing neural synchronization to pertinent linguistic units while segmenting speech under different listening conditions.
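Inter-trial coherence as used here measures how consistently the phase at a given rate aligns across trials: ITC = |mean across trials of exp(i*phase)|, ranging from 0 (random phase) to 1 (perfect alignment). The sketch below computes narrow-band ITC with a Hilbert transform; the sampling rate, the ~1.1 Hz "word rate", and the synthetic trials are assumptions for illustration, not the study's parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs, n_trials = 250, 60                 # assumed sampling rate and trial count
t = np.arange(0, 4, 1 / fs)            # 4 s trials
rng = np.random.default_rng(3)
# Synthetic EEG: a phase-consistent 1.1 Hz "word-rate" rhythm buried in noise.
trials = np.sin(2 * np.pi * 1.1 * t) + rng.normal(0, 2, (n_trials, t.size))

def itc(trials, fs, f_lo, f_hi):
    """Inter-trial coherence in a narrow band, per time point."""
    sos = butter(4, [f_lo, f_hi], btype="band", fs=fs, output="sos")
    phase = np.angle(hilbert(sosfiltfilt(sos, trials, axis=-1), axis=-1))
    return np.abs(np.exp(1j * phase).mean(axis=0))

word_itc = itc(trials, fs, 0.8, 1.4)
print(f"mean word-rate ITC: {word_itc.mean():.2f}")  # well above 0 here
```

Comparing such ITC values between structured and random sequences, separately at the syllable and word rates, is the logic behind the analyses reported above.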
Affiliation(s)
- Stefan Elmer: Auditory Research Group Zurich (ARGZ), Division Neuropsychology, Institute of Psychology, University of Zurich, Binzmühlestrasse 14/25, Zurich 8050, Switzerland; Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute, L'Hospitalet de Llobregat, Barcelona 08097, Spain
- Seyed Abolfazl Valizadeh: Auditory Research Group Zurich (ARGZ), Division Neuropsychology, Institute of Psychology, University of Zurich, Binzmühlestrasse 14/25, Zurich 8050, Switzerland; Department of Internal Medicine, University Hospital, University of Zurich, Zurich 8091, Switzerland; University Research Priority Program "Dynamics of Healthy Aging", University of Zurich, Zurich 8050, Switzerland
- Toni Cunillera: Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona 08035, Spain
- Antoni Rodriguez-Fornells: Department of Cognition, Development and Educational Psychology, Campus Bellvitge, University of Barcelona, L'Hospitalet de Llobregat, Barcelona 08097, Spain; Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute, L'Hospitalet de Llobregat, Barcelona 08097, Spain; Institució Catalana de Recerca i Estudis Avançats, ICREA, Barcelona 08010, Spain
9. Di Mauro M, Toffalini E, Grassi M, Petrini K. Effect of Long-Term Music Training on Emotion Perception From Drumming Improvisation. Front Psychol 2018; 9:2168. [PMID: 30473677] [PMCID: PMC6237981] [DOI: 10.3389/fpsyg.2018.02168]
Abstract
Long-term music training has been shown to affect different cognitive and perceptual abilities. However, it is less well known whether it can also affect the perception of emotion from music, especially purely rhythmic music. Hence, we asked a group of 16 non-musicians, 16 musicians with no drumming experience, and 16 drummers to judge the level of expressiveness, the valence (positive and negative), and the category of emotion perceived from 96 drumming improvisation clips (audio-only, video-only, and audio-video) that varied in several music features (e.g., musical genre, tempo, complexity, drummer's expressiveness, and drummer's style). Our results show that the level and type of music training influence the perceived expressiveness, valence, and emotion from solo drumming improvisation. Overall, non-musicians, non-drummer musicians, and drummers were affected differently by changes in some characteristics of the music performance; for example, musicians (with and without drumming experience) gave greater weight to the visual performance than non-musicians when making their emotional judgments. These findings suggest that besides influencing several cognitive and perceptual abilities, music training also affects how we perceive emotion from music.
Affiliation(s)
- Martina Di Mauro: Department of General Psychology, University of Padua, Padua, Italy
- Enrico Toffalini: Department of General Psychology, University of Padua, Padua, Italy
- Massimo Grassi: Department of General Psychology, University of Padua, Padua, Italy
- Karin Petrini: Department of Psychology, University of Bath, Bath, United Kingdom
10. Conde T, Gonçalves ÓF, Pinheiro AP. Stimulus complexity matters when you hear your own voice: Attention effects on self-generated voice processing. Int J Psychophysiol 2018; 133:66-78. [PMID: 30114437] [DOI: 10.1016/j.ijpsycho.2018.08.007]
Abstract
The ability to discriminate self- and non-self voice cues is a fundamental aspect of self-awareness and subserves self-monitoring during verbal communication. Nonetheless, the neurofunctional underpinnings of self-voice perception and recognition are still poorly understood. Moreover, how attention and stimulus complexity influence the processing and recognition of one's own voice remains to be clarified. Using an oddball task, the current study investigated how self-relevance and stimulus type interact during selective attention to voices, and how they affect the representation of regularity during voice perception. Event-related potentials (ERPs) were recorded from 18 right-handed males. Pre-recorded self-generated (SGV) and non-self (NSV) voices, consisting of a nonverbal vocalization (vocalization condition) or disyllabic word (word condition), were presented as either standard or target stimuli in different experimental blocks. The results showed increased N2 amplitude to SGV relative to NSV stimuli. Stimulus type modulated later processing stages only: P3 amplitude was increased for SGV relative to NSV words, whereas no differences between SGV and NSV were observed in the case of vocalizations. Moreover, SGV standards elicited reduced N1 and P2 amplitude relative to NSV standards. These findings revealed that the self-voice grabs more attention when listeners are exposed to words but not vocalizations. Further, they indicate that detection of regularity in an auditory stream is facilitated for one's own voice at early processing stages. Together, they demonstrate that self-relevance affects attention to voices differently as a function of stimulus type.
Affiliation(s)
- Tatiana Conde: Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal; Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Óscar F Gonçalves: Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal; Spaulding Center of Neuromodulation, Department of Physical Medicine & Rehabilitation, Spaulding Rehabilitation Hospital & Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Bouvé College of Health Sciences, Northeastern University, Boston, MA, USA
- Ana P Pinheiro: Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal; Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal; Cognitive Neuroscience Lab, Department of Psychiatry, Harvard Medical School, Boston, MA, USA
11. Garrido-Vásquez P, Pell MD, Paulmann S, Kotz SA. Dynamic Facial Expressions Prime the Processing of Emotional Prosody. Front Hum Neurosci 2018; 12:244. [PMID: 29946247] [PMCID: PMC6007283] [DOI: 10.3389/fnhum.2018.00244]
Abstract
Evidence suggests that emotion is represented supramodally in the human brain. Emotional facial expressions, which often precede vocally expressed emotion in real life, can modulate event-related potentials (N100 and P200) during emotional prosody processing. To investigate these cross-modal emotional interactions, two lines of research have been put forward: cross-modal integration and cross-modal priming. In cross-modal integration studies, visual and auditory channels are temporally aligned, while in priming studies they are presented consecutively. Here we used cross-modal emotional priming to study the interaction of dynamic visual and auditory emotional information. Specifically, we presented dynamic facial expressions (angry, happy, neutral) as primes and emotionally intoned pseudo-speech sentences (angry, happy) as targets. We were interested in how prime-target congruency would affect early auditory event-related potentials, i.e., N100 and P200, in order to shed more light on how dynamic facial information is used in cross-modal emotional prediction. Results showed enhanced N100 amplitudes for incongruently primed compared to congruently and neutrally primed emotional prosody, while the latter two conditions did not significantly differ. However, N100 peak latency was significantly delayed in the neutral condition compared to the other two conditions. Source reconstruction revealed that the right parahippocampal gyrus was activated in incongruent compared to congruent trials in the N100 time window. No significant ERP effects were observed in the P200 range. Our results indicate that dynamic facial expressions influence vocal emotion processing at an early point in time, and that an emotional mismatch between a facial expression and its ensuing vocal emotional signal induces additional processing costs in the brain, potentially because the cross-modal emotional prediction mechanism is violated in the case of emotional prime-target incongruency.
Affiliation(s)
- Patricia Garrido-Vásquez: Department of Experimental Psychology and Cognitive Science, Justus Liebig University Giessen, Giessen, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Marc D Pell: School of Communication Sciences and Disorders, McGill University, Montreal, QC, Canada
- Silke Paulmann: Department of Psychology, University of Essex, Colchester, United Kingdom
- Sonja A Kotz: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Neuropsychology and Psychopharmacology, University of Maastricht, Maastricht, Netherlands
12. Minho Affective Sentences (MAS): Probing the roles of sex, mood, and empathy in affective ratings of verbal stimuli. Behav Res Methods 2017; 49:698-716. [PMID: 27004484] [DOI: 10.3758/s13428-016-0726-0]
Abstract
During social communication, words and sentences play a critical role in the expression of emotional meaning. The Minho Affective Sentences (MAS) were developed to respond to the lack of a standardized sentence battery with normative affective ratings: 192 neutral, positive, and negative declarative sentences were strictly controlled for psycholinguistic variables such as numbers of words and letters and per-million word frequency. The sentences were designed to represent examples of each of the five basic emotions (anger, sadness, disgust, fear, and happiness) and of neutral situations. These sentences were presented to 536 participants who rated the stimuli using both dimensional and categorical measures of emotions. Sex differences were also explored. Additionally, we probed how personality, empathy, and mood from a subset of 40 participants modulated the affective ratings. Our results confirmed that the MAS affective norms are valid measures to guide the selection of stimuli for experimental studies of emotion. The combination of dimensional and categorical ratings provided a more fine-grained characterization of the affective properties of the sentences. Moreover, the affective ratings of positive and negative sentences were not only modulated by participants' sex, but also by individual differences in empathy and mood state. Together, our results indicate that, in their quest to reveal the neurofunctional underpinnings of verbal emotional processing, researchers should consider not only the role of sex, but also of interindividual differences in empathy and mood states, in responses to the emotional meaning of sentences.
13. Ding R, Li P, Wang W, Luo W. Emotion Processing by ERP Combined with Development and Plasticity. Neural Plast 2017; 2017:5282670. [PMID: 28831313] [PMCID: PMC5555003] [DOI: 10.1155/2017/5282670]
Abstract
Emotions, which are important for survival and social interaction, have been investigated widely and deeply. The application of fMRI to emotion processing has yielded substantial achievements in localizing emotion processes. The ERP method, which offers high temporal resolution compared to fMRI, can be employed to investigate the time course of emotion processing. The emotional modulation of ERP components has been verified across numerous studies. Emotions develop dynamically with age and can be enhanced through learning (or training) or impaired by disturbances during development, which is underlain by the neural plasticity of emotion-relevant nervous systems. Mood disorders, with their typical symptoms of emotional discordance, may in turn be caused by dysfunctional neural plasticity.
Affiliation(s)
- Rui Ding: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China
- Ping Li: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China
- Wei Wang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China
- Wenbo Luo: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Laboratory of Cognition and Mental Health, Chongqing University of Arts and Sciences, Chongqing 402160, China
14. Heffner CC, Slevc LR. Prosodic Structure as a Parallel to Musical Structure. Front Psychol 2015; 6:1962. [PMID: 26733930] [PMCID: PMC4687474] [DOI: 10.3389/fpsyg.2015.01962]
Abstract
What structural properties do language and music share? Although early speculation identified a wide variety of possibilities, the literature has largely focused on the parallels between musical structure and syntactic structure. Here, we argue that parallels between musical structure and prosodic structure deserve more attention. We review the evidence for a link between musical and prosodic structure and find it to be strong. In fact, certain elements of prosodic structure may provide a parsimonious comparison with musical structure without sacrificing empirical findings related to the parallels between language and music. We then develop several predictions related to such a hypothesis.
Affiliation(s)
- Christopher C. Heffner: Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, USA; Department of Linguistics, University of Maryland, College Park, MD, USA; Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, USA
- L. Robert Slevc: Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, USA; Department of Psychology, University of Maryland, College Park, MD, USA