1. Kao C, Zhang Y. Detecting Emotional Prosody in Real Words: Electrophysiological Evidence From a Modified Multifeature Oddball Paradigm. J Speech Lang Hear Res. 2023;66:2988-2998. PMID: 37379567. DOI: 10.1044/2023_jslhr-22-00652.
Abstract
PURPOSE Emotional voice conveys important social cues that demand listeners' attention and timely processing. This event-related potential study investigated the feasibility of a multifeature oddball paradigm to examine adult listeners' neural responses to detecting emotional prosody changes in nonrepeating naturally spoken words. METHOD Thirty-three adult listeners completed the experiment by passively listening to the words in neutral and three alternating emotions while watching a silent movie. Previous research documented preattentive change-detection electrophysiological responses (e.g., mismatch negativity [MMN], P3a) to emotions carried by fixed syllables or words. Given that the MMN and P3a have also been shown to reflect extraction of abstract regularities over repetitive acoustic patterns, this study employed a multifeature oddball paradigm to compare listeners' MMN and P3a to emotional prosody change from neutral to angry, happy, and sad emotions delivered with hundreds of nonrepeating words in a single recording session. RESULTS Both MMN and P3a were successfully elicited by the emotional prosodic change over the varying linguistic context. Angry prosody elicited the strongest MMN compared with happy and sad prosodies. Happy prosody elicited the strongest P3a in the centro-frontal electrodes, and angry prosody elicited the smallest P3a. CONCLUSIONS The results demonstrated that listeners were able to extract the acoustic patterns for each emotional prosody category over constantly changing spoken words. The findings confirm the feasibility of the multifeature oddball paradigm in investigating emotional speech processing beyond simple acoustic change detection, which may potentially be applied to pediatric and clinical populations.
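As a rough illustration of the multifeature design described above, the sketch below generates a trial sequence in which neutral standards are interleaved with angry, happy, and sad deviants and no word token repeats. The word list, deviant probability, and emotion-cycling rule are assumptions made for illustration, not the authors' stimulus script.

```python
import random

# Hypothetical nonrepeating word tokens; the study used hundreds of recorded words.
neutral_pool = [f"word_{i:03d}" for i in range(500)]
emotions = ["angry", "happy", "sad"]

def build_multifeature_oddball(n_trials=400, deviant_prob=0.25, seed=1):
    """Return (word, prosody) trials: neutral standards interleaved with
    angry/happy/sad deviants, each word token used at most once."""
    rng = random.Random(seed)
    pool = neutral_pool[:]
    rng.shuffle(pool)
    trials, cycle = [], 0
    for _ in range(n_trials):
        word = pool.pop()                      # nonrepeating word token
        if rng.random() < deviant_prob:
            trials.append((word, emotions[cycle % len(emotions)]))  # deviant trial
            cycle += 1
        else:
            trials.append((word, "neutral"))   # standard trial
    return trials

print(build_multifeature_oddball()[:8])
```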
Affiliation(s)
- Chieh Kao
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Twin Cities
- Center for Cognitive Sciences, University of Minnesota, Twin Cities
- Yang Zhang
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Twin Cities
- Masonic Institute for the Developing Brain, University of Minnesota, Twin Cities
2. Interaction effects of the 5-HTT and MAOA-uVNTR gene variants on pre-attentive EEG activity in response to threatening voices. Commun Biol. 2022;5:340. PMID: 35396540. PMCID: PMC8993814. DOI: 10.1038/s42003-022-03297-w.
Abstract
Both the serotonin transporter polymorphism (5-HTTLPR) and the monoamine oxidase A gene (MAOA-uVNTR) are considered genetic contributors to anxiety-related symptomatology and aggressive behavior. Nevertheless, an interaction between these genes and the pre-attentive processing of threatening voices, a biological marker for anxiety-related conditions, has not yet been assessed. Among the full sample of participants with valid genotyping and electroencephalographic (EEG) data (N = 140), we show that men with low-activity MAOA-uVNTR variants who were not homozygous for the 5-HTTLPR short (s) allele (n = 11) had significantly larger fearful MMN amplitudes, driven by significantly larger ERPs to fearful stimuli, than men with high-activity MAOA-uVNTR variants (n = 20). This contrasts with previous studies, in which significantly reduced fearful MMN amplitudes, driven by increased ERPs to neutral stimuli, were observed in those homozygous for the 5-HTT s-allele. In conclusion, using genetic, neurophysiological, and behavioral measurements, this study illustrates how the intricate interaction between the 5-HTT and MAOA-uVNTR variants affects threat processing and social cognition in male individuals (n = 62).
3. Toufan R, Aghamolaei M, Ashayeri H. Differential effects of gender on mismatch negativity to violations of simple and pattern acoustic regularities. Brain Behav. 2021;11:e2248. PMID: 34124855. PMCID: PMC8413778. DOI: 10.1002/brb3.2248.
Abstract
INTRODUCTION The effects of gender on the mismatch negativity (MMN) potential have been studied using simple frequency deviants. However, the effects of gender on MMN to violations of abstract regularities have not yet been studied. Here, we addressed this issue and compared the effects of gender on simple and pattern frequency MMNs. METHODS The MMN response was recorded from 29 healthy young adults, 14 females (mean age = 26.20 ± 2.17 years) and 15 males (mean age = 27.57 ± 2.24 years), using 32 scalp electrodes during simple and pattern frequency oddball paradigms. The mean amplitude, peak latency, and scalp topography of the MMN evoked by each paradigm were compared between the two genders. RESULTS The peak latency of the simple MMN was significantly longer in females (p < .05); however, its mean amplitude and topography were similar between the two genders (p > .05). There were no significant differences in the peak latency, mean amplitude, or scalp topography of the pattern MMN between the two genders (p > .05). CONCLUSIONS Gender thus affects simple and pattern MMN differently. These findings may provide preliminary evidence for distinct effects of gender on various types of MMN.
Affiliation(s)
- Reyhane Toufan
- Department of Audiology, Faculty of Rehabilitation Sciences, Iran University of Medical Sciences (IUMS), Tehran, Iran
- Maryam Aghamolaei
- Department of Audiology, School of Rehabilitation, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Hasan Ashayeri
- Department of Basic Sciences in Rehabilitation, School of Rehabilitation Sciences, Iran University of Medical Sciences (IUMS), Tehran, Iran
4. An integrative analysis of 5HTT-mediated mechanism of hyperactivity to non-threatening voices. Commun Biol. 2020;3:113. PMID: 32157156. PMCID: PMC7064530. DOI: 10.1038/s42003-020-0850-3.
Abstract
The tonic model delineating the serotonin transporter polymorphism's (5-HTTLPR) modulatory effect on anxiety points toward a universal underlying mechanism involving a hyper- or elevated baseline level of arousal even to non-threatening stimuli. However, to our knowledge, this mechanism has never been observed in non-clinical cohorts exhibiting high anxiety. Moreover, empirical support for this association is mixed, potentially because of publication bias and relatively small sample sizes. Hence, how the 5-HTTLPR modulates neural correlates remains controversial. Here we show that 5-HTTLPR short-allele carriers had significantly increased baseline ERPs and reduced fearful MMN, phenomena which can nevertheless be reversed by acute anxiolytic treatment. This provides evidence that the 5-HTT affects the automatic processing of threatening and non-threatening voices, impacts broadly on social cognition, and identifies a heightened baseline arousal level as the universal underlying neural mechanism for anxiety-related susceptibilities, functioning as a spectrum-like distribution from non-patients with high trait anxiety to anxiety patients. Chen et al. apply a multi-level approach to show that serotonin signaling modulates neuronal responses to both threatening and non-threatening voices even at the pre-attentive stage. They show that 5-HTTLPR short-allele carriers had higher baseline event-related potentials and lower fearful mismatch negativity, which could be reversed by acute anxiolytic treatment.
5. 新生儿情绪性语音加工的正性偏向——来自事件相关电位的证据 [Positivity bias in neonatal processing of emotional voices: evidence from event-related potentials]. Acta Psychologica Sinica. 2019. DOI: 10.3724/sp.j.1041.2019.00462.
6. Yang X, Wang Q, Qiao Z, Qiu X, Han D, Zhu X, Zhang C, Yang Y. Dysfunction of Pre-Attentive Visual Information Processing in Drug-Naïve Women, But Not Men, During the Initial Episode of Major Depressive Disorder. Front Psychiatry. 2019;10:899. PMID: 31969836. PMCID: PMC6960197. DOI: 10.3389/fpsyt.2019.00899.
Abstract
Women are twice as likely as men to develop depression, yet few studies have explored gender differences in the cognitive function of patients with major depressive disorder (MDD), and gender differences in pre-attentive information processing in MDD remain poorly understood. To examine gender differences in change detection, 30 medication-free, first-episode MDD patients (15 women) and 30 age- and education-matched controls (15 women) were recruited. A deviant-standard reverse oddball paradigm (50 ms/150 ms) was used to obtain the visual mismatch negativity (vMMN). Compared with men with MDD, women with MDD showed a significantly decreased increment vMMN, whereas no gender difference was found for the decrement vMMN. The increment vMMN amplitude in women with MDD was smaller than in healthy women, whereas no difference was found for the decrement vMMN. Neither increment nor decrement vMMN differed between men with MDD and healthy men. The mean amplitude of the increment vMMN was not correlated with depressive symptoms in MDD patients overall or in women with MDD. In conclusion, dysfunction of visual information processing is present at the pre-attentive stage in women with MDD.
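The deviant-standard reverse design mentioned above lets the vMMN be computed from responses to the same physical stimulus in different sequential roles, so low-level stimulus differences cancel out. The sketch below only illustrates that subtraction; the array shapes, sampling rate, and analysis window are assumptions, not the study's parameters.

```python
import numpy as np

# Placeholder per-condition average waveforms (channels x time) from two blocks:
# the same 150 ms stimulus serves as deviant in one block and as standard in the other.
rng = np.random.default_rng(0)
n_channels, n_times, sfreq = 32, 600, 1000            # 600 ms epochs at 1000 Hz (assumed)
erp_150_as_deviant = rng.standard_normal((n_channels, n_times))
erp_150_as_standard = rng.standard_normal((n_channels, n_times))

# "Identity" vMMN: same physical stimulus, different role in the sequence.
increment_vmmn = erp_150_as_deviant - erp_150_as_standard

# Mean amplitude in a hypothetical 150-250 ms window, one value per channel.
win = slice(int(0.150 * sfreq), int(0.250 * sfreq))
mean_amplitude = increment_vmmn[:, win].mean(axis=1)
print(mean_amplitude.shape)   # (32,)
```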
Affiliation(s)
- Xiuxian Yang
- Department of Medical Psychology, Public Health Institute of Harbin Medical University, Harbin, China
- Qihe Wang
- Department of Medical Psychology, Public Health Institute of Harbin Medical University, Harbin, China
- Zhengxue Qiao
- Department of Medical Psychology, Public Health Institute of Harbin Medical University, Harbin, China
- Xiaohui Qiu
- Department of Medical Psychology, Public Health Institute of Harbin Medical University, Harbin, China
- Dong Han
- Department of Medical Psychology, Public Health Institute of Harbin Medical University, Harbin, China
- Xiongzhao Zhu
- Medical Psychological Institute, Second Xiangya Hospital, Central South University, Changsha, China
- Yanjie Yang
- Department of Medical Psychology, Public Health Institute of Harbin Medical University, Harbin, China
7. Chen C, Martínez RM, Cheng Y. The Developmental Origins of the Social Brain: Empathy, Morality, and Justice. Front Psychol. 2018;9:2584. PMID: 30618998. PMCID: PMC6302010. DOI: 10.3389/fpsyg.2018.02584.
Abstract
The social brain is the cornerstone of our capacity to negotiate and navigate complex social environments and relationships. When mature, these social abilities facilitate interaction and cooperation with others. Empathy, morality, and justice, among others, are closely intertwined, yet the relationships between them are quite complex. They are fundamental components of our human nature and shape the landscape of our social lives. The various facets of empathy, including affective arousal/emotional sharing, empathic concern, and perspective taking, make unique contributions as subcomponents of morality. This review examines how basic forms of empathy, morality, and justice take shape in early ontogeny. It provides valuable information for gaining new insights into the neurobiological precursors of the social brain, enabling future translation toward therapeutic and medical interventions.
Affiliation(s)
- Chenyi Chen
- Department of Physical Medicine and Rehabilitation, National Yang-Ming University Hospital, Yilan, Taiwan; Graduate Institute of Injury Prevention and Control, College of Public Health, Taipei Medical University, Taipei, Taiwan; Research Center of Brain and Consciousness, Shuang Ho Hospital, Taipei Medical University, New Taipei City, Taiwan; Institute of Humanities in Medicine, Taipei Medical University, Taipei, Taiwan
- Róger Marcelo Martínez
- Department of Physical Medicine and Rehabilitation, National Yang-Ming University Hospital, Yilan, Taiwan; Institute of Neuroscience and Brain Research Center, National Yang-Ming University, Taipei, Taiwan
- Yawei Cheng
- Department of Physical Medicine and Rehabilitation, National Yang-Ming University Hospital, Yilan, Taiwan; Institute of Neuroscience and Brain Research Center, National Yang-Ming University, Taipei, Taiwan; Department of Education and Research, Taipei City Hospital, Taipei, Taiwan
8. Chen C, Chan CW, Cheng Y. Test-Retest Reliability of Mismatch Negativity (MMN) to Emotional Voices. Front Hum Neurosci. 2018;12:453. PMID: 30498437. PMCID: PMC6249375. DOI: 10.3389/fnhum.2018.00453.
Abstract
A voice from one's own species conveys indispensable social and affective signals that are important from both phylogenetic and ontogenetic standpoints. Beyond low-level acoustic features, emotional voices activate a processing chain that proceeds from the auditory pathway to brain structures implicated in cognition and emotion. Using a passive auditory oddball paradigm with emotional voices, this study investigated the test-retest reliability of the emotional mismatch negativity (MMN): deviants of positively (happily) and negatively (angrily) spoken syllables, compared with neutral standards, trigger MMN as a response to the automatic discrimination of emotional salience. The neurophysiological estimates of MMN to positive and negative deviants were highly reproducible, irrespective of the subjects' attentional disposition, that is, whether they watched a silent movie or performed a working memory task. In particular, a negativity bias was evident: threatening vocalizations consistently induced larger MMN amplitudes than positive ones, regardless of the day and the time of day. The present findings provide evidence that the emotional MMN offers a stable index for detecting subtle changes in current emotional shifts.
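For readers unfamiliar with the test-retest logic, the sketch below estimates reliability by correlating per-subject MMN amplitudes across two sessions. The data are simulated and the Pearson correlation shown here is only a stand-in for whatever reliability statistic the study actually reports.

```python
import numpy as np
from scipy.stats import pearsonr

# Simulated per-subject MMN mean amplitudes (µV) from two recording sessions;
# the sample size and values are illustrative assumptions.
rng = np.random.default_rng(0)
session_1 = rng.normal(loc=-2.0, scale=0.8, size=20)
session_2 = session_1 + rng.normal(loc=0.0, scale=0.3, size=20)  # noisy repeat measurement

r, p = pearsonr(session_1, session_2)
print(f"Test-retest correlation: r = {r:.2f} (p = {p:.3g})")
```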
Affiliation(s)
- Chenyi Chen
- Department of Physical Medicine and Rehabilitation, National Yang-Ming University Hospital, Yilan, Taiwan; Graduate Institute of Injury Prevention and Control, Taipei Medical University, Taipei, Taiwan; Institute of Humanities in Medicine, Taipei Medical University, Taipei, Taiwan; Research Center of Brain and Consciousness, Shuang Ho Hospital, Taipei Medical University, Taipei, Taiwan
- Chia-Wen Chan
- Graduate Institute of Injury Prevention and Control, Taipei Medical University, Taipei, Taiwan
- Yawei Cheng
- Department of Physical Medicine and Rehabilitation, National Yang-Ming University Hospital, Yilan, Taiwan; Institute of Neuroscience and Brain Research Center, National Yang-Ming University, Taipei, Taiwan; Department of Research and Education, Taipei City Hospital, Taipei, Taiwan
9. Schirmer A, McGlone F. A touching sight: EEG/ERP correlates for the vicarious processing of affectionate touch. Cortex. 2018;111:1-15. PMID: 30419352. DOI: 10.1016/j.cortex.2018.10.005.
Abstract
Observers can simulate aspects of other people's tactile experiences. We asked whether they do so when faced with full-body social interactions, whether the emerging representations go beyond basic sensorimotor mirroring, and whether they depend on processing goals and inclinations. In an EEG/ERP study, we presented line-drawn dyadic interactions with and without affectionate touch. In an explicit and an implicit task, participants categorized images into touch versus no-touch and same- versus opposite-sex interactions, respectively. Modulations of central Rolandic rhythms implied that affectionate touch displays engaged sensorimotor mechanisms. Additionally, the late positive potential (LPP) was larger for images with touch than for images without it, pointing to an involvement of higher-order socio-affective mechanisms. Task and sex modulated touch perception. Sensorimotor responding, indexed by Rolandic rhythms, was fairly independent of the task but appeared less effortful in women than in men. Touch-induced socio-affective responding, indexed by the LPP, declined from explicit to implicit processing in women and disappeared in men. In sum, this study provides first evidence that vicarious touch from full-body social interactions entails shared sensorimotor as well as socio-affective experiences. Yet, mental representations of touch at a socio-affective level are more likely when touch is goal relevant and observers are female. Together, these results outline the conditions under which touch in visual media may be usefully employed to socially engage observers.
Affiliation(s)
- Annett Schirmer
- Department of Psychology, The Chinese University of Hong Kong, Hong Kong; Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong; Center for Cognition and Brain Studies, The Chinese University of Hong Kong, Hong Kong.
- Francis McGlone
- School of Natural Sciences & Psychology, Liverpool John Moores University, UK; Institute of Psychology, Health & Society, University of Liverpool, UK
10. Chen C, Hu CH, Cheng Y. Mismatch negativity (MMN) stands at the crossroads between explicit and implicit emotional processing. Hum Brain Mapp. 2016;38:140-150. PMID: 27534834. DOI: 10.1002/hbm.23349.
Abstract
The amygdala is a key brain region involved in the explicit and implicit processing of emotional faces and plays a crucial role in salience detection. Only recently has the mismatch negativity (MMN), a component of the event-related potential to an odd stimulus in a sequence of stimuli, been used as an index of preattentive salience detection in emotional voice processing, and its relationship to amygdala reactivity remains to be delineated. This study combined fMRI scanning and event-related potential recording, examining amygdala reactivity in response to explicit and implicit (backward-masked) perception of fearful and angry faces along with the MMN in response to the fearfully and angrily spoken syllables dada in healthy subjects who varied in trait anxiety (STAI-T). Results indicated that the amplitudes of the fearful MMN were positively correlated with left amygdala reactivity to explicit perception of fear, but negatively correlated with right amygdala reactivity to implicit perception of fear. The fearful MMN predicted STAI-T along with left amygdala reactivity to explicit fear, whereas the association between the fearful MMN and STAI-T was mediated by right amygdala reactivity to implicit fear. These findings suggest that amygdala reactivity to explicit and implicit threatening faces exhibits opposite associations with the emotional MMN. In terms of emotional processing, the MMN not only reflects preattentive saliency detection but also stands at the crossroads of explicit and implicit perception.
Affiliation(s)
- Chenyi Chen
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan
- Chia-Hsuan Hu
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan
- Yawei Cheng
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan; Department of Rehabilitation, National Yang-Ming University, Yilan, Taiwan
11. Chen C, Liu CC, Weng PY, Cheng Y. Mismatch Negativity to Threatening Voices Associated with Positive Symptoms in Schizophrenia. Front Hum Neurosci. 2016;10:362. PMID: 27471459. PMCID: PMC4945630. DOI: 10.3389/fnhum.2016.00362.
Abstract
Although the general consensus holds that emotional perception is impaired in patients with schizophrenia, the extent to which neural processing of emotional voices is altered in schizophrenia remains to be determined. This study enrolled 30 patients with chronic schizophrenia and 30 controls and measured their mismatch negativity (MMN), a component of the auditory event-related potential (ERP). In a passive oddball paradigm, happily or angrily spoken deviant syllables dada were randomly presented within a train of emotionally neutral standard syllables. MMN in response to angry syllables and angry-derived non-vocal sounds was significantly decreased in individuals with schizophrenia, and the P3a to angry syllables showed stronger amplitudes but longer latencies. Weaker MMN amplitudes were associated with more positive symptoms of schizophrenia. Receiver operating characteristic analysis revealed that the angry MMN, angry-derived MMN, and angry P3a could help predict whether someone had received a clinical diagnosis of schizophrenia. The findings suggest general impairments of voice perception and acoustic discrimination in patients with chronic schizophrenia. Emotional salience processing of voices was atypical at the preattentive level and was associated with positive symptoms of schizophrenia.
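The receiver operating characteristic analysis reported above can be outlined as follows, treating the (less negative) MMN amplitude as the classification score. The simulated amplitudes below are placeholders; only the group sizes follow the abstract, and the resulting AUC is not the study's value.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Simulated angry-MMN mean amplitudes (µV): patients assumed to show weaker
# (less negative) MMN than controls, as described in the abstract.
rng = np.random.default_rng(42)
controls = rng.normal(-2.5, 0.7, size=30)
patients = rng.normal(-1.5, 0.7, size=30)

scores = np.concatenate([controls, patients])         # less negative -> more patient-like
labels = np.concatenate([np.zeros(30), np.ones(30)])  # 1 = schizophrenia diagnosis

auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)
print(f"AUC = {auc:.2f}")
```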
Affiliation(s)
- Chenyi Chen
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan
- Chia-Chien Liu
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan; Department of Psychiatry, National Yang-Ming University Hospital, Yilan, Taiwan
- Pei-Yuan Weng
- Department of Psychiatry, National Yang-Ming University Hospital, Yilan, Taiwan
- Yawei Cheng
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan; Department of Rehabilitation, National Yang-Ming University Hospital, Yilan, Taiwan
12. Chen C, Sung JY, Cheng Y. Neural Dynamics of Emotional Salience Processing in Response to Voices during the Stages of Sleep. Front Behav Neurosci. 2016;10:117. PMID: 27378870. PMCID: PMC4906046. DOI: 10.3389/fnbeh.2016.00117.
Abstract
Sleep has been related to emotional functioning, but the extent to which emotional salience is processed during sleep is unknown. To address this question, we investigated brain reactivity in healthy adults across a night of sleep to the emotionally (happily, fearfully) spoken meaningless syllables dada, along with correspondingly synthesized nonvocal sounds. Electroencephalogram (EEG) signals were acquired continuously during an entire night of sleep while a passive auditory oddball paradigm was applied. During all stages of sleep, mismatch negativity (MMN) in response to the emotional syllables, an index of emotional salience processing of voices, was detected. In contrast, MMN to the acoustically matched nonvocal sounds was undetectable during sleep stages 2 and 3 as well as rapid eye movement (REM) sleep. Post-MMN positivity (PMP) was identified with larger amplitudes during stage 3, and at earlier latencies during REM sleep, relative to wakefulness. These findings demonstrate the neural dynamics of emotional salience processing across the stages of sleep.
Affiliation(s)
- Chenyi Chen
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan
- Jia-Ying Sung
- Department of Neurology, Wan Fang Hospital, Taipei Medical University, Taipei, Taiwan; Department of Neurology, School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan
- Yawei Cheng
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan; Department of Rehabilitation, National Yang-Ming University Hospital, Yilan, Taiwan
13. Diamond E, Zhang Y. Cortical processing of phonetic and emotional information in speech: A cross-modal priming study. Neuropsychologia. 2016;82:110-122. PMID: 26796714. DOI: 10.1016/j.neuropsychologia.2016.01.019.
Abstract
The current study employed behavioral and electrophysiological measures to investigate the timing, localization, and neural oscillation characteristics of cortical activities associated with phonetic and emotional information processing of speech. The experiment used a cross-modal priming paradigm in which normal adult participants were presented with a visual prime followed by an auditory target. Primes were facial expressions that systematically varied in emotional content (happy or angry) and mouth shape (corresponding to /a/ or /i/ vowels). Targets were spoken words that varied in emotional prosody (happy or angry) and vowel (/a/ or /i/). In both the phonetic and prosodic conditions, participants judged the congruency of the visual prime and the auditory target. Behavioral results showed a congruency effect for both percent correct and reaction time. Two ERP responses, the N400 and the late positive response (LPR), were identified in both conditions. Source localization and inter-trial phase coherence of the N400 and LPR components further revealed different cortical contributions and neural oscillation patterns for the selective processing of phonetic and emotional information in speech. The results provide corroborating evidence for the necessity of differentiating the brain mechanisms underlying the representation and processing of co-existing linguistic and paralinguistic information in spoken language, which has important implications for theoretical models of speech recognition as well as clinical studies on the neural bases of language and social communication deficits.
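Inter-trial phase coherence (ITPC), used above for the N400 and LPR components, is the length of the mean unit phase vector across trials. A minimal sketch for one channel, assuming single-trial data already band-pass filtered to the frequency band of interest (the shapes and filtering step are assumptions, not the study's pipeline):

```python
import numpy as np
from scipy.signal import hilbert

# Placeholder single-trial data for one channel: (n_trials, n_times).
rng = np.random.default_rng(7)
n_trials, n_times = 100, 500
trials = rng.standard_normal((n_trials, n_times))

phases = np.angle(hilbert(trials, axis=-1))       # instantaneous phase per trial
itpc = np.abs(np.exp(1j * phases).mean(axis=0))   # ITPC per time point, in [0, 1]

print(itpc.shape, float(itpc.max()))
```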
Affiliation(s)
- Erin Diamond
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN 55455, USA
- Yang Zhang
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN 55455, USA; Center for Neurobehavioral Development, University of Minnesota, Minneapolis, MN 55455, USA; School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China.
14. Schirmer A, Escoffier N, Cheng X, Feng Y, Penney TB. Detecting Temporal Change in Dynamic Sounds: On the Role of Stimulus Duration, Speed, and Emotion. Front Psychol. 2016;6:2055. PMID: 26793161. PMCID: PMC4710701. DOI: 10.3389/fpsyg.2015.02055.
Abstract
For dynamic sounds, such as vocal expressions, duration often varies alongside speed. Compared to longer sounds, shorter sounds unfold more quickly. Here, we asked whether listeners implicitly use this confound when representing temporal regularities in their environment. In addition, we explored the role of emotions in this process. Using a mismatch negativity (MMN) paradigm, we asked participants to watch a silent movie while passively listening to a stream of task-irrelevant sounds. In Experiment 1, one surprised and one neutral vocalization were compressed and stretched to create stimuli of 378 and 600 ms duration. Stimuli were presented in four blocks, two of which used surprised and two of which used neutral expressions. In one surprised and one neutral block, short and long stimuli served as standards and deviants, respectively. In the other two blocks, the assignment of standards and deviants was reversed. We observed a climbing MMN-like negativity shortly after deviant onset, which suggests that listeners implicitly track sound speed and detect speed changes. Additionally, this MMN-like effect emerged earlier and was larger for long than short deviants, suggesting greater sensitivity to duration increments or slowing down than to decrements or speeding up. Last, deviance detection was facilitated in surprised relative to neutral blocks, indicating that emotion enhances temporal processing. Experiment 2 was comparable to Experiment 1 with the exception that sounds were spectrally rotated to remove vocal emotional content. This abolished the emotional processing benefit, but preserved the other effects. Together, these results provide insights into listener sensitivity to sound speed and raise the possibility that speed biases duration judgements implicitly in a feed-forward manner. Moreover, this bias may be amplified for duration increments relative to decrements and within an emotional relative to a neutral stimulus context.
Affiliation(s)
- Annett Schirmer
- Department of Psychology, National University of Singapore, Singapore; Life Sciences Institute Programme in Neurobiology and Ageing, National University of Singapore, Singapore; Duke-NUS Graduate Medical School, Singapore
- Nicolas Escoffier
- Department of Psychology, National University of Singapore, Singapore; Life Sciences Institute Programme in Neurobiology and Ageing, National University of Singapore, Singapore
- Xiaoqin Cheng
- Life Sciences Institute Programme in Neurobiology and Ageing, National University of Singapore, Singapore; Graduate School for Integrative Sciences and Engineering, National University of Singapore, Singapore
- Yenju Feng
- Life Sciences Institute Programme in Neurobiology and Ageing, National University of Singapore, Singapore; Graduate School for Integrative Sciences and Engineering, National University of Singapore, Singapore
- Trevor B Penney
- Department of Psychology, National University of Singapore, Singapore; Life Sciences Institute Programme in Neurobiology and Ageing, National University of Singapore, Singapore
15. Tsolaki A, Kosmidou V, Hadjileontiadis L, Kompatsiaris I(Y), Tsolaki M. Brain source localization of MMN, P300 and N400: Aging and gender differences. Brain Res. 2015;1603:32-49. DOI: 10.1016/j.brainres.2014.10.004.
16. Chen C, Chen CY, Yang CY, Lin CH, Cheng Y. Testosterone modulates preattentive sensory processing and involuntary attention switches to emotional voices. J Neurophysiol. 2015;113:1842-1849. DOI: 10.1152/jn.00587.2014.
Abstract
Testosterone is capable of altering facial threat processing, and voices, like faces, convey social information. We hypothesized that administering a single dose of testosterone would change voice perception in humans. In a placebo-controlled, randomized, double-blind crossover design, we administered a single dose of testosterone or placebo to 18 healthy female volunteers and used a passive auditory oddball paradigm. The mismatch negativity (MMN) and P3a in response to fearfully, happily, and neutrally spoken syllables dada and acoustically matched nonvocal sounds were analyzed as indices of preattentive sensory processing and involuntary attention switching. Testosterone administration showed a trend toward shortening the peak latencies of the happy MMN and significantly enhanced the amplitudes of the happy and fearful P3a, whereas the MMN and P3a to the happy- and fearful-derived nonvocal sounds remained unaffected. These findings demonstrate an acute effect of testosterone on the neural dynamics of voice perception: a single dose of testosterone modulates preattentive sensory processing and involuntary attention switches in response to emotional voices.
Affiliation(s)
- Chenyi Chen
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan
- Chin-Yau Chen
- Department of Surgery, National Yang-Ming University Hospital, Yilan, Taiwan
- Chih-Yung Yang
- Department of Education and Research, Taipei City Hospital, Taipei, Taiwan
- Institute of Microbiology and Immunology, National Yang-Ming University, Taipei, Taiwan
- Chi-Hung Lin
- Department of Education and Research, Taipei City Hospital, Taipei, Taiwan
- Institute of Microbiology and Immunology, National Yang-Ming University, Taipei, Taiwan
- Yawei Cheng
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan
- Department of Education and Research, Taipei City Hospital, Taipei, Taiwan
- Department of Rehabilitation, National Yang-Ming University Hospital, Yilan, Taiwan
17. Zhang D, Liu Y, Hou X, Sun G, Cheng Y, Luo Y. Discrimination of fearful and angry emotional voices in sleeping human neonates: a study of the mismatch brain responses. Front Behav Neurosci. 2014;8:422. PMID: 25538587. PMCID: PMC4255595. DOI: 10.3389/fnbeh.2014.00422.
Abstract
Appropriate processing of human voices carrying different threat-related emotions is of evolutionarily adaptive value for survival. Nevertheless, it is still not clear whether sensitivity to threat-related information is present at birth. Using an oddball paradigm, the current study investigated the neural correlates underlying automatic processing of fearful and angry emotional voices in sleeping neonates. Event-related potential data showed that the neonatal brain discriminated fearful from angry voices over the fronto-central scalp: the mismatch response (MMR) was larger to the deviant angry stimuli than to the standard fearful stimuli. Furthermore, this fear-anger MMR discrimination was observed only when neonates were in an active sleep state. Although the neonates' sensitivity to threat-related voices is unlikely to reflect a conceptual understanding of fearful and angry emotions, this early discrimination may provide a foundation for later emotional and social-cognitive development.
Affiliation(s)
- Dandan Zhang
- Institute of Affective and Social Neuroscience, Shenzhen University, Shenzhen, China; State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Yunzhe Liu
- Institute of Affective and Social Neuroscience, Shenzhen University, Shenzhen, China; State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Xinlin Hou
- Department of Pediatrics, Peking University First Hospital, Beijing, China
- Guoyu Sun
- Department of Pediatrics, Peking University First Hospital, Beijing, China
- Yawei Cheng
- Institute of Neuroscience, Yang-Ming University, Taipei, Taiwan; Department of Rehabilitation, Yang-Ming University Hospital, Ilan, Taiwan
- Yuejia Luo
- Institute of Affective and Social Neuroscience, Shenzhen University, Shenzhen, China
18. Hung AY, Cheng Y. Sex differences in preattentive perception of emotional voices and acoustic attributes. Neuroreport. 2014;25:464-469. PMID: 24488031. DOI: 10.1097/wnr.0000000000000115.
Abstract
Sex stereotypes consider women to be superior in voice sensitivity, but whether such sex differences are driven by voice perception per se or by low-level acoustic attributes remains unclear. Using a passive auditory oddball paradigm, we presented the emotionally spoken meaningless syllables 'dada' (neutral, happy, fearful), along with corresponding nonvocal sounds, to female and male adults. Mismatch negativity (MMN) and P3a were identified in the waveforms obtained by subtracting the responses to standard stimuli from the responses to deviant stimuli. MMN in response to fearful syllables was stronger in women, whereas MMN elicited by nonvocal sounds was comparable between sexes. This sex effect was found specifically for the MMN, not the P3a. These findings suggest that sex differences in voice sensitivity are driven by voice perception itself. The sex effect in preattentive processing of emotional voices may further point to possible etiological pathways for mental disorders characterized by disturbances in emotional processing and by sex disparities in prevalence.
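The deviant-minus-standard subtraction described above can be sketched as follows, with the MMN and P3a picked as the most negative and most positive points in conventional latency windows. The waveforms are simulated, and the channel, windows, and sampling rate are illustrative assumptions rather than the study's parameters.

```python
import numpy as np

sfreq = 500                                    # Hz (assumed)
times = np.arange(0, 0.6, 1 / sfreq)           # 0-600 ms epoch
rng = np.random.default_rng(3)
deviant = rng.normal(0.0, 0.2, times.size)     # placeholder ERP at a fronto-central site (µV)
standard = rng.normal(0.0, 0.2, times.size)

difference = deviant - standard                # MMN/P3a difference wave

def in_window(lo, hi):
    return (times >= lo) & (times < hi)

mmn_i = np.argmin(np.where(in_window(0.10, 0.25), difference, np.inf))    # most negative, 100-250 ms
p3a_i = np.argmax(np.where(in_window(0.25, 0.40), difference, -np.inf))   # most positive, 250-400 ms

print(f"MMN: {difference[mmn_i]:.2f} µV at {times[mmn_i] * 1000:.0f} ms")
print(f"P3a: {difference[p3a_i]:.2f} µV at {times[p3a_i] * 1000:.0f} ms")
```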
Affiliation(s)
- An-Yi Hung
- Institute of Neuroscience and Brain Research Center, National Yang-Ming University, Taipei; Department of Education and Research, Taipei City Hospital, Taipei; Department of Rehabilitation, National Yang-Ming University Hospital, Yilan, Taiwan
19. Chen C, Lee YH, Cheng Y. Anterior insular cortex activity to emotional salience of voices in a passive oddball paradigm. Front Hum Neurosci. 2014;8:743. PMID: 25346670. PMCID: PMC4193252. DOI: 10.3389/fnhum.2014.00743.
Abstract
The human voice, which has a pivotal role in communication, is processed in specialized brain regions. Although a general consensus holds that the anterior insular cortex (AIC) plays a critical role in negative emotional experience, previous studies have not observed AIC activation in response to hearing disgust in voices. We used magnetoencephalography to measure the magnetic counterparts of mismatch negativity (MMNm) and P3a (P3am) in healthy adults while the emotionally meaningless syllables dada, spoken with neutral, happy, or disgusted prosody, along with acoustically matched simple and complex tones, were presented in a passive oddball paradigm. Disgusted relative to happy syllables elicited stronger MMNm-related cortical activity in the right AIC and precentral gyrus, along with the left posterior insular cortex, supramarginal cortex, transverse temporal cortex, and upper bank of the superior temporal cortex. The AIC activity specific to disgusted syllables (corrected p < 0.05) was associated with the hit rate in an emotional categorization task. These findings clarify the neural correlates of the emotional MMNm and support a role for the AIC in processing emotional salience already at the preattentive level.
Affiliation(s)
- Chenyi Chen
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan
- Yu-Hsuan Lee
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan
- Yawei Cheng
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan; Department of Rehabilitation, National Yang-Ming University, Yilan, Taiwan; Department of Education and Research, Taipei City Hospital, Taipei, Taiwan
20. Fan YT, Cheng Y. Atypical mismatch negativity in response to emotional voices in people with autism spectrum conditions. PLoS One. 2014;9:e102471. PMID: 25036143. PMCID: PMC4103818. DOI: 10.1371/journal.pone.0102471.
Abstract
Autism spectrum conditions (ASC) are characterized by heterogeneous impairments of social reciprocity and sensory processing. Voices, like faces, convey socially relevant information, but whether voice processing is selectively impaired in ASC remains undetermined. This study recorded mismatch negativity (MMN) while presenting emotionally spoken syllables dada and acoustically matched nonvocal sounds to 20 subjects with ASC and 20 healthy matched controls. The people with ASC exhibited no MMN response to emotional syllables and reduced MMN to nonvocal sounds, indicating general impairments of affective voice and acoustic discrimination. Weaker angry MMN amplitudes were associated with more autistic traits. Receiver operating characteristic analysis of the angry MMN amplitudes yielded an area under the curve of 0.88 (p < .001). The results suggest that people with ASC process emotional voices in an atypical fashion already at the automatic stage. This processing abnormality may facilitate the diagnosis of ASC and help predict social deficits in people with ASC.
Affiliation(s)
- Yang-Teng Fan
- Institute of Neuroscience and Brain Research Center, National Yang-Ming University, Taipei, Taiwan
- Yawei Cheng
- Institute of Neuroscience and Brain Research Center, National Yang-Ming University, Taipei, Taiwan
- Department of Rehabilitation, National Yang-Ming University Hospital, Yilan, Taiwan
- Department of Research and Education, Taipei City Hospital, Taipei, Taiwan