1. Silva F, Ribeiro S, Silva S, Garrido MI, Soares SC. Exploring the use of visual predictions in social scenarios while under anticipatory threat. Sci Rep 2024;14:10913. PMID: 38740937. DOI: 10.1038/s41598-024-61682-3.
Abstract
One of the less recognized effects of anxiety lies in perceptual alterations caused by how one weighs sensory evidence against contextual cues. Here, we investigated how anxiety affects our ability to use social cues to anticipate others' actions. We adapted a paradigm to assess expectations in social scenarios, in which participants were asked to identify the presence of agents while supported by contextual cues from another agent. Participants (N = 66) performed this task under safe and threat-of-shock conditions. We extracted criterion and sensitivity measures as well as gaze data. Our analysis showed that, whilst the type of action had the expected effect, threat-of-shock had no effect on criterion or sensitivity. Although dwell times were similar across conditions, gaze exploration of the contextual cue was associated with shorter fixation durations whilst participants were under threat. Our findings suggest that anxiety does not appear to influence the use of expectations in social scenarios.
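As context for the measures above: criterion and sensitivity are standard signal detection theory quantities derived from hit and false-alarm rates. The sketch below shows one common way to compute them; the function name and the log-linear correction are illustrative assumptions, not the authors' actual analysis code.

```python
# Minimal signal-detection sketch: sensitivity (d') and criterion (c)
# from raw response counts. The log-linear correction (adding 0.5 to
# each cell) is one common way to avoid infinite z-scores; the paper's
# exact pipeline may differ.
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa              # sensitivity: signal/noise separation
    criterion = -0.5 * (z_hit + z_fa)   # response bias: positive = conservative
    return d_prime, criterion

# Example with invented counts for a single participant and condition:
print(sdt_measures(40, 10, 12, 38))
```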
Affiliation(s)
- Fábio Silva: William James Center for Research, Department of Education and Psychology, University of Aveiro, 3810-193 Aveiro, Portugal
- Sérgio Ribeiro: Department of Education and Psychology, University of Aveiro, Aveiro, Portugal
- Samuel Silva: IEETA, DETI, University of Aveiro, Aveiro, Portugal
- Marta I Garrido: Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, VIC, Australia; Graeme Clark Institute for Biomedical Engineering, The University of Melbourne, Melbourne, Australia
- Sandra C Soares: William James Center for Research, Department of Education and Psychology, University of Aveiro, 3810-193 Aveiro, Portugal
2. Li L, Ishida K, Mizuhara K, Barry RJ, Nittono H. Effects of the cardiac cycle on auditory processing: A preregistered study on mismatch negativity. Psychophysiology 2024;61:e14506. PMID: 38149745. DOI: 10.1111/psyp.14506.
Abstract
The systolic and diastolic phases of the cardiac cycle are known to affect perception and cognition differently. Higher-order processing tends to be facilitated at systole, whereas sensory processing of external stimuli tends to be impaired at systole compared to diastole. The current study examined whether the cardiac cycle affects auditory deviance detection, as reflected in the mismatch negativity (MMN) of the event-related brain potential (ERP). We recorded the intensity deviance response to deviant tones (70 dB) presented among standard tones (60 or 80 dB, depending on the block) and calculated the MMN by subtracting standard ERP waveforms from deviant ERP waveforms. We also assessed intensity-dependent N1 and P2 amplitude changes by subtracting ERPs elicited by soft standard tones (60 dB) from ERPs elicited by loud standard tones (80 dB). These subtraction methods were used to eliminate phase-locked cardiac-related electrical artifacts that overlap auditory ERPs. The endogenous MMN was expected to be larger at systole, reflecting facilitated memory-based auditory deviance detection, whereas the exogenous N1 and P2 were expected to be smaller at systole, reflecting impaired exteroceptive sensory processing. However, after the elimination of cardiac-related artifacts, there were no significant differences between systole and diastole in any ERP component. The intensity-dependent N1 and P2 amplitude changes were not obvious in either cardiac phase, probably because of the short interstimulus intervals. The lack of a cardiac phase effect on MMN amplitude suggests that preattentive auditory processing may not be affected by bodily signals from the heart.
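For readers unfamiliar with the deviant-minus-standard subtraction described above, the following MNE-Python sketch illustrates the idea. The file name, event codes, and epoch window are hypothetical; this is not the authors' pipeline.

```python
# Sketch of an MMN difference wave in MNE-Python. Because cardiac-related
# potentials are (approximately) common to deviant and standard epochs,
# they largely cancel in the subtraction, as the abstract describes.
import mne

raw = mne.io.read_raw_fif("sub-01_raw.fif", preload=True)  # hypothetical file
events, _ = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id={"standard": 1, "deviant": 2},
                    tmin=-0.1, tmax=0.4, baseline=(None, 0), preload=True)

standard = epochs["standard"].average()
deviant = epochs["deviant"].average()
mmn = mne.combine_evoked([deviant, standard], weights=[1, -1])  # deviant - standard
mmn.plot(picks="Fz")  # MMN is typically largest at fronto-central sites
```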
Affiliation(s)
- Lingjun Li: Graduate School of Human Sciences, Osaka University, Osaka, Japan
- Kai Ishida: Graduate School of Human Sciences, Osaka University, Osaka, Japan; Japan Society for the Promotion of Science, Tokyo, Japan
- Keita Mizuhara: Graduate School of Human Sciences, Osaka University, Osaka, Japan; Japan Society for the Promotion of Science, Tokyo, Japan
- Robert J Barry: School of Psychology, Brain & Behaviour Research Institute, University of Wollongong, Wollongong, New South Wales, Australia
- Hiroshi Nittono: Graduate School of Human Sciences, Osaka University, Osaka, Japan
3. Ringer H, Rösch SA, Roeber U, Deller J, Escera C, Grimm S. That sounds awful! Does sound unpleasantness modulate the mismatch negativity and its habituation? Psychophysiology 2024;61:e14450. PMID: 37779371. DOI: 10.1111/psyp.14450.
Abstract
There are sounds that most people perceive as highly unpleasant, for instance, the sound of rubbing pieces of polystyrene together. Previous research has shown larger physiological and neural responses to such aversive sounds compared to neutral ones. It remains unclear, however, whether habituation, i.e., diminished responses to repeated stimulus presentation, which is typically reported for neutral sounds, occurs to the same extent for aversive stimuli. We measured the mismatch negativity (MMN) in response to rare occurrences of aversive or neutral deviant sounds within an auditory oddball sequence in 24 healthy participants while they performed a demanding visual distractor task. Deviants occurred as single events (i.e., between two standards) or as double deviants (i.e., the identical deviant sound repeated in two consecutive trials). All deviants elicited a clear MMN, and amplitudes were larger for aversive than for neutral deviants (irrespective of their position within a deviant pair). This supports the claim of preattentive emotion evaluation during early auditory processing. Contrary to our expectations, MMN amplitudes did not show habituation but increased in response to deviant repetition, similarly for aversive and neutral deviants. A more fine-grained analysis of individual MMN amplitudes in relation to individual arousal and valence ratings of each sound item revealed that stimulus-specific MMN amplitudes were best predicted by the interaction of deviant position and perceived arousal, but not by valence. Deviants perceived as higher in arousal elicited larger MMN amplitudes only at the first deviant position, indicating that the MMN reflects preattentive processing of the emotional content of sounds.
Affiliation(s)
- Hanna Ringer: Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany; International Max Planck Research School on Neuroscience of Communication, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Sarah Alica Rösch: Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany; International Max Planck Research School on Neuroscience of Communication, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Integrated Research and Treatment Center (IFB) Adiposity Diseases, Behavioral Medicine Research Unit, Leipzig University Medical Center, Leipzig, Germany
- Urte Roeber: Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
- Julia Deller: Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany; Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig, Leipzig, Germany
- Carles Escera: Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, Faculty of Psychology, University of Barcelona, Barcelona, Spain; Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Spain
- Sabine Grimm: Physics of Cognition Lab, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
4. Nussbaum C, Schirmer A, Schweinberger SR. Electrophysiological Correlates of Vocal Emotional Processing in Musicians and Non-Musicians. Brain Sci 2023;13:1563. PMID: 38002523. PMCID: PMC10670383. DOI: 10.3390/brainsci13111563.
Abstract
Musicians outperform non-musicians in vocal emotion recognition, but the underlying mechanisms are still debated. Behavioral measures highlight the importance of auditory sensitivity towards emotional voice cues. However, it remains unclear whether and how this group difference is reflected at the brain level. Here, we compared event-related potentials (ERPs) to acoustically manipulated voices between musicians (n = 39) and non-musicians (n = 39). We used parameter-specific voice morphing to create and present vocal stimuli that conveyed happiness, fear, pleasure, or sadness, either in all acoustic cues or selectively in pitch contour (F0) or timbre. Although the fronto-central P200 (150-250 ms) and N400 (300-500 ms) components were modulated by pitch and timbre, differences between musicians and non-musicians appeared only for a centro-parietal late positive potential (500-1000 ms). Thus, this study does not support an early auditory specialization in musicians but suggests instead that musicality affects the manner in which listeners use acoustic voice cues during later, controlled stages of emotion evaluation.
Affiliation(s)
- Christine Nussbaum: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, 07743 Jena, Germany; Voice Research Unit, Friedrich Schiller University, 07743 Jena, Germany
- Annett Schirmer: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, 07743 Jena, Germany; Institute of Psychology, University of Innsbruck, 6020 Innsbruck, Austria
- Stefan R. Schweinberger: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, 07743 Jena, Germany; Voice Research Unit, Friedrich Schiller University, 07743 Jena, Germany; Swiss Center for Affective Sciences, University of Geneva, 1202 Geneva, Switzerland
5. Ioakeimidis V, Lennuyeux-Comnene L, Khachatoorian N, Gaigg SB, Haenschel C, Kyriakopoulos M, Dima D. Trait and State Anxiety Effects on Mismatch Negativity and Sensory Gating Event-Related Potentials. Brain Sci 2023;13:1421. PMID: 37891790. PMCID: PMC10605251. DOI: 10.3390/brainsci13101421.
Abstract
We used the auditory roving oddball paradigm to investigate whether individual differences in self-reported anxiety influence event-related potential (ERP) activity related to sensory gating and mismatch negativity (MMN). The State-Trait Anxiety Inventory (STAI) was used to assess the effects of anxiety on ERPs indexing auditory change detection and information filtering in a sample of thirty-six healthy participants. The roving oddball paradigm presents trains of tones at one frequency followed by trains of tones at a different frequency. An enhanced negative mid-latency response (130-230 ms post-stimulus) to the deviant (first tone in a train) relative to the standard (sixth or later repetition) was observed at Fz, indicating a successful mismatch negativity (MMN). In turn, the first and second tones in a stimulus train were subject to sensory gating at the Cz electrode site, as the response to the second stimulus was suppressed at an earlier latency (40-80 ms). We used partial correlations and analyses of covariance to investigate the influence of state and trait anxiety on these two processes. Higher trait anxiety was associated with enhanced (more negative) MMN amplitude (F(1,33) = 14.259, p = 6.323 × 10⁻⁶, ηp² = 0.302), whereas higher state anxiety was associated with reduced sensory gating (F(1,30) = 13.117, p = 0.001, ηp² = 0.304). Our findings suggest that high trait-anxious participants demonstrate hypervigilant change detection, with deviant tones appearing more salient, whereas increased state anxiety is associated with failure to filter out irrelevant stimuli.
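As a side note on the statistics mentioned above, a partial correlation (here, trait anxiety versus MMN amplitude, controlling for state anxiety) can be sketched in a few lines with pingouin. Column names and values are invented for illustration and do not reproduce the study's analysis.

```python
# Partial correlation sketch with pingouin; data are invented.
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "trait_anxiety": [32, 45, 51, 38, 60, 41, 55, 36],
    "state_anxiety": [30, 40, 55, 35, 58, 39, 50, 33],
    "mmn_amplitude": [-1.2, -2.0, -2.4, -1.5, -3.1, -1.8, -2.7, -1.4],
})
# Correlate trait anxiety with MMN amplitude while controlling for state anxiety.
print(pg.partial_corr(data=df, x="trait_anxiety", y="mmn_amplitude",
                      covar="state_anxiety"))
```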
Affiliation(s)
- Vasileios Ioakeimidis: Department of Psychology, School of Health and Psychological Sciences, City University of London, 10 Northampton Square, London EC1V 0HB, UK
- Laura Lennuyeux-Comnene: Department of Psychology, School of Health and Psychological Sciences, City University of London, 10 Northampton Square, London EC1V 0HB, UK
- Nareg Khachatoorian: Department of Psychology, School of Health and Psychological Sciences, City University of London, 10 Northampton Square, London EC1V 0HB, UK
- Sebastian B. Gaigg: Department of Psychology, School of Health and Psychological Sciences, City University of London, 10 Northampton Square, London EC1V 0HB, UK
- Corinna Haenschel: Department of Psychology, School of Health and Psychological Sciences, City University of London, 10 Northampton Square, London EC1V 0HB, UK
- Marinos Kyriakopoulos: South London and the Maudsley NHS Foundation Trust, London SE5 8AF, UK; Department of Child and Adolescent Psychiatry, Institute of Psychiatry, Psychology and Neuroscience, King’s College London, London SE5 8AF, UK; 1st Department of Psychiatry, National and Kapodistrian University of Athens, 11528 Athens, Greece
- Danai Dima: Department of Psychology, School of Health and Psychological Sciences, City University of London, 10 Northampton Square, London EC1V 0HB, UK; Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience, King’s College London, London SE5 8AF, UK
6. Achyuthanand K, Prasad S, Chakrabarty M. Trait anxiety modulates the detection sensitivity of negative affect in speech: an online pilot study. Front Behav Neurosci 2023;17:1240043. PMID: 37744950. PMCID: PMC10512416. DOI: 10.3389/fnbeh.2023.1240043.
Abstract
Acoustic perception of emotions in speech is relevant for humans to navigate the social environment optimally. While sensory perception is known to be influenced by ambient noise and by internal bodily states (e.g., emotional arousal and anxiety), their relationship to human auditory perception is relatively less understood. In a supervised online pilot experiment, conducted outside the artificially controlled laboratory environment, we asked whether the detection sensitivity to emotions conveyed by human speech-in-noise (acoustic signals) differs between individuals with relatively lower and higher levels of subclinical trait anxiety. In the task, participants (n = 28) discriminated the target emotion conveyed by temporally unpredictable acoustic signals (signal-to-noise ratio = 10 dB), which were manipulated at four levels (Happy, Neutral, Fear, and Disgust). We calculated the empirical area under the curve (a measure of acoustic signal detection sensitivity) based on signal detection theory to answer our questions. Individuals with high trait anxiety, relative to those with low trait anxiety, showed significantly lower detection sensitivity to acoustic signals of the negative emotions (Disgust and Fear) and significantly lower detection sensitivity when averaged across all emotions. The results of this pilot study, with a small but statistically adequate sample, suggest that trait anxiety influences the acoustic detection of speech-in-noise, especially for signals conveying threatening/negative affect. The findings are relevant for future research on acoustic perception anomalies underlying affective traits and disorders.
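The empirical area under the curve used above is a nonparametric sensitivity index from signal detection theory: the probability that a randomly chosen signal trial receives a higher score than a randomly chosen noise trial. A minimal sketch, with labels and scores invented for illustration:

```python
# Empirical AUC sketch: 0.5 means chance-level detection, 1.0 perfect.
import numpy as np
from sklearn.metrics import roc_auc_score

labels = np.array([1, 1, 1, 0, 0, 0, 1, 0])  # 1 = target emotion present
scores = np.array([0.9, 0.7, 0.4, 0.3, 0.2, 0.5, 0.8, 0.1])  # rating/confidence
print(roc_auc_score(labels, scores))
```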
Affiliation(s)
- Achyuthanand K: Department of Computational Biology, Indraprastha Institute of Information Technology Delhi, New Delhi, India
- Saurabh Prasad: Department of Computer Science and Engineering, Indraprastha Institute of Information Technology Delhi, New Delhi, India
- Mrinmoy Chakrabarty: Department of Social Sciences and Humanities, Indraprastha Institute of Information Technology Delhi, New Delhi, India; Centre for Design and New Media, Indraprastha Institute of Information Technology Delhi, New Delhi, India
7. Kao C, Zhang Y. Detecting Emotional Prosody in Real Words: Electrophysiological Evidence From a Modified Multifeature Oddball Paradigm. J Speech Lang Hear Res 2023;66:2988-2998. PMID: 37379567. DOI: 10.1044/2023_jslhr-22-00652.
Abstract
PURPOSE Emotional voice conveys important social cues that demand listeners' attention and timely processing. This event-related potential study investigated the feasibility of a multifeature oddball paradigm for examining adult listeners' neural responses to emotional prosody changes in nonrepeating, naturally spoken words. METHOD Thirty-three adult listeners completed the experiment by passively listening to the words in neutral and three alternating emotions while watching a silent movie. Previous research documented preattentive change-detection electrophysiological responses (e.g., mismatch negativity [MMN], P3a) to emotions carried by fixed syllables or words. Given that the MMN and P3a have also been shown to reflect extraction of abstract regularities over repetitive acoustic patterns, this study employed a multifeature oddball paradigm to compare listeners' MMN and P3a to emotional prosody changes from neutral to angry, happy, and sad emotions delivered with hundreds of nonrepeating words in a single recording session. RESULTS Both MMN and P3a were successfully elicited by the emotional prosody changes over the varying linguistic context. Angry prosody elicited the strongest MMN compared with happy and sad prosodies. Happy prosody elicited the strongest P3a at the centro-frontal electrodes, and angry prosody elicited the smallest P3a. CONCLUSIONS The results demonstrated that listeners were able to extract the acoustic patterns for each emotional prosody category over constantly changing spoken words. The findings confirm the feasibility of the multifeature oddball paradigm for investigating emotional speech processing beyond simple acoustic change detection, which may potentially be applied to pediatric and clinical populations.
Affiliation(s)
- Chieh Kao: Department of Speech-Language-Hearing Sciences, University of Minnesota, Twin Cities; Center for Cognitive Sciences, University of Minnesota, Twin Cities
- Yang Zhang: Department of Speech-Language-Hearing Sciences, University of Minnesota, Twin Cities; Masonic Institute for the Developing Brain, University of Minnesota, Twin Cities
8. Papesh MA, Fowler L, Pesa SR, Frederick MT. Functional Hearing Difficulties in Veterans: Retrospective Chart Review of Auditory Processing Assessments in the VA Health Care System. Am J Audiol 2023;32:101-118. PMID: 36599099. DOI: 10.1044/2022_aja-22-00117.
Abstract
PURPOSE Approximately 23 million Americans may have functional hearing difficulties (FHDs) that are not well explained by their audiometric thresholds. Clinical management of patients with FHDs is the subject of considerable debate, with few evidence-based guidelines to direct patient care. A better understanding of the characteristics of patients who seek help for FHDs, as well as of current audiological management practices, is needed to direct research efforts to the areas of greatest opportunity for advancing clinical care. METHOD A retrospective chart review was conducted on the medical records of a random sample of 100 Veterans who underwent auditory processing assessments across the VA Health Care System between 2008 and 2020. RESULTS Patients were young to middle-aged, often with previous traumatic brain injury or blast exposure. Mental health, sleep, and pain disorders were common. No consistent relationships emerged between specific patient factors and domains of auditory processing deficits. Low-gain hearing aids were provided to 35 patients, 69% of whom continued wearing their hearing aids for at least 2 years. CONCLUSION Future research should address the potential overlap in symptoms and treatment for comorbid health conditions and FHDs, as well as the conditions underlying successful hearing aid use in this patient population.
Affiliation(s)
- Melissa A Papesh: VA RR&D National Center for Rehabilitative Auditory Research, VA Portland Health Care System, OR; Department of Otolaryngology - Head and Neck Surgery, Oregon Health and Science University, Portland
- Lora Fowler: Department of Communication Sciences and Disorders, Idaho State University, Pocatello
- Stephanie R Pesa: VA Portland Audiology and Speech and Language Pathology Service, VA Portland Health Care System, OR
- Melissa T Frederick: VA RR&D National Center for Rehabilitative Auditory Research, VA Portland Health Care System, OR
9. Schirmer A, Lai O, McGlone F, Cham C, Lau D. Gentle Stroking Elicits Somatosensory ERP that Differentiates Between Hairy and Glabrous Skin. Soc Cogn Affect Neurosci 2022;17:864-875. PMID: 35277720. PMCID: PMC9433843. DOI: 10.1093/scan/nsac012.
Abstract
Here we asked whether, similar to visual and auditory event-related potentials (ERPs), somatosensory ERPs reflect affect. Participants were stroked on hairy or glabrous skin at five stroking velocities (0.5, 1, 3, 10, and 20 cm/s). For stroking of hairy skin, pleasantness ratings related to velocity in an inverted U-shaped manner. ERPs showed a negativity at 400 ms following touch onset over somatosensory cortex contralateral to the stimulation site. This negativity, referred to as sN400, was larger for intermediate than for faster and slower velocities and positively predicted pleasantness ratings. For stroking of glabrous skin, pleasantness again showed an inverted U-shaped relation with velocity and, additionally, increased linearly with faster stroking. The sN400 revealed no quadratic effect and instead was larger for faster velocities; its amplitude failed to significantly predict pleasantness. In sum, as has been reported for other senses, a touch's affective value modulates the somatosensory ERP. Notably, however, this ERP and the associated subjective pleasantness dissociate between hairy and glabrous skin, underscoring functional differences between the skin with which we typically receive touch and the skin with which we typically reach out to touch.
Affiliation(s)
- Annett Schirmer: Department of Psychology, The Chinese University of Hong Kong, Hong Kong SAR; The Brain and Mind Institute, The Chinese University of Hong Kong, Shatin, Hong Kong SAR
- Oscar Lai: Department of Psychology, The Chinese University of Hong Kong, Hong Kong SAR
- Francis McGlone: School of Natural Sciences & Psychology, Liverpool John Moores University, UK; Institute of Psychology, Health & Society, University of Liverpool, UK
- Clare Cham: Department of Psychology, The Chinese University of Hong Kong, Hong Kong SAR
- Darwin Lau: Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong SAR
10. Zora H, Csépe V. Perception of Prosodic Modulations of Linguistic and Paralinguistic Origin: Evidence From Early Auditory Event-Related Potentials. Front Neurosci 2022;15:797487. PMID: 35002610. PMCID: PMC8733303. DOI: 10.3389/fnins.2021.797487.
Abstract
How listeners handle prosodic cues of linguistic and paralinguistic origin is a central question for spoken communication. In the present EEG study, we addressed this question by examining neural responses to variations in pitch accent (linguistic) and affective (paralinguistic) prosody in Swedish words, using a passive auditory oddball paradigm. The results indicated that changes in both pitch accent and affective prosody elicited mismatch negativity (MMN) responses at around 200 ms, confirming the brain's pre-attentive response to any prosodic modulation. The MMN amplitude was, however, larger for the deviation in affective prosody than for the combined deviation in pitch accent and affective prosody, in line with previous research indicating not only a larger MMN response to affective than to neutral prosody but also a smaller MMN response to multidimensional than to unidimensional deviants. The results further showed a significant P3a response to the affective prosody change, compared with the pitch accent change, at around 300 ms, in accordance with previous findings of an enhanced positive response to emotional stimuli. The present findings provide evidence for distinct neural processing of different prosodic cues and confirm the intrinsic perceptual and motivational salience of paralinguistic information in spoken communication.
Affiliation(s)
- Hatice Zora: Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Valéria Csépe: Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, Hungary
11. Li W, Liu S, Han S, Zhang L, Xu Q. Emotional bias of trait anxiety on pre-attentive processing of facial expressions: ERP investigation. Acta Psychologica Sinica 2022. DOI: 10.3724/sp.j.1041.2022.00001.
12. Xu J, Zhou L, Liu F, Xue C, Jiang J, Jiang C. The autistic brain can process local but not global emotion regularities in facial and musical sequences. Autism Res 2021;15:222-240. PMID: 34792299. DOI: 10.1002/aur.2635.
Abstract
Whether autism spectrum disorder (ASD) is associated with a global processing deficit remains controversial. Global integration requires extraction of regularity across various timescales, yet little is known about how individuals with ASD process regularity at local (short timescale) versus global (long timescale) levels. To this end, we used event-related potentials to investigate whether individuals with ASD would show different neural responses to local (within trial) versus global (across trials) emotion regularities extracted from sequential facial expressions and, if so, whether this visual abnormality would generalize to the music (auditory) domain. Twenty individuals with ASD and 21 age- and IQ-matched individuals with typical development participated in this study. At an early processing stage, ASD participants exhibited preserved neural responses to violations of local emotion regularity for both faces and music. At a later stage, however, neural responses to violations of global emotion regularity were absent in ASD for both faces and music. These findings suggest that the autistic brain's responses to emotion regularity are modulated by the timescale of sequential stimuli, and they provide insight into the neural mechanisms underlying emotional processing in ASD.
Affiliation(s)
- Jie Xu: Department of Psychology, Shanghai Normal University, Shanghai, China
- Linshu Zhou: Music College, Shanghai Normal University, Shanghai, China
- Fang Liu: School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Chao Xue: Department of Psychology, Shanghai Normal University, Shanghai, China
- Jun Jiang: Music College, Shanghai Normal University, Shanghai, China
- Cunmei Jiang: Music College, Shanghai Normal University, Shanghai, China
13. Schirmer A, Wijaya M, Chiu MH, Maess B, Gunter TC. Musical rhythm effects on visual attention are non-rhythmical: evidence against metrical entrainment. Soc Cogn Affect Neurosci 2021;16:58-71. PMID: 32507877. PMCID: PMC7812633. DOI: 10.1093/scan/nsaa077.
Abstract
The idea that external rhythms synchronize attention cross-modally has attracted much interest and scientific inquiry. Yet whether the associated attentional modulations are indeed rhythmical, in that they spring from and map onto an underlying meter, has not been clearly established. Here we tested this idea while addressing shortcomings of previous work associated with confounding (i) metricality and regularity, (ii) rhythmic and temporal expectations, or (iii) global and local temporal effects. We designed sound sequences that varied orthogonally (high/low) in metricality and regularity and presented them as task-irrelevant auditory background in four separate blocks. The participants' task was to detect rare visual targets occurring at a silent, metrically aligned or misaligned temporal position. We found that target timing was irrelevant for reaction times and visual event-related potentials. High background regularity and, to a lesser extent, metricality facilitated target processing across metrically aligned and misaligned positions. Additionally, high regularity modulated auditory background frequencies in the EEG recorded over occipital cortex. We conclude that external rhythms, rather than synchronizing attention cross-modally, confer general, nontemporal benefits. Their predictability conserves processing resources that then benefit stimulus representations in other modalities.
Affiliation(s)
- Annett Schirmer (corresponding author): Department of Psychology, The Chinese University of Hong Kong, Sino Building, Shatin, N.T., Hong Kong
- Maria Wijaya: Department of Psychology, The Chinese University of Hong Kong, Shatin, Hong Kong SAR
- Man Hey Chiu: Department of Psychology, The Chinese University of Hong Kong, Shatin, Hong Kong SAR
- Burkhard Maess: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Thomas C Gunter: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
14. Leung JH, Purdy SC, Corballis PM. Improving Emotion Perception in Children with Autism Spectrum Disorder with Computer-Based Training and Hearing Amplification. Brain Sci 2021;11:469. PMID: 33917776. PMCID: PMC8068114. DOI: 10.3390/brainsci11040469.
Abstract
Individuals with Autism Spectrum Disorder (ASD) experience challenges with social communication, often involving emotional elements of language. This may stem from underlying auditory processing difficulties, especially when incoming speech is nuanced or complex. This study explored the effects of auditory training on the social perception abilities of children with ASD. The training combined the use of a remote-microphone hearing system with computerized emotion perception training. At baseline, children with ASD had poorer social communication scores and delayed mismatch negativity (MMN) compared to typically developing children. Behavioral results, measured pre- and post-intervention, revealed increased social perception scores in children with ASD to the extent that they outperformed their typically developing peers post-intervention. Electrophysiology results revealed changes in neural responses to emotional speech stimuli. Post-intervention, mismatch responses of children with ASD more closely resembled those of their neurotypical peers, with shorter MMN latencies, a significantly heightened P2 wave, and greater differentiation of emotional stimuli, consistent with their improved behavioral results. This study sets the foundation for further investigation into connections between auditory processing difficulties and social perception and communication in individuals with ASD, and it provides a promising indication that combining amplified hearing with computer-based, targeted social perception training using emotional speech stimuli may have neuro-rehabilitative benefits.
Affiliation(s)
- Joan H. Leung, S. C. Purdy, and P. M. Corballis: School of Psychology, The University of Auckland, Auckland 1023, New Zealand
15. McMackin R, Dukic S, Costello E, Pinto-Grau M, McManus L, Broderick M, Chipika R, Iyer PM, Heverin M, Bede P, Muthuraman M, Pender N, Hardiman O, Nasseroleslami B. Cognitive network hyperactivation and motor cortex decline correlate with ALS prognosis. Neurobiol Aging 2021;104:57-70. PMID: 33964609. DOI: 10.1016/j.neurobiolaging.2021.03.002.
Abstract
We aimed to quantitatively characterize progressive brain network disruption in amyotrophic lateral sclerosis (ALS) during cognition using the mismatch negativity (MMN), an electrophysiological index of attention switching. We measured the MMN using 128-channel EEG, longitudinally (2-5 timepoints) in 60 ALS patients and cross-sectionally in 62 healthy controls. Using dipole fitting and linearly constrained minimum variance beamforming, we investigated cortical source activity changes over time. In ALS, the inferior frontal gyri (IFG) show significantly lower baseline activity compared to controls. The right IFG and both superior temporal gyri (STG) become progressively hyperactive longitudinally. By contrast, the left motor and dorsolateral prefrontal cortices are initially hyperactive and decline progressively. Baseline motor hyperactivity correlates with cognitive disinhibition, and lower baseline IFG activity correlates with the rate of motor decline, while left dorsolateral prefrontal activity predicts cognitive and behavioural impairment. Shorter survival correlates with reduced baseline IFG and STG activity and with later STG hyperactivation. Source-resolved EEG facilitates quantitative characterization of symptom-associated and symptom-preceding motor and cognitive-behavioural cortical network decline in ALS.
Affiliation(s)
- Roisin McMackin: Academic Unit of Neurology, Trinity College Dublin, the University of Dublin, Dublin 2, Ireland
- Stefan Dukic: Academic Unit of Neurology, Trinity College Dublin, the University of Dublin, Dublin 2, Ireland
- Emmet Costello: Academic Unit of Neurology, Trinity College Dublin, the University of Dublin, Dublin 2, Ireland
- Marta Pinto-Grau: Academic Unit of Neurology, Trinity College Dublin, the University of Dublin, Dublin 2, Ireland; Department of Neurology, University Medical Centre Utrecht Brain Centre, Utrecht University, Utrecht, The Netherlands
- Lara McManus: Academic Unit of Neurology, Trinity College Dublin, the University of Dublin, Dublin 2, Ireland
- Michael Broderick: Academic Unit of Neurology, Trinity College Dublin, the University of Dublin, Dublin 2, Ireland; Trinity Centre for Bioengineering, Trinity College Dublin, the University of Dublin, Dublin 2, Ireland
- Rangariroyashe Chipika: Academic Unit of Neurology, Trinity College Dublin, the University of Dublin, Dublin 2, Ireland; Computational Neuroimaging Group, Trinity College Dublin, the University of Dublin, Dublin 2, Ireland
- Parameswaran M Iyer: Academic Unit of Neurology, Trinity College Dublin, the University of Dublin, Dublin 2, Ireland; Beaumont Hospital Dublin, Department of Neurology, Dublin 9, Ireland
- Mark Heverin: Academic Unit of Neurology, Trinity College Dublin, the University of Dublin, Dublin 2, Ireland
- Peter Bede: Academic Unit of Neurology, Trinity College Dublin, the University of Dublin, Dublin 2, Ireland; Computational Neuroimaging Group, Trinity College Dublin, the University of Dublin, Dublin 2, Ireland
- Muthuraman Muthuraman: Biomedical Statistics and Multimodal Signal Processing Unit, Department of Neurology, Johannes-Gutenberg-University Hospital, Mainz, Germany
- Niall Pender: Academic Unit of Neurology, Trinity College Dublin, the University of Dublin, Dublin 2, Ireland; Department of Neurology, University Medical Centre Utrecht Brain Centre, Utrecht University, Utrecht, The Netherlands; Beaumont Hospital Dublin, Department of Neurology, Dublin 9, Ireland
- Orla Hardiman: Academic Unit of Neurology, Trinity College Dublin, the University of Dublin, Dublin 2, Ireland; Beaumont Hospital Dublin, Department of Neurology, Dublin 9, Ireland
- Bahman Nasseroleslami: Academic Unit of Neurology, Trinity College Dublin, the University of Dublin, Dublin 2, Ireland
16. Nonverbal auditory communication - Evidence for integrated neural systems for voice signal production and perception. Prog Neurobiol 2020;199:101948. PMID: 33189782. DOI: 10.1016/j.pneurobio.2020.101948.
Abstract
While humans have developed a sophisticated and unique system of verbal auditory communication, they also share a more common and evolutionarily important nonverbal channel of voice signaling with many other mammalian and vertebrate species. This nonverbal communication is mediated and modulated by the acoustic properties of a voice signal and is a powerful, yet often neglected, means of sending and perceiving socially relevant information. From the viewpoint of dyadic voice signal communication (involving a sender and a signal receiver), we discuss the integrated neural dynamics of primate nonverbal voice signal production and perception. Most previous neurobiological models of voice communication modelled these neural dynamics from the limited perspective of either voice production or perception, largely disregarding the neural and cognitive commonalities of the two functions. Taking a dyadic perspective on nonverbal communication, however, reveals that the neural systems for voice production and perception are surprisingly similar. Based on the interdependence of production and perception in communication, we first propose a regrouping of the neural mechanisms of communication into auditory, limbic, and paramotor systems, with special consideration of a subsidiary basal-ganglia-centered system. Second, we propose that the similarity of the neural systems involved in voice signal production and perception is the result of the co-evolution of nonverbal voice production and perception systems, promoted by their strong interdependence in dyadic interactions.
17. Fucci E, Abdoun O, Lutz A. Auditory perceptual learning is not affected by anticipatory anxiety in the healthy population except for highly anxious individuals: EEG evidence. Clin Neurophysiol 2019;130:1135-1143. PMID: 31085447. DOI: 10.1016/j.clinph.2019.04.010.
Abstract
OBJECTIVE A recent neurocomputational model proposed that anxious hypervigilance impedes perceptual learning. This view is supported by the observed modulation of the mismatch negativity (MMN), a biomarker of implicit perceptual learning processes, in anxiety disorders. However, other studies have found that anxious states sensitize brain responses with no impact on perceptual learning. The present research aimed to elucidate the impact of anticipatory anxiety on early stimulus processing in the healthy population. METHODS We used electroencephalography to investigate the impact of unpredictable threat on the amplitude of the MMN and other components of the auditory evoked response in healthy participants during a passive auditory oddball task. RESULTS We found a general sensitization of early components of the auditory evoked response, along with changes in subjective and autonomic measures of anxiety, during threat periods. MMN amplitude did not differ between threat and safe periods; however, this effect was modulated by the level of state or trait anxiety. CONCLUSION We propose that anxiety sensitizes early brain responses to unspecific environmental stimuli but affects implicit perceptual learning processes only in individuals at the higher end of the anxiety spectrum. SIGNIFICANCE This view might distinguish between an adaptive role of anxiety in processing efficiency and its detrimental impact on implicit perceptual learning observed in psychiatric conditions.
Affiliation(s)
- E Fucci: Lyon Neuroscience Research Centre, INSERM U1028, CNRS UMR5292, Lyon 1 University, Lyon, France
- O Abdoun: Lyon Neuroscience Research Centre, INSERM U1028, CNRS UMR5292, Lyon 1 University, Lyon, France
- A Lutz: Lyon Neuroscience Research Centre, INSERM U1028, CNRS UMR5292, Lyon 1 University, Lyon, France
18. Insensitivity of auditory mismatch negativity to classical fear conditioning and extinction in healthy humans. Neuroreport 2019;30:468-472. PMID: 30817683. DOI: 10.1097/wnr.0000000000001221.
Abstract
A relationship between the auditory mismatch negativity (MMN) and the neural cognitive processes of fear has been suggested in both healthy participants and patients with fear-related mental disorders such as post-traumatic stress disorder and panic disorder. The present study sought to confirm whether the MMN is affected by classical fear conditioning in healthy participants. MMN amplitude, N1 amplitude, and skin conductance level (SCL) were recorded in 20 healthy volunteers during a fear-conditioning paradigm consisting of three phases (habituation, fear acquisition, and fear extinction). Red and blue light signals were presented as the conditioned stimuli CS+ (threat cue) and CS- (safety cue), respectively. In addition, an aversive electrical stimulus was delivered as the unconditioned stimulus with the CS+ in the fear-acquisition phase. No MMN amplitude changes were observed between the CS types across the three phases. In the acquisition phase, the mean SCL during CS+ was significantly higher than that during CS-. The MMN amplitude and deviant N1 amplitude in the extinction phase were significantly lower than those in the other phases, regardless of CS type. Despite the clear difference in SCL between CS types in the acquisition phase, no significant differences in MMN were observed. The decreased MMN and deviant N1 in the fear-extinction phase were attributed mainly to decreased arousal or attention. The results indicate that auditory MMN amplitude is not affected by the cognitive processing of fear cued through another sensory modality.
19. Chen C, Chan CW, Cheng Y. Test-Retest Reliability of Mismatch Negativity (MMN) to Emotional Voices. Front Hum Neurosci 2018;12:453. PMID: 30498437. PMCID: PMC6249375. DOI: 10.3389/fnhum.2018.00453.
Abstract
A voice from a kin species conveys indispensable social and affective signals from both phylogenetic and ontogenetic standpoints. The neural processing of emotional voices, beyond low-level acoustic features, engages a chain that proceeds from the auditory pathway to brain structures implicated in cognition and emotion. Using a passive auditory oddball paradigm with emotional voices, this study investigated the test-retest reliability of the emotional mismatch negativity (MMN), showing that deviants of positively (happily) and negatively (angrily) spoken syllables, compared to neutral standards, trigger an MMN reflecting automatic discrimination of emotional salience. The neurophysiological estimates of the MMN to positive and negative deviants proved highly reproducible, irrespective of the subjects' attentional disposition: whether they watched a silent movie or performed a working memory task. Specifically, a negativity bias was evident, in that threatening vocalizations, relative to positive ones, consistently induced larger MMN amplitudes, regardless of the day and the time of day. The present findings provide evidence that the emotional MMN offers a stable platform for detecting subtle shifts in current emotional state.
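Test-retest reliability of this kind is commonly summarized with intraclass correlation coefficients (ICCs). The sketch below uses pingouin on invented long-format data (one MMN amplitude per subject per session); the paper's exact reliability estimator may differ.

```python
# ICC sketch for test-retest reliability; subjects, sessions, and
# amplitudes are invented for illustration.
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "subject": ["s1", "s1", "s2", "s2", "s3", "s3", "s4", "s4"],
    "session": ["t1", "t2", "t1", "t2", "t1", "t2", "t1", "t2"],
    "mmn":     [-2.1, -2.3, -1.4, -1.6, -3.0, -2.8, -2.2, -2.0],
})
icc = pg.intraclass_corr(data=df, targets="subject", raters="session",
                         ratings="mmn")
print(icc[["Type", "ICC", "CI95%"]])
```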
Affiliation(s)
- Chenyi Chen: Department of Physical Medicine and Rehabilitation, National Yang-Ming University Hospital, Yilan, Taiwan; Graduate Institute of Injury Prevention and Control, Taipei Medical University, Taipei, Taiwan; Institute of Humanities in Medicine, Taipei Medical University, Taipei, Taiwan; Research Center of Brain and Consciousness, Shuang Ho Hospital, Taipei Medical University, Taipei, Taiwan
- Chia-Wen Chan: Graduate Institute of Injury Prevention and Control, Taipei Medical University, Taipei, Taiwan
- Yawei Cheng: Department of Physical Medicine and Rehabilitation, National Yang-Ming University Hospital, Yilan, Taiwan; Institute of Neuroscience and Brain Research Center, National Yang-Ming University, Taipei, Taiwan; Department of Research and Education, Taipei City Hospital, Taipei, Taiwan
20. Tse CY, Yip LY, Lui TKY, Xiao XZ, Wang Y, Chu WCW, Parks NA, Chan SSM, Neggers SFW. Establishing the functional connectivity of the frontotemporal network in pre-attentive change detection with Transcranial Magnetic Stimulation and event-related optical signal. Neuroimage 2018;179:403-413. DOI: 10.1016/j.neuroimage.2018.06.053.
21. Conde T, Gonçalves ÓF, Pinheiro AP. Stimulus complexity matters when you hear your own voice: Attention effects on self-generated voice processing. Int J Psychophysiol 2018;133:66-78. PMID: 30114437. DOI: 10.1016/j.ijpsycho.2018.08.007.
Abstract
The ability to discriminate self- and non-self voice cues is a fundamental aspect of self-awareness and subserves self-monitoring during verbal communication. Nonetheless, the neurofunctional underpinnings of self-voice perception and recognition are still poorly understood. Moreover, how attention and stimulus complexity influence the processing and recognition of one's own voice remains to be clarified. Using an oddball task, the current study investigated how self-relevance and stimulus type interact during selective attention to voices, and how they affect the representation of regularity during voice perception. Event-related potentials (ERPs) were recorded from 18 right-handed males. Pre-recorded self-generated (SGV) and non-self (NSV) voices, consisting of a nonverbal vocalization (vocalization condition) or a disyllabic word (word condition), were presented as either standard or target stimuli in different experimental blocks. The results showed increased N2 amplitude to SGV relative to NSV stimuli. Stimulus type modulated later processing stages only: P3 amplitude was increased for SGV relative to NSV words, whereas no differences between SGV and NSV were observed for vocalizations. Moreover, SGV standards elicited reduced N1 and P2 amplitudes relative to NSV standards. These findings reveal that the self-voice captures more attention when listeners are exposed to words but not vocalizations. Further, they indicate that detection of regularity in an auditory stream is facilitated for one's own voice at early processing stages. Together, they demonstrate that self-relevance affects attention to voices differently as a function of stimulus type.
Affiliation(s)
- Tatiana Conde: Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal; Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Óscar F Gonçalves: Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal; Spaulding Center of Neuromodulation, Department of Physical Medicine & Rehabilitation, Spaulding Rehabilitation Hospital & Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Bouvé College of Health Sciences, Northeastern University, Boston, MA, USA
- Ana P Pinheiro: Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal; Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal; Cognitive Neuroscience Lab, Department of Psychiatry, Harvard Medical School, Boston, MA, USA
22. The right touch: Stroking of CT-innervated skin promotes vocal emotion processing. Cogn Affect Behav Neurosci 2018;17:1129-1140. PMID: 28933047. PMCID: PMC5709431. DOI: 10.3758/s13415-017-0537-5.
Abstract
Research has revealed a special mechanoreceptor, called the C-tactile (CT) afferent, that is situated in hairy skin and that seems relevant for the processing of social touch. We pursued a possible role of this receptor in the perception of other social signals such as a person's voice. Participants completed three sessions in which they heard surprised and neutral vocal and nonvocal sounds and detected rare sound repetitions. In a given session, participants received no touch or soft brushstrokes to the arm (CT innervated) or palm (CT free). Event-related potentials elicited by the sounds revealed that stroking of the arm facilitated the integration of vocal and emotional information. The late positive potential was greater for surprised vocal relative to neutral vocal and nonvocal sounds, and this effect was greater for arm touch relative to both palm touch and no touch. Together, these results indicate that stroking of the arm facilitates the allocation of processing resources to emotional voices, supporting the possibility that CT stimulation benefits social perception cross-modally.
23. Review and Classification of Emotion Recognition Based on EEG Brain-Computer Interface System Research: A Systematic Review. Appl Sci (Basel) 2017. DOI: 10.3390/app7121239.
24. Hoyniak CP, Bates JE, Petersen IT, Yang CL, Darcy I, Fontaine NMG. Reduced neural responses to vocal fear: a potential biomarker for callous-uncaring traits in early childhood. Dev Sci 2017;21:e12608. PMID: 29119657. DOI: 10.1111/desc.12608.
Abstract
OBJECTIVE Callous-unemotional (CU) traits are characterized by a lack of guilt and empathy and low responsiveness to distress and fear in others. Children with CU traits are at risk for engaging in early and persistent conduct problems. Individuals with CU traits have been shown to have reduced neural responses to others' distress (e.g., fear). However, the neural components of distress responses in children with CU traits have not been investigated in early childhood. In the current study, we examined the neural responses underlying the processing of emotionally valenced vocal stimuli using the event-related potential technique in a group of preschoolers. METHOD Participants between 2 and 5 years old took part in an auditory oddball task containing English-based pseudowords spoken with fearful, happy, or neutral prosody while electroencephalography data were collected. The mismatch negativity (MMN) component, an index of the automatic detection of deviant stimuli within a series of stimuli, was examined in association with two dimensions of CU traits (the callousness-uncaring and unemotional dimensions) reported by primary caregivers. RESULTS The findings suggest that the callousness-uncaring dimension of CU traits in early childhood is associated with reduced responses to fearful vocal stimuli. CONCLUSIONS Reduced neural responses to vocal fear could be a biomarker for callous-uncaring traits in early childhood. These findings are relevant for clinicians and researchers attempting to identify risk factors for early callous-uncaring traits.
Affiliation(s)
- Caroline P Hoyniak, Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- John E Bates, Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Isaac T Petersen, Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, USA
- Chung-Lin Yang, Department of Second Language Studies, Indiana University, Bloomington, IN, USA
- Isabelle Darcy, Department of Second Language Studies, Indiana University, Bloomington, IN, USA
25
Scheumann M, Hasting AS, Zimmermann E, Kotz SA. Human Novelty Response to Emotional Animal Vocalizations: Effects of Phylogeny and Familiarity. Front Behav Neurosci 2017; 11:204. [PMID: 29114210 PMCID: PMC5660701 DOI: 10.3389/fnbeh.2017.00204] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2017] [Accepted: 10/06/2017] [Indexed: 11/13/2022] Open
Abstract
Darwin (1872) postulated that emotional expressions contain universals that are retained across species. We recently showed that human rating responses were strongly affected by a listener's familiarity with vocalization types, whereas evidence for universal cross-taxa emotion recognition was limited. To disentangle the impact of evolutionarily retained mechanisms (phylogeny) and experience-driven cognitive processes (familiarity), we compared the temporal unfolding of event-related potentials (ERPs) in response to agonistic and affiliative vocalizations expressed by humans and three animal species. Using an auditory oddball novelty paradigm, ERPs were recorded in response to task-irrelevant novel sounds, comprising vocalizations varying in their degree of phylogenetic relationship and familiarity to humans. Vocalizations were recorded in affiliative and agonistic contexts. Offline, participants rated the vocalizations for valence, arousal, and familiarity. Correlation analyses revealed a significant correlation between a posteriorly distributed early negativity and arousal ratings. More specifically, a contextual category effect of this negativity was observed for human infant and chimpanzee vocalizations but was absent for the other species' vocalizations. Further, a significant correlation between the later, more posteriorly distributed P3a and P3b responses and familiarity ratings indicates a link between familiarity and attentional processing. A contextual category effect of the P3b was observed for the less familiar chimpanzee and tree shrew vocalizations. Taken together, these findings suggest that early negative ERP responses to agonistic and affiliative vocalizations may be influenced by evolutionarily retained mechanisms, whereas the later orienting of attention (positive ERPs) may mainly be modulated by prior experience.
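The amplitude-rating correlation reported here can be illustrated with a minimal sketch (the numbers are invented, and the authors' actual analysis may differ, e.g., in the correlation statistic used):

```python
import numpy as np
from scipy.stats import pearsonr

# One value per vocalization type: mean early-negativity amplitude
# (microvolts) and mean offline arousal rating (hypothetical numbers).
negativity_amp = np.array([-2.1, -1.4, -3.0, -0.8, -2.6])
arousal_rating = np.array([6.2, 4.9, 7.1, 3.8, 6.5])

r, p = pearsonr(negativity_amp, arousal_rating)
# A negative r here means larger (more negative) amplitudes accompany
# higher arousal ratings.
print(f"r = {r:.2f}, p = {p:.3f}")
```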
Affiliation(s)
- Marina Scheumann, Institute of Zoology, University of Veterinary Medicine Hannover, Hannover, Germany
- Anna S. Hasting, Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Day Clinic for Cognitive Neurology, University Hospital Leipzig, Leipzig, Germany
- Elke Zimmermann, Institute of Zoology, University of Veterinary Medicine Hannover, Hannover, Germany
- Sonja A. Kotz, Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
26
Is laughter a better vocal change detector than a growl? Cortex 2017; 92:233-248. [DOI: 10.1016/j.cortex.2017.03.018] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2016] [Revised: 01/26/2017] [Accepted: 03/27/2017] [Indexed: 11/23/2022]
27
Wang L, Bao Y, Zhang J, Lin X, Yang L, Pöppel E, Zhou B. Scanning the world in three seconds: Mismatch negativity as an indicator of temporal segmentation. Psych J 2017; 5:170-6. [PMID: 27678482 DOI: 10.1002/pchj.144] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2016] [Revised: 08/09/2016] [Accepted: 08/10/2016] [Indexed: 11/11/2022]
Abstract
It has been shown recently that a temporal window of approximately 3 s has a modulatory effect on mismatch negativity (MMN). This special temporal window has been interpreted as representing the "subjective present" and as reflecting a temporal segmentation in behavioral and cognitive functions. A more detailed look into the temporal structure of the MMN seemed warranted, as group data may obscure the underlying mechanisms owing to high response variance. In this study, we tested one subject on 11 successive days at the same circadian phase using a passive auditory oddball paradigm with interstimulus intervals (ISIs) ranging from 1 s to 6 s. We observed a U-shaped function of the MMN, with the largest amplitudes for oddball stimuli at ISIs of 2 s and 3 s, flanked by smaller response amplitudes at shorter and longer ISIs. This result pattern can be explained by an oscillatory neural mechanism underlying the temporal modulation of the MMN. Besides confirming and substantiating temporal segmentation in sensory processing, the present study also demonstrates that a single-case study can be a useful and complementary tool in cognitive research.
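A minimal sketch of the underlying computation, assuming the usual deviant-minus-standard definition of the MMN (sampling rate, time window, and data below are placeholders, not the study's parameters):

```python
import numpy as np

# Per-ISI grand averages at a fronto-central electrode (placeholder data),
# sampled at fs Hz, epochs spanning -100..500 ms around tone onset.
fs = 500
isis = [1, 2, 3, 4, 5, 6]  # seconds
deviant = {isi: np.random.randn(300) for isi in isis}
standard = {isi: np.random.randn(300) for isi in isis}

def mmn_amplitude(dev, std, fs, win=(100, 250), baseline_ms=100):
    """MMN = mean of the deviant-minus-standard difference in the MMN window."""
    diff = dev - std
    off = int(baseline_ms * fs / 1000)
    i0 = off + int(win[0] * fs / 1000)
    i1 = off + int(win[1] * fs / 1000)
    return diff[i0:i1].mean()

amps = {isi: mmn_amplitude(deviant[isi], standard[isi], fs) for isi in isis}
# A U-shaped profile would show the most negative values at ISIs of 2-3 s.
print(amps)
```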
Affiliation(s)
- Lingyan Wang, School of Psychological and Cognitive Sciences, Key Laboratory of Machine Perception (Ministry of Education), and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Yan Bao, School of Psychological and Cognitive Sciences, Key Laboratory of Machine Perception (Ministry of Education), and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China; Institute of Medical Psychology and Human Science Center, Ludwig-Maximilians-University, Munich, Germany
- Jiyuan Zhang, School of Psychological and Cognitive Sciences, Key Laboratory of Machine Perception (Ministry of Education), and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Xiaoxiong Lin, School of Psychological and Cognitive Sciences, Key Laboratory of Machine Perception (Ministry of Education), and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Lang Yang, School of Psychological and Cognitive Sciences, Key Laboratory of Machine Perception (Ministry of Education), and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Ernst Pöppel, School of Psychological and Cognitive Sciences, Key Laboratory of Machine Perception (Ministry of Education), and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China; Institute of Medical Psychology and Human Science Center, Ludwig-Maximilians-University, Munich, Germany; Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Bin Zhou, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
28
Asutay E, Västfjäll D. Auditory attentional selection is biased by reward cues. Sci Rep 2016; 6:36989. [PMID: 27841363 PMCID: PMC5107919 DOI: 10.1038/srep36989] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2016] [Accepted: 10/24/2016] [Indexed: 11/10/2022] Open
Abstract
Auditory attention theories suggest that humans are able to decompose complex acoustic input into separate auditory streams, which then compete for attentional resources. How this attentional competition is influenced by the motivational salience of sounds is, however, not well understood. Here, we investigated whether a positive motivational value associated with sounds could bias attentional selection in an auditory detection task. Participants went through a reward-learning period, where correct attentional selection of one stimulus (CS+) led to higher rewards compared to another stimulus (CS-). We assessed the impact of reward learning by comparing perceptual sensitivity before and after the learning period, when CS+ and CS- were presented as distractors for a different target. Performance decreased after reward learning when CS+ was a distractor, while it increased when CS- was a distractor. Thus, the findings show that sounds associated with high rewards capture attention involuntarily. Additionally, when successful inhibition of a particular sound (CS-) was associated with high rewards, that sound became easier to ignore. The current findings have important implications for understanding the organizing principles of auditory perception and provide, for the first time, clear behavioral evidence for reward-dependent attentional learning in the auditory domain in humans.
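Perceptual sensitivity in such detection tasks is commonly expressed as d'; a minimal sketch assuming the standard signal-detection formula (the trial counts below are invented for illustration):

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity, with a simple correction so that
    hit/false-alarm rates of exactly 0 or 1 do not yield infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts with CS+ as distractor, before vs. after learning:
print(d_prime(42, 8, 12, 38))   # pre-learning
print(d_prime(35, 15, 18, 32))  # post-learning: lower d' = impaired detection
```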
Affiliation(s)
- Erkin Asutay, Behavioral Sciences and Learning, Linköping University, SE-581 83 Linköping, Sweden; Civil and Environmental Engineering, Chalmers University of Technology, SE-412 96 Gothenburg, Sweden
- Daniel Västfjäll, Behavioral Sciences and Learning, Linköping University, SE-581 83 Linköping, Sweden; Decision Research, 1201 Oak Street, Suite 200, Eugene, OR, USA
29
Schirmer A, Escoffier N, Cheng X, Feng Y, Penney TB. Detecting Temporal Change in Dynamic Sounds: On the Role of Stimulus Duration, Speed, and Emotion. Front Psychol 2016; 6:2055. [PMID: 26793161 PMCID: PMC4710701 DOI: 10.3389/fpsyg.2015.02055] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2015] [Accepted: 12/24/2015] [Indexed: 11/16/2022] Open
Abstract
For dynamic sounds, such as vocal expressions, duration often varies alongside speed. Compared to longer sounds, shorter sounds unfold more quickly. Here, we asked whether listeners implicitly use this confound when representing temporal regularities in their environment. In addition, we explored the role of emotions in this process. Using a mismatch negativity (MMN) paradigm, we asked participants to watch a silent movie while passively listening to a stream of task-irrelevant sounds. In Experiment 1, one surprised and one neutral vocalization were compressed and stretched to create stimuli of 378 and 600 ms duration. Stimuli were presented in four blocks, two of which used surprised and two of which used neutral expressions. In one surprised and one neutral block, short and long stimuli served as standards and deviants, respectively. In the other two blocks, the assignment of standards and deviants was reversed. We observed a climbing MMN-like negativity shortly after deviant onset, which suggests that listeners implicitly track sound speed and detect speed changes. Additionally, this MMN-like effect emerged earlier and was larger for long than short deviants, suggesting greater sensitivity to duration increments or slowing down than to decrements or speeding up. Last, deviance detection was facilitated in surprised relative to neutral blocks, indicating that emotion enhances temporal processing. Experiment 2 was comparable to Experiment 1 with the exception that sounds were spectrally rotated to remove vocal emotional content. This abolished the emotional processing benefit, but preserved the other effects. Together, these results provide insights into listener sensitivity to sound speed and raise the possibility that speed biases duration judgements implicitly in a feed-forward manner. Moreover, this bias may be amplified for duration increments relative to decrements and within an emotional relative to a neutral stimulus context.
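The abstract does not say how the stimuli were compressed and stretched; as a rough sketch, a phase-vocoder time stretch (here via librosa, with a hypothetical file name) changes duration and speed while preserving spectral content:

```python
import librosa

# Load a vocalization at its native sampling rate (file name is hypothetical).
y, sr = librosa.load("surprised_vocalization.wav", sr=None)
orig_dur = len(y) / sr

# Target durations from the abstract: 378 and 600 ms.
for target in (0.378, 0.600):
    rate = orig_dur / target  # rate > 1 shortens, rate < 1 lengthens
    y_stretched = librosa.effects.time_stretch(y, rate=rate)
    # y_stretched now lasts ~target seconds; pitch/spectrum are preserved,
    # so duration and unfolding speed covary, as in the study's stimuli.
```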
Affiliation(s)
- Annett Schirmer, Department of Psychology, National University of Singapore, Singapore; Life Sciences Institute Programme in Neurobiology and Ageing, National University of Singapore, Singapore; Duke-NUS Graduate Medical School, Singapore
- Nicolas Escoffier, Department of Psychology, National University of Singapore, Singapore; Life Sciences Institute Programme in Neurobiology and Ageing, National University of Singapore, Singapore
- Xiaoqin Cheng, Life Sciences Institute Programme in Neurobiology and Ageing, National University of Singapore, Singapore; Graduate School for Integrative Sciences and Engineering, National University of Singapore, Singapore
- Yenju Feng, Life Sciences Institute Programme in Neurobiology and Ageing, National University of Singapore, Singapore; Graduate School for Integrative Sciences and Engineering, National University of Singapore, Singapore
- Trevor B Penney, Department of Psychology, National University of Singapore, Singapore; Life Sciences Institute Programme in Neurobiology and Ageing, National University of Singapore, Singapore
30
The effects of stimulus complexity on the preattentive processing of self-generated and nonself voices: An ERP study. Cogn Affect Behav Neurosci 2015; 16:106-23. [PMID: 26415897 DOI: 10.3758/s13415-015-0376-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The ability to differentiate one's own voice from the voice of somebody else plays a critical role in successful verbal self-monitoring processes and in communication. However, most of the existing studies have only focused on the sensory correlates of self-generated voice processing, whereas the effects of attentional demands and stimulus complexity on self-generated voice processing remain largely unknown. In this study, we investigated the effects of stimulus complexity on the preattentive processing of self and nonself voice stimuli. Event-related potentials (ERPs) were recorded from 17 healthy males who watched a silent movie while ignoring prerecorded self-generated (SGV) and nonself (NSV) voice stimuli, consisting of a vocalization (vocalization category condition: VCC) or of a disyllabic word (word category condition: WCC). All voice stimuli were presented as standard and deviant events in four distinct oddball sequences. The mismatch negativity (MMN) ERP component peaked earlier for NSV than for SGV stimuli. Moreover, when compared with SGV stimuli, the P3a amplitude was increased for NSV stimuli in the VCC only, whereas in the WCC no significant differences were found between the two voice types. These findings suggest differences in the time course of automatic detection of a change in voice identity. In addition, they suggest that stimulus complexity modulates the magnitude of the orienting response to SGV and NSV stimuli, extending previous findings on self-voice processing.
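An MMN latency difference like the one reported is usually read off the deviant-minus-standard difference wave; a minimal sketch, with assumed sampling rate, search window, and placeholder data:

```python
import numpy as np

fs = 500  # Hz (assumed)
# Deviant-minus-standard difference waves per voice type (placeholder data),
# epochs spanning -100..500 ms around stimulus onset.
diff_sgv = np.random.randn(300)
diff_nsv = np.random.randn(300)

def peak_latency_ms(diff, fs, win=(100, 300), baseline_ms=100):
    """Latency of the most negative point in the MMN search window."""
    off = int(baseline_ms * fs / 1000)
    i0 = off + int(win[0] * fs / 1000)
    i1 = off + int(win[1] * fs / 1000)
    peak_idx = i0 + int(np.argmin(diff[i0:i1]))
    return (peak_idx - off) * 1000 / fs  # ms after stimulus onset

# An earlier NSV than SGV value would mirror the reported latency effect.
print(peak_latency_ms(diff_nsv, fs), peak_latency_ms(diff_sgv, fs))
```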
31
Auditory change-related cerebral responses and personality traits. Neurosci Res 2015; 103:34-9. [PMID: 26360233 DOI: 10.1016/j.neures.2015.08.005] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2014] [Revised: 08/31/2015] [Accepted: 08/31/2015] [Indexed: 11/24/2022]
Abstract
The rapid detection of changes in sensory information is an essential process for survival. Individual humans are thought to have their own intrinsic preattentive responsiveness to sensory changes. Here we sought to determine the relationship between auditory change-related responses and personality traits, using event-related potentials. A change-related response peaking at approximately 120 ms (Change-N1) was elicited by an abrupt decrease in sound pressure (10 dB) from the baseline (60 dB) of a continuous sound. Sixty-three healthy volunteers (14 females and 49 males) were recruited and assessed with the Temperament and Character Inventory (TCI) for personality traits. We investigated the relationship between Change-N1 values (amplitude and latency) and each TCI dimension. The Change-N1 amplitude was positively correlated with harm avoidance scores and negatively correlated with self-directedness scores, but not with other TCI dimensions. Since these two TCI dimensions are associated with anxiety disorders and depression, it is possible that the change-related response is affected by personality traits, particularly anxiety- or depression-related traits.
32
Pell MD, Rothermich K, Liu P, Paulmann S, Sethi S, Rigoulot S. Preferential decoding of emotion from human non-linguistic vocalizations versus speech prosody. Biol Psychol 2015; 111:14-25. [PMID: 26307467 DOI: 10.1016/j.biopsycho.2015.08.008] [Citation(s) in RCA: 76] [Impact Index Per Article: 8.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2015] [Revised: 08/04/2015] [Accepted: 08/19/2015] [Indexed: 11/26/2022]
Abstract
This study used event-related brain potentials (ERPs) to compare the time course of emotion processing from non-linguistic vocalizations versus speech prosody, to test whether vocalizations are treated preferentially by the neurocognitive system. Participants passively listened to vocalizations or pseudo-utterances conveying anger, sadness, or happiness as the EEG was recorded. Simultaneous effects of vocal expression type and emotion were analyzed for three ERP components (N100, P200, late positive component). Emotional vocalizations and speech were differentiated very early (N100) and vocalizations elicited stronger, earlier, and more differentiated P200 responses than speech. At later stages (450-700 ms), anger vocalizations evoked a stronger late positivity (LPC) than other vocal expressions, which was similar but delayed for angry speech. Individuals with high trait anxiety exhibited early, heightened sensitivity to vocal emotions (particularly vocalizations). These data provide new neurophysiological evidence that vocalizations, as evolutionarily primitive signals, are accorded precedence over speech-embedded emotions in the human voice.
Affiliation(s)
- M D Pell, School of Communication Sciences and Disorders, McGill University, Montreal, Canada; International Laboratory for Brain, Music, and Sound Research, Montreal, Canada
- K Rothermich, School of Communication Sciences and Disorders, McGill University, Montreal, Canada
- P Liu, School of Communication Sciences and Disorders, McGill University, Montreal, Canada
- S Paulmann, Department of Psychology and Centre for Brain Science, University of Essex, Colchester, United Kingdom
- S Sethi, School of Communication Sciences and Disorders, McGill University, Montreal, Canada
- S Rigoulot, International Laboratory for Brain, Music, and Sound Research, Montreal, Canada
33
Conde T, Gonçalves ÓF, Pinheiro AP. Paying attention to my voice or yours: An ERP study with words. Biol Psychol 2015; 111:40-52. [PMID: 26234962 DOI: 10.1016/j.biopsycho.2015.07.014] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2015] [Revised: 07/27/2015] [Accepted: 07/28/2015] [Indexed: 11/16/2022]
Abstract
Self-related stimuli, such as one's own face or name, seem to be processed differently from non-self stimuli and to involve greater attentional resources, as indexed by a larger amplitude of the P3 event-related potential (ERP) component. Nonetheless, the differential processing of self-related vs. non-self information using voice stimuli is still poorly understood. The present study investigated the electrophysiological correlates of processing self-generated vs. non-self voice stimuli when they are in the focus of attention. ERP data were recorded from twenty right-handed healthy males during an oddball task comprising pre-recorded self-generated (SGV) and non-self (NSV) voice stimuli. Both voices were used as standard and deviant stimuli in distinct experimental blocks. SGV was found to elicit a more negative N2 and a more positive P3 in comparison with NSV. No association was found between the ERP data and the voices' acoustic properties. These findings demonstrate both an earlier and a later attentional bias toward self-generated relative to non-self voice stimuli. They suggest that the representation of one's own voice may have greater affective salience than an unfamiliar voice, confirming the modulatory role of salience on the P3.
Affiliation(s)
- Tatiana Conde, Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Óscar F Gonçalves, Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Ana P Pinheiro, Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal; Cognitive Neuroscience Lab, Department of Psychiatry, Harvard Medical School, Boston, MA, USA
34
Chen X, Pan Z, Wang P, Zhang L, Yuan J. EEG oscillations reflect task effects for the change detection in vocal emotion. Cogn Neurodyn 2014; 9:351-8. [PMID: 25972983 DOI: 10.1007/s11571-014-9326-9] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2014] [Revised: 11/27/2014] [Accepted: 12/14/2014] [Indexed: 12/01/2022] Open
Abstract
How task focus affects the recognition of change in vocal emotion remains under debate. In this study, we investigated the role of task focus for change detection in emotional prosody by measuring changes in event-related electroencephalogram (EEG) power. EEG was recorded for prosodies with and without emotion change while subjects performed an emotion change detection task (explicit) and a visual probe detection task (implicit). We found that vocal emotion change induced theta event-related synchronization during 100-600 ms regardless of task focus. More importantly, vocal emotion change induced significant beta event-related desynchronization during 400-750 ms under the explicit but not the implicit task condition. These findings suggest that the detection of emotional change is independent of task focus, while the task focus effect in the neural processing of vocal emotion change is specific to the integration of emotional deviations.
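Event-related synchronization (ERS) and desynchronization (ERD) are conventionally expressed as the percentage change in band power relative to a pre-stimulus baseline; the sketch below assumes that convention (filter settings, windows, and data are placeholders, not the study's parameters):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500  # Hz (assumed)

def band_power(x, lo, hi, fs):
    """Instantaneous power proxy: squared band-pass-filtered signal
    (4th-order Butterworth)."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x) ** 2

def ers_erd_percent(epoch, lo, hi, fs, base_idx, event_idx):
    """Percentage power change vs. the pre-stimulus baseline:
    positive = synchronization (ERS), negative = desynchronization (ERD)."""
    p = band_power(epoch, lo, hi, fs)
    base = p[base_idx].mean()
    return 100 * (p[event_idx].mean() - base) / base

epoch = np.random.randn(800)   # one EEG epoch (placeholder data)
baseline = slice(0, 100)       # -200..0 ms at 500 Hz, onset at sample 100
theta_win = slice(150, 400)    # 100-600 ms post-stimulus
print(ers_erd_percent(epoch, 4, 8, fs, baseline, theta_win))
```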
Affiliation(s)
- Xuhai Chen, Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, 199 South Chang'an Road, Xi'an 710062, China; Key Laboratory of Modern Teaching Technology, Ministry of Education, Shaanxi Normal University, Xi'an 710062, China
- Zhihui Pan, Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, 199 South Chang'an Road, Xi'an 710062, China
- Ping Wang, Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, 199 South Chang'an Road, Xi'an 710062, China
- Lijie Zhang, Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, 199 South Chang'an Road, Xi'an 710062, China
- Jiajin Yuan, Key Laboratory of Cognition and Personality of Ministry of Education, School of Psychology, Southwest University, Chongqing 400715, China
35
Kastein HB, Kumar VA, Kandula S, Schmidt S. Auditory pre-experience modulates classification of affect intensity: evidence for the evaluation of call salience by a non-human mammal, the bat Megaderma lyra. Front Zool 2013; 10:75. [PMID: 24341839 PMCID: PMC3866277 DOI: 10.1186/1742-9994-10-75] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2013] [Accepted: 11/16/2013] [Indexed: 11/10/2022] Open
Abstract
INTRODUCTION Immediate responses towards emotional utterances in humans are determined by the acoustic structure and perceived relevance, i.e. salience, of the stimuli, and are controlled via central feedback that takes acoustic pre-experience into account. The present study explores whether the evaluation of stimulus salience in the acoustic communication of emotions is specifically human or has precursors in mammals. We created different pre-experiences by habituating bats (Megaderma lyra) to stimuli based on aggression calls or response calls from high- or low-intensity agonistic interactions, respectively. We then presented a test stimulus of the same call type but of the opposite affect intensity. We compared the modulation of response behaviour by affect intensity between the reciprocal experiments. RESULTS For aggression call stimuli, the bats responded to the dishabituation stimuli independently of affect intensity, emphasising the attention-grabbing function of this call type. For response call stimuli, the bats responded to a high affect intensity test stimulus after experiencing stimuli of low affect intensity, but transferred habituation to a low affect intensity test stimulus after experiencing stimuli of high affect intensity. This transfer of habituation was not due to over-habituation, as the bats responded to a frequency-shifted control stimulus. A direct comparison confirmed the asymmetric response behaviour in the reciprocal experiments. CONCLUSIONS Thus, the present study provides evidence not only for a discrimination of affect intensity but also for an evaluation of stimulus salience, suggesting that the basic assessment mechanisms involved in the perception of emotion are an ancestral trait in mammals.
Affiliation(s)
- Sabine Schmidt, Institute of Zoology, University of Veterinary Medicine Hannover Foundation, Bünteweg 17, Hannover 30559, Germany
36
Hung AY, Ahveninen J, Cheng Y. Atypical mismatch negativity to distressful voices associated with conduct disorder symptoms. J Child Psychol Psychiatry 2013; 54:1016-27. [PMID: 23701279 PMCID: PMC3749266 DOI: 10.1111/jcpp.12076] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 03/05/2013] [Indexed: 01/20/2023]
Abstract
BACKGROUND Although a general consensus holds that emotional reactivity in youth with conduct disorder (CD) symptoms is one of the main causes of subsequent aggression, it remains to be determined whether automatic emotional processing is altered in this population. METHODS We measured auditory event-related potentials (ERPs) in 20 young offenders and 20 controls, screened for DSM-IV criteria of CD and evaluated using the youth version of the Hare Psychopathy Checklist (PCL:YV), the State-Trait Anxiety Inventory (STAI), and the Barratt Impulsiveness Scale (BIS-11). In an oddball design, sadly or fearfully spoken 'deviant' syllables were randomly presented within a train of emotionally neutral 'standard' syllables. RESULTS In young offenders meeting CD criteria, the ERP component mismatch negativity (MMN), presumed to reflect preattentive auditory change detection, was significantly stronger for fearful than sad syllables. No MMN differences for fearful versus sad syllables were observed in controls. Analyses of nonvocal deviants, matched spectrally with the fearful and sad sounds, supported our interpretation that the MMN abnormalities in juvenile offenders were related to the emotional content of the sounds rather than to purely acoustic factors. Further, in the young offenders with CD symptoms, strong MMN amplitudes to fearful syllables were associated with high impulsive tendencies (PCL:YV, Factor 2). Higher trait and state anxiety, assessed by the STAI, were positively correlated with P3a amplitudes to fearful and sad syllables, respectively. The group differences in MMN/P3a patterns for emotional syllables versus nonvocal sounds may suggest a distinct route for the preattentive processing of species-specific emotional information in human auditory cortices. CONCLUSIONS Our results suggest that youths with CD symptoms may process distressful voices in an atypical fashion already at the preattentive level. This auditory processing abnormality correlated with increased impulsivity and anxiety. Our results may help to shed light on the neural mechanisms of aggression.
Affiliation(s)
- An-Yi Hung, Institute of Neuroscience and Brain Research Center, National Yang-Ming University, Taipei, Taiwan
- Jyrki Ahveninen, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital / Harvard Medical School, Charlestown, MA, USA
- Yawei Cheng, Institute of Neuroscience and Brain Research Center, National Yang-Ming University, Taipei, Taiwan; Department of Rehabilitation, National Yang-Ming University Hospital, Yilan, Taiwan
37
Wang XD, Wang M, Chen L. Hemispheric lateralization for early auditory processing of lexical tones: dependence on pitch level and pitch contour. Neuropsychologia 2013; 51:2238-44. [PMID: 23911775 DOI: 10.1016/j.neuropsychologia.2013.07.015] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2012] [Revised: 07/20/2013] [Accepted: 07/24/2013] [Indexed: 11/26/2022]
Abstract
In Mandarin Chinese, a tonal language, pitch level and pitch contour are two dimensions of lexical tones defined by their acoustic features (i.e., pitch patterns). A change in pitch level features a step change, whereas a change in pitch contour features a continuous variation in voice pitch. Currently, relatively little is known about the hemispheric lateralization of the processing of each dimension. To address this issue, we made whole-head electrical recordings of mismatch negativity in native Chinese speakers in response to contrasts of Chinese lexical tones in each dimension. We found that pre-attentive auditory processing of pitch level was clearly lateralized to the right hemisphere, whereas that of pitch contour tended to be lateralized to the left. We also found that the brain responded faster to pitch level than to pitch contour at a pre-attentive stage. These results indicate that the hemispheric lateralization of early auditory processing of lexical tones differs between pitch level and pitch contour, and they suggest an underlying inter-hemispheric interactive mechanism for this processing.
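Hemispheric dominance in such designs is often summarized with a lateralization index computed over homologous left and right electrode groups; a minimal sketch under that assumption (the amplitudes are invented for illustration):

```python
# Mean MMN amplitudes (microvolts) over left vs. right temporal electrode
# groups in, say, the pitch-level condition (hypothetical values).
mmn_left, mmn_right = -1.2, -2.4

def lateralization_index(left, right):
    """LI in [-1, 1] on absolute amplitudes: positive = right-lateralized,
    negative = left-lateralized, 0 = bilateral."""
    l, r = abs(left), abs(right)
    return (r - l) / (r + l)

print(lateralization_index(mmn_left, mmn_right))  # 0.33: right dominance
```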
Affiliation(s)
- Xiao-Dong Wang, CAS Key Laboratory of Brain Function and Diseases, School of Life Sciences, University of Science and Technology of China, Hefei 230027, China; School of Humanities and Social Sciences, Nanyang Technological University, 637332, Singapore
38
Doi H, Shinohara K. Electrophysiological responses in mothers to their own and unfamiliar child's gaze information. Brain Cogn 2012; 80:266-76. [PMID: 22940751 DOI: 10.1016/j.bandc.2012.07.009] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2011] [Revised: 07/15/2012] [Accepted: 07/18/2012] [Indexed: 10/27/2022]
Abstract
The attachment bond between a mother and her child is one of the most intimate human relationships. It is important for a mother to be sensitive to her child's gaze direction because exchanging gaze information plays a vital role in their relationship. Furthermore, recent studies have revealed differential neural activation patterns in mothers when they are presented with the faces of their own children versus other people's unfamiliar children. Based on these findings, in the present study we investigated whether mothers show differential neural responses to the gaze information of their own child compared to that of an unfamiliar child. To this end, event-related potentials elicited by the faces of one's own or an unfamiliar child with straight or averted gaze directions were measured using an oddball paradigm. The results showed that peak amplitudes of the N170 component were enlarged by viewing the straight gaze compared to the averted gaze of one's own child, but not of an unfamiliar child. When the gaze was directed straight, the P3 amplitude elicited by one's own child's face was smaller than that elicited by an unfamiliar child's face. P3s elicited by viewing one's own child's face with averted gaze and by viewing an unfamiliar child's face with straight gaze were positively correlated with state anxiety. These results bolster the hypothesis that processing the gaze information of one's own child elicits differential neural activation, compared to the gaze information of another person's child, at both the perceptual and evaluative stages of face processing.
Affiliation(s)
- Hirokazu Doi, Graduate School of Biomedical Sciences, Nagasaki University, 1-12-4 Sakamoto-cho, Nagasaki City, Nagasaki 852-8523, Japan
39
Escoffier N, Zhong J, Schirmer A, Qiu A. Emotional expressions in voice and music: same code, same effect? Hum Brain Mapp 2012; 34:1796-810. [PMID: 22505222 DOI: 10.1002/hbm.22029] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2010] [Revised: 11/15/2011] [Accepted: 12/05/2011] [Indexed: 11/09/2022] Open
Abstract
Scholars have documented similarities in the way voice and music convey emotions. Using functional magnetic resonance imaging (fMRI), we explored whether these similarities imply overlapping processing substrates. We asked participants to trace changes in either the emotion or the pitch of vocalizations and music using a joystick. Compared to music, vocalizations more strongly activated the superior and middle temporal cortex, cuneus, and precuneus. However, despite these differences, overlapping rather than differing regions emerged when comparing emotion with pitch tracing for music and vocalizations, respectively. Relative to pitch tracing, emotion tracing activated the medial superior frontal and anterior cingulate cortex regardless of stimulus type. Additionally, we observed emotion-specific effects in primary and secondary auditory cortex as well as in medial frontal cortex that were comparable for voice and music. Together, these results indicate that similar mechanisms support emotional inferences from vocalizations and music and that these mechanisms tap into a general system involved in social cognition.
Affiliation(s)
- Nicolas Escoffier, Department of Psychology, National University of Singapore, Singapore
40
Abstract
Experimental evidence suggests that emotions can both speed up and slow down the internal clock. Speeding up has been observed for to-be-timed emotional stimuli that have the capacity to sustain attention, whereas slowing down has been observed for to-be-timed neutral stimuli that are presented in the context of emotional distractors. These effects have been explained by mechanisms that involve changes in bodily arousal, attention, or sentience. A review of these mechanisms suggests both merits and difficulties in explaining the emotion-timing link. Therefore, a hybrid mechanism involving stimulus-specific sentient representations is proposed as a candidate for mediating emotional influences on time. According to this proposal, emotional events enhance sentient representations, which in turn support temporal estimates. Emotional stimuli with a larger share in one's sentience are then perceived as longer than neutral stimuli with a smaller share.
Affiliation(s)
- Annett Schirmer, Department of Psychology, National University of Singapore, Singapore
41
Lui MA, Penney TB, Schirmer A. Emotion effects on timing: attention versus pacemaker accounts. PLoS One 2011; 6:e21829. [PMID: 21799749 PMCID: PMC3140483 DOI: 10.1371/journal.pone.0021829] [Citation(s) in RCA: 51] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2011] [Accepted: 06/09/2011] [Indexed: 11/19/2022] Open
Abstract
Emotions change our perception of time. In the past, this has been attributed primarily to emotions speeding up an "internal clock" thereby increasing subjective time estimates. Here we probed this account using an S1/S2 temporal discrimination paradigm. Participants were presented with a stimulus (S1) followed by a brief delay and then a second stimulus (S2) and indicated whether S2 was shorter or longer in duration than S1. We manipulated participants' emotions by presenting a task-irrelevant picture following S1 and preceding S2. Participants were more likely to judge S2 as shorter than S1 when the intervening picture was emotional as compared to neutral. This effect held independent of S1 and S2 modality (Visual: Exps. 1, 2, & 3; Auditory: Exp. 4) and intervening picture valence (Negative: Exps. 1, 2 & 4; Positive: Exp. 3). Moreover, it was replicated in a temporal reproduction paradigm (Exp. 5) where a timing stimulus was preceded by an emotional or neutral picture and participants were asked to reproduce the duration of the timing stimulus. Taken together, these findings indicate that emotional experiences may decrease temporal estimates and thus raise questions about the suitability of internal clock speed explanations of emotion effects on timing. Moreover, they highlight attentional mechanisms as a viable alternative.
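The core dependent measure in Experiments 1-4 is the proportion of "S2 shorter" judgments per picture condition; a minimal sketch of that tabulation (the responses are simulated, and the authors' statistics may differ):

```python
import numpy as np
from scipy.stats import binomtest

# Trial-level responses (1 = "S2 shorter than S1"), split by whether the
# intervening picture was emotional or neutral; simulated placeholder data.
rng = np.random.default_rng(0)
emotional = rng.binomial(1, 0.62, size=120)
neutral = rng.binomial(1, 0.48, size=120)

for label, resp in [("emotional", emotional), ("neutral", neutral)]:
    p_shorter = resp.mean()
    test = binomtest(int(resp.sum()), n=len(resp), p=0.5)  # vs. chance
    print(f"{label}: P(shorter) = {p_shorter:.2f}, p = {test.pvalue:.3f}")
# A higher P(shorter) after emotional pictures matches the reported effect.
```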
Affiliation(s)
- Ming Ann Lui, Department of Psychology, National University of Singapore, Singapore; Institute of Cognitive Neuroscience, National Central University, Jhongli City, Taiwan
- Trevor B. Penney, Department of Psychology, National University of Singapore, Singapore
- Annett Schirmer, Department of Psychology, National University of Singapore, Singapore
42
Yao S, Liu X, Yang W, Wang X. Preattentive Processing Abnormalities in Chronic Pain: Neurophysiological Evidence from Mismatch Negativity. Pain Med 2011; 12:773-81. [DOI: 10.1111/j.1526-4637.2011.01097.x] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]