1. Day TC, Malik I, Boateng S, Hauschild KM, Lerner MD. Vocal Emotion Recognition in Autism: Behavioral Performance and Event-Related Potential (ERP) Response. J Autism Dev Disord 2024;54:1235-1248. PMID: 36694007. DOI: 10.1007/s10803-023-05898-8.
Abstract
Autistic youth display difficulties in emotion recognition, yet little research has examined behavioral and neural indices of vocal emotion recognition (VER). The current study examines behavioral and event-related potential (N100, P200, Late Positive Potential [LPP]) indices of VER in autistic and non-autistic youth. Participants (N = 164) completed an emotion recognition task, the Diagnostic Analysis of Nonverbal Accuracy (DANVA-2), which included VER, during EEG recording. The LPP amplitude was larger in response to high-intensity VER, and social cognition predicted VER errors. Verbal IQ, not autism, was related to VER errors. An interaction between VER intensity and social communication impairments revealed that these impairments were related to larger LPP amplitudes during low-intensity VER. Taken together, differences in VER may be due to higher-order cognitive processes rather than basic, early perception (N100, P200), and verbal cognitive abilities may underlie behavioral, yet occlude neural, differences in VER processing.
Affiliation(s)
- Talena C Day, Isha Malik, Sydney Boateng, Matthew D Lerner: Psychology Department, Stony Brook University, Psychology B-354, Stony Brook, NY 11794-2500, USA
2. Larrouy-Maestri P, Poeppel D, Pell MD. The Sound of Emotional Prosody: Nearly 3 Decades of Research and Future Directions. Perspect Psychol Sci 2024:17456916231217722. PMID: 38232303. DOI: 10.1177/17456916231217722.
Abstract
Emotional voices attract considerable attention. A search on any browser using "emotional prosody" as a key phrase leads to more than a million entries. Such interest is evident in the scientific literature as well; readers are reminded in the introductory paragraphs of countless articles of the great importance of prosody and that listeners easily infer the emotional state of speakers through acoustic information. However, despite decades of research on this topic and important achievements, the mapping between acoustics and emotional states is still unclear. In this article, we chart the rich literature on emotional prosody for both newcomers to the field and researchers seeking updates. We also summarize problems revealed by a sample of the literature of the last decades and propose concrete research directions for addressing them, ultimately to satisfy the need for more mechanistic knowledge of emotional prosody.
Affiliation(s)
- Pauline Larrouy-Maestri: Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; School of Communication Sciences and Disorders, McGill University; Max Planck-NYU Center for Language, Music, and Emotion, New York, New York
- David Poeppel: Max Planck-NYU Center for Language, Music, and Emotion, New York, New York; Department of Psychology and Center for Neural Science, New York University; Ernst Strüngmann Institute for Neuroscience, Frankfurt, Germany
- Marc D Pell: School of Communication Sciences and Disorders, McGill University; Centre for Research on Brain, Language, and Music, Montreal, Quebec, Canada
3. Nussbaum C, Schirmer A, Schweinberger SR. Electrophysiological Correlates of Vocal Emotional Processing in Musicians and Non-Musicians. Brain Sci 2023;13:1563. PMID: 38002523. PMCID: PMC10670383. DOI: 10.3390/brainsci13111563.
Abstract
Musicians outperform non-musicians in vocal emotion recognition, but the underlying mechanisms are still debated. Behavioral measures highlight the importance of auditory sensitivity towards emotional voice cues. However, it remains unclear whether and how this group difference is reflected at the brain level. Here, we compared event-related potentials (ERPs) to acoustically manipulated voices between musicians (n = 39) and non-musicians (n = 39). We used parameter-specific voice morphing to create and present vocal stimuli that conveyed happiness, fear, pleasure, or sadness, either in all acoustic cues or selectively in either pitch contour (F0) or timbre. Although the fronto-central P200 (150-250 ms) and N400 (300-500 ms) components were modulated by pitch and timbre, differences between musicians and non-musicians appeared only for a centro-parietal late positive potential (500-1000 ms). Thus, this study does not support an early auditory specialization in musicians but suggests instead that musicality affects the manner in which listeners use acoustic voice cues during later, controlled aspects of emotion evaluation.
Affiliation(s)
- Christine Nussbaum: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, 07743 Jena, Germany; Voice Research Unit, Friedrich Schiller University, 07743 Jena, Germany
- Annett Schirmer: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, 07743 Jena, Germany; Institute of Psychology, University of Innsbruck, 6020 Innsbruck, Austria
- Stefan R. Schweinberger: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, 07743 Jena, Germany; Voice Research Unit, Friedrich Schiller University, 07743 Jena, Germany; Swiss Center for Affective Sciences, University of Geneva, 1202 Geneva, Switzerland
4. Zhao W, Zhang Q, An H, Yun Y, Fan N, Yan S, Gan M, Tan S, Yang F. Vocal emotion perception in schizophrenia and its diagnostic significance. BMC Psychiatry 2023;23:760. PMID: 37848849. PMCID: PMC10580536. DOI: 10.1186/s12888-023-05110-2.
Abstract
BACKGROUND Cognitive and emotional impairment are among the core features of schizophrenia; assessment of vocal emotion recognition may facilitate the detection of schizophrenia. We explored the differences between cognitive and social aspects of emotion using vocal emotion recognition and detailed clinical characterization. METHODS One hundred six patients with schizophrenia (SCZ) and 230 healthy controls (HCs) were recruited. Clinical symptoms and social and cognitive functioning were assessed by trained clinical psychiatrists, and a vocal emotion perception test, including assessments of emotion recognition and emotional intensity, was conducted. RESULTS For emotion recognition, scores for all emotion categories were significantly lower in SCZ than in HCs. For emotional intensity, scores for anger, calmness, sadness, and surprise were significantly lower in SCZ. Vocal recognition patterns showed a trend toward unification and simplification in SCZ. A direct correlation was confirmed between vocal recognition impairment and cognition. In diagnostic tests, only the total score for vocal emotion recognition was a reliable index for the presence of schizophrenia. CONCLUSIONS This study shows that patients with schizophrenia are characterized by impaired vocal emotion perception. Furthermore, explicit and implicit vocal emotion perception processing in individuals with schizophrenia appear to be distinct entities. This study provides a voice recognition tool to facilitate and improve the diagnosis of schizophrenia.
Affiliation(s)
- Wenxuan Zhao, Huimei An, Yajun Yun, Ning Fan, Shaoxiao Yan, Mingyuan Gan, Shuping Tan, Fude Yang: Beijing HuiLongGuan Hospital, Peking University HuiLongGuan Clinical Medical School, No 7, HuangtuNandian, ChangPing District, Beijing 100096, China
- Qi Zhang: Wuxi Mental Health Center, Wuxi, China
5. Lin Y, Fan X, Chen Y, Zhang H, Chen F, Zhang H, Ding H, Zhang Y. Neurocognitive Dynamics of Prosodic Salience over Semantics during Explicit and Implicit Processing of Basic Emotions in Spoken Words. Brain Sci 2022;12:1706. PMID: 36552167. PMCID: PMC9776349. DOI: 10.3390/brainsci12121706.
Abstract
How language mediates emotional perception and experience is poorly understood. The present event-related potential (ERP) study examined the explicit and implicit processing of emotional speech to differentiate the relative influences of communication channel, emotion category and task type in the prosodic salience effect. Thirty participants (15 women) were presented with spoken words denoting happiness, sadness and neutrality in either the prosodic or semantic channel. They were asked to judge the emotional content (explicit task) and speakers' gender (implicit task) of the stimuli. Results indicated that emotional prosody (relative to semantics) triggered larger N100, P200 and N400 amplitudes with greater delta, theta and alpha inter-trial phase coherence (ITPC) and event-related spectral perturbation (ERSP) values in the corresponding early time windows, and continued to produce larger LPC amplitudes and faster responses during late stages of higher-order cognitive processing. The relative salience of prosody and semantics was modulated by emotion and task, though such modulatory effects varied across processing stages: the prosodic salience effect was reduced for sadness processing and in the implicit task during early auditory processing and decision-making, but reduced for happiness processing in the explicit task during conscious emotion processing. Additionally, across-trial synchronization of the delta, theta and alpha bands predicted the ERP components, with higher ITPC and ERSP values significantly associated with stronger N100, P200, N400 and LPC enhancement. These findings reveal the neurocognitive dynamics of emotional speech processing, with prosodic salience tied to stage-dependent emotion- and task-specific effects, offering insights into language and emotion processing from cross-linguistic/cultural and clinical perspectives.
Affiliation(s)
- Yi Lin, Xinran Fan, Yueqi Chen, Hongwei Ding: Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China
- Hao Zhang: School of Foreign Languages and Literature, Shandong University, Jinan 250100, China
- Fei Chen: School of Foreign Languages, Hunan University, Changsha 410012, China
- Hui Zhang: School of International Education, Shandong University, Jinan 250100, China
- Yang Zhang: Department of Speech-Language-Hearing Science & Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis, MN 55455, USA
- Correspondence: (H.D.); (Y.Z.); Tel.: +86-213-420-5664 (H.D.); +1-612-624-7818 (Y.Z.)
6. Martins I, Lima CF, Pinheiro AP. Enhanced salience of musical sounds in singers and instrumentalists. Cogn Affect Behav Neurosci 2022;22:1044-1062. PMID: 35501427. DOI: 10.3758/s13415-022-01007-x.
Abstract
Music training has been linked to facilitated processing of emotional sounds. However, most studies have focused on speech, and less is known about musicians' brain responses to other emotional sounds and in relation to instrument-specific experience. The current study combined behavioral and EEG methods to address two novel questions related to the perception of auditory emotional cues: whether and how long-term music training relates to a distinct emotional processing of nonverbal vocalizations and music; and whether distinct training profiles (vocal vs. instrumental) modulate brain responses to emotional sounds from early to late processing stages. Fifty-eight participants completed an EEG implicit emotional processing task, in which musical and vocal sounds differing in valence were presented as nontarget stimuli. After this task, participants explicitly evaluated the same sounds regarding the emotion being expressed, their valence, and arousal. Compared with nonmusicians, musicians displayed enhanced salience detection (P2), attention orienting (P3), and elaborative processing (Late Positive Potential) of musical (vs. vocal) sounds in event-related potential (ERP) data. The explicit evaluation of musical sounds was also distinct in musicians: accuracy in the emotional recognition of musical sounds was similar across valence types in musicians, who also judged musical sounds to be more pleasant and more arousing than nonmusicians did. Specific profiles of music training (singers vs. instrumentalists) did not relate to differences in the processing of vocal vs. musical sounds. Together, these findings reveal that music has a privileged status in the auditory system of long-term musically trained listeners, irrespective of their instrument-specific experience.
Affiliation(s)
- Inês Martins: CICPSI, Faculdade de Psicologia, Universidade de Lisboa, 1649-013, Lisbon, Portugal
- César F Lima: Instituto Universitário de Lisboa (ISCTE-IUL), Lisbon, Portugal
- Ana P Pinheiro: CICPSI, Faculdade de Psicologia, Universidade de Lisboa, 1649-013, Lisbon, Portugal
7. Giordano GM, Brando F, Perrottelli A, Di Lorenzo G, Siracusano A, Giuliani L, Pezzella P, Altamura M, Bellomo A, Cascino G, Del Casale A, Monteleone P, Pompili M, Galderisi S, Maj M. Tracing Links Between Early Auditory Information Processing and Negative Symptoms in Schizophrenia: An ERP Study. Front Psychiatry 2021;12:790745. PMID: 34987433. PMCID: PMC8721527. DOI: 10.3389/fpsyt.2021.790745.
Abstract
Background: Negative symptoms represent a heterogeneous dimension with a strong impact on the functioning of subjects with schizophrenia (SCZ). Five constructs are included in this dimension: anhedonia, asociality, avolition, blunted affect, and alogia. Factor analyses revealed that these symptoms cluster in two domains, the experiential domain (avolition, asociality, and anhedonia) and the expressive deficit (alogia and blunted affect), which might be linked to different neurobiological alterations. Few studies have investigated associations between N100, an electrophysiological index of early sensory processing, and negative symptoms, and they reported controversial results. However, none of these studies investigated electrophysiological correlates of the two negative symptom domains. Objectives: The aim of our study was to evaluate, within the multicenter study of the Italian Network for Research on Psychoses, the relationships between N100 and negative symptom domains in SCZ. Methods: Auditory N100 was analyzed in 114 chronic, stabilized patients with SCZ and 63 healthy controls (HCs). Negative symptoms were assessed with the Brief Negative Symptom Scale (BNSS). Repeated-measures ANOVA and correlation analyses were performed to evaluate differences between SCZ and HCs and the association of N100 features with negative symptoms. Results: Our findings demonstrated a significant N100 amplitude reduction in SCZ compared with HCs. In SCZ, N100 amplitude for standard stimuli was associated with negative symptoms, in particular with the expressive deficit domain. Within the expressive deficit, blunted affect and alogia showed the same pattern of correlation with N100. Conclusion: Our findings revealed an association between the expressive deficit and N100, suggesting that these negative symptoms might be related to deficits in early auditory processing in SCZ.
Affiliation(s)
- Giulia M. Giordano, Francesco Brando, Andrea Perrottelli, Luigi Giuliani, Pasquale Pezzella, Silvana Galderisi, Mario Maj: Department of Psychiatry, University of Campania “Luigi Vanvitelli”, Naples, Italy
- Giorgio Di Lorenzo, Alberto Siracusano: Department of Systems Medicine, University of Rome Tor Vergata, Rome, Italy
- Mario Altamura, Antonello Bellomo: Department of Clinical and Experimental Medicine, Psychiatry Unit, University of Foggia, Foggia, Italy
- Giammarco Cascino, Palmiero Monteleone: Department of Medicine, Surgery and Dentistry “Scuola Medica Salernitana”, Section of Neurosciences, University of Salerno, Salerno, Italy
- Antonio Del Casale, Maurizio Pompili: Department of Neurosciences, Mental Health and Sensory Organs, S. Andrea Hospital, University of Rome “La Sapienza”, Rome, Italy
8. Pinheiro AP, Anikin A, Conde T, Sarzedas J, Chen S, Scott SK, Lima CF. Emotional authenticity modulates affective and social trait inferences from voices. Philos Trans R Soc Lond B Biol Sci 2021;376:20200402. PMID: 34719249. PMCID: PMC8558771. DOI: 10.1098/rstb.2020.0402.
Abstract
The human voice is a primary tool for verbal and nonverbal communication. Studies on laughter emphasize a distinction between spontaneous laughter, which reflects a genuinely felt emotion, and volitional laughter, associated with more intentional communicative acts. Listeners can reliably differentiate the two. It remains unclear, however, if they can detect authenticity in other vocalizations, and whether authenticity determines the affective and social impressions that we form about others. Here, 137 participants listened to laughs and cries that could be spontaneous or volitional and rated them on authenticity, valence, arousal, trustworthiness and dominance. Bayesian mixed models indicated that listeners detect authenticity similarly well in laughter and crying. Speakers were also perceived to be more trustworthy, and in a higher arousal state, when their laughs and cries were spontaneous. Moreover, spontaneous laughs were evaluated as more positive than volitional ones, and we found that the same acoustic features predicted perceived authenticity and trustworthiness in laughter: high pitch, spectral variability and less voicing. For crying, associations between acoustic features and ratings were less reliable. These findings indicate that emotional authenticity shapes affective and social trait inferences from voices, and that the ability to detect authenticity in vocalizations is not limited to laughter. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part I)'.
Affiliation(s)
- Ana P. Pinheiro: CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013 Lisboa, Portugal
- Andrey Anikin: Equipe de Neuro-Ethologie Sensorielle (ENES)/Centre de Recherche en Neurosciences de Lyon (CRNL), University of Lyon/Saint-Etienne, CNRS UMR5292, INSERM UMR_S 1028, 42023 Saint-Etienne, France; Division of Cognitive Science, Lund University, 221 00 Lund, Sweden
- Tatiana Conde: CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013 Lisboa, Portugal
- João Sarzedas: CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013 Lisboa, Portugal
- Sinead Chen: National Taiwan University, Taipei City, 10617 Taiwan
- Sophie K. Scott: Institute of Cognitive Neuroscience, University College London, London WC1N 3AZ, UK
- César F. Lima: Institute of Cognitive Neuroscience, University College London, London WC1N 3AZ, UK; Instituto Universitário de Lisboa (ISCTE-IUL), Avenida das Forças Armadas, 1649-026 Lisboa, Portugal
9. The neural basis of authenticity recognition in laughter and crying. Sci Rep 2021;11:23750. PMID: 34887461. PMCID: PMC8660868. DOI: 10.1038/s41598-021-03131-z.
Abstract
Deciding whether others' emotions are genuine is essential for successful communication and social relationships. While previous fMRI studies suggested that differentiation between authentic and acted emotional expressions involves higher-order brain areas, the time course of authenticity discrimination is still unknown. To address this gap, we tested the impact of authenticity discrimination on event-related potentials (ERPs) related to emotion, motivational salience, and higher-order cognitive processing (N100, P200, and the late positive complex [LPC]), using vocalised non-verbal expressions of sadness (crying) and happiness (laughter) in a 32-participant, within-subject study. Using a 2-factor (authenticity, emotion) repeated-measures ANOVA, we show that the N100 amplitude was larger in response to authentic than acted vocalisations, particularly for cries, while the P200 amplitude was larger in response to acted vocalisations, particularly for laughs. We suggest these results point to two different mechanisms: (1) a larger N100 in response to authentic vocalisations is consistent with its link to emotional content and arousal (putatively larger amplitude for genuine emotional expressions); (2) a larger P200 in response to acted ones is in line with evidence relating it to motivational salience (putatively larger for ambiguous emotional expressions). Complementarily, a significant main effect of emotion was found on P200 and LPC amplitudes, in that both were larger for laughs than cries, regardless of authenticity. Overall, we provide the first electroencephalographic examination of authenticity discrimination and propose that authenticity processing of others' vocalisations is initiated early, alongside that of their emotional content or category, attesting to its evolutionary relevance for trust and bond formation.
10. Castiajo P, Pinheiro AP. Attention to voices is increased in non-clinical auditory verbal hallucinations irrespective of salience. Neuropsychologia 2021;162:108030. PMID: 34563552. DOI: 10.1016/j.neuropsychologia.2021.108030.
Abstract
Alterations in the processing of vocal emotions have been associated with both clinical and non-clinical auditory verbal hallucinations (AVH), suggesting that changes in the mechanisms underpinning voice perception contribute to AVH. These alterations seem to be more pronounced in psychotic patients with AVH when attention demands increase. However, it remains to be clarified how attention modulates the processing of vocal emotions in individuals without clinical diagnoses who report hearing voices but no related distress. Using an active auditory oddball task, the current study clarified how emotion and attention interact during voice processing as a function of AVH proneness, and examined the contributions of stimulus valence and intensity. Participants with vs. without non-clinical AVH were presented with target vocalizations differing in valence (neutral; positive; negative) and intensity (55 decibels (dB); 75 dB). The P3b amplitude was larger in response to louder (vs. softer) vocal targets irrespective of valence, and in response to negative (vs. neutral) vocal targets irrespective of intensity. Of note, the P3b amplitude was globally increased in response to vocal targets in participants reporting AVH, and failed to be modulated by valence and intensity in these participants. These findings suggest enhanced voluntary attention to changes in vocal expressions but reduced discrimination of salient and non-salient cues. A decreased sensitivity to the salience cues of vocalizations could contribute to increased cognitive control demands, setting the stage for AVH.
Affiliation(s)
- Paula Castiajo: Psychological Neuroscience Laboratory, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Ana P Pinheiro: Faculdade de Psicologia, CICPSI, Universidade de Lisboa, Lisboa, Portugal; Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
11. Gong B, Li Q, Zhao Y, Wu C. Auditory emotion recognition deficits in schizophrenia: A systematic review and meta-analysis. Asian J Psychiatr 2021;65:102820. PMID: 34482183. DOI: 10.1016/j.ajp.2021.102820.
Abstract
BACKGROUND Auditory emotion recognition (AER) deficits refer to the abnormal identification and interpretation of tonal or prosodic features that transmit emotional information in sounds or speech. Evidence suggests that AER deficits are related to the pathology of schizophrenia. However, the effect size of the deficit in specific emotional category recognition in schizophrenia, and its association with psychotic symptoms, had never been evaluated through a meta-analysis. METHODS A systematic search for literature published in English or Chinese until November 30, 2020 was conducted in the PubMed, Embase, Web of Science, PsycINFO, China National Knowledge Infrastructure (CNKI), WanFang, and Weipu databases. AER differences between patients and healthy controls (HCs) were assessed by standardized mean differences (SMDs). Subgroup analyses were conducted for the type of emotional stimuli and the diagnosis of schizophrenia or schizoaffective disorder (Sch/SchA). Meta-regression analyses were performed to assess the influence of patients' age, sex, illness duration, antipsychotic dose, and positive and negative symptoms on the study SMDs. RESULTS Eighteen studies comprising 615 patients (Sch/SchA) and 488 HCs were included in the meta-analysis. Patients exhibited moderate deficits in recognizing neutral, happy, sad, angry, fearful, disgusted, and surprised emotions. Neither the semantic information in the auditory stimuli nor the diagnostic subtype affected AER deficits in schizophrenia. Deficits in sadness, anger, and disgust AER were each positively associated with negative symptoms in schizophrenia. CONCLUSIONS Patients with schizophrenia have moderate AER deficits, which were associated with negative symptoms. Rehabilitation focusing on improving AER abilities may help improve negative symptoms and the long-term prognosis of schizophrenia.
Affiliation(s)
- Bingyan Gong, Qiuhong Li, Yiran Zhao, Chao Wu: Peking University School of Nursing, Beijing 100191, China
12. Sensory attenuation is modulated by the contrasting effects of predictability and control. Neuroimage 2021;237:118103. PMID: 33957233. DOI: 10.1016/j.neuroimage.2021.118103.
Abstract
Self-generated stimuli have been found to elicit a reduced sensory response compared with externally-generated stimuli. However, much of the literature has not adequately controlled for differences in the temporal predictability and temporal control of stimuli. In two experiments, we compared the N1 (and P2) components of the auditory-evoked potential to self- and externally-generated tones that differed with respect to these two factors. In Experiment 1 (n = 42), we found that increasing temporal predictability reduced N1 amplitude in a manner that may often account for the observed reduction in sensory response to self-generated sounds. We also observed that reducing temporal control over the tones resulted in a reduction in N1 amplitude. The contrasting effects of temporal predictability and temporal control on N1 amplitude meant that sensory attenuation prevailed when controlling for each. Experiment 2 (n = 38) explored the potential effect of selective attention on the results of Experiment 1 by modifying task requirements such that similar levels of attention were allocated to the visual stimuli across conditions. The results of Experiment 2 replicated those of Experiment 1, and suggested that the observed effects of temporal control and sensory attenuation were not driven by differences in attention. Given that self- and externally-generated sensations commonly differ with respect to both temporal predictability and temporal control, findings of the present study may necessitate a re-evaluation of the experimental paradigms used to study sensory attenuation.
13
Luo H, Zhao Y, Fan F, Fan H, Wang Y, Qu W, Wang Z, Tan Y, Zhang X, Tan S. A bottom-up model of functional outcome in schizophrenia. Sci Rep 2021; 11:7577. [PMID: 33828168] [PMCID: PMC8027854] [DOI: 10.1038/s41598-021-87172-4]
Abstract
Schizophrenia results in poor functional outcomes owing to numerous factors. This study provides the first test of a bottom-up causal model of functional outcome in schizophrenia, using neurocognition, vocal emotional cognition, alexithymia, and negative symptoms as predictors of functional outcome. We investigated a cross-sectional sample of 135 individuals with schizophrenia and 78 controls. Using a series of structural equation modelling analyses, a single pathway was generated among scores from the MATRICS Consensus Cognitive Battery (MCCB), vocal emotion recognition test, Toronto Alexithymia Scale (TAS), Brief Negative Symptom Scale, and the Personal and Social Performance Scale. The scores for each dimension of the MCCB in the schizophrenia group were significantly lower than those in the control group. The recognition accuracy for different emotions (anger, disgust, fear, sadness, surprise, and satire, but not calm) was significantly lower in the schizophrenia group than in the control group. Moreover, the scores on the three dimensions of the TAS were significantly higher in the schizophrenia group than in the control group. In path analysis modelling, the proposed bottom-up causal model showed a strong fit with the data and formed a single pathway, from neurocognition to vocal emotional cognition, to alexithymia, to negative symptoms, and to poor functional outcomes. The study results strongly support the proposed bottom-up causal model of functional outcome in schizophrenia. The model could be used to better understand the causal factors related to functional outcome, as well as to develop intervention strategies that improve functional outcomes in schizophrenia.
Affiliation(s)
- Hongge Luo
- School of Public Health, North China University of Science and Technology, Tangshan, China
- College of Psychology, North China University of Science and Technology, Tangshan, China
- Yanli Zhao
- Beijing Huilongguan Hospital, Peking University Huilongguan Clinical Medical School, Beijing, China
- Fengmei Fan
- Beijing Huilongguan Hospital, Peking University Huilongguan Clinical Medical School, Beijing, China
- Hongzhen Fan
- Beijing Huilongguan Hospital, Peking University Huilongguan Clinical Medical School, Beijing, China
- Yunhui Wang
- Beijing Huilongguan Hospital, Peking University Huilongguan Clinical Medical School, Beijing, China
- Wei Qu
- Beijing Huilongguan Hospital, Peking University Huilongguan Clinical Medical School, Beijing, China
- Zhiren Wang
- Beijing Huilongguan Hospital, Peking University Huilongguan Clinical Medical School, Beijing, China
- Yunlong Tan
- Beijing Huilongguan Hospital, Peking University Huilongguan Clinical Medical School, Beijing, China
- Xiujun Zhang
- School of Public Health, North China University of Science and Technology, Tangshan, China
- Shuping Tan
- Beijing Huilongguan Hospital, Peking University Huilongguan Clinical Medical School, Beijing, China
14
Acoustic salience in emotional voice perception and its relationship with hallucination proneness. Cogn Affect Behav Neurosci 2021; 21:412-425. [DOI: 10.3758/s13415-021-00864-2]
15
Portnova GV, Maslennikova AV, Zakharova NV, Martynova OV. The Deficit of Multimodal Perception of Congruent and Non-Congruent Fearful Expressions in Patients with Schizophrenia: The ERP Study. Brain Sci 2021; 11:96. [PMID: 33451054] [PMCID: PMC7828540] [DOI: 10.3390/brainsci11010096]
Abstract
Emotional dysfunction, including flat affect and emotional perception deficits, is a specific symptom of schizophrenia. We used a modified multimodal oddball paradigm with fearful facial expressions accompanied by congruent and non-congruent emotional vocalizations (sounds of women screaming and laughing) to investigate the impairment of emotional perception and reactions to other people's emotions in schizophrenia. We compared subjective ratings of emotional state and event-related potentials (ERPs) in response to congruent and non-congruent stimuli in patients with schizophrenia and healthy controls. The results showed altered multimodal perception of fearful stimuli in patients with schizophrenia. The amplitude of the N50 was significantly higher for non-congruent stimuli than for congruent ones in the control group, and did not differ in patients. The P100 and N200 amplitudes were higher in response to non-congruent stimuli in patients than in controls, implying impaired sensory gating in schizophrenia. The observed decrease of P3a and P3b amplitudes in patients could be associated with reduced attention, lower emotional arousal, or incorrect interpretation of emotional valence, as patients differed from healthy controls in the emotion scores of non-congruent stimuli. Difficulty in identifying the incoherence of the facial and auditory components of emotional expression could be significant in understanding the psychopathology of schizophrenia.
Affiliation(s)
- Galina V. Portnova
- Institute of Higher Nervous Activity and Neurophysiology of RAS, 117485 Moscow, Russia
- The Pushkin State Russian Language Institute, 117485 Moscow, Russia
- Aleksandra V. Maslennikova
- Institute of Higher Nervous Activity and Neurophysiology of RAS, 117485 Moscow, Russia
- Psychiatric Clinical Hospital No. 1 Named after N.A. Alekseev of the Moscow City Health Department, 117152 Moscow, Russia
- Natalya V. Zakharova
- Psychiatric Clinical Hospital No. 1 Named after N.A. Alekseev of the Moscow City Health Department, 117152 Moscow, Russia
- Olga V. Martynova
- Institute of Higher Nervous Activity and Neurophysiology of RAS, 117485 Moscow, Russia
- Centre for Cognition and Decision Making, Institute for Cognitive Neuroscience, National Research University Higher School of Economics, 109548 Moscow, Russia
16
Mazer P, Macedo I, Paiva TO, Ferreira-Santos F, Pasion R, Barbosa F, Almeida P, Silveira C, Cunha-Reis C, Marques-Teixeira J. Abnormal Habituation of the Auditory Event-Related Potential P2 Component in Patients With Schizophrenia. Front Psychiatry 2021; 12:630406. [PMID: 33815168] [PMCID: PMC8012906] [DOI: 10.3389/fpsyt.2021.630406]
Abstract
Auditory event-related potentials (ERPs) may serve as diagnostic tools for schizophrenia and inform on the susceptibility for this condition. In particular, examination of the N1 and P2 components of the auditory ERP may shed light on impairments of information processing streams in schizophrenia. However, the habituation properties (i.e., decreasing amplitude with the repeated presentation of an auditory stimulus) of these components remain poorly studied compared with other auditory ERPs. Therefore, the current study used a roving paradigm to assess the modulation and habituation of N1 and P2 to simple (pure tones) and complex sounds (human voices and bird songs) in 26 first-episode patients with schizophrenia and 27 healthy participants. To explore the habituation properties of these ERPs, we measured the decrease in amplitude over a train of seven repetitions of the same stimulus (either bird songs or human voices). We observed that, for human voices, N1 and P2 amplitudes decreased linearly from stimulus 1 to 7 in both groups. For bird songs, only the P2 component decreased in amplitude with repeated presentation, and only in the control group. This suggests that patients did not show a fading of neural responses to repeated bird songs, reflecting abnormal habituation to this stimulus, which could reflect an inability to inhibit irrelevant or redundant information at later stages of auditory processing. In turn, patients with schizophrenia appear to have preserved auditory processing of human voices.
Affiliation(s)
- Prune Mazer
- Laboratory of Neuropsychophysiology, Faculty of Psychology and Education Sciences of the University of Porto, Porto, Portugal
- School of Health, Polytechnic Institute of Porto, Porto, Portugal
- Inês Macedo
- Laboratory of Neuropsychophysiology, Faculty of Psychology and Education Sciences of the University of Porto, Porto, Portugal
- Tiago O Paiva
- Laboratory of Neuropsychophysiology, Faculty of Psychology and Education Sciences of the University of Porto, Porto, Portugal
- Fernando Ferreira-Santos
- Laboratory of Neuropsychophysiology, Faculty of Psychology and Education Sciences of the University of Porto, Porto, Portugal
- Rita Pasion
- Laboratory of Neuropsychophysiology, Faculty of Psychology and Education Sciences of the University of Porto, Porto, Portugal
- Fernando Barbosa
- Laboratory of Neuropsychophysiology, Faculty of Psychology and Education Sciences of the University of Porto, Porto, Portugal
- Pedro Almeida
- Faculty of Law, School of Criminology and Interdisciplinary Research Center on Crime, Justice and Security, University of Porto, Porto, Portugal
- Celeste Silveira
- Faculty of Medicine, University of Porto, Porto, Portugal
- Psychiatry Department, Hospital S. João, Porto, Portugal
- Cassilda Cunha-Reis
- Laboratory of Neuropsychophysiology, Faculty of Psychology and Education Sciences of the University of Porto, Porto, Portugal
- João Marques-Teixeira
- Laboratory of Neuropsychophysiology, Faculty of Psychology and Education Sciences of the University of Porto, Porto, Portugal
17
Lin Y, Ding H, Zhang Y. Multisensory Integration of Emotion in Schizophrenic Patients. Multisens Res 2020; 33:865-901. [PMID: 33706267] [DOI: 10.1163/22134808-bja10016]
Abstract
Multisensory integration (MSI) of emotion has been increasingly recognized as an essential element of schizophrenic patients' impairments, leading to the breakdown of their interpersonal functioning. The present review provides an updated synopsis of schizophrenic patients' MSI abilities in emotion processing by examining relevant behavioral and neurological research. Existing behavioral studies have adopted well-established experimental paradigms to investigate how participants understand multisensory emotion stimuli and interpret their reciprocal interactions. Yet findings remain controversial with regard to congruence-induced facilitation effects, modality dominance effects, and generalized vs. specific impairment hypotheses. Such inconsistencies are likely due to differences and variations in experimental manipulations, participants' clinical symptomatology, and cognitive abilities. Recent electrophysiological and neuroimaging research has revealed aberrant indices in event-related potential (ERP) and brain activation patterns, further suggesting impaired temporal processing and dysfunctional brain regions, connectivity, and circuitries at different stages of MSI in emotion processing. The limitations of existing studies and implications for future MSI work are discussed in light of research designs and techniques, study samples and stimuli, and clinical applications.
Affiliation(s)
- Yi Lin
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, 800 Dong Chuan Rd., Minhang District, Shanghai, 200240, China
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, 800 Dong Chuan Rd., Minhang District, Shanghai, 200240, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences & Center for Neurobehavioral Development, University of Minnesota, Twin Cities, MN 55455, USA
18
Topalidis P, Zinchenko A, Gädeke JC, Föcker J. The role of spatial selective attention in the processing of affective prosodies in congenitally blind adults: An ERP study. Brain Res 2020; 1739:146819. [PMID: 32251662] [DOI: 10.1016/j.brainres.2020.146819]
Abstract
The question of whether spatial selective attention is necessary in order to process vocal affective prosody has been discussed controversially in sighted individuals: whereas some studies argue that attention is required in order to process emotions, other studies conclude that vocal prosody can be processed even outside the focus of spatial selective attention. Here, we asked whether spatial selective attention is necessary for the processing of affective prosodies after visual deprivation from birth. For this purpose, pseudowords spoken in happy, neutral, fearful or threatening prosodies were presented at the left or right loudspeaker. Congenitally blind individuals (N = 8) and sighted controls (N = 13) had to attend to one of the loudspeakers and detect rare pseudowords presented at the attended loudspeaker during EEG recording. Emotional prosody of the pseudowords was task-irrelevant. Blind individuals outperformed sighted controls by being more efficient in detecting deviant pseudowords at the attended loudspeaker. A higher auditory N1 amplitude was observed in blind individuals compared with sighted controls. Additionally, sighted controls showed enhanced attention-related ERP amplitudes in response to fearful and threatening voices during the time range of the N1. By contrast, blind individuals revealed enhanced ERP amplitudes in attended relative to unattended locations irrespective of affective valence in all time windows (110-350 ms). These effects were mainly observed at posterior electrodes. The results provide evidence for "emotion-general" auditory spatial selective attention effects in congenitally blind individuals and suggest a potential reorganization of the voice processing brain system following visual deprivation from birth.
Affiliation(s)
- Pavlos Topalidis
- Department of Psychology and Educational Sciences, Ludwig Maximilian University, Munich, Germany
- Artyom Zinchenko
- Department of Psychology and Educational Sciences, Ludwig Maximilian University, Munich, Germany
- Julia C Gädeke
- Biological Psychology and Neuropsychology, University of Hamburg, Germany
- Julia Föcker
- Biological Psychology and Neuropsychology, University of Hamburg, Germany
- School of Social Sciences, University of Lincoln, United Kingdom
19
The Influence of Maternal Schizotypy on the Perception of Facial Emotional Expressions During Infancy: An Event-Related Potential Study. Infant Behav Dev 2020; 58:101390. [DOI: 10.1016/j.infbeh.2019.101390]
20
Leshem R, Icht M, Bentzur R, Ben-David BM. Processing of Emotions in Speech in Forensic Patients With Schizophrenia: Impairments in Identification, Selective Attention, and Integration of Speech Channels. Front Psychiatry 2020; 11:601763. [PMID: 33281649] [PMCID: PMC7691229] [DOI: 10.3389/fpsyt.2020.601763]
Abstract
Individuals with schizophrenia show deficits in the recognition of emotions, which may increase the risk of violence. This study explored how forensic patients with schizophrenia process spoken emotion by: (a) identifying emotions expressed in prosodic and semantic content separately, (b) selectively attending to one speech channel while ignoring the other, and (c) integrating the prosodic and semantic channels, compared with non-clinical controls. Twenty-one forensic patients with schizophrenia and 21 matched controls listened to sentences conveying four emotions (anger, happiness, sadness, and neutrality) presented in semantic or prosodic channels, in different combinations. They were asked to rate how much they agreed that the sentences conveyed a predefined emotion, focusing on one channel or on the sentence as a whole. Forensic patients with schizophrenia showed intact identification and integration of spoken emotions, but their ratings indicated reduced discrimination, larger failures of selective attention, and under-rating of negative emotions compared with controls. This finding does not support previous reports of an inclination to interpret social situations negatively among individuals with schizophrenia. Finally, the current results may guide rehabilitation approaches matched to the pattern of auditory emotional processing presented by forensic patients with schizophrenia, improving social interactions and quality of life.
Affiliation(s)
- Rotem Leshem
- Department of Criminology, Bar-Ilan University, Ramat Gan, Israel
- Michal Icht
- Department of Communication Disorders, Ariel University, Ariel, Israel
- Roni Bentzur
- Psychiatric Division, Sheba Medical Center, Tel Hashomer, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Toronto Rehabilitation Institute, University Health Networks (UHN), Toronto, ON, Canada
21
Interaction of emotion and cognitive control along the psychosis continuum: A critical review. Int J Psychophysiol 2020; 147:156-175. [DOI: 10.1016/j.ijpsycho.2019.11.004]
22
Castiajo P, Pinheiro AP. Decoding emotions from nonverbal vocalizations: How much voice signal is enough? Motiv Emot 2019. [DOI: 10.1007/s11031-019-09783-9]
23
Burred JJ, Ponsot E, Goupil L, Liuni M, Aucouturier JJ. CLEESE: An open-source audio-transformation toolbox for data-driven experiments in speech and music cognition. PLoS One 2019; 14:e0205943. [PMID: 30947281] [PMCID: PMC6448843] [DOI: 10.1371/journal.pone.0205943]
Abstract
Over the past few years, the field of visual social cognition and face processing has been dramatically impacted by a series of data-driven studies employing computer-graphics tools to synthesize arbitrary meaningful facial expressions. In the auditory modality, reverse correlation is traditionally used to characterize sensory processing at the level of spectral or spectro-temporal stimulus properties, but not higher-level cognitive processing of, e.g., words, sentences or music, for lack of tools able to manipulate the stimulus dimensions that are relevant for these processes. Here, we present an open-source audio-transformation toolbox, called CLEESE, able to systematically randomize the prosody/melody of existing speech and music recordings. CLEESE works by cutting recordings into small successive time segments (e.g., every successive 100 milliseconds in a spoken utterance) and applying a random parametric transformation to each segment's pitch, duration or amplitude, using a new Python-language implementation of the phase-vocoder digital audio technique. We present two applications of the tool to generate stimuli for studying intonation processing of interrogative vs. declarative speech, and rhythm processing of sung melodies.
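The segment-wise randomization the abstract describes can be illustrated with a minimal NumPy sketch. This is not CLEESE's actual API: the function `random_amplitude_profile` and its parameters are hypothetical, and only the amplitude dimension is shown (CLEESE's pitch and duration transformations additionally require a phase vocoder). One random gain breakpoint is drawn per time segment and linearly interpolated across samples, so the amplitude contour varies smoothly rather than jumping at segment boundaries.

```python
import numpy as np

def random_amplitude_profile(signal, sr, seg_ms=100.0, sigma_db=3.0, seed=None):
    """Apply a random piecewise-linear gain profile to a 1-D signal.

    One gain breakpoint is drawn per `seg_ms` window (in dB, zero-mean,
    std `sigma_db`), then linearly interpolated across all samples, in the
    spirit of segment-based prosody randomization.
    """
    rng = np.random.default_rng(seed)
    seg_len = int(sr * seg_ms / 1000.0)
    # One breakpoint per segment, plus one past the end so interpolation covers the tail
    n_segs = max(2, int(np.ceil(len(signal) / seg_len)) + 1)
    gains_db = rng.normal(0.0, sigma_db, size=n_segs)
    breakpoints = np.arange(n_segs) * seg_len
    gains = 10.0 ** (np.interp(np.arange(len(signal)), breakpoints, gains_db) / 20.0)
    return signal * gains

# Example: one second of a 220 Hz tone at 16 kHz with randomized amplitude contour
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)
out = random_amplitude_profile(tone, sr, seed=0)
```

Drawing the breakpoints in dB rather than in linear amplitude keeps the perturbation perceptually symmetric around the original level, which is the usual choice for this kind of stimulus manipulation.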
Affiliation(s)
- Emmanuel Ponsot
- Science and Technology of Music and Sound (UMR9912, IRCAM/CNRS/Sorbonne Université), Paris, France
- Laboratoire des Systèmes Perceptifs (CNRS UMR 8248) and Département d'études cognitives, École Normale Supérieure, PSL Research University, Paris, France
- Louise Goupil
- Science and Technology of Music and Sound (UMR9912, IRCAM/CNRS/Sorbonne Université), Paris, France
- Marco Liuni
- Science and Technology of Music and Sound (UMR9912, IRCAM/CNRS/Sorbonne Université), Paris, France
- Jean-Julien Aucouturier
- Science and Technology of Music and Sound (UMR9912, IRCAM/CNRS/Sorbonne Université), Paris, France
24
Altered attentional processing of happy prosody in schizophrenia. Schizophr Res 2019; 206:217-224. [PMID: 30554811] [DOI: 10.1016/j.schres.2018.11.024]
Abstract
BACKGROUND: Abnormalities in emotional prosody processing have been consistently reported in schizophrenia. Emotionally salient changes in vocal expressions attract attention in social interactions. However, it remains to be clarified how attention and emotion interact during voice processing in schizophrenia. The current study addressed this question by examining the P3b event-related potential (ERP) component. METHOD: The P3b was elicited with a modified oddball task, in which frequent (p = .84) neutral stimuli were intermixed with infrequent (p = .16) task-relevant emotional (happy or angry) targets. Prosodic speech was presented in two conditions: with intelligible semantic content (semantic content condition - SCC) or with unintelligible semantic content (prosody-only condition - POC). Fifteen chronic schizophrenia patients and 15 healthy controls were instructed to silently count the target vocal sounds. RESULTS: Compared to controls, P3b amplitude was specifically reduced for happy prosodic stimuli in schizophrenia, irrespective of semantic status. Groups did not differ in the processing of neutral standards or angry targets. DISCUSSION: The selectively reduced P3b for happy prosody in schizophrenia suggests that top-down attentional resources were less strongly engaged by positive relative to negative prosody, reflecting alterations in the evaluation of the emotional salience of the voice. These results highlight the role played by higher-order processes in emotional prosody dysfunction in schizophrenia.
25
Burra N, Kerzel D, Munoz Tord D, Grandjean D, Ceravolo L. Early spatial attention deployment toward and away from aggressive voices. Soc Cogn Affect Neurosci 2019; 14:73-80. [PMID: 30418635] [PMCID: PMC6318470] [DOI: 10.1093/scan/nsy100]
Abstract
Salient vocalizations, especially aggressive voices, are believed to attract attention due to an automatic threat-detection system. However, studies assessing the temporal dynamics of auditory spatial attention to aggressive voices are missing. Using event-related potential markers of auditory spatial attention (N2ac and LPCpc), we show that attentional processing of threatening vocal signals is enhanced at two different stages of auditory processing. As early as 200 ms post-stimulus onset, attentional orienting/engagement is enhanced for threatening as compared with happy vocal signals. Subsequently, as early as 400 ms post-stimulus onset, the reorienting of auditory attention to the center of the screen (or disengagement from the target) is enhanced. This latter effect is consistent with the need to optimize perception by balancing the intake of stimulation from the left and right auditory space. Our results extend the scope of theories from the visual to the auditory modality by showing that threatening stimuli also bias early spatial attention in the auditory modality. Attentional enhancement was present in female but not in male participants.
Affiliation(s)
- Nicolas Burra
- Faculté de Psychologie et des Sciences de l'Education, University of Geneva, Geneva, Switzerland
- Dirk Kerzel
- Faculté de Psychologie et des Sciences de l'Education, University of Geneva, Geneva, Switzerland
- David Munoz Tord
- Faculté de Psychologie et des Sciences de l'Education, University of Geneva, Geneva, Switzerland
- Didier Grandjean
- Faculté de Psychologie et des Sciences de l'Education, University of Geneva, Geneva, Switzerland
- Neuroscience of Emotion and Affective Dynamics Lab, University of Geneva, Geneva, Switzerland
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
- Leonardo Ceravolo
- Faculté de Psychologie et des Sciences de l'Education, University of Geneva, Geneva, Switzerland
- Neuroscience of Emotion and Affective Dynamics Lab, University of Geneva, Geneva, Switzerland
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
26
Lin Y, Ding H, Zhang Y. Emotional Prosody Processing in Schizophrenic Patients: A Selective Review and Meta-Analysis. J Clin Med 2018; 7:363. [PMID: 30336573] [PMCID: PMC6210777] [DOI: 10.3390/jcm7100363]
Abstract
Emotional prosody (EP) has been increasingly recognized as an important area of dysfunction in schizophrenic patients' language use and social communication. The present review aims to provide an updated synopsis of emotional prosody processing (EPP) in schizophrenic disorders, with a specific focus on performance characteristics, influential factors and underlying neural mechanisms. A literature search up to 2018 was conducted with online databases, and final selections were limited to empirical studies which investigated the prosodic processing of at least one of the six basic emotions in patients with a clear diagnosis of schizophrenia without co-morbid diseases. A narrative synthesis was performed, covering the range of research topics, task paradigms, stimulus presentation, study populations and statistical power, with a quantitative meta-analytic approach in Comprehensive Meta-Analysis Version 2.0. Study outcomes indicated that schizophrenic patients' EPP deficits were consistently observed across studies (d = −0.92, 95% CI = −1.06 < δ < −0.78), with identification tasks (d = −0.95, 95% CI = −1.11 < δ < −0.80) being more difficult than discrimination tasks (d = −0.74, 95% CI = −1.03 < δ < −0.44), and emotional stimuli more difficult than neutral stimuli. Patients' performance was influenced by both participant- and experiment-related factors. Their social cognitive deficits in EP could be further explained by right-lateralized impairments and abnormalities in primary auditory cortex, medial prefrontal cortex and auditory-insula connectivity. The data pointed to impaired pre-attentive and attentive processes, both of which play important roles in the abnormal EPP of the schizophrenic population. The current selective review and meta-analysis support the clinical advocacy of including EP in early diagnosis and rehabilitation within the general framework of social cognition and neurocognition deficits in schizophrenic disorders. Future cross-sectional and longitudinal studies are suggested to investigate schizophrenic patients' perception and production of EP in different languages and cultures, modality forms and neuro-cognitive domains.
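Pooled effect sizes of the kind reported above (a summary d with a 95% CI) come from standard standardized-mean-difference arithmetic. A minimal fixed-effect sketch in Python shows the core computation; note that the study values below are made-up placeholders (not data from this review), and the review itself used Comprehensive Meta-Analysis software, which offers random-effects models as well:

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference (group 1 minus group 2, pooled SD),
    with its approximate sampling variance."""
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return d, var

def fixed_effect_pool(effects):
    """Inverse-variance weighted pooled d and its 95% confidence interval."""
    w = [1.0 / v for _, v in effects]
    d_pool = sum(wi * d for wi, (d, _) in zip(w, effects)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return d_pool, (d_pool - 1.96 * se, d_pool + 1.96 * se)

# Hypothetical studies: (patient mean, SD, n, control mean, SD, n);
# patients score lower, so each d is negative
studies = [(62, 12, 30, 74, 10, 30),
           (55, 15, 25, 70, 13, 28),
           (60, 11, 40, 71, 12, 35)]
pooled, ci = fixed_effect_pool([cohens_d(*s) for s in studies])
```

Weighting each study by the inverse of its sampling variance is what makes larger, more precise studies dominate the pooled estimate and narrows the confidence interval relative to any single study.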
Affiliation(s)
- Yi Lin
- Institute of Cross-Linguistic Processing and Cognition, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China
- Hongwei Ding
- Institute of Cross-Linguistic Processing and Cognition, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences & Center for Neurobehavioral Development, University of Minnesota, Twin Cities, MN 55455, USA
27
Putting salient vocalizations in context: Adults' physiological arousal to emotive cues in domestic and external environments. Physiol Behav 2018; 196:25-32. [PMID: 30149085] [DOI: 10.1016/j.physbeh.2018.08.010]
Abstract
Salient vocalizations are automatically processed and distinguished from emotionally irrelevant information. However, little is known about how contextual, gender and attentional variables interact to modulate physiological responses to salient emotive vocalizations. In this study, the electrocardiogram (ECG) was used to investigate differences in the peripheral nervous activity of men and women in response to infant cry (IC), infant laughter (IL) and adult cry (AC) in two situational contexts: the domestic environment (DE) and the outside environment (OE). As the mental state of listeners can affect their response to vocalizations, a between-subjects design was applied: one group was instructed to imagine being inside the scenes (Task 1: explicit task), and the other group was told simply to look at the scenes (Task 2: implicit task). Results revealed that females exhibited a lower inter-beat interval (IBI) in the OE condition compared with both males in the OE and females in the DE conditions, suggesting greater physiological arousal among females in response to vocalizations in an outside environment. Additionally, in Task 1, males demonstrated a higher low-frequency/high-frequency (LF/HF) ratio in response to AC than to IL; Task 2 showed the same association between these two sounds in females. The implicit task also elicited a lower LF/HF ratio in response to both IL and IC than to a control sound (CS), but only among females. The findings highlight the important roles that contextual information and cognitive demand play in regulating physiological responses to salient emotive vocalizations. Integrated perspectives on physiological responses to emotive vocalizations that consider the influence of internal (adult mental states) and external (environment) contextual information will provide a better understanding of the mechanisms underlying emotional processing of salient social cues.
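The IBI and LF/HF measures used in this study are standard heart-rate-variability indices. A simplified NumPy sketch (the function name, 4 Hz resampling rate, simple periodogram, and synthetic beat series are illustrative assumptions, not the paper's analysis pipeline) computes both from a series of R-peak times:

```python
import numpy as np

def hrv_indices(r_peaks_s, fs_resample=4.0):
    """Mean inter-beat interval (s) and LF/HF power ratio from R-peak times.

    The IBI series is interpolated onto a uniform time grid so that spectral
    power can be summed in the conventional LF (0.04-0.15 Hz) and
    HF (0.15-0.4 Hz) heart-rate-variability bands.
    """
    ibis = np.diff(r_peaks_s)          # inter-beat intervals (seconds)
    t_ibi = r_peaks_s[1:]              # each IBI time-stamped at its ending beat
    grid = np.arange(t_ibi[0], t_ibi[-1], 1.0 / fs_resample)
    ibi_even = np.interp(grid, t_ibi, ibis)

    # Periodogram of the mean-removed, evenly resampled IBI series
    freqs = np.fft.rfftfreq(len(ibi_even), 1.0 / fs_resample)
    power = np.abs(np.fft.rfft(ibi_even - ibi_even.mean())) ** 2
    lf = power[(freqs >= 0.04) & (freqs < 0.15)].sum()
    hf = power[(freqs >= 0.15) & (freqs < 0.40)].sum()
    return ibis.mean(), lf / hf

# Synthetic ~5-minute recording: beats every ~0.8 s, with the IBI modulated
# at 0.1 Hz, i.e., inside the LF band
n = 400
ibis_true = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * 0.8 * np.arange(n))
r_peaks = np.concatenate(([0.0], np.cumsum(ibis_true)))
mean_ibi, lfhf = hrv_indices(r_peaks)
```

Because the synthetic modulation sits in the LF band, the LF/HF ratio comes out well above 1; a lower mean IBI corresponds to a faster heart rate, which is why the study reads it as greater physiological arousal.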
28
Conde T, Gonçalves ÓF, Pinheiro AP. Stimulus complexity matters when you hear your own voice: Attention effects on self-generated voice processing. Int J Psychophysiol 2018; 133:66-78. [PMID: 30114437] [DOI: 10.1016/j.ijpsycho.2018.08.007]
Abstract
The ability to discriminate self- and non-self voice cues is a fundamental aspect of self-awareness and subserves self-monitoring during verbal communication. Nonetheless, the neurofunctional underpinnings of self-voice perception and recognition are still poorly understood. Moreover, how attention and stimulus complexity influence the processing and recognition of one's own voice remains to be clarified. Using an oddball task, the current study investigated how self-relevance and stimulus type interact during selective attention to voices, and how they affect the representation of regularity during voice perception. Event-related potentials (ERPs) were recorded from 18 right-handed males. Pre-recorded self-generated (SGV) and non-self (NSV) voices, consisting of a nonverbal vocalization (vocalization condition) or disyllabic word (word condition), were presented as either standard or target stimuli in different experimental blocks. The results showed increased N2 amplitude to SGV relative to NSV stimuli. Stimulus type modulated later processing stages only: P3 amplitude was increased for SGV relative to NSV words, whereas no differences between SGV and NSV were observed in the case of vocalizations. Moreover, SGV standards elicited reduced N1 and P2 amplitude relative to NSV standards. These findings revealed that the self-voice grabs more attention when listeners are exposed to words but not vocalizations. Further, they indicate that detection of regularity in an auditory stream is facilitated for one's own voice at early processing stages. Together, they demonstrate that self-relevance affects attention to voices differently as a function of stimulus type.
Affiliation(s)
- Tatiana Conde: Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal; Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Óscar F Gonçalves: Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal; Spaulding Center of Neuromodulation, Department of Physical Medicine & Rehabilitation, Spaulding Rehabilitation Hospital & Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Bouvé College of Health Sciences, Northeastern University, Boston, MA, USA
- Ana P Pinheiro: Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal; Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal; Cognitive Neuroscience Lab, Department of Psychiatry, Harvard Medical School, Boston, MA, USA
29
Abstract
In speech, social evaluations of a speaker’s dominance or trustworthiness are conveyed by distinguishing, but little-understood, pitch variations. This work describes how to combine state-of-the-art vocal pitch transformations with the psychophysical technique of reverse correlation and uses this methodology to uncover the prosodic prototypes that govern such social judgments in speech. This finding is of great significance, because the exact shape of these prototypes, and how they vary with sex, age, and culture, is virtually unknown, and because prototypes derived with the method can then be reapplied to arbitrary spoken utterances, thus providing a principled way to modulate personality impressions in speech. Human listeners excel at forming high-level social representations about each other, even from the briefest of utterances. In particular, pitch is widely recognized as the auditory dimension that conveys most of the information about a speaker’s traits, emotional states, and attitudes. While past research has primarily looked at the influence of mean pitch, almost nothing is known about how intonation patterns, i.e., finely tuned pitch trajectories around the mean, may determine social judgments in speech. Here, we introduce an experimental paradigm that combines state-of-the-art voice transformation algorithms with psychophysical reverse correlation and show that two of the most important dimensions of social judgments, a speaker’s perceived dominance and trustworthiness, are driven by robust and distinguishing pitch trajectories in short utterances like the word “Hello,” which remained remarkably stable whether male or female listeners judged male or female speakers. These findings reveal a unique communicative adaptation that enables listeners to infer social traits regardless of speakers’ physical characteristics, such as sex and mean pitch. 
By characterizing how any given individual’s mental representations may differ from this generic code, the method introduced here opens avenues to explore dysprosody and social-cognitive deficits in disorders like autism spectrum and schizophrenia. In addition, once derived experimentally, these prototypes can be applied to novel utterances, thus providing a principled way to modulate personality impressions in arbitrary speech signals.
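The reverse-correlation logic described above can be sketched in a few lines: random pitch perturbations are presented in pairs, a listener picks one per trial, and the first-order kernel (mean of chosen minus mean of rejected perturbations) recovers the prosodic prototype. The 6-point contour, the simulated listener's internal template, and all names below are illustrative assumptions, not the paper's actual stimuli or procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_trials = 6, 2000

# Hypothetical internal template: a rising pitch contour drives the judgment
template = np.linspace(-1.0, 1.0, n_points)

chosen, rejected = [], []
for _ in range(n_trials):
    a = rng.normal(0.0, 1.0, n_points)   # random pitch perturbation (semitones)
    b = rng.normal(0.0, 1.0, n_points)
    # simulated listener: prefers the contour better matching the template
    if a @ template >= b @ template:
        chosen.append(a); rejected.append(b)
    else:
        chosen.append(b); rejected.append(a)

# first-order kernel = the recovered prosodic prototype
prototype = np.mean(chosen, axis=0) - np.mean(rejected, axis=0)
```

With enough trials the kernel converges toward the template shape, which is why the method can expose the pitch trajectories that govern a listener's social judgments.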
30
Minho Affective Sentences (MAS): Probing the roles of sex, mood, and empathy in affective ratings of verbal stimuli. Behav Res Methods 2017; 49:698-716. [PMID: 27004484] [DOI: 10.3758/s13428-016-0726-0]
Abstract
During social communication, words and sentences play a critical role in the expression of emotional meaning. The Minho Affective Sentences (MAS) were developed to respond to the lack of a standardized sentence battery with normative affective ratings: 192 neutral, positive, and negative declarative sentences were strictly controlled for psycholinguistic variables such as numbers of words and letters and per-million word frequency. The sentences were designed to represent examples of each of the five basic emotions (anger, sadness, disgust, fear, and happiness) and of neutral situations. These sentences were presented to 536 participants who rated the stimuli using both dimensional and categorical measures of emotions. Sex differences were also explored. Additionally, we probed how personality, empathy, and mood from a subset of 40 participants modulated the affective ratings. Our results confirmed that the MAS affective norms are valid measures to guide the selection of stimuli for experimental studies of emotion. The combination of dimensional and categorical ratings provided a more fine-grained characterization of the affective properties of the sentences. Moreover, the affective ratings of positive and negative sentences were not only modulated by participants' sex, but also by individual differences in empathy and mood state. Together, our results indicate that, in their quest to reveal the neurofunctional underpinnings of verbal emotional processing, researchers should consider not only the role of sex, but also of interindividual differences in empathy and mood states, in responses to the emotional meaning of sentences.
31
Pinheiro AP, Barros C, Dias M, Kotz SA. Laughter catches attention! Biol Psychol 2017; 130:11-21. [PMID: 28942367] [DOI: 10.1016/j.biopsycho.2017.09.012]
Abstract
In social interactions, emotionally salient and sudden changes in vocal expressions attract attention. However, only a few studies examined how emotion and attention interact in voice processing. We investigated neutral, happy (laughs) and angry (growls) vocalizations in a modified oddball task. Participants silently counted the targets in each block and rated the valence and arousal of the vocalizations. A combined event-related potential and time-frequency analysis focused on the P3 and pre-stimulus alpha power to capture attention effects in response to unexpected events. Whereas an early differentiation between emotionally salient and neutral vocalizations was reflected in the P3a response, the P3b was selectively enhanced for happy voices. The P3b modulation was predicted by pre-stimulus frontal alpha desynchronization, and by the perceived pleasantness of the targets. These findings indicate that vocal emotions may be differently processed based on task relevance and valence. Increased anticipation and attention to positive vocal cues (laughter) may reflect their high social relevance.
Affiliation(s)
- Ana P Pinheiro: Universidade de Lisboa, Faculdade de Psicologia, CICPSI, Lisboa, Portugal; Neuropsychophysiology Laboratory, School of Psychology, University of Minho, Braga, Portugal
- Carla Barros: Neuropsychophysiology Laboratory, School of Psychology, University of Minho, Braga, Portugal
- Marcelo Dias: Neuropsychophysiology Laboratory, School of Psychology, University of Minho, Braga, Portugal
- Sonja A Kotz: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Faculty of Psychology and Neuroscience, Department of Neuropsychology & Psychopharmacology, Maastricht University, The Netherlands
32
Jeong JW, Wendimagegn TW, Chang E, Chun Y, Park JH, Kim HJ, Kim HT. Classifying Schizotypy Using an Audiovisual Emotion Perception Test and Scalp Electroencephalography. Front Hum Neurosci 2017; 11:450. [PMID: 28955212] [PMCID: PMC5601065] [DOI: 10.3389/fnhum.2017.00450]
Abstract
Schizotypy refers to the personality trait of experiencing "psychotic" symptoms and can be regarded as a predisposition to schizophrenia-spectrum psychopathology (Raine, 1991). Cumulative evidence has revealed that individuals with schizotypy, as well as schizophrenia patients, have emotional processing deficits. In the present study, we investigated multimodal emotion perception in schizotypy and applied machine learning to determine whether a schizotypy group (ST) is distinguishable from a control group (NC) using electroencephalogram (EEG) signals. Forty-five subjects (30 ST and 15 NC) were divided into two groups based on their scores on a Schizotypal Personality Questionnaire. All participants performed an audiovisual emotion perception test while EEG was recorded. After the preprocessing stage, the discriminatory features were extracted using a mean subsampling technique. For an accurate estimation of covariance matrices, the shrinkage linear discriminant algorithm was used. The classification attained over 98% accuracy with no false positives. This method may have important clinical implications for identifying individuals in the general population who carry a subtle risk for schizotypy and may benefit from early intervention.
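The shrinkage linear discriminant step mentioned above regularizes the covariance estimate before inverting it, which stabilizes LDA when trials are few and channels many, the usual regime of single-trial EEG. A minimal two-class sketch (a fixed shrinkage weight gamma toward a scaled identity is assumed here; the study's exact estimator and features are not reproduced):

```python
import numpy as np

def shrinkage_lda_fit(X, y, gamma=0.1):
    """Two-class LDA with covariance shrinkage toward a scaled identity.

    gamma blends the pooled empirical covariance with nu * I, where nu
    is the mean eigenvalue, so the matrix stays well conditioned.
    """
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Xc = np.vstack([X0 - m0, X1 - m1])
    S = Xc.T @ Xc / (len(Xc) - 2)            # pooled covariance
    nu = np.trace(S) / S.shape[0]
    S_shrunk = (1 - gamma) * S + gamma * nu * np.eye(S.shape[0])
    w = np.linalg.solve(S_shrunk, m1 - m0)   # projection weights
    b = -w @ (m0 + m1) / 2.0                 # threshold at the class midpoint
    return w, b

def shrinkage_lda_predict(X, w, b):
    """Label 1 when the projected sample falls on the class-1 side."""
    return (X @ w + b > 0).astype(int)
```

On linearly separable feature distributions this simple classifier already reaches high accuracy, illustrating why regularized LDA is a common baseline for EEG decoding.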
Affiliation(s)
- Ji Woon Jeong: Department of Psychology, Korea University, Seoul, South Korea
- Eunhee Chang: Department of Psychology, Korea University, Seoul, South Korea
- Yeseul Chun: Department of Psychology, Korea University, Seoul, South Korea
- Joon Hyuk Park: Department of Neuropsychiatry, Jeju National University Hospital, Jeju, South Korea
- Hyoung Joong Kim: Department of Information Security, Korea University, Seoul, South Korea
- Hyun Taek Kim: Department of Psychology, Korea University, Seoul, South Korea
33
Is laughter a better vocal change detector than a growl? Cortex 2017; 92:233-248. [DOI: 10.1016/j.cortex.2017.03.018]
34
Pinheiro AP, Rezaii N, Rauber A, Nestor PG, Spencer KM, Niznikiewicz M. Emotional self-other voice processing in schizophrenia and its relationship with hallucinations: ERP evidence. Psychophysiology 2017; 54:1252-1265. [PMID: 28474363] [DOI: 10.1111/psyp.12880]
Abstract
Abnormalities in self-other voice processing have been observed in schizophrenia, and may underlie the experience of hallucinations. More recent studies demonstrated that these impairments are enhanced for speech stimuli with negative content. Nonetheless, few studies probed the temporal dynamics of self versus nonself speech processing in schizophrenia and, particularly, the impact of semantic valence on self-other voice discrimination. In the current study, we examined these questions, and additionally probed whether impairments in these processes are associated with the experience of hallucinations. Fifteen schizophrenia patients and 16 healthy controls listened to 420 prerecorded adjectives differing in voice identity (self-generated [SGS] versus nonself speech [NSS]) and semantic valence (neutral, positive, and negative), while EEG data were recorded. The N1, P2, and late positive potential (LPP) ERP components were analyzed. ERP results revealed group differences in the interaction between voice identity and valence in the P2 and LPP components. Specifically, LPP amplitude was reduced in patients compared with healthy subjects for SGS and NSS with negative content. Further, auditory hallucinations severity was significantly predicted by LPP amplitude: the higher the SAPS "voices conversing" score, the larger the difference in LPP amplitude between negative and positive NSS. The absence of group differences in the N1 suggests that self-other voice processing abnormalities in schizophrenia are not primarily driven by disrupted sensory processing of voice acoustic information. The association between LPP amplitude and hallucination severity suggests that auditory hallucinations are associated with enhanced sustained attention to negative cues conveyed by a nonself voice.
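ERP components such as the N1, P2, and LPP analyzed above are commonly quantified as the mean voltage within a latency window of each epoch. A generic sketch of that measurement (the window boundaries and names below are illustrative assumptions, not those of this study):

```python
import numpy as np

def mean_amplitude(epochs, times, window):
    """Mean voltage per trial within a latency window.

    epochs: (n_trials, n_samples) single-channel data in microvolts
    times:  (n_samples,) epoch time axis in seconds
    window: (start_s, end_s), e.g. a broad late window for the LPP
    """
    mask = (times >= window[0]) & (times < window[1])
    return epochs[:, mask].mean(axis=1)

# Hypothetical windows; real studies derive them from the grand average
WINDOWS = {"N1": (0.08, 0.13), "P2": (0.15, 0.25), "LPP": (0.40, 0.80)}
```

Per-condition means of these per-trial values are what enter the amplitude comparisons (e.g., reduced LPP for negative speech in patients) reported in such studies.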
Affiliation(s)
- Ana P Pinheiro: Faculty of Psychology, University of Lisbon, Lisbon, Portugal; Neuropsychophysiology Laboratory, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Neguine Rezaii: VA Boston Healthcare System, Department of Psychiatry, Harvard Medical School, Boston, Massachusetts
- Andréia Rauber: Department of Linguistics, University of Tübingen, Tübingen, Germany
- Paul G Nestor: Laboratory of Applied Neuropsychology, College of Liberal Arts, University of Massachusetts, Boston, Massachusetts
- Kevin M Spencer: VA Boston Healthcare System, Department of Psychiatry, Harvard Medical School, Boston, Massachusetts
- Margaret Niznikiewicz: VA Boston Healthcare System, Department of Psychiatry, Harvard Medical School, Boston, Massachusetts
35
What is the Melody of That Voice? Probing Unbiased Recognition Accuracy with the Montreal Affective Voices. J Nonverbal Behav 2017. [DOI: 10.1007/s10919-017-0253-4]
36
Paulmann S, Uskul AK. Early and late brain signatures of emotional prosody among individuals with high versus low power. Psychophysiology 2016; 54:555-565. [PMID: 28026863] [DOI: 10.1111/psyp.12812]
Abstract
Using ERPs, we explored the relationship between social power and emotional prosody processing. In particular, we investigated differences at early and late processing stages between individuals primed with high or low power. Comparable to previously published findings from nonprimed participants, individuals primed with low power displayed differentially modulated P2 amplitudes in response to different emotional prosodies, whereas participants primed with high power failed to do so. Similarly, participants primed with low power showed differentially modulated amplitudes in response to different emotional prosodies at a later processing stage (late ERP component), whereas participants primed with high power did not. These ERP results suggest that high versus low power leads to emotional prosody processing differences at the early stage associated with emotional salience detection and at a later stage associated with more in-depth processing of emotional stimuli.
Affiliation(s)
- Silke Paulmann: Department of Psychology and Centre for Brain Science, University of Essex, Colchester, UK
- Ayse K Uskul: Department of Psychology, University of Kent, Canterbury, UK
37
Grass A, Bayer M, Schacht A. Electrophysiological Correlates of Emotional Content and Volume Level in Spoken Word Processing. Front Hum Neurosci 2016; 10:326. [PMID: 27458359] [PMCID: PMC4930929] [DOI: 10.3389/fnhum.2016.00326]
Abstract
For visual stimuli with emotional content, such as pictures and written words, stimulus size has been shown to increase emotion effects in the early posterior negativity (EPN), a component of event-related potentials (ERPs) indexing attention allocation during visual sensory encoding. In the present study, we addressed the question of whether this enhanced relevance of larger (visual) stimuli might generalize to the auditory domain and whether auditory emotion effects are modulated by volume. Therefore, subjects listened to spoken words with emotional or neutral content, played at two different volume levels, while ERPs were recorded. Negative emotional content led to an increased frontal positivity and parieto-occipital negativity, with a scalp distribution similar to the EPN, between ~370 and 530 ms. Importantly, this emotion-related ERP component was not modulated by differences in volume level, which, as hypothesized, impacted early auditory processing, as reflected in increased amplitudes of the N1 (80-130 ms) and P2 (130-265 ms) components. However, contrary to effects of stimulus size in the visual domain, volume level did not influence later ERP components. These findings indicate modality-specific and functionally independent processing triggered by the emotional content of spoken words and by volume level.
Affiliation(s)
- Annika Grass: Courant Research Centre Text Structures, University of Göttingen, Göttingen, Germany; Leibniz-ScienceCampus Primate Cognition, Göttingen, Germany
- Mareike Bayer: Courant Research Centre Text Structures, University of Göttingen, Göttingen, Germany
- Annekathrin Schacht: Courant Research Centre Text Structures, University of Göttingen, Göttingen, Germany; Leibniz-ScienceCampus Primate Cognition, Göttingen, Germany
38
Adaptation of the International Affective Picture System (IAPS) for European Portuguese. Behav Res Methods 2016; 47:1159-1177. [PMID: 25381023] [DOI: 10.3758/s13428-014-0535-2]
Abstract
This study presents the results of the adaptation of the International Affective Picture System (IAPS) for European Portuguese (EP). Following the original procedure of Lang et al., 2000 native speakers of EP rated the 1,182 pictures of the latest version of the IAPS set on the three affective dimensions of valence, arousal, and dominance, using the Self-Assessment Manikin (SAM). Results showed that the normative values of the IAPS for EP are properly distributed in the affective space of valence and arousal, showing the typical boomerang-shaped distribution observed in previous studies. Results also point to important differences in the way Portuguese females and males react to affective pictures that should be taken into consideration when planning and conducting research with Portuguese samples. Furthermore, the results of the cross-cultural comparisons between the EP ratings and the ratings from the American, Spanish, Brazilian, Belgian, Chilean, Indian, and Bosnian-Herzegovinian standardizations showed that, although IAPS stimuli elicit similar affective responses across countries and cultures (at least Western ones), Portuguese individuals react to IAPS pictures in ways that strongly recommend using the normative values presented in this work. The norms can be downloaded as a supplemental archive at http://brm.psychonomic-journals.org/content/supplemental or at http://p-pal.di.uminho.pt/about/databases.
39
A Cognitive Neuroscience View of Voice-Processing Abnormalities in Schizophrenia: A Window into Auditory Verbal Hallucinations? Harv Rev Psychiatry 2016; 24:148-63. [PMID: 26954598] [DOI: 10.1097/hrp.0000000000000082]
Abstract
Auditory verbal hallucinations (AVH) are a core symptom of schizophrenia. Like "real" voices, AVH carry rich linguistic and paralinguistic cues that convey not only speech information but also affect and identity information. Disturbed processing of voice identity, affective, and speech information has been reported in patients with schizophrenia. More recent evidence has suggested a link between voice-processing abnormalities and specific clinical symptoms of schizophrenia, especially AVH. It is still not well understood, however, to what extent these dimensions are impaired and how abnormalities in these processes might contribute to AVH. In this review, we consider behavioral, neuroimaging, and electrophysiological data to investigate the speech, identity, and affective dimensions of voice processing in schizophrenia, and we discuss how abnormalities in these processes might help to elucidate the mechanisms underlying specific phenomenological features of AVH. Schizophrenia patients exhibit behavioral and neural disturbances in all three dimensions of voice processing. Evidence suggesting a role of dysfunctional voice processing in AVH seems to be stronger for the identity and speech dimensions than for the affective domain.
40
Abstract
Deficits in auditory emotion recognition (AER) are a core feature of schizophrenia and a key component of social cognitive impairment. AER deficits are tied behaviorally to impaired ability to interpret tonal ("prosodic") features of speech that normally convey emotion, such as modulations in base pitch (F0M) and pitch variability (F0SD). These modulations can be recreated using synthetic frequency modulated (FM) tones that mimic the prosodic contours of specific emotional stimuli. The present study investigates neural mechanisms underlying impaired AER using a combined event-related potential/resting-state functional connectivity (rsfMRI) approach in 84 schizophrenia/schizoaffective disorder patients and 66 healthy comparison subjects. Mismatch negativity (MMN) to FM tones was assessed in 43 patients/36 controls. rsfMRI between auditory cortex and medial temporal (insula) regions was assessed in 55 patients/51 controls. The relationship between AER, MMN to FM tones, and rsfMRI was assessed in the subset who performed all assessments (14 patients, 21 controls). As predicted, patients showed robust reductions in MMN across FM stimulus type (p = 0.005), particularly to modulations in F0M, along with impairments in AER and FM tone discrimination. MMN source analysis indicated dipoles in both auditory cortex and anterior insula, whereas rsfMRI analyses showed reduced auditory-insula connectivity. MMN to FM tones and functional connectivity together accounted for ∼50% of the variance in AER performance across individuals. These findings demonstrate that impaired preattentive processing of tonal information and reduced auditory-insula connectivity are critical determinants of social cognitive dysfunction in schizophrenia, and thus represent key targets for future research and clinical intervention.
SIGNIFICANCE STATEMENT Schizophrenia patients show deficits in the ability to infer emotion based upon tone of voice [auditory emotion recognition (AER)] that drive impairments in social cognition and global functional outcome. This study evaluated neural substrates of impaired AER in schizophrenia using a combined event-related potential/resting-state fMRI approach. Patients showed impaired mismatch negativity response to emotionally relevant frequency modulated tones along with impaired functional connectivity between auditory and medial temporal (anterior insula) cortex. These deficits contributed in parallel to impaired AER and accounted for ∼50% of variance in AER performance. Overall, these findings demonstrate the importance of both auditory-level dysfunction and impaired auditory/insula connectivity in the pathophysiology of social cognitive dysfunction in schizophrenia.
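The MMN used above is conventionally computed as the deviant-minus-standard difference wave, with amplitude and latency taken at the most negative point of an early window. A minimal sketch under those textbook conventions (the window boundaries and function names are assumptions, not this study's exact parameters):

```python
import numpy as np

def mmn_wave(deviant_epochs, standard_epochs):
    """MMN as the deviant-minus-standard difference of the average ERPs.

    Both inputs: (n_trials, n_samples) single-channel data in microvolts.
    """
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

def mmn_peak(diff_wave, times, window=(0.10, 0.25)):
    """Amplitude and latency of the most negative point in the MMN window."""
    mask = (times >= window[0]) & (times < window[1])
    i = np.argmin(diff_wave[mask])
    return diff_wave[mask][i], times[mask][i]
```

A reduced (less negative) peak in patients relative to controls is the kind of group difference such analyses quantify.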
41
Pinheiro AP, Rezaii N, Nestor PG, Rauber A, Spencer KM, Niznikiewicz M. Did you or I say pretty, rude or brief? An ERP study of the effects of speaker's identity on emotional word processing. Brain Lang 2016; 153-154:38-49. [PMID: 26894680] [DOI: 10.1016/j.bandl.2015.12.003]
Abstract
During speech comprehension, multiple cues need to be integrated at millisecond speed, including semantic information as well as voice identity and affect cues. A processing advantage has been demonstrated for self-related stimuli when compared with non-self stimuli, and for emotional relative to neutral stimuli. However, very few studies have investigated self-other speech discrimination and, in particular, how emotional valence and voice identity interactively modulate speech processing. In the present study, we probed how the processing of words' semantic valence is modulated by speaker's identity (self vs. non-self voice). Sixteen healthy subjects listened to 420 prerecorded adjectives differing in voice identity (self vs. non-self) and semantic valence (neutral, positive and negative), while electroencephalographic data were recorded. Participants were instructed to decide whether the speech they heard was their own (self-speech condition), someone else's (non-self speech), or if they were unsure. The ERP results demonstrated interactive effects of speaker's identity and emotional valence on both early (N1, P2) and late (Late Positive Potential - LPP) processing stages: compared with non-self speech, self-speech with neutral valence elicited more negative N1 amplitude, self-speech with positive valence elicited more positive P2 amplitude, and self-speech with both positive and negative valence elicited more positive LPP. ERP differences between self and non-self speech occurred in spite of similar accuracy in the recognition of both types of stimuli. Together, these findings suggest that emotion and speaker's identity interact during speech processing, in line with observations of partially dependent processing of speech and speaker information.
Affiliation(s)
- Ana P Pinheiro: Neuropsychophysiology Laboratory, Psychology Research Center (CIPsi), School of Psychology, University of Minho, Braga, Portugal; Clinical Neuroscience Division, Laboratory of Neuroscience, VA Boston Healthcare System-Brockton Division, Department of Psychiatry, Harvard Medical School, Brockton, MA, United States; Faculty of Psychology, University of Lisbon, Lisbon, Portugal
- Neguine Rezaii: Clinical Neuroscience Division, Laboratory of Neuroscience, VA Boston Healthcare System-Brockton Division, Department of Psychiatry, Harvard Medical School, Brockton, MA, United States
- Paul G Nestor: Clinical Neuroscience Division, Laboratory of Neuroscience, VA Boston Healthcare System-Brockton Division, Department of Psychiatry, Harvard Medical School, Brockton, MA, United States; Department of Psychology, University of Massachusetts, Boston, MA, United States
- Andréia Rauber: International Studies in Computational Linguistics, University of Tübingen, Tübingen, Germany
- Kevin M Spencer: Neural Dynamics Laboratory, Research Service, VA Boston Healthcare System, and Department of Psychiatry, Harvard Medical School, Boston, MA, United States
- Margaret Niznikiewicz: Clinical Neuroscience Division, Laboratory of Neuroscience, VA Boston Healthcare System-Brockton Division, Department of Psychiatry, Harvard Medical School, Brockton, MA, United States
42
Liu T, Pinheiro AP, Zhao Z, Nestor PG, McCarley RW, Niznikiewicz M. Simultaneous face and voice processing in schizophrenia. Behav Brain Res 2016; 305:76-86. [PMID: 26804362] [DOI: 10.1016/j.bbr.2016.01.039]
Abstract
While several studies have consistently demonstrated abnormalities in the unisensory processing of face and voice in schizophrenia (SZ), the extent of abnormalities in the simultaneous processing of both types of information remains unclear. To address this issue, we used event-related potential (ERP) methodology to probe the multisensory integration of faces and non-semantic sounds in schizophrenia. EEG was recorded from 18 schizophrenia patients and 19 healthy control (HC) subjects in three conditions: neutral faces (visual condition, VIS); neutral non-semantic sounds (auditory condition, AUD); neutral faces presented simultaneously with neutral non-semantic sounds (audiovisual condition, AUDVIS). When compared with HC, the schizophrenia group showed less negative N170 to both face and face-voice stimuli; later P270 peak latency in the multimodal face-voice condition relative to the unimodal face condition (the reverse was true in HC); and reduced P400 amplitude and earlier P400 peak latency in the face but not in the voice-face condition. Thus, the analysis of ERP components suggests that deficits in the encoding of facial information extend to multimodal face-voice stimuli and that delays exist in feature extraction from multimodal face-voice stimuli in schizophrenia. In contrast, categorization processes seem to benefit from the presentation of simultaneous face-voice information. Timepoint-by-timepoint tests of multimodal integration did not suggest impairment in the initial stages of processing in schizophrenia.
Affiliation(s)
- Taosheng Liu
- Department of Psychology, Second Military Medical University (SMMU), Shanghai, China; Department of Neurology, Changzheng Hospital, SMMU, Shanghai, China
| | - Ana P Pinheiro
- Clinical Neuroscience Division, Laboratory of Neuroscience, Department of Psychiatry, Boston VA Healthcare System, Brockton Division and Harvard Medical School Boston, MA, United States; Neuropsychophysiology Laboratory, CIPsi, School of Psychology, University of Minho, Braga, Portugal
| | - Zhongxin Zhao
- Department of Neurology, Changzheng Hospital, SMMU, Shanghai, China
| | - Paul G Nestor
- Clinical Neuroscience Division, Laboratory of Neuroscience, Department of Psychiatry, Boston VA Healthcare System, Brockton Division and Harvard Medical School Boston, MA, United States; University of Massachusetts, Boston, MA, United States
| | - Robert W McCarley
- Clinical Neuroscience Division, Laboratory of Neuroscience, Department of Psychiatry, Boston VA Healthcare System, Brockton Division and Harvard Medical School Boston, MA, United States
| | - Margaret Niznikiewicz
- Clinical Neuroscience Division, Laboratory of Neuroscience, Department of Psychiatry, Boston VA Healthcare System, Brockton Division and Harvard Medical School Boston, MA, United States.
43
Balconi M, Tirelli S, Frezza A. Event-related potentials (ERPs) and hemodynamic (functional near-infrared spectroscopy, fNIRS) as measures of schizophrenia deficits in emotional behavior. Front Psychol 2015; 6:1686. [PMID: 26579058 PMCID: PMC4630975 DOI: 10.3389/fpsyg.2015.01686] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2015] [Accepted: 10/19/2015] [Indexed: 11/13/2022] Open
Abstract
Recent research evidence supports the significant role of a multimethodological neuroscientific approach in the diagnosis of, and rehabilitative intervention in, schizophrenia. Indeed, electrophysiological and neuroimaging measures, integrated with each other, can furnish a detailed overview of cognitive and affective behavior in schizophrenia patients (SPs). The present review focuses on dysfunctional emotional responses, taking into account two complementary measures of emotional behavior: event-related potentials (ERPs) and the hemodynamic profile obtained with functional near-infrared spectroscopy (fNIRS). These measures may be considered predictive of SPs' deficits in emotional behavior. The integration of ERP and fNIRS may support both the prefrontal cortical localization anomaly and the attentional bias toward specific emotional conditions (mainly negative ones).
Affiliation(s)
- Michela Balconi
- Research Unit in Affective and Social Neuroscience, Catholic University of the Sacred Heart, Milan, Italy; Department of Psychology, Catholic University of the Sacred Heart, Milan, Italy
| | - Simone Tirelli
- Department of Psychology, Catholic University of the Sacred Heart, Milan, Italy
| | - Alessandra Frezza
- Department of Psychology, Catholic University of the Sacred Heart, Milan, Italy; Sapienza University of Rome, Rome, Italy
44
Pinheiro AP, Barros C, Pedrosa J. Salience in a social landscape: electrophysiological effects of task-irrelevant and infrequent vocal change. Soc Cogn Affect Neurosci 2015; 11:127-39. [PMID: 26468268 DOI: 10.1093/scan/nsv103] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2015] [Accepted: 07/30/2015] [Indexed: 11/14/2022] Open
Abstract
In a dynamically changing social environment, humans have to face the challenge of prioritizing stimuli that compete for attention. In the context of social communication, the voice is the most important sound category. However, the existing studies do not directly address whether and how the salience of an unexpected vocal change in an auditory sequence influences the orientation of attention. In this study, frequent tones were interspersed with task-relevant infrequent tones and task-irrelevant infrequent vocal sounds (neutral, happy and angry vocalizations). Eighteen healthy college students were asked to count infrequent tones. A combined event-related potential (ERP) and EEG time-frequency approach was used, with the focus on the P3 component and on the early auditory evoked gamma band response, respectively. A spatial-temporal principal component analysis was used to disentangle potentially overlapping ERP components. Although no condition differences were observed in the 210-310 ms window, larger positive responses were observed for emotional than neutral vocalizations in the 310-410 ms window. Furthermore, the phase synchronization of the early auditory evoked gamma oscillation was enhanced for happy vocalizations. These findings support the idea that the brain prioritizes the processing of emotional stimuli, by devoting more attentional resources to salient social signals even when they are not task-relevant.
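The "phase synchronization of the early auditory evoked gamma oscillation" reported here is standardly quantified as inter-trial phase coherence (ITC): band-pass filter each trial, extract the instantaneous phase with the Hilbert transform, and take the magnitude of the mean unit phasor across trials. A minimal sketch under assumed parameters (30-50 Hz band, 4th-order Butterworth filter, synthetic data), not the authors' exact settings:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def inter_trial_coherence(trials, fs, band=(30.0, 50.0), order=4):
    """Inter-trial phase coherence of a single channel.

    trials : (n_trials, n_samples) array of epoched EEG.
    Returns ITC per timepoint in [0, 1]; 1 = perfectly phase-locked.
    """
    nyq = fs / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="band")
    band_passed = filtfilt(b, a, trials, axis=1)    # zero-phase gamma filter
    phase = np.angle(hilbert(band_passed, axis=1))  # instantaneous phase
    return np.abs(np.exp(1j * phase).mean(axis=0))  # |mean unit phasor|

# Identical (perfectly phase-locked) synthetic trials give ITC = 1.
fs = 500.0
t = np.arange(0.0, 0.4, 1.0 / fs)
locked = np.tile(np.sin(2 * np.pi * 40.0 * t), (30, 1))
itc = inter_trial_coherence(locked, fs)
```

With trial-to-trial phase jitter the same function returns values well below 1, which is what makes ITC a usable index of evoked (as opposed to induced) oscillatory activity.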
Affiliation(s)
- Ana P Pinheiro
- Neuropsychophysiology Laboratory, School of Psychology, University of Minho, Braga, Portugal
| | - Carla Barros
- Neuropsychophysiology Laboratory, School of Psychology, University of Minho, Braga, Portugal
| | - João Pedrosa
- Neuropsychophysiology Laboratory, School of Psychology, University of Minho, Braga, Portugal
45
Premkumar P, Onwumere J, Albert J, Kessel D, Kumari V, Kuipers E, Carretié L. The relation between schizotypy and early attention to rejecting interactions: The influence of neuroticism. World J Biol Psychiatry 2015; 16:587-601. [PMID: 26452584 PMCID: PMC4732428 DOI: 10.3109/15622975.2015.1073855] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/10/2015] [Revised: 05/07/2015] [Accepted: 07/13/2015] [Indexed: 01/21/2023]
Abstract
OBJECTIVES Schizotypy relates to rejection sensitivity (anxiety reflecting an expectancy of social exclusion) and neuroticism (excessive evaluation of negative emotions). Positive schizotypy (e.g., perceptual aberrations and odd beliefs) and negative schizotypy (e.g., social and physical anhedonia) could relate to altered attention to rejection because of neuroticism. METHODS Forty-one healthy individuals were assessed on positive and negative schizotypy and neuroticism, and event-related potentials were recorded while they viewed rejecting, accepting and neutral scenes. Participants were categorised into high, moderate and low neuroticism groups. Using temporo-spatial principal components analyses, P200 (peak latency = 290 ms) and P300 (peak latency = 390 ms) amplitudes were measured, reflecting mobilisation of attention and early attention, respectively. RESULTS Scalp-level and cortical source analyses revealed elevated fronto-parietal N300/P300 amplitude and P200-related dorsal anterior cingulate current density during rejecting than during accepting/neutral scenes. Positive schizotypy related inversely to parietal P200 amplitude during rejection. Negative schizotypy related positively to P200 middle occipital current density. Negative schizotypy also related positively to parietal P300, and this association was stronger in the high and moderate than in the low neuroticism groups. CONCLUSIONS Positive and negative schizotypy relate divergently to attention to rejection. Positive schizotypy attenuates, whereas negative schizotypy increases, rejection-related mobilisation of attention. Negative schizotypy increases early attention to rejection partly because of elevated neuroticism.
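The temporo-spatial principal components analysis used in this study to separate overlapping ERP components begins with a temporal PCA: each timepoint is treated as a variable and each subject x channel waveform as an observation. Published pipelines typically add factor rotation and a second, spatial PCA step; the unrotated SVD sketch below is only a schematic illustration, with synthetic factors peaking near the 290 and 390 ms latencies reported in the abstract:

```python
import numpy as np

def temporal_pca(erp, n_components):
    """Temporal PCA step: timepoints are variables, waveforms are
    observations; orthogonal temporal factors come from the SVD of
    the mean-centered data matrix (unrotated, for illustration)."""
    centered = erp - erp.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    loadings = vt[:n_components]      # factor waveshapes over time
    scores = centered @ loadings.T    # factor amplitude per observation
    return loadings, scores

# Synthetic data: two latent components (P200-like at ~290 ms,
# P300-like at ~390 ms) mixed with random amplitudes plus noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 250)
p200_like = np.exp(-((t - 0.29) ** 2) / 0.002)
p300_like = np.exp(-((t - 0.39) ** 2) / 0.004)
amps = rng.normal(size=(120, 2))
data = amps @ np.vstack([p200_like, p300_like])
data += rng.normal(scale=0.01, size=data.shape)
loadings, scores = temporal_pca(data, n_components=2)
```

Two factors then reconstruct the centered data up to the noise floor, which is the property that lets PCA disentangle temporally overlapping components.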
Affiliation(s)
- Preethi Premkumar
- Division of Psychology, School of Social Sciences, Nottingham Trent University, Nottingham, UK
| | - Juliana Onwumere
- King’s College London, Department of Psychology, Institute of Psychiatry, London, UK
- NIHR Biomedical Research Centre for Mental Health, South London and Maudsley NHS Foundation Trust, London, UK
| | - Jacobo Albert
- Facultad De Psicología, Universidad Autónoma De Madrid, Madrid, Spain
- Instituto Pluridisciplinar, Universidad Complutense De Madrid, Madrid, Spain
| | - Dominique Kessel
- Facultad De Psicología, Universidad Autónoma De Madrid, Madrid, Spain
| | - Veena Kumari
- King’s College London, Department of Psychology, Institute of Psychiatry, London, UK
- NIHR Biomedical Research Centre for Mental Health, South London and Maudsley NHS Foundation Trust, London, UK
| | - Elizabeth Kuipers
- King’s College London, Department of Psychology, Institute of Psychiatry, London, UK
- NIHR Biomedical Research Centre for Mental Health, South London and Maudsley NHS Foundation Trust, London, UK
| | - Luis Carretié
- Facultad De Psicología, Universidad Autónoma De Madrid, Madrid, Spain
46
Pinheiro AP, Del Re E, Nestor PG, Mezin J, Rezaii N, McCarley RW, Gonçalves ÓF, Niznikiewicz M. Abnormal interactions between context, memory structure, and mood in schizophrenia: an ERP investigation. Psychophysiology 2015; 52:20-31. [PMID: 25047946 DOI: 10.1111/psyp.12289] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2014] [Accepted: 06/08/2014] [Indexed: 02/06/2023]
Abstract
This study used event-related potentials to examine interactions between mood, sentence context, and semantic memory structure in schizophrenia. Seventeen male chronic schizophrenia patients and 15 healthy control subjects read sentence pairs after positive, negative, or neutral mood induction. Sentences ended with expected words (EW), within-category violations (WCV), or between-category violations (BCV). Across all moods, patients showed sensitivity to context, indexed by a reduced N400 to EW relative to both WCV and BCV. However, they did not show sensitivity to semantic memory structure. N400 abnormalities were particularly pronounced under a negative mood in schizophrenia. These findings suggest abnormal interactions between mood, context processing, and connections within semantic memory in schizophrenia, and a specific role of negative mood in modulating semantic processes in this disorder.
Affiliation(s)
- Ana P Pinheiro
- Neuropsychophysiology Laboratory, CIPsi, School of Psychology, University of Minho, Braga, Portugal; Cognitive Neuroscience Laboratory, Department of Psychiatry, Harvard Medical School, Boston, Massachusetts, USA
47
Pinheiro AP, Vasconcelos M, Dias M, Arrais N, Gonçalves ÓF. The music of language: an ERP investigation of the effects of musical training on emotional prosody processing. BRAIN AND LANGUAGE 2015; 140:24-34. [PMID: 25461917 DOI: 10.1016/j.bandl.2014.10.009] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/04/2014] [Revised: 09/30/2014] [Accepted: 10/22/2014] [Indexed: 06/04/2023]
Abstract
Recent studies have demonstrated the positive effects of musical training on the perception of vocally expressed emotion. This study investigated the effects of musical training on event-related potential (ERP) correlates of emotional prosody processing. Fourteen musicians and fourteen control subjects listened to 228 sentences with neutral semantic content, differing in prosody (one third with neutral, one third with happy and one third with angry intonation), with intelligible semantic content (semantic content condition--SCC) and unintelligible semantic content (pure prosody condition--PPC). Reduced P50 amplitude was found in musicians. A difference between SCC and PPC conditions was found in P50 and N100 amplitude in non-musicians only, and in P200 amplitude in musicians only. Furthermore, musicians were more accurate in recognizing angry prosody in PPC sentences. These findings suggest that auditory expertise characterizing extensive musical training may impact different stages of vocal emotional processing.
Affiliation(s)
- Ana P Pinheiro
- Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal; Cognitive Neuroscience Lab, Department of Psychiatry, Harvard Medical School, Boston, MA, USA.
| | - Margarida Vasconcelos
- Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
| | - Marcelo Dias
- Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
| | - Nuno Arrais
- Music Department, Institute of Arts and Human Sciences, University of Minho, Braga, Portugal
| | - Óscar F Gonçalves
- Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal; Spaulding Center of Neuromodulation, Department of Physical Medicine & Rehabilitation, Spaulding Rehabilitation Hospital and Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
48
Kantrowitz JT, Scaramello N, Jakubovitz A, Lehrfeld JM, Laukka P, Elfenbein HA, Silipo G, Javitt DC. Amusia and protolanguage impairments in schizophrenia. Psychol Med 2014; 44:2739-2748. [PMID: 25066878 PMCID: PMC5373691 DOI: 10.1017/s0033291714000373] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
BACKGROUND Both language and music are thought to have evolved from a musical protolanguage that communicated social information, including emotion. Individuals with perceptual music disorders (amusia) show deficits in auditory emotion recognition (AER). Although auditory perceptual deficits have been studied in schizophrenia, their relationship with musical/protolinguistic competence has not previously been assessed. METHOD Musical ability was assessed in 31 schizophrenia/schizo-affective patients and 44 healthy controls using the Montreal Battery for Evaluation of Amusia (MBEA). AER was assessed using a novel battery in which actors provided portrayals of five separate emotions. The Disorganization factor of the Positive and Negative Syndrome Scale (PANSS) was used as a proxy for language/thought disorder and the MATRICS Consensus Cognitive Battery (MCCB) was used to assess cognition. RESULTS Highly significant deficits were seen between patients and controls across auditory tasks (p < 0.001). Moreover, significant differences were seen in AER between the amusia and intact music-perceiving groups, which remained significant after controlling for group status and education. Correlations with AER were specific to the melody domain, and correlations between protolanguage (melody domain) and language were independent of overall cognition. DISCUSSION This is the first study to document a specific relationship between amusia, AER and thought disorder, suggesting a shared linguistic/protolinguistic impairment. Once amusia was considered, other cognitive factors were no longer significant predictors of AER, suggesting that musical ability in general and melodic discrimination ability in particular may be crucial targets for treatment development and cognitive remediation in schizophrenia.
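The claim that the melody-AER association was "independent of overall cognition" is the kind of statement typically backed by a partial correlation: correlate the two variables after regressing the covariate out of both. The sketch below is an illustrative reconstruction with synthetic data, not the paper's actual statistics:

```python
import numpy as np

def partial_corr(x, y, covariate):
    """Correlation between x and y after linearly regressing the
    covariate (plus an intercept) out of both variables."""
    z = np.column_stack([np.ones_like(covariate), covariate])
    rx = x - z @ np.linalg.lstsq(z, x, rcond=None)[0]  # residualize x on z
    ry = y - z @ np.linalg.lstsq(z, y, rcond=None)[0]  # residualize y on z
    return np.corrcoef(rx, ry)[0, 1]

# When two measures are related only through a shared covariate
# (here a hypothetical "general cognition" factor g), the raw
# correlation is high but the partial correlation vanishes.
rng = np.random.default_rng(2)
g = rng.normal(size=500)
melody = g + 0.1 * rng.normal(size=500)
aer = g + 0.1 * rng.normal(size=500)
raw = np.corrcoef(melody, aer)[0, 1]
partial = partial_corr(melody, aer, g)
```

A surviving (non-zero) partial correlation is what licenses the abstract's conclusion that melodic discrimination predicts AER over and above overall cognition.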
Affiliation(s)
- J. T. Kantrowitz
- Schizophrenia Research Center, Nathan Kline Institute, Orangeburg, NY, USA
- Department of Psychiatry, Columbia University, New York, NY, USA
| | - N. Scaramello
- Schizophrenia Research Center, Nathan Kline Institute, Orangeburg, NY, USA
| | - A. Jakubovitz
- Schizophrenia Research Center, Nathan Kline Institute, Orangeburg, NY, USA
| | - J. M. Lehrfeld
- Schizophrenia Research Center, Nathan Kline Institute, Orangeburg, NY, USA
| | - P. Laukka
- Department of Psychology, Stockholm University, Sweden
| | - H. A. Elfenbein
- Olin Business School, Washington University, St Louis, MO, USA
| | - G. Silipo
- Schizophrenia Research Center, Nathan Kline Institute, Orangeburg, NY, USA
| | - D. C. Javitt
- Schizophrenia Research Center, Nathan Kline Institute, Orangeburg, NY, USA
- Department of Psychiatry, Columbia University, New York, NY, USA
49
Dondaine T, Robert G, Péron J, Grandjean D, Vérin M, Drapier D, Millet B. Biases in facial and vocal emotion recognition in chronic schizophrenia. Front Psychol 2014; 5:900. [PMID: 25202287 PMCID: PMC4141280 DOI: 10.3389/fpsyg.2014.00900] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2014] [Accepted: 07/29/2014] [Indexed: 01/26/2023] Open
Abstract
There has been extensive research on impaired emotion recognition in schizophrenia in the facial and vocal modalities. The literature points to biases toward non-relevant emotions for emotional faces, but few studies have examined biases in emotion recognition across different modalities (facial and vocal). In order to test emotion recognition biases, we exposed 23 patients with stabilized chronic schizophrenia and 23 healthy controls (HCs) to emotional facial and vocal tasks, asking them to rate emotional intensity on visual analog scales. We showed that patients with schizophrenia provided higher intensity ratings on the non-target scales (e.g., the surprise scale for fear stimuli) than HCs for both tasks. Furthermore, with the exception of neutral vocal stimuli, they provided the same intensity ratings on the target scales as the HCs. These findings suggest that patients with chronic schizophrenia show emotional biases when judging emotional stimuli in the visual and vocal modalities. These biases may stem from a basic sensorial deficit, a higher-order cognitive dysfunction, or both. The respective roles of prefrontal-subcortical circuitry and the basal ganglia are discussed.
Affiliation(s)
- Thibaut Dondaine
- EA 4712 'Behavior and Basal Ganglia' Laboratory, Université de Rennes 1, Rennes, France; Psychiatry Unit, Guillaume Régnier Hospital, Rennes, France
| | - Gabriel Robert
- EA 4712 'Behavior and Basal Ganglia' Laboratory, Université de Rennes 1, Rennes, France; Psychiatry Unit, Guillaume Régnier Hospital, Rennes, France
| | - Julie Péron
- 'Neuroscience of Emotion and Affective Dynamics' Laboratory, Department of Psychology, University of Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Switzerland
| | - Didier Grandjean
- 'Neuroscience of Emotion and Affective Dynamics' Laboratory, Department of Psychology, University of Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Switzerland
| | - Marc Vérin
- EA 4712 'Behavior and Basal Ganglia' Laboratory, Université de Rennes 1, Rennes, France; Neurology Unit, University Hospital of Rennes, France
| | - Dominique Drapier
- EA 4712 'Behavior and Basal Ganglia' Laboratory, Université de Rennes 1, Rennes, France; Psychiatry Unit, Guillaume Régnier Hospital, Rennes, France
| | - Bruno Millet
- EA 4712 'Behavior and Basal Ganglia' Laboratory, Université de Rennes 1, Rennes, France; Psychiatry Unit, Guillaume Régnier Hospital, Rennes, France
50
Champagne J, Mendrek A, Germain M, Hot P, Lavoie ME. Event-related brain potentials to emotional images and gonadal steroid hormone levels in patients with schizophrenia and paired controls. Front Psychol 2014; 5:543. [PMID: 24966840 PMCID: PMC4052747 DOI: 10.3389/fpsyg.2014.00543] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2014] [Accepted: 05/16/2014] [Indexed: 12/05/2022] Open
Abstract
Prominent disturbances in the experience, expression, and recognition of emotion in patients with schizophrenia have been relatively well documented over the last few years. Furthermore, sex differences in behavior and in brain activity associated with the processing of various emotions have been reported both in the general population and in schizophrenia patients. Others have proposed that these sex differences should instead be attributed to testosterone, which may play a role in the etiology of schizophrenia; it has also been suggested that estradiol may play a protective role in schizophrenia. Surprisingly, few studies of this pathology have focused on both brain substrates and gonadal steroid hormone levels in emotional processing. In the present study, we investigated electrocortical responses related to emotional valence and arousal, as well as gonadal steroid hormone levels, in patients with schizophrenia. Event-related potentials (ERPs) were recorded during exposure to emotional pictures in 18 patients with schizophrenia and in 24 control participants matched on intelligence, handedness and socioeconomic status. Given their sensitivity to emotional and attentional processes, the P200, N200 and P300 were selected for analysis. More precisely, emotional valence generally affects early components (N200), which reflect early selective attention, whereas emotional arousal and valence both influence the P300 component, which is related to context updating in memory and to stimulus categorization. Results showed that in the control group the amplitude of the N200 was significantly lateralized over the right hemisphere, while there was no such lateralization in patients with schizophrenia. In patients with schizophrenia, significantly smaller anterior P300 amplitude was observed to unpleasant, compared with pleasant, pictures. That anterior P300 reduction was also correlated with negative symptoms. The N200 and P300 amplitudes were positively correlated with estradiol level in all conditions, such that the N200 and P300 were reduced when estradiol level was higher. Conversely, only the P300 amplitude showed a positive correlation with testosterone level.
Affiliation(s)
- Julie Champagne
- Axe de Neurobiologie Cognitive, Laboratoire de Psychophysiologie Cognitive et Sociale, Centre de Recherche de l'Institut Universitaire en Santé Mentale de Montréal, Montréal, QC, Canada; Department of Psychiatry, Université de Montréal, Montréal, QC, Canada
| | - Adrianna Mendrek
- Axe de Neurobiologie Cognitive, Laboratoire de Psychophysiologie Cognitive et Sociale, Centre de Recherche de l'Institut Universitaire en Santé Mentale de Montréal, Montréal, QC, Canada; Department of Psychology, Bishop's University, Sherbrooke, QC, Canada
| | - Martine Germain
- Axe de Neurobiologie Cognitive, Laboratoire de Psychophysiologie Cognitive et Sociale, Centre de Recherche de l'Institut Universitaire en Santé Mentale de Montréal, Montréal, QC, Canada; Department of Psychiatry, Université de Montréal, Montréal, QC, Canada
| | - Pascal Hot
- Laboratoire de Psychologie et Neurocognition, Université de Savoie, Chambéry, France
| | - Marc E Lavoie
- Axe de Neurobiologie Cognitive, Laboratoire de Psychophysiologie Cognitive et Sociale, Centre de Recherche de l'Institut Universitaire en Santé Mentale de Montréal, Montréal, QC, Canada; Department of Psychiatry, Université de Montréal, Montréal, QC, Canada