1
Day TC, Malik I, Boateng S, Hauschild KM, Lerner MD. Vocal Emotion Recognition in Autism: Behavioral Performance and Event-Related Potential (ERP) Response. J Autism Dev Disord 2024; 54:1235-1248. [PMID: 36694007] [DOI: 10.1007/s10803-023-05898-8]
Abstract
Autistic youth display difficulties in emotion recognition, yet little research has examined behavioral and neural indices of vocal emotion recognition (VER). The current study examines behavioral and event-related potential (N100, P200, Late Positive Potential [LPP]) indices of VER in autistic and non-autistic youth. Participants (N = 164) completed an emotion recognition task, the Diagnostic Analyses of Nonverbal Accuracy (DANVA-2), which included VER, during EEG recording. The LPP amplitude was larger in response to high-intensity VER, and social cognition predicted VER errors. Verbal IQ, not autism, was related to VER errors. An interaction between VER intensity and social communication impairments revealed that these impairments were related to larger LPP amplitudes during low-intensity VER. Taken together, differences in VER may be due to higher-order cognitive processes, not basic, early perception (N100, P200), and verbal cognitive abilities may underlie behavioral, yet occlude neural, differences in VER processing.
Affiliation(s)
- Talena C Day
- Psychology Department, Stony Brook University, Stony Brook, Psychology B-354, Stony Brook, NY, 11794-2500, USA
- Isha Malik
- Psychology Department, Stony Brook University, Stony Brook, Psychology B-354, Stony Brook, NY, 11794-2500, USA
- Sydney Boateng
- Psychology Department, Stony Brook University, Stony Brook, Psychology B-354, Stony Brook, NY, 11794-2500, USA
- Matthew D Lerner
- Psychology Department, Stony Brook University, Stony Brook, Psychology B-354, Stony Brook, NY, 11794-2500, USA
2
Sarzedas J, Lima CF, Roberto MS, Scott SK, Pinheiro AP, Conde T. Blindness influences emotional authenticity perception in voices: Behavioral and ERP evidence. Cortex 2024; 172:254-270. [PMID: 38123404] [DOI: 10.1016/j.cortex.2023.11.005]
Abstract
The ability to distinguish spontaneous from volitional emotional expressions is an important social skill. How do blind individuals perceive emotional authenticity? Unlike sighted individuals, they cannot rely on facial and body language cues, relying instead on vocal cues alone. Here, we combined behavioral and ERP measures to investigate authenticity perception in laughter and crying in individuals with early- or late-onset blindness. Early-blind, late-blind, and sighted control participants (n = 17 per group, N = 51) completed authenticity and emotion discrimination tasks while EEG data were recorded. The stimuli consisted of laughs and cries that were either spontaneous or volitional. The ERP analysis focused on the N1, P2, and late positive potential (LPP). Behaviorally, early-blind participants showed intact authenticity perception, but late-blind participants performed worse than controls. There were no group differences in the emotion discrimination task. In brain responses, all groups were sensitive to laughter authenticity at the P2 stage, and to crying authenticity at the early LPP stage. Nevertheless, only early-blind participants were sensitive to crying authenticity at the N1 and middle LPP stages, and to laughter authenticity at the early LPP stage. Furthermore, early-blind and sighted participants were more sensitive than late-blind ones to crying authenticity at the P2 and late LPP stages. Altogether, these findings suggest that early blindness relates to facilitated brain processing of authenticity in voices, both at early sensory and late cognitive-evaluative stages. Late-onset blindness, in contrast, relates to decreased sensitivity to authenticity at behavioral and brain levels.
Affiliation(s)
- João Sarzedas
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- César F Lima
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal; Institute of Cognitive Neuroscience, University College London, London, UK
- Magda S Roberto
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, London, UK
- Ana P Pinheiro
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Tatiana Conde
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
3
Stephan-Otto C, Núñez C, Lombardini F, Cambra-Martí MR, Ochoa S, Senior C, Brébion G. Neurocognitive bases of self-monitoring of inner speech in hallucination prone individuals. Sci Rep 2023; 13:6251. [PMID: 37069194] [PMCID: PMC10110610] [DOI: 10.1038/s41598-023-32042-4]
Abstract
Verbal hallucinations in schizophrenia patients might be seen as internal verbal productions mistaken for perceptions as a result of over-salient inner speech and/or defective self-monitoring processes. Similar cognitive mechanisms might underpin verbal hallucination proneness in the general population. We investigated, in a non-clinical sample, the cerebral activity associated with verbal hallucinatory predisposition during false recognition of familiar words (assumed to stem from poor monitoring of inner speech) vs. uncommon words. Thirty-seven healthy participants underwent a verbal recognition task. High- and low-frequency words were presented outside the scanner. In the scanner, the participants were then required to recognize the target words among equivalent distractors. Results showed that verbal hallucination proneness was associated with higher rates of false recognition of high-frequency words. It was further associated with activation of language and decisional brain areas during false recognitions of low-, but not high-, frequency words, and with activation of a recollective brain area during correct recognitions of low-, but not high-, frequency words. The increased tendency to report familiar words as targets, along with a lack of activation of the language, recollective, and decisional brain areas necessary for their judgement, suggests failure in the self-monitoring of inner speech in verbal hallucination-prone individuals.
Affiliation(s)
- Christian Stephan-Otto
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Spain
- Parc Sanitari Sant Joan de Déu, Sant Boi de Llobregat, Spain
- Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Madrid, Spain
- Christian Núñez
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Spain
- Parc Sanitari Sant Joan de Déu, Sant Boi de Llobregat, Spain
- Susana Ochoa
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Spain
- Parc Sanitari Sant Joan de Déu, Sant Boi de Llobregat, Spain
- Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Madrid, Spain
- Carl Senior
- School of Life & Health Sciences, Aston University, Birmingham, UK
- University of Gibraltar, Gibraltar, UK
- Gildas Brébion
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Spain
- Parc Sanitari Sant Joan de Déu, Sant Boi de Llobregat, Spain
- Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Madrid, Spain
4
Li M, Huang W, Zhang T. Attention Based Convolutional Neural Network with Multi-frequency Resolution Feature for Environment Sound Classification. Neural Process Lett 2022; 55:1-16. [PMID: 36312843] [PMCID: PMC9589621] [DOI: 10.1007/s11063-022-11041-y]
Abstract
Environmental sound classification is of great research significance in fields such as intelligent audio monitoring. A novel multi-frequency resolution (MFR) feature is proposed in this paper to address the problem that existing single-frequency-resolution time-frequency features cannot effectively express the characteristics of multiple types of sound. The MFR feature is composed of three features with different frequency resolutions, each compressed to a different degree along the time dimension. This method not only has a data-augmentation effect but also captures more context information during feature extraction. The MFR features of the Log-Mel Spectrogram, Cochleagram, and Constant-Q Transform are combined to form a multi-channel MFR feature. In addition, a network named SacNet is built to suppress the invalid information contained in the time-frequency feature map. The basic structural unit of SacNet consists of two parallel branches: one uses depthwise separable convolution as the main feature extractor, and the other uses a spatial attention module to extract more effective information. Experimental results demonstrate that the proposed method achieves state-of-the-art accuracies of 97.5%, 93.1%, and 95.3% on the ESC10, ESC50, and UrbanSound8K benchmark datasets, respectively, improvements of 3.3%, 0.5%, and 2.3% over previous advanced methods.
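The multi-frequency-resolution idea in this entry can be sketched in a few lines of NumPy: compute several spectrograms of the same clip with different analysis window lengths (longer window = finer frequency, coarser time resolution), resample each to a common number of time frames, and stack them as channels. The window sizes, bin counts, and frame count below are illustrative assumptions, not the paper's exact parameters, and a plain STFT stands in for the paper's Log-Mel/Cochleagram/CQT front ends.

```python
import numpy as np

def stft_mag(y, n_fft, hop):
    # Magnitude spectrogram via sliding Hann windows and the real FFT.
    win = np.hanning(n_fft)
    frames = [np.abs(np.fft.rfft(y[s:s + n_fft] * win))
              for s in range(0, len(y) - n_fft + 1, hop)]
    return np.array(frames).T  # shape: (freq_bins, time_frames)

def mfr_feature(y, target_frames=64):
    # Three spectrograms with different frequency resolutions, each
    # resampled along the time axis to a common frame count so they
    # can be stacked as channels of one multi-resolution feature.
    feats = []
    for n_fft in (256, 512, 1024):          # illustrative window sizes
        S = stft_mag(y, n_fft, hop=n_fft // 2)
        idx = np.linspace(0, S.shape[1] - 1, target_frames).astype(int)
        S = np.log1p(S[:, idx])             # log compression, time resample
        feats.append(S[:129, :])            # crop to a shared 129 freq bins
    return np.stack(feats)                  # (3, 129, target_frames)

y = np.random.default_rng(0).standard_normal(16000)  # 1 s placeholder signal
F = mfr_feature(y)
print(F.shape)  # (3, 129, 64)
```

Stacking resolutions this way is what lets a downstream CNN see both fine spectral detail and fine temporal detail of the same event in separate channels.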
Affiliation(s)
- Minze Li
- Chengdu Techman Software Co., Ltd., Chengdu, Sichuan, China
- Wu Huang
- Sichuan University, Chengdu, Sichuan, China
- Tao Zhang
- Chengdu Techman Software Co., Ltd., Chengdu, Sichuan, China
5
Martins I, Lima CF, Pinheiro AP. Enhanced salience of musical sounds in singers and instrumentalists. Cogn Affect Behav Neurosci 2022; 22:1044-1062. [PMID: 35501427] [DOI: 10.3758/s13415-022-01007-x]
Abstract
Music training has been linked to facilitated processing of emotional sounds. However, most studies have focused on speech, and less is known about musicians' brain responses to other emotional sounds and in relation to instrument-specific experience. The current study combined behavioral and EEG methods to address two novel questions related to the perception of auditory emotional cues: whether and how long-term music training relates to a distinct emotional processing of nonverbal vocalizations and music; and whether distinct training profiles (vocal vs. instrumental) modulate brain responses to emotional sounds from early to late processing stages. Fifty-eight participants completed an EEG implicit emotional processing task, in which musical and vocal sounds differing in valence were presented as nontarget stimuli. After this task, participants explicitly evaluated the same sounds regarding the emotion being expressed, their valence, and arousal. Compared with nonmusicians, musicians displayed enhanced salience detection (P2), attention orienting (P3), and elaborative processing (Late Positive Potential) of musical (vs. vocal) sounds in event-related potential (ERP) data. The explicit evaluation of musical sounds was also distinct in musicians: their accuracy in the emotional recognition of musical sounds was similar across valence types, and they judged musical sounds to be more pleasant and more arousing than nonmusicians did. Specific profiles of music training (singers vs. instrumentalists) did not relate to differences in the processing of vocal vs. musical sounds. Together, these findings reveal that music has a privileged status in the auditory system of long-term musically trained listeners, irrespective of their instrument-specific experience.
Affiliation(s)
- Inês Martins
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, 1649-013, Lisbon, Portugal
- César F Lima
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisbon, Portugal
- Ana P Pinheiro
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, 1649-013, Lisbon, Portugal
6
Johnson JF, Belyk M, Schwartze M, Pinheiro AP, Kotz SA. Hypersensitivity to passive voice hearing in hallucination proneness. Front Hum Neurosci 2022; 16:859731. [PMID: 35966990] [PMCID: PMC9366353] [DOI: 10.3389/fnhum.2022.859731]
Abstract
Voices are a complex and rich acoustic signal processed in an extensive cortical brain network. Specialized regions within this network support voice perception and production and may be differentially affected in pathological voice processing. For example, the experience of hallucinating voices has been linked to hyperactivity in temporal and extra-temporal voice areas, possibly extending into regions associated with vocalization. Predominant self-monitoring hypotheses ascribe a primary role of voice production regions to auditory verbal hallucinations (AVH). Alternative postulations view a generalized perceptual salience bias as causal to AVH. These theories are not mutually exclusive, as both ascribe the emergence and phenomenology of AVH to unbalanced top-down and bottom-up signal processing. The focus of the current study was to investigate the neurocognitive mechanisms underlying predisposition brain states for emergent hallucinations, detached from the effects of inner speech. Using the temporal voice area (TVA) localizer task, we explored putative hypersalient responses to passively presented sounds in relation to hallucination proneness (HP). Furthermore, to avoid confounds commonly found in clinical samples, we employed the Launay-Slade Hallucination Scale (LSHS) for the quantification of HP levels in healthy people across an experiential continuum spanning the general population. We report increased activation in the right posterior superior temporal gyrus (pSTG) during the perception of voice features that positively correlates with increased HP scores. In line with prior results, we propose that this right-lateralized pSTG activation might indicate early hypersensitivity to acoustic features coding speaker identity that extends beyond own voice production to perception in healthy participants prone to experience AVH.
Affiliation(s)
- Joseph F. Johnson
- Department of Neuropsychology and Psychopharmacology, University of Maastricht, Maastricht, Netherlands
- Michel Belyk
- Department of Psychology, Edge Hill University, Ormskirk, United Kingdom
- Michael Schwartze
- Department of Neuropsychology and Psychopharmacology, University of Maastricht, Maastricht, Netherlands
- Ana P. Pinheiro
- Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal
- Sonja A. Kotz
- Department of Neuropsychology and Psychopharmacology, University of Maastricht, Maastricht, Netherlands
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
7
Amorim M, Roberto MS, Kotz SA, Pinheiro AP. The perceived salience of vocal emotions is dampened in non-clinical auditory verbal hallucinations. Cogn Neuropsychiatry 2022; 27:169-182. [PMID: 34261424] [DOI: 10.1080/13546805.2021.1949972]
Abstract
Introduction: Auditory verbal hallucinations (AVH) are a cardinal symptom of schizophrenia but are also reported in the general population without need for psychiatric care. Previous evidence suggests that AVH may reflect an imbalance of prior expectation and sensory information, and that altered salience processing is characteristic of both psychotic and non-clinical voice hearers. However, it remains to be shown how such an imbalance affects the categorisation of vocal emotions under perceptual ambiguity.
Methods: Neutral and emotional nonverbal vocalisations were morphed along two continua differing in valence (anger; pleasure), each including 11 morphing steps at intervals of 10%. College students (N = 234) differing in AVH proneness (measured with the Launay-Slade Hallucination Scale) evaluated the emotional quality of the vocalisations.
Results: Increased AVH proneness was associated with more frequent categorisation of ambiguous vocalisations as 'neutral', irrespective of valence. Similarly, the perceptual boundary for emotional classification was shifted by AVH proneness: participants needed more emotional information to categorise a voice as emotional.
Conclusions: These findings suggest that emotional salience in vocalisations is dampened as a function of increased AVH proneness. This could be related to changes in the acoustic representations of emotions or reflect top-down expectations of less salient information in the social environment.
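The "perceptual boundary" analysis in this entry can be illustrated with a minimal sketch: given the proportion of "emotional" responses at each of the 11 morph steps (0% = fully neutral, 100% = fully emotional, 10% intervals), the boundary is the morph level at which responses cross 50%. The response proportions below are invented for illustration, and simple linear interpolation stands in for the psychometric-function fitting such studies typically use; a boundary shifted toward higher morph levels would correspond to needing more emotional information.

```python
import numpy as np

# Morph continuum: 0% (neutral) ... 100% (full emotion) in 10% steps.
steps = np.arange(0, 101, 10)

# Hypothetical proportion of "emotional" categorisations per step.
p_emotional = np.array([.02, .04, .08, .15, .30, .55,
                        .75, .88, .94, .97, .99])

def boundary(steps, p, criterion=0.5):
    # Morph level where the (monotonically increasing) response curve
    # crosses the criterion, found by linear interpolation between the
    # two surrounding steps.
    i = np.searchsorted(p, criterion)  # first index with p >= criterion
    x0, x1, y0, y1 = steps[i - 1], steps[i], p[i - 1], p[i]
    return x0 + (criterion - y0) * (x1 - x0) / (y1 - y0)

print(boundary(steps, p_emotional))  # 48.0
```

A group whose curve crosses 50% at, say, 55% morphing rather than 48% shows exactly the kind of rightward boundary shift the abstract reports for high-AVH-proneness participants.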
Affiliation(s)
- Maria Amorim
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Magda S Roberto
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Sonja A Kotz
- Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Ana P Pinheiro
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal; Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands
8
Cui Q, Liu M, Liu CH, Long Z, Zhao K, Fu X. Unpredictable fearful stimuli disrupt timing activities: Evidence from event-related potentials. Neuropsychologia 2021; 163:108057. [PMID: 34653495] [DOI: 10.1016/j.neuropsychologia.2021.108057]
Abstract
The present study investigated the effect of an imminent fearful stimulus on an ongoing temporal task. Participants judged the duration of a blank temporal interval followed by a fearful or a neutral image. Results showed an underestimation of the duration in the fearful condition relative to the neutral condition, but only when the occurrence of the fearful image was difficult to predict. ERP results for the blank temporal interval showed no effect of the fearful stimulus on the contingent negative variation (CNV) amplitude in the clock stage. However, after the image onset, there was a larger P1 for the fearful relative to the neutral condition. Although this effect did not differ with the predictability of the fearful event, a late positive potential (LPP) component displayed larger amplitude only for unpredictable fearful stimuli. The time-frequency results showed enhanced delta-theta power (0.5-7.5 Hz) for the unpredictable fearful stimuli in the late stage. Importantly, the enhanced delta-theta rhythm correlated negatively with the duration judgments. Together, these results suggest that an unpredictable fearful event might divert more attention away from the counting process in the working memory stage, resulting in missing ticks and temporal underestimation.
Affiliation(s)
- Qian Cui
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China; School of Psychology, Liaoning Normal University, Dalian, 116029, China
- Mingtong Liu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Chang Hong Liu
- Department of Psychology, Bournemouth University, Dorset, United Kingdom
- Zhengkun Long
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Ke Zhao
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Xiaolan Fu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China