1. Ma Y, Yu K, Yin S, Li L, Li P, Wang R. Attention Modulates the Role of Speakers' Voice Identity and Linguistic Information in Spoken Word Processing: Evidence From Event-Related Potentials. J Speech Lang Hear Res 2023; 66:1678-1693. [PMID: 37071787] [DOI: 10.1044/2023_jslhr-22-00420]
Abstract
PURPOSE The human voice usually carries two types of information: linguistic and identity information. However, whether and how linguistic information interacts with identity information remains controversial. This study explored the processing of identity and linguistic information during spoken word processing, with a focus on the modulating role of attention. METHOD We conducted two event-related potential (ERP) experiments. Different speakers (self, friend, and unfamiliar speakers) and emotional words (positive, negative, and neutral words) were used to manipulate identity and linguistic information. With this manipulation, Experiment 1 examined identity and linguistic information processing with a word decision task that requires participants' explicit attention to linguistic information. Experiment 2 further investigated the issue with a passive oddball paradigm that requires little attention to either the identity or the linguistic information. RESULTS Experiment 1 revealed an interaction among speaker, word type, and hemisphere in N400 amplitudes but not in N100 or P200 amplitudes, suggesting that identity information interacted with linguistic information at a later stage of spoken word processing. The mismatch negativity results of Experiment 2 showed no significant interaction between speaker and word pair, indicating that identity and linguistic information were processed independently. CONCLUSIONS Identity information can interact with linguistic information during spoken word processing, but this interaction is modulated by the task's demands on attentional involvement. We propose an attention-modulated account of the mechanism underlying identity and linguistic information processing and discuss the implications of our findings in light of the integration and independence theories.
Affiliation(s)
- Yunxiao Ma
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, Ministry of Education, & Center for Studies of Psychological Application, School of Psychology, South China Normal University, Guangzhou, China
- Keke Yu
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, Ministry of Education, & Center for Studies of Psychological Application, School of Psychology, South China Normal University, Guangzhou, China
- Shuqi Yin
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, Ministry of Education, & Center for Studies of Psychological Application, School of Psychology, South China Normal University, Guangzhou, China
- Li Li
- The Key Laboratory of Chinese Learning and International Promotion, and College of International Culture, South China Normal University, Guangzhou, China
- Ping Li
- Department of Chinese and Bilingual Studies, Faculty of Humanities, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Ruiming Wang
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, Ministry of Education, & Center for Studies of Psychological Application, School of Psychology, South China Normal University, Guangzhou, China
2. Rinke P, Schmidt T, Beier K, Kaul R, Scharinger M. Rapid pre-attentive processing of a famous speaker: Electrophysiological effects of Angela Merkel's voice. Neuropsychologia 2022; 173:108312. [PMID: 35781011] [DOI: 10.1016/j.neuropsychologia.2022.108312]
Abstract
The recognition of human speakers by their voices is a remarkable cognitive ability. Previous research has established a voice area in the right temporal cortex involved in the integration of speaker-specific acoustic features. This integration appears to occur rapidly, especially in the case of familiar voices. However, the exact time course of this process is less well understood. To this end, we investigated the automatic change-detection response of the human brain while participants listened to the famous voice of German chancellor Angela Merkel, embedded in a context of acoustically matched voices. A classic passive oddball paradigm contrasted short word stimuli uttered by Merkel with word stimuli uttered by two unfamiliar female speakers. Electrophysiological voice-processing indices from 21 participants were quantified as mismatch negativities (MMNs) and P3a differences. Cortical sources were approximated by variable-resolution electromagnetic tomography. The results showed amplitude and latency effects for both the MMN and the P3a: the famous (familiar) voice elicited a smaller but earlier MMN than the unfamiliar voices. The P3a, by contrast, was both larger and later for the familiar than for the unfamiliar voices. Familiar-voice MMNs originated from right-hemispheric regions in temporal cortex, overlapping with the temporal voice area, whereas unfamiliar-voice MMNs stemmed from the left superior temporal gyrus. These results suggest that the processing of a very famous voice relies on pre-attentive right temporal processing within the first 150 ms of the acoustic signal. The findings further our understanding of the neural dynamics underlying familiar voice processing.
Affiliation(s)
- Paula Rinke
- Research Group Phonetics, Institute of German Linguistics, Philipps-University Marburg, Germany; Center for Mind, Brain & Behavior, Universities of Marburg & Gießen, Germany
- Tatjana Schmidt
- Center for Mind, Brain & Behavior, Universities of Marburg & Gießen, Germany; Faculté de biologie et de médecine, University of Lausanne, Switzerland
- Kjartan Beier
- Research Group Phonetics, Institute of German Linguistics, Philipps-University Marburg, Germany
- Ramona Kaul
- Research Group Phonetics, Institute of German Linguistics, Philipps-University Marburg, Germany
- Mathias Scharinger
- Research Group Phonetics, Institute of German Linguistics, Philipps-University Marburg, Germany; Research Center »Deutscher Sprachatlas«, Philipps-University Marburg, Germany; Center for Mind, Brain & Behavior, Universities of Marburg & Gießen, Germany
3. Cheng S, Li X, Zhan Q, Wang Y, Guo Y, Huang W, Cao Y, Feng T, Wang H, Wu S, An F, Wang X, Zhao L, Liu X. Processing Self-Related Information Under Non-attentional Conditions Revealed by Visual MMN. Front Hum Neurosci 2022; 16:782496. [PMID: 35463934] [PMCID: PMC9019658] [DOI: 10.3389/fnhum.2022.782496]
Abstract
Mismatch negativity (MMN), an event-related potential (ERP) component, is a biomarker of pre-attentive change detection under non-attentional conditions. This study explored whether highly self-related information elicits MMN in the visual channel, which would indicate automatic processing of self-related information at the pre-attentive stage. Thirty-five participants were recruited and asked to list 25 city names, including their birthplace. Based on the degree of relevance reported by the participants, city names were divided into high (birthplace, as deviants), medium (Xi'an, where the participants' university is located, as deviants), and low (entirely unrelated cities, as standard stimuli) self-related information. Visual MMN (vMMN), with an occipital-temporal scalp distribution, was elicited by high but not by medium self-related information, indicating that, under non-attentional conditions, high self-related information can be effectively processed automatically at the pre-attentive stage relative to low self-related information. These data provide new electrophysiological evidence for self-related information processing.
Affiliation(s)
- Sizhe Cheng
- Department of Military Medical Psychology, Air Force Medical University, Xi'an, China
- Xinhong Li
- Department of General Medicine, Tangdu Hospital, Xi'an, China
- Qingchen Zhan
- Department of Military Medical Psychology, Air Force Medical University, Xi'an, China
- Yapei Wang
- Department of Military Medical Psychology, Air Force Medical University, Xi'an, China
- Yaning Guo
- Department of Military Medical Psychology, Air Force Medical University, Xi'an, China
- Wei Huang
- Department of Psychiatry and Psychology, 923 Hospital of Joint Logistic Support Force of Chinese People's Liberation Army, Nanning, China
- Yang Cao
- Department of Military Medical Psychology, Air Force Medical University, Xi'an, China
- Tingwei Feng
- Department of Military Medical Psychology, Air Force Medical University, Xi'an, China
- Hui Wang
- Department of Military Medical Psychology, Air Force Medical University, Xi'an, China
- Shengjun Wu
- Department of Military Medical Psychology, Air Force Medical University, Xi'an, China
- Fei An
- Department of Military Medical Psychology, Air Force Medical University, Xi'an, China
- Xiuchao Wang
- Department of Military Medical Psychology, Air Force Medical University, Xi'an, China
- Lun Zhao
- School of Education Science, Liaocheng University, Liaocheng, China
- Xufeng Liu
- Department of Military Medical Psychology, Air Force Medical University, Xi'an, China
4. Dou H, Dai Y, Qiu Y, Lei Y. Attachment voices promote safety learning in humans: A critical role for P2. Psychophysiology 2022; 59:e13997. [PMID: 35244973] [DOI: 10.1111/psyp.13997]
Abstract
Humans have evolved to seek the proximity of attachment figures during times of threat in order to obtain a sense of safety. In this context, we examined whether the voice of an intimate partner (an "attachment voice") could reduce fear learning of conditioned stimuli (CS+) and enhance learning of safety signals (CS-). Although the ability to learn safety signals is vital for human survival, few studies have explored how attachment voices affect safety learning. To test our hypothesis, we recruited thirty-five young couples and performed a classic Pavlovian conditioning experiment, recording behavioral and electroencephalographic (EEG) data. The results showed that, compared with a stranger's voice, the partners' voices reduced expectancy of the unconditioned stimulus (a shock) during fear conditioning, as well as the magnitude of the P2 event-related potential within the EEG responses, provided the voices served as safety signals. Additionally, behavioral and EEG responses to the CS+ and CS- differed more when participants heard their partner's voice than when they heard the stranger's voice. Thus, attachment voices, even as pure vowel sounds without any semantic information, enhanced the acquisition of conditioned safety (CS-). These findings may inform new techniques to improve clinical treatments for fear- and anxiety-related disorders and psychological interventions against the mental health effects of public health emergencies.
Affiliation(s)
- Haoran Dou
- Institute for Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China; Faculty of Education and Psychology, University of Jyväskylä, Jyväskylä, Finland; College of Psychology, Shenzhen University, Shenzhen, China
- Yuqian Dai
- College of Psychology, Shenzhen University, Shenzhen, China
- Yiwen Qiu
- College of Psychology, Shenzhen University, Shenzhen, China
- Yi Lei
- Institute for Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China
5. Wen W, Okon Y, Yamashita A, Asama H. The over-estimation of distance for self-voice versus other-voice. Sci Rep 2022; 12:420. [PMID: 35013503] [PMCID: PMC8748720] [DOI: 10.1038/s41598-021-04437-8]
Abstract
Self-related stimuli are important cues for people to recognize themselves in the external world and hold a special status in our perceptual system. Self-voice plays an important role in daily social communication and is also a frequent input for self-identification. Although many studies have examined the acoustic features of self-voice, none has examined its spatial aspect, even though the spatial perception of voice is important for humans. This study proposes a novel perspective for studying self-voice: we investigated people's distance perception of their own voice when it was heard from an external position. Participants heard their own voice from one of four speakers located either 90 or 180 cm from their seat, either immediately after uttering a short vowel (active session) or while hearing a replay of their own pronunciation (replay session). They were then asked to indicate which speaker the voice came from. Their voices were either pitch-shifted by ± 4 semitones (other-voice condition) or unaltered (self-voice condition). The spatial judgments showed that self-voice from the closer speakers was misattributed to the speakers farther away at a significantly higher proportion than other-voice. This phenomenon was also observed when participants remained silent and heard prerecorded voices. Additional structural equation modeling using participants' schizotypal scores showed that the effect of self-voice on distance perception was significantly associated with scores for delusional thoughts (Peters Delusion Inventory) and distorted body image (Perceptual Aberration Scale) in the active speaking session but not in the replay session. These findings provide important insights into how people process self-related stimuli under small distortions and how this may be linked to the risk of psychosis.
Affiliation(s)
- Wen Wen
- Research Into Artifacts, Center for Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
- Department of Precision Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
- Yuta Okon
- Department of Precision Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
- Atsushi Yamashita
- Department of Precision Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
- Hajime Asama
- Research Into Artifacts, Center for Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
- Department of Precision Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
6. Deng N, Sun Y, Chen X, Li W. How does self name influence the neural processing of emotional prosody? An ERP study. Psych J 2021; 11:30-42. [PMID: 34856651] [DOI: 10.1002/pchj.499]
Abstract
In this study, we investigated whether self-relevant information can accelerate the processing of emotional information. In a passive auditory oddball paradigm, we recorded electroencephalography while participants listened to stimuli comprising their own names (ONs) and unfamiliar names (UNs) spoken with varying emotional prosody. At 220-300 ms, mismatch negativity (MMN) was more negative for ONs than for UNs and for angry than for neutral prosody. These results suggest that attention is involuntarily attracted by ONs and by emotional prosody, and that both types of information receive priority processing even under pre-attentive conditions. Importantly, ONs with angry prosody induced a more negative MMN than did comparable UNs and ONs with neutral prosody, indicating that the motivational significance embedded in angry prosody promotes the self-reference effect and thus recruits more attentional resources. At 300-500 ms, ONs triggered a smaller P3a than did UNs, suggesting that fewer cognitive resources are required to process self-relevant information. Together, these results suggest that preferentially processed self-relevant and emotional information interact during the pre-attentive stage, with self-reference enhancing the processing of emotional information.
Affiliation(s)
- Nali Deng
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, China
- Yifan Sun
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, China
- Xuhai Chen
- School of Psychology, Shaanxi Normal University, Xi'an, China
- Weijun Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, China
7. Iannotti GR, Orepic P, Brunet D, Koenig T, Alcoba-Banqueri S, Garin DFA, Schaller K, Blanke O, Michel CM. EEG Spatiotemporal Patterns Underlying Self-other Voice Discrimination. Cereb Cortex 2021; 32:1978-1992. [PMID: 34649280] [PMCID: PMC9070353] [DOI: 10.1093/cercor/bhab329]
Abstract
There is growing evidence that the representation of the human “self” recruits specialized systems across different functions and modalities. Compared to self-face and self-body representations, few studies have investigated the neural underpinnings specific to self-voice. Moreover, self-voice stimuli in those studies were consistently presented through air conduction alone, lacking bone conduction, which renders such stimuli different from the self-voice heard during natural speech. Here, we combined psychophysics, voice-morphing technology, and high-density EEG to identify the spatiotemporal patterns underlying self-other voice discrimination (SOVD) in 26 healthy participants, with both air- and bone-conducted stimuli. We identified a self-voice-specific EEG topographic map occurring around 345 ms post-stimulus and activating a network involving the insula, cingulate cortex, and medial temporal lobe structures. The occurrence of this map was modulated both by SOVD task performance and by bone conduction. Specifically, the better participants performed at the SOVD task, the less frequently they activated this network. Likewise, the network was recruited less frequently with bone conduction, which accordingly increased SOVD task performance. This work could have an important clinical impact, as it reveals neural correlates of SOVD impairments, which are believed to account for auditory-verbal hallucinations, a common and highly distressing psychiatric symptom.
Affiliation(s)
- Giannina Rita Iannotti
- Functional Brain Mapping Lab, Department of Fundamental Neurosciences, University of Geneva, 1202, Switzerland; Department of Neurosurgery, University Hospitals of Geneva and Faculty of Medicine, University of Geneva, 1205, Switzerland
- Pavo Orepic
- Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics and Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), 1202, Switzerland
- Denis Brunet
- Functional Brain Mapping Lab, Department of Fundamental Neurosciences, University of Geneva, 1202, Switzerland; CIBM Center for Biomedical Imaging, Lausanne and Geneva, 1015, Switzerland
- Thomas Koenig
- Translational Research Center, University Hospital of Psychiatry and Psychotherapy, University of Bern, Bern 3000, Switzerland
- Sixto Alcoba-Banqueri
- Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics and Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), 1202, Switzerland
- Dorian F A Garin
- Department of Neurosurgery, University Hospitals of Geneva and Faculty of Medicine, University of Geneva, 1205, Switzerland
- Karl Schaller
- Department of Neurosurgery, University Hospitals of Geneva and Faculty of Medicine, University of Geneva, 1205, Switzerland
- Olaf Blanke
- Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics and Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), 1202, Switzerland
- Christoph M Michel
- Functional Brain Mapping Lab, Department of Fundamental Neurosciences, University of Geneva, 1202, Switzerland; CIBM Center for Biomedical Imaging, Lausanne and Geneva, 1015, Switzerland
8. Schaller K, Iannotti GR, Orepic P, Betka S, Haemmerli J, Boex C, Alcoba-Banqueri S, Garin DFA, Herbelin B, Park HD, Michel CM, Blanke O. The perspectives of mapping and monitoring of the sense of self in neurosurgical patients. Acta Neurochir (Wien) 2021; 163:1213-1226. [PMID: 33686522] [PMCID: PMC8053654] [DOI: 10.1007/s00701-021-04778-3]
Abstract
Surgical treatment of tumors, epileptic foci, or vascular lesions requires a detailed individual presurgical workup and intra-operative surveillance of brain functions to minimize the risk of post-surgical neurological deficits and decline in quality of life. Most attention is devoted to language, motor functions, and perception. However, higher cognitive functions such as social cognition, personality, and the sense of self may also be affected by brain surgery. To date, the precise localization and network patterns of the brain regions involved in such functions are not fully understood, making it difficult to assess the risk of related post-surgical deficits. It is in the interest of neurosurgeons to understand which neural systems related to selfhood and personality they may interfere with during surgery. Recent neuroscience research using virtual reality, together with clinical observations, suggests that the insular cortex, medial prefrontal cortex, and temporo-parietal junction are important components of a neural system dedicated to self-consciousness based on multisensory bodily processing, including exteroceptive and interoceptive cues (bodily self-consciousness, BSC). Here, we argue that combined extra- and intra-operative approaches using targeted cognitive testing, functional imaging and EEG, and virtual reality combined with multisensory stimulation may contribute to the assessment of BSC and related cognitive aspects. Although the usefulness of particular biomarkers for intra-operative monitoring, such as cardiac and respiratory signals linked to virtual reality and heartbeat-evoked potentials as a surrogate marker of intact multisensory integration, has yet to be proven, systematic and automated testing of BSC in neurosurgical patients should improve future surgical outcomes.
Affiliation(s)
- Karl Schaller
- Department of Neurosurgery, Geneva University Medical Center & Faculty of Medicine, University of Geneva, Rue Gabrielle-Perret-Gentil 4, 1205, Geneva, Switzerland
- Giannina Rita Iannotti
- Department of Neurosurgery, Geneva University Medical Center & Faculty of Medicine, University of Geneva, Rue Gabrielle-Perret-Gentil 4, 1205, Geneva, Switzerland
- Functional Brain Mapping Laboratory, Department of Fundamental Neurosciences, University of Geneva, Geneva, Switzerland
- Pavo Orepic
- Laboratory of Neurocognitive Science, Center for Neuroprosthetics and Brain Mind Institute, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland
- Sophie Betka
- Department of Neurosurgery, Geneva University Medical Center & Faculty of Medicine, University of Geneva, Rue Gabrielle-Perret-Gentil 4, 1205, Geneva, Switzerland
- Laboratory of Neurocognitive Science, Center for Neuroprosthetics and Brain Mind Institute, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland
- Julien Haemmerli
- Department of Neurosurgery, Geneva University Medical Center & Faculty of Medicine, University of Geneva, Rue Gabrielle-Perret-Gentil 4, 1205, Geneva, Switzerland
- Colette Boex
- Department of Neurosurgery, Geneva University Medical Center & Faculty of Medicine, University of Geneva, Rue Gabrielle-Perret-Gentil 4, 1205, Geneva, Switzerland
- Department of Clinical Neurosciences, Geneva University Medical Center & Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Sixto Alcoba-Banqueri
- Laboratory of Neurocognitive Science, Center for Neuroprosthetics and Brain Mind Institute, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland
- Dorian F A Garin
- Department of Neurosurgery, Geneva University Medical Center & Faculty of Medicine, University of Geneva, Rue Gabrielle-Perret-Gentil 4, 1205, Geneva, Switzerland
- Bruno Herbelin
- Laboratory of Neurocognitive Science, Center for Neuroprosthetics and Brain Mind Institute, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland
- Hyeong-Dong Park
- Laboratory of Neurocognitive Science, Center for Neuroprosthetics and Brain Mind Institute, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland
- Christoph M Michel
- Functional Brain Mapping Laboratory, Department of Fundamental Neurosciences, University of Geneva, Geneva, Switzerland
- Olaf Blanke
- Laboratory of Neurocognitive Science, Center for Neuroprosthetics and Brain Mind Institute, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland
- Department of Clinical Neurosciences, Geneva University Medical Center & Faculty of Medicine, University of Geneva, Geneva, Switzerland
9. The processing of intimately familiar and unfamiliar voices: Specific neural responses of speaker recognition and identification. PLoS One 2021; 16:e0250214. [PMID: 33861789] [PMCID: PMC8051806] [DOI: 10.1371/journal.pone.0250214]
Abstract
Research has repeatedly shown that familiar and unfamiliar voices elicit different neural responses. It has also been suggested that different neural correlates are associated with the feeling of having heard a voice and with knowing who the voice represents. The terminology used to designate these varying responses remains vague, creating a degree of confusion in the literature. Additionally, the terms used for tasks of voice discrimination, voice recognition, and speaker identification are often inconsistent, creating further ambiguity. The present study used event-related potentials (ERPs) to clarify the differences between responses to (1) unknown voices, (2) trained-to-familiar voices established through repeated presentation of speech stimuli, and (3) intimately familiar voices. In the experiment, 13 participants listened to repeated utterances recorded from 12 speakers. Only one of the 12 voices was intimately familiar to a participant; the remaining 11 were unfamiliar. The frequency of presentation of these 11 unfamiliar voices varied, with only one presented frequently (the trained-to-familiar voice). ERP analyses revealed different responses for intimately familiar and unfamiliar voices in two distinct time windows (P2 between 200-250 ms and a late positive component, LPC, between 450-850 ms post-onset), with the late responses occurring only for intimately familiar voices. The LPC presents sustained shifts, whereas the short-time ERP components appear to reflect an early recognition stage. The trained voice also elicited distinct responses compared with rarely heard voices, but these occurred in a third time window (N250 between 300-350 ms post-onset). Overall, the timing of responses suggests that the processing of intimately familiar voices operates in two distinct steps: voice recognition, marked by a P2 over right centro-frontal sites, and speaker identification, marked by an LPC component. The recognition of frequently heard voices entails an independent recognition process marked by a differential N250. Based on the present results and previous observations, we propose that a distinction be drawn between processes of voice "recognition" and "identification". The present study also specifies test conditions that reveal this distinction in neural responses, one of which bears on the length of speech stimuli, given the late responses associated with voice identification.
10. Peng Z, Hu Z, Wang X, Liu H. Mechanism underlying the self-enhancement effect of voice attractiveness evaluation: self-positivity bias and familiarity effect. Scand J Psychol 2020; 61:690-697. [DOI: 10.1111/sjop.12643]
Affiliation(s)
- Zhikang Peng
- Department of Psychology, Zhejiang Sci-Tech University, Hangzhou 310018, China
- Zhiguo Hu
- Center for Cognition and Brain Disorders, Hangzhou Normal University, Hangzhou 311121, China
- Institute of Psychological Sciences, Hangzhou Normal University, Hangzhou 311121, China
- Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou Normal University, Hangzhou 311121, China
- Xinyu Wang
- Department of Psychology, Zhejiang Sci-Tech University, Hangzhou 310018, China
- Hongyan Liu
- Department of Psychology, Zhejiang Sci-Tech University, Hangzhou 310018, China
11. Rachman L, Dubal S, Aucouturier JJ. Happy you, happy me: expressive changes on a stranger's voice recruit faster implicit processes than self-produced expressions. Soc Cogn Affect Neurosci 2020; 14:559-568. [PMID: 31044241] [PMCID: PMC6545538] [DOI: 10.1093/scan/nsz030]
Abstract
In social interactions, people have to pay attention both to the 'what' and the 'who'. In particular, expressive changes heard in speech signals have to be integrated with speaker identity, differentiating, for example, self- and other-produced signals. While previous research has shown that processing of self-related visual information is facilitated compared with non-self stimuli, evidence in the auditory modality remains mixed. Here, we compared electroencephalography (EEG) responses to expressive changes in sequences of self- or other-produced speech sounds using a mismatch negativity (MMN) passive oddball paradigm. Critically, to control for speaker differences, we used programmable acoustic transformations to create voice deviants that differed from standards in exactly the same manner, making EEG responses to such deviations comparable between sequences. Our results indicate that expressive changes on a stranger's voice are highly prioritized in auditory processing compared with identical changes on the self-voice: other-voice deviants generated earlier MMN onset responses and involved stronger cortical activations in a left motor and somatosensory network, suggestive of an increased recruitment of resources for less internally predictable, and therefore perhaps more socially relevant, signals.
Affiliation(s)
- Laura Rachman
- Inserm U, CNRS UMR, Sorbonne Université UMR S, Institut du Cerveau et de la Moelle épinière, Social and Affective Neuroscience Lab, Paris, France; Science & Technology of Music and Sound, UMR (CNRS/IRCAM/Sorbonne Université), Paris, France
- Stéphanie Dubal
- Inserm U, CNRS UMR, Sorbonne Université UMR S, Institut du Cerveau et de la Moelle épinière, Social and Affective Neuroscience Lab, Paris, France
- Jean-Julien Aucouturier
- Science & Technology of Music and Sound, UMR (CNRS/IRCAM/Sorbonne Université), Paris, France
12
Liu L, Li W, Li J, Lou L, Chen J. Temporal Features of Psychological and Physical Self-Representation: An ERP Study. Front Psychol 2019; 10:785. [PMID: 31024408] [PMCID: PMC6467969] [DOI: 10.3389/fpsyg.2019.00785]
Abstract
The psychological self and the physical self are two important aspects of self-concept. Although a growing number of behavioral and neuroimaging studies have investigated the cognitive mechanisms and neural substrates underlying psychological and physical self-representation, most existing research has examined the two in isolation. The present study examined electrophysiological responses to both psychological (one's own name) and physical (one's own voice) self-related stimuli within a single paradigm. Event-related potentials (ERPs) were recorded while subjects' own and others' names, uttered in the subjects' own or others' voices (own voice-own name, own voice-other's name, other's voice-own name, other's voice-other's name), were presented in an auditory passive oddball task. The results showed that one's own name elicited smaller P2 and larger P3 amplitudes than others' names, irrespective of voice identity. However, no differences between self and other's voice were observed during the P2 and P3 stages. Moreover, voice content and voice identity interacted at the N400 stage: the subject's own voice elicited a larger parietal N400 amplitude than another's voice in the other-name condition but not in the own-name condition. Taken together, these findings suggest that psychological (one's own name) and physical (one's own voice) self-representations induce distinct electrophysiological response patterns in auditory-cognitive processing.
Affiliation(s)
- Lei Liu
- School of Educational Science, Hunan Normal University, Changsha, China; Cognition and Human Behavior Key Laboratory, Hunan Normal University, Changsha, China; School of Psychological and Cognitive Science, Peking University, Beijing, China
- Wenjie Li
- School of Educational Science, Hunan Normal University, Changsha, China; Cognition and Human Behavior Key Laboratory, Hunan Normal University, Changsha, China
- Jin Li
- School of Educational Science, Hunan Normal University, Changsha, China; Cognition and Human Behavior Key Laboratory, Hunan Normal University, Changsha, China
- Lingna Lou
- Faculty of Philosophy, Martin Luther University Halle-Wittenberg, Halle, Germany
- Jie Chen
- School of Educational Science, Hunan Normal University, Changsha, China; Cognition and Human Behavior Key Laboratory, Hunan Normal University, Changsha, China
13
Barnaud ML, Schwartz JL, Bessière P, Diard J. Computer simulations of coupled idiosyncrasies in speech perception and speech production with COSMO, a perceptuo-motor Bayesian model of speech communication. PLoS One 2019; 14:e0210302. [PMID: 30633745] [PMCID: PMC6329510] [DOI: 10.1371/journal.pone.0210302]
Abstract
The existence of a functional relationship between speech perception and production systems is now widely accepted, but the exact nature and role of this relationship remains quite unclear. The existence of idiosyncrasies in production and in perception sheds interesting light on the nature of the link. Indeed, a number of studies explore inter-individual variability in auditory and motor prototypes within a given language, and provide evidence for a link between both sets. In this paper, we attempt to simulate one study on coupled idiosyncrasies in the perception and production of French oral vowels, within COSMO, a Bayesian computational model of speech communication. First, we show that if the learning process in COSMO includes a communicative mechanism between a Learning Agent and a Master Agent, vowel production does display idiosyncrasies. Second, we implement within COSMO three models for speech perception that are, respectively, auditory, motor and perceptuo-motor. We show that no idiosyncrasy in perception can be obtained in the auditory model, since it is optimally tuned to the learning environment, which does not include the motor variability of the Learning Agent. On the contrary, motor and perceptuo-motor models provide perception idiosyncrasies correlated with idiosyncrasies in production. We draw conclusions about the role and importance of motor processes in speech perception, and propose a perceptuo-motor model in which auditory processing would enable optimal processing of learned sounds and motor processing would be helpful in unlearned adverse conditions.
Affiliation(s)
- Marie-Lou Barnaud
- Univ. Grenoble Alpes, Gipsa-lab, Grenoble, France; CNRS, Gipsa-lab, Grenoble, France; Univ. Grenoble Alpes, LPNC, Grenoble, France; CNRS, LPNC, Grenoble, France
- Jean-Luc Schwartz
- Univ. Grenoble Alpes, Gipsa-lab, Grenoble, France; CNRS, Gipsa-lab, Grenoble, France
- Julien Diard
- Univ. Grenoble Alpes, LPNC, Grenoble, France; CNRS, LPNC, Grenoble, France
14
Conde T, Gonçalves ÓF, Pinheiro AP. Stimulus complexity matters when you hear your own voice: Attention effects on self-generated voice processing. Int J Psychophysiol 2018; 133:66-78. [PMID: 30114437] [DOI: 10.1016/j.ijpsycho.2018.08.007]
Abstract
The ability to discriminate self- and non-self voice cues is a fundamental aspect of self-awareness and subserves self-monitoring during verbal communication. Nonetheless, the neurofunctional underpinnings of self-voice perception and recognition are still poorly understood. Moreover, how attention and stimulus complexity influence the processing and recognition of one's own voice remains to be clarified. Using an oddball task, the current study investigated how self-relevance and stimulus type interact during selective attention to voices, and how they affect the representation of regularity during voice perception. Event-related potentials (ERPs) were recorded from 18 right-handed males. Pre-recorded self-generated (SGV) and non-self (NSV) voices, consisting of a nonverbal vocalization (vocalization condition) or disyllabic word (word condition), were presented as either standard or target stimuli in different experimental blocks. The results showed increased N2 amplitude to SGV relative to NSV stimuli. Stimulus type modulated later processing stages only: P3 amplitude was increased for SGV relative to NSV words, whereas no differences between SGV and NSV were observed in the case of vocalizations. Moreover, SGV standards elicited reduced N1 and P2 amplitude relative to NSV standards. These findings revealed that the self-voice grabs more attention when listeners are exposed to words but not vocalizations. Further, they indicate that detection of regularity in an auditory stream is facilitated for one's own voice at early processing stages. Together, they demonstrate that self-relevance affects attention to voices differently as a function of stimulus type.
Affiliation(s)
- Tatiana Conde
- Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal; Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Óscar F Gonçalves
- Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal; Spaulding Center of Neuromodulation, Department of Physical Medicine & Rehabilitation, Spaulding Rehabilitation Hospital & Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Bouvé College of Health Sciences, Northeastern University, Boston, MA, USA
- Ana P Pinheiro
- Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal; Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal; Cognitive Neuroscience Lab, Department of Psychiatry, Harvard Medical School, Boston, MA, USA
15
Candini M, Avanzi S, Cantagallo A, Zangoli MG, Benassi M, Querzani P, Lotti EM, Iachini T, Frassinetti F. The lost ability to distinguish between self and other voice following a brain lesion. Neuroimage Clin 2018; 18:903-911. [PMID: 29876275] [PMCID: PMC5988014] [DOI: 10.1016/j.nicl.2018.03.021]
Abstract
Mechanisms underlying the self/other distinction have mainly been investigated with visual, tactile or proprioceptive cues, whereas very little is known about the contribution of acoustic information. Here, the ability to distinguish between one's own and others' voices was investigated using a neuropsychological approach. Right brain-damaged (RBD) and left brain-damaged (LBD) patients and healthy controls completed a voice discrimination task and a voice recognition task. Stimuli were paired words/pseudowords pronounced by the participant, by a familiar person or by an unfamiliar person. In the voice discrimination task, participants judged whether two voices were the same or different, whereas in the voice recognition task participants judged whether their own voice was present. Crucially, differences between patient groups were found. In the discrimination task, only RBD patients were selectively impaired when their own voice was present. By contrast, in the recognition task, both RBD and LBD patients were impaired and showed two different biases: RBD patients misattributed the other's voice to themselves, while LBD patients denied ownership of their own voice. Thus, two kinds of bias can affect self-voice recognition: we can refuse self-stimuli (voice disownership), or we can misidentify others' stimuli as our own (embodiment of others' voice). Overall, these findings reflect distinct impairments in the self/other distinction at both the behavioral and anatomical levels, with the right hemisphere involved in voice discrimination and both hemispheres in explicit recognition of voice identity. The finding of selective brain networks dedicated to processing one's own voice demonstrates the relevance of self-related acoustic information in bodily self-representation.
Affiliation(s)
- M Candini
- Department of Psychology, University of Bologna, Viale Berti Pichat 5, 40127 Bologna, Italy
- S Avanzi
- Maugeri Clinical Scientific Institutes - IRCCS of Castel Goffredo, Via Ospedale 36, 46042 Castel Goffredo, Mantova, Italy
- A Cantagallo
- BrainCare Clinic Center, Via Fornace Morandi 24, 35133 Padova, Italy; Sol et Salus Hospital, Viale San Salvador 204, 47922 Torre Pedrera, Rimini, Italy
- M G Zangoli
- BrainCare Clinic Center, Via Fornace Morandi 24, 35133 Padova, Italy
- M Benassi
- Department of Psychology, University of Bologna, Viale Berti Pichat 5, 40127 Bologna, Italy
- P Querzani
- Neurological Unit, Santa Maria delle Croci Hospital Ausl della Romagna, Viale Randi 5, 48121 Ravenna, Italy
- E M Lotti
- Neurological Unit, Santa Maria delle Croci Hospital Ausl della Romagna, Viale Randi 5, 48121 Ravenna, Italy
- T Iachini
- Department of Psychology, Laboratory of Cognitive Science and Immersive Virtual Reality, University of Campania "L. Vanvitelli", Viale Ellittico 31, 81100 Caserta, Italy
- F Frassinetti
- Department of Psychology, University of Bologna, Viale Berti Pichat 5, 40127 Bologna, Italy; Maugeri Clinical Scientific Institutes - IRCCS of Castel Goffredo, Via Ospedale 36, 46042 Castel Goffredo, Mantova, Italy
16
Lei Y, Dou H, Liu Q, Zhang W, Zhang Z, Li H. Automatic Processing of Emotional Words in the Absence of Awareness: The Critical Role of P2. Front Psychol 2017; 8:592. [PMID: 28473785] [PMCID: PMC5397533] [DOI: 10.3389/fpsyg.2017.00592]
Abstract
It has long been debated to what extent emotional words can be processed in the absence of awareness. Behavioral studies have shown that the meaning of emotional words can be accessed even without awareness. However, functional magnetic resonance imaging studies have revealed that unconsciously presented emotional words do not activate the brain regions involved in semantic or emotional processing. To clarify this point, we used continuous flash suppression (CFS) and event-related potential (ERP) techniques to distinguish between semantic and emotional processing. In CFS, Mondrian-style images are flashed in rapid succession to one eye, suppressing awareness of the stimuli presented to the other eye. Negative, neutral, and scrambled words were presented to 16 healthy participants for 500 ms. Whenever participants saw the stimuli, in both the visible and invisible conditions, they pressed specific keyboard buttons. Behavioral data revealed no difference in reaction time between negative and neutral words in the invisible condition, although negative words were processed faster than neutral words in the visible condition. The ERP results showed that negative words elicited a larger P2 amplitude in the invisible condition than in the visible condition. The P2 component was enhanced for neutral words compared with scrambled words in the visible condition; however, scrambled words elicited larger P2 amplitudes than neutral words in the invisible condition. These results suggest that the emotional processing of words is more sensitive than semantic processing under conscious conditions, and that semantic processing is attenuated in the absence of awareness. Our findings indicate that P2 plays an important role in the unconscious processing of emotional words, suggesting that emotional processing may be automatic and prioritized over semantic processing in the absence of awareness.
Affiliation(s)
- Yi Lei
- College of Psychology and Sociology, Shenzhen University, Shenzhen, China
- Haoran Dou
- College of Psychology and Sociology, Shenzhen University, Shenzhen, China; Research Center for Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Qingming Liu
- School of Psychology, Nanjing Normal University, Nanjing, China
- Wenhai Zhang
- Research Center for Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; College of Education Science, Chengdu University, Chengdu, China
- Zhonglu Zhang
- Research Center for Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Hong Li
- College of Psychology and Sociology, Shenzhen University, Shenzhen, China; Research Center for Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; College of Education Science, Chengdu University, Chengdu, China
17
Justen C, Herbert C. Snap Your Fingers! An ERP/sLORETA Study Investigating Implicit Processing of Self- vs. Other-Related Movement Sounds Using the Passive Oddball Paradigm. Front Hum Neurosci 2016; 10:465. [PMID: 27777557] [PMCID: PMC5056175] [DOI: 10.3389/fnhum.2016.00465]
Abstract
So far, neurophysiological studies have investigated implicit and explicit self-related processing primarily with self-related stimuli such as one's own face or name. The present study extends previous research to the implicit processing of self-related movement sounds and explores their spatio-temporal dynamics. Event-related potentials (ERPs) were assessed while participants (N = 12 healthy subjects) listened passively to previously recorded self- and other-related finger-snapping sounds, presented either as deviants or standards during an oddball paradigm. Passive listening to low (500 Hz) and high (1000 Hz) pure tones served as an additional control. For self- vs. other-related finger-snapping sounds, analysis of ERPs revealed significant differences in the time windows of the N2a/mismatch negativity (MMN) and P3. A subsequent source localization analysis with standardized low-resolution brain electromagnetic tomography (sLORETA) revealed increased cortical activation in distinct motor areas, such as the supplementary motor area (SMA), in both the N2a/MMN and P3 time windows during processing of self- and other-related finger-snapping sounds. In contrast, brain regions associated with self-related processing [e.g., the right anterior/posterior cingulate cortex (ACC/PCC)] as well as the right inferior parietal lobule (IPL) showed increased activation particularly during processing of self- vs. other-related finger-snapping sounds in the time windows of the N2a/MMN (ACC/PCC) or the P3 (IPL). None of these brain regions showed enhanced activation during passive listening to the low (500 Hz) and high (1000 Hz) pure tones.
Taken together, the current results indicate (1) a specific role of motor regions such as the SMA during auditory processing of movement-related information, regardless of whether this information is self- or other-related, (2) activation of neural sources such as the ACC/PCC and the IPL during implicit processing of self-related movement stimuli, and (3) their differential temporal activation during deviance detection (N2a/MMN, ACC/PCC) and target detection (P3, IPL) of self- vs. other-related movement sounds.
Affiliation(s)
- Christoph Justen
- University of Tübingen, Tübingen, Germany
- Applied Emotion and Motivation Research, Institute of Psychology and Education, University of Ulm, Ulm, Germany
- Cornelia Herbert
- Applied Emotion and Motivation Research, Institute of Psychology and Education, University of Ulm, Ulm, Germany
18
Pinheiro AP, Rezaii N, Nestor PG, Rauber A, Spencer KM, Niznikiewicz M. Did you or I say pretty, rude or brief? An ERP study of the effects of speaker's identity on emotional word processing. Brain Lang 2016; 153-154:38-49. [PMID: 26894680] [DOI: 10.1016/j.bandl.2015.12.003]
Abstract
During speech comprehension, multiple cues need to be integrated at millisecond speed, including semantic information as well as voice identity and affect cues. A processing advantage has been demonstrated for self-related stimuli compared with non-self stimuli, and for emotional relative to neutral stimuli. However, very few studies have investigated self-other speech discrimination and, in particular, how emotional valence and voice identity interactively modulate speech processing. In the present study we probed how the processing of words' semantic valence is modulated by speaker identity (self vs. non-self voice). Sixteen healthy subjects listened to 420 prerecorded adjectives differing in voice identity (self vs. non-self) and semantic valence (neutral, positive and negative) while electroencephalographic data were recorded. Participants were instructed to decide whether the speech they heard was their own (self-speech condition), someone else's (non-self speech), or whether they were unsure. The ERP results demonstrated interactive effects of speaker identity and emotional valence at both early (N1, P2) and late (Late Positive Potential, LPP) processing stages: compared with non-self speech, self-speech with neutral valence elicited a more negative N1 amplitude, self-speech with positive valence elicited a more positive P2 amplitude, and self-speech with both positive and negative valence elicited a more positive LPP. ERP differences between self and non-self speech occurred despite similar recognition accuracy for both types of stimuli. Together, these findings suggest that emotion and speaker identity interact during speech processing, in line with observations of partially dependent processing of speech and speaker information.
Affiliation(s)
- Ana P Pinheiro
- Neuropsychophysiology Laboratory, Psychology Research Center (CIPsi), School of Psychology, University of Minho, Braga, Portugal; Clinical Neuroscience Division, Laboratory of Neuroscience, VA Boston Healthcare System-Brockton Division, Department of Psychiatry, Harvard Medical School, Brockton, MA, United States; Faculty of Psychology, University of Lisbon, Lisbon, Portugal
- Neguine Rezaii
- Clinical Neuroscience Division, Laboratory of Neuroscience, VA Boston Healthcare System-Brockton Division, Department of Psychiatry, Harvard Medical School, Brockton, MA, United States
- Paul G Nestor
- Clinical Neuroscience Division, Laboratory of Neuroscience, VA Boston Healthcare System-Brockton Division, Department of Psychiatry, Harvard Medical School, Brockton, MA, United States; Department of Psychology, University of Massachusetts, Boston, MA, United States
- Andréia Rauber
- International Studies in Computational Linguistics, University of Tübingen, Tübingen, Germany
- Kevin M Spencer
- Neural Dynamics Laboratory, Research Service, VA Boston Healthcare System, and Department of Psychiatry, Harvard Medical School, Boston, MA, United States
- Margaret Niznikiewicz
- Clinical Neuroscience Division, Laboratory of Neuroscience, VA Boston Healthcare System-Brockton Division, Department of Psychiatry, Harvard Medical School, Brockton, MA, United States
19
Diard-Detoeuf C, Desmidt T, Mondon K, Graux J. A case of Capgras syndrome with one's own reflected image in a mirror. Neurocase 2016; 22:168-9. [PMID: 26304673] [DOI: 10.1080/13554794.2015.1080847]
Abstract
We report the case of a 78-year-old patient admitted to the hospital for behavioral and psychological disorders consisting of the impression that a stranger was present behind the bathroom mirror, a stranger who strikingly shared the patient's appearance yet was considered a different person. We discuss how this case can be interpreted as an atypical Capgras syndrome for the patient's mirror image, and how it suggests an adjustment of the classical dual-route model of face recognition, which distinguishes covert (affective) and overt neural pathways.
Affiliation(s)
- Capucine Diard-Detoeuf
- Department of Geriatric Medicine and Memory Center, Université François Rabelais, CHRU de Tours, Tours, France
- Thomas Desmidt
- UMR 930 Imagerie et Cerveau, INSERM, Université François Rabelais de Tours, CHRU de Tours, Tours, France
- Karl Mondon
- Department of Geriatric Medicine and Memory Center, Université François Rabelais, CHRU de Tours, Tours, France; UMR 930 Imagerie et Cerveau, INSERM, Université François Rabelais de Tours, CHRU de Tours, Tours, France
- Jérôme Graux
- UMR 930 Imagerie et Cerveau, INSERM, Université François Rabelais de Tours, CHRU de Tours, Tours, France
20
Sel A, Harding R, Tsakiris M. Electrophysiological correlates of self-specific prediction errors in the human brain. Neuroimage 2015; 125:13-24. [PMID: 26455899] [DOI: 10.1016/j.neuroimage.2015.09.064]
Abstract
Recognising one's self vs. others is a key component of self-awareness, crucial for social interactions. Here we investigated whether processing self-face and self-body images can be explained by the brain's prediction of sensory events, based on regularities in the given context. We measured evoked cortical responses while participants observed alternating sequences of self-face or other-face images (experiment 1) and self-body or other-body images (experiment 2), which were embedded in an identity-irrelevant task. In experiment 1, the expected sequences were violated by deviant morphed images, which contained 33%, 66% or 100% of the self-face when the other's face was expected (and vice versa). In experiment 2, the anticipated sequences were violated by deviant images of the self when the other's image was expected (and vice versa), or by two deviant images composed of the self-face attached to the other's body, or the other's face attached to the self-body. This manipulation allowed control of the prediction error associated with the self or the other's image. Deviant self-images (but not deviant images of the other) elicited a visual mismatch response (vMMR), a cortical index of violations of regularity. This response was source-localised to face- and body-related visual, sensorimotor and limbic areas, and its amplitude was proportional to the amount of deviance from the self-image. We provide novel evidence that self-processing can be described by the brain's prediction error system, which accounts for self-bias in visual processing. These findings are discussed in the light of recent predictive coding models of self-processing.
Affiliation(s)
- Alejandra Sel
- Lab of Action & Body, Department of Psychology, Royal Holloway, University of London, Egham, Surrey TW20 0EX, UK
- Rachel Harding
- Lab of Action & Body, Department of Psychology, Royal Holloway, University of London, Egham, Surrey TW20 0EX, UK
- Manos Tsakiris
- Lab of Action & Body, Department of Psychology, Royal Holloway, University of London, Egham, Surrey TW20 0EX, UK
21
The effects of stimulus complexity on the preattentive processing of self-generated and nonself voices: An ERP study. Cogn Affect Behav Neurosci 2015; 16:106-23. [PMID: 26415897] [DOI: 10.3758/s13415-015-0376-1]
Abstract
The ability to differentiate one's own voice from the voice of somebody else plays a critical role in successful verbal self-monitoring processes and in communication. However, most of the existing studies have only focused on the sensory correlates of self-generated voice processing, whereas the effects of attentional demands and stimulus complexity on self-generated voice processing remain largely unknown. In this study, we investigated the effects of stimulus complexity on the preattentive processing of self and nonself voice stimuli. Event-related potentials (ERPs) were recorded from 17 healthy males who watched a silent movie while ignoring prerecorded self-generated (SGV) and nonself (NSV) voice stimuli, consisting of a vocalization (vocalization category condition: VCC) or of a disyllabic word (word category condition: WCC). All voice stimuli were presented as standard and deviant events in four distinct oddball sequences. The mismatch negativity (MMN) ERP component peaked earlier for NSV than for SGV stimuli. Moreover, when compared with SGV stimuli, the P3a amplitude was increased for NSV stimuli in the VCC only, whereas in the WCC no significant differences were found between the two voice types. These findings suggest differences in the time course of automatic detection of a change in voice identity. In addition, they suggest that stimulus complexity modulates the magnitude of the orienting response to SGV and NSV stimuli, extending previous findings on self-voice processing.
22
Conde T, Gonçalves ÓF, Pinheiro AP. Paying attention to my voice or yours: An ERP study with words. Biol Psychol 2015; 111:40-52. [PMID: 26234962] [DOI: 10.1016/j.biopsycho.2015.07.014]
Abstract
Self-related stimuli, such as one's own face or name, seem to be processed differently from non-self stimuli and to involve greater attentional resources, as indexed by a larger amplitude of the P3 event-related potential (ERP) component. Nonetheless, the differential processing of self-related vs. non-self information in voice stimuli is still poorly understood. The present study investigated the electrophysiological correlates of processing self-generated vs. non-self voice stimuli when they are in the focus of attention. ERP data were recorded from twenty right-handed healthy males during an oddball task comprising pre-recorded self-generated (SGV) and non-self (NSV) voice stimuli. Both voices were used as standard and deviant stimuli in distinct experimental blocks. SGV stimuli elicited a more negative N2 and a more positive P3 than NSV stimuli. No association was found between ERP data and the voices' acoustic properties. These findings demonstrate an attentional bias toward self-generated relative to non-self voice stimuli at both earlier and later processing stages. They suggest that the representation of one's own voice may have greater affective salience than an unfamiliar voice, confirming the modulatory role of salience on P3.
Affiliation(s)
- Tatiana Conde
- Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Óscar F Gonçalves
- Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Ana P Pinheiro
- Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal; Cognitive Neuroscience Lab, Department of Psychiatry, Harvard Medical School, Boston, MA, USA