1. Egan S, Seidel A, Weber C, Ghio M, Bellebaum C. Fifty Percent of the Time, Tones Come Every Time: Stronger Prediction Error Effects on Neurophysiological Sensory Attenuation for Self-generated Tones. J Cogn Neurosci 2024;36:2067-2083. [PMID: 39023362] [DOI: 10.1162/jocn_a_02226]
Abstract
The N1/P2 amplitude reduction for self-generated tones in comparison to external tones in EEG, which has recently also been described for action observation, is an example of so-called sensory attenuation. Whether this effect depends on motor-based or general predictive mechanisms is unclear. Using a paradigm in which actions (button presses) elicited tones in only half of the trials, this study examined how the processing of the tones is modulated by the trial-wise prediction error for self-performed actions compared with action observation. In addition, we considered the effect of temporal predictability by adding a third condition, in which visual cues were followed by external tones in half the trials. The attenuation result patterns differed for N1 and P2 amplitudes, but neither showed an attenuation effect beyond temporal predictability. Interestingly, we found that both N1 and P2 amplitudes reflected prediction errors derived from a reinforcement learning model, in that larger errors coincided with larger amplitudes. This effect was stronger for tones following button presses compared with cued external tones, but only for self-performed and not for observed actions. Taken together, our results suggest that attenuation effects are partially driven by general predictive mechanisms irrespective of self-performed actions. However, the stronger prediction-error effects for self-generated tones suggest that distinct motor-related factors beyond temporal predictability, potentially linked to reinforcement learning, play a role in the underlying mechanisms. Further research is needed to validate these initial findings, as the calculation of the prediction errors was limited by the design of the experiment.
Affiliation(s)
- Sophie Egan: Heinrich Heine University, Faculty of Mathematics and Natural Sciences, Düsseldorf
- Alexander Seidel: Heinrich Heine University, Faculty of Mathematics and Natural Sciences, Düsseldorf; MSH Medical School Hamburg
- Constanze Weber: Heinrich Heine University, Faculty of Mathematics and Natural Sciences, Düsseldorf
- Marta Ghio: Heinrich Heine University, Faculty of Mathematics and Natural Sciences, Düsseldorf
- Christian Bellebaum: Heinrich Heine University, Faculty of Mathematics and Natural Sciences, Düsseldorf
2. Duggirala SX, Schwartze M, Goller LK, Linden DEJ, Pinheiro AP, Kotz SA. Hallucination Proneness Alters Sensory Feedback Processing in Self-voice Production. Schizophr Bull 2024;50:1147-1158. [PMID: 38824450] [PMCID: PMC11349023] [DOI: 10.1093/schbul/sbae095]
Abstract
BACKGROUND Sensory suppression occurs when hearing one's self-generated voice, as opposed to passively listening to one's own voice. Quality changes in sensory feedback to the self-generated voice can increase attentional control. These changes affect the self-other voice distinction and might lead to hearing voices in the absence of an external source (ie, auditory verbal hallucinations). However, it is unclear how changes in sensory feedback processing and attention allocation interact and how this interaction might relate to hallucination proneness (HP). STUDY DESIGN During electroencephalography (EEG) recordings, participants varying in HP self-generated (via a button press) and passively listened to their own voice, which varied in emotional quality and certainty of recognition (100% neutral, 60%-40% neutral-angry, 50%-50% neutral-angry, 40%-60% neutral-angry, 100% angry). STUDY RESULTS The N1 auditory evoked potential was more suppressed for self-generated than externally generated voices. Increased HP was associated with (1) an increased N1 response to the self- compared with externally generated voices, (2) a reduced N1 response for angry compared with neutral voices, and (3) a reduced N2 response to unexpected voice quality in sensory feedback (60%-40% neutral-angry) compared with neutral voices. CONCLUSIONS The current study highlights an association between increased HP and systematic changes in the emotional quality and certainty in sensory feedback processing (N1) and attentional control (N2) in self-voice production in a nonclinical population. Considering that voice hearers also display these changes, these findings support the continuum hypothesis.
Affiliation(s)
- Suvarnalata Xanthate Duggirala: Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Department of Psychology, Faculty of Psychology, University of Lisbon, Lisbon, Portugal; Department of Psychiatry and Neuropsychology, School for Mental Health and Neuroscience, Faculty of Health and Medical Sciences, Maastricht University, Maastricht, Netherlands
- Michael Schwartze: Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Lisa K Goller: Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- David E J Linden: Department of Psychiatry and Neuropsychology, School for Mental Health and Neuroscience, Faculty of Health and Medical Sciences, Maastricht University, Maastricht, Netherlands; Maastricht University Medical Center, Maastricht, Netherlands
- Ana P Pinheiro: Department of Psychology, Faculty of Psychology, University of Lisbon, Lisbon, Portugal
- Sonja A Kotz: Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
3. Tast V, Schröger E, Widmann A. Suppression and omission effects in auditory predictive processing - Two of the same? Eur J Neurosci 2024;60:4049-4062. [PMID: 38764129] [DOI: 10.1111/ejn.16393]
Abstract
Recent theories describe perception as an inferential process based on internal predictive models that are adjusted by prediction violations (prediction error). Two different modulations of the auditory N1 event-related brain potential component are often discussed as an expression of auditory predictive processing. The sound-related N1 component is attenuated for self-generated sounds compared to the N1 elicited by externally generated sounds (N1 suppression). An omission-related component in the N1 time-range is elicited when the self-generated sounds are occasionally omitted (omission N1). Both phenomena were explained by action-related forward modelling, which takes place when the sensory input is predictable: prediction error signals are reduced when predicted sensory input is presented (N1 suppression) and elicited when predicted sensory input is omitted (omission N1). This common theoretical account is appealing but has not yet been directly tested. We manipulated the predictability of a sound in a self-generation paradigm in which, in two conditions, either 80% or 50% of the button presses did generate a sound, inducing a strong or a weak expectation for the occurrence of the sound. Consistent with the forward modelling account, an omission N1 was observed in the 80% but not in the 50% condition. However, N1 suppression was highly similar in both conditions. Thus, our results demonstrate a clear effect of predictability for the omission N1 but not for the N1 suppression. These results imply that the two phenomena rely (at least in part) on different mechanisms and challenge prediction related accounts of N1 suppression.
Affiliation(s)
- Valentina Tast: Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
- Erich Schröger: Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
- Andreas Widmann: Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
4. Gu J, Buidze T, Zhao K, Gläscher J, Fu X. The neural network of sensory attenuation: A neuroimaging meta-analysis. Psychon Bull Rev 2024. [PMID: 38954157] [DOI: 10.3758/s13423-024-02532-1]
Abstract
Sensory attenuation refers to the reduction in sensory intensity resulting from self-initiated actions compared to stimuli initiated externally. A classic example is scratching oneself without feeling itchy. This phenomenon extends across various sensory modalities, including visual, auditory, somatosensory, and nociceptive stimuli. The internal forward model proposes that during voluntary actions, an efferent copy of the action command is sent out to predict sensory feedback. This predicted sensory feedback is then compared with the actual sensory feedback, leading to the suppression or reduction of sensory stimuli originating from self-initiated actions. To further elucidate the neural mechanisms underlying the sensory attenuation effect, we conducted an extensive meta-analysis of functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) studies. Utilizing activation likelihood estimation (ALE) analysis, our results revealed significant activations in a prominent cluster encompassing the right superior temporal gyrus (rSTG), right middle temporal gyrus (rMTG), and right insula when comparing externally generated with self-generated conditions. Additionally, significant activation was observed in the right anterior cerebellum when comparing self-generated to externally generated conditions. Further analysis using meta-analytic connectivity modeling (MACM) unveiled distinct brain networks co-activated with the rMTG and right cerebellum, respectively. Based on these findings, we propose that sensory attenuation arises from the suppression of reflexive inputs elicited by self-initiated actions through the internal forward modeling of a cerebellum-centered action prediction network, enabling the "sensory conflict detection" regions to effectively discriminate between inputs resulting from self-induced actions and those originating externally.
Affiliation(s)
- Jingjin Gu: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of the Chinese Academy of Sciences, Beijing, 100049, China
- Tatia Buidze: Institute for Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, 20246, Germany
- Ke Zhao: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of the Chinese Academy of Sciences, Beijing, 100049, China
- Jan Gläscher: Institute for Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, 20246, Germany
- Xiaolan Fu: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of the Chinese Academy of Sciences, Beijing, 100049, China
5. Dercksen TT, Widmann A, Noesselt T, Wetzel N. Somatosensory omissions reveal action-related predictive processing. Hum Brain Mapp 2024;45:e26550. [PMID: 38050773] [PMCID: PMC10915725] [DOI: 10.1002/hbm.26550]
Abstract
The intricate relation between action and somatosensory perception has been studied extensively in the past decades. Generally, a forward model is thought to predict the somatosensory consequences of an action. These models propose that when an action is reliably coupled to a tactile stimulus, unexpected absence of the stimulus should elicit prediction error. Although such omission responses have been demonstrated in the auditory modality, it remains unknown whether this mechanism generalizes across modalities. This study therefore aimed to record action-induced somatosensory omission responses using EEG in humans. Self-paced button presses were coupled to somatosensory stimuli in 88% of trials, allowing a prediction, or in 50% of trials, not allowing a prediction. In the 88% condition, stimulus omission resulted in a neural response consisting of multiple components, as revealed by temporal principal component analysis. The oN1 response suggests similar sensory sources as stimulus-evoked activity, but an origin outside primary cortex. Subsequent oN2 and oP3 responses, as previously observed in the auditory domain, likely reflect modality-unspecific higher order processes. Together, findings straightforwardly demonstrate somatosensory predictions during action and provide evidence for a partially amodal mechanism of prediction error generation.
Affiliation(s)
- Tjerk T. Dercksen: Research Group Neurocognitive Development, Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany
- Andreas Widmann: Research Group Neurocognitive Development, Leibniz Institute for Neurobiology, Magdeburg, Germany; Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
- Tömme Noesselt: Center for Behavioral Brain Sciences, Magdeburg, Germany; Department of Biological Psychology, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
- Nicole Wetzel: Research Group Neurocognitive Development, Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany; University of Applied Sciences Magdeburg-Stendal, Stendal, Germany
6. Sturm S, Costa-Faidella J, SanMiguel I. Neural signatures of memory gain through active exploration in an oculomotor-auditory learning task. Psychophysiology 2023;60:e14337. [PMID: 37209002] [DOI: 10.1111/psyp.14337]
Abstract
Active engagement improves learning and memory, and self- versus externally generated stimuli are processed differently: perceptual intensity and neural responses are attenuated. Whether this attenuation is linked to memory formation remains unclear. This study investigates whether active oculomotor control over auditory stimuli (controlling for movement and stimulus predictability) benefits associative learning, and examines the underlying neural mechanisms. Using EEG and eye tracking, we explored the impact of control during learning on the processing and memory recall of arbitrary oculomotor-auditory associations. Participants (N = 23) learned associations through active exploration or passive observation, using a gaze-controlled interface to generate sounds. Our results show faster learning progress in the active condition. ERPs time-locked to the onset of sound stimuli showed that learning progress was linked to an attenuation of the P3a component. The detection of matching movement-sound pairs triggered a target-matching P3b. There was no general modulation of ERPs through active learning. However, we found continuous variation in the strength of the memory benefit across participants: some benefited more strongly from active control during learning than others. This was paralleled in the strength of the N1 attenuation effect for self-generated stimuli, which was correlated with memory gain in active learning. Our results show that control helps learning and memory and modulates sensory responses. Individual differences during sensory processing predict the strength of the memory benefit. Taken together, these results help to disentangle the effects of agency, unspecific motor-based neuromodulation, and predictability on ERP components and establish a link between self-generation effects and active learning memory gain.
Affiliation(s)
- Stefanie Sturm: Brainlab - Cognitive Neuroscience Research Group, Departament de Psicologia Clinica i Psicobiologia, Universitat de Barcelona, Barcelona, Spain; Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain
- Jordi Costa-Faidella: Brainlab - Cognitive Neuroscience Research Group, Departament de Psicologia Clinica i Psicobiologia, Universitat de Barcelona, Barcelona, Spain; Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain; Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Spain
- Iria SanMiguel: Brainlab - Cognitive Neuroscience Research Group, Departament de Psicologia Clinica i Psicobiologia, Universitat de Barcelona, Barcelona, Spain; Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain; Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Spain
7. Ma Y, Yu K, Yin S, Li L, Li P, Wang R. Attention Modulates the Role of Speakers' Voice Identity and Linguistic Information in Spoken Word Processing: Evidence From Event-Related Potentials. J Speech Lang Hear Res 2023;66:1678-1693. [PMID: 37071787] [DOI: 10.1044/2023_jslhr-22-00420]
Abstract
PURPOSE The human voice usually contains two types of information: linguistic and identity information. However, whether and how linguistic information interacts with identity information remains controversial. This study aimed to explore the processing of identity and linguistic information during spoken word processing by considering the modulation of attention. METHOD We conducted two event-related potentials (ERPs) experiments in the study. Different speakers (self, friend, and unfamiliar speakers) and emotional words (positive, negative, and neutral words) were used to manipulate the identity and linguistic information. With the manipulation, Experiment 1 explored the identity and linguistic information processing with a word decision task that requires participants' explicit attention to linguistic information. Experiment 2 further investigated the issue with a passive oddball paradigm that requires rare attention to either the identity or linguistic information. RESULTS Experiment 1 revealed an interaction among speaker, word type, and hemisphere in N400 amplitudes but not in N100 and P200, which suggests that identity information interacted with linguistic information at the later stage of spoken word processing. The mismatch negativity results of Experiment 2 showed no significant interaction between speaker and word pair, which indicates that identity and linguistic information were processed independently. CONCLUSIONS The identity information would interact with linguistic information during spoken word processing. However, the interaction was modulated by the task demands on attention involvement. We propose an attention-modulated explanation to explain the mechanism underlying identity and linguistic information processing. Implications of our findings are discussed in light of the integration and independence theories.
Affiliation(s)
- Yunxiao Ma: Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, Ministry of Education, & Center for Studies of Psychological Application, School of Psychology, South China Normal University, Guangzhou, China
- Keke Yu: Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, Ministry of Education, & Center for Studies of Psychological Application, School of Psychology, South China Normal University, Guangzhou, China
- Shuqi Yin: Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, Ministry of Education, & Center for Studies of Psychological Application, School of Psychology, South China Normal University, Guangzhou, China
- Li Li: The Key Laboratory of Chinese Learning and International Promotion, and College of International Culture, South China Normal University, Guangzhou, China
- Ping Li: Department of Chinese and Bilingual Studies, Faculty of Humanities, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Ruiming Wang: Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, Ministry of Education, & Center for Studies of Psychological Application, School of Psychology, South China Normal University, Guangzhou, China
8. Seidel A, Weber C, Ghio M, Bellebaum C. My view on your actions: Dynamic changes in viewpoint-dependent auditory ERP attenuation during action observation. Cogn Affect Behav Neurosci 2023. [PMID: 36949276] [PMCID: PMC10400693] [DOI: 10.3758/s13415-023-01083-7]
Abstract
It has been suggested that during action observation, a sensory representation of the observed action is mapped onto one's own motor system. However, it is largely unexplored what this may imply for the early processing of the action's sensory consequences, whether the observational viewpoint exerts influence on this and how such a modulatory effect might change over time. We tested whether the event-related potential of auditory effects of actions observed from a first- versus third-person perspective show amplitude reductions compared with externally generated sounds, as revealed for self-generated sounds. Multilevel modeling on trial-level data showed distinct dynamic patterns for the two viewpoints on reductions of the N1, P2, and N2 components. For both viewpoints, an N1 reduction for sounds generated by observed actions versus externally generated sounds was observed. However, only during first-person observation, we found a temporal dynamic within experimental runs (i.e., the N1 reduction only emerged with increasing trial number), indicating time-variant, viewpoint-dependent processes involved in sensorimotor prediction during action observation. For the P2, only a viewpoint-independent reduction was found for sounds elicited by observed actions, which disappeared in the second half of the experiment. The opposite pattern was found in an exploratory analysis concerning the N2, revealing a reduction that increased in the second half of the experiment, and, moreover, a temporal dynamic within experimental runs for the first-person perspective, possibly reflecting an agency-related process. Overall, these results suggested that the processing of auditory outcomes of observed actions is dynamically modulated by the viewpoint over time.
Affiliation(s)
- Alexander Seidel: Institute of Experimental Psychology, Department of Biological Psychology, Heinrich Heine University, Universitätstrasse 1, 40255 Düsseldorf, Germany
- Constanze Weber: Institute of Experimental Psychology, Department of Biological Psychology, Heinrich Heine University, Universitätstrasse 1, 40255 Düsseldorf, Germany
- Marta Ghio: Institute of Experimental Psychology, Department of Biological Psychology, Heinrich Heine University, Universitätstrasse 1, 40255 Düsseldorf, Germany
- Christian Bellebaum: Institute of Experimental Psychology, Department of Biological Psychology, Heinrich Heine University, Universitätstrasse 1, 40255 Düsseldorf, Germany
9. Repp M, Schumacher PB. What naturalistic stimuli tell us about pronoun resolution in real-time processing. Front Artif Intell 2023;6:1058554. [PMID: 37009201] [PMCID: PMC10060885] [DOI: 10.3389/frai.2023.1058554]
Abstract
Studies on pronoun resolution have mostly utilized short texts consisting of a context and a target sentence. In the current study we presented participants with nine chapters of an audio book while recording their EEG to investigate the real-time resolution of personal and demonstrative pronouns in a more naturalistic setting. The annotation of the features of the pronouns and their antecedents registered a surprising pattern: demonstrative pronouns showed an interpretive preference for subject/agent antecedents, although they are described to have an anti-subject or anti-agent preference. Given the presence of perspectival centers in the audio book, this however confirmed proposals that demonstrative pronouns are sensitive to perspectival centers. The ERP results revealed a biphasic N400–Late Positivity pattern at posterior electrodes for the demonstrative pronoun relative to the personal pronoun, thereby confirming previous findings with highly controlled stimuli. We take the observed N400 for the demonstrative pronoun as an indication for more demanding processing costs that occur due to the relative unexpectedness of this referential expression. The Late Positivity is taken to reflect the consequences of attentional reorientation: since the demonstrative pronoun indicates a possible shift in the discourse structure, it induces updating of the discourse structure. In addition to the biphasic pattern, the data showed an enhanced positivity at frontal electrode sites for the demonstrative pronoun relative to the personal pronoun. We suggest that this frontal positivity reflects self-relevant engagement and identification with the perspective holder. Our study suggests that by using naturalistic stimuli, we get one step closer to understanding the implementation of language processing in the brain during real life language processing.
10. Press C, Thomas ER, Yon D. Cancelling cancellation? Sensorimotor control, agency, and prediction. Neurosci Biobehav Rev 2023;145:105012. [PMID: 36565943] [DOI: 10.1016/j.neubiorev.2022.105012]
Abstract
For decades, classic theories of action control and action awareness have been built around the idea that the brain predictively 'cancels' expected action outcomes from perception. However, recent research casts doubt over this basic premise. What do these new findings mean for classic accounts of action? Should we now 'cancel' old data, theories and approaches generated under this idea? In this paper, we argue 'No'. While doubts about predictive cancellation may urge us to fundamentally rethink how predictions shape perception, the wider pyramid using these ideas to explain action control and agentic experiences can remain largely intact. Some adaptive functions assigned to predictive cancellation can be achieved through quasi-predictive processes, that influence perception without actively tracking the probabilistic structure of the environment. Other functions may rely upon truly predictive processes, but not require that these predictions cancel perception. Appreciating the role of these processes may help us to move forward in explaining how agents optimise their interactions with the external world, even if predictive cancellation is cancelled from theory.
Affiliation(s)
- Clare Press: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, UK; Wellcome Centre for Human Neuroimaging, UCL, 12 Queen Square, London WC1N 3AR, UK
- Emily R Thomas: Neuroscience Institute, New York University School of Medicine, 550 1st Ave, New York, NY 10016, USA
- Daniel Yon: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, UK
11. Karanikolaou M, Limanowski J, Northoff G. Does temporal irregularity drive prediction failure in schizophrenia? Temporal modelling of ERPs. Schizophrenia (Heidelb) 2022;8:23. [PMID: 35301329] [PMCID: PMC8931057] [DOI: 10.1038/s41537-022-00239-7]
Abstract
Schizophrenia subjects often suffer from a failure to properly predict incoming inputs; most notably, some patients exhibit impaired prediction of the sensory consequences of their own actions. The mechanisms underlying this deficit remain unclear, though. One possible mechanism could consist in aberrant predictive processing, as schizophrenic patients show relatively less attenuated neuronal activity to self-produced tones than healthy controls. Here, we tested the hypothesis that this aberrant predictive mechanism would manifest itself in the temporal irregularity of neuronal signals. For that purpose, we introduce an event-related potential (ERP) study model analysis that consists of an EEG real-time model equation, eeg(t), and a frequency-domain Laplace-transformed transfer function (TF) equation, eeg(s). Combining circuit analysis with control and cable theory, we estimate the temporal model representations of auditory ERPs to reveal neural mechanisms that make predictions about self-generated sensations. We use data from 49 schizophrenic patients (SZ) and 32 healthy control (HC) subjects in an auditory 'prediction' paradigm, in which subjects either pressed a button to deliver a tone (epoch a) or just heard the tone without a button press (epoch b). Our results show significantly higher degrees of temporal irregularity or imprecision between different trials of the ERP from the Cz electrode (N100, P200) in SZ compared to HC (Levene's test, p < 0.0001), as indexed by altered latency, lower similarity/correlation of single-trial time courses (using dynamic time warping), and longer settling times to reach steady state in the intertrial interval. Using machine learning, SZ vs HC could be classified with high accuracy (92%) based on the temporal parameters of their ERPs' TF models, using the poles of the TF rational functions as features. Together, our findings show temporal irregularity or imprecision between single trials to be abnormally increased in SZ. This may indicate a general impairment in SZ related to precisely predicting the sensory consequences of one's actions.
|
12
|
Loyola-Navarro R, Moënne-Loccoz C, Vergara RC, Hyafil A, Aboitiz F, Maldonado PE. Voluntary self-initiation of the stimuli onset improves working memory and accelerates visual and attentional processing. Heliyon 2022; 8:e12215. [PMID: 36578387 PMCID: PMC9791366 DOI: 10.1016/j.heliyon.2022.e12215] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Revised: 08/25/2022] [Accepted: 11/30/2022] [Indexed: 12/14/2022] Open
Abstract
The ability of an organism to voluntarily control stimulus onset modulates perceptual and attentional functions. Since stimulus encoding is an essential component of working memory (WM), we conjectured that controlling the initiation of the perceptual process would positively modulate WM. To test this proposition, we assessed twenty-five healthy subjects in a modified Sternberg WM task under three stimulus presentation conditions: automatic presentation of the stimuli, self-initiated presentation of the stimuli (through a button press), and self-initiated presentation with random-delay stimulus onset. Concurrently, we recorded the subjects' electroencephalographic signals during WM encoding. We found that the self-initiated condition was associated with better WM accuracy and earlier latencies of the N1, P2 and P3 evoked potential components, representing visual, attentional and mental review of the stimuli processes, respectively. Our work demonstrates that self-initiated stimuli enhance WM performance and accelerate the early visual and attentional processes deployed during WM encoding. We also found that self-initiated stimuli correlate with an increased attentional state compared to the other two conditions, suggesting a role for temporal stimulus predictability. Our study highlights the relevance of self-controlled stimulus onset for sensory, attentional and memory-updating processing in WM.
Affiliation(s)
- Rocio Loyola-Navarro
- Departamento de Neurociencia, Universidad de Chile, Santiago, Chile
- Biomedical Neuroscience Institute (BNI), Santiago, Chile
- Departamento de Educación Diferencial, Universidad Metropolitana de Ciencias de la Educación, Santiago, Chile
- Center for Advanced Research in Education, Institute of Education, Universidad de Chile, Santiago, Chile
- Cristóbal Moënne-Loccoz
- Departamento de Ciencias de la Salud, Pontificia Universidad Católica de Chile, Santiago, Chile
- Centro Nacional de Inteligencia Artificial (CENIA), Santiago, Chile
- Rodrigo C. Vergara
- Departamento de Kinesiología, Universidad Metropolitana de Ciencias de la Educación, Santiago, Chile
- Centro Nacional de Inteligencia Artificial (CENIA), Santiago, Chile
- Centro de Investigación en Educación, Universidad Metropolitana de Ciencias de la Educación (CIE-UMCE), Santiago, Chile
- Francisco Aboitiz
- Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
- Pedro E. Maldonado
- Departamento de Neurociencia, Universidad de Chile, Santiago, Chile
- Biomedical Neuroscience Institute (BNI), Santiago, Chile
- Centro Nacional de Inteligencia Artificial (CENIA), Santiago, Chile
|
13
|
Widmann A, Schröger E. Intention-based predictive information modulates auditory deviance processing. Front Neurosci 2022; 16:995119. [PMID: 36248631 PMCID: PMC9554204 DOI: 10.3389/fnins.2022.995119] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2022] [Accepted: 09/08/2022] [Indexed: 11/26/2022] Open
Abstract
The human brain is highly responsive to (deviant) sounds violating an auditory regularity. Respective brain responses are usually investigated in situations when the sounds were produced by the experimenter. Acknowledging that humans also actively produce sounds, the present event-related potential study tested for differences in the brain responses to deviants that were produced by the listeners by pressing one of two buttons. In one condition, deviants were unpredictable with respect to the button-sound association. In another condition, deviants were predictable with high validity yielding correctly predicted deviants and incorrectly predicted (mispredicted) deviants. Temporal principal component analysis revealed deviant-specific N1 enhancement, mismatch negativity (MMN) and P3a. N1 enhancements were highly similar for each deviant type, indicating that the underlying neural mechanism is not affected by intention-based expectation about the self-produced forthcoming sound. The MMN was abolished for predictable deviants, suggesting that the intention-based prediction for a deviant can overwrite the prediction derived from the auditory regularity (predicting a standard). The P3a was present for each deviant type but was largest for mispredicted deviants. It is argued that the processes underlying P3a not only evaluate the deviant with respect to the fact that it violates an auditory regularity but also with respect to the intended sensorial effect of an action. Overall, our results specify current theories of auditory predictive processing, as they reveal that intention-based predictions exert different effects on different deviance-specific brain responses.
Affiliation(s)
- Andreas Widmann
- Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
- Leibniz Institute for Neurobiology, Magdeburg, Germany
- Erich Schröger
- Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
|
14
|
Paraskevoudi N, SanMiguel I. Sensory suppression and increased neuromodulation during actions disrupt memory encoding of unpredictable self-initiated stimuli. Psychophysiology 2022; 60:e14156. [PMID: 35918912 PMCID: PMC10078310 DOI: 10.1111/psyp.14156] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2021] [Revised: 04/06/2022] [Accepted: 07/01/2022] [Indexed: 11/26/2022]
Abstract
Actions modulate sensory processing by attenuating responses to self-generated compared to externally generated inputs, which is traditionally attributed to stimulus-specific motor predictions. Yet suppression has also been found for stimuli merely coinciding with actions, pointing to unspecific processes that may be driven by neuromodulatory systems. Meanwhile, the differential processing of self-generated stimuli raises the possibility of effects on memory for these stimuli; however, evidence remains mixed as to the direction of these effects. Here, we assessed the effects of actions on sensory processing and memory encoding of concomitant but unpredictable sounds, using a combination of self-generation and memory recognition tasks concurrently with EEG and pupil recordings. At encoding, subjects performed button presses that half of the time generated a sound (motor-auditory; MA) and listened to passively presented sounds (auditory-only; A). At retrieval, two sounds were presented and participants had to indicate which one had been presented before. We measured memory bias and memory performance using sequences in which either both or only one of the test sounds had been presented at encoding, respectively. Results showed worse memory performance (but no differences in memory bias), attenuated responses, and larger pupil diameter for MA compared to A sounds. Critically, the larger the sensory attenuation and pupil diameter, the worse the memory performance for MA sounds. Nevertheless, sensory attenuation did not correlate with pupil dilation. Collectively, our findings suggest that sensory attenuation and neuromodulatory processes coexist during actions, and both relate to disrupted memory for concurrent, albeit unpredictable, sounds.
Affiliation(s)
- Nadia Paraskevoudi
- Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain
- Brainlab-Cognitive Neuroscience Research Group, Departament de Psicologia Clinica i Psicobiologia, University of Barcelona, Barcelona, Spain
- Iria SanMiguel
- Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain
- Brainlab-Cognitive Neuroscience Research Group, Departament de Psicologia Clinica i Psicobiologia, University of Barcelona, Barcelona, Spain
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Spain
|
15
|
Chen J, Huang X, Wang X, Zhang X, Liu S, Ma J, Huang Y, Tang A, Wu W. Visually Perceived Negative Emotion Enhances Mismatch Negativity but Fails to Compensate for Age-Related Impairments. Front Hum Neurosci 2022; 16:903797. [PMID: 35832873 PMCID: PMC9271563 DOI: 10.3389/fnhum.2022.903797] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2022] [Accepted: 05/31/2022] [Indexed: 11/13/2022] Open
Abstract
Objective: Automatic detection of auditory stimuli, represented by the mismatch negativity (MMN), facilitates rapid processing of salient stimuli in the environment. The amplitude of the MMN declines with ageing. However, whether automatic detection of auditory stimuli is affected by visually perceived negative emotions in normal ageing remains unclear. We aimed to evaluate how fearful facial expressions affect the MMN amplitude under ageing. Methods: We used a modified oddball paradigm to analyze the amplitudes of the N100 (N1) and MMN in 22 young adults and 21 middle-aged adults. Results: We found that the amplitude of the N1 elicited by standard tones was smaller under fearful than under neutral facial expressions, and was more negative for young adults than for middle-aged adults. The MMN amplitude was greater under fearful than under neutral facial expressions, but the amplitude in middle-aged adults was smaller than in young adults. Conclusion: Visually perceived negative emotion promotes the extraction of auditory features. Additionally, it enhances auditory change detection in middle-aged adults but fails to compensate for its decline with normal ageing. Significance: The study may help to understand how visually perceived emotion affects the early stage of auditory information processing from an event-process perspective.
Affiliation(s)
- Jiali Chen
- Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Xiaomin Huang
- Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Xianglong Wang
- Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Xuefei Zhang
- Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Sishi Liu
- Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Junqin Ma
- Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Yuanqiu Huang
- Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Province Work Injury Rehabilitation Hospital, Guangzhou, China
- Anli Tang
- Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Wen Wu
- Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- *Correspondence: Wen Wu
|
16
|
Han N, Jack BN, Hughes G, Whitford TJ. The Role of Action-Effect Contingency on Sensory Attenuation in the Absence of Movement. J Cogn Neurosci 2022; 34:1488-1499. [PMID: 35579993 DOI: 10.1162/jocn_a_01867] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Stimuli that have been generated by a person's own willed motor actions generally elicit a suppressed electrophysiological, as well as phenomenological, response compared to identical stimuli that have been externally generated. This well-studied phenomenon, known as sensory attenuation, has mostly been studied by comparing ERPs evoked by self-initiated and externally generated sounds. However, most studies have assumed a uniform action-effect contingency, in which a motor action leads to a resulting sensation 100% of the time. In this study, we investigated the effect of manipulating the probability of action-effect contingencies on the sensory attenuation effect. In Experiment 1, participants watched a moving, marked tickertape while EEG was recorded. In the full-contingency (FC) condition, participants chose whether to press a button by a certain mark on the tickertape. If a button press had not occurred by the mark, a sound would be played one second later 100% of the time; if the button was pressed before the mark, the sound was not played. In the no-contingency (NC) condition, participants observed the same tickertape; in contrast, however, if participants did not press the button by the mark, a sound would occur only 50% of the time (NC-inaction). Furthermore, in the NC condition, if a participant pressed the button before the mark, a sound would also play 50% of the time (NC-action). In Experiment 2, the design was identical, except that a willed action (as opposed to a willed inaction) triggered the sound in the FC condition. The results were consistent across the two experiments: although there were no differences in N1 amplitude between conditions, the amplitudes of the Tb and P2 components were smaller in the FC condition compared with the NC-inaction condition, and the amplitude of the P2 component was also smaller in the FC condition compared with the NC-action condition. The results suggest that the effect of contingency on electrophysiological indices of sensory attenuation may be indexed primarily by the Tb and P2 components, rather than by the N1 component, which is most commonly studied.
|
17
|
Jack BN, Chilver MR, Vickery RM, Birznieks I, Krstanoska-Blazeska K, Whitford TJ, Griffiths O. Movement Planning Determines Sensory Suppression: An Event-related Potential Study. J Cogn Neurosci 2021; 33:2427-2439. [PMID: 34424986 DOI: 10.1162/jocn_a_01747] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Sensory suppression refers to the phenomenon that sensory input generated by our own actions, such as moving a finger to press a button to hear a tone, elicits smaller neural responses than sensory input generated by external agents. This observation is usually explained via the internal forward model in which an efference copy of the motor command is used to compute a corollary discharge, which acts to suppress sensory input. However, because moving a finger to press a button is accompanied by neural processes involved in preparing and performing the action, it is unclear whether sensory suppression is the result of movement planning, movement execution, or both. To investigate this, in two experiments, we compared ERPs to self-generated tones that were produced by voluntary, semivoluntary, or involuntary button-presses, with externally generated tones that were produced by a computer. In Experiment 1, the semivoluntary and involuntary button-presses were initiated by the participant or experimenter, respectively, by electrically stimulating the median nerve in the participant's forearm, and in Experiment 2, by applying manual force to the participant's finger. We found that tones produced by voluntary button-presses elicited a smaller N1 component of the ERP than externally generated tones. This is known as N1-suppression. However, tones produced by semivoluntary and involuntary button-presses did not yield significant N1-suppression. We also found that the magnitude of N1-suppression linearly decreased across the voluntary, semivoluntary, and involuntary conditions. These results suggest that movement planning is a necessary condition for producing sensory suppression. We conclude that the most parsimonious account of sensory suppression is the internal forward model.
Affiliation(s)
- Bradley N Jack
- University of New South Wales Sydney, Australia
- Australian National University, Canberra
- Miranda R Chilver
- University of New South Wales Sydney, Australia
- Neuroscience Research Australia, Sydney
- Richard M Vickery
- University of New South Wales Sydney, Australia
- Neuroscience Research Australia, Sydney
- Ingvars Birznieks
- University of New South Wales Sydney, Australia
- Neuroscience Research Australia, Sydney
- Oren Griffiths
- University of New South Wales Sydney, Australia
- Flinders University, Adelaide, Australia
|
18
|
Sugimoto F, Kimura M, Takeda Y. Attenuation of auditory N2 for self-modulated tones during continuous actions. Biol Psychol 2021; 166:108201. [PMID: 34653547 DOI: 10.1016/j.biopsycho.2021.108201] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2021] [Revised: 10/01/2021] [Accepted: 10/04/2021] [Indexed: 11/19/2022]
Abstract
Event-related potentials (ERPs) elicited by tones generated by one's own discrete actions (e.g., button presses) are attenuated compared to those elicited by externally generated tones. The present study investigated whether ERP attenuation would also occur when the timing or pitch of tones is modulated by continuous actions, for which only a weak association between actions and their auditory consequences is assumed. In a modulation condition, participants modulated the time interval between tones (Experiment 1) or the pitch of tones (Experiment 2) by turning a steering wheel. In a listening condition, participants listened to the same tones as in the modulation condition without performing any action. The results revealed that the amplitude of the N2 elicited by tones was smaller in the modulation condition than in the listening condition, consistently across the two experiments, suggesting that relatively higher-order auditory processing is mainly influenced by the prediction of action consequences when continuous actions modulate features of auditory stimuli.
Affiliation(s)
- Fumie Sugimoto
- Human-Centered Mobility Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Japan
- Motohiro Kimura
- Human-Centered Mobility Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Japan
- Yuji Takeda
- Human-Centered Mobility Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Japan
|
19
|
Darriba Á, Hsu YF, Van Ommen S, Waszak F. Intention-based and sensory-based predictions. Sci Rep 2021; 11:19899. [PMID: 34615990 PMCID: PMC8494815 DOI: 10.1038/s41598-021-99445-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2021] [Accepted: 09/23/2021] [Indexed: 02/08/2023] Open
Abstract
We inhabit a continuously changing world, where the ability to anticipate future states of the environment is critical for adaptation. Anticipation can be achieved by learning about the causal or temporal relationship between sensory events, as well as by learning to act on the environment to produce an intended effect. Together, sensory-based and intention-based predictions provide the flexibility needed to successfully adapt. Yet it is currently unknown whether the two sources of information are processed independently to form separate predictions, or are combined into a common prediction. To investigate this, we ran an experiment in which the final tone of two possible four-tone sequences could be predicted from the preceding tones in the sequence and/or from the participants' intention to trigger that final tone. This tone could be congruent with both sensory-based and intention-based predictions, incongruent with both, or congruent with one while incongruent with the other. Trials where predictions were incongruent with each other yielded similar prediction error responses irrespective of the violated prediction, indicating that both predictions were formulated and coexisted simultaneously. The violation of intention-based predictions yielded additional late error responses, suggesting that these violations underwent further differential processing that the violations of sensory-based predictions did not receive.
Affiliation(s)
- Álvaro Darriba
- Université de Paris, INCC UMR 8002, CNRS, F-75006, Paris, France
- Yi-Fang Hsu
- Department of Educational Psychology and Counselling, National Taiwan Normal University, 10610, Taipei, Taiwan
- Institute for Research Excellence in Learning Sciences, National Taiwan Normal University, 10610, Taipei, Taiwan
- Sandrien Van Ommen
- Department of Basic Neurosciences, University of Geneva, Biotech Campus, Geneva, Switzerland
- Florian Waszak
- Université de Paris, INCC UMR 8002, CNRS, F-75006, Paris, France
|
20
|
Ford JM, Roach BJ, Mathalon DH. Vocalizing and singing reveal complex patterns of corollary discharge function in schizophrenia. Int J Psychophysiol 2021; 164:30-40. [PMID: 33621618 DOI: 10.1016/j.ijpsycho.2021.02.013] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2020] [Revised: 01/30/2021] [Accepted: 02/16/2021] [Indexed: 10/22/2022]
Abstract
INTRODUCTION As we vocalize, our brains generate predictions of the sounds we produce to enable suppression of neural responses when intentions match vocalizations and to make adjustments when they do not. This may be instantiated by efference copy and corollary discharge mechanisms, which are impaired in people with schizophrenia (SZ). Although innate, these mechanisms can be affected by intentions. We asked if attending to pitch during vocalizations would take these mechanisms "off-line" and reduce suppression. METHODS Event-related potentials (ERP) were recorded from 96 SZ and 92 healthy controls (HC) as they vocalized triplets in monotone (Phrase) or sang triplets in ascending thirds (Pitch). Pre-vocalization activity (Bereitschaftspotential, BP), N1, and P2 ERP components to sounds were compared during vocalization and playback. RESULTS N1 was not as suppressed during Pitch as during Phrase. N1 suppression was not affected by SZ in either task when all data were collapsed across pitches (Pitch) and positions (Phrase). However, when binned according to vocalization performance, SZ showed less N1 suppression than HC at longer (>2 s) inter-stimulus intervals (Phrase) and inconsistent suppression across pitches (Pitch). Unlike N1, P2 was more suppressed during Pitch than Phrase and not affected by SZ. BP was greater during vocalization than playback but did not contribute to N1 or P2 effects. Pitch variability was inversely related to negative symptoms. CONCLUSIONS Neural processing is not suppressed when patients and controls sing, and corollary discharge abnormalities in schizophrenia are only seen at long vocalization intervals.
Affiliation(s)
- Judith M Ford
- University of California, San Francisco (UCSF), United States of America
- Veterans Affairs San Francisco Healthcare System, United States of America
- Brian J Roach
- Veterans Affairs San Francisco Healthcare System, United States of America
- Daniel H Mathalon
- University of California, San Francisco (UCSF), United States of America
- Veterans Affairs San Francisco Healthcare System, United States of America
|
21
|
Johnson JF, Belyk M, Schwartze M, Pinheiro AP, Kotz SA. Expectancy changes the self-monitoring of voice identity. Eur J Neurosci 2021; 53:2681-2695. [PMID: 33638190 PMCID: PMC8252045 DOI: 10.1111/ejn.15162] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Revised: 01/18/2021] [Accepted: 02/20/2021] [Indexed: 12/02/2022]
Abstract
Self-voice attribution can become difficult when voice characteristics are ambiguous, but functional magnetic resonance imaging (fMRI) investigations of such ambiguity are sparse. We utilized voice-morphing (self-other) to manipulate (un-)certainty in self-voice attribution in a button-press paradigm. This allowed us to investigate how levels of self-voice certainty alter activation in brain regions monitoring voice identity and unexpected changes in voice playback quality. fMRI results confirmed a self-voice suppression effect in the right anterior superior temporal gyrus (aSTG) when self-voice attribution was unambiguous. Although the right inferior frontal gyrus (IFG) was more active during a self-generated compared to a passively heard voice, the putative role of this region in detecting unexpected self-voice changes during the action was demonstrated only when hearing the voice of another speaker and not when attribution was uncertain. Further research on the link between the right aSTG and IFG is required and may establish a threshold for monitoring voice identity in action. The current results have implications for a better understanding of the altered experience of self-voice feedback in auditory verbal hallucinations.
Affiliation(s)
- Joseph F Johnson
- Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, the Netherlands
- Michel Belyk
- Division of Psychology and Language Sciences, University College London, London, UK
- Michael Schwartze
- Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, the Netherlands
- Ana P Pinheiro
- Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal
- Sonja A Kotz
- Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, the Netherlands
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
|
22
|
van Laarhoven T, Stekelenburg JJ, Vroomen J. Suppression of the auditory N1 by visual anticipatory motion is modulated by temporal and identity predictability. Psychophysiology 2020; 58:e13749. [PMID: 33355930 PMCID: PMC7900976 DOI: 10.1111/psyp.13749] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2020] [Revised: 10/25/2020] [Accepted: 11/23/2020] [Indexed: 11/28/2022]
Abstract
The amplitude of the auditory N1 component of the event-related potential (ERP) is typically suppressed when a sound is accompanied by visual anticipatory information that reliably predicts the timing and identity of the sound. While this visually induced suppression of the auditory N1 is considered an early electrophysiological marker of fulfilled prediction, it is not yet fully understood whether this internal predictive coding mechanism is primarily driven by the temporal characteristics, or by the identity features of the anticipated sound. The current study examined the impact of temporal and identity predictability on suppression of the auditory N1 by visual anticipatory motion with an ecologically valid audiovisual event (a video of a handclap). Predictability of auditory timing and identity was manipulated in three different conditions in which sounds were either played in isolation, or in conjunction with a video that either reliably predicted the timing of the sound, the identity of the sound, or both the timing and identity. The results showed that N1 suppression was largest when the video reliably predicted both the timing and identity of the sound, and reduced when either the timing or identity of the sound was unpredictable. The current results indicate that predictions of timing and identity are both essential elements for predictive coding in audition.
Affiliation(s)
- Thijs van Laarhoven
- Department of Cognitive Neuropsychology, Tilburg University, Tilburg, The Netherlands
- Jeroen J Stekelenburg
- Department of Cognitive Neuropsychology, Tilburg University, Tilburg, The Netherlands
- Jean Vroomen
- Department of Cognitive Neuropsychology, Tilburg University, Tilburg, The Netherlands
|
23
|
Pinheiro AP, Schwartze M, Gutiérrez-Domínguez F, Kotz SA. Real and imagined sensory feedback have comparable effects on action anticipation. Cortex 2020; 130:290-301. [PMID: 32698087 DOI: 10.1016/j.cortex.2020.04.030] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2019] [Revised: 03/23/2020] [Accepted: 04/13/2020] [Indexed: 01/08/2023]
Abstract
The forward model monitors the success of sensory feedback to an action and links it to an efference copy originating in the motor system. The Readiness Potential (RP) of the electroencephalogram has been described as a neural signature of the efference copy. An open question is whether imagined sensory feedback works similarly to real sensory feedback. We investigated the RP to audible and imagined sounds in a button-press paradigm and assessed the role of sound complexity (vocal vs. non-vocal sound). Sensory feedback (both audible and imagined) in response to a voluntary action modulated the RP amplitude time-locked to the button press. The RP amplitude increase was larger for actions with expected sensory feedback (audible and imagined) than for actions without sensory feedback, and was associated with N1 suppression for audible sounds. Further, the early RP phase was increased when actions elicited an imagined vocal (self-voice) compared to a non-vocal sound. Our results support the notion that sensory feedback is anticipated before voluntary actions. This is the case for both audible and imagined sensory feedback and confirms a role of overt and covert feedback in the forward model.
Affiliation(s)
- Ana P Pinheiro
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal
- Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands
- Michael Schwartze
- Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands
- Sonja A Kotz
- Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands
|
24
|
Dercksen TT, Widmann A, Schröger E, Wetzel N. Omission related brain responses reflect specific and unspecific action-effect couplings. Neuroimage 2020; 215:116840. [PMID: 32289452 DOI: 10.1016/j.neuroimage.2020.116840] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2020] [Revised: 03/31/2020] [Accepted: 04/06/2020] [Indexed: 11/29/2022] Open
Abstract
When an auditory stimulus is predicted but unexpectedly omitted, an omission response can be observed in the EEG. This endogenous response to the absence of a stimulus demonstrates the important role of prediction in perception. SanMiguel et al. (2013a) showed that in order to observe an omission response, a specific prediction concerning the identity of an upcoming stimulus is necessary. They used button presses coupled to either a single sound (predictable identity), or a random sound (unpredictable identity). In the event-related potentials, a sequence of omission responses consisting of oN1, oN2, and oP3 was observed in the single condition but not in the random condition. Given the importance of omission studies to understand the role of prediction in perception, we replicated this study. We enhanced statistical power by doubling the sample size and adjusting data pre-processing, and applied temporal principal component analysis and replication Bayes statistics. Results in the single sound condition were successfully replicated. Principal component analysis additionally revealed attenuated oN1 and oP3 omission responses in the random sound condition. These results suggest the existence of both specific and unspecific predictions along the sound processing hierarchy, where precision weighting possibly influences the strength of prediction error. Results are discussed in the framework of predictive coding and are congruent with everyday life, where uncertainty often requires broader or more general predictions.
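The temporal principal component analysis this abstract applies to ERPs can be sketched on synthetic data, treating time points as variables and trials as observations. All waveform shapes and parameters below are illustrative stand-ins (loosely modeled on oN1- and oP3-like deflections), not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_times = 60, 250
t = np.linspace(0, 0.5, n_times)

# Two latent temporal components standing in for an early negative and a
# later positive omission-response deflection
c1 = -np.exp(-((t - 0.1) ** 2) / 0.0008)   # early negative deflection
c2 = np.exp(-((t - 0.3) ** 2) / 0.003)     # later positive deflection

# Each trial mixes the two components with variable amplitude, plus noise
epochs = (rng.normal(1, 0.2, (n_trials, 1)) * c1
          + rng.normal(1, 0.2, (n_trials, 1)) * c2
          + rng.normal(0, 0.02, (n_trials, n_times)))

# Temporal PCA via SVD: center each time point across trials, then decompose
X = epochs - epochs.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)  # rows of Vt = temporal components
explained = s ** 2 / np.sum(s ** 2)

# The two simulated deflections should dominate the trial-to-trial variance
assert explained[:2].sum() > 0.8
```

Separating overlapping deflections into such temporal components is what lets component-wise amplitudes (here, columns of `U * s`) be compared between conditions even when the raw waveforms overlap in time.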
Affiliation(s)
- Tjerk T Dercksen
- Leibniz Institute for Neurobiology, Brenneckestraße 6, 39118, Magdeburg, Germany; Center for Behavioral Brain Sciences, Universitätsplatz 2, D-39106, Magdeburg, Germany.
- Andreas Widmann
- Leibniz Institute for Neurobiology, Brenneckestraße 6, 39118, Magdeburg, Germany; Leipzig University, Neumarkt 9-19, D-04109, Leipzig, Germany.
- Erich Schröger
- Leipzig University, Neumarkt 9-19, D-04109, Leipzig, Germany.
- Nicole Wetzel
- Leibniz Institute for Neurobiology, Brenneckestraße 6, 39118, Magdeburg, Germany; Center for Behavioral Brain Sciences, Universitätsplatz 2, D-39106, Magdeburg, Germany; University of Applied Sciences Magdeburg-Stendal, Osterburgerstraße 25, 39576, Stendal, Germany.
25
Pinheiro AP, Schwartze M, Amorim M, Coentre R, Levy P, Kotz SA. Changes in motor preparation affect the sensory consequences of voice production in voice hearers. Neuropsychologia 2020; 146:107531. [PMID: 32553846 DOI: 10.1016/j.neuropsychologia.2020.107531]
Abstract
BACKGROUND Auditory verbal hallucinations (AVH) are a cardinal symptom of psychosis but are also present in 6-13% of the general population. Alterations in sensory feedback processing are a likely cause of AVH, indicative of changes in the forward model. However, it is unknown whether such alterations are related to anomalies in forming an efference copy during action preparation, selective for voices, and similar along the psychosis continuum. By directly comparing psychotic and nonclinical voice hearers (NCVH), the current study specifies whether and how AVH proneness modulates both the efference copy (Readiness Potential) and sensory feedback processing for voices and tones (N1, P2) with event-related brain potentials (ERPs). METHODS Controls with low AVH proneness (n = 15), NCVH (n = 16) and first-episode psychotic patients with AVH (n = 16) engaged in a button-press task with two types of stimuli: self-initiated and externally generated self-voices or tones during EEG recordings. RESULTS Groups differed in sensory feedback processing of expected and actual feedback: NCVH displayed an atypically enhanced N1 to self-initiated voices, while N1 suppression was reduced in psychotic patients. P2 suppression for voices and tones was strongest in NCVH, but absent for voices in patients. Motor activity preceding the button press was reduced in NCVH and patients, specifically for sensory feedback to self-voice in NCVH. CONCLUSIONS These findings suggest that selective changes in sensory feedback to voice are core to AVH. These changes already show in preparatory motor activity, potentially reflecting changes in forming an efference copy. The results provide partial support for continuum models of psychosis.
Affiliation(s)
- Ana P Pinheiro
- Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal.
- Michael Schwartze
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands.
- Maria Amorim
- Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal.
- Ricardo Coentre
- Serviço de Psiquiatria e Saúde Mental, Centro Hospitalar Universitário Lisboa Norte EPE, Lisboa, Portugal; Faculdade de Medicina, Universidade de Lisboa, Lisboa, Portugal.
- Pedro Levy
- Serviço de Psiquiatria e Saúde Mental, Centro Hospitalar Universitário Lisboa Norte EPE, Lisboa, Portugal.
- Sonja A Kotz
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands.
26
Pinheiro AP, Schwartze M, Gutierrez F, Kotz SA. When temporal prediction errs: ERP responses to delayed action-feedback onset. Neuropsychologia 2019; 134:107200. [DOI: 10.1016/j.neuropsychologia.2019.107200]
27
Knolle F, Schwartze M, Schröger E, Kotz SA. Auditory Predictions and Prediction Errors in Response to Self-Initiated Vowels. Front Neurosci 2019; 13:1146. [PMID: 31708737 PMCID: PMC6823252 DOI: 10.3389/fnins.2019.01146]
Abstract
It has been suggested that speech production is accomplished by an internal forward model, reducing processing activity directed to self-produced speech in the auditory cortex. The current study uses an established N1-suppression paradigm comparing self- and externally initiated natural speech sounds to answer two questions: (1) Are forward predictions generated to process complex speech sounds, such as vowels, initiated via a button press? (2) Are prediction errors regarding self-initiated deviant vowels reflected in the corresponding ERP components? Results confirm an N1-suppression in response to self-initiated speech sounds. Furthermore, our results suggest that predictions leading to the N1-suppression effect are specific, as self-initiated deviant vowels do not elicit an N1-suppression effect. Rather, self-initiated deviant vowels elicit an enhanced N2b and P3a compared to externally generated deviants, externally generated standard, or self-initiated standards, again confirming prediction specificity. Results show that prediction errors are salient in self-initiated auditory speech sounds, which may lead to more efficient error correction in speech production.
Affiliation(s)
- Franziska Knolle
- Department of Psychiatry, University of Cambridge, Cambridge, United Kingdom; Department of Neuroradiology, Technical University of Munich, Munich, Germany.
- Michael Schwartze
- Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, Netherlands.
- Erich Schröger
- Institute of Psychology, Leipzig University, Leipzig, Germany.
- Sonja A Kotz
- Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, Netherlands; Department of Neuropsychology, Max Planck Institute of Cognitive and Brain Sciences, Leipzig, Germany.
28
Jack BN, Le Pelley ME, Han N, Harris AW, Spencer KM, Whitford TJ. Inner speech is accompanied by a temporally-precise and content-specific corollary discharge. Neuroimage 2019; 198:170-180. [DOI: 10.1016/j.neuroimage.2019.04.038]
29
Burgess JD, Major BP, McNeel C, Clark GM, Lum JAG, Enticott PG. Learning to Expect: Predicting Sounds During Movement Is Related to Sensorimotor Association During Listening. Front Hum Neurosci 2019; 13:215. [PMID: 31333431 PMCID: PMC6624421 DOI: 10.3389/fnhum.2019.00215]
Abstract
Sensory experiences, such as sound, often result from our motor actions. Over time, repeated sound-producing performance can generate sensorimotor associations. However, it is not clear how sensory and motor information are associated. Here, we explore if sensory prediction is associated with the formation of sensorimotor associations during a learning task. We recorded event-related potentials (ERPs) while participants produced index- and little-finger swipes on a bespoke device, generating novel sounds. ERPs were also obtained as participants heard those sounds played back. Peak suppression was compared to assess sensory prediction. Additionally, transcranial magnetic stimulation (TMS) was used during listening to generate finger-motor evoked potentials (MEPs). MEPs were recorded before and after training upon hearing these sounds, and then compared to reveal sensorimotor associations. Finally, we explored the relationship between these components. Results demonstrated that an increased positive-going peak (e.g., P2) and a suppressed negative-going peak (e.g., N2) were recorded during action, revealing some sensory prediction outcomes (P2: p = 0.050, ηp2 = 0.208; N2: p = 0.001, ηp2 = 0.474). Increased MEPs were also observed upon hearing congruent sounds compared with incongruent sounds (i.e., sounds associated with a finger), demonstrating precise sensorimotor associations that were not present before learning (Index finger: p < 0.001, ηp2 = 0.614; Little finger: p < 0.001, ηp2 = 0.529). Consistent with our broad hypotheses, a negative association was observed between MEPs in one finger during listening and ERPs during performance with the other (Index finger MEPs and Fz N1 action ERPs; r = −0.655, p = 0.003). Overall, data suggest that predictive mechanisms are associated with the fine-tuning of sensorimotor associations.
Affiliation(s)
- Jed D Burgess, Brendan P Major, Claire McNeel, Gillian M Clark, Jarrad A G Lum, Peter G Enticott
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, VIC, Australia.
30
van Laarhoven T, Stekelenburg JJ, Eussen MLJM, Vroomen J. Electrophysiological alterations in motor-auditory predictive coding in autism spectrum disorder. Autism Res 2019; 12:589-599. [PMID: 30801964 PMCID: PMC6593426 DOI: 10.1002/aur.2087]
Abstract
The amplitude of the auditory N1 component of the event-related potential (ERP) is typically attenuated for self-initiated sounds, compared to sounds with identical acoustic and temporal features that are triggered externally. This effect has been ascribed to internal forward models predicting the sensory consequences of one's own motor actions. The predictive coding account of autistic symptomatology states that individuals with autism spectrum disorder (ASD) have difficulties anticipating upcoming sensory stimulation due to a decreased ability to infer the probabilistic structure of their environment. Without precise internal forward prediction models to rely on, perception in ASD could be less affected by prior expectations and more driven by sensory input. Following this reasoning, one would expect diminished attenuation of the auditory N1 due to self-initiation in individuals with ASD. Here, we tested this hypothesis by comparing the neural response to self- versus externally-initiated tones between a group of individuals with ASD and a group of age-matched neurotypical controls. ERPs evoked by tones initiated via button-presses were compared with ERPs evoked by the same tones replayed at an identical pace. Significant N1 attenuation effects were only found in the neurotypical control group. Self-initiation of the tones did not attenuate the auditory N1 in the ASD group, indicating that they may be unable to anticipate the auditory sensory consequences of their own motor actions. These results show that individuals with ASD have alterations in sensory attenuation of self-initiated sounds, and support the notion of impaired predictive coding as a core deficit underlying autistic symptomatology.
Lay Summary Many individuals with ASD experience difficulties in processing sensory information (for example, increased sensitivity to sound). Here we show that these difficulties may be related to an inability to anticipate upcoming sensory stimulation. Our findings contribute to a better understanding of the neural mechanisms underlying the different sensory perception experienced by individuals with ASD.
Affiliation(s)
- Thijs van Laarhoven
- Department of Cognitive Neuropsychology, Tilburg University, 5000 LE Tilburg, The Netherlands.
- Jeroen J Stekelenburg
- Department of Cognitive Neuropsychology, Tilburg University, 5000 LE Tilburg, The Netherlands.
- Mart L J M Eussen
- Department of Child and Adolescent Psychiatry, Yulius Mental Health Organization, Dordrecht, The Netherlands; Department of Autism, Yulius Mental Health Organization, Dordrecht, The Netherlands.
- Jean Vroomen
- Department of Cognitive Neuropsychology, Tilburg University, 5000 LE Tilburg, The Netherlands.
31
Osumi T, Tsuji K, Shibata M, Umeda S. Machiavellianism and early neural responses to others' facial expressions caused by one's own decisions. Psychiatry Res 2019; 271:669-677. [PMID: 30791340 DOI: 10.1016/j.psychres.2018.12.037]
Abstract
The processing of social stimuli generated by one's own voluntary behavior is an element of social adaptation. It is known that self-generated stimuli induce attenuated sensory experiences compared with externally generated stimuli. The present study aimed to examine this self-specific attenuation effect on early stimulus processing in the case of others' facial expressions during interpersonal interactions. In addition, this study explored the possibility that the self-specific attenuation effect on social cognition is modulated by antisocial personality traits such as Machiavellianism. We analyzed early components of the event-related brain potential in participants elicited by happy and sad facial expressions of others when the participant's decision was responsible for the others' emotions and when the others' facial expressions were independent of the participant's decision. Compared to the non-responsible condition, the responsible condition showed an attenuated amplitude of the N170 component in response to sad faces. Moreover, Machiavellianism explained individual differences in the self-specific attenuation effect depending on the affective valence of social signals. The present findings support the possibility that the self-specific attenuation effect extends to interpersonal interactions and imply that distorted cognition of others' emotions caused by one's own behavior is associated with personality disorders that promote antisocial behaviors.
Affiliation(s)
- Takahiro Osumi
- Japan Society for the Promotion of Science (JSPS), Tokyo, Japan; Department of Psychology, Keio University, Tokyo, Japan.
- Koki Tsuji
- Graduate School of Human Relations, Keio University, Tokyo, Japan; Japan Society for the Promotion of Science (JSPS), Tokyo, Japan.
- Midori Shibata
- Keio Advanced Research Center, Keio University, Tokyo, Japan.
- Satoshi Umeda
- Department of Psychology, Keio University, Tokyo, Japan; Keio Advanced Research Center, Keio University, Tokyo, Japan.
32
Pinheiro AP, Schwartze M, Kotz SA. Voice-selective prediction alterations in nonclinical voice hearers. Sci Rep 2018; 8:14717. [PMID: 30283058 PMCID: PMC6170384 DOI: 10.1038/s41598-018-32614-9]
Abstract
Auditory verbal hallucinations (AVH) are a cardinal symptom of psychosis but also occur in 6–13% of the general population. Voice perception is thought to engage an internal forward model that generates predictions, preparing the auditory cortex for upcoming sensory feedback. Impaired processing of sensory feedback in vocalization seems to underlie the experience of AVH in psychosis, but whether this is the case in nonclinical voice hearers remains unclear. The current study used electroencephalography (EEG) to investigate whether and how hallucination predisposition (HP) modulates the internal forward model in response to self-initiated tones and self-voices. Participants varying in HP (based on the Launay-Slade Hallucination Scale) listened to self-generated and externally generated tones or self-voices. HP did not affect responses to self vs. externally generated tones. However, HP altered the processing of the self-generated voice: increased HP was associated with increased pre-stimulus alpha power and increased N1 response to the self-generated voice. HP did not affect the P2 response to voices. These findings confirm that both prediction and comparison of predicted and perceived feedback to a self-generated voice are altered in individuals with AVH predisposition. Specific alterations in the processing of self-generated vocalizations may establish a core feature of the psychosis continuum.
Affiliation(s)
- Ana P Pinheiro
- Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal; Neuropsychophysiology Lab, School of Psychology, University of Minho, Braga, Portugal.
- Michael Schwartze
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands.
- Sonja A Kotz
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands; Department of Neuropsychology, Max-Planck-Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
33
Omission P3 after voluntary action indexes the formation of action-driven prediction. Int J Psychophysiol 2018; 124:54-61. [DOI: 10.1016/j.ijpsycho.2017.12.006]
34
Attenuation of Responses to Self-Generated Sounds in Auditory Cortical Neurons. J Neurosci 2017; 36:12010-12026. [PMID: 27881785 DOI: 10.1523/jneurosci.1564-16.2016]
Abstract
Many of the sounds that we perceive are caused by our own actions, for example when speaking or moving, and must be distinguished from sounds caused by external events. Studies using macroscopic measurements of brain activity in human subjects have consistently shown that responses to self-generated sounds are attenuated in amplitude. However, the underlying manifestation of this phenomenon at the cellular level is not well understood. To address this, we recorded the activity of neurons in the auditory cortex of mice in response to sounds generated by their own behavior. We found that the responses of auditory cortical neurons to these self-generated sounds were consistently attenuated, compared with the same sounds generated independently of the animals' behavior. This effect was observed in both putative pyramidal neurons and in interneurons and was stronger in lower layers of auditory cortex. Downstream of the auditory cortex, we found that responses of hippocampal neurons to self-generated sounds were almost entirely suppressed. Responses to self-generated optogenetic stimulation of auditory thalamocortical terminals were also attenuated, suggesting a cortical contribution to this effect. Further analyses revealed that the attenuation of self-generated sounds was not simply due to the nonspecific effects of movement or behavioral state on auditory responsiveness. However, the strength of attenuation depended on the degree to which self-generated sounds were expected to occur, in a cell-type-specific manner. Together, these results reveal the cellular basis underlying attenuated responses to self-generated sounds and suggest that predictive processes contribute to this effect. SIGNIFICANCE STATEMENT Distinguishing self-generated from externally generated sensory input poses a fundamental problem for behaving organisms. 
Our study in mice shows for the first time that responses of auditory cortical neurons are attenuated to sounds generated manually by the animals' own behavior. This effect is distinct from the nonspecific effect of behavioral activity on auditory responsiveness that has previously been reported and its magnitude is modulated by the probability with which self-generated sounds occur, suggesting an underlying predictive process. We also reveal how this effect varies across cell types and cortical layers. These findings lay a foundation for studying impairments in the processing of self-generated sounds, which are observed in psychiatric illness, in animal disease models.
35
Pinheiro AP, Barros C, Dias M, Niznikiewicz M. Does emotion change auditory prediction and deviance detection? Biol Psychol 2017; 127:123-133. [PMID: 28499839 DOI: 10.1016/j.biopsycho.2017.05.007]
Abstract
In the last decades, a growing number of studies provided compelling evidence supporting the interplay of cognitive and affective processes. However, it remains to be clarified whether and how an emotional context affects the prediction and detection of change in unattended sensory events. In an event-related potential (ERP) study, we probed the modulatory role of pleasant, unpleasant and neutral visual contexts on the brain response to automatic detection of change in spectral (intensity) vs. temporal (duration) sound features. Twenty participants performed a passive auditory oddball task. Additionally, we tested the relationship between ERPs and self-reported mood. Participants reported more negative mood after the negative block. The P2 amplitude elicited by standards was increased in a positive context. Mismatch Negativity (MMN) amplitude was decreased in the negative relative to the neutral and positive contexts, and was associated with self-reported mood. These findings suggest that the detection of regularities in the auditory stream was facilitated in a positive context, whereas a negative visual context interfered with prediction error elicitation, through associated mood changes. Both ERP and behavioral effects highlight the intricate links between emotion, perception and cognitive processes.
Affiliation(s)
- Ana P Pinheiro
- Neuropsychophysiology Lab, School of Psychology, University of Minho, Braga, Portugal; Faculty of Psychology, University of Lisbon, Lisbon, Portugal.
- Carla Barros
- Neuropsychophysiology Lab, School of Psychology, University of Minho, Braga, Portugal.
- Marcelo Dias
- Neuropsychophysiology Lab, School of Psychology, University of Minho, Braga, Portugal.
- Margaret Niznikiewicz
- VA Boston Healthcare System, Department of Psychiatry, Harvard Medical School, Boston, MA, USA.
36
Weller L, Schwarz KA, Kunde W, Pfister R. Was it me? – Filling the interval between action and effects increases agency but not sensory attenuation. Biol Psychol 2017; 123:241-249. [DOI: 10.1016/j.biopsycho.2016.12.015]
37
Timm J, Schönwiesner M, Schröger E, SanMiguel I. Sensory suppression of brain responses to self-generated sounds is observed with and without the perception of agency. Cortex 2016; 80:5-20. [DOI: 10.1016/j.cortex.2016.03.018]
38
Pinheiro AP, Rezaii N, Nestor PG, Rauber A, Spencer KM, Niznikiewicz M. Did you or I say pretty, rude or brief? An ERP study of the effects of speaker's identity on emotional word processing. Brain Lang 2016; 153-154:38-49. [PMID: 26894680 DOI: 10.1016/j.bandl.2015.12.003]
Abstract
During speech comprehension, multiple cues need to be integrated at a millisecond speed, including semantic information, as well as voice identity and affect cues. A processing advantage has been demonstrated for self-related stimuli when compared with non-self stimuli, and for emotional relative to neutral stimuli. However, very few studies investigated self-other speech discrimination and, in particular, how emotional valence and voice identity interactively modulate speech processing. In the present study we probed how the processing of words' semantic valence is modulated by speaker's identity (self vs. non-self voice). Sixteen healthy subjects listened to 420 prerecorded adjectives differing in voice identity (self vs. non-self) and semantic valence (neutral, positive and negative), while electroencephalographic data were recorded. Participants were instructed to decide whether the speech they heard was their own (self-speech condition), someone else's (non-self speech), or if they were unsure. The ERP results demonstrated interactive effects of speaker's identity and emotional valence on both early (N1, P2) and late (Late Positive Potential - LPP) processing stages: compared with non-self speech, self-speech with neutral valence elicited more negative N1 amplitude, self-speech with positive valence elicited more positive P2 amplitude, and self-speech with both positive and negative valence elicited more positive LPP. ERP differences between self and non-self speech occurred in spite of similar accuracy in the recognition of both types of stimuli. Together, these findings suggest that emotion and speaker's identity interact during speech processing, in line with observations of partially dependent processing of speech and speaker information.
Affiliation(s)
- Ana P Pinheiro
- Neuropsychophysiology Laboratory, Psychology Research Center (CIPsi), School of Psychology, University of Minho, Braga, Portugal; Clinical Neuroscience Division, Laboratory of Neuroscience, VA Boston Healthcare System-Brockton Division, Department of Psychiatry, Harvard Medical School, Brockton, MA, United States; Faculty of Psychology, University of Lisbon, Lisbon, Portugal.
- Neguine Rezaii
- Clinical Neuroscience Division, Laboratory of Neuroscience, VA Boston Healthcare System-Brockton Division, Department of Psychiatry, Harvard Medical School, Brockton, MA, United States.
- Paul G Nestor
- Clinical Neuroscience Division, Laboratory of Neuroscience, VA Boston Healthcare System-Brockton Division, Department of Psychiatry, Harvard Medical School, Brockton, MA, United States; Department of Psychology, University of Massachusetts, Boston, MA, United States.
- Andréia Rauber
- International Studies in Computational Linguistics, University of Tübingen, Tübingen, Germany.
- Kevin M Spencer
- Neural Dynamics Laboratory, Research Service, VA Boston Healthcare System, and Department of Psychiatry, Harvard Medical School, Boston, MA, United States.
- Margaret Niznikiewicz
- Clinical Neuroscience Division, Laboratory of Neuroscience, VA Boston Healthcare System-Brockton Division, Department of Psychiatry, Harvard Medical School, Brockton, MA, United States.
39
Horváth J. Action-related auditory ERP attenuation: Paradigms and hypotheses. Brain Res 2015; 1626:54-65. [PMID: 25843932 DOI: 10.1016/j.brainres.2015.03.038]
Abstract
A number of studies have shown that the auditory N1 event-related potential (ERP) is attenuated when elicited by self-induced or self-generated sounds. Because N1 is a correlate of auditory feature- and event-detection, it was generally assumed that N1-attenuation reflected the cancellation of auditory re-afference, enabled by the internal forward modeling of the predictable sensory consequences of the given action. Focusing on paradigms utilizing non-speech actions, the present review summarizes recent progress on action-related auditory attenuation. Following a critical analysis of the most widely used, contingent paradigm, two further hypotheses on the possible causes of action-related auditory ERP attenuation are presented. The attention hypothesis suggests that auditory ERP attenuation is brought about by a temporary division of attention between the action and the auditory stimulation. The pre-activation hypothesis suggests that the attenuation is caused by the activation of a sensory template during the initiation of the action, which interferes with the incoming stimulation. Although each hypothesis can account for a number of findings, none of them can accommodate the whole spectrum of results. It is suggested that a better understanding of auditory ERP attenuation phenomena could be achieved by systematic investigations of the types of actions, the degree of action-effect contingency, and the temporal characteristics of action-effect contingency representation-buildup and -deactivation. This article is part of a Special Issue entitled SI: Prediction and Attention.
Affiliation(s)
- János Horváth
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, P.O.B. 286, H-1519 Budapest, Hungary.
40
Poonian SK, McFadyen J, Ogden J, Cunnington R. Implicit Agency in Observed Actions: Evidence for N1 Suppression of Tones Caused by Self-made and Observed Actions. J Cogn Neurosci 2015; 27:752-64. [DOI: 10.1162/jocn_a_00745]
Abstract
Every day we make attributions about how our actions and the actions of others cause consequences in the world around us. It is unknown whether we use the same implicit process in attributing causality when observing others' actions as we do when making our own. The aim of this research was to investigate the neural processes involved in the implicit sense of agency we form between actions and effects, for both our own actions and when watching others' actions. Using an interval estimation paradigm to elicit intentional binding in self-made and observed actions, we measured the EEG responses indicative of anticipatory processes before an action and the ERPs in response to the sensory consequence. We replicated our previous findings that we form a sense of implicit agency over our own and others' actions. Crucially, EEG results showed that tones caused by either self-made or observed actions both resulted in suppression of the N1 component of the sensory ERP, with no difference in suppression between consequences caused by observed actions compared with self-made actions. Furthermore, this N1 suppression was greatest for tones caused by observed goal-directed actions rather than non-action or non-goal-related visual events. This suggests that top–down processes act upon the neural responses to sensory events caused by goal-directed actions in the same way for events caused by the self or those made by other agents.
41
Schröger E, Marzecová A, SanMiguel I. Attention and prediction in human audition: a lesson from cognitive psychophysiology. Eur J Neurosci 2015; 41:641-64. [PMID: 25728182 PMCID: PMC4402002 DOI: 10.1111/ejn.12816] [Citation(s) in RCA: 156] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2014] [Revised: 11/27/2014] [Accepted: 12/01/2014] [Indexed: 11/30/2022]
Abstract
Attention is a hypothetical mechanism in the service of perception that facilitates the processing of relevant information and inhibits the processing of irrelevant information. Prediction is a hypothetical mechanism in the service of perception that considers prior information when interpreting the sensorial input. Although both (attention and prediction) aid perception, they are rarely considered together. Auditory attention typically yields enhanced brain activity, whereas auditory prediction often results in attenuated brain responses. However, when strongly predicted sounds are omitted, brain responses to silence resemble those elicited by sounds. Studies jointly investigating attention and prediction revealed that these different mechanisms may interact, e.g. attention may magnify the processing differences between predicted and unpredicted sounds. Following the predictive coding theory, we suggest that prediction relates to predictions sent down from predictive models housed in higher levels of the processing hierarchy to lower levels and attention refers to gain modulation of the prediction error signal sent up to the higher level. As predictions encode contents and confidence in the sensory data, and as gain can be modulated by the intention of the listener and by the predictability of the input, various possibilities for interactions between attention and prediction can be unfolded. From this perspective, the traditional distinction between bottom-up/exogenous and top-down/endogenous driven attention can be revisited and the classic concepts of attentional gain and attentional trace can be integrated.
Affiliation(s)
- Erich Schröger
- Institute for Psychology, BioCog - Cognitive and Biological Psychology, University of Leipzig, Neumarkt 9-19, D-04109 Leipzig, Germany
- Anna Marzecová
- Institute for Psychology, BioCog - Cognitive and Biological Psychology, University of Leipzig, Neumarkt 9-19, D-04109 Leipzig, Germany
- Iria SanMiguel
- Institute for Psychology, BioCog - Cognitive and Biological Psychology, University of Leipzig, Neumarkt 9-19, D-04109 Leipzig, Germany
42
Parmentier FBR, Kefauver M. The semantic aftermath of distraction by deviant sounds: Crosstalk interference is mediated by the predictability of semantic congruency. Brain Res 2015; 1626:247-57. [PMID: 25641044 DOI: 10.1016/j.brainres.2015.01.034] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2014] [Revised: 12/23/2014] [Accepted: 01/19/2015] [Indexed: 11/17/2022]
Abstract
Rare changes in a stream of otherwise repeated task-irrelevant sounds break through selective attention and disrupt performance in an unrelated visual task. This deviance distraction effect emerges because deviant sounds violate the cognitive system's predictions. In this study we sought to examine whether predictability also mediates the so-called semantic effect, whereby behavioral performance suffers from the clash between the involuntary semantic evaluation of irrelevant sounds and the voluntary processing of visual targets (e.g., when participants must categorize a right visual arrow following the presentation of the deviant sound "left"). By manipulating the conditional probabilities of the congruent and incongruent deviant sounds in a left/right arrow categorization task, we elicited implicit predictions about the upcoming target and related response. We observed a linear increase of the semantic effect with the proportion of congruent deviant trials (i.e., as deviant sounds increasingly predicted congruent targets). We conclude that deviant sounds affect response times based on a combination of crosstalk interference and two types of prediction violations: stimulus violations (violations of predictions regarding the identity of upcoming irrelevant sounds) and semantic violations (violations of predictions regarding the target afforded by deviant sounds). We report a three-parameter model that captures all key features of the observed RTs. Overall, our results fit with the view that the brain builds forward models of the environment in order to optimize cognitive processing and that control of one's attention and actions is called upon when predictions are violated. This article is part of a Special Issue entitled SI: Prediction and Attention.
Affiliation(s)
- Fabrice B R Parmentier
- Neuropsychology & Cognition Group, Department of Psychology and Research Institute for Health Sciences (iUNICS), University of the Balearic Islands, Palma, Balearic Islands, Spain; Instituto de Investigación Sanitaria de Palma (IdISPa), Balearic Islands, Spain; School of Psychology, University of Western Australia, Perth, WA, Australia.
- Miriam Kefauver
- Neuropsychology & Cognition Group, Department of Psychology and Research Institute for Health Sciences (iUNICS), University of the Balearic Islands, Palma, Balearic Islands, Spain
43
Suppression of the N1 auditory evoked potential for sounds generated by the upper and lower limbs. Biol Psychol 2014; 102:108-17. [DOI: 10.1016/j.biopsycho.2014.06.007] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2013] [Revised: 06/24/2014] [Accepted: 06/27/2014] [Indexed: 11/20/2022]
44
Ford JM, Palzes VA, Roach BJ, Mathalon DH. Did I do that? Abnormal predictive processes in schizophrenia when button pressing to deliver a tone. Schizophr Bull 2014; 40:804-12. [PMID: 23754836 PMCID: PMC4059422 DOI: 10.1093/schbul/sbt072] [Citation(s) in RCA: 109] [Impact Index Per Article: 10.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
Abstract
Motor actions are preceded by an efference copy of the motor command, resulting in a corollary discharge of the expected sensation in sensory cortex. These mechanisms allow animals to predict sensations, suppress responses to self-generated sensations, and thereby process sensations efficiently and economically. During talking, patients with schizophrenia show less evidence of pretalking activity and less suppression of the speech sound, consistent with dysfunction of efference copy and corollary discharge, respectively. We asked if patterns seen in talking would generalize to pressing a button to hear a tone, a paradigm translatable to less vocal animals. In 26 patients [23 schizophrenia, 3 schizoaffective (SZ)] and 22 healthy controls (HC), suppression of the N1 component of the auditory event-related potential was estimated by comparing N1 to tones delivered by button presses and N1 to those tones played back. The lateralized readiness potential (LRP) associated with the motor plan preceding presses to deliver tones was estimated by comparing right and left hemispheres' neural activity. The relationship between N1 suppression and LRP amplitude was assessed. LRP preceding button presses to deliver tones was larger in HC than SZ, as was N1 suppression. LRP amplitude and N1 suppression were correlated in both groups, suggesting stronger efference copies are associated with stronger corollary discharges. SZ have reduced N1 suppression, reflecting corollary discharge action, and smaller LRPs preceding button presses to deliver tones, reflecting the efference copy of the motor plan. Effects seen during vocalization largely extend to other motor acts more translatable to lab animals.
Affiliation(s)
- Judith M. Ford
- Mental Health Service, San Francisco VA Medical Center, San Francisco, CA; Department of Psychiatry, University of California, San Francisco, CA. *To whom correspondence should be addressed; 4150 Clement Street, San Francisco, CA 94121; tel: 415-221-4810, ext. 4187; fax: 415-750-6622
- Vanessa A. Palzes
- Mental Health Service, San Francisco VA Medical Center, San Francisco, CA
- Brian J. Roach
- Mental Health Service, San Francisco VA Medical Center, San Francisco, CA
- Daniel H. Mathalon
- Mental Health Service, San Francisco VA Medical Center, San Francisco, CA; Department of Psychiatry, University of California, San Francisco, CA
45
Flagmeier SG, Ray KL, Parkinson AL, Li K, Vargas R, Price LR, Laird AR, Larson CR, Robin DA. The neural changes in connectivity of the voice network during voice pitch perturbation. BRAIN AND LANGUAGE 2014; 132:7-13. [PMID: 24681401 PMCID: PMC4526025 DOI: 10.1016/j.bandl.2014.02.001] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/18/2013] [Revised: 01/28/2014] [Accepted: 02/04/2014] [Indexed: 06/03/2023]
Abstract
Voice control is critical to communication. To date, studies have used behavioral, electrophysiological and functional data to investigate the neural correlates of voice control using perturbation tasks, but have yet to examine the interactions of these neural regions. The goal of this study was to use structural equation modeling of functional neuroimaging data to examine network properties of voice with and without perturbation. Results showed that the presence of a pitch shift, which was processed as an error in vocalization, altered connections between right STG and left STG. Other regions that revealed differences in connectivity during error detection and correction included bilateral inferior frontal gyrus and the primary and premotor cortices. Results indicated that STG plays a critical role in voice control, specifically during error detection and correction. Additionally, pitch perturbation elicits changes in the voice network that suggest the right hemisphere is critical to pitch modulation.
Affiliation(s)
- Sabina G Flagmeier
- Research Imaging Institute, University of Texas Health Science Center at San Antonio, United States
- Kimberly L Ray
- Research Imaging Institute, University of Texas Health Science Center at San Antonio, United States
- Amy L Parkinson
- Research Imaging Institute, University of Texas Health Science Center at San Antonio, United States
- Karl Li
- Research Imaging Institute, University of Texas Health Science Center at San Antonio, United States
- Robert Vargas
- Research Imaging Institute, University of Texas Health Science Center at San Antonio, United States
- Larry R Price
- Department of Mathematics and College of Education, Texas State University, San Marcos, TX, United States
- Angela R Laird
- Research Imaging Institute, University of Texas Health Science Center at San Antonio, United States; Department of Physics, Florida International University, Miami, FL, United States
- Charles R Larson
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, United States
- Donald A Robin
- Research Imaging Institute, University of Texas Health Science Center at San Antonio, United States; Neurology, University of Texas Health Science Center at San Antonio, United States; Biomedical Engineering, University of Texas Health Science Center at San Antonio, United States; Radiology, University of Texas Health Science Center at San Antonio, United States; Honor's College, University of Texas San Antonio, San Antonio, United States.
46
Wang J, Mathalon DH, Roach BJ, Reilly J, Keedy SK, Sweeney JA, Ford JM. Action planning and predictive coding when speaking. Neuroimage 2014; 91:91-8. [PMID: 24423729 DOI: 10.1016/j.neuroimage.2014.01.003] [Citation(s) in RCA: 58] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2013] [Revised: 11/23/2013] [Accepted: 01/03/2014] [Indexed: 12/20/2022] Open
Abstract
Across the animal kingdom, sensations resulting from an animal's own actions are processed differently from sensations resulting from external sources, with self-generated sensations being suppressed. A forward model has been proposed to explain this process across sensorimotor domains. During vocalization, reduced processing of one's own speech is believed to result from a comparison of speech sounds to corollary discharges of intended speech production generated from efference copies of commands to speak. Until now, anatomical and functional evidence validating this model in humans has been indirect. Using EEG with anatomical MRI to facilitate source localization, we demonstrate that inferior frontal gyrus activity during the 300ms before speaking was associated with suppressed processing of speech sounds in auditory cortex around 100ms after speech onset (N1). These findings indicate that an efference copy from speech areas in prefrontal cortex is transmitted to auditory cortex, where it is used to suppress processing of anticipated speech sounds. About 100ms after N1, a subsequent auditory cortical component (P2) was not suppressed during talking. The combined N1 and P2 effects suggest that although sensory processing is suppressed as reflected in N1, perceptual gaps may be filled as reflected in the lack of P2 suppression, explaining the discrepancy between sensory suppression and preserved sensory experiences. These findings, coupled with the coherence between relevant brain regions before and during speech, provide new mechanistic understanding of the complex interactions between action planning and sensory processing that provide for differentiated tagging and monitoring of one's own speech, processes disrupted in neuropsychiatric disorders.
Affiliation(s)
- Jun Wang
- Department of Psychiatry, University of Texas Southwestern, Dallas, TX 75390, USA
- Daniel H Mathalon
- San Francisco VA Medical Center, San Francisco, CA 94121, USA; Department of Psychiatry, University of California, San Francisco, CA 94121, USA
- Brian J Roach
- San Francisco VA Medical Center, San Francisco, CA 94121, USA
- James Reilly
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL 60611, USA
- Sarah K Keedy
- Department of Psychiatry and Behavioral Neuroscience, University of Chicago, Chicago, IL 60637, USA
- John A Sweeney
- Department of Psychiatry, University of Texas Southwestern, Dallas, TX 75390, USA
- Judith M Ford
- San Francisco VA Medical Center, San Francisco, CA 94121, USA; Department of Psychiatry, University of California, San Francisco, CA 94121, USA.
47
Maes PJ, Leman M, Palmer C, Wanderley MM. Action-based effects on music perception. Front Psychol 2014; 4:1008. [PMID: 24454299 PMCID: PMC3879531 DOI: 10.3389/fpsyg.2013.01008] [Citation(s) in RCA: 85] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2013] [Accepted: 12/17/2013] [Indexed: 11/30/2022] Open
Abstract
The classical, disembodied approach to music cognition conceptualizes action and perception as separate, peripheral processes. In contrast, embodied accounts of music cognition emphasize the central role of the close coupling of action and perception. It is a commonly established fact that perception spurs action tendencies. We present a theoretical framework that captures the ways in which the human motor system and its actions can reciprocally influence the perception of music. The cornerstone of this framework is the common coding theory, postulating a representational overlap in the brain between the planning, the execution, and the perception of movement. The integration of action and perception in so-called internal models is explained as a result of associative learning processes. Characteristic of internal models is that they allow intended or perceived sensory states to be transferred into corresponding motor commands (inverse modeling), and vice versa, to predict the sensory outcomes of planned actions (forward modeling). Embodied accounts typically refer to inverse modeling to explain action effects on music perception (Leman, 2007). We extend this account by pinpointing forward modeling as an alternative mechanism by which action can modulate perception. We provide an extensive overview of recent empirical evidence in support of this idea. Additionally, we demonstrate that motor dysfunctions can cause perceptual disabilities, supporting the main idea of the paper that the human motor system plays a functional role in auditory perception. The finding that music perception is shaped by the human motor system and its actions suggests that the musical mind is highly embodied. However, we advocate for a more radical approach to embodied (music) cognition in the sense that it needs to be considered as a dynamical process, in which aspects of action, perception, introspection, and social interaction are of crucial importance.
Affiliation(s)
- Pieter-Jan Maes
- Department of Music Research, McGill University, Montreal, QC, Canada
- Marc Leman
- Department of Musicology, Ghent University, Ghent, Belgium
- Caroline Palmer
- Department of Psychology, McGill University, Montreal, QC, Canada
48
Behroozmand R, Ibrahim N, Korzyukov O, Robin DA, Larson CR. Left-hemisphere activation is associated with enhanced vocal pitch error detection in musicians with absolute pitch. Brain Cogn 2013; 84:97-108. [PMID: 24355545 DOI: 10.1016/j.bandc.2013.11.007] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2013] [Revised: 09/16/2013] [Accepted: 11/20/2013] [Indexed: 11/25/2022]
Abstract
The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test the hypothesis whether the neural mechanisms of the left-hemisphere enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right-hemisphere for both AP and RP musicians compared with the NM group. However, the left-hemisphere P2 component activation was greater in AP and RP musicians compared with NMs and also for the AP compared with RP musicians. The NM group was slower in generating compensatory vocal reactions to feedback pitch perturbation compared with musicians, and they failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left-hemisphere is more active during the processing of auditory feedback for vocal motor control and seems to involve specialized mechanisms that facilitate pitch processing in the AP compared with RP musicians. These findings indicate that the left hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing.
Affiliation(s)
- Roozbeh Behroozmand
- Speech Physiology Lab, Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, United States
- Nadine Ibrahim
- Speech Physiology Lab, Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, United States
- Oleg Korzyukov
- Speech Physiology Lab, Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, United States
- Donald A Robin
- Research Imaging Institute, University of Texas Health Science Center San Antonio, San Antonio, TX 78229, United States
- Charles R Larson
- Speech Physiology Lab, Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, United States.
49
Saupe K, Widmann A, Trujillo-Barreto NJ, Schröger E. Sensorial suppression of self-generated sounds and its dependence on attention. Int J Psychophysiol 2013; 90:300-10. [DOI: 10.1016/j.ijpsycho.2013.09.006] [Citation(s) in RCA: 41] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2013] [Revised: 09/24/2013] [Accepted: 09/25/2013] [Indexed: 11/25/2022]
50
Abstract
Recent neuroscience advances suggest that when interacting with our environment, along with previous experience, we use contextual cues and regularities to form predictions that guide our perceptions and actions. The goal of such active "predictive sensing" is to selectively enhance the processing and representation of behaviorally relevant information in an efficient manner. Since a hallmark of schizophrenia is impaired information selection, we tested whether this deficiency stems from dysfunctional predictive sensing by measuring the degree to which neuronal activity predicts relevant events. In healthy subjects, we established that these mechanisms are engaged in an effort-dependent manner and that, based on a correspondence between human scalp and intracranial nonhuman primate recordings, their main role is a predictive suppression of excitability in task-irrelevant regions. In contrast, schizophrenia patients displayed a reduced alignment of neuronal activity to attended stimuli, which correlated with their behavioral performance deficits and clinical symptoms. These results support the relevance of predictive sensing for normal and aberrant brain function, and highlight the importance of neuronal mechanisms that mold internal ongoing neuronal activity to model key features of the external environment.