1. Navare UP, Ciardo F, Kompatsiari K, De Tommaso D, Wykowska A. When performing actions with robots, attribution of intentionality affects the sense of joint agency. Sci Robot 2024; 9:eadj3665. [PMID: 38924424] [DOI: 10.1126/scirobotics.adj3665]
Abstract
Sense of joint agency (SoJA) is the sense of control experienced by humans when acting with others to bring about changes in the shared environment. SoJA is proposed to arise from the sensorimotor predictive processes underlying action control and monitoring. Because SoJA is a ubiquitous phenomenon when we perform actions with other humans, it is of great interest and importance to understand whether, and under what conditions, SoJA occurs in collaborative tasks with humanoid robots. In this study, using behavioral measures and neural responses measured by electroencephalography (EEG), we evaluated whether SoJA occurs in joint action with the humanoid robot iCub and whether its emergence is influenced by the perceived intentionality of the robot. Behavioral results show that participants experienced SoJA with the robot partner when it was presented as an intentional agent but not when it was presented as a mechanical artifact. EEG results show that when the robot is presented as an intentional agent, SoJA emerges through the ability to form similarly accurate predictions about the sensory consequences of one's own and others' actions, leading to similar modulatory activity over sensory processing. Together, our results shed light on the joint sensorimotor processing mechanisms underlying the emergence of SoJA in human-robot interaction and underscore the importance of attributing intentionality to the robot in human-robot collaboration.
Affiliation(s)
- Uma Prashant Navare
- Social Cognition in Human-Robot Interaction, Italian Institute of Technology, 16152 Genova, Italy
- Department of Computer Science, Faculty of Science and Engineering, University of Manchester, M13 9PL, Manchester, United Kingdom
- Francesca Ciardo
- Social Cognition in Human-Robot Interaction, Italian Institute of Technology, 16152 Genova, Italy
- Department of Psychology, University of Milan-Bicocca, Milan, Italy
- Kyveli Kompatsiari
- Social Cognition in Human-Robot Interaction, Italian Institute of Technology, 16152 Genova, Italy
- Section for Cognitive Systems, DTU Compute, Kgs. Lyngby, Copenhagen, Denmark
- Davide De Tommaso
- Social Cognition in Human-Robot Interaction, Italian Institute of Technology, 16152 Genova, Italy
- Agnieszka Wykowska
- Social Cognition in Human-Robot Interaction, Italian Institute of Technology, 16152 Genova, Italy
2. Balla VR, Kilencz T, Szalóki S, Dalos VD, Partanen E, Csifcsák G. Motor dominance and movement-outcome congruency influence the electrophysiological correlates of sensory attenuation for self-induced visual stimuli. Int J Psychophysiol 2024; 200:112344. [PMID: 38614439] [DOI: 10.1016/j.ijpsycho.2024.112344]
Abstract
This study explores the impact of movement-outcome congruency and motor dominance on the action-associated modulations of early visual event-related potentials (ERPs). Employing the contingent paradigm, participants with varying degrees of motor dominance were exposed to stimuli depicting left or right human hands in the corresponding visual hemifields. Stimuli were either passively observed or evoked by voluntary button-presses with the dominant or non-dominant hand, in a manner that was either congruent or incongruent with stimulus laterality and hemifield. Early occipital responses (C1 and P1 components) revealed modulations consistent with sensory attenuation (SA) for self-evoked stimuli. Our findings suggest that sensory attenuation during the initial stages of visual processing (C1 component) is a general phenomenon across all degrees of handedness and stimulus/movement combinations. However, the magnitude of C1 suppression was modulated by handedness and movement-stimulus congruency, reflecting stronger SA in right-handed participants for stimuli depicting the right hand, when elicited by actions of the corresponding hand, and measured above the contralateral occipital lobe. P1 modulation suggested concurrent but opposing influences of attention and sensory prediction, with more pronounced suppression following stimulus-congruent button-presses over the hemisphere contralateral to movement, especially in left-handed individuals. We suggest that effects of motor dominance on the degree of SA may stem from functional/anatomical asymmetries in the processing of body parts (C1) and attention networks (P1). Overall, our results demonstrate the modulating effect of hand dominance and movement-outcome congruency on SA, underscoring the need for deeper exploration of their interplay. Additional empirical evidence in this direction could substantiate a premotor account for action-associated modulation of early sensory processing in the visual domain.
Affiliation(s)
- Viktória Roxána Balla
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland.
- Tünde Kilencz
- Department of Psychiatry and Psychotherapy, Faculty of Medicine, Semmelweis University, Budapest, Hungary
- Szilvia Szalóki
- Department of Cognitive and Neuropsychology, Institute of Psychology, Faculty of Humanities and Social Sciences, University of Szeged, Hungary
- Vera Daniella Dalos
- Doctoral School of Interdisciplinary Medicine, Faculty of Medicine, University of Szeged, Hungary
- Eino Partanen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
- Gábor Csifcsák
- Department of Psychology, Faculty of Health Sciences, UiT The Arctic University of Norway, Tromsø, Norway
3. Tast V, Schröger E, Widmann A. Suppression and omission effects in auditory predictive processing-Two of the same? Eur J Neurosci 2024. [PMID: 38764129] [DOI: 10.1111/ejn.16393]
Abstract
Recent theories describe perception as an inferential process based on internal predictive models that are adjusted by prediction violations (prediction errors). Two different modulations of the auditory N1 event-related brain potential component are often discussed as expressions of auditory predictive processing. The sound-related N1 component is attenuated for self-generated sounds compared to the N1 elicited by externally generated sounds (N1 suppression). An omission-related component in the N1 time range is elicited when the self-generated sounds are occasionally omitted (omission N1). Both phenomena have been explained by action-related forward modelling, which takes place when the sensory input is predictable: prediction error signals are reduced when predicted sensory input is presented (N1 suppression) and elicited when predicted sensory input is omitted (omission N1). This common theoretical account is appealing but has not yet been directly tested. We manipulated the predictability of a sound in a self-generation paradigm in which, in two conditions, either 80% or 50% of the button presses generated a sound, inducing a strong or a weak expectation for the occurrence of the sound. Consistent with the forward modelling account, an omission N1 was observed in the 80% but not in the 50% condition. However, N1 suppression was highly similar in both conditions. Thus, our results demonstrate a clear effect of predictability for the omission N1 but not for N1 suppression. These results imply that the two phenomena rely (at least in part) on different mechanisms and challenge prediction-related accounts of N1 suppression.
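The N1-suppression measure discussed in this abstract is conventionally computed as a mean-amplitude difference between self-generated and externally generated conditions in a short window around the N1 peak. The following is a minimal illustrative sketch, not the authors' pipeline: the simulated amplitudes, noise level, and 80-120 ms window are invented for the example.

```python
import numpy as np

def mean_amplitude(epochs, times, window=(0.08, 0.12)):
    """Mean amplitude (a.u.) in a time window, averaged over trials and samples."""
    mask = (times >= window[0]) & (times <= window[1])
    return epochs[:, mask].mean()

rng = np.random.default_rng(0)
times = np.linspace(-0.1, 0.4, 256)  # epoch time axis in seconds

def simulate(n1_gain, n_trials=100):
    # N1 modeled as a negative Gaussian deflection peaking near 100 ms, plus noise
    erp = -n1_gain * np.exp(-((times - 0.10) ** 2) / (2 * 0.01 ** 2))
    return erp + rng.normal(0, 0.5, (n_trials, times.size))

external = simulate(n1_gain=5.0)   # externally generated tones
self_gen = simulate(n1_gain=3.0)   # self-generated tones (attenuated N1)

n1_ext = mean_amplitude(external, times)
n1_self = mean_amplitude(self_gen, times)
# N1 is negative-going, so attenuation shows up as a less negative mean
# amplitude for self-generated sounds; a positive difference = suppression
suppression = n1_self - n1_ext
```

The same subtraction logic applies per participant before group statistics; only the window and baseline conventions differ across labs.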
Affiliation(s)
- Valentina Tast
- Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
- Erich Schröger
- Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
- Andreas Widmann
- Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
4. Ody E, Kircher T, Straube B, He Y. Pre-movement event-related potentials and multivariate pattern of EEG encode action outcome prediction. Hum Brain Mapp 2023; 44:6198-6213. [PMID: 37792296] [PMCID: PMC10619393] [DOI: 10.1002/hbm.26506]
Abstract
Self-initiated movements are accompanied by an efference copy, a motor command sent from motor regions to the sensory cortices, containing a prediction of the movement's sensory outcome. Previous studies have proposed pre-motor event-related potentials (ERPs), including the readiness potential (RP) and its lateralized sub-component (LRP), as potential neural markers of action feedback prediction. However, it is not known how specific these neural markers are for voluntary (active) movements as compared to involuntary (passive) movements, which produce much of the same sensory feedback (tactile, proprioceptive) but are not accompanied by an efference copy. The goal of the current study was to investigate how active and passive movements can be distinguished from pre-movement electroencephalography (EEG), and to examine whether this neural activity differs when participants engage in tasks that differ in their expectation of sensory outcomes. Participants made active (self-initiated) or passive (finger moved by device) finger movements that led to either visual or auditory stimuli (100 ms delay), or to no immediate contingency effects (control). We investigated the time window before movement onset by measuring pre-movement ERPs time-locked to the button press. For the RP, we observed an interaction between task and movement, driven by movement differences in the visual and auditory but not the control conditions. The LRP, conversely, showed only a main effect of movement. We then used multivariate pattern analysis to decode movements (active vs. passive). The results revealed ramping decoding for all tasks from around -800 ms onwards, reaching an accuracy of approximately 85% at the movement. Importantly, similar to the RP, we observed lower decoding accuracies for the control condition than for the visual and auditory conditions, but only shortly (from -200 ms) before the button press. We also decoded visual vs. auditory conditions. Here, the task was decodable in both active and passive conditions, but the active condition showed increased decoding accuracy shortly before the button press. Taken together, our results provide robust evidence that pre-movement EEG activity may represent action-feedback prediction in which information about the subsequent sensory outcome is encoded.
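Time-resolved decoding of the kind reported in this abstract (a classifier trained and cross-validated separately at each pre-movement time point) can be sketched with scikit-learn. This is a generic illustration on synthetic data, not the authors' pipeline: the channel count, trial numbers, and the ramping class difference are all assumptions chosen so the toy data reproduces the qualitative "ramping decoding" pattern.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 120, 16, 50
times = np.linspace(-1.0, 0.0, n_times)  # seconds relative to button press
y = np.repeat([0, 1], n_trials // 2)     # 0 = passive, 1 = active

# Inject a class difference that ramps up toward movement onset (t = 0),
# mimicking the pre-movement build-up described in the abstract
ramp = np.clip((times + 0.8) / 0.8, 0, 1)  # zero before -800 ms
X = rng.normal(0, 1, (n_trials, n_channels, n_times))
X[y == 1] += 1.5 * ramp                     # broadcast over channels and time

# Decode active vs. passive at each time point with 5-fold cross-validation
scores = np.array([
    cross_val_score(LogisticRegression(), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])

early = scores[times < -0.8].mean()  # near chance (~0.5) before the ramp
late = scores[times > -0.2].mean()   # well above chance close to movement
```

Plotting `scores` against `times` yields the characteristic ramp from chance toward high accuracy at movement onset.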
Affiliation(s)
- Edward Ody
- Department of Psychiatry and Psychotherapy, University of Marburg, Marburg, Germany
- Tilo Kircher
- Department of Psychiatry and Psychotherapy, University of Marburg, Marburg, Germany
- Benjamin Straube
- Department of Psychiatry and Psychotherapy, University of Marburg, Marburg, Germany
- Yifei He
- Department of Psychiatry and Psychotherapy, University of Marburg, Marburg, Germany
5. Okazaki M, Yumoto M, Kaneko Y, Maruo K. Correlation of motor-auditory cross-modal and auditory unimodal N1 and mismatch responses of schizophrenic patients and normal subjects: an MEG study. Front Psychiatry 2023; 14:1217307. [PMID: 37886112] [PMCID: PMC10598755] [DOI: 10.3389/fpsyt.2023.1217307]
Abstract
Introduction: It has been suggested that the positive symptoms of schizophrenia (hallucinations, delusions, and passivity experiences) are caused by dysfunction of internal and external sensory prediction errors, often discussed in relation to dysfunction of the forward model that executes self-monitoring. Several reports have suggested that forward-model dysfunction in schizophrenia causes misattribution of self-generated thoughts and actions to external sources. There is some evidence that the forward model can be measured using electroencephalography (EEG) and magnetoencephalography (MEG) components such as N1(m) and mismatch negativity (MMN(m)). The objective of this MEG study was to investigate differences in the N1m and MMNm-like activity generated in motor-auditory cross-modal tasks between normal control (NC) subjects and schizophrenic (SC) patients, and to compare that activity with the N1m and MMNm in an auditory unimodal task.
Methods: The N1m and MMNm/MMNm-like activity were recorded in 15 SC patients and 12 matched NC subjects. The N1m attenuation effects and peak amplitudes of MMNm/MMNm-like activity of the NC and SC groups were compared. Additionally, correlations between MEG measures (N1m suppression rate, MMNm, and MMNm-like activity) and clinical variables (Positive and Negative Syndrome Scale (PANSS) scores and antipsychotic drug (APD) dosages) were investigated in SC patients.
Results: (i) There was no significant difference in N1m attenuation between the NC and SC groups, and (ii) the MMNm in the unimodal task was significantly smaller in the SC group than in the NC group. Further, the MMNm-like activity in the cross-modal task was smaller than the MMNm in the unimodal task in the NC group, but there was no significant difference in the SC group. The PANSS positive symptom and general psychopathology scores were moderately negatively correlated with the amplitude of the MMNm-like activity, and the APD dosage was moderately negatively correlated with the N1m suppression rate; however, none of these correlations reached statistical significance.
Discussion: The findings suggest that schizophrenic patients show altered predictive processing relative to healthy subjects at latencies reflecting the MMNm, depending on whether or not a forward model is being generated. This may support the hypothesis that schizophrenic patients tend to misattribute their inner experience to external agents, leading to the characteristic symptoms of schizophrenia.
Affiliation(s)
- Mitsutoshi Okazaki
- Department of Psychiatry, National Center Hospital of Neurology and Psychiatry, Kodaira, Japan
- Department of Psychiatry, Ome Municipal General Hospital, Ome, Japan
- Masato Yumoto
- Department of Clinical Engineering, Faculty of Medical Science and Technology, Gunma Paz University, Takasaki, Japan
- Yuu Kaneko
- Department of Neurosurgery, National Center Hospital of Neurology and Psychiatry, Kodaira, Japan
- Kazushi Maruo
- Department of Biostatistics, Faculty of Medicine, University of Tsukuba, Tsukuba, Japan
6. Sturm S, Costa-Faidella J, SanMiguel I. Neural signatures of memory gain through active exploration in an oculomotor-auditory learning task. Psychophysiology 2023; 60:e14337. [PMID: 37209002] [DOI: 10.1111/psyp.14337]
Abstract
Active engagement improves learning and memory, and self- versus externally generated stimuli are processed differently: perceptual intensity and neural responses are attenuated. Whether the attenuation is linked to memory formation remains unclear. This study investigates whether active oculomotor control over auditory stimuli-controlling for movement and stimulus predictability-benefits associative learning, and studies the underlying neural mechanisms. Using EEG and eye tracking we explored the impact of control during learning on the processing and memory recall of arbitrary oculomotor-auditory associations. Participants (N = 23) learned associations through active exploration or passive observation, using a gaze-controlled interface to generate sounds. Our results show faster learning progress in the active condition. ERPs time-locked to the onset of sound stimuli showed that learning progress was linked to an attenuation of the P3a component. The detection of matching movement-sound pairs triggered a target-matching P3b. There was no general modulation of ERPs through active learning. However, we found continuous variation in the strength of the memory benefit across participants: some benefited more strongly from active control during learning than others. This was paralleled in the strength of the N1 attenuation effect for self-generated stimuli, which was correlated with memory gain in active learning. Our results show that control helps learning and memory and modulates sensory responses. Individual differences during sensory processing predict the strength of the memory benefit. Taken together, these results help to disentangle the effects of agency, unspecific motor-based neuromodulation, and predictability on ERP components and establish a link between self-generation effects and active learning memory gain.
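The brain-behavior link reported in this abstract (N1 attenuation for self-generated stimuli correlated with the memory gain from active learning) reduces to a per-participant correlation between two summary measures. Below is a hedged sketch on synthetic data; the effect sizes, units, and the linear relationship are invented for illustration, and only the analysis shape follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 23  # participants, as in the study

# Hypothetical per-participant measures (synthetic):
# N1 attenuation = passive-minus-active N1 amplitude difference
n1_attenuation = rng.normal(1.0, 0.5, n)
# Memory gain = active-minus-passive recall accuracy, loosely tied to
# attenuation plus individual noise (assumed relationship, for the sketch)
memory_gain = 0.08 * n1_attenuation + rng.normal(0, 0.02, n)

# Pearson correlation across participants, the standard brain-behavior test
r = np.corrcoef(n1_attenuation, memory_gain)[0, 1]
```

In practice one would also report a p-value (e.g. via `scipy.stats.pearsonr`) and inspect the scatterplot for outliers, since n is small.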
Affiliation(s)
- Stefanie Sturm
- Brainlab - Cognitive Neuroscience Research Group, Departament de Psicologia Clinica i Psicobiologia, Universitat de Barcelona, Barcelona, Spain
- Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain
- Jordi Costa-Faidella
- Brainlab - Cognitive Neuroscience Research Group, Departament de Psicologia Clinica i Psicobiologia, Universitat de Barcelona, Barcelona, Spain
- Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Spain
- Iria SanMiguel
- Brainlab - Cognitive Neuroscience Research Group, Departament de Psicologia Clinica i Psicobiologia, Universitat de Barcelona, Barcelona, Spain
- Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Spain
7. Hölle D, Bleichner MG. Smartphone-based ear-electroencephalography to study sound processing in everyday life. Eur J Neurosci 2023; 58:3671-3685. [PMID: 37612776] [DOI: 10.1111/ejn.16124]
Abstract
In everyday life, people differ in their sound perception and thus sound processing. Some people may be distracted by construction noise, while others do not even notice it. With smartphone-based mobile ear-electroencephalography (ear-EEG), we can measure and quantify sound processing in everyday life by analysing presented sounds and also naturally occurring ones. Twenty-four participants completed four controlled conditions in the lab (1 h) and one condition in the office (3 h). All conditions used the same paired-click stimuli. In the lab, participants listened to click tones under four different instructions: no task towards the sounds, reading a newspaper article, listening to an audio article, or counting a rare deviant sound. In the office recording, participants followed daily activities while they were sporadically presented with clicks, without any further instruction. In the beyond-the-lab condition, in addition to the presented sounds, environmental sounds were recorded as acoustic features (i.e., loudness, power spectral density, and sound onsets). We found task-dependent differences in the auditory event-related potentials (ERPs) to the presented click sounds in all lab conditions, which underline that neural processes related to auditory attention can be differentiated with ear-EEG. In the beyond-the-lab condition, we found ERPs comparable to some of the lab conditions. The N1 amplitude to the click sounds beyond the lab was dependent on the background noise, probably due to energetic masking. Contrary to our expectation, we did not find a clear ERP in response to the environmental sounds. Overall, we showed that smartphone-based ear-EEG can be used to study sound processing of well-defined stimuli in everyday life.
Affiliation(s)
- Daniel Hölle
- Neurophysiology of Everyday Life Group, Department of Psychology, University of Oldenburg, Oldenburg, Germany
- Martin G Bleichner
- Neurophysiology of Everyday Life Group, Department of Psychology, University of Oldenburg, Oldenburg, Germany
- Research Center for Neurosensory Science, University of Oldenburg, Oldenburg, Germany
8. Feder S, Miksch J, Grimm S, Krems JF, Bendixen A. Using event-related brain potentials to evaluate motor-auditory latencies in virtual reality. Front Neuroergon 2023; 4:1196507. [PMID: 38234486] [PMCID: PMC10790907] [DOI: 10.3389/fnrgo.2023.1196507]
Abstract
Actions in the real world have immediate sensory consequences. Mimicking these in digital environments is within reach, but technical constraints usually impose a certain latency (delay) between user actions and system responses. It is important to assess the impact of this latency on the users, ideally with measurement techniques that do not interfere with their digital experience. One such unobtrusive technique is electroencephalography (EEG), which can capture the users' brain activity associated with motor responses and sensory events by extracting event-related potentials (ERPs) from the continuous EEG recording. Here we exploit the fact that the amplitude of sensory ERP components (specifically, N1 and P2) reflects the degree to which the sensory event was perceived as an expected consequence of an own action (self-generation effect). Participants (N = 24) elicit auditory events in a virtual-reality (VR) setting by entering codes on virtual keypads to open doors. In a within-participant design, the delay between user input and sound presentation is manipulated across blocks. Occasionally, the virtual keypad is operated by a simulated robot instead, yielding a control condition with externally generated sounds. Results show that N1 (but not P2) amplitude is reduced for self-generated relative to externally generated sounds, and P2 (but not N1) amplitude is modulated by delay of sound presentation in a graded manner. This dissociation between N1 and P2 effects maps back to basic research on self-generation of sounds. We suggest P2 amplitude as a candidate read-out to assess the quality and immersiveness of digital environments with respect to system latency.
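The proposed read-out above, a graded P2 modulation across latency blocks, amounts to testing for a monotonic trend of P2 amplitude over system delay. A minimal sketch on synthetic numbers follows; the delay values, amplitudes, and the linear form of the trend are all assumptions for illustration, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
delays_ms = np.array([50, 100, 200, 400])  # hypothetical block delays

# Synthetic per-block mean P2 amplitudes: graded increase with delay,
# mimicking weaker self-generation modulation at longer latencies
p2 = 2.0 + 0.004 * delays_ms + rng.normal(0, 0.1, delays_ms.size)

# Least-squares linear trend of P2 amplitude over delay
slope, intercept = np.polyfit(delays_ms, p2, deg=1)
```

A positive `slope` would indicate the graded delay sensitivity that makes P2 a candidate latency metric; with real data one would fit per participant and test the slopes at the group level.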
Affiliation(s)
- Sascha Feder
- Cognitive Systems Lab, Institute of Physics, Faculty of Natural Sciences, Chemnitz University of Technology, Chemnitz, Germany
- Jochen Miksch
- Cognitive Systems Lab, Institute of Physics, Faculty of Natural Sciences, Chemnitz University of Technology, Chemnitz, Germany
- Physics of Cognition Group, Institute of Physics, Faculty of Natural Sciences, Chemnitz University of Technology, Chemnitz, Germany
- Sabine Grimm
- Cognitive Systems Lab, Institute of Physics, Faculty of Natural Sciences, Chemnitz University of Technology, Chemnitz, Germany
- Physics of Cognition Group, Institute of Physics, Faculty of Natural Sciences, Chemnitz University of Technology, Chemnitz, Germany
- Josef F. Krems
- Research Group Cognitive and Engineering Psychology, Institute of Psychology, Faculty of Behavioural and Social Sciences, Chemnitz University of Technology, Chemnitz, Germany
- Alexandra Bendixen
- Cognitive Systems Lab, Institute of Physics, Faculty of Natural Sciences, Chemnitz University of Technology, Chemnitz, Germany
9. Harrison AW, Hughes G, Rudman G, Christensen BK, Whitford TJ. Exploring the internal forward model: action-effect prediction and attention in sensorimotor processing. Cereb Cortex 2023:7191713. [PMID: 37288477] [DOI: 10.1093/cercor/bhad189]
Abstract
Action-effect predictions are believed to facilitate movement based on their association with sensory objectives and to suppress the neurophysiological response to self- versus externally generated stimuli (i.e. sensory attenuation). However, research is needed to explore theorized differences in the use of action-effect prediction based on whether movement is uncued (i.e. volitional) or in response to external cues (i.e. stimulus-driven). While much of the sensory attenuation literature has examined effects involving the auditory N1, evidence is also conflicting regarding this component's sensitivity to action-effect prediction. In this study (n = 64), we explored the influence of action-effect contingency on event-related potentials associated with visually cued and uncued movement, as well as the resultant stimuli. Our findings replicate recent evidence demonstrating reduced N1 amplitude for tones produced by stimulus-driven movement. Despite influencing motor preparation, action-effect contingency was not found to affect N1 amplitudes. Instead, we explore electrophysiological markers suggesting that attentional mechanisms may suppress the neurophysiological response to sounds produced by stimulus-driven movement. Our findings demonstrate lateralized parieto-occipital activity that coincides with the auditory N1, corresponds to a reduction in its amplitude, and is topographically consistent with documented effects of attentional suppression. These results provide new insights into sensorimotor coordination and potential mechanisms underlying sensory attenuation.
Affiliation(s)
- Anthony W Harrison
- School of Psychology, UNSW Sydney, Mathews Building, Library Walk, Kensington NSW 2052, Australia
- Gethin Hughes
- Department of Psychology, University of Essex, Wivenhoe Park, Colchester CO4 3SQ, United Kingdom
- Gabriella Rudman
- School of Psychology, UNSW Sydney, Mathews Building, Library Walk, Kensington NSW 2052, Australia
- Bruce K Christensen
- Research School of Psychology, Building 39, The Australian National University, Science Rd, Canberra ACT 2601, Australia
- Thomas J Whitford
- School of Psychology, UNSW Sydney, Mathews Building, Library Walk, Kensington NSW 2052, Australia
10. Bolt NK, Loehr JD. The auditory P2 differentiates self- from partner-produced sounds during joint action: Contributions of self-specific attenuation and temporal orienting of attention. Neuropsychologia 2023; 182:108526. [PMID: 36870472] [DOI: 10.1016/j.neuropsychologia.2023.108526]
Abstract
Sensory attenuation of the auditory P2 event-related potential (ERP) has been shown to differentiate the sensory consequences of one's own from others' action in joint action contexts. However, recent evidence suggests that when people coordinate joint actions over time, temporal orienting of attention might simultaneously contribute to enhancing the auditory P2. The current study employed a joint tapping task in which partners produced tone sequences together to examine whether temporal orienting influences auditory ERP amplitudes during the time window of self-other differentiation. Our findings demonstrate that the combined requirements of coordinating with a partner toward a joint goal and immediately adjusting to the partner's tone timing enhance P2 amplitudes elicited by the partner's tone onsets. Furthermore, our findings replicate prior evidence for self-specific sensory attenuation of the auditory P2 in joint action, and additionally demonstrate that it occurs regardless of the coordination requirements between partners. Together, these findings provide evidence that temporal orienting and sensory attenuation both modulate the auditory P2 during joint action and suggest that both processes play a role in facilitating precise interpersonal coordination between partners.
Affiliation(s)
- Nicole K Bolt
- Department of Psychology and Health Studies, University of Saskatchewan, 9 Campus Drive, Saskatoon, Saskatchewan, S7N 5A5, Canada.
- Janeen D Loehr
- Department of Psychology and Health Studies, University of Saskatchewan, 9 Campus Drive, Saskatoon, Saskatchewan, S7N 5A5, Canada.
11. Ody E, Straube B, He Y, Kircher T. Perception of self-generated and externally-generated visual stimuli: Evidence from EEG and behavior. Psychophysiology 2023:e14295. [PMID: 36966486] [DOI: 10.1111/psyp.14295]
Abstract
Efference copy-based forward model mechanisms may help us to distinguish between self-generated and externally generated sensory consequences. Previous studies have shown that self-initiation modulates neural and perceptual responses to identical stimulation. For example, event-related potentials (ERPs) elicited by tones that follow a button press are reduced in amplitude relative to ERPs elicited by passively attended tones. However, previous EEG studies investigating visual stimuli in this context are rare, provide inconclusive results, and lack adequate control conditions with passive movements. Furthermore, although self-initiation is known to modulate behavioral responses, it is not known whether differences in the amplitude of ERPs also reflect differences in the perception of sensory outcomes. In this study, we presented participants with visual stimuli (gray discs) following either active button presses or passive button presses in which an electromagnet moved the participant's finger. Two discs presented 500-1250 ms apart followed each button press, and participants judged which of the two was more intense. Early components of the primary visual response (N1 and P2) over the occipital electrodes were suppressed in the active condition. Interestingly, suppression in the intensity judgment task was only correlated with suppression of the visual P2 component. These data support the notion of efference copy-based forward model predictions in the visual sensory modality, but especially later processes (P2) seem to be perceptually relevant. Taken together, the results challenge the assumption that N1 differences reflect perceptual suppression and emphasize the relevance of the P2 ERP component.
Affiliation(s)
- Edward Ody, Benjamin Straube, Yifei He, Tilo Kircher: Department of Psychiatry and Psychotherapy, University of Marburg, Rudolf Bultmann-Strasse 8, 35039 Marburg, Germany
12
Font-Alaminos M, Paraskevoudi N, SanMiguel I. Actions do not clearly impact auditory memory. Front Hum Neurosci 2023; 17:1124784. PMID: 36923585; PMCID: PMC10009998; DOI: 10.3389/fnhum.2023.1124784.
Abstract
When memorizing a list of words, those read aloud are remembered better than those read silently, a phenomenon known as the production effect. There have been several attempts to explain the production effect; however, actions alone have not been examined as possible contributors. Stimuli that coincide with our own actions are processed differently from stimuli presented to us passively. These sensory response modulations may affect how action-related inputs are stored in memory. In this study, we investigated whether actions could impact auditory memory. Participants listened to sounds presented either during or in between their actions. We measured electrophysiological responses to the sounds and tested participants' memory of them. Results showed attenuated sensory responses for action-coinciding sounds, but no significant effect on memory performance. The absence of significant behavioral findings suggests that the production effect may not depend on the effects of actions per se. We conclude that action alone is not sufficient to improve memory performance and thus to elicit a production effect.
Affiliation(s)
- Marta Font-Alaminos, Nadia Paraskevoudi: Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain; Brainlab-Cognitive Neuroscience Research Group, Departament de Psicologia Clinica i Psicobiologia, Universitat de Barcelona, Barcelona, Spain
- Iria SanMiguel: Institut de Neurociències and Brainlab-Cognitive Neuroscience Research Group, Universitat de Barcelona, Barcelona, Spain; Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Spain
13
Chen L, Yang M, Gao F, Fang Z, Wang P, Feng L. Mandarin Chinese L1 and L2 complex sentence reading reveals a consistent electrophysiological pattern of highly interactive syntactic and semantic processing: An ERP study. Front Psychol 2023; 14:1143062. PMID: 37151349; PMCID: PMC10155869; DOI: 10.3389/fpsyg.2023.1143062.
Abstract
Introduction: A hallmark of the human language faculty is the processing of complex hierarchical syntactic structures across languages. However, for Mandarin Chinese, a language that relies largely on semantic combinations and is free of morphosyntactic marking, the relationship between syntactic and semantic processing during complex sentence reading is unclear. From the neuropsychological perspective of bilingual studies, it also remains to be determined whether second language (L2) learners can develop a consistent pattern of target-language comprehension regarding the interplay of syntactic and semantic processing, especially when their first language (L1) and L2 are typologically distinct. In this study, Chinese complex sentences with center-embedded relative clauses were generated. Using the high-time-resolution technique of event-related potentials (ERPs), we investigated the relationship between syntactic and semantic processing during Chinese complex sentence reading in Chinese L1 speakers and in highly proficient L2 learners from South Korea.
Methods: Normal, semantically violated (SEM), and double-violated (containing both semantic and syntactic violations; SEM + SYN) conditions were constructed with regard to the nonadjacent dependencies of the Chinese complex sentence, and participants judged whether the sentences they read were acceptable.
Results: Sentences with SEM + SYN violations did not elicit early left anterior negativity (ELAN), a component assumed to signal initial syntactic processing, but evoked larger components in the N400 and P600 windows than the SEM condition, a biphasic waveform pattern consistent across both groups and in line with previous studies using simpler Chinese syntactic structures. The only difference between the L1 and L2 groups was that L2 learners showed later latencies of the corresponding ERP components.
Discussion: Taken together, these results do not support the temporal and functional priority of syntactic processing identified in morphologically rich languages (e.g., German) and converge on the notion that, even in Chinese complex sentence reading, syntactic and semantic processing are highly interactive. This holds across L1 speakers and high-proficiency L2 learners with typologically different language backgrounds.
Affiliation(s)
- Luyao Chen (correspondence): Max Planck Partner Group, School of International Chinese Language Education, Beijing Normal University, Beijing, China; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Mingchuan Yang: Max Planck Partner Group, School of International Chinese Language Education, Beijing Normal University, Beijing, China
- Fei Gao: Institute of Modern Languages and Linguistics, Fudan University, Shanghai, China; Centre for Cognitive and Brain Sciences, University of Macau, Macao SAR, China
- Zhengyuan Fang: Max Planck Partner Group, School of International Chinese Language Education, Beijing Normal University, Beijing, China
- Peng Wang: Methods and Development Group (MEG and Cortical Networks), Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Institute of Psychology, University of Greifswald, Greifswald, Germany; Institute of Psychology, University of Regensburg, Regensburg, Germany
- Liping Feng (correspondence): Max Planck Partner Group, School of International Chinese Language Education, Beijing Normal University, Beijing, China
14
Loyola-Navarro R, Moënne-Loccoz C, Vergara RC, Hyafil A, Aboitiz F, Maldonado PE. Voluntary self-initiation of the stimuli onset improves working memory and accelerates visual and attentional processing. Heliyon 2022; 8:e12215. PMID: 36578387; PMCID: PMC9791366; DOI: 10.1016/j.heliyon.2022.e12215.
Abstract
The ability of an organism to voluntarily control stimulus onset modulates perceptual and attentional functions. Since stimulus encoding is an essential component of working memory (WM), we conjectured that controlling the initiation of the perceptual process would positively modulate WM. To test this proposition, we assessed twenty-five healthy subjects in a modified Sternberg WM task under three stimulus presentation conditions: automatic presentation of the stimuli, self-initiated presentation (through a button press), and self-initiated presentation with random-delay stimulus onset. Concurrently, we recorded the subjects' electroencephalographic signals during WM encoding. We found that the self-initiated condition was associated with better WM accuracy and earlier latencies of the N1, P2, and P3 evoked potential components, reflecting visual processing, attentional processing, and mental review of the stimuli, respectively. Our work demonstrates that self-initiated stimuli enhance WM performance and accelerate early visual and attentional processes deployed during WM encoding. We also found that self-initiated stimuli were associated with an increased attentional state compared to the other two conditions, suggesting a role for temporal stimulus predictability. Our study highlights the relevance of self-controlled stimulus onset for the sensory, attentional, and memory-updating processes underlying WM.
Affiliation(s)
- Rocio Loyola-Navarro: Departamento de Neurociencia, Universidad de Chile, Santiago, Chile; Biomedical Neuroscience Institute (BNI), Santiago, Chile; Departamento de Educación Diferencial, Universidad Metropolitana de Ciencias de la Educación, Santiago, Chile; Center for Advanced Research in Education, Institute of Education, Universidad de Chile, Santiago, Chile
- Cristóbal Moënne-Loccoz: Departamento de Ciencias de la Salud, Pontificia Universidad Católica de Chile, Santiago, Chile; Centro Nacional de Inteligencia Artificial (CENIA), Santiago, Chile
- Rodrigo C. Vergara: Departamento de Kinesiología, Universidad Metropolitana de Ciencias de la Educación, Santiago, Chile; Centro Nacional de Inteligencia Artificial (CENIA), Santiago, Chile; Centro de Investigación en Educación, Universidad Metropolitana de Ciencias de la Educación (CIE-UMCE), Santiago, Chile
- Francisco Aboitiz: Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
- Pedro E. Maldonado (corresponding author): Departamento de Neurociencia, Universidad de Chile, Santiago, Chile; Biomedical Neuroscience Institute (BNI), Santiago, Chile; Centro Nacional de Inteligencia Artificial (CENIA), Santiago, Chile
15
Karanikolaou M, Limanowski J, Northoff G. Does temporal irregularity drive prediction failure in schizophrenia? Temporal modelling of ERPs. Schizophrenia 2022; 8:23. PMID: 35301329; PMCID: PMC8931057; DOI: 10.1038/s41537-022-00239-7.
Abstract
Schizophrenia subjects often fail to properly predict incoming inputs; most notably, some patients exhibit impaired prediction of the sensory consequences of their own actions. The mechanisms underlying this deficit remain unclear, though. One possible mechanism is aberrant predictive processing, as schizophrenia patients show less attenuated neuronal activity to self-produced tones than healthy controls. Here, we tested the hypothesis that this aberrant predictive mechanism manifests in the temporal irregularity of neuronal signals. For that purpose, we introduce an event-related potential (ERP) model analysis consisting of an EEG real-time model equation, eeg(t), and a Laplace-transformed frequency-domain transfer function (TF) equation, eeg(s). Combining circuit analysis with control and cable theory, we estimate temporal model representations of auditory ERPs to reveal neural mechanisms that make predictions about self-generated sensations. We use data from 49 schizophrenia patients (SZ) and 32 healthy control (HC) subjects in an auditory 'prediction' paradigm, in which subjects either pressed a button to deliver a tone (epoch a) or just heard the tone without a button press (epoch b). Our results show significantly higher temporal irregularity, or imprecision, between trials of the ERP from the Cz electrode (N100, P200) in SZ compared to HC (Levene's test, p < 0.0001), as indexed by altered latency, lower similarity/correlation of single-trial time courses (using dynamic time warping), and longer settling times to reach steady state in the intertrial interval. Using machine learning, SZ vs. HC could be classified with high accuracy (92%) based on the temporal parameters of their ERPs' TF models, using the poles of the TF rational functions as features. Together, our findings show that temporal irregularity or imprecision between single trials is abnormally increased in SZ. This may indicate a general impairment in SZ related to precisely predicting the sensory consequences of one's actions.
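One of the single-trial similarity measures named above, dynamic time warping (DTW), compares two time courses while tolerating latency shifts. A generic textbook implementation for illustration, not the authors' code; the synthetic "trials" and all parameters are invented:

```python
import numpy as np

def dtw_distance(x, y):
    """Plain O(n*m) dynamic time warping distance between two 1-D
    time courses (e.g., single-trial ERP waveforms)."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            # extend the cheapest of the three allowed path steps
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Two synthetic "trials": the same 5-Hz waveform, one shifted by 50 ms
t = np.linspace(0.0, 1.0, 200)
trial_a = np.sin(2 * np.pi * 5 * t)
trial_b = np.sin(2 * np.pi * 5 * (t - 0.05))

d_warp = dtw_distance(trial_a, trial_b)
d_rigid = np.abs(trial_a - trial_b).sum()  # no-warping (diagonal) path cost
print(d_warp <= d_rigid)  # True: warping can only reduce the alignment cost
```

Higher DTW distances across trials would indicate the kind of trial-to-trial temporal imprecision the study reports for the SZ group.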
16
Lubinus C, Einhäuser W, Schiller F, Kircher T, Straube B, van Kemenade BM. Action-based predictions affect visual perception, neural processing, and pupil size, regardless of temporal predictability. Neuroimage 2022; 263:119601. PMID: 36064139; DOI: 10.1016/j.neuroimage.2022.119601.
Abstract
Sensory consequences of one's own action are often perceived as less intense, and lead to reduced neural responses, compared to externally generated stimuli. Presumably, such sensory attenuation is due to predictive mechanisms based on the motor command (efference copy). However, sensory attenuation has also been observed outside the context of voluntary action, namely when stimuli are temporally predictable. Here, we aimed at disentangling the effects of motor and temporal predictability-based mechanisms on the attenuation of sensory action consequences. During fMRI data acquisition, participants (N = 25) judged which of two visual stimuli was brighter. In predictable blocks, the stimuli appeared temporally aligned with their button press (active) or aligned with an automatically generated cue (passive). In unpredictable blocks, stimuli were presented with a variable delay after button press/cue, respectively. Eye tracking was performed to investigate pupil-size changes and to ensure proper fixation. Self-generated stimuli were perceived as darker and led to less neural activation in visual areas than their passive counterparts, indicating sensory attenuation for self-generated stimuli independent of temporal predictability. Pupil size was larger during self-generated stimuli, which correlated negatively with the blood oxygenation level dependent (BOLD) response: the larger the pupil, the smaller the BOLD amplitude in visual areas. Our results suggest that sensory attenuation in visual cortex is driven by action-based predictive mechanisms rather than by temporal predictability. This effect may be related to changes in pupil diameter. Altogether, these results emphasize the role of the efference copy in the processing of sensory action consequences.
Affiliation(s)
- Christina Lubinus: Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Grüneburgweg 14, D-60322 Frankfurt am Main, Germany; Department of Psychiatry and Psychotherapy and Center for Mind, Brain and Behavior (CMBB), University of Marburg, Rudolf-Bultmann-Str. 8, D-35039 Marburg, Germany
- Wolfgang Einhäuser: Institute of Physics, Physics of Cognition Group, Chemnitz University of Technology, D-09107 Chemnitz, Germany
- Florian Schiller: Department of Psychology, Justus Liebig University Giessen, Otto-Behaghel-Str. 10, D-35394 Giessen, Germany
- Tilo Kircher, Benjamin Straube: Department of Psychiatry and Psychotherapy and Center for Mind, Brain and Behavior (CMBB), University of Marburg, Rudolf-Bultmann-Str. 8, D-35039 Marburg, Germany
- Bianca M. van Kemenade: Department of Psychiatry and Psychotherapy and Center for Mind, Brain and Behavior (CMBB), University of Marburg, Rudolf-Bultmann-Str. 8, D-35039 Marburg, Germany; Center for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
17
Paraskevoudi N, SanMiguel I. Sensory suppression and increased neuromodulation during actions disrupt memory encoding of unpredictable self-initiated stimuli. Psychophysiology 2022; 60:e14156. PMID: 35918912; PMCID: PMC10078310; DOI: 10.1111/psyp.14156.
Abstract
Actions modulate sensory processing by attenuating responses to self-generated compared to externally generated inputs, which is traditionally attributed to stimulus-specific motor predictions. Yet suppression has also been found for stimuli merely coinciding with actions, pointing to unspecific processes that may be driven by neuromodulatory systems. Meanwhile, the differential processing of self-generated stimuli raises the possibility of effects on memory for these stimuli; however, evidence remains mixed as to the direction of these effects. Here, we assessed the effects of actions on sensory processing and memory encoding of concomitant but unpredictable sounds, combining a self-generation task and a memory recognition task with concurrent EEG and pupil recordings. At encoding, subjects performed button presses that half of the time generated a sound (motor-auditory; MA) and also listened to passively presented sounds (auditory-only; A). At retrieval, two sounds were presented and participants indicated which one had been presented before. We measured memory bias and memory performance using sequences in which both, or only one, of the test sounds had been presented at encoding, respectively. Results showed worse memory performance (but no difference in memory bias), attenuated responses, and larger pupil diameter for MA compared to A sounds. Critically, the larger the sensory attenuation and pupil diameter, the worse the memory performance for MA sounds. Nevertheless, sensory attenuation did not correlate with pupil dilation. Collectively, our findings suggest that sensory attenuation and neuromodulatory processes coexist during actions, and both relate to disrupted memory for concurrent, albeit unpredictable, sounds.
Affiliation(s)
- Nadia Paraskevoudi: Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain; Brainlab-Cognitive Neuroscience Research Group, Departament de Psicologia Clinica i Psicobiologia, University of Barcelona, Barcelona, Spain
- Iria SanMiguel: Institut de Neurociències and Brainlab-Cognitive Neuroscience Research Group, Universitat de Barcelona, Barcelona, Spain; Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Spain
18
Turning a blind eye to motor differences leads to bias in estimating action-related auditory ERP attenuation. Biol Psychol 2022; 173:108387. PMID: 35843416; DOI: 10.1016/j.biopsycho.2022.108387.
Abstract
Event-related potential (ERP) studies investigating the processing of self-induced stimuli often rely on the assumption that ballistic actions and motor ERPs are constant across different sets of action effects. Since recent studies challenge this motor equivalence assumption, we examined whether neglecting effect-related motor differences can bias the estimation of auditory ERPs in a typical action-related ERP attenuation paradigm. We increased action variability with a force production task and selected an event subset in which the motor equivalence assumption was true. ERP attenuation estimated in this subset was compared with attenuation obtained in the standard task, where motor differences were not controlled. Violation of the motor equivalence assumption resulted in a positive deflection overlapping auditory ERPs elicited by self-induced sounds, resulting in the overestimation of N1- and underestimation of P2-attenuation. This demonstrates that sensory-effect-related motor differences should be considered when separating sensory and motor components in ERPs elicited by self-induced stimuli.
19
Chung WY, Darriba ÁL, Korka B, Widmann A, Schröger E, Waszak F. Action effect predictions in 'what', 'when', and 'whether' intentional actions. Brain Res 2022; 1791:147992. PMID: 35753390; DOI: 10.1016/j.brainres.2022.147992.
Abstract
It has been proposed that intentional action can be separated into three major types depending on the nature of the action choice - what (selecting what to do), when (selecting when to act) and whether (to perform the action or not). While many theories on action control assume that intentional action involves the prediction of action effects, there has not been any attempt to compare the three types of intentional actions (what, when, whether) with respect to action-effect prediction. Here, we employ an action-effect prediction paradigm where participants select the action on every trial based on either the what (choosing between alternative actions), when (choosing to respond at different time points) or whether (choosing to perform an action or not) action components, and each action choice is followed by either a predicted (standard) or a mispredicted (deviant) tone. We found a significant P2 difference between standard/deviant tones reflecting the formation of action-effect predictions regardless of whether the action choice was based on the 'what', 'when' or 'whether' decision. Furthermore, our analysis revealed that this P2 difference for the prediction effect was not observable in non-action trials within the 'whether' condition, which suggests an action-specific prediction process.
Affiliation(s)
- Wai Ying Chung, Álvaro Darriba, Florian Waszak: Université Paris Cité, CNRS, Integrative Neuroscience and Cognition Center, F-75006 Paris, France
- Andreas Widmann: University of Leipzig, Leipzig, Germany; Leibniz Institute for Neurobiology, Magdeburg, Germany
20
Han N, Jack BN, Hughes G, Whitford TJ. The Role of Action-Effect Contingency on Sensory Attenuation in the Absence of Movement. J Cogn Neurosci 2022; 34:1488-1499. PMID: 35579993; DOI: 10.1162/jocn_a_01867.
Abstract
Stimuli generated by a person's own willed motor actions generally elicit a smaller electrophysiological, as well as phenomenological, response than identical stimuli that have been externally generated. This well-studied phenomenon, known as sensory attenuation, has mostly been examined by comparing ERPs evoked by self-initiated and externally generated sounds. However, most studies have assumed a uniform action-effect contingency, in which a motor action leads to a resulting sensation 100% of the time. In this study, we investigated the effect of manipulating the probability of action-effect contingencies on sensory attenuation. In Experiment 1, participants watched a moving, marked tickertape while EEG was recorded. In the full-contingency (FC) condition, participants chose whether to press a button by a certain mark on the tickertape; if no button press had occurred by the mark, a sound was played one second later 100% of the time, and if the button was pressed before the mark, the sound was not played. In the no-contingency (NC) condition, participants observed the same tickertape, but if they did not press the button by the mark, a sound occurred only 50% of the time (NC-inaction), and if they pressed the button before the mark, a sound likewise played 50% of the time (NC-action). In Experiment 2, the design was identical except that a willed action (as opposed to a willed inaction) triggered the sound in the FC condition. The results were consistent across the two experiments: although there were no differences in N1 amplitude between conditions, the amplitudes of the Tb and P2 components were smaller in the FC condition than in the NC-inaction condition, and the amplitude of the P2 component was also smaller in the FC condition than in the NC-action condition. These results suggest that the effect of contingency on electrophysiological indices of sensory attenuation may be indexed primarily by the Tb and P2 components, rather than by the more commonly studied N1 component.
21
Bečev O, Kozáková E, Sakálošová L, Mareček R, Majchrowicz B, Roman R, Brázdil M. Actions of a Shaken Heart: Interoception Interacts with Action Processing. Biol Psychol 2022; 169:108288. PMID: 35143921; DOI: 10.1016/j.biopsycho.2022.108288.
Abstract
In the present study, we investigated the modulatory influence of unconscious bodily arousal on motor-related embodied information. Specifically, we examined how interoceptive prediction error interacts with event-related potentials linked to action-effect processing. Participants performed a task with self-initiated or externally triggered sounds while receiving synchronous or false auditory cardiac feedback. The results showed that the interaction of the interoceptive manipulation with action-effect processing modulates the frontal subcomponent of the P3 response: during synchronous cardiac feedback, the P3 response to self-initiated tones was enhanced, whereas during false cardiac feedback, the frontal cortical response was reversed. The N1 and P2 components were affected by the interoceptive manipulation, but not by the interaction of interoception and action processing. These findings provide experimental support for theoretical accounts of the interaction between interoception and action processing within a predictive coding framework, manifested particularly in the higher stages of action processing.
Affiliation(s)
- Ondřej Bečev: Brain and Mind Research, CEITEC-Central European Institute of Technology, Masaryk University, Kamenice 753/5, 625 00 Brno, Czech Republic; First Department of Neurology, Faculty of Medicine, Masaryk University and St. Anne's University Hospital, Pekařská 664/53, 656 91 Brno, Czech Republic; Department of Applied Neuroscience and Neuroimaging, National Institute of Mental Health, Topolová 748, 250 67 Klecany, Czech Republic
- Eva Kozáková: Brain and Mind Research, CEITEC, Masaryk University, Brno, Czech Republic; Department of Applied Neuroscience and Neuroimaging, National Institute of Mental Health, Klecany, Czech Republic
- Lenka Sakálošová: Brain and Mind Research, CEITEC, Masaryk University, Brno, Czech Republic; First Department of Neurology, Faculty of Medicine, Masaryk University and St. Anne's University Hospital, Brno, Czech Republic
- Radek Mareček, Robert Roman: Brain and Mind Research, CEITEC, Masaryk University, Brno, Czech Republic
- Bartosz Majchrowicz: Consciousness Lab, Institute of Psychology, Jagiellonian University, Ingardena 6, 30-060 Kraków, Poland
- Milan Brázdil: Brain and Mind Research, CEITEC, Masaryk University, Brno, Czech Republic; First Department of Neurology, Faculty of Medicine, Masaryk University and St. Anne's University Hospital, Brno, Czech Republic
22
Hölle D, Blum S, Kissner S, Debener S, Bleichner MG. Real-Time Audio Processing of Real-Life Soundscapes for EEG Analysis: ERPs Based on Natural Sound Onsets. Front Neuroergon 2022; 3:793061. PMID: 38235458; PMCID: PMC10790832; DOI: 10.3389/fnrgo.2022.793061.
Abstract
With smartphone-based mobile electroencephalography (EEG), we can investigate sound perception beyond the lab. To understand sound perception in the real world, we need to relate naturally occurring sounds to EEG data. For this, EEG and audio information must be synchronized precisely; only then is it possible to capture fast, transient evoked neural responses and relate them to individual sounds. We have developed Android applications (AFEx and Record-a) that allow the concurrent acquisition of EEG data and audio features, i.e., sound onsets, average signal power (RMS), and power spectral density (PSD), on a smartphone. In this paper, we evaluate these apps by computing event-related potentials (ERPs) evoked by everyday sounds. One participant listened to piano notes (played live by a pianist) and to a home-office soundscape. Timing tests showed a stable lag and a small jitter (< 3 ms), indicating high temporal precision of the system. We calculated ERPs to sound onsets and observed the typical P1-N1-P2 complex of auditory processing. Furthermore, we show how to relate information on loudness (RMS) and spectra (PSD) to brain activity. In future studies, this system can be used to study sound processing in everyday life.
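The two audio features named above, RMS and PSD, are standard computations. A minimal sketch on a synthetic buffer (the sampling rate, tone, and window length are arbitrary assumptions, not the settings used by AFEx/Record-a):

```python
import numpy as np
from scipy.signal import welch

# Illustrative sketch: compute RMS (a loudness proxy) and the Welch
# power spectral density for one second of synthetic audio.
fs = 16_000
t = np.arange(fs) / fs                        # 1 s of samples
audio = 0.1 * np.sin(2 * np.pi * 440 * t)     # a quiet 440-Hz tone

rms = np.sqrt(np.mean(audio ** 2))            # average signal power (RMS)
freqs, psd = welch(audio, fs=fs, nperseg=1024)  # power spectral density
peak_freq = freqs[np.argmax(psd)]             # dominant frequency bin
print(f"RMS = {rms:.4f}, spectral peak near {peak_freq:.0f} Hz")
```

For a pure sine of amplitude A, RMS is A/sqrt(2), and the PSD peak falls in the frequency bin nearest the tone, which is the kind of loudness and spectral information the apps log alongside the EEG.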
Affiliation(s)
- Daniel Hölle, Martin G. Bleichner: Neurophysiology of Everyday Life Group, Department of Psychology, University of Oldenburg, Oldenburg, Germany
- Sarah Blum: Neuropsychology Lab, Department of Psychology, University of Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4all, Oldenburg, Germany
- Sven Kissner: Institute for Hearing Technology and Audiology, Jade University of Applied Sciences, Oldenburg, Germany
- Stefan Debener: Neuropsychology Lab, Department of Psychology, University of Oldenburg, Oldenburg, Germany
Collapse
|
23
|
Jack BN, Chilver MR, Vickery RM, Birznieks I, Krstanoska-Blazeska K, Whitford TJ, Griffiths O. Movement Planning Determines Sensory Suppression: An Event-related Potential Study. J Cogn Neurosci 2021; 33:2427-2439. [PMID: 34424986] [DOI: 10.1162/jocn_a_01747]
Abstract
Sensory suppression refers to the phenomenon that sensory input generated by our own actions, such as moving a finger to press a button to hear a tone, elicits smaller neural responses than sensory input generated by external agents. This observation is usually explained via the internal forward model in which an efference copy of the motor command is used to compute a corollary discharge, which acts to suppress sensory input. However, because moving a finger to press a button is accompanied by neural processes involved in preparing and performing the action, it is unclear whether sensory suppression is the result of movement planning, movement execution, or both. To investigate this, in two experiments, we compared ERPs to self-generated tones that were produced by voluntary, semivoluntary, or involuntary button-presses, with externally generated tones that were produced by a computer. In Experiment 1, the semivoluntary and involuntary button-presses were initiated by the participant or experimenter, respectively, by electrically stimulating the median nerve in the participant's forearm, and in Experiment 2, by applying manual force to the participant's finger. We found that tones produced by voluntary button-presses elicited a smaller N1 component of the ERP than externally generated tones. This is known as N1-suppression. However, tones produced by semivoluntary and involuntary button-presses did not yield significant N1-suppression. We also found that the magnitude of N1-suppression linearly decreased across the voluntary, semivoluntary, and involuntary conditions. These results suggest that movement planning is a necessary condition for producing sensory suppression. We conclude that the most parsimonious account of sensory suppression is the internal forward model.
Affiliation(s)
- Bradley N Jack
- University of New South Wales, Sydney, Australia; Australian National University, Canberra, Australia
- Miranda R Chilver
- University of New South Wales, Sydney, Australia; Neuroscience Research Australia, Sydney, Australia
- Richard M Vickery
- University of New South Wales, Sydney, Australia; Neuroscience Research Australia, Sydney, Australia
- Ingvars Birznieks
- University of New South Wales, Sydney, Australia; Neuroscience Research Australia, Sydney, Australia
- Oren Griffiths
- University of New South Wales, Sydney, Australia; Flinders University, Adelaide, Australia

24
Sugimoto F, Kimura M, Takeda Y. Attenuation of auditory N2 for self-modulated tones during continuous actions. Biol Psychol 2021; 166:108201. [PMID: 34653547] [DOI: 10.1016/j.biopsycho.2021.108201]
Abstract
Event-related potentials (ERPs) elicited by tones generated by one's own discrete actions (e.g., button presses) are attenuated compared to those elicited by tones generated externally. The present study investigated whether ERP attenuation would occur when the timing or pitch of tones is modulated by continuous actions, for which a weak association between actions and their auditory consequences is assumed. In a modulation condition, participants modulated the time interval between tones (Experiment 1) or the pitch of tones (Experiment 2) by turning a steering wheel. In a listening condition, participants listened to the same tones as in the modulation condition without any action. The results revealed that the amplitude of the N2 elicited by tones decreased in the modulation condition compared to the listening condition, consistently across the two experiments, suggesting that relatively higher-order auditory processing is mainly influenced by the prediction of action consequences when continuous actions modulate features of auditory stimuli.
Affiliation(s)
- Fumie Sugimoto
- Human-Centered Mobility Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Japan
- Motohiro Kimura
- Human-Centered Mobility Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Japan
- Yuji Takeda
- Human-Centered Mobility Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Japan

25
Darriba Á, Hsu YF, Van Ommen S, Waszak F. Intention-based and sensory-based predictions. Sci Rep 2021; 11:19899. [PMID: 34615990] [PMCID: PMC8494815] [DOI: 10.1038/s41598-021-99445-z]
Abstract
We inhabit a continuously changing world, where the ability to anticipate future states of the environment is critical for adaptation. Anticipation can be achieved by learning about the causal or temporal relationships between sensory events, as well as by learning to act on the environment to produce an intended effect. Together, sensory-based and intention-based predictions provide the flexibility needed to successfully adapt. Yet it is currently unknown whether the two sources of information are processed independently to form separate predictions, or are combined into a common prediction. To investigate this, we ran an experiment in which the final tone of two possible four-tone sequences could be predicted from the preceding tones in the sequence and/or from the participants' intention to trigger that final tone. This tone could be congruent with both sensory-based and intention-based predictions, incongruent with both, or congruent with one while incongruent with the other. Trials in which the two predictions were incongruent with each other yielded similar prediction error responses irrespective of which prediction was violated, indicating that both predictions were formulated and coexisted simultaneously. The violation of intention-based predictions yielded additional late error responses, suggesting that these violations underwent further differential processing that violations of sensory-based predictions did not receive.
Affiliation(s)
- Álvaro Darriba
- Université de Paris, INCC UMR 8002, CNRS, F-75006, Paris, France
- Yi-Fang Hsu
- Department of Educational Psychology and Counselling, National Taiwan Normal University, 10610, Taipei, Taiwan
- Institute for Research Excellence in Learning Sciences, National Taiwan Normal University, 10610, Taipei, Taiwan
- Sandrien Van Ommen
- Department of Basic Neurosciences, University of Geneva, Biotech Campus, Geneva, Switzerland
- Florian Waszak
- Université de Paris, INCC UMR 8002, CNRS, F-75006, Paris, France

26
The auditory brain in action: Intention determines predictive processing in the auditory system-A review of current paradigms and findings. Psychon Bull Rev 2021; 29:321-342. [PMID: 34505988] [PMCID: PMC9038838] [DOI: 10.3758/s13423-021-01992-z]
Abstract
According to the ideomotor theory, action may serve to produce desired sensory outcomes. Perception has been widely described in terms of sensory predictions arising from top-down input from higher-order cortical areas. Here, we demonstrate that action intention results in reliable top-down predictions that modulate auditory brain responses. We bring together several lines of research, including sensory attenuation, active oddball, and action-related omission studies. Together, the results suggest that intention-based predictions modulate several steps in the sound-processing hierarchy, from preattentive to evaluation-related processes, even when controlling for additional prediction sources (i.e., sound regularity). We propose an integrative theoretical framework, the extended auditory event representation system (AERS), a model compatible with the ideomotor theory, the theory of event coding, and predictive coding. Although the AERS was initially introduced to describe regularity-based auditory predictions, we argue that the extended AERS explains the effects of action intention on auditory processing while additionally allowing the study of differences and commonalities between intention- and regularity-based predictions; we thus believe that this framework could guide future research on action and perception.
27
Paraskevoudi N, SanMiguel I. Self-generation and sound intensity interactively modulate perceptual bias, but not perceptual sensitivity. Sci Rep 2021; 11:17103. [PMID: 34429453] [PMCID: PMC8385100] [DOI: 10.1038/s41598-021-96346-z]
Abstract
The ability to distinguish self-generated stimuli from those caused by external sources is critical for all behaving organisms. Although many studies point to a sensory attenuation of self-generated stimuli, recent evidence suggests that motor actions can result in either attenuated or enhanced perceptual processing depending on the environmental context (i.e., stimulus intensity). The present study employed 2-AFC sound detection and loudness discrimination tasks to test whether sound source (self- or externally-generated) and stimulus intensity (supra- or near-threshold) interactively modulate detection ability and loudness perception. Self-generation did not affect detection and discrimination sensitivity (i.e., detection thresholds and Just Noticeable Difference, respectively). However, in the discrimination task, we observed a significant interaction between self-generation and intensity on perceptual bias (i.e., Point of Subjective Equality). Supra-threshold self-generated sounds were perceived as softer than externally-generated ones, while at near-threshold intensities self-generated sounds were perceived as louder than externally-generated ones. Our findings provide empirical support for recent theories on how predictions and signal intensity modulate perceptual processing, pointing to interactive effects of intensity and self-generation that seem to be driven by a biased estimate of perceived loudness rather than by changes in detection and discrimination sensitivity.
Affiliation(s)
- Nadia Paraskevoudi
- Brainlab-Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, P. Vall d'Hebron 171, 08035, Barcelona, Spain
- Institute of Neurosciences, University of Barcelona, Barcelona, Spain
- Iria SanMiguel
- Brainlab-Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, P. Vall d'Hebron 171, 08035, Barcelona, Spain
- Institute of Neurosciences, University of Barcelona, Barcelona, Spain
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Spain

28
Bolt NK, Loehr JD. Sensory Attenuation of the Auditory P2 Differentiates Self- from Partner-Produced Sounds during Joint Action. J Cogn Neurosci 2021; 33:2297-2310. [PMID: 34272962] [DOI: 10.1162/jocn_a_01760]
Abstract
Successful human interaction relies on people's ability to differentiate between the sensory consequences of their own and others' actions. Research in solo action contexts has identified sensory attenuation, that is, the selective perceptual or neural dampening of the sensory consequences of self-produced actions, as a potential marker of the distinction between self- and externally produced sensory consequences. However, very little research has examined whether sensory attenuation distinguishes self- from partner-produced sensory consequences in joint action contexts. The current study examined whether sensory attenuation of the auditory N1 or P2 ERPs distinguishes self- from partner-produced tones when pairs of people coordinate their actions to produce tone sequences that match a metronome pace. We did not find evidence of auditory N1 attenuation for either self- or partner-produced tones. Instead, the auditory P2 was attenuated for self-produced tones compared to partner-produced tones within the joint action. These findings indicate that self-specific attenuation of the auditory P2 differentiates the sensory consequences of one's own from others' actions during joint action. These findings also corroborate recent evidence that N1 attenuation may be driven by general rather than action-specific processes and support a recently proposed functional dissociation between auditory N1 and P2 attenuation.
29
Neszmélyi B, Horváth J. Processing and utilization of auditory action effects in individual and social tasks. Acta Psychol (Amst) 2021; 217:103326. [PMID: 33989835] [DOI: 10.1016/j.actpsy.2021.103326]
Abstract
The influence of action-effect integration on motor control and sensory processing is often investigated in arrangements featuring human-machine interactions. Such experiments focus on predictable sensory events produced through participants' interactions with simple response devices. Action-effect integration may, however, also occur when we interact with human partners. The current study examined the similarities and differences in perceptual and motor control processes related to generating sounds with or without the involvement of a human partner. We manipulated the complexity of the causal chain of events between the initial motor and the final sensory event. In the self-induced condition participants generated sounds directly by pressing a button, while in the interactive condition sounds resulted from a paired reaction-time task, that is, the final sound was generated indirectly, by relying on the contribution of the partner. Auditory event-related potentials (ERPs) and force application patterns were similar in the two conditions, suggesting that social action effects produced with the involvement of a second human agent in the causal sequence are processed, and utilized as action feedback in the same way as direct consequences of one's actions. The only reflection of a processing difference between the two conditions was a slow, posterior ERP waveform that started before the presentation of the auditory stimulus, which may reflect differences in stimulus expectancy or task difficulty.
30
Neural correlates of implicit agency during the transition from adolescence to adulthood: An ERP study. Neuropsychologia 2021; 158:107908. [PMID: 34062152] [DOI: 10.1016/j.neuropsychologia.2021.107908]
Abstract
Sense of agency (SoA), the experience of being in control of our voluntary actions and their outcomes, is a key feature of normal human experience. Frontoparietal brain circuits associated with SoA undergo a major maturational process during adolescence. To examine whether this translates to neurodevelopmental changes in agency experience, we investigated two key neural processes associated with SoA, the activity that is leading to voluntary action (Readiness Potential) and the activity that is associated with the action outcome processing (attenuation of auditory N1 and P2 event related potentials, ERPs) in mid-adolescents (13-14), late-adolescents (18-20) and adults (25-28) while they perform an intentional binding task. In this task, participants pressed a button (action) that delivered a tone (outcome) after a small delay and reported the time of the tone using the Libet clock. This action-outcome condition alternated with a no-action condition where an identical tone was triggered by a computer. Mid-adolescents showed greater outcome binding, such that they perceived self-triggered tones as being temporally closer to their actions compared to adults, suggesting a greater experience of agency over the outcomes of their voluntary actions during mid-adolescence. Consistent with this, greater levels of attenuated neural response to self-triggered auditory tones (specifically P2 attenuation) were found during mid-adolescence compared to older age groups. This enhanced attenuation decreased with age as observed in outcome binding. However, there were no age-related differences in the readiness potential leading to the voluntary action (button press) as well as in the N1 attenuation to the self-triggered tones. Notably, in mid-adolescents greater outcome binding scores were positively associated with greater P2 attenuation, and smaller negativity in the late readiness potential.
These findings suggest that the greater experience of implicit agency observed during mid-adolescence may be mediated by a neural over-suppression of action outcomes (auditory P2 attenuation), and over-reliance on motor preparation (late readiness potential), which we found to become adult-like during late-adolescence. Implications for adolescent development and SoA related neurodevelopmental disorders are discussed.
31
Effector-independent brain network for auditory-motor integration: fMRI evidence from singing and cello playing. Neuroimage 2021; 237:118128. [PMID: 33989814] [DOI: 10.1016/j.neuroimage.2021.118128]
Abstract
Many everyday tasks share high-level sensory goals but differ in the movements used to accomplish them. One example of this is musical pitch regulation, where the same notes can be produced using the vocal system or a musical instrument controlled by the hands. Cello playing has previously been shown to rely on brain structures within the singing network for performance of single notes, except in areas related to primary motor control, suggesting that the brain networks for auditory feedback processing and sensorimotor integration may be shared (Segado et al. 2018). However, research has shown that singers and cellists alike can continue singing/playing in tune even in the absence of auditory feedback (Chen et al. 2013, Kleber et al. 2013), so different paradigms are required to test feedback monitoring and control mechanisms. In singing, auditory pitch feedback perturbation paradigms have been used to show that singers engage a network of brain regions including anterior cingulate cortex (ACC), anterior insula (aINS), and intraparietal sulcus (IPS) when compensating for altered pitch feedback, and posterior superior temporal gyrus (pSTG) and supramarginal gyrus (SMG) when ignoring it (Zarate et al. 2005, 2008). To determine whether the brain networks for cello playing and singing directly overlap in these sensory-motor integration areas, in the present study expert cellists were asked to compensate for or ignore introduced pitch perturbations when singing/playing during fMRI scanning. We found that cellists were able to sing/play target tones, and compensate for and ignore introduced feedback perturbations equally well. Brain activity overlapped for singing and playing in IPS and SMG when compensating, and pSTG and dPMC when ignoring; differences between singing/playing across all three conditions were most prominent in M1, centered on the relevant motor effectors (hand, larynx). 
These findings support the hypothesis that pitch regulation during cello playing relies on structures within the singing network and suggests that differences arise primarily at the level of forward motor control.
32
Han N, Jack BN, Hughes G, Elijah RB, Whitford TJ. Sensory attenuation in the absence of movement: Differentiating motor action from sense of agency. Cortex 2021; 141:436-448. [PMID: 34146742] [DOI: 10.1016/j.cortex.2021.04.010]
Abstract
Sensory attenuation is the phenomenon that stimuli generated by willed motor actions elicit a smaller neurophysiological response than those generated by external sources. It has mostly been investigated in the auditory domain, by comparing ERPs evoked by self-initiated (active condition) and externally-generated (passive condition) sounds. The mechanistic basis of sensory attenuation has been argued to involve a duplicate of the motor command being used to predict sensory consequences of self-generated movements. An alternative possibility is that the effect is driven by between-condition differences in participants' sense of agency over the sound. In this paper, we disambiguated the effects of motor-action and sense of agency on sensory attenuation with a novel experimental paradigm. In Experiment 1, participants watched a moving, marked tickertape while EEG was recorded. In the active condition, participants chose whether to press a button by a certain mark on the tickertape. If a button-press had not occurred by the mark, then a tone would be played 1 s later. If the button was pressed prior to the mark, the tone was not played. In the passive condition, participants passively watched the animation, and were informed about whether a tone would be played on each trial. The design for Experiment 2 was identical, except that the contingencies were reversed (i.e., a button-press by the mark led to a tone). The results were consistent across the two experiments: while there were no differences in N1 amplitude between the active and passive conditions, the amplitude of the Tb component was suppressed in the active condition. The amplitude of the P2 component was enhanced in the active condition in both Experiments 1 and 2. These results suggest that motor-actions and sense of agency have differential effects on sensory attenuation to sounds and are indexed with different ERP components.
Affiliation(s)
- Nathan Han
- School of Psychology, The University of New South Wales (UNSW Sydney), Sydney, Australia
- Bradley N Jack
- Research School of Psychology, Australian National University, Canberra, Australia
- Gethin Hughes
- Department of Psychology, University of Essex, Colchester, UK
- Ruth B Elijah
- School of Psychology, The University of New South Wales (UNSW Sydney), Sydney, Australia
- Thomas J Whitford
- School of Psychology, The University of New South Wales (UNSW Sydney), Sydney, Australia

33
Ford JM, Roach BJ, Mathalon DH. Vocalizing and singing reveal complex patterns of corollary discharge function in schizophrenia. Int J Psychophysiol 2021; 164:30-40. [PMID: 33621618] [DOI: 10.1016/j.ijpsycho.2021.02.013]
Abstract
INTRODUCTION
As we vocalize, our brains generate predictions of the sounds we produce to enable suppression of neural responses when intentions match vocalizations and to make adjustments when they do not. This may be instantiated by efference copy and corollary discharge mechanisms, which are impaired in people with schizophrenia (SZ). Although innate, these mechanisms can be affected by intentions. We asked if attending to pitch during vocalizations would take these mechanisms "off-line" and reduce suppression.
METHODS
Event-related potentials (ERP) were recorded from 96 SZ and 92 healthy controls (HC) as they vocalized triplets in monotone (Phrase) or sang triplets in ascending thirds (Pitch). Pre-vocalization activity (Bereitschaftspotential, BP), N1, and P2 ERP components to sounds were compared during vocalization and playback.
RESULTS
N1 was not as suppressed during Pitch as during Phrase. N1 suppression was not affected by SZ in either task when all data were collapsed across pitches (Pitch) and positions (Phrase). However, when binned according to vocalization performance, SZ showed less N1 suppression than HC at longer (>2 s) inter-stimulus intervals (Phrase) and inconsistent suppression across pitches (Pitch). Unlike N1, P2 was more suppressed during Pitch than Phrase and not affected by SZ. BP was greater during vocalization than playback but did not contribute to N1 or P2 effects. Pitch variability was inversely related to negative symptoms.
CONCLUSIONS
Neural processing is not suppressed when patients and controls sing, and corollary discharge abnormalities in schizophrenia are only seen at long vocalization intervals.
Affiliation(s)
- Judith M Ford
- University of California, San Francisco (UCSF), United States of America; Veterans Affairs San Francisco Healthcare System, United States of America
- Brian J Roach
- Veterans Affairs San Francisco Healthcare System, United States of America
- Daniel H Mathalon
- University of California, San Francisco (UCSF), United States of America; Veterans Affairs San Francisco Healthcare System, United States of America

34
Neszmélyi B, Horváth J. Action-related auditory ERP attenuation is not modulated by action effect relevance. Biol Psychol 2021; 161:108029. [PMID: 33556451] [DOI: 10.1016/j.biopsycho.2021.108029]
Abstract
Event-related potentials (ERPs) elicited by self-induced sounds are often smaller than ERPs elicited by identical, but externally generated sounds. This action-related auditory ERP attenuation is more pronounced when self-induced sounds are intermixed with similar sounds generated by an external source. The current study explored whether attentional factors contributed to this phenomenon. Participants performed tone-eliciting actions, while the action-tone contingency and the set of additional action effects (tactile only, tactile and visual) were manipulated in a blocked manner. Previously reported action-tone contingency effects were replicated, but the addition of other sensory action consequences did not influence the magnitude of auditory ERP attenuation. This suggests that the amount of attention allocated to concurrent non-auditory action effects does not substantially affect the magnitude of action-related auditory ERP attenuation, and is consistent with the assumption that action-related auditory ERP attenuation might be related to the process of distinguishing self-induced stimuli from externally generated ones.
Affiliation(s)
- Bence Neszmélyi
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary; Budapest University of Technology and Economics, Budapest, Hungary; Pázmány Péter Catholic University, Budapest, Hungary
- János Horváth
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary; Károli Gáspár University of the Reformed Church in Hungary, Hungary

35
van Laarhoven T, Stekelenburg JJ, Vroomen J. Suppression of the auditory N1 by visual anticipatory motion is modulated by temporal and identity predictability. Psychophysiology 2020; 58:e13749. [PMID: 33355930] [PMCID: PMC7900976] [DOI: 10.1111/psyp.13749]
Abstract
The amplitude of the auditory N1 component of the event-related potential (ERP) is typically suppressed when a sound is accompanied by visual anticipatory information that reliably predicts the timing and identity of the sound. While this visually induced suppression of the auditory N1 is considered an early electrophysiological marker of fulfilled prediction, it is not yet fully understood whether this internal predictive coding mechanism is primarily driven by the temporal characteristics, or by the identity features of the anticipated sound. The current study examined the impact of temporal and identity predictability on suppression of the auditory N1 by visual anticipatory motion with an ecologically valid audiovisual event (a video of a handclap). Predictability of auditory timing and identity was manipulated in three different conditions in which sounds were either played in isolation, or in conjunction with a video that either reliably predicted the timing of the sound, the identity of the sound, or both the timing and identity. The results showed that N1 suppression was largest when the video reliably predicted both the timing and identity of the sound, and reduced when either the timing or identity of the sound was unpredictable. The current results indicate that predictions of timing and identity are both essential elements for predictive coding in audition.
Affiliation(s)
- Thijs van Laarhoven
- Department of Cognitive Neuropsychology, Tilburg University, Tilburg, The Netherlands
- Jeroen J Stekelenburg
- Department of Cognitive Neuropsychology, Tilburg University, Tilburg, The Netherlands
- Jean Vroomen
- Department of Cognitive Neuropsychology, Tilburg University, Tilburg, The Netherlands

36
Mathias B, Zamm A, Gianferrara PG, Ross B, Palmer C. Rhythm Complexity Modulates Behavioral and Neural Dynamics During Auditory–Motor Synchronization. J Cogn Neurosci 2020; 32:1864-1880. [DOI: 10.1162/jocn_a_01601]
Abstract
We addressed how rhythm complexity influences auditory–motor synchronization in musically trained individuals who perceived and produced complex rhythms while EEG was recorded. Participants first listened to two-part auditory sequences (Listen condition). Each part featured a single pitch presented at a fixed rate; the integer ratio formed between the two rates varied in rhythmic complexity from low (1:1) to moderate (1:2) to high (3:2). One of the two parts occurred at a constant rate across conditions. Then, participants heard the same rhythms as they synchronized their tapping at a fixed rate (Synchronize condition). Finally, they tapped at the same fixed rate (Motor condition). Auditory feedback from their taps was present in all conditions. Behavioral effects of rhythmic complexity were evidenced in all tasks; detection of missing beats (Listen) worsened in the most complex (3:2) rhythm condition, and tap durations (Synchronize) were most variable and least synchronous with stimulus onsets in the 3:2 condition. EEG power spectral density was lowest at the fixed rate during the 3:2 rhythm and greatest during the 1:1 rhythm (Listen and Synchronize). ERP amplitudes corresponding to an N1 time window were smallest for the 3:2 rhythm and greatest for the 1:1 rhythm (Listen). Finally, synchronization accuracy (Synchronize) decreased as amplitudes in the N1 time window became more positive during the high rhythmic complexity condition (3:2). Thus, measures of neural entrainment corresponded to synchronization accuracy, and rhythmic complexity modulated the behavioral and neural measures similarly.
Affiliation(s)
- Brian Mathias
- McGill University
- Max Planck Institute for Human Cognitive and Brain Sciences
- Anna Zamm
- McGill University
- Central European University, Budapest, Hungary
|
37
|
Pinheiro AP, Schwartze M, Amorim M, Coentre R, Levy P, Kotz SA. Changes in motor preparation affect the sensory consequences of voice production in voice hearers. Neuropsychologia 2020; 146:107531. [PMID: 32553846 DOI: 10.1016/j.neuropsychologia.2020.107531]
Abstract
BACKGROUND Auditory verbal hallucinations (AVH) are a cardinal symptom of psychosis but are also present in 6-13% of the general population. Alterations in sensory feedback processing are a likely cause of AVH, indicative of changes in the forward model. However, it is unknown whether such alterations are related to anomalies in forming an efference copy during action preparation, selective for voices, and similar along the psychosis continuum. By directly comparing psychotic and nonclinical voice hearers (NCVH), the current study specifies whether and how AVH proneness modulates both the efference copy (Readiness Potential) and sensory feedback processing for voices and tones (N1, P2) with event-related brain potentials (ERPs). METHODS Controls with low AVH proneness (n = 15), NCVH (n = 16) and first-episode psychotic patients with AVH (n = 16) engaged in a button-press task with two types of stimuli: self-initiated and externally generated self-voices or tones during EEG recordings. RESULTS Groups differed in sensory feedback processing of expected and actual feedback: NCVH displayed an atypically enhanced N1 to self-initiated voices, while N1 suppression was reduced in psychotic patients. P2 suppression for voices and tones was strongest in NCVH, but absent for voices in patients. Motor activity preceding the button press was reduced in NCVH and patients, specifically for sensory feedback to self-voice in NCVH. CONCLUSIONS These findings suggest that selective changes in sensory feedback to voice are core to AVH. These changes already show in preparatory motor activity, potentially reflecting changes in forming an efference copy. The results provide partial support for continuum models of psychosis.
Affiliation(s)
- Ana P Pinheiro
- Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Michael Schwartze
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Maria Amorim
- Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Ricardo Coentre
- Serviço de Psiquiatria e Saúde Mental, Centro Hospitalar Universitário Lisboa Norte EPE, Lisboa, Portugal; Faculdade de Medicina, Universidade de Lisboa, Lisboa, Portugal
- Pedro Levy
- Serviço de Psiquiatria e Saúde Mental, Centro Hospitalar Universitário Lisboa Norte EPE, Lisboa, Portugal
- Sonja A Kotz
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
|
38
|
Hsu YF, Xu W, Parviainen T, Hämäläinen JA. Context-dependent minimisation of prediction errors involves temporal-frontal activation. Neuroimage 2020; 207:116355. [DOI: 10.1016/j.neuroimage.2019.116355]
|
39
|
Silva DMR, Rothe-Neves R, Melges DB. Long-latency event-related responses to vowels: N1-P2 decomposition by two-step principal component analysis. Int J Psychophysiol 2019; 148:93-102. [PMID: 31863852 DOI: 10.1016/j.ijpsycho.2019.11.010]
Abstract
The N1-P2 complex of the auditory event-related potential (ERP) has been used to examine neural activity associated with speech sound perception. Since it is thought to reflect multiple generator processes, its functional significance is difficult to infer. In the present study, a temporospatial principal component analysis (PCA) was used to decompose the N1-P2 response into latent factors underlying covariance patterns in ERP data recorded during passive listening to pairs of successive vowels. In each trial, one of six sounds drawn from an /i/-/e/ vowel continuum was followed either by an identical sound, a different token of the same vowel category, or a token from the other category. Responses were examined as to how they were modulated by within- and across-category vowel differences and by adaptation (repetition suppression) effects. Five PCA factors were identified as corresponding to three well-known N1 subcomponents and two P2 subcomponents. Results added evidence that the N1 peak reflects both generators that are sensitive to spectral information and generators that are not. For later latency ranges, different patterns of sensitivity to vowel quality were found, including category-related effects. Particularly, a subcomponent identified as the Tb wave showed release from adaptation in response to an /i/ followed by an /e/ sound. A P2 subcomponent varied linearly with spectral shape along the vowel continuum, while the other was stronger the closer the vowel was to the category boundary, suggesting separate processing of continuous and category-related information. Thus, the PCA-based decomposition of the N1-P2 complex was functionally meaningful, revealing distinct underlying processes at work during speech sound perception.
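A two-step temporospatial PCA of the kind described here (a temporal PCA over time points, then a spatial PCA over channels within a temporal factor) can be sketched with scikit-learn. Caveats: published ERP pipelines typically add Promax rotation and operate on covariance matrices, which this unrotated sketch omits; it runs on random data, and all dimensions are made up, so it only shows the reshaping logic.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_trials, n_chan, n_time = 120, 32, 150          # hypothetical dimensions
erp = rng.standard_normal((n_trials, n_chan, n_time))

# Step 1 (temporal): observations are trial x channel waveforms, variables are time points
X = erp.reshape(n_trials * n_chan, n_time)
temporal = PCA(n_components=5)
t_scores = temporal.fit_transform(X)             # (n_trials * n_chan, 5) factor scores

# Step 2 (spatial): within one temporal factor, variables are channels
factor0 = t_scores[:, 0].reshape(n_trials, n_chan)
spatial = PCA(n_components=3)
s_scores = spatial.fit_transform(factor0)        # (n_trials, 3) per-trial spatial factor scores
```

Each resulting temporospatial factor pairs a time course (`temporal.components_`) with a scalp distribution (`spatial.components_`), which is what lets subcomponents such as Tb be separated from the overlapping N1 peak.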
Affiliation(s)
- Daniel M R Silva
- Phonetics Lab, Faculty of Letters, Federal University of Minas Gerais, Belo Horizonte, Brazil
- Rui Rothe-Neves
- Phonetics Lab, Faculty of Letters, Federal University of Minas Gerais, Belo Horizonte, Brazil
- Danilo B Melges
- Graduate Program in Electrical Engineering, Department of Electrical Engineering, Federal University of Minas Gerais
|
40
|
Korka B, Schröger E, Widmann A. Action Intention-based and Stimulus Regularity-based Predictions: Same or Different? J Cogn Neurosci 2019; 31:1917-1932. [PMID: 31393234 DOI: 10.1162/jocn_a_01456]
Abstract
We act on the environment to produce desired effects, but we also adapt to environmental demands by learning what to expect next, based on experience: How do action-based predictions and sensory predictions relate to each other? We explore this by implementing a self-generation oddball paradigm, in which participants performed random sequences of left and right button presses to produce frequent standard and rare deviant tones. By manipulating the action–tone association as well as the likelihood of one button press over the other, we compare ERP effects evoked by the intention to produce a specific tone, by tone regularity, and by both intention and regularity. We show that the N1b and Tb components of the N1 response are modulated by violations of tone regularity only. However, violations of action intention as well as of regularity elicit MMN responses, which occur similarly in all three conditions. Regardless of whether the predictions at sensory levels were based on intention, regularity, or both, tone deviance was further and equally well detected at a hierarchically higher processing level, as reflected in similar P3a effects between conditions. We did not observe additive prediction errors when intention and regularity were violated concurrently, suggesting that the two integrate despite presumably having independent generators. Even though these two prediction sources are often discussed individually in the literature, this study is, to our knowledge, the first to directly compare them. Finally, these results show how, in the context of action, our brain can easily switch between top–down intention-based expectations and bottom–up regularity cues to efficiently predict future events.
Affiliation(s)
- Andreas Widmann
- University of Leipzig
- Leibniz Institute for Neurobiology, Magdeburg, Germany
|
41
|
Modality-specific sensory readiness for upcoming events revealed by slow cortical potentials. Brain Struct Funct 2019; 225:149-159. [DOI: 10.1007/s00429-019-01993-8]
|
42
|
Pinheiro AP, Schwartze M, Gutierrez F, Kotz SA. When temporal prediction errs: ERP responses to delayed action-feedback onset. Neuropsychologia 2019; 134:107200. [DOI: 10.1016/j.neuropsychologia.2019.107200]
|
43
|
Jack BN, Le Pelley ME, Han N, Harris AW, Spencer KM, Whitford TJ. Inner speech is accompanied by a temporally-precise and content-specific corollary discharge. Neuroimage 2019; 198:170-180. [DOI: 10.1016/j.neuroimage.2019.04.038]
|
44
|
van Laarhoven T, Stekelenburg JJ, Eussen MLJM, Vroomen J. Electrophysiological alterations in motor-auditory predictive coding in autism spectrum disorder. Autism Res 2019; 12:589-599. [PMID: 30801964 PMCID: PMC6593426 DOI: 10.1002/aur.2087]
Abstract
The amplitude of the auditory N1 component of the event-related potential (ERP) is typically attenuated for self-initiated sounds, compared to sounds with identical acoustic and temporal features that are triggered externally. This effect has been ascribed to internal forward models predicting the sensory consequences of one's own motor actions. The predictive coding account of autistic symptomatology states that individuals with autism spectrum disorder (ASD) have difficulties anticipating upcoming sensory stimulation due to a decreased ability to infer the probabilistic structure of their environment. Without precise internal forward prediction models to rely on, perception in ASD could be less affected by prior expectations and more driven by sensory input. Following this reasoning, one would expect diminished attenuation of the auditory N1 due to self-initiation in individuals with ASD. Here, we tested this hypothesis by comparing the neural response to self- versus externally-initiated tones between a group of individuals with ASD and a group of age-matched neurotypical controls. ERPs evoked by tones initiated via button-presses were compared with ERPs evoked by the same tones replayed at identical pace. Significant N1 attenuation effects were found only in the neurotypical control group. Self-initiation of the tones did not attenuate the auditory N1 in the ASD group, indicating that they may be unable to anticipate the auditory sensory consequences of their own motor actions. These results show that individuals with ASD have alterations in sensory attenuation of self-initiated sounds, and support the notion of impaired predictive coding as a core deficit underlying autistic symptomatology.
Lay Summary Many individuals with ASD experience difficulties in processing sensory information (for example, increased sensitivity to sound). Here we show that these difficulties may be related to an inability to anticipate upcoming sensory stimulation. Our findings contribute to a better understanding of the neural mechanisms underlying the different sensory perception experienced by individuals with ASD.
Affiliation(s)
- Thijs van Laarhoven
- Department of Cognitive Neuropsychology, Tilburg University, 5000 LE Tilburg, The Netherlands
- Jeroen J Stekelenburg
- Department of Cognitive Neuropsychology, Tilburg University, 5000 LE Tilburg, The Netherlands
- Mart L J M Eussen
- Department of Child and Adolescent Psychiatry, Yulius Mental Health Organization, Dordrecht, The Netherlands; Department of Autism, Yulius Mental Health Organization, Dordrecht, The Netherlands
- Jean Vroomen
- Department of Cognitive Neuropsychology, Tilburg University, 5000 LE Tilburg, The Netherlands
|
45
|
Hsu YF, Waszak F, Hämäläinen JA. Prior Precision Modulates the Minimization of Auditory Prediction Error. Front Hum Neurosci 2019; 13:30. [PMID: 30828293 PMCID: PMC6385564 DOI: 10.3389/fnhum.2019.00030]
Abstract
The predictive coding model of perception proposes that successful representation of the perceptual world depends upon canceling out the discrepancy between prediction and sensory input (i.e., prediction error). Recent studies further suggest a distinction to be made between prediction error triggered by non-predicted stimuli of different prior precision (i.e., inverse variance). However, it is not fully understood how prediction error with different precision levels is minimized in the predictive process. Here, we conducted a magnetoencephalography (MEG) experiment which orthogonally manipulated prime-probe relation (for contextual precision) and stimulus repetition (for perceptual learning which decreases prediction error). We presented participants with cycles of tone quartets which consisted of three prime tones and one probe tone of randomly selected frequencies. Within each cycle, the three prime tones remained identical while the probe tones changed once at some point (e.g., from repetition of 123X to repetition of 123Y). Therefore, the repetition of probe tones can reveal the development of perceptual inferences in low and high precision contexts depending on their position within the cycle. We found that the two conditions resemble each other in terms of N1m modulation (as both were associated with N1m suppression) but differ in terms of N2m modulation. While repeated probe tones in low precision context did not exhibit any modulatory effect, repeated probe tones in high precision context elicited a suppression and rebound of the N2m source power. The differentiation suggested that the minimization of prediction error in low and high precision contexts likely involves distinct mechanisms.
Affiliation(s)
- Yi-Fang Hsu
- Department of Educational Psychology and Counselling, National Taiwan Normal University, Taipei, Taiwan; Institute for Research Excellence in Learning Sciences, National Taiwan Normal University, Taipei, Taiwan
- Florian Waszak
- Université Paris Descartes, Sorbonne Paris Cité, Paris, France; CNRS, Laboratoire Psychologie de la Perception, UMR 8242, Paris, France
- Jarmo A Hämäläinen
- Jyväskylä Centre for Interdisciplinary Brain Research, Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
|
46
|
Dogge M, Hofman D, Custers R, Aarts H. Exploring the role of motor and non-motor predictive mechanisms in sensory attenuation: Perceptual and neurophysiological findings. Neuropsychologia 2019; 124:216-225. [DOI: 10.1016/j.neuropsychologia.2018.12.007]
|
47
|
Mathias B, Gehring WJ, Palmer C. Electrical Brain Responses Reveal Sequential Constraints on Planning during Music Performance. Brain Sci 2019; 9:E25. [PMID: 30696038 PMCID: PMC6406892 DOI: 10.3390/brainsci9020025]
Abstract
Elements in speech and music unfold sequentially over time. To produce sentences and melodies quickly and accurately, individuals must plan upcoming sequence events, as well as monitor outcomes via auditory feedback. We investigated the neural correlates of sequential planning and monitoring processes by manipulating auditory feedback during music performance. Pianists performed isochronous melodies from memory at an initially cued rate while their electroencephalogram was recorded. Pitch feedback was occasionally altered to match either an immediately upcoming Near-Future pitch (next sequence event) or a more distant Far-Future pitch (two events ahead of the current event). Near-Future, but not Far-Future altered feedback perturbed the timing of pianists' performances, suggesting greater interference of Near-Future sequential events with current planning processes. Near-Future feedback triggered a greater reduction in auditory sensory suppression (enhanced response) than Far-Future feedback, reflected in the P2 component elicited by the pitch event following the unexpected pitch change. Greater timing perturbations were associated with enhanced cortical sensory processing of the pitch event following the Near-Future altered feedback. Both types of feedback alterations elicited feedback-related negativity (FRN) and P3a potentials and amplified spectral power in the theta frequency range. These findings suggest similar constraints on producers' sequential planning to those reported in speech production.
Affiliation(s)
- Brian Mathias
- Department of Psychology, McGill University, Montreal, QC H3A 1B1, Canada
- Research Group Neural Mechanisms of Human Communication, Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany
- William J Gehring
- Department of Psychology, University of Michigan, Ann Arbor, MI 48109, USA
- Caroline Palmer
- Department of Psychology, McGill University, Montreal, QC H3A 1B1, Canada
|
48
|
Motor output, neural states and auditory perception. Neurosci Biobehav Rev 2019; 96:116-126. [DOI: 10.1016/j.neubiorev.2018.10.021]
|
49
|
Osumi T, Tsuji K, Shibata M, Umeda S. Machiavellianism and early neural responses to others' facial expressions caused by one's own decisions. Psychiatry Res 2019; 271:669-677. [PMID: 30791340 DOI: 10.1016/j.psychres.2018.12.037]
Abstract
The processing of social stimuli generated by one's own voluntary behavior is an element of social adaptation. It is known that self-generated stimuli induce attenuated sensory experiences compared with externally generated stimuli. The present study aimed to examine this self-specific attenuation effect on early stimulus processing in the case of others' facial expressions during interpersonal interactions. In addition, this study explored the possibility that the self-specific attenuation effect on social cognition is modulated by antisocial personality traits such as Machiavellianism. We analyzed early components of the event-related brain potential in participants elicited by happy and sad facial expressions of others when the participant's decision was responsible for the others' emotions and when the others' facial expressions were independent of the participant's decision. Compared to the non-responsible condition, the responsible condition showed an attenuated amplitude of the N170 component in response to sad faces. Moreover, Machiavellianism explained individual differences in the self-specific attenuation effect depending on the affective valence of social signals. The present findings support the possibility that the self-specific attenuation effect extends to interpersonal interactions and imply that distorted cognition of others' emotions caused by one's own behavior is associated with personality disorders that promote antisocial behaviors.
Affiliation(s)
- Takahiro Osumi
- Japan Society for the Promotion of Science (JSPS), Tokyo, Japan; Department of Psychology, Keio University, Tokyo, Japan
- Koki Tsuji
- Graduate School of Human Relations, Keio University, Tokyo, Japan; Japan Society for the Promotion of Science (JSPS), Tokyo, Japan
- Midori Shibata
- Keio Advanced Research Center, Keio University, Tokyo, Japan
- Satoshi Umeda
- Department of Psychology, Keio University, Tokyo, Japan; Keio Advanced Research Center, Keio University, Tokyo, Japan
|
50
|
Vercillo T, O'Neil S, Jiang F. Action-effect contingency modulates the readiness potential. Neuroimage 2018; 183:273-279. [PMID: 30114465 PMCID: PMC6450698 DOI: 10.1016/j.neuroimage.2018.08.028]
Abstract
The ability to constantly anticipate events in the world is critical to human survival. It has been suggested that predictive processing originates from the motor system and that incoming sensory inputs can be altered to facilitate sensorimotor integration. In the current study, we investigated the role of the readiness potential, i.e., the premotor brain activity registered over fronto-parietal areas, in sensorimotor integration. We recorded EEG data during three conditions: a motor condition in which a simple action was required, a visual condition in which a visual stimulus was presented on the screen, and a visuomotor condition in which the visual stimulus appeared in response to a button press. We measured evoked potentials before the motor action and/or after the appearance of the visual stimulus. Anticipating visual feedback in response to a voluntary action modulated the amplitude of the readiness potential. We also found an enhancement in the amplitude of the visual N1 and a reduction in the amplitude of the visual P2 when the visual stimulus was induced by the action rather than externally generated. Our results suggest that premotor brain activity might reflect predictive processes in sensory-motor binding and that the readiness potential may represent a neural marker of these predictive mechanisms.
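The N1/P2 comparisons such studies report boil down to trial-averaged amplitudes in fixed post-stimulus windows, compared between self-initiated and externally generated conditions. A minimal sketch on synthetic single-channel data follows; the window bounds, sampling rate, and the direction of the simulated effect are illustrative assumptions, not the study's values.

```python
import numpy as np

def mean_window_amp(epochs, fs, win_ms):
    """Mean amplitude over trials in a post-stimulus window.
    epochs: (n_trials, n_samples) array, stimulus onset at sample 0."""
    a = int(win_ms[0] * fs / 1000)
    b = int(win_ms[1] * fs / 1000)
    return float(epochs[:, a:b].mean())

fs = 250
t = np.arange(100) / fs                                 # 0-400 ms epoch
n1 = -2.0 * np.exp(-0.5 * ((t - 0.1) / 0.02) ** 2)      # N1-like deflection near 100 ms
rng = np.random.default_rng(2)
self_trials = 1.3 * n1 + 0.3 * rng.standard_normal((80, 100))   # simulated enhanced N1
extern_trials = n1 + 0.3 * rng.standard_normal((80, 100))       # externally generated baseline

n1_self = mean_window_amp(self_trials, fs, (80, 120))
n1_ext = mean_window_amp(extern_trials, fs, (80, 120))
# A more negative mean in the N1 window indicates a larger N1 response
```

In a real analysis the same window-averaging would be applied per participant and submitted to a group-level statistical test rather than compared directly.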
Affiliation(s)
- Tiziana Vercillo
- Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY, 14642, USA
- Sean O'Neil
- Department of Psychology, University of Nevada, Reno, USA
- Fang Jiang
- Department of Psychology, University of Nevada, Reno, USA
|