51. Kiepe F, Kraus N, Hesselmann G. Sensory Attenuation in the Auditory Modality as a Window Into Predictive Processing. Front Hum Neurosci 2021; 15:704668. PMID: 34803629; PMCID: PMC8602204; DOI: 10.3389/fnhum.2021.704668.
Abstract
Self-generated auditory input is perceived as less loud than the same sounds generated externally. This phenomenon, called sensory attenuation (SA), has been studied for decades and is often explained by motor-based forward models. Recent developments in SA research, however, challenge these models. We review the current state of knowledge regarding the theoretical significance of SA and its role in human behavior and functioning. Focusing on behavioral and electrophysiological results in the auditory domain, we provide an overview of the characteristics and limitations of existing SA paradigms and highlight the problem of isolating SA from other predictive mechanisms. Finally, we explore different hypotheses that attempt to explain the heterogeneous empirical findings, and the impact of the predictive coding framework on this research area.
Affiliation(s)
- Fabian Kiepe: Psychologische Hochschule Berlin (PHB), Berlin Psychological University, Berlin, Germany
- Nils Kraus: Psychologische Hochschule Berlin (PHB), Berlin Psychological University, Berlin, Germany
- Guido Hesselmann: Psychologische Hochschule Berlin (PHB), Berlin Psychological University, Berlin, Germany
52. Tarasi L, Trajkovic J, Diciotti S, di Pellegrino G, Ferri F, Ursino M, Romei V. Predictive waves in the autism-schizophrenia continuum: A novel biobehavioral model. Neurosci Biobehav Rev 2021; 132:1-22. PMID: 34774901; DOI: 10.1016/j.neubiorev.2021.11.006.
Abstract
The brain is a predictive machine. Converging data suggest diametrically opposed predictive strategies in autism spectrum disorders (ASD) and schizophrenia spectrum disorders (SSD). Whereas perceptual inference in ASD is rigidly shaped by incoming sensory information, the SSD population is prone to overestimating the precision of its priors. Growing evidence treats brain oscillations as pivotal biomarkers for understanding how top-down predictions integrate bottom-up input. Starting from the conceptualization of ASD and SSD as oscillopathies, we introduce an integrated perspective that ascribes the maladjustments of the predictive mechanism to dysregulation of neural synchronization. According to this proposal, disturbances in the oscillatory profile prevent the appropriate trade-off between descending predictive signals, overweighted in SSD, and ascending prediction errors, overweighted in ASD. These opposing imbalances both result in an ill-adapted reaction to external challenges. This approach offers a neuro-computational model capable of linking predictive coding theories with electrophysiological findings, aiming to increase knowledge of the neuronal foundations of the features of the two spectra and to stimulate hypothesis-driven rehabilitation and research perspectives.
Affiliation(s)
- Luca Tarasi: Centro Studi e Ricerche in Neuroscienze Cognitive, Dipartimento di Psicologia, Alma Mater Studiorum - Università di Bologna, Campus di Cesena, 47521 Cesena, Italy
- Jelena Trajkovic: Centro Studi e Ricerche in Neuroscienze Cognitive, Dipartimento di Psicologia, Alma Mater Studiorum - Università di Bologna, Campus di Cesena, 47521 Cesena, Italy
- Stefano Diciotti: Department of Electrical, Electronic, and Information Engineering "Guglielmo Marconi", University of Bologna, Cesena, Italy; Alma Mater Research Institute for Human-Centered Artificial Intelligence, University of Bologna, Bologna, Italy
- Giuseppe di Pellegrino: Centro Studi e Ricerche in Neuroscienze Cognitive, Dipartimento di Psicologia, Alma Mater Studiorum - Università di Bologna, Campus di Cesena, 47521 Cesena, Italy
- Francesca Ferri: Department of Neuroscience, Imaging and Clinical Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
- Mauro Ursino: Department of Electrical, Electronic, and Information Engineering "Guglielmo Marconi", University of Bologna, Cesena, Italy
- Vincenzo Romei: Centro Studi e Ricerche in Neuroscienze Cognitive, Dipartimento di Psicologia, Alma Mater Studiorum - Università di Bologna, Campus di Cesena, 47521 Cesena, Italy; IRCCS Fondazione Santa Lucia, 00179 Rome, Italy
53. Jack BN, Chilver MR, Vickery RM, Birznieks I, Krstanoska-Blazeska K, Whitford TJ, Griffiths O. Movement Planning Determines Sensory Suppression: An Event-related Potential Study. J Cogn Neurosci 2021; 33:2427-2439. PMID: 34424986; DOI: 10.1162/jocn_a_01747.
Abstract
Sensory suppression refers to the phenomenon that sensory input generated by our own actions, such as moving a finger to press a button to hear a tone, elicits smaller neural responses than sensory input generated by external agents. This observation is usually explained via the internal forward model, in which an efference copy of the motor command is used to compute a corollary discharge, which acts to suppress the sensory input. However, because moving a finger to press a button is accompanied by neural processes involved in preparing and performing the action, it is unclear whether sensory suppression is the result of movement planning, movement execution, or both. To investigate this, in two experiments we compared ERPs elicited by self-generated tones produced by voluntary, semivoluntary, or involuntary button-presses with ERPs elicited by externally generated tones produced by a computer. The semivoluntary and involuntary button-presses were initiated by the participant or the experimenter, respectively, by electrically stimulating the median nerve in the participant's forearm (Experiment 1) or by applying manual force to the participant's finger (Experiment 2). We found that tones produced by voluntary button-presses elicited a smaller N1 component of the ERP than externally generated tones; this is known as N1-suppression. However, tones produced by semivoluntary and involuntary button-presses did not yield significant N1-suppression. We also found that the magnitude of N1-suppression decreased linearly across the voluntary, semivoluntary, and involuntary conditions. These results suggest that movement planning is a necessary condition for producing sensory suppression. We conclude that the most parsimonious account of sensory suppression is the internal forward model.
Affiliation(s)
- Bradley N Jack: University of New South Wales, Sydney, Australia; Australian National University, Canberra, Australia
- Miranda R Chilver: University of New South Wales, Sydney, Australia; Neuroscience Research Australia, Sydney, Australia
- Richard M Vickery: University of New South Wales, Sydney, Australia; Neuroscience Research Australia, Sydney, Australia
- Ingvars Birznieks: University of New South Wales, Sydney, Australia; Neuroscience Research Australia, Sydney, Australia
- Oren Griffiths: University of New South Wales, Sydney, Australia; Flinders University, Adelaide, Australia
54. SanMiguel I, Costa-Faidella J, Lugo ZR, Vilella E, Escera C. Standard Tone Stability as a Manipulation of Precision in the Oddball Paradigm: Modulation of Prediction Error Responses to Fixed-Probability Deviants. Front Hum Neurosci 2021; 15:734200. PMID: 34650417; PMCID: PMC8505747; DOI: 10.3389/fnhum.2021.734200.
Abstract
Electrophysiological sensory deviance detection signals, such as the mismatch negativity (MMN), have been interpreted within the predictive coding framework as manifestations of prediction error (PE). From a frequentist perspective on the classic oddball paradigm, deviant stimuli are unexpected because of their low probability. However, the amount of PE elicited by a stimulus can be dissociated from its probability of occurrence: when the observer cannot make confident predictions, any event holds little surprise value, no matter how improbable. Here we tested the hypothesis that the magnitude of the neural response elicited by an improbable sound (D) scales with the precision of the prediction derived from the repetition of another sound (S), by manipulating repetition stability. We recorded the electroencephalogram (EEG) from 20 participants while they passively listened to four types of isochronous pure-tone sequences that differed in the probability of the S tone (880 Hz) while holding constant the probability of the D tone [1,046 Hz; p(D) = 1/11]: Oddball [p(S) = 10/11]; High confidence [p(S) = 7/11]; Low confidence [p(S) = 4/11]; and Random [p(S) = 1/11]. Tones of nine different frequencies were presented equiprobably as fillers [p(S) + p(D) + p(F) = 1]. Using a mass-univariate, non-parametric, cluster-based correlation analysis controlling for multiple comparisons, we found that the amplitude of the deviant-elicited ERP became more negative with increasing S probability, in a time-electrode window consistent with the MMN (ca. 120-200 ms; frontal), suggesting that the strength of the PE elicited by an improbable event indeed increases with the precision of the predictive model.
Affiliation(s)
- Iria SanMiguel: Brainlab-Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Barcelona, Spain; Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Spain
- Jordi Costa-Faidella: Brainlab-Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Barcelona, Spain; Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Spain
- Zulay R Lugo: Hospital Universitari Institut Pere Mata, Universitat Rovira i Virgili (URV), Institut d'Investigació Sanitària Pere Virgili (IISPV), Reus, Spain
- Elisabet Vilella: Hospital Universitari Institut Pere Mata, Universitat Rovira i Virgili (URV), Institut d'Investigació Sanitària Pere Virgili (IISPV), Reus, Spain; Centro de Investigación Biomédica en Red en Salud Mental (CIBERSAM), Madrid, Spain
- Carles Escera: Brainlab-Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Barcelona, Spain; Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Spain
55. Sugimoto F, Kimura M, Takeda Y. Attenuation of auditory N2 for self-modulated tones during continuous actions. Biol Psychol 2021; 166:108201. PMID: 34653547; DOI: 10.1016/j.biopsycho.2021.108201.
Abstract
Event-related potentials (ERPs) elicited by tones generated by one's own discrete actions (e.g., button presses) are attenuated compared to those elicited by externally generated tones. The present study investigated whether ERP attenuation also occurs when the timing or pitch of tones is modulated by continuous actions, for which only a weak association between actions and their auditory consequences is assumed. In a modulation condition, participants modulated the time interval between tones (Experiment 1) or the pitch of tones (Experiment 2) by turning a steering wheel. In a listening condition, participants listened to the same tones as in the modulation condition without performing any action. The results revealed that the amplitude of the tone-elicited N2 was smaller in the modulation condition than in the listening condition, consistently across the two experiments, suggesting that when continuous actions modulate features of auditory stimuli, the prediction of action consequences mainly influences relatively high-order auditory processing.
Affiliation(s)
- Fumie Sugimoto: Human-Centered Mobility Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Japan
- Motohiro Kimura: Human-Centered Mobility Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Japan
- Yuji Takeda: Human-Centered Mobility Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Japan
56. Darriba Á, Hsu YF, Van Ommen S, Waszak F. Intention-based and sensory-based predictions. Sci Rep 2021; 11:19899. PMID: 34615990; PMCID: PMC8494815; DOI: 10.1038/s41598-021-99445-z.
Abstract
We inhabit a continuously changing world, where the ability to anticipate future states of the environment is critical for adaptation. Anticipation can be achieved by learning about the causal or temporal relationships between sensory events, as well as by learning to act on the environment to produce an intended effect. Together, sensory-based and intention-based predictions provide the flexibility needed to adapt successfully. Yet it is currently unknown whether the two sources of information are processed independently to form separate predictions or are combined into a common prediction. To investigate this, we ran an experiment in which the final tone of two possible four-tone sequences could be predicted from the preceding tones in the sequence and/or from the participants' intention to trigger that final tone. This tone could be congruent with both sensory-based and intention-based predictions, incongruent with both, or congruent with one while incongruent with the other. Trials in which the two predictions were incongruent with each other yielded similar prediction error responses irrespective of which prediction was violated, indicating that both predictions were formulated and coexisted simultaneously. Violations of intention-based predictions additionally yielded late error responses, suggesting that these violations underwent further differential processing that violations of sensory-based predictions did not receive.
Affiliation(s)
- Álvaro Darriba: Université de Paris, INCC UMR 8002, CNRS, F-75006, Paris, France
- Yi-Fang Hsu: Department of Educational Psychology and Counselling, National Taiwan Normal University, 10610, Taipei, Taiwan; Institute for Research Excellence in Learning Sciences, National Taiwan Normal University, 10610, Taipei, Taiwan
- Sandrien Van Ommen: Department of Basic Neurosciences, University of Geneva, Biotech Campus, Geneva, Switzerland
- Florian Waszak: Université de Paris, INCC UMR 8002, CNRS, F-75006, Paris, France
57. Cao L, Steinborn MB, Haendel BF. Delusional thinking and action binding in healthy individuals. Sci Rep 2021; 11:18932. PMID: 34556707; PMCID: PMC8460725; DOI: 10.1038/s41598-021-97977-y.
Abstract
Action binding refers to the shift of the perceived time of an action toward its action-related feedback. The action binding effect has been shown to be much larger in schizophrenia than in healthy controls, possibly owing to positive symptoms such as delusions. Here we investigated the relationship between delusional thinking and action binding in healthy individuals, predicting a positive correlation between them. The action binding effect was evaluated by comparing the perceived time of a keypress between an operant condition (keypress triggering a sound) and a baseline condition (keypress alone), using a novel testing method that substantially improved the precision of the subjective timing measurement. A positive correlation was found between the tendency toward delusional thinking (measured by the 21-item Peters et al. Delusions Inventory) and action binding across participants, after controlling for the effect of testing order between the operant and baseline conditions. The results indicate that delusional thinking influences action time perception in particular, and they support the notion of a continuous distribution of schizotypal traits, with healthy individuals at one end and clinical patients at the other.
Affiliation(s)
- Liyu Cao: Department of Psychology and Behavioural Sciences, Zhejiang University, Tianmushan Road 148, Hangzhou, 310007, China; Department of Psychology (III), Julius-Maximilians-Universität Würzburg, 97070, Würzburg, Germany
- Michael B Steinborn: Department of Psychology (III), Julius-Maximilians-Universität Würzburg, 97070, Würzburg, Germany
- Barbara F Haendel: Department of Psychology (III), Julius-Maximilians-Universität Würzburg, 97070, Würzburg, Germany
58. The auditory brain in action: Intention determines predictive processing in the auditory system - A review of current paradigms and findings. Psychon Bull Rev 2021; 29:321-342. PMID: 34505988; PMCID: PMC9038838; DOI: 10.3758/s13423-021-01992-z.
Abstract
According to ideomotor theory, action may serve to produce desired sensory outcomes. Perception has been widely described in terms of sensory predictions arising from top-down input from higher-order cortical areas. Here, we demonstrate that the action intention results in reliable top-down predictions that modulate auditory brain responses. We bring together several lines of research, including sensory attenuation, active oddball, and action-related omission studies. Together, the results suggest that intention-based predictions modulate several steps in the sound processing hierarchy, from preattentive to evaluation-related processes, also when controlling for additional prediction sources (i.e., sound regularity). We propose an integrative theoretical framework, the extended auditory event representation system (AERS), a model compatible with ideomotor theory, the theory of event coding, and predictive coding. Initially introduced to describe regularity-based auditory predictions, the extended AERS, we argue, explains the effects of action intention on auditory processing while additionally allowing the differences and commonalities between intention- and regularity-based predictions to be studied. We thus believe that this framework could guide future research on action and perception.
59. Li J, Hong B, Nolte G, Engel AK, Zhang D. Preparatory delta phase response is correlated with naturalistic speech comprehension performance. Cogn Neurodyn 2021; 16:337-352. PMID: 35401861; PMCID: PMC8934811; DOI: 10.1007/s11571-021-09711-z.
Abstract
While human speech comprehension is thought to be an active process that involves top-down predictions, it remains unclear how predictive information is used to prepare for the processing of upcoming speech. We aimed to identify the neural signatures of the preparatory processing of upcoming speech. Participants selectively attended to one of two competing naturalistic, narrative speech streams, and a temporal response function (TRF) method was applied to derive event-related-like neural responses from electroencephalographic data. The phase responses to the attended speech in the delta band (1-4 Hz) were correlated with the comprehension performance of individual participants, at a latency of -200 to 0 ms relative to the onset of speech amplitude envelope fluctuations, over the fronto-central and left-lateralized parietal electrodes. The phase responses to the attended speech in the alpha band also correlated with comprehension performance, but at a latency of 650-980 ms post-onset over the fronto-central electrodes. Distinct neural signatures were found for attentional modulation, taking the form of TRF-based amplitude responses at a latency of 240-320 ms post-onset over the left-lateralized fronto-central and occipital electrodes. Our findings reveal how the brain prepares to process upcoming speech in a continuous, naturalistic speech context.
Affiliation(s)
- Jiawei Li: Department of Psychology, School of Social Sciences, Tsinghua University, Room 334, Mingzhai Building, Beijing, China; Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
- Bo Hong: Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China; Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
- Guido Nolte: Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Andreas K. Engel: Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Dan Zhang: Department of Psychology, School of Social Sciences, Tsinghua University, Room 334, Mingzhai Building, Beijing, China; Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
60. Mihai PG, Tschentscher N, von Kriegstein K. Modulation of the Primary Auditory Thalamus When Recognizing Speech with Background Noise. J Neurosci 2021; 41:7136-7147. PMID: 34244362; PMCID: PMC8372015; DOI: 10.1523/jneurosci.2902-20.2021.
Abstract
Recognizing speech in background noise is a strenuous daily activity, yet most humans can master it. An explanation of how the human brain deals with such sensory uncertainty during speech recognition is to date missing. Previous work has shown that recognition of speech without background noise involves modulation of the auditory thalamus (medial geniculate body; MGB): there are higher responses in the left MGB for speech recognition tasks that require tracking of fast-varying stimulus properties, in contrast to relatively constant stimulus properties (e.g., speaker identity tasks), despite the same stimulus input. Here, we tested the hypotheses that (1) this task-dependent modulation for speech recognition increases in parallel with the sensory uncertainty in the speech signal, i.e., the amount of background noise; and that (2) this increase is present in the ventral MGB, which corresponds to the primary sensory part of the auditory thalamus. In accordance with our hypotheses, we show, using ultra-high-resolution functional magnetic resonance imaging (fMRI) in male and female human participants, that the task-dependent modulation of the left ventral MGB (vMGB) for speech is particularly strong when recognizing speech in noisy listening conditions, in contrast to situations where the speech signal is clear. The results imply that speech-in-noise recognition is supported by modifications at the level of the subcortical sensory pathway providing driving input to the auditory cortex.
Significance Statement: Speech recognition in noisy environments is a challenging everyday task. One reason humans can master this task is the recruitment of additional cognitive resources, reflected in the recruitment of non-language cerebral cortex areas. Here, we show that modulation in the primary sensory pathway is also specifically involved in speech-in-noise recognition. We found that the left primary sensory thalamus (ventral medial geniculate body; vMGB) is more involved when recognizing speech signals, as opposed to a control task (speaker identity recognition), when heard in background noise versus when the noise was absent. This finding implies that the brain optimizes sensory processing in subcortical sensory pathway structures in a task-specific manner to deal with speech recognition in noisy environments.
Affiliation(s)
- Paul Glad Mihai: Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden 01187, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Nadja Tschentscher: Research Unit Biological Psychology, Department of Psychology, Ludwig-Maximilians-University Munich, Munich 80802, Germany
- Katharina von Kriegstein: Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden 01187, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
61. Tivadar RI, Knight RT, Tzovara A. Automatic Sensory Predictions: A Review of Predictive Mechanisms in the Brain and Their Link to Conscious Processing. Front Hum Neurosci 2021; 15:702520. PMID: 34489663; PMCID: PMC8416526; DOI: 10.3389/fnhum.2021.702520.
Abstract
The human brain has the astonishing capacity to integrate streams of sensory information from the environment and to form predictions about future events in an automatic way. Although predictive coding was initially developed for visual processing, the bulk of subsequent research has focused on auditory processing, with the famous mismatch negativity signal as possibly the most studied signature of a surprise or prediction error (PE) signal. Auditory PEs are present during various states of consciousness. Intriguingly, their presence and characteristics have been linked with residual levels of consciousness and the return of awareness. In this review we first give an overview of the neural substrates of predictive processes in the auditory modality and their relation to consciousness. Then, we focus on different states of consciousness (wakefulness, sleep, anesthesia, coma, meditation, and hypnosis) and on what predictive processing has been able to disclose about brain functioning in such states. We review studies investigating how the neural signatures of auditory predictions are modulated by states of reduced or absent consciousness. As a future outlook, we propose the combination of electrophysiological and computational techniques to investigate which facets of sensory predictive processes are maintained when consciousness fades away.
Affiliation(s)
- Robert T. Knight: Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, United States; Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
- Athina Tzovara: Institute of Computer Science, University of Bern, Bern, Switzerland; Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, United States; Sleep-Wake Epilepsy Center | NeuroTec, Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
62. Ficco L, Mancuso L, Manuello J, Teneggi A, Liloia D, Duca S, Costa T, Kovacs GZ, Cauda F. Disentangling predictive processing in the brain: a meta-analytic study in favour of a predictive network. Sci Rep 2021; 11:16258. PMID: 34376727; PMCID: PMC8355157; DOI: 10.1038/s41598-021-95603-5.
Abstract
According to predictive coding (PC) theory, the brain is constantly engaged in predicting its upcoming states and refining these predictions through error signals. Despite extensive research investigating the neural bases of this theory, no previous study has systematically attempted to define the neural mechanisms of predictive coding across studies and sensory channels with a focus on functional connectivity. In this study, we employ a coordinate-based meta-analytical approach to address this issue. We first use the Activation Likelihood Estimation (ALE) algorithm to detect spatial convergence across studies related to prediction error and prediction encoding. Overall, our ALE results suggest a central role of the left inferior frontal gyrus and left insula in both processes. Moreover, we employ a meta-analytic connectivity method (Seed-Voxel Correlations Consensus). This technique reveals a large, bilateral predictive network that resembles large-scale networks involved in task-driven attention and execution. In sum, we find that (i) predictive processing appears to occur more in certain brain regions than in others when the different sensory modalities are considered together, and (ii) there is no evidence, at the network level, for a distinction between error and prediction processing.
Affiliation(s)
- Linda Ficco: Focuslab, Department of Psychology, University of Turin, Turin, Italy; GCS-fMRI, Koelliker Hospital and Department of Psychology, University of Turin, Turin, Italy; Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Am Steiger 3/Haus 1, 07743 Jena, Germany
- Lorenzo Mancuso: Focuslab, Department of Psychology, University of Turin, Turin, Italy; GCS-fMRI, Koelliker Hospital and Department of Psychology, University of Turin, Turin, Italy
- Jordi Manuello: Focuslab, Department of Psychology, University of Turin, Turin, Italy; GCS-fMRI, Koelliker Hospital and Department of Psychology, University of Turin, Turin, Italy
- Alessia Teneggi: Focuslab, Department of Psychology, University of Turin, Turin, Italy; GCS-fMRI, Koelliker Hospital and Department of Psychology, University of Turin, Turin, Italy
- Donato Liloia: Focuslab, Department of Psychology, University of Turin, Turin, Italy; GCS-fMRI, Koelliker Hospital and Department of Psychology, University of Turin, Turin, Italy
- Sergio Duca: GCS-fMRI, Koelliker Hospital and Department of Psychology, University of Turin, Turin, Italy
- Tommaso Costa: Focuslab, Department of Psychology, University of Turin, Turin, Italy; GCS-fMRI, Koelliker Hospital and Department of Psychology, University of Turin, Turin, Italy
- Gyula Zoltán Kovacs: Department of Biological Psychology and Cognitive Neuroscience, Institute for Psychology, Friedrich-Schiller University of Jena, Jena, Germany
- Franco Cauda: Focuslab, Department of Psychology, University of Turin, Turin, Italy; GCS-fMRI, Koelliker Hospital and Department of Psychology, University of Turin, Turin, Italy
63
Bolt NK, Loehr JD. Sensory Attenuation of the Auditory P2 Differentiates Self- from Partner-Produced Sounds during Joint Action. J Cogn Neurosci 2021; 33:2297-2310. [PMID: 34272962 DOI: 10.1162/jocn_a_01760]
Abstract
Successful human interaction relies on people's ability to differentiate between the sensory consequences of their own and others' actions. Research in solo action contexts has identified sensory attenuation, that is, the selective perceptual or neural dampening of the sensory consequences of self-produced actions, as a potential marker of the distinction between self- and externally produced sensory consequences. However, very little research has examined whether sensory attenuation distinguishes self- from partner-produced sensory consequences in joint action contexts. The current study examined whether sensory attenuation of the auditory N1 or P2 ERPs distinguishes self- from partner-produced tones when pairs of people coordinate their actions to produce tone sequences that match a metronome pace. We did not find evidence of auditory N1 attenuation for either self- or partner-produced tones. Instead, the auditory P2 was attenuated for self-produced tones compared to partner-produced tones within the joint action. These findings indicate that self-specific attenuation of the auditory P2 differentiates the sensory consequences of one's own from others' actions during joint action. These findings also corroborate recent evidence that N1 attenuation may be driven by general rather than action-specific processes and support a recently proposed functional dissociation between auditory N1 and P2 attenuation.
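The component attenuation reported here is typically quantified as a mean-amplitude difference within a latency window. A minimal sketch (the window bounds, sampling rate, and waveform are illustrative assumptions, not the study's parameters; note that for a negative-going component such as the N1 the sign convention is reversed):

```python
import numpy as np

def mean_amplitude(erp, times, window):
    """Mean ERP amplitude within a latency window (seconds)."""
    lo, hi = window
    mask = (times >= lo) & (times <= hi)
    return float(erp[mask].mean())

def attenuation(self_erp, external_erp, times, window):
    """Attenuation = external minus self mean amplitude in the window;
    positive values mean the self-produced (positive) response is reduced."""
    return (mean_amplitude(external_erp, times, window)
            - mean_amplitude(self_erp, times, window))

# toy example: 1 s epoch at 500 Hz, P2-like positivity around 200 ms,
# halved in amplitude for the "self-produced" condition
fs = 500
times = np.arange(0, 1.0, 1.0 / fs)
p2 = np.exp(-((times - 0.2) ** 2) / (2 * 0.02 ** 2))
external = 4.0 * p2       # externally generated tone response
self_tone = 2.0 * p2      # attenuated self-produced response
att = attenuation(self_tone, external, times, window=(0.15, 0.25))
```

In a joint-action design like this one, the same measure would be computed separately for self- and partner-produced tones and compared across conditions.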
64
Reznik D, Guttman N, Buaron B, Zion-Golumbic E, Mukamel R. Action-locked Neural Responses in Auditory Cortex to Self-generated Sounds. Cereb Cortex 2021; 31:5560-5569. [PMID: 34185837 DOI: 10.1093/cercor/bhab179]
Abstract
Sensory perception is a product of interactions between the internal state of an organism and the physical attributes of a stimulus. It has been shown across the animal kingdom that perception and sensory-evoked physiological responses are modulated depending on whether or not the stimulus is the consequence of voluntary actions. These phenomena are often attributed to motor signals sent to relevant sensory regions that convey information about upcoming sensory consequences. However, the neurophysiological signature of action-locked modulations in sensory cortex, and their relationship with perception, is still unclear. In the current study, we recorded neurophysiological (using Magnetoencephalography) and behavioral responses from 16 healthy subjects performing an auditory detection task of faint tones. Tones were either generated by subjects' voluntary button presses or occurred predictably following a visual cue. By introducing a constant temporal delay between button press/cue and tone delivery, and applying source-level analysis, we decoupled action-locked and auditory-locked activity in auditory cortex. We show action-locked evoked-responses in auditory cortex following sound-triggering actions and preceding sound onset. Such evoked-responses were not found for button-presses that were not coupled with sounds, or sounds delivered following a predictive visual cue. Our results provide evidence for efferent signals in human auditory cortex that are locked to voluntary actions coupled with future auditory consequences.
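The decoupling of action-locked and auditory-locked activity rests on the constant delay between button press and tone: epochs time-locked to the press and epochs time-locked to the sound isolate the two responses. A toy numpy sketch of this logic (the sampling rate, delay, and window values are illustrative assumptions, not the study's parameters):

```python
import numpy as np

def epochs(signal, fs, event_samples, tmin, tmax):
    """Cut fixed-length epochs around event sample indices."""
    n0, n1 = int(tmin * fs), int(tmax * fs)
    return np.stack([signal[s + n0 : s + n1] for s in event_samples])

# toy continuous trace: 10 s at 100 Hz; button presses at 2 s and 6 s,
# each sound delivered a constant 0.5 s later (the delay that decouples them)
fs = 100
t = np.arange(0, 10, 1 / fs)
presses = [200, 600]
delay = 50  # samples = 0.5 s
signal = np.zeros_like(t)
for p in presses:
    signal[p] += 1.0              # action-locked deflection
    signal[p + delay] += 2.0      # auditory-locked deflection

action_locked = epochs(signal, fs, presses, tmin=-0.1, tmax=0.4).mean(axis=0)
sound_locked = epochs(signal, fs, [p + delay for p in presses],
                      tmin=-0.1, tmax=0.4).mean(axis=0)
```

Averaging press-locked epochs recovers the action-locked component while the (jitter-free but later) auditory component falls outside the window, and vice versa.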
Affiliation(s)
- Daniel Reznik: Max Planck Institute for Human Cognitive and Brain Sciences, Psychology Department, Leipzig, 04103, Germany
- Noa Guttman: The Gonda Center for Multidisciplinary Brain Research, Bar-Ilan University, Ramat Gan, 5290002, Israel
- Batel Buaron: Sagol School of Neuroscience and School of Psychological Sciences, Tel-Aviv University, 69978, Israel
- Elana Zion-Golumbic: The Gonda Center for Multidisciplinary Brain Research, Bar-Ilan University, Ramat Gan, 5290002, Israel
- Roy Mukamel: Sagol School of Neuroscience and School of Psychological Sciences, Tel-Aviv University, 69978, Israel
65
Hsu YF, Darriba Á, Waszak F. Attention modulates repetition effects in a context of low periodicity. Brain Res 2021; 1767:147559. [PMID: 34118219 DOI: 10.1016/j.brainres.2021.147559]
Abstract
Stimulus repetition can result in a reduction in neural responses (i.e., repetition suppression) in neuroimaging studies. Predictive coding models of perception postulate that this phenomenon largely reflects the top-down attenuation of prediction errors. Electroencephalography research further demonstrated that repetition effects consist of sequentially ordered attention-independent and attention-dependent components in a context of high periodicity. However, the statistical structure of our auditory environment is richer than that of a fixed pattern. It remains unclear whether the attentional modulation of repetition effects generalises to a setting that better represents the nature of our auditory environment. Here we used electroencephalography to investigate whether the attention-independent and attention-dependent components of repetition effects previously described in the auditory modality persist in a context of low periodicity, where temporary disruptions may or may not occur. Participants were presented with repetition trains of various lengths, with or without temporary disruptions. We found attention-independent and attention-dependent repetition effects on, respectively, the P2 and P3a event-related potential components. This pattern of results is in line with previous research, confirming that the attenuation of prediction errors upon stimulus repetition is first registered regardless of attentional state before further attenuation of attended but not unattended prediction errors takes place. However, unlike previous reports, these effects manifested on later components. This divergence from previous studies is discussed in terms of the possible contribution of contextual factors.
Affiliation(s)
- Yi-Fang Hsu: Department of Educational Psychology and Counselling, National Taiwan Normal University, 10610 Taipei, Taiwan; Institute for Research Excellence in Learning Sciences, National Taiwan Normal University, 10610 Taipei, Taiwan
- Álvaro Darriba: Centre National de la Recherche Scientifique (CNRS), Integrative Neuroscience and Cognition Center (INCC), Unité Mixte de Recherche 8002, 75006 Paris, France; Université de Paris, 75006 Paris, France
- Florian Waszak: Centre National de la Recherche Scientifique (CNRS), Integrative Neuroscience and Cognition Center (INCC), Unité Mixte de Recherche 8002, 75006 Paris, France; Université de Paris, 75006 Paris, France; Fondation Ophtalmologique Rothschild, Paris, France
66
Neszmélyi B, Horváth J. Processing and utilization of auditory action effects in individual and social tasks. Acta Psychol (Amst) 2021; 217:103326. [PMID: 33989835 DOI: 10.1016/j.actpsy.2021.103326]
Abstract
The influence of action-effect integration on motor control and sensory processing is often investigated in arrangements featuring human-machine interactions. Such experiments focus on predictable sensory events produced through participants' interactions with simple response devices. Action-effect integration may, however, also occur when we interact with human partners. The current study examined the similarities and differences in perceptual and motor control processes related to generating sounds with or without the involvement of a human partner. We manipulated the complexity of the causal chain of events between the initial motor act and the final sensory event. In the self-induced condition, participants generated sounds directly by pressing a button, while in the interactive condition sounds resulted from a paired reaction-time task; that is, the final sound was generated indirectly, relying on the partner's contribution. Auditory event-related potentials (ERPs) and force application patterns were similar in the two conditions, suggesting that social action effects produced with the involvement of a second human agent in the causal sequence are processed and utilized as action feedback in the same way as direct consequences of one's own actions. The only reflection of a processing difference between the two conditions was a slow, posterior ERP waveform that started before the presentation of the auditory stimulus, which may reflect differences in stimulus expectancy or task difficulty.
67
Hsu YF, Waszak F, Strömmer J, Hämäläinen JA. Human Brain Ages With Hierarchy-Selective Attenuation of Prediction Errors. Cereb Cortex 2021; 31:2156-2168. [PMID: 33258914 PMCID: PMC7945026 DOI: 10.1093/cercor/bhaa352]
Abstract
From the perspective of predictive coding, our brain embodies a hierarchical generative model to realize perception, which proactively predicts the statistical structure of sensory inputs. How are these predictive processes modified as we age? Recent research suggested that aging leads to decreased weighting of sensory inputs and increased reliance on predictions. Here we investigated whether this age-related shift from sensorium to predictions occurs at all levels of hierarchical message passing. We recorded the electroencephalography responses with an auditory local-global paradigm in a cohort of 108 healthy participants from 3 groups: seniors, adults, and adolescents. The detection of local deviancy seems largely preserved in older individuals at earlier latency (including the mismatch negativity followed by the P3a but not the reorienting negativity). In contrast, the detection of global deviancy is clearly compromised in older individuals, as they showed worse task performance and attenuated P3b. Our findings demonstrate that older brains show little decline in sensory (i.e., first-order) prediction errors but significant diminution in contextual (i.e., second-order) prediction errors. Age-related deficient maintenance of auditory information in working memory might affect whether and how lower-level prediction errors propagate to the higher level.
Affiliation(s)
- Yi-Fang Hsu: Department of Educational Psychology and Counselling, National Taiwan Normal University, 106308 Taipei, Taiwan; Institute for Research Excellence in Learning Sciences, National Taiwan Normal University, 106308 Taipei, Taiwan
- Florian Waszak: Centre National de la Recherche Scientifique (CNRS), Integrative Neuroscience and Cognition Center (INCC), Unité Mixte de Recherche 8002, 75006 Paris, France; Université de Paris, 75006 Paris, France
- Juho Strömmer: Jyväskylä Centre for Interdisciplinary Brain Research, Department of Psychology, University of Jyväskylä, 40014 Jyväskylä, Finland
- Jarmo A Hämäläinen: Jyväskylä Centre for Interdisciplinary Brain Research, Department of Psychology, University of Jyväskylä, 40014 Jyväskylä, Finland
68
Blohm S, Schlesewsky M, Menninghaus W, Scharinger M. Text type attribution modulates pre-stimulus alpha power in sentence reading. Brain Lang 2021; 214:104894. [PMID: 33477059 DOI: 10.1016/j.bandl.2020.104894]
Abstract
Prior knowledge and context-specific expectations influence the perception of sensory events, e.g., speech, as well as complex higher-order cognitive operations like text reading. Here, we focused on pre-stimulus neural activity during sentence reading to examine text type-dependent attentional bias in anticipation of written stimuli, capitalizing on the functional relevance of brain oscillations in the alpha (8-12 Hz) frequency range. Two sex- and age-matched groups of participants (n = 24 each) read identical sentences on a screen at a fixed per-constituent presentation rate while their electroencephalogram was recorded; the groups were differentially instructed to read "sentences" (genre-neutral condition) or "verses from poems" (poetry condition). Relative alpha power (pre-cue vs. post-cue) in pre-stimulus time windows was greater in the poetry condition than in the genre-neutral condition. This finding constitutes initial evidence for genre-specific cognitive adjustments that precede processing proper, and potentially links current theories of discourse comprehension to current theories of brain function.
Affiliation(s)
- Stefan Blohm: Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Department of English and Linguistics, University of Mainz, Germany
- Matthias Schlesewsky: Department of English and Linguistics, University of Mainz, Germany; University of South Australia, Adelaide, Australia
- Winfried Menninghaus: Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Mathias Scharinger: Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Phonetics Research Group, Department of German Linguistics & Marburg Center for Mind, Brain and Behavior, Marburg, Germany
69
Neszmélyi B, Horváth J. Action-related auditory ERP attenuation is not modulated by action effect relevance. Biol Psychol 2021; 161:108029. [PMID: 33556451 DOI: 10.1016/j.biopsycho.2021.108029]
Abstract
Event-related potentials (ERPs) elicited by self-induced sounds are often smaller than ERPs elicited by identical but externally generated sounds. This action-related auditory ERP attenuation is more pronounced when self-induced sounds are intermixed with similar sounds generated by an external source. The current study explored whether attentional factors contribute to this phenomenon. Participants performed tone-eliciting actions while the action-tone contingency and the set of additional action effects (tactile only, or tactile and visual) were manipulated in a blocked manner. Previous action-tone contingency effects were replicated, but the addition of other sensory action consequences did not influence the magnitude of auditory ERP attenuation. This suggests that the amount of attention allocated to concurrent non-auditory action effects does not substantially affect the magnitude of action-related auditory ERP attenuation, and is consistent with the assumption that action-related auditory ERP attenuation might be related to the process of distinguishing self-induced stimuli from externally generated ones.
Affiliation(s)
- Bence Neszmélyi: Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary; Budapest University of Technology and Economics, Budapest, Hungary; Pázmány Péter Catholic University, Budapest, Hungary
- János Horváth: Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary; Károli Gáspár University of the Reformed Church in Hungary, Hungary
70
Ghio M, Egan S, Bellebaum C. Similarities and Differences between Performers and Observers in Processing Auditory Action Consequences: Evidence from Simultaneous EEG Acquisition. J Cogn Neurosci 2020; 33:683-694. [PMID: 33378242 DOI: 10.1162/jocn_a_01671]
Abstract
In our social environment, we easily distinguish stimuli caused by our own actions (e.g., water splashing when I fill my glass) from stimuli that have an external source (e.g., water splashing in a fountain). Accumulating evidence suggests that processing the auditory consequences of self-performed actions elicits N1 and P2 ERPs of reduced amplitude compared to physically identical but externally generated sounds, with such reductions being ascribed to neural predictive mechanisms. It is unexplored, however, whether the sensory processing of action outcomes is similarly modulated by action observation (e.g., water splashing when I observe you filling my glass). We tested 40 healthy participants by applying a methodological approach for the simultaneous EEG recording of two persons: An observer observed button presses executed by a performer in real time. For the performers, we replicated previous findings of a reduced N1 amplitude for self- versus externally generated sounds. This pattern differed significantly from the one in observers, whose N1 for sounds generated by observed button presses was not attenuated. In turn, the P2 amplitude was reduced for processing action- versus externally generated sounds for both performers and observers. These findings show that both action performance and observation affect the processing of action-generated sounds. There are, however, important differences between the two in the timing of the effects, probably related to differences in the predictability of the actions and thus also the associated stimuli. We discuss how these differences might contribute to recognizing the stimulus as caused by self versus others.
71
Wöstmann M, Maess B, Obleser J. Orienting auditory attention in time: Lateralized alpha power reflects spatio-temporal filtering. Neuroimage 2020; 228:117711. [PMID: 33385562 PMCID: PMC7903158 DOI: 10.1016/j.neuroimage.2020.117711]
Abstract
The deployment of neural alpha (8-12 Hz) lateralization in service of spatial attention is well established: alpha power increases in the cortical hemisphere ipsilateral to the attended hemifield and decreases in the contralateral hemisphere. Much less is known about humans' ability to deploy such alpha lateralization in time, and thus to exploit alpha power as a spatio-temporal filter. Here we show that spatially lateralized alpha power signifies, beyond the direction of spatial attention, the distribution of attention in time, and thereby qualifies as a spatio-temporal attentional filter. Participants (N = 20) selectively listened to spoken numbers presented on one side (left vs right), while competing numbers were presented on the other side. Key to our hypothesis, temporal foreknowledge was manipulated via a visual cue, which was either instructive and indicated the to-be-probed number position (70% valid) or neutral. Temporal foreknowledge did guide participants' attention, as they recognized numbers from the to-be-attended side more accurately following valid cues. In the magnetoencephalogram (MEG), spatial attention to the left versus right side induced lateralization of alpha power in all temporal cueing conditions. Modulation of alpha lateralization at the 0.8 Hz presentation rate of spoken numbers was stronger following instructive compared to neutral temporal cues. Critically, we found stronger modulation of lateralized alpha power specifically at the onsets of temporally cued numbers. These results suggest that the precisely timed hemispheric lateralization of alpha power qualifies as a spatio-temporal attentional filter mechanism susceptible to top-down behavioural goals.
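The lateralization described here is commonly summarized with a normalized index contrasting alpha power ipsilateral versus contralateral to the attended side. A minimal sketch (the per-trial power values are toy numbers; the normalization form is a common convention, not necessarily the exact measure used in this study):

```python
import numpy as np

def alpha_lateralization(ipsi_power, contra_power):
    """Normalized alpha lateralization index: (ipsi - contra) / (ipsi + contra).
    Positive values indicate higher alpha power ipsilateral to the attended
    side, the pattern typically reported for auditory spatial attention."""
    ipsi = np.asarray(ipsi_power, dtype=float)
    contra = np.asarray(contra_power, dtype=float)
    return (ipsi - contra) / (ipsi + contra)

# toy per-trial alpha power (arbitrary units): attention boosts ipsilateral alpha
ipsi = np.array([12.0, 11.0, 13.0])
contra = np.array([8.0, 9.0, 7.0])
ali = alpha_lateralization(ipsi, contra)
```

Computed in sliding windows over the trial, the same index would expose the temporal modulation (e.g., at the stimulus presentation rate) that the study reports.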
Affiliation(s)
- Malte Wöstmann: Department of Psychology, University of Lübeck, Germany; Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, Lübeck, Germany
- Burkhard Maess: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jonas Obleser: Department of Psychology, University of Lübeck, Germany; Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, Lübeck, Germany
72
Hsu YF, Hämäläinen JA. Both contextual regularity and selective attention affect the reduction of precision-weighted prediction errors but in distinct manners. Psychophysiology 2020; 58:e13753. [PMID: 33340115 DOI: 10.1111/psyp.13753]
Abstract
The predictive coding model of perception postulates that the primary objective of the brain is to infer the causes of sensory inputs by reducing prediction errors (i.e., the discrepancy between expected and actual information). Moreover, prediction errors are weighted by their precision (i.e., inverse variance), which quantifies the degree of certainty about the variables. There is accumulating evidence that the reduction of precision-weighted prediction errors can be affected by contextual regularity (as an external factor) and selective attention (as an internal factor). However, it is unclear whether the two factors function together or separately. Here we used electroencephalography (EEG) to examine the putative interaction of contextual regularity and selective attention on this reduction process. Participants were presented with pairs of regular and irregular quartets in attended and unattended conditions. We found that contextual regularity and selective attention independently modulated the N1/MMN, where the repetition effect was absent. On the P2, the two factors each interacted with the repetition effect without interacting with each other. The results show that contextual regularity and selective attention likely affect the reduction of precision-weighted prediction errors in distinct manners. While contextual regularity fine-tunes our efficiency at reducing precision-weighted prediction errors, selective attention seems to modulate the reduction process following the Matthew effect of accumulated advantage.
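Precision weighting of prediction errors has a simple Gaussian formulation: the error updates the belief in proportion to the relative precision of the sensory evidence (a Kalman-gain style update). A minimal sketch of this principle only, not the model analysed in the study:

```python
def update_belief(prior_mean, prior_precision, observation, obs_precision):
    """One precision-weighted prediction-error update for a Gaussian belief.
    The prediction error is weighted by the precision of the observation
    relative to the total precision (the Kalman gain)."""
    pe = observation - prior_mean                     # prediction error
    gain = obs_precision / (obs_precision + prior_precision)
    posterior_mean = prior_mean + gain * pe           # precision-weighted update
    posterior_precision = prior_precision + obs_precision
    return posterior_mean, posterior_precision

# toy example: a strong prior (precision 4) meets a noisy observation (precision 1),
# so the prediction error is heavily down-weighted
mean, prec = update_belief(0.0, 4.0, 1.0, 1.0)
```

With a precise prior and imprecise input the belief barely moves; boosting the precision of the input (as attention is thought to do) makes the same prediction error far more influential.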
Affiliation(s)
- Yi-Fang Hsu: Department of Educational Psychology and Counselling, National Taiwan Normal University, Taipei, Taiwan; Institute for Research Excellence in Learning Sciences, National Taiwan Normal University, Taipei, Taiwan
- Jarmo A Hämäläinen: Jyväskylä Centre for Interdisciplinary Brain Research, Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
73
Learning to predict: Neuronal signatures of auditory expectancy in human event-related potentials. Neuroimage 2020; 225:117472. [PMID: 33099012 PMCID: PMC9215305 DOI: 10.1016/j.neuroimage.2020.117472]
Abstract
Learning to anticipate future states of the world based on statistical regularities in the environment is a key component of perception and is vital for the survival of many organisms. Such statistical learning and prediction are crucial for acquiring language and music appreciation. Importantly, learned expectations can be implicitly derived from exposure to sensory input, without requiring explicit information regarding contingencies in the environment. Whereas many previous studies of statistical learning have demonstrated larger neuronal responses to unexpected versus expected stimuli, the neuronal bases of the expectations themselves remain poorly understood. Here we examined behavioral and neuronal signatures of learned expectancy via human scalp-recorded event-related brain potentials (ERPs). Participants were instructed to listen to a series of sounds and press a response button as quickly as possible upon hearing a target noise burst, which was either reliably or unreliably preceded by one of three pure tones in low-, mid-, and high-frequency ranges. Participants were not informed about the statistical contingencies between the preceding tone ‘cues’ and the target. Over the course of a stimulus block, participants responded more rapidly to reliably cued targets. This behavioral index of learned expectancy was paralleled by a negative ERP deflection, designated as a neuronal contingency response (CR), which occurred immediately prior to the onset of the target. The amplitude and latency of the CR were systematically modulated by the strength of the predictive relationship between the cue and the target. Re-averaging ERPs with respect to the latency of behavioral responses revealed no consistent relationship between the CR and the motor response, suggesting that the CR represents a neuronal signature of learned expectancy or anticipatory attention. 
Our results demonstrate that statistical regularities in an auditory input stream can be implicitly learned and exploited to influence behavior. Furthermore, we uncover a potential ‘prediction signal’ that reflects this fundamental learning process.
74
Dürschmid S, Reichert C, Hinrichs H, Heinze HJ, Kirsch HE, Knight RT, Deouell LY. Direct Evidence for Prediction Signals in Frontal Cortex Independent of Prediction Error. Cereb Cortex 2020; 29:4530-4538. [PMID: 30590422 DOI: 10.1093/cercor/bhy331]
Abstract
Predictive coding (PC) has been suggested as one of the main mechanisms used by brains to interact with complex environments. PC theories posit top-down prediction signals, which are compared with actual outcomes, yielding in turn prediction error (PE) signals, which are used, bottom-up, to modify the ensuing predictions. However, disentangling prediction from PE signals has been challenging. Critically, while many studies found indirect evidence for PC in the form of PE signals, direct evidence for the prediction signal is mostly lacking. Here, we provide clear evidence, obtained from intracranial cortical recordings in human surgical patients, that the human lateral prefrontal cortex evinces prediction signals while anticipating an event. Patients listened to task-irrelevant sequences of repetitive tones including infrequent predictable or unpredictable pitch deviants. The broadband high-frequency amplitude (HFA) was decreased prior to the onset of expected relative to unexpected deviants in the frontal cortex only, and its amplitude was sensitive to the increasing likelihood of deviants following longer trains of standards in the unpredictable condition. Single-trial HFA predicted deviations and correlated with poststimulus response to deviations. These results provide direct evidence for frontal cortex prediction signals independent of PE signals.
Affiliation(s)
- Stefan Dürschmid: Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Brenneckestr. 6, Magdeburg, Germany; Department of Neurology, Otto-von-Guericke University, Leipziger Str. 44, Magdeburg, Germany
- Christoph Reichert: Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Brenneckestr. 6, Magdeburg, Germany; CBBS-Center of Behavioral Brain Sciences, Otto-von-Guericke University, Universitätsplatz 2, Magdeburg, Germany
- Hermann Hinrichs: Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Brenneckestr. 6, Magdeburg, Germany; Department of Neurology, Otto-von-Guericke University, Leipziger Str. 44, Magdeburg, Germany; Stereotactic Neurosurgery, Otto-von-Guericke University, Leipziger Str. 44, Magdeburg, Germany; German Center for Neurodegenerative Diseases (DZNE), Leipziger Str. 44, Magdeburg, Germany; Forschungscampus STIMULATE, Otto-von-Guericke University, Universitätsplatz 2, Magdeburg, Germany; CBBS-Center of Behavioral Brain Sciences, Otto-von-Guericke University, Universitätsplatz 2, Magdeburg, Germany
- Hans-Jochen Heinze: Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Brenneckestr. 6, Magdeburg, Germany; Department of Neurology, Otto-von-Guericke University, Leipziger Str. 44, Magdeburg, Germany; Stereotactic Neurosurgery, Otto-von-Guericke University, Leipziger Str. 44, Magdeburg, Germany; German Center for Neurodegenerative Diseases (DZNE), Leipziger Str. 44, Magdeburg, Germany; Forschungscampus STIMULATE, Otto-von-Guericke University, Universitätsplatz 2, Magdeburg, Germany; CBBS-Center of Behavioral Brain Sciences, Otto-von-Guericke University, Universitätsplatz 2, Magdeburg, Germany
- Heidi E Kirsch: Department of Neurology, University of California, 400 Parnassus Avenue, San Francisco, CA, USA
- Robert T Knight: Helen Wills Neuroscience Institute and Department of Psychology, University of California, Berkeley, CA, USA
- Leon Y Deouell: Edmond and Lily Safra Center for Brain Sciences and Department of Psychology, The Hebrew University of Jerusalem, Mount Scopus, Jerusalem, Israel
75
Wikman P, Sahari E, Salmela V, Leminen A, Leminen M, Laine M, Alho K. Breaking down the cocktail party: Attentional modulation of cerebral audiovisual speech processing. Neuroimage 2020; 224:117365. [PMID: 32941985 DOI: 10.1016/j.neuroimage.2020.117365]
Abstract
Recent studies utilizing electrophysiological speech envelope reconstruction have sparked renewed interest in the cocktail party effect by showing that auditory neurons entrain to selectively attended speech. Yet, the neural networks of attention to speech in naturalistic audiovisual settings with multiple sound sources remain poorly understood. We collected functional brain imaging data while participants viewed audiovisual video clips of lifelike dialogues with concurrent distracting speech in the background. Dialogues were presented in a full-factorial design, comprising task (listen to the dialogues vs. ignore them), audiovisual quality and semantic predictability. We used univariate analyses in combination with multivariate pattern analysis (MVPA) to study modulations of brain activity related to attentive processing of audiovisual speech. We found attentive speech processing to cause distinct spatiotemporal modulation profiles in distributed cortical areas including sensory and frontal-control networks. Semantic coherence modulated attention-related activation patterns in the earliest stages of auditory cortical processing, suggesting that the auditory cortex is involved in high-level speech processing. Our results corroborate views that emphasize the dynamic nature of attention, with task-specificity and context as cornerstones of the underlying neuro-cognitive mechanisms.
Affiliation(s)
- Patrik Wikman
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland.
- Elisa Sahari
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Viljami Salmela
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
- Alina Leminen
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Department of Digital Humanities, University of Helsinki, Helsinki, Finland
- Miika Leminen
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Department of Phoniatrics, Helsinki University Hospital, Helsinki, Finland
- Matti Laine
- Department of Psychology, Åbo Akademi University, Turku, Finland
- Kimmo Alho
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
76
Ruhnau P, Rufener KS, Heinze HJ, Zaehle T. Pulsed transcranial electric brain stimulation enhances speech comprehension. Brain Stimul 2020; 13:1402-1411. [PMID: 32735988] [DOI: 10.1016/j.brs.2020.07.011]
Abstract
BACKGROUND: One key mechanism thought to underlie speech processing is the alignment of cortical brain rhythms to the acoustic input, a mechanism termed entrainment. Recent work showed that transcranial electrical stimulation (tES) at speech-relevant frequencies, or adapted to the speech envelope, can indeed enhance speech processing. However, it is unclear whether an oscillatory tES signal is necessary, or whether transients in the stimulation (e.g., peaks in the tES signal) at relevant times are sufficient.
OBJECTIVE: In this study we used a novel pulsed-tES protocol and tested behaviorally whether a transiently pulsed, instead of a persistently oscillating, tES signal can improve speech processing.
METHODS: While subjects listened to spoken sentences embedded in noise, brief electric direct-current pulses aligned to speech transients (syllable onsets) were applied to auditory cortex regions to modulate comprehension. Additionally, we modulated the temporal delay between tES pulses and speech transients to test for periodic modulations of behavior, indicative of entrainment by tES.
RESULTS: Speech comprehension was improved when tES pulses were applied with a delay of 100 ms with respect to the speech transients. Contrary to previous reports, we found no periodic modulation of behavior. However, we found indications that periodic modulations can be spurious results of sampling behavioral data too coarsely.
CONCLUSIONS: Subjects' speech comprehension benefits from pulsed tES, yet behavior is not modulated periodically. Thus, pulsed tES can aid cortical entrainment to speech input, which is especially relevant in a noisy environment. Yet, pulsed tES does not seem to entrain brain oscillations by itself.
Affiliation(s)
- Philipp Ruhnau
- Department of Neurology, Otto-von-Guericke-University, Magdeburg, Germany; Center for Behavioral Brain Sciences, Otto-von-Guericke-University Magdeburg, Germany.
- Katharina S Rufener
- Department of Neurology, Otto-von-Guericke-University, Magdeburg, Germany; Center for Behavioral Brain Sciences, Otto-von-Guericke-University Magdeburg, Germany
- Hans-Jochen Heinze
- Department of Neurology, Otto-von-Guericke-University, Magdeburg, Germany; Center for Behavioral Brain Sciences, Otto-von-Guericke-University Magdeburg, Germany
- Tino Zaehle
- Department of Neurology, Otto-von-Guericke-University, Magdeburg, Germany; Center for Behavioral Brain Sciences, Otto-von-Guericke-University Magdeburg, Germany
77
Dercksen TT, Widmann A, Schröger E, Wetzel N. Omission related brain responses reflect specific and unspecific action-effect couplings. Neuroimage 2020; 215:116840. [PMID: 32289452] [DOI: 10.1016/j.neuroimage.2020.116840]
Abstract
When an auditory stimulus is predicted but unexpectedly omitted, an omission response can be observed in the EEG. This endogenous response to the absence of a stimulus demonstrates the important role of prediction in perception. SanMiguel et al. (2013a) showed that in order to observe an omission response, a specific prediction concerning the identity of an upcoming stimulus is necessary. They used button presses coupled to either a single sound (predictable identity), or a random sound (unpredictable identity). In the event-related potentials, a sequence of omission responses consisting of oN1, oN2, and oP3 was observed in the single condition but not in the random condition. Given the importance of omission studies to understand the role of prediction in perception, we replicated this study. We enhanced statistical power by doubling the sample size and adjusting data pre-processing, and applied temporal principal component analysis and replication Bayes statistics. Results in the single sound condition were successfully replicated. Principal component analysis additionally revealed attenuated oN1 and oP3 omission responses in the random sound condition. These results suggest the existence of both specific and unspecific predictions along the sound processing hierarchy, where precision weighting possibly influences the strength of prediction error. Results are discussed in the framework of predictive coding and are congruent with everyday life, where uncertainty often requires broader or more general predictions.
Affiliation(s)
- Tjerk T Dercksen
- Leibniz Institute for Neurobiology, Brenneckestraße 6, 39118, Magdeburg, Germany; Center for Behavioral Brain Sciences, Universitätsplatz 2, D-39106, Magdeburg, Germany.
- Andreas Widmann
- Leibniz Institute for Neurobiology, Brenneckestraße 6, 39118, Magdeburg, Germany; Leipzig University, Neumarkt 9-19, D-04109, Leipzig, Germany
- Erich Schröger
- Leipzig University, Neumarkt 9-19, D-04109, Leipzig, Germany
- Nicole Wetzel
- Leibniz Institute for Neurobiology, Brenneckestraße 6, 39118, Magdeburg, Germany; Center for Behavioral Brain Sciences, Universitätsplatz 2, D-39106, Magdeburg, Germany; University of Applied Sciences Magdeburg-Stendal, Osterburgerstraße 25, 39576, Stendal, Germany.
78
Korka B, Schröger E, Widmann A. What exactly is missing here? The sensory processing of unpredictable omissions is modulated by the specificity of expected action-effects. Eur J Neurosci 2020; 52:4667-4683. [PMID: 32643797] [DOI: 10.1111/ejn.14899]
Abstract
We select our actions according to the desired outcomes; for instance, piano players press certain keys to generate specific musical notes. It is well-described that the omission of a predicted action-effect may elicit prediction error signals in the brain, but what happens in the case of simultaneous effector-specific (by contrast to effector-unspecific) predictions? To answer this question, we asked participants to press left and right keys to generate tones A and B; based on the action-effect association, the tones' identity was either predictable or unpredictable, while rarely, the expected input was omitted. Crucially, the data show that omissions following hand-specific associations reliably elicited a late omission N1 (oN1) component, by contrast to the hand-unspecific associations, where the late oN1 was rather weak. An additional condition where both key-presses generated a unique tone was implemented. Here, rare omissions of the expected tone generated both early and late oN1 responses, by contrast to the condition in which two simultaneous action-effect representations had to be maintained, where only late oN1 responses were elicited. Finally, omission P3 (oP3) responses were strongly elicited for all omission types without differences, indicating that a general expectation based on a tone presentation (rather than which tone), is likely indexed at this stage. The present results emphasize the top-down effects of action intention on the sensory processing of omissions, where unspecific (vs. specific) and multiple (vs. single) action-effect representations are associated with processing costs at the early sensory levels.
Affiliation(s)
- Betina Korka
- Cognitive and Biological Psychology, Leipzig University, Leipzig, Germany
- Erich Schröger
- Cognitive and Biological Psychology, Leipzig University, Leipzig, Germany
- Andreas Widmann
- Cognitive and Biological Psychology, Leipzig University, Leipzig, Germany; Leibniz Institute for Neurobiology, Magdeburg, Germany
79
Foldal MD, Blenkmann AO, Llorens A, Knight RT, Solbakk AK, Endestad T. The brain tracks auditory rhythm predictability independent of selective attention. Sci Rep 2020; 10:7975. [PMID: 32409738] [PMCID: PMC7224206] [DOI: 10.1038/s41598-020-64758-y]
Abstract
The brain responds to violations of expected rhythms, due to the extraction and prediction of the temporal structure in auditory input. Yet, it is unknown how the probability of rhythm violations affects overall rhythm predictability. Another unresolved question is whether predictive processes are independent of attention processes. In this study, EEG was recorded while subjects listened to rhythmic sequences. Predictability was manipulated by changing the stimulus onset asynchrony (SOA) for given tones in the rhythm (SOA deviants). When SOA deviants were inserted rarely, predictability remained high, whereas predictability was lower with more frequent SOA deviants. Dichotic tone presentation allowed for independent manipulation of attention, as specific tones of the rhythm were presented to separate ears. Attention was manipulated by instructing subjects to attend to tones in one ear only, while keeping the rhythmic structure of tones constant. The analyses of event-related potentials revealed an attenuated N1 for tones when rhythm predictability was high, while the N1 was enhanced by attention to tones. Bayesian statistics revealed no interaction between predictability and attention. A right-lateralization of attention effects, but not predictability effects, suggested potentially different cortical processes. This is the first study to show that the probability of rhythm violation influences rhythm predictability, independent of attention.
Affiliation(s)
- Maja D Foldal
- Department of Psychology, University of Oslo, Oslo, Norway; RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Alejandro O Blenkmann
- Department of Psychology, University of Oslo, Oslo, Norway; RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Anaïs Llorens
- Department of Psychology, University of Oslo, Oslo, Norway; Department of Neurosurgery, Oslo University Hospital, Oslo, Norway; Department of Psychology and the Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, USA
- Robert T Knight
- Department of Psychology and the Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, USA
- Anne-Kristin Solbakk
- Department of Psychology, University of Oslo, Oslo, Norway; RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway; Department of Neurosurgery, Oslo University Hospital, Oslo, Norway; Department of Neuropsychology, Helgeland Hospital, Mosjøen, Norway
- Tor Endestad
- Department of Psychology, University of Oslo, Oslo, Norway; RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway; Department of Neuropsychology, Helgeland Hospital, Mosjøen, Norway
80
Heins N, Pomp J, Kluger DS, Trempler I, Zentgraf K, Raab M, Schubotz RI. Incidental or Intentional? Different Brain Responses to One's Own Action Sounds in Hurdling vs. Tap Dancing. Front Neurosci 2020; 14:483. [PMID: 32477059] [PMCID: PMC7237737] [DOI: 10.3389/fnins.2020.00483]
Abstract
Most human actions produce concomitant sounds. Action sounds can be either part of the action goal (GAS, goal-related action sounds), as for instance in tap dancing, or a mere by-product of the action (BAS, by-product action sounds), as for instance in hurdling. It is currently unclear whether these two types of action sounds (incidental or intentional) differ in their neural representation, and whether the impact on the performance evaluation of an action diverges between the two. We here examined whether, during the observation of tap dancing compared to hurdling, auditory information is a more important factor for positive action quality ratings. Moreover, we tested whether observation of tap dancing vs. hurdling led to stronger attenuation in primary auditory cortex, and to a stronger mismatch signal when sounds do not match our expectations. We recorded individual point-light videos of newly trained participants performing tap dancing and hurdling. In the subsequent functional magnetic resonance imaging (fMRI) session, participants were presented with the videos that displayed their own actions, including corresponding action sounds, and were asked to rate the quality of their performance. Videos were either in their original form or scrambled regarding the visual modality, the auditory modality, or both. As hypothesized, behavioral results showed significantly lower rating scores in the GAS condition compared to the BAS condition when the auditory modality was scrambled. Functional MRI contrasts between BAS and GAS actions revealed higher activation of primary auditory cortex in the BAS condition, speaking in favor of stronger attenuation in GAS, as well as stronger activation of posterior superior temporal gyri and the supplementary motor area in GAS. Results suggest that the processing of self-generated action sounds depends on whether or not we intend to produce a sound with our action, and that action sounds may be more prone to be used as sensory feedback when they are part of the explicit action goal. Our findings contribute to a better understanding of the function of action sounds for learning and controlling sound-producing actions.
Affiliation(s)
- Nina Heins
- Department of Psychology, University of Muenster, Münster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Münster, Germany
- Jennifer Pomp
- Department of Psychology, University of Muenster, Münster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Münster, Germany
- Daniel S. Kluger
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Münster, Germany
- Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster, Germany
- Ima Trempler
- Department of Psychology, University of Muenster, Münster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Münster, Germany
- Karen Zentgraf
- Department of Movement Science and Training in Sports, Institute of Sport Sciences, Goethe University Frankfurt, Frankfurt, Germany
- Markus Raab
- Department of Performance Psychology, Institute of Psychology, German Sport University Cologne, Cologne, Germany
- School of Applied Sciences, London South Bank University, London, United Kingdom
- Ricarda I. Schubotz
- Department of Psychology, University of Muenster, Münster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Münster, Germany
81
Norman LJ, Thaler L. Stimulus uncertainty affects perception in human echolocation: Timing, level, and spectrum. J Exp Psychol Gen 2020; 149:2314-2331. [PMID: 32324025] [PMCID: PMC7727089] [DOI: 10.1037/xge0000775]
Abstract
The human brain may use recent sensory experience to create sensory templates that are then compared to incoming sensory input, that is, "knowing what to listen for." This can lead to greater perceptual sensitivity, as long as the relevant properties of the target stimulus can be reliably estimated from past sensory experiences. Echolocation is an auditory skill probably best understood in bats, but humans can also echolocate. Here we investigated for the first time whether echolocation in humans involves the use of sensory templates derived from recent sensory experiences. Our results showed that when there was certainty in the acoustic properties of the echo relative to the emission, either in temporal onset, spectral content, or level, people detected the echo more accurately than when there was uncertainty. In addition, we found that people were more accurate when the emission's spectral content was certain but, surprisingly, not when either its level or temporal onset was certain. Importantly, the lack of an effect of temporal onset of the emission is counter to that found previously for tasks using nonecholocation sounds, suggesting that the underlying mechanisms might differ between echolocation and nonecholocation sounds. Moreover, the effects of stimulus certainty were no different for people with and without experience in echolocation, suggesting that stimulus-specific sensory templates can be used in a skill that people have never used before. From an applied perspective, our results suggest that echolocation instruction should encourage users to make clicks that are similar to one another in their spectral content.
82
Riecke L, Marianu IA, De Martino F. Effect of Auditory Predictability on the Human Peripheral Auditory System. Front Neurosci 2020; 14:362. [PMID: 32351361] [PMCID: PMC7174672] [DOI: 10.3389/fnins.2020.00362]
Abstract
Auditory perception is facilitated by prior knowledge about the statistics of the acoustic environment. Predictions about upcoming auditory stimuli are processed at various stages along the human auditory pathway, including the cortex and midbrain. Whether such auditory predictions are also processed at hierarchically lower stages, in the peripheral auditory system, is unclear. To address this question, we assessed outer hair cell (OHC) activity in response to isochronous tone sequences and varied the predictability and behavioral relevance of the individual tones (by manipulating tone-to-tone probabilities and the human participants' task, respectively). We found that predictability alters the amplitude of distortion-product otoacoustic emissions (DPOAEs, a measure of OHC activity) in a manner that depends on the behavioral relevance of the tones. Simultaneously recorded cortical responses showed a significant effect of both predictability and behavioral relevance of the tones, indicating that their experimental manipulations were effective in central auditory processing stages. Our results provide evidence for a top-down effect on the processing of auditory predictability in the human peripheral auditory system, in line with previous studies showing peripheral effects of auditory attention.
Affiliation(s)
- Lars Riecke
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Irina-Andreea Marianu
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States
83
Hu M, Wang D, Ji X, Yu T, Shan Y, Fan X, Du J, Zhang X, Zhao G, Wang Y, Ren L, Liégeois-Chauvel C. Neural processes of auditory perception in Heschl's gyrus for upcoming acoustic stimuli in humans. Hear Res 2020; 388:107895. [PMID: 31982643] [DOI: 10.1016/j.heares.2020.107895]
Abstract
In the natural environment, attended sounds tend to be perceived much better than unattended sounds. However, the physiological mechanism of how our neural systems direct the state of perceptual attention to prepare for the detection of upcoming acoustic stimuli before auditory stream segregation remains elusive. In this study, based on direct intracerebral recordings from the auditory cortex in eight epileptic patients with refractory focal seizures, we investigated the neural processing of auditory attention by comparing the local field potentials preceding the tone under 'attentional' and 'distracted' conditions. Here we first showed a distinct build-up of a slow, negative cortical potential in Heschl's gyrus. The amplitude increased steadily, starting from 600 to 800 ms before presentation of the tone until the onset of the evoked component P/N 60-80, when the patients were in the attentional condition. Because of their specific topographical distribution and modality-specific properties, we named these 'auditory preparatory potentials', which are also associated with increased gamma oscillations (30-150 Hz) and desynchronized low-frequency activity (below 30 Hz). Thus, our findings suggest that the auditory cortex is pre-activated to facilitate the perception of forthcoming sound events, and they contribute to the understanding of the neurophysiological mechanisms of auditory perception from a new perspective.
Affiliation(s)
- Minjing Hu
- Department of Neurology, Xuanwu Hospital, Capital Medical University, Beijing, China; Department of Neurology, Affiliated Hospital of Nantong University, Nantong, China
- Di Wang
- Department of Neurology, Xuanwu Hospital, Capital Medical University, Beijing, China
- Xuanxiu Ji
- Second Department of Geriatric Division, General Hospital of Jinan Military Region, Jinan, China
- Tao Yu
- Beijing Institute of Functional Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
- Yongzhi Shan
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
- Xiaotong Fan
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
- Jialin Du
- Department of Neurology, Xuanwu Hospital, Capital Medical University, Beijing, China
- Xiaohua Zhang
- Beijing Institute of Functional Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
- Guoguang Zhao
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
- Yuping Wang
- Department of Neurology, Xuanwu Hospital, Capital Medical University, Beijing, China
- Liankun Ren
- Department of Neurology, Xuanwu Hospital, Capital Medical University, Beijing, China
- Catherine Liégeois-Chauvel
- Aix Marseille Université, Inserm, Institut des Neurosciences des Systemes, Marseille, France; Cleveland Clinic Neurological Institute, Epilepsy Center, Cleveland, OH, USA
84
The First 250 ms of Auditory Processing: No Evidence of Early Processing Negativity in the Go/NoGo Task. Sci Rep 2020; 10:4041. [PMID: 32132630] [PMCID: PMC7055275] [DOI: 10.1038/s41598-020-61060-9]
Abstract
Past evidence of an early Processing Negativity in auditory Go/NoGo event-related potential (ERP) data suggests that young adults proactively process sensory information in two-choice tasks. This study aimed to clarify the occurrence of Go/NoGo Processing Negativity and investigate the ERP component series related to the first 250 ms of auditory processing in two Go/NoGo tasks differing in target probability. ERP data related to each task were acquired from 60 healthy young adults (M = 20.4, SD = 3.1 years). Temporal principal components analyses were used to decompose ERP data in each task. Statistical analyses compared component amplitudes between stimulus type (Go vs. NoGo) and probability (High vs. Low). Neuronal source localisation was also conducted for each component. Processing Negativity was not evident; however, P1, N1a, N1b, and N1c were identified in each task, with Go P2 and NoGo N2b. The absence of Processing Negativity in this study indicated that young adults do not proactively process targets to complete the Go/NoGo task and/or questioned Processing Negativity’s conceptualisation. Additional analyses revealed stimulus-specific processing as early as P1, and outlined a complex network of active neuronal sources underlying each component, providing useful insight into Go and NoGo information processing in young adults.
85
Neural correlates of auditory sensory memory dynamics in the aging brain. Neurobiol Aging 2020; 88:128-136. [PMID: 32035848] [DOI: 10.1016/j.neurobiolaging.2019.12.020]
Abstract
The auditory system allows us to monitor background environmental sound patterns and recognize deviations that may indicate opportunities or threats. The mismatch negativity and P3a potentials have generators in the auditory and inferior frontal cortex and index expected sound patterns (standards) and any aberrations (deviants). The mismatch negativity and P3a waveforms show increased positivity for consecutive standards and for deviants preceded by more standards. We hypothesized attenuated repetition effects in older participants, potentially because of differences in prefrontal functions. Young (23 ± 5 years) and older (75 ± 5 years) adults were tested in 2 oddball paradigms with pitch or location deviants. Significant repetition effects were observed in the young standard and deviant waveforms at multiple time windows. Except for the earliest time window (30-100 ms), repetition effects were absent in the older group. Repetition effects were significant at frontal but not temporal lobe sites and did not differ between pitch and location deviants. However, P3a repetition effects were evident in both age groups. Findings suggest age differences in the dynamic updating of sensory memory for background sound patterns.
86
Fogarty JS, Barry RJ, Steiner GZ. Auditory stimulus- and response-locked ERP components and behavior. Psychophysiology 2020; 57:e13538. [PMID: 32010995] [DOI: 10.1111/psyp.13538]
Abstract
To clarify the functional significance of Go event-related potential (ERP) components, this study aimed to explore stimulus- and response-locked ERP averaging effects on the series of ERP components elicited during an auditory Go/NoGo task. Go stimulus- and response-locked ERP data from 126 healthy young adults (mean age = 20.3, SD = 2.8 years, 83 female) were decomposed using temporal principal components analysis (PCA). The extracted components were then identified as stimulus-specific, response-specific, or common to both stimulus- and response-locked data. MANOVAs were then used to test for stimulus- versus response-locked averaging effects on common component amplitudes to determine their primary functional significance (i.e., stimulus- or response-related). Go stimulus- and response-related component amplitudes were then entered into stepwise linear regressions predicting reaction time (RT), RT variability, and omission errors. Nine ERP components were extracted from the stimulus- and response-locked data, including N1-1, processing negativity (PN), P2, response-related N2 (RN2), motor potential (MP), P3b, P420, and two slow wave components: SW1 and SW2. N1-1, PN, and P2 were stimulus-specific, whereas RN2, MP, and P420 were response-specific; P3b, SW1, and SW2 were common to both data sets. P3b, SW1, and SW2 were significantly larger in the response-locked data, indicating that they were primarily response-related. RT, RT variability, and omission errors were predicted by various stimulus- and response-related components, providing further insight into ERP markers of auditory information processing and cognitive control. Further, the results of this study indicate the utility of quantifying some common components (i.e., Go P3b, SW1, and SW2) using the response-locked ERP.
Collapse
Affiliation(s)
- Jack S Fogarty
- Brain & Behaviour Research Institute, School of Psychology, University of Wollongong, Wollongong, NSW, Australia
| | - Robert J Barry
- Brain & Behaviour Research Institute, School of Psychology, University of Wollongong, Wollongong, NSW, Australia
| | - Genevieve Z Steiner
- Brain & Behaviour Research Institute, School of Psychology, University of Wollongong, Wollongong, NSW, Australia.,NICM Health Research Institute, Western Sydney University, Penrith, NSW, Australia.,Translational Health Research Institute (THRI), Western Sydney University, Penrith, NSW, Australia
| |
Collapse
|
87
|
Hsu YF, Xu W, Parviainen T, Hämäläinen JA. Context-dependent minimisation of prediction errors involves temporal-frontal activation. Neuroimage 2020; 207:116355. [DOI: 10.1016/j.neuroimage.2019.116355] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2019] [Revised: 10/16/2019] [Accepted: 11/11/2019] [Indexed: 10/25/2022] Open
|
88
|
Congruency of intervening events and self-induced action influence prediction of final results. Exp Brain Res 2020; 238:575-586. [PMID: 31993684 PMCID: PMC7142040 DOI: 10.1007/s00221-020-05735-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2019] [Accepted: 01/16/2020] [Indexed: 11/26/2022]
Abstract
Predicting self-induced stimuli is easier than predicting externally produced ones, and the amplitude of event-related brain potentials (ERPs) elicited by self-induced stimuli is smaller than that elicited by externally produced ones. Previous studies reported that these phenomena were strongest when stimuli were presented immediately after a self-induced action. To be able to adapt to changes, however, it is necessary to predict not only an event that follows a self-induced action but also a subsequent final result. We investigated whether congruency among self-induced actions, intervening events, and final results influences the processing of final results. The congruency of an intervening event with the self-induced action was task-irrelevant information for the required response to a final result. The results showed that the P1 amplitude elicited by the final result (i.e., a somatosensory stimulus) was smaller when an intervening event was congruent with the self-induced action than in the other conditions. This suggests that the congruency of an intervening event and a self-induced action may facilitate prediction of a final result, even when this congruency is irrelevant to the ongoing task.
Collapse
|
89
|
Bouwer FL, Honing H, Slagter HA. Beat-based and Memory-based Temporal Expectations in Rhythm: Similar Perceptual Effects, Different Underlying Mechanisms. J Cogn Neurosci 2020; 32:1221-1241. [PMID: 31933432 DOI: 10.1162/jocn_a_01529] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Predicting the timing of incoming information allows the brain to optimize information processing in dynamic environments. Behaviorally, temporal expectations have been shown to facilitate processing of events at expected time points, such as sounds that coincide with the beat in musical rhythm. Yet, temporal expectations can develop based on different forms of structure in the environment, not just the regularity afforded by a musical beat. Little is still known about how different types of temporal expectations are neurally implemented and affect performance. Here, we orthogonally manipulated the periodicity and predictability of rhythmic sequences to examine the mechanisms underlying beat-based and memory-based temporal expectations, respectively. Behaviorally and using EEG, we looked at the effects of beat-based and memory-based expectations on auditory processing when rhythms were task-relevant or task-irrelevant. At expected time points, both beat-based and memory-based expectations facilitated target detection and led to attenuation of P1 and N1 responses, even when expectations were task-irrelevant (unattended). For beat-based expectations, we additionally found reduced target detection and enhanced N1 responses for events at unexpected time points (e.g., off-beat), regardless of the presence of memory-based expectations or task relevance. This latter finding supports the notion that periodicity selectively induces rhythmic fluctuations in neural excitability and furthermore indicates that, although beat-based and memory-based expectations may similarly affect auditory processing of expected events, their underlying neural mechanisms may be different.
Collapse
|
90
|
Jenson D, Bowers AL, Hudock D, Saltuklaroglu T. The Application of EEG Mu Rhythm Measures to Neurophysiological Research in Stuttering. Front Hum Neurosci 2020; 13:458. [PMID: 31998103 PMCID: PMC6965028 DOI: 10.3389/fnhum.2019.00458] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2019] [Accepted: 12/13/2019] [Indexed: 11/29/2022] Open
Abstract
Deficits in basal ganglia-based inhibitory and timing circuits along with sensorimotor internal modeling mechanisms are thought to underlie stuttering. However, much remains to be learned regarding the precise manner in which these deficits contribute to disrupting both speech and cognitive functions in those who stutter. Herein, we examine the suitability of electroencephalographic (EEG) mu rhythms for addressing these deficits. We review some previous findings of mu rhythm activity differentiating stuttering from non-stuttering individuals and present some new preliminary findings capturing stuttering-related deficits in working memory. Mu rhythms are characterized by spectral peaks in alpha (8-13 Hz) and beta (14-25 Hz) frequency bands (mu-alpha and mu-beta). They emanate from premotor/motor regions and are influenced by basal ganglia and sensorimotor function. More specifically, alpha peaks (mu-alpha) are sensitive to basal ganglia-based inhibitory signals and sensory-to-motor feedback. Beta peaks (mu-beta) are sensitive to changes in timing and capture motor-to-sensory (i.e., forward model) projections. Observing simultaneous changes in mu-alpha and mu-beta across the time course of specific events offers a rich window onto neurophysiological deficits associated with stuttering in both speech and cognitive tasks and can clarify the functional relationship between these symptoms. We review how independent component analysis (ICA) can extract mu rhythms from raw EEG signals in speech production tasks, such that changes in alpha and beta power are mapped to myogenic activity from articulators. We review findings from speech production and auditory discrimination tasks demonstrating that mu-alpha and mu-beta are highly sensitive to capturing sensorimotor and basal ganglia deficits associated with stuttering with high temporal precision. Novel findings from a non-word repetition (working memory) task are also included. They show reduced mu-alpha suppression in a stuttering group compared to a typically fluent group. Finally, we review current limitations and directions for future research.
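The mu-alpha (8-13 Hz) and mu-beta (14-25 Hz) bands discussed in this abstract can be illustrated with a simple FFT-based band-power sketch on a synthetic signal (the study itself relied on ICA source decomposition, which this toy example does not reproduce; all values here are illustrative):

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean power spectral density of `signal` between lo and hi Hz."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

fs = 250  # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
# Synthetic "mu" signal: strong 10 Hz (alpha) plus weaker 20 Hz (beta)
eeg = (np.sin(2 * np.pi * 10 * t)
       + 0.3 * np.sin(2 * np.pi * 20 * t)
       + 0.1 * rng.standard_normal(t.size))

mu_alpha = band_power(eeg, fs, 8, 13)   # mu-alpha band: 8-13 Hz
mu_beta = band_power(eeg, fs, 14, 25)   # mu-beta band: 14-25 Hz
```

In practice, suppression effects like those reported above are computed as event-related changes in such band power relative to a baseline window.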
Collapse
Affiliation(s)
- David Jenson
- Department of Speech and Hearing Sciences, Elson S. Floyd College of Medicine, Washington State University, Spokane, WA, United States
| | - Andrew L. Bowers
- Epley Center for Health Professions, Communication Sciences and Disorders, University of Arkansas, Fayetteville, AR, United States
| | - Daniel Hudock
- Department of Communication Sciences and Disorders, Idaho State University, Pocatello, ID, United States
| | - Tim Saltuklaroglu
- College of Health Professions, Department of Audiology and Speech-Pathology, University of Tennessee Health Science Center, Knoxville, TN, United States
| |
Collapse
|
91
|
Korka B, Schröger E, Widmann A. Action Intention-based and Stimulus Regularity-based Predictions: Same or Different? J Cogn Neurosci 2019; 31:1917-1932. [PMID: 31393234 DOI: 10.1162/jocn_a_01456] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
We act on the environment to produce desired effects, but we also adapt to the environmental demands by learning what to expect next, based on experience: How do action-based predictions and sensory predictions relate to each other? We explore this by implementing a self-generation oddball paradigm, where participants performed random sequences of left and right button presses to produce frequent standard and rare deviant tones. By manipulating the action–tone association as well as the likelihood of one button press over the other, we compare ERP effects evoked by the intention to produce a specific tone, tone regularity, and both intention and regularity. We show that the N1b and Tb components of the N1 response are modulated by violations of tone regularity only. However, violations of action intention as well as of regularity elicit MMN responses, which occur similarly in all three conditions. Regardless of whether the predictions at sensory levels were based on intention, regularity, or both, the tone deviance was further and equally well detected at a hierarchically higher processing level, as reflected in similar P3a effects across conditions. We did not observe additive prediction errors when intention and regularity were violated concurrently, suggesting the two integrate despite presumably having independent generators. Even though they are often discussed as individual prediction sources in the literature, this study is, to our knowledge, the first to directly compare them. Finally, these results show how, in the context of action, our brain can easily switch between top–down intention-based expectations and bottom–up regularity cues to efficiently predict future events.
Collapse
Affiliation(s)
| | | | - Andreas Widmann
- University of Leipzig
- Leibniz Institute for Neurobiology, Magdeburg, Germany
| |
Collapse
|
92
|
George N, Sunny MM. Challenges to the Modularity Thesis Under the Bayesian Brain Models. Front Hum Neurosci 2019; 13:353. [PMID: 31649518 PMCID: PMC6796786 DOI: 10.3389/fnhum.2019.00353] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2019] [Accepted: 09/23/2019] [Indexed: 11/13/2022] Open
Abstract
The modularity assumption is central to most theoretical and empirical approaches in cognitive science. The Bayesian Brain (BB) models are a class of neuro-computational models that aim to ground perception, cognition, and action under a single computational principle of prediction-error minimization. It is argued that the proposals of BB models contradict the modular nature of mind, as the modularity assumption entails computational separation of individual modules. This review examines how BB models address the assumption of modularity. Empirical evidence of top-down influence on early sensory processes is often cited as a case against the modularity thesis. In the modularity thesis, such top-down effects are attributed to attentional modulation of the output of an early, impenetrable stage of sensory processing. This attentional-mediation argument defends the modularity thesis. We analyse the argument using the novel conception of attention in the BB models. We attempt to reconcile the classical bottom-up vs. top-down dichotomy of information processing within the information passing scheme of the BB models. Theoretical considerations and empirical findings associated with BB models that address the modularity assumption are reviewed. Further, we examine the modularity of perceptual and motor systems.
Collapse
Affiliation(s)
- Nithin George
- Centre for Cognitive Science, Indian Institute of Technology Gandhinagar, Gandhinagar, India
| | - Meera Mary Sunny
- Centre for Cognitive Science, Indian Institute of Technology Gandhinagar, Gandhinagar, India
| |
Collapse
|
93
|
Auditory mismatch detection, distraction, and attentional reorientation (MMN-P3a-RON) in neurological and psychiatric disorders: A review. Int J Psychophysiol 2019; 146:85-100. [PMID: 31654696 DOI: 10.1016/j.ijpsycho.2019.09.010] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2019] [Revised: 09/26/2019] [Accepted: 09/27/2019] [Indexed: 12/14/2022]
Abstract
Involuntary attention allows for the detection and processing of novel and potentially relevant stimuli that lie outside of cognitive focus. These processes comprise change detection in sensory contexts, automatic orientation toward this change, and the selection of adaptive responses, including reorientation to the original goal in cases when the detected change is not relevant for task demands. These processes have been studied using the Event-Related Potential (ERP) technique and have been associated with the Mismatch Negativity (MMN), the P3a, and the Reorienting Negativity (RON) electrophysiological components, respectively. This has allowed for the objective evaluation of the impact of different neuropsychiatric pathologies on involuntary attention. Additionally, these ERPs have been proposed as alternative measures for the early detection of disease and the tracking of its progression. The objective of this review was to integrate the results reported to date about MMN, P3a, and RON in different neurological and psychiatric disorders. We included experimental studies with clinical populations that reported at least two of these three components in the same experimental paradigm. Overall, involuntary attention seems to reflect the state of cognitive integrity in different pathologies in adults. However, if the main goal for these ERPs is to use them as biomarkers, more research about their pathophysiological specificity in each disorder is needed, as well as improvement in the general experimental conditions under which these components are elicited. Nevertheless, these ERPs represent a valuable neurophysiological tool for early detection and follow-up of diverse clinical populations.
Collapse
|
94
|
Tavano A, Schröger E, Kotz SA. Beta power encodes contextual estimates of temporal event probability in the human brain. PLoS One 2019; 14:e0222420. [PMID: 31557168 PMCID: PMC6762064 DOI: 10.1371/journal.pone.0222420] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2019] [Accepted: 08/29/2019] [Indexed: 12/30/2022] Open
Abstract
To prepare for an impending event of unknown temporal distribution, humans internally increase the perceived probability of event onset as time elapses. This effect is termed the hazard rate of events. We tested how the neural encoding of hazard rate changes by providing human participants with prior information on temporal event probability. We recorded behavioral and electroencephalographic (EEG) data while participants listened to continuously repeating five-tone sequences, composed of four standard tones followed by a non-target deviant tone, delivered at slow (1.6 Hz) or fast (4 Hz) rates. The task was to detect a rare target tone, which equiprobably appeared at either position two, three or four of the repeating sequence. In this design, potential target position acts as a proxy for elapsed time. For participants uninformed about the target's distribution, elapsed time to uncertain target onset increased response speed, displaying a significant hazard rate effect at both slow and fast stimulus rates. However, only in fast sequences did prior information about the target's temporal distribution interact with elapsed time, suppressing the hazard rate. Importantly, in the fast, uninformed condition pre-stimulus power synchronization in the beta band (Beta 1, 15-19 Hz) predicted the hazard rate of response times. Prior information suppressed pre-stimulus power synchronization in the same band, while still significantly predicting response times. We conclude that Beta 1 power does not simply encode the hazard rate, but, more generally, internal estimates of temporal event probability based upon contextual information.
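The hazard rate described in this abstract (the probability that the event occurs at a given step, given that it has not yet occurred) can be sketched directly for a discrete distribution like the study's equiprobable target positions; this is a generic illustration of the formula, not the authors' analysis code:

```python
import numpy as np

def hazard_rate(p):
    """Discrete hazard function: h[t] = p[t] / (1 - sum(p[:t])),
    i.e., event probability at step t given survival up to t."""
    p = np.asarray(p, dtype=float)
    # Survival probability just before each step
    survival = 1.0 - np.concatenate(([0.0], np.cumsum(p)[:-1]))
    return p / survival

# Target equiprobable at positions two, three, or four (as in the design)
p = [1/3, 1/3, 1/3]
h = hazard_rate(p)
```

Even with a flat prior distribution, the hazard climbs from 1/3 to 1/2 to 1 as candidate positions pass, which is why response speed increases with elapsed time in the uninformed condition.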
Collapse
Affiliation(s)
- Alessandro Tavano
- BioCog, Cognitive Incl. Biological Psychology, Institute of Psychology, University of Leipzig, Leipzig, Germany
- Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
| | - Erich Schröger
- BioCog, Cognitive Incl. Biological Psychology, Institute of Psychology, University of Leipzig, Leipzig, Germany
| | - Sonja A. Kotz
- Department of Neuropsychology, Max-Planck-Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, The Netherlands
| |
Collapse
|
95
|
Dogge M, Custers R, Aarts H. Moving Forward: On the Limits of Motor-Based Forward Models. Trends Cogn Sci 2019; 23:743-753. [DOI: 10.1016/j.tics.2019.06.008] [Citation(s) in RCA: 41] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2019] [Revised: 06/21/2019] [Accepted: 06/27/2019] [Indexed: 01/26/2023]
|
96
|
Klaffehn AL, Baess P, Kunde W, Pfister R. Sensory attenuation prevails when controlling for temporal predictability of self- and externally generated tones. Neuropsychologia 2019; 132:107145. [DOI: 10.1016/j.neuropsychologia.2019.107145] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2019] [Revised: 07/11/2019] [Accepted: 07/12/2019] [Indexed: 10/26/2022]
|
97
|
Schmidt-Kassow M, Thöne K, Kaiser J. Auditory-motor coupling affects phonetic encoding. Brain Res 2019; 1716:39-49. [PMID: 29191770 DOI: 10.1016/j.brainres.2017.11.022] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2017] [Revised: 10/24/2017] [Accepted: 11/21/2017] [Indexed: 10/18/2022]
Abstract
Recent studies have shown that moving in synchrony with auditory stimuli boosts attention allocation and verbal learning. Furthermore, rhythmic tones are processed more efficiently than temporally random tones ('timing effect'), and this effect is increased when participants actively synchronize their motor performance with the rhythm of the tones, resulting in auditory-motor synchronization. Here, we investigated whether this also applies to sequences of linguistic stimuli (syllables). We compared temporally irregular syllable sequences with two temporally regular conditions where either the interval between syllable onsets (stimulus onset asynchrony, SOA) or the interval between the syllables' vowel onsets was kept constant. Entrainment to the stimulus presentation frequency (1 Hz) and event-related potentials were assessed in 24 adults who were instructed to detect pre-defined deviant syllables while they either pedaled or sat still on a stationary exercise bike. We found larger 1 Hz entrainment and P300 amplitudes for the SOA presentation during motor activity. Furthermore, the magnitude of the P300 component correlated with motor variability in the SOA condition and with 1 Hz entrainment, while in turn 1 Hz entrainment correlated with auditory-motor synchronization performance. These findings demonstrate that acute auditory-motor coupling facilitates phonetic encoding.
Collapse
Affiliation(s)
| | - Katharina Thöne
- Institute of Medical Psychology, Goethe University, Frankfurt, Germany
| | - Jochen Kaiser
- Institute of Medical Psychology, Goethe University, Frankfurt, Germany
| |
Collapse
|
98
|
Quiroga-Martinez DR, Hansen NC, Højlund A, Pearce MT, Brattico E, Vuust P. Reduced prediction error responses in high- as compared to low-uncertainty musical contexts. Cortex 2019; 120:181-200. [PMID: 31323458 DOI: 10.1016/j.cortex.2019.06.010] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2018] [Revised: 04/05/2019] [Accepted: 06/19/2019] [Indexed: 02/05/2023]
Abstract
Theories of predictive processing propose that prediction error responses are modulated by the certainty of the predictive model or precision. While there is some evidence for this phenomenon in the visual and, to a lesser extent, the auditory modality, little is known about whether it operates in the complex auditory contexts of daily life. Here, we examined how prediction error responses behave in a more complex and ecologically valid auditory context than those typically studied. We created musical tone sequences with different degrees of pitch uncertainty to manipulate the precision of participants' auditory expectations. Magnetoencephalography was used to measure the magnetic counterpart of the mismatch negativity (MMNm) as a neural marker of prediction error in a multi-feature paradigm. Pitch, slide, intensity and timbre deviants were included. We compared high-entropy stimuli, consisting of a set of non-repetitive melodies, with low-entropy stimuli consisting of a simple, repetitive pitch pattern. Pitch entropy was quantitatively assessed with an information-theoretic model of auditory expectation. We found a reduction in pitch and slide MMNm amplitudes in the high-entropy as compared to the low-entropy context. No significant differences were found for intensity and timbre MMNm amplitudes. Furthermore, in a separate behavioral experiment investigating the detection of pitch deviants, similar decreases were found for accuracy measures in response to more fine-grained increases in pitch entropy. Our results are consistent with a precision modulation of auditory prediction error in a musical context, and suggest that this effect is specific to features that depend on the manipulated dimension (pitch information, in this case).
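The entropy contrast described in this abstract can be illustrated with a much simpler zeroth-order Shannon entropy sketch (the study used a context-sensitive model of auditory expectation; this toy version only counts symbol frequencies, and the example sequences are invented):

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Zeroth-order Shannon entropy (in bits) of a symbol sequence."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Low entropy: a simple, repetitive two-pitch pattern
low_entropy = ["C", "G"] * 8
# High entropy: a non-repetitive spread over eight pitches
high_entropy = ["C", "D", "E", "F", "G", "A", "B", "C5"] * 2

h_low = shannon_entropy(low_entropy)    # 1 bit: two equiprobable pitches
h_high = shannon_entropy(high_entropy)  # 3 bits: eight equiprobable pitches
```

Under precision-weighted predictive processing, deviants embedded in the high-entropy sequence should elicit smaller prediction error responses, matching the reduced MMNm amplitudes reported above.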
Collapse
Affiliation(s)
| | - Niels C Hansen
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Australia
| | - Andreas Højlund
- Center for Functionally Integrative Neuroscience, Aarhus University, Denmark
| | - Marcus T Pearce
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music, Denmark; School of Electronic Engineering and Computer Science, Queen Mary University of London, UK
| | - Elvira Brattico
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music, Denmark
| | - Peter Vuust
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music, Denmark
| |
Collapse
|
99
|
Gordon N, Hohwy J, Davidson MJ, van Boxtel JJA, Tsuchiya N. From intermodulation components to visual perception and cognition-a review. Neuroimage 2019; 199:480-494. [PMID: 31173903 DOI: 10.1016/j.neuroimage.2019.06.008] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2018] [Revised: 04/15/2019] [Accepted: 06/03/2019] [Indexed: 01/27/2023] Open
Abstract
Perception results from complex interactions among sensory and cognitive processes across hierarchical levels in the brain. Intermodulation (IM) components, used in frequency tagging neuroimaging designs, have emerged as a promising direct measure of such neural interactions. IMs have initially been used in electroencephalography (EEG) to investigate low-level visual processing. In a more recent trend, IMs in EEG and other neuroimaging methods are being used to shed light on mechanisms of mid- and high-level perceptual processes, including the involvement of cognitive functions such as attention and expectation. Here, we provide an account of various mechanisms that may give rise to IMs in neuroimaging data, and what these IMs may look like. We discuss methodologies that can be implemented for different uses of IMs and we demonstrate how IMs can provide insights into the existence, the degree and the type of neural integration mechanisms at hand. We then review a range of recent studies exploiting IMs in visual perception research, placing an emphasis on high-level vision and the influence of awareness and cognition on visual processing. We conclude by suggesting future directions that can enhance the benefits of IM-methodology in perception research.
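The origin of IM components in nonlinear interactions between frequency-tagged inputs can be sketched numerically: a multiplicative interaction between signals at f1 and f2 produces spectral energy at f1 ± f2 (a synthetic illustration of the principle, not an analysis of neural data; frequencies are arbitrary):

```python
import numpy as np

fs, dur = 500, 10.0            # sampling rate (Hz) and duration (s)
t = np.arange(0, dur, 1 / fs)
f1, f2 = 7.0, 12.0             # two "tagging" frequencies

# Purely linear superposition contains energy only at f1 and f2
linear = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
# A multiplicative (nonlinear) interaction adds intermodulation terms
nonlinear = linear + 0.5 * (np.sin(2 * np.pi * f1 * t)
                            * np.sin(2 * np.pi * f2 * t))

freqs = np.fft.rfftfreq(t.size, 1 / fs)
amp = np.abs(np.fft.rfft(nonlinear)) / t.size

def amp_at(f):
    """Spectral amplitude at the bin nearest frequency f."""
    return amp[np.argmin(np.abs(freqs - f))]

# IM components appear at f2 - f1 (5 Hz) and f1 + f2 (19 Hz)
im_diff, im_sum = amp_at(f2 - f1), amp_at(f1 + f2)
baseline = amp_at(9.5)  # a frequency carrying no stimulus or IM energy
```

Because sin(a)·sin(b) expands to cosines at a − b and a + b, the IM peaks fall exactly at the difference and sum frequencies, which is why their presence is taken as direct evidence of neural integration of the two tagged inputs.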
Collapse
Affiliation(s)
- Noam Gordon
- Cognition and Philosophy Lab, Philosophy Department, Monash University, Clayton VIC, 3800, Australia.
| | - Jakob Hohwy
- Cognition and Philosophy Lab, Philosophy Department, Monash University, Clayton VIC, 3800, Australia
| | - Matthew James Davidson
- Monash Institute of Cognitive and Clinical Neurosciences, Monash University, Clayton VIC, 3800, Australia; School of Psychological Sciences, Monash University, Clayton VIC, 3800, Australia
| | - Jeroen J A van Boxtel
- Monash Institute of Cognitive and Clinical Neurosciences, Monash University, Clayton VIC, 3800, Australia; School of Psychological Sciences, Monash University, Clayton VIC, 3800, Australia; School of Psychology, Faculty of Health, University of Canberra, Canberra, Australia
| | - Naotsugu Tsuchiya
- Monash Institute of Cognitive and Clinical Neurosciences, Monash University, Clayton VIC, 3800, Australia; School of Psychological Sciences, Monash University, Clayton VIC, 3800, Australia; ATR Computational Neuroscience Laboratories, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0288, Japan; Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT), Suita, Osaka 565-0871, Japan
| |
Collapse
|
100
|
Getz LM, Toscano JC. Electrophysiological Evidence for Top-Down Lexical Influences on Early Speech Perception. Psychol Sci 2019; 30:830-841. [PMID: 31018103 DOI: 10.1177/0956797619841813] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
An unresolved issue in speech perception concerns whether top-down linguistic information influences perceptual responses. We addressed this issue using the event-related-potential technique in two experiments that measured cross-modal sequential-semantic priming effects on the auditory N1, an index of acoustic-cue encoding. Participants heard auditory targets (e.g., "potatoes") following associated visual primes (e.g., "MASHED"), neutral visual primes (e.g., "FACE"), or a visual mask (e.g., "XXXX"). Auditory targets began with voiced (/b/, /d/, /g/) or voiceless (/p/, /t/, /k/) stop consonants, an acoustic difference known to yield differences in N1 amplitude. In Experiment 1 (N = 21), semantic context modulated responses to upcoming targets, with smaller N1 amplitudes for semantic associates. In Experiment 2 (N = 29), semantic context changed how listeners encoded sounds: Ambiguous voice-onset times were encoded similarly to the voicing end point elicited by semantic associates. These results are consistent with an interactive model of spoken-word recognition that includes top-down effects on early perception.
Collapse
Affiliation(s)
- Laura M Getz
- Department of Psychological and Brain Sciences, Villanova University
| | - Joseph C Toscano
- Department of Psychological and Brain Sciences, Villanova University
| |
Collapse
|