301. Eye movements, visual search and scene memory, in an immersive virtual environment. PLoS One 2014;9:e94362. PMID: 24759905; PMCID: PMC3997357; DOI: 10.1371/journal.pone.0094362.
Abstract
Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, by contrast, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, as in two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time and use spatial memory to guide search. Incidental fixations did not provide an obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of experience in the environment, previous search items changed in color. These items were fixated with increased probability relative to control objects, suggesting that memory-guided prioritization (or surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency.
302. Variability in neural activity and behavior. Curr Opin Neurobiol 2014;25:211-20. DOI: 10.1016/j.conb.2014.02.013.
303. Gershman SJ. The penumbra of learning: a statistical theory of synaptic tagging and capture. Network (Bristol, England) 2014;25:97-115. PMID: 24679103; DOI: 10.3109/0954898x.2013.862749.
Abstract
Learning in humans and animals is accompanied by a penumbra: Learning one task benefits from learning an unrelated task shortly before or after. At the cellular level, the penumbra of learning appears when weak potentiation of one synapse is amplified by strong potentiation of another synapse on the same neuron during a critical time window. Weak potentiation sets a molecular tag that enables the synapse to capture plasticity-related proteins synthesized in response to strong potentiation at another synapse. This paper describes a computational model which formalizes synaptic tagging and capture in terms of statistical learning mechanisms. According to this model, synaptic strength encodes a probabilistic inference about the dynamically changing association between pre- and post-synaptic firing rates. The rate of change is itself inferred, coupling together different synapses on the same neuron. When the inputs to one synapse change rapidly, the inferred rate of change increases, amplifying learning at other synapses.
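The coupling mechanism this abstract describes can be caricatured in a few lines (a toy sketch with parameters of my choosing, not the paper's actual inference model): two synaptic weights share an inferred rate of change, so a large, rapid change at one synapse raises the effective learning rate at the other.

```python
import numpy as np

# Toy sketch of the coupling idea: each synapse tracks its target weight
# with a gain set by a shared volatility estimate. A strong surprise at
# synapse A inflates the shared volatility, so synapse B also learns
# faster -- a statistical analogue of tagging-and-capture.
def simulate(strong_input_at_a):
    w = np.zeros(2)                  # weights of synapses A and B
    vol = 0.05                       # shared estimate of the rate of change
    targets = np.array([1.0 if strong_input_at_a else 0.0, 0.3])
    for _ in range(5):
        err = targets - w
        vol = 0.9 * vol + 0.1 * np.abs(err).mean()  # volatility tracks surprise
        gain = vol / (vol + 0.5)                    # Kalman-like learning rate
        w += gain * err
    return w

w_with = simulate(strong_input_at_a=True)
w_without = simulate(strong_input_at_a=False)
# Synapse B's weak target (0.3) is approached faster when A is strongly
# potentiated, even though B's own input is identical in both runs.
```

All constants here (volatility decay, gain mapping, number of steps) are illustrative assumptions, chosen only to make the qualitative effect visible.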
Affiliation(s)
- Samuel J Gershman, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
304. Dayan P. Rationalizable irrationalities of choice. Top Cogn Sci 2014;6:204-28. PMID: 24648392; DOI: 10.1111/tops.12082.
Abstract
Although seemingly irrational choice abounds, the rules governing these missteps are unclear, even though they might provide hints about the factors that limit normative behavior. We consider three experimental tasks that probe different aspects of non-normative choice under uncertainty. We argue for systematic statistical, algorithmic, and implementational sources of irrationality, including incomplete evaluation of long-run future utilities, Pavlovian actions, and habits, together with computational and statistical noise and uncertainty. We suggest structural and functional adaptations that minimize their maladaptive effects.
Affiliation(s)
- Peter Dayan, Gatsby Computational Neuroscience Unit, University College London
305. Bayesian integration of information in hippocampal place cells. PLoS One 2014;9:e89762. PMID: 24603429; PMCID: PMC3945610; DOI: 10.1371/journal.pone.0089762.
Abstract
Accurate spatial localization requires a mechanism that corrects for errors, which might arise from inaccurate sensory information or neuronal noise. In this paper, we propose that hippocampal place cells might implement such an error-correction mechanism by integrating different sources of information in an approximately Bayes-optimal fashion. We compare the predictions of our model with physiological data from rats. Our results suggest that useful predictions regarding the firing fields of place cells can be made from a single underlying principle, Bayesian cue integration, and that such predictions are possible using a remarkably small number of model parameters.
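The cue-integration principle at the heart of this abstract can be illustrated with the standard precision-weighted combination of independent Gaussian cues (a generic sketch of Bayesian cue integration, not the paper's fitted model; the example cue values are hypothetical):

```python
import numpy as np

def combine_cues(means, sigmas):
    """Bayes-optimal fusion of independent Gaussian cues: the posterior
    mean is a precision-weighted average, and the posterior variance is
    the inverse of the summed precisions."""
    precisions = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    post_var = 1.0 / precisions.sum()
    post_mean = post_var * (precisions * np.asarray(means, dtype=float)).sum()
    return post_mean, np.sqrt(post_var)

# e.g. a precise landmark cue and a noisier path-integration cue for position
mean, sd = combine_cues(means=[0.2, 0.5], sigmas=[0.1, 0.3])
```

The combined estimate lands closer to the more reliable cue, and its uncertainty is smaller than that of either cue alone, which is the signature of Bayes-optimal integration.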
306. Savin C, Dayan P, Lengyel M. Optimal recall from bounded metaplastic synapses: predicting functional adaptations in hippocampal area CA3. PLoS Comput Biol 2014;10:e1003489. PMID: 24586137; PMCID: PMC3937414; DOI: 10.1371/journal.pcbi.1003489.
Abstract
A venerable history of classical work on autoassociative memory has significantly shaped our understanding of several features of the hippocampus, and most prominently of its CA3 area, in relation to memory storage and retrieval. However, existing theories of hippocampal memory processing ignore a key biological constraint affecting memory storage in neural circuits: the bounded dynamical range of synapses. Recent treatments based on the notion of metaplasticity provide a powerful model for individual bounded synapses; however, their implications for the ability of the hippocampus to retrieve memories well and the dynamics of neurons associated with that retrieval are both unknown. Here, we develop a theoretical framework for memory storage and recall with bounded synapses. We formulate the recall of a previously stored pattern from a noisy recall cue and limited-capacity (and therefore lossy) synapses as a probabilistic inference problem, and derive neural dynamics that implement approximate inference algorithms to solve this problem efficiently. In particular, for binary synapses with metaplastic states, we demonstrate for the first time that memories can be efficiently read out with biologically plausible network dynamics that are completely constrained by the synaptic plasticity rule, and the statistics of the stored patterns and of the recall cue. Our theory organises into a coherent framework a wide range of existing data about the regulation of excitability, feedback inhibition, and network oscillations in area CA3, and makes novel and directly testable predictions that can guide future experiments.
Affiliation(s)
- Cristina Savin, Computational & Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Peter Dayan, Gatsby Computational Neuroscience Unit, University College London, London, United Kingdom
- Máté Lengyel, Computational & Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
307. Bitzer S, Park H, Blankenburg F, Kiebel SJ. Perceptual decision making: drift-diffusion model is equivalent to a Bayesian model. Front Hum Neurosci 2014;8:102. PMID: 24616689; PMCID: PMC3935359; DOI: 10.3389/fnhum.2014.00102.
Abstract
Behavioral data obtained with perceptual decision making experiments are typically analyzed with the drift-diffusion model. This parsimonious model accumulates noisy pieces of evidence toward a decision bound to explain the accuracy and reaction times of subjects. Recently, Bayesian models have been proposed to explain how the brain extracts information from noisy input as typically presented in perceptual decision making tasks. It has long been known that the drift-diffusion model is tightly linked with such functional Bayesian models, but the precise relationship between the two mechanisms was never made explicit. Using a Bayesian model, we derived the equations that relate parameter values between these models. In practice, we show that this equivalence is useful when fitting multi-subject data. We further show that the Bayesian model suggests different decision variables, all of which predict equal responses, and discuss how these may be discriminated based on neural correlates of accumulated evidence. In addition, we discuss extensions to the Bayesian model which would be difficult to derive for the drift-diffusion model. We suggest that these and other extensions may be highly useful for deriving new experiments which test novel hypotheses.
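The core of the equivalence can be shown in a few lines (a minimal sketch with a generic symmetric-Gaussian setup and parameters of my choosing, not the paper's derivation): for two hypotheses with means +mu and -mu and common standard deviation, the Bayesian log posterior odds after n i.i.d. samples is just a scaled running sum of the evidence, i.e., a drift-diffusion accumulator.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sd = 0.5, 1.0
x = rng.normal(mu, sd, size=200)          # noisy evidence; true state is "+"

ddm_state = np.cumsum(x)                  # drift-diffusion accumulator
log_odds = np.cumsum(2 * mu / sd**2 * x)  # Bayesian log posterior ratio
                                          # (flat prior; per-sample LLR = 2*mu*x/sd**2)

# The two decision variables are proportional, so their bound crossings
# coincide: a DDM bound b corresponds to a log-odds bound 2*mu*b/sd**2.
assert np.allclose(log_odds, 2 * mu / sd**2 * ddm_state)
```

This is only the simplest case of the mapping; relating bounds, non-decision times, and priors across subjects is where the paper's explicit equations come in.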
Affiliation(s)
- Sebastian Bitzer, Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Hame Park, Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Felix Blankenburg, Bernstein Center for Computational Neuroscience, Charité Universitätsmedizin Berlin, Berlin, Germany; Neurocomputation and Neuroimaging Unit, Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Stefan J. Kiebel, Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Biomagnetic Center, Hans Berger Clinic for Neurology, University Hospital Jena, Jena, Germany
308. Neftci E, Das S, Pedroni B, Kreutz-Delgado K, Cauwenberghs G. Event-driven contrastive divergence for spiking neuromorphic systems. Front Neurosci 2014;7:272. PMID: 24574952; PMCID: PMC3922083; DOI: 10.3389/fnins.2013.00272.
Abstract
Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation, and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic, which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate-and-Fire (I&F) neurons that is constrained by the limitations of existing and near-future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike-Timing-Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.
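For reference, the "discrete steps" that the paper replaces with spiking dynamics look like this classical CD-1 update (a baseline sketch of standard contrastive divergence on a toy binary RBM, with biases omitted and sizes chosen arbitrarily, not the paper's event-driven variant):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, v0, lr=0.05):
    """One step of standard CD-1 on a binary RBM (weights only).
    The paper replaces these discrete phases with continuous recurrent
    spiking activity and STDP-based weight updates."""
    h0 = (sigmoid(v0 @ W) > rng.random(W.shape[1])).astype(float)  # sample hiddens
    v1 = sigmoid(h0 @ W.T)                                         # mean-field reconstruction
    h1 = sigmoid(v1 @ W)
    # Contrast data-driven and reconstruction-driven correlations.
    return W + lr * (np.outer(v0, h0) - np.outer(v1, h1))

W = rng.normal(0.0, 0.01, size=(6, 4))    # 6 visible units, 4 hidden units
v = np.array([1., 0., 1., 1., 0., 0.])    # a toy binary data vector
for _ in range(100):
    W = cd1_step(W, v)
```

The event-driven scheme keeps the same positive-minus-negative correlation structure but computes it asynchronously from spike timing rather than in separate discrete phases.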
Affiliation(s)
- Emre Neftci, Institute for Neural Computation, University of California San Diego, La Jolla, CA, USA
- Srinjoy Das, Institute for Neural Computation, University of California San Diego, La Jolla, CA, USA; Electrical and Computer Engineering Department, University of California San Diego, La Jolla, CA, USA
- Bruno Pedroni, Department of Bioengineering, University of California San Diego, La Jolla, CA, USA
- Kenneth Kreutz-Delgado, Institute for Neural Computation, University of California San Diego, La Jolla, CA, USA; Electrical and Computer Engineering Department, University of California San Diego, La Jolla, CA, USA
- Gert Cauwenberghs, Institute for Neural Computation, University of California San Diego, La Jolla, CA, USA; Department of Bioengineering, University of California San Diego, La Jolla, CA, USA
309. Guilé JM. Probabilistic perception, empathy, and dynamic homeostasis: insights in autism spectrum disorders and conduct disorders. Front Public Health 2014;2:4. PMID: 24479115; PMCID: PMC3902472; DOI: 10.3389/fpubh.2014.00004.
Abstract
Homeostasis is not a permanent and stable state but instead results from conflicting forces. Infants therefore have to engage in dynamic exchanges with their environment in the biological, cognitive, and affective domains. Empathy is an adaptive response to these environmental challenges, which contributes to reaching proper dynamic homeostasis and development. Empathy relies on implicit interactive processes, namely probabilistic perception and synchrony, which are reviewed in this article. While typically developing neonates are fully equipped to interact automatically and synchronously with their human environment, conduct disorders (CD) and autism spectrum disorders (ASD) present with impairments in empathetic communication, e.g., emotional arousal and facial emotion processing. In addition, sensorimotor resonance is lacking in ASD, and emotional concern and semantic empathy are impaired in CD with callous-unemotional traits.
Affiliation(s)
- Jean Marc Guilé, Groupe de Recherches sur l'Analyse Multimodale de la Fonction Cérébrale, INSERM 1105, Université Picardie Jules Verne, Amiens, France
310. Kording KP. Bayesian statistics: relevant for the brain? Curr Opin Neurobiol 2014;25:130-3. PMID: 24463330; DOI: 10.1016/j.conb.2014.01.003.
Abstract
Analyzing data from experiments involves variables that we neuroscientists are uncertain about. Efficiently calculating with such variables usually requires Bayesian statistics. As it is crucial when analyzing complex data, it seems natural that the brain would "use" such statistics to analyze data from the world. And indeed, recent studies in the areas of perception, action, and cognition suggest that Bayesian behavior is widespread, in many modalities and species. Consequently, many models have suggested that the brain is built on simple Bayesian principles. While the brain's code is probably not actually simple, I believe that Bayesian principles will facilitate the construction of faithful models of the brain.
311. Gow DW, Nied AC. Rules from words: a dynamic neural basis for a lawful linguistic process. PLoS One 2014;9:e86212. PMID: 24465965; PMCID: PMC3897659; DOI: 10.1371/journal.pone.0086212.
Abstract
Listeners show a reliable bias towards interpreting speech sounds in a way that conforms to linguistic restrictions (phonotactic constraints) on the permissible patterning of speech sounds in a language. This perceptual bias may enforce and strengthen the systematicity that is the hallmark of phonological representation. Using Granger causality analysis of magnetic resonance imaging (MRI)-constrained magnetoencephalography (MEG) and electroencephalography (EEG) data, we tested the differential predictions of rule-based, frequency-based, and top-down lexical influence-driven explanations of processes that produce phonotactic biases in phoneme categorization. Consistent with the top-down lexical influence account, brain regions associated with the representation of words had a stronger influence on acoustic-phonetic regions in trials that led to the identification of phonotactically legal (versus illegal) word-initial consonant clusters. Regions associated with the application of linguistic rules had no such effect. Similarly, high frequency phoneme clusters failed to produce stronger feedforward influences by acoustic-phonetic regions on areas associated with higher linguistic representation. These results suggest that top-down lexical influences contribute to the systematicity of phonological representation.
Affiliation(s)
- David W. Gow, Neuropsychology Laboratory, Massachusetts General Hospital, Boston, MA, USA; Department of Psychology, Salem State University, Salem, MA, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard-MIT Division of Health Sciences and Technology, Cambridge, MA, USA
- A. Conrad Nied, Neuropsychology Laboratory, Massachusetts General Hospital, Boston, MA, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
312. Ramachandra CA, Mel BW. Computing local edge probability in natural scenes from a population of oriented simple cells. J Vis 2013;13(14):19. PMID: 24381295; DOI: 10.1167/13.14.19.
Abstract
A key computation in visual cortex is the extraction of object contours, where the first stage of processing is commonly attributed to V1 simple cells. The standard model of a simple cell (an oriented linear filter followed by divisive normalization) fits a wide variety of physiological data, but is a poor-performing local edge detector when applied to natural images. The brain's ability to finely discriminate edges from nonedges therefore likely depends on information encoded by local simple-cell populations. To gain insight into the corresponding decoding problem, we used Bayes's rule to calculate edge probability at a given location/orientation in an image based on a surrounding filter population. Beginning with a set of ~100 filters, we culled out a subset that were maximally informative about edges and minimally correlated, to allow factorization of the joint on- and off-edge likelihood functions. Key features of our approach include a new, efficient method for ground-truth edge labeling; an emphasis on achieving filter independence, including a focus on filters in the region orthogonal rather than tangential to an edge; and the use of a customized parametric model to represent the individual filter likelihood functions. The resulting population-based edge detector has zero parameters, calculates edge probability based on a sum of surrounding filter influences, is much more sharply tuned than the underlying linear filters, and effectively captures fine-scale edge structure in natural scenes. Our findings predict nonmonotonic interactions between cells in visual cortex, wherein a cell may excite a neighboring cell for certain stimuli and inhibit the same cell for others, depending on the two cells' relative offsets in position and orientation and their relative activation levels.
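The factorized-likelihood idea reduces Bayes's rule to a sum of per-filter log-likelihood ratios plus the log prior odds. The sketch below uses hypothetical Gaussian likelihoods and made-up response values purely for illustration; the paper fits customized parametric likelihood models to labeled natural-image data.

```python
import numpy as np

def gauss_logpdf(x, mu, sd):
    return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))

def edge_probability(responses, prior=0.1, mu_edge=1.0, mu_noedge=0.0, sd=0.5):
    """With conditionally independent filters, the joint likelihoods
    factorize, so edge probability is a logistic function of the summed
    per-filter log-likelihood ratios plus the log prior odds."""
    llr = sum(gauss_logpdf(r, mu_edge, sd) - gauss_logpdf(r, mu_noedge, sd)
              for r in responses)
    log_odds = llr + np.log(prior / (1 - prior))
    return 1.0 / (1.0 + np.exp(-log_odds))

p_edge = edge_probability([0.9, 1.1, 0.8])    # edge-like filter responses
p_blank = edge_probability([0.1, -0.2, 0.0])  # noise-like filter responses
```

Because evidence combines additively in the log domain, the population readout is much more sharply tuned than any single filter, which is the qualitative behavior the abstract reports.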
313. Keating P, King AJ. Developmental plasticity of spatial hearing following asymmetric hearing loss: context-dependent cue integration and its clinical implications. Front Syst Neurosci 2013;7:123. PMID: 24409125; PMCID: PMC3873525; DOI: 10.3389/fnsys.2013.00123.
Abstract
Under normal hearing conditions, comparisons of the sounds reaching each ear are critical for accurate sound localization. Asymmetric hearing loss should therefore degrade spatial hearing and has become an important experimental tool for probing the plasticity of the auditory system, both during development and adulthood. In clinical populations, hearing loss affecting one ear more than the other is commonly associated with otitis media with effusion, a disorder experienced by approximately 80% of children before the age of two. Asymmetric hearing may also arise in other clinical situations, such as after unilateral cochlear implantation. Here, we consider the role played by spatial cue integration in sound localization under normal acoustical conditions. We then review evidence for adaptive changes in spatial hearing following a developmental hearing loss in one ear, and show that adaptation may be achieved either by learning a new relationship between the altered cues and directions in space or by changing the way different cues are integrated in the brain. We next consider developmental plasticity as a source of vulnerability, describing maladaptive effects of asymmetric hearing loss that persist even when normal hearing is provided. We also examine the extent to which the consequences of asymmetric hearing loss depend upon its timing and duration. Although much of the experimental literature has focused on the effects of a stable unilateral hearing loss, some of the most common hearing impairments experienced by children tend to fluctuate over time. We therefore propose that there is a need to bridge this gap by investigating the effects of recurring hearing loss during development, and outline recent steps in this direction. We conclude by arguing that this work points toward a more nuanced view of developmental plasticity, in which plasticity may be selectively expressed in response to specific sensory contexts, and consider the clinical implications of this.
Affiliation(s)
- Peter Keating, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK
- Andrew J. King, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK
314. Stochastic computations in cortical microcircuit models. PLoS Comput Biol 2013;9:e1003311. PMID: 24244126; PMCID: PMC3828141; DOI: 10.1371/journal.pcbi.1003311.
Abstract
Experimental data from neuroscience suggest that a substantial amount of knowledge is stored in the brain in the form of probability distributions over network states and trajectories of network states. We provide a theoretical foundation for this hypothesis by showing that even very detailed models for cortical microcircuits, with data-based diverse nonlinear neurons and synapses, have a stationary distribution of network states and trajectories of network states to which they converge exponentially fast from any initial state. We demonstrate that this convergence holds in spite of the non-reversibility of the stochastic dynamics of cortical microcircuits. We further show that, in the presence of background network oscillations, separate stationary distributions emerge for different phases of the oscillation, in accordance with experimentally reported phase-specific codes. We complement these theoretical results by computer simulations that investigate resulting computation times for typical probabilistic inference tasks on these internally stored distributions, such as marginalization or marginal maximum-a-posteriori estimation. Furthermore, we show that the inherent stochastic dynamics of generic cortical microcircuits enables them to quickly generate approximate solutions to difficult constraint satisfaction problems, where stored knowledge and current inputs jointly constrain possible solutions. This provides a powerful new computing paradigm for networks of spiking neurons that also throws new light on how networks of neurons in the brain could carry out complex computational tasks such as prediction, imagination, memory recall, and problem solving.
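The sampling-based constraint-satisfaction idea can be sketched with abstract stochastic units rather than the paper's data-based microcircuit models (all weights and biases below are toy values of my choosing): units doing Gibbs sampling over a Boltzmann distribution spend most of their time in states that satisfy the constraints encoded in the weights.

```python
import numpy as np

rng = np.random.default_rng(2)

# Three binary units with a mutual-exclusion constraint: strong negative
# weights penalize co-active units, while positive biases favor activity,
# so the high-probability states have exactly one unit on.
W = -4.0 * (1 - np.eye(3))   # symmetric inhibitory coupling, zero diagonal
b = np.full(3, 2.0)

s = rng.integers(0, 2, size=3).astype(float)
counts = {}
for t in range(20000):
    i = t % 3                                        # sweep over units
    p_on = 1.0 / (1.0 + np.exp(-(W[i] @ s + b[i])))  # Gibbs conditional
    s[i] = float(rng.random() < p_on)
    if t > 2000:                                     # discard burn-in
        counts[tuple(s)] = counts.get(tuple(s), 0) + 1

best = max(counts, key=counts.get)   # most-visited network state
```

The most frequently visited state is read out as the network's approximate solution; no explicit search procedure is needed beyond the inherent stochastic dynamics.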
315. Average is optimal: an inverted-U relationship between trial-to-trial brain activity and behavioral performance. PLoS Comput Biol 2013;9:e1003348. PMID: 24244146; PMCID: PMC3820514; DOI: 10.1371/journal.pcbi.1003348.
Abstract
It is well known that even under identical task conditions, there is a tremendous amount of trial-to-trial variability in both brain activity and behavioral output. Thus far the vast majority of event-related potential (ERP) studies investigating the relationship between trial-to-trial fluctuations in brain activity and behavioral performance have only tested a monotonic relationship between them. However, it was recently found that across-trial variability can correlate with behavioral performance independent of trial-averaged activity. This finding predicts a U- or inverted-U-shaped relationship between trial-to-trial brain activity and behavioral output, depending on whether larger brain variability is associated with better or worse behavior, respectively. Using a visual stimulus detection task, we provide evidence from human electrocorticography (ECoG) for an inverted-U brain-behavior relationship: when the raw fluctuation in broadband ECoG activity is closer to the across-trial mean, hit rate is higher and reaction times are faster. Importantly, we show that this relationship is present not only in the post-stimulus task-evoked brain activity, but also in the pre-stimulus spontaneous brain activity, suggesting anticipatory brain dynamics. Our findings are consistent with the presence of stochastic noise in the brain. They further support attractor network theories, which postulate that the brain settles into a more confined state space under task performance, and that proximity to the targeted trajectory is associated with better performance.
The human brain is notoriously “noisy”. Even with identical physical sensory inputs and task demands, brain responses and behavioral output vary tremendously from trial to trial. Such brain and behavioral variability, and the relationship between them, have been the focus of intense neuroscience research for decades. Traditionally, it is thought that the relationship between trial-to-trial brain activity and behavioral performance is monotonic: the highest or lowest brain activity levels are associated with the best behavioral performance. Using invasive recordings in neurosurgical patients, we demonstrate an inverted-U relationship between brain and behavioral variability. Under such a relationship, moderate brain activity is associated with the best performance, while both very low and very high brain activity levels are predictive of compromised performance. These results have significant implications for our understanding of brain functioning. They further support recent theoretical frameworks that view the brain as an active nonlinear dynamical system instead of a passive signal-processing device.
316. Harmelech T, Malach R. Neurocognitive biases and the patterns of spontaneous correlations in the human cortex. Trends Cogn Sci 2013;17:606-15. PMID: 24182697; DOI: 10.1016/j.tics.2013.09.014.
Abstract
When the brain is 'at rest', spatiotemporal activity patterns emerge spontaneously, that is, in the absence of an overt task. However, what these patterns reveal about cortical function remains elusive. In this article, we put forward the hypothesis that the correlation patterns among these spontaneous fluctuations (SPs) reflect the profile of individual a priori cognitive biases, coded as synaptic efficacies in cortical networks. Thus, SPs offer a new means for mapping personal traits in both neurotypical and atypical cases. Three sets of observations and related empirical evidence provide support for this hypothesis. First, SPs correspond to activation patterns that occur during typical task performance. Second, individual differences in SPs reflect individual biases and abnormalities. Finally, SPs can be actively remodeled in a long-term manner by focused and intense cortical training.
Affiliation(s)
- Tal Harmelech, Neurobiology Department, Weizmann Institute of Science, Rehovot, Israel
317. Seriès P, Seitz AR. Learning what to expect (in visual perception). Front Hum Neurosci 2013;7:668. PMID: 24187536; PMCID: PMC3807544; DOI: 10.3389/fnhum.2013.00668.
Abstract
Expectations are known to greatly affect our experience of the world. A growing theory in computational neuroscience is that perception can be successfully described using Bayesian inference models and that the brain is “Bayes-optimal” under some constraints. In this context, expectations are particularly interesting, because they can be viewed as prior beliefs in the statistical inference process. A number of questions remain unsolved, however, for example: How fast do priors change over time? Are there limits to the complexity of the priors that can be learned? How do an individual’s priors compare to the true scene statistics? Can we unlearn priors that are thought to correspond to natural scene statistics? Where and what are the neural substrates of priors? Focusing on the perception of visual motion, we here review recent studies from our laboratories and others addressing these issues. We discuss how these data on motion perception fit within the broader literature on perceptual Bayesian priors, perceptual expectations, and statistical and perceptual learning, and review the possible neural basis of priors.
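The idea of an expectation as a prior is concrete in the conjugate-Gaussian case: with a Gaussian likelihood and a zero-mean Gaussian "slow-speed" prior, as in standard Bayesian accounts of motion perception, the MAP estimate is a reliability-weighted shrinkage of the measurement toward the prior mean. A minimal sketch; the function name and parameter values are illustrative assumptions:

```python
def map_speed_estimate(observed, sigma_like=1.0, sigma_prior=2.0):
    """MAP estimate of speed under a Gaussian likelihood centered on the
    observation and a zero-mean Gaussian 'slow-speed' prior.
    With conjugate Gaussians the posterior mean has a closed form."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_like**2)  # reliability weighting
    return w * observed  # shrunk toward 0 (the prior mean), i.e. slower speeds

# Noisier measurements (larger sigma_like) give a stronger bias toward the prior:
print(map_speed_estimate(4.0, sigma_like=1.0))
print(map_speed_estimate(4.0, sigma_like=4.0))
```

The noise-dependent bias toward slow speeds is exactly the behavioral signature such studies test for.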
Affiliation(s)
- Peggy Seriès
- Department of Informatics, University of Edinburgh, Edinburgh, UK
|
318
|
O’Donnell C, Nolan MF. Stochastic Ion Channel Gating and Probabilistic Computation in Dendritic Neurons. ACTA ACUST UNITED AC 2013. [DOI: 10.1007/978-1-4614-8094-5_24] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/18/2023]
|
319
|
Yildiz IB, von Kriegstein K, Kiebel SJ. From birdsong to human speech recognition: bayesian inference on a hierarchy of nonlinear dynamical systems. PLoS Comput Biol 2013; 9:e1003219. [PMID: 24068902 PMCID: PMC3772045 DOI: 10.1371/journal.pcbi.1003219] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2013] [Accepted: 07/27/2013] [Indexed: 11/19/2022] Open
Abstract
Our knowledge about the computational mechanisms underlying human learning and recognition of sound sequences, especially speech, is still very limited. One difficulty in deciphering the exact means by which humans recognize speech is that there are scarce experimental findings at a neuronal, microscopic level. Here, we show that our neuronal-computational understanding of speech learning and recognition may be vastly improved by looking at an animal model, i.e., the songbird, which faces the same challenge as humans: to learn and decode complex auditory input, in an online fashion. Motivated by striking similarities between the human and songbird neural recognition systems at the macroscopic level, we assumed that the human brain uses the same computational principles at a microscopic level and translated a birdsong model into a novel human sound learning and recognition model with an emphasis on speech. We show that the resulting Bayesian model with a hierarchy of nonlinear dynamical systems can learn speech samples such as words rapidly and recognize them robustly, even in adverse conditions. In addition, we show that recognition can be performed even when words are spoken by different speakers and with different accents—an everyday situation in which current state-of-the-art speech recognition models often fail. The model can also be used to qualitatively explain behavioral data on human speech learning and derive predictions for future experiments. Neuroscience still lacks a concrete explanation of how humans recognize speech. Even though neuroimaging techniques are helpful in determining the brain areas involved in speech recognition, there are rarely mechanistic explanations at a neuronal level. Here, we assume that songbirds and humans solve a very similar task: extracting information from sound wave modulations produced by a singing bird or a speaking human. 
Given strong evidence that both humans and songbirds, although genetically very distant, converged to a similar solution, we combined the vast amount of neurobiological findings for songbirds with nonlinear dynamical systems theory to develop a hierarchical, Bayesian model which explains fundamental functions in recognition of sound sequences. We found that the resulting model is good at learning and recognizing human speech. We suggest that this translated model can be used to qualitatively explain or predict experimental data, and the underlying mechanism can be used to construct improved automatic speech recognition algorithms.
Affiliation(s)
- Izzet B. Yildiz
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Group for Neural Theory, Institute of Cognitive Studies, École Normale Supérieure, Paris, France
- Katharina von Kriegstein
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Humboldt University of Berlin, Department of Psychology, Berlin, Germany
- Stefan J. Kiebel
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Biomagnetic Center, Hans Berger Clinic for Neurology, University Hospital Jena, Jena, Germany
|
320
|
Ghuman AS, van den Honert RN, Martin A. Interregional neural synchrony has similar dynamics during spontaneous and stimulus-driven states. Sci Rep 2013; 3:1481. [PMID: 23512004 PMCID: PMC3601606 DOI: 10.1038/srep01481] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2012] [Accepted: 02/07/2013] [Indexed: 11/09/2022] Open
Abstract
Assessing the correspondence between spontaneous and stimulus-driven neural activity can reveal intrinsic properties of the brain. Recent studies have demonstrated that many large-scale functional networks have a similar spatial structure during spontaneous and stimulus-driven states. However, it is unknown whether the temporal dynamics of network activity are also similar across these states. Here we demonstrate that, in the human brain, interhemispheric coupling of somatosensory regions is preferentially synchronized in the high beta frequency band (~20-30 Hz) in response to somatosensory stimulation and interhemispheric coupling of auditory cortices is preferentially synchronized in the alpha frequency band (~7-12 Hz) in response to auditory stimulation. Critically, these stimulus-driven synchronization frequencies were also selective to these interregional interactions during spontaneous activity. This similarity between stimulus-driven and spontaneous states suggests that frequency-specific oscillatory dynamics are intrinsic to the interactions between the nodes of these brain networks.
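Frequency-specific synchrony of the kind reported here is typically quantified with magnitude-squared coherence averaged over epochs. A self-contained toy version, with synthetic signals standing in for the two regions and all parameters invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_ep, n = 250, 60, 250          # sampling rate (Hz), epochs, samples/epoch
t = np.arange(n) / fs
nf = n // 2 + 1
Sxy = np.zeros(nf, complex); Sxx = np.zeros(nf); Syy = np.zeros(nf)

# Two 'regions' sharing a 25 Hz (beta-band) component with a fixed phase lag,
# plus independent noise -- a toy stand-in for interhemispheric coupling
for _ in range(n_ep):
    phase = rng.uniform(0, 2 * np.pi)
    x = np.sin(2 * np.pi * 25 * t + phase) + rng.normal(0, 1, n)
    y = np.sin(2 * np.pi * 25 * t + phase + 0.5) + rng.normal(0, 1, n)
    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    Sxy += X * np.conj(Y); Sxx += np.abs(X) ** 2; Syy += np.abs(Y) ** 2

freqs = np.fft.rfftfreq(n, 1 / fs)
coherence = np.abs(Sxy) ** 2 / (Sxx * Syy)   # magnitude-squared coherence
peak_freq = freqs[np.argmax(coherence)]
print(peak_freq)                              # peaks at the shared 25 Hz band
```

Applying the same estimator to stimulus-free epochs is how one can ask, as the study does, whether the spontaneous coupling peaks in the same band.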
Affiliation(s)
- Avniel Singh Ghuman
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA.
|
321
|
Pouget A, Beck JM, Ma WJ, Latham PE. Probabilistic brains: knowns and unknowns. Nat Neurosci 2013; 16:1170-8. [PMID: 23955561 PMCID: PMC4487650 DOI: 10.1038/nn.3495] [Citation(s) in RCA: 303] [Impact Index Per Article: 27.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2013] [Accepted: 07/13/2013] [Indexed: 12/12/2022]
Abstract
There is strong behavioral and physiological evidence that the brain both represents probability distributions and performs probabilistic inference. Computational neuroscientists have started to shed light on how these probabilistic representations and computations might be implemented in neural circuits. One particularly appealing aspect of these theories is their generality: they can be used to model a wide range of tasks, from sensory processing to high-level cognition. To date, however, these theories have only been applied to very simple tasks. Here we discuss the challenges that will emerge as researchers start focusing their efforts on real-life computations, with a focus on probabilistic learning, structural learning and approximate inference.
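A concrete instance of the probabilistic representations discussed in this literature: with independent Poisson neurons and known tuning curves, the log posterior over the stimulus is a simple function of the spike counts. A sketch under those textbook assumptions (tuning widths, rates, and the stimulus grid are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
s_grid = np.linspace(-10, 10, 201)      # candidate stimulus values
prefs = np.linspace(-10, 10, 40)        # preferred stimuli of 40 neurons

def tuning(s):
    """Gaussian tuning curves: peak rate 20, width 2 (illustrative values)."""
    return 20.0 * np.exp(-0.5 * ((s - prefs) / 2.0) ** 2)

s_true = 3.0
r = rng.poisson(tuning(s_true))          # one noisy population response

# Independent-Poisson log likelihood: sum_i [ r_i log f_i(s) - f_i(s) ]
logL = np.array([r @ np.log(tuning(s) + 1e-12) - tuning(s).sum() for s in s_grid])
post = np.exp(logL - logL.max())
post /= post.sum()                       # posterior over the grid (flat prior)
s_hat = s_grid[np.argmax(post)]
print(s_hat)                             # maximum a posteriori estimate
```

Because the log posterior is linear in the spike counts, a downstream linear readout can in principle carry the full distribution, which is the appeal of these codes.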
Affiliation(s)
- Alexandre Pouget
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York, USA.
|
322
|
Formation and Reverberation of Sequential Neural Activity Patterns Evoked by Sensory Stimulation Are Enhanced during Cortical Desynchronization. Neuron 2013; 79:555-66. [DOI: 10.1016/j.neuron.2013.06.013] [Citation(s) in RCA: 72] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/12/2013] [Indexed: 11/17/2022]
|
323
|
Reichert DP, Seriès P, Storkey AJ. Charles Bonnet syndrome: evidence for a generative model in the cortex? PLoS Comput Biol 2013; 9:e1003134. [PMID: 23874177 PMCID: PMC3715531 DOI: 10.1371/journal.pcbi.1003134] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2013] [Accepted: 05/28/2013] [Indexed: 11/25/2022] Open
Abstract
Several theories propose that the cortex implements an internal model to explain, predict, and learn about sensory data, but the nature of this model is unclear. One condition that could be highly informative here is Charles Bonnet syndrome (CBS), where loss of vision leads to complex, vivid visual hallucinations of objects, people, and whole scenes. CBS could be taken as indication that there is a generative model in the brain, specifically one that can synthesise rich, consistent visual representations even in the absence of actual visual input. The processes that lead to CBS are poorly understood. Here, we argue that a model recently introduced in machine learning, the deep Boltzmann machine (DBM), could capture the relevant aspects of (hypothetical) generative processing in the cortex. The DBM carries both the semantics of a probabilistic generative model and of a neural network. The latter allows us to model a concrete neural mechanism that could underlie CBS, namely, homeostatic regulation of neuronal activity. We show that homeostatic plasticity could serve to make the learnt internal model robust against e.g. degradation of sensory input, but overcompensate in the case of CBS, leading to hallucinations. We demonstrate how a wide range of features of CBS can be explained in the model and suggest a potential role for the neuromodulator acetylcholine. This work constitutes the first concrete computational model of CBS and the first application of the DBM as a model in computational neuroscience. Our results lend further credence to the hypothesis of a generative model in the brain. The cerebral cortex is central to many aspects of cognition and intelligence in humans and other mammals, but our scientific understanding of the computational principles underlying cortical processing is still limited. 
We might gain insights by considering visual hallucinations, specifically in a pathology known as Charles Bonnet syndrome, where patients suffering from visual impairment experience hallucinatory images that rival the vividness and complexity of normal seeing. Such generation of rich internal imagery could naturally be accounted for by theories that posit that the cortex implements an internal generative model of sensory input. Perception then could entail the synthesis of internal explanations that are evaluated by testing whether what they predict is consistent with actual sensory input. Here, we take an approach from artificial intelligence that is based on similar ideas, the deep Boltzmann machine, use it as a model of generative processing in the cortex, and examine various aspects of Charles Bonnet syndrome in computer simulations. In particular, we explain why the synthesis of internal explanations, which is so useful for perception, goes astray in the syndrome as neurons overcompensate for the lack of sensory input by increasing spontaneous activity.
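The homeostatic mechanism invoked here can be caricatured in a few lines: units adapt an excitability bias so that mean activity tracks a target rate; when sensory drive is removed, the adapted bias overcompensates and spontaneous activity climbs back to the target, the analogue of hallucinatory activity. A toy rate-model sketch, not the authors' deep Boltzmann machine; all parameters are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

target, eta, w_in = 0.2, 0.5, 2.0   # target rate, adaptation rate, sensory weight

def adapt(bias, drive, steps=2000):
    """Homeostatic regulation: nudge excitability until activity hits target."""
    for _ in range(steps):
        bias = bias + eta * (target - sigmoid(drive + bias))
    return bias

bias = adapt(np.zeros(10), w_in)           # network adapted to normal input
rate_seeing = sigmoid(w_in + bias).mean()
bias = adapt(bias, 0.0)                    # sensory input lost (the CBS scenario):
rate_blind = sigmoid(0.0 + bias).mean()    # excitability overcompensates and
                                           # spontaneous activity is restored
print(round(float(rate_seeing), 2), round(float(rate_blind), 2))
```

Both conditions converge to the same target rate; the point is that after deprivation the activity is internally generated rather than stimulus-driven.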
Affiliation(s)
- David P Reichert
- Institute for Adaptive and Neural Computation, University of Edinburgh, Edinburgh, United Kingdom.
|
324
|
Gekas N, Chalk M, Seitz AR, Seriès P. Complexity and specificity of experimentally induced expectations in motion perception. BMC Neurosci 2013. [PMCID: PMC3704676 DOI: 10.1186/1471-2202-14-s1-p355] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
|
325
|
Chalk M, Murray I, Seriès P. Attention as reward-driven optimization of sensory processing. Neural Comput 2013; 25:2904-33. [PMID: 23777518 DOI: 10.1162/neco_a_00494] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Attention causes diverse changes to visual neuron responses, including alterations in receptive field structure and firing rates. A common theoretical approach to investigate why sensory neurons behave as they do is based on the efficient coding hypothesis: that sensory processing is optimized toward the statistics of the received input. We extend this approach to account for the influence of task demands, hypothesizing that the brain learns a probabilistic model of both the sensory input and the reward received for performing different actions. Attention-dependent changes to neural responses reflect optimization of this internal model to deal with changes in the sensory environment (stimulus statistics) and behavioral demands (reward statistics). We use this framework to construct a simple model of visual processing that is able to replicate a number of attention-dependent changes to the responses of neurons in the midlevel visual cortices. The model is consistent with, and provides a normative explanation for, recent divisive normalization models of attention (Reynolds & Heeger, 2009).
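The divisive-normalization account referenced at the end (Reynolds & Heeger, 2009) has a compact core: attention multiplicatively scales a stimulus's excitatory drive before division by the pooled suppressive drive. A heavily simplified, non-spatial sketch (the full model uses attention and stimulation maps with spatial pooling; the values here are assumptions):

```python
import numpy as np

def normalization_response(excitatory_drive, attn_gain, sigma=1.0):
    """Normalization-model sketch: attention scales the drive, which is then
    divided by the pooled drive plus a semi-saturation constant."""
    drive = attn_gain * excitatory_drive
    return drive / (drive.sum() + sigma)

stim = np.array([10.0, 5.0])                # two stimuli in the receptive field
uniform = normalization_response(stim, np.array([1.0, 1.0]))
attended = normalization_response(stim, np.array([2.0, 1.0]))   # attend stimulus 1
print(uniform, attended)
```

Attention to stimulus 1 boosts its normalized response while suppressing the unattended stimulus through the shared denominator, the qualitative pattern such models are built to capture.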
Affiliation(s)
- Matthew Chalk
- Group for Neural Theory, LNC, DEC, École Normale Supérieure, Paris 75005, France
|
326
|
Bornschein J, Henniges M, Lücke J. Are V1 simple cells optimized for visual occlusions? A comparative study. PLoS Comput Biol 2013; 9:e1003062. [PMID: 23754938 PMCID: PMC3675001 DOI: 10.1371/journal.pcbi.1003062] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2013] [Accepted: 03/21/2013] [Indexed: 11/26/2022] Open
Abstract
Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of 'globular' receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of 'globular' fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here and optimal sparsity, only low proportions of 'globular' fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of 'globular' fields well. Our computational study therefore suggests that 'globular' fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex.
Affiliation(s)
- Jörg Bornschein
- Frankfurt Institute for Advanced Studies, Goethe-Universität Frankfurt, Frankfurt, Germany
- Marc Henniges
- Frankfurt Institute for Advanced Studies, Goethe-Universität Frankfurt, Frankfurt, Germany
- Jörg Lücke
- Frankfurt Institute for Advanced Studies, Goethe-Universität Frankfurt, Frankfurt, Germany
- Department of Physics, Goethe-Universität Frankfurt, Frankfurt, Germany
|
327
|
Nessler B, Pfeiffer M, Buesing L, Maass W. Bayesian computation emerges in generic cortical microcircuits through spike-timing-dependent plasticity. PLoS Comput Biol 2013; 9:e1003037. [PMID: 23633941 PMCID: PMC3636028 DOI: 10.1371/journal.pcbi.1003037] [Citation(s) in RCA: 112] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2012] [Accepted: 03/04/2013] [Indexed: 11/24/2022] Open
Abstract
The principles by which networks of neurons compute, and how spike-timing-dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, where pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs through STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors that result from adaptation of neuronal excitability and implicit generative models for hidden causes that are created in the synaptic weights through STDP. In fact, a surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: Expectation Maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore, our results suggest networks of Bayesian computation modules as a new model for distributed information processing in the cortex.
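The computation that the authors argue STDP approximates is expectation-maximization over an implicit generative model, with the WTA competition playing the role of the E-step. A plain online-EM sketch for a mixture of Bernoulli hidden causes (a simplification of the computational principle, not a reimplementation of the paper's spiking model; all parameters are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_input():
    """Binary spike pattern generated by one of two hidden causes."""
    cause = rng.integers(2)
    p = np.full(8, 0.05); p[:4] = 0.95        # cause 0: first half active
    if cause == 1:
        p = p[::-1]                            # cause 1: second half active
    return (rng.random(8) < p).astype(float)

K, eta = 2, 0.02
mu = rng.uniform(0.3, 0.7, (K, 8))   # per-cause Bernoulli parameters (cf. weights)
pi = np.full(K, 0.5)                 # mixing weights (cf. adapted excitabilities)

for _ in range(5000):
    x = sample_input()
    # E-step -- the soft WTA competition: posterior over hidden causes
    logp = np.log(pi) + x @ np.log(mu).T + (1 - x) @ np.log(1 - mu).T
    r = np.exp(logp - logp.max()); r /= r.sum()
    # Online M-step: nudge parameters toward responsibility-weighted statistics
    mu += eta * r[:, None] * (x - mu)
    pi += eta * (r - pi)

# After learning, each component specializes on one half-active cause
contrast = mu[:, :4].mean(axis=1) - mu[:, 4:].mean(axis=1)
print(np.sign(contrast))
```

In the paper's spiking version, the responsibilities are realized by which WTA neuron spikes, and the M-step by STDP plus excitability adaptation.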
Affiliation(s)
- Bernhard Nessler
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria.
|
328
|
Habenschuss S, Puhr H, Maass W. Emergence of optimal decoding of population codes through STDP. Neural Comput 2013; 25:1371-407. [PMID: 23517096 DOI: 10.1162/neco_a_00446] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
Abstract
The brain faces the problem of inferring reliable hidden causes from large populations of noisy neurons, for example, the direction of a moving object from spikes in area MT. It is known that a theoretically optimal likelihood decoding could be carried out by simple linear readout neurons if weights of synaptic connections were set to certain values that depend on the tuning functions of sensory neurons. We show here that such theoretically optimal readout weights emerge autonomously through STDP in conjunction with lateral inhibition between readout neurons. In particular, we identify a class of optimal STDP learning rules with homeostatic plasticity, for which the autonomous emergence of optimal readouts can be explained on the basis of a rigorous learning theory. This theory shows that the network motif we consider approximates expectation-maximization for creating internal generative models for hidden causes of high-dimensional spike inputs. Notably, we find that this optimal functionality can be well approximated by a variety of STDP rules beyond those predicted by theory. Furthermore, we show that this learning process is very stable and automatically adjusts weights to changes in the number of readout neurons, the tuning functions of sensory neurons, and the statistics of external stimuli.
Affiliation(s)
- Stefan Habenschuss
- Institute for Theoretical Computer Science, Graz University of Technology, A-8010 Graz, Austria.
|
329
|
Pezzulo G, Rigoli F, Chersi F. The mixed instrumental controller: using value of information to combine habitual choice and mental simulation. Front Psychol 2013; 4:92. [PMID: 23459512 PMCID: PMC3586710 DOI: 10.3389/fpsyg.2013.00092] [Citation(s) in RCA: 90] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2012] [Accepted: 02/08/2013] [Indexed: 11/13/2022] Open
Abstract
Instrumental behavior depends on both goal-directed and habitual mechanisms of choice. Normative views cast these mechanisms in terms of model-based and model-free methods of reinforcement learning, respectively. An influential proposal hypothesizes that model-free and model-based mechanisms coexist and compete in the brain according to their relative uncertainty. In this paper we propose a novel view in which a single Mixed Instrumental Controller produces both goal-directed and habitual behavior by flexibly balancing and combining model-based and model-free computations. The Mixed Instrumental Controller performs a cost-benefit analysis to decide whether to choose an action immediately, based on the available "cached" value of actions (linked to model-free mechanisms), or to improve value estimation by mentally simulating the expected outcome values (linked to model-based mechanisms). Since mental simulation entails cognitive effort and increases the reward delay, it is activated only when the associated "Value of Information" exceeds its costs. The model proposes a method to compute the Value of Information, based on the uncertainty of action values and on the distance of alternative cached action values. Overall, the model by default chooses on the basis of lighter model-free estimates, and integrates them with costly model-based predictions only when useful. Mental simulation uses a sampling method to produce reward expectancies, which are used to update the cached value of one or more actions; in turn, this updated value is used for the choice. The key predictions of the model are tested in different settings of a double T-maze scenario. Results are discussed in relation to neurobiological evidence on the hippocampus-ventral striatum circuit in rodents, which has been linked to goal-directed spatial navigation.
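The controller's decision rule can be sketched as a cost-benefit test: simulate only when a proxy for the Value of Information, driven by the uncertainty of the cached values and the distance between the top alternatives, exceeds the simulation cost. The VoI proxy below is an invented stand-in for the paper's computation, and the function and variable names are assumptions:

```python
import numpy as np

def choose(q_cached, q_var, sim_cost=0.1, simulate=None):
    """Mixed-controller sketch: pick an action, mentally simulating only
    when the (proxy) Value of Information exceeds the simulation cost."""
    order = np.argsort(q_cached)[::-1]
    gap = q_cached[order[0]] - q_cached[order[1]]    # distance between best actions
    # Invented VoI proxy: high uncertainty and a narrow gap favor simulation
    voi = np.sqrt(q_var[order[0]] + q_var[order[1]]) - gap
    if simulate is not None and voi > sim_cost:
        q = np.array([simulate(a) for a in range(len(q_cached))])  # mental rollouts
        return int(np.argmax(q)), "model-based"
    return int(order[0]), "model-free"

habit = choose(np.array([5.0, 1.0]), np.array([0.1, 0.1]))
deliberate = choose(np.array([2.0, 1.9]), np.array([1.0, 1.0]),
                    simulate=lambda a: [1.0, 3.0][a])
print(habit, deliberate)
```

Well-separated, low-uncertainty cached values trigger the habitual (model-free) branch; near-ties under high uncertainty trigger mental simulation.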
Affiliation(s)
- Giovanni Pezzulo
- Istituto di Linguistica Computazionale "Antonio Zampolli", Consiglio Nazionale delle Ricerche, Pisa, Italy; Istituto di Scienze e Tecnologie della Cognizione, Consiglio Nazionale delle Ricerche, Roma, Italy
|
330
|
Abstract
Cortical circuits encode sensory stimuli through the firing of neuronal ensembles, and also produce spontaneous population patterns in the absence of sensory drive. This population activity is often characterized experimentally by the distribution of multineuron "words" (binary firing vectors), and a match between spontaneous and evoked word distributions has been suggested to reflect learning of a probabilistic model of the sensory world. We analyzed multineuron word distributions in sensory cortex of anesthetized rats and cats, and found that they are dominated by fluctuations in population firing rate rather than precise interactions between individual units. Furthermore, cortical word distributions change when brain state shifts, and similar behavior is seen in simulated networks with fixed, random connectivity. Our results suggest that similarity or dissimilarity in multineuron word distributions could primarily reflect similarity or dissimilarity in population firing rate dynamics, and not necessarily the precise interactions between neurons that would indicate learning of sensory features.
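The paper's central point, that word statistics can be dominated by shared rate fluctuations, is easy to reproduce: simulate conditionally independent neurons driven by a common slowly varying rate and inspect the word distribution. A sketch with invented parameters:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(5)
n_neur, n_bins = 5, 20000

# Common fluctuation in population rate (low/high states), no pairwise coupling
high = rng.random(n_bins) < 0.3
rate = np.where(high, 0.35, 0.10)
spikes = (rng.random((n_bins, n_neur)) < rate[:, None]).astype(int)

# Distribution of multineuron binary 'words'
word_p = {w: c / n_bins for w, c in Counter(map(tuple, spikes)).items()}

# Under pure rate fluctuations, a word's probability depends mainly on its
# spike count, not on which particular neurons fired
single_spike = [p for w, p in word_p.items() if sum(w) == 1]
print(len(single_spike), round(np.std(single_spike) / np.mean(single_spike), 3))
```

All words with the same spike count come out nearly equiprobable, so a match between two word distributions here would reflect matched rate dynamics, not learned pairwise structure, which is the study's cautionary conclusion.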
|
331
|
Harré MS. From Amateur to Professional: A Neuro-cognitive Model of Categories and Expert Development. Minds Mach (Dordr) 2013. [DOI: 10.1007/s11023-013-9305-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
332
|
Jern A, Kemp C. A probabilistic account of exemplar and category generation. Cogn Psychol 2013; 66:85-125. [PMID: 23108001 DOI: 10.1016/j.cogpsych.2012.09.003] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2012] [Revised: 07/20/2012] [Accepted: 09/14/2012] [Indexed: 11/26/2022]
|
333
|
Abstract
Neural processing faces three rather different, and perniciously tied, communication problems. First, computation is radically distributed, yet point-to-point interconnections are limited. Second, the bulk of these connections are semantically uniform, lacking differentiation at their targets that could tag particular sorts of information. Third, the brain's structure is relatively fixed, and yet different sorts of input, forms of processing, and rules for determining the output are appropriate under different, and possibly rapidly changing, conditions. Neuromodulators address these problems by their multifarious and broad distribution, by enjoying specialized receptor types in partially specific anatomical arrangements, and by their ability to mold the activity and sensitivity of neurons and the strength and plasticity of their synapses. Here, I offer a computationally focused review of algorithmic and implementational motifs associated with neuromodulators, using decision making in the face of uncertainty as a running example.
|
334
|
Uhlhaas PJ. Dysconnectivity, large-scale networks and neuronal dynamics in schizophrenia. Curr Opin Neurobiol 2012; 23:283-90. [PMID: 23228430 DOI: 10.1016/j.conb.2012.11.004] [Citation(s) in RCA: 131] [Impact Index Per Article: 10.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2012] [Revised: 11/02/2012] [Accepted: 11/11/2012] [Indexed: 01/15/2023]
Abstract
Schizophrenia remains a daunting challenge for efforts aimed at identifying fundamental pathophysiological processes and at developing evidence-based, effective treatments and interventions. One reason for the lack of progress lies in the fact that the pathophysiology of schizophrenia has been predominantly conceived in terms of circumscribed alterations in cellular and anatomical variables. In the current review, it is proposed that this approach needs to be complemented by a focus on neuronal dynamics in large-scale networks, which is compatible with the notion of dysconnectivity, highlighting the involvement of both reduced and increased interactions in extended cortical circuits in schizophrenia. Neural synchrony is one candidate mechanism for achieving functional connectivity in large-scale networks and has been found to be impaired in schizophrenia. Importantly, alterations in the synchronization of neural oscillations can be related to dysfunctions in the excitation-inhibition (E/I) balance and to developmental modifications, with important implications for translational research.
Affiliation(s)
- Peter J Uhlhaas
- Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK.
|
335
|
Wohrer A, Humphries MD, Machens CK. Population-wide distributions of neural activity during perceptual decision-making. Prog Neurobiol 2012; 103:156-93. [PMID: 23123501 DOI: 10.1016/j.pneurobio.2012.09.004] [Citation(s) in RCA: 51] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2012] [Revised: 09/09/2012] [Accepted: 09/26/2012] [Indexed: 01/14/2023]
Abstract
Cortical activity involves large populations of neurons, even when it is limited to functionally coherent areas. Electrophysiological recordings, on the other hand, involve comparatively small neural ensembles, even when modern-day techniques are used. Here we review results which have started to fill the gap between these two scales of inquiry, by shedding light on the statistical distributions of activity in large populations of cells. We put our main focus on data recorded in awake animals that perform simple decision-making tasks and consider statistical distributions of activity throughout cortex, across sensory, associative, and motor areas. We transversally review the complexity of these distributions, from distributions of firing rates and metrics of spike-train structure, through distributions of tuning to stimuli or actions and of choice signals, to the dynamical evolution of neural population activity and the distributions of (pairwise) neural interactions. This approach reveals shared patterns of statistical organization across cortex, including: (i) long-tailed distributions of activity, in which quasi-silence seems to be the rule for a majority of neurons and which are barely distinguishable between spontaneous and active states; (ii) distributions of tuning parameters for sensory (and motor) variables, which show an extensive extrapolation and fragmentation of their representations in the periphery; and (iii) population-wide dynamics that reveal rotations of internal representations over time, whose traces can be found in both stimulus-driven and internally generated activity. We discuss how these insights are leading us away from the notion of discrete classes of cells, and are acting as powerful constraints on theories and models of cortical organization and population coding.
Affiliation(s)
- Adrien Wohrer
- Group for Neural Theory, INSERM U960, École Normale Supérieure, Département d'Études Cognitives, 29 rue d'Ulm, 75005 Paris, France
|
336
|
Abstract
This paper presents a review of Bayesian models of brain and behaviour. We first review the basic principles of Bayesian inference. This is followed by descriptions of sampling and variational methods for approximate inference, and forward and backward recursions in time for inference in dynamical models. The review of behavioural models covers work in visual processing, sensory integration, sensorimotor integration, and collective decision making. The review of brain models covers a range of spatial scales from synapses to neurons and population codes, but with an emphasis on models of cortical hierarchies. We describe a simple hierarchical model which provides a mathematical framework relating constructs in Bayesian inference to those in neural computation. We close by reviewing recent theoretical developments in Bayesian inference for planning and control.
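The sampling route to approximate inference mentioned here can be shown in miniature: in a conjugate Beta-Bernoulli model the posterior mean is known in closed form, so a self-normalized importance sampler can be checked against it. All numbers below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
heads, tails = 7, 3

# Exact conjugate answer: Beta(1 + heads, 1 + tails) posterior mean
exact_mean = (1 + heads) / (2 + heads + tails)

# Approximate answer: self-normalized importance sampling, uniform proposal
theta = rng.random(100_000)
logw = heads * np.log(theta) + tails * np.log(1.0 - theta)   # Bernoulli likelihood
w = np.exp(logw - logw.max())
w /= w.sum()                                                 # normalized weights
approx_mean = float((w * theta).sum())
print(round(exact_mean, 3), round(approx_mean, 3))
```

The same recipe, swapping the uniform proposal for something better matched to the posterior, underlies the sampling methods the review covers; variational methods instead optimize a parametric approximation.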
Collapse
Affiliation(s)
- William Penny
- Wellcome Trust Centre for Neuroimaging, University College, London WC1N 3BG, UK
Collapse
|
337
|
Nguyen Trong M, Bojak I, Knösche TR. Associating spontaneous with evoked activity in a neural mass model of visual cortex. Neuroimage 2012; 66:80-7. [PMID: 23085110 DOI: 10.1016/j.neuroimage.2012.10.024] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2012] [Revised: 10/08/2012] [Accepted: 10/15/2012] [Indexed: 10/27/2022] Open
Abstract
Spontaneous activity of the brain at rest has frequently been considered a mere backdrop to the salient activity evoked by external stimuli or tasks. However, the resting state of the brain consumes most of its energy budget, which suggests a far more important role. An intriguing hint comes from experimental observations of spontaneous activity patterns, which closely resemble those evoked by visual stimulation with oriented gratings, except that cortex appeared to cycle between different orientation maps. Moreover, patterns similar to those evoked by the behaviorally most relevant horizontal and vertical orientations occurred more often than those corresponding to oblique angles. We hypothesize that this kind of spontaneous activity develops at least to some degree autonomously, providing a dynamical reservoir of cortical states, which are then associated with visual stimuli through learning. To test this hypothesis, we use a biologically inspired neural mass model to simulate a patch of cat visual cortex. Spontaneous transitions between orientation states were induced by modest modifications of the neural connectivity, establishing a stable heteroclinic channel. Significantly, the experimentally observed greater frequency of states representing the behaviorally important horizontal and vertical orientations emerged spontaneously from these simulations. We then applied bar-shaped inputs to the model cortex and used Hebbian learning rules to modify the corresponding synaptic strengths. After unsupervised learning, different bar inputs reliably and exclusively evoked their associated orientation state, whereas in the absence of input, the model cortex resumed its spontaneous cycling. We conclude that the experimentally observed similarities between spontaneous and evoked activity in visual cortex can be explained as the outcome of a learning process that associates external stimuli with a preexisting reservoir of autonomous neural activity states. Our findings hence demonstrate how cortical connectivity can link the maintenance of spontaneous activity in the brain mechanistically to its core cognitive functions.
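The association step described in the abstract can be illustrated with a plain Hebbian outer-product update. This is a schematic toy (the layer sizes, learning rate, and one-hot patterns are invented, and it omits the neural mass dynamics entirely):

```python
import numpy as np

# Schematic Hebbian association of input patterns with pre-existing
# "states": synapses between co-active pre- and postsynaptic units are
# strengthened, w += eta * post * pre (an outer product for a layer).
n_inputs, n_states, eta = 4, 3, 0.5
W = np.zeros((n_states, n_inputs))
patterns = np.eye(n_inputs)[:n_states]     # one bar-like input per state

for _ in range(20):                        # repeated pairings
    for s in range(n_states):
        pre = patterns[s]
        post = np.zeros(n_states)
        post[s] = 1.0                      # state active during this pairing
        W += eta * np.outer(post, pre)     # Hebbian update

# After learning, each input drives its associated state most strongly.
print(np.argmax(W @ patterns.T, axis=0))   # [0 1 2]
```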
Collapse
Affiliation(s)
- Manh Nguyen Trong
- Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany; Institute for Biomedical Engineering and Informatics, Technical University of Ilmenau, 98693 Ilmenau, Germany.
- Ingo Bojak
- School of Psychology (CN-CR), University of Birmingham, Edgbaston, Birmingham B15 2TT, UK; Centre for Neuroscience, Donders Institute for Brain, Cognition and Behaviour, 6500 HB Nijmegen, The Netherlands
- Thomas R Knösche
- Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany
Collapse
|
338
|
Ullman TD, Goodman ND, Tenenbaum JB. Theory learning as stochastic search in the language of thought. COGNITIVE DEVELOPMENT 2012. [DOI: 10.1016/j.cogdev.2012.07.005] [Citation(s) in RCA: 47] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
339
|
Kang K, Amari SI. Self-consistent learning of the environment. Neural Comput 2012; 24:3191-212. [PMID: 22970868 DOI: 10.1162/neco_a_00371] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
We study the Bayesian process of estimating the features of the environment. We focus on two aspects of the Bayesian process: how estimation error depends on the prior distribution of features, and how the prior distribution can be learned from experience. The accuracy of perception is underestimated when each feature of the environment is considered independently, because different features of the environment are usually highly correlated and the estimation error depends strongly on these correlations. The self-consistent learning process renews the prior distribution of correlated features jointly with the estimation of the environment. Here, maximum a posteriori probability (MAP) estimation decreases the effective dimensionality of the feature vector. There are critical noise levels in self-consistent learning with MAP estimation that cause hysteresis behaviors in learning. The self-consistent learning process with stochastic Bayesian estimation (SBE) makes the presumed distribution of environmental features converge to the true distribution for any level of channel noise. However, SBE is less accurate than MAP estimation. We also discuss another stochastic method of estimation, SBE2, which has a smaller estimation error than SBE without hysteresis.
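The benefit of a correlated prior can be illustrated in a linear-Gaussian toy model (all parameters below are invented for illustration; the paper's setting is more general). With prior N(0, C) and isotropic channel noise of variance s2, the MAP estimate is linear, x_map = C (C + s2 I)^{-1} y, and using the true correlated C beats assuming independent features:

```python
import numpy as np

rng = np.random.default_rng(0)

# MAP estimation of correlated environmental features x from noisy
# observations y = x + noise, comparing a prior that models the
# correlations against one that treats the features as independent.
d, s2, rho = 2, 1.0, 0.9
C = np.array([[1.0, rho], [rho, 1.0]])      # true correlated prior
I = np.eye(d)

A_joint = C @ np.linalg.inv(C + s2 * I)     # exploits the correlations
A_indep = I @ np.linalg.inv(I + s2 * I)     # assumes independence

x = rng.multivariate_normal(np.zeros(d), C, size=50_000)
y = x + rng.normal(0.0, np.sqrt(s2), size=x.shape)

err_joint = np.mean(np.sum((y @ A_joint.T - x) ** 2, axis=1))
err_indep = np.mean(np.sum((y @ A_indep.T - x) ** 2, axis=1))
print(err_joint, err_indep)  # exploiting correlations lowers the error
```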
Collapse
|
340
|
Ma WJ. Organizing probabilistic models of perception. Trends Cogn Sci 2012; 16:511-8. [PMID: 22981359 DOI: 10.1016/j.tics.2012.08.010] [Citation(s) in RCA: 98] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2012] [Revised: 08/22/2012] [Accepted: 08/22/2012] [Indexed: 10/27/2022]
Abstract
Probability has played a central role in models of perception for more than a century, but a look at probabilistic concepts in the literature raises many questions. Is being Bayesian the same as being optimal? Are recent Bayesian models fundamentally different from classic signal detection theory models? Do findings of near-optimal inference provide evidence that neurons compute with probability distributions? This review aims to disentangle these concepts and to classify empirical evidence accordingly.
Collapse
Affiliation(s)
- Wei Ji Ma
- Department of Neuroscience, Baylor College of Medicine, 1 Baylor Plaza, Houston, TX 77030, USA.
Collapse
|
341
|
Music training enhances the rapid plasticity of P3a/P3b event-related brain potentials for unattended and attended target sounds. Atten Percept Psychophys 2012; 74:600-12. [PMID: 22222306 DOI: 10.3758/s13414-011-0257-9] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Neurocognitive studies have shown that extensive musical training enhances P3a and P3b event-related potentials for infrequent target sounds, which reflects stronger attention switching and stimulus evaluation in musicians than in nonmusicians. However, it is unknown whether the short-term plasticity of P3a and P3b responses is also enhanced in musicians. We compared the short-term plasticity of P3a and P3b responses to infrequent target sounds in musicians and nonmusicians during auditory perceptual learning tasks. Target sounds, deviating in location, pitch, and duration with three difficulty levels, were interspersed among frequently presented standard sounds in an oddball paradigm. We found that during passive exposure to sounds, musicians had habituation of the P3a, while nonmusicians showed enhancement of the P3a between blocks. Between active tasks, P3b amplitudes for duration deviants were reduced (habituated) in musicians only, and showed a more posterior scalp topography for habituation when compared to P3bs of nonmusicians. In both groups, the P3a and P3b latencies were shortened for deviating sounds. Also, musicians were better than nonmusicians at discriminating target deviants. Regardless of musical training, better discrimination was associated with higher working memory capacity. We concluded that music training enhances short-term P3a/P3b plasticity, indicating training-induced changes in attentional skills.
Collapse
|
342
|
Pellicano E, Burr D. When the world becomes 'too real': a Bayesian explanation of autistic perception. Trends Cogn Sci 2012; 16:504-10. [PMID: 22959875 DOI: 10.1016/j.tics.2012.08.009] [Citation(s) in RCA: 621] [Impact Index Per Article: 51.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2012] [Revised: 08/18/2012] [Accepted: 08/20/2012] [Indexed: 11/30/2022]
Abstract
Perceptual experience is influenced both by incoming sensory information and prior knowledge about the world, a concept recently formalised within Bayesian decision theory. We propose that Bayesian models can be applied to autism - a neurodevelopmental condition with atypicalities in sensation and perception - to pinpoint fundamental differences in perceptual mechanisms. We suggest specifically that attenuated Bayesian priors - 'hypo-priors' - may be responsible for the unique perceptual experience of autistic people, leading to a tendency to perceive the world more accurately rather than modulated by prior experience. In this account, we consider how hypo-priors might explain key features of autism - the broad range of sensory and other non-social atypicalities - in addition to the phenomenological differences in autistic perception.
Collapse
Affiliation(s)
- Elizabeth Pellicano
- Centre for Research in Autism and Education, Institute of Education, University of London, London, UK.
Collapse
|
343
|
Fleming SM, Dolan RJ, Frith CD. Metacognition: computation, biology and function. Philos Trans R Soc Lond B Biol Sci 2012; 367:1280-6. [PMID: 22492746 PMCID: PMC3318771 DOI: 10.1098/rstb.2012.0021] [Citation(s) in RCA: 157] [Impact Index Per Article: 13.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2022] Open
Abstract
Many complex systems maintain a self-referential check and balance. In animals, such reflective monitoring and control processes have been grouped under the rubric of metacognition. In this introductory article to a Theme Issue on metacognition, we review recent and rapidly progressing developments from neuroscience, cognitive psychology, computer science and philosophy of mind. While each of these areas is represented in detail by individual contributions to the volume, we take this opportunity to draw links between disciplines, and highlight areas where further integration is needed. Specifically, we cover the definition, measurement, neurobiology and possible functions of metacognition, and assess the relationship between metacognition and consciousness. We propose a framework in which level of representation, order of behaviour and access consciousness are orthogonal dimensions of the conceptual landscape.
Collapse
Affiliation(s)
- Stephen M Fleming
- Center for Neural Science, New York University, 6 Washington Place, New York, NY 10003, USA.
Collapse
|
344
|
|
345
|
Vilares I, Howard JD, Fernandes HL, Gottfried JA, Kording KP. Differential representations of prior and likelihood uncertainty in the human brain. Curr Biol 2012; 22:1641-8. [PMID: 22840519 DOI: 10.1016/j.cub.2012.07.010] [Citation(s) in RCA: 101] [Impact Index Per Article: 8.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2012] [Revised: 06/11/2012] [Accepted: 07/03/2012] [Indexed: 11/16/2022]
Abstract
BACKGROUND Uncertainty shapes our perception of the world and the decisions we make. Two aspects of uncertainty are commonly distinguished: uncertainty in previously acquired knowledge (prior) and uncertainty in current sensory information (likelihood). Previous studies have established that humans can take both types of uncertainty into account, often in a way predicted by Bayesian statistics. However, the neural representations underlying these parameters remain poorly understood. RESULTS By varying prior and likelihood uncertainty in a decision-making task while performing neuroimaging in humans, we found that prior and likelihood uncertainty had quite distinct representations. Whereas likelihood uncertainty activated brain regions along the early stages of the visuomotor pathway, representations of prior uncertainty were identified in specialized brain areas outside this pathway, including putamen, amygdala, insula, and orbitofrontal cortex. Furthermore, the magnitude of brain activity in the putamen predicted individuals' personal tendencies to rely more on either prior or current information. CONCLUSIONS Our results suggest different pathways by which prior and likelihood uncertainty map onto the human brain and provide a potential neural correlate for higher reliance on current or prior knowledge. Overall, these findings offer insights into the neural pathways that may allow humans to make decisions close to the optimum defined by a Bayesian statistical framework.
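The Bayesian combination the task manipulates can be sketched for the one-dimensional Gaussian case (the function name and numbers below are hypothetical): the estimate is a precision-weighted average of the prior mean and the current observation, so increasing likelihood uncertainty shifts reliance toward the prior.

```python
# Precision-weighted combination of a Gaussian prior N(mu_prior, s2_prior)
# with a Gaussian likelihood centred on observation x with variance s2_like.
# Each source is weighted by its precision (inverse variance).
def combine(mu_prior, s2_prior, x, s2_like):
    w_prior = (1.0 / s2_prior) / (1.0 / s2_prior + 1.0 / s2_like)
    return w_prior * mu_prior + (1.0 - w_prior) * x

print(combine(0.0, 1.0, 2.0, 0.1))   # reliable observation: estimate near 2
print(combine(0.0, 1.0, 2.0, 10.0))  # noisy observation: estimate near the prior, 0
```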
Collapse
Affiliation(s)
- Iris Vilares
- Department of Physical Medicine and Rehabilitation, Northwestern University and Rehabilitation Institute of Chicago, Chicago, IL 60611, USA.
Collapse
|
346
|
Griffiths TL, Vul E, Sanborn AN. Bridging Levels of Analysis for Probabilistic Models of Cognition. CURRENT DIRECTIONS IN PSYCHOLOGICAL SCIENCE 2012. [DOI: 10.1177/0963721412447619] [Citation(s) in RCA: 78] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Probabilistic models of cognition characterize the abstract computational problems underlying inductive inferences and identify their ideal solutions. This approach differs from traditional methods of investigating human cognition, which focus on identifying the cognitive or neural processes that underlie behavior and therefore concern alternative levels of analysis. To evaluate the theoretical implications of probabilistic models and increase their predictive power, we must understand the relationships between theories at these different levels of analysis. One strategy for bridging levels of analysis is to explore cognitive processes that have a direct link to probabilistic inference. Recent research employing this strategy has focused on the possibility that the Monte Carlo principle—which concerns sampling from probability distributions in order to perform computations—provides a way to link probabilistic models of cognition to more concrete cognitive and neural processes.
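The Monte Carlo principle invoked above is simply expectation-by-sampling. A minimal sketch with an invented toy target (standard normal, where the true value of E[x^2] is 1.0):

```python
import random

random.seed(1)

# Monte Carlo: approximate E[f(x)] under a distribution by averaging
# f over samples drawn from it. Here f(x) = x**2 and x ~ N(0, 1).
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]
estimate = sum(x * x for x in samples) / len(samples)
print(estimate)  # close to 1.0; the error shrinks as 1/sqrt(n)
```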
Collapse
|
347
|
Hennequin G, Vogels TP, Gerstner W. Non-normal amplification in random balanced neuronal networks. PHYSICAL REVIEW. E, STATISTICAL, NONLINEAR, AND SOFT MATTER PHYSICS 2012; 86:011909. [PMID: 23005454 DOI: 10.1103/physreve.86.011909] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/13/2012] [Indexed: 06/01/2023]
Abstract
In dynamical models of cortical networks, the recurrent connectivity can amplify the input given to the network in two distinct ways. One is induced by the presence of near-critical eigenvalues in the connectivity matrix W, producing large but slow activity fluctuations along the corresponding eigenvectors (dynamical slowing). The other relies on W not being normal, which allows the network activity to make large but fast excursions along specific directions. Here we investigate the trade-off between non-normal amplification and dynamical slowing in the spontaneous activity of large random neuronal networks composed of excitatory and inhibitory neurons. We use a Schur decomposition of W to separate the two amplification mechanisms. Assuming linear stochastic dynamics, we derive an exact expression for the expected amount of purely non-normal amplification. We find that amplification is very limited if dynamical slowing must be kept weak. We conclude that, to achieve strong transient amplification with little slowing, the connectivity must be structured. We show that unidirectional connections between neurons of the same type, together with reciprocal connections between neurons of different types, allow for amplification already in the fast dynamical regime. Finally, our results also shed light on the differences between balanced networks in which inhibition exactly cancels excitation and those where inhibition dominates.
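The distinction between the two mechanisms can be seen in a two-neuron linear toy model (the numbers are invented for illustration, not taken from the paper): the matrix below has strongly stable eigenvalues, so there is no dynamical slowing, yet its non-normality produces a large, fast transient before the activity decays.

```python
import numpy as np

# Transient (non-normal) amplification in a linear system x' = A x,
# with A = W - I (leak plus recurrence). W is a feedforward-like,
# non-normal connectivity; both eigenvalues of A are -1 (stable).
W = np.array([[0.0, 8.0],
              [0.0, 0.0]])        # non-normal: W @ W.T != W.T @ W
A = W - np.eye(2)
x = np.array([0.0, 1.0])          # start along the amplified direction

dt, norms = 0.01, []
for _ in range(500):              # Euler integration of x' = A x
    x = x + dt * (A @ x)
    norms.append(np.linalg.norm(x))

print(max(norms))                 # transiently well above the initial norm 1.0
print(norms[-1])                  # and decayed again by the end
```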
Collapse
Affiliation(s)
- Guillaume Hennequin
- School of Computer and Communication Sciences and Brain-Mind Institute, Ecole Polytechnique Fédérale de Lausanne, 1015 EPFL, Switzerland.
Collapse
|
348
|
Abstract
Neuroscience has made considerable progress in understanding the neural substrates supporting cognitive performance in a number of domains, including memory, perception, and decision making. In contrast, how the human brain generates metacognitive awareness of task performance remains unclear. Here, we address this question by asking participants to perform perceptual decisions while providing concurrent metacognitive reports during fMRI scanning. We show that activity in right rostrolateral prefrontal cortex (rlPFC) satisfies three constraints for a role in metacognitive aspects of decision-making. Right rlPFC showed greater activity during self-report compared to a matched control condition, activity in this region correlated with reported confidence, and the strength of the relationship between activity and confidence predicted metacognitive ability across individuals. In addition, functional connectivity between right rlPFC and both contralateral PFC and visual cortex increased during metacognitive reports. We discuss these findings in a theoretical framework where rlPFC re-represents object-level decision uncertainty to facilitate metacognitive report.
Collapse
|
349
|
Luczak A, Maclean JN. Default activity patterns at the neocortical microcircuit level. Front Integr Neurosci 2012; 6:30. [PMID: 22701405 PMCID: PMC3373160 DOI: 10.3389/fnint.2012.00030] [Citation(s) in RCA: 52] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2011] [Accepted: 05/24/2012] [Indexed: 11/17/2022] Open
Abstract
Even in the absence of sensory stimuli, cortical networks exhibit complex, self-organized activity patterns. While the function of those spontaneous patterns of activation remains poorly understood, recent studies both in vivo and in vitro have demonstrated that neocortical neurons activate in a surprisingly similar sequential order both spontaneously and following input into cortex. For example, neurons that tend to fire earlier within spontaneous bursts of activity also fire earlier than other neurons in response to sensory stimuli. These “default patterns” can last hundreds of milliseconds and are strongly conserved under a variety of conditions. In this paper, we will review recent evidence for these default patterns at the local cortical level. We speculate that cortical architecture imposes common constraints on spontaneous and evoked activity flow, which result in the similarity of the patterns.
Collapse
Affiliation(s)
- Artur Luczak
- Department of Neuroscience, Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, AB, Canada
Collapse
|
350
|
Arnal LH, Giraud AL. Cortical oscillations and sensory predictions. Trends Cogn Sci 2012; 16:390-8. [PMID: 22682813 DOI: 10.1016/j.tics.2012.05.003] [Citation(s) in RCA: 624] [Impact Index Per Article: 52.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2012] [Revised: 05/11/2012] [Accepted: 05/18/2012] [Indexed: 11/24/2022]
Abstract
Many theories of perception are anchored in the central notion that the brain continuously updates an internal model of the world to infer the probable causes of sensory events. In this framework, the brain needs not only to predict the causes of sensory input, but also when they are most likely to happen. In this article, we review the neurophysiological bases of sensory predictions of 'what' (predictive coding) and 'when' (predictive timing), with an emphasis on low-level oscillatory mechanisms. We argue that neural rhythms offer distinct and adapted computational solutions to predicting 'what' is going to happen in the sensory environment and 'when'.
Collapse
Affiliation(s)
- Luc H Arnal
- Inserm U960, Département d'Etudes Cognitives, Ecole Normale Supérieure, 29 rue d'Ulm, 75005 Paris, France
Collapse
|