1. Shen B, Wilson J, Nguyen D, Glimcher PW, Louie K. Origins of noise in both improving and degrading decision making. bioRxiv 2024:2024.03.26.586597. [PMID: 38915616; PMCID: PMC11195060; DOI: 10.1101/2024.03.26.586597]
Abstract
Noise is a fundamental problem for information processing in neural systems. In decision-making, noise is assumed to play a primary role in errors and stochastic choice behavior. However, little is known about how noise arising from different sources contributes to value coding and choice behavior, especially when it interacts with neural computation. Here we examine how noise arising early versus late in the choice process differentially impacts context-dependent choice behavior. In model simulations, early and late noise predict opposing context effects: under early noise, contextual information enhances choice accuracy, whereas under late noise, context degrades choice accuracy. We verified these opposing predictions in human choice behavior: manipulating early and late noise (by inducing uncertainty in option values and by controlling time pressure) produced dissociable positive and negative context effects. These findings reconcile conflicting experimental reports of either context-driven impairments or improvements in choice performance, suggesting a unified mechanism for context-dependent choice. More broadly, they highlight how different sources of noise can interact with neural computations to differentially modulate behavior.

Significance: This study addresses the role of noise origin in decision-making, reconciling controversies around how decision-making is affected by context. We demonstrate that noise arising either early, during evaluation, or late, during option comparison, leads to distinct results: with early noise, context enhances choice accuracy, while with late noise, context impairs it. Understanding these dynamics offers potential strategies for improving decision-making in noisy environments and for refining existing models of neural computation. Overall, our findings advance our understanding of how neural systems handle noise in essential cognitive tasks, suggest a beneficial role for contextual modulation under certain conditions, and highlight the profound implications of noise structure in decision-making.
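The early- versus late-noise dissociation summarized in this abstract can be sketched in a few lines of Monte Carlo simulation. This is a minimal caricature, not the authors' model: the divisive normalization form, the semi-saturation constant, the Gaussian noise sources, and all parameter values are assumptions chosen only to show where the two noise types enter.

```python
import numpy as np

rng = np.random.default_rng(0)


def choice_accuracy(v1, v2, v3, early_sd, late_sd, n_trials=200_000):
    """P(choose the higher-valued of options 1 and 2) under divisive
    normalization, with noise injected before (early) and/or after (late)
    the normalization stage. Option 3 is a contextual distractor."""
    v = np.array([v1, v2, v3], dtype=float)
    # Early noise corrupts the value representations before normalization.
    vn = v + rng.normal(0.0, early_sd, size=(n_trials, 3))
    sigma = 1.0  # semi-saturation constant (assumed)
    u = vn / (sigma + vn.sum(axis=1, keepdims=True))
    # Late noise corrupts the normalized signals at the comparison stage.
    u = u + rng.normal(0.0, late_sd, size=(n_trials, 3))
    return np.mean(u[:, 0] > u[:, 1])


# Under late noise, a larger contextual value shrinks the normalized gap
# between options 1 and 2, so fixed late noise degrades accuracy.
acc_no_ctx = choice_accuracy(10, 8, 0, early_sd=0.0, late_sd=0.05)
acc_ctx = choice_accuracy(10, 8, 8, early_sd=0.0, late_sd=0.05)
print(acc_no_ctx, acc_ctx)  # context lowers accuracy under late noise
```

The sketch reproduces only the late-noise half of the dissociation; the early-noise benefit in the paper depends on mechanisms this toy model does not include.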
2. Ichikawa K, Kaneko K. Bayesian inference is facilitated by modular neural networks with different time scales. PLoS Comput Biol 2024;20:e1011897. [PMID: 38478575; PMCID: PMC10962854; DOI: 10.1371/journal.pcbi.1011897]
Abstract
Various animals, including humans, have been suggested to perform Bayesian inference to handle noisy, time-varying external information. For the brain to perform Bayesian inference, the prior distribution must be acquired and represented by sampling noisy external inputs. However, the mechanism by which neural activities represent such distributions has not yet been elucidated. Our findings reveal that networks with modular structures, composed of fast and slow modules, are adept at representing this prior distribution, enabling more accurate Bayesian inferences. Specifically, a modular network consisting of a main module connected to the input and output layers and a sub-module with slower neural activity connected only to the main module outperformed networks with uniform time scales. Prior information was represented specifically by the slow sub-module, which could integrate observed signals over an appropriate period and represent input means and variances. Accordingly, the neural network could effectively predict the time-varying inputs. Furthermore, when the time scales of neurons were trained starting from networks with uniform time scales and no modular structure, the slow-fast modular structure and the division of roles, in which prior knowledge is selectively represented in the slow sub-module, emerged spontaneously. These results explain how the prior distribution for Bayesian inference is represented in the brain, provide insight into the relevance of modular structures with time-scale hierarchies to information processing, and elucidate the significance of brain areas with slower time scales.
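The fast/slow division of labor described here can be caricatured with two nested computations: a slow leaky integrator that estimates the input statistics (the prior), and a fast stage that combines the learned prior with each new observation. The time constants, the Gaussian forms, and all parameter values below are illustrative assumptions, not the trained network from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy observations of a latent mean: x_t ~ Normal(mu_true, obs_sd).
mu_true, obs_sd = 3.0, 2.0
xs = rng.normal(mu_true, obs_sd, size=5_000)

# "Slow module": leaky integration estimates the prior mean and variance.
tau = 0.005  # slow learning rate (assumed)
prior_mean, prior_var = 0.0, 1.0
for x in xs:
    prior_mean += tau * (x - prior_mean)
    prior_var += tau * ((x - prior_mean) ** 2 - prior_var)

# "Fast module": combines one new noisy observation with the learned prior
# by precision weighting (conjugate Gaussian posterior).
x_new = 6.0
post_precision = 1.0 / obs_sd**2 + 1.0 / prior_var
post_mean = (x_new / obs_sd**2 + prior_mean / prior_var) / post_precision
print(prior_mean, post_mean)  # posterior mean lies between x_new and the prior
```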
Affiliations
- Kohei Ichikawa: Department of Basic Science, Graduate School of Arts and Sciences, University of Tokyo, Meguro-ku, Tokyo, Japan
- Kunihiko Kaneko: Research Center for Complex Systems Biology, University of Tokyo, Bunkyo-ku, Tokyo, Japan; The Niels Bohr Institute, University of Copenhagen, Blegdamsvej, Copenhagen, Denmark
3. Lange RD, Shivkumar S, Chattoraj A, Haefner RM. Bayesian encoding and decoding as distinct perspectives on neural coding. Nat Neurosci 2023;26:2063-2072. [PMID: 37996525; PMCID: PMC11003438; DOI: 10.1038/s41593-023-01458-6]
Abstract
The Bayesian brain hypothesis is one of the most influential ideas in neuroscience. However, unstated differences in how Bayesian ideas are operationalized make it difficult to draw general conclusions about how Bayesian computations map onto neural circuits. Here, we identify one such unstated difference: some theories ask how neural circuits could recover information about the world from sensory neural activity (Bayesian decoding), whereas others ask how neural circuits could implement inference in an internal model (Bayesian encoding). These two approaches require profoundly different assumptions and lead to different interpretations of empirical data. We contrast them in terms of motivations, empirical support and relationship to neural data. We also use a simple model to argue that encoding and decoding models are complementary rather than competing. Appreciating the distinction between Bayesian encoding and Bayesian decoding will help to organize future work and enable stronger empirical tests about the nature of inference in the brain.
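The decoding perspective contrasted here can be made concrete with a toy example: recovering a posterior over a stimulus from Poisson spike counts and known tuning curves. The population size, tuning-curve shapes, and flat stimulus prior below are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Gaussian tuning curves over a stimulus grid (assumed toy population).
s_grid = np.linspace(-10, 10, 201)
prefs = np.linspace(-8, 8, 17)
gain, width = 20.0, 2.0
tuning = gain * np.exp(-0.5 * ((s_grid[:, None] - prefs) / width) ** 2)

# Spike counts evoked by a true stimulus (Poisson noise).
s_true = 2.5
rates = gain * np.exp(-0.5 * ((s_true - prefs) / width) ** 2)
counts = rng.poisson(rates)

# Bayesian *decoding*: invert the likelihood over the stimulus grid under
# a flat prior; log p(s|n) is proportional to sum_i [n_i log f_i(s) - f_i(s)].
log_post = counts @ np.log(tuning).T - tuning.sum(axis=1)
log_post -= log_post.max()
post = np.exp(log_post)
post /= post.sum()
s_hat = s_grid[np.argmax(post)]
print(s_hat)  # close to s_true
```

Bayesian *encoding*, by contrast, would treat the spikes themselves as representing a posterior in an internal model, which a sketch like this cannot capture.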
Affiliations
- Richard D Lange: Department of Neurobiology, University of Pennsylvania, Philadelphia, PA, USA; Department of Computer Science, Rochester Institute of Technology, Rochester, NY, USA
- Sabyasachi Shivkumar: Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Ankani Chattoraj: Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Ralf M Haefner: Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
4. Zhang WH, Wu S, Josić K, Doiron B. Sampling-based Bayesian inference in recurrent circuits of stochastic spiking neurons. Nat Commun 2023;14:7074. [PMID: 37925497; PMCID: PMC10625605; DOI: 10.1038/s41467-023-41743-3]
Abstract
Two facts about cortex are widely accepted: neuronal responses show large spiking variability with near-Poisson statistics, and cortical circuits feature abundant recurrent connections between neurons. How these spiking and circuit properties combine to support sensory representation and information processing is not well understood. We build a theoretical framework showing that these two ubiquitous features of cortex combine to produce optimal sampling-based Bayesian inference. Recurrent connections store an internal model of the external world, and Poissonian variability of spike responses drives flexible sampling from the posterior stimulus distributions obtained by combining feedforward and recurrent neuronal inputs. We illustrate how this framework for sampling-based inference can be used by cortex to represent latent multivariate stimuli organized either hierarchically or in parallel. A neural signature of such network sampling is internally generated differential correlations, whose amplitude is determined by the prior stored in the circuit; this provides an experimentally testable prediction for our framework.
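The core idea, that stochastic dynamics can draw samples from a posterior whose prior lives in the recurrent connectivity and whose likelihood arrives feedforward, can be sketched with Langevin dynamics on a conjugate Gaussian model. This is a stand-in for the spiking network in the paper; the model and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Conjugate Gaussian model: prior s ~ N(mu0, sd0^2), likelihood x | s ~ N(s, sdx^2).
mu0, sd0, sdx = 0.0, 2.0, 1.0
x = 3.0
post_var = 1.0 / (1.0 / sd0**2 + 1.0 / sdx**2)
post_mean = post_var * (mu0 / sd0**2 + x / sdx**2)

# Langevin sampling: noisy gradient ascent on the log posterior. The prior
# gradient plays the role of recurrent input, the likelihood gradient the
# role of feedforward input, and the injected noise the spiking variability.
dt, n_steps, burn = 0.01, 50_000, 5_000
s = 0.0
samples = np.empty(n_steps)
for t in range(n_steps):
    grad = (mu0 - s) / sd0**2 + (x - s) / sdx**2
    s += dt * grad + np.sqrt(2.0 * dt) * rng.normal()
    samples[t] = s
samples = samples[burn:]
print(samples.mean(), post_mean)  # empirical mean tracks the analytic posterior
```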
Affiliations
- Wen-Hao Zhang: Department of Neurobiology and Statistics, University of Chicago, Chicago, IL, USA; Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA; Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Lyda Hill Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX, USA
- Si Wu: School of Psychological and Cognitive Sciences, Peking University, Beijing 100871, China; IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China; Peking-Tsinghua Center for Life Sciences, Peking University, Beijing 100871, China; Center of Quantitative Biology, Peking University, Beijing 100871, China
- Krešimir Josić: Department of Mathematics, University of Houston, Houston, TX, USA; Department of Biology and Biochemistry, University of Houston, Houston, TX, USA
- Brent Doiron: Department of Neurobiology and Statistics, University of Chicago, Chicago, IL, USA; Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA; Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
5. Weiss O, Bounds HA, Adesnik H, Coen-Cagli R. Modeling the diverse effects of divisive normalization on noise correlations. PLoS Comput Biol 2023;19:e1011667. [PMID: 38033166; PMCID: PMC10715670; DOI: 10.1371/journal.pcbi.1011667]
Abstract
Divisive normalization, a prominent descriptive model of neural activity, is employed by theories of neural coding across many different brain areas. Yet, the relationship between normalization and the statistics of neural responses beyond single neurons remains largely unexplored. Here we focus on noise correlations, a widely studied pairwise statistic, because its stimulus and state dependence plays a central role in neural coding. Existing models of covariability typically ignore normalization despite empirical evidence suggesting it affects correlation structure in neural populations. We therefore propose a pairwise stochastic divisive normalization model that accounts for the effects of normalization and other factors on covariability. We first show that normalization modulates noise correlations in qualitatively different ways depending on whether normalization is shared between neurons, and we discuss how to infer when normalization signals are shared. We then apply our model to calcium imaging data from mouse primary visual cortex (V1), and find that it accurately fits the data, often outperforming a popular alternative model of correlations. Our analysis indicates that normalization signals are often shared between V1 neurons in this dataset. Our model will enable quantifying the relation between normalization and covariability in a broad range of neural systems, which could provide new constraints on circuit mechanisms of normalization and their role in information transmission and representation.
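One qualitative effect described above is easy to demonstrate: when two neurons share a fluctuating normalization signal, the common divisor induces positive noise correlations that largely vanish when each neuron has its own independent normalizer. The response model and parameters below are illustrative assumptions, not the paper's fitted pairwise model.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
drive1, drive2, sigma = 10.0, 6.0, 1.0

# Private (per-neuron) noise in the numerator; a stochastic normalization
# signal in the denominator.
priv1 = rng.normal(0.0, 1.0, size=n)
priv2 = rng.normal(0.0, 1.0, size=n)

# Case 1: both neurons are divided by the SAME fluctuating normalization signal.
shared = rng.lognormal(0.0, 0.3, size=n)
r1 = (drive1 + priv1) / (sigma + shared)
r2 = (drive2 + priv2) / (sigma + shared)
rho_shared = np.corrcoef(r1, r2)[0, 1]

# Case 2: each neuron has its own independent normalization signal.
g1 = rng.lognormal(0.0, 0.3, size=n)
g2 = rng.lognormal(0.0, 0.3, size=n)
rho_indep = np.corrcoef((drive1 + priv1) / (sigma + g1),
                        (drive2 + priv2) / (sigma + g2))[0, 1]
print(rho_shared, rho_indep)  # shared normalization -> positive correlation
```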
Affiliations
- Oren Weiss: Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, New York, United States of America
- Hayley A. Bounds: Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California, United States of America
- Hillel Adesnik: Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California, United States of America; Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, California, United States of America
- Ruben Coen-Cagli: Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, New York, United States of America; Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America; Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, New York, United States of America
6. Noel JP, Angelaki DE. A theory of autism bridging across levels of description. Trends Cogn Sci 2023;27:631-641. [PMID: 37183143; PMCID: PMC10330321; DOI: 10.1016/j.tics.2023.04.010]
Abstract
Autism impacts a wide range of behaviors and neural functions. As such, theories of autism spectrum disorder (ASD) are numerous and span different levels of description, from neurocognitive to molecular. We propose how existent behavioral, computational, algorithmic, and neural accounts of ASD may relate to one another. Specifically, we argue that ASD may be cast as a disorder of causal inference (computational level). This computation relies on marginalization, which is thought to be subserved by divisive normalization (algorithmic level). In turn, divisive normalization may be impaired by excitatory-to-inhibitory imbalances (neural implementation level). We also discuss ASD within similar frameworks, those of predictive coding and circular inference. Together, we hope to motivate work unifying the different accounts of ASD.
Affiliations
- Jean-Paul Noel: Center for Neural Science, New York University, New York, NY, USA
- Dora E Angelaki: Center for Neural Science, New York University, New York, NY, USA; Tandon School of Engineering, New York University, New York, NY, USA
7. Kutschireiter A, Basnak MA, Wilson RI, Drugowitsch J. Bayesian inference in ring attractor networks. Proc Natl Acad Sci U S A 2023;120:e2210622120. [PMID: 36812206; PMCID: PMC9992764; DOI: 10.1073/pnas.2210622120]
Abstract
Working memories are thought to be held in attractor networks in the brain. These attractors should keep track of the uncertainty associated with each memory, so as to weigh it properly against conflicting new evidence. However, conventional attractors do not represent uncertainty. Here, we show how uncertainty could be incorporated into an attractor, specifically a ring attractor that encodes head direction. First, we introduce a rigorous normative framework (the circular Kalman filter) for benchmarking the performance of a ring attractor under conditions of uncertainty. Next, we show that the recurrent connections within a conventional ring attractor can be retuned to match this benchmark. This allows the amplitude of network activity to grow in response to confirmatory evidence, while shrinking in response to poor-quality or strongly conflicting evidence. This "Bayesian ring attractor" performs near-optimal angular path integration and evidence accumulation. Indeed, we show that a Bayesian ring attractor is consistently more accurate than a conventional ring attractor. Moreover, near-optimal performance can be achieved without exact tuning of the network connections. Finally, we use large-scale connectome data to show that the network can achieve near-optimal performance even after we incorporate biological constraints. Our work demonstrates how attractors can implement a dynamic Bayesian inference algorithm in a biologically plausible manner, and it makes testable predictions with direct relevance to the head direction system as well as any neural system that tracks direction, orientation, or periodic rhythms.
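The qualitative behavior described, activity amplitude encoding certainty that grows under confirmatory evidence and shrinks under conflict, can be caricatured with a two-dimensional belief vector whose angle is the heading estimate and whose length is the certainty. The update rule and parameters are illustrative assumptions, not the circular Kalman filter derived in the paper.

```python
import numpy as np

def update(belief, theta_obs, kappa_obs, decay=0.05):
    """One step of a caricature Bayesian ring attractor: the belief is a 2D
    vector whose angle is the head-direction estimate and whose length is
    the certainty. Observations add a vector of length kappa_obs; decay
    models diffusion of the heading between observations."""
    obs = kappa_obs * np.array([np.cos(theta_obs), np.sin(theta_obs)])
    return (1.0 - decay) * belief + obs

belief = np.zeros(2)
# Confirmatory evidence: repeated observations near 0 rad.
for _ in range(50):
    belief = update(belief, 0.0, 1.0)
amp_confirm = np.linalg.norm(belief)

belief = np.zeros(2)
# Conflicting evidence: observations alternate between 0 and pi.
for i in range(50):
    belief = update(belief, 0.0 if i % 2 == 0 else np.pi, 1.0)
amp_conflict = np.linalg.norm(belief)
print(amp_confirm, amp_conflict)  # certainty grows only when evidence agrees
```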
Affiliations
- Rachel I. Wilson: Department of Neurobiology, Harvard Medical School, Boston, MA 02115
- Jan Drugowitsch: Department of Neurobiology, Harvard Medical School, Boston, MA 02115
8. Bill J, Gershman SJ, Drugowitsch J. Visual motion perception as online hierarchical inference. Nat Commun 2022;13:7403. [PMID: 36456546; PMCID: PMC9715570; DOI: 10.1038/s41467-022-34805-5]
Abstract
Identifying the structure of motion relations in the environment is critical for navigation, tracking, prediction, and pursuit. Yet, little is known about the mental and neural computations that allow the visual system to infer this structure online from a volatile stream of visual information. We propose online hierarchical Bayesian inference as a principled solution for how the brain might solve this complex perceptual task. We derive an online Expectation-Maximization algorithm that explains human percepts qualitatively and quantitatively for a diverse set of stimuli, covering classical psychophysics experiments, ambiguous motion scenes, and illusory motion displays. We thereby identify normative explanations for the origin of human motion structure perception and make testable predictions for future psychophysics experiments. The proposed online hierarchical inference model furthermore affords a neural network implementation which shares properties with motion-sensitive cortical areas and motivates targeted experiments to reveal the neural representations of latent structure.
Grants
- U19 NS118246, NINDS NIH HHS (U.S. Department of Health & Human Services | NIH | National Institute of Neurological Disorders and Stroke)
- James S. McDonnell Foundation
- This research was supported by grants from the NIH (NINDS U19NS118246, J.D.), the James S. McDonnell Foundation (Scholar Award for Understanding Human Cognition, Grant 220020462, J.D.), the Harvard Brain Science Initiative (Collaborative Seed Grant, J.D. & S.J.G.), and the Center for Brains, Minds, and Machines (CBMM; funded by NSF STC award CCF-1231216, S.J.G.).
Affiliations
- Johannes Bill: Department of Neurobiology, Harvard Medical School, Boston, MA, USA; Department of Psychology, Harvard University, Cambridge, MA, USA
- Samuel J Gershman: Department of Psychology, Harvard University, Cambridge, MA, USA; Center for Brain Science, Harvard University, Cambridge, MA, USA; Center for Brains, Minds, and Machines, MIT, Cambridge, MA, USA
- Jan Drugowitsch: Department of Neurobiology, Harvard Medical School, Boston, MA, USA; Center for Brain Science, Harvard University, Cambridge, MA, USA
9. Moon J, Kwon OS. Attractive and repulsive effects of sensory history concurrently shape visual perception. BMC Biol 2022;20:247. [PMID: 36345010; PMCID: PMC9641899; DOI: 10.1186/s12915-022-01444-7]
Abstract
Background: Sequential effects of environmental stimuli are ubiquitous in most behavioral tasks involving magnitude estimation, memory, decision making, and emotion. The human visual system exploits continuity in the visual environment, which induces two contrasting perceptual phenomena shaping visual perception. Previous work reported that perceptual estimation of a stimulus may be influenced either by attractive serial dependencies or by repulsive aftereffects, with a number of experimental variables suggested as factors determining the direction and magnitude of sequential effects. Recent studies have theorized that these two effects arise concurrently in perceptual processing, but empirical evidence that directly supports this hypothesis is lacking, and it remains unclear whether and how attractive and repulsive sequential effects interact within a trial. Here we show that the two effects concurrently modulate estimation behavior in a typical sequence of perceptual tasks.

Results: We first demonstrate that observers' estimation error as a function of both the previous stimulus and the previous response cannot be fully described by either attractive or repulsive bias alone, but is instead well captured by a summation of repulsion from the previous stimulus and attraction toward the previous response. We then reveal that the repulsive bias is centered on the observer's sensory encoding of the previous stimulus, which is itself repelled away from its own preceding trial, whereas the attractive bias is centered precisely on the previous response, which is the observer's best prediction about the incoming stimuli.

Conclusions: Our findings provide strong evidence that sensory encoding is shaped by dynamic tuning of the system to past stimuli, inducing repulsive aftereffects, and is followed by inference incorporating the prediction from the past estimate, leading to attractive serial dependence.
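The additive account, repulsion centered on the previous stimulus plus attraction centered on the previous response, can be sketched as a generative simulation whose two bias terms are then recovered by regression. The linear bias forms and all parameter values are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 20_000
a_rep, a_att, noise_sd = -0.15, 0.25, 1.0  # repulsion < 0, attraction > 0 (assumed)

stim = rng.uniform(-10, 10, size=n)
resp = np.empty(n)
resp[0] = stim[0]
for t in range(1, n):
    err = (a_rep * (stim[t - 1] - stim[t])     # repulsion from previous stimulus
           + a_att * (resp[t - 1] - stim[t]))  # attraction toward previous response
    resp[t] = stim[t] + err + rng.normal(0.0, noise_sd)

# Regress the estimation error on both previous-trial predictors at once.
X = np.column_stack([stim[:-1] - stim[1:], resp[:-1] - stim[1:]])
coef, *_ = np.linalg.lstsq(X, resp[1:] - stim[1:], rcond=None)
print(coef)  # recovers [a_rep, a_att]: opposite signs, as in the Results
```

The point of the joint regression is that conditioning on only one predictor would mix the two opposing biases, which is why single-predictor analyses can disagree in sign.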
Affiliations
- Jongmin Moon: Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, 50 UNIST-gil, Ulsan 44919, South Korea
- Oh-Sang Kwon: Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, 50 UNIST-gil, Ulsan 44919, South Korea
10. Divisive normalization is an efficient code for multivariate Pareto-distributed environments. Proc Natl Acad Sci U S A 2022;119:e2120581119. [PMID: 36161961; PMCID: PMC9546555; DOI: 10.1073/pnas.2120581119]
Abstract
Divisive normalization is a canonical computation in the brain, observed across neural systems, that is often considered to be an implementation of the efficient coding principle. We provide a theoretical result that makes the conditions under which divisive normalization is an efficient code analytically precise: We show that, in a low-noise regime, encoding an n-dimensional stimulus via divisive normalization is efficient if and only if its prevalence in the environment is described by a multivariate Pareto distribution. We generalize this multivariate analog of histogram equalization to allow for arbitrary metabolic costs of the representation, and show how different assumptions on costs are associated with different shapes of the distributions that divisive normalization efficiently encodes. Our result suggests that divisive normalization may have evolved to efficiently represent stimuli with Pareto distributions. We demonstrate that this efficiently encoded distribution is consistent with stylized features of naturalistic stimulus distributions such as their characteristic conditional variance dependence, and we provide empirical evidence suggesting that it may capture the statistics of filter responses to naturalistic images. Our theoretical finding also yields empirically testable predictions across sensory domains on how the divisive normalization parameters should be tuned to features of the input distribution.
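In one dimension the result reduces to histogram equalization, and there is a special case that can be checked by hand: for a Lomax (Pareto type II) distribution with unit shape parameter, the CDF is F(x) = 1 - (1 + x/sigma)^(-1) = x/(sigma + x), so divisive normalization with a matched semi-saturation constant is exactly the CDF and the encoded output is uniform. The unit-shape choice below is mine, for tractability; the paper's result is far more general.

```python
import numpy as np

rng = np.random.default_rng(7)
sigma = 2.0

# Lomax (Pareto type II) samples with shape alpha = 1 via inverse-CDF:
# F(x) = 1 - (1 + x/sigma)^(-1)  =>  x = sigma * u / (1 - u) for u ~ U[0, 1).
u = rng.uniform(size=100_000)
x = sigma * u / (1.0 - u)

# 1-D divisive normalization with the matched semi-saturation constant.
y = x / (sigma + x)

# y equals F(x), so the output is uniform on [0, 1]: histogram
# equalization, the hallmark of an efficient code in the low-noise regime.
print(y.mean(), np.quantile(y, 0.25))  # near 0.5 and 0.25
```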
11. Lin CHS, Garrido MI. Towards a cross-level understanding of Bayesian inference in the brain. Neurosci Biobehav Rev 2022;137:104649. [PMID: 35395333; DOI: 10.1016/j.neubiorev.2022.104649]
Abstract
Perception emerges from unconscious probabilistic inference, which guides behaviour in our ubiquitously uncertain environment. Bayesian decision theory is a prominent computational model that describes how people make rational decisions using noisy and ambiguous sensory observations. However, critical questions have been raised about the validity of the Bayesian framework in explaining the mental process of inference. First, some natural behaviours deviate from the Bayesian optimum. Second, the neural mechanisms that support Bayesian computations in the brain are yet to be understood. Taking Marr's cross-level approach, we review recent progress in addressing these challenges. We first review studies that combined behavioural paradigms and modelling approaches to explain both optimal and suboptimal behaviours. Next, we evaluate the theoretical advances and the current evidence for ecologically feasible algorithms and neural implementations in the brain that may enable probabilistic inference. We argue that this cross-level approach is necessary for the worthwhile pursuit of uncovering mechanistic accounts of human behaviour.
Affiliations
- Chin-Hsuan Sophie Lin: Melbourne School of Psychological Sciences, The University of Melbourne, Australia; Australian Research Council for Integrative Brain Function, Australia
- Marta I Garrido: Melbourne School of Psychological Sciences, The University of Melbourne, Australia; Australian Research Council for Integrative Brain Function, Australia
12. Noel JP, Shivkumar S, Dokka K, Haefner RM, Angelaki DE. Aberrant causal inference and presence of a compensatory mechanism in autism spectrum disorder. eLife 2022;11:71866. [PMID: 35579424; PMCID: PMC9170250; DOI: 10.7554/elife.71866]
Abstract
Autism spectrum disorder (ASD) is characterized by a panoply of social, communicative, and sensory anomalies. As such, a central goal of computational psychiatry is to ascribe the heterogeneous phenotypes observed in ASD to a limited set of canonical computations that may have gone awry in the disorder. Here, we posit causal inference, the process of inferring a causal structure linking sensory signals to hidden world causes, as one such computation. We show that audio-visual integration is intact in ASD and in line with optimal models of cue combination, yet multisensory behavior is anomalous in ASD because this group operates under an internal model favoring integration (vs. segregation). Paradoxically, during explicit reports of common cause across spatial or temporal disparities, individuals with ASD were less, not more, likely to report common cause, particularly at small cue disparities. Formal model fitting revealed differences in both the prior probability for common cause (p-common) and choice biases, which are dissociable in implicit but not explicit causal inference tasks. Together, this pattern of results suggests (i) different internal models in attributing world causes to sensory signals in ASD relative to neurotypical individuals given identical sensory cues, and (ii) the presence of an explicit compensatory mechanism in ASD, with these individuals putatively having learned to compensate for their bias to integrate in explicit reports.
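The causal inference computation at the heart of this account can be sketched as Bayesian model comparison between a common-cause and a separate-cause hypothesis, in the spirit of standard causal inference models of cue combination. The Gaussian forms, the numerical integration, and all parameter values are my assumptions, not the authors' fitted model.

```python
import numpy as np

def p_common(x1, x2, sd1=1.0, sd2=1.0, sp=10.0, prior_c=0.5):
    """Posterior probability that two noisy cues share one cause,
    via numerical integration over the latent source position s."""
    s = np.linspace(-50, 50, 5001)
    ds = s[1] - s[0]
    prior_s = np.exp(-0.5 * (s / sp) ** 2) / (sp * np.sqrt(2 * np.pi))
    l1 = np.exp(-0.5 * ((x1 - s) / sd1) ** 2) / (sd1 * np.sqrt(2 * np.pi))
    l2 = np.exp(-0.5 * ((x2 - s) / sd2) ** 2) / (sd2 * np.sqrt(2 * np.pi))
    # C = 1: one shared source; C = 2: two independent sources.
    like_c1 = np.sum(l1 * l2 * prior_s) * ds
    like_c2 = np.sum(l1 * prior_s) * ds * np.sum(l2 * prior_s) * ds
    return prior_c * like_c1 / (prior_c * like_c1 + (1 - prior_c) * like_c2)

print(p_common(0.0, 0.5), p_common(0.0, 8.0))  # high only at small disparity
```

Raising `prior_c` above 0.5 implements the over-integration bias the abstract attributes to the ASD group's internal model.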
Affiliations
- Jean-Paul Noel: Center for Neural Science, New York University, New York City, United States
- Kalpana Dokka: Department of Neuroscience, Baylor College of Medicine, Houston, United States
- Ralf M Haefner: Brain and Cognitive Sciences, University of Rochester, Rochester, United States
- Dora E Angelaki: Center for Neural Science, New York University, New York City, United States; Department of Neuroscience, Baylor College of Medicine, Houston, United States
13. Lange RD, Haefner RM. Task-induced neural covariability as a signature of approximate Bayesian learning and inference. PLoS Comput Biol 2022;18:e1009557. [PMID: 35259152; PMCID: PMC8963539; DOI: 10.1371/journal.pcbi.1009557]
Abstract
Perception is often characterized computationally as an inference process in which uncertain or ambiguous sensory inputs are combined with prior expectations. Although behavioral studies have shown that observers can change their prior expectations in the context of a task, robust neural signatures of task-specific priors have been elusive. Here, we analytically derive such signatures under the general assumption that the responses of sensory neurons encode posterior beliefs that combine sensory inputs with task-specific expectations. Specifically, we derive predictions for the task-dependence of correlated neural variability and decision-related signals in sensory neurons. The qualitative aspects of our results are parameter-free and specific to the statistics of each task. The predictions for correlated variability also differ from those of classic feedforward models of sensory processing and are therefore a strong test of theories of hierarchical Bayesian inference in the brain. Importantly, we find that Bayesian learning predicts an increase in so-called "differential correlations" as the observer's internal model learns the stimulus distribution and the observer's behavioral performance improves. This stands in contrast to classic feedforward encoding/decoding models of sensory processing, since such correlations are fundamentally information-limiting. We find support for our predictions in data from existing neurophysiological studies across a variety of tasks and brain areas. Finally, we show in simulation how measurements of sensory neural responses can reveal information about a subject's internal beliefs about the task. Taken together, our results reinterpret task-dependent sources of neural covariability as signatures of Bayesian inference and provide new insights into their cause and their function.

Perceptual decision-making has classically been studied in the context of feedforward encoding/decoding models. Here, we derive predictions for the responses of sensory neurons under the assumption that the brain performs hierarchical Bayesian inference, including feedback signals that communicate task-specific prior expectations. Interestingly, those predictions stand in contrast to some of the conclusions drawn in the classic framework. In particular, we find that Bayesian learning predicts an increase in a type of correlated variability called "differential correlations" over the course of learning. Differential correlations limit information, and hence are seen as harmful in feedforward models. Since our results are also specific to the statistics of a given task, and since they hold under a wide class of theories about how Bayesian probabilities may be represented by neural responses, they constitute a strong test of the Bayesian brain hypothesis. Our results can explain the task-dependence of correlated variability in prior studies and suggest a reason why these kinds of correlations are surprisingly common in empirical data. Interpreted in a probabilistic framework, correlated variability provides a window into an observer's task-related beliefs.
Affiliation(s)
- Richard D. Lange
- Brain and Cognitive Sciences, University of Rochester, Rochester, New York, United States of America
- Center for Visual Science, University of Rochester, Rochester, New York, United States of America
- * E-mail: (RDL); (RMH)
- Ralf M. Haefner
- Brain and Cognitive Sciences, University of Rochester, Rochester, New York, United States of America
- Center for Visual Science, University of Rochester, Rochester, New York, United States of America
- * E-mail: (RDL); (RMH)
14
Ichikawa K, Kataoka A. Dynamical Mechanism of Sampling-Based Probabilistic Inference under Probabilistic Population Codes. Neural Comput 2022; 34:804-827. [PMID: 35026031 DOI: 10.1162/neco_a_01477] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Accepted: 11/04/2021] [Indexed: 11/04/2022]
Abstract
Animals make efficient probabilistic inferences based on uncertain and noisy information from the outside environment. It is known that probabilistic population codes, which have been proposed as a neural basis for encoding probability distributions, allow general neural networks (NNs) to perform near-optimal point estimation. However, the mechanism of sampling-based probabilistic inference has not been clarified. In this study, we trained two types of artificial NNs, a feedforward NN (FFNN) and a recurrent NN (RNN), to perform sampling-based probabilistic inference. Then we analyzed and compared their mechanisms of sampling. We found that, unlike the FFNN, the RNN performed sampling via a mechanism that efficiently exploits the properties of dynamical systems. In addition, we found that sampling in the RNN acted as an inductive bias, enabling more accurate estimation than maximum a posteriori estimation. These results provide important arguments for discussing the relationship between dynamical systems and information processing in NNs.
Affiliation(s)
- Kohei Ichikawa
- Graduate School of Arts and Sciences, University of Tokyo, Tokyo 153-0041, Japan, and ACES, Bunkyo-ku, Tokyo-to 223-0034, Japan
- Asaki Kataoka
- Graduate School of Arts and Sciences, University of Tokyo, Tokyo 153-0041, Japan, and ACES, Bunkyo-ku, Tokyo-to 223-0034, Japan
15
Masset P, Zavatone-Veth JA, Connor JP, Murthy VN, Pehlevan C. Natural gradient enables fast sampling in spiking neural networks. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 2022; 35:22018-22034. [PMID: 37476623 PMCID: PMC10358281] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Subscribe] [Scholar Register] [Indexed: 07/22/2023]
Abstract
For animals to navigate an uncertain world, their brains need to estimate uncertainty at the timescales of sensations and actions. Sampling-based algorithms afford a theoretically grounded framework for probabilistic inference in neural circuits, but it remains unknown how one can implement fast sampling algorithms in biologically plausible spiking networks. Here, we propose to leverage the population geometry, controlled by the neural code and the neural dynamics, to implement fast samplers in spiking neural networks. We first show that two classes of spiking samplers (efficient balanced spiking networks that simulate Langevin sampling, and networks with probabilistic spike rules that implement Metropolis-Hastings sampling) can be unified within a common framework. We then show that careful choice of population geometry, corresponding to the natural space of parameters, enables rapid inference of parameters drawn from strongly-correlated high-dimensional distributions in both networks. Our results suggest design principles for algorithms for sampling-based probabilistic inference in spiking neural networks, yielding potential inspiration for neuromorphic computing and testable predictions for neurobiology.
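The geometry argument in this abstract can be sketched without spikes: a minimal Euler-Maruyama Langevin sampler, with preconditioning by the target covariance standing in for the paper's natural-gradient geometry (all parameters below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a strongly correlated 2D Gaussian N(mu, Sigma).
mu = np.array([1.0, -1.0])
Sigma = np.array([[1.0, 0.95], [0.95, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)
L = np.linalg.cholesky(Sigma)

def langevin(n_steps, dt, precondition):
    """Euler-Maruyama Langevin sampler. Preconditioning by Sigma (a
    stand-in for the natural geometry) leaves the stationary law
    unchanged but removes the slow mixing direction."""
    x = np.zeros(2)
    samples = np.empty((n_steps, 2))
    for t in range(n_steps):
        if precondition:
            drift = -(x - mu)                    # Sigma @ grad simplifies
            noise = L @ rng.standard_normal(2)   # noise covariance = Sigma
        else:
            drift = -Sigma_inv @ (x - mu)        # plain gradient of the energy
            noise = rng.standard_normal(2)
        x = x + dt * drift + np.sqrt(2 * dt) * noise
        samples[t] = x
    return samples

plain = langevin(20000, 0.02, precondition=False)
natural = langevin(20000, 0.02, precondition=True)
```

Both chains target the same Gaussian, but the preconditioned one mixes at the same rate in every direction, which is the intuition behind fast sampling in the natural parameter space.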
Affiliation(s)
- Paul Masset
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
- Jacob A Zavatone-Veth
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Physics, Harvard University, Cambridge, MA 02138
- J Patrick Connor
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138
- Venkatesh N Murthy
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
- Cengiz Pehlevan
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138
16
Sokoloski S, Aschner A, Coen-Cagli R. Modelling the neural code in large populations of correlated neurons. eLife 2021; 10:64615. [PMID: 34608865 PMCID: PMC8577837 DOI: 10.7554/elife.64615] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2020] [Accepted: 10/01/2021] [Indexed: 01/02/2023] Open
Abstract
Neurons respond selectively to stimuli, and thereby define a code that associates stimuli with population response patterns. Certain correlations within population responses (noise correlations) significantly impact the information content of the code, especially in large populations. Understanding the neural code thus necessitates response models that quantify the coding properties of modelled populations, while fitting large-scale neural recordings and capturing noise correlations. In this paper, we propose a class of response models based on mixture models and exponential families. We show how to fit our models with expectation-maximization, and that they capture diverse variability and covariability in recordings of macaque primary visual cortex. We also show how they facilitate accurate Bayesian decoding, provide a closed-form expression for the Fisher information, and are compatible with theories of probabilistic population coding. Our framework could allow researchers to quantitatively validate the predictions of neural coding theories against both large-scale neural recordings and cognitive performance.
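The core property such mixture models capture (a latent state produces over-dispersed, correlated counts that a single independent-Poisson model cannot) can be shown in a few lines; the weights and rates below are made up, and the paper's actual models are conditional exponential-family mixtures fit with EM:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up two-component mixture: on each trial a latent state selects a
# rate vector, and counts are Poisson given that state. The resulting
# counts are over-dispersed (Fano factor > 1) and noise-correlated.
weights = np.array([0.5, 0.5])
rates = np.array([[2.0, 3.0],     # component 0: rates for neurons 1, 2
                  [10.0, 12.0]])  # component 1
z = rng.choice(2, size=50000, p=weights)   # latent component per trial
counts = rng.poisson(rates[z])             # trials x neurons

fano = counts.var(axis=0) / counts.mean(axis=0)
noise_corr = np.corrcoef(counts.T)[0, 1]
```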
Affiliation(s)
- Sacha Sokoloski
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, United States; Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Amir Aschner
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, United States
- Ruben Coen-Cagli
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, United States; Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, United States
17
Festa D, Aschner A, Davila A, Kohn A, Coen-Cagli R. Neuronal variability reflects probabilistic inference tuned to natural image statistics. Nat Commun 2021; 12:3635. [PMID: 34131142 PMCID: PMC8206154 DOI: 10.1038/s41467-021-23838-x] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2020] [Accepted: 05/19/2021] [Indexed: 11/23/2022] Open
Abstract
Neuronal activity in sensory cortex fluctuates over time and across repetitions of the same input. This variability is often considered detrimental to neural coding. The theory of neural sampling proposes instead that variability encodes the uncertainty of perceptual inferences. In primary visual cortex (V1), modulation of variability by sensory and non-sensory factors supports this view. However, it is unknown whether V1 variability reflects the statistical structure of visual inputs, as would be required for inferences correctly tuned to the statistics of the natural environment. Here we combine analysis of image statistics and recordings in macaque V1 to show that probabilistic inference tuned to natural image statistics explains the widely observed dependence between spike count variance and mean, and the modulation of V1 activity and variability by spatial context in images. Our results show that the properties of a basic aspect of cortical responses (their variability) can be explained by a probabilistic representation tuned to naturalistic inputs.
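A widely used phenomenological version of the variance-mean relation discussed here treats spike counts as Poisson with a fluctuating multiplicative gain; the sketch below is illustrative (gamma-distributed gain, made-up parameters), not the paper's inference model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Modulated-Poisson toy: counts are Poisson with a gamma-distributed
# gain (mean 1, variance c), which yields the super-Poisson relation
# Var = mean + c * mean^2 instead of the Poisson line Var = mean.
c = 0.3
means = np.array([1.0, 5.0, 20.0])                        # made-up mean rates
gain = rng.gamma(shape=1 / c, scale=c, size=(100000, 1))  # E[g]=1, Var[g]=c
counts = rng.poisson(gain * means)                        # trials x conditions

empirical_var = counts.var(axis=0)
predicted_var = means + c * means ** 2                    # law of total variance
```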
Affiliation(s)
- Dylan Festa
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, NY, USA
- Amir Aschner
- Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Aida Davila
- Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Adam Kohn
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, NY, USA
- Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY, USA
- Ruben Coen-Cagli
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, NY, USA
- Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
18
Dehaene GP, Coen-Cagli R, Pouget A. Investigating the representation of uncertainty in neuronal circuits. PLoS Comput Biol 2021; 17:e1008138. [PMID: 33577553 PMCID: PMC7880493 DOI: 10.1371/journal.pcbi.1008138] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2018] [Accepted: 07/09/2020] [Indexed: 11/24/2022] Open
Abstract
Skilled behavior often displays signatures of Bayesian inference. In order for the brain to implement the required computations, neuronal activity must carry accurate information about the uncertainty of sensory inputs. Two major approaches have been proposed to study neuronal representations of uncertainty. The first one, the Bayesian decoding approach, aims primarily at decoding the posterior probability distribution of the stimulus from population activity using Bayes’ rule, and indirectly yields uncertainty estimates as a by-product. The second one, which we call the correlational approach, searches for specific features of neuronal activity (such as tuning-curve width and maximum firing rate) which correlate with uncertainty. To compare these two approaches, we derived a new normative model of sound source localization by interaural time difference (ITD) that reproduces a wealth of behavioral and neural observations. We found that several features of neuronal activity correlated with uncertainty on average, but none provided an accurate estimate of uncertainty on a trial-by-trial basis, indicating that the correlational approach may not reliably identify which aspects of neuronal responses represent uncertainty. In contrast, the Bayesian decoding approach revealed that the activity pattern of the entire population was required to reconstruct the trial-to-trial posterior distribution with Bayes’ rule. These results suggest that uncertainty is unlikely to be represented in a single feature of neuronal activity, and highlight the importance of using a Bayesian decoding approach when exploring the neural basis of uncertainty. In order to optimize their behavior, animals must continuously represent the uncertainty associated with their beliefs. Understanding the neural code for this uncertainty is a pressing and critical issue in neuroscience.
Following a long tradition, some studies have investigated this code by measuring how average statistics of neural responses (like the tuning curves) correlate with uncertainty as stimulus characteristics are varied. We show that this approach can be very misleading. An alternative consists in decoding the neuronal responses to recover the posterior distribution over the encoded sensory variables and using the variance of this distribution as the measure of uncertainty. We demonstrate that this decoding approach can indeed avoid the pitfalls of the traditional approach, while leading to more accurate estimates of uncertainty.
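The decoding approach advocated here can be sketched for the textbook case of independent Poisson neurons with known tuning curves (an assumption made purely for illustration; the paper's normative ITD model is richer):

```python
import numpy as np

rng = np.random.default_rng(3)

# Bayesian decoding sketch: with independent Poisson neurons and known
# tuning curves f_i(s), the log posterior over a stimulus grid is
# sum_i r_i log f_i(s) - sum_i f_i(s) + log prior (flat prior here).
s_grid = np.linspace(-10, 10, 201)
centers = np.linspace(-10, 10, 40)
tuning = 5.0 * np.exp(-0.5 * ((s_grid[None] - centers[:, None]) / 2.0) ** 2) + 0.1

s_true_idx = 100                               # true stimulus s = 0
r = rng.poisson(tuning[:, s_true_idx])         # one trial of spike counts

log_post = r @ np.log(tuning) - tuning.sum(axis=0)
log_post -= log_post.max()
post = np.exp(log_post)
post /= post.sum()

s_hat = post @ s_grid                                 # posterior mean
uncertainty = np.sqrt(post @ (s_grid - s_hat) ** 2)   # posterior sd, per trial
```

The posterior standard deviation is the trial-by-trial uncertainty estimate, and it depends on the whole activity pattern rather than on any single response feature.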
Affiliation(s)
- Guillaume P Dehaene
- University of Geneva, Département des neurosciences fondamentales, Geneva, Switzerland
- Ruben Coen-Cagli
- University of Geneva, Département des neurosciences fondamentales, Geneva, Switzerland; Department of Systems & Computational Biology and Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America
- Alexandre Pouget
- University of Geneva, Département des neurosciences fondamentales, Geneva, Switzerland; Gatsby Computational Neuroscience Unit, London, United Kingdom
19
Huys QJM, Browning M, Paulus MP, Frank MJ. Advances in the computational understanding of mental illness. Neuropsychopharmacology 2021; 46:3-19. [PMID: 32620005 PMCID: PMC7688938 DOI: 10.1038/s41386-020-0746-4] [Citation(s) in RCA: 50] [Impact Index Per Article: 16.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/04/2020] [Revised: 06/11/2020] [Accepted: 06/15/2020] [Indexed: 12/11/2022]
Abstract
Computational psychiatry is a rapidly growing field attempting to translate advances in computational neuroscience and machine learning into improved outcomes for patients suffering from mental illness. It encompasses both data-driven and theory-driven efforts. Here, recent advances in theory-driven work are reviewed. We argue that the brain is a computational organ. As such, an understanding of the illnesses arising from it will require a computational framework. The review divides this work into three theoretical approaches that have deep mathematical connections: dynamical systems, Bayesian inference and reinforcement learning. We discuss both general and specific challenges for the field, and suggest ways forward.
Affiliation(s)
- Quentin J M Huys
- Division of Psychiatry and Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, UK
- Camden and Islington NHS Trust, London, UK
- Michael Browning
- Computational Psychiatry Lab, Department of Psychiatry, University of Oxford, Oxford, UK
- Oxford Health NHS Trust, Oxford, UK
- Martin P Paulus
- Laureate Institute For Brain Research (LIBR), Tulsa, OK, USA
- Michael J Frank
- Cognitive, Linguistic & Psychological Sciences, Neuroscience Graduate Program, Brown University, Providence, RI, USA
- Carney Center for Computational Brain Science, Carney Institute for Brain Science, Psychiatry and Human Behavior, Brown University, Providence, RI, USA
20
Lowet AS, Zheng Q, Matias S, Drugowitsch J, Uchida N. Distributional Reinforcement Learning in the Brain. Trends Neurosci 2020; 43:980-997. [PMID: 33092893 PMCID: PMC8073212 DOI: 10.1016/j.tins.2020.09.004] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2020] [Revised: 08/14/2020] [Accepted: 09/08/2020] [Indexed: 12/11/2022]
Abstract
Learning about rewards and punishments is critical for survival. Classical studies have demonstrated an impressive correspondence between the firing of dopamine neurons in the mammalian midbrain and the reward prediction errors of reinforcement learning algorithms, which express the difference between actual reward and predicted mean reward. However, it may be advantageous to learn not only the mean but also the complete distribution of potential rewards. Recent advances in machine learning have revealed a biologically plausible set of algorithms for reconstructing this reward distribution from experience. Here, we review the mathematical foundations of these algorithms as well as initial evidence for their neurobiological implementation. We conclude by highlighting outstanding questions regarding the circuit computation and behavioral readout of these distributional codes.
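The family of algorithms reviewed here can be illustrated with a minimal expectile-style update, in which asymmetric learning rates for positive and negative prediction errors make different units converge to different statistics of the reward distribution (the reward distribution and taus below are made up):

```python
import numpy as np

rng = np.random.default_rng(4)

# Expectile-style distributional TD: each unit i keeps a value V_i and
# scales positive vs negative prediction errors asymmetrically by tau_i,
# so the population spans the reward distribution instead of tracking
# only its mean.
rewards = rng.choice([0.0, 10.0], p=[0.8, 0.2], size=200000)  # bimodal rewards

taus = np.array([0.1, 0.5, 0.9])     # per-unit optimism/pessimism
V = np.zeros(3)
lr = 0.01
for r in rewards:
    delta = r - V                                   # prediction errors
    alpha = np.where(delta > 0, taus, 1.0 - taus)   # asymmetric scaling
    V += lr * alpha * delta
```

The tau = 0.5 unit converges to the mean reward (2.0 here); low and high taus settle below and above it, jointly carrying distributional information.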
Affiliation(s)
- Adam S Lowet
- Department of Molecular and Cellular Biology, Center for Brain Science, Harvard University, Cambridge, MA 02138, USA
- Qiao Zheng
- Department of Neurobiology, Harvard Medical School, Boston, MA 02115, USA
- Sara Matias
- Department of Molecular and Cellular Biology, Center for Brain Science, Harvard University, Cambridge, MA 02138, USA
- Jan Drugowitsch
- Department of Neurobiology, Harvard Medical School, Boston, MA 02115, USA
- Naoshige Uchida
- Department of Molecular and Cellular Biology, Center for Brain Science, Harvard University, Cambridge, MA 02138, USA
21
Hierarchical structure is employed by humans during visual motion perception. Proc Natl Acad Sci U S A 2020; 117:24581-24589. [PMID: 32938799 DOI: 10.1073/pnas.2008961117] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
In the real world, complex dynamic scenes often arise from the composition of simpler parts. The visual system exploits this structure by hierarchically decomposing dynamic scenes: When we see a person walking on a train or an animal running in a herd, we recognize the individual's movement as nested within a reference frame that is, itself, moving. Despite its ubiquity, surprisingly little is understood about the computations underlying hierarchical motion perception. To address this gap, we developed a class of stimuli that grant tight control over statistical relations among object velocities in dynamic scenes. We first demonstrate that structured motion stimuli benefit human multiple object tracking performance. Computational analysis revealed that the performance gain is best explained by human participants making use of motion relations during tracking. A second experiment, using a motion prediction task, reinforced this conclusion and provided fine-grained information about how the visual system flexibly exploits motion structure.
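A toy version of the hierarchical decomposition studied here shows why exploiting shared structure helps tracking: subtracting an estimated group velocity leaves far less per-object variance (sizes and noise levels below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy hierarchical motion: each object's velocity is a shared group
# velocity plus a small individual term. Removing the estimated group
# motion leaves far less variance to track per object.
v_group = rng.normal(0.0, 3.0, size=(1000, 2))      # common reference frame
v_self = rng.normal(0.0, 0.5, size=(1000, 8, 2))    # per-object motion
v_obs = v_group[:, None, :] + v_self                # what the observer sees

v_resid = v_obs - v_obs.mean(axis=1, keepdims=True) # group motion removed
```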
22
Yu Z, Chen F, Liu JK. Sampling-Tree Model: Efficient Implementation of Distributed Bayesian Inference in Neural Networks. IEEE Trans Cogn Dev Syst 2020. [DOI: 10.1109/tcds.2019.2927808] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
23
Kohn A, Jasper AI, Semedo JD, Gokcen E, Machens CK, Yu BM. Principles of Corticocortical Communication: Proposed Schemes and Design Considerations. Trends Neurosci 2020; 43:725-737. [PMID: 32771224 PMCID: PMC7484382 DOI: 10.1016/j.tins.2020.07.001] [Citation(s) in RCA: 44] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2020] [Revised: 07/01/2020] [Accepted: 07/05/2020] [Indexed: 12/22/2022]
Abstract
Nearly all brain functions involve routing neural activity among a distributed network of areas. Understanding this routing requires more than a description of interareal anatomical connectivity: it requires understanding what controls the flow of signals through interareal circuitry and how this communication might be modulated to allow flexible behavior. Here we review proposals of how communication, particularly between visual cortical areas, is instantiated and modulated, highlighting recent work that offers new perspectives. We suggest transitioning from a focus on assessing changes in the strength of interareal interactions, as often seen in studies of interareal communication, to a broader consideration of how different signaling schemes might contribute to computation. To this end, we discuss a set of features that might be desirable for a communication scheme.
Affiliation(s)
- Adam Kohn
- Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA; Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY, USA; Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, NY, USA
- Anna I Jasper
- Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- João D Semedo
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Evren Gokcen
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Christian K Machens
- Champalimaud Research, Champalimaud Centre for the Unknown, Lisbon, Portugal
- Byron M Yu
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
24
Walsh KS, McGovern DP, Clark A, O'Connell RG. Evaluating the neurophysiological evidence for predictive processing as a model of perception. Ann N Y Acad Sci 2020; 1464:242-268. [PMID: 32147856 PMCID: PMC7187369 DOI: 10.1111/nyas.14321] [Citation(s) in RCA: 120] [Impact Index Per Article: 30.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2019] [Revised: 01/21/2020] [Accepted: 02/03/2020] [Indexed: 12/12/2022]
Abstract
For many years, the dominant theoretical framework guiding research into the neural origins of perceptual experience has been provided by hierarchical feedforward models, in which sensory inputs are passed through a series of increasingly complex feature detectors. However, the long-standing orthodoxy of these accounts has recently been challenged by a radically different set of theories that contend that perception arises from a purely inferential process supported by two distinct classes of neurons: those that transmit predictions about sensory states and those that signal sensory information that deviates from those predictions. Although these predictive processing (PP) models have become increasingly influential in cognitive neuroscience, they are also criticized for lacking the empirical support to justify their status. This limited evidence base partly reflects the considerable methodological challenges that are presented when trying to test the unique predictions of these models. However, a confluence of technological and theoretical advances has prompted a recent surge in human and nonhuman neurophysiological research seeking to fill this empirical gap. Here, we will review this new research and evaluate the degree to which its findings support the key claims of PP.
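The two neuron classes posited by PP models can be sketched with a minimal predictive-coding loop (a textbook toy, not a specific model from the review): prediction units generate a prediction of the input, error units carry the residual, and inference drives the residual toward zero for predictable inputs:

```python
import numpy as np

rng = np.random.default_rng(6)

# Minimal predictive-coding loop: prediction units r generate a
# prediction W @ r of the input x; error units carry x - W @ r; r is
# updated to reduce the error. Dimensions and weights are made up.
W = rng.normal(size=(20, 5)) / np.sqrt(5)   # generative weights
x = W @ rng.normal(size=5)                  # an input the model can explain

r = np.zeros(5)
errors = []
for _ in range(300):
    e = x - W @ r                 # error-unit activity
    r += 0.1 * W.T @ e            # inference dynamics on prediction units
    errors.append(np.linalg.norm(e))
```

Error-unit activity decays as the prediction improves, which is the signature these neurophysiological studies look for.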
Affiliation(s)
- Kevin S. Walsh
- Trinity College Institute of Neuroscience and School of Psychology, Trinity College Dublin, Dublin, Ireland
- David P. McGovern
- Trinity College Institute of Neuroscience and School of Psychology, Trinity College Dublin, Dublin, Ireland
- School of Psychology, Dublin City University, Dublin, Ireland
- Andy Clark
- Department of Philosophy, University of Sussex, Brighton, UK
- Department of Informatics, University of Sussex, Brighton, UK
- Redmond G. O'Connell
- Trinity College Institute of Neuroscience and School of Psychology, Trinity College Dublin, Dublin, Ireland
25
Lazar AA, Ukani NH, Zhou Y. Sparse identification of contrast gain control in the fruit fly photoreceptor and amacrine cell layer. JOURNAL OF MATHEMATICAL NEUROSCIENCE 2020; 10:3. [PMID: 32052209 PMCID: PMC7016054 DOI: 10.1186/s13408-020-0080-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/19/2019] [Accepted: 01/28/2020] [Indexed: 05/05/2023]
Abstract
The fruit fly's natural visual environment is often characterized by light intensities ranging across several orders of magnitude and by rapidly varying contrast across space and time. Fruit fly photoreceptors robustly transduce and, in conjunction with amacrine cells, process visual scenes and provide the resulting signal to downstream targets. Here, we model the first step of visual processing in the photoreceptor-amacrine cell layer. We propose a novel divisive normalization processor (DNP) for modeling the computation taking place in the photoreceptor-amacrine cell layer. The DNP explicitly models the photoreceptor feedforward and temporal feedback processing paths and the spatio-temporal feedback path of the amacrine cells. We then formally characterize the contrast gain control of the DNP and provide sparse identification algorithms that can efficiently identify each of the feedforward and feedback DNP components. The algorithms presented here are the first demonstration of tractable and robust identification of the components of a divisive normalization processor. The sparse identification algorithms can be readily employed in experimental settings, and their effectiveness is demonstrated with several examples.
Affiliation(s)
- Aurel A. Lazar
- Department of Electrical Engineering, Columbia University, New York, USA
- Nikul H. Ukani
- Department of Electrical Engineering, Columbia University, New York, USA
- Yiyin Zhou
- Department of Electrical Engineering, Columbia University, New York, USA
26
Divisively Normalized Integration of Multisensory Error Information Develops Motor Memories Specific to Vision and Proprioception. J Neurosci 2020; 40:1560-1570. [PMID: 31924610 DOI: 10.1523/jneurosci.1745-19.2019] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2019] [Revised: 11/17/2019] [Accepted: 12/13/2019] [Indexed: 11/21/2022] Open
Abstract
Both visual and proprioceptive information contribute to the accuracy of limb movement, but the mechanism of integration of these different modality signals for movement control and learning remains controversial. We aimed to elucidate the mechanism of multisensory integration for motor adaptation by evaluating single-trial adaptation (i.e., aftereffect) induced by visual and proprioceptive perturbations while male and female human participants performed reaching movements. The force-channel method was used to precisely impose several combinations of visual and proprioceptive perturbations (i.e., errors), including an instance when the directions of perturbation in both stimuli opposed each other. In the subsequent probe force-channel trial, the lateral force against the channel was quantified as the aftereffect to clarify the mechanism by which the motor adaptation system corrects movement in the event of visual and proprioceptive errors. We observed that the aftereffects showed a complex dependence on the visual and proprioceptive errors. Although this pattern could not be explained by previously proposed computational models based on the reliability of sensory information, we found that it could be reasonably explained by a mechanism known as divisive normalization, which is the reported mechanism underlying the integration of multisensory signals in neurons. Furthermore, we discovered evidence that the motor memory for each sensory modality developed separately in accordance with a divisive normalization mechanism and that the outputs of both memories were integrated. These results provide a novel view of the utilization and integration of different sensory modality signals in motor adaptation.SIGNIFICANCE STATEMENT How the motor control system uses multimodal sensory information to perform accurate limb movements is a fundamental question.
However, the mechanism of integration of these different sensory modalities for movement control and learning remains highly debated. Herein, we demonstrate that multisensory integration in the motor learning system can be reasonably explained by divisive normalization, a canonical computation ubiquitously observed in the brain (Carandini and Heeger, 2011). Moreover, we provide evidence for the novel idea that integration occurs not at the level of sensory information processing but at the level of motor execution, after the motor memory for each sensory modality is separately created.
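One simple instance of divisive normalization applied to two error signals (our illustrative parameterization, not the model fitted in the study) shows the saturation and cross-modal suppression such a mechanism produces:

```python
# Illustrative divisive normalization of a visual and a proprioceptive
# error signal (simplified form with made-up constants k and sigma).
def normalized_drive(e_vis, e_prop, k=1.0, sigma=0.5):
    denom = sigma + abs(e_vis) + abs(e_prop)
    return k * e_vis / denom, k * e_prop / denom

v1, p1 = normalized_drive(1.0, 1.0)   # baseline errors
v2, p2 = normalized_drive(2.0, 2.0)   # doubled errors: sub-linear growth
v3, p3 = normalized_drive(1.0, 5.0)   # large proprioceptive error suppresses vision
```

Doubling both errors less than doubles either drive, and a large error in one modality shrinks the other's influence, the qualitative pattern divisive normalization predicts for aftereffects.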
27
Optimized but Not Maximized Cue Integration for 3D Visual Perception. eNeuro 2020; 7:ENEURO.0411-19.2019. [PMID: 31836597 PMCID: PMC6948924 DOI: 10.1523/eneuro.0411-19.2019] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2019] [Revised: 12/05/2019] [Accepted: 12/08/2019] [Indexed: 02/02/2023] Open
Abstract
Reconstructing three-dimensional (3D) scenes from two-dimensional (2D) retinal images is an ill-posed problem. Despite this, 3D perception of the world based on 2D retinal images is seemingly accurate and precise. The integration of distinct visual cues is essential for robust 3D perception in humans, but it is unclear whether this is true for non-human primates (NHPs). Here, we assessed 3D perception in macaque monkeys using a planar surface orientation discrimination task. Perception was accurate across a wide range of spatial poses (orientations and distances), but precision was highly dependent on the plane's pose. The monkeys achieved robust 3D perception by dynamically reweighting the integration of stereoscopic and perspective cues according to their pose-dependent reliabilities. Errors in performance could be explained by a prior resembling the 3D orientation statistics of natural scenes. We used neural network simulations based on 3D orientation-selective neurons recorded from the same monkeys to assess how neural computation might constrain perception. The perceptual data were consistent with a model in which the responses of two independent neuronal populations representing stereoscopic cues and perspective cues (with perspective signals from the two eyes combined using nonlinear canonical computations) were optimally integrated through linear summation. Perception of combined-cue stimuli was optimal given this architecture. However, an alternative architecture in which stereoscopic cues, left eye perspective cues, and right eye perspective cues were represented by three independent populations yielded two times greater precision than the monkeys. This result suggests that, due to canonical computations, cue integration for 3D perception is optimized but not maximized.
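The normative baseline against which such reweighting is judged is inverse-variance (reliability-weighted) cue combination, sketched below with made-up cue estimates and variances:

```python
import numpy as np

# Inverse-variance cue combination: each cue i gives an estimate with
# variance sigma_i^2; the optimal linear combination weights cues by
# inverse variance, and the combined variance is below either cue's.
def combine(estimates, variances):
    w = 1.0 / np.asarray(variances)
    w = w / w.sum()
    combined_estimate = w @ np.asarray(estimates)
    combined_variance = 1.0 / np.sum(1.0 / np.asarray(variances))
    return combined_estimate, combined_variance

est, var = combine([10.0, 14.0], [1.0, 4.0])  # stereo precise, perspective noisy
```

Here the combined estimate (10.8) sits closer to the more reliable cue and the combined variance (0.8) beats both single-cue variances; the study asks whether monkeys' pose-dependent reweighting follows this logic.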
28
A neural basis of probabilistic computation in visual cortex. Nat Neurosci 2019; 23:122-129. [PMID: 31873286 DOI: 10.1038/s41593-019-0554-5] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2018] [Accepted: 11/06/2019] [Indexed: 11/08/2022]
Abstract
Bayesian models of behavior suggest that organisms represent uncertainty associated with sensory variables. However, the neural code of uncertainty remains elusive. A central hypothesis is that uncertainty is encoded in the population activity of cortical neurons in the form of likelihood functions. We tested this hypothesis by simultaneously recording population activity from primate visual cortex during a visual categorization task in which trial-to-trial uncertainty about stimulus orientation was relevant for the decision. We decoded the likelihood function from the trial-to-trial population activity and found that it predicted decisions better than a point estimate of orientation. This remained true when we conditioned on the true orientation, suggesting that internal fluctuations in neural activity drive behaviorally meaningful variations in the likelihood function. Our results establish the role of population-encoded likelihood functions in mediating behavior and provide a neural underpinning for Bayesian models of perception.
29
Position-theta-phase model of hippocampal place cell activity applied to quantification of running speed modulation of firing rate. Proc Natl Acad Sci U S A 2019; 116:27035-27042. [PMID: 31843934 PMCID: PMC6936353 DOI: 10.1073/pnas.1912792116]
Abstract
Spiking activity of place cells in the hippocampus encodes the animal's position as it moves through an environment. Within a cell's place field, both the firing rate and the phase of spiking in the local theta oscillation contain spatial information. We propose a position-theta-phase (PTP) model that captures the simultaneous expression of the firing-rate code and theta-phase code in place cell spiking. This model parametrically characterizes place fields to compare across cells, time, and conditions; generates realistic place cell simulation data; and conceptualizes a framework for principled hypothesis testing to identify additional features of place cell activity. We use the PTP model to assess the effect of running speed in place cell data recorded from rats running on linear tracks. For the majority of place fields, we do not find evidence for speed modulation of the firing rate. For a small subset of place fields, we find firing rates significantly increase or decrease with speed. We use the PTP model to compare candidate mechanisms of speed modulation in significantly modulated fields and determine that speed acts as a gain control on the magnitude of firing rate. Our model provides a tool that connects rigorous analysis with a computational framework for understanding place cell activity.
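The joint rate-and-phase coding can be caricatured as a rate map over position and theta phase, with the preferred phase precessing across the field and speed entering only as a multiplicative gain, the mechanism the study favors for speed-modulated fields. All functional forms and parameter values below are illustrative, not the fitted PTP model:

```python
import numpy as np

def place_cell_rate(x, phase, speed=1.0, center=0.0, width=0.1,
                    peak=20.0, precession=-2.0 * np.pi, kappa=1.0, gain=0.0):
    """Sketch of a position-theta-phase style rate map.

    - Gaussian place field over track position x.
    - Von Mises tuning over theta phase, with the preferred phase
      advancing linearly across the field (phase precession).
    - Running speed enters only as a multiplicative gain on rate.
    """
    spatial = peak * np.exp(-(x - center) ** 2 / (2.0 * width ** 2))
    pref_phase = precession * (x - center)
    phase_mod = np.exp(kappa * (np.cos(phase - pref_phase) - 1.0))
    return (1.0 + gain * speed) * spatial * phase_mod

# At the field center, at the preferred phase, with no speed gain:
r0 = place_cell_rate(0.0, 0.0)
# A speed gain rescales the rate but leaves the field shape untouched:
r_fast = place_cell_rate(0.0, 0.0, speed=2.0, gain=0.5)
```

Because the gain term factors out of the position and phase tuning, it changes the magnitude of firing without moving the field, which is the signature distinguishing gain control from other candidate speed mechanisms.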
30
Human confidence judgments reflect reliability-based hierarchical integration of contextual information. Nat Commun 2019; 10:5430. [PMID: 31780659 PMCID: PMC6882790 DOI: 10.1038/s41467-019-13472-z]
Abstract
Our immediate observations must be supplemented with contextual information to resolve ambiguities. However, the context is often ambiguous too, and thus must itself be inferred to guide behavior. Here, we introduce a novel hierarchical task (airplane task) in which participants must infer a higher-level contextual variable to inform probabilistic inference about a hidden dependent variable at a lower level. By controlling the reliability of past sensory evidence through varying the sample size of the observations, we find that humans estimate the reliability of the context and combine it with current sensory uncertainty to inform their confidence reports. Behavior closely follows inference by probabilistic message passing between latent variables across hierarchical state representations. Commonly reported inferential fallacies, such as sample size insensitivity, are not present, nor did participants appear to rely on simple heuristics. Our results reveal uncertainty-sensitive integration of information at different hierarchical levels and temporal scales. Because our immediate observations are often ambiguous, we must use the context (prior beliefs) to guide inference, but the context may also be uncertain. Here, the authors show that humans can accurately estimate the reliability of the context and combine it with sensory uncertainty to form their decisions and estimate confidence.
31
Coen-Cagli R, Solomon SS. Relating Divisive Normalization to Neuronal Response Variability. J Neurosci 2019; 39:7344-7356. [PMID: 31387914 PMCID: PMC6759019 DOI: 10.1523/jneurosci.0126-19.2019]
Abstract
Cortical responses to repeated presentations of a sensory stimulus are variable. This variability is sensitive to several stimulus dimensions, suggesting that it may carry useful information beyond the average firing rate. Many experimental manipulations that affect response variability are also known to engage divisive normalization, a widespread operation that describes neuronal activity as the ratio of a numerator (representing the excitatory stimulus drive) and denominator (the normalization signal). Although it has been suggested that normalization affects response variability, we lack a quantitative framework to determine the relation between the two. Here we extend the standard normalization model, by treating the numerator and the normalization signal as variable quantities. The resulting model predicts a general stabilizing effect of normalization on neuronal responses, and allows us to infer the single-trial normalization strength, a quantity that cannot be measured directly. We test the model on neuronal responses to stimuli of varying contrast, recorded in primary visual cortex of male macaques. We find that neurons that are more strongly normalized fire more reliably, and response variability and pairwise noise correlations are reduced during trials in which normalization is inferred to be strong. Our results thus suggest a novel functional role for normalization, namely, modulating response variability. Our framework could enable a direct quantification of the impact of single-trial normalization strength on the accuracy of perceptual judgments, and can be readily applied to other sensory and nonsensory factors.
SIGNIFICANCE STATEMENT: Divisive normalization is a widespread neural operation across sensory and nonsensory brain areas, which describes neuronal responses as the ratio between the excitatory drive to the neuron and a normalization signal.
Normalization plays a key role in several important computations, including adjusting the neuron's dynamic range, reducing redundancy, and facilitating probabilistic inference. However, the relation between normalization and neuronal response variability (a fundamental aspect of neural coding) remains unclear. Here we develop a new model and test it on primary visual cortex responses. We show that normalization has a stabilizing effect on neuronal activity, beyond the known suppression of firing rate. This modulation of variability suggests a new functional role for normalization in neural coding and perception.
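The stabilizing effect described here can be seen in a toy version of the ratio model: dividing a variable excitatory drive by a normalization signal shrinks both mean and variance, and the Fano factor (variance/mean) drops as normalization strengthens. A minimal sketch; the distributions and parameters are invented for illustration:

```python
import numpy as np

def normalized_response(drive, norm_signal, sigma=1.0):
    """Ratio form of the normalization model: R = drive / (sigma + norm)."""
    return drive / (sigma + norm_signal)

rng = np.random.default_rng(0)
# Trial-to-trial variability in the excitatory stimulus drive.
drive = rng.gamma(shape=50.0, scale=1.0, size=100_000)

weak = normalized_response(drive, norm_signal=1.0)     # weak normalization
strong = normalized_response(drive, norm_signal=20.0)  # strong normalization

# Fano factor (variance / mean) is lower under strong normalization:
fano_weak = weak.var() / weak.mean()
fano_strong = strong.var() / strong.mean()
```

In this toy version the normalization signal is fixed per condition and only the numerator fluctuates; the paper's model additionally treats the normalization signal itself as a variable quantity inferred trial by trial.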
Affiliation(s)
- Ruben Coen-Cagli
- Department of Systems and Computational Biology, and
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York 10461
- Selina S Solomon
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York 10461
32
Conen KE, Padoa-Schioppa C. Partial Adaptation to the Value Range in the Macaque Orbitofrontal Cortex. J Neurosci 2019; 39:3498-3513. [PMID: 30833513 PMCID: PMC6495134 DOI: 10.1523/jneurosci.2279-18.2019]
Abstract
Values available for choice in different behavioral contexts can vary immensely. To compensate for this variability, neuronal circuits underlying economic decisions undergo adaptation. In orbitofrontal cortex (OFC), neurons encode the subjective value of offered and chosen goods in a quasilinear way. Previous experiments found that the gain of the encoding is lower when the value range is wider. However, the parameters OFC neurons adapted to remained unclear. Furthermore, previous studies did not examine additive changes in neuronal responses. Computational considerations indicate that these factors can directly impact choice behavior. Here we investigated how OFC neurons adapt to changes in the value range. We recorded from two male rhesus monkeys during a juice choice task. Each session was divided into two blocks of trials. In each block, juices were offered within a set range of values, and ranges changed between blocks. Across blocks, neuronal responses adapted to both the maximum and the minimum value, but only partially. As a result, the minimum neural activity was elevated in some value ranges relative to others. Through simulation of a linear decision model, we showed that increasing the minimum response increases choice variability, lowering the expected payoff. This effect is modulated by the balance between cells with positive and negative encoding. The presence of these two populations induces a non-monotonic relationship between the value range and choice efficacy, such that the expected payoff is highest for decisions in an intermediate value range.
SIGNIFICANCE STATEMENT: Economic decisions are thought to rely on the orbitofrontal cortex (OFC). The values available for choice vary enormously in different contexts. Previous work showed that neurons in OFC encode values in a linear way, and that the gain of encoding is inversely related to the range of available values. However, the specific parameters driving adaptation remained unclear.
Here we show that OFC neurons adapt to both the maximum and minimum value in the current context. However, adaptation is partial, leading to contextual changes in the response offset. Interestingly, increasing the activity offset negatively affects choices in a simulated network. Partial adaptation may allow the circuit to maintain information about context value at the cost of slightly reduced payoff.
Affiliation(s)
- Camillo Padoa-Schioppa
- Departments of Neuroscience,
- Economics, and
- Biomedical Engineering, Washington University, St Louis, Missouri 63110
33
McNamee D, Wolpert DM. Internal Models in Biological Control. ANNUAL REVIEW OF CONTROL, ROBOTICS, AND AUTONOMOUS SYSTEMS 2019; 2:339-364. [PMID: 31106294 PMCID: PMC6520231 DOI: 10.1146/annurev-control-060117-105206]
Abstract
Rationality principles such as optimal feedback control and Bayesian inference underpin a probabilistic framework that has accounted for a range of empirical phenomena in biological sensorimotor control. To facilitate the optimization of flexible and robust behaviors consistent with these theories, the ability to construct internal models of the motor system and environmental dynamics can be crucial. In the context of this theoretic formalism, we review the computational roles played by such internal models and the neural and behavioral evidence for their implementation in the brain.
Affiliation(s)
- Daniel McNamee
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
- Institute of Neurology, University College London, London WC1E 6BT, United Kingdom
- Daniel M. Wolpert
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
- Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York 10027, United States
34
Tiganj Z, Gershman SJ, Sederberg PB, Howard MW. Estimating Scale-Invariant Future in Continuous Time. Neural Comput 2019; 31:681-709. [PMID: 30764739 PMCID: PMC6959535 DOI: 10.1162/neco_a_01171]
Abstract
Natural learners must compute an estimate of future outcomes that follow from a stimulus in continuous time. Widely used reinforcement learning algorithms discretize continuous time and estimate either transition functions from one step to the next (model-based algorithms) or a scalar value of exponentially discounted future reward using the Bellman equation (model-free algorithms). An important drawback of model-based algorithms is that computational cost grows linearly with the amount of time to be simulated. An important drawback of model-free algorithms is the need to select a timescale required for exponential discounting. We present a computational mechanism, developed based on work in psychology and neuroscience, for computing a scale-invariant timeline of future outcomes. This mechanism efficiently computes an estimate of inputs as a function of future time on a logarithmically compressed scale and can be used to generate a scale-invariant power-law-discounted estimate of expected future reward. The representation of future time retains information about what will happen when. The entire timeline can be constructed in a single parallel operation that generates concrete behavioral and neural predictions. This computational mechanism could be incorporated into future reinforcement learning algorithms.
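The key construction, pooling exponentially decaying traces across a spectrum of timescales, can be sketched directly: a roughly uniform density of decay rates yields an approximately 1/t (power-law) discount, which is scale-invariant, whereas any single exponential commits to one timescale. The rate grid below is illustrative, not the paper's parameterization:

```python
import numpy as np

def spectral_discount(t, rates):
    """Approximate the integral of exp(-s * t) over a uniform density of
    decay rates s; the exact integral over s in (0, inf) is 1/t, a
    scale-free power law, so this sum of exponentials has no single
    characteristic timescale."""
    ds = rates[1] - rates[0]
    return np.exp(-np.outer(t, rates)).sum(axis=1) * ds

rates = np.linspace(0.01, 20.0, 2000)   # spectrum of decay rates s = 1/tau
t = np.array([1.0, 2.0, 4.0, 8.0])
d = spectral_discount(t, rates)

# Scale invariance: doubling the delay roughly halves the discount at
# every delay, i.e. d(2t)/d(t) is approximately constant (~1/2).
ratios = d[1:] / d[:-1]
```

A single exponential exp(-t/tau), by contrast, has d(2t)/d(t) = exp(-t/tau), a ratio that changes with t, which is exactly the timescale-selection problem the abstract attributes to model-free algorithms.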
Affiliation(s)
- Zoran Tiganj
- Center for Memory and Brain, Department of Psychological and Brain Sciences, Boston, MA 02215, U.S.A.
- Samuel J Gershman
- Department of Psychology and Center for Brain Science, Harvard University, Cambridge, MA 02138, U.S.A.
- Per B Sederberg
- Department of Psychology, University of Virginia, Charlottesville, VA 22904, U.S.A.
- Marc W Howard
- Center for Memory and Brain, Department of Psychological and Brain Sciences, Boston, MA 02215, U.S.A.
35
Sanchez-Giraldo LG, Laskar MNU, Schwartz O. Normalization and pooling in hierarchical models of natural images. Curr Opin Neurobiol 2019; 55:65-72. [PMID: 30785005 DOI: 10.1016/j.conb.2019.01.008]
Abstract
Divisive normalization and subunit pooling are two canonical classes of computation that have become widely used in descriptive (what) models of visual cortical processing. Normative (why) models from natural image statistics can help constrain the form and parameters of such classes of models. We focus on recent advances in two particular directions, namely deriving richer forms of divisive normalization, and advances in learning pooling from image statistics. We discuss the incorporation of such components into hierarchical models. We consider both hierarchical unsupervised learning from image statistics, and discriminative supervised learning in deep convolutional neural networks (CNNs). We further discuss studies on the utility and extensions of the convolutional architecture, which has also been adopted by recent descriptive models. We review the recent literature and discuss the current promises and gaps of using such approaches to gain a better understanding of how cortical neurons represent and process complex visual stimuli.
Affiliation(s)
- Luis G Sanchez-Giraldo
- Computational Neuroscience Lab, Dept. of Computer Science, University of Miami, FL 33146, United States.
- Md Nasir Uddin Laskar
- Computational Neuroscience Lab, Dept. of Computer Science, University of Miami, FL 33146, United States
- Odelia Schwartz
- Computational Neuroscience Lab, Dept. of Computer Science, University of Miami, FL 33146, United States
36
Sasaki R, Angelaki DE, DeAngelis GC. Processing of object motion and self-motion in the lateral subdivision of the medial superior temporal area in macaques. J Neurophysiol 2019; 121:1207-1221. [PMID: 30699042 DOI: 10.1152/jn.00497.2018]
Abstract
Multiple areas of macaque cortex are involved in visual motion processing, but their relative functional roles remain unclear. The medial superior temporal (MST) area is typically divided into lateral (MSTl) and dorsal (MSTd) subdivisions that are thought to be involved in processing object motion and self-motion, respectively. Whereas MSTd has been studied extensively with regard to processing visual and nonvisual self-motion cues, little is known about self-motion signals in MSTl, especially nonvisual signals. Moreover, little is known about how self-motion and object motion signals interact in MSTl and how this differs from interactions in MSTd. We compared the visual and vestibular heading tuning of neurons in MSTl and MSTd using identical stimuli. Our findings reveal that both visual and vestibular heading signals are weaker in MSTl than in MSTd, suggesting that MSTl is less well suited to participate in self-motion perception than MSTd. We also tested neurons in both areas with a variety of combinations of object motion and self-motion. Our findings reveal that vestibular signals improve the separability of coding of heading and object direction in both areas, albeit more strongly in MSTd due to the greater strength of vestibular signals. Based on a marginalization technique, population decoding reveals that heading and object direction can be more effectively dissociated from MSTd responses than MSTl responses. Our findings help to clarify the respective contributions that MSTl and MSTd make to processing of object motion and self-motion, although our conclusions may be somewhat specific to the multipart moving objects that we employed. NEW & NOTEWORTHY Retinal image motion reflects contributions from both the observer's self-motion and the movement of objects in the environment. The neural mechanisms by which the brain dissociates self-motion and object motion remain unclear. 
This study provides the first systematic examination of how the lateral subdivision of area MST (MSTl) contributes to dissociating object motion and self-motion. We also examine, for the first time, how MSTl neurons represent translational self-motion based on both vestibular and visual cues.
Affiliation(s)
- Ryo Sasaki
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York
- Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas
- Department of Electrical and Computer Engineering, Rice University, Houston, Texas
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York
37
Prefrontal mechanisms combining rewards and beliefs in human decision-making. Nat Commun 2019; 10:301. [PMID: 30655534 PMCID: PMC6336816 DOI: 10.1038/s41467-018-08121-w]
Abstract
In uncertain and changing environments, optimal decision-making requires integrating reward expectations with probabilistic beliefs about reward contingencies. Little is known, however, about how the prefrontal cortex (PFC), which subserves decision-making, combines these quantities. Here, using computational modelling and neuroimaging, we show that the ventromedial PFC encodes both reward expectations and proper beliefs about reward contingencies, while the dorsomedial PFC combines these quantities and guides choices that are at variance with those predicted by optimal decision theory: instead of integrating reward expectations with beliefs, the dorsomedial PFC builds context-dependent reward expectations commensurable to beliefs and uses these quantities as two concurrent appetitive components driving choices. This neural mechanism accounts for well-known risk aversion effects in human decision-making. The results reveal that the irrationality of human choices, commonly theorized as deriving from optimal computations over false beliefs, actually stems from suboptimal neural heuristics over rational beliefs about reward contingencies. Optimal decision-making requires integrating expectations about rewards with beliefs about reward contingencies. Here, the authors show that these aspects of reward are encoded in the ventromedial prefrontal cortex, then combined in the dorsomedial prefrontal cortex, a process that guides choice biases characteristic of human decision-making.
38
Aschner A, Solomon SG, Landy MS, Heeger DJ, Kohn A. Temporal Contingencies Determine Whether Adaptation Strengthens or Weakens Normalization. J Neurosci 2018; 38:10129-10142. [PMID: 30291205 PMCID: PMC6246879 DOI: 10.1523/jneurosci.1131-18.2018]
Abstract
A fundamental and nearly ubiquitous feature of sensory encoding is that neuronal responses are strongly influenced by recent experience, or adaptation. Theoretical and computational studies have proposed that many adaptation effects may result in part from changes in the strength of normalization signals. Normalization is a "canonical" computation in which a neuron's response is modulated (normalized) by the pooled activity of other neurons. Here, we test whether adaptation can alter the strength of cross-orientation suppression, or masking, a paradigmatic form of normalization evident in primary visual cortex (V1). We made extracellular recordings of V1 neurons in anesthetized male macaques and measured responses to plaid stimuli composed of two overlapping, orthogonal gratings before and after prolonged exposure to two distinct adapters. The first adapter was a plaid consisting of orthogonal gratings and led to stronger masking. The second adapter presented the same orthogonal gratings in an interleaved manner and led to weaker masking. The strength of adaptation's effects on masking depended on the orientation of the test stimuli relative to the orientation of the adapters, but was independent of neuronal orientation preference. Changes in masking could not be explained by altered neuronal responsivity. Our results suggest that normalization signals can be strengthened or weakened by adaptation depending on the temporal contingencies of the adapting stimuli. Our findings reveal an interplay between two widespread computations in cortical circuits, adaptation and normalization, that enables flexible adjustments to the structure of the environment, including the temporal relationships among sensory stimuli.
SIGNIFICANCE STATEMENT: Two fundamental features of sensory responses are that they are influenced by adaptation and that they are modulated by the activity of other nearby neurons via normalization.
Our findings reveal a strong interaction between these two aspects of cortical computation. Specifically, we show that cross-orientation masking, a form of normalization, can be strengthened or weakened by adaptation depending on the temporal contingencies between sensory inputs. Our findings support theoretical proposals that some adaptation effects may involve altered normalization and offer a network-based explanation for how cortex adjusts to current sensory demands.
Affiliation(s)
- Amir Aschner
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York 10461
- Samuel G Solomon
- Department of Experimental Psychology, University College London, London, United Kingdom WC1H 0AP
- Michael S Landy
- Department of Psychology and Center for Neural Science, New York University, New York, New York 10003
- David J Heeger
- Department of Psychology and Center for Neural Science, New York University, New York, New York 10003
- Adam Kohn
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York 10461
- Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, New York 10461, and
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, New York 10461
39
Echeveste R, Lengyel M. The Redemption of Noise: Inference with Neural Populations. Trends Neurosci 2018; 41:767-770. [PMID: 30366563 PMCID: PMC6416224 DOI: 10.1016/j.tins.2018.09.003]
Abstract
In 2006, Ma et al. presented an elegant theory for how populations of neurons might represent uncertainty to perform Bayesian inference. Critically, according to this theory, neural variability is no longer a nuisance, but rather a vital part of how the brain encodes probability distributions and performs computations with them.
Affiliation(s)
- Rodrigo Echeveste
- Computational and Biological Learning Laboratory, Department of Engineering, University of Cambridge, Cambridge, UK
- Máté Lengyel
- Computational and Biological Learning Laboratory, Department of Engineering, University of Cambridge, Cambridge, UK; Department of Cognitive Science, Central European University, Budapest, Hungary
40
Abedi Khoozani P, Blohm G. Neck muscle spindle noise biases reaches in a multisensory integration task. J Neurophysiol 2018; 120:893-909. [PMID: 29742021 PMCID: PMC6171065 DOI: 10.1152/jn.00643.2017]
Abstract
Reference frame transformations (RFTs) are crucial components of sensorimotor transformations in the brain. Stochasticity in RFTs has been suggested to add noise to the transformed signal due to variability in transformation parameter estimates (e.g., angle) as well as the stochastic nature of computations in spiking networks of neurons. Here, we varied the RFT angle together with the associated variability and evaluated the behavioral impact in a reaching task that required variability-dependent visual-proprioceptive multisensory integration. Crucially, reaches were performed with the head either straight or rolled 30° to either shoulder, and we also applied neck loads of 0 or 1.8 kg (left or right) in a 3 × 3 design, resulting in different combinations of estimated head roll angle magnitude and variance required in RFTs. A novel three-dimensional stochastic model of multisensory integration across reference frames was fitted to the data and captured our main behavioral findings: 1) neck load biased head angle estimation across all head roll orientations, resulting in systematic shifts in reach errors; 2) increased neck muscle tone led to increased reach variability due to signal-dependent noise; and 3) both head roll and neck load created larger angular errors in reaches to visual targets away from the body compared with reaches toward the body. These results show that noise in muscle spindles and stochasticity in general have a tangible effect on RFTs underlying reach planning. Since RFTs are omnipresent in the brain, our results could have implications for processes as diverse as motor control, decision making, posture/balance control, and perception. NEW & NOTEWORTHY We show that increasing neck muscle tone systematically biases reach movements. 
A novel three-dimensional multisensory integration across reference frames model captures the data well and provides evidence that the brain must have online knowledge of full-body geometry together with the associated variability to plan reach movements accurately.
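The core claim, that variability in the estimated transformation angle injects signal-dependent noise into the transformed signal, can be illustrated with a two-dimensional rotation whose angle is sampled with some spread. This is a simplified stand-in for the study's 3D head-roll model; all values and names are illustrative:

```python
import numpy as np

def rotate_with_noisy_angle(point, angle_mean, angle_sd, n=10_000, seed=0):
    """Apply a 2-D rotation whose angle varies from trial to trial;
    returns the mean and std of the transformed point. The noise the
    transformation adds grows with distance from the rotation axis,
    i.e. it is signal-dependent."""
    rng = np.random.default_rng(seed)
    a = rng.normal(angle_mean, angle_sd, size=n)
    x, y = point
    pts = np.stack([np.cos(a) * x - np.sin(a) * y,
                    np.sin(a) * x + np.cos(a) * y], axis=1)
    return pts.mean(axis=0), pts.std(axis=0)

# Same angular variability (5 deg of spread around a 30 deg roll),
# applied to a target near vs far from the rotation axis:
_, near_sd = rotate_with_noisy_angle((1.0, 0.0), np.deg2rad(30.0), np.deg2rad(5.0))
_, far_sd = rotate_with_noisy_angle((10.0, 0.0), np.deg2rad(30.0), np.deg2rad(5.0))
```

Because rotation is linear in the point, the transformation-induced scatter scales with target eccentricity here, in the spirit of the finding that head roll and neck load produced larger errors for reaches to targets away from the body.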
Affiliation(s)
- Parisa Abedi Khoozani
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Canadian Action and Perception Network, Toronto, Ontario, Canada
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Canadian Action and Perception Network, Toronto, Ontario, Canada
- Association for Canadian Neuroinformatics and Computational Neuroscience, Kingston, Ontario, Canada
41
Rosenberg A, Sunkara A. Does attenuated divisive normalization affect gaze processing in autism spectrum disorder? A commentary on Palmer et al. (2018). Cortex 2018; 111:316-318. [PMID: 30086826 DOI: 10.1016/j.cortex.2018.06.017]
Affiliation(s)
- Ari Rosenberg
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin - Madison, Madison, WI, USA
- Adhira Sunkara
- Department of Surgery, Stanford University School of Medicine, Stanford, CA, USA
42
How Does the Brain Tell Self-Motion from Object Motion? J Neurosci 2018; 38:3875-3877. [PMID: 29669798 DOI: 10.1523/jneurosci.0039-18.2018]
43
Dakin CJ, Rosenberg A. Gravity estimation and verticality perception. HANDBOOK OF CLINICAL NEUROLOGY 2018; 159:43-59. [PMID: 30482332 DOI: 10.1016/b978-0-444-63916-5.00003-3]
Abstract
Gravity is a defining force that governs the evolution of mechanical forms, shapes and anchors our perception of the environment, and imposes fundamental constraints on our interactions with the world. Within the animal kingdom, humans are relatively unique in having evolved a vertical, bipedal posture. Although a vertical posture confers numerous benefits, it also renders us less stable than quadrupeds, increasing susceptibility to falls. The ability to accurately and precisely estimate our orientation relative to gravity is therefore of utmost importance. Here we review sensory information and computational processes underlying gravity estimation and verticality perception. Central to gravity estimation and verticality perception is multisensory cue combination, which serves to improve the precision of perception and resolve ambiguities in sensory representations by combining information from across the visual, vestibular, and somatosensory systems. We additionally review experimental paradigms for evaluating verticality perception, and discuss how particular disorders affect the perception of upright. Together, the work reviewed here highlights the critical role of multisensory cue combination in gravity estimation, verticality perception, and creating stable gravity-centered representations of our environment.
Affiliation(s)
- Christopher J Dakin
- Department of Kinesiology and Health Science, Utah State University, Logan, UT, United States.
- Ari Rosenberg
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin - Madison, Madison, WI, United States.
44
Kilpatrick ZP, Poll DB. Neural field model of memory-guided search. Phys Rev E 2017; 96:062411. [PMID: 29347320 DOI: 10.1103/physreve.96.062411] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2017] [Indexed: 11/07/2022]
Abstract
Many organisms can remember locations they have previously visited during a search. Visual search experiments have shown exploration is guided away from these locations, reducing redundancies in the search path before finding a hidden target. We develop and analyze a two-layer neural field model that encodes positional information during a search task. A position-encoding layer sustains a bump attractor corresponding to the searching agent's current location, and search is modeled by velocity input that propagates the bump. A memory layer sustains persistent activity bounded by a wave front, whose edges expand in response to excitatory input from the position layer. Search can then be biased in response to remembered locations, influencing velocity inputs to the position layer. Asymptotic techniques are used to reduce the dynamics of our model to a low-dimensional system of equations that track the bump position and front boundary. Performance is compared for different target-finding tasks.
Affiliation(s)
- Zachary P Kilpatrick
- Department of Applied Mathematics, University of Colorado, Boulder, Colorado 80309, USA; Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, Colorado 80045, USA.
- Daniel B Poll
- Department of Mathematics, University of Houston, Houston, Texas 77204, USA; Department of Engineering Sciences and Applied Mathematics, Northwestern University, Evanston, Illinois 60208, USA.
45
Dissociation of Self-Motion and Object Motion by Linear Population Decoding That Approximates Marginalization. J Neurosci 2017; 37:11204-11219. [PMID: 29030435 DOI: 10.1523/jneurosci.1177-17.2017] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2017] [Revised: 10/02/2017] [Accepted: 10/06/2017] [Indexed: 11/21/2022] Open
Abstract
We use visual image motion to judge the movement of objects, as well as our own movements through the environment. Generally, image motion components caused by object motion and self-motion are confounded in the retinal image. Thus, to estimate heading, the brain would ideally marginalize out the effects of object motion (or vice versa), but little is known about how this is accomplished neurally. Behavioral studies suggest that vestibular signals play a role in dissociating object motion and self-motion, and recent computational work suggests that a linear decoder can approximate marginalization by taking advantage of diverse multisensory representations. By measuring responses of MSTd neurons in two male rhesus monkeys and by applying a recently developed method to approximate marginalization by linear population decoding, we tested the hypothesis that vestibular signals help to dissociate self-motion and object motion. We show that vestibular signals stabilize tuning for heading in neurons with congruent visual and vestibular heading preferences, whereas they stabilize tuning for object motion in neurons with discrepant preferences. Thus, vestibular signals enhance the separability of joint tuning for object motion and self-motion. We further show that a linear decoder, designed to approximate marginalization, allows the population to represent either self-motion or object motion with good accuracy. Decoder weights are broadly consistent with a readout strategy, suggested by recent computational work, in which responses are decoded according to the vestibular preferences of multisensory neurons. These results demonstrate, at both single neuron and population levels, that vestibular signals help to dissociate self-motion and object motion. SIGNIFICANCE STATEMENT: The brain often needs to estimate one property of a changing environment while ignoring others. This can be difficult because multiple properties of the environment may be confounded in sensory signals. The brain can solve this problem by marginalizing over irrelevant properties to estimate the property of interest. We explore this problem in the context of self-motion and object motion, which are inherently confounded in the retinal image. We examine how diversity in a population of multisensory neurons may be exploited to decode self-motion and object motion from the population activity of neurons in macaque area MSTd.
46
Efficient probabilistic inference in generic neural networks trained with non-probabilistic feedback. Nat Commun 2017; 8:138. [PMID: 28743932 PMCID: PMC5527101 DOI: 10.1038/s41467-017-00181-8] [Citation(s) in RCA: 31] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2016] [Accepted: 06/08/2017] [Indexed: 02/01/2023] Open
Abstract
Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey’s learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules. Behavioural tasks often require probability distributions to be inferred about task-specific variables. Here, the authors demonstrate that generic neural networks can be trained using a simple error-based learning rule to perform such probabilistic computations efficiently without any need for task-specific operations.
47
Gain Modulation as a Mechanism for Coding Depth from Motion Parallax in Macaque Area MT. J Neurosci 2017; 37:8180-8197. [PMID: 28739582 DOI: 10.1523/jneurosci.0393-17.2017] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2017] [Revised: 06/30/2017] [Accepted: 07/20/2017] [Indexed: 11/21/2022] Open
Abstract
Observer translation produces differential image motion between objects that are located at different distances from the observer's point of fixation [motion parallax (MP)]. However, MP can be ambiguous with respect to depth sign (near vs far), and this ambiguity can be resolved by combining retinal image motion with signals regarding eye movement relative to the scene. We have previously demonstrated that both extra-retinal and visual signals related to smooth eye movements can modulate the responses of neurons in area MT of macaque monkeys, and that these modulations generate neural selectivity for depth sign. However, the neural mechanisms that govern this selectivity have remained unclear. In this study, we analyze responses of MT neurons as a function of both retinal velocity and direction of eye movement, and we show that smooth eye movements modulate MT responses in a systematic, temporally precise, and directionally specific manner to generate depth-sign selectivity. We demonstrate that depth-sign selectivity is primarily generated by multiplicative modulations of the response gain of MT neurons. Through simulations, we further demonstrate that depth can be estimated reasonably well by a linear decoding of a population of MT neurons with response gains that depend on eye velocity. Together, our findings provide the first mechanistic description of how visual cortical neurons signal depth from MP. SIGNIFICANCE STATEMENT: Motion parallax is a monocular cue to depth that commonly arises during observer translation. To compute from motion parallax whether an object appears nearer or farther than the point of fixation requires combining retinal image motion with signals related to eye rotation, but the neurobiological mechanisms have remained unclear. This study provides the first mechanistic account of how this interaction takes place in the responses of cortical neurons. Specifically, we show that smooth eye movements modulate the gain of responses of neurons in area MT in a directionally specific manner to generate selectivity for depth sign from motion parallax. We also show, through simulations, that depth could be estimated from a population of such gain-modulated neurons.
48
Representation of Multidimensional Stimuli: Quantifying the Most Informative Stimulus Dimension from Neural Responses. J Neurosci 2017; 37:7332-7346. [PMID: 28663198 DOI: 10.1523/jneurosci.0318-17.2017] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2017] [Revised: 06/09/2017] [Accepted: 06/17/2017] [Indexed: 11/21/2022] Open
Abstract
A common way to assess the function of sensory neurons is to measure the number of spikes produced by individual neurons while systematically varying a given dimension of the stimulus. Such measured tuning curves can then be used to quantify the accuracy of the neural representation of the stimulus dimension under study, which can in turn be related to behavioral performance. However, tuning curves often change shape when other dimensions of the stimulus are varied, reflecting the simultaneous sensitivity of neurons to multiple stimulus features. Here we illustrate how one-dimensional information analyses are misleading in this context, and propose a framework derived from Fisher information that allows the quantification of information carried by neurons in multidimensional stimulus spaces. We use this method to probe the representation of sound localization in auditory neurons of chinchillas and guinea pigs of both sexes, and show how heterogeneous tuning properties contribute to a representation of sound source position that is robust to changes in sound level. SIGNIFICANCE STATEMENT: Sensory neurons' responses are typically modulated simultaneously by numerous stimulus properties, which can result in an overestimation of neural acuity with existing one-dimensional neural information transmission measures. To overcome this limitation, we develop new, compact expressions of Fisher information-derived measures that bound the robust encoding of separate stimulus dimensions in the context of multidimensional stimuli. We apply this method to the problem of the representation of sound source location in the face of changes in sound source level by neurons of the auditory midbrain.
49
Sokoloski S. Implementing a Bayes Filter in a Neural Circuit: The Case of Unknown Stimulus Dynamics. Neural Comput 2017; 29:2450-2490. [PMID: 28599113 DOI: 10.1162/neco_a_00991] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
In order to interact intelligently with objects in the world, animals must first transform neural population responses into estimates of the dynamic, unknown stimuli that caused them. The Bayesian solution to this problem is known as a Bayes filter, which applies Bayes' rule to combine population responses with the predictions of an internal model. The internal model of the Bayes filter is based on the true stimulus dynamics, and in this note, we present a method for training a theoretical neural circuit to approximately implement a Bayes filter when the stimulus dynamics are unknown. To do this we use the inferential properties of linear probabilistic population codes to compute Bayes' rule and train a neural network to compute approximate predictions by the method of maximum likelihood. In particular, we perform stochastic gradient descent on the negative log-likelihood of the neural network parameters with a novel approximation of the gradient. We demonstrate our methods on a finite-state, a linear, and a nonlinear filtering problem and show how the hidden layer of the neural network develops tuning curves consistent with findings in experimental neuroscience.
Affiliation(s)
- Sacha Sokoloski
- Max Planck Institute for Mathematics in the Sciences, Leipzig, 04103, Germany, and Albert Einstein College of Medicine, New York, NY 10461, USA.
50
Pitkow X, Angelaki DE. Inference in the Brain: Statistics Flowing in Redundant Population Codes. Neuron 2017; 94:943-953. [PMID: 28595050 PMCID: PMC5543692 DOI: 10.1016/j.neuron.2017.05.028] [Citation(s) in RCA: 47] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2016] [Revised: 05/10/2017] [Accepted: 05/19/2017] [Indexed: 12/25/2022]
Abstract
It is widely believed that the brain performs approximate probabilistic inference to estimate causal variables in the world from ambiguous sensory data. To understand these computations, we need to analyze how information is represented and transformed by the actions of nonlinear recurrent neural networks. We propose that these probabilistic computations function by a message-passing algorithm operating at the level of redundant neural populations. To explain this framework, we review its underlying concepts, including graphical models, sufficient statistics, and message-passing, and then describe how these concepts could be implemented by recurrently connected probabilistic population codes. The relevant information flow in these networks will be most interpretable at the population level, particularly for redundant neural codes. We therefore outline a general approach to identify the essential features of a neural message-passing algorithm. Finally, we argue that to reveal the most important aspects of these neural computations, we must study large-scale activity patterns during moderately complex, naturalistic behaviors.
Affiliation(s)
- Xaq Pitkow
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA.
- Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA.