1. Penaloza B, Shivkumar S, Lengyel G, DeAngelis GC, Haefner RM. Causal inference predicts the transition from integration to segmentation in motion perception. Sci Rep 2024; 14:27704. PMID: 39533022; PMCID: PMC11558006; DOI: 10.1038/s41598-024-78820-6.
Abstract
Motion provides a powerful sensory cue for segmenting a visual scene into objects and inferring the causal relationships between objects. Fundamental mechanisms involved in this process are the integration and segmentation of local motion signals. However, the computations that govern whether local motion signals are perceptually integrated or segmented remain unclear. Hierarchical Bayesian causal inference has recently been proposed as a model for these computations, yet a hallmark prediction of the model - its dependency on sensory uncertainty - has remained untested. We used a recently developed hierarchical stimulus configuration to measure how human subjects integrate or segment local motion signals while manipulating motion coherence to control sensory uncertainty. We found that (a) the perceptual transition from motion integration to segmentation shifts with sensory uncertainty, and (b) perceptual variability is maximal around this transition point. Both findings were predicted by the model and challenge conventional interpretations of motion repulsion effects.
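Editor's note: the core causal-inference computation can be illustrated in a few lines. The sketch below is a minimal two-cue model, not the paper's hierarchical configuration; the prior width `sigma_p`, the cue values, and the flat grid marginalization are all illustrative assumptions.

```python
import numpy as np

def gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def p_common(x1, x2, sigma, sigma_p=20.0, prior=0.5):
    """Posterior probability that two noisy motion cues share one cause."""
    s = np.linspace(-150.0, 150.0, 3001)     # grid over candidate source directions
    ds = s[1] - s[0]
    # C = 1: both cues generated by a single shared source s
    like1 = np.sum(gauss(x1, s, sigma) * gauss(x2, s, sigma) * gauss(s, 0.0, sigma_p)) * ds
    # C = 2: each cue generated by its own independent source
    marg = lambda x: np.sum(gauss(x, s, sigma) * gauss(s, 0.0, sigma_p)) * ds
    like2 = marg(x1) * marg(x2)
    return like1 * prior / (like1 * prior + like2 * (1 - prior))
```

Raising `sigma` (more sensory uncertainty) makes a given cue discrepancy more compatible with a common cause, so the integration-to-segmentation transition shifts toward larger discrepancies: this is the uncertainty dependence the study tests.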
Affiliation(s)
- Boris Penaloza
- Department of Brain and Cognitive Sciences and Center for Visual Science, University of Rochester, Rochester, NY, USA.
- Department of Psychology, Northeastern University, Boston, MA, USA.
- Sabyasachi Shivkumar
- Department of Brain and Cognitive Sciences and Center for Visual Science, University of Rochester, Rochester, NY, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Gabor Lengyel
- Department of Brain and Cognitive Sciences and Center for Visual Science, University of Rochester, Rochester, NY, USA
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences and Center for Visual Science, University of Rochester, Rochester, NY, USA
- Ralf M Haefner
- Department of Brain and Cognitive Sciences and Center for Visual Science, University of Rochester, Rochester, NY, USA
2. Xia J, Jasper A, Kohn A, Miller KD. Circuit-motivated generalized affine models characterize stimulus-dependent visual cortical shared variability. iScience 2024; 27:110512. PMID: 39156642; PMCID: PMC11328009; DOI: 10.1016/j.isci.2024.110512.
Abstract
Correlated variability in the visual cortex is modulated by stimulus properties. The stimulus dependence of correlated variability impacts stimulus coding and is indicative of circuit structure. An affine model combining a multiplicative factor and an additive offset has been proposed to explain how correlated variability in primary visual cortex (V1) depends on stimulus orientations. However, whether the affine model could be extended to explain modulations by other stimulus variables or variability shared between two brain areas is unknown. Motivated by a simple neural circuit mechanism, we modified the affine model to better explain the contrast dependence of neural variability shared within either primary or secondary visual cortex (V1 or V2) as well as the orientation dependence of neural variability shared between V1 and V2. Our results bridge neural circuit mechanisms and statistical models and provide a parsimonious explanation for the stimulus dependence of correlated variability within and between visual areas.
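Editor's note: a toy simulation shows how an affine model makes covariability stimulus-dependent. The numbers below are illustrative, not the paper's fitted model: shared multiplicative and additive fluctuations yield pairwise covariance f_i f_j Var(g) + Var(h) on top of private noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200_000
f = np.array([2.0, 5.0, 8.0])            # mean stimulus-driven responses of 3 neurons
sig_g, sig_h = 0.2, 0.5                  # SDs of shared gain and offset fluctuations

g = 1.0 + sig_g * rng.standard_normal(n_trials)    # shared multiplicative factor
h = sig_h * rng.standard_normal(n_trials)          # shared additive offset
private = rng.standard_normal((n_trials, 3))       # independent per-neuron noise
r = g[:, None] * f + h[:, None] + private

C = np.cov(r, rowvar=False)
# affine prediction for the covariance between neurons i and j (i != j):
pred = lambda i, j: f[i] * f[j] * sig_g**2 + sig_h**2
```

Because g and h are shared across the population, covariance between two neurons grows with the product of their mean responses, so changing the stimulus (which changes f) changes the correlation structure.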
Affiliation(s)
- Ji Xia
- Center for Theoretical Neuroscience and Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Anna Jasper
- Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Adam Kohn
- Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY, USA
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, NY, USA
- Kenneth D. Miller
- Center for Theoretical Neuroscience and Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Department of Neuroscience, Swartz Program in Theoretical Neuroscience, Kavli Institute for Brain Science, College of Physicians and Surgeons and Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York City, NY 10027, USA
3. Malkin J, O'Donnell C, Houghton CJ, Aitchison L. Signatures of Bayesian inference emerge from energy-efficient synapses. eLife 2024; 12:RP92595. PMID: 39106188; PMCID: PMC11302983; DOI: 10.7554/eLife.92595.
Abstract
Biological synaptic transmission is unreliable, and this unreliability likely degrades neural circuit performance. While there are biophysical mechanisms that can increase reliability, for instance by increasing vesicle release probability, these mechanisms cost energy. We examined four such mechanisms along with the associated scaling of the energetic costs. We then embedded these energetic costs for reliability in artificial neural networks (ANNs) with trainable stochastic synapses, and trained these networks on standard image classification tasks. The resulting networks revealed a tradeoff between circuit performance and the energetic cost of synaptic reliability. Additionally, the optimised networks exhibited two testable predictions consistent with pre-existing experimental data. Specifically, synapses with lower variability tended to have (1) higher input firing rates and (2) lower learning rates. Surprisingly, these predictions also arise when synapse statistics are inferred through Bayesian inference. Indeed, we were able to find a formal, theoretical link between the performance-reliability cost tradeoff and Bayesian inference. This connection suggests two incompatible possibilities: evolution may have chanced upon a scheme for implementing Bayesian inference by optimising energy efficiency, or alternatively, energy-efficient synapses may display signatures of Bayesian inference without actually using Bayes to reason about uncertainty.
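Editor's note: the performance-reliability tradeoff can be captured in a toy objective. The quadratic penalty and 1/sigma energy cost below are illustrative stand-ins for the paper's trained stochastic networks, but they reproduce prediction (1): synapses with higher input rates, where noise hurts more, get lower optimal variability.

```python
import numpy as np

def optimal_sigma(input_rate, energy_coef=1.0):
    # Toy objective: J(sigma) = input_rate * sigma**2   (performance penalty)
    #                         + energy_coef / sigma     (energy cost of reliability)
    # Setting dJ/dsigma = 0 gives sigma* = (energy_coef / (2 * input_rate))**(1/3).
    return (energy_coef / (2.0 * input_rate)) ** (1.0 / 3.0)

# numerical check of the closed form for input_rate = 4
sigmas = np.linspace(0.01, 5.0, 5000)
J = 4.0 * sigmas**2 + 1.0 / sigmas
sigma_numeric = sigmas[np.argmin(J)]
```

The cube-root form follows directly from balancing a cost that rises with variance against one that falls with it; the paper's point is that the same qualitative tradeoff also emerges from Bayesian inference over synaptic weights.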
Affiliation(s)
- James Malkin
- Faculty of Engineering, University of Bristol, Bristol, United Kingdom
- Cian O'Donnell
- Faculty of Engineering, University of Bristol, Bristol, United Kingdom
- Intelligent Systems Research Centre, School of Computing, Engineering, and Intelligent Systems, Ulster University, Derry/Londonderry, United Kingdom
- Conor J Houghton
- Faculty of Engineering, University of Bristol, Bristol, United Kingdom
4. Goris RLT, Coen-Cagli R, Miller KD, Priebe NJ, Lengyel M. Response sub-additivity and variability quenching in visual cortex. Nat Rev Neurosci 2024; 25:237-252. PMID: 38374462; PMCID: PMC11444047; DOI: 10.1038/s41583-024-00795-0.
Abstract
Sub-additivity and variability are ubiquitous response motifs in the primary visual cortex (V1). Response sub-additivity enables the construction of useful interpretations of the visual environment, whereas response variability indicates the factors that limit the precision with which the brain can do this. There is increasing evidence that experimental manipulations that elicit response sub-additivity often also quench response variability. Here, we provide an overview of these phenomena and suggest that they may have common origins. We discuss empirical findings and recent model-based insights into the functional operations, computational objectives and circuit mechanisms underlying V1 activity. These different modelling approaches all predict that response sub-additivity and variability quenching often co-occur. The phenomenology of these two response motifs, as well as many of the insights obtained about them in V1, generalize to other cortical areas. Thus, the connection between response sub-additivity and variability quenching may be a canonical motif across the cortex.
Affiliation(s)
- Robbe L T Goris
- Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA.
- Ruben Coen-Cagli
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, NY, USA
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY, USA
- Kenneth D Miller
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA
- Dept. of Neuroscience, College of Physicians and Surgeons, Columbia University, New York, NY, USA
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Swartz Program in Theoretical Neuroscience, Columbia University, New York, NY, USA
- Nicholas J Priebe
- Center for Learning and Memory, University of Texas at Austin, Austin, TX, USA
- Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK
- Center for Cognitive Computation, Department of Cognitive Science, Central European University, Budapest, Hungary
5. Peters B, DiCarlo JJ, Gureckis T, Haefner R, Isik L, Tenenbaum J, Konkle T, Naselaris T, Stachenfeld K, Tavares Z, Tsao D, Yildirim I, Kriegeskorte N. How does the primate brain combine generative and discriminative computations in vision? arXiv preprint 2024; arXiv:2401.06005v1. PMID: 38259351; PMCID: PMC10802669.
Abstract
Vision is widely understood as an inference problem. However, two contrasting conceptions of the inference process have each been influential in research on biological vision as well as the engineering of machine vision. The first emphasizes bottom-up signal flow, describing vision as a largely feedforward, discriminative inference process that filters and transforms the visual information to remove irrelevant variation and represent behaviorally relevant information in a format suitable for downstream functions of cognition and behavioral control. In this conception, vision is driven by the sensory data, and perception is direct because the processing proceeds from the data to the latent variables of interest. The notion of "inference" in this conception is that of the engineering literature on neural networks, where feedforward convolutional neural networks processing images are said to perform inference. The alternative conception is that of vision as an inference process in Helmholtz's sense, where the sensory evidence is evaluated in the context of a generative model of the causal processes that give rise to it. In this conception, vision inverts a generative model through an interrogation of the sensory evidence in a process often thought to involve top-down predictions of sensory data to evaluate the likelihood of alternative hypotheses. The authors include scientists rooted in roughly equal numbers in each of the conceptions and motivated to overcome what might be a false dichotomy between them and engage the other perspective in the realm of theory and experiment. The primate brain employs an unknown algorithm that may combine the advantages of both conceptions. We explain and clarify the terminology, review the key empirical evidence, and propose an empirical research program that transcends the dichotomy and sets the stage for revealing the mysterious hybrid algorithm of primate vision.
Affiliation(s)
- Benjamin Peters
- Zuckerman Mind Brain Behavior Institute, Columbia University
- School of Psychology & Neuroscience, University of Glasgow
- James J DiCarlo
- Department of Brain and Cognitive Sciences, MIT
- McGovern Institute for Brain Research, MIT
- NSF Center for Brains, Minds and Machines, MIT
- Quest for Intelligence, Schwarzman College of Computing, MIT
- Ralf Haefner
- Brain and Cognitive Sciences, University of Rochester
- Center for Visual Science, University of Rochester
- Leyla Isik
- Department of Cognitive Science, Johns Hopkins University
- Joshua Tenenbaum
- Department of Brain and Cognitive Sciences, MIT
- NSF Center for Brains, Minds and Machines, MIT
- Computer Science and Artificial Intelligence Laboratory, MIT
- Talia Konkle
- Department of Psychology, Harvard University
- Center for Brain Science, Harvard University
- Kempner Institute for Natural and Artificial Intelligence, Harvard University
- Zenna Tavares
- Zuckerman Mind Brain Behavior Institute, Columbia University
- Data Science Institute, Columbia University
- Doris Tsao
- Dept of Molecular & Cell Biology, University of California Berkeley
- Howard Hughes Medical Institute
- Ilker Yildirim
- Department of Psychology, Yale University
- Department of Statistics and Data Science, Yale University
- Nikolaus Kriegeskorte
- Zuckerman Mind Brain Behavior Institute, Columbia University
- Department of Psychology, Columbia University
- Department of Neuroscience, Columbia University
- Department of Electrical Engineering, Columbia University
6. Lange RD, Shivkumar S, Chattoraj A, Haefner RM. Bayesian encoding and decoding as distinct perspectives on neural coding. Nat Neurosci 2023; 26:2063-2072. PMID: 37996525; PMCID: PMC11003438; DOI: 10.1038/s41593-023-01458-6.
Abstract
The Bayesian brain hypothesis is one of the most influential ideas in neuroscience. However, unstated differences in how Bayesian ideas are operationalized make it difficult to draw general conclusions about how Bayesian computations map onto neural circuits. Here, we identify one such unstated difference: some theories ask how neural circuits could recover information about the world from sensory neural activity (Bayesian decoding), whereas others ask how neural circuits could implement inference in an internal model (Bayesian encoding). These two approaches require profoundly different assumptions and lead to different interpretations of empirical data. We contrast them in terms of motivations, empirical support and relationship to neural data. We also use a simple model to argue that encoding and decoding models are complementary rather than competing. Appreciating the distinction between Bayesian encoding and Bayesian decoding will help to organize future work and enable stronger empirical tests about the nature of inference in the brain.
Affiliation(s)
- Richard D Lange
- Department of Neurobiology, University of Pennsylvania, Philadelphia, PA, USA.
- Department of Computer Science, Rochester Institute of Technology, Rochester, NY, USA.
- Sabyasachi Shivkumar
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Ankani Chattoraj
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Ralf M Haefner
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
7. Zhang WH, Wu S, Josić K, Doiron B. Sampling-based Bayesian inference in recurrent circuits of stochastic spiking neurons. Nat Commun 2023; 14:7074. PMID: 37925497; PMCID: PMC10625605; DOI: 10.1038/s41467-023-41743-3.
Abstract
Two facts about cortex are widely accepted: neuronal responses show large spiking variability with near Poisson statistics and cortical circuits feature abundant recurrent connections between neurons. How these spiking and circuit properties combine to support sensory representation and information processing is not well understood. We build a theoretical framework showing that these two ubiquitous features of cortex combine to produce optimal sampling-based Bayesian inference. Recurrent connections store an internal model of the external world, and Poissonian variability of spike responses drives flexible sampling from the posterior stimulus distributions obtained by combining feedforward and recurrent neuronal inputs. We illustrate how this framework for sampling-based inference can be used by cortex to represent latent multivariate stimuli organized either hierarchically or in parallel. A neural signature of such network sampling is internally generated differential correlations whose amplitude is determined by the prior stored in the circuit, which provides an experimentally testable prediction for our framework.
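Editor's note: the sampling idea in miniature. This is a Langevin sketch with a known Gaussian posterior, not the paper's recurrent spiking circuit: noisy dynamics whose drift follows the log-posterior gradient leave the state distributed as the posterior, so the time series of activity itself represents the distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, var = 2.0, 0.5            # target posterior N(mu, var) over a latent stimulus
dt, n_steps = 0.01, 100_000

x = np.zeros(n_steps)
for t in range(1, n_steps):
    drift = -(x[t - 1] - mu) / var                        # d/dx log posterior
    x[t] = x[t - 1] + dt * drift + np.sqrt(2 * dt) * rng.standard_normal()

samples = x[n_steps // 2:]    # discard burn-in; remaining states sample the posterior
```

The empirical mean and variance of the trajectory converge to the posterior's, which is the sense in which circuit variability can be computation rather than nuisance.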
Affiliation(s)
- Wen-Hao Zhang
- Department of Neurobiology and Statistics, University of Chicago, Chicago, IL, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Lyda Hill Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX, USA
- Si Wu
- School of Psychological and Cognitive Sciences, Peking University, Beijing, 100871, China
- IDG/McGovern Institute for Brain Research, Peking University, Beijing, 100871, China
- Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, 100871, China
- Center of Quantitative Biology, Peking University, Beijing, 100871, China
- Krešimir Josić
- Department of Mathematics, University of Houston, Houston, TX, USA.
- Department of Biology and Biochemistry, University of Houston, Houston, TX, USA.
- Brent Doiron
- Department of Neurobiology and Statistics, University of Chicago, Chicago, IL, USA.
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA.
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA.
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA.
8. Weiss O, Bounds HA, Adesnik H, Coen-Cagli R. Modeling the diverse effects of divisive normalization on noise correlations. PLoS Comput Biol 2023; 19:e1011667. PMID: 38033166; PMCID: PMC10715670; DOI: 10.1371/journal.pcbi.1011667.
Abstract
Divisive normalization, a prominent descriptive model of neural activity, is employed by theories of neural coding across many different brain areas. Yet, the relationship between normalization and the statistics of neural responses beyond single neurons remains largely unexplored. Here we focus on noise correlations, a widely studied pairwise statistic, because its stimulus and state dependence plays a central role in neural coding. Existing models of covariability typically ignore normalization despite empirical evidence suggesting it affects correlation structure in neural populations. We therefore propose a pairwise stochastic divisive normalization model that accounts for the effects of normalization and other factors on covariability. We first show that normalization modulates noise correlations in qualitatively different ways depending on whether normalization is shared between neurons, and we discuss how to infer when normalization signals are shared. We then apply our model to calcium imaging data from mouse primary visual cortex (V1), and find that it accurately fits the data, often outperforming a popular alternative model of correlations. Our analysis indicates that normalization signals are often shared between V1 neurons in this dataset. Our model will enable quantifying the relation between normalization and covariability in a broad range of neural systems, which could provide new constraints on circuit mechanisms of normalization and their role in information transmission and representation.
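Editor's note: a minimal sketch of the shared-versus-private normalization distinction. The drives, pool means, and noise levels are illustrative, not the paper's fitted stochastic model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
drive = 10.0 + rng.standard_normal((n, 2))            # independent feedforward drives
shared_pool = 5.0 + 0.5 * rng.standard_normal(n)      # one normalization signal for both neurons
private_pool = 5.0 + 0.5 * rng.standard_normal((n, 2))  # a separate pool per neuron

r_shared = drive / shared_pool[:, None]
r_private = drive / private_pool

c_shared = np.corrcoef(r_shared, rowvar=False)[0, 1]
c_private = np.corrcoef(r_private, rowvar=False)[0, 1]
```

A shared fluctuating denominator pushes both responses up and down together, inducing positive noise correlations that vanish when each neuron has its own pool; this qualitative difference is one handle for inferring from data whether normalization signals are shared.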
Affiliation(s)
- Oren Weiss
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, New York, United States of America
- Hayley A. Bounds
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California, United States of America
- Hillel Adesnik
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California, United States of America
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, California, United States of America
- Ruben Coen-Cagli
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, New York, United States of America
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America
- Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, New York, United States of America
9. Efficient coding theory of dynamic attentional modulation. PLoS Biol 2022; 20:e3001889. PMID: 36542662; PMCID: PMC9831638; DOI: 10.1371/journal.pbio.3001889.
Abstract
Activity of sensory neurons is driven not only by external stimuli but also by feedback signals from higher brain areas. Attention is one particularly important internal signal whose presumed role is to modulate sensory representations such that they only encode information currently relevant to the organism at minimal cost. This hypothesis has, however, not yet been expressed in a normative computational framework. Here, by building on normative principles of probabilistic inference and efficient coding, we developed a model of dynamic population coding in the visual cortex. By continuously adapting the sensory code to changing demands of the perceptual observer, an attention-like modulation emerges. This modulation can dramatically reduce the amount of neural activity without deteriorating the accuracy of task-specific inferences. Our results suggest that a range of seemingly disparate cortical phenomena such as intrinsic gain modulation, attention-related tuning modulation, and response variability could be manifestations of the same underlying principles, which combine efficient sensory coding with optimal probabilistic inference in dynamic environments.
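Editor's note: a toy linear-Gaussian illustration of the activity-versus-accuracy claim. The tuning weights and rates below are made up: gating off neurons that carry no task information cuts total activity without reducing Fisher information about the relevant feature.

```python
import numpy as np

w_rel = np.array([1.0, 0.8, 0.0, 0.0])    # sensitivity to the task-relevant feature
rates = np.array([5.0, 5.0, 5.0, 5.0])    # baseline firing rates
noise_var = 1.0

def fisher(gains):
    # linear-Gaussian Fisher information about the task-relevant feature
    return np.sum((gains * w_rel) ** 2) / noise_var

def total_rate(gains):
    return np.sum(gains * rates)

g_uniform = np.ones(4)                        # no attentional modulation
g_attend = np.array([1.0, 1.0, 0.0, 0.0])     # suppress task-irrelevant neurons
```

In this caricature the optimal modulation is all-or-none; the paper's dynamic observer model instead adapts gains continuously as task demands change, but the same principle, spending activity only where it buys task-relevant information, drives the attention-like modulation that emerges.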