1
Kappel D, Tetzlaff C. Synapses learn to utilize stochastic pre-synaptic release for the prediction of postsynaptic dynamics. PLoS Comput Biol 2024; 20:e1012531. PMID: 39495714. DOI: 10.1371/journal.pcbi.1012531.
Abstract
Synapses in the brain are highly noisy, which leads to large trial-by-trial variability. Given how costly synapses are in terms of energy consumption, these high levels of noise are surprising. Here we propose that synapses use noise to represent uncertainty about the somatic activity of the postsynaptic neuron. To show this, we developed a mathematical framework in which the synapse as a whole interacts with the soma of the postsynaptic neuron much like an agent situated in an uncertain, dynamic environment. This framework suggests that synapses use an implicit internal model of the somatic membrane dynamics that is updated by a synaptic learning rule resembling experimentally well-established LTP/LTD mechanisms. In addition, this approach entails that a synapse utilizes its inherently noisy release to also encode its uncertainty about the state of the somatic potential. Although each synapse strives to predict the somatic dynamics of its postsynaptic neuron, we show that the emergent dynamics of many synapses in a neuronal network resolve different learning problems such as pattern classification or closed-loop control in a dynamic environment. Thereby, synapses coordinate themselves to represent and utilize uncertainties at the network level in behaviorally ambiguous situations.
Affiliation(s)
- David Kappel
- III. Physikalisches Institut - Biophysik, Georg-August Universität, Göttingen, Germany
- Institut für Neuroinformatik, Ruhr-Universität Bochum, Bochum, Germany
- Christian Tetzlaff
- III. Physikalisches Institut - Biophysik, Georg-August Universität, Göttingen, Germany
- Group of Computational Synaptic Physiology, Department for Neuro- and Sensory Physiology, University Medical Center Göttingen, Göttingen, Germany
2
Safavi S, Chalk M, Logothetis NK, Levina A. Signatures of criticality in efficient coding networks. Proc Natl Acad Sci U S A 2024; 121:e2302730121. PMID: 39352933. PMCID: PMC11474077. DOI: 10.1073/pnas.2302730121.
Abstract
The critical brain hypothesis states that the brain can benefit from operating close to a second-order phase transition. While it has been shown that several computational aspects of sensory processing (e.g., sensitivity to input) can be optimal in this regime, it is still unclear whether these computational benefits of criticality can be leveraged by neural systems performing behaviorally relevant computations. To address this question, we investigate signatures of criticality in networks optimized to perform efficient coding. We consider a spike-coding network of leaky integrate-and-fire neurons with synaptic transmission delays. Previously, it was shown that the performance of such networks varies nonmonotonically with the noise amplitude. Interestingly, we find that in the vicinity of the optimal noise level for efficient coding, the network dynamics exhibit some signatures of criticality, namely, scale-free dynamics of the spiking and the presence of a crackling-noise relation. Our work suggests that two influential, and previously disparate, theories of neural processing optimization (efficient coding and criticality) may be intimately related.
Affiliation(s)
- Shervin Safavi
- Computational Neuroscience, Department of Child and Adolescent Psychiatry, Faculty of Medicine, Technische Universität Dresden, Dresden 01307, Germany
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany
- Matthew Chalk
- Institut de la Vision, INSERM, CNRS, Sorbonne Université, Paris 75014, France
- Nikos K. Logothetis
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany
- International Center for Primate Brain Research, Shanghai 201602, China
- Anna Levina
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany
- Department of Computer Science, University of Tübingen, Tübingen 72076, Germany
- Bernstein Center for Computational Neuroscience Tübingen, Tübingen 72076, Germany
3
Martin AE, Guevara Beltran D, Koster J, Tracy JL. Is gender primacy universal? Proc Natl Acad Sci U S A 2024; 121:e2401919121. PMID: 39159369. PMCID: PMC11363275. DOI: 10.1073/pnas.2401919121.
Abstract
Emerging evidence suggests that gender is a defining feature of personhood. Studies show that gender is the primary social category individuals use to perceive humanness and the social category most strongly related to seeing someone, or something, as human. However, the universality of gender's primacy in social perception and its precedence over other social categories like race and age have been debated. We examined the primacy of gender perception in the Mayangna community of Nicaragua, a population with minimal exposure to Western influences, to test whether the primacy of gender categorization in humanization is more likely to be a culturally specific construct or a cross-cultural and potentially universal phenomenon. Consistent with findings from North American populations [A. E. Martin, M. F. Mason, J. Pers. Soc. Psychol. 123, 292-315 (2022)], the Mayangna ascribed gender to nonhuman objects more strongly than any other social category, including age, race, sexual orientation, disability, and religion, and gender was the only social category that uniquely predicted perceived humanness (i.e., the extent to which a nonhuman entity was seen as "human"). This pattern persisted even in the most isolated subgroup of the sample, who had no exposure to Western culture or media. The present results thus suggest that gender's primacy in social cognition is a widely generalizable, and potentially universal, phenomenon.
Affiliation(s)
- Ashley E. Martin
- Graduate School of Business, Stanford University, Stanford, CA 94305
- Jeremy Koster
- Department of Human Behavior, Ecology, and Culture, Max Planck Institute for Evolutionary Anthropology, Leipzig 04103, Germany
- Jessica L. Tracy
- Department of Psychology, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
4
Yamane Y. Adaptation of the inferior temporal neurons and efficient visual processing. Front Behav Neurosci 2024; 18:1398874. PMID: 39132448. PMCID: PMC11310006. DOI: 10.3389/fnbeh.2024.1398874.
Abstract
Numerous studies examining the responses of individual neurons in the inferior temporal (IT) cortex have revealed characteristics such as tuning for two- or three-dimensional shape and selectivity for objects or categories. While these basic selectivities have been studied under the assumption that responses to stimuli are relatively stable, physiological experiments have revealed that the responsiveness of IT neurons also depends on visual experience. The activity changes of IT neurons occur over various time ranges; among these, repetition suppression (RS), in particular, is robustly observed in IT neurons without any behavioral or task constraints. I observed a similar phenomenon in ventral visual neurons in macaque monkeys while they engaged in free viewing and actively fixated on one consistent object multiple times. This observation indicates that the phenomenon also occurs in natural situations, during which the subject actively views stimuli without forced fixation, suggesting that it is an everyday occurrence, widespread across regions of the visual system, and thus a default process for visual neurons. Such short-term activity modulation may be a key to understanding the visual system; however, the circuit mechanism and the biological significance of RS remain unclear. Thus, in this review, I summarize the observed modulation types in IT neurons and the known properties of RS. Subsequently, I discuss adaptation in vision, including concepts such as efficient and predictive coding, as well as the relationship between adaptation and psychophysical aftereffects. Finally, I discuss some conceptual implications of this phenomenon as well as the circuit mechanisms and the models that may explain adaptation as a fundamental aspect of visual processing.
Affiliation(s)
- Yukako Yamane
- Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
5
Tashjian SM, Cussen J, Deng W, Zhang B, Mobbs D. Adaptive Safety Coding in the Prefrontal Cortex. bioRxiv [Preprint] 2024:2024.07.19.604228. PMID: 39091862. PMCID: PMC11291074. DOI: 10.1101/2024.07.19.604228.
Abstract
Pivotal to self-preservation is the ability to identify when we are safe and when we are in danger. Previous studies have focused on safety estimations based on the features of external threats and do not consider how the brain integrates other key factors, including estimates of our ability to protect ourselves. Here we examine the neural systems underlying the online dynamic encoding of safety. The current preregistered study used two novel tasks to test four facets of safety estimation: Safety Prediction, Meta-representation, Recognition, and Value Updating. We experimentally manipulated safety estimation, changing both the level of external threat and of self-protection. Data were collected in two independent samples (behavioral N=100; fMRI N=30). We found consistent evidence of subjective changes in the sensitivity to safety conferred through protection. Neural responses in the ventromedial prefrontal cortex (vmPFC) tracked increases in safety during all safety estimation facets, with specific tuning to protection. Further, informational connectivity analyses revealed distinct hubs of safety coding in the posterior and anterior vmPFC for external threats and protection, respectively. These findings reveal a central role of the vmPFC in coding safety.
Affiliation(s)
- Sarah M. Tashjian
- School of Psychological Sciences, University of Melbourne, Parkville, VIC 3052, Australia
- Humanities and Social Sciences, California Institute of Technology, Pasadena, CA 91125, USA
- Joseph Cussen
- School of Psychological Sciences, University of Melbourne, Parkville, VIC 3052, Australia
- Wenning Deng
- Humanities and Social Sciences, California Institute of Technology, Pasadena, CA 91125, USA
- Bo Zhang
- Humanities and Social Sciences, California Institute of Technology, Pasadena, CA 91125, USA
- Dean Mobbs
- Humanities and Social Sciences, California Institute of Technology, Pasadena, CA 91125, USA
- Computation and Neural Systems, California Institute of Technology, Pasadena, CA 91125, USA
6
Moore JJ, Genkin A, Tournoy M, Pughe-Sanford JL, de Ruyter van Steveninck RR, Chklovskii DB. The neuron as a direct data-driven controller. Proc Natl Acad Sci U S A 2024; 121:e2311893121. PMID: 38913890. PMCID: PMC11228465. DOI: 10.1073/pnas.2311893121.
Abstract
In the quest to model neuronal function amid gaps in physiological data, a promising strategy is to develop a normative theory that interprets neuronal physiology as optimizing a computational objective. This study extends current normative models, which primarily optimize prediction, by conceptualizing neurons as optimal feedback controllers. We posit that neurons, especially those beyond early sensory areas, steer their environment toward a specific desired state through their output. This environment comprises both synaptically interlinked neurons and external motor-sensory feedback loops, enabling neurons to evaluate the effectiveness of their control via synaptic feedback. To model neurons as biologically feasible controllers that implicitly identify loop dynamics, infer latent states, and optimize control, we utilize the contemporary direct data-driven control (DD-DC) framework. Our DD-DC neuron model explains various neurophysiological phenomena: the shift from potentiation to depression in spike-timing-dependent plasticity, along with its asymmetry; the duration and adaptive nature of feedforward and feedback neuronal filters; the imprecision in spike generation under constant stimulation; and the characteristic operational variability and noise in the brain. Our model presents a significant departure from the traditional, feedforward, instant-response McCulloch-Pitts-Rosenblatt neuron, offering a modern, biologically informed fundamental unit for constructing neural networks.
Affiliation(s)
- Jason J Moore
- Neuroscience Institute, New York University Grossman School of Medicine, New York City, NY 10016
- Center for Computational Neuroscience, Flatiron Institute, New York City, NY 10010
- Alexander Genkin
- Center for Computational Neuroscience, Flatiron Institute, New York City, NY 10010
- Magnus Tournoy
- Center for Computational Neuroscience, Flatiron Institute, New York City, NY 10010
- Dmitri B Chklovskii
- Neuroscience Institute, New York University Grossman School of Medicine, New York City, NY 10016
- Center for Computational Neuroscience, Flatiron Institute, New York City, NY 10010
7
Matteucci G, Piasini E, Zoccolan D. Unsupervised learning of mid-level visual representations. Curr Opin Neurobiol 2024; 84:102834. PMID: 38154417. DOI: 10.1016/j.conb.2023.102834.
Abstract
Recently, a confluence between trends in neuroscience and machine learning has brought a renewed focus on unsupervised learning, where sensory processing systems learn to exploit the statistical structure of their inputs in the absence of explicit training targets or rewards. Sophisticated experimental approaches have enabled the investigation of the influence of sensory experience on neural self-organization and its synaptic bases. Meanwhile, novel algorithms for unsupervised and self-supervised learning have become increasingly popular both as inspiration for theories of the brain, particularly for the function of intermediate visual cortical areas, and as building blocks of real-world learning machines. Here we review some of these recent developments, placing them in historical context and highlighting some research lines that promise exciting breakthroughs in the near future.
Affiliation(s)
- Giulio Matteucci
- Department of Basic Neurosciences, University of Geneva, Geneva, 1206, Switzerland
- Eugenio Piasini
- International School for Advanced Studies (SISSA), Trieste, 34136, Italy
- Davide Zoccolan
- International School for Advanced Studies (SISSA), Trieste, 34136, Italy
8
Purcell J, Wiley R, Won J, Callow D, Weiss L, Alfini A, Wei Y, Carson Smith J. Increased neural differentiation after a single session of aerobic exercise in older adults. Neurobiol Aging 2023; 132:67-84. PMID: 37742442. DOI: 10.1016/j.neurobiolaging.2023.08.008.
Abstract
Aging is associated with decreased cognitive function. One theory posits that this decline is in part due to multiple neural systems becoming dedifferentiated in older adults. Exercise is known to improve cognition in older adults, even after only a single session. We hypothesized that one mechanism of improvement is a redifferentiation of neural systems. We used a within-participant, cross-over design involving two sessions: either 30 minutes of aerobic exercise or 30 minutes of seated rest (n = 32; ages 55-81 years). Both functional magnetic resonance imaging (fMRI) data and Stroop performance were acquired soon after exercise and rest. We quantified neural differentiation via general heterogeneity regression. There were three prominent findings following exercise. First, participants were better at reducing Stroop interference. Second, while there was greater neural differentiation within the hippocampal formation and cerebellum, there was lower neural differentiation within frontal cortices. Third, the greater neural differentiation in the cerebellum and temporal lobe was more pronounced at older ages. These data suggest that exercise can induce greater neural differentiation in healthy aging.
Affiliation(s)
- Jeremy Purcell
- Department of Kinesiology, University of Maryland, College Park, MD, USA; Maryland Neuroimaging Center, University of Maryland, College Park, MD, USA.
- Robert Wiley
- Department of Psychology, University of North Carolina at Greensboro, Greensboro, NC, USA
- Junyeon Won
- Department of Kinesiology, University of Maryland, College Park, MD, USA; Institute for Exercise and Environmental Medicine, Texas Health Presbyterian Dallas, Dallas, TX, USA
- Daniel Callow
- Department of Kinesiology, University of Maryland, College Park, MD, USA; Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, USA
- Lauren Weiss
- Department of Kinesiology, University of Maryland, College Park, MD, USA; Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, USA
- Alfonso Alfini
- National Center on Sleep Disorders Research, Division of Lung Diseases, National Heart, Lung, and Blood Institute, Bethesda, MD, USA
- Yi Wei
- Maryland Neuroimaging Center, University of Maryland, College Park, MD, USA
- J Carson Smith
- Department of Kinesiology, University of Maryland, College Park, MD, USA; Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, USA.
9
Makarov R, Pagkalos M, Poirazi P. Dendrites and efficiency: Optimizing performance and resource utilization. Curr Opin Neurobiol 2023; 83:102812. PMID: 37980803. DOI: 10.1016/j.conb.2023.102812.
Abstract
The brain is a highly efficient system that has evolved to optimize performance under limited resources. In this review, we highlight recent theoretical and experimental studies that support the view that dendrites make information processing and storage in the brain more efficient. This is achieved through the dynamic modulation of integration versus segregation of inputs and activity within a neuron. We argue that under conditions of limited energy and space, dendrites help biological networks to implement complex functions such as processing natural stimuli on behavioral timescales, performing the inference process on those stimuli in a context-specific manner, and storing the information in overlapping populations of neurons. A global picture starts to emerge, in which dendrites help the brain achieve efficiency through a combination of optimization strategies that balance the tradeoff between performance and resource utilization.
Affiliation(s)
- Roman Makarov
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology Hellas (FORTH), Heraklion, 70013, Greece; Department of Biology, University of Crete, Heraklion, 70013, Greece
- Michalis Pagkalos
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology Hellas (FORTH), Heraklion, 70013, Greece; Department of Biology, University of Crete, Heraklion, 70013, Greece
- Panayiota Poirazi
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology Hellas (FORTH), Heraklion, 70013, Greece
10
Nardin M, Csicsvari J, Tkačik G, Savin C. The Structure of Hippocampal CA1 Interactions Optimizes Spatial Coding across Experience. J Neurosci 2023; 43:8140-8156. PMID: 37758476. PMCID: PMC10697404. DOI: 10.1523/jneurosci.0194-23.2023.
Abstract
Although much is known about how single neurons in the hippocampus represent an animal's position, how circuit interactions contribute to spatial coding is less well understood. Using a novel statistical estimator and theoretical modeling, both developed in the framework of maximum entropy models, we reveal highly structured CA1 cell-cell interactions in male rats during open field exploration. The statistics of these interactions depend on whether the animal is in a familiar or novel environment. In both conditions the circuit interactions optimize the encoding of spatial information, but for regimes that differ in the informativeness of their spatial inputs. This structure facilitates linear decodability, making the information easy to read out by downstream circuits. Overall, our findings suggest that the efficient coding hypothesis is not only applicable to individual neuron properties in the sensory periphery, but also to neural interactions in the central brain.

SIGNIFICANCE STATEMENT: Local circuit interactions play a key role in neural computation and are dynamically shaped by experience. However, measuring and assessing their effects during behavior remains a challenge. Here, we combine techniques from statistical physics and machine learning to develop new tools for determining the effects of local network interactions on neural population activity. This approach reveals highly structured local interactions between hippocampal neurons, which make the neural code more precise and easier to read out by downstream circuits, across different levels of experience. More generally, the novel combination of theory and data analysis in the framework of maximum entropy models enables traditional neural coding questions to be asked in naturalistic settings.
Affiliation(s)
- Michele Nardin
- Institute of Science and Technology Austria, Klosterneuburg AT-3400, Austria
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia 20147
- Jozsef Csicsvari
- Institute of Science and Technology Austria, Klosterneuburg AT-3400, Austria
- Gašper Tkačik
- Institute of Science and Technology Austria, Klosterneuburg AT-3400, Austria
- Cristina Savin
- Center for Neural Science, New York University, New York, New York 10003
- Center for Data Science, New York University, New York, New York 10011
11
Singer Y, Taylor L, Willmore BDB, King AJ, Harper NS. Hierarchical temporal prediction captures motion processing along the visual pathway. eLife 2023; 12:e52599. PMID: 37844199. PMCID: PMC10629830. DOI: 10.7554/elife.52599.
Abstract
Visual neurons respond selectively to features that become increasingly complex from the eyes to the cortex. Retinal neurons prefer flashing spots of light, primary visual cortical (V1) neurons prefer moving bars, and those in higher cortical areas favor complex features like moving textures. Previously, we showed that V1 simple cell tuning can be accounted for by a basic model implementing temporal prediction: representing features that predict future sensory input from past input (Singer et al., 2018). Here, we show that hierarchical application of temporal prediction can capture how tuning properties change across at least two levels of the visual system. This suggests that the brain does not efficiently represent all incoming information; instead, it selectively represents sensory inputs that help in predicting the future. When applied hierarchically, temporal prediction extracts time-varying features that depend on increasingly high-level statistics of the sensory input.
Affiliation(s)
- Yosef Singer
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Luke Taylor
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Ben DB Willmore
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Nicol S Harper
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
12
Tjalma AJ, Galstyan V, Goedhart J, Slim L, Becker NB, ten Wolde PR. Trade-offs between cost and information in cellular prediction. Proc Natl Acad Sci U S A 2023; 120:e2303078120. PMID: 37792515. PMCID: PMC10576116. DOI: 10.1073/pnas.2303078120.
Abstract
Living cells can leverage correlations in environmental fluctuations to predict the future environment and mount a response ahead of time. To this end, cells need to encode the past signal into the output of the intracellular network from which the future input is predicted. Yet storing information is costly, and not all features of the past signal are equally informative about the future input signal. Here, we show for two classes of input signals that cellular networks can reach the fundamental bound on the predictive information as set by the information extracted from the past signal: push-pull networks can reach this information bound for Markovian signals, while networks that take a temporal derivative can reach the bound for predicting the future derivative of non-Markovian signals. However, the bits of past information that are most informative about the future signal are also prohibitively costly. As a result, the optimal system that maximizes the predictive information for a given resource cost is, in general, not at the information bound. Applying our theory to the chemotaxis network of Escherichia coli reveals that its adaptive kernel is optimal for predicting future concentration changes over a broad range of background concentrations, and that the system has been tailored to predicting these changes in shallow gradients.
Affiliation(s)
- Age J. Tjalma
- AMOLF, Science Park 104, 1098 XG Amsterdam, The Netherlands
- Vahe Galstyan
- AMOLF, Science Park 104, 1098 XG Amsterdam, The Netherlands
- Lotte Slim
- AMOLF, Science Park 104, 1098 XG Amsterdam, The Netherlands
- Nils B. Becker
- Theoretical Systems Biology, German Cancer Research Center, 69120 Heidelberg, Germany
13
Charvin H, Catenacci Volpi N, Polani D. Exact and Soft Successive Refinement of the Information Bottleneck. Entropy (Basel) 2023; 25:1355. PMID: 37761653. PMCID: PMC10528077. DOI: 10.3390/e25091355.
Abstract
The information bottleneck (IB) framework formalises the essential requirement for efficient information processing systems to achieve an optimal balance between the complexity of their representation and the amount of information extracted about relevant features. However, since the representation complexity affordable by real-world systems may vary in time, the processing cost of updating the representations should also be taken into account. A crucial question is thus the extent to which adaptive systems can leverage the information content of already existing IB-optimal representations for producing new ones, which target the same relevant features but at a different granularity. We investigate the information-theoretic optimal limits of this process by studying and extending, within the IB framework, the notion of successive refinement, which describes the ideal situation where no information needs to be discarded for adapting an IB-optimal representation's granularity. Thanks in particular to a new geometric characterisation, we analytically derive the successive refinability of some specific IB problems (for binary variables, for jointly Gaussian variables, and for the relevancy variable being a deterministic function of the source variable), and provide a linear-programming-based tool to numerically investigate, in the discrete case, the successive refinement of the IB. We then soften this notion into a quantification of the loss of information optimality induced by several-stage processing through an existing measure of unique information. Simple numerical experiments suggest that this quantity is typically low, though not entirely negligible. These results could have important implications for (i) the structure and efficiency of incremental learning in biological and artificial agents, (ii) the comparison of IB-optimal observation channels in statistical decision problems, and (iii) the IB theory of deep neural networks.
Affiliation(s)
- Hippolyte Charvin
- School of Physics, Engineering and Computer Science, University of Hertfordshire, Hatfield AL10 9AB, UK; (N.C.V.); (D.P.)
14
Pan X, DeForge A, Schwartz O. Generalizing biological surround suppression based on center surround similarity via deep neural network models. PLoS Comput Biol 2023; 19:e1011486. PMID: 37738258. PMCID: PMC10550176. DOI: 10.1371/journal.pcbi.1011486.
Abstract
Sensory perception is dramatically influenced by context. Models of contextual neural surround effects in vision have mostly accounted for primary visual cortex (V1) data via nonlinear computations such as divisive normalization. However, surround effects are not well understood within a hierarchy, for neurons with more complex stimulus selectivity beyond V1. We utilized feedforward deep convolutional neural networks and developed a gradient-based technique to visualize the most suppressive and most excitatory surrounds. We found that deep neural networks exhibited a key signature of surround effects in V1: they highlighted center stimuli that visually stand out from the surround and suppressed responses when the surround stimulus was similar to the center. We found that in some neurons, especially in late layers, the most suppressive surround can, surprisingly, follow a change in the center stimulus. Through this visualization approach, we generalized previous understanding of surround effects to more complex stimuli, in ways that have not been revealed in visual cortices. In contrast, suppression based on center-surround similarity was not observed in an untrained network. We identified further successes and mismatches between the feedforward CNNs and the biology. Our results provide a testable hypothesis about surround effects in higher visual cortices, and the visualization approach could be adopted in future biological experimental designs.
Affiliation(s)
- Xu Pan
- Department of Computer Science, University of Miami, Coral Gables, FL, United States of America
- Annie DeForge
- School of Information, University of California, Berkeley, CA, United States of America
- Bentley University, Waltham, MA, United States of America
- Odelia Schwartz
- Department of Computer Science, University of Miami, Coral Gables, FL, United States of America

15
Dutta S, Iyer KK, Vanhatalo S, Breakspear M, Roberts JA. Mechanisms underlying pathological cortical bursts during metabolic depletion. Nat Commun 2023; 14:4792. [PMID: 37553358; PMCID: PMC10409751; DOI: 10.1038/s41467-023-40437-0]
Abstract
Cortical activity depends upon a continuous supply of oxygen and other metabolic resources. Perinatal disruption of oxygen availability is a common clinical scenario in neonatal intensive care units, and a leading cause of lifelong disability. Pathological patterns of brain activity including burst suppression and seizures are a hallmark of the recovery period, yet the mechanisms by which these patterns arise remain poorly understood. Here, we use computational modeling of coupled metabolic-neuronal activity to explore the mechanisms by which oxygen depletion generates pathological brain activity. We find that restricting oxygen supply drives transitions from normal activity to several pathological activity patterns (isoelectric, burst suppression, and seizures), depending on the potassium supply. Trajectories through parameter space track key features of clinical electrophysiology recordings and reveal how infants with good recovery outcomes track toward normal parameter values, whereas the parameter values for infants with poor outcomes dwell around the pathological values. These findings open avenues for studying and monitoring the metabolically challenged infant brain, and deepen our understanding of the link between neuronal and metabolic activity.
Affiliation(s)
- Shrey Dutta
- Brain Modelling Group, QIMR Berghofer Medical Research Institute, Brisbane, QLD, Australia
- Faculty of Medicine, University of Queensland, Brisbane, QLD, Australia
- School of Psychological Sciences, College of Engineering, Science and Environment, University of Newcastle, Callaghan, NSW, Australia
- Kartik K Iyer
- Brain Modelling Group, QIMR Berghofer Medical Research Institute, Brisbane, QLD, Australia
- Sampsa Vanhatalo
- Pediatric Research Center, Department of Physiology, Helsinki University Hospital, University of Helsinki, Helsinki, Finland
- Michael Breakspear
- School of Psychological Sciences, College of Engineering, Science and Environment, University of Newcastle, Callaghan, NSW, Australia
- School of Medicine and Public Health, College of Health and Medicine, University of Newcastle, Callaghan, NSW, Australia
- James A Roberts
- Brain Modelling Group, QIMR Berghofer Medical Research Institute, Brisbane, QLD, Australia
- Faculty of Medicine, University of Queensland, Brisbane, QLD, Australia

16
Price BH, Jensen CM, Khoudary AA, Gavornik JP. Expectation violations produce error signals in mouse V1. Cereb Cortex 2023; 33:8803-8820. [PMID: 37183176; PMCID: PMC10321125; DOI: 10.1093/cercor/bhad163]
Abstract
Repeated exposure to visual sequences changes the form of evoked activity in the primary visual cortex (V1). Predictive coding theory provides a potential explanation for this, namely that plasticity shapes cortical circuits to encode spatiotemporal predictions and that subsequent responses are modulated by the degree to which actual inputs match these expectations. Here we use a recently developed statistical modeling technique called Model-Based Targeted Dimensionality Reduction (MbTDR) to study visually evoked dynamics in mouse V1 in the context of an experimental paradigm called "sequence learning." We report that evoked spiking activity changed significantly with training, in a manner generally consistent with the predictive coding framework. Neural responses to expected stimuli were suppressed in a late window (100-150 ms) after stimulus onset following training, whereas responses to novel stimuli were not. Substituting a novel stimulus for a familiar one led to increases in firing that persisted for at least 300 ms. Omitting predictable stimuli in trained animals also led to increased firing at the expected time of stimulus onset. Finally, we show that spiking data can be used to accurately decode time within the sequence. Our findings are consistent with the idea that plasticity in early visual circuits is involved in coding spatiotemporal information.
Affiliation(s)
- Byron H Price
- Center for Systems Neuroscience, Department of Biology, Boston University, Boston, MA 02215, USA
- Graduate Program in Neuroscience, Boston University, Boston, MA 02215, USA
- Cambria M Jensen
- Center for Systems Neuroscience, Department of Biology, Boston University, Boston, MA 02215, USA
- Anthony A Khoudary
- Center for Systems Neuroscience, Department of Biology, Boston University, Boston, MA 02215, USA
- Jeffrey P Gavornik
- Center for Systems Neuroscience, Department of Biology, Boston University, Boston, MA 02215, USA
- Graduate Program in Neuroscience, Boston University, Boston, MA 02215, USA

17
Sihn D, Kwon OS, Kim SP. Robust and efficient representations of dynamic stimuli in hierarchical neural networks via temporal smoothing. Front Comput Neurosci 2023; 17:1164595. [PMID: 37398935; PMCID: PMC10307978; DOI: 10.3389/fncom.2023.1164595]
Abstract
INTRODUCTION Efficient coding that minimizes the informational redundancy of neural representations is a widely accepted neural coding principle. Despite this benefit, maximizing efficiency in neural coding can make neural representations vulnerable to random noise. One way to achieve robustness against random noise is to smooth neural responses. However, it is not clear whether the smoothness of neural responses can maintain robust neural representations when dynamic stimuli are processed through a hierarchical brain structure, in which not only random noise but also systematic error due to temporal lag can be induced. METHODS In the present study, we showed that smoothness via spatio-temporally efficient coding can achieve both efficiency and robustness by effectively dealing with noise and neural delay in the visual hierarchy when processing dynamic visual stimuli. RESULTS The simulation results demonstrated that a hierarchical neural network whose bidirectional synaptic connections were learned through spatio-temporally efficient coding with natural scenes could elicit neural responses to moving visual bars similar to those to static bars at the identical position and orientation, indicating robust neural responses against erroneous neural information. This implies that spatio-temporally efficient coding preserves the structure of visual environments locally in the neural responses of hierarchical structures. DISCUSSION The present results suggest the importance of a balance between efficiency and robustness in neural coding for visual processing of dynamic stimuli across hierarchical brain structures.
18
Makarov R, Pagkalos M, Poirazi P. Dendrites and Efficiency: Optimizing Performance and Resource Utilization. arXiv preprint 2023; arXiv:2306.07101v1. [PMID: 37396597; PMCID: PMC10312813]
Abstract
The brain is a highly efficient system evolved to achieve high performance with limited resources. We propose that dendrites make information processing and storage in the brain more efficient through the segregation of inputs and their conditional integration via nonlinear events, the compartmentalization of activity and plasticity, and the binding of information through synapse clustering. In real-world scenarios with limited energy and space, dendrites help biological networks process natural stimuli on behavioral timescales, perform the inference process on those stimuli in a context-specific manner, and store the information in overlapping populations of neurons. A global picture starts to emerge, in which dendrites help the brain achieve efficiency through a combination of optimization strategies balancing the tradeoff between performance and resource utilization.
Affiliation(s)
- Roman Makarov
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology Hellas (FORTH), Heraklion, 70013, Greece
- Department of Biology, University of Crete, Heraklion, 70013, Greece
- Michalis Pagkalos
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology Hellas (FORTH), Heraklion, 70013, Greece
- Department of Biology, University of Crete, Heraklion, 70013, Greece
- Panayiota Poirazi
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology Hellas (FORTH), Heraklion, 70013, Greece

19
Auksztulewicz R, Rajendran VG, Peng F, Schnupp JWH, Harper NS. Omission responses in local field potentials in rat auditory cortex. BMC Biol 2023; 21:130. [PMID: 37254137; PMCID: PMC10230691; DOI: 10.1186/s12915-023-01592-4]
Abstract
BACKGROUND Non-invasive recordings of gross neural activity in humans often show responses to omitted stimuli in steady trains of identical stimuli. This has been taken as evidence for the neural coding of prediction or prediction error. However, evidence for such omission responses from invasive recordings of cellular-scale responses in animal models is scarce. Here, we sought to characterise omission responses using extracellular recordings in the auditory cortex of anaesthetised rats. We profiled omission responses across local field potentials (LFP), analogue multiunit activity (AMUA), and single/multi-unit spiking activity, using stimuli that were fixed-rate trains of acoustic noise bursts where 5% of bursts were randomly omitted. RESULTS Significant omission responses were observed in LFP and AMUA signals, but not in spiking activity. These omission responses had a lower amplitude and longer latency than burst-evoked sensory responses, and omission response amplitude increased as a function of the number of preceding bursts. CONCLUSIONS Together, our findings show that omission responses are most robustly observed in LFP and AMUA signals (relative to spiking activity). This has implications for models of cortical processing that require many neurons to encode prediction errors in their spike output.
Affiliation(s)
- Ryszard Auksztulewicz
- Center for Cognitive Neuroscience Berlin, Free University Berlin, Berlin, Germany
- Dept of Neuroscience, City University of Hong Kong, Hong Kong S.A.R.
- Fei Peng
- Dept of Neuroscience, City University of Hong Kong, Hong Kong S.A.R.

20
Qiu Y, Klindt DA, Szatko KP, Gonschorek D, Hoefling L, Schubert T, Busse L, Bethge M, Euler T. Efficient coding of natural scenes improves neural system identification. PLoS Comput Biol 2023; 19:e1011037. [PMID: 37093861; PMCID: PMC10159360; DOI: 10.1371/journal.pcbi.1011037]
Abstract
Neural system identification aims at learning the response function of neurons to arbitrary stimuli using experimentally recorded data, but typically does not leverage normative principles such as efficient coding of natural environments. Visual systems, however, have evolved to efficiently process input from the natural environment. Here, we present a normative network regularization for system identification models by incorporating, as a regularizer, the efficient coding hypothesis, which states that neural response properties of sensory representations are strongly shaped by the need to preserve most of the stimulus information with limited resources. Using this approach, we explored whether a system identification model can be improved by sharing its convolutional filters with those of an autoencoder that aims to efficiently encode natural stimuli. To this end, we built a hybrid model to predict the responses of retinal neurons to noise stimuli. This approach not only yielded higher performance than the "stand-alone" system identification model, it also produced more biologically plausible filters, meaning that they more closely resembled neural representations in early visual systems. We found that these results held for retinal responses to different artificial stimuli and across model architectures. Moreover, our normatively regularized model performed particularly well in predicting the responses of direction-of-motion-sensitive retinal neurons. The benefit of natural scene statistics became marginal, however, for predicting the responses to natural movies. In summary, our results indicate that efficiently encoding environmental inputs can improve system identification models, at least for noise stimuli, and point to the benefit of probing the visual system with naturalistic stimuli.
Affiliation(s)
- Yongrong Qiu
- Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Graduate Training Centre of Neuroscience (GTC), International Max Planck Research School, U Tübingen, Tübingen, Germany
- David A Klindt
- Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Department of Mathematical Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Klaudia P Szatko
- Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Graduate Training Centre of Neuroscience (GTC), International Max Planck Research School, U Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
- Dominic Gonschorek
- Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Research Training Group 2381, U Tübingen, Tübingen, Germany
- Larissa Hoefling
- Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
- Timm Schubert
- Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Laura Busse
- Division of Neurobiology, Faculty of Biology, LMU Munich, Planegg-Martinsried, Germany
- Bernstein Center for Computational Neuroscience, Planegg-Martinsried, Germany
- Matthias Bethge
- Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
- Institute for Theoretical Physics, U Tübingen, Tübingen, Germany
- Thomas Euler
- Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, Tübingen, Germany

21
Gupta D, Młynarski W, Sumser A, Symonova O, Svatoň J, Joesch M. Panoramic visual statistics shape retina-wide organization of receptive fields. Nat Neurosci 2023; 26:606-614. [PMID: 36959418; PMCID: PMC10076217; DOI: 10.1038/s41593-023-01280-0]
Abstract
Statistics of natural scenes are not uniform: their structure varies dramatically from ground to sky. It remains unknown whether these nonuniformities are reflected in the large-scale organization of the early visual system and what benefits such adaptations would confer. Here, by relying on the efficient coding hypothesis, we predict that changes in the structure of receptive fields across visual space increase the efficiency of sensory coding. Using the mouse (Mus musculus) as a model species, we show that receptive fields of retinal ganglion cells change their shape along the dorsoventral retinal axis, with a marked surround asymmetry at the visual horizon, in agreement with our predictions. Our work demonstrates that, according to principles of efficient coding, the panoramic structure of natural scenes is exploited by the retina across space and cell types.
Affiliation(s)
- Divyansh Gupta
- Institute of Science and Technology Austria, Klosterneuburg, Austria
- Wiktor Młynarski
- Institute of Science and Technology Austria, Klosterneuburg, Austria
- Anton Sumser
- Institute of Science and Technology Austria, Klosterneuburg, Austria
- Division of Neuroscience, Faculty of Biology, LMU, Munich, Germany
- Olga Symonova
- Institute of Science and Technology Austria, Klosterneuburg, Austria
- Jan Svatoň
- Institute of Science and Technology Austria, Klosterneuburg, Austria
- Maximilian Joesch
- Institute of Science and Technology Austria, Klosterneuburg, Austria

22
Jaskir A, Frank MJ. On the normative advantages of dopamine and striatal opponency for learning and choice. eLife 2023; 12:e85107. [PMID: 36946371; PMCID: PMC10198727; DOI: 10.7554/elife.85107]
Abstract
The basal ganglia (BG) contribute to reinforcement learning (RL) and decision-making, but unlike artificial RL agents, they rely on complex circuitry and dynamic dopamine (DA) modulation of opponent striatal pathways to do so. We develop the OpAL* model to assess the normative advantages of this circuitry. In OpAL*, learning induces opponent pathways to differentially emphasize the history of positive or negative outcomes for each action. Dynamic DA modulation then amplifies the pathway most tuned for the task environment. This efficient coding mechanism avoids a vexing explore-exploit tradeoff that plagues traditional RL models in sparse reward environments. OpAL* exhibits robust advantages over alternative models, particularly in environments with sparse reward and large action spaces. These advantages depend on opponent and nonlinear Hebbian plasticity mechanisms previously thought to be pathological. Finally, OpAL* captures risky choice patterns arising from DA and environmental manipulations across species, suggesting that they result from a normative biological mechanism.
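The opponent-pathway bookkeeping described in the abstract can be caricatured in a few lines; the update rule, parameter values, and variable names here are a hedged sketch of the general idea, not the authors' OpAL* implementation:

```python
def opponent_update(G, N, delta, alpha=0.1):
    """Nonlinear (weight-dependent) Hebbian update: the Go weight G grows
    with positive prediction errors, the NoGo weight N with negative ones."""
    G = G + alpha * G * max(delta, 0.0)
    N = N + alpha * N * max(-delta, 0.0)
    return G, N

def action_value(G, N, beta_g=1.0, beta_n=1.0):
    """Dopamine-like gains beta_g / beta_n weight the two pathways at choice:
    raising beta_g emphasizes learned benefits; raising beta_n, learned costs."""
    return beta_g * G - beta_n * N

# Two actions fed fixed sequences of reward prediction errors:
G_rich, N_rich = 1.0, 1.0     # mostly positive outcomes
for delta in (1.0, 1.0, -1.0, 1.0):
    G_rich, N_rich = opponent_update(G_rich, N_rich, delta)

G_lean, N_lean = 1.0, 1.0     # mostly negative outcomes
for delta in (-1.0, -1.0, 1.0, -1.0):
    G_lean, N_lean = opponent_update(G_lean, N_lean, delta)

print(action_value(G_rich, N_rich), action_value(G_lean, N_lean))
```

Because each pathway's learning is scaled by its own weight, the two pathways come to emphasize the positive or negative outcome history respectively, and a dopamine-dependent choice of beta_g versus beta_n can then amplify whichever pathway is better tuned to the environment.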
Affiliation(s)
- Alana Jaskir
- Department of Cognitive, Linguistic and Psychological Sciences, Carney Institute for Brain Science, Brown University, Providence, United States
- Michael J Frank
- Department of Cognitive, Linguistic and Psychological Sciences, Carney Institute for Brain Science, Brown University, Providence, United States

23
Qin S, Farashahi S, Lipshutz D, Sengupta AM, Chklovskii DB, Pehlevan C. Coordinated drift of receptive fields in Hebbian/anti-Hebbian network models during noisy representation learning. Nat Neurosci 2023; 26:339-349. [PMID: 36635497; DOI: 10.1038/s41593-022-01225-z]
Abstract
Recent experiments have revealed that neural population codes in many brain areas continuously change even when animals have fully learned and stably perform their tasks. This representational 'drift' naturally leads to questions about its causes, dynamics and functions. Here we explore the hypothesis that neural representations optimize a representational objective with a degenerate solution space, and noisy synaptic updates drive the network to explore this (near-)optimal space causing representational drift. We illustrate this idea and explore its consequences in simple, biologically plausible Hebbian/anti-Hebbian network models of representation learning. We find that the drifting receptive fields of individual neurons can be characterized by a coordinated random walk, with effective diffusion constants depending on various parameters such as learning rate, noise amplitude and input statistics. Despite such drift, the representational similarity of population codes is stable over time. Our model recapitulates experimental observations in the hippocampus and posterior parietal cortex and makes testable predictions that can be probed in future experiments.
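The core mechanism, noisy updates that wander through a degenerate space of equally good solutions, can be demonstrated with a toy in which the degenerate family is the set of rotations (which all preserve pairwise similarity); this is an illustrative caricature, not the paper's Hebbian/anti-Hebbian network:

```python
import math
import random

random.seed(0)

# Three stimuli and a 2-unit linear representation; any rotation of the
# representation preserves all pairwise inner products, so rotations form
# a degenerate space of equally good codes.
stimuli = [(1.0, 0.0), (0.6, 0.8), (0.0, 1.0)]

def rotate(theta, v):
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def similarity_matrix(reps):
    """Pairwise dot products: a crude representational similarity measure."""
    return [[sum(a * b for a, b in zip(u, v)) for v in reps] for u in reps]

# Noisy updates confined to the degenerate manifold: a random walk over
# the rotation angle.
theta = 0.0
for _ in range(1000):
    theta += random.gauss(0.0, 0.05)

before = [rotate(0.0, s) for s in stimuli]
after = [rotate(theta, s) for s in stimuli]

# Individual "receptive fields" drift, but the population-level similarity
# structure is unchanged.
drift = max(abs(a[i] - b[i]) for a, b in zip(before, after) for i in range(2))
sim_change = max(abs(x - y)
                 for r1, r2 in zip(similarity_matrix(before),
                                   similarity_matrix(after))
                 for x, y in zip(r1, r2))
print(f"single-unit drift: {drift:.3f}, similarity change: {sim_change:.2e}")
```

In the paper's terms, the effective diffusion constant of such a walk depends on learning rate, noise amplitude, and input statistics; in this sketch it is simply set by the per-step angle variance.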
Affiliation(s)
- Shanshan Qin
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Shiva Farashahi
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
- David Lipshutz
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
- Anirvan M Sengupta
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
- Department of Physics and Astronomy, Rutgers University, New Brunswick, NJ, USA
- Dmitri B Chklovskii
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
- NYU Langone Medical Center, New York, NY, USA
- Cengiz Pehlevan
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA

24
Repetto C, Riva G. The neuroscience of body memory: Recent findings and conceptual advances. EXCLI J 2023; 22:191-206. [PMID: 36998712; PMCID: PMC10043453; DOI: 10.17179/excli2023-5877]
Abstract
The body is a very special object, as it corresponds to the physical component of the self and is the medium through which we interact with the world. Our body awareness includes the mental representation of the body that happens to be our own, traditionally defined in terms of body schema and body image. Starting from the distinction between these two types of representation, the present paper tries to reconcile the literature on body representations under the common framework of body memory. Body memory develops ontogenetically from birth and across the whole lifespan, and is directly linked to the development of the self. Our sense of self and identity is therefore fundamentally based on the multisensory knowledge accumulated in body memory, so that sensations collected by our body and stored as implicit memory can unfold in the future under suitable circumstances. Indeed, these sets of bodily information have been proposed as possible key factors underpinning several mental illnesses. Following this perspective, the Embodied Medicine approach puts forward the use of advanced technologies to alter dysfunctional body memory and enhance people's well-being. The last sections illustrate recent experimental evidence for two strategies that specifically target bodily information to increase health and well-being: interoceptive feedback and bodily illusions. See also Figure 1.
Affiliation(s)
- Claudia Repetto
- Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- *To whom correspondence should be addressed: Claudia Repetto, Department of Psychology, Università Cattolica del Sacro Cuore, Largo Gemelli, 1, 20121 Milan, Italy; Tel: +39 02 7234 2585
- Giuseppe Riva
- Humane Technology Lab, Università Cattolica del Sacro Cuore, Milan, Italy
- Applied Technology for Neuropsychology Lab, Istituto Auxologico Italiano IRCCS, Milan, Italy

25
Efficient coding theory of dynamic attentional modulation. PLoS Biol 2022; 20:e3001889. [PMID: 36542662; PMCID: PMC9831638; DOI: 10.1371/journal.pbio.3001889]
Abstract
Activity of sensory neurons is driven not only by external stimuli but also by feedback signals from higher brain areas. Attention is one particularly important internal signal whose presumed role is to modulate sensory representations such that they only encode information currently relevant to the organism at minimal cost. This hypothesis has, however, not yet been expressed in a normative computational framework. Here, by building on normative principles of probabilistic inference and efficient coding, we developed a model of dynamic population coding in the visual cortex. By continuously adapting the sensory code to changing demands of the perceptual observer, an attention-like modulation emerges. This modulation can dramatically reduce the amount of neural activity without deteriorating the accuracy of task-specific inferences. Our results suggest that a range of seemingly disparate cortical phenomena such as intrinsic gain modulation, attention-related tuning modulation, and response variability could be manifestations of the same underlying principles, which combine efficient sensory coding with optimal probabilistic inference in dynamic environments.
26
Bordelon B, Pehlevan C. Population codes enable learning from few examples by shaping inductive bias. eLife 2022; 11:e78606. [PMID: 36524716; PMCID: PMC9839349; DOI: 10.7554/elife.78606]
Abstract
Learning from a limited number of experiences requires suitable inductive biases. To identify how inductive biases are implemented in and shaped by neural codes, we analyze sample-efficient learning of arbitrary stimulus-response maps from arbitrary neural codes with biologically plausible readouts. We develop an analytical theory that predicts the generalization error of the readout as a function of the number of observed examples. Our theory illustrates in a mathematically precise way how the structure of population codes shapes inductive bias, and how a match between the code and the task is crucial for sample-efficient learning. It elucidates a bias to explain observed data with simple stimulus-response maps. Using recordings from the mouse primary visual cortex, we demonstrate the existence of an efficiency bias towards low-frequency orientation discrimination tasks for grating stimuli and low spatial frequency reconstruction tasks for natural images. We reproduce the discrimination bias in a simple model of primary visual cortex, and further show how invariances in the code to certain stimulus variations alter learning performance. We extend our methods to time-dependent neural codes and predict the sample efficiency of readouts from recurrent networks. We observe that many different codes can support the same inductive bias. By analyzing recordings from the mouse primary visual cortex, we demonstrate that biological codes have lower total activity than other codes with identical bias. Finally, we discuss implications of our theory in the context of recent developments in neuroscience and artificial intelligence. Overall, our study provides a concrete method for elucidating inductive biases of the brain and promotes sample-efficient learning as a general normative coding principle.
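The notion of code-task alignment can be illustrated with a deterministic toy: a linear population code whose variance is concentrated along one feature axis, read out by ridge regression. The code, the two tasks, and the ridge readout below are illustrative assumptions, not the authors' kernel-theory calculation:

```python
# Four stimuli; the code responds strongly along feature 0 and weakly along
# feature 1, and the two features are uncorrelated on this training set,
# so the ridge solution decouples across axes.
code = [(2.0, 0.2), (-2.0, 0.2), (1.0, -0.2), (-1.0, -0.2)]
lam = 1.0  # readout regularization strength

def ridge_weight(xs, ys, lam):
    """Closed-form one-dimensional ridge regression: w = <x,y> / (<x,x> + lam)."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

# Task A targets the high-variance code direction; task B the low-variance one.
w_aligned = ridge_weight([r[0] for r in code], [r[0] for r in code], lam)
w_misaligned = ridge_weight([r[1] for r in code], [r[1] for r in code], lam)

# With a true weight of 1 on each task, the squared shrinkage error measures
# how much of the target the regularized readout fails to capture.
err_aligned = (1.0 - w_aligned) ** 2
err_misaligned = (1.0 - w_misaligned) ** 2
print(err_aligned, err_misaligned)
```

The readout recovers the aligned task almost perfectly but heavily shrinks the misaligned one; in the theory summarized above, the analogous effect is a slower decay of generalization error with sample size when the task loads on low-variance directions of the code.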
Affiliation(s)
- Blake Bordelon
- John A Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, United States
- Center for Brain Science, Harvard University, Cambridge, United States
- Cengiz Pehlevan
- John A Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, United States
- Center for Brain Science, Harvard University, Cambridge, United States

27
Ali A, Ahmad N, de Groot E, van Gerven MAJ, Kietzmann TC. Predictive coding is a consequence of energy efficiency in recurrent neural networks. Patterns (N Y) 2022; 3:100639. [PMID: 36569556; PMCID: PMC9768680; DOI: 10.1016/j.patter.2022.100639]
Abstract
Predictive coding is a promising framework for understanding brain function. It postulates that the brain continuously inhibits predictable sensory input, ensuring preferential processing of surprising elements. A central aspect of this view is its hierarchical connectivity, involving recurrent message passing between excitatory bottom-up signals and inhibitory top-down feedback. Here we use computational modeling to demonstrate that such architectural hardwiring is not necessary. Rather, predictive coding is shown to emerge as a consequence of energy efficiency. When training recurrent neural networks to minimize their energy consumption while operating in predictive environments, the networks self-organize into prediction and error units with appropriate inhibitory and excitatory interconnections and learn to inhibit predictable sensory input. Moving beyond the view of purely top-down-driven predictions, we demonstrate, via virtual lesioning experiments, that networks perform predictions on two timescales: fast lateral predictions among sensory units and slower prediction cycles that integrate evidence over time.
Affiliation(s)
- Abdullahi Ali
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands (corresponding author)
- Nasir Ahmad
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Elgar de Groot
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Department of Experimental Psychology, Utrecht University, Utrecht, the Netherlands
- Tim Christian Kietzmann
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany (corresponding author)
28
Bauer M. How does an organism extract relevant information from transcription factor concentrations? Biochem Soc Trans 2022; 50:1365-1376. [PMID: 36111776 PMCID: PMC9704516 DOI: 10.1042/bst20220333] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2022] [Revised: 08/16/2022] [Accepted: 08/17/2022] [Indexed: 09/10/2024]
Abstract
How does an organism regulate its genes? Regulation typically proceeds through a signal-processing chain: an externally applied stimulus or a maternally supplied transcription factor drives the expression of downstream genes, which, in turn, are transcription factors for further genes. Especially during development, these transcription factors are frequently expressed in amounts where noise is still important; yet, the signals that they provide must not be lost in the noise. Thus, the organism needs to extract exactly the relevant information from the signal. New experimental approaches involving single-molecule measurements at high temporal precision, as well as increasingly precise manipulations directly on the genome, are allowing us to tackle this question anew, and these experimental advances should enable corresponding advances on the theoretical side. In this review, I describe, using the example of fly embryo gene regulation, how theoretical approaches, especially from inference and information theory, can help in understanding gene regulation. I first review some more traditional theoretical models for gene regulation, followed by a brief discussion of information-theoretic approaches and when they can be applied. I then introduce early fly development as an exemplary system where such information-theoretic approaches have traditionally been applied, focusing on how one such method, the information bottleneck approach, has recently been used to infer structural features of enhancer architecture.
Affiliation(s)
- Marianne Bauer
- Bionanoscience Department, Delft University of Technology, van der Maasweg 9, 2629 Delft, The Netherlands
- Joseph Henry Laboratories of Physics, Princeton University, Princeton, NJ 08544, U.S.A.
- Lewis–Sigler Institute for Integrative Genomics, Princeton University, Princeton, NJ 08544, U.S.A.
29
Divisive normalization is an efficient code for multivariate Pareto-distributed environments. Proc Natl Acad Sci U S A 2022; 119:e2120581119. [PMID: 36161961 PMCID: PMC9546555 DOI: 10.1073/pnas.2120581119] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Divisive normalization is a canonical computation in the brain, observed across neural systems, that is often considered to be an implementation of the efficient coding principle. We provide a theoretical result that makes the conditions under which divisive normalization is an efficient code analytically precise: We show that, in a low-noise regime, encoding an n-dimensional stimulus via divisive normalization is efficient if and only if its prevalence in the environment is described by a multivariate Pareto distribution. We generalize this multivariate analog of histogram equalization to allow for arbitrary metabolic costs of the representation, and show how different assumptions on costs are associated with different shapes of the distributions that divisive normalization efficiently encodes. Our result suggests that divisive normalization may have evolved to efficiently represent stimuli with Pareto distributions. We demonstrate that this efficiently encoded distribution is consistent with stylized features of naturalistic stimulus distributions such as their characteristic conditional variance dependence, and we provide empirical evidence suggesting that it may capture the statistics of filter responses to naturalistic images. Our theoretical finding also yields empirically testable predictions across sensory domains on how the divisive normalization parameters should be tuned to features of the input distribution.
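As a concrete illustration of the computation the abstract refers to (not the paper's derivation), here is a minimal numpy sketch of divisive normalization; the semi-saturation constant `sigma` and exponent `n` are illustrative parameter choices.

```python
import numpy as np

def divisive_normalization(x, sigma=1.0, n=2.0):
    """Canonical divisive normalization: each response is divided by the
    pooled activity of the population plus a semi-saturation constant.
    sigma and n are illustrative parameter values."""
    x = np.asarray(x, dtype=float)
    return x**n / (sigma**n + np.sum(x**n))

# Filter responses to a stimulus; because the denominator pools over the
# whole population, each output depends on the total population activity.
responses = np.array([1.0, 2.0, 4.0])
normalized = divisive_normalization(responses)
```

The paper's result concerns when this map is an efficient code: exactly when the input statistics follow a multivariate Pareto distribution, with the parameters above tuned to that distribution.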
30
Pandey B, Pachitariu M, Brunton BW, Harris KD. Structured random receptive fields enable informative sensory encodings. PLoS Comput Biol 2022; 18:e1010484. [PMID: 36215307 PMCID: PMC9584455 DOI: 10.1371/journal.pcbi.1010484] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Revised: 10/20/2022] [Accepted: 08/11/2022] [Indexed: 11/10/2022] Open
Abstract
Brains must represent the outside world so that animals survive and thrive. In early sensory systems, neural populations have diverse receptive fields structured to detect important features in inputs, yet significant variability has been ignored in classical models of sensory neurons. We model neuronal receptive fields as random, variable samples from parameterized distributions and demonstrate this model in two sensory modalities using data from insect mechanosensors and mammalian primary visual cortex. Our approach leads to a significant theoretical connection between the foundational concepts of receptive fields and random features, a leading theory for understanding artificial neural networks. The modeled neurons perform a randomized wavelet transform on inputs, which removes high frequency noise and boosts the signal. Further, these random feature neurons enable learning from fewer training samples and with smaller networks in artificial tasks. This structured random model of receptive fields provides a unifying, mathematically tractable framework to understand sensory encodings across both spatial and temporal domains.
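The structured-random-features idea can be sketched in a few lines of numpy: receptive fields are drawn as random samples from a parameterized Gabor-like distribution, stimuli are projected through them, and only a linear readout is trained. The parameterization and target below are illustrative assumptions, not the authors' fitted models.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_gabor_fields(n_neurons, n_pixels, freq_scale=2.0, width=0.1):
    """Sample receptive fields from a parameterized Gabor-like distribution
    (illustrative parameterization over centers, frequencies, phases)."""
    pos = np.linspace(0.0, 1.0, n_pixels)
    centers = rng.uniform(0.0, 1.0, size=(n_neurons, 1))
    freqs = rng.exponential(freq_scale, size=(n_neurons, 1))
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_neurons, 1))
    envelope = np.exp(-((pos - centers) ** 2) / (2.0 * width**2))
    return envelope * np.cos(2.0 * np.pi * freqs * (pos - centers) + phases)

# Project stimuli through the fixed random fields, then train only a
# linear (ridge) readout -- the random-features scheme the abstract
# connects to receptive fields.
W = random_gabor_fields(n_neurons=200, n_pixels=64)
X = rng.standard_normal((50, 64))            # 50 one-dimensional stimuli
H = np.maximum(X @ W.T, 0.0)                 # rectified population responses
y = X[:, :8].sum(axis=1)                     # an illustrative smooth target
beta = np.linalg.solve(H.T @ H + 1e-2 * np.eye(200), H.T @ y)
```

Only `beta` is learned; the receptive fields stay random, which is what makes the link to random-feature theory direct.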
Affiliation(s)
- Biraj Pandey
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
- Marius Pachitariu
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America
- Bingni W. Brunton
- Department of Biology, University of Washington, Seattle, Washington, United States of America
- Kameron Decker Harris
- Department of Computer Science, Western Washington University, Bellingham, Washington, United States of America
31
Seenivasan P, Narayanan R. Efficient information coding and degeneracy in the nervous system. Curr Opin Neurobiol 2022; 76:102620. [PMID: 35985074 PMCID: PMC7613645 DOI: 10.1016/j.conb.2022.102620] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2022] [Revised: 07/01/2022] [Accepted: 07/07/2022] [Indexed: 11/25/2022]
Abstract
Efficient information coding (EIC) is a universal biological framework rooted in the fundamental principle that system responses should match their natural stimulus statistics for maximizing environmental information. Quantitatively assessed through information theory, such adaptation to the environment occurs at all biological levels and timescales. The context dependence of environmental stimuli and the need for stable adaptations make EIC a daunting task. We argue that biological complexity is the principal architect that subserves deft execution of stable EIC. Complexity in a system is characterized by several functionally segregated subsystems that show a high degree of functional integration when they interact with each other. Complex biological systems manifest heterogeneities and degeneracy, wherein structurally different subsystems could interact to yield the same functional outcome. We argue that complex systems offer several choices that effectively implement EIC and homeostasis for each of the different contexts encountered by the system.
Affiliation(s)
- Pavithraa Seenivasan
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, 560012, India. https://twitter.com/PaveeSeeni
- Rishikesh Narayanan
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, 560012, India.
32
Richter D, Heilbron M, de Lange FP. Dampened sensory representations for expected input across the ventral visual stream. OXFORD OPEN NEUROSCIENCE 2022; 1:kvac013. [PMID: 38596702 PMCID: PMC10939312 DOI: 10.1093/oons/kvac013] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/17/2021] [Revised: 03/29/2022] [Accepted: 07/12/2022] [Indexed: 04/11/2024]
Abstract
Expectations, derived from previous experience, can help in making perception faster, more reliable and informative. A key neural signature of perceptual expectations is expectation suppression, an attenuated neural response to expected compared with unexpected stimuli. While expectation suppression has been reported using a variety of paradigms and recording methods, it remains unclear what neural modulation underlies this response attenuation. Sharpening models propose that neural populations tuned away from an expected stimulus are particularly suppressed by expectations, thereby resulting in an attenuated, but sharper population response. In contrast, dampening models suggest that neural populations tuned toward the expected stimulus are most suppressed, thus resulting in a dampened, less redundant population response. Empirical support is divided, with some studies favoring sharpening, while others support dampening. A key limitation of previous neuroimaging studies is the ability to draw inferences about neural-level modulations based on population (e.g. voxel) level signals. Indeed, recent simulations of repetition suppression showed that opposite neural modulations can lead to comparable population-level modulations. Forward models provide one solution to this inference limitation. Here, we used forward models to implement sharpening and dampening models, mapping neural modulations to voxel-level data. We show that a feature-specific gain modulation, suppressing neurons tuned toward the expected stimulus, best explains the empirical fMRI data. Thus, our results support the dampening account of expectation suppression, suggesting that expectations reduce redundancy in sensory cortex, and thereby promote updating of internal models on the basis of surprising information.
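The two candidate modulations can be made concrete with a toy population of orientation-tuned units. This is a sketch with assumed tuning and gain parameters, not the study's fMRI forward models: dampening suppresses units tuned toward the expected stimulus, sharpening suppresses units tuned away from it.

```python
import numpy as np

prefs = np.linspace(-90.0, 90.0, 181)   # preferred orientations (degrees)
expected = 0.0                           # the expected orientation

def tuning(pref, stim, width=20.0):
    """Gaussian orientation tuning curve (illustrative width)."""
    return np.exp(-((pref - stim) ** 2) / (2.0 * width**2))

resp = tuning(prefs, expected)           # population response, expected stimulus

# Dampening: strongest suppression of units tuned TOWARD the expectation,
# lowering and flattening the population peak.
dampened = resp * (1.0 - 0.5 * tuning(prefs, expected))
# Sharpening: strongest suppression of units tuned AWAY from it,
# preserving the peak while narrowing the population response.
sharpened = resp * (0.5 + 0.5 * tuning(prefs, expected))
```

Both gain profiles reduce the total response (expectation suppression); they differ in which units carry the suppression, which is what the forward-model analysis in the paper adjudicates.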
Affiliation(s)
- David Richter
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6500 HB Nijmegen, The Netherlands
- Micha Heilbron
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6500 HB Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
- Floris P de Lange
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6500 HB Nijmegen, The Netherlands
33
Edmondson LR, Jiménez Rodríguez A, Saal HP. Expansion and contraction of resource allocation in sensory bottlenecks. eLife 2022; 11:70777. [PMID: 35924884 PMCID: PMC9391039 DOI: 10.7554/elife.70777] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Accepted: 07/29/2022] [Indexed: 11/22/2022] Open
Abstract
Topographic sensory representations often do not scale proportionally to the size of their input regions, with some expanded and others contracted. In vision, the foveal representation is magnified cortically, as are the fingertips in touch. What principles drive this allocation, and how should receptor density, for example, the high innervation of the fovea or the fingertips, and stimulus statistics, for example, the higher contact frequencies on the fingertips, contribute? Building on work in efficient coding, we address this problem using linear models that optimally decorrelate the sensory signals. We introduce a sensory bottleneck to impose constraints on resource allocation and derive the optimal neural allocation. We find that bottleneck width is a crucial factor in resource allocation, inducing either expansion or contraction. Both receptor density and stimulus statistics affect allocation and jointly determine convergence for wider bottlenecks. Furthermore, we show a close match between the predicted and empirical cortical allocations in a well-studied model system, the star-nosed mole. Overall, our results suggest that the strength of cortical magnification depends on resource limits.
Affiliation(s)
- Laura R Edmondson
- Department of Psychology, University of Sheffield, Sheffield, United Kingdom
- Hannes P Saal
- Department of Psychology, University of Sheffield, Sheffield, United Kingdom
34
Abstract
When grasping objects, we rely on our sense of touch to adjust our grip and react against external perturbations. Less than 200 ms after an unexpected event, the sensorimotor system is able to process tactile information to deduce the frictional strength of the contact and to react accordingly. Given that roughly 1,300 afferents innervate the fingertips, it is unclear how the nervous system can process such a large influx of data in a sufficiently short time span. In this study, we measured the deformation of the skin during the initial stages of incipient sliding for a wide range of frictional conditions. We show that the dominant patterns of deformation are sufficient to estimate the distance between the frictional force and the frictional strength of the contact. From these stereotypical patterns, a classifier can predict if an object is about to slide during the initial stages of incipient slip. The prediction is robust to the actual value of the interfacial friction, showing sensory invariance. These results suggest the existence of a possible compact set of bases that we call Eigenstrains. These Eigenstrains are a potential mechanism to rapidly decode the margin from full slip from the tactile information contained in the deformation of the skin. Our findings suggest that only 6 of these Eigenstrains are necessary to classify whether the object is firmly stuck to the fingers or is close to slipping away. These findings give clues about the tactile regulation of grasp and the insights are directly applicable to the design of robotic grippers and prosthetics that rapidly react to external perturbations.
35
Price BH, Gavornik JP. Efficient Temporal Coding in the Early Visual System: Existing Evidence and Future Directions. Front Comput Neurosci 2022; 16:929348. [PMID: 35874317 PMCID: PMC9298461 DOI: 10.3389/fncom.2022.929348] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2022] [Accepted: 06/13/2022] [Indexed: 01/16/2023] Open
Abstract
While it is universally accepted that the brain makes predictions, there is little agreement about how this is accomplished and under which conditions. Accurate prediction requires neural circuits to learn and store spatiotemporal patterns observed in the natural environment, but it is not obvious how such information should be stored, or encoded. Information theory provides a mathematical formalism that can be used to measure the efficiency and utility of different coding schemes for data transfer and storage. This theory shows that codes become efficient when they remove predictable, redundant spatial and temporal information. Efficient coding has been used to understand retinal computations and may also be relevant to understanding more complicated temporal processing in visual cortex. However, the literature on efficient coding in cortex is varied and can be confusing since the same terms are used to mean different things in different experimental and theoretical contexts. In this work, we attempt to provide a clear summary of the theoretical relationship between efficient coding and temporal prediction, and review evidence that efficient coding principles explain computations in the retina. We then apply the same framework to computations occurring in early visuocortical areas, arguing that data from rodents is largely consistent with the predictions of this model. Finally, we review and respond to criticisms of efficient coding and suggest ways that this theory might be used to design future experiments, with particular focus on understanding the extent to which neural circuits make predictions from efficient representations of environmental statistics.
Affiliation(s)
- Jeffrey P. Gavornik
- Center for Systems Neuroscience, Graduate Program in Neuroscience, Department of Biology, Boston University, Boston, MA, United States
36
Llanos F, Nike Gnanateja G, Chandrasekaran B. Principal component decomposition of acoustic and neural representations of time-varying pitch reveals adaptive efficient coding of speech covariation patterns. BRAIN AND LANGUAGE 2022; 230:105122. [PMID: 35460953 PMCID: PMC9934908 DOI: 10.1016/j.bandl.2022.105122] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/04/2021] [Revised: 03/30/2022] [Accepted: 04/02/2022] [Indexed: 06/14/2023]
Abstract
Understanding the effects of statistical regularities on speech processing is a central issue in auditory neuroscience. To investigate the effects of distributional covariance on the neural processing of speech features, we introduce and validate a novel approach: decomposition of time-varying signals into patterns of covariation extracted with Principal Component Analysis. We used this decomposition to assay the sensory representation of pitch covariation patterns in native Chinese listeners and non-native learners of Mandarin Chinese tones. Sensory representations were examined using the frequency-following response, a far-field potential that reflects phase-locked activity from neural ensembles along the auditory pathway. We found a more efficient representation of the covariation patterns that accounted for more redundancy in the form of distributional covariance. Notably, long-term language and short-term training experiences enhanced the sensory representation of these covariation patterns.
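The decomposition step described here (extracting covariation patterns from time-varying signals with PCA) can be sketched with synthetic pitch contours; the data below are illustrative, not the study's recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic time-varying pitch (F0) contours, one row per utterance:
# two underlying covariation patterns plus a small amount of noise.
t = np.linspace(0.0, 1.0, 100)
n_utt = 40
contours = (rng.standard_normal((n_utt, 1)) * np.sin(2.0 * np.pi * t)
            + rng.standard_normal((n_utt, 1)) * (t - 0.5)
            + 0.05 * rng.standard_normal((n_utt, 100)))

# Decompose into patterns of covariation via PCA (SVD of centered data).
centered = contours - contours.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S**2 / np.sum(S**2)   # variance captured by each pattern
scores = centered @ Vt.T          # each contour's weights on the patterns
```

Because most of the distributional covariance is concentrated in a few patterns, a representation of those patterns is more compact than one of the raw contours, which is the sense of "efficient" used in the abstract.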
Affiliation(s)
- Fernando Llanos
- Department of Linguistics, The University of Texas at Austin, Austin, TX 78712, USA.
- G Nike Gnanateja
- Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Bharath Chandrasekaran
- Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, PA 15260, USA.
37
Fountas Z, Sylaidi A, Nikiforou K, Seth AK, Shanahan M, Roseboom W. A Predictive Processing Model of Episodic Memory and Time Perception. Neural Comput 2022; 34:1501-1544. [PMID: 35671462 DOI: 10.1162/neco_a_01514] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Accepted: 03/06/2022] [Indexed: 11/04/2022]
Abstract
Human perception and experience of time are strongly influenced by ongoing stimulation, memory of past experiences, and required task context. When paying attention to time, time experience seems to expand; when distracted, it seems to contract. When considering time based on memory, the experience may be different than what is in the moment, exemplified by sayings like "time flies when you're having fun." Experience of time also depends on the content of perceptual experience-rapidly changing or complex perceptual scenes seem longer in duration than less dynamic ones. The complexity of interactions among attention, memory, and perceptual stimulation is a likely reason that an overarching theory of time perception has been difficult to achieve. Here, we introduce a model of perceptual processing and episodic memory that makes use of hierarchical predictive coding, short-term plasticity, spatiotemporal attention, and episodic memory formation and recall, and apply this model to the problem of human time perception. In an experiment with approximately 13,000 human participants, we investigated the effects of memory, cognitive load, and stimulus content on duration reports of dynamic natural scenes up to about 1 minute long. Using our model to generate duration estimates, we compared human and model performance. Model-based estimates replicated key qualitative biases, including differences by cognitive load (attention), scene type (stimulation), and whether the judgment was made based on current or remembered experience (memory). Our work provides a comprehensive model of human time perception and a foundation for exploring the computational basis of episodic memory within a hierarchical predictive coding framework.
Affiliation(s)
- Zafeirios Fountas
- Emotech Labs, London, N1 7EU, U.K.
- Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, London WC1N 3AR, U.K.
- Anil K Seth
- Department of Informatics and Sackler Centre for Consciousness Science, University of Sussex, Brighton, BN1 9RH, U.K.
- Canadian Institute for Advanced Research Program on Brain, Mind, and Consciousness, Toronto, ON M5G 1M1, Canada
- Murray Shanahan
- Department of Computing, Imperial College London, London, SW7 2RH, U.K.
- Warrick Roseboom
- Department of Informatics and Sackler Centre for Consciousness Science, University of Sussex, Brighton BN1 9RH, U.K.
38
Optimal Population Coding for Dynamic Input by Nonequilibrium Networks. ENTROPY 2022; 24:e24050598. [PMID: 35626482 PMCID: PMC9140425 DOI: 10.3390/e24050598] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Revised: 04/07/2022] [Accepted: 04/19/2022] [Indexed: 12/04/2022]
Abstract
The efficient coding hypothesis states that a neural response should maximize the information it carries about the external input. Theoretical studies have focused on optimal responses in single neurons and on population codes in networks with weak pairwise interactions. However, more biologically realistic settings with asymmetric connectivity, and the encoding of dynamical stimuli, have not been well characterized. Here, we study the collective response in a kinetic Ising model that encodes dynamic input. We apply a gradient-based method and a mean-field approximation to reconstruct networks given the neural code that encodes dynamic input patterns. We measure network asymmetry, decoding performance, and entropy production in networks that generate an optimal population code. We analyze how stimulus correlation, time scale, and reliability of the network affect the optimal encoding networks. Specifically, we find network dynamics altered by the statistics of the dynamic input, identify stimulus encoding strategies, and show an optimal effective temperature in the asymmetric networks. We further discuss how this approach connects to the Bayesian framework and to continuous recurrent neural networks. Together, these results bridge concepts from nonequilibrium physics with the analysis of dynamics and coding in networks.
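The forward model underlying this setup, a kinetic Ising network driven by dynamic input, can be simulated directly with parallel Glauber dynamics. The couplings and stimulus below are illustrative assumptions, not the networks reconstructed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_kinetic_ising(J, h, steps):
    """Parallel Glauber dynamics for a kinetic Ising network:
    P(s_i(t+1) = +1) = 1 / (1 + exp(-2 * (J @ s(t) + h(t))_i)).
    J may be asymmetric; h is the (steps, N) dynamic external input."""
    N = J.shape[0]
    s = np.ones(N)
    trace = np.empty((steps, N))
    for t in range(steps):
        field = J @ s + h[t]
        p_up = 1.0 / (1.0 + np.exp(-2.0 * field))
        s = np.where(rng.random(N) < p_up, 1.0, -1.0)
        trace[t] = s
    return trace

N, T = 20, 200
J = rng.standard_normal((N, N)) / np.sqrt(N)       # asymmetric couplings
stimulus = 0.5 * np.sin(2.0 * np.pi * np.arange(T) / 50.0)
h = np.tile(stimulus[:, None], (1, N))             # shared dynamic input
trace = simulate_kinetic_ising(J, h, T)
```

Because `J` need not be symmetric, the dynamics do not obey detailed balance, which is what makes entropy production a meaningful quantity in this model class.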
39
Kline AG, Palmer SE. Gaussian Information Bottleneck and the Non-Perturbative Renormalization Group. NEW JOURNAL OF PHYSICS 2022; 24:033007. [PMID: 35368649 PMCID: PMC8967309 DOI: 10.1088/1367-2630/ac395d] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
The renormalization group (RG) is a class of theoretical techniques used to explain the collective physics of interacting, many-body systems. It has been suggested that the RG formalism may be useful in finding and interpreting emergent low-dimensional structure in complex systems outside of the traditional physics context, such as in biology or computer science. In such contexts, one common dimensionality-reduction framework already in use is information bottleneck (IB), in which the goal is to compress an "input" signal X while maximizing its mutual information with some stochastic "relevance" variable Y. IB has been applied in the vertebrate and invertebrate processing systems to characterize optimal encoding of the future motion of the external world. Other recent work has shown that the RG scheme for the dimer model could be "discovered" by a neural network attempting to solve an IB-like problem. This manuscript explores whether IB and any existing formulation of RG are formally equivalent. A class of soft-cutoff non-perturbative RG techniques are defined by families of non-deterministic coarsening maps, and hence can be formally mapped onto IB, and vice versa. For concreteness, this discussion is limited entirely to Gaussian statistics (GIB), for which IB has exact, closed-form solutions. Under this constraint, GIB has a semigroup structure, in which successive transformations remain IB-optimal. Further, the RG cutoff scheme associated with GIB can be identified. Our results suggest that IB can be used to impose a notion of "large scale" structure, such as biological function, on an RG procedure.
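The closed-form Gaussian IB solution mentioned above (Chechik et al.) projects onto eigenvectors of the matrix Σ_{x|y} Σ_x⁻¹, with a component entering the compressed representation once its eigenvalue λ satisfies λ < 1 − 1/β. A small numpy sketch on synthetic jointly Gaussian data (the model and β value are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Jointly Gaussian input X (4-d) and relevance variable Y (2-d): Y = A X + noise.
n = 10_000
X = rng.standard_normal((n, 4))
A = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
Y = X @ A.T + 0.1 * rng.standard_normal((n, 2))

Sx = np.cov(X.T)
Sy = np.cov(Y.T)
Sxy = (X - X.mean(0)).T @ (Y - Y.mean(0)) / (n - 1)
Sx_given_y = Sx - Sxy @ np.linalg.solve(Sy, Sxy.T)   # conditional covariance

# GIB eigenproblem: eigenvalues of Sigma_{x|y} Sigma_x^{-1} near 0 mark
# directions well predicted by Y; eigenvalues near 1 mark irrelevant ones.
evals = np.real(np.linalg.eigvals(Sx_given_y @ np.linalg.inv(Sx)))
beta = 10.0
kept = evals[evals < 1.0 - 1.0 / beta]   # components active at this beta
```

Sweeping β traces out the semigroup of successively coarser IB-optimal representations that the paper maps onto soft-cutoff RG transformations.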
Affiliation(s)
- Adam G Kline
- Department of Physics, The University of Chicago, Chicago, IL 60637
- Stephanie E Palmer
- Department of Organismal Biology and Anatomy and Department of Physics, The University of Chicago, Chicago, IL 60637
40
Zhou D, Lynn CW, Cui Z, Ciric R, Baum GL, Moore TM, Roalf DR, Detre JA, Gur RC, Gur RE, Satterthwaite TD, Bassett DS. Efficient coding in the economics of human brain connectomics. Netw Neurosci 2022; 6:234-274. [PMID: 36605887 PMCID: PMC9810280 DOI: 10.1162/netn_a_00223] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Accepted: 12/08/2021] [Indexed: 01/07/2023] Open
Abstract
In systems neuroscience, most models posit that brain regions communicate information under constraints of efficiency. Yet, evidence for efficient communication in structural brain networks characterized by hierarchical organization and highly connected hubs remains sparse. The principle of efficient coding proposes that the brain transmits maximal information in a metabolically economical or compressed form to improve future behavior. To determine how structural connectivity supports efficient coding, we develop a theory specifying minimum rates of message transmission between brain regions to achieve an expected fidelity, and we test five predictions from the theory based on random walk communication dynamics. In doing so, we introduce the metric of compression efficiency, which quantifies the trade-off between lossy compression and transmission fidelity in structural networks. In a large sample of youth (n = 1,042; age 8-23 years), we analyze structural networks derived from diffusion-weighted imaging and metabolic expenditure operationalized using cerebral blood flow. We show that structural networks strike compression efficiency trade-offs consistent with theoretical predictions. We find that compression efficiency prioritizes fidelity with development, heightens when metabolic resources and myelination guide communication, explains advantages of hierarchical organization, links higher input fidelity to disproportionate areal expansion, and shows that hubs integrate information by lossy compression. Lastly, compression efficiency is predictive of behavior-beyond the conventional network efficiency metric-for cognitive domains including executive function, memory, complex reasoning, and social cognition. Our findings elucidate how macroscale connectivity supports efficient coding and serve to foreground communication processes that utilize random walk dynamics constrained by network connectivity.
Affiliation(s)
- Dale Zhou
- Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Christopher W. Lynn
- Initiative for the Theoretical Sciences, Graduate Center, City University of New York, New York, NY, USA; Joseph Henry Laboratories of Physics, Princeton University, Princeton, NJ, USA
- Zaixu Cui
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Rastko Ciric
- Department of Bioengineering, Schools of Engineering and Medicine, Stanford University, Stanford, CA, USA
- Graham L. Baum
- Department of Psychology and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Tyler M. Moore
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Penn-Children’s Hospital of Philadelphia Lifespan Brain Institute, Philadelphia, PA, USA
- David R. Roalf
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- John A. Detre
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Ruben C. Gur
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Penn-Children’s Hospital of Philadelphia Lifespan Brain Institute, Philadelphia, PA, USA
- Raquel E. Gur
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Penn-Children’s Hospital of Philadelphia Lifespan Brain Institute, Philadelphia, PA, USA
- Theodore D. Satterthwaite
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Penn-Children’s Hospital of Philadelphia Lifespan Brain Institute, Philadelphia, PA, USA
- Dani S. Bassett
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Physics & Astronomy, College of Arts and Sciences, University of Pennsylvania, Philadelphia, PA, USA; Department of Bioengineering, School of Engineering and Applied Sciences, University of Pennsylvania, Philadelphia, PA, USA; Department of Electrical & Systems Engineering, School of Engineering and Applied Sciences, University of Pennsylvania, Philadelphia, PA, USA; Santa Fe Institute, Santa Fe, NM, USA (Corresponding Author)
41
Voina D, Recanatesi S, Hu B, Shea-Brown E, Mihalas S. Single Circuit in V1 Capable of Switching Contexts during Movement Using an Inhibitory Population as a Switch. Neural Comput 2022; 34:541-594. [PMID: 35016220] [DOI: 10.1162/neco_a_01472]
Abstract
As animals adapt to their environments, their brains are tasked with processing stimuli in different sensory contexts. Whether these computations are context dependent or independent, they are all implemented in the same neural tissue. A crucial question is what neural architectures can respond flexibly to a range of stimulus conditions and switch between them. This is a particular case of flexible architecture that permits multiple related computations within a single circuit. Here, we address this question in the specific case of the visual system circuitry, focusing on context integration, defined as the integration of feedforward and surround information across visual space. We show that a biologically inspired microcircuit with multiple inhibitory cell types can switch between visual processing of the static context and the moving context. In our model, the VIP population acts as the switch and modulates the visual circuit through a disinhibitory motif. Moreover, the VIP population is efficient, requiring only a relatively small number of neurons to switch contexts. This circuit eliminates noise in videos by using appropriate lateral connections for contextual spatiotemporal surround modulation, having superior denoising performance compared to circuits where only one context is learned. Our findings shed light on a minimally complex architecture that is capable of switching between two naturalistic contexts using few switching units.
Affiliation(s)
- Doris Voina
- Applied Mathematics, University of Washington, Seattle, WA 98195, U.S.A.
- Stefano Recanatesi
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195, U.S.A.
- Brian Hu
- Allen Institute for Brain Science, Seattle, WA 98109, U.S.A.
- Eric Shea-Brown
- Applied Mathematics, University of Washington, Seattle, WA 98195, U.S.A., and Allen Institute for Brain Science, Seattle, WA 98109, U.S.A.
- Stefan Mihalas
- Applied Mathematics, University of Washington, Seattle, WA 98195, U.S.A., and Allen Institute for Brain Science, Seattle, WA 98109, U.S.A.
42
Predictive encoding of motion begins in the primate retina. Nat Neurosci 2021; 24:1280-1291. [PMID: 34341586] [PMCID: PMC8728393] [DOI: 10.1038/s41593-021-00899-1]
Abstract
Predictive motion encoding is an important aspect of visually guided behavior that allows animals to estimate the trajectory of moving objects. Motion prediction is understood primarily in the context of translational motion, but the environment contains other types of behaviorally salient motion correlation such as those produced by approaching or receding objects. However, the neural mechanisms that detect and predictively encode these correlations remain unclear. We report here that four of the parallel output pathways in the primate retina encode predictive motion information, and this encoding occurs for several classes of spatiotemporal correlation that are found in natural vision. Such predictive coding can be explained by known nonlinear circuit mechanisms that produce a nearly optimal encoding, with transmitted information approaching the theoretical limit imposed by the stimulus itself. Thus, these neural circuit mechanisms efficiently separate predictive information from nonpredictive information during the encoding process.
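The idea that a code carries "predictive information", information the present response holds about the future stimulus, can be made concrete with a toy estimator (illustrative only; the signals, bin count, and lag below are invented, and histogram estimators of mutual information are biased for small samples):

```python
import numpy as np

def predictive_information(x, lag=1, bins=8):
    """Empirical mutual information (bits) between a signal and its own
    future, I(x_t ; x_{t+lag}), from a histogram estimate of the joint."""
    past, future = x[:-lag], x[lag:]
    joint, _, _ = np.histogram2d(past, future, bins=bins)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (px * py)[mask])).sum())

# A slowly drifting (temporally correlated) signal carries substantial
# predictive information; white noise carries essentially none.
rng = np.random.default_rng(0)
noise = rng.standard_normal(20000)
drift = np.convolve(noise, np.ones(20) / 20, mode="valid")
```

`predictive_information(drift)` comes out well above `predictive_information(noise)`, which is the basic quantity such studies compare against the limit set by the stimulus itself.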
43
Niemeyer N, Schleimer JH, Schreiber S. Biophysical models of intrinsic homeostasis: Firing rates and beyond. Curr Opin Neurobiol 2021; 70:81-88. [PMID: 34454303] [DOI: 10.1016/j.conb.2021.07.011]
Abstract
In view of ever-changing conditions both in the external world and in intrinsic brain states, maintaining the robustness of computations poses a challenge, adequate solutions to which we are only beginning to understand. At the level of cell-intrinsic properties, biophysical models of neurons permit one to identify relevant physiological substrates that can serve as regulators of neuronal excitability and to test how feedback loops can stabilize crucial variables such as long-term calcium levels and firing rates. Mathematical theory has also revealed a rich set of complementary computational properties arising from distinct cellular dynamics and even shaping processing at the network level. Here, we provide an overview over recently explored homeostatic mechanisms derived from biophysical models and hypothesize how multiple dynamical characteristics of cells, including their intrinsic neuronal excitability classes, can be stably controlled.
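The kind of feedback loop described here can be caricatured in a few lines (a deliberately minimal integral-control sketch; the saturating rate curve, set point, and time constants are invented for illustration, not taken from any cited model):

```python
def homeostatic_regulation(rate_of, g0=0.1, target=5.0, tau=50.0,
                           dt=0.1, t_max=2000.0):
    """Integral-control sketch of intrinsic homeostasis: a maximal
    conductance g is slowly adjusted until the cell's firing rate
    (standing in for a long-term calcium readout) hits a set point."""
    g = g0
    for _ in range(int(t_max / dt)):
        error = target - rate_of(g)   # deviation from the set point
        g += dt * error / tau         # slow integral feedback on g
        g = max(g, 0.0)               # conductances stay nonnegative
    return g

# Toy monotonic rate-vs-conductance curve, saturating at 20 Hz.
rate = lambda g: 20.0 * g / (1.0 + g)
g_final = homeostatic_regulation(rate)   # settles where rate(g) = 5 Hz
```

Because the rate curve is monotonic, the loop has a single stable fixed point (here g = 1/3, where the toy curve crosses 5 Hz); the slow time constant tau is what keeps regulation from interfering with fast voltage dynamics.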
Affiliation(s)
- Nelson Niemeyer
- Institute for Theoretical Biology, Humboldt-Universität zu Berlin, 10115 Berlin, Germany; Einstein Center for Neurosciences Berlin, Charitéplatz 1, 10117 Berlin, Germany; Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany
- Jan-Hendrik Schleimer
- Institute for Theoretical Biology, Humboldt-Universität zu Berlin, 10115 Berlin, Germany; Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany
- Susanne Schreiber
- Institute for Theoretical Biology, Humboldt-Universität zu Berlin, 10115 Berlin, Germany; Einstein Center for Neurosciences Berlin, Charitéplatz 1, 10117 Berlin, Germany; Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany
44
Dora S, Bohte SM, Pennartz CMA. Deep Gated Hebbian Predictive Coding Accounts for Emergence of Complex Neural Response Properties Along the Visual Cortical Hierarchy. Front Comput Neurosci 2021; 15:666131. [PMID: 34393744] [PMCID: PMC8355371] [DOI: 10.3389/fncom.2021.666131]
Abstract
Predictive coding provides a computational paradigm for modeling perceptual processing as the construction of representations accounting for causes of sensory inputs. Here, we developed a scalable, deep network architecture for predictive coding that is trained using a gated Hebbian learning rule and mimics the feedforward and feedback connectivity of the cortex. After training on image datasets, the models formed latent representations in higher areas that allowed reconstruction of the original images. We analyzed low- and high-level properties such as orientation selectivity, object selectivity and sparseness of neuronal populations in the model. As reported experimentally, image selectivity increased systematically across ascending areas in the model hierarchy. Depending on the strength of regularization factors, sparseness also increased from lower to higher areas. The results suggest a rationale as to why experimental results on sparseness across the cortical hierarchy have been inconsistent. Finally, representations for different object classes became more distinguishable from lower to higher areas. Thus, deep neural networks trained using a gated Hebbian formulation of predictive coding can reproduce several properties associated with neuronal responses along the visual cortical hierarchy.
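A single predictive coding layer of the kind this model builds on can be sketched in a few lines (a minimal Rao-Ballard-style reduction, not the paper's deep gated network; the dimensions and learning rates are arbitrary): the top-down prediction is compared to the input, and the resulting error drives both inference on the latent state and a Hebbian-like weight update.

```python
import numpy as np

def predictive_coding_step(x, r, W, lr_r=0.1, lr_w=0.01):
    """One combined inference/learning step of a minimal predictive
    coding layer: the higher layer predicts the input as W @ r, and the
    prediction error updates both the latent r and the weights W."""
    err = x - W @ r                    # prediction error at the input
    r = r + lr_r * (W.T @ err)         # inference: latent reduces error
    W = W + lr_w * np.outer(err, r)    # learning: error x activity (Hebbian)
    return r, W, err

rng = np.random.default_rng(1)
x = rng.random(8)                      # a fixed "image" patch
r = np.zeros(3)                        # latent causes
W = 0.1 * rng.standard_normal((8, 3))  # generative (feedback) weights
errs = []
for _ in range(500):
    r, W, err = predictive_coding_step(x, r, W)
    errs.append(np.linalg.norm(err))
```

Both updates descend the same reconstruction error, so the error norm shrinks as the latent representation comes to account for the input, which is the sense in which higher areas "reconstruct" images in such models.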
Affiliation(s)
- Shirin Dora
- Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, Netherlands; Intelligent Systems Research Centre, Ulster University, Londonderry, United Kingdom
- Sander M Bohte
- Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, Netherlands; Machine Learning Group, Centre of Mathematics and Computer Science, Amsterdam, Netherlands
- Cyriel M A Pennartz
- Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, Netherlands
45
Abstract
In addition to the role that our visual system plays in determining what we are seeing right now, visual computations contribute in important ways to predicting what we will see next. While the role of memory in creating future predictions is often overlooked, efficient predictive computation requires the use of information about the past to estimate future events. In this article, we introduce a framework for understanding the relationship between memory and visual prediction and review the two classes of mechanisms that the visual system relies on to create future predictions. We also discuss the principles that define the mapping from predictive computations to predictive mechanisms and how downstream brain areas interpret the predictive signals computed by the visual system. Expected final online publication date for the Annual Review of Vision Science, Volume 7 is September 2021. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Affiliation(s)
- Nicole C Rust
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
- Stephanie E Palmer
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, Illinois 60637
46
Qiu Y, Zhao Z, Klindt D, Kautzky M, Szatko KP, Schaeffel F, Rifai K, Franke K, Busse L, Euler T. Natural environment statistics in the upper and lower visual field are reflected in mouse retinal specializations. Curr Biol 2021; 31:3233-3247.e6. [PMID: 34107304] [DOI: 10.1016/j.cub.2021.05.017]
Abstract
Pressures for survival make sensory circuits adapted to a species' natural habitat and its behavioral challenges. Thus, to advance our understanding of the visual system, it is essential to consider an animal's specific visual environment by capturing natural scenes, characterizing their statistical regularities, and using them to probe visual computations. Mice, a prominent visual system model, have salient visual specializations, being dichromatic with enhanced sensitivity to green and UV in the dorsal and ventral retina, respectively. However, the characteristics of their visual environment that likely have driven these adaptations are rarely considered. Here, we built a UV-green-sensitive camera to record footage from mouse habitats. This footage is publicly available as a resource for mouse vision research. We found chromatic contrast to greatly diverge in the upper, but not the lower, visual field. Moreover, training a convolutional autoencoder on upper, but not lower, visual field scenes was sufficient for the emergence of color-opponent filters, suggesting that this environmental difference might have driven superior chromatic opponency in the ventral mouse retina, supporting color discrimination in the upper visual field. Furthermore, the upper visual field was biased toward dark UV contrasts, paralleled by more light-offset-sensitive ganglion cells in the ventral retina. Finally, footage recorded at twilight suggests that UV promotes aerial predator detection. Our findings support that natural scene statistics shaped early visual processing in evolution.
Affiliation(s)
- Yongrong Qiu
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Graduate Training Centre of Neuroscience (GTC), International Max Planck Research School, University of Tübingen, 72076 Tübingen, Germany
- Zhijian Zhao
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany
- David Klindt
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Graduate Training Centre of Neuroscience (GTC), International Max Planck Research School, University of Tübingen, 72076 Tübingen, Germany
- Magdalena Kautzky
- Division of Neurobiology, Faculty of Biology, LMU Munich, 82152 Planegg-Martinsried, Germany; Graduate School of Systemic Neurosciences (GSN), LMU Munich, 82152 Planegg-Martinsried, Germany
- Klaudia P Szatko
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Graduate Training Centre of Neuroscience (GTC), International Max Planck Research School, University of Tübingen, 72076 Tübingen, Germany; Bernstein Centre for Computational Neuroscience, 72076 Tübingen, Germany
- Frank Schaeffel
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany
- Katharina Rifai
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Carl Zeiss Vision International GmbH, 73430 Aalen, Germany
- Katrin Franke
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Bernstein Centre for Computational Neuroscience, 72076 Tübingen, Germany
- Laura Busse
- Division of Neurobiology, Faculty of Biology, LMU Munich, 82152 Planegg-Martinsried, Germany; Bernstein Centre for Computational Neuroscience, 82152 Planegg-Martinsried, Germany
- Thomas Euler
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Bernstein Centre for Computational Neuroscience, 72076 Tübingen, Germany
47
Maximally efficient prediction in the early fly visual system may support evasive flight maneuvers. PLoS Comput Biol 2021; 17:e1008965. [PMID: 34014926] [PMCID: PMC8136689] [DOI: 10.1371/journal.pcbi.1008965]
Abstract
The visual system must make predictions to compensate for inherent delays in its processing. Yet little is known, mechanistically, about how prediction aids natural behaviors. Here, we show that despite a 20-30 ms intrinsic processing delay, the vertical motion sensitive (VS) network of the blowfly achieves maximally efficient prediction. This prediction enables the fly to fine-tune its complex, yet brief, evasive flight maneuvers according to its initial ego-rotation at the time of detection of the visual threat. Combining a rich database of behavioral recordings with detailed compartmental modeling of the VS network, we further show that the VS network has axonal gap junctions that are critical for optimal prediction. During evasive maneuvers, a VS subpopulation that directly innervates the neck motor center can convey predictive information about the fly’s future ego-rotation, potentially crucial for ongoing flight control. These results suggest a novel sensory-motor pathway that links sensory prediction to behavior. Survival-critical behaviors shape neural circuits to translate sensory information into strikingly fast predictions, e.g., escaping a predator on a timescale shorter than the system’s processing delay. We show that the fly visual system implements fast and accurate prediction of its visual experience. This provides crucial information for directing fast evasive maneuvers that unfold over just 40 ms. Our work shows how this fast prediction is implemented, mechanistically, and suggests the existence of a novel sensory-motor pathway from the fly visual system to a wing steering motor neuron. Echoing and amplifying previous work in the retina, our work hypothesizes that the efficient encoding of predictive information is a universal design principle supporting fast, natural behaviors.
48
Chalk M, Tkacik G, Marre O. Inferring the function performed by a recurrent neural network. PLoS One 2021; 16:e0248940. [PMID: 33857170] [PMCID: PMC8049287] [DOI: 10.1371/journal.pone.0248940]
Abstract
A central goal in systems neuroscience is to understand the functions performed by neural circuits. Previous top-down models addressed this question by comparing the behaviour of an ideal model circuit, optimised to perform a given function, with neural recordings. However, this requires guessing in advance what function is being performed, which may not be possible for many neural systems. To address this, we propose an inverse reinforcement learning (RL) framework for inferring the function performed by a neural network from data. We assume that the responses of each neuron in a network are optimised so as to drive the network towards 'rewarded' states, that are desirable for performing a given function. We then show how one can use inverse RL to infer the reward function optimised by the network from observing its responses. This inferred reward function can be used to predict how the neural network should adapt its dynamics to perform the same function when the external environment or network structure changes. This could lead to theoretical predictions about how neural network dynamics adapt to deal with cell death and/or varying sensory stimulus statistics.
Affiliation(s)
- Matthew Chalk
- Institut de la Vision, INSERM, CNRS, Sorbonne Université, Paris, France
- Olivier Marre
- Institut de la Vision, INSERM, CNRS, Sorbonne Université, Paris, France
49
Statistical analysis and optimality of neural systems. Neuron 2021; 109:1227-1241.e5. [DOI: 10.1016/j.neuron.2021.01.020]
50
Sachdeva V, Mora T, Walczak AM, Palmer SE. Optimal prediction with resource constraints using the information bottleneck. PLoS Comput Biol 2021; 17:e1008743. [PMID: 33684112] [PMCID: PMC7971903] [DOI: 10.1371/journal.pcbi.1008743]
Abstract
Responding to stimuli requires that organisms encode information about the external world. Not all parts of the input are important for behavior, and resource limitations demand that signals be compressed. Prediction of the future input is widely beneficial in many biological systems. We compute the trade-offs between representing the past faithfully and predicting the future using the information bottleneck approach, for input dynamics with different levels of complexity. For motion prediction, we show that, depending on the parameters in the input dynamics, velocity or position information is more useful for accurate prediction. We show which motion representations are easiest to re-use for accurate prediction in other motion contexts, and identify and quantify those with the highest transferability. For non-Markovian dynamics, we explore the role of long-term memory in shaping the internal representation. Lastly, we show that prediction in evolutionary population dynamics is linked to clustering allele frequencies into non-overlapping memories.
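The information bottleneck trade-off named here can be computed for small discrete problems with the standard self-consistent iteration (a sketch of the general Tishby-style method, not the authors' implementation; the toy joint distribution below is invented for illustration):

```python
import numpy as np

def mutual_info(p_joint):
    """Mutual information (bits) of a 2-D joint probability table."""
    px = p_joint.sum(axis=1, keepdims=True)
    py = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return float((p_joint[mask] * np.log2(p_joint[mask] / (px * py)[mask])).sum())

def information_bottleneck(p_xy, beta, n_t, n_iter=300, seed=0):
    """Blahut-Arimoto-style iteration for the discrete information
    bottleneck: find a soft encoder q(t|x) compressing the past X while
    preserving information about the future Y, at trade-off slope beta."""
    rng = np.random.default_rng(seed)
    p_x = p_xy.sum(axis=1)
    p_y_x = p_xy / p_x[:, None]                    # p(y|x)
    q = rng.random((len(p_x), n_t))
    q /= q.sum(axis=1, keepdims=True)              # random initial encoder
    for _ in range(n_iter):
        p_t = p_x @ q                              # marginal p(t)
        p_ty = (q * p_x[:, None]).T @ p_y_x        # joint p(t, y)
        p_y_t = p_ty / p_t[:, None]                # decoder p(y|t)
        # KL(p(y|x) || p(y|t)) for every (x, t) pair
        kl = (p_y_x[:, None, :] *
              np.log(p_y_x[:, None, :] / p_y_t[None, :, :])).sum(axis=2)
        q = p_t[None, :] * np.exp(-beta * kl)      # IB update rule
        q /= q.sum(axis=1, keepdims=True)
    return q

# Toy joint over a 4-state past X and a binary future Y (all entries
# positive so that the KL terms stay finite).
p_xy = np.array([[0.30, 0.05],
                 [0.25, 0.05],
                 [0.02, 0.13],
                 [0.05, 0.15]])
q = information_bottleneck(p_xy, beta=5.0, n_t=2)
```

Sweeping beta traces out the trade-off curve between representing the past faithfully (high I(X;T)) and predicting the future (high I(T;Y)); by the data-processing inequality, the compressed variable can never carry more information about Y than X itself.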
Affiliation(s)
- Vedant Sachdeva
- Graduate Program in Biophysical Sciences, University of Chicago, Chicago, Illinois, United States of America
- Thierry Mora
- Laboratoire de physique de l’École normale supérieure, Centre National de la Recherche Scientifique, Paris, France; Paris Sciences et Lettres University, Paris, France; Sorbonne Université, Paris, France; Université de Paris, Paris, France
- Aleksandra M. Walczak
- Laboratoire de physique de l’École normale supérieure, Centre National de la Recherche Scientifique, Paris, France; Paris Sciences et Lettres University, Paris, France; Sorbonne Université, Paris, France; Université de Paris, Paris, France
- Stephanie E. Palmer
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, Illinois, United States of America; Department of Physics, University of Chicago, Chicago, Illinois, United States of America