1
Blanco Malerba S, Micheli A, Woodford M, Azeredo da Silveira R. Jointly efficient encoding and decoding in neural populations. PLoS Comput Biol 2024; 20:e1012240. PMID: 38985828; DOI: 10.1371/journal.pcbi.1012240
Abstract
The efficient coding approach proposes that neural systems represent as much sensory information as biological constraints allow; it aims to formalize encoding as a constrained optimal process. A different approach, which aims to formalize decoding, proposes that neural systems instantiate a generative model of the sensory world. Here, we put forth a normative framework that characterizes neural systems as jointly optimizing encoding and decoding. It takes the form of a variational autoencoder: sensory stimuli are encoded in the noisy activity of neurons to be interpreted by a flexible decoder; encoding must allow for an accurate stimulus reconstruction from neural activity. Jointly, neural activity is required to represent the statistics of latent features which are mapped by the decoder into distributions over sensory stimuli; decoding correspondingly optimizes the accuracy of the generative model. This framework yields a family of encoding-decoding models, which result in equally accurate generative models, indexed by a measure of the stimulus-induced deviation of neural activity from the marginal distribution over neural activity. Each member of this family predicts a specific relation between properties of the sensory neurons, such as the arrangement of the tuning curve means (preferred stimuli) and widths (degrees of selectivity) in the population, and the statistics of the sensory world. Our approach thus generalizes the efficient coding approach. Notably, here, the form of the constraint on the optimization derives from the requirement of an accurate generative model, whereas it is arbitrary in efficient coding models. Moreover, solutions do not require knowledge of the stimulus distribution but are learned from data samples; the constraint further acts as a regularizer, allowing the model to generalize beyond the training data.
Finally, we characterize the family of models we obtain through alternative measures of performance, such as the error in stimulus reconstruction. We find that a range of models admits comparable performance; in particular, a population of sensory neurons with broad tuning curves, as observed experimentally, yields both a low stimulus reconstruction error and an accurate generative model that generalizes robustly to unseen data.
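The joint encoding-decoding optimization described above can be illustrated with a toy linear-Gaussian variational autoencoder. All parameters below (encoder weight `w_e`, noise levels `sigma_r` and `sigma_d`, a standard-normal prior over neural activity) are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
stimuli = rng.normal(0.0, 1.0, size=1000)   # toy 1D sensory world

# Hypothetical linear-Gaussian encoder: neural response r ~ N(w_e*s, sigma_r^2),
# with a standard-normal prior over r; decoder reconstructs s ~ N(w_d*r, sigma_d^2).
w_e, sigma_r = 1.0, 0.5
w_d, sigma_d = w_e / (w_e**2 + sigma_r**2), 0.4

def elbo(s):
    """One-sample Monte Carlo estimate of the evidence lower bound:
    E_q[log p(s|r)] - KL(q(r|s) || p(r))."""
    r = w_e * s + sigma_r * rng.normal()                     # sample encoder q(r|s)
    log_recon = (-0.5 * ((s - w_d * r) / sigma_d) ** 2
                 - np.log(sigma_d * np.sqrt(2 * np.pi)))     # decoder log-likelihood
    mu, var = w_e * s, sigma_r ** 2
    kl = 0.5 * (var + mu ** 2 - 1.0 - np.log(var))           # KL to the N(0,1) prior
    return log_recon - kl

mean_elbo = np.mean([elbo(s) for s in stimuli])
mean_sq_err = np.mean([(s - w_d * w_e * s) ** 2 for s in stimuli])
```

Maximizing the mean ELBO over the encoder and decoder parameters is the kind of joint objective the framework formalizes; the KL term plays the role of the constraint/regularizer discussed in the abstract.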
Affiliation(s)
- Simone Blanco Malerba
- Laboratoire de Physique de l'Ecole Normale Supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université de Paris, Paris, France
- Institute for Neural Information Processing, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Aurora Micheli
- Laboratoire de Physique de l'Ecole Normale Supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université de Paris, Paris, France
- Michael Woodford
- Department of Economics, Columbia University, New York, New York, United States of America
- Rava Azeredo da Silveira
- Laboratoire de Physique de l'Ecole Normale Supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université de Paris, Paris, France
- Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland
- Faculty of Science, University of Basel, Basel, Switzerland
2
Kessler F, Frankenstein J, Rothkopf CA. Human navigation strategies and their errors result from dynamic interactions of spatial uncertainties. Nat Commun 2024; 15:5677. PMID: 38971789; PMCID: PMC11227593; DOI: 10.1038/s41467-024-49722-y
Abstract
Goal-directed navigation requires continuously integrating uncertain self-motion and landmark cues into an internal sense of location and direction, concurrently planning future paths, and sequentially executing motor actions. Here, we provide a unified account of these processes with a computational model of probabilistic path planning in the framework of optimal feedback control under uncertainty. This model gives rise to diverse human navigational strategies previously believed to be distinct behaviors, and it quantitatively predicts both the errors and the variability of navigation across numerous experiments. It furthermore explains how sequential egocentric landmark observations form an uncertain allocentric cognitive map and how this internal map is used both in route planning and during the execution of movements, and it reconciles seemingly contradictory results about cue-integration behavior in navigation. Taken together, the present work provides a parsimonious explanation of how patterns of human goal-directed navigation behavior arise from the continuous and dynamic interactions of spatial uncertainties in perception, cognition, and action.
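As a concrete, much-reduced illustration of the kind of uncertainty-weighted computation such a model performs, the sketch below fuses a drifting path-integration estimate with a landmark observation by precision weighting; the function name and all numbers are invented for illustration.

```python
import numpy as np

def fuse(mu_pi, var_pi, mu_lm, var_lm):
    """Bayes-optimal fusion of two independent Gaussian position estimates:
    path integration (pi) and a landmark cue (lm). Each estimate is weighted
    by its precision (inverse variance)."""
    precision = 1.0 / var_pi + 1.0 / var_lm
    mu = (mu_pi / var_pi + mu_lm / var_lm) / precision
    return mu, 1.0 / precision

# Path integration has drifted (broad); the landmark cue is sharp:
mu, var = fuse(mu_pi=2.0, var_pi=1.0, mu_lm=1.0, var_lm=0.25)
# The fused estimate sits nearer the reliable cue and is more certain than either.
```

Which cue dominates depends only on the relative uncertainties, which is why dynamic changes in those uncertainties can make the same observer look like it is following qualitatively different "strategies".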
Affiliation(s)
- Fabian Kessler
- Centre for Cognitive Science & Institute of Psychology, Technical University of Darmstadt, Darmstadt, Germany
- Julia Frankenstein
- Centre for Cognitive Science & Institute of Psychology, Technical University of Darmstadt, Darmstadt, Germany
- Constantin A Rothkopf
- Centre for Cognitive Science & Institute of Psychology, Technical University of Darmstadt, Darmstadt, Germany
- Frankfurt Institute for Advanced Studies, Goethe University, Frankfurt, Germany
3
Redman WT, Acosta-Mendoza S, Wei XX, Goard MJ. Robust variability of grid cell properties within individual grid modules enhances encoding of local space. bioRxiv [Preprint] 2024:2024.02.27.582373. PMID: 38915504; PMCID: PMC11195105; DOI: 10.1101/2024.02.27.582373
Abstract
Although grid cells are one of the most well-studied functional classes of neurons in the mammalian brain, the assumption that there is a single grid orientation and spacing per grid module has not been carefully tested. We analyze a recent large-scale recording of medial entorhinal cortex to characterize the presence and degree of heterogeneity of grid properties within individual modules. We find evidence for small, but robust, variability and hypothesize that this property of the grid code could enhance the encoding of local spatial information. Performing analysis on synthetic populations of grid cells, in which we have complete control over the amount of heterogeneity in grid properties, we demonstrate that variability of a magnitude similar to that in the analyzed data leads to significantly decreased decoding error, even when restricted to activity from a single module. Our results highlight how heterogeneity of neural response properties may benefit coding and open new directions for theoretical and experimental analysis of grid cells.
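The synthetic-population logic can be sketched in one dimension: neurons in a single "module" share a tuning period, with or without jitter, and a maximum-likelihood decoder reads out position over several periods. The tuning model and every parameter below are illustrative choices (the jitter is exaggerated relative to the small variability reported), not the authors' analysis code.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, lam0 = 60, 4.0, 1.0      # neurons, track length (4 grid periods), module period

def decode_error(jitter, trials=200):
    """Mean absolute ML-decoding error for one module of 1D periodic tuning
    curves whose periods are jittered multiplicatively around lam0."""
    lams = lam0 * (1.0 + jitter * rng.standard_normal(N))
    phases = rng.uniform(0.0, 1.0, N)
    xs = np.linspace(0.0, L, 400)                       # candidate positions
    tuning = np.exp(3.0 * np.cos(2 * np.pi * (xs[:, None] / lams - phases)))
    errs = []
    for _ in range(trials):
        x_true = rng.uniform(0.0, L)
        rates = np.exp(3.0 * np.cos(2 * np.pi * (x_true / lams - phases)))
        spikes = rng.poisson(rates)                     # Poisson spiking noise
        loglik = spikes @ np.log(tuning).T - tuning.sum(axis=1)
        errs.append(abs(xs[np.argmax(loglik)] - x_true))
    return float(np.mean(errs))

err_homog = decode_error(jitter=0.0)     # identical periods: aliased beyond one period
err_heterog = decode_error(jitter=0.1)   # jittered periods break the aliasing
```

With identical periods, positions one period apart produce identical population activity, so decoding over a track longer than one period is ambiguous; heterogeneous periods break that degeneracy and shrink the error.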
Affiliation(s)
- William T Redman
- Interdepartmental Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara
- Intelligent Systems Center, Johns Hopkins University Applied Physics Lab
- Santiago Acosta-Mendoza
- Interdepartmental Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara
- Xue-Xin Wei
- Department of Neuroscience, The University of Texas at Austin
- Department of Psychology, The University of Texas at Austin
- Center for Perceptual Systems, The University of Texas at Austin
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin
- Michael J Goard
- Department of Psychological and Brain Sciences, University of California, Santa Barbara
- Department of Molecular, Cellular, and Developmental Biology, University of California, Santa Barbara
- Neuroscience Research Institute, University of California, Santa Barbara
4
Neupane S, Fiete I, Jazayeri M. Mental navigation in the primate entorhinal cortex. Nature 2024; 630:704-711. PMID: 38867051; PMCID: PMC11224022; DOI: 10.1038/s41586-024-07557-z
Abstract
A cognitive map is a suitably structured representation that enables novel computations using previous experience; for example, planning a new route in a familiar space1. Work in mammals has found direct evidence for such representations in the presence of exogenous sensory inputs in both spatial2,3 and non-spatial domains4-10. Here we tested a foundational postulate of the original cognitive map theory1,11: that cognitive maps support endogenous computations without external input. We recorded from the entorhinal cortex of monkeys in a mental navigation task that required the monkeys to use a joystick to produce one-dimensional vectors between pairs of visual landmarks without seeing the intermediate landmarks. The ability of the monkeys to perform the task and generalize to new pairs indicated that they relied on a structured representation of the landmarks. Task-modulated neurons exhibited periodicity and ramping that matched the temporal structure of the landmarks and showed signatures of continuous attractor networks12,13. A continuous attractor network model of path integration14 augmented with a Hebbian-like learning mechanism provided an explanation of how the system could endogenously recall landmarks. The model also made an unexpected prediction that endogenous landmarks transiently slow path integration, reset the dynamics and thereby reduce variability. This prediction was borne out in a reanalysis of firing rate variability and behaviour. Our findings link the structured patterns of activity in the entorhinal cortex to the endogenous recruitment of a cognitive map during mental navigation.
Affiliation(s)
- Sujaya Neupane
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Ila Fiete
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Mehrdad Jazayeri
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
5
Clark H, Nolan MF. Task-anchored grid cell firing is selectively associated with successful path integration-dependent behaviour. eLife 2024; 12:RP89356. PMID: 38546203; PMCID: PMC10977970; DOI: 10.7554/elife.89356
Abstract
Grid firing fields have been proposed as a neural substrate for spatial localisation in general, or for path integration in particular. To distinguish these possibilities, we investigated the firing of grid and non-grid cells in the mouse medial entorhinal cortex during a location memory task. We found that grid firing can either be anchored to the task environment or can encode distance travelled independently of the task reference frame. Anchoring varied between and within sessions, while the spatial firing of non-grid cells was either coherent with the grid population or stably anchored to the task environment. We took advantage of this variability in task-anchoring to evaluate whether and when the encoding of location by grid cells might contribute to behaviour. When reward location was indicated by a visual cue, performance was similar regardless of whether grid cells were task-anchored, arguing against a role for grid representations when location cues are available. By contrast, in the absence of the visual cue, performance was enhanced when grid cells were anchored to the task environment. Our results suggest that anchoring of grid cells to task reference frames selectively enhances performance when path integration is required.
Affiliation(s)
- Harry Clark
- Centre for Discovery Brain Sciences, Simons Initiative for the Developing Brain, Hugh Robson Building, University of Edinburgh, Edinburgh, United Kingdom
- Matthew F Nolan
- Centre for Discovery Brain Sciences, Simons Initiative for the Developing Brain, Hugh Robson Building, University of Edinburgh, Edinburgh, United Kingdom
6
Dabaghian Y. Grid cells, border cells, and discrete complex analysis. Front Comput Neurosci 2023; 17:1242300. PMID: 37881247; PMCID: PMC10595009; DOI: 10.3389/fncom.2023.1242300
Abstract
We propose a mechanism enabling the appearance of border cells: neurons that fire at the boundaries of navigated enclosures. The approach is based on the recent discovery of discrete complex analysis on a triangular lattice, which allows constructing discrete epitomes of complex-analytic functions and exploiting their inherent ability to attain maximal values at the boundaries of generic lattice domains. As it turns out, certain elements of the discrete-complex framework readily appear in oscillatory models of grid cells. We demonstrate that these models can be extended to produce cells that increase their activity toward the frontiers of the navigated environment. We also construct a network model of neurons with border-bound firing that conforms with the oscillatory models.
Affiliation(s)
- Yuri Dabaghian
- Department of Neurology, The University of Texas, McGovern Medical Center at Houston, Houston, TX, United States
7
Orloff MA, Boorman ED. Cognitive maps: Constructing a route with your snout. Curr Biol 2023; 33:R963-R965. PMID: 37751711; DOI: 10.1016/j.cub.2023.08.053
Abstract
Humans construct cognitive maps of the physical, imagined, and abstract world around us based on visually sampled information. A new study shows how the human brain can also use olfactory cues to form and use cognitive maps.
Affiliation(s)
- Mark A Orloff
- Center for Mind and Brain, University of California, Davis, 267 Cousteau Place, Davis, CA 95618, USA
- Erie D Boorman
- Center for Mind and Brain, University of California, Davis, 267 Cousteau Place, Davis, CA 95618, USA
- Department of Psychology, University of California, Davis, 135 Young Hall, One Shields Avenue, Davis, CA 95616, USA
8
Parra-Barrero E, Vijayabaskaran S, Seabrook E, Wiskott L, Cheng S. A map of spatial navigation for neuroscience. Neurosci Biobehav Rev 2023; 152:105200. PMID: 37178943; DOI: 10.1016/j.neubiorev.2023.105200
Abstract
Spatial navigation has received much attention from neuroscientists, leading to the identification of key brain areas and the discovery of numerous spatially selective cells. Despite this progress, our understanding of how the pieces fit together to drive behavior is generally lacking. We argue that this is partly caused by insufficient communication between behavioral and neuroscientific researchers. This has led the latter to under-appreciate the relevance and complexity of spatial behavior, and to focus too narrowly on characterizing neural representations of space, disconnected from the computations these representations are meant to enable. We therefore propose a taxonomy of navigation processes in mammals that can serve as a common framework for structuring and facilitating interdisciplinary research in the field. Using the taxonomy as a guide, we review behavioral and neural studies of spatial navigation. In doing so, we validate the taxonomy and showcase its usefulness in identifying potential issues with common experimental approaches, designing experiments that adequately target particular behaviors, correctly interpreting neural activity, and pointing to new avenues of research.
Affiliation(s)
- Eloy Parra-Barrero
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, Bochum, Germany
- International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
- Sandhiya Vijayabaskaran
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, Bochum, Germany
- Eddie Seabrook
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, Bochum, Germany
- Laurenz Wiskott
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, Bochum, Germany
- International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
- Sen Cheng
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, Bochum, Germany
- International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
9
Dabaghian Y. Grid Cells, Border Cells and Discrete Complex Analysis. bioRxiv [Preprint] 2023:2023.05.06.539720. PMID: 37214803; PMCID: PMC10197584; DOI: 10.1101/2023.05.06.539720
Abstract
We propose a mechanism enabling the appearance of border cells: neurons that fire at the boundaries of navigated enclosures. The approach is based on the recent discovery of discrete complex analysis on a triangular lattice, which allows constructing discrete epitomes of complex-analytic functions and exploiting their inherent ability to attain maximal values at the boundaries of generic lattice domains. As it turns out, certain elements of the discrete-complex framework readily appear in oscillatory models of grid cells. We demonstrate that these models can be extended to produce cells that increase their activity towards the frontiers of the navigated environment. We also construct a network model of neurons with border-bound firing that conforms with the oscillatory models.
Affiliation(s)
- Yuri Dabaghian
- Department of Neurology, The University of Texas McGovern Medical School, 6431 Fannin St, Houston, TX 77030
10
Kang YHR, Wolpert DM, Lengyel M. Spatial uncertainty and environmental geometry in navigation. bioRxiv [Preprint] 2023:2023.01.30.526278. PMID: 36778354; PMCID: PMC9915518; DOI: 10.1101/2023.01.30.526278
Abstract
Variations in the geometry of the environment, such as the shape and size of an enclosure, have profound effects on navigational behavior and its neural underpinning. Here, we show that these effects arise as a consequence of a single, unifying principle: to navigate efficiently, the brain must maintain and update the uncertainty about one's location. We developed an image-computable Bayesian ideal observer model of navigation, continually combining noisy visual and self-motion inputs, and a neural encoding model optimized to represent the location uncertainty computed by the ideal observer. Through mathematical analysis and numerical simulations, we show that the ideal observer accounts for a diverse range of sometimes paradoxical distortions of human homing behavior in anisotropic and deformed environments, including 'boundary tethering', and its neural encoding accounts for distortions of rodent grid cell responses under identical environmental manipulations. Our results demonstrate that spatial uncertainty plays a key role in navigation.
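A drastically simplified, one-dimensional stand-in for such an ideal observer is a Kalman filter: location uncertainty grows with every noisy self-motion step and collapses when a noisy visual observation arrives. The step size, noise levels, and observation schedule below are arbitrary illustrative values, not the paper's image-computable model.

```python
import numpy as np

def navigate(steps, q, r_vis, see_every):
    """1D Kalman filter over position: noisy self-motion (process noise q)
    grows the posterior variance; an intermittent visual observation
    (noise r_vis) shrinks it via the Kalman update."""
    rng = np.random.default_rng(2)
    x, mu, var = 0.0, 0.0, 0.0        # true position, posterior mean, variance
    history = []
    for t in range(1, steps + 1):
        v = 0.1                        # intended step
        x += v + np.sqrt(q) * rng.standard_normal()   # true motion is noisy
        mu, var = mu + v, var + q      # predict: uncertainty accumulates
        if t % see_every == 0:         # visual fix: Kalman update
            z = x + np.sqrt(r_vis) * rng.standard_normal()
            k = var / (var + r_vis)    # Kalman gain
            mu, var = mu + k * (z - mu), (1.0 - k) * var
        history.append(var)
    return np.array(history)

var_dark = navigate(50, q=0.01, r_vis=0.05, see_every=10**9)   # no visual input
var_light = navigate(50, q=0.01, r_vis=0.05, see_every=5)
```

In darkness the posterior variance grows linearly with distance travelled, whereas periodic visual fixes keep it bounded; environment geometry enters through which observations are available where, which is the mechanism the abstract invokes for boundary-dependent distortions.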
Affiliation(s)
- Yul HR Kang
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK
- Department of Biological and Experimental Psychology, Queen Mary University of London, London, UK
- Daniel M Wolpert
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Department of Neuroscience, Columbia University, New York, NY, USA
- Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK
- Center for Cognitive Computation, Department of Cognitive Science, Central European University, Budapest, Hungary
11
Khona M, Fiete IR. Attractor and integrator networks in the brain. Nat Rev Neurosci 2022; 23:744-766. DOI: 10.1038/s41583-022-00642-0
12
Wang R, Kang L. Multiple bumps can enhance robustness to noise in continuous attractor networks. PLoS Comput Biol 2022; 18:e1010547. PMID: 36215305; PMCID: PMC9584540; DOI: 10.1371/journal.pcbi.1010547
Abstract
A central function of continuous attractor networks is encoding coordinates and accurately updating their values through path integration. To do so, these networks produce localized bumps of activity that move coherently in response to velocity inputs. In the brain, continuous attractors are believed to underlie grid cells and head direction cells, which maintain periodic representations of position and orientation, respectively. These representations can be achieved with any number of activity bumps, and the consequences of having more or fewer bumps are unclear. We address this knowledge gap by constructing 1D ring attractor networks with different bump numbers and characterizing their responses to three types of noise: fluctuating inputs, spiking noise, and deviations in connectivity away from ideal attractor configurations. Across all three types, networks with more bumps experience less noise-driven deviations in bump motion. This translates to more robust encodings of linear coordinates, like position, assuming that each neuron represents a fixed length no matter the bump number. Alternatively, we consider encoding a circular coordinate, like orientation, such that the network distance between adjacent bumps always maps onto 360 degrees. Under this mapping, bump number does not significantly affect the amount of error in the coordinate readout. Our simulation results are intuitively explained and quantitatively matched by a unified theory for path integration and noise in multi-bump networks. Thus, to suppress the effects of biologically relevant noise, continuous attractor networks can employ more bumps when encoding linear coordinates; this advantage disappears when encoding circular coordinates. Our findings provide motivation for multiple bumps in the mammalian grid network.
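A minimal sketch of how a ring attractor produces a chosen number of bumps (not the paper's model): putting the k-th cosine harmonic in the recurrent connectivity makes the k-bump pattern the unstable mode, so k bumps self-organize from weak random activity. The connectivity strengths, drive, and tanh rate nonlinearity are illustrative choices.

```python
import numpy as np

def ring_attractor(n_bumps, N=180, steps=800, dt=0.1, seed=0):
    """Minimal 1D ring attractor: uniform inhibition plus excitation on the
    n_bumps-th cosine harmonic of the angular distance between neurons."""
    rng = np.random.default_rng(seed)
    theta = 2 * np.pi * np.arange(N) / N
    d = theta[:, None] - theta[None, :]
    W = (-1.0 + 4.0 * np.cos(n_bumps * d)) / N
    r = rng.uniform(0.0, 0.1, N)                  # weak random initial activity
    for _ in range(steps):
        h = W @ r + 0.5                           # recurrent input + uniform drive
        r += dt * (-r + np.tanh(np.maximum(h, 0.0)))   # bounded rate dynamics
    return theta, r

def count_bumps(r):
    above = r > 0.5 * (r.max() + r.min())              # threshold at half height
    return int(np.sum(above & ~np.roll(above, 1)))     # circular rising edges

theta, r1 = ring_attractor(1)
theta, r3 = ring_attractor(3)
```

The same network equations support one or several bumps depending only on the connectivity; coupling a velocity signal to an asymmetric copy of `W` would move the bumps coherently, which is the path-integration mechanism the abstract analyzes.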
Affiliation(s)
- Raymond Wang
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, California, United States of America
- Neural Circuits and Computations Unit, RIKEN Center for Brain Science, Wako, Saitama, Japan
- Louis Kang
- Neural Circuits and Computations Unit, RIKEN Center for Brain Science, Wako, Saitama, Japan
13
Abstract
Learning and interpreting the structure of the environment is an innate feature of biological systems, and is integral to guiding flexible behaviors for evolutionary viability. The concept of a cognitive map has emerged as one of the leading metaphors for these capacities, and unraveling the learning and neural representation of such a map has become a central focus of neuroscience. In recent years, many models have been developed to explain cellular responses in the hippocampus and other brain areas. Because it can be difficult to see how these models differ, how they relate and what each model can contribute, this Review aims to organize these models into a clear ontology. This ontology reveals parallels between existing empirical results, and implies new approaches to understand hippocampal-cortical interactions and beyond.
14
Yang C, Xiong Z, Liu J, Chao L, Chen Y. A Path Integration Approach Based on Multiscale Grid Cells for Large-Scale Navigation. IEEE Trans Cogn Dev Syst 2022. DOI: 10.1109/tcds.2021.3092609
Affiliation(s)
- Chuang Yang
- Navigation Research Center, School of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Zhi Xiong
- Navigation Research Center, School of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Jianye Liu
- Navigation Research Center, School of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Lijun Chao
- Navigation Research Center, School of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Yudi Chen
- Navigation Research Center, School of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
15
Waaga T, Agmon H, Normand VA, Nagelhus A, Gardner RJ, Moser MB, Moser EI, Burak Y. Grid-cell modules remain coordinated when neural activity is dissociated from external sensory cues. Neuron 2022; 110:1843-1856.e6. PMID: 35385698; PMCID: PMC9235855; DOI: 10.1016/j.neuron.2022.03.011
Abstract
The representation of an animal's position in the medial entorhinal cortex (MEC) is distributed across several modules of grid cells, each characterized by a distinct spatial scale. The population activity within each module is tightly coordinated and preserved across environments and behavioral states. Little is known, however, about the coordination of activity patterns across modules. We analyzed the joint activity patterns of hundreds of grid cells simultaneously recorded in animals that were foraging either in the light, when sensory cues could stabilize the representation, or in darkness, when such stabilization was disrupted. We found that the states of different modules are tightly coordinated, even in darkness, when the internal representation of position within the MEC deviates substantially from the true position of the animal. These findings suggest that internal brain mechanisms dynamically coordinate the representation of position in different modules, ensuring that they jointly encode a coherent and smooth trajectory.
- Hundreds of grid cells were recorded simultaneously from multiple grid modules
- Coordination between grid modules was assessed in rats that foraged in darkness
- Coordination persists despite relative drift of the represented versus true position
- This suggests that internal network mechanisms maintain inter-module coordination
Affiliation(s)
- Torgeir Waaga
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, Trondheim, Norway
- Haggai Agmon
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Valentin A Normand
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, Trondheim, Norway
- Anne Nagelhus
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, Trondheim, Norway
- Richard J Gardner
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, Trondheim, Norway
- May-Britt Moser
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, Trondheim, Norway
- Edvard I Moser
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, Trondheim, Norway
- Yoram Burak
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem, Israel
16
Zeng T, Si B, Li X. Entorhinal-hippocampal interactions lead to globally coherent representations of space. Curr Res Neurobiol 2022; 3:100035. PMID: 36685760; PMCID: PMC9846457; DOI: 10.1016/j.crneur.2022.100035
Abstract
The firing maps of grid cells in the entorhinal cortex are thought to provide an efficient metric system capable of supporting spatial inference in all environments. However, whether spatial representations of grid cells are determined by local environment cues or are organized into globally coherent patterns remains undetermined. We propose a navigation model containing a path integration system in the entorhinal cortex and a cognitive map system in the hippocampus. In the path integration system, grid cell network and head direction (HD) cell network integrate movement and visual information, and form attractor states to represent the positions and head directions of the animal. In the cognitive map system, a topological map is constructed capturing the attractor states of the path integration system as nodes and the transitions between attractor states as links. On loop closure, when the animal revisits a familiar place, the topological map is calibrated to minimize odometry errors. The change of the topological map is mapped back to the path integration system, to correct the states of the grid cells and the HD cells. The proposed model was tested on iRat, a rat-like miniature robot, in a realistic maze. Experimental results showed that, after familiarization of the environment, both grid cells and HD cells develop globally coherent firing maps by map calibration and activity correction. These results demonstrate that the hippocampus and the entorhinal cortex work together to form globally coherent metric representations of the environment. The underlying mechanisms of the hippocampal-entorhinal circuit in capturing the structure of the environment from sequences of experience are critical for understanding episodic memory.
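The loop-closure calibration step can be illustrated with a toy pose graph: nodes stand in for captured attractor states, edges for noisy odometry between them, and a linear least-squares relaxation distributes the accumulated error around the loop. The waypoints, noise level, and anchoring below are invented for illustration and are unrelated to the iRat implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
# Six waypoints around a rectangular loop; the final edge revisits the start.
true = np.array([[0, 0], [1, 0], [2, 0], [2, 1], [1, 1], [0, 1]], float)
n = len(true)
edges = [(i, (i + 1) % n) for i in range(n)]        # last edge = loop closure
meas = {e: true[e[1]] - true[e[0]] + 0.05 * rng.standard_normal(2) for e in edges}

# Least squares over node positions: each edge asks x_j - x_i = measured
# displacement; node 0 is softly anchored at the origin to fix the gauge.
A = np.zeros((2 * len(edges) + 2, 2 * n))
b = np.zeros(2 * len(edges) + 2)
for k, (i, j) in enumerate(edges):
    A[2 * k:2 * k + 2, 2 * j:2 * j + 2] = np.eye(2)
    A[2 * k:2 * k + 2, 2 * i:2 * i + 2] = -np.eye(2)
    b[2 * k:2 * k + 2] = meas[(i, j)]
A[-2:, 0:2] = np.eye(2)                              # anchor node 0
x = np.linalg.lstsq(A, b, rcond=None)[0].reshape(n, 2)

# Dead reckoning leaves a gap at the loop; the calibrated map closes it exactly.
gap_dead = np.linalg.norm(sum(meas[e] for e in edges))
gap_cal = np.linalg.norm(sum(x[j] - x[i] for i, j in edges))
```

Mapping the corrected node positions back onto the attractor states is the analogue of the activity-correction step described in the abstract.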
Affiliation(s)
- Taiping Zeng: International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Tokyo 113-0033, Japan; Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China; Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence, Fudan University, Ministry of Education, China
- Bailu Si (corresponding author): School of Systems Science, Beijing Normal University, Beijing, 100875, China; Peng Cheng Laboratory, Shenzhen, 518055, China
- Xiaoli Li: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China

17
Roux K, van den Heever D. Orientation Invariant Sensorimotor Object Recognition Using Cortical Grid Cells. Front Neural Circuits 2022; 15:738137. PMID: 35153678; PMCID: PMC8825787; DOI: 10.3389/fncir.2021.738137. Received 07/08/2021; accepted 12/31/2021. Open access.
Abstract
Grid cells enable efficient modeling of locations and movement through path integration. Recent work suggests that the brain might use similar mechanisms to learn the structure of objects and environments through sensorimotor processing. We extend this work with a network that supports sensor orientations relative to learned allocentric object representations. The proposed mechanism enables object representations to be learned through sensorimotor sequences, and allows these learned representations to be inferred, via path integration, from novel sensorimotor sequences produced by rotated objects. The model proposes that orientation-selective cells are present in each column of the neocortex, and provides a biologically plausible implementation that echoes experimental measurements and fits the theoretical predictions of previous studies.
Affiliation(s)
- Kalvyn Roux (corresponding author): BERG, Department of Mechanical and Mechatronic Engineering, Stellenbosch University, Stellenbosch, South Africa
- David van den Heever: BERG, Department of Mechanical and Mechatronic Engineering, Stellenbosch University, Stellenbosch, South Africa; Department of Agricultural and Biological Engineering, Mississippi State University, Starkville, MS, United States

18
Zhi Y, Cox D. Neurodegenerative damage reduces firing coherence in a continuous attractor model of grid cells. Phys Rev E 2021; 104:044414. PMID: 34781544; DOI: 10.1103/physreve.104.044414. Received 08/14/2020; accepted 08/25/2021.
Abstract
Grid cells in the dorsolateral band of the medial entorhinal cortex (dMEC) display strikingly regular periodic firing patterns on a lattice of positions in two-dimensional (2D) space, which helps animals encode relative spatial location without reference to external cues. The dMEC is damaged in the early stages of Alzheimer's disease, which reduces the synaptic density of neurons in the network and degrades patients' navigation ability. Within an established two-dimensional continuous attractor neural network model of grid cell activity, we introduce neural sheet damage parametrized by its radius and by the strength of the synaptic output of neurons in the damaged region. The mean proportionality between the grid field flow rate in the dMEC and the velocity of the model animal is maintained, but the distribution of flow rates is broadened in the damaged case. This flow-rate-to-velocity proportionality is essential for establishing coherent grid firing fields in individual grid cells of a roaming animal. We examine the coherence of the grid cell firing field by studying Bragg peaks of the Fourier-transformed lattice firing field intensity in both damaged and undamaged regions. For a wide range of damage radii and reduced synaptic strengths, undamaged model grid cells show an incoherent firing field structure with only a single central peak. In the radius-damage plane, this region is adjacent to narrow bands of striped lattices (two additional Bragg peaks), which abut an orthorhombic pattern (four additional Bragg peaks), which in turn abuts the undamaged hexagonal region (six additional Bragg peaks). Within the damaged region, grid cells show no Bragg peaks other than the central one, whose intensity is reduced with increasing damage; outside the damaged region, the central Bragg peak strength is largely unaffected. There is a reentrant region of normal grid firing fields for very large damage areas. We anticipate that the modified grid cell behavior could be observed with noninvasive functional magnetic resonance imaging (fMRI) of the dMEC.
Affiliation(s)
- Yuduo Zhi: Physics Department, University of California, Davis, California 95616, USA
- Daniel Cox: Physics Department, University of California, Davis, California 95616, USA

19
DiTullio RW, Balasubramanian V. Dynamical self-organization and efficient representation of space by grid cells. Curr Opin Neurobiol 2021; 70:206-213. PMID: 34861597; PMCID: PMC8688296; DOI: 10.1016/j.conb.2021.11.007. Received 11/02/2021; accepted 11/09/2021.
Abstract
To plan trajectories and navigate, animals must maintain a mental representation of the environment and their own position within it. This "cognitive map" is thought to be supported in part by the entorhinal cortex, where grid cells are active when an animal occupies the vertices of a scaling hierarchy of periodic lattices of locations in an enclosure. Here, we review computational developments which suggest that the grid cell network is: (a) efficient, providing required spatial resolution with a minimum number of neurons, (b) self-organizing, dynamically coordinating the structure and scale of the responses, and (c) adaptive, re-organizing in response to changes in landmarks and the structure of the boundaries of spaces. We consider these ideas in light of recent discoveries of similar structures in the mental representation of abstract spaces of shapes and smells, and in other brain areas, and highlight promising directions for future research.
Affiliation(s)
- Ronald W. DiTullio: David Rittenhouse Laboratories & Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA 19104
- Vijay Balasubramanian: David Rittenhouse Laboratories & Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA 19104

20
Eliav T, Maimon SR, Aljadeff J, Tsodyks M, Ginosar G, Las L, Ulanovsky N. Multiscale representation of very large environments in the hippocampus of flying bats. Science 2021; 372(6545):eabg4020. PMID: 34045327; DOI: 10.1126/science.abg4020. Received 01/04/2021; accepted 04/06/2021.
Abstract
Hippocampal place cells encode the animal's location. Place cells were traditionally studied in small environments, however, and little is known about coding at large, ethologically relevant spatial scales. We wirelessly recorded from hippocampal dorsal CA1 neurons of wild-born bats flying in a 200-meter-long tunnel. The size of place fields ranged from 0.6 to 32 meters. Individual place cells exhibited multiple fields and a multiscale representation: place fields of the same neuron differed by up to 20-fold in size. This multiscale coding was observed from the first day of exposure to the environment, and also in laboratory-born bats that had never experienced large environments. Theoretical decoding analysis showed that the multiscale code allows representation of very large environments with much higher precision than other codes. Together, by increasing the spatial scale, we discovered a neural code that is radically different from classical place codes.
Affiliation(s)
- Tamir Eliav: Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel
- Shir R Maimon: Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel
- Johnatan Aljadeff: Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel; Section of Neurobiology, Division of Biological Sciences, University of California, San Diego, CA 92093, USA
- Misha Tsodyks: Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel; The Simons Center for Systems Biology, Institute for Advanced Study, Princeton, NJ 08540, USA
- Gily Ginosar: Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel
- Liora Las: Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel
- Nachum Ulanovsky: Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel

21
Remapping and realignment in the human hippocampal formation predict context-dependent spatial behavior. Nat Neurosci 2021; 24:863-872. PMID: 33859438; DOI: 10.1038/s41593-021-00835-3. Received 11/25/2019; accepted 03/08/2021.
Abstract
To guide spatial behavior, the brain must retrieve memories that are appropriately associated with different navigational contexts. Contextual memory might be mediated by cell ensembles in the hippocampal formation that alter their responses to changes in context, processes known as remapping and realignment in the hippocampus and entorhinal cortex, respectively. However, whether remapping and realignment guide context-dependent spatial behavior is unclear. To address this issue, human participants learned object-location associations within two distinct virtual reality environments and subsequently had their memory tested during functional MRI (fMRI) scanning. Entorhinal grid-like representations showed realignment between the two contexts, and coincident changes in fMRI activity patterns consistent with remapping were observed in the hippocampus. Critically, in a third ambiguous context, trial-by-trial remapping and realignment in the hippocampal-entorhinal network predicted context-dependent behavior. These results reveal the hippocampal-entorhinal mechanisms mediating human contextual memory and suggest that the hippocampal formation plays a key role in spatial behavior under uncertainty.
22
Yim MY, Sadun LA, Fiete IR, Taillefumier T. Place-cell capacity and volatility with grid-like inputs. eLife 2021; 10:e62702. PMID: 34028354; PMCID: PMC8294848; DOI: 10.7554/elife.62702. Received 09/02/2020; accepted 04/28/2021. Open access.
Abstract
What factors constrain the arrangement of the multiple fields of a place cell? By modeling place cells as perceptrons that act on multiscale periodic grid-cell inputs, we analytically enumerate a place cell’s repertoire – how many field arrangements it can realize without external cues while its grid inputs are unique – and derive its capacity – the spatial range over which it can achieve any field arrangement. We show that the repertoire is very large and relatively noise-robust. However, the repertoire is a vanishing fraction of all arrangements, while capacity scales only as the sum of the grid periods so field arrangements are constrained over larger distances. Thus, grid-driven place field arrangements define a large response scaffold that is strongly constrained by its structured inputs. Finally, we show that altering grid-place weights to generate an arbitrary new place field strongly affects existing arrangements, which could explain the volatility of the place code.
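The constraint at the heart of this argument can be illustrated with a toy version of the setup. The integer positions, periods 3 and 4, and one-hot phase codes below are illustrative assumptions, not the paper's actual construction: because the joint grid code is unique only up to the least common multiple of the module periods, any field arrangement that a downstream perceptron realizes from these inputs must repeat beyond that range.

```python
# Toy grid-like input: concatenated one-hot phase codes across two modules.
def grid_code(pos, periods=(3, 4)):
    """Concatenated one-hot phase code of an integer position across modules."""
    code = []
    for p in periods:
        code.extend(1 if (pos % p) == i else 0 for i in range(p))
    return tuple(code)

codes = [grid_code(x) for x in range(24)]
assert len(set(codes[:12])) == 12      # positions distinct within lcm(3, 4) = 12
assert codes[:12] == codes[12:24]      # the code, hence any downstream field map, repeats
```

Since a place cell modeled as a perceptron is a deterministic function of this code, its field arrangement inherits the same periodicity, which is one concrete sense in which grid-driven arrangements are "constrained over larger distances".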
Affiliation(s)
- Man Yi Yim: Center for Theoretical and Computational Neuroscience, University of Texas, Austin, United States; Department of Neuroscience, University of Texas, Austin, United States; Department of Brain and Cognitive Sciences and McGovern Institute, MIT, Cambridge, United States
- Lorenzo A Sadun: Department of Mathematics and Neuroscience, The University of Texas, Austin, United States
- Ila R Fiete: Center for Theoretical and Computational Neuroscience, University of Texas, Austin, United States; Department of Brain and Cognitive Sciences and McGovern Institute, MIT, Cambridge, United States
- Thibaud Taillefumier: Center for Theoretical and Computational Neuroscience, University of Texas, Austin, United States; Department of Neuroscience, University of Texas, Austin, United States; Department of Mathematics and Neuroscience, The University of Texas, Austin, United States

23
Why grid cells function as a metric for space. Neural Netw 2021; 142:128-137. PMID: 34000560; DOI: 10.1016/j.neunet.2021.04.031. Received 04/26/2020; revised 04/16/2021; accepted 04/23/2021.
Abstract
The brain is able to calculate the distance and direction to a desired position based on grid cell activity. Extensive neurophysiological studies of rodent navigation have postulated that grid cells function as a metric for space, and have inspired many computational studies to develop innovative navigation approaches. Furthermore, grid cells may provide a general encoding scheme for higher-order nonspatial information. Building upon existing neuroscience and machine learning work, this paper provides theoretical clarity on how the grid cell population code can be taken as a metric for space. The metric is generated by a shift-invariant positive definite kernel via the kernel distance method and embeds isometrically into a Euclidean space, and the inner product of the grid cell population code converges exponentially to the kernel. We also provide a method to learn the distribution of the grid cell population efficiently. As a scalable position encoding method, grid cells can encode the spatial relationships of places, enabling them to outperform place cells in navigation. Further, we extend the grid cell code to image encoding and find that grid cells embed images into a mental map in which geometric relationships correspond to conceptual relationships between images. The theoretical model and analysis contribute to establishing the grid cell code as a generic coding scheme for both spatial and conceptual spaces, and are promising for a multitude of problems across spatial cognition, machine learning, and semantic cognition.
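The central property can be sketched numerically under illustrative assumptions (cosine tuning curves, made-up module periods, and cell counts, not the paper's learned code): the inner product of such a grid-cell population code depends only on the separation between the two encoded positions, i.e. it defines a shift-invariant kernel, which is what allows the code to induce a metric on space.

```python
# Toy 1-D grid-cell population code with a shift-invariant inner product.
import math

PERIODS = [0.3, 0.42, 0.59]   # one grid module per period (arbitrary units)
CELLS_PER_MODULE = 8          # evenly spaced phases within each module

def encode(x):
    """Population response to a 1-D position x: one cosine tuning
    curve per (module, phase) pair."""
    return [math.cos(2 * math.pi * (x / p - k / CELLS_PER_MODULE))
            for p in PERIODS for k in range(CELLS_PER_MODULE)]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v)) / len(u)

# Shift invariance: the inner product depends only on |x - y| ...
assert abs(inner(encode(0.10), encode(0.25))
           - inner(encode(1.10), encode(1.25))) < 1e-9
# ... and is maximal at zero separation, so it behaves like a kernel distance.
assert inner(encode(0.1), encode(0.1)) > inner(encode(0.1), encode(0.3))
```

With evenly spaced phases, the cross terms cancel exactly within each module, leaving an inner product that is a sum of cosines of the position difference, one per module.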
24
Azeredo da Silveira R, Rieke F. The Geometry of Information Coding in Correlated Neural Populations. Annu Rev Neurosci 2021; 44:403-424. PMID: 33863252; DOI: 10.1146/annurev-neuro-120320-082744.
Abstract
Neurons in the brain represent information in their collective activity. The fidelity of this neural population code depends on whether and how variability in the response of one neuron is shared with other neurons. Two decades of studies have investigated the influence of these noise correlations on the properties of neural coding. We provide an overview of the theoretical developments on the topic. Using simple, qualitative, and general arguments, we discuss, categorize, and relate the various published results. We emphasize the relevance of the fine structure of noise correlation, and we present a new approach to the issue. Throughout this review, we emphasize a geometrical picture of how noise correlations impact the neural code.
Affiliation(s)
- Fred Rieke
- Department of Physics, Ecole Normale Supérieure, 75005 Paris, France

25
Kang L, Xu B, Morozov D. Evaluating State Space Discovery by Persistent Cohomology in the Spatial Representation System. Front Comput Neurosci 2021; 15:616748. PMID: 33897395; PMCID: PMC8060447; DOI: 10.3389/fncom.2021.616748. Received 10/13/2020; accepted 03/11/2021. Open access.
Abstract
Persistent cohomology is a powerful technique for discovering topological structure in data, but strategies for its use in neuroscience are still under development. We comprehensively and rigorously assess its performance in simulated neural recordings of the brain's spatial representation system. Grid, head direction, and conjunctive cell populations each span low-dimensional topological structures embedded in high-dimensional neural activity space. We evaluate the ability of persistent cohomology to discover these structures for different dataset dimensions, variations in spatial tuning, and forms of noise. We quantify its ability to decode simulated animal trajectories contained within these topological structures. We also identify regimes under which mixtures of populations form product topologies that can be detected. Our results reveal how dataset parameters affect the success of topological discovery and suggest principles for applying persistent cohomology, as well as persistent homology, to experimental neural recordings.
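To give a flavor of what the zero-dimensional end of this machinery computes, here is a self-contained sketch of H0 persistence: connected components merging as a distance threshold grows, tracked with a union-find structure. The point set is made up for illustration; real analyses of neural data use optimized persistent (co)homology libraries and higher homology dimensions.

```python
# Minimal H0 persistence: each union event records the edge length at
# which a connected component "dies" (merges into another).
import itertools, math

points = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),   # one tight cluster
          (3.0, 3.0), (3.1, 3.0)]               # a second, distant cluster

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]           # path compression
        i = parent[i]
    return i

# Sort pairwise edges by length; each union kills one component at that length.
edges = sorted((math.dist(p, q), i, j)
               for (i, p), (j, q) in itertools.combinations(enumerate(points), 2))
parent = list(range(len(points)))
deaths = []
for d, i, j in edges:
    ri, rj = find(parent, i), find(parent, j)
    if ri != rj:
        parent[ri] = rj
        deaths.append(d)

# n points start as n components, so there are n - 1 merge events; the
# last, long-lived one occurs at the inter-cluster distance, revealing
# two persistent components.
assert len(deaths) == len(points) - 1
assert deaths[-1] > 10 * deaths[-2]
```

The long gap between the last death and the earlier ones is the "persistence" that separates genuine structure (two clusters) from sampling noise.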
Affiliation(s)
- Louis Kang: Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA, United States; Neural Circuits and Computations Unit, RIKEN Center for Brain Science, Wako, Japan
- Boyan Xu: Department of Mathematics, University of California, Berkeley, Berkeley, CA, United States
- Dmitriy Morozov: Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA, United States

26
Dannenberg H, Lazaro H, Nambiar P, Hoyland A, Hasselmo ME. Effects of visual inputs on neural dynamics for coding of location and running speed in medial entorhinal cortex. eLife 2020; 9:e62500. PMID: 33300873; PMCID: PMC7773338; DOI: 10.7554/elife.62500. Received 08/26/2020; accepted 12/09/2020. Open access.
Abstract
Neuronal representations of spatial location and movement speed in the medial entorhinal cortex during the 'active' theta state of the brain are important for memory-guided navigation and rely on visual inputs. However, little is known about how visual inputs change neural dynamics as a function of running speed and time. By manipulating visual inputs in mice, we demonstrate that changes in the spatial stability of grid cell firing correlate with changes in a proposed speed signal, the local field potential theta frequency. In contrast, visual inputs do not alter the running speed-dependent gain in neuronal firing rates. Moreover, we provide evidence that sensory inputs other than visual inputs can support grid cell firing in complete darkness, though less accurately. Finally, changes in the spatial accuracy of grid cell firing on a 10 s time scale suggest that grid cell firing is a function of velocity signals integrated over past time.
Affiliation(s)
- Holger Dannenberg, Hallie Lazaro, Pranav Nambiar, Alec Hoyland, Michael E Hasselmo: Center for Systems Neuroscience, Department of Psychological and Brain Sciences, Boston University, Boston, United States

27
Abstract
Animals frequently need to choose the best alternative from a set of possibilities, whether it is which direction to swim in or which food source to favor. How long should a network of neurons take to choose the best of N options? Theoretical results suggest that the optimal time grows as log(N) if the values of each option are imperfectly perceived. However, standard self-terminating neural network models of decision-making cannot achieve this optimal behavior. We show how using certain additional nonlinear response properties in neurons, which are ignored in standard models, results in a decision-making architecture that both achieves the optimal scaling of decision time and accounts for multiple experimentally observed features of neural decision-making.

An elemental computation in the brain is to identify the best in a set of options and report its value. It is required for inference, decision-making, optimization, action selection, consensus, and foraging. Neural computing is considered powerful because of its parallelism; however, it is unclear whether neurons can perform this max-finding operation in a way that improves upon the prohibitively slow optimal serial max-finding computation (which takes ∼Nlog(N) time for N noisy candidate options) by a factor of N, the benchmark for parallel computation. Biologically plausible architectures for this task are winner-take-all (WTA) networks, where individual neurons inhibit each other so that only those with the largest input remain active. We show that conventional WTA networks fail the parallelism benchmark and, worse, in the presence of noise altogether fail to produce a winner when N is large. We introduce the nWTA network, in which neurons are equipped with a second nonlinearity that prevents weakly active neurons from contributing inhibition. Without parameter fine-tuning or rescaling as N varies, the nWTA network achieves the parallelism benchmark. The network reproduces experimentally observed phenomena such as Hick's law without needing an additional readout stage or adaptive N-dependent thresholds. Our work bridges scales by linking cellular nonlinearities to circuit-level decision-making, establishes that distributed computation saturating the parallelism benchmark is possible in networks of noisy, finite-memory neurons, and shows that Hick's law may be a symptom of near-optimal parallel decision-making with noisy input.
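The key ingredient, a second nonlinearity so that weakly active units do not contribute inhibition, can be caricatured in a few lines. The rate dynamics, parameter values, and thresholds below are illustrative assumptions, not the published nWTA model:

```python
# Toy winner-take-all race with an inhibition threshold (the "second
# nonlinearity"): only units whose rate exceeds theta inhibit the others.
def nwta(inputs, steps=400, dt=0.1, beta=0.5, theta=0.5):
    """Rate units that self-excite (beta) and share pooled inhibition."""
    x = [0.0] * len(inputs)
    for _ in range(steps):
        inhib = sum(xi for xi in x if xi > theta)   # thresholded inhibition pool
        x = [xi + dt * (-xi + max(b + beta * xi - inhib, 0.0))
             for xi, b in zip(x, inputs)]
    return x

# The unit receiving the largest input ends up with the highest rate.
rates = nwta([1.0, 0.9, 0.8])
assert rates.index(max(rates)) == 0
```

In this caricature the race unfolds free of inhibition while all rates are below threshold, and competition only begins once the leaders cross it, which is the mechanism the abstract credits with preserving a winner under noise.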
28
Kim W, Yoo Y. Toward a Unified Framework for Cognitive Maps. Neural Comput 2020; 32:2455-2485. PMID: 32946705; DOI: 10.1162/neco_a_01326.
Abstract
In this study, we integrate neural encoding and decoding into a unified framework for spatial information processing in the brain. The neural representations of self-location in the hippocampus (HPC) and entorhinal cortex (EC) play crucial roles in spatial navigation, yet the representations in these neighboring brain areas show stark differences. Whereas place cells in the HPC fire as a unimodal function of spatial location, grid cells in the EC show periodic tuning curves with different periods for different subpopulations (called modules). By combining an encoding model for this modular neural representation with a realistic decoding model based on belief propagation, we investigated how self-location is encoded by neurons in the EC and then decoded by downstream neurons in the HPC. Through numerical simulations, we first show the positive synergy effects of the modular structure in the EC: the modular structure introduces more coupling between heterogeneous modules with different periodicities, which provides increased error-correcting capability. This is also demonstrated by comparing the beliefs produced when decoding two- and four-module codes. Whereas the former resulted in complete decoding failure, the latter correctly recovered the self-location even from the same inputs. Further analysis of belief propagation during decoding revealed complex dynamics in information updates due to interactions among multiple modules with diverse scales. The proposed unified framework therefore allows one to investigate the overall flow of spatial information, closing the loop of encoding and decoding self-location in the brain.
Affiliation(s)
- Woori Kim: Department of Special Education, Chonnam National University, Buk-gu, Gwangju, 61186, Korea
- Yongseok Yoo: Department of Electronics Engineering, Incheon National University, Yeonsu-gu, Incheon 22012, Korea

29
Katyare N, Sikdar SK. Theta resonance and synaptic modulation scale activity patterns in the medial entorhinal cortex stellate cells. Ann N Y Acad Sci 2020; 1478:92-112. PMID: 32794193; DOI: 10.1111/nyas.14434. Received 07/13/2019; revised 05/31/2020; accepted 06/19/2020.
Abstract
Stellate cells (SCs) of the medial entorhinal cortex (MEC) are rich in hyperpolarization-activated cyclic nucleotide-gated (HCN) channels, which are known to effectively shape their activity patterns. The explanatory mechanisms, however, have remained elusive. One important but previously unassessed possibility is that HCN channels control the gain of synaptic inputs to these cells. Here, we test this possibility in rat brain slices while subjecting SCs to stochastic synaptic bombardment using the dynamic clamp. We show that in the presence of synaptic noise, HCN channels mainly exert their influence by increasing the relative signal gain in the theta frequency band through theta modulation of stochastic synaptic inputs. This subthreshold synaptic modulation then translates into a spiking resonance, which steepens with excitation in the presence of HCN channels. We present a systematic assessment of synaptic theta modulation and trace its implications for the suprathreshold control of firing rate motifs; such an analysis has so far been lacking in the SC literature. Furthermore, we assess the impact of noise statistics on this gain modulation and indicate possible mechanisms for the emergence of membrane theta oscillations and synaptic ramps, as observed in vivo. We support the data with a computational model that further unveils a competing role of inhibition, suggesting important implications for MEC computations.
Affiliation(s)
- Nupur Katyare: Molecular Biophysics Unit, Indian Institute of Science, Bengaluru, Karnataka, India
- Sujit Kumar Sikdar: Molecular Biophysics Unit, Indian Institute of Science, Bengaluru, Karnataka, India

30
Agmon H, Burak Y. A theory of joint attractor dynamics in the hippocampus and the entorhinal cortex accounts for artificial remapping and grid cell field-to-field variability. eLife 2020; 9:e56894. PMID: 32779570; PMCID: PMC7447444; DOI: 10.7554/elife.56894. Received 03/13/2020; accepted 08/07/2020. Open access.
Abstract
The representation of position in the mammalian brain is distributed across multiple neural populations. Grid cell modules in the medial entorhinal cortex (MEC) express activity patterns that span a low-dimensional manifold which remains stable across different environments. In contrast, the activity patterns of hippocampal place cells span distinct low-dimensional manifolds in different environments. It is unknown how these multiple representations of position are coordinated. Here, we develop a theory of joint attractor dynamics in the hippocampus and the MEC. We show that the system exhibits a coordinated, joint representation of position across multiple environments, consistent with global remapping in place cells and grid cells. In addition, our model accounts for recent experimental observations that lack a mechanistic explanation: variability in the firing rate of single grid cells across firing fields, and artificial remapping of place cells under depolarization, but not under hyperpolarization, of layer II stellate cells of the MEC.
Affiliation(s)
- Haggai Agmon: Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Yoram Burak: Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel; Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem, Israel

31
Issa JB, Tocker G, Hasselmo ME, Heys JG, Dombeck DA. Navigating Through Time: A Spatial Navigation Perspective on How the Brain May Encode Time. Annu Rev Neurosci 2020; 43:73-93. PMID: 31961765; PMCID: PMC7351603; DOI: 10.1146/annurev-neuro-101419-011117.
Abstract
Interval timing, which operates on timescales of seconds to minutes, is distributed across multiple brain regions and may use circuit mechanisms distinct from those of millisecond timing and circadian rhythms. However, its study has proven difficult, as timing on this scale is deeply entangled with other behaviors. Several circuit and cellular mechanisms could generate sequential or ramping activity patterns that carry timing information. Here we propose that a productive approach is to draw parallels between interval timing and spatial navigation, where direct analogies can be made between the variables of interest and the mathematical operations required. Along with designing experiments that isolate or disambiguate timing behavior from other variables, new techniques will facilitate studies that directly address the neural mechanisms responsible for interval timing.
Collapse
Affiliation(s)
- John B Issa
- Department of Neurobiology, Northwestern University, Evanston, Illinois 60208, USA;
| | - Gilad Tocker
- Department of Neurobiology, Northwestern University, Evanston, Illinois 60208, USA;
| | - Michael E Hasselmo
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts 02215, USA
| | - James G Heys
- Department of Neurobiology and Anatomy, University of Utah, Salt Lake City, Utah 84112, USA
| | - Daniel A Dombeck
- Department of Neurobiology, Northwestern University, Evanston, Illinois 60208, USA;
32
Klukas M, Lewis M, Fiete I. Efficient and flexible representation of higher-dimensional cognitive variables with grid cells. PLoS Comput Biol 2020; 16:e1007796. [PMID: 32343687 PMCID: PMC7209352 DOI: 10.1371/journal.pcbi.1007796] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2019] [Revised: 05/08/2020] [Accepted: 03/18/2020] [Indexed: 11/18/2022] Open
Abstract
We shed light on the potential of entorhinal grid cells to efficiently encode variables of dimension greater than two, while remaining faithful to empirical data on their low-dimensional structure. Our model constructs representations of high-dimensional inputs through a combination of low-dimensional random projections and "classical" low-dimensional hexagonal grid cell responses. Without reconfiguration of the recurrent circuit, the same system can flexibly encode multiple variables of different dimensions while maximizing the coding range (per dimension) by automatically trading off dimension against an exponentially large coding range. It achieves high efficiency and flexibility by combining two powerful concepts, modularity and mixed selectivity, in what we call "mixed modular coding". In contrast to previously proposed schemes, the model does not require the formation of higher-dimensional grid responses, a cell-inefficient and rigid mechanism. The firing fields observed in flying bats or climbing rats can be generated by neurons that combine activity from multiple grid modules, each representing higher-dimensional spaces according to our model. The idea expands our understanding of grid cells, suggesting that they could implement a general circuit that generates on-demand coding and memory states for variables in high-dimensional vector spaces.
Affiliation(s)
- Mirko Klukas
- MIT Department of Brain and Cognitive Sciences, Cambridge, Massachusetts, United States of America
- Numenta, Redwood City, California, United States of America
- Marcus Lewis
- Numenta, Redwood City, California, United States of America
- Ila Fiete
- MIT Department of Brain and Cognitive Sciences, Cambridge, Massachusetts, United States of America
33
Hausler S, Chen Z, Hasselmo ME, Milford M. Bio-inspired multi-scale fusion. Biol Cybern 2020; 114:209-229. [PMID: 32322978 DOI: 10.1007/s00422-020-00831-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/27/2019] [Accepted: 03/27/2020] [Indexed: 06/11/2023]
Abstract
We reveal how implementing the homogeneous, multi-scale mapping frameworks observed in the mammalian brain's mapping systems radically improves the performance of a range of current robotic localization techniques. Roboticists have developed a range of predominantly single- or dual-scale heterogeneous mapping approaches (typically locally metric and globally topological) that starkly contrast with neural encoding of space in mammalian brains: a multi-scale map underpinned by spatially responsive cells like the grid cells found in the rodent entorhinal cortex. Yet the full benefits of a homogeneous multi-scale mapping framework remain unknown in both robotics and biology: in robotics because of the focus on single- or two-scale systems and limits in the scalability and open-field nature of current test environments and benchmark datasets; in biology because of technical limitations when recording from rodents during movement over large areas. New global spatial databases with visual information varying over several orders of magnitude in scale enable us to investigate this question for the first time in real-world environments. In particular, we investigate and answer the following questions: why have multi-scale representations, how many scales should there be, what should the size ratio between consecutive scales be and how does the absolute scale size affect performance? We answer these questions by developing and evaluating a homogeneous, multi-scale mapping framework mimicking aspects of the rodent multi-scale map, but using current robotic place recognition techniques at each scale. Results in large-scale real-world environments demonstrate multi-faceted and significant benefits for mapping and localization performance and identify the key factors that determine performance.
34
Fischer LF, Mojica Soto-Albors R, Buck F, Harnett MT. Representation of visual landmarks in retrosplenial cortex. eLife 2020; 9:e51458. [PMID: 32154781 PMCID: PMC7064342 DOI: 10.7554/elife.51458] [Citation(s) in RCA: 54] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2019] [Accepted: 02/03/2020] [Indexed: 11/13/2022] Open
Abstract
The process by which visual information is incorporated into the brain’s spatial framework to represent landmarks is poorly understood. Studies in humans and rodents suggest that retrosplenial cortex (RSC) plays a key role in these computations. We developed an RSC-dependent behavioral task in which head-fixed mice learned the spatial relationship between visual landmark cues and hidden reward locations. Two-photon imaging revealed that these cues served as dominant reference points for most task-active neurons and anchored the spatial code in RSC. This encoding was more robust after task acquisition. Decoupling the virtual environment from mouse behavior degraded spatial representations and provided evidence that supralinear integration of visual and motor inputs contributes to landmark encoding. V1 axons recorded in RSC were less modulated by task engagement but showed surprisingly similar spatial tuning. Our data indicate that landmark representations in RSC are the result of local integration of visual, motor, and spatial information.

When moving through a city, people often use notable or familiar landmarks to help them navigate. Landmarks provide us with information about where we are and where we need to go next. But despite the ease with which we – and most other animals – use landmarks to find our way around, it remains unclear exactly how the brain makes this possible. One area that seems to have a key role is the retrosplenial cortex, which is located deep within the back of the brain in humans. This area becomes more active when animals use visual landmarks to navigate. It is also one of the first brain regions to be affected in Alzheimer's disease, which may help to explain why patients with this condition can become lost and disoriented, even in places they have been many times before. To find out how the retrosplenial cortex supports navigation, Fischer et al. measured its activity in mice exploring a virtual reality world.
The mice ran through simulated corridors in which visual landmarks indicated where hidden rewards could be found. The activity of most neurons in the retrosplenial cortex was most strongly influenced by the mouse’s position relative to the landmark; for example, some neurons were always active 10 centimeters after the landmark. In other experiments, when the landmarks were present but no longer indicated the location of a reward, the same neurons were much less active. Fischer et al. also measured the activity of the neurons when the mice were running with nothing shown on the virtual reality, and when they saw a landmark but did not run. Notably, the activity seen when the mice were using the landmarks to find rewards was greater than the sum of that recorded when the mice were just running or just seeing the landmark without a reward, making the “landmark response” an example of so-called supralinear processing. Fischer et al. showed that visual centers of the brain send information about landmarks to retrosplenial cortex. But only the latter adjusts its activity depending on whether the mouse is using that landmark to navigate. These findings provide the first evidence for a “landmark code” at the level of neurons and lay the foundations for studying impaired navigation in patients with Alzheimer's disease. By showing that retrosplenial cortex neurons combine different types of input in a supralinear fashion, the results also point to general principles for how neurons in the brain perform complex calculations.
Affiliation(s)
- Lukas F Fischer
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, United States
- Raul Mojica Soto-Albors
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, United States
- Friederike Buck
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, United States
- Mark T Harnett
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, United States
35
Hasselmo ME, Alexander AS, Dannenberg H, Newman EL. Overview of computational models of hippocampus and related structures: Introduction to the special issue. Hippocampus 2020; 30:295-301. [PMID: 32119171 DOI: 10.1002/hipo.23201] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Extensive computational modeling has focused on the hippocampal formation and associated cortical structures. This overview describes some of the factors that have motivated the strong focus on these structures, including major experimental findings and their impact on computational models. This overview provides a framework for describing the topics addressed by individual articles in this special issue of the journal Hippocampus.
Affiliation(s)
- Michael E Hasselmo
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts
- Andrew S Alexander
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts
- Holger Dannenberg
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts
- Ehren L Newman
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana
36
Johnston WJ, Palmer SE, Freedman DJ. Nonlinear mixed selectivity supports reliable neural computation. PLoS Comput Biol 2020; 16:e1007544. [PMID: 32069273 PMCID: PMC7048320 DOI: 10.1371/journal.pcbi.1007544] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2019] [Revised: 02/28/2020] [Accepted: 11/12/2019] [Indexed: 12/17/2022] Open
Abstract
Neuronal activity in the brain is variable, yet both perception and behavior are generally reliable. How does the brain achieve this? Here, we show that the conjunctive coding of multiple stimulus features, commonly known as nonlinear mixed selectivity, may be used by the brain to support reliable information transmission using unreliable neurons. Nonlinearly mixed feature representations have been observed throughout primary sensory, decision-making, and motor brain areas. In these areas, different features are almost always nonlinearly mixed to some degree, rather than represented separately or with only additive (linear) mixing, which we refer to as pure selectivity. Mixed selectivity has been previously shown to support flexible linear decoding for complex behavioral tasks. Here, we show that it has another important benefit: in many cases, it makes orders of magnitude fewer decoding errors than pure selectivity even when both forms of selectivity use the same number of spikes. This benefit holds for sensory, motor, and more abstract, cognitive representations. Further, we show experimental evidence that mixed selectivity exists in the brain even when it does not enable behaviorally useful linear decoding. This suggests that nonlinear mixed selectivity may be a general coding scheme exploited by the brain for reliable and efficient neural computation.

Neurons in the brain are unreliable, while both perception and behavior are generally reliable. In this work, we study how the neural population response to sensory, motor, and cognitive features can produce this reliability. Across the brain, single neurons have been shown to respond to particular conjunctions of multiple features, termed nonlinear mixed selectivity. In this work, we show that populations of these mixed selective neurons lead to many fewer decoding errors than populations without mixed selectivity, even when both neural codes are given the same number of spikes.
We show that the reliability benefits from mixed selectivity are quite general, holding under different assumptions about metabolic costs and neural noise as well as for both categorical and sensory errors. Further, previous theoretical work has shown that mixed selectivity enables the learning of complex behaviors with simple decoders. Through the analysis of neural data, we show that the brain implements mixed selectivity even when it would not serve this purpose. Thus, we argue that the brain also implements mixed selectivity to exploit its general benefits for reliable and efficient neural computation.
Affiliation(s)
- W. Jeffrey Johnston
- Graduate Program in Computational Neuroscience, The University of Chicago, Chicago, Illinois, United States of America
- Department of Neurobiology, The University of Chicago, Chicago, Illinois, United States of America
- Stephanie E. Palmer
- Graduate Program in Computational Neuroscience, The University of Chicago, Chicago, Illinois, United States of America
- Grossman Institute for Neuroscience, Quantitative Biology, and Human Behavior, The University of Chicago, Chicago, Illinois, United States of America
- Department of Organismal Biology and Anatomy, The University of Chicago, Chicago, Illinois, United States of America
- Department of Physics, The University of Chicago, Chicago, Illinois, United States of America
- David J. Freedman
- Graduate Program in Computational Neuroscience, The University of Chicago, Chicago, Illinois, United States of America
- Department of Neurobiology, The University of Chicago, Chicago, Illinois, United States of America
- Grossman Institute for Neuroscience, Quantitative Biology, and Human Behavior, The University of Chicago, Chicago, Illinois, United States of America
37
Waniek N. Transition Scale-Spaces: A Computational Theory for the Discretized Entorhinal Cortex. Neural Comput 2020; 32:330-394. [DOI: 10.1162/neco_a_01255] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Although hippocampal grid cells are thought to be crucial for spatial navigation, their computational purpose remains disputed. Recently, they were proposed to represent spatial transitions and convey this knowledge downstream to place cells. However, a single scale of transitions is insufficient to plan long goal-directed sequences in behaviorally acceptable time. Here, a scale-space data structure is suggested to optimally accelerate retrievals from transition systems, called transition scale-space (TSS). Remaining exclusively on an algorithmic level, the scale increment is proved to be ideally [Formula: see text] for biologically plausible receptive fields. It is then argued that temporal buffering is necessary to learn the scale-space online. Next, two modes for retrieval of sequences from the TSS are presented: top down and bottom up. The two modes are evaluated in symbolic simulations (i.e., without biologically plausible spiking neurons). Additionally, a TSS is used for short-cut discovery in a simulated Morris water maze. Finally, the results are discussed in depth with respect to biological plausibility, and several testable predictions are derived. Moreover, relations to other grid cell models, multiresolution path planning, and scale-space theory are highlighted. In summary, reward-free transition encoding is shown here, in a theoretical model, to be compatible with the observed discretization along the dorso-ventral axis of the medial entorhinal cortex. Because the theoretical model generalizes beyond navigation, the TSS is suggested to be a general-purpose cortical data structure for fast retrieval of sequences and relational knowledge. Source code for all simulations presented in this paper can be found at https://github.com/rochus/transitionscalespace .
Affiliation(s)
- Nicolai Waniek
- Bosch Center for Artificial Intelligence, Robert Bosch GmbH, 71272 Renningen, Germany
38
Mosheiff N, Burak Y. Velocity coupling of grid cell modules enables stable embedding of a low dimensional variable in a high dimensional neural attractor. eLife 2019; 8:e48494. [PMID: 31469365 PMCID: PMC6756787 DOI: 10.7554/elife.48494] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2019] [Accepted: 08/29/2019] [Indexed: 01/17/2023] Open
Abstract
Grid cells in the medial entorhinal cortex (MEC) encode position using a distributed representation across multiple neural populations (modules), each possessing a distinct spatial scale. The modular structure of the representation confers the grid cell neural code with large capacity. Yet, the modularity poses significant challenges for the neural circuitry that maintains the representation, and updates it based on self motion. Small incompatible drifts in different modules, driven by noise, can rapidly lead to large, abrupt shifts in the represented position, resulting in catastrophic readout errors. Here, we propose a theoretical model of coupled modules. The coupling suppresses incompatible drifts, allowing for a stable embedding of a two-dimensional variable (position) in a higher dimensional neural attractor, while preserving the large capacity. We propose that coupling of this type may be implemented by recurrent synaptic connectivity within the MEC with a relatively simple and biologically plausible structure.
Affiliation(s)
- Noga Mosheiff
- Racah Institute of Physics, Hebrew University, Jerusalem, Israel
- Yoram Burak
- Racah Institute of Physics, Hebrew University, Jerusalem, Israel
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem, Israel
39
Kang L, Balasubramanian V. A geometric attractor mechanism for self-organization of entorhinal grid modules. eLife 2019; 8:e46687. [PMID: 31373556 PMCID: PMC6776444 DOI: 10.7554/elife.46687] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2019] [Accepted: 08/01/2019] [Indexed: 11/13/2022] Open
Abstract
Grid cells in the medial entorhinal cortex (MEC) respond when an animal occupies a periodic lattice of 'grid fields' in the environment. The grids are organized in modules with spatial periods, or scales, clustered around discrete values separated on average by ratios in the range 1.4-1.7. We propose a mechanism that produces this modular structure through dynamical self-organization in the MEC. In attractor network models of grid formation, the grid scale of a single module is set by the distance of recurrent inhibition between neurons. We show that the MEC forms a hierarchy of discrete modules if a smooth increase in inhibition distance along its dorso-ventral axis is accompanied by excitatory interactions along this axis. Moreover, constant scale ratios between successive modules arise through geometric relationships between triangular grids and have values that fall within the observed range. We discuss how interactions required by our model might be tested experimentally.
Affiliation(s)
- Louis Kang
- David Rittenhouse Laboratories, University of Pennsylvania, Philadelphia, United States
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, United States
- Vijay Balasubramanian
- David Rittenhouse Laboratories, University of Pennsylvania, Philadelphia, United States
40
Sugar J, Moser MB. Episodic memory: Neuronal codes for what, where, and when. Hippocampus 2019; 29:1190-1205. [PMID: 31334573 DOI: 10.1002/hipo.23132] [Citation(s) in RCA: 83] [Impact Index Per Article: 16.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2019] [Revised: 06/06/2019] [Accepted: 06/12/2019] [Indexed: 11/07/2022]
Abstract
Episodic memory is defined as the ability to recall events in a spatiotemporal context. Formation of such memories is critically dependent on the hippocampal formation and its inputs from the entorhinal cortex. To be able to support the formation of episodic memories, entorhinal cortex and hippocampal formation should contain a neuronal code that follows several requirements. First, the code should include information about position of the agent ("where"), sequence of events ("when"), and the content of the experience itself ("what"). Second, the code should arise instantly thereby being able to support memory formation of one-shot experiences. For successful encoding and to avoid interference between memories during recall, variations in location, time, or in content of experience should result in unique ensemble activity. Finally, the code should capture several different resolutions of experience so that the necessary details relevant for future memory-based predictions will be stored. We review how neuronal codes in entorhinal cortex and hippocampus follow these requirements and argue that during formation of episodic memories entorhinal cortex provides hippocampus with instant information about ongoing experience. Such information originates from (a) spatially modulated neurons in medial entorhinal cortex, including grid cells, which provide a stable and universal positional metric of the environment; (b) a continuously varying signal in lateral entorhinal cortex providing a code for the temporal progression of events; and (c) entorhinal neurons coding the content of experiences exemplified by object-coding and odor-selective neurons. During formation of episodic memories, information from these systems are thought to be encoded as unique sequential ensemble activity in hippocampus, thereby encoding associations between the content of an event and its spatial and temporal contexts. 
Upon exposure to parts of the encoded stimuli, activity in these ensembles can be reinstated, leading to reactivation of the encoded activity pattern and memory recollection.
Affiliation(s)
- Jørgen Sugar
- Centre for Neural Computation, Egil and Pauline Braathen and Fred Kavli Center for Cortical Microcircuits, Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- May-Britt Moser
- Centre for Neural Computation, Egil and Pauline Braathen and Fred Kavli Center for Cortical Microcircuits, Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
41
Schwartz DM, Koyluoglu OO. On the Organization of Grid and Place Cells: Neural Denoising via Subspace Learning. Neural Comput 2019; 31:1519-1550. [PMID: 31260389 DOI: 10.1162/neco_a_01208] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Place cells in the hippocampus (HC) are active when an animal visits a certain location (referred to as a place field) within an environment. Grid cells in the medial entorhinal cortex (MEC) respond at multiple locations, with firing fields that form a periodic and hexagonal tiling of the environment. The joint activity of grid and place cell populations, as a function of location, forms a neural code for space. In this article, we develop an understanding of the relationships between coding theoretically relevant properties of the combined activity of these populations and how these properties limit the robustness of this representation to noise-induced interference. These relationships are revisited by measuring the performances of biologically realizable algorithms implemented by networks of place and grid cell populations, as well as constraint neurons, which perform denoising operations. Contributions of this work include the investigation of coding theoretic limitations of the mammalian neural code for location and how communication between grid and place cell networks may improve the accuracy of each population's representation. Simulations demonstrate that denoising mechanisms analyzed here can significantly improve the fidelity of this neural representation of space. Furthermore, patterns observed in connectivity of each population of simulated cells predict that anti-Hebbian learning drives decreases in inter-HC-MEC connectivity along the dorsoventral axis.
Affiliation(s)
- David M Schwartz
- Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ 85719, U.S.A.
- O Ozan Koyluoglu
- Department of Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, CA 94720, U.S.A.
42
Dannenberg H, Alexander AS, Robinson JC, Hasselmo ME. The Role of Hierarchical Dynamical Functions in Coding for Episodic Memory and Cognition. J Cogn Neurosci 2019; 31:1271-1289. [PMID: 31251890 DOI: 10.1162/jocn_a_01439] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/11/2023]
Abstract
Behavioral research in human verbal memory function led to the initial definition of episodic memory and semantic memory. A complete model of the neural mechanisms of episodic memory must include the capacity to encode and mentally reconstruct everything that humans can recall from their experience. This article proposes new model features necessary to address the complexity of episodic memory encoding and recall in the context of broader cognition and the functional properties of neurons that could contribute to this broader scope of memory. Many episodic memory models represent individual snapshots of the world with a sequence of vectors, but a full model must represent complex functions encoding and retrieving the relations between multiple stimulus features across space and time on multiple hierarchical scales. Episodic memory involves not only the space and time of an agent experiencing events within an episode but also features shown in neurophysiological data such as coding of speed, direction, boundaries, and objects. Episodic memory includes not only a spatio-temporal trajectory of a single agent but also segments of spatio-temporal trajectories for other agents and objects encountered in the environment consistent with data on encoding the position and angle of sensory features of objects and boundaries. We will discuss potential interactions of episodic memory circuits in the hippocampus and entorhinal cortex with distributed neocortical circuits that must represent all features of human cognition.
43
Stellate Cells in the Medial Entorhinal Cortex Are Required for Spatial Learning. Cell Rep 2019; 22:1313-1324. [PMID: 29386117 PMCID: PMC5809635 DOI: 10.1016/j.celrep.2018.01.005] [Citation(s) in RCA: 44] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2017] [Revised: 12/05/2017] [Accepted: 01/02/2018] [Indexed: 11/24/2022] Open
Abstract
Spatial learning requires estimates of location that may be obtained by path integration or from positional cues. Grid and other spatial firing patterns of neurons in the superficial medial entorhinal cortex (MEC) suggest roles in behavioral estimation of location. However, distinguishing the contributions of path integration and cue-based signals to spatial behaviors is challenging, and the roles of identified MEC neurons are unclear. We use virtual reality to dissociate linear path integration from other strategies for behavioral estimation of location. We find that mice learn to path integrate using motor-related self-motion signals, with accuracy that decreases steeply as a function of distance. We show that inactivation of stellate cells in superficial MEC impairs spatial learning in virtual reality and in a real world object location recognition task. Our results quantify contributions of path integration to behavior and corroborate key predictions of models in which stellate cells contribute to location estimation.
Highlights:
- Mice learn to estimate location by path integration and cue-based strategies
- Motor-related self-motion signals are used for path integration
- Accuracy of path integration decreases with distance
- Stellate cells in medial entorhinal cortex are required for spatial learning
44
Lewis M, Purdy S, Ahmad S, Hawkins J. Locations in the Neocortex: A Theory of Sensorimotor Object Recognition Using Cortical Grid Cells. Front Neural Circuits 2019; 13:22. [PMID: 31068793 PMCID: PMC6491744 DOI: 10.3389/fncir.2019.00022] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2018] [Accepted: 03/19/2019] [Indexed: 12/23/2022] Open
Abstract
The neocortex is capable of anticipating the sensory results of movement but the neural mechanisms are poorly understood. In the entorhinal cortex, grid cells represent the location of an animal in its environment, and this location is updated through movement and path integration. In this paper, we propose that sensory neocortex incorporates movement using grid cell-like neurons that represent the location of sensors on an object. We describe a two-layer neural network model that uses cortical grid cells and path integration to robustly learn and recognize objects through movement and predict sensory stimuli after movement. A layer of cells consisting of several grid cell-like modules represents a location in the reference frame of a specific object. Another layer of cells which processes sensory input receives this location input as context and uses it to encode the sensory input in the object's reference frame. Sensory input causes the network to invoke previously learned locations that are consistent with the input, and motor input causes the network to update those locations. Simulations show that the model can learn hundreds of objects even when object features alone are insufficient for disambiguation. We discuss the relationship of the model to cortical circuitry and suggest that the reciprocal connections between layers 4 and 6 fit the requirements of the model. We propose that the subgranular layers of cortical columns employ grid cell-like mechanisms to represent object specific locations that are updated through movement.
Affiliation(s)
- Scott Purdy, Numenta Inc., Redwood City, CA, United States
45
Hawkins J, Lewis M, Klukas M, Purdy S, Ahmad S. A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex. Front Neural Circuits 2019; 12:121. [PMID: 30687022 PMCID: PMC6336927 DOI: 10.3389/fncir.2018.00121] [Citation(s) in RCA: 52] [Impact Index Per Article: 10.4] [Received: 10/20/2018] [Accepted: 12/24/2018] [Indexed: 11/17/2022] Open
Abstract
How the neocortex works is a mystery. In this paper we propose a novel framework for understanding its function. Grid cells are neurons in the entorhinal cortex that represent the location of an animal in its environment. Recent evidence suggests that grid cell-like neurons may also be present in the neocortex. We propose that grid cells exist throughout the neocortex, in every region and in every cortical column. They define a location-based framework for how the neocortex functions. Whereas grid cells in the entorhinal cortex represent the location of one thing, the body relative to its environment, we propose that cortical grid cells simultaneously represent the location of many things. Cortical columns in somatosensory cortex track the location of tactile features relative to the object being touched and cortical columns in visual cortex track the location of visual features relative to the object being viewed. We propose that mechanisms in the entorhinal cortex and hippocampus that evolved for learning the structure of environments are now used by the neocortex to learn the structure of objects. Having a representation of location in each cortical column suggests mechanisms for how the neocortex represents object compositionality and object behaviors. It leads to the hypothesis that every part of the neocortex learns complete models of objects and that there are many models of each object distributed throughout the neocortex. The similarity of circuitry observed in all cortical regions is strong evidence that even high-level cognitive tasks are learned and represented in a location-based framework.
46
Levakova M, Kostal L, Monsempès C, Jacob V, Lucas P. Moth olfactory receptor neurons adjust their encoding efficiency to temporal statistics of pheromone fluctuations. PLoS Comput Biol 2018; 14:e1006586. [PMID: 30422975 PMCID: PMC6258558 DOI: 10.1371/journal.pcbi.1006586] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Received: 04/07/2018] [Revised: 11/27/2018] [Accepted: 10/19/2018] [Indexed: 11/19/2022] Open
Abstract
The efficient coding hypothesis predicts that sensory neurons adjust their coding resources to optimally represent the stimulus statistics of their environment. To test this prediction in the moth olfactory system, we developed a stimulation protocol that mimics the natural temporal structure within a turbulent pheromone plume. We report that responses of antennal olfactory receptor neurons to pheromone encounters follow the temporal fluctuations in such a way that the most frequent stimulus timescales are encoded with maximum accuracy. We also observe that the average coding precision of neurons adapted to the stimulus-timescale statistics at a given distance from the pheromone source is higher than when the same encoding model is applied at a shorter, non-matching distance. Finally, the coding accuracy profile and the stimulus-timescale distribution are related in the manner predicted by information theory for the many-to-one convergence scenario of the moth peripheral sensory system.
Affiliation(s)
- Marie Levakova, Institute of Physiology of the Czech Academy of Sciences, Prague, Czech Republic
- Lubomir Kostal, Institute of Physiology of the Czech Academy of Sciences, Prague, Czech Republic
- Vincent Jacob, Institute of Ecology and Environmental Sciences, INRA, Versailles, France; Peuplements végétaux et bioagresseurs en milieu végétal, CIRAD, Université de la Réunion, Saint Pierre, Ile de la Réunion, France
- Philippe Lucas, Institute of Ecology and Environmental Sciences, INRA, Versailles, France
47
Waniek N. Hexagonal Grid Fields Optimally Encode Transitions in Spatiotemporal Sequences. Neural Comput 2018; 30:2691-2725. [PMID: 30148705 DOI: 10.1162/neco_a_01122] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Indexed: 01/16/2023]
Abstract
Grid cells of the rodent entorhinal cortex are essential for spatial navigation. Although their function is commonly believed to be either path integration or localization, the origin and purpose of their hexagonal firing fields remain disputed. Here, these fields are proposed to arise as an optimal encoding of transitions in sequences. First, storage requirements for transitions in general episodic sequences are examined using propositional logic and graph theory. Transitions in complete metric spaces are then considered under the assumption of an ideal sampling of the input space. It is shown that the memory capacity of neurons that must encode multiple feasible spatial transitions is maximized by a hexagonal pattern. Grid cells are thus proposed to encode spatial transitions in spatiotemporal sequences, with the entorhinal-hippocampal loop forming a multi-transition system.
Affiliation(s)
- Nicolai Waniek, Neuroscientific System Theory, Technical University of Munich, 80333 Munich, Germany
48
Widloski J, Marder MP, Fiete IR. Inferring circuit mechanisms from sparse neural recording and global perturbation in grid cells. eLife 2018; 7:e33503. [PMID: 29985132 PMCID: PMC6078497 DOI: 10.7554/elife.33503] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Received: 11/11/2017] [Accepted: 07/07/2018] [Indexed: 02/02/2023] Open
Abstract
A goal of systems neuroscience is to discover the circuit mechanisms underlying brain function. Despite experimental advances that enable circuit-wide neural recording, the problem remains open, in part because solving the 'inverse problem' of inferring circuitry and mechanism merely by observing activity is hard. In the grid cell system, we show through modeling that a technique based on global circuit perturbation and examination of a novel theoretical object, the distribution of relative phase shifts (DRPS), could reveal the mechanisms of a cortical circuit in unprecedented detail using extremely sparse neural recordings. We establish feasibility, showing that the method can discriminate between recurrent and feedforward mechanisms, and among various recurrent mechanisms, using recordings from a handful of cells. The proposed strategy demonstrates that sparse recording coupled with simple perturbation can reveal more about circuit mechanism than full knowledge of network activity or of the synaptic connectivity matrix.
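The central object in this abstract, the distribution of relative phase shifts, can be illustrated with a toy computation: take each recorded cell's grid phase before and after a global perturbation, and collect the pairwise differences of the per-cell phase shifts (mod 1). The function below is a schematic of that idea, not the paper's actual estimator; representing phase as a single scalar in [0, 1) is a simplifying assumption made here.

```python
import numpy as np

def relative_phase_shifts(phases_before, phases_after):
    """Toy DRPS-style computation: per-cell phase shifts (mod 1) induced
    by a perturbation, reduced to shifts relative to every other cell."""
    before = np.asarray(phases_before, dtype=float)
    after = np.asarray(phases_after, dtype=float)
    shift = (after - before) % 1.0  # each cell's own phase shift
    n = len(shift)
    # Pairwise relative shifts: a recurrent (attractor) mechanism predicts
    # these cluster at a few discrete values, whereas a feedforward
    # mechanism predicts no such clustering.
    return np.array([(shift[i] - shift[j]) % 1.0
                     for i in range(n) for j in range(i + 1, n)])
```

Even a handful of cells yields n(n-1)/2 relative shifts, which is why a statistic of this kind can be informative under extremely sparse recording.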
Affiliation(s)
- John Widloski, Department of Psychology, University of California, Berkeley, United States
- Ila R Fiete, Department of Physics, University of Texas, Austin, United States; Center for Learning and Memory, University of Texas, Austin, United States
49
Mittal D, Narayanan R. Degeneracy in the robust expression of spectral selectivity, subthreshold oscillations, and intrinsic excitability of entorhinal stellate cells. J Neurophysiol 2018; 120:576-600. [PMID: 29718802 PMCID: PMC6101195 DOI: 10.1152/jn.00136.2018] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Indexed: 01/29/2023] Open
Abstract
Biological heterogeneities are ubiquitous and play critical roles in the emergence of physiology at multiple scales. Although neurons in layer II (LII) of the medial entorhinal cortex (MEC) express heterogeneities in channel properties, the impact of such heterogeneities on the robustness of their cellular-scale physiology has not been assessed. Here, we performed a 55-parameter stochastic search spanning nine voltage- or calcium-activated channels to assess the impact of channel heterogeneities on the concomitant emergence of 10 in vitro electrophysiological characteristics of LII stellate cells (SCs). We generated 150,000 models and found a heterogeneous subpopulation of 449 valid models that robustly matched all electrophysiological signatures. We employed this heterogeneous population to demonstrate the emergence of cellular-scale degeneracy in SCs, whereby disparate parametric combinations expressing weak pairwise correlations resulted in similar models. We then assessed the impact of virtually knocking out each channel from all valid models and demonstrate that the mapping between channels and measurements was many-to-many, a critical requirement for the expression of degeneracy. Finally, we quantitatively predict that the spike-triggered average of SCs should be endowed with theta-frequency spectral selectivity and coincidence detection capabilities in the fast gamma-band. We postulate this fast gamma-band coincidence detection as an instance of cellular-scale efficient coding, whereby SC response characteristics match the dominant oscillatory signals in LII MEC. The heterogeneous population of valid SC models built here unveils the robust emergence of cellular-scale physiology despite significant channel heterogeneities, and forms an efficacious substrate for evaluating the impact of biological heterogeneities on entorhinal network function.
NEW & NOTEWORTHY We assessed the impact of heterogeneities in channel properties on the robustness of cellular-scale physiology of medial entorhinal cortical stellate neurons. We demonstrate that neuronal models with disparate channel combinations were endowed with similar physiological characteristics, as a consequence of the many-to-many mapping between channel properties and the physiological characteristics that they modulate. We predict that the spike-triggered average of stellate cells should be endowed with theta-frequency spectral selectivity and fast gamma-band coincidence detection capabilities.
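The population-of-models approach used in this study, drawing random channel parameters, simulating, and keeping only models whose measurements fall within experimental bounds, is a generic recipe that can be sketched briefly. Everything below (the names, the measurement function, the bounds) is illustrative, not the authors' 55-parameter pipeline.

```python
import random

def stochastic_search(n_models, param_bounds, measure, valid_bounds, seed=0):
    """Random search over a parameter space, keeping 'valid' models:
    those whose every measurement lies within its experimental range."""
    rng = random.Random(seed)
    valid = []
    for _ in range(n_models):
        # draw one candidate model uniformly within the parameter bounds
        params = {name: rng.uniform(lo, hi)
                  for name, (lo, hi) in param_bounds.items()}
        meas = measure(params)  # e.g. maps conductances to physiology
        if all(lo <= meas[k] <= hi for k, (lo, hi) in valid_bounds.items()):
            valid.append(params)
    return valid
```

Degeneracy then shows up as weak pairwise correlations among the parameters of the valid population: very different parameter combinations pass the same physiological screen.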
Affiliation(s)
- Divyansh Mittal, Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, India
- Rishikesh Narayanan, Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, India
50
Yu L, Jacobson A, Milford M. Rhythmic Representations: Learning Periodic Patterns for Scalable Place Recognition at a Sublinear Storage Cost. IEEE Robot Autom Lett 2018. [DOI: 10.1109/lra.2018.2792144] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Indexed: 11/09/2022]