1
Dong LL, Fiete IR. Grid Cells in Cognition: Mechanisms and Function. Annu Rev Neurosci 2024; 47:345-368. PMID: 38684081; DOI: 10.1146/annurev-neuro-101323-112047.
Abstract
The activity patterns of grid cells form distinctively regular triangular lattices over the explored spatial environment and are largely invariant to visual stimuli, animal movement, and environment geometry. These neurons present numerous fascinating challenges to the curious (neuro)scientist: What are the circuit mechanisms responsible for creating spatially periodic activity patterns from the monotonic input-output responses of single neurons? How and why does the brain encode a local, nonperiodic variable (the allocentric position of the animal) with a periodic, nonlocal code? And are grid cells truly specialized for spatial computations, or do they play a broader role in general cognition? We review efforts to uncover the mechanisms and functional properties of grid cells, highlighting recent progress in the experimental validation of mechanistic grid cell models, and discuss the coding properties and functional advantages of the grid code as suggested by continuous attractor network models of grid cells.
Affiliation(s)
- Ling L Dong
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Ila R Fiete
- McGovern Institute and K. Lisa Yang Integrative Computational Neuroscience Center, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
2
Mondal SS, Frankland S, Webb TW, Cohen JD. Determinantal point process attention over grid cell code supports out of distribution generalization. eLife 2024; 12:RP89911. PMID: 39088258; PMCID: PMC11293867; DOI: 10.7554/elife.89911.
Abstract
Deep neural networks have made tremendous gains in emulating human-like intelligence, and have been used increasingly as ways of understanding how the brain may solve the complex computational problems on which this relies. However, they still fall short of the strong forms of generalization of which humans are capable, and therefore fail to provide insight into how the brain supports them. One such case is out-of-distribution (OOD) generalization - successful performance on test examples that lie outside the distribution of the training set. Here, we identify properties of processing in the brain that may contribute to this ability. We describe a two-part algorithm that draws on specific features of neural computation to achieve OOD generalization, and provide a proof of concept by evaluating performance on two challenging cognitive tasks. First, we draw on the fact that the mammalian brain represents metric spaces using the grid cell code (e.g., in the entorhinal cortex): abstract representations of relational structure, organized in recurring motifs that cover the representational space. Second, we propose an attentional mechanism that operates over the grid cell code using a determinantal point process (DPP), which we call DPP attention (DPP-A) - a transformation that ensures maximum sparseness in the coverage of that space. We show that a loss function that combines standard task-optimized error with DPP-A can exploit the recurring motifs in the grid cell code, and can be integrated with common architectures to achieve strong OOD generalization performance on analogy and arithmetic tasks. This provides both an interpretation of how the grid cell code in the mammalian brain may contribute to generalization performance and, at the same time, a potential means for improving such capabilities in artificial neural networks.
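As a purely illustrative aside (a toy numpy sketch, not the authors' DPP-A implementation; the kernel and items are invented for the example), the determinantal computation at the heart of a DPP can be written in a few lines: an L-ensemble assigns a subset S an unnormalized probability det(L_S), so sets of mutually dissimilar items score higher.

import numpy as np

# Toy similarity kernel over four items: items 0 and 1 are nearly identical.
features = np.array([[1.0, 0.0],
                     [0.99, 0.14],   # almost a copy of item 0
                     [0.0, 1.0],
                     [-1.0, 0.0]])
L = features @ features.T + 1e-6 * np.eye(4)   # positive semidefinite L-ensemble kernel

def dpp_score(subset):
    """Unnormalized DPP probability of a subset: determinant of the principal submatrix."""
    return np.linalg.det(L[np.ix_(subset, subset)])

print(dpp_score([0, 1]))   # near-duplicate pair -> determinant close to 0
print(dpp_score([0, 2]))   # diverse pair -> much larger determinant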
Affiliation(s)
- Shanka Subhra Mondal
- Department of Electrical and Computer Engineering, Princeton University, Princeton, United States
- Steven Frankland
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Taylor W Webb
- Department of Psychology, University of California, Los Angeles, Los Angeles, United States
- Jonathan D Cohen
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
3
Di Tullio RW, Wei L, Balasubramanian V. Slow and steady: auditory features for discriminating animal vocalizations. bioRxiv 2024:2024.06.20.599962. PMID: 39005308; PMCID: PMC11244870; DOI: 10.1101/2024.06.20.599962.
Abstract
We propose that listeners can use temporal regularities - spectro-temporal correlations that change smoothly over time - to discriminate animal vocalizations within and between species. To test this idea, we used Slow Feature Analysis (SFA) to find the most temporally regular components of vocalizations from birds (blue jay, house finch, American yellow warbler, and great blue heron), humans (English speakers), and rhesus macaques. We projected vocalizations into the learned feature space and tested intra-class (same speaker/species) and inter-class (different speakers/species) auditory discrimination by a trained classifier. We found that: 1) Vocalization discrimination was excellent (> 95%) in all cases; 2) Performance depended primarily on the ~10 most temporally regular features; 3) Most vocalizations are dominated by ~10 features with high temporal regularity; and 4) These regular features are highly correlated with the most predictable components of animal sounds.
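For readers unfamiliar with the method, here is a minimal numpy sketch of linear Slow Feature Analysis (the generic textbook formulation, not the authors' exact pipeline or parameters): whiten the signal, then keep the directions in which the temporal derivative has the least variance.

import numpy as np

def linear_sfa(X, n_features=10):
    """Linear Slow Feature Analysis. X: (T, D) time series.
    Returns the n_features slowest linear projections of X."""
    X = X - X.mean(axis=0)
    # Whiten the data.
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    keep = evals > 1e-10
    W_white = evecs[:, keep] / np.sqrt(evals[keep])
    Z = X @ W_white
    # Slow directions: eigenvectors of the covariance of temporal differences
    # with the smallest eigenvalues (eigh returns ascending order).
    dZ = np.diff(Z, axis=0)
    d_evals, d_evecs = np.linalg.eigh(np.cov(dZ, rowvar=False))
    return Z @ d_evecs[:, :n_features]           # (T, n_features) slow features

# Toy usage: a slow sinusoid mixed with a fast one is recovered as the slowest feature.
t = np.linspace(0, 10, 2000)
X = np.column_stack([np.sin(0.5 * t), np.sin(40 * t)]) @ np.random.randn(2, 20)
X += 0.05 * np.random.randn(len(t), 20)
slow_feats = linear_sfa(X, n_features=2)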
Affiliation(s)
- Ronald W Di Tullio
- David Rittenhouse Laboratory, Department of Physics and Astronomy, University of Pennsylvania, USA
- Computational Neuroscience Initiative, University of Pennsylvania, USA
- Linran Wei
- David Rittenhouse Laboratory, Department of Physics and Astronomy, University of Pennsylvania, USA
- Vijay Balasubramanian
- David Rittenhouse Laboratory, Department of Physics and Astronomy, University of Pennsylvania, USA
- Computational Neuroscience Initiative, University of Pennsylvania, USA
- Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA
4
Kymn CJ, Mazelet S, Thomas A, Kleyko D, Frady EP, Sommer FT, Olshausen BA. Binding in hippocampal-entorhinal circuits enables compositionality in cognitive maps. arXiv 2024:arXiv:2406.18808v1. PMID: 38979486; PMCID: PMC11230348.
Abstract
We propose a normative model for spatial representation in the hippocampal formation that combines optimality principles, such as maximizing coding range and spatial information per neuron, with an algebraic framework for computing in distributed representation. Spatial position is encoded in a residue number system, with individual residues represented by high-dimensional, complex-valued vectors. These are composed into a single vector representing position by a similarity-preserving, conjunctive vector-binding operation. Self-consistency between the representations of the overall position and of the individual residues is enforced by a modular attractor network whose modules correspond to the grid cell modules in entorhinal cortex. The vector binding operation can also associate different contexts to spatial representations, yielding a model for entorhinal cortex and hippocampus. We show that the model achieves normative desiderata including superlinear scaling of patterns with dimension, robust error correction, and hexagonal, carry-free encoding of spatial position. These properties in turn enable robust path integration and association with sensory inputs. More generally, the model formalizes how compositional computations could occur in the hippocampal formation and leads to testable experimental predictions.
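A toy numpy sketch of the general residue-number/vector-binding idea (an illustration only; the periods, dimensionality, and encoding details are invented and far simpler than the paper's model): each module represents position modulo its period with a phasor vector, modules are combined by componentwise binding, and the least common multiple of the periods sets the unambiguous coding range.

import numpy as np

rng = np.random.default_rng(0)
D = 2000                       # vector dimensionality
periods = [3, 5, 7]            # coprime module periods -> coding range lcm = 105

# One random phasor base vector per module, with phases restricted to
# p-th roots of unity so that integer positions wrap with period p.
bases = [np.exp(2j * np.pi * rng.integers(0, p, D) / p) for p in periods]

def encode(x):
    """Bind per-module residue representations (b ** x) into one position vector."""
    out = np.ones(D, dtype=complex)
    for b in bases:
        out = out * (b ** x)   # Hadamard (componentwise) binding
    return out

def similarity(a, b):
    return np.real(np.vdot(a, b)) / D

v0 = encode(0)
print(similarity(v0, encode(0)))    # ~1.0
print(similarity(v0, encode(1)))    # ~0: a different position
print(similarity(v0, encode(105)))  # ~1.0 again: the lcm of the periods sets the range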
Affiliation(s)
- Sonia Mazelet
- Redwood Center for Theoretical Neuroscience, UC Berkeley, Berkeley, USA
- Université Paris-Saclay, ENS Paris-Saclay, Gif-sur-Yvette, France
- Anthony Thomas
- Redwood Center for Theoretical Neuroscience, UC Berkeley, Berkeley, USA
- Denis Kleyko
- Centre for Applied Autonomous Sensor Systems, Örebro University, Örebro, Sweden
- Friedrich T Sommer
- Redwood Center for Theoretical Neuroscience, UC Berkeley, Berkeley, USA
- Intel Labs, Santa Clara, USA
- Bruno A Olshausen
- Redwood Center for Theoretical Neuroscience, UC Berkeley, Berkeley, USA
5
Redman WT, Acosta-Mendoza S, Wei XX, Goard MJ. Robust variability of grid cell properties within individual grid modules enhances encoding of local space. bioRxiv 2024:2024.02.27.582373. PMID: 38915504; PMCID: PMC11195105; DOI: 10.1101/2024.02.27.582373.
Abstract
Although grid cells are one of the most well-studied functional classes of neurons in the mammalian brain, the assumption that there is a single grid orientation and spacing per grid module has not been carefully tested. We investigate and analyze a recent large-scale recording of medial entorhinal cortex to characterize the presence and degree of heterogeneity of grid properties within individual modules. We find evidence for small, but robust, variability and hypothesize that this property of the grid code could enhance the encoding of local spatial information. Performing analysis on synthetic populations of grid cells, where we have complete control over the amount of heterogeneity in grid properties, we demonstrate that variability, of a similar magnitude to the analyzed data, leads to significantly decreased decoding error, even when restricted to activity from a single module. Our results highlight how the heterogeneity of neural response properties may benefit coding and open new directions for theoretical and experimental analysis of grid cells.
Affiliation(s)
- William T Redman
- Interdepartmental Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara
- Intelligent Systems Center, Johns Hopkins University Applied Physics Lab
- Santiago Acosta-Mendoza
- Interdepartmental Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara
- Xue-Xin Wei
- Department of Neuroscience, The University of Texas at Austin
- Department of Psychology, The University of Texas at Austin
- Center for Perceptual Systems, The University of Texas at Austin
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin
- Michael J Goard
- Department of Psychological and Brain Sciences, University of California, Santa Barbara
- Department of Molecular, Cellular, and Developmental Biology, University of California, Santa Barbara
- Neuroscience Research Institute, University of California, Santa Barbara
6
Sutton NM, Gutiérrez-Guzmán BE, Dannenberg H, Ascoli GA. A Continuous Attractor Model with Realistic Neural and Synaptic Properties Quantitatively Reproduces Grid Cell Physiology. Int J Mol Sci 2024; 25:6059. PMID: 38892248; PMCID: PMC11173171; DOI: 10.3390/ijms25116059.
Abstract
Computational simulations with data-driven physiological detail can foster a deeper understanding of the neural mechanisms involved in cognition. Here, we utilize the wealth of cellular properties from Hippocampome.org to study neural mechanisms of spatial coding with a spiking continuous attractor network model of medial entorhinal cortex circuit activity. The primary goal is to investigate if adding such realistic constraints could produce firing patterns similar to those measured in real neurons. Biological characteristics included in the work are excitability, connectivity, and synaptic signaling of neuron types defined primarily by their axonal and dendritic morphologies. We investigate the spiking dynamics in specific neuron types and the synaptic activities between groups of neurons. Modeling the rodent hippocampal formation keeps the simulations to a computationally reasonable scale while also anchoring the parameters and results to experimental measurements. Our model generates grid cell activity that well matches the spacing, size, and firing rates of grid fields recorded in live behaving animals from both published datasets and new experiments performed for this study. Our simulations also recreate different scales of those properties, e.g., small and large, as found along the dorsoventral axis of the medial entorhinal cortex. Computational exploration of neuronal and synaptic model parameters reveals that a broad range of neural properties produce grid fields in the simulation. These results demonstrate that the continuous attractor network model of grid cells is compatible with a spiking neural network implementation sourcing data-driven biophysical and anatomical parameters from Hippocampome.org. The software (version 1.0) is released as open source to enable broad community reuse and encourage novel applications.
Affiliation(s)
- Nate M. Sutton
- Bioengineering Department, George Mason University, Fairfax, VA 22030, USA
- Blanca E. Gutiérrez-Guzmán
- Bioengineering Department, George Mason University, Fairfax, VA 22030, USA
- Holger Dannenberg
- Bioengineering Department, George Mason University, Fairfax, VA 22030, USA
- Interdisciplinary Program in Neuroscience, George Mason University, Fairfax, VA 22030, USA
- Giorgio A. Ascoli
- Bioengineering Department, George Mason University, Fairfax, VA 22030, USA
- Interdisciplinary Program in Neuroscience, George Mason University, Fairfax, VA 22030, USA
7
Sutton N, Gutiérrez-Guzmán B, Dannenberg H, Ascoli GA. A Continuous Attractor Model with Realistic Neural and Synaptic Properties Quantitatively Reproduces Grid Cell Physiology. bioRxiv 2024:2024.04.29.591748. PMID: 38746202; PMCID: PMC11092518; DOI: 10.1101/2024.04.29.591748.
Abstract
Computational simulations with data-driven physiological detail can foster a deeper understanding of the neural mechanisms involved in cognition. Here, we utilize the wealth of cellular properties from Hippocampome.org to study neural mechanisms of spatial coding with a spiking continuous attractor network model of medial entorhinal cortex circuit activity. The primary goal was to investigate if adding such realistic constraints could produce firing patterns similar to those measured in real neurons. Biological characteristics included in the work are excitability, connectivity, and synaptic signaling of neuron types defined primarily by their axonal and dendritic morphologies. We investigate the spiking dynamics in specific neuron types and the synaptic activities between groups of neurons. Modeling the rodent hippocampal formation keeps the simulations to a computationally reasonable scale while also anchoring the parameters and results to experimental measurements. Our model generates grid cell activity that well matches the spacing, size, and firing rates of grid fields recorded in live behaving animals from both published datasets and new experiments performed for this study. Our simulations also recreate different scales of those properties, e.g., small and large, as found along the dorsoventral axis of the medial entorhinal cortex. Computational exploration of neuronal and synaptic model parameters reveals that a broad range of neural properties produce grid fields in the simulation. These results demonstrate that the continuous attractor network model of grid cells is compatible with a spiking neural network implementation sourcing data-driven biophysical and anatomical parameters from Hippocampome.org. The software is released as open source to enable broad community reuse and encourage novel applications.
Affiliation(s)
- Nate Sutton
- Bioengineering Department, George Mason University
- Holger Dannenberg
- Bioengineering Department, George Mason University
- Interdisciplinary Program in Neuroscience, George Mason University
- Giorgio A. Ascoli
- Bioengineering Department, George Mason University
- Interdisciplinary Program in Neuroscience, George Mason University
8
Chen D, Axmacher N, Wang L. Grid codes underlie multiple cognitive maps in the human brain. Prog Neurobiol 2024; 233:102569. PMID: 38232782; DOI: 10.1016/j.pneurobio.2024.102569.
Abstract
Grid cells fire at multiple positions that organize the vertices of equilateral triangles tiling a 2D space and are well studied in rodents. The last decade witnessed rapid progress in two other research lines on grid codes-empirical studies on distributed human grid-like representations in physical and multiple non-physical spaces, and cognitive computational models addressing the function of grid cells based on principles of efficient and predictive coding. Here, we review the progress in these fields and integrate these lines into a systematic organization. We also discuss the coordinate mechanisms of grid codes in the human entorhinal cortex and medial prefrontal cortex and their role in neurological and psychiatric diseases.
Affiliation(s)
- Dong Chen
- CAS Key Laboratory of Mental Health, Institute of Psychology, 100101, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, 100101, Beijing, China
- Nikolai Axmacher
- Department of Neuropsychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr University Bochum, 44801, Bochum, Germany
- Liang Wang
- CAS Key Laboratory of Mental Health, Institute of Psychology, 100101, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, 100101, Beijing, China
9
Schaeffer R, Khona M, Koyejo S, Fiete IR. Disentangling Fact from Grid Cell Fiction in Trained Deep Path Integrators. arXiv 2023:arXiv:2312.03954v3. PMID: 38106458; PMCID: PMC10723537.
Abstract
Work on deep learning-based models of grid cells suggests that grid cells generically and robustly arise from optimizing networks to path integrate, i.e., track one's spatial position by integrating self-velocity signals. In previous work [27], we challenged this path integration hypothesis by showing that deep neural networks trained to path integrate almost always do so, but almost never learn grid-like tuning unless separately inserted by researchers via mechanisms unrelated to path integration. In this work, we restate the key evidence substantiating these insights, then address a response to [27] by authors of one of the path integration hypothesis papers [32]. First, we show that the response misinterprets our work, indirectly confirming our points. Second, we evaluate the response's preferred "unified theory for the origin of grid cells" in trained deep path integrators [31, 33, 34] and show that it is at best "occasionally suggestive," not exact or comprehensive. We finish by considering why assessing model quality through prediction of biological neural activity by regression of activity in deep networks [23] can lead to the wrong conclusions.
Affiliation(s)
- Ila Rani Fiete
- Brain and Cognitive Sciences, MIT
- McGovern Institute for Brain Research at MIT
10
Ohki T, Kunii N, Chao ZC. Efficient, continual, and generalized learning in the brain - neural mechanism of Mental Schema 2.0. Rev Neurosci 2023; 34:839-868. PMID: 36960579; DOI: 10.1515/revneuro-2022-0137.
Abstract
There has been tremendous progress in artificial neural networks (ANNs) over the past decade; however, the gap between ANNs and the biological brain as a learning device remains large. With the goal of closing this gap, this paper reviews learning mechanisms in the brain by focusing on three important issues in ANN research: efficiency, continuity, and generalization. We first discuss the method by which the brain utilizes a variety of self-organizing mechanisms to maximize learning efficiency, with a focus on the role of spontaneous activity of the brain in shaping synaptic connections to facilitate spatiotemporal learning and numerical processing. Then, we examine the neuronal mechanisms that enable lifelong continual learning, with a focus on memory replay during sleep and its implementation in brain-inspired ANNs. Finally, we explore the method by which the brain generalizes learned knowledge to new situations, particularly from the mathematical generalization perspective of topology. Besides a systematic comparison of learning mechanisms between the brain and ANNs, we propose "Mental Schema 2.0," a new computational property underlying the brain's unique learning ability that can be implemented in ANNs.
Affiliation(s)
- Takefumi Ohki
- International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo Institutes for Advanced Study, The University of Tokyo, Tokyo 113-0033, Japan
- Naoto Kunii
- Department of Neurosurgery, The University of Tokyo, Tokyo 113-0033, Japan
- Zenas C Chao
- International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo Institutes for Advanced Study, The University of Tokyo, Tokyo 113-0033, Japan
11
Parra-Barrero E, Vijayabaskaran S, Seabrook E, Wiskott L, Cheng S. A map of spatial navigation for neuroscience. Neurosci Biobehav Rev 2023; 152:105200. PMID: 37178943; DOI: 10.1016/j.neubiorev.2023.105200.
Abstract
Spatial navigation has received much attention from neuroscientists, leading to the identification of key brain areas and the discovery of numerous spatially selective cells. Despite this progress, our understanding of how the pieces fit together to drive behavior is generally lacking. We argue that this is partly caused by insufficient communication between behavioral and neuroscientific researchers. This has led the latter to under-appreciate the relevance and complexity of spatial behavior, and to focus too narrowly on characterizing neural representations of space-disconnected from the computations these representations are meant to enable. We therefore propose a taxonomy of navigation processes in mammals that can serve as a common framework for structuring and facilitating interdisciplinary research in the field. Using the taxonomy as a guide, we review behavioral and neural studies of spatial navigation. In doing so, we validate the taxonomy and showcase its usefulness in identifying potential issues with common experimental approaches, designing experiments that adequately target particular behaviors, correctly interpreting neural activity, and pointing to new avenues of research.
Affiliation(s)
- Eloy Parra-Barrero
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, Bochum, Germany; International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
- Sandhiya Vijayabaskaran
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, Bochum, Germany
- Eddie Seabrook
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, Bochum, Germany
- Laurenz Wiskott
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, Bochum, Germany; International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
- Sen Cheng
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, Bochum, Germany; International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
12
Sorscher B, Mel GC, Ocko SA, Giocomo LM, Ganguli S. A unified theory for the computational and mechanistic origins of grid cells. Neuron 2023; 111:121-137.e13. PMID: 36306779; DOI: 10.1016/j.neuron.2022.10.003.
Abstract
The discovery of entorhinal grid cells has generated considerable interest in how and why hexagonal firing fields might emerge in a generic manner from neural circuits, and what their computational significance might be. Here, we forge a link between the problem of path integration and the existence of hexagonal grids, by demonstrating that such grids arise in neural networks trained to path integrate under simple biologically plausible constraints. Moreover, we develop a unifying theory for why hexagonal grids are ubiquitous in path-integrator circuits. Such trained networks also yield powerful mechanistic hypotheses, exhibiting realistic levels of biological variability not captured by hand-designed models. We furthermore develop methods to analyze the connectome and activity maps of our networks to elucidate fundamental mechanisms underlying path integration. These methods provide a road map to go from connectomic and physiological measurements to conceptual understanding in a manner that could generalize to other settings.
Affiliation(s)
- Ben Sorscher
- Department of Applied Physics, Stanford University, Stanford, CA 94305, USA
- Gabriel C Mel
- Neurosciences PhD Program, Stanford University, Stanford, CA 94305, USA
- Samuel A Ocko
- Department of Applied Physics, Stanford University, Stanford, CA 94305, USA
- Lisa M Giocomo
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Surya Ganguli
- Department of Applied Physics, Stanford University, Stanford, CA 94305, USA; Department of Neurobiology, Stanford University School of Medicine, Stanford, CA 94305, USA
13
Tesileanu T, Piasini E, Balasubramanian V. Efficient processing of natural scenes in visual cortex. Front Cell Neurosci 2022; 16:1006703. PMID: 36545653; PMCID: PMC9760692; DOI: 10.3389/fncel.2022.1006703.
Abstract
Neural circuits in the periphery of the visual, auditory, and olfactory systems are believed to use limited resources efficiently to represent sensory information by adapting to the statistical structure of the natural environment. This "efficient coding" principle has been used to explain many aspects of early visual circuits including the distribution of photoreceptors, the mosaic geometry and center-surround structure of retinal receptive fields, the excess OFF pathways relative to ON pathways, saccade statistics, and the structure of simple cell receptive fields in V1. We know less about the extent to which such adaptations may occur in deeper areas of cortex beyond V1. We thus review recent developments showing that the perception of visual textures, which depends on processing in V2 and beyond in mammals, is adapted in rats and humans to the multi-point statistics of luminance in natural scenes. These results suggest that central circuits in the visual brain are adapted for seeing key aspects of natural scenes. We conclude by discussing how adaptation to natural temporal statistics may aid in learning and representing visual objects, and propose two challenges for the future: (1) explaining the distribution of shape sensitivity in the ventral visual stream from the statistics of object shape in natural images, and (2) explaining cell types of the vertebrate retina in terms of feature detectors that are adapted to the spatio-temporal structures of natural stimuli. We also discuss how new methods based on machine learning may complement the normative, principles-based approach to theoretical neuroscience.
Affiliation(s)
- Tiberiu Tesileanu
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, United States
- Eugenio Piasini
- Scuola Internazionale Superiore di Studi Avanzati (SISSA), Trieste, Italy
- Vijay Balasubramanian
- Department of Physics and Astronomy, David Rittenhouse Laboratory, University of Pennsylvania, Philadelphia, PA, United States; Santa Fe Institute, Santa Fe, NM, United States
14
Wang R, Kang L. Multiple bumps can enhance robustness to noise in continuous attractor networks. PLoS Comput Biol 2022; 18:e1010547. PMID: 36215305; PMCID: PMC9584540; DOI: 10.1371/journal.pcbi.1010547.
Abstract
A central function of continuous attractor networks is encoding coordinates and accurately updating their values through path integration. To do so, these networks produce localized bumps of activity that move coherently in response to velocity inputs. In the brain, continuous attractors are believed to underlie grid cells and head direction cells, which maintain periodic representations of position and orientation, respectively. These representations can be achieved with any number of activity bumps, and the consequences of having more or fewer bumps are unclear. We address this knowledge gap by constructing 1D ring attractor networks with different bump numbers and characterizing their responses to three types of noise: fluctuating inputs, spiking noise, and deviations in connectivity away from ideal attractor configurations. Across all three types, networks with more bumps experience less noise-driven deviations in bump motion. This translates to more robust encodings of linear coordinates, like position, assuming that each neuron represents a fixed length no matter the bump number. Alternatively, we consider encoding a circular coordinate, like orientation, such that the network distance between adjacent bumps always maps onto 360 degrees. Under this mapping, bump number does not significantly affect the amount of error in the coordinate readout. Our simulation results are intuitively explained and quantitatively matched by a unified theory for path integration and noise in multi-bump networks. Thus, to suppress the effects of biologically relevant noise, continuous attractor networks can employ more bumps when encoding linear coordinates; this advantage disappears when encoding circular coordinates. Our findings provide motivation for multiple bumps in the mammalian grid network.
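A rate-based caricature of such a network (a rough numpy sketch under invented parameters, not the paper's model): a 1D ring with global inhibition plus excitation at spatial frequency k settles into k coherent activity bumps, so the connectivity rather than the encoded variable sets the bump number.

import numpy as np

N, k = 256, 3                    # neurons on the ring; k sets the number of bumps
tau, dt, steps = 10.0, 1.0, 2000

theta = 2 * np.pi * np.arange(N) / N
# Translation-invariant connectivity: global inhibition plus excitation at frequency k.
J0, J1 = -1.0, 3.0
W = (J0 + J1 * np.cos(k * (theta[:, None] - theta[None, :]))) / N

rng = np.random.default_rng(1)
r = 0.1 * rng.random(N)          # small random initial rates
b = 1.0                          # uniform external drive

for _ in range(steps):
    r = r + (dt / tau) * (-r + np.maximum(W @ r + b, 0.0))   # rectified rate dynamics

active = r > 0.5 * r.max()
n_bumps = np.sum(active & ~np.roll(active, 1))   # count rising edges around the ring
print(n_bumps)                                   # == k evenly spaced bumps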
Affiliation(s)
- Raymond Wang
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, California, United States of America
- Neural Circuits and Computations Unit, RIKEN Center for Brain Science, Wako, Saitama, Japan
- Louis Kang
- Neural Circuits and Computations Unit, RIKEN Center for Brain Science, Wako, Saitama, Japan
15
Wang J, Yan R, Tang H. Grid cell modeling with mapping representation of self-motion for path integration. Neural Comput Appl 2022. DOI: 10.1007/s00521-021-06039-x.
16
Abstract
A central goal of neuroscience is to understand the representations formed by brain activity patterns and their connection to behaviour. The classic approach is to investigate how individual neurons encode stimuli and how their tuning determines the fidelity of the neural representation. Tuning analyses often use the Fisher information to characterize the sensitivity of neural responses to small changes of the stimulus. In recent decades, measurements of large populations of neurons have motivated a complementary approach, which focuses on the information available to linear decoders. The decodable information is captured by the geometry of the representational patterns in the multivariate response space. Here we review neural tuning and representational geometry with the goal of clarifying the relationship between them. The tuning induces the geometry, but different sets of tuned neurons can induce the same geometry. The geometry determines the Fisher information, the mutual information and the behavioural performance of an ideal observer in a range of psychophysical tasks. We argue that future studies can benefit from considering both tuning and geometry to understand neural codes and reveal the connections between stimuli, brain activity and behaviour.
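For concreteness, the tuning-curve form of the Fisher information that such analyses rely on is the standard expression for a population of independent Poisson neurons with tuning curves f_i(theta) (a textbook formula, not a result specific to this piece), together with the Cramér-Rao bound it implies for any unbiased decoder:

I_F(\theta) = \sum_i \frac{f_i'(\theta)^2}{f_i(\theta)},
\qquad
\operatorname{Var}(\hat{\theta}) \ge \frac{1}{I_F(\theta)}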
17
DiTullio RW, Balasubramanian V. Dynamical self-organization and efficient representation of space by grid cells. Curr Opin Neurobiol 2021; 70:206-213. PMID: 34861597; PMCID: PMC8688296; DOI: 10.1016/j.conb.2021.11.007.
Abstract
To plan trajectories and navigate, animals must maintain a mental representation of the environment and their own position within it. This "cognitive map" is thought to be supported in part by the entorhinal cortex, where grid cells are active when an animal occupies the vertices of a scaling hierarchy of periodic lattices of locations in an enclosure. Here, we review computational developments which suggest that the grid cell network is: (a) efficient, providing required spatial resolution with a minimum number of neurons, (b) self-organizing, dynamically coordinating the structure and scale of the responses, and (c) adaptive, re-organizing in response to changes in landmarks and the structure of the boundaries of spaces. We consider these ideas in light of recent discoveries of similar structures in the mental representation of abstract spaces of shapes and smells, and in other brain areas, and highlight promising directions for future research.
Affiliation(s)
- Ronald W. DiTullio
- David Rittenhouse Laboratories & Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA 19104
- Vijay Balasubramanian
- David Rittenhouse Laboratories & Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA 19104
18
Learning an Efficient Hippocampal Place Map from Entorhinal Inputs Using Non-Negative Sparse Coding. eNeuro 2021; 8:ENEURO.0557-20.2021. PMID: 34162691; PMCID: PMC8266216; DOI: 10.1523/eneuro.0557-20.2021.
Abstract
Cells in the entorhinal cortex (EC) contain rich spatial information and project strongly to the hippocampus where a cognitive map is supposedly created. These cells range from cells with structured spatial selectivity, such as grid cells in the medial EC (MEC) that are selective to an array of spatial locations that form a hexagonal grid, to weakly spatial cells, such as non-grid cells in the MEC and lateral EC (LEC) that contain spatial information but have no structured spatial selectivity. However, in a small environment, place cells in the hippocampus are generally selective to a single location of the environment, while granule cells in the dentate gyrus of the hippocampus have multiple discrete firing locations but lack spatial periodicity. Given the anatomic connection from the EC to the hippocampus, how the hippocampus retrieves information from upstream EC remains unclear. Here, we propose a unified learning model that can describe the spatial tuning properties of both hippocampal place cells and dentate gyrus granule cells based on non-negative sparse coding from EC inputs. Sparse coding plays an important role in many cortical areas and is proposed here to have a key role in the hippocampus. Our results show that the hexagonal patterns of MEC grid cells with various orientations, grid spacings and phases are necessary for the model to learn different place cells that efficiently tile the entire spatial environment. However, if there is a lack of diversity in any grid parameters or a lack of hippocampal cells in the network, this will lead to the emergence of hippocampal cells that have multiple firing locations. More surprisingly, the model can also learn hippocampal place cells even when weakly spatial cells, instead of grid cells, are used as the input to the hippocampus. This work suggests that sparse coding may be one of the underlying organizing principles for the navigational system of the brain.
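A generic sketch of the non-negative sparse coding computation (a projected-gradient formulation with invented sizes and parameters; the paper's actual learning rules and network architecture may differ): infer non-negative codes that sparsely reconstruct each entorhinal input, then nudge the dictionary toward lower reconstruction error.

import numpy as np

def nnsc_codes(x, D, lam=0.1, iters=300):
    """Non-negative sparse code for input x given dictionary D, via projected gradient."""
    step = 1.0 / np.linalg.norm(D.T @ D, 2)        # step size from the Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ a - x) + lam             # gradient of 0.5*||x - D a||^2 + lam*sum(a)
        a = np.maximum(a - step * grad, 0.0)       # project onto the non-negative orthant
    return a

def nnsc_dict_step(X, D, lam=0.1, lr=0.01):
    """One dictionary update over a batch X (rows are inputs, e.g. entorhinal population vectors)."""
    A = np.stack([nnsc_codes(x, D, lam) for x in X])          # codes, shape (batch, K)
    D = D + lr * (X - A @ D.T).T @ A / len(X)                 # reduce reconstruction error
    D = np.maximum(D, 0.0)                                    # keep weights non-negative
    return D / np.maximum(np.linalg.norm(D, axis=0), 1e-8)    # renormalize dictionary columns

# Toy usage: 100 non-negative "entorhinal" inputs of dimension 50, 20 dictionary elements.
rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(100, 50)))
D = np.abs(rng.normal(size=(50, 20)))
D /= np.linalg.norm(D, axis=0)
for _ in range(10):
    D = nnsc_dict_step(X, D)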
19
Why grid cells function as a metric for space. Neural Netw 2021; 142:128-137. PMID: 34000560; DOI: 10.1016/j.neunet.2021.04.031.
Abstract
The brain is able to calculate the distance and direction to a desired position based on grid cells. Extensive neurophysiological studies of rodent navigation have postulated that grid cells function as a metric for space, and have inspired many computational studies to develop innovative navigation approaches. Furthermore, grid cells may provide a general encoding scheme for high-order nonspatial information. Building on existing neuroscience and machine learning work, this paper provides theoretical clarity on how the grid cell population code can be taken as a metric for space. The metric is generated by a shift-invariant positive definite kernel via the kernel distance method and embeds isometrically in a Euclidean space, and the inner product of the grid cell population code converges exponentially to the kernel. We also provide a method to learn the distribution of the grid cell population efficiently. Grid cells, as a scalable position encoding method, can encode the spatial relationships of places, enabling them to outperform place cells in navigation. Further, we extend grid cells to image encoding and find that grid cells embed images into a mental map, where geometric relationships are conceptual relationships of images. The theoretical model and analysis contribute to establishing the grid cell code as a generic coding scheme for both spatial and conceptual spaces, and are promising for a multitude of problems across spatial cognition, machine learning, and semantic cognition.
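A 1D toy illustration of the shift-invariance property referred to here (simplified numerics with invented spacings, not the paper's formal construction): with dense, uniform phase coverage within each module, the inner product between grid population vectors at two positions depends, to good approximation, only on their separation, i.e. it behaves like a kernel of the displacement.

import numpy as np

periods = [1.0, 1.4, 2.0]                          # toy module spacings
phases = np.linspace(0, 1, 200, endpoint=False)    # dense, uniform phase coverage per module

def population(x):
    """Concatenated 1D grid-like rates (periodic bumps) across all modules."""
    return np.concatenate(
        [np.exp(2.0 * np.cos(2 * np.pi * (x / p - phases))) for p in periods])

def inner(x1, x2):
    return population(x1) @ population(x2)

# Shift invariance: the inner product depends (approximately) only on x1 - x2.
print(inner(0.3, 0.8), inner(1.3, 1.8), inner(5.0, 5.5))   # all nearly equal
print(inner(0.0, 0.0) > inner(0.0, 0.5))                    # similarity decays with separation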
20
Kang L, Xu B, Morozov D. Evaluating State Space Discovery by Persistent Cohomology in the Spatial Representation System. Front Comput Neurosci 2021; 15:616748. PMID: 33897395; PMCID: PMC8060447; DOI: 10.3389/fncom.2021.616748.
Abstract
Persistent cohomology is a powerful technique for discovering topological structure in data. Strategies for its use in neuroscience are still undergoing development. We comprehensively and rigorously assess its performance in simulated neural recordings of the brain's spatial representation system. Grid, head direction, and conjunctive cell populations each span low-dimensional topological structures embedded in high-dimensional neural activity space. We evaluate the ability for persistent cohomology to discover these structures for different dataset dimensions, variations in spatial tuning, and forms of noise. We quantify its ability to decode simulated animal trajectories contained within these topological structures. We also identify regimes under which mixtures of populations form product topologies that can be detected. Our results reveal how dataset parameters affect the success of topological discovery and suggest principles for applying persistent cohomology, as well as persistent homology, to experimental neural recordings.
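A minimal example of the kind of analysis being evaluated (this assumes the ripser.py package from scikit-tda and treats the ripser(X, maxdim=1)['dgms'] call as an assumption about its API; the simulated population and parameters are invented): generate a noisy ring of head-direction-like population activity and look for one long-lived 1-dimensional bar in the persistence diagram. Over a field, persistent homology and cohomology have the same barcodes, so this stands in for the cohomology computation discussed above.

import numpy as np
from ripser import ripser   # scikit-tda package; assumed available

rng = np.random.default_rng(0)
n_cells, n_samples = 50, 400
preferred = 2 * np.pi * rng.random(n_cells)         # preferred head directions
angles = 2 * np.pi * rng.random(n_samples)          # sampled headings

# Population activity: von Mises tuning plus noise -> points on a noisy ring
# embedded in 50-dimensional neural activity space.
activity = np.exp(1.5 * np.cos(angles[:, None] - preferred[None, :]))
activity += 0.05 * rng.normal(size=activity.shape)

dgms = ripser(activity, maxdim=1)['dgms']
persistence = dgms[1][:, 1] - dgms[1][:, 0]          # lifetimes of H1 bars
print(np.sort(persistence)[-3:])   # one bar should persist far longer than the rest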
Affiliation(s)
- Louis Kang
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA, United States
- Neural Circuits and Computations Unit, RIKEN Center for Brain Science, Wako, Japan
- Boyan Xu
- Department of Mathematics, University of California, Berkeley, Berkeley, CA, United States
- Dmitriy Morozov
- Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA, United States
21
Agmon H, Burak Y. A theory of joint attractor dynamics in the hippocampus and the entorhinal cortex accounts for artificial remapping and grid cell field-to-field variability. eLife 2020; 9:e56894. PMID: 32779570; PMCID: PMC7447444; DOI: 10.7554/elife.56894.
Abstract
The representation of position in the mammalian brain is distributed across multiple neural populations. Grid cell modules in the medial entorhinal cortex (MEC) express activity patterns that span a low-dimensional manifold which remains stable across different environments. In contrast, the activity patterns of hippocampal place cells span distinct low-dimensional manifolds in different environments. It is unknown how these multiple representations of position are coordinated. Here, we develop a theory of joint attractor dynamics in the hippocampus and the MEC. We show that the system exhibits a coordinated, joint representation of position across multiple environments, consistent with global remapping in place cells and grid cells. In addition, our model accounts for recent experimental observations that lack a mechanistic explanation: variability in the firing rate of single grid cells across firing fields, and artificial remapping of place cells under depolarization, but not under hyperpolarization, of layer II stellate cells of the MEC.
Affiliation(s)
- Haggai Agmon
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Yoram Burak
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel; Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem, Israel
22
Tesileanu T, Conte MM, Briguglio JJ, Hermundstad AM, Victor JD, Balasubramanian V. Efficient coding of natural scene statistics predicts discrimination thresholds for grayscale textures. eLife 2020; 9:e54347. PMID: 32744505; PMCID: PMC7494356; DOI: 10.7554/elife.54347.
Abstract
Previously, in Hermundstad et al., 2014, we showed that when sampling is limiting, the efficient coding principle leads to a 'variance is salience' hypothesis, and that this hypothesis accounts for visual sensitivity to binary image statistics. Here, using extensive new psychophysical data and image analysis, we show that this hypothesis accounts for visual sensitivity to a large set of grayscale image statistics at a striking level of detail, and also identify the limits of the prediction. We define a 66-dimensional space of local grayscale light-intensity correlations, and measure the relevance of each direction to natural scenes. The 'variance is salience' hypothesis predicts that two-point correlations are most salient, and predicts their relative salience. We tested these predictions in a texture-segregation task using un-natural, synthetic textures. As predicted, correlations beyond second order are not salient, and predicted thresholds for over 300 second-order correlations match psychophysical thresholds closely (median fractional error <0.13).
Affiliation(s)
- Mary M Conte
- Feil Family Brain and Mind Institute, Weill Cornell Medical College, New York, United States
- Jonathan D Victor
- Feil Family Brain and Mind Institute, Weill Cornell Medical College, New York, United States
23
Waniek N. Transition Scale-Spaces: A Computational Theory for the Discretized Entorhinal Cortex. Neural Comput 2020; 32:330-394. DOI: 10.1162/neco_a_01255.
Abstract
Although hippocampal grid cells are thought to be crucial for spatial navigation, their computational purpose remains disputed. Recently, they were proposed to represent spatial transitions and convey this knowledge downstream to place cells. However, a single scale of transitions is insufficient to plan long goal-directed sequences in behaviorally acceptable time. Here, a scale-space data structure is suggested to optimally accelerate retrievals from transition systems, called transition scale-space (TSS). Remaining exclusively on an algorithmic level, the scale increment is proved to be ideally [Formula: see text] for biologically plausible receptive fields. It is then argued that temporal buffering is necessary to learn the scale-space online. Next, two modes for retrieval of sequences from the TSS are presented: top down and bottom up. The two modes are evaluated in symbolic simulations (i.e., without biologically plausible spiking neurons). Additionally, a TSS is used for short-cut discovery in a simulated Morris water maze. Finally, the results are discussed in depth with respect to biological plausibility, and several testable predictions are derived. Moreover, relations to other grid cell models, multiresolution path planning, and scale-space theory are highlighted. Summarized, reward-free transition encoding is shown here, in a theoretical model, to be compatible with the observed discretization along the dorso-ventral axis of the medial entorhinal cortex. Because the theoretical model generalizes beyond navigation, the TSS is suggested to be a general-purpose cortical data structure for fast retrieval of sequences and relational knowledge. Source code for all simulations presented in this paper can be found at https://github.com/rochus/transitionscalespace .
Affiliation(s)
- Nicolai Waniek
- Bosch Center for Artificial Intelligence, Robert Bosch GmbH, 71272 Renningen, Germany
24
Mok RM, Love BC. A non-spatial account of place and grid cells based on clustering models of concept learning. Nat Commun 2019; 10:5685. PMID: 31831749; PMCID: PMC6908717; DOI: 10.1038/s41467-019-13760-8.
Abstract
One view is that conceptual knowledge is organized using the circuitry in the medial temporal lobe (MTL) that supports spatial processing and navigation. In contrast, we find that a domain-general learning algorithm explains key findings in both spatial and conceptual domains. When the clustering model is applied to spatial navigation tasks, so-called place and grid cell-like representations emerge because of the relatively uniform distribution of possible inputs in these tasks. The same mechanism, applied to conceptual tasks where the overall space can be higher-dimensional and sampling sparser, leads to representations more aligned with human conceptual knowledge. Although the types of memory supported by the MTL are superficially dissimilar, the information processing steps appear shared. Our account suggests that the MTL uses a general-purpose algorithm to learn and organize context-relevant information in a useful format, rather than relying on navigation-specific neural circuitry.
Affiliation(s)
- Robert M Mok
- Department of Experimental Psychology, University College London, 26 Bedford Way, London, WC1H 0AP, UK
- Bradley C Love
- Department of Experimental Psychology, University College London, 26 Bedford Way, London, WC1H 0AP, UK
- The Alan Turing Institute, London, UK
25
Mosheiff N, Burak Y. Velocity coupling of grid cell modules enables stable embedding of a low dimensional variable in a high dimensional neural attractor. eLife 2019; 8:e48494. PMID: 31469365; PMCID: PMC6756787; DOI: 10.7554/elife.48494.
Abstract
Grid cells in the medial entorhinal cortex (MEC) encode position using a distributed representation across multiple neural populations (modules), each possessing a distinct spatial scale. The modular structure of the representation confers the grid cell neural code with large capacity. Yet, the modularity poses significant challenges for the neural circuitry that maintains the representation, and updates it based on self motion. Small incompatible drifts in different modules, driven by noise, can rapidly lead to large, abrupt shifts in the represented position, resulting in catastrophic readout errors. Here, we propose a theoretical model of coupled modules. The coupling suppresses incompatible drifts, allowing for a stable embedding of a two-dimensional variable (position) in a higher dimensional neural attractor, while preserving the large capacity. We propose that coupling of this type may be implemented by recurrent synaptic connectivity within the MEC with a relatively simple and biologically plausible structure.
Affiliation(s)
- Noga Mosheiff
- Racah Institute of Physics, Hebrew University, Jerusalem, Israel
- Yoram Burak
- Racah Institute of Physics, Hebrew University, Jerusalem, Israel
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem, Israel
26
Kang L, Balasubramanian V. A geometric attractor mechanism for self-organization of entorhinal grid modules. eLife 2019; 8:e46687. PMID: 31373556; PMCID: PMC6776444; DOI: 10.7554/elife.46687.
Abstract
Grid cells in the medial entorhinal cortex (MEC) respond when an animal occupies a periodic lattice of 'grid fields' in the environment. The grids are organized in modules with spatial periods, or scales, clustered around discrete values separated on average by ratios in the range 1.4-1.7. We propose a mechanism that produces this modular structure through dynamical self-organization in the MEC. In attractor network models of grid formation, the grid scale of a single module is set by the distance of recurrent inhibition between neurons. We show that the MEC forms a hierarchy of discrete modules if a smooth increase in inhibition distance along its dorso-ventral axis is accompanied by excitatory interactions along this axis. Moreover, constant scale ratios between successive modules arise through geometric relationships between triangular grids and have values that fall within the observed range. We discuss how interactions required by our model might be tested experimentally.
Affiliation(s)
- Louis Kang
- David Rittenhouse Laboratories, University of Pennsylvania, Philadelphia, United States; Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, United States
- Vijay Balasubramanian
- David Rittenhouse Laboratories, University of Pennsylvania, Philadelphia, United States
27
Rodríguez-Domínguez U, Caplan JB. A hexagonal Fourier model of grid cells. Hippocampus 2018; 29:37-45. DOI: 10.1002/hipo.23028.
Affiliation(s)
- Ulises Rodríguez-Domínguez
- Department of Psychology and Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Alberta, Canada
- Jeremy B. Caplan
- Department of Psychology and Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Alberta, Canada
28
Keinath AT, Epstein RA, Balasubramanian V. Environmental deformations dynamically shift the grid cell spatial metric. eLife 2018; 7:e38169. PMID: 30346272; PMCID: PMC6203432; DOI: 10.7554/elife.38169.
Abstract
In familiar environments, the firing fields of entorhinal grid cells form regular triangular lattices. However, when the geometric shape of the environment is deformed, these time-averaged grid patterns are distorted in a grid scale-dependent and local manner. We hypothesized that this distortion in part reflects dynamic anchoring of the grid code to displaced boundaries, possibly through border cell-grid cell interactions. To test this hypothesis, we first reanalyzed two existing rodent grid rescaling datasets to identify previously unrecognized boundary-tethered shifts in grid phase that contribute to the appearance of rescaling. We then demonstrated in a computational model that boundary-tethered phase shifts, as well as scale-dependent and local distortions of the time-averaged grid pattern, could emerge from border-grid interactions without altering inherent grid scale. Together, these results demonstrate that environmental deformations induce history-dependent shifts in grid phase, and implicate border-grid interactions as a potential mechanism underlying these dynamics.
Collapse
Affiliation(s)
- Alexandra T Keinath
- Department of Psychology, University of Pennsylvania, Pennsylvania, United States
| | - Russell A Epstein
- Department of Psychology, University of Pennsylvania, Pennsylvania, United States
| | - Vijay Balasubramanian
- David Rittenhouse Laboratories, University of Pennsylvania, Philadelphia, United States
| |
Collapse
|
29
|
Vágó L, Ujfalussy BB. Robust and efficient coding with grid cells. PLoS Comput Biol 2018; 14:e1005922. [PMID: 29309406 PMCID: PMC5774847 DOI: 10.1371/journal.pcbi.1005922] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2017] [Revised: 01/19/2018] [Accepted: 12/08/2017] [Indexed: 11/24/2022] Open
Abstract
The neuronal code arising from the coordinated activity of grid cells in the rodent entorhinal cortex can uniquely represent space across a large range of distances, but the precise conditions for optimal coding capacity are known only for environments of finite size. Here we consider a coding scheme that is suitable for unbounded environments, and present a novel, number-theoretic approach to derive the grid parameters that maximise the coding range in the presence of noise. We derive an analytic upper bound on the coding range and provide examples of grid scales that achieve this bound and hence are optimal for encoding in unbounded environments. We show that in the absence of neuronal noise, the capacity of the system is extremely sensitive to the choice of the grid periods. However, when the accuracy of the representation is limited by neuronal noise, the capacity quickly becomes more robust against the choice of grid scales as the number of modules increases. Importantly, we found that the capacity of the system is near optimal even for randomly chosen scales, already at a realistic number of grid modules. Our study demonstrates that robust and efficient coding can be achieved without parameter tuning in the case of the grid cell representation, and provides a solid theoretical explanation for the large diversity of grid scales observed in experimental studies. Moreover, we suggest that having multiple grid modules in the entorhinal cortex is not only required for the exponentially large coding capacity, but is also a prerequisite for the robustness of the system.
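The notion of coding range can be illustrated with a brute-force check (a toy under assumed parameters, not the paper's number-theoretic derivation): the unambiguous range of a modular code is roughly the smallest nonzero displacement whose phase looks like zero in every module, given the phase resolution of the readout.

```python
# Brute-force toy (assumed parameters, not the paper's derivation): the coding
# range of a modular code is roughly the smallest displacement whose phase is
# indistinguishable from zero in every module, at a given phase tolerance.
import numpy as np

def coding_range(periods, tol=1e-6, step=0.01, max_range=1e4):
    periods = np.asarray(periods, dtype=float)
    d = np.arange(step, max_range, step)[:, None]     # candidate displacements
    phases = d % periods                              # phase of each displacement per module
    err = np.minimum(phases, periods - phases)        # circular distance of the phase from 0
    ambiguous = np.all(err <= tol, axis=1)            # looks like zero in every module
    hits = np.nonzero(ambiguous)[0]
    return float(d[hits[0], 0]) if hits.size else max_range

print(coding_range([3, 4, 5]))                  # ~60, the least common multiple
print(coding_range([3.0, 4.1, 5.3]))            # near-incommensurate scales: much larger range
print(coding_range([3.0, 4.1, 5.3], tol=0.3))   # a coarser phase readout shrinks it again
```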
Collapse
Affiliation(s)
- Lajos Vágó
- NAP-B PATTERN Group, MTA Wigner Research Center for Physics, Budapest, Hungary
| | - Balázs B. Ujfalussy
- NAP-B PATTERN Group, MTA Wigner Research Center for Physics, Budapest, Hungary
| |
Collapse
|
30
|
Herz AV, Mathis A, Stemmler M. Periodic population codes: From a single circular variable to higher dimensions, multiple nested scales, and conceptual spaces. Curr Opin Neurobiol 2017; 46:99-108. [PMID: 28888183 DOI: 10.1016/j.conb.2017.07.005] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2017] [Revised: 07/06/2017] [Accepted: 07/19/2017] [Indexed: 12/27/2022]
Abstract
Across the nervous system, neurons often encode circular stimuli using tuning curves that are not sine or cosine functions, but that belong to the richer class of von Mises functions, which are periodic variants of Gaussians. For a population of neurons encoding a single circular variable with such canonical tuning curves, computing a simple population vector is the optimal read-out of the most likely stimulus. We argue that the advantages of population vector read-outs are so compelling that even the neural representation of the outside world's flat Euclidean geometry is curled up into a torus (a circle times a circle), creating the hexagonal activity patterns of mammalian grid cells. Here, the circular scale is not set a priori, so the nervous system can use multiple scales and gain fields to overcome the ambiguity inherent in periodic representations of linear variables. We review the experimental evidence for this framework and discuss its testable predictions and generalizations to more abstract grid-like neural representations.
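As a concrete illustration of the read-out described above (all parameter values are our assumptions), the sketch below builds a population of von Mises tuning curves for a single circular variable and decodes a noisy response with a population vector.

```python
# Sketch with assumed parameters: von Mises tuning curves for one circular
# variable, decoded from noisy spike counts with a population vector.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, kappa, peak_rate = 64, 4.0, 20.0
preferred = np.linspace(0.0, 2.0 * np.pi, n_neurons, endpoint=False)

def rates(theta):
    # von Mises tuning: the periodic analogue of a Gaussian bump around each preferred angle
    return peak_rate * np.exp(kappa * (np.cos(theta - preferred) - 1.0))

true_theta = 2.3
spikes = rng.poisson(rates(true_theta))              # noisy single-trial spike counts

# Population vector: each neuron votes for its preferred angle, weighted by its activity.
pv = np.sum(spikes * np.exp(1j * preferred))
estimate = np.angle(pv) % (2.0 * np.pi)
print(f"true angle {true_theta:.3f} rad, population-vector estimate {estimate:.3f} rad")
```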
Collapse
Affiliation(s)
- Andreas V M Herz
- Bernstein Center for Computational Neuroscience Munich and Faculty of Biology, Ludwig-Maximilians-Universität München, Grosshadernerstrasse 2, 82152 Planegg-Martinsried, Germany.
| | - Alexander Mathis
- Department of Molecular and Cellular Biology and Center for Brain Science, Harvard University, 16 Divinity Avenue, Cambridge, MA 02138, USA; Werner Reichardt Centre for Integrative Neuroscience and Institute for Theoretical Physics, University of Tübingen, 72076 Tübingen, Germany
| | - Martin Stemmler
- Bernstein Center for Computational Neuroscience Munich and Faculty of Biology, Ludwig-Maximilians-Universität München, Grosshadernerstrasse 2, 82152 Planegg-Martinsried, Germany
| |
Collapse
|
31
|
Mosheiff N, Agmon H, Moriel A, Burak Y. An efficient coding theory for a dynamic trajectory predicts non-uniform allocation of entorhinal grid cells to modules. PLoS Comput Biol 2017; 13:e1005597. [PMID: 28628647 PMCID: PMC5495497 DOI: 10.1371/journal.pcbi.1005597] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2017] [Revised: 07/03/2017] [Accepted: 05/25/2017] [Indexed: 11/25/2022] Open
Abstract
Grid cells in the entorhinal cortex encode the position of an animal in its environment using spatially periodic tuning curves with different periodicities. Recent experiments established that these cells are functionally organized in discrete modules with uniform grid spacing. Here we develop a theory for efficient coding of position, which takes into account the temporal statistics of the animal's motion. The theory predicts a sharp decrease of module population sizes with grid spacing, in agreement with the trend seen in the experimental data. We identify a simple scheme for readout of the grid cell code by neural circuitry that can match the optimal Bayesian decoder in accuracy. This readout scheme requires persistence over different timescales, depending on the grid cell module. Thus, we propose that the brain may employ an efficient representation of position that takes advantage of the spatiotemporal statistics of the encoded variable, similar to the principles that govern early sensory processing.
Collapse
Affiliation(s)
- Noga Mosheiff
- Racah Institute of Physics, Hebrew University of Jerusalem, Jerusalem, Israel
| | - Haggai Agmon
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University of Jerusalem, Jerusalem, Israel
| | - Avraham Moriel
- Racah Institute of Physics, Hebrew University of Jerusalem, Jerusalem, Israel
| | - Yoram Burak
- Racah Institute of Physics, Hebrew University of Jerusalem, Jerusalem, Israel
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University of Jerusalem, Jerusalem, Israel
| |
Collapse
|
32
|
Sanzeni A, Balasubramanian V, Tiana G, Vergassola M. Complete coverage of space favors modularity of the grid system in the brain. Phys Rev E 2016; 94:062409. [PMID: 28085304 DOI: 10.1103/physreve.94.062409] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2016] [Indexed: 11/07/2022]
Abstract
Grid cells in the entorhinal cortex fire when animals that are exploring a certain region of space occupy the vertices of a triangular grid that spans the environment. Different neurons feature triangular grids that differ in their properties of periodicity, orientation, and ellipticity. Taken together, these grids allow the animal to maintain an internal, mental representation of physical space. Experiments show that grid cells are modular, i.e., there are groups of neurons which have grids with similar periodicity, orientation, and ellipticity. We use statistical physics methods to derive a relation between variability of the properties of the grids within a module and the range of space that can be covered completely (i.e., without gaps) by the grid system with high probability. Larger variability shrinks the range of representation, providing a functional rationale for the experimentally observed comodularity of grid cell periodicity, orientation, and ellipticity. We obtain a scaling relation between the number of neurons and the period of a module, given the variability and coverage range. Specifically, we predict how many more neurons are required at smaller grid scales than at larger ones.
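A rough one-dimensional Monte Carlo analogue (ours, not the paper's statistical-physics calculation; all numbers are assumptions) shows the same qualitative effect: jittering the within-module period shrinks the range that a fixed set of fields covers without gaps.

```python
# 1D Monte Carlo analogue (assumed numbers, not the paper's calculation):
# variability of the within-module period shrinks the gap-free coverage range.
import numpy as np

def gap_free_range(n_cells=50, period=10.0, field_width=1.0, jitter=0.0,
                   span=500.0, trials=100, seed=0):
    rng = np.random.default_rng(seed)
    first_gap = []
    for _ in range(trials):
        intervals = []
        for _ in range(n_cells):
            p = max(period * (1.0 + jitter * rng.standard_normal()), field_width)
            phase = rng.uniform(0.0, p)               # random spatial phase per cell
            centers = np.arange(phase, span, p)
            intervals.append(np.column_stack([centers - field_width / 2,
                                              centers + field_width / 2]))
        intervals = np.vstack(intervals)
        intervals = intervals[np.argsort(intervals[:, 0])]
        covered_to = 0.0
        for lo, hi in intervals:                      # sweep until the first gap
            if lo > covered_to:
                break
            covered_to = max(covered_to, hi)
        first_gap.append(min(covered_to, span))
    return float(np.mean(first_gap))

for j in (0.0, 0.05, 0.2):
    print(f"period jitter {j:.2f} -> mean gap-free range ~ {gap_free_range(jitter=j):.0f}")
```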
Collapse
Affiliation(s)
- A Sanzeni
- Department of Physics, University of Milan and INFN, Via Celoria 13, 20133 Milano, Italy; Department of Physics, University of California San Diego, La Jolla, California 92093-0374, USA
| | - V Balasubramanian
- David Rittenhouse Laboratory, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
| | - G Tiana
- Centre for Complexity & Biosystems and Department of Physics, University of Milan and INFN, via Celoria 16, 20133 Milano, Italy
| | - M Vergassola
- Department of Physics, University of California San Diego, La Jolla, California 92093-0374, USA
| |
Collapse
|
33
|
Rowland DC, Roudi Y, Moser MB, Moser EI. Ten Years of Grid Cells. Annu Rev Neurosci 2016; 39:19-40.
Abstract
The medial entorhinal cortex (MEC) creates a neural representation of space through a set of functionally dedicated cell types: grid cells, border cells, head direction cells, and speed cells. Grid cells, the most abundant functional cell type in the MEC, have hexagonally arranged firing fields that tile the surface of the environment. These cells were discovered only in 2005, but after 10 years of investigation, we are beginning to understand how they are organized in the MEC network, how their periodic firing fields might be generated, how they are shaped by properties of the environment, and how they interact with the rest of the MEC network. The aim of this review is to summarize what we know about grid cells and point out where our knowledge is still incomplete.
Collapse
Affiliation(s)
- David C Rowland
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, 7491 Trondheim, Norway
| | - Yasser Roudi
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, 7491 Trondheim, Norway
| | - May-Britt Moser
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, 7491 Trondheim, Norway
| | - Edvard I Moser
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, 7491 Trondheim, Norway
| |
Collapse
|
34
|
Aljadeff J, Renfrew D, Vegué M, Sharpee TO. Low-dimensional dynamics of structured random networks. Phys Rev E 2016; 93:022302. [PMID: 26986347 DOI: 10.1103/physreve.93.022302] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2015] [Indexed: 01/12/2023]
Abstract
Using a generalized random recurrent neural network model, and by extending our recently developed mean-field approach [J. Aljadeff, M. Stern, and T. Sharpee, Phys. Rev. Lett. 114, 088101 (2015)], we study the relationship between the network connectivity structure and its low-dimensional dynamics. Each connection in the network is a random number with mean 0 and variance that depends on pre- and postsynaptic neurons through a sufficiently smooth function g of their identities. We find that these networks undergo a phase transition from a silent to a chaotic state at a critical point we derive as a function of g. Above the critical point, although unit activation levels are chaotic, their autocorrelation functions are restricted to a low-dimensional subspace. This provides a direct link between the network's structure and some of its functional characteristics. We discuss example applications of the general results to neuroscience where we derive the support of the spectrum of connectivity matrices with heterogeneous and possibly correlated degree distributions, and to ecology where we study the stability of the cascade model for food web structure.
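A small numerical sketch of this setup (ours, with an arbitrary smooth variance profile g; the spectral radius of the connectivity crossing 1 is used only as a standard proxy for the silent-to-chaotic transition in such rate models):

```python
# Numerical sketch (assumed variance profile, not the paper's derivation): the
# spectral radius of a structured random connectivity matrix, used as a proxy
# for the silent-to-chaotic transition (radius crossing ~1).
import numpy as np

def spectral_radius(N=600, gain=1.0, seed=0):
    rng = np.random.default_rng(seed)
    u = np.linspace(0.0, 1.0, N)
    # Smooth variance profile g(i, j): stronger variance between "nearby" cell identities.
    g = gain * (0.5 + np.exp(-((u[:, None] - u[None, :]) ** 2) / 0.1))
    J = rng.standard_normal((N, N)) * np.sqrt(g / N)  # zero-mean entries, structured variance
    return float(np.max(np.abs(np.linalg.eigvals(J))))

for gain in (0.3, 0.8, 1.5):
    print(f"overall gain {gain:.1f} -> spectral radius ~ {spectral_radius(gain=gain):.2f}")
```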
Collapse
Affiliation(s)
- Johnatan Aljadeff
- Department of Neurobiology, University of Chicago, Chicago, Illinois, USA; Computational Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, California, USA
| | - David Renfrew
- Department of Mathematics, University of California Los Angeles, Los Angeles, California, USA
| | - Marina Vegué
- Centre de Recerca Matemàtica, Campus de Bellaterra, Barcelona, Spain; Departament de Matemàtiques, Universitat Politècnica de Catalunya, Barcelona, Spain
| | - Tatyana O Sharpee
- Computational Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, California, USA
| |
Collapse
|