1. Tye KM, Miller EK, Taschbach FH, Benna MK, Rigotti M, Fusi S. Mixed selectivity: Cellular computations for complexity. Neuron 2024; 112:2289-2303. PMID: 38729151; PMCID: PMC11257803; DOI: 10.1016/j.neuron.2024.04.017.
Abstract
The property of mixed selectivity has been discussed at a computational level and offers a strategy to maximize computational power by adding versatility to the functional role of each neuron. Here, we offer a biologically grounded implementational-level mechanistic explanation for mixed selectivity in neural circuits. We define pure, linear, and nonlinear mixed selectivity and discuss how these response properties can be obtained in simple neural circuits. Neurons that respond to multiple, statistically independent variables display mixed selectivity. If their activity can be expressed as a weighted sum, then they exhibit linear mixed selectivity; otherwise, they exhibit nonlinear mixed selectivity. Neural representations based on diverse nonlinear mixed selectivity are high dimensional; hence, they confer enormous flexibility to a simple downstream readout neural circuit. However, a simple neural circuit cannot possibly encode all possible mixtures of variables simultaneously, as this would require a combinatorially large number of mixed selectivity neurons. Gating mechanisms like oscillations and neuromodulation can solve this problem by dynamically selecting which variables are mixed and transmitted to the readout.
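The distinction the abstract draws between linear and nonlinear mixed selectivity can be made concrete in a few lines. The sketch below is an illustration constructed for this listing (toy response vectors, not code from the paper): with two binary task variables, adding a single nonlinearly mixed unit (here a hypothetical thresholded conjunction) raises the representation to maximal dimensionality, so a linear readout can solve an XOR-like task that defeats any purely linear mixture.

```python
import numpy as np

# Four task conditions from two binary variables (a, b).
conds = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
a, b = conds[:, 0], conds[:, 1]
ones = np.ones(4)

# Linear mixed selectivity: every response is a weighted sum of a and b.
linear_pop = np.column_stack([ones, a, b, 0.3 * a + 0.7 * b])
# Nonlinear mixed selectivity: one unit is a thresholded conjunction
# (a hypothetical tuning, chosen only for illustration).
nonlinear_pop = np.column_stack([ones, a, b, np.maximum(a + b - 1.0, 0.0)])

rank_lin = np.linalg.matrix_rank(linear_pop)    # 3: linear mixtures add no dimensions
rank_nl = np.linalg.matrix_rank(nonlinear_pop)  # 4: maximal for four conditions

# A linear readout of XOR(a, b) succeeds only on the nonlinear representation.
xor = np.logical_xor(a.astype(bool), b.astype(bool)).astype(float)
w_lin = np.linalg.lstsq(linear_pop, xor, rcond=None)[0]
w_nl = np.linalg.lstsq(nonlinear_pop, xor, rcond=None)[0]
err_lin = np.max(np.abs(linear_pop @ w_lin - xor))   # irreducible error remains
err_nl = np.max(np.abs(nonlinear_pop @ w_nl - xor))  # essentially zero
```

The rank gap is the abstract's dimensionality argument in miniature: the nonlinear unit lifts the four conditions into general position, giving a downstream linear readout the flexibility the paper describes.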
Affiliation(s)
- Kay M Tye
- Salk Institute for Biological Studies, La Jolla, CA, USA; Howard Hughes Medical Institute, La Jolla, CA, USA; Department of Neurobiology, School of Biological Sciences, University of California, San Diego, La Jolla, CA 92093, USA; Kavli Institute for Brain and Mind, San Diego, CA, USA.
- Earl K Miller
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
- Felix H Taschbach
- Salk Institute for Biological Studies, La Jolla, CA, USA; Biological Science Graduate Program, University of California, San Diego, La Jolla, CA 92093, USA; Department of Neurobiology, School of Biological Sciences, University of California, San Diego, La Jolla, CA 92093, USA.
- Marcus K Benna
- Department of Neurobiology, School of Biological Sciences, University of California, San Diego, La Jolla, CA 92093, USA.
- Stefano Fusi
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Neuroscience, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA.
2. Rolls ET. Two what, two where, visual cortical streams in humans. Neurosci Biobehav Rev 2024; 160:105650. PMID: 38574782; DOI: 10.1016/j.neubiorev.2024.105650.
Abstract
Recent cortical connectivity investigations lead to new concepts about 'What' and 'Where' visual cortical streams in humans, and how they connect to other cortical systems. A ventrolateral 'What' visual stream leads to the inferior temporal visual cortex for object and face identity, and provides 'What' information to the hippocampal episodic memory system, the anterior temporal lobe semantic system, and the orbitofrontal cortex emotion system. A superior temporal sulcus (STS) 'What' visual stream, utilising connectivity from the temporal and parietal visual cortex, responds to moving objects and faces and to face expression, and connects to the orbitofrontal cortex for emotion and social behaviour. A ventromedial 'Where' visual stream builds feature combinations for scenes, and provides 'Where' inputs via the parahippocampal scene area to the hippocampal episodic memory system that are also useful for landmark-based navigation. The dorsal 'Where' visual pathway to the parietal cortex provides for actions in space, but also supplies coordinate transforms that provide inputs to the parahippocampal scene area for self-motion updating of locations in scenes in the dark or when the view is obscured.
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK; Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China.
3. Sabinasz D, Richter M, Schöner G. Neural dynamic foundations of a theory of higher cognition: the case of grounding nested phrases. Cogn Neurodyn 2024; 18:557-579. PMID: 38699609; PMCID: PMC11061088; DOI: 10.1007/s11571-023-10007-7.
Abstract
Because cognitive competences emerge in evolution and development from the sensory-motor domain, we seek a neural process account of higher cognition in which all representations are necessarily grounded in perception and action. The challenge is to understand how hallmarks of higher cognition (productivity, systematicity, and compositionality) may emerge from such a bottom-up approach. To address this challenge, we present key ideas from Dynamic Field Theory (DFT), which postulates that neural populations are organized by recurrent connectivity to create stable localist representations. Dynamic instabilities enable the autonomous generation of sequences of mental states. The capacity to apply neural circuitry across broad sets of inputs, which emulates the function call postulated in symbolic computation, emerges through coordinate transforms implemented in neural gain fields. We show how binding localist neural representations through a shared index dimension enables conceptual structure in which the interdependence among components of a representation is flexibly expressed. We demonstrate these principles in a neural dynamic architecture that represents and perceptually grounds nested relational and action phrases. Sequences of neural processing steps are generated autonomously to attentionally select the referenced objects and events in a manner that is sensitive to their interdependencies. This solves the 'problem of 2' and the massive binding problem in expressions such as "the small tree that is to the left of the lake which is to the left of the large tree". We extend earlier work by incorporating new types of grammatical constructions and a larger vocabulary. We discuss the DFT framework relative to other neural process accounts of higher cognition and assess the scope and challenges of such neural theories.
Affiliation(s)
- Daniel Sabinasz
- Institute for Neural Computation, Ruhr-University Bochum, Bochum, Germany
- Mathis Richter
- Neuromorphic Computing Lab, Intel Germany GmbH, Feldkirchen, Germany
- Gregor Schöner
- Institute for Neural Computation, Ruhr-University Bochum, Bochum, Germany
4. Rutler O, Persaud S, Kosmidis S, Park JM, Harano N, Bruno RM, Goldberg ME. Mice require proprioception to establish long-term visuospatial memory. bioRxiv [preprint] 2023:2023.10.03.560558. PMID: 37873372; PMCID: PMC10592928; DOI: 10.1101/2023.10.03.560558.
Abstract
Because the retina moves constantly, the retinotopic representation of the visual world is spatially inaccurate, and the brain must transform this spatially inaccurate retinal signal into a spatially accurate signal usable for perception and action. One of the salient discoveries of modern neuroscience is the role of the hippocampus in establishing gaze-independent, long-term visuospatial memories. The rat hippocampus has neurons that report the animal's position in space regardless of its angle of gaze. Rats with hippocampal lesions are unable to find the location of an escape platform hidden in a pool of opaque fluid (the Morris water maze, MWM) based on the visual aspects of their surrounding environment. Here we show that the representation of proprioception in the dysgranular zone of primary somatosensory cortex is equivalently necessary for mice to learn the location of the hidden platform, presumably because without it they cannot create a long-term, gaze-independent visuospatial representation of their environment from the retinal signal. The mice have no trouble finding the platform when it is marked by a flag, and they have no motor or vestibular deficits.
5. Burkhardt M, Bergelt J, Gönner L, Dinkelbach HÜ, Beuth F, Schwarz A, Bicanski A, Burgess N, Hamker FH. A large-scale neurocomputational model of spatial cognition integrating memory with vision. Neural Netw 2023; 167:473-488. PMID: 37688954; DOI: 10.1016/j.neunet.2023.08.034.
Abstract
We introduce a large-scale neurocomputational model of spatial cognition called 'Spacecog', which integrates recent findings from mechanistic models of visual and spatial perception. As a high-level cognitive ability, spatial cognition requires the processing of behaviourally relevant features in complex environments and, importantly, the updating of this information during processes of eye and body movement. The Spacecog model achieves this by interfacing spatial memory and imagery with mechanisms of object localisation, saccade execution, and attention through coordinate transformations in parietal areas of the brain. We evaluate the model in a realistic virtual environment where our neurocognitive model steers an agent to perform complex visuospatial tasks. Our modelling approach opens up new possibilities in the assessment of neuropsychological data and human spatial cognition.
Affiliation(s)
- Julia Bergelt
- Chemnitz University of Technology, 09107 Chemnitz, Germany.
- Lorenz Gönner
- Technische Universität Dresden, Faculty of Psychology, 01062 Dresden, Germany; Technische Universität Dresden, Department of Psychiatry, 01307 Dresden, Germany.
- Frederik Beuth
- Chemnitz University of Technology, 09107 Chemnitz, Germany.
- Alex Schwarz
- Chemnitz University of Technology, 09107 Chemnitz, Germany.
- Andrej Bicanski
- Newcastle University, NE1 7RU, Newcastle upon Tyne, United Kingdom.
- Neil Burgess
- University College London, WC1E 6BT, London, United Kingdom.
- Fred H Hamker
- Chemnitz University of Technology, 09107 Chemnitz, Germany.
6. Alexander AS, Robinson JC, Stern CE, Hasselmo ME. Gated transformations from egocentric to allocentric reference frames involving retrosplenial cortex, entorhinal cortex, and hippocampus. Hippocampus 2023; 33:465-487. PMID: 36861201; PMCID: PMC10403145; DOI: 10.1002/hipo.23513.
Abstract
This paper reviews the recent experimental finding that neurons in behaving rodents show egocentric coding of the environment in a number of structures associated with the hippocampus. Many animals generating behavior on the basis of sensory input must transform coordinates from the egocentric position of sensory input relative to the animal into an allocentric framework concerning the positions of multiple goals and objects relative to each other in the environment. Neurons in retrosplenial cortex show egocentric coding of the position of boundaries in relation to an animal. These neuronal responses are discussed in relation to existing models of the transformation from egocentric to allocentric coordinates using gain fields, and to a new model proposing transformations of phase coding that differ from current models. The same types of transformation could allow hierarchical representations of complex scenes. The responses in rodents are also discussed in comparison to work on coordinate transformations in humans and non-human primates.
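The gain-field transformation mentioned in the abstract can be sketched in toy form. The snippet below is an illustration constructed for this listing, not the authors' model: a layer of conjunctive units multiplies an egocentric boundary-bearing signal by a head-direction signal (an outer product, the core of gain-field schemes), and fixed wiring sums each conjunction into the allocentric bin given by egocentric bearing plus heading. The specific bin count and example angles are arbitrary choices.

```python
import numpy as np

N = 12  # angular bins of 30 degrees each (arbitrary resolution)

def onehot(i, n=N):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def allocentric_readout(ego, hd):
    # Gain-field-style conjunctive layer: each unit multiplies an egocentric
    # boundary signal by a head-direction signal (outer product).
    conj = np.outer(ego, hd)
    allo = np.zeros(N)
    for e in range(N):
        for h in range(N):
            # Fixed wiring: allocentric bearing = egocentric bearing + heading.
            allo[(e + h) % N] += conj[e, h]
    return allo

ego = onehot(3)  # boundary 90 degrees to the animal's left (example input)
hd = onehot(4)   # head direction 120 degrees in world coordinates
allo = allocentric_readout(ego, hd)  # peak at bin (3 + 4) % 12 = 7
```

With population-coded (rather than one-hot) inputs, the same wiring smoothly interpolates, which is why gain-field layers are a standard account of this transformation.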
Affiliation(s)
- Andrew S Alexander
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts, USA
- Jennifer C Robinson
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts, USA
- Chantal E Stern
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts, USA
- Michael E Hasselmo
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts, USA
7. Alexander AS, Place R, Starrett MJ, Chrastil ER, Nitz DA. Rethinking retrosplenial cortex: Perspectives and predictions. Neuron 2023; 111:150-175. PMID: 36460006; DOI: 10.1016/j.neuron.2022.11.006.
Abstract
The last decade has produced exciting new ideas about retrosplenial cortex (RSC) and its role in integrating diverse inputs. Here, we review the diversity in forms of spatial and directional tuning of RSC activity, temporal organization of RSC activity, and features of RSC interconnectivity with other brain structures. We find that RSC anatomy and dynamics are more consistent with roles in multiple sensorimotor and cognitive processes than with any isolated function. However, two more generalized categories of function may best characterize roles for RSC in complex cognitive processes: (1) shifting and relating perspectives for spatial cognition and (2) prediction and error correction for current sensory states with internal representations of the environment. Both functions likely take advantage of RSC's capacity to encode conjunctions among sensory, motor, and spatial mapping information streams. Together, these functions provide the scaffold for intelligent actions, such as navigation, perspective taking, interaction with others, and error detection.
Affiliation(s)
- Andrew S Alexander
- Department of Psychological and Brain Sciences, Boston University, Boston, MA 02215, USA
- Ryan Place
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA 92093, USA
- Michael J Starrett
- Department of Neurobiology & Behavior, University of California, Irvine, Irvine, CA 92697, USA
- Elizabeth R Chrastil
- Department of Neurobiology & Behavior, University of California, Irvine, Irvine, CA 92697, USA; Department of Cognitive Sciences, University of California, Irvine, Irvine, CA 92697, USA.
- Douglas A Nitz
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA 92093, USA.
8. Inoue M, Furuki D, Takiyama K. Detecting task-relevant spatiotemporal modules and their relation to motor adaptation. PLoS One 2022; 17:e0275820. PMID: 36206279; PMCID: PMC9543959; DOI: 10.1371/journal.pone.0275820.
Abstract
How does the central nervous system (CNS) control our bodies, with their hundreds of degrees of freedom (DoFs)? One hypothesis to reduce the number of DoFs posits that the CNS controls groups of joints or muscles (i.e., modules) rather than each joint or muscle independently. Another posits that the CNS primarily controls motion components relevant to task achievement (i.e., task-relevant components). Although the two hypotheses have been examined intensively, the relationship between the two concepts remains unknown; for example, nominally unimportant modules may still possess task-relevant information. Here, we propose a framework of task-relevant modules, i.e., modules relevant to task achievement, that combines the two concepts above in a data-driven manner. To examine the possible role of task-relevant modules, we examined their modulation in a motor adaptation paradigm in which trial-to-trial modifications of motor output are observable. The task-relevant modules, rather than conventional modules, showed adaptation-dependent modulations, indicating the relevance of task-relevant modules to trial-to-trial updates of motor output. Our method provides insight into motor control and adaptation via an integrated framework of modules and task-relevant components.
Affiliation(s)
- Masato Inoue
- Department of Electrical Engineering and Computer Science, Tokyo University of Agriculture and Technology, Koganei, Tokyo, Japan
- Daisuke Furuki
- Department of Electrical Engineering and Computer Science, Tokyo University of Agriculture and Technology, Koganei, Tokyo, Japan
- Ken Takiyama
- Department of Electrical Engineering and Computer Science, Tokyo University of Agriculture and Technology, Koganei, Tokyo, Japan
9. Ramezanpour H, Fallah M. The role of temporal cortex in the control of attention. Curr Res Neurobiol 2022; 3:100038. PMID: 36685758; PMCID: PMC9846471; DOI: 10.1016/j.crneur.2022.100038.
Abstract
Attention is an indispensable component of active vision. Contrary to the widely accepted notion that temporal cortex processing primarily focuses on passive object recognition, a series of recent studies emphasizes the role of temporal cortex structures, specifically the superior temporal sulcus (STS) and inferotemporal (IT) cortex, in guiding attention and implementing cognitive programs relevant to behavioral tasks. The goal of this theoretical paper is to advance the hypothesis that the temporal cortex attention network (TAN) entails the necessary components to actively participate in attentional control in a flexible, task-dependent manner. First, we briefly discuss the general architecture of the temporal cortex, with a focus on the STS and IT cortex of monkeys and their modulation by attention. Then we review evidence from behavioral and neurophysiological studies that supports their guidance of attention in the presence of cognitive control signals. Next, we propose a mechanistic framework for executive control of attention in the temporal cortex. Finally, we summarize the role of the temporal cortex in implementing cognitive programs and discuss how these contribute to the dynamic nature of visual attention to ensure flexible behavior.
Affiliation(s)
- Hamidreza Ramezanpour
- Centre for Vision Research, York University, Toronto, Ontario, Canada; School of Kinesiology and Health Science, Faculty of Health, York University, Toronto, Ontario, Canada; VISTA: Vision Science to Application, York University, Toronto, Ontario, Canada. Corresponding author.
- Mazyar Fallah
- Centre for Vision Research, York University, Toronto, Ontario, Canada; School of Kinesiology and Health Science, Faculty of Health, York University, Toronto, Ontario, Canada; VISTA: Vision Science to Application, York University, Toronto, Ontario, Canada; Department of Psychology, Faculty of Health, York University, Toronto, Ontario, Canada; Department of Human Health and Nutritional Sciences, College of Biological Science, University of Guelph, Guelph, Ontario, Canada. Corresponding author.
10. Khalifa K, Islam F, Gamboa JP, Wilkenfeld DA, Kostić D. Integrating Philosophy of Understanding With the Cognitive Sciences. Front Syst Neurosci 2022; 16:764708. PMID: 35359623; PMCID: PMC8960449; DOI: 10.3389/fnsys.2022.764708.
Abstract
We provide two programmatic frameworks for integrating philosophical research on understanding with complementary work in computer science, psychology, and neuroscience. First, philosophical theories of understanding have consequences for how agents should reason if they are to understand; these consequences can then be evaluated empirically by their concordance with findings in scientific studies of reasoning. Second, these studies use a multitude of explanations, and a philosophical theory of understanding is well suited to integrating these explanations in illuminating ways.
Affiliation(s)
- Kareem Khalifa
- Department of Philosophy, Middlebury College, Middlebury, VT, United States
- Farhan Islam
- Independent Researcher, Madison, WI, United States
- J. P. Gamboa
- Department of History and Philosophy of Science, University of Pittsburgh, Pittsburgh, PA, United States
- Daniel A. Wilkenfeld
- Department of Acute and Tertiary Care, University of Pittsburgh School of Nursing, Pittsburgh, PA, United States
- Daniel Kostić
- Institute for Science in Society (ISiS), Radboud University, Nijmegen, Netherlands
11. Xie Y, Hu P, Li J, Chen J, Song W, Wang XJ, Yang T, Dehaene S, Tang S, Min B, Wang L. Geometry of sequence working memory in macaque prefrontal cortex. Science 2022; 375:632-639. PMID: 35143322; DOI: 10.1126/science.abm0204.
Abstract
How the brain stores a sequence in memory remains largely unknown. We investigated the neural code underlying sequence working memory using two-photon calcium imaging to record thousands of neurons in the prefrontal cortex of macaque monkeys memorizing and then reproducing a sequence of locations after a delay. We discovered a regular geometrical organization: The high-dimensional neural state space during the delay could be decomposed into a sum of low-dimensional subspaces, each storing the spatial location at a given ordinal rank, which could be generalized to novel sequences and explain monkey behavior. The rank subspaces were distributed across large overlapping neural groups, and the integration of ordinal and spatial information occurred at the collective level rather than within single neurons. Thus, a simple representational geometry underlies sequence working memory.
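The subspace decomposition described here can be illustrated with a toy simulation (our construction for this listing, not the authors' analysis code). Each ordinal rank gets its own low-dimensional subspace of a shared population space; the delay-period state is the sum of each rank's location pattern embedded in its subspace, and projecting back onto a rank's subspace recovers that rank's location. Population size, location count, and the use of exactly orthogonal subspaces are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_loc, n_rank = 60, 6, 3

# Mutually orthogonal rank subspaces (an idealization of the reported geometry).
Q, _ = np.linalg.qr(rng.standard_normal((n_neurons, n_loc * n_rank)))
subspaces = [Q[:, r * n_loc:(r + 1) * n_loc] for r in range(n_rank)]
loc_patterns = np.eye(n_loc)  # one pattern per spatial location

def encode(seq):
    # Delay-period state = sum over ranks of that rank's location pattern,
    # embedded in the rank-specific subspace.
    return sum(subspaces[r] @ loc_patterns[loc] for r, loc in enumerate(seq))

def decode(state, r):
    # Project onto the rank-r subspace and pick the best-matching location.
    coords = subspaces[r].T @ state
    return int(np.argmax(loc_patterns @ coords))

seq = [2, 5, 0]
state = encode(seq)
recovered = [decode(state, r) for r in range(n_rank)]  # recovers [2, 5, 0]
```

Because the rank subspaces are orthogonal, the superposed locations do not interfere, which is what lets a single population state hold an ordered sequence and generalize to novel sequences.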
Affiliation(s)
- Yang Xie
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Peiyao Hu
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Junru Li
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Jingwen Chen
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Weibin Song
- Peking University School of Life Sciences and Peking-Tsinghua Center for Life Sciences, Beijing 100871, China
- Xiao-Jing Wang
- Center for Neural Science, New York University, New York, NY 10003, USA
- Tianming Yang
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Stanislas Dehaene
- Cognitive Neuroimaging Unit, CEA, INSERM, Université Paris-Saclay, NeuroSpin Center, 91191 Gif-sur-Yvette, France; Collège de France, Université Paris Sciences et Lettres, 75005 Paris, France
- Shiming Tang
- Peking University School of Life Sciences and Peking-Tsinghua Center for Life Sciences, Beijing 100871, China; IDG/McGovern Institute for Brain Research at Peking University, Beijing 100871, China
- Bin Min
- Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai 200031, China
- Liping Wang
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
12. Cesanek E, Zhang Z, Ingram JN, Wolpert DM, Flanagan JR. Motor memories of object dynamics are categorically organized. eLife 2021; 10:e71627. PMID: 34796873; PMCID: PMC8635978; DOI: 10.7554/eLife.71627.
Abstract
The ability to predict the dynamics of objects, linking applied force to motion, underlies our capacity to perform many of the tasks we carry out on a daily basis. Thus, a fundamental question is how the dynamics of the myriad objects we interact with are organized in memory. Using a custom-built three-dimensional robotic interface that allowed us to simulate objects of varying appearance and weight, we examined how participants learned the weights of sets of objects that they repeatedly lifted. We find strong support for the novel hypothesis that motor memories of object dynamics are organized categorically, in terms of families, based on covariation in their visual and mechanical properties. A striking prediction of this hypothesis, supported by our findings and not predicted by standard associative map models, is that outlier objects with weights that deviate from the family-predicted weight will never be learned despite causing repeated lifting errors.
Affiliation(s)
- Evan Cesanek
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States; Department of Neuroscience, Columbia University, New York, NY, United States
- Zhaoran Zhang
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States; Department of Neuroscience, Columbia University, New York, NY, United States
- James N Ingram
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States; Department of Neuroscience, Columbia University, New York, NY, United States
- Daniel M Wolpert
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States; Department of Neuroscience, Columbia University, New York, NY, United States
- J Randall Flanagan
- Department of Psychology and Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
13. Hulse BK, Haberkern H, Franconville R, Turner-Evans D, Takemura SY, Wolff T, Noorman M, Dreher M, Dan C, Parekh R, Hermundstad AM, Rubin GM, Jayaraman V. A connectome of the Drosophila central complex reveals network motifs suitable for flexible navigation and context-dependent action selection. eLife 2021; 10:e66039. PMID: 34696823; PMCID: PMC9477501; DOI: 10.7554/eLife.66039.
Abstract
Flexible behaviors over long timescales are thought to engage recurrent neural networks in deep brain regions, which are experimentally challenging to study. In insects, recurrent circuit dynamics in a brain region called the central complex (CX) enable directed locomotion, sleep, and context- and experience-dependent spatial navigation. We describe the first complete electron microscopy-based connectome of the Drosophila CX, including all its neurons and circuits at synaptic resolution. We identified new CX neuron types, novel sensory and motor pathways, and network motifs that likely enable the CX to extract the fly's head direction, maintain it with attractor dynamics, and combine it with other sensorimotor information to perform vector-based navigational computations. We also identified numerous pathways that may facilitate the selection of CX-driven behavioral patterns by context and internal state. The CX connectome provides a comprehensive blueprint necessary for a detailed understanding of network dynamics underlying sleep, flexible navigation, and state-dependent action selection.
Affiliation(s)
- Brad K Hulse
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
- Hannah Haberkern
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
- Romain Franconville
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
- Daniel Turner-Evans
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
- Shin-ya Takemura
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
- Tanya Wolff
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
- Marcella Noorman
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
- Marisa Dreher
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
- Chuntao Dan
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
- Ruchi Parekh
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
- Ann M Hermundstad
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
- Gerald M Rubin
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
- Vivek Jayaraman
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
14. Richter M, Lins J, Schöner G. A Neural Dynamic Model of the Perceptual Grounding of Spatial and Movement Relations. Cogn Sci 2021; 45:e13045. PMID: 34647339; DOI: 10.1111/cogs.13045.
Abstract
How does the human brain link relational concepts to perceptual experience? For example, a speaker may say "the cup to the left of the computer" to direct the listener's attention to one of two cups on a desk. We provide a neural dynamic account both for perceptual grounding, in which relational concepts enable the attentional selection of objects in the visual array, and for the generation of descriptions of the visual array using relational concepts. In the model, activation in neural populations evolves dynamically under the influence of both inputs and strong interaction as formalized in dynamic field theory. Relational concepts are modeled as patterns of connectivity to perceptual representations. These generalize across the visual array through active coordinate transforms that center the representation of target objects in potential reference objects. How the model perceptually grounds or generates relational descriptions is probed in 104 simulations that systematically vary the spatial and movement relations employed, the number of feature dimensions used, and the number of matching and nonmatching objects. We explain how sequences of decisions emerge from the time- and state-continuous neural dynamics, and how relational hypotheses are generated and either accepted or rejected, followed by the selection of new objects or the generation of new relational hypotheses. Its neural realism distinguishes the model from information processing accounts; its capacity to autonomously generate sequences of processing steps distinguishes it from deep neural network accounts. The model points toward a neural dynamic theory of higher cognition.
Affiliation(s)
- Jonas Lins
- Institut für Neuroinformatik, Ruhr-Universität Bochum

15
Rolls ET. Learning Invariant Object and Spatial View Representations in the Brain Using Slow Unsupervised Learning. Front Comput Neurosci 2021; 15:686239. [PMID: 34366818 PMCID: PMC8335547 DOI: 10.3389/fncom.2021.686239] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2021] [Accepted: 06/29/2021] [Indexed: 11/13/2022] Open
Abstract
First, neurophysiological evidence for the learning of invariant representations in the inferior temporal visual cortex is described. This includes object and face representations with invariance for position, size, lighting, view and morphological transforms in the temporal lobe visual cortex; global object motion in the cortex in the superior temporal sulcus; and spatial view representations in the hippocampus that are invariant with respect to eye position, head direction, and place. Second, computational mechanisms that enable the brain to learn these invariant representations are proposed. For the ventral visual system, one key adaptation is the use of information available in the statistics of the environment in slow unsupervised learning to learn transform-invariant representations of objects. This contrasts with deep supervised learning in artificial neural networks, which uses training with thousands of exemplars forced into different categories by neuronal teachers. Similar slow learning principles apply to the learning of global object motion in the dorsal visual system leading to the cortex in the superior temporal sulcus. The learning rule that has been explored in VisNet is an associative rule with a short-term memory trace. The feed-forward architecture has four stages, with convergence from stage to stage. This type of slow learning is implemented in the brain in hierarchically organized competitive neuronal networks with convergence from stage to stage, with only 4-5 stages in the hierarchy. Slow learning is also shown to help the learning of coordinate transforms using gain modulation in the dorsal visual system extending into the parietal cortex and retrosplenial cortex. Representations are learned that are in allocentric spatial view coordinates of locations in the world and that are independent of eye position, head direction, and the place where the individual is located. 
This enables hippocampal spatial view cells to use idiothetic, self-motion, signals for navigation when the view details are obscured for short periods.
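The associative rule with a short-term memory trace explored in VisNet can be sketched in a few lines. This is an illustrative reconstruction only: the learning rate, trace decay, rectified activation, and weight normalization are assumptions of the sketch, not details taken from the abstract.

```python
import numpy as np

def trace_rule_update(w, x_seq, alpha=0.1, eta=0.8):
    """Associative learning with a short-term memory trace (VisNet-style sketch).

    w     : weight vector of one post-synaptic neuron
    x_seq : sequence of input vectors (e.g. successive transforms of one object)
    alpha : learning rate (illustrative value)
    eta   : trace decay; eta = 0 recovers plain Hebbian association
    """
    y_trace = 0.0
    for x in x_seq:
        y = max(0.0, w @ x)                        # rectified post-synaptic activation
        y_trace = (1 - eta) * y + eta * y_trace    # short-term memory trace of activity
        w = w + alpha * y_trace * x                # associate current input with the trace
        w = w / np.linalg.norm(w)                  # normalization, as in competitive nets
    return w
```

A nonzero trace ties together inputs that occur close together in time, which is how temporally adjacent transforms of the same object can come to drive the same neuron without any external teacher.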
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, United Kingdom; Department of Computer Science, University of Warwick, Coventry, United Kingdom

16
Cottereau BR, Trotter Y, Durand JB. An egocentric straight-ahead bias in primate's vision. Brain Struct Funct 2021; 226:2897-2909. [PMID: 34120262 PMCID: PMC8541962 DOI: 10.1007/s00429-021-02314-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Accepted: 06/04/2021] [Indexed: 12/23/2022]
Abstract
As we plan to reach or manipulate objects, we generally orient our body so as to face them. Other objects occupying the same portion of space will likely represent potential obstacles for the intended action. Thus, either as targets or as obstacles, the objects located straight in front of us are often endowed with a special behavioral status. Here, we review a set of recent electrophysiological, imaging and behavioral studies bringing converging evidence that the objects which lie straight-ahead are subject to privileged visual processing. More precisely, these works collectively demonstrate that when gaze steers central vision away from the straight-ahead direction, the latter is still prioritized in peripheral vision. Straight-ahead objects evoke (1) stronger neuronal responses in macaque peripheral V1 neurons, (2) stronger EEG and fMRI activations across the human visual cortex and (3) faster reactive hand and eye movements. Here, we discuss the functional implications and underlying mechanisms behind this phenomenon. Notably, we propose that it can be considered as a new type of visuospatial attentional mechanism, distinct from the previously documented classes of endogenous and exogenous attention.
Affiliation(s)
- Benoit R Cottereau
- Centre de Recherche Cerveau et Cognition, Université de Toulouse, 31052, Toulouse, France; Centre National de la Recherche Scientifique, 31055, Toulouse, France
- Yves Trotter
- Centre de Recherche Cerveau et Cognition, Université de Toulouse, 31052, Toulouse, France; Centre National de la Recherche Scientifique, 31055, Toulouse, France
- Jean-Baptiste Durand
- Centre de Recherche Cerveau et Cognition, Université de Toulouse, 31052, Toulouse, France; Centre National de la Recherche Scientifique, 31055, Toulouse, France

17
O'Reilly RC, Russin JL, Zolfaghar M, Rohrlich J. Deep Predictive Learning in Neocortex and Pulvinar. J Cogn Neurosci 2021; 33:1158-1196. [PMID: 34428793 PMCID: PMC10164227 DOI: 10.1162/jocn_a_01708] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
How do humans learn from raw sensory experience? Throughout life, but most obviously in infancy, we learn without explicit instruction. We propose a detailed biological mechanism for the widely embraced idea that learning is driven by the differences between predictions and actual outcomes (i.e., predictive error-driven learning). Specifically, numerous weak projections into the pulvinar nucleus of the thalamus generate top-down predictions, and sparse driver inputs from lower areas supply the actual outcome, originating in Layer 5 intrinsic bursting neurons. Thus, the outcome representation is only briefly activated, roughly every 100 msec (i.e., 10 Hz, alpha), resulting in a temporal difference error signal, which drives local synaptic changes throughout the neocortex. This results in a biologically plausible form of error backpropagation learning. We implemented these mechanisms in a large-scale model of the visual system and found that the simulated inferotemporal pathway learns to systematically categorize 3-D objects according to invariant shape properties, based solely on predictive learning from raw visual inputs. These categories match human judgments on the same stimuli and are consistent with neural representations in inferotemporal cortex in primates.
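The core error-driven idea in this account (a top-down prediction, a briefly activated actual outcome, and their difference driving local synaptic change) can be caricatured in a linear toy model. This sketch is ours and stands in for the authors' large-scale, biologically detailed implementation; the learning rate and dimensions are arbitrary.

```python
import numpy as np

def predictive_learning_step(W, context, outcome, lr=0.1):
    """One prediction/outcome cycle of error-driven predictive learning (sketch).

    A prediction is generated from the current context, the actual outcome
    arrives as a brief driver input, and their difference updates the weights
    with a purely local rule.
    """
    prediction = W @ context                 # top-down prediction phase
    error = outcome - prediction             # temporal difference error signal
    W += lr * np.outer(error, context)       # local, error-driven synaptic change
    return W, error
```

Iterating this step shrinks the prediction error toward zero, the defining signature of error-driven (delta-rule-like) learning.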
18
Caruso VC, Pages DS, Sommer MA, Groh JM. Compensating for a shifting world: evolving reference frames of visual and auditory signals across three multimodal brain areas. J Neurophysiol 2021; 126:82-94. [PMID: 33852803 DOI: 10.1152/jn.00385.2020] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Stimulus locations are detected differently by different sensory systems, but ultimately they yield similar percepts and behavioral responses. How the brain transcends initial differences to compute similar codes is unclear. We quantitatively compared the reference frames of two sensory modalities, vision and audition, across three interconnected brain areas involved in generating saccades, namely the frontal eye fields (FEF), lateral and medial parietal cortex (M/LIP), and superior colliculus (SC). We recorded from single neurons in head-restrained monkeys performing auditory- and visually guided saccades from variable initial fixation locations and evaluated whether their receptive fields were better described as eye-centered, head-centered, or hybrid (i.e. not anchored uniquely to head- or eye-orientation). We found a progression of reference frames across areas and across time, with considerable hybrid-ness and persistent differences between modalities during most epochs/brain regions. For both modalities, the SC was more eye-centered than the FEF, which in turn was more eye-centered than the predominantly hybrid M/LIP. In all three areas and temporal epochs from stimulus onset to movement, visual signals were more eye-centered than auditory signals. In the SC and FEF, auditory signals became more eye-centered at the time of the saccade than they were initially after stimulus onset, but only in the SC at the time of the saccade did the auditory signals become "predominantly" eye-centered. The results indicate that visual and auditory signals both undergo transformations, ultimately reaching the same final reference frame but via different dynamics across brain regions and time.

NEW & NOTEWORTHY Models for visual-auditory integration posit that visual signals are eye-centered throughout the brain, whereas auditory signals are converted from head-centered to eye-centered coordinates.
We show instead that both modalities largely employ hybrid reference frames: neither fully head- nor eye-centered. Across three hubs of the oculomotor network (intraparietal cortex, frontal eye field, and superior colliculus) visual and auditory signals evolve from hybrid to a common eye-centered format via different dynamics across brain areas and time.
Affiliation(s)
- Valeria C Caruso
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Psychology and Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina; Department of Psychiatry, University of Michigan, Ann Arbor, Michigan
- Daniel S Pages
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Psychology and Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina
- Marc A Sommer
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina; Department of Biomedical Engineering, Duke University, Durham, North Carolina
- Jennifer M Groh
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Psychology and Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina; Department of Biomedical Engineering, Duke University, Durham, North Carolina

19
Fabius JH, Fracasso A, Acunzo DJ, Van der Stigchel S, Melcher D. Low-Level Visual Information Is Maintained across Saccades, Allowing for a Postsaccadic Handoff between Visual Areas. J Neurosci 2020; 40:9476-9486. [PMID: 33115930 PMCID: PMC7724139 DOI: 10.1523/jneurosci.1169-20.2020] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2020] [Revised: 09/17/2020] [Accepted: 10/20/2020] [Indexed: 01/01/2023] Open
Abstract
Experience seems continuous and detailed despite saccadic eye movements changing retinal input several times per second. There is debate whether neural signals related to updating across saccades contain information about stimulus features, or only location pointers without visual details. We investigated the time course of low-level visual information processing across saccades by decoding the spatial frequency of a stationary stimulus that changed from one visual hemifield to the other because of a horizontal saccadic eye movement. We recorded magnetoencephalography while human subjects (both sexes) monitored the orientation of a grating stimulus, making spatial frequency task irrelevant. Separate trials, in which subjects maintained fixation, were used to train a classifier, whose performance was then tested on saccade trials. Decoding performance showed that spatial frequency information of the presaccadic stimulus remained present for ∼200 ms after the saccade, transcending retinotopic specificity. Postsaccadic information ramped up rapidly after saccade offset. There was an overlap of over 100 ms during which decoding was significant from both presaccadic and postsaccadic processing areas. This suggests that the apparent richness of perception across saccades may be supported by the continuous availability of low-level information with a "soft handoff" of information during the initial processing sweep of the new fixation.

SIGNIFICANCE STATEMENT Saccades create frequent discontinuities in visual input, yet perception appears stable and continuous. How is this discontinuous input processed resulting in visual stability? Previous studies have focused on presaccadic remapping. Here we examined the time course of processing of low-level visual information (spatial frequency) across saccades with magnetoencephalography. The results suggest that spatial frequency information is not predictively remapped but also is not discarded.
Instead, they suggest a soft handoff over time between different visual areas, making this information continuously available across the saccade. Information about the presaccadic stimulus remains available, while the information about the postsaccadic stimulus has also become available. The simultaneous availability of both the presaccadic and postsaccadic information could enable rich and continuous perception across saccades.
Affiliation(s)
- Jasper H Fabius
- Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QQ, United Kingdom
- Alessio Fracasso
- Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QQ, United Kingdom
- David J Acunzo
- Center for Mind/Brain Sciences and Department of Psychology and Cognitive Sciences, University of Trento, I-38122 Trento, Italy
- Stefan Van der Stigchel
- Experimental Psychology, Helmholtz Institute, Utrecht University, 3584 CS, Utrecht, The Netherlands
- David Melcher
- Center for Mind/Brain Sciences and Department of Psychology and Cognitive Sciences, University of Trento, I-38122 Trento, Italy
- Psychology Program, Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates

20
Avila E, Lakshminarasimhan KJ, DeAngelis GC, Angelaki DE. Visual and Vestibular Selectivity for Self-Motion in Macaque Posterior Parietal Area 7a. Cereb Cortex 2020; 29:3932-3947. [PMID: 30365011 DOI: 10.1093/cercor/bhy272] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2018] [Revised: 09/05/2018] [Indexed: 01/12/2023] Open
Abstract
We examined the responses of neurons in posterior parietal area 7a to passive rotational and translational self-motion stimuli, while systematically varying the speed of visually simulated (optic flow cues) or actual (vestibular cues) self-motion. Contrary to a general belief that responses in area 7a are predominantly visual, we found evidence for a vestibular dominance in self-motion processing. Only a small fraction of neurons showed multisensory convergence of visual/vestibular and linear/angular self-motion cues. These findings suggest possibly independent neuronal population codes for visual versus vestibular and linear versus angular self-motion. Neural responses scaled with self-motion magnitude (i.e., speed) but temporal dynamics were diverse across the population. Analyses of laminar recordings showed a strong distance-dependent decrease for correlations in stimulus-induced (signal correlation) and stimulus-independent (noise correlation) components of spike-count variability, supporting the notion that neurons are spatially clustered with respect to their sensory representation of motion. Single-unit and multiunit response patterns were also correlated, but no other systematic dependencies on cortical layers or columns were observed. These findings describe a likely independent multimodal neural code for linear and angular self-motion in a posterior parietal area of the macaque brain that is connected to the hippocampal formation.
Affiliation(s)
- Eric Avila
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA

21
Abstract
Several types of neurons involved in spatial navigation and memory encode the distance and direction (that is, the vector) between an agent and items in its environment. Such vectorial information provides a powerful basis for spatial cognition by representing the geometric relationships between the self and the external world. Here, we review the explicit encoding of vectorial information by neurons in and around the hippocampal formation, far from the sensory periphery. The parahippocampal, retrosplenial and parietal cortices, as well as the hippocampal formation and striatum, provide a plethora of examples of vector coding at the single neuron level. We provide a functional taxonomy of cells with vectorial receptive fields as reported in experiments and proposed in theoretical work. The responses of these neurons may provide the fundamental neural basis for the (bottom-up) representation of environmental layout and (top-down) memory-guided generation of visuospatial imagery and navigational planning.
22
Spiking neurons with spatiotemporal dynamics and gain modulation for monolithically integrated memristive neural networks. Nat Commun 2020; 11:3399. [PMID: 32636385 PMCID: PMC7341810 DOI: 10.1038/s41467-020-17215-3] [Citation(s) in RCA: 65] [Impact Index Per Article: 16.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2020] [Accepted: 06/15/2020] [Indexed: 11/18/2022] Open
Abstract
As a key building block of biological cortex, neurons are powerful information processing units and can achieve highly complex nonlinear computations even in individual cells. Hardware implementation of artificial neurons with similar capability is of great significance for the construction of intelligent, neuromorphic systems. Here, we demonstrate an artificial neuron based on NbOx volatile memristor that not only realizes traditional all-or-nothing, threshold-driven spiking and spatiotemporal integration, but also enables dynamic logic including XOR function that is not linearly separable and multiplicative gain modulation among different dendritic inputs, therefore surpassing neuronal functions described by a simple point neuron model. A monolithically integrated 4 × 4 fully memristive neural network consisting of volatile NbOx memristor based neurons and nonvolatile TaOx memristor based synapses in a single crossbar array is experimentally demonstrated, showing capability in pattern recognition through online learning using a simplified δ-rule and coincidence detection, which paves the way for bio-inspired intelligent systems. Designing energy efficient and scalable artificial networks for neuromorphic computing remains a challenge. Here, the authors demonstrate online learning in a monolithically integrated 4 × 4 fully memristive neural network consisting of volatile NbOx memristor neurons and nonvolatile TaOx memristor synapses.
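The "simplified δ-rule" used for online learning in the crossbar demonstration is, in software terms, a delta rule on a threshold unit. Here is a generic sketch; the learning rate, epoch count, and the OR-function example are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def delta_rule_train(patterns, targets, epochs=50, lr=0.2):
    """Online learning with the simplified delta rule: dw = lr * (t - y) * x.

    patterns : (n_samples, n_inputs) binary input patterns
    targets  : (n_samples,) desired outputs (0 or 1)
    Returns the learned weights, including a bias term.
    """
    X = np.hstack([patterns, np.ones((len(patterns), 1))])  # append bias input
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, t in zip(X, targets):
            y = 1.0 if x @ w > 0 else 0.0   # all-or-nothing threshold neuron
            w += lr * (t - y) * x           # delta-rule weight update
    return w

def predict(w, pattern):
    x = np.append(pattern, 1.0)
    return 1 if x @ w > 0 else 0
```

A single threshold unit trained this way can only learn linearly separable functions such as OR, which is precisely why the XOR capability reported above requires the nonlinear dendritic interactions of the memristive neuron rather than a point-neuron weighted sum.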
23
Alexander AS, Carstensen LC, Hinman JR, Raudies F, Chapman GW, Hasselmo ME. Egocentric boundary vector tuning of the retrosplenial cortex. SCIENCE ADVANCES 2020; 6:eaaz2322. [PMID: 32128423 PMCID: PMC7035004 DOI: 10.1126/sciadv.aaz2322] [Citation(s) in RCA: 96] [Impact Index Per Article: 24.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/22/2019] [Accepted: 11/27/2019] [Indexed: 05/17/2023]
Abstract
The retrosplenial cortex is reciprocally connected with multiple structures implicated in spatial cognition, and damage to the region itself produces numerous spatial impairments. Here, we sought to characterize spatial correlates of neurons within the region during free exploration in two-dimensional environments. We report that a large percentage of retrosplenial cortex neurons have spatial receptive fields that are active when environmental boundaries are positioned at a specific orientation and distance relative to the animal itself. We demonstrate that this vector-based location signal is encoded in egocentric coordinates, is localized to the dysgranular retrosplenial subregion, is independent of self-motion, and is context invariant. Further, we identify a subpopulation of neurons with this response property that are synchronized with the hippocampal theta oscillation. Accordingly, the current work identifies a robust egocentric spatial code in retrosplenial cortex that can facilitate spatial coordinate system transformations and support the anchoring, generation, and utilization of allocentric representations.
Affiliation(s)
- Andrew S. Alexander
- Center for Systems Neuroscience, Boston University, 610 Commonwealth Ave., Boston, MA 02215, USA
- Department of Psychological and Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA 02215, USA
- Corresponding author.
- Lucas C. Carstensen
- Center for Systems Neuroscience, Boston University, 610 Commonwealth Ave., Boston, MA 02215, USA
- Department of Psychological and Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA 02215, USA
- Graduate Program for Neuroscience, Boston University, 610 Commonwealth Ave., Boston, MA 02215, USA
- James R. Hinman
- Center for Systems Neuroscience, Boston University, 610 Commonwealth Ave., Boston, MA 02215, USA
- Florian Raudies
- Center for Systems Neuroscience, Boston University, 610 Commonwealth Ave., Boston, MA 02215, USA
- G. William Chapman
- Center for Systems Neuroscience, Boston University, 610 Commonwealth Ave., Boston, MA 02215, USA
- Department of Psychological and Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA 02215, USA
- Michael E. Hasselmo
- Center for Systems Neuroscience, Boston University, 610 Commonwealth Ave., Boston, MA 02215, USA
- Department of Psychological and Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA 02215, USA
- Graduate Program for Neuroscience, Boston University, 610 Commonwealth Ave., Boston, MA 02215, USA

24
Schneider L, Dominguez-Vargas AU, Gibson L, Kagan I, Wilke M. Eye position signals in the dorsal pulvinar during fixation and goal-directed saccades. J Neurophysiol 2020; 123:367-391. [DOI: 10.1152/jn.00432.2019] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Sensorimotor cortical areas contain eye position information thought to ensure perceptual stability across saccades and underlie spatial transformations supporting goal-directed actions. One pathway by which eye position signals could be relayed to and across cortical areas is via the dorsal pulvinar. Several studies have demonstrated saccade-related activity in the dorsal pulvinar, and we have recently shown that many neurons exhibit postsaccadic spatial preference. In addition, dorsal pulvinar lesions lead to gaze-holding deficits expressed as nystagmus or ipsilesional gaze bias, prompting us to investigate the effects of eye position. We tested three starting eye positions (−15°, 0°, 15°) in monkeys performing a visually cued memory saccade task. We found two main types of gaze dependence. First, ~50% of neurons showed dependence on static gaze direction during initial and postsaccadic fixation, and might be signaling the position of the eyes in the orbit or coding foveal targets in a head/body/world-centered reference frame. The population-derived eye position signal lagged behind the saccade. Second, many neurons showed a combination of eye-centered and gaze-dependent modulation of visual, memory, and saccadic responses to a peripheral target. A small subset showed effects consistent with eye position-dependent gain modulation. Analysis of reference frames across task epochs from visual cue to postsaccadic fixation indicated a transition from predominantly eye-centered encoding to representation of final gaze or foveated locations in nonretinocentric coordinates. These results show that dorsal pulvinar neurons carry information about eye position, which could contribute to steady gaze during postural changes and to reference frame transformations for visually guided eye and limb movements. 
NEW & NOTEWORTHY Work on the pulvinar focused on eye-centered visuospatial representations, but position of the eyes in the orbit is also an important factor that needs to be taken into account during spatial orienting and goal-directed reaching. We show that dorsal pulvinar neurons are influenced by eye position. Gaze direction modulated ongoing firing during stable fixation, as well as visual and saccade responses to peripheral targets, suggesting involvement of the dorsal pulvinar in spatial coordinate transformations.
Affiliation(s)
- Lukas Schneider
- Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany
- Department of Cognitive Neurology, University of Goettingen, Goettingen, Germany
- Adan-Ulises Dominguez-Vargas
- Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany
- Escuela Nacional de Estudios Superiores Unidad-León, Universidad Nacional Autónoma de México, León, Guanajuato, Mexico
- Lydia Gibson
- Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany
- Department of Cognitive Neurology, University of Goettingen, Goettingen, Germany
- Igor Kagan
- Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany
- Department of Cognitive Neurology, University of Goettingen, Goettingen, Germany
- Leibniz ScienceCampus Primate Cognition, Goettingen, Germany
- Melanie Wilke
- Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany
- Department of Cognitive Neurology, University of Goettingen, Goettingen, Germany
- Leibniz ScienceCampus Primate Cognition, Goettingen, Germany

25
Alexander AS, Robinson JC, Dannenberg H, Kinsky NR, Levy SJ, Mau W, Chapman GW, Sullivan DW, Hasselmo ME. Neurophysiological coding of space and time in the hippocampus, entorhinal cortex, and retrosplenial cortex. Brain Neurosci Adv 2020; 4:2398212820972871. [PMID: 33294626 PMCID: PMC7708714 DOI: 10.1177/2398212820972871] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2020] [Accepted: 10/21/2020] [Indexed: 11/18/2022] Open
Abstract
Neurophysiological recordings in behaving rodents demonstrate neuronal response properties that may code space and time for episodic memory and goal-directed behaviour. Here, we review recordings from hippocampus, entorhinal cortex, and retrosplenial cortex to address the problem of how neurons encode multiple overlapping spatiotemporal trajectories and disambiguate these for accurate memory-guided behaviour. The solution could involve neurons in the entorhinal cortex and hippocampus that show mixed selectivity, coding both time and location. Some grid cells and place cells that code space also respond selectively as time cells, allowing differentiation of time intervals when a rat runs in the same location during a delay period. Cells in these regions also develop new representations that differentially code the context of prior or future behaviour allowing disambiguation of overlapping trajectories. Spiking activity is also modulated by running speed and head direction, supporting the coding of episodic memory not as a series of snapshots but as a trajectory that can also be distinguished on the basis of speed and direction. Recent data also address the mechanisms by which sensory input could distinguish different spatial locations. Changes in firing rate reflect running speed on long but not short time intervals, and few cells code movement direction, arguing against path integration for coding location. Instead, new evidence for neural coding of environmental boundaries in egocentric coordinates fits with a modelling framework in which egocentric coding of barriers combined with head direction generates distinct allocentric coding of location. The egocentric input can be used both for coding the location of spatiotemporal trajectories and for retrieving specific viewpoints of the environment. 
Overall, these different patterns of neural activity can be used for encoding and disambiguation of prior episodic spatiotemporal trajectories or for planning of future goal-directed spatiotemporal trajectories.
Affiliation(s)
- Samuel J. Levy
- Center for Systems Neuroscience, Boston University, Boston, MA, USA
- William Mau
- Center for Systems Neuroscience, Boston University, Boston, MA, USA
26
Rolls ET. Spatial coordinate transforms linking the allocentric hippocampal and egocentric parietal primate brain systems for memory, action in space, and navigation. Hippocampus 2019; 30:332-353. [PMID: 31697002] [DOI: 10.1002/hipo.23171]
Abstract
A theory and model of spatial coordinate transforms in the dorsal visual system through the parietal cortex that enable an interface via posterior cingulate and related retrosplenial cortex to allocentric spatial representations in the primate hippocampus is described. First, a new approach to coordinate transform learning in the brain is proposed, in which the traditional gain modulation is complemented by temporal trace rule competitive network learning. It is shown in a computational model that the new approach works much more precisely than gain modulation alone, by enabling neurons to represent the different combinations of signal and gain modulator more accurately. This understanding may have application to many brain areas where coordinate transforms are learned. Second, a set of coordinate transforms is proposed for the dorsal visual system/parietal areas that enables a representation to be formed in allocentric spatial view coordinates. The input stimulus is merely a stimulus at a given position in retinal space, and the gain modulation signals needed are eye position, head direction, and place, all of which are present in the primate brain. Neurons that encode the bearing to a landmark are involved in the coordinate transforms. Part of the importance here is that the coordinates of the allocentric view produced in this model are the same as those of spatial view cells that respond to allocentric view recorded in the primate hippocampus and parahippocampal cortex. The result is that information from the dorsal visual system can be used to update the spatial input to the hippocampus in the appropriate allocentric coordinate frame, including providing for idiothetic update to allow for self-motion. It is further shown how hippocampal spatial view cells could be useful for the transform from hippocampal allocentric coordinates to egocentric coordinates useful for actions in space and for navigation.
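The trace-rule competitive learning described above can be sketched in a few lines. The following toy sketch is not the paper's implementation: the network sizes, learning rates, and the winner-take-all competition are illustrative assumptions. It shows a gain-modulated input feeding a competitive layer whose Hebbian update uses a temporal trace of postsynaptic activity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 20 gain-modulated inputs, 5 competitive output cells.
# All sizes and rates are illustrative, not taken from the paper.
n_inputs, n_outputs = 20, 5
eta, alpha = 0.8, 0.05

W = rng.random((n_outputs, n_inputs))
W /= np.linalg.norm(W, axis=1, keepdims=True)

y_trace = np.zeros(n_outputs)
for t in range(200):
    retinal = rng.random(n_inputs)     # stimulus at a retinal position
    gain = rng.uniform(0.5, 1.5)       # eye-position gain modulation
    x = gain * retinal                 # gain-modulated input vector
    winner = np.argmax(W @ x)          # competitive (winner-take-all) stage
    y_post = np.zeros(n_outputs)
    y_post[winner] = 1.0
    # Temporal trace rule: blend current output with recent activity so
    # cells learn input combinations that co-occur closely in time.
    y_trace = (1 - eta) * y_post + eta * y_trace
    W += alpha * np.outer(y_trace, x)  # Hebbian update with traced output
    W /= np.linalg.norm(W, axis=1, keepdims=True)
```

The trace term is what distinguishes this from plain competitive learning: because `y_trace` persists across time steps, a cell's weights are pulled toward inputs that follow one another, which is the property the paper exploits to bind different signal-gain combinations onto the same output.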
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry, UK
27
Onken A, Xie J, Panzeri S, Padoa-Schioppa C. Categorical encoding of decision variables in orbitofrontal cortex. PLoS Comput Biol 2019; 15:e1006667. [PMID: 31609973] [PMCID: PMC6812845] [DOI: 10.1371/journal.pcbi.1006667]
Abstract
A fundamental and recurrent question in systems neuroscience is that of assessing what variables are encoded by a given population of neurons. Such assessments are often challenging because neurons in one brain area may encode multiple variables, and because neuronal representations might be categorical or non-categorical. These issues are particularly pertinent to the representation of decision variables in the orbitofrontal cortex (OFC), an area implicated in economic choices. Here we present a new algorithm to assess whether a neuronal representation is categorical or non-categorical, and to identify the encoded variables if the representation is indeed categorical. The algorithm is based on two clustering procedures, one variable-independent and the other variable-based. The two partitions are then compared through adjusted mutual information. The present algorithm overcomes limitations of previous approaches and is widely applicable. We tested the algorithm on synthetic data and then used it to examine neuronal data recorded in the primate OFC during economic decisions. Confirming previous assessments, we found the neuronal representation in OFC to be categorical in nature. We also found that neurons in this area encode the value of individual offers, the binary choice outcome and the chosen value. In other words, during economic choice, neurons in the primate OFC encode decision variables in a categorical way.
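The core comparison in the algorithm, a variable-independent clustering checked against a variable-based partition via adjusted mutual information, can be illustrated with a minimal sketch. The synthetic data and the `scikit-learn` calls below are simplified stand-ins for the authors' more elaborate procedures:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_mutual_info_score

rng = np.random.default_rng(1)

# Synthetic "neurons": each tuning profile is driven by one of two
# hypothetical decision variables, plus noise (values illustrative).
labels_var = rng.integers(0, 2, size=60)          # variable-based partition
prototypes = np.array([[1.0, 0.0, 0.5],
                       [0.0, 1.0, 0.5]])
profiles = prototypes[labels_var] + 0.1 * rng.standard_normal((60, 3))

# Variable-independent partition: unsupervised clustering of the profiles.
labels_free = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)

# Agreement between the two partitions; values near 1 indicate a
# categorical representation with respect to the candidate variables.
ami = adjusted_mutual_info_score(labels_var, labels_free)
```

Adjusted mutual information corrects plain mutual information for chance agreement, so a high score here genuinely reflects that the data-driven clusters line up with the candidate variables rather than with partition size.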
Affiliation(s)
- Arno Onken
- Center for Neuroscience and Cognitive Systems, Istituto Italiano di Tecnologia, Rovereto, Italy
- School of Informatics, University of Edinburgh, Edinburgh, United Kingdom
- Jue Xie
- Department of Neuroscience, Washington University in St Louis, St Louis, Missouri, United States of America
- Stefano Panzeri
- Center for Neuroscience and Cognitive Systems, Istituto Italiano di Tecnologia, Rovereto, Italy
- Camillo Padoa-Schioppa
- Department of Neuroscience, Washington University in St Louis, St Louis, Missouri, United States of America
28
Edvardsen V, Bicanski A, Burgess N. Navigating with grid and place cells in cluttered environments. Hippocampus 2019; 30:220-232. [PMID: 31408264] [PMCID: PMC8641373] [DOI: 10.1002/hipo.23147]
Abstract
The hippocampal formation contains several classes of neurons thought to be involved in navigational processes, in particular place cells and grid cells. Place cells have been associated with a topological strategy for navigation, while grid cells have been suggested to support metric vector navigation. Grid cell-based vector navigation can support novel shortcuts across unexplored territory by providing the direction toward the goal. However, this strategy is insufficient in natural environments cluttered with obstacles. Here, we show how navigation in complex environments can be supported by integrating a grid cell-based vector navigation mechanism with local obstacle avoidance mediated by border cells and place cells whose interconnections form an experience-dependent topological graph of the environment. When vector navigation and obstacle avoidance fail (i.e., the agent gets stuck), place cell replay events set closer subgoals for vector navigation. We demonstrate that this combined navigation model can successfully traverse environments cluttered by obstacles and is particularly useful where the environment is underexplored. Finally, we show that the model enables the simulated agent to successfully navigate experimental maze environments from the animal literature on cognitive mapping. The proposed model is sufficiently flexible to support navigation in different environments, and may inform the design of experiments to relate different navigational abilities to place, grid, and border cell firing.
Affiliation(s)
- Vegard Edvardsen
- Department of Computer Science, NTNU-Norwegian University of Science and Technology, Trondheim, Norway
- Andrej Bicanski
- Institute of Cognitive Neuroscience, University College London, Alexandra House, 17 Queen Square, WC1N 3AZ London, UK
- Neil Burgess
- Institute of Cognitive Neuroscience, University College London, Alexandra House, 17 Queen Square, WC1N 3AZ London, UK
29
Yokoi A, Diedrichsen J. Neural Organization of Hierarchical Motor Sequence Representations in the Human Neocortex. Neuron 2019; 103:1178-1190.e7. [PMID: 31345643] [DOI: 10.1016/j.neuron.2019.06.017]
Abstract
Although it is widely accepted that the brain represents movement sequences hierarchically, the neural implementation of this organization is still poorly understood. To address this issue, we experimentally manipulated how participants represented sequences of finger presses at the levels of individual movements, chunks, and entire sequences. Using representational fMRI analyses, we then examined how this hierarchical structure was reflected in the fine-grained brain activity patterns of the participants while they performed the 8 trained sequences. We found clear evidence of each level of the movement hierarchy at the representational level. However, anatomically, chunk and sequence representations substantially overlapped in the premotor and parietal cortices, whereas individual movements were uniquely represented in the primary motor cortex. The findings challenge the common hypothesis of an orderly anatomical separation of different levels of an action hierarchy and argue for a special status of the distinction between individual movements and sequential context.
Affiliation(s)
- Atsushi Yokoi
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita, Osaka 565-0871, Japan; The Brain and Mind Institute, University of Western Ontario, London, ON N6A 5B7, Canada; Institute of Cognitive Neuroscience, University College London, London, WC1N 3AZ, UK.
- Jörn Diedrichsen
- The Brain and Mind Institute, University of Western Ontario, London, ON N6A 5B7, Canada; Department of Statistical and Actuarial Sciences, University of Western Ontario, London, ON N6A 5B7, Canada; Department of Computer Science, University of Western Ontario, London, ON N6A 5B7, Canada; Institute of Cognitive Neuroscience, University College London, London, WC1N 3AZ, UK
30
Tuhkanen S, Pekkanen J, Rinkkala P, Mole C, Wilkie RM, Lappi O. Humans Use Predictive Gaze Strategies to Target Waypoints for Steering. Sci Rep 2019; 9:8344. [PMID: 31171850] [PMCID: PMC6554351] [DOI: 10.1038/s41598-019-44723-0]
Abstract
A major unresolved question in understanding visually guided locomotion in humans is whether actions are driven solely by the immediately available optical information (model-free online control mechanisms), or whether internal models have a role in anticipating the future path. We designed two experiments to investigate this issue, measuring spontaneous gaze behaviour while steering, and predictive gaze behaviour when future path information was withheld. In Experiment 1, participants (N = 15) steered along a winding path with rich optic flow: gaze patterns were consistent with tracking waypoints on the future path 1–3 s ahead. In Experiment 2, participants (N = 12) followed a path presented only in the form of visual waypoints located on an otherwise featureless ground plane. New waypoints appeared periodically every 0.75 s and predictably 2 s ahead, except that in 25% of cases the waypoint at the expected location was not displayed. In these cases, there were always other visible waypoints for the participant to fixate, yet participants continued to make saccades to the empty, but predictable, waypoint locations (in line with internal models of the future path guiding gaze fixations). This would not be expected under existing model-free online steering control models, and strongly points to a need for models of steering control to include mechanisms for predictive gaze control that support anticipatory path-following behaviours.
Affiliation(s)
- Samuel Tuhkanen
- Cognitive Science, Department of Digital Humanities & Helsinki Centre for Digital Humanities (Heldig), University of Helsinki, Helsinki, Finland; TRUlab, University of Helsinki, Helsinki, Finland
- Jami Pekkanen
- Cognitive Science, Department of Digital Humanities & Helsinki Centre for Digital Humanities (Heldig), University of Helsinki, Helsinki, Finland; TRUlab, University of Helsinki, Helsinki, Finland
- Paavo Rinkkala
- Cognitive Science, Department of Digital Humanities & Helsinki Centre for Digital Humanities (Heldig), University of Helsinki, Helsinki, Finland; TRUlab, University of Helsinki, Helsinki, Finland
- Callum Mole
- School of Psychology, University of Leeds, Leeds, UK
- Otto Lappi
- Cognitive Science, Department of Digital Humanities & Helsinki Centre for Digital Humanities (Heldig), University of Helsinki, Helsinki, Finland; TRUlab, University of Helsinki, Helsinki, Finland
31
Beyeler M, Rounds EL, Carlson KD, Dutt N, Krichmar JL. Neural correlates of sparse coding and dimensionality reduction. PLoS Comput Biol 2019; 15:e1006908. [PMID: 31246948] [PMCID: PMC6597036] [DOI: 10.1371/journal.pcbi.1006908]
Abstract
Supported by recent computational studies, there is increasing evidence that a wide range of neuronal responses can be understood as an emergent property of nonnegative sparse coding (NSC), an efficient population coding scheme based on dimensionality reduction and sparsity constraints. We review evidence that NSC might be employed by sensory areas to efficiently encode external stimulus spaces, by some associative areas to conjunctively represent multiple behaviorally relevant variables, and possibly by the basal ganglia to coordinate movement. In addition, NSC might provide a useful theoretical framework under which to understand the often complex and nonintuitive response properties of neurons in other brain areas. Although NSC might not apply to all brain areas (for example, motor or executive function areas), the success of NSC-based models, especially in sensory areas, warrants further investigation for neural correlates in other regions.
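As a rough illustration of the NSC idea, plain nonnegative matrix factorisation (a simpler relative of NSC that omits the explicit sparsity penalty) recovers a low-dimensional nonnegative basis from synthetic population responses. All sizes and values below are illustrative:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)

# Synthetic population responses: 100 stimuli x 30 neurons, generated
# from 4 nonnegative latent sources (all sizes are illustrative).
W_true = rng.random((100, 4))
H_true = rng.random((4, 30))
X = W_true @ H_true

# Plain NMF as a stand-in for NSC; full NSC adds an explicit sparsity
# penalty on the encodings.
model = NMF(n_components=4, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(X)   # per-stimulus encodings (nonnegative)
H = model.components_        # per-neuron basis patterns (nonnegative)

rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The nonnegativity constraint is what gives the factors their parts-based, neuron-like character: each basis pattern in `H` can only add activity, never cancel it, which mirrors the firing-rate interpretation reviewed above.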
Affiliation(s)
- Michael Beyeler
- Department of Psychology, University of Washington, Seattle, Washington, United States of America
- Institute for Neuroengineering, University of Washington, Seattle, Washington, United States of America
- eScience Institute, University of Washington, Seattle, Washington, United States of America
- Department of Computer Science, University of California, Irvine, California, United States of America
- Emily L. Rounds
- Department of Cognitive Sciences, University of California, Irvine, California, United States of America
- Kristofor D. Carlson
- Department of Cognitive Sciences, University of California, Irvine, California, United States of America
- Sandia National Laboratories, Albuquerque, New Mexico, United States of America
- Nikil Dutt
- Department of Computer Science, University of California, Irvine, California, United States of America
- Department of Cognitive Sciences, University of California, Irvine, California, United States of America
- Jeffrey L. Krichmar
- Department of Computer Science, University of California, Irvine, California, United States of America
- Department of Cognitive Sciences, University of California, Irvine, California, United States of America
32
Furuki D, Takiyama K. Decomposing motion that changes over time into task-relevant and task-irrelevant components in a data-driven manner: application to motor adaptation in whole-body movements. Sci Rep 2019; 9:7246. [PMID: 31076575] [PMCID: PMC6510796] [DOI: 10.1038/s41598-019-43558-z]
Abstract
Motor variability is inevitable in human body movements and has been addressed from various perspectives in motor neuroscience and biomechanics: it may originate from variability in neural activities, or it may reflect a large number of degrees of freedom inherent in our body movements. How to evaluate motor variability is thus a fundamental question. Previous methods have quantified (at least) two striking features of motor variability: smaller variability in the task-relevant dimension than in the task-irrelevant dimension and a low-dimensional structure often referred to as synergy or principal components. However, the previous methods cannot be used to quantify these features simultaneously and are applicable only under certain limited conditions (e.g., one method does not consider how the motion changes over time, and another does not consider how each motion is relevant to performance). Here, we propose a flexible and straightforward machine learning technique for quantifying task-relevant variability, task-irrelevant variability, and the relevance of each principal component to task performance while considering how the motion changes over time and its relevance to task performance in a data-driven manner. Our method reveals the following novel property: in motor adaptation, the modulation of these different aspects of motor variability differs depending on the perturbation schedule.
Affiliation(s)
- Daisuke Furuki
- Department of Electrical and Electronic Engineering, Tokyo University of Agriculture and Technology, Koganei-shi, Tokyo, 184-8588, Japan
- Ken Takiyama
- Department of Electrical and Electronic Engineering, Tokyo University of Agriculture and Technology, Koganei-shi, Tokyo, 184-8588, Japan.
33
Morris AP, Krekelberg B. A Stable Visual World in Primate Primary Visual Cortex. Curr Biol 2019; 29:1471-1480.e6. [PMID: 31031112] [DOI: 10.1016/j.cub.2019.03.069]
Abstract
Humans and other primates rely on eye movements to explore visual scenes and to track moving objects. As a result, the image that is projected onto the retina (and propagated throughout the visual cortical hierarchy) is almost constantly changing and makes little sense without taking into account the momentary direction of gaze. How is this achieved in the visual system? Here, we show that in primary visual cortex (V1), the earliest stage of cortical vision, neural representations carry an embedded "eye tracker" that signals the direction of gaze associated with each image. Using chronically implanted multi-electrode arrays, we recorded the activity of neurons in area V1 of macaque monkeys during tasks requiring fast (exploratory) and slow (pursuit) eye movements. Neurons were stimulated with flickering, full-field luminance noise at all times. As in previous studies, we observed neurons that were sensitive to gaze direction during fixation, despite comparable stimulation of their receptive fields. We trained a decoder to translate neural activity into metric estimates of gaze direction. This decoded signal tracked the eye accurately not only during fixation but also during fast and slow eye movements. After a fast eye movement, the eye-position signal arrived in V1 at approximately the same time as the new visual information arrived from the retina. Using simulations, we show that this V1 eye-position signal could be used to take into account the sensory consequences of eye movements and map the fleeting positions of objects on the retina onto their stable positions in the world.
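The decoding step, training a linear readout to translate eye-position-modulated firing rates into metric gaze estimates, can be sketched on synthetic data. The gain model and all parameter values below are illustrative assumptions, not the recorded data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated V1-like rates whose gain depends on gaze direction; all
# model choices and parameter values here are illustrative assumptions.
n_neurons, n_trials = 50, 400
gaze = rng.uniform(-10, 10, n_trials)          # gaze direction (deg)
gains = rng.uniform(-0.05, 0.05, n_neurons)    # eye-position sensitivity
base = rng.uniform(5, 20, n_neurons)           # baseline firing rates
rates = base[None, :] * (1 + gains[None, :] * gaze[:, None])
rates += 0.5 * rng.standard_normal((n_trials, n_neurons))  # trial noise

# Linear decoder (least squares with an intercept column) translating
# population activity into a metric gaze estimate.
X = np.column_stack([rates, np.ones(n_trials)])
w, *_ = np.linalg.lstsq(X, gaze, rcond=None)
gaze_hat = X @ w
```

Even though no single unit carries a clean eye-position signal, the population readout recovers gaze accurately, which is the sense in which the representation contains an embedded "eye tracker".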
Affiliation(s)
- Adam P Morris
- Neuroscience Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, 26 Innovation Walk, Clayton, Victoria 3800, Australia.
- Bart Krekelberg
- Center for Molecular and Behavioral Neuroscience, Rutgers University, 197 University Ave., Newark, New Jersey 07102, USA
34
Pugach G, Pitti A, Tolochko O, Gaussier P. Brain-Inspired Coding of Robot Body Schema Through Visuo-Motor Integration of Touched Events. Front Neurorobot 2019; 13:5. [PMID: 30899217] [PMCID: PMC6416207] [DOI: 10.3389/fnbot.2019.00005]
Abstract
Representing objects in space is difficult because sensorimotor events are anchored in different reference frames, which can be either eye-, arm-, or target-centered. In the brain, Gain-Field (GF) neurons in the parietal cortex are involved in computing the necessary spatial transformations for aligning the tactile, visual and proprioceptive signals. In reaching tasks, these GF neurons exploit a mechanism based on multiplicative interaction for binding simultaneously touched events from the hand with visual and proprioceptive information. By doing so, they can infer new reference frames to represent dynamically the location of the body parts in the visual space (i.e., the body schema) and nearby targets (i.e., its peripersonal space). Along these lines, we propose a neural model based on GF neurons for integrating tactile events with arm postures and visual locations for constructing hand- and target-centered receptive fields in the visual space. In robotic experiments using an artificial skin, we show how our neural architecture reproduces the behaviors of parietal neurons (1) for encoding dynamically the body schema of our robotic arm without any visual tags on it and (2) for estimating the relative orientation and distance of targets to it. We demonstrate how tactile information facilitates the integration of visual and proprioceptive signals in order to construct the body space.
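The multiplicative gain-field interaction at the heart of the model can be sketched as the outer product of a visual tuning curve and a posture-dependent gain. The tuning-curve shapes and population sizes below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

# Gain-field sketch: population response as the product of a visual
# tuning curve and an arm-posture gain, the multiplicative interaction
# attributed to parietal GF neurons. Shapes and sizes are illustrative.
def gaussian_tuning(prefs, x, sigma=0.2):
    return np.exp(-((x - prefs) ** 2) / (2 * sigma ** 2))

visual_prefs = np.linspace(0, 1, 10)    # preferred retinal locations
posture_prefs = np.linspace(0, 1, 10)   # preferred arm postures

def gf_population(visual_x, posture_x):
    v = gaussian_tuning(visual_prefs, visual_x)
    p = gaussian_tuning(posture_prefs, posture_x)
    return np.outer(v, p)               # multiplicative binding

# The peak of the map jointly encodes where the stimulus is and where
# the arm is; a downstream readout can remap this into hand-centred
# coordinates.
resp = gf_population(0.3, 0.7)
```

Because the interaction is multiplicative rather than additive, the peak location in the joint map carries both variables at once, which is what lets a linear readout extract relative (hand-centred) positions.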
Affiliation(s)
- Ganna Pugach
- ETIS Laboratory, University Paris-Seine, CNRS UMR 8051, University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France
- Alexandre Pitti
- ETIS Laboratory, University Paris-Seine, CNRS UMR 8051, University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France
- Olga Tolochko
- Faculty of Electric Power Engineering and Automation, National Technical University of Ukraine Kyiv Polytechnic Institute, Kyiv, Ukraine
- Philippe Gaussier
- ETIS Laboratory, University Paris-Seine, CNRS UMR 8051, University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France
35
Navarro DM, Mender BMW, Smithson HE, Stringer SM. Self-organising coordinate transformation with peaked and monotonic gain modulation in the primate dorsal visual pathway. PLoS One 2018; 13:e0207961. [PMID: 30496225] [PMCID: PMC6264903] [DOI: 10.1371/journal.pone.0207961]
Abstract
We study a self-organising neural network model of how visual representations in the primate dorsal visual pathway are transformed from an eye-centred to head-centred frame of reference. The model has previously been shown to robustly develop head-centred output neurons with a standard trace learning rule, but only under limited conditions. Specifically, it fails when incorporating visual input neurons with monotonic gain modulation by eye-position. Since eye-centred neurons with monotonic gain modulation are so common in the dorsal visual pathway, it is an important challenge to show how efferent synaptic connections from these neurons may self-organise to produce head-centred responses in a subpopulation of postsynaptic neurons. We show for the first time how a variety of modified, yet still biologically plausible, versions of the standard trace learning rule enable the model to perform a coordinate transformation from eye-centred to head-centred reference frames when the visual input neurons have monotonic gain modulation by eye-position.
Affiliation(s)
- Daniel M. Navarro
- Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford, Oxfordshire, United Kingdom
- Oxford Perception Lab, Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford, Oxfordshire, United Kingdom
- Bedeho M. W. Mender
- Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford, Oxfordshire, United Kingdom
- Hannah E. Smithson
- Oxford Perception Lab, Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford, Oxfordshire, United Kingdom
- Simon M. Stringer
- Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford, Oxfordshire, United Kingdom
36
Ranganathan GN, Apostolides PF, Harnett MT, Xu NL, Druckmann S, Magee JC. Active dendritic integration and mixed neocortical network representations during an adaptive sensing behavior. Nat Neurosci 2018; 21:1583-1590. [PMID: 30349100] [PMCID: PMC6203624] [DOI: 10.1038/s41593-018-0254-6]
Abstract
Animals strategically scan the environment to form an accurate perception of their surroundings. Here we investigated the neuronal representations that mediate this behavior. Ca2+ imaging and selective optogenetic manipulation during an active sensing task reveal that layer 5 pyramidal neurons in the vibrissae cortex produce a diverse and distributed representation that is required for mice to adapt their whisking motor strategy to changing sensory cues. The optogenetic perturbation degraded single-neuron selectivity and network population encoding through a selective inhibition of active dendritic integration. Together, the data indicate that active dendritic integration in pyramidal neurons produces a nonlinearly mixed network representation of joint sensorimotor parameters that is used to transform sensory information into motor commands during adaptive behavior. The prevalence of the layer 5 cortical circuit motif suggests that this is a general circuit computation.
Affiliation(s)
- Pierre F Apostolides
- Howard Hughes Medical Institute, Janelia Research Campus, Ashburn, VA, USA; Kresge Hearing Research Institute, Department of Otolaryngology, University of Michigan, Ann Arbor, MI, USA
- Mark T Harnett
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Ning-Long Xu
- Institute of Neuroscience, State Key Laboratory of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai, China
- Shaul Druckmann
- Howard Hughes Medical Institute, Janelia Research Campus, Ashburn, VA, USA
- Jeffrey C Magee
- Howard Hughes Medical Institute, Janelia Research Campus, Ashburn, VA, USA; Howard Hughes Medical Institute, Baylor College of Medicine, Houston, TX, USA
37
Abstract
We present a model of how neural representations of egocentric spatial experiences in parietal cortex interface with viewpoint-independent representations in medial temporal areas, via retrosplenial cortex, to enable many key aspects of spatial cognition. This account shows how previously reported neural responses (place, head-direction and grid cells, allocentric boundary- and object-vector cells, gain-field neurons) can map onto higher cognitive function in a modular way, and predicts new cell types (egocentric and head-direction-modulated boundary- and object-vector cells). The model predicts how these neural populations should interact across multiple brain regions to support spatial memory, scene construction, novelty-detection, 'trace cells', and mental navigation. Simulated behavior and firing rate maps are compared to experimental data, for example showing how object-vector cells allow items to be remembered within a contextual representation based on environmental boundaries, and how grid cells could update the viewpoint in imagery during planning and short-cutting by driving sequential place cell activity.
Affiliation(s)
- Andrej Bicanski
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- Neil Burgess
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom
38
Neural-like computing with populations of superparamagnetic basis functions. Nat Commun 2018; 9:1533. [PMID: 29670101] [PMCID: PMC5906599] [DOI: 10.1038/s41467-018-03963-w]
Abstract
In neuroscience, population coding theory demonstrates that neural assemblies can achieve fault-tolerant information processing. Mapped to nanoelectronics, this strategy could allow for reliable computing with scaled-down, noisy, imperfect devices. Doing so requires that the population components form a set of basis functions in terms of their response functions to inputs, offering a physical substrate for computing. Such a population can be implemented with CMOS technology, but the corresponding circuits have high area or energy requirements. Here, we show that nanoscale magnetic tunnel junctions can instead be assembled to meet these requirements. We demonstrate experimentally that a population of nine junctions can implement a basis set of functions, providing the data to achieve, for example, the generation of cursive letters. We design hybrid magnetic-CMOS systems based on interlinked populations of junctions and show that they can learn to realize non-linear variability-resilient transformations with a low imprint area and low power. Population coding, where populations of artificial neurons process information collectively, can facilitate robust data processing but requires high circuit overheads. Here, the authors realize this approach with reduced circuit area and power consumption by utilizing neurons based on superparamagnetic tunnel junctions.
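The basis-function idea, heterogeneous noisy response functions plus a learned linear readout realising a nonlinear transformation, can be sketched with sigmoidal stand-ins for the junction responses. The unit count matches the paper's nine junctions; all other values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Nine heterogeneous sigmoidal units stand in for the junctions'
# response functions; slopes and offsets are illustrative random draws.
n_units = 9
slopes = rng.uniform(3, 10, n_units)
offsets = rng.uniform(-1, 1, n_units)

def population(x):
    """Basis responses, shape (n_samples, n_units)."""
    return 1.0 / (1.0 + np.exp(-slopes * (x[:, None] - offsets)))

x = np.linspace(-1, 1, 200)
target = np.sin(np.pi * x)     # a nonlinear transformation to realise

# Linear readout learned by least squares over the population basis.
Phi = population(x)
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)
approx = Phi @ w
```

No single unit resembles the target, yet the learned readout combines them into a good approximation; device-to-device variability becomes useful diversity rather than a defect, which is the point the paper makes for the junction hardware.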
39
Blanchard TC, Piantadosi ST, Hayden BY. Robust mixture modeling reveals category-free selectivity in reward region neuronal ensembles. J Neurophysiol 2018; 119:1305-1318. [PMID: 29212924] [PMCID: PMC5966738] [DOI: 10.1152/jn.00808.2017]
Abstract
Classification of neurons into clusters based on their response properties is an important tool for gaining insight into neural computations. However, it remains unclear to what extent neurons fall naturally into discrete functional categories. We developed a Bayesian method that models the tuning properties of neural populations as a mixture of multiple types of task-relevant response patterns. We applied this method to data from several cortical and striatal regions in economic choice tasks. In all cases, neurons fell into only two clusters: one multiple-selectivity cluster containing all cells driven by task variables of interest and another with no selectivity for those variables. The single cluster of task-sensitive cells argues against robust categorical tuning in these areas. The no-selectivity cluster was unanticipated and raises important questions about what distinguishes these neurons and what role they play. Moreover, the ability to formally identify these nonselective cells allows for more accurate measurement of ensemble effects by excluding or appropriately down-weighting them in analysis. Our findings provide a valuable tool for analysis of neural data, challenge simple categorization schemes previously proposed for these regions, and place useful constraints on neurocomputational models of economic choice and control. NEW & NOTEWORTHY We present a Bayesian method for formally detecting whether a population of neurons can be naturally classified into clusters based on their response tuning properties. We then examine several data sets of reward system neurons for selectivity to task variables and find in all cases that neurons can be classified into only two categories: a functional class and a non-task-driven class. These results provide important constraints for neural models of the reward system.
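The clustering question above can be illustrated with a minimal EM fit of a two-component Gaussian mixture to per-neuron tuning strengths — a toy stand-in for the paper's full Bayesian model, with all simulated coefficients invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated per-neuron tuning strengths (e.g., regression coefficients for
# a task variable): a non-selective group near zero and a selective group
# with larger coefficients.
coefs = np.concatenate([rng.normal(0.0, 0.3, 60),   # non-selective cells
                        rng.normal(2.0, 0.5, 40)])  # task-selective cells

def em_two_gaussians(x, n_iter=200):
    """Fit a two-component 1-D Gaussian mixture by EM."""
    mu = np.array([x.min(), x.max()])
    sd = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each neuron
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update mixing weights, means, and standard deviations
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
    return mu, resp

mu, resp = em_two_gaussians(coefs)
labels = resp.argmax(axis=1)
selective = mu.argmax()   # index of the higher-mean (task-selective) component
```

With well-separated groups, the posterior responsibilities recover the selective/non-selective split without any predefined categories.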
Collapse
Affiliation(s)
- Tommy C Blanchard
- Department of Brain and Cognitive Sciences, Center for Visual Science, and Center for the Origins of Cognition, University of Rochester, Rochester, New York
| | - Steven T Piantadosi
- Department of Brain and Cognitive Sciences, Center for Visual Science, and Center for the Origins of Cognition, University of Rochester, Rochester, New York
| | - Benjamin Y Hayden
- Department of Neuroscience and Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota
| |
Collapse
|
40
|
Caruso VC, Pages DS, Sommer MA, Groh JM. Beyond the labeled line: variation in visual reference frames from intraparietal cortex to frontal eye fields and the superior colliculus. J Neurophysiol 2018; 119:1411-1421. [PMID: 29357464 PMCID: PMC5966730 DOI: 10.1152/jn.00584.2017] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2017] [Revised: 12/16/2017] [Accepted: 12/18/2017] [Indexed: 11/22/2022] Open
Abstract
We accurately perceive the visual scene despite moving our eyes ~3 times per second, an ability that requires incorporation of eye position and retinal information. In this study, we assessed how this neural computation unfolds across three interconnected structures: frontal eye fields (FEF), intraparietal cortex (LIP/MIP), and the superior colliculus (SC). Single-unit activity was assessed in head-restrained monkeys performing visually guided saccades from different initial fixations. As previously shown, the receptive fields of most LIP/MIP neurons shifted to novel positions on the retina for each eye position, and these locations were not clearly related to each other in either eye- or head-centered coordinates (defined as hybrid coordinates). In contrast, the receptive fields of most SC neurons were stable in eye-centered coordinates. In FEF, visual signals were intermediate between those patterns: around 60% were eye-centered, whereas the remainder showed changes in receptive field location, boundaries, or responsiveness that rendered the response patterns hybrid or occasionally head-centered. These results suggest that FEF may act as a transitional step in an evolution of coordinates between LIP/MIP and SC. The persistence across cortical areas of mixed representations that do not provide unequivocal location labels in a consistent reference frame has implications for how these representations must be read out. NEW & NOTEWORTHY How we perceive the world as stable using mobile retinas is poorly understood. We compared the stability of visual receptive fields across different fixation positions in three visuomotor regions. Irregular changes in receptive field position were ubiquitous in intraparietal cortex, evident but less common in the frontal eye fields, and negligible in the superior colliculus (SC), where receptive fields shifted reliably across fixations. Only the SC provides a stable labeled-line code for stimuli across saccades.
Collapse
Affiliation(s)
- Valeria C Caruso
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
| | - Daniel S Pages
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
| | - Marc A Sommer
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
- Department of Biomedical Engineering, Duke University, Durham, North Carolina
| | - Jennifer M Groh
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
| |
Collapse
|
41
|
Role of Rostral Fastigial Neurons in Encoding a Body-Centered Representation of Translation in Three Dimensions. J Neurosci 2018; 38:3584-3602. [PMID: 29487123 DOI: 10.1523/jneurosci.2116-17.2018] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2017] [Revised: 02/01/2018] [Accepted: 02/20/2018] [Indexed: 11/21/2022] Open
Abstract
Many daily behaviors rely critically on estimates of our body motion. Such estimates must be computed by combining neck proprioceptive signals with vestibular signals that have been transformed from a head- to a body-centered reference frame. Recent studies showed that deep cerebellar neurons in the rostral fastigial nucleus (rFN) reflect these computations, but whether they explicitly encode estimates of body motion remains unclear. A key limitation in addressing this question is that, to date, cell tuning properties have only been characterized for a restricted set of motions across head-re-body orientations in the horizontal plane. Here we examined, for the first time, how 3D spatiotemporal tuning for translational motion varies with head-re-body orientation in both horizontal and vertical planes in the rFN of male macaques. While vestibular coding was profoundly influenced by head-re-body position in both planes, neurons typically reflected at most a partial transformation. However, their tuning shifts were not random but followed the specific spatial trajectories predicted for a 3D transformation. We show that these properties facilitate the linear decoding of fully body-centered motion representations in 3D with a broad range of temporal characteristics from small groups of 5-7 cells. These results demonstrate that the vestibular reference frame transformation required to compute body motion is indeed encoded by cerebellar neurons. We propose that maintaining partially transformed rFN responses with different spatiotemporal properties facilitates the creation of downstream body motion representations with a range of dynamic characteristics, consistent with the functional requirements for tasks such as postural control and reaching.SIGNIFICANCE STATEMENT Estimates of body motion are essential for many daily activities. Vestibular signals are important contributors to such estimates but must be transformed from a head- to a body-centered reference frame. 
Here, we provide the first direct demonstration that the cerebellum computes this transformation fully in 3D. We show that the output of these computations is reflected in the tuning properties of deep cerebellar rostral fastigial nucleus neurons in a specific distributed fashion that facilitates the efficient creation of body-centered translation estimates with a broad range of temporal properties (i.e., from acceleration to position). These findings support an important role for the rostral fastigial nucleus as a source of body translation estimates functionally relevant for behaviors ranging from postural control to perception.
Collapse
|
42
|
Hinman JR, Dannenberg H, Alexander AS, Hasselmo ME. Neural mechanisms of navigation involving interactions of cortical and subcortical structures. J Neurophysiol 2018; 119:2007-2029. [PMID: 29442559 DOI: 10.1152/jn.00498.2017] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022] Open
Abstract
Animals must perform spatial navigation for a range of different behaviors, including selection of trajectories toward goal locations and foraging for food sources. To serve this function, a number of different brain regions play a role in coding different dimensions of sensory input important for spatial behavior, including the entorhinal cortex, the retrosplenial cortex, the hippocampus, and the medial septum. This article will review data concerning the coding of the spatial aspects of animal behavior, including location of the animal within an environment, the speed of movement, the trajectory of movement, the direction of the head in the environment, and the position of barriers and objects both relative to the animal's head direction (egocentric) and relative to the layout of the environment (allocentric). The mechanisms for coding these important spatial representations are not yet fully understood but could involve mechanisms including integration of self-motion information or coding of location based on the angle of sensory features in the environment. We will review available data and theories about the mechanisms for coding of spatial representations. The computation of different aspects of spatial representation from available sensory input requires complex cortical processing mechanisms for transformation from egocentric to allocentric coordinates that will only be understood through a combination of neurophysiological studies and computational modeling.
Collapse
Affiliation(s)
- James R Hinman
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts
| | - Holger Dannenberg
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts
| | - Andrew S Alexander
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts
| | - Michael E Hasselmo
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts
| |
Collapse
|
43
|
Gilra A, Gerstner W. Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network. eLife 2017; 6:28295. [PMID: 29173280 PMCID: PMC5730383 DOI: 10.7554/elife.28295] [Citation(s) in RCA: 36] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2017] [Accepted: 11/22/2017] [Indexed: 12/21/2022] Open
Abstract
The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically.
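A heavily simplified, rate-based caricature of the error-feedback idea above (not the paper's spiking implementation; network size, gains, and the target dynamics are all invented): output error is fed back with a gain to keep the network on the desired trajectory, while a local rule updates decoders from presynaptic rates and the projected error.

```python
import numpy as np

rng = np.random.default_rng(2)

dt, T = 0.01, 4000                  # time step and number of training steps
N = 100                             # rate units standing in for spiking neurons
enc = rng.normal(0.0, 1.0, (N, 2))  # random encoders for (estimated state, input)
bias = rng.uniform(-1.0, 1.0, N)
W = np.zeros(N)                     # output decoders, learned by the local rule
k, eta = 10.0, 2.0                  # error-feedback gain and learning rate

x_true, x_hat = 0.0, 0.0
sq_errs = []
for t in range(T):
    u = np.sin(0.01 * t)                 # slowly varying drive
    r = np.tanh(enc @ np.array([x_hat, u]) + bias)
    err = x_true - x_hat                 # output error, fed back with gain k
    W += eta * err * r * dt              # local rule: presynaptic rate x projected error
    x_hat += dt * (W @ r + k * err)      # network output follows the desired dynamics
    x_true += dt * (-x_true + 2.0 * u)   # reference (teacher) dynamics to be learned
    sq_errs.append(err ** 2)

early = np.mean(sq_errs[:T // 4])
late = np.mean(sq_errs[-T // 4:])
```

As in the paper's scheme, the error shrinks as the decoders absorb the target dynamics, so the feedback does progressively less work.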
Collapse
Affiliation(s)
- Aditya Gilra
- Brain-Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; School of Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Wulfram Gerstner
- Brain-Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; School of Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| |
Collapse
|
44
|
Detecting the relevance to performance of whole-body movements. Sci Rep 2017; 7:15659. [PMID: 29142276 PMCID: PMC5688154 DOI: 10.1038/s41598-017-15888-3] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2017] [Accepted: 11/01/2017] [Indexed: 11/08/2022] Open
Abstract
Goal-directed whole-body movements are fundamental in our daily life, sports, music, art, and other activities. Goal-directed movements have been intensively investigated by focusing on simplified movements (e.g., arm-reaching movements or eye movements); however, the nature of goal-directed whole-body movements has not been sufficiently investigated because of the high-dimensional nonlinear dynamics and redundancy inherent in whole-body motion. One open question is how to overcome high-dimensional nonlinear dynamics and redundancy to achieve the desired performance. It is possible to approach the question by quantifying how the motions of each body part at each time point contribute to movement performance. Nevertheless, it is difficult to identify an explicit relation between each motion element (the motion of each body part at each time point) and performance as a result of the high-dimensional nonlinear dynamics and redundancy inherent in whole-body motion. The current study proposes a data-driven approach to quantify the relevance of each motion element to the performance. The current findings indicate that linear regression may be used to quantify this relevance without considering the high-dimensional nonlinear dynamics of whole-body motion.
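The data-driven idea above — regress performance on the motion elements and read each element's relevance off the coefficients — can be sketched as follows, with all data synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data set: 200 trials x 30 "motion elements" (e.g., joint positions
# at several time points, flattened). Performance here depends only on
# elements 4 and 17; all numbers are invented for illustration.
n_trials, n_elems = 200, 30
X = rng.normal(0.0, 1.0, (n_trials, n_elems))
y = 2.0 * X[:, 4] - 1.5 * X[:, 17] + rng.normal(0.0, 0.5, n_trials)

# Least-squares regression of performance on motion elements; coefficient
# magnitudes serve as per-element relevance scores.
coef, *_ = np.linalg.lstsq(X - X.mean(axis=0), y - y.mean(), rcond=None)
relevance = np.abs(coef)
top2 = set(np.argsort(relevance)[-2:])
```

The regression correctly singles out the two performance-relevant elements without any model of the body's nonlinear dynamics, which is the point the abstract makes.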
Collapse
|
45
|
Diverse coordinate frames on sensorimotor areas in visuomotor transformation. Sci Rep 2017; 7:14950. [PMID: 29097688 PMCID: PMC5668410 DOI: 10.1038/s41598-017-14579-3] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2017] [Accepted: 10/12/2017] [Indexed: 11/08/2022] Open
Abstract
The visuomotor transformation during a goal-directed movement may involve a coordinate transformation from visual 'extrinsic' to muscle-like 'intrinsic' coordinate frames, which might be processed via a multilayer network architecture composed of neural basis functions. This theory suggests that the postural change during a goal-directed movement task alters activity patterns of the neurons in the intermediate layer of the visuomotor transformation that receives both visual and proprioceptive inputs, and thus influences the multi-voxel pattern of the blood oxygenation level dependent signal. Using a recently developed multi-voxel pattern decoding method, we found extrinsic, intrinsic and intermediate coordinate frames along the visuomotor cortical pathways during a visuomotor control task. The presented results support the hypothesis that, in humans, the extrinsic coordinate frame is transformed to the muscle-like frame over the dorsal pathway from the posterior parietal cortex and the dorsal premotor cortex to the primary motor cortex.
Collapse
|
46
|
Chen Q, Verguts T. Numerical Proportion Representation: A Neurocomputational Account. Front Hum Neurosci 2017; 11:412. [PMID: 28855867 PMCID: PMC5557774 DOI: 10.3389/fnhum.2017.00412] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2017] [Accepted: 07/31/2017] [Indexed: 11/13/2022] Open
Abstract
Proportion representation is an emerging subdomain in numerical cognition. However, its nature and its correlation with simple number representation remain elusive, especially at the theoretical level. To fill this gap, we propose a gain-field model of proportion representation to shed light on the neural and computational basis of proportion representation. The model is based on two well-supported neuroscientific findings. The first, gain modulation, is a general mechanism for information integration in the brain; the second relevant finding is how simple quantity is neurally represented. Based on these principles, the model accounts for recent relevant proportion representation data at both behavioral and neural levels. The model further addresses two key computational problems for the cognitive processing of proportions: invariance and generalization. Finally, the model provides pointers for future empirical testing.
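A toy version of the gain-field idea above (hypothetical tuning widths and a simple linear readout standing in for the paper's model): two quantities are each encoded by Gaussian number tuning, combined multiplicatively (gain modulation), and a linear readout of the gain-field layer reports their proportion, including for pairs never seen in training.

```python
import numpy as np

# Gaussian tuning curves over quantity; preferred values and widths are
# hypothetical stand-ins for number-tuned neurons.
prefs = np.linspace(0.5, 8.0, 12)

def pop(q, width=1.0):
    return np.exp(-0.5 * ((q - prefs) / width) ** 2)

def gain_field(a, b):
    """Multiplicative (gain-modulated) combination of the two number codes."""
    return np.outer(pop(a), pop(b)).ravel()

# Train a linear readout of the gain-field layer to report a / (a + b).
pairs = [(a, b) for a in range(1, 9) for b in range(1, 9)]
Phi = np.array([gain_field(a, b) for a, b in pairs])
target = np.array([a / (a + b) for a, b in pairs])
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)

# Generalization probe: a pair never seen during training (true proportion 1/3).
pred = gain_field(2.5, 5.0) @ w
```

The smooth multiplicative basis is what lets the readout generalize across pairs sharing the same proportion — the invariance problem the abstract highlights.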
Collapse
Affiliation(s)
- Qi Chen
- School of Psychology, South China Normal University, Guangzhou, China; Center for Studies of Psychological Application, South China Normal University, Guangzhou, China; Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China
| | - Tom Verguts
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
| |
Collapse
|
47
|
Affiliation(s)
- M. W. Spratling
- Department of Informatics, King's College London, London, UK
| |
Collapse
|
48
|
Environmental Anchoring of Head Direction in a Computational Model of Retrosplenial Cortex. J Neurosci 2017; 36:11601-11618. [PMID: 27852770 DOI: 10.1523/jneurosci.0516-16.2016] [Citation(s) in RCA: 51] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2016] [Revised: 07/30/2016] [Accepted: 08/04/2016] [Indexed: 01/29/2023] Open
Abstract
Allocentric (world-centered) spatial codes driven by path integration accumulate error unless reset by environmental sensory inputs that are necessarily egocentric (body-centered). Previous models of the head direction system avoided the necessary transformation between egocentric and allocentric reference frames by placing visual cues at infinity. Here we present a model of head direction coding that copes with exclusively proximal cues by making use of a conjunctive representation of head direction and location in retrosplenial cortex. Egocentric landmark bearing of proximal cues, which changes with location, is mapped onto this retrosplenial representation. The model avoids distortions due to parallax, which occur in simple models when a single proximal cue card is used, and can also accommodate multiple cues, suggesting how it can generalize to arbitrary sensory environments. It provides a functional account of the anatomical distribution of head direction cells along Papez' circuit, of place-by-direction coding in retrosplenial cortex, the anatomical connection from the anterior thalamic nuclei to retrosplenial cortex, and the involvement of retrosplenial cortex in navigation. In addition to parallax correction, the same mechanism allows for continuity of head direction coding between connected environments, and shows how a head direction representation can be stabilized by a single within-arena cue. We also make predictions for drift during exploration of a new environment, for the effects of hippocampal lesions on retrosplenial cells, and for head direction coding in differently shaped environments. SIGNIFICANCE STATEMENT The activity of head direction cells signals the direction of an animal's head relative to landmarks in the world. Although driven by internal estimates of head movements, head direction cells must be kept aligned to the external world by sensory inputs, which arrive in the reference frame of the sensory receptors.
We present a computational model, which proposes that sensory inputs are correctly associated to head directions by virtue of a conjunctive representation of place and head directions in the retrosplenial cortex. The model allows for a stable head direction signal, even when the sensory input from nearby cues changes dramatically whenever the animal moves to a different location, and enables stable representations of head direction across connected environments.
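The parallax problem the model addresses can be made concrete with a few lines of geometry: the egocentric bearing of a proximal cue depends on the animal's location, whereas a distant cue's bearing depends almost only on head direction (positions and the left-positive sign convention below are illustrative choices).

```python
import math

def egocentric_bearing(cue_xy, pos_xy, head_dir):
    """Bearing of a cue relative to the head (radians, positive = left),
    given allocentric cue position, animal position, and head direction."""
    allocentric = math.atan2(cue_xy[1] - pos_xy[1], cue_xy[0] - pos_xy[0])
    # Wrap the difference into (-pi, pi]
    return (allocentric - head_dir + math.pi) % (2 * math.pi) - math.pi

cue = (1.0, 0.0)         # a proximal cue 1 m east of the arena centre
far_cue = (1000.0, 0.0)  # the same direction, but effectively at infinity
hd = 0.0                 # animal facing east

b_centre = egocentric_bearing(cue, (0.0, 0.0), hd)   # cue dead ahead
b_moved = egocentric_bearing(cue, (0.5, 0.5), hd)    # parallax: bearing shifts
b_far = egocentric_bearing(far_cue, (0.5, 0.5), hd)  # negligible shift
```

The large bearing shift for the proximal cue is exactly the location dependence that the conjunctive place-by-direction representation must absorb; for a cue at infinity the mapping collapses to head direction alone, which is why earlier models could ignore it.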
Collapse
|
49
|
Sokoloski S. Implementing a Bayes Filter in a Neural Circuit: The Case of Unknown Stimulus Dynamics. Neural Comput 2017; 29:2450-2490. [PMID: 28599113 DOI: 10.1162/neco_a_00991] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
In order to interact intelligently with objects in the world, animals must first transform neural population responses into estimates of the dynamic, unknown stimuli that caused them. The Bayesian solution to this problem is known as a Bayes filter, which applies Bayes' rule to combine population responses with the predictions of an internal model. The internal model of the Bayes filter is based on the true stimulus dynamics, and in this note, we present a method for training a theoretical neural circuit to approximately implement a Bayes filter when the stimulus dynamics are unknown. To do this we use the inferential properties of linear probabilistic population codes to compute Bayes' rule and train a neural network to compute approximate predictions by the method of maximum likelihood. In particular, we perform stochastic gradient descent on the negative log-likelihood of the neural network parameters with a novel approximation of the gradient. We demonstrate our methods on a finite-state, a linear, and a nonlinear filtering problem and show how the hidden layer of the neural network develops tuning curves consistent with findings in experimental neuroscience.
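A minimal discrete-state Bayes filter of the kind described above — predict with an internal transition model, then update with the likelihood of a Poisson population response — can be sketched as follows (state space, tuning curves, and gains are all invented; the paper additionally learns the prediction step, which this sketch assumes known).

```python
import numpy as np

rng = np.random.default_rng(5)

# Discrete stimulus states with random-walk dynamics.
n_states = 20
states = np.arange(n_states)
trans = np.zeros((n_states, n_states))
for s in states:
    for s2 in (s - 1, s, s + 1):
        if 0 <= s2 < n_states:
            trans[s, s2] = 1.0
trans /= trans.sum(axis=1, keepdims=True)

# Poisson population code: Gaussian tuning curves plus a small baseline.
prefs = np.linspace(0, n_states - 1, 15)
def rates(s, gain=5.0, width=2.0):
    return gain * np.exp(-0.5 * ((s - prefs) / width) ** 2) + 0.1

def bayes_filter_step(belief, spikes):
    pred = belief @ trans                      # predict with the internal model
    log_like = np.array([spikes @ np.log(rates(s)) - rates(s).sum()
                         for s in states])     # Poisson log likelihood
    post = pred * np.exp(log_like - log_like.max())
    return post / post.sum()                   # Bayes' rule, normalized

# Track a simulated random-walk stimulus from population spike counts.
belief = np.full(n_states, 1.0 / n_states)
s_true, errors = 10, []
for _ in range(100):
    s_true = int(np.clip(s_true + rng.integers(-1, 2), 0, n_states - 1))
    spikes = rng.poisson(rates(s_true))
    belief = bayes_filter_step(belief, spikes)
    errors.append(abs(int(states[np.argmax(belief)]) - s_true))
mean_err = np.mean(errors)
```

The paper's contribution is to replace the known `trans` with a prediction network trained by maximum likelihood when the stimulus dynamics are unknown.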
Collapse
Affiliation(s)
- Sacha Sokoloski
- Max Planck Institute for Mathematics in the Sciences, Leipzig, 04103, Germany, and Albert Einstein College of Medicine, New York, NY 10461, U.S.A.
| |
Collapse
|
50
|
Born J, Galeazzi JM, Stringer SM. Hebbian learning of hand-centred representations in a hierarchical neural network model of the primate visual system. PLoS One 2017; 12:e0178304. [PMID: 28562618 PMCID: PMC5451055 DOI: 10.1371/journal.pone.0178304] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2017] [Accepted: 05/10/2017] [Indexed: 12/05/2022] Open
Abstract
A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centred neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centred visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space.
With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning in VisNet.
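The core of CT learning — spatial overlap between successive shifted images lets a trace-free Hebbian rule chain invariance across a continuum of positions — can be shown with a deliberately stripped-down sketch (a single output cell with a firing threshold standing in for VisNet's competitive network; retina size, threshold, and learning rate are arbitrary).

```python
import numpy as np

retina = 20
def image(pos):
    """A 3-pixel-wide 'hand-object' pattern at a given retinal position."""
    x = np.zeros(retina)
    x[pos:pos + 3] = 1.0
    return x

theta, eta = 1.5, 1.0   # firing threshold and Hebbian learning rate

def train(step):
    """Purely Hebbian, trace-free learning during sweeps across the retina."""
    w = np.zeros(retina)
    w[0:3] = 1.0                          # the cell initially responds at position 0
    for _ in range(5):                    # a few sweeps
        for pos in range(0, retina - 2, step):
            if w @ image(pos) > theta:    # cell fires -> strengthen active inputs
                w = np.minimum(w + eta * image(pos), 1.0)
    return w

w_ct = train(step=1)    # overlapping shifts: invariance chains across the retina
w_jump = train(step=3)  # non-overlapping shifts: no chaining is possible
```

With single-pixel shifts, each new image shares two active pixels with the last learned one, so the cell keeps firing and its receptive field spreads across the whole retina; with non-overlapping jumps the chain breaks immediately, which is why CT learning needs a near continuum of transformed views rather than a memory trace.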
Collapse
Affiliation(s)
- Jannis Born
- Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, Department of Experimental Psychology, University of Oxford, Oxfordshire, United Kingdom
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
| | - Juan M. Galeazzi
- Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, Department of Experimental Psychology, University of Oxford, Oxfordshire, United Kingdom
| | - Simon M. Stringer
- Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, Department of Experimental Psychology, University of Oxford, Oxfordshire, United Kingdom
| |
Collapse
|