1
Rolls ET. Two what, two where, visual cortical streams in humans. Neurosci Biobehav Rev 2024; 160:105650. PMID: 38574782. DOI: 10.1016/j.neubiorev.2024.105650.
Abstract
Recent cortical connectivity investigations lead to new concepts about 'What' and 'Where' visual cortical streams in humans, and how they connect to other cortical systems. A ventrolateral 'What' visual stream leads to the inferior temporal visual cortex for object and face identity, and provides 'What' information to the hippocampal episodic memory system, the anterior temporal lobe semantic system, and the orbitofrontal cortex emotion system. A superior temporal sulcus (STS) 'What' visual stream utilising connectivity from the temporal and parietal visual cortex responds to moving objects and faces, and face expression, and connects to the orbitofrontal cortex for emotion and social behaviour. A ventromedial 'Where' visual stream builds feature combinations for scenes, and provides 'Where' inputs via the parahippocampal scene area to the hippocampal episodic memory system that are also useful for landmark-based navigation. The dorsal 'Where' visual pathway to the parietal cortex provides for actions in space, but also provides coordinate transforms to provide inputs to the parahippocampal scene area for self-motion update of locations in scenes in the dark or when the view is obscured.
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK; Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China.
2
Alexander AS, Robinson JC, Stern CE, Hasselmo ME. Gated transformations from egocentric to allocentric reference frames involving retrosplenial cortex, entorhinal cortex, and hippocampus. Hippocampus 2023; 33:465-487. PMID: 36861201. PMCID: PMC10403145. DOI: 10.1002/hipo.23513.
Abstract
This paper reviews the recent experimental finding that neurons in behaving rodents show egocentric coding of the environment in a number of structures associated with the hippocampus. Many animals generating behavior on the basis of sensory input must deal with the transformation of coordinates from the egocentric position of sensory input relative to the animal, into an allocentric framework concerning the position of multiple goals and objects relative to each other in the environment. Neurons in retrosplenial cortex show egocentric coding of the position of boundaries in relation to an animal. These neuronal responses are discussed in relation to existing models of the transformation from egocentric to allocentric coordinates using gain fields and a new model proposing transformations of phase coding that differ from current models. The same type of transformations could allow hierarchical representations of complex scenes. The responses in rodents are also discussed in comparison to work on coordinate transformations in humans and non-human primates.
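The core egocentric-to-allocentric coordinate transformation reviewed in this paper can be sketched in a few lines (a minimal geometric illustration with hypothetical function names; the gain-field and phase-coding models discussed here implement this transformation neurally, not algebraically):

```python
import numpy as np

def ego_to_allo(animal_pos, head_dir_deg, ego_bearing_deg, distance):
    """Convert an egocentric observation (bearing of a boundary/object
    relative to the head, plus its distance) into an allocentric
    position in world coordinates.

    animal_pos: (x, y) location of the animal in the environment
    head_dir_deg: allocentric head direction in degrees
    ego_bearing_deg: bearing of the stimulus relative to the head
    distance: distance to the stimulus
    """
    # The allocentric bearing is the egocentric bearing rotated by
    # the current head direction (the gain-field "gating" signal).
    allo_bearing = np.deg2rad(head_dir_deg + ego_bearing_deg)
    dx = distance * np.cos(allo_bearing)
    dy = distance * np.sin(allo_bearing)
    return (animal_pos[0] + dx, animal_pos[1] + dy)

# A boundary 2 m straight ahead (egocentric bearing 0) of an animal at
# (1, 1) facing along the x-axis lies at the allocentric point (3, 1).
print(ego_to_allo((1.0, 1.0), 0.0, 0.0, 2.0))
# The identical egocentric input with the head rotated 90 degrees maps
# to a different allocentric location, which is why head direction must
# gate the transformation.
print(ego_to_allo((1.0, 1.0), 90.0, 0.0, 2.0))
```

The same dependence on head direction is what the retrosplenial egocentric boundary responses must be combined with before a hippocampal allocentric code can be produced.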
Affiliation(s)
- Andrew S Alexander, Jennifer C Robinson, Chantal E Stern, Michael E Hasselmo
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts, USA (all authors)
3
Rolls ET. Hippocampal spatial view cells for memory and navigation, and their underlying connectivity in humans. Hippocampus 2023; 33:533-572. PMID: 36070199. PMCID: PMC10946493. DOI: 10.1002/hipo.23467.
Abstract
Hippocampal and parahippocampal gyrus spatial view neurons in primates respond to the spatial location being looked at. The representation is allocentric, in that the responses are to locations "out there" in the world, and are relatively invariant with respect to retinal position, eye position, head direction, and the place where the individual is located. The underlying connectivity in humans is from ventromedial visual cortical regions to the parahippocampal scene area, leading to the theory that spatial view cells are formed by combinations of overlapping feature inputs self-organized based on their closeness in space. Thus, although spatial view cells represent "where" for episodic memory and navigation, they are formed by ventral visual stream feature inputs in the parahippocampal gyrus, in what is the parahippocampal scene area. A second "where" driver of spatial view cells is the set of parietal inputs, which, it is proposed, provide the idiothetic update for spatial view cells, used for memory recall and navigation when the spatial view details are obscured. Inferior temporal object "what" inputs and orbitofrontal cortex reward inputs connect to the human hippocampal system, and in macaques can be associated in the hippocampus with spatial view cell "where" representations to implement episodic memory. Hippocampal spatial view cells also provide a basis for navigation to a series of viewed landmarks, with the orbitofrontal cortex reward inputs to the hippocampus providing the goals for navigation, which can then be implemented by hippocampal connectivity in humans to parietal cortex regions involved in visuomotor actions in space. The presence of foveate vision and the highly developed temporal lobe for object and scene processing in primates including humans provide a basis for hippocampal spatial view cells to be key to understanding episodic memory in the primate and human hippocampus, and the roles of this system in navigation in primates, including humans.
Affiliation(s)
- Edmund T. Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK
- Department of Computer Science, University of Warwick, Coventry, UK
4
McFadyen JR, Heider B, Karkhanis AN, Cloherty SL, Muñoz F, Siegel RM, Morris AP. Robust coding of eye position in posterior parietal cortex despite context-dependent tuning. J Neurosci 2022; 42:4116-4130. PMID: 35410881. PMCID: PMC9121829. DOI: 10.1523/jneurosci.0674-21.2022.
Abstract
Neurons in posterior parietal cortex (PPC) encode many aspects of the sensory world (e.g., scene structure), the posture of the body, and plans for action. For a downstream computation, however, only some of these dimensions are relevant; the rest are "nuisance variables" because their influence on neural activity changes with sensory and behavioral context, potentially corrupting the read-out of relevant information. Here we show that a key postural variable for vision (eye position) is represented robustly in male macaque PPC across a range of contexts, although the tuning of single neurons depended strongly on context. Contexts were defined by different stages of a visually guided reaching task, including (1) a visually sparse epoch, (2) a visually rich epoch, (3) a "go" epoch in which the reach was cued, and (4) during the reach itself. Eye position was constant within trials but varied across trials in a 3 × 3 grid spanning 24° × 24°. Using demixed principal component analysis of neural spike-counts, we found that the subspace of the population response encoding eye position is orthogonal to that encoding task context. Accordingly, a context-naive (fixed-parameter) decoder was nevertheless able to estimate eye position reliably across contexts. Errors were small given the sample size (∼1.78°) and would likely be even smaller with larger populations. Moreover, they were comparable to those of decoders that were optimized for each context. Our results suggest that population codes in PPC shield encoded signals from crosstalk to support robust sensorimotor transformations across contexts.
Significance Statement: Neurons in posterior parietal cortex (PPC) that are sensitive to gaze direction are thought to play a key role in spatial perception and behavior (e.g., reaching, navigation), and provide a potential substrate for brain-controlled prosthetics. Many, however, change their tuning under different sensory and behavioral contexts, raising the prospect that they provide unreliable representations of egocentric space. Here, we analyze the structure of encoding dimensions for gaze direction and context in PPC during different stages of a visually guided reaching task. We use demixed dimensionality reduction and decoding techniques to show that the coding of gaze direction in PPC is mostly invariant to context. This suggests that PPC can provide reliable spatial information across sensory and behavioral contexts.
Affiliation(s)
- Jamie R McFadyen
- Neuroscience Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Clayton, VIC, 3800, Australia
- Barbara Heider
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, 07102
- Anushree N Karkhanis
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, 07102
- Shaun L Cloherty
- School of Engineering, RMIT University, Melbourne, VIC, 3001, Australia
- Fabian Muñoz
- Department of Neuroscience, Columbia University, New York, NY, 10027
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, 10027
- Ralph M Siegel
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, 07102
- Adam P Morris
- Neuroscience Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Clayton, VIC, 3800, Australia
- Monash Data Futures Institute, Monash University, Clayton, VIC, 3800, Australia
5
Zobeiri OA, Cullen KE. Distinct representations of body and head motion are dynamically encoded by Purkinje cell populations in the macaque cerebellum. eLife 2022; 11:75018. PMID: 35467528. PMCID: PMC9075952. DOI: 10.7554/elife.75018.
Abstract
The ability to accurately control our posture and perceive our spatial orientation during self-motion requires knowledge of the motion of both the head and body. However, while the vestibular sensors and nuclei directly encode head motion, no sensors directly encode body motion. Instead, the integration of vestibular and neck proprioceptive inputs is necessary to transform vestibular information into the body-centric reference frame required for postural control. The anterior vermis of the cerebellum is thought to play a key role in this transformation, yet how its Purkinje cells transform multiple streams of sensory information into an estimate of body motion remains unknown. Here, we recorded the activity of individual anterior vermis Purkinje cells in alert monkeys during passively applied whole-body, body-under-head, and head-on-body rotations. Most Purkinje cells dynamically encoded an intermediate representation of self-motion between head and body motion. Notably, Purkinje cells responded to both vestibular and neck proprioceptive stimulation with considerable heterogeneity in their response dynamics. Furthermore, their vestibular responses were tuned to head-on-body position. In contrast, targeted neurons in the deep cerebellar nuclei are known to unambiguously encode either head or body motion across conditions. Using a simple population model, we established that combining responses of ~40-50 Purkinje cells could explain the responses of these deep cerebellar nuclei neurons across all self-motion conditions. We propose that the observed heterogeneity in Purkinje cell response dynamics underlies the cerebellum's capacity to compute the dynamic representation of body motion required to ensure accurate postural control and perceptual stability in our daily lives.
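A toy version of the population-readout idea in this abstract (illustrative signal shapes and cell counts, not the authors' fitted model) shows how a single fixed linear combination of ~45 cells, each carrying a different intermediate mixture of head and body motion, can recover pure body motion across all three rotation conditions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_time = 45, 500
t = np.linspace(0, 5, n_time)

# Head and body velocity differ across the three rotation conditions
# (whole-body, body-under-head, head-on-body), concatenated in time.
head = np.concatenate([np.sin(2 * np.pi * t), np.zeros(n_time), np.sin(2 * np.pi * t)])
body = np.concatenate([np.sin(2 * np.pi * t), np.sin(2 * np.pi * t), np.zeros(n_time)])

# Each model Purkinje cell encodes a heterogeneous mixture of the two.
a = rng.uniform(0, 1, n_cells)
R = np.outer(a, head) + np.outer(1 - a, body)
R += 0.05 * rng.standard_normal(R.shape)            # response variability

# A single linear readout, as a downstream deep cerebellar nucleus
# neuron might implement, recovers body motion in every condition.
w, *_ = np.linalg.lstsq(R.T, body, rcond=None)
err = np.sqrt(np.mean((R.T @ w - body) ** 2))
print(f"readout RMS error across all conditions: {err:.3f}")
```

The point of the sketch is that no individual cell encodes body motion unambiguously, yet a fixed weighting of the heterogeneous population does.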
Affiliation(s)
- Omid A Zobeiri
- Department of Biomedical Engineering, McGill University, Montreal, Canada
- Kathleen E Cullen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, United States
6
He D, Ogmen H. Sensorimotor self-organization via circular-reactions. Front Neurorobot 2021; 15:658450. PMID: 34966265. PMCID: PMC8710445. DOI: 10.3389/fnbot.2021.658450.
Abstract
Newborns demonstrate innate abilities in coordinating their sensory and motor systems through reflexes. One notable characteristic is circular reactions consisting of self-generated motor actions that lead to correlated sensory and motor activities. This paper describes a model for goal-directed reaching based on circular reactions and exocentric reference-frames. The model is built using physiologically plausible visual processing modules and arm-control neural networks. The model incorporates map representations with ego- and exo-centric reference frames for sensory inputs, vector representations for motor systems, as well as local associative learning that result from arm explorations. The integration of these modules is simulated and tested in a three-dimensional spatial environment using Unity3D. The results show that, through self-generated activities, the model self-organizes to generate accurate arm movements that are tolerant with respect to various sources of noise.
Affiliation(s)
- Dongcheng He, Haluk Ogmen
- Laboratory of Perceptual and Cognitive Dynamics, Department of Electrical & Computer Engineering, Ritchie School of Engineering and Computer Science, University of Denver, Denver, CO, United States (both authors)
7
Hulse BK, Haberkern H, Franconville R, Turner-Evans D, Takemura SY, Wolff T, Noorman M, Dreher M, Dan C, Parekh R, Hermundstad AM, Rubin GM, Jayaraman V. A connectome of the Drosophila central complex reveals network motifs suitable for flexible navigation and context-dependent action selection. eLife 2021; 10:e66039. PMID: 34696823. PMCID: PMC9477501. DOI: 10.7554/elife.66039.
Abstract
Flexible behaviors over long timescales are thought to engage recurrent neural networks in deep brain regions, which are experimentally challenging to study. In insects, recurrent circuit dynamics in a brain region called the central complex (CX) enable directed locomotion, sleep, and context- and experience-dependent spatial navigation. We describe the first complete electron microscopy-based connectome of the Drosophila CX, including all its neurons and circuits at synaptic resolution. We identified new CX neuron types, novel sensory and motor pathways, and network motifs that likely enable the CX to extract the fly's head direction, maintain it with attractor dynamics, and combine it with other sensorimotor information to perform vector-based navigational computations. We also identified numerous pathways that may facilitate the selection of CX-driven behavioral patterns by context and internal state. The CX connectome provides a comprehensive blueprint necessary for a detailed understanding of network dynamics underlying sleep, flexible navigation, and state-dependent action selection.
Affiliation(s)
- Brad K Hulse, Hannah Haberkern, Romain Franconville, Daniel Turner-Evans, Shin-ya Takemura, Tanya Wolff, Marcella Noorman, Marisa Dreher, Chuntao Dan, Ruchi Parekh, Ann M Hermundstad, Gerald M Rubin, Vivek Jayaraman
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States (all authors)
8
Rolls ET. Learning invariant object and spatial view representations in the brain using slow unsupervised learning. Front Comput Neurosci 2021; 15:686239. PMID: 34366818. PMCID: PMC8335547. DOI: 10.3389/fncom.2021.686239.
Abstract
First, neurophysiological evidence for the learning of invariant representations in the inferior temporal visual cortex is described. This includes object and face representations with invariance for position, size, lighting, view and morphological transforms in the temporal lobe visual cortex; global object motion in the cortex in the superior temporal sulcus; and spatial view representations in the hippocampus that are invariant with respect to eye position, head direction, and place. Second, computational mechanisms that enable the brain to learn these invariant representations are proposed. For the ventral visual system, one key adaptation is the use of information available in the statistics of the environment in slow unsupervised learning to learn transform-invariant representations of objects. This contrasts with deep supervised learning in artificial neural networks, which uses training with thousands of exemplars forced into different categories by neuronal teachers. Similar slow learning principles apply to the learning of global object motion in the dorsal visual system leading to the cortex in the superior temporal sulcus. The learning rule that has been explored in VisNet is an associative rule with a short-term memory trace. The feed-forward architecture has four stages, with convergence from stage to stage. This type of slow learning is implemented in the brain in hierarchically organized competitive neuronal networks with convergence from stage to stage, with only 4-5 stages in the hierarchy. Slow learning is also shown to help the learning of coordinate transforms using gain modulation in the dorsal visual system extending into the parietal cortex and retrosplenial cortex. Representations are learned that are in allocentric spatial view coordinates of locations in the world and that are independent of eye position, head direction, and the place where the individual is located. 
This enables hippocampal spatial view cells to use idiothetic, self-motion, signals for navigation when the view details are obscured for short periods.
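The trace learning rule explored in VisNet, as described in this abstract, can be written compactly (a minimal sketch with illustrative values of the trace parameter η and learning rate α; in VisNet the rule operates inside a four-stage competitive hierarchy):

```python
import numpy as np

def trace_rule_update(w, x, y_trace_prev, y, eta=0.8, alpha=0.01):
    """One step of an associative rule with a short-term memory trace.

    The postsynaptic trace mixes the current firing y with the trace
    from the previous time step, so temporally adjacent inputs
    (successive transforms of the same object) strengthen the same
    synapses:
        y_bar(t) = (1 - eta) * y(t) + eta * y_bar(t-1)
        dw       = alpha * y_bar(t) * x(t)
    """
    y_trace = (1.0 - eta) * y + eta * y_trace_prev
    w = w + alpha * y_trace * x
    return w, y_trace

w = np.zeros(4)
y_trace = 0.0
view1 = np.array([1.0, 0.0, 0.0, 0.0])   # transform 1 of an object
view2 = np.array([0.0, 1.0, 0.0, 0.0])   # transform 2, shown next in time

# The neuron fires for transform 1; when transform 2 follows, the
# lingering trace still gates learning, binding both views to the
# same output neuron even though y = 0 at that moment.
w, y_trace = trace_rule_update(w, view1, y_trace, y=1.0)
w, y_trace = trace_rule_update(w, view2, y_trace, y=0.0)
print(w)
```

This temporal binding is how the statistics of the environment (objects transform slowly relative to the rate at which different objects appear) are exploited for invariance learning without a teacher.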
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, United Kingdom
- Department of Computer Science, University of Warwick, Coventry, United Kingdom
9
Rolls ET. Neurons including hippocampal spatial view cells, and navigation in primates including humans. Hippocampus 2021; 31:593-611. PMID: 33760309. DOI: 10.1002/hipo.23324.
Abstract
A new theory is proposed of mechanisms of navigation in primates including humans in which spatial view cells found in the primate hippocampus and parahippocampal gyrus are used to guide the individual from landmark to landmark. The navigation involves approach to each landmark in turn (taxis), using spatial view cells to identify the next landmark in the sequence, and does not require a topological map. Two other cell types found in primates, whole body motion cells, and head direction cells, can be utilized in the spatial view cell navigational mechanism, but are not essential. If the landmarks become obscured, then the spatial view representations can be updated by self-motion (idiothetic) path integration using spatial coordinate transform mechanisms in the primate dorsal visual system to transform from egocentric to allocentric spatial view coordinates. A continuous attractor network or time cells or working memory is used in this approach to navigation to encode and recall the spatial view sequences involved. I also propose how navigation can be performed using a further type of neuron found in primates, allocentric-bearing-to-a-landmark neurons, in which changes of direction are made when a landmark reaches a particular allocentric bearing. This is useful if a landmark cannot be approached. The theories are made explicit in models of navigation, which are then illustrated by computer simulations. These types of navigation are contrasted with triangulation, which requires a topological map. It is proposed that the first strategy utilizing spatial view cells is used frequently in humans, and is relatively simple because primates have spatial view neurons that respond allocentrically to locations in spatial scenes. An advantage of this approach to navigation is that hippocampal spatial view neurons are also useful for episodic memory, and for imagery.
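The landmark-to-landmark (taxis) strategy proposed here can be sketched as a simple simulation (function name and step parameters are illustrative; the paper's models use spatial view cells, head direction cells, and attractor dynamics rather than explicit coordinates):

```python
import numpy as np

def navigate_by_landmarks(start, landmarks, step=0.5, arrive_tol=0.4):
    """Head toward the currently viewed landmark; on arrival, the next
    spatial view in the stored sequence becomes the target. No
    topological map is built or consulted at any point."""
    pos = np.asarray(start, dtype=float)
    path = [pos.copy()]
    for lm in landmarks:                      # recalled spatial-view sequence
        lm = np.asarray(lm, dtype=float)
        while np.linalg.norm(lm - pos) > arrive_tol:
            heading = (lm - pos) / np.linalg.norm(lm - pos)
            pos = pos + step * heading        # approach the viewed landmark
            path.append(pos.copy())
    return np.array(path)

path = navigate_by_landmarks((0, 0), [(3, 0), (3, 4)])
print(f"{len(path)} positions, final position {path[-1]}")
```

The essential simplification matches the theory: only the identity and direction of the currently viewed landmark are needed at each moment, which is why the strategy needs no triangulation.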
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK
- Department of Computer Science, University of Warwick, Coventry, UK
10
Abstract
Safe human-robot interactions require robots to be able to learn how to behave appropriately in spaces populated by people and thus to cope with the challenges posed by our dynamic and unstructured environment, rather than being provided a rigid set of rules for operations. In humans, these capabilities are thought to be related to our ability to perceive our body in space, sensing the location of our limbs during movement, being aware of other objects and agents, and controlling our body parts to interact with them intentionally. Toward the next generation of robots with bio-inspired capacities, in this paper, we first review the developmental processes of underlying mechanisms of these abilities: the sensory representations of body schema, peripersonal space, and the active self in humans. Second, we provide a survey of robotics models of these sensory representations and robotics models of the self, and we compare these models with the human counterparts. Finally, we analyze what is missing from these robotics models and propose a theoretical computational framework, which aims to allow the emergence of the sense of self in artificial agents by developing sensory representations through self-exploration.
11
Mallory CS, Hardcastle K, Campbell MG, Attinger A, Low IIC, Raymond JL, Giocomo LM. Mouse entorhinal cortex encodes a diverse repertoire of self-motion signals. Nat Commun 2021; 12:671. PMID: 33510164. PMCID: PMC7844029. DOI: 10.1038/s41467-021-20936-8.
Abstract
Neural circuits generate representations of the external world from multiple information streams. The navigation system provides an exceptional lens through which we may gain insights about how such computations are implemented. Neural circuits in the medial temporal lobe construct a map-like representation of space that supports navigation. This computation integrates multiple sensory cues, and, in addition, is thought to require cues related to the individual's movement through the environment. Here, we identify multiple self-motion signals, related to the position and velocity of the head and eyes, encoded by neurons in a key node of the navigation circuitry of mice, the medial entorhinal cortex (MEC). The representation of these signals is highly integrated with other cues in individual neurons. Such information could be used to compute the allocentric location of landmarks from visual cues and to generate internal representations of space.
Affiliation(s)
- Caitlin S Mallory, Kiah Hardcastle, Malcolm G Campbell, Alexander Attinger, Isabel I C Low, Jennifer L Raymond, Lisa M Giocomo
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA (all authors)
12
Murphy E. Commentary: A Compositional Neural Architecture for Language. Front Psychol 2020; 11:2101. PMID: 32982860. PMCID: PMC7492643. DOI: 10.3389/fpsyg.2020.02101.
Affiliation(s)
- Elliot Murphy
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center, Houston, TX, United States
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center, Houston, TX, United States
13
Abstract
Hierarchical structure and compositionality imbue human language with unparalleled expressive power and set it apart from other perception–action systems. However, neither formal nor neurobiological models account for how these defining computational properties might arise in a physiological system. I attempt to reconcile hierarchy and compositionality with principles from cell assembly computation in neuroscience; the result is an emerging theory of how the brain could convert distributed perceptual representations into hierarchical structures across multiple timescales while representing interpretable incremental stages of (de)compositional meaning. The model's architecture—a multidimensional coordinate system based on neurophysiological models of sensory processing—proposes that a manifold of neural trajectories encodes sensory, motor, and abstract linguistic states. Gain modulation, including inhibition, tunes the path in the manifold in accordance with behavior and is how latent structure is inferred. As a consequence, predictive information about upcoming sensory input during production and comprehension is available without a separate operation. The proposed processing mechanism is synthesized from current models of neural entrainment to speech, concepts from systems neuroscience and category theory, and a symbolic-connectionist computational model that uses time and rhythm to structure information. I build on evidence from cognitive neuroscience and computational modeling that suggests a formal and mechanistic alignment between structure building and neural oscillations, and move toward unifying basic insights from linguistics and psycholinguistics with the currency of neural computation.
Affiliation(s)
- Andrea E. Martin
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, The Netherlands
14
Sajad A, Sadeh M, Crawford JD. Spatiotemporal transformations for gaze control. Physiol Rep 2020; 8:e14533. PMID: 32812395. PMCID: PMC7435051. DOI: 10.14814/phy2.14533.
Abstract
Sensorimotor transformations require spatiotemporal coordination of signals, that is, through both time and space. For example, the gaze control system employs signals that are time-locked to various sensorimotor events, but the spatial content of these signals is difficult to assess during ordinary gaze shifts. In this review, we describe the various models and methods that have been devised to test this question, and their limitations. We then describe a new method that can (a) simultaneously test between all of these models during natural, head-unrestrained conditions, and (b) track the evolving spatial continuum from target (T) to future gaze coding (G, including errors) through time. We then summarize some applications of this technique, comparing spatiotemporal coding in the primate frontal eye field (FEF) and superior colliculus (SC). The results confirm that these areas preferentially encode eye-centered, effector-independent parameters, and show, for the first time in ordinary gaze shifts, a spatial transformation between visual and motor responses from T to G coding. We introduce a new set of spatial models (T-G continuum) that revealed task-dependent timing of this transformation: progressive during a memory delay between vision and action, and almost immediate without such a delay. We synthesize the results from our studies and supplement it with previous knowledge of anatomy and physiology to propose a conceptual model where cumulative transformation noise is realized as inaccuracies in gaze behavior. We conclude that the spatiotemporal transformation for gaze is both local (observed within and across neurons in a given area) and distributed (with common signals shared across remote but interconnected structures).
Affiliation(s)
- Amirsaman Sajad
- Centre for Vision Research, York University, Toronto, ON, Canada
- Psychology Department, Vanderbilt University, Nashville, TN, USA
- Morteza Sadeh
- Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Neurosurgery, University of Illinois at Chicago, Chicago, IL, USA
- John Douglas Crawford
- Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications Program (VISTA), Neuroscience Graduate Diploma Program, Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, ON, Canada
15
Dowiasch S, Meyer-Stender S, Klingenhoefer S, Bremmer F. Nonretinocentric localization of successively presented flashes during smooth pursuit eye movements. J Vis 2020; 20:8. [PMID: 32298416 PMCID: PMC7405758 DOI: 10.1167/jov.20.4.8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Keeping track of objects in our environment across body and eye movements is essential for perceptual stability and localization of external objects. As yet, it is largely unknown how this perceptual stability is achieved. A common behavioral approach to investigate potential neuronal mechanisms underlying spatial vision has been the presentation of one brief visual stimulus across eye movements. Here, we adopted this approach and aimed to determine the reference frame of the perceptual localization of two successively presented flashes during fixation and smooth pursuit eye movements (SPEMs). To this end, eccentric flashes with a stimulus onset asynchrony of 0 or ±200 ms had to be localized with respect to each other during fixation and SPEMs. The results were used to evaluate different models predicting the reference frame in which the spatial information is represented. First, we were able to reproduce the well-known effect of relative mislocalization during fixation. Second, smooth pursuit led to a characteristic relative mislocalization, different from that during fixation. A model assuming that relative localization takes place in a nonretinocentric reference frame described our data best. This suggests that the relative localization judgment is performed at a stage of visual processing in which retinal and nonretinal information is available.
16
Schneider L, Dominguez-Vargas AU, Gibson L, Kagan I, Wilke M. Eye position signals in the dorsal pulvinar during fixation and goal-directed saccades. J Neurophysiol 2020; 123:367-391. [DOI: 10.1152/jn.00432.2019] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Sensorimotor cortical areas contain eye position information thought to ensure perceptual stability across saccades and underlie spatial transformations supporting goal-directed actions. One pathway by which eye position signals could be relayed to and across cortical areas is via the dorsal pulvinar. Several studies have demonstrated saccade-related activity in the dorsal pulvinar, and we have recently shown that many neurons exhibit postsaccadic spatial preference. In addition, dorsal pulvinar lesions lead to gaze-holding deficits expressed as nystagmus or ipsilesional gaze bias, prompting us to investigate the effects of eye position. We tested three starting eye positions (−15°, 0°, 15°) in monkeys performing a visually cued memory saccade task. We found two main types of gaze dependence. First, ~50% of neurons showed dependence on static gaze direction during initial and postsaccadic fixation, and might be signaling the position of the eyes in the orbit or coding foveal targets in a head/body/world-centered reference frame. The population-derived eye position signal lagged behind the saccade. Second, many neurons showed a combination of eye-centered and gaze-dependent modulation of visual, memory, and saccadic responses to a peripheral target. A small subset showed effects consistent with eye position-dependent gain modulation. Analysis of reference frames across task epochs from visual cue to postsaccadic fixation indicated a transition from predominantly eye-centered encoding to representation of final gaze or foveated locations in nonretinocentric coordinates. These results show that dorsal pulvinar neurons carry information about eye position, which could contribute to steady gaze during postural changes and to reference frame transformations for visually guided eye and limb movements. 
NEW & NOTEWORTHY Work on the pulvinar focused on eye-centered visuospatial representations, but position of the eyes in the orbit is also an important factor that needs to be taken into account during spatial orienting and goal-directed reaching. We show that dorsal pulvinar neurons are influenced by eye position. Gaze direction modulated ongoing firing during stable fixation, as well as visual and saccade responses to peripheral targets, suggesting involvement of the dorsal pulvinar in spatial coordinate transformations.
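The eye-position gain modulation reported above is commonly modeled as a multiplicative interaction between a retinotopic visual response and orbital eye position. A minimal sketch of that idea; the Gaussian receptive field, gain slope, and offset below are illustrative assumptions, not parameters fitted in this study:

```python
import numpy as np

def model_response(retinal_pos, eye_pos, rf_center=0.0, rf_width=5.0,
                   gain_slope=0.02, gain_offset=1.0):
    """Gaussian visual response in eye-centered (retinal) coordinates,
    multiplied by a planar eye-position gain field."""
    visual = np.exp(-((retinal_pos - rf_center) ** 2) / (2 * rf_width ** 2))
    gain = max(gain_offset + gain_slope * eye_pos, 0.0)  # rectified linear gain
    return visual * gain

# Identical retinal stimulus at the three starting eye positions used in the task:
rates = [model_response(0.0, e) for e in (-15.0, 0.0, 15.0)]
```

The same retinal input yields graded firing across orbital positions, which is the kind of signal a downstream area could exploit for reference-frame transformations.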
Affiliation(s)
- Lukas Schneider
- Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany
- Department of Cognitive Neurology, University of Goettingen, Goettingen, Germany
- Adan-Ulises Dominguez-Vargas
- Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany
- Escuela Nacional de Estudios Superiores Unidad-León, Universidad Nacional Autónoma de México, León, Guanajuato, Mexico
- Lydia Gibson
- Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany
- Department of Cognitive Neurology, University of Goettingen, Goettingen, Germany
- Igor Kagan
- Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany
- Department of Cognitive Neurology, University of Goettingen, Goettingen, Germany
- Leibniz ScienceCampus Primate Cognition, Goettingen, Germany
- Melanie Wilke
- Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany
- Department of Cognitive Neurology, University of Goettingen, Goettingen, Germany
- Leibniz ScienceCampus Primate Cognition, Goettingen, Germany
17
Rolls ET. Spatial coordinate transforms linking the allocentric hippocampal and egocentric parietal primate brain systems for memory, action in space, and navigation. Hippocampus 2019; 30:332-353. [PMID: 31697002 DOI: 10.1002/hipo.23171] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2019] [Revised: 10/05/2019] [Accepted: 10/09/2019] [Indexed: 01/03/2023]
Abstract
A theory and model are described of spatial coordinate transforms in the dorsal visual system through the parietal cortex that enable an interface, via the posterior cingulate and related retrosplenial cortex, to allocentric spatial representations in the primate hippocampus. First, a new approach to coordinate transform learning in the brain is proposed, in which the traditional gain modulation is complemented by temporal trace rule competitive network learning. It is shown in a computational model that the new approach works much more precisely than gain modulation alone, by enabling neurons to represent the different combinations of signal and gain modulator more accurately. This understanding may have application to many brain areas where coordinate transforms are learned. Second, a set of coordinate transforms is proposed for the dorsal visual system/parietal areas that enables a representation to be formed in allocentric spatial view coordinates. The input stimulus is merely a stimulus at a given position in retinal space, and the gain modulation signals needed are eye position, head direction, and place, all of which are present in the primate brain. Neurons that encode the bearing to a landmark are involved in the coordinate transforms. Part of the importance here is that the coordinates of the allocentric view produced in this model are the same as those of spatial view cells that respond to allocentric view recorded in the primate hippocampus and parahippocampal cortex. The result is that information from the dorsal visual system can be used to update the spatial input to the hippocampus in the appropriate allocentric coordinate frame, including providing for idiothetic update to allow for self-motion. It is further shown how hippocampal spatial view cells could be useful for the transform from hippocampal allocentric coordinates to egocentric coordinates useful for actions in space and for navigation.
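The trace-rule component of the proposed learning scheme can be sketched as a Hebbian update driven by a temporal trace of postsynaptic activity, so that input combinations presented close together in time (e.g., different retinal-position/eye-position pairs corresponding to one spatial location) become bound onto the same output neuron. All network sizes, rates, and patterns below are invented for the demo:

```python
import numpy as np

def trace_update(w, x_seq, eta=0.8, alpha=0.1):
    """One pass of trace-rule Hebbian learning: the postsynaptic trace ybar
    carries activity across successive inputs, so temporally adjacent
    patterns strengthen onto the same output neuron."""
    ybar = 0.0
    for x in x_seq:
        y = float(w @ x)              # postsynaptic activation
        ybar = eta * ybar + (1 - eta) * y
        w = w + alpha * ybar * x      # Hebbian step using the trace
        w = w / np.linalg.norm(w)     # weight normalization (competition)
    return w

rng = np.random.default_rng(0)
n = 20
# Two different input combinations that correspond to the same location,
# presented in temporal proximity:
a = np.zeros(n); a[:5] = 1.0
b = np.zeros(n); b[10:15] = 1.0
w = rng.uniform(0.05, 0.15, n)
w = w / np.linalg.norm(w)
for _ in range(25):                   # alternate order so the trace binds both ways
    w = trace_update(w, [a, b])
    w = trace_update(w, [b, a])
resp_a, resp_b = float(w @ a), float(w @ b)
```

After training, the neuron responds strongly to both input combinations, the binding property that plain Hebbian learning on single patterns would not produce.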
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry, UK
18
Paraskevoudi N, Pezaris JS. Eye Movement Compensation and Spatial Updating in Visual Prosthetics: Mechanisms, Limitations and Future Directions. Front Syst Neurosci 2019; 12:73. [PMID: 30774585 PMCID: PMC6368147 DOI: 10.3389/fnsys.2018.00073] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2018] [Accepted: 12/21/2018] [Indexed: 01/01/2023] Open
Abstract
Despite appearing automatic and effortless, perceiving the visual world is a highly complex process that depends on intact visual and oculomotor function. Understanding the mechanisms underlying spatial updating (i.e., gaze contingency) represents an important, yet unresolved issue in the fields of visual perception and cognitive neuroscience. Many questions regarding the processes involved in updating visual information as a function of the movements of the eyes are still open for research. Beyond its importance for basic research, gaze contingency represents a challenge for visual prosthetics as well. While most artificial vision studies acknowledge its importance in providing accurate visual percepts to blind implanted patients, the majority of current devices do not compensate for gaze position. To date, artificial percepts have been provided to the blind population either by intraocular light-sensing circuitry or by using external cameras. While the former commonly accounts for gaze shifts, the latter requires the use of eye-tracking or similar technology in order to deliver percepts based on gaze position. Inspired by the need to overcome the hurdle of gaze contingency in artificial vision, we aim to provide a thorough overview of the research addressing the neural underpinnings of eye-movement compensation, as well as its relevance in visual prosthetics. The present review outlines what is currently known about the mechanisms underlying spatial updating and reviews the attempts of current visual prosthetic devices to overcome the hurdle of gaze contingency. We discuss the limitations of the current devices and highlight the need to use eye-tracking methodology in order to introduce gaze-contingent information to visual prosthetics.
Affiliation(s)
- Nadia Paraskevoudi
- Brainlab – Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Barcelona, Spain
- Institute of Neurosciences, University of Barcelona, Barcelona, Spain
- John S. Pezaris
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, United States
- Department of Neurosurgery, Harvard Medical School, Boston, MA, United States
19
Ranganathan GN, Apostolides PF, Harnett MT, Xu NL, Druckmann S, Magee JC. Active dendritic integration and mixed neocortical network representations during an adaptive sensing behavior. Nat Neurosci 2018; 21:1583-1590. [PMID: 30349100 PMCID: PMC6203624 DOI: 10.1038/s41593-018-0254-6] [Citation(s) in RCA: 54] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2018] [Accepted: 09/13/2018] [Indexed: 02/08/2023]
Abstract
Animals strategically scan the environment to form an accurate perception of their surroundings. Here we investigated the neuronal representations that mediate this behavior. Ca2+ imaging and selective optogenetic manipulation during an active sensing task reveal that layer 5 pyramidal neurons in the vibrissae cortex produce a diverse and distributed representation that is required for mice to adapt their whisking motor strategy to changing sensory cues. The optogenetic perturbation degraded single-neuron selectivity and network population encoding through a selective inhibition of active dendritic integration. Together, the data indicate that active dendritic integration in pyramidal neurons produces a nonlinearly mixed network representation of joint sensorimotor parameters that is used to transform sensory information into motor commands during adaptive behavior. The prevalence of the layer 5 cortical circuit motif suggests that this is a general circuit computation.
Affiliation(s)
- Pierre F Apostolides
- Howard Hughes Medical Institute, Janelia Research Campus, Ashburn, VA, USA; Kresge Hearing Research Institute, Department of Otolaryngology, University of Michigan, Ann Arbor, MI, USA
- Mark T Harnett
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Ning-Long Xu
- Institute of Neuroscience, State Key Laboratory of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai, China
- Shaul Druckmann
- Howard Hughes Medical Institute, Janelia Research Campus, Ashburn, VA, USA
- Jeffrey C Magee
- Howard Hughes Medical Institute, Janelia Research Campus, Ashburn, VA, USA; Howard Hughes Medical Institute, Baylor College of Medicine, Houston, TX, USA
20
Dumoulin SO, Knapen T. How Visual Cortical Organization Is Altered by Ophthalmologic and Neurologic Disorders. Annu Rev Vis Sci 2018; 4:357-379. [DOI: 10.1146/annurev-vision-091517-033948] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Receptive fields are a core property of cortical organization. Modern neuroimaging allows routine access to visual population receptive fields (pRFs), enabling investigations of clinical disorders. Yet how the underlying neural circuitry operates is controversial. The controversy surrounds observations that measurements of pRFs can change in healthy adults as well as in patients with a range of ophthalmological and neurological disorders. The debate relates to the balance between plasticity and stability of the underlying neural circuitry. We propose that to move the debate forward, the field needs to define the implied mechanism. First, we review the pRF changes in both healthy subjects and those with clinical disorders. Then, we propose a computational model that describes how pRFs can change in healthy humans. We assert that we can correctly interpret the pRF changes in clinical disorders only if we establish the capabilities and limitations of pRF dynamics in healthy humans with mechanistic models that provide quantitative predictions.
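The pRF framework discussed here models each recording site's population receptive field as a 2D Gaussian whose predicted response is its overlap with the stimulus aperture, with parameters fit to the measured time course. A toy version with a grid search over the pRF center; the bar-sweep design and all parameter values are assumptions for the demo, not the authors' fitting pipeline:

```python
import numpy as np

xs = np.linspace(-10.0, 10.0, 41)          # visual field coordinates (deg)
X, Y = np.meshgrid(xs, xs)

def prf(x0, y0, sigma):
    """Isotropic 2D Gaussian pRF, normalized to unit mass over the field."""
    g = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def predicted_timecourse(x0, y0, sigma, apertures):
    rf = prf(x0, y0, sigma)
    return np.array([(rf * ap).sum() for ap in apertures])

# Assumed design: vertical then horizontal bar sweeps across the field.
sweeps = np.linspace(-8.0, 8.0, 17)
apertures = [np.abs(X - c) < 1.0 for c in sweeps] + [np.abs(Y - c) < 1.0 for c in sweeps]

data = predicted_timecourse(2.0, -4.0, 2.0, apertures)   # noiseless "measurement"

# Grid-search the pRF center that best explains the time course:
grid = [(float(x0), float(y0)) for x0 in xs[::4] for y0 in xs[::4]]
best = min(grid, key=lambda p:
           np.sum((predicted_timecourse(p[0], p[1], 2.0, apertures) - data) ** 2))
```

Apparent pRF changes in patients can then be framed as changes in such fitted parameters, which is why the authors argue for mechanistic models before interpreting them.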
Affiliation(s)
- Serge O. Dumoulin
- Spinoza Centre for Neuroimaging, 1105 BK Amsterdam, Netherlands
- Department of Experimental and Applied Psychology, VU University Amsterdam, 1181 BT Amsterdam, Netherlands
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, 3584 CS Utrecht, Netherlands
- Tomas Knapen
- Spinoza Centre for Neuroimaging, 1105 BK Amsterdam, Netherlands
- Department of Experimental and Applied Psychology, VU University Amsterdam, 1181 BT Amsterdam, Netherlands
21
Dowiasch S, Blohm G, Bremmer F. Neural correlate of spatial (mis-)localization during smooth eye movements. Eur J Neurosci 2016; 44:1846-55. [PMID: 27177769 PMCID: PMC5089592 DOI: 10.1111/ejn.13276] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2015] [Accepted: 04/19/2016] [Indexed: 11/29/2022]
Abstract
The dependence of neuronal discharge on the position of the eyes in the orbit is a functional characteristic of many visual cortical areas of the macaque. It has been suggested that these eye-position signals provide relevant information for a coordinate transformation of visual signals into a non-eye-centered frame of reference. This transformation could be an integral part of achieving visual perceptual stability across eye movements. Previous studies demonstrated close to veridical eye-position decoding during stable fixation as well as characteristic erroneous decoding across saccadic eye movements. Here we aimed to decode eye position during smooth pursuit. We recorded neural activity in macaque area VIP during steady fixation, saccades, and smooth pursuit and investigated the temporal and spatial accuracy of eye position as decoded from the neuronal discharges. Confirming previous results, the activity of the majority of neurons depended linearly on horizontal and vertical eye position. The application of a previously introduced computational approach (isofrequency decoding) allowed eye position decoding with considerable accuracy during steady fixation. We applied the same decoder on the activity of the same neurons during smooth pursuit. On average, the decoded signal was leading the current eye position. A model combining this constant lead of the decoded eye position with a previously described attentional bias ahead of the pursuit target describes the asymmetric mislocalization pattern for briefly flashed stimuli during smooth pursuit eye movements as found in human behavioral studies.
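As a simplified stand-in for the isofrequency decoding used in the paper, eye position can be read out from a population whose rates depend linearly on horizontal and vertical eye position with an ordinary least-squares decoder. All tuning parameters below are invented, and the noise-free linear rates are an idealization:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 50
slopes = rng.normal(0.0, 0.5, (n_neurons, 2))   # rate change per degree (h, v)
baseline = rng.uniform(5.0, 20.0, n_neurons)    # firing at straight-ahead gaze

def rates(eye):
    """Noise-free population rates, linear in (horizontal, vertical) eye position."""
    return baseline + slopes @ np.asarray(eye, float)

# Fit a linear decoder on a 3 x 3 grid of training fixations:
train = [(x, y) for x in (-10.0, 0.0, 10.0) for y in (-10.0, 0.0, 10.0)]
R = np.array([rates(e) for e in train])
A = np.hstack([R, np.ones((len(train), 1))])    # add an intercept column
W, *_ = np.linalg.lstsq(A, np.array(train), rcond=None)

# Decode a fixation the decoder never saw:
decoded = np.append(rates((5.0, -5.0)), 1.0) @ W
```

Because the rates are linear in eye position, the linear readout generalizes exactly to untrained fixations; with real, noisy discharge the decode would show the errors and leads the study characterizes.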
Affiliation(s)
- Stefan Dowiasch
- Department of Neurophysics, Philipps-University Marburg, Karl-von-Frisch-Straße 8a, 35043 Marburg, Germany
- Frank Bremmer
- Department of Neurophysics, Philipps-University Marburg, Karl-von-Frisch-Straße 8a, 35043 Marburg, Germany
22
Abstract
How do observers recognize objects after spatial transformations? Recent neurocomputational models have proposed that object recognition is based on coordinate transformations that align memory and stimulus representations. If the recognition of a misoriented object is achieved by adjusting a coordinate system (or reference frame), then recognition should be facilitated when the object is preceded by a different object in the same orientation. In the two experiments reported here, two objects were presented in brief masked displays that were in close temporal contiguity; the objects were in either congruent or incongruent picture-plane orientations. Results showed that naming accuracy was higher for congruent than for incongruent orientations. The congruency effect was independent of superordinate category membership (Experiment 1) and was found for objects with different main axes of elongation (Experiment 2). The results indicate congruency effects for common familiar objects even when they have dissimilar shapes. These findings are compatible with models in which object recognition is achieved by an adjustment of a perceptual coordinate system.
Affiliation(s)
- M Graf
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany.
23
Marino AC, Mazer JA. Perisaccadic Updating of Visual Representations and Attentional States: Linking Behavior and Neurophysiology. Front Syst Neurosci 2016; 10:3. [PMID: 26903820 PMCID: PMC4743436 DOI: 10.3389/fnsys.2016.00003] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2015] [Accepted: 01/15/2016] [Indexed: 11/13/2022] Open
Abstract
During natural vision, saccadic eye movements lead to frequent retinal image changes that result in different neuronal subpopulations representing the same visual feature across fixations. Despite these potentially disruptive changes to the neural representation, our visual percept is remarkably stable. Visual receptive field remapping, characterized as an anticipatory shift in the position of a neuron's spatial receptive field immediately before saccades, has been proposed as one possible neural substrate for visual stability. Many of the specific properties of remapping, e.g., the exact direction of remapping relative to the saccade vector and the precise mechanisms by which remapping could instantiate stability, remain a matter of debate. Recent studies have also shown that visual attention, like perception itself, can be sustained across saccades, suggesting that the attentional control system can also compensate for eye movements. Classical remapping could have an attentional component, or there could be a distinct attentional analog of visual remapping. At this time we do not yet fully understand how the stability of attentional representations relates to perisaccadic receptive field shifts. In this review, we develop a vocabulary for discussing perisaccadic shifts in receptive field location and perisaccadic shifts of attentional focus, review and synthesize behavioral and neurophysiological studies of perisaccadic perception and perisaccadic attention, and identify open questions that remain to be experimentally addressed.
Affiliation(s)
- Alexandria C Marino
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT, USA; Medical Scientist Training Program, Yale University School of Medicine, New Haven, CT, USA
- James A Mazer
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT, USA; Department of Neurobiology, Yale University School of Medicine, New Haven, CT, USA; Department of Psychology, Yale University, New Haven, CT, USA
24
Fusi S, Miller EK, Rigotti M. Why neurons mix: high dimensionality for higher cognition. Curr Opin Neurobiol 2016; 37:66-74. [PMID: 26851755 DOI: 10.1016/j.conb.2016.01.010] [Citation(s) in RCA: 349] [Impact Index Per Article: 43.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2015] [Revised: 01/14/2016] [Accepted: 01/18/2016] [Indexed: 12/15/2022]
Abstract
Neurons often respond to diverse combinations of task-relevant variables. This form of mixed selectivity plays an important computational role which is related to the dimensionality of the neural representations: high-dimensional representations with mixed selectivity allow a simple linear readout to generate a huge number of different potential responses. In contrast, neural representations based on highly specialized neurons are low dimensional and they preclude a linear readout from generating several responses that depend on multiple task-relevant variables. Here we review the conceptual and theoretical framework that explains the importance of mixed selectivity and the experimental evidence that recorded neural representations are high-dimensional. We end by discussing the implications for the design of future experiments.
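The computational point can be made concrete with the classic XOR example: with purely selective neurons a linear readout cannot report a conjunction of two task variables, whereas adding one nonlinearly mixed neuron raises the dimensionality and makes the conjunction linearly separable. The perceptron check below is our illustration, not the authors' analysis:

```python
import numpy as np

# Task conditions: two binary variables; the target response is their XOR,
# i.e., it depends on the conjunction, not on either variable alone.
conds = [(0, 0), (0, 1), (1, 0), (1, 1)]
target = [a ^ b for a, b in conds]

# "Pure" selectivity: one neuron per variable -> low-dimensional representation.
pure = np.array([[a, b] for a, b in conds], float)
# Mixed selectivity: add a nonlinear conjunction neuron -> dimensionality rises.
mixed = np.array([[a, b, a * b] for a, b in conds], float)

def linearly_separable(X, y):
    """Check with a simple perceptron whether a linear readout exists."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(1000):
        errors = 0
        for xi, yi in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            if pred != yi:
                w += (yi - pred) * xi
                errors += 1
        if errors == 0:
            return True
    return False

sep_pure = linearly_separable(pure, target)     # XOR defeats the pure code
sep_mixed = linearly_separable(mixed, target)   # one mixed neuron suffices
```

This is the sense in which high-dimensional mixed representations let a simple linear readout generate many different responses to combinations of task variables.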
Affiliation(s)
- Stefano Fusi
- Center for Theoretical Neuroscience, Columbia University College of Physicians and Surgeons, USA.
- Earl K Miller
- The Picower Institute for Learning and Memory & Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, USA
- Mattia Rigotti
- IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA
25
Lehky SR, Sereno ME, Sereno AB. Characteristics of Eye-Position Gain Field Populations Determine Geometry of Visual Space. Front Integr Neurosci 2016; 9:72. [PMID: 26834587 PMCID: PMC4718998 DOI: 10.3389/fnint.2015.00072] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2015] [Accepted: 12/21/2015] [Indexed: 11/17/2022] Open
Abstract
We have previously demonstrated differences in eye-position spatial maps for anterior inferotemporal cortex (AIT) in the ventral stream and lateral intraparietal cortex (LIP) in the dorsal stream, based on population decoding of gaze angle modulations of neural visual responses (i.e., eye-position gain fields). Here we explore the basis of such spatial encoding differences through modeling of gain field characteristics. We created a population of model neurons, each having a different eye-position gain field. This population was used to reconstruct eye-position visual space using multidimensional scaling. As gain field shapes have never been well-established experimentally, we examined different functions, including planar, sigmoidal, elliptical, hyperbolic, and mixtures of those functions. All functions successfully recovered positions, indicating weak constraints on allowable gain field shapes. We then used a genetic algorithm to modify the characteristics of model gain field populations until the recovered spatial maps closely matched those derived from monkey neurophysiological data in AIT and LIP. The primary differences found between model AIT and LIP gain fields were that AIT gain fields were more foveally dominated. That is, gain fields in AIT operated on smaller spatial scales and smaller dispersions than in LIP. Thus, we show that the geometry of eye-position visual space depends on the population characteristics of gain fields, and that differences in gain field characteristics for different cortical areas may underlie differences in the representation of space.
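The core decoding step, recovering eye-position space from a population of gain fields via multidimensional scaling, can be sketched with planar gain fields and classical (Torgerson) MDS. The genetic-algorithm fitting stage of the paper is omitted and all population parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons = 100
slopes = rng.normal(0.0, 1.0, (n_neurons, 2))   # one planar gain field per neuron

# Population response vectors at a 5 x 5 grid of eye positions:
eye_positions = np.array([(x, y) for x in (-10, -5, 0, 5, 10)
                                 for y in (-10, -5, 0, 5, 10)], float)
responses = eye_positions @ slopes.T

# Classical MDS: double-center squared distances, keep the top 2 dimensions.
D2 = np.sum((responses[:, None, :] - responses[None, :, :]) ** 2, axis=-1)
n = len(D2)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J
vals, vecs = np.linalg.eigh(B)
coords = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0.0))

# The recovered map should preserve the grid's geometry up to rotation/scale:
iu = np.triu_indices(n, k=1)
orig_d = np.linalg.norm(eye_positions[:, None] - eye_positions[None, :], axis=-1)
rec_d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
r = float(np.corrcoef(orig_d[iu], rec_d[iu])[0, 1])
```

With planar gain fields the recovered space is nearly metric (r close to 1); making the gain fields more foveally dominated, as the paper infers for AIT, is what distorts the recovered geometry.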
Affiliation(s)
- Sidney R Lehky
- Computational Neurobiology Laboratory, The Salk Institute, La Jolla, CA, USA
- Anne B Sereno
- Department of Neurobiology and Anatomy, University of Texas Medical School, Houston, TX, USA
26
Transsaccadic processing: stability, integration, and the potential role of remapping. Atten Percept Psychophys 2015; 77:3-27. [PMID: 25380979 DOI: 10.3758/s13414-014-0751-y] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
While our frequent saccades allow us to sample the complex visual environment in a highly efficient manner, they also raise certain challenges for interpreting and acting upon visual input. In the present, selective review, we discuss key findings from the domains of cognitive psychology, visual perception, and neuroscience concerning two such challenges: (1) maintaining the phenomenal experience of visual stability despite our rapidly shifting gaze, and (2) integrating visual information across discrete fixations. In the first two sections of the article, we focus primarily on behavioral findings. Next, we examine the possibility that a neural phenomenon known as predictive remapping may provide an explanation for aspects of transsaccadic processing. In this section of the article, we delineate and critically evaluate multiple proposals about the potential role of predictive remapping in light of both theoretical principles and empirical findings.
27
A prefrontal-thalamo-hippocampal circuit for goal-directed spatial navigation. Nature 2015; 522:50-5. [PMID: 26017312 DOI: 10.1038/nature14396] [Citation(s) in RCA: 289] [Impact Index Per Article: 32.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2014] [Accepted: 03/06/2015] [Indexed: 12/20/2022]
Abstract
Spatial navigation requires information about the relationship between current and future positions. The activity of hippocampal neurons appears to reflect such a relationship, representing not only instantaneous position but also the path towards a goal location. However, how the hippocampus obtains information about goal direction is poorly understood. Here we report a prefrontal-thalamic neural circuit that is required for hippocampal representation of routes or trajectories through the environment. Trajectory-dependent firing was observed in medial prefrontal cortex, the nucleus reuniens of the thalamus, and the CA1 region of the hippocampus in rats. Lesioning or optogenetic silencing of the nucleus reuniens substantially reduced trajectory-dependent CA1 firing. Trajectory-dependent activity was almost absent in CA3, which does not receive nucleus reuniens input. The data suggest that projections from medial prefrontal cortex, via the nucleus reuniens, are crucial for representation of the future path during goal-directed behaviour and point to the thalamus as a key node in networks for long-range communication between cortical regions involved in navigation.
28
Marques HG, Bharadwaj A, Iida F. From spontaneous motor activity to coordinated behaviour: a developmental model. PLoS Comput Biol 2014; 10:e1003653. [PMID: 25057775 PMCID: PMC4109855 DOI: 10.1371/journal.pcbi.1003653] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2013] [Accepted: 04/18/2014] [Indexed: 01/09/2023] Open
Abstract
In mammals, the developmental path that links the primary behaviours observed during foetal stages to the full-fledged behaviours observed in adults is still beyond our understanding. Often theories of motor control try to deal with the process of incremental learning in an abstract and modular way without establishing any correspondence with the mammalian developmental stages. In this paper, we propose a computational model that links three distinct behaviours which appear at three different stages of development. In order of appearance, these behaviours are: spontaneous motor activity (SMA), reflexes, and coordinated behaviours, such as locomotion. The goal of our model is to address in silico four hypotheses that are currently hard to verify in vivo: First, the hypothesis that spinal reflex circuits can be self-organized from the sensor and motor activity induced by SMA. Second, the hypothesis that supraspinal systems can modulate reflex circuits to achieve coordinated behaviour. Third, the hypothesis that, since SMA is observed in an organism throughout its entire lifetime, it provides a mechanism suitable to maintain the reflex circuits aligned with the musculoskeletal system, and thus adapt to changes in body morphology. And fourth, the hypothesis that by changing the modulation of the reflex circuits over time, one can switch between different coordinated behaviours. Our model is tested in a simulated musculoskeletal leg actuated by six muscles arranged in a number of different ways. Hopping is used as a case study of coordinated behaviour. Our results show that reflex circuits can be self-organized from SMA, and that, once these circuits are in place, they can be modulated to achieve coordinated behaviour. In addition, our results show that our model can naturally adapt to different morphological changes and perform behavioural transitions.
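The first hypothesis, that reflex circuits self-organize from the sensory consequences of spontaneous motor activity, can be caricatured with simple Hebbian correlation learning. The six-muscle body matrix below is an invented stand-in for the paper's simulated musculoskeletal leg, not its actual model:

```python
import numpy as np

rng = np.random.default_rng(3)
n_motor = n_sensor = 6
# Assumed body model: each muscle mainly drives its own length sensor,
# with weak random crosstalk to the others.
body = np.eye(n_sensor, n_motor) + 0.1 * rng.random((n_sensor, n_motor))

# Spontaneous motor activity (SMA): random, uncoordinated muscle twitches,
# and the sensory feedback they produce through the body.
sma = rng.random((2000, n_motor))
feedback = sma @ body.T

# Hebbian correlation of sensor and motor signals self-organizes reflex weights:
sma_c = sma - sma.mean(axis=0)
fb_c = feedback - feedback.mean(axis=0)
reflex = fb_c.T @ sma_c / len(sma)          # sensor x motor covariance

# Each sensor's strongest learned connection points back to the muscle
# that causes it, i.e., the reflex wiring mirrors the body's structure:
reflex_targets = np.argmax(reflex, axis=1)
```

Because the learned wiring is re-derived from ongoing SMA, the same mechanism keeps the reflexes aligned with the body if the body matrix changes, which is the paper's third hypothesis.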
Affiliation(s)
- Arjun Bharadwaj
- Dept. of Mechanical and Process Engineering, ETH, Zurich, Switzerland
- Fumiya Iida
- Dept. of Mechanical and Process Engineering, ETH, Zurich, Switzerland

29
Sereno AB, Sereno ME, Lehky SR. Recovering stimulus locations using populations of eye-position modulated neurons in dorsal and ventral visual streams of non-human primates. Front Integr Neurosci 2014; 8:28. [PMID: 24734008 PMCID: PMC3975102 DOI: 10.3389/fnint.2014.00028] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2013] [Accepted: 03/08/2014] [Indexed: 11/13/2022] Open
Abstract
We recorded visual responses while monkeys fixated the same target at different gaze angles, both dorsally (lateral intraparietal cortex, LIP) and ventrally (anterior inferotemporal cortex, AIT). While eye-position modulations occurred in both areas, they were both more frequent and stronger in LIP neurons. We used an intrinsic population decoding technique, multidimensional scaling (MDS), to recover eye positions, equivalent to recovering fixated target locations. We report that eye-position based visual space in LIP was more accurate (i.e., metric). Nevertheless, the AIT spatial representation remained largely topologically correct, perhaps indicative of a categorical spatial representation (i.e., a qualitative description such as "left of" or "above" as opposed to a quantitative, metrically precise description). Additionally, we developed a simple neural model of eye position signals and illustrate that differences in single cell characteristics can influence the ability to recover target position in a population of cells. We demonstrate for the first time that the ventral stream contains sufficient information for constructing an eye-position based spatial representation. Furthermore we demonstrate, in dorsal and ventral streams as well as modeling, that target locations can be extracted directly from eye position signals in cortical visual responses without computing coordinate transforms of visual space.
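The intrinsic decoding technique can be sketched with classical (Torgerson) MDS. The population below is a toy stand-in for the recordings, not the paper's data: 40 hypothetical neurons whose rates are linearly gain-modulated by 2-D eye position, with the gain matrix, fixation set, and random seed all invented for illustration.

```python
import numpy as np

def classical_mds(D, dims=2):
    """Classical (Torgerson) MDS: embed points from a Euclidean distance matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                   # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:dims]           # largest eigenvalues first
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

# Five fixation positions (deg) and a linear eye-position gain per neuron.
rng = np.random.default_rng(0)
eye_positions = np.array([[-10, 0], [0, 0], [10, 0], [0, 10], [0, -10]], float)
gains = rng.normal(size=(2, 40))                  # 40 neurons, 2-D gain vectors
rates = eye_positions @ gains                     # population response per fixation

# Pairwise dissimilarity of population responses, then embed with MDS.
D = np.linalg.norm(rates[:, None] - rates[None, :], axis=2)
recovered = classical_mds(D, dims=2)
```

Because these noiseless toy rates span only two dimensions, the embedding reproduces the inter-fixation geometry exactly up to rotation; with noisy, saturating gain fields the recovery degrades, which is where a dorsal/ventral difference in metric accuracy could show up.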
Affiliation(s)
- Anne B Sereno
- Department of Neurobiology and Anatomy, University of Texas Health Science Center at Houston, Houston, TX, USA
- Sidney R Lehky
- Computational Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA, USA

30
Khan AZ, Pisella L, Blohm G. Causal evidence for posterior parietal cortex involvement in visual-to-motor transformations of reach targets. Cortex 2013; 49:2439-48. [DOI: 10.1016/j.cortex.2012.12.004] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2012] [Revised: 08/30/2012] [Accepted: 12/04/2012] [Indexed: 11/25/2022]
31
A single functional model of drivers and modulators in cortex. J Comput Neurosci 2013; 36:97-118. [DOI: 10.1007/s10827-013-0471-7] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2013] [Revised: 05/10/2013] [Accepted: 06/05/2013] [Indexed: 10/26/2022]
32
Attentional enhancement of spatial resolution: linking behavioural and neurophysiological evidence. Nat Rev Neurosci 2013; 14:188-200. [PMID: 23422910 DOI: 10.1038/nrn3443] [Citation(s) in RCA: 163] [Impact Index Per Article: 14.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
Attention allows us to select relevant sensory information for preferential processing. Behaviourally, it improves performance in various visual tasks. One prominent effect of attention is the modulation of performance in tasks that involve the visual system's spatial resolution. Physiologically, attention modulates neuronal responses and alters the profile and position of receptive fields near the attended location. Here, we develop a hypothesis linking the behavioural and electrophysiological evidence. The proposed framework seeks to explain how these receptive field changes enhance the visual system's effective spatial resolution and how the same mechanisms may also underlie attentional effects on the representation of spatial information.
33
Lipinski J, Schneegans S, Sandamirskaya Y, Spencer JP, Schöner G. A neurobehavioral model of flexible spatial language behaviors. J Exp Psychol Learn Mem Cogn 2012; 38:1490-511. [PMID: 21517224 PMCID: PMC3665425 DOI: 10.1037/a0022643] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
Abstract
We propose a neural dynamic model that specifies how low-level visual processes can be integrated with higher level cognition to achieve flexible spatial language behaviors. This model uses real-world visual input that is linked to relational spatial descriptions through a neural mechanism for reference frame transformations. We demonstrate that the system can extract spatial relations from visual scenes, select items based on relational spatial descriptions, and perform reference object selection in a single unified architecture. We further show that the performance of the system is consistent with behavioral data in humans by simulating results from 2 independent empirical studies, 1 spatial term rating task and 1 study of reference object selection behavior. The architecture we present thereby achieves a high degree of task flexibility under realistic stimulus conditions. At the same time, it also provides a detailed neural grounding for complex behavioral and cognitive processes.
Affiliation(s)
- John Lipinski
- Institut für Neuroinformatik, Ruhr-Universität Bochum, Bochum, Germany.

34
Simulating the cortical 3D visuomotor transformation of reach depth. PLoS One 2012; 7:e41241. [PMID: 22815979 PMCID: PMC3397995 DOI: 10.1371/journal.pone.0041241] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2011] [Accepted: 06/22/2012] [Indexed: 11/22/2022] Open
Abstract
We effortlessly perform reach movements to objects in different directions and depths. However, how networks of cortical neurons compute reach depth from binocular visual inputs remains largely unknown. To bridge the gap between behavior and neurophysiology, we trained a feed-forward artificial neural network to uncover potential mechanisms that might underlie the 3D transformation of reach depth. Our physiologically-inspired 4-layer network receives distributed 3D visual inputs (1st layer) along with eye, head and vergence signals. The desired motor plan was coded in a population (3rd layer) that we read out (4th layer) using an optimal linear estimator. After training, our network was able to reproduce all known single-unit recording evidence on depth coding in the parietal cortex. Network analyses predict the presence of eye/head and vergence changes of depth tuning, pointing towards a gain-modulation mechanism of depth transformation. In addition, reach depth was computed directly from eye-centered (relative) visual distances, without explicit absolute depth coding. We suggest that these effects should be observable in parietal and pre-motor areas.
35
A computational model for the influence of corollary discharge and proprioception on the perisaccadic mislocalization of briefly presented stimuli in complete darkness. J Neurosci 2012; 31:17392-405. [PMID: 22131401 DOI: 10.1523/jneurosci.3407-11.2011] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Spatial perception, the localization of stimuli in space, can rely on visual reference stimuli or on egocentric factors such as a stimulus position relative to eye gaze. In total darkness, only an egocentric reference frame provides sufficient information. When stimuli are briefly flashed around saccades, the localization error reveals potential mechanisms of updating such reference frames as described in several theories and computational models. Recent novel experimental evidence, however, showed that the maximum amount of mislocalization does not scale linearly with saccade amplitude but rather stays below 13° even for long saccades, which differs from what present models predict. We propose a new model of perisaccadic mislocalization in complete darkness to account for this observation. According to this model, mislocalization arises not on the motor side by comparing a retinal position signal with an extraretinal eye-position-related signal but by updating stimulus position in visual areas through a combination of proprioceptive eye position and corollary discharge. Simulations with realistic input signals and temporal dynamics show that both signals together are used for spatial updating and in turn bring about perisaccadic mislocalization.
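The signal combination at the heart of this account, a fast anticipatory corollary discharge blended with a sluggish proprioceptive eye-position signal, can be caricatured in a few lines. This is a generic sketch, not the authors' model: the sigmoid form, time constants, and equal weighting are all invented.

```python
import numpy as np

t = np.linspace(-100, 200, 301)                   # time (ms); one sample per ms
amplitude = 10.0                                  # saccade amplitude (deg)

def sigmoid(t, onset, tau):
    return 1.0 / (1.0 + np.exp(-(t - onset) / tau))

true_eye = amplitude * sigmoid(t, 25.0, 8.0)      # actual eye position
cd = amplitude * sigmoid(t, 0.0, 20.0)            # corollary discharge leads the eye
prop = amplitude * sigmoid(t, 60.0, 25.0)         # proprioception lags the eye

estimate = 0.5 * cd + 0.5 * prop                  # combined internal eye-position estimate

# A flash is localized as its retinal position plus the internal eye estimate,
# so the mislocalization of a perisaccadic flash equals the estimate's error.
misloc = estimate - true_eye
```

In this toy the error is biphasic: a forward shift before the saccade, where the corollary discharge leads the eye, and a backward error around saccade end, where proprioception still lags.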
36
Intrinsic reference frames of superior colliculus visuomotor receptive fields during head-unrestrained gaze shifts. J Neurosci 2012; 31:18313-26. [PMID: 22171035 DOI: 10.1523/jneurosci.0990-11.2011] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
A sensorimotor neuron's receptive field and its frame of reference are easily conflated within the natural variability of spatial behavior. Here, we capitalized on such natural variations in 3-D eye and head positions during head-unrestrained gaze shifts to visual targets in two monkeys to determine whether intermediate/deep layer superior colliculus (SC) receptive fields code visual targets or gaze kinematics within four different frames of reference. Visuomotor receptive fields were either characterized during gaze shifts to visual targets from a central fixation position (32 U) or were partially characterized from each of three initial fixation points (31 U). Natural variations of initial 3-D gaze and head orientation (including torsion) provided spatial separation between four different coordinate frame models (space, head, eye, fixed-vector relative to fixation), whereas natural saccade errors provided spatial separation between target and gaze positions. Using a new statistical method based on predictive sum-of-squares, we found that in our population of 63 neurons (1) receptive field fits to target positions were significantly better than fits to actual gaze shift locations and (2) eye-centered models gave significantly better fits than the head or space frame. An intermediate frames analysis confirmed that individual neuron fits were distributed in target-in-eye coordinates. Gaze position "gain" effects with the spatial tuning required for a 3-D reference frame transformation were significant in 23% (7/31) of neurons tested. We conclude that the SC primarily represents gaze targets relative to the eye but also carries early signatures of the 3-D sensorimotor transformation.
37
Medendorp WP. Spatial constancy mechanisms in motor control. Philos Trans R Soc Lond B Biol Sci 2011; 366:476-91. [PMID: 21242137 DOI: 10.1098/rstb.2010.0089] [Citation(s) in RCA: 55] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
The success of the human species in interacting with the environment depends on the ability to maintain spatial stability despite the continuous changes in sensory and motor inputs owing to movements of eyes, head and body. In this paper, I will review recent advances in the understanding of how the brain deals with the dynamic flow of sensory and motor information in order to maintain spatial constancy of movement goals. The first part summarizes studies in the saccadic system, showing that spatial constancy is governed by a dynamic feed-forward process, by gaze-centred remapping of target representations in anticipation of and across eye movements. The subsequent sections relate to other oculomotor behaviour, such as eye-head gaze shifts, smooth pursuit and vergence eye movements, and their implications for feed-forward mechanisms for spatial constancy. Work that studied the geometric complexities in spatial constancy and saccadic guidance across head and body movements, distinguishing between self-generated and passively induced motion, indicates that both feed-forward and sensory feedback processing play a role in spatial updating of movement goals. The paper ends with a discussion of the behavioural mechanisms of spatial constancy for arm motor control and their physiological implications for the brain. Taken together, the emerging picture is that the brain computes an evolving representation of three-dimensional action space, whose internal metric is updated in a nonlinear way, by optimally integrating noisy and ambiguous afferent and efferent signals.
Affiliation(s)
- W Pieter Medendorp
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, PO Box 9104, NL-6500 HE Nijmegen, The Netherlands.

38
Hoffmann M, Marques H, Arieta A, Sumioka H, Lungarella M, Pfeifer R. Body Schema in Robotics: A Review. IEEE Trans Auton Ment Dev 2010. [DOI: 10.1109/tamd.2010.2086454] [Citation(s) in RCA: 178] [Impact Index Per Article: 12.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
39
Rigotti M, Ben Dayan Rubin D, Wang XJ, Fusi S. Internal representation of task rules by recurrent dynamics: the importance of the diversity of neural responses. Front Comput Neurosci 2010; 4:24. [PMID: 21048899 PMCID: PMC2967380 DOI: 10.3389/fncom.2010.00024] [Citation(s) in RCA: 107] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2010] [Accepted: 06/29/2010] [Indexed: 11/17/2022] Open
Abstract
Neural activity of behaving animals, especially in the prefrontal cortex, is highly heterogeneous, with selective responses to diverse aspects of the executed task. We propose a general model of recurrent neural networks that perform complex rule-based tasks, and we show that the diversity of neuronal responses plays a fundamental role when the behavioral responses are context-dependent. Specifically, we found that when the inner mental states encoding the task rules are represented by stable patterns of neural activity (attractors of the neural dynamics), the neurons must be selective for combinations of sensory stimuli and inner mental states. Such mixed selectivity is easily obtained by neurons that connect with random synaptic strengths both to the recurrent network and to neurons encoding sensory inputs. The number of randomly connected neurons needed to solve a task is on average only three times as large as the number of neurons needed in a network designed ad hoc. Moreover, the number of needed neurons grows only linearly with the number of task-relevant events and mental states, provided that each neuron responds to a large proportion of events (dense/distributed coding). A biologically realistic implementation of the model captures several aspects of the activity recorded from monkeys performing context-dependent tasks. Our findings explain the importance of the diversity of neural responses and provide us with simple and general principles for designing attractor neural networks that perform complex computation.
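The paper's central point, that randomly connected nonlinear units acquire the mixed selectivity needed for context-dependent responses, can be shown on the smallest such task: responding to the XOR of a binary stimulus and a binary context, which defeats any linear readout of the raw inputs. The 200-unit ReLU expansion and least-squares readout below are illustrative stand-ins for the paper's recurrent network, not its implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Columns: stimulus (0/1) and context (0/1); the correct response is their XOR.
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
target = np.array([0, 1, 1, 0], float)

# A linear readout of the raw inputs (plus bias) cannot solve XOR ...
X = np.hstack([inputs, np.ones((4, 1))])
w_lin, *_ = np.linalg.lstsq(X, target, rcond=None)
linear_error = np.abs(X @ w_lin - target).max()

# ... but randomly connected nonlinear units become mixed-selective for
# stimulus-context combinations, making the task linearly separable.
W = rng.normal(size=(2, 200))
b = rng.normal(size=200)
hidden = np.maximum(inputs @ W + b, 0.0)          # ReLU mixed-selectivity layer
w_out, *_ = np.linalg.lstsq(hidden, target, rcond=None)
pred = hidden @ w_out
```

No unit here was designed for a stimulus-context conjunction; random weights alone produce a representation from which a single linear readout recovers the context-dependent rule.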
Affiliation(s)
- Mattia Rigotti
- Center for Theoretical Neuroscience, College of Physicians and Surgeons, Columbia University, New York, NY, USA

40
Nagy B, Corneil BD. Representation of Horizontal head-on-body position in the primate superior colliculus. J Neurophysiol 2009; 103:858-74. [PMID: 20007503 DOI: 10.1152/jn.00099.2009] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Movement-related activity within the superior colliculus (SC) represents the desired displacement of an impending gaze shift. This representation must ultimately be transformed into position-based reference frames appropriate for coordinated eye-head gaze shifts. Parietal areas that project to the SC are modulated by the initial position of both the eye-re-head and head-re-body, and SC activity is modulated by eye-re-head position. These considerations led us to investigate whether SC activity is modulated by the head-re-body position. We recorded activity from movement-related SC neurons while head-restrained monkeys performed a delayed-saccade task. Across blocks of trials, the horizontal position of the body was rotated under a space-fixed head to three to five different positions spanning ±25°. We observed a significant influence of body-under-head position on SC activity in 50/60 neurons. This influence was expressed predominantly as a linear gain field, scaling task-related SC activity without changing the location of the response field (linear gain fields explained ≥20% of the variance in neural activity in approximately 50% of our sample). Smaller nonlinear modulations were also observed in roughly 30% of our sample. SC activity was equally likely to increase or decrease as the body was rotated to the side of neuronal recording and we found no systematic relationship between the directionality or magnitude of the linear gain field with recording location in the SC. We conclude that a signal conveying head-re-body position is present in the SC. Although the functional significance remains open, our findings are consistent with the SC contributing to a displacement-to-position transformation for oculomotor control.
Affiliation(s)
- Benjamin Nagy
- Canadian Institutes of Health Research Group in Action and Perception, University of Western Ontario, London, Ontario, Canada

41
Royal DW, Carriere BN, Wallace MT. Spatiotemporal architecture of cortical receptive fields and its impact on multisensory interactions. Exp Brain Res 2009; 198:127-36. [PMID: 19308362 DOI: 10.1007/s00221-009-1772-y] [Citation(s) in RCA: 44] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2008] [Accepted: 03/05/2009] [Indexed: 11/29/2022]
Abstract
Recent electrophysiology studies have suggested that neuronal responses to multisensory stimuli may possess a unique temporal signature. To evaluate this temporal dynamism, unisensory and multisensory spatiotemporal receptive fields (STRFs) of neurons in the cortex of the cat anterior ectosylvian sulcus were constructed. Analyses revealed that the multisensory STRFs of these neurons differed significantly from the component unisensory STRFs and their linear summation. Most notably, multisensory responses were found to have higher peak firing rates, shorter response latencies, and longer discharge durations. More importantly, multisensory STRFs were characterized by two distinct temporal phases of enhanced integration that reflected the shorter response latencies and longer discharge durations. These findings further our understanding of the temporal architecture of cortical multisensory processing, and thus provide important insights into the possible functional role(s) played by multisensory cortex in spatially directed perceptual processes.
Affiliation(s)
- David W Royal
- Kennedy Center for Research on Human Development, Vanderbilt University, Nashville, TN 37232, USA.

42
Lehky SR, Peng X, McAdams CJ, Sereno AB. Spatial modulation of primate inferotemporal responses by eye position. PLoS One 2008; 3:e3492. [PMID: 18946508 PMCID: PMC2567040 DOI: 10.1371/journal.pone.0003492] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2008] [Accepted: 09/15/2008] [Indexed: 01/19/2023] Open
Abstract
Background
A key aspect of representations for object recognition and scene analysis in the ventral visual stream is the spatial frame of reference, be it a viewer-centered, object-centered, or scene-based coordinate system. Coordinate transforms from retinocentric space to other reference frames involve combining neural visual responses with extraretinal postural information.
Methodology/Principal Findings
We examined whether such spatial information is available to anterior inferotemporal (AIT) neurons in the macaque monkey by measuring the effect of eye position on responses to a set of simple 2D shapes. We report, for the first time, a significant eye position effect in over 40% of recorded neurons with small gaze angle shifts from central fixation. Although eye position modulates responses, it does not change shape selectivity.
Conclusions/Significance
These data demonstrate that spatial information is available in AIT for the representation of objects and scenes within a non-retinocentric frame of reference. More generally, the availability of spatial information in AIT calls into question the classic dichotomy in visual processing that associates object shape processing with ventral structures such as AIT but places spatial processing in a separate anatomical stream projecting to dorsal structures.
Affiliation(s)
- Sidney R. Lehky
- Computational Neuroscience Laboratory, The Salk Institute, La Jolla, California, United States of America
- Department of Neurobiology and Anatomy, University of Texas Houston Health Science Center, Houston, Texas, United States of America
- Xinmiao Peng
- Department of Neurobiology and Anatomy, University of Texas Houston Health Science Center, Houston, Texas, United States of America
- Carrie J. McAdams
- Department of Psychiatry, University of Texas Southwestern Medical Center, Dallas, Texas, United States of America
- Anne B. Sereno
- Department of Neurobiology and Anatomy, University of Texas Houston Health Science Center, Houston, Texas, United States of America

43
Receptive field shift and shrinkage in macaque middle temporal area through attentional gain modulation. J Neurosci 2008; 28:8934-44. [PMID: 18768687 DOI: 10.1523/jneurosci.4030-07.2008] [Citation(s) in RCA: 120] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Selective attention is the top-down mechanism to allocate neuronal processing resources to the most relevant subset of the information provided by an organism's sensors. Attentional selection of a spatial location modulates the spatial-tuning characteristics (i.e., the receptive fields) of neurons in macaque visual cortex. These tuning changes include a shift of receptive field centers toward the focus of attention and a narrowing of the receptive field when the attentional focus is directed into the receptive field. Here, we report that when attention is directed into versus out of the receptive fields of neurons in the middle temporal visual area (area MT), the magnitude of the shift of the spatial-tuning functions is positively correlated with a narrowing of spatial tuning around the attentional focus. By developing and applying a general attentional gain model, we show that these nonmultiplicative attentional modulations of basic neuronal-tuning characteristics could be a direct consequence of a spatially distributed multiplicative interaction of a bell-shaped attentional spotlight with the spatially fine-grained sensory inputs of MT neurons. Additionally, the model lets us estimate the spatial spread of the attentional top-down signal impinging on visual cortex. Consistent with psychophysical reports, the estimated size of the "spotlight of attention" indicates a coarse spatial resolution of attention. These results illustrate how spatially specific nonmultiplicative attentional changes of neuronal-tuning functions can be the result of multiplicative gain modulation affecting sensory neurons in a widely distributed region in cortical space.
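The model's key move, multiplying fine-grained sensory inputs by a bell-shaped attentional gain, can be sketched in one dimension. The Gaussian widths, gain amplitude, and attention locus below are invented numbers, not fitted MT parameters; the point is only that a purely multiplicative spotlight shifts the measured receptive field toward the attended location and narrows it.

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 4001)                # visual space (deg)

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

rf = gaussian(x, 0.0, 5.0)                        # sensory receptive field at 0 deg
attn = 1.0 + 2.0 * gaussian(x, 6.0, 4.0)          # bell-shaped gain peaking at +6 deg
modulated = rf * attn                             # multiplicative interaction

def center_and_width(profile):
    """Mean and standard deviation of a tuning profile treated as a density."""
    w = profile / profile.sum()
    mu = (x * w).sum()
    return mu, np.sqrt(((x - mu) ** 2 * w).sum())

mu0, sd0 = center_and_width(rf)                   # unattended receptive field
mu1, sd1 = center_and_width(modulated)            # receptive field under attention
```

With these numbers the RF center moves from 0 toward the spotlight and the width shrinks, i.e., multiplicative gain alone reproduces the nonmultiplicative shift-and-shrink signature described above.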
44
Blohm G, Keith GP, Crawford JD. Decoding the cortical transformations for visually guided reaching in 3D space. Cereb Cortex 2009; 19:1372-93. [PMID: 18842662 DOI: 10.1093/cercor/bhn177] [Citation(s) in RCA: 61] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
To explore the possible cortical mechanisms underlying the 3-dimensional (3D) visuomotor transformation for reaching, we trained a 4-layer feed-forward artificial neural network to compute a reach vector (output) from the visual positions of both the hand and target viewed from different eye and head orientations (inputs). The emergent properties of the intermediate layers reflected several known neurophysiological findings, for example, gain field-like modulations and position-dependent shifting of receptive fields (RFs). We performed a reference frame analysis for each individual network unit, simulating standard electrophysiological experiments, that is, RF mapping (unit input), motor field mapping, and microstimulation effects (unit outputs). At the level of individual units (in both intermediate layers), the 3 different electrophysiological approaches identified different reference frames, demonstrating that these techniques reveal different neuronal properties and suggesting that a comparison across these techniques is required to understand the neural code of physiological networks. This analysis showed fixed input-output relationships within each layer and, more importantly, within each unit. These local reference frame transformation modules provide the basic elements for the global transformation; their parallel contributions are combined in a gain field-like fashion at the population level to implement both the linear and nonlinear elements of the 3D visuomotor transformation.
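The gain-field module that such networks converge on can be sketched in one dimension, where the transformation reduces to head-centred = retinal + eye position. Everything below (the Gaussian basis, the paired linear eye gains of opposite slope, the ranges) is an illustrative assumption, not the paper's trained network.

```python
import numpy as np

rng = np.random.default_rng(2)

# Training samples: retinal target position and eye-in-head position (deg).
retinal = rng.uniform(-20, 20, 2000)
eye = rng.uniform(-15, 15, 2000)

# Basis-function units: Gaussian retinal tuning multiplied by linear
# eye-position gains of opposite slope (a gain-field code).
centers = np.linspace(-22, 22, 16)

def population(ret, eye):
    rf = np.exp(-0.5 * ((ret[:, None] - centers) / 6.0) ** 2)
    return np.hstack([rf * (1.0 + 0.05 * eye[:, None]),
                      rf * (1.0 - 0.05 * eye[:, None])])

# A single linear readout of the gain-modulated population implements the
# coordinate transform.
w, *_ = np.linalg.lstsq(population(retinal, eye), retinal + eye, rcond=None)

# Held-out probes away from the edges of the training range.
ret_t = rng.uniform(-15, 15, 500)
eye_t = rng.uniform(-10, 10, 500)
err = np.abs(population(ret_t, eye_t) @ w - (ret_t + eye_t)).mean()
```

Summing a paired unit's two gains recovers purely retinal tuning while differencing them recovers eye-position-weighted tuning, so the readout can assemble both terms of retinal + eye; this is the sense in which gain fields act as local transformation modules.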
Affiliation(s)
- Gunnar Blohm
- Centre for Vision Research, York University, Toronto, Ontario, Canada

45
Botvinick M, Watanabe T. From numerosity to ordinal rank: a gain-field model of serial order representation in cortical working memory. J Neurosci 2007; 27:8636-42. [PMID: 17687041 PMCID: PMC6672950 DOI: 10.1523/jneurosci.2110-07.2007] [Citation(s) in RCA: 76] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Encoding the serial order of events is an essential function of working memory, but one whose neural basis is not yet well understood. In the present work, we advance a new model of how serial order is represented in working memory. Our approach is predicated on three key findings from neurophysiological research: (1) prefrontal neurons that code conjunctively for item and order, (2) parietal neurons that represent count information through a graded and compressive code, and (3) multiplicative gain modulation as a mechanism for information integration. We used an artificial neural network, integrating across these three findings, to simulate human immediate serial recall performance. The model reproduced a core set of benchmark empirical findings, including primacy and recency effects, transposition gradients, effects of interitem similarity, and developmental effects. The model moves beyond previous accounts by bridging between neuroscientific findings and detailed behavioral data, and gives rise to several testable predictions.
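The conjunctive item-by-order code the model rests on can be caricatured with outer products: each item's one-hot vector is bound to a graded, overlapping rank code, and recall probes the memory with a rank. The sequence, rank-code shape, and sizes below are invented for illustration; this is a sketch of the binding idea, not the paper's network.

```python
import numpy as np

n_items, n_ranks = 4, 4
items = np.eye(n_items)                           # one-hot item vectors

def rank_code(r, sigma=0.6):
    """Graded, overlapping representation of ordinal rank (magnitude-like code)."""
    axis = np.arange(n_ranks)
    c = np.exp(-0.5 * ((axis - r) / sigma) ** 2)
    return c / np.linalg.norm(c)

# Encode the sequence as a sum of item-by-rank conjunctions (outer products),
# i.e., multiplicative binding of item identity and ordinal position.
sequence = [2, 0, 3, 1]
M = sum(np.outer(items[item], rank_code(r)) for r, item in enumerate(sequence))

# Recall: probe the memory with each rank code and take the best-matching item.
recalled = [int(np.argmax(M @ rank_code(r))) for r in range(n_ranks)]
```

Because adjacent rank codes overlap, noise added to the probe would produce mostly adjacent transposition errors, the error gradient the model is meant to capture.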
Affiliation(s)
- Matthew Botvinick
- Psychology Department and Institute for Neuroscience, Princeton University, Princeton, New Jersey 08540, USA.

46
Abstract
Somewhat in contrast to their proposal of two separate somatosensory streams, Dijkerman & de Haan (D&dH) propose that tactile recognition involves active manual exploration, and therefore involves parietal cortex. I argue that interactions from perception for action to object recognition can also be found in vision. Furthermore, there is evidence that perception for action and perception for recognition rely on similar processing principles.
47
Schoppik D, Lisberger SG. Saccades exert spatial control of motion processing for smooth pursuit eye movements. J Neurosci 2006; 26:7607-18. [PMID: 16855088 PMCID: PMC2548311 DOI: 10.1523/jneurosci.1719-06.2006] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Saccades modulate the relationship between visual motion and smooth eye movement. Before a saccade, pursuit eye movements reflect a vector average of motion across the visual field. After a saccade, pursuit primarily reflects the motion of the target closest to the endpoint of the saccade. We tested the hypothesis that the saccade produces a spatial weighting of motion around the endpoint of the saccade. Using a moving pursuit stimulus that stepped to a new spatial location just before a targeting saccade, we controlled the distance between the endpoint of the saccade and the position of the moving target. We demonstrate that the smooth eye velocity following the targeting saccade weights the presaccadic visual motion inputs by the distance from their location in space to the endpoint of the saccade, defining the extent of a spatiotemporal filter for driving the eyes. The center of the filter is located at the endpoint of the saccade in space, not at the position of the fovea. The filter is stable in the face of a distracter target, is present for saccades to stationary and moving targets, and affects both the speed and direction of the postsaccadic eye movement. The spatial filter can explain the target-selecting gain change in postsaccadic pursuit, and has intriguing parallels to the process by which perceptual decisions about a restricted region of space are enhanced by attention. The effect of the spatial saccade plan on the pursuit response to a given retinal motion describes the dynamics of a coordinate transformation.
Affiliation(s)
- David Schoppik
- Howard Hughes Medical Institute, Neuroscience Graduate Program, W. M. Keck Foundation Center for Integrative Neuroscience, and Department of Physiology, University of California, San Francisco, California 94143, USA.

48
Dessing JC, Caljouw SR, Peper CE, Beek PJ. A dynamical neural network for hitting an approaching object. Biol Cybern 2004; 91:377-87. [PMID: 15599591 DOI: 10.1007/s00422-004-0520-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/29/2004] [Accepted: 09/13/2004] [Indexed: 05/24/2023]
Abstract
Besides making contact with an approaching ball at the proper place and time, hitting requires control of the effector velocity at contact. A dynamical neural network for the planning of hitting movements was derived in order to account for both these requirements. The model in question implements continuous required velocity control by extending the Vector Integration To Endpoint model while providing explicit control of effector velocity at interception. It was shown that the planned movement trajectories generated by the model agreed qualitatively with the kinematics of hitting movements as observed in two recent experiments. Outstanding features of this comparison concerned the timing and amplitude of the empirical backswing movements, which were largely consistent with the predictions from the model. Several theoretical implications as well as the informational basis and possible neural underpinnings of the model were discussed.
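To picture the planning dynamics, here is a minimal sketch of the core Vector Integration To Endpoint (VITE) idea that the model extends: a difference vector relaxes toward the position error and its rectified outflow drives the effector. The gains and time step are illustrative assumptions, and this sketch omits the authors' required-velocity extension and the explicit control of contact velocity:

```python
import numpy as np

def vite_trajectory(target, p0=0.0, gamma=4.0, g=1.0, dt=0.01, steps=500):
    """Minimal 1-D VITE-like planner (illustrative gains, not fitted).

    v integrates toward the target-position error (target - p);
    the rectified outflow g * max(v, 0) moves the effector p."""
    p, v = p0, 0.0
    traj = []
    for _ in range(steps):
        v += dt * gamma * (-v + (target - p))   # integrate the difference vector
        p += dt * g * max(v, 0.0)               # rectified outflow moves the effector
        traj.append(p)
    return np.array(traj)
```

Even this stripped-down version produces the smooth, bell-shaped approach to the interception point; reproducing the backswing kinematics discussed above would require the full extended model.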
Affiliation(s)
- Joost C Dessing
- Institute for Fundamental and Clinical Human Movement Sciences, Amsterdam/Nijmegen, The Netherlands.
49
Abstract
Background
Behavior results from the integration of ongoing sensory signals and contextual information in various forms, such as past experience, expectations, and current goals. Thus, the response to a specific stimulus, say the ringing of a doorbell, varies depending on whether you are at home or in someone else's house. What is the neural basis of this flexibility? What mechanism is capable of selecting, in a context-dependent way, an adequate response to a given stimulus? One possibility is based on a nonlinear neural representation in which context information regulates the gain of stimulus-evoked responses. Here I explore the properties of this mechanism.
Results
By means of three hypothetical visuomotor tasks, I study a class of neural network models in which any one of several possible stimulus-response maps or rules can be selected according to context. The underlying mechanism based on gain modulation has three key features: (1) modulating the sensory responses is equivalent to switching on or off different subpopulations of neurons, (2) context does not need to be represented continuously, although this is advantageous for generalization, and (3) context-dependent selection is independent of the discriminability of the stimuli. In all cases, the contextual cues can quickly turn on or off a sensory-motor map, effectively changing the functional connectivity between inputs and outputs in the networks.
Conclusions
The modulation of sensory-triggered activity by proprioceptive signals such as eye or head position is regarded as a general mechanism for performing coordinate transformations in vision. The present results generalize this mechanism to situations where the modulatory quantity and the input-output relationships that it selects are arbitrary. The model predicts that sensory responses that are nonlinearly modulated by arbitrary context signals should be found in behavioral situations that involve choosing or switching between multiple sensory-motor maps. Because any relevant circumstantial information can be part of the context, this mechanism may partly explain the complex and rich behavioral repertoire of higher organisms.
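The gain-modulation mechanism can be illustrated with a toy circuit in which a context signal gates which of two subpopulations drives the output, flipping the effective stimulus-response map without changing any connection weights. All units, weights, and maps here are hypothetical, hand-built for illustration:

```python
import numpy as np

def response(stimulus_side, context):
    """Toy context-gated circuit (all values illustrative).

    stimulus_side: +1 (right) or -1 (left); context: 'A' or 'B'.
    Both subpopulations see the same stimulus; a multiplicative
    context gain switches one pool on and the other off, selecting
    between a "same" map and an "opposite" map."""
    drive = np.array([stimulus_side, stimulus_side], dtype=float)  # both pools driven
    gain = np.array([1.0, 0.0]) if context == 'A' else np.array([0.0, 1.0])
    weights = np.array([1.0, -1.0])  # pool A votes "same", pool B votes "opposite"
    return float(weights @ (gain * drive))
```

With identical stimuli and fixed weights, changing only the context reverses the output, which is the sense in which gain modulation "effectively changes the functional connectivity" between inputs and outputs.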
Affiliation(s)
- Emilio Salinas
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, NC 27157-1010, USA.
50
Smith MA, Crawford JD. Distributed population mechanism for the 3-D oculomotor reference frame transformation. J Neurophysiol 2004; 93:1742-61. [PMID: 15537819 DOI: 10.1152/jn.00306.2004] [Citation(s) in RCA: 47] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Human saccades require a nonlinear, eye orientation-dependent reference frame transformation to transform visual codes to the motor commands for eye muscles. Primate neurophysiology suggests that this transformation is performed between the superior colliculus and brain stem burst neurons, but provides few clues as to how this is done. To understand how the brain might accomplish this, we trained a 3-layer neural net to generate accurate commands for kinematically correct 3-D saccades. The inputs to the network were a 2-D, eye-centered, topographic map of Gaussian visual receptive fields and an efference copy of eye position in 6-dimensional, push-pull "neural integrator" coordinates. The output was an eye orientation displacement command in similar coordinates appropriate to drive brain stem burst neurons. The network learned to generate accurate, kinematically correct saccades, including the eye orientation-dependent tilts in saccade motor error commands required to match saccade trajectories to their visual input. Our analysis showed that the hidden units developed complex, eye-centered visual receptive fields, widely distributed fixed-vector motor commands, and "gain field"-like eye position sensitivities. The latter evoked subtle adjustments in the relative motor contributions of each hidden unit, thereby rotating the population motor vector into the correct correspondence with the visual target input for each eye orientation: a distributed population mechanism for the visuomotor reference frame transformation. These findings were robust; there was little variation across networks with between 9 and 49 hidden units. Because essentially the same observations have been reported in the visuomotor transformations of the real oculomotor system, as well as other visuomotor systems (although interpreted elsewhere in terms of other models), we suggest that the mechanism for visuomotor reference frame transformations identified here is the same solution used in the real brain.
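The hidden-unit readout described above can be sketched in miniature: Gaussian eye-centered visual tuning, planar eye-position gain fields, and fixed motor vectors whose gain-reweighted sum forms the population motor command. The unit count, tuning width, and gain slopes here are invented for illustration, and this 2-D sketch ignores the 3-D kinematics handled by the actual network:

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden = 16
rf_centers = rng.uniform(-20.0, 20.0, (n_hidden, 2))  # eye-centered visual RF centers (deg)
motor_vecs = rf_centers.copy()                        # fixed-vector motor contributions
gain_slopes = rng.uniform(-0.02, 0.02, (n_hidden, 2)) # hypothetical planar gain-field slopes

def motor_command(target_retinal, eye_position, sigma=8.0):
    """Each hidden unit responds with a Gaussian eye-centered visual
    tuning curve, linearly gain-modulated by eye position, and votes
    with a fixed motor vector; eye position subtly reweights the votes,
    rotating the population motor vector."""
    d2 = ((rf_centers - target_retinal) ** 2).sum(axis=1)
    visual = np.exp(-d2 / (2.0 * sigma ** 2))
    gain = 1.0 + gain_slopes @ eye_position           # stays positive for |eye| <= 20 deg
    h = visual * gain
    return (h[:, None] * motor_vecs).sum(axis=0) / h.sum()
```

The same retinal target yields a different population command at different eye positions, which is the gain-field adjustment of relative motor contributions that the analysis identifies as the mechanism of the transformation.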
Affiliation(s)
- Michael A Smith
- York Centre for Vision Research, Canadian Institute of Health Research Group for Action and Perception, Department of Psychology, York University, Computer Science Building, 4700 Keele Street, Toronto, Ontario M3J 1P3, Canada