1
Takahashi M, Veale R. Pathways for Naturalistic Looking Behavior in Primate I: Behavioral Characteristics and Brainstem Circuits. Neuroscience 2023; 532:133-163. PMID: 37776945. DOI: 10.1016/j.neuroscience.2023.09.009.
Abstract
Organisms control their visual worlds by moving their eyes, heads, and bodies. This control of "gaze" or "looking" is key to survival and intelligence, but our investigation of the underlying neural mechanisms in natural conditions is hindered by technical limitations. Recent advances have enabled measurement of both brain and behavior in freely moving animals in complex environments, expanding on historical head-fixed laboratory investigations. We juxtapose looking behavior as traditionally measured in the laboratory against looking behavior in naturalistic conditions, finding that behavior changes when animals are free to move or when stimuli have depth or sound. We specifically focus on the brainstem circuits driving gaze shifts and gaze stabilization. The overarching goal of this review is to reconcile the historical understanding of the differential neural circuits for different "classes" of gaze shift with two inconvenient truths: (1) "classes" of gaze behavior are artificial, and (2) the neural circuits historically identified to control each "class" of behavior do not operate in isolation during natural behavior. Instead, multiple pathways combine adaptively and non-linearly depending on individual experience. While the neural circuits for reflexive and voluntary gaze behaviors traverse somewhat independent brainstem and spinal cord circuits, both can be modulated by feedback, meaning that most gaze behaviors are learned rather than hardcoded. Despite this flexibility, there are broadly enumerable neural pathways commonly adopted among primate gaze systems. Parallel pathways which carry simultaneous evolutionary and homeostatic drives converge in the superior colliculus, a layered midbrain structure that integrates and relays these volitional signals to brainstem gaze-control circuits.
Affiliation(s)
- Mayu Takahashi
- Department of Systems Neurophysiology, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, Japan
- Richard Veale
- Department of Neurobiology, Graduate School of Medicine, Kyoto University, Japan
2
Abedi Khoozani P, Bharmauria V, Schütz A, Wildes RP, Crawford JD. Integration of allocentric and egocentric visual information in a convolutional/multilayer perceptron network model of goal-directed gaze shifts. Cereb Cortex Commun 2022; 3:tgac026. PMID: 35909704. PMCID: PMC9334293. DOI: 10.1093/texcom/tgac026.
Abstract
Allocentric (landmark-centered) and egocentric (eye-centered) visual codes are fundamental for spatial cognition, navigation, and goal-directed movement. Neuroimaging and neurophysiology suggest these codes are initially segregated, but then reintegrated in frontal cortex for movement control. We created and validated a theoretical framework for this process using physiologically constrained inputs and outputs. To implement a general framework, we integrated a convolutional neural network (CNN) of the visual system with a multilayer perceptron (MLP) model of the sensorimotor transformation. The network was trained on a task where a landmark shifted relative to the saccade target. These visual parameters were input to the CNN, the CNN output and initial gaze position to the MLP, and a decoder transformed MLP output into saccade vectors. Decoded saccade output replicated idealized training sets with various allocentric weightings and actual monkey data where the landmark shift had a partial influence (R2 = 0.8). Furthermore, MLP output units accurately simulated prefrontal response field shifts recorded from monkeys during the same paradigm. In summary, our model replicated both the general properties of the visuomotor transformations for gaze and specific experimental results obtained during allocentric–egocentric integration, suggesting it can provide a general framework for understanding these and other complex visuomotor behaviors.
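As a rough illustration of the pipeline this abstract describes (and emphatically not the authors' network, whose architecture, training procedure, and parameters are given in the paper), the following NumPy sketch wires a toy convolutional feature stage and an initial gaze position into a small MLP whose linear read-out stands in for the saccade-vector decoder. All weights, sizes, and names here are invented placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_features(img, kernels):
    """Toy 'CNN' stage: valid 2-D cross-correlation with each kernel,
    ReLU, then global average pooling to one scalar per kernel."""
    h, w = img.shape
    kh, kw = kernels.shape[1:]
    feats = []
    for k in kernels:
        out = np.array([[np.sum(img[i:i+kh, j:j+kw] * k)
                         for j in range(w - kw + 1)]
                        for i in range(h - kh + 1)])
        feats.append(np.maximum(out, 0).mean())
    return np.array(feats)

def mlp_decoder(x, W1, b1, W2, b2):
    """One hidden layer (tanh) followed by a linear read-out that
    plays the role of the saccade-vector decoder."""
    return W2 @ np.tanh(W1 @ x + b1) + b2

# Retinal image containing target + landmark (random placeholder here)
img = rng.standard_normal((16, 16))
kernels = rng.standard_normal((4, 3, 3))
gaze = np.array([2.0, -1.0])          # initial 2-D gaze position

# CNN output and initial gaze position jointly feed the MLP
x = np.concatenate([conv_features(img, kernels), gaze])
W1 = rng.standard_normal((8, x.size)); b1 = np.zeros(8)
W2 = rng.standard_normal((2, 8));      b2 = np.zeros(2)

saccade = mlp_decoder(x, W1, b1, W2, b2)  # decoded 2-D saccade vector
print(saccade.shape)  # (2,)
```

In the actual study the analogous network was trained on landmark-shift trials so that the decoded saccade reflects a weighted mix of egocentric and allocentric cues; here the weights are random and only the data flow is shown.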
Affiliation(s)
- Parisa Abedi Khoozani
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Vishal Bharmauria
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Adrian Schütz
- Department of Neurophysics, Philipps-University Marburg, Marburg 35037, Germany
- Richard P Wildes
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Department of Electrical Engineering and Computer Science, York University, Toronto, ON M3J 1P3, Canada
- J Douglas Crawford
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Departments of Psychology, Biology and Kinesiology & Health Sciences, York University, Toronto, Ontario M3J 1P3, Canada
3
Caruso VC, Pages DS, Sommer MA, Groh JM. Compensating for a shifting world: evolving reference frames of visual and auditory signals across three multimodal brain areas. J Neurophysiol 2021; 126:82-94. PMID: 33852803. DOI: 10.1152/jn.00385.2020.
Abstract
Stimulus locations are detected differently by different sensory systems, but ultimately they yield similar percepts and behavioral responses. How the brain transcends initial differences to compute similar codes is unclear. We quantitatively compared the reference frames of two sensory modalities, vision and audition, across three interconnected brain areas involved in generating saccades, namely the frontal eye fields (FEF), lateral and medial parietal cortex (M/LIP), and superior colliculus (SC). We recorded from single neurons in head-restrained monkeys performing auditory- and visually guided saccades from variable initial fixation locations and evaluated whether their receptive fields were better described as eye-centered, head-centered, or hybrid (i.e., not anchored uniquely to head or eye orientation). We found a progression of reference frames across areas and across time, with considerable hybrid-ness and persistent differences between modalities during most epochs and brain regions. For both modalities, the SC was more eye-centered than the FEF, which in turn was more eye-centered than the predominantly hybrid M/LIP. In all three areas and temporal epochs from stimulus onset to movement, visual signals were more eye-centered than auditory signals. In the SC and FEF, auditory signals became more eye-centered at the time of the saccade than they were initially after stimulus onset, but only in the SC at the time of the saccade did the auditory signals become "predominantly" eye-centered. The results indicate that visual and auditory signals both undergo transformations, ultimately reaching the same final reference frame but via different dynamics across brain regions and time. NEW & NOTEWORTHY Models for visual-auditory integration posit that visual signals are eye-centered throughout the brain, whereas auditory signals are converted from head-centered to eye-centered coordinates. We show instead that both modalities largely employ hybrid reference frames: neither fully head- nor eye-centered. Across three hubs of the oculomotor network (intraparietal cortex, frontal eye field, and superior colliculus), visual and auditory signals evolve from hybrid to a common eye-centered format via different dynamics across brain areas and time.
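The model-comparison logic used here (classifying a response field as eye-centered, head-centered, or hybrid according to which coordinate frame best fits the data) can be sketched with synthetic data. Everything below, from the Gaussian tuning shape to the noise level, is an invented toy, not the authors' analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

def frame_fit_error(resp, coord, centers):
    """Best summed-squared error of a fixed-width Gaussian tuning
    curve over candidate centres, in one candidate reference frame."""
    errs = [np.sum((resp - np.exp(-(coord - c) ** 2 / 50.0)) ** 2)
            for c in centers]
    return min(errs)

# Synthetic trials: target location (head-centred) and eye-in-head
# position vary independently, dissociating the two frames.
target = rng.uniform(-30, 30, 200)
eye    = rng.uniform(-15, 15, 200)
eye_centred = target - eye            # retinal location of the target

# A purely eye-centred neuron: fires for targets ~10 deg on the retina
resp = (np.exp(-(eye_centred - 10.0) ** 2 / 50.0)
        + 0.05 * rng.standard_normal(200))

centers = np.linspace(-30, 30, 61)
err_eye  = frame_fit_error(resp, eye_centred, centers)
err_head = frame_fit_error(resp, target, centers)
print(err_eye < err_head)  # True: eye-centred frame fits better
```

A hybrid neuron would give intermediate fit errors in both frames; the study's statistical analysis extends this comparison to continuous eye-to-head intermediate frames.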
Affiliation(s)
- Valeria C Caruso
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Psychology and Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina; Department of Psychiatry, University of Michigan, Ann Arbor, Michigan
- Daniel S Pages
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Psychology and Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina
- Marc A Sommer
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina; Department of Biomedical Engineering, Duke University, Durham, North Carolina
- Jennifer M Groh
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Psychology and Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina; Department of Biomedical Engineering, Duke University, Durham, North Carolina
4
Spatiotemporal Coding in the Macaque Supplementary Eye Fields: Landmark Influence in the Target-to-Gaze Transformation. eNeuro 2021; 8:ENEURO.0446-20.2020. PMID: 33318073. PMCID: PMC7877461. DOI: 10.1523/eneuro.0446-20.2020.
Abstract
Eye-centered (egocentric) and landmark-centered (allocentric) visual signals influence spatial cognition, navigation, and goal-directed action, but the neural mechanisms that integrate these signals for motor control are poorly understood. A likely candidate for egocentric/allocentric integration in the gaze control system is the supplementary eye fields (SEF), a mediofrontal structure with high-level “executive” functions, spatially tuned visual/motor response fields, and reciprocal projections with the frontal eye fields (FEF). To test this hypothesis, we trained two head-unrestrained monkeys (Macaca mulatta) to saccade toward a remembered visual target in the presence of a visual landmark that shifted during the delay, causing gaze end points to shift partially in the same direction. A total of 256 SEF neurons were recorded, including 68 with spatially tuned response fields. Model fits to the latter established that, like the FEF and superior colliculus (SC), spatially tuned SEF responses primarily showed an egocentric (eye-centered) target-to-gaze position transformation. However, the landmark shift influenced this default egocentric transformation: during the delay, motor neurons (with no visual response) showed a transient but unintegrated shift (i.e., not correlated with the target-to-gaze transformation), whereas during the saccade-related burst visuomotor (VM) neurons showed an integrated shift (i.e., correlated with the target-to-gaze transformation). This differed from our simultaneous FEF recordings (Bharmauria et al., 2020), which showed a transient shift in VM neurons, followed by an integrated response in all motor responses. Based on these findings and past literature, we propose that prefrontal cortex incorporates landmark-centered information into a distributed, eye-centered target-to-gaze transformation through a reciprocal prefrontal circuit.
5
Sajad A, Sadeh M, Crawford JD. Spatiotemporal transformations for gaze control. Physiol Rep 2020; 8:e14533. PMID: 32812395. PMCID: PMC7435051. DOI: 10.14814/phy2.14533.
Abstract
Sensorimotor transformations require spatiotemporal coordination of signals, that is, through both time and space. For example, the gaze control system employs signals that are time-locked to various sensorimotor events, but the spatial content of these signals is difficult to assess during ordinary gaze shifts. In this review, we describe the various models and methods that have been devised to test this question, and their limitations. We then describe a new method that can (a) simultaneously test between all of these models during natural, head-unrestrained conditions, and (b) track the evolving spatial continuum from target (T) to future gaze coding (G, including errors) through time. We then summarize some applications of this technique, comparing spatiotemporal coding in the primate frontal eye field (FEF) and superior colliculus (SC). The results confirm that these areas preferentially encode eye-centered, effector-independent parameters, and show, for the first time in ordinary gaze shifts, a spatial transformation between visual and motor responses from T to G coding. We introduce a new set of spatial models (T-G continuum) that revealed task-dependent timing of this transformation: progressive during a memory delay between vision and action, and almost immediate without such a delay. We synthesize the results from our studies and supplement them with previous knowledge of anatomy and physiology to propose a conceptual model in which cumulative transformation noise is realized as inaccuracies in gaze behavior. We conclude that the spatiotemporal transformation for gaze is both local (observed within and across neurons in a given area) and distributed (with common signals shared across remote but interconnected structures).
Affiliation(s)
- Amirsaman Sajad
- Centre for Vision Research, York University, Toronto, ON, Canada
- Psychology Department, Vanderbilt University, Nashville, TN, USA
- Morteza Sadeh
- Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Neurosurgery, University of Illinois at Chicago, Chicago, IL, USA
- John Douglas Crawford
- Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications Program (VISTA), Neuroscience Graduate Diploma Program, Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, ON, Canada
6
Role of Rostral Fastigial Neurons in Encoding a Body-Centered Representation of Translation in Three Dimensions. J Neurosci 2018; 38:3584-3602. PMID: 29487123. DOI: 10.1523/jneurosci.2116-17.2018.
Abstract
Many daily behaviors rely critically on estimates of our body motion. Such estimates must be computed by combining neck proprioceptive signals with vestibular signals that have been transformed from a head- to a body-centered reference frame. Recent studies showed that deep cerebellar neurons in the rostral fastigial nucleus (rFN) reflect these computations, but whether they explicitly encode estimates of body motion remains unclear. A key limitation in addressing this question is that, to date, cell tuning properties have only been characterized for a restricted set of motions across head-re-body orientations in the horizontal plane. Here we examined, for the first time, how 3D spatiotemporal tuning for translational motion varies with head-re-body orientation in both horizontal and vertical planes in the rFN of male macaques. While vestibular coding was profoundly influenced by head-re-body position in both planes, neurons typically reflected at most a partial transformation. However, their tuning shifts were not random but followed the specific spatial trajectories predicted for a 3D transformation. We show that these properties facilitate the linear decoding of fully body-centered motion representations in 3D with a broad range of temporal characteristics from small groups of 5-7 cells. These results demonstrate that the vestibular reference frame transformation required to compute body motion is indeed encoded by cerebellar neurons. We propose that maintaining partially transformed rFN responses with different spatiotemporal properties facilitates the creation of downstream body motion representations with a range of dynamic characteristics, consistent with the functional requirements for tasks such as postural control and reaching. SIGNIFICANCE STATEMENT Estimates of body motion are essential for many daily activities. Vestibular signals are important contributors to such estimates but must be transformed from a head- to a body-centered reference frame. Here, we provide the first direct demonstration that the cerebellum computes this transformation fully in 3D. We show that the output of these computations is reflected in the tuning properties of deep cerebellar rostral fastigial nucleus neurons in a specific distributed fashion that facilitates the efficient creation of body-centered translation estimates with a broad range of temporal properties (i.e., from acceleration to position). These findings support an important role for the rostral fastigial nucleus as a source of body translation estimates functionally relevant for behaviors ranging from postural control to perception.
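The head-to-body reference frame transformation at the heart of this study is, in its simplest 2-D horizontal-plane form, a rotation of the vestibular translation vector by the head-on-body angle. The sketch below shows only that textbook rotation (the function name and the planar simplification are mine, not the paper's 3-D formulation):

```python
import numpy as np

def head_to_body(v_head, yaw_deg):
    """Rotate a head-centred translation vector into body coordinates,
    given the head-on-body yaw angle (2-D horizontal-plane case)."""
    a = np.radians(yaw_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return R @ v_head

# Head turned 90 deg on the body: 'forward' for the head is a
# sideways translation for the body.
v_body = head_to_body(np.array([1.0, 0.0]), 90.0)
print(np.round(v_body, 6))  # [0. 1.]
```

A "partial transformation", as reported for many rFN neurons, would correspond to rotating by only a fraction of the actual head-on-body angle.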
7
Zhou W, Zhai X, Ghahari A, Korentis GA, Kaputa D, Enderle JD. Static Characteristics of a New Three-Dimensional Linear Homeomorphic Saccade Model. Int J Neural Syst 2017; 28:1750049. PMID: 29241397. DOI: 10.1142/s0129065717500496.
Abstract
A linear homeomorphic saccade model that produces 3D saccadic eye movements consistent with physiological and anatomical evidence is introduced. Central to the model is the implementation of a time-optimal controller with six linear muscles and pulleys that represent the saccade oculomotor plant. Each muscle is modeled as a parallel combination of viscosity and series elasticity connected to the parallel combination of an active-state tension generator, a viscosity element, and a length-tension elastic element. Additionally, passive tissues of the eyeball include a viscosity element, an elastic element, and a moment of inertia. The neural input for each muscle is separately maintained, whereas the effective pulling direction is modulated by its respective mid-orbital constraint from the pulleys. Initial parameter values for the oculomotor plant are based on anatomical and physiological evidence. The oculomotor plant uses a time-optimal, 2D commutative neural controller, together with the pulley system that actively functions to implement Listing's law during both static and dynamic conditions. In a companion paper, the dynamic characteristics of the saccade model are analyzed using a time-domain system identification technique to estimate the final parameter values and neural inputs from saccade data. An excellent match between the model estimates and the data is observed, whereby a total of 20 horizontal, 5 vertical, and 64 oblique saccades are analyzed.
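The pulse-step logic that drives plant models of this family can be caricatured with a single first-order low-pass stage. The sketch below is a deliberately reduced stand-in (one time constant, one axis, invented parameter values), not the six-muscle, time-optimal model the abstract describes:

```python
import numpy as np

def simulate_plant(n_input, dt=0.001, tau=0.15):
    """Toy first-order 'oculomotor plant': low-pass filter the neural
    pulse-step input (Euler integration) to produce an eye-position
    trace. Illustrates the pulse-step idea only."""
    theta = 0.0
    out = []
    for u in n_input:
        theta += dt * (u - theta) / tau
        out.append(theta)
    return np.array(out)

# Pulse-step input: a strong 50-ms pulse drives the fast movement,
# then a smaller step holds the eye at the new position (10 deg).
u = np.concatenate([np.full(50, 40.0), np.full(450, 10.0)])
trace = simulate_plant(u)
print(trace[-1])  # settles near the 10-deg step level
```

In the full model the input to each of the six muscles is estimated from data by system identification, and the plant dynamics are second order with pulley constraints; none of that is represented here.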
Affiliation(s)
- Wei Zhou
- Department of Biomedical Engineering, University of Connecticut, 260 Glenbrook Road, Storrs, CT 06269-3247, USA
- Xiu Zhai
- Department of Biomedical Engineering, University of Connecticut, 260 Glenbrook Road, Storrs, CT 06269-3247, USA
- Alireza Ghahari
- Department of Biomedical Engineering, University of Connecticut, 260 Glenbrook Road, Storrs, CT 06269-3247, USA
- G Alex Korentis
- Department of Biomedical Engineering, University of Connecticut, 260 Glenbrook Road, Storrs, CT 06269-3247, USA
- David Kaputa
- Department of Biomedical Engineering, University of Connecticut, 260 Glenbrook Road, Storrs, CT 06269-3247, USA
- John D Enderle
- Department of Biomedical Engineering, University of Connecticut, 260 Glenbrook Road, Storrs, CT 06269-3247, USA
8
Affiliation(s)
- M. W. Spratling
- Department of Informatics, King's College London, London, UK
9
Sadeh M, Sajad A, Wang H, Yan X, Crawford JD. Spatial transformations between superior colliculus visual and motor response fields during head-unrestrained gaze shifts. Eur J Neurosci 2016; 42:2934-51. PMID: 26448341. DOI: 10.1111/ejn.13093.
Abstract
We previously reported that visuomotor activity in the superior colliculus (SC), a key midbrain structure for the generation of rapid eye movements, preferentially encodes target position relative to the eye (Te) during low-latency head-unrestrained gaze shifts (DeSouza et al., 2011). Here, we trained two monkeys to perform head-unrestrained gaze shifts after a variable post-stimulus delay (400-700 ms), to test whether temporally separated SC visual and motor responses show different spatial codes. Target positions, final gaze positions and various frames of reference (eye, head, and space) were dissociated through natural (untrained) trial-to-trial variations in behaviour. 3D eye and head orientations were recorded, and 2D response field data were fitted against multiple models by use of a statistical method reported previously (Keith et al., 2009). Of 60 neurons, 17 showed a visual response, 12 showed a motor response, and 31 showed both visual and motor responses. The combined visual response field population (n = 48) showed a significant preference for Te, which was also preferred in each visual subpopulation. In contrast, the motor response field population (n = 43) showed a preference for final (relative to initial) gaze position models, and the Te model was statistically eliminated in the motor-only population. There was also a significant shift of coding from the visual to motor response within visuomotor neurons. These data confirm that SC response fields are gaze-centred, and show a target-to-gaze transformation between visual and motor responses. Thus, visuomotor transformations can occur between, and even within, neurons within a single frame of reference and brain structure.
Affiliation(s)
- Morteza Sadeh
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; York Neuroscience Graduate Diploma Program, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
- Amirsaman Sajad
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; York Neuroscience Graduate Diploma Program, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
- Hongying Wang
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
- Xiaogang Yan
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
- John Douglas Crawford
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; York Neuroscience Graduate Diploma Program, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
10
Mohsenzadeh Y, Dash S, Crawford JD. A State Space Model for Spatial Updating of Remembered Visual Targets during Eye Movements. Front Syst Neurosci 2016; 10:39. PMID: 27242452. PMCID: PMC4867689. DOI: 10.3389/fnsys.2016.00039.
Abstract
In the oculomotor system, spatial updating is the ability to aim a saccade toward a remembered visual target position despite intervening eye movements. Although this has been the subject of extensive experimental investigation, there is still no unifying theoretical framework to explain the neural mechanism for this phenomenon, and how it influences visual signals in the brain. Here, we propose a unified state-space model (SSM) to account for the dynamics of spatial updating during two types of eye movement: saccades and smooth pursuit. Our proposed model is a non-linear SSM implemented through a recurrent radial-basis-function neural network in a dual Extended Kalman filter (EKF) structure. The model parameters and internal states (remembered target position) are estimated sequentially using the EKF method. The proposed model replicates two fundamental experimental observations: continuous gaze-centered updating of visual memory-related activity during smooth pursuit, and predictive remapping of visual memory activity before and during saccades. Moreover, our model makes the new prediction that, when uncertainty of input signals is incorporated in the model, neural population activity and receptive fields expand just before and during saccades. These results suggest that visual remapping and motor updating are part of a common visuomotor mechanism, and that subjective perceptual constancy arises in part from training the visual system on motor tasks.
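The core gaze-centered updating rule that such a model must implement can be stated in one line: after an intervening eye movement, the remembered target location in eye coordinates shifts by the inverse of the movement vector. A minimal sketch of just that rule (ignoring the paper's recurrent network, noise model, and EKF estimation):

```python
import numpy as np

def update_remembered_target(t_eye, saccade):
    """Gaze-centred spatial updating: subtract the eye-movement
    vector from the remembered eye-centred target location."""
    return t_eye - saccade

t = np.array([8.0, 3.0])    # remembered target, eye-centred (deg)
s = np.array([10.0, 0.0])   # intervening 10-deg rightward saccade
t_updated = update_remembered_target(t, s)
print(t_updated)  # [-2.  3.]
```

During smooth pursuit the same subtraction is applied continuously (integrating eye velocity), which is what produces the continuous updating the abstract describes; the dual-EKF machinery handles doing this with noisy, uncertain signals.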
Affiliation(s)
- Yalda Mohsenzadeh
- York Center for Vision Research, Canadian Action and Perception Network, York University, Toronto, ON, Canada
- Suryadeep Dash
- York Center for Vision Research, Canadian Action and Perception Network, York University, Toronto, ON, Canada; Department of Physiology and Pharmacology, Robarts Research Institute, Western University, London, ON, Canada
- J Douglas Crawford
- York Center for Vision Research, Canadian Action and Perception Network, York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, ON, Canada
11
Üstün C. A Sensorimotor Model for Computing Intended Reach Trajectories. PLoS Comput Biol 2016; 12:e1004734. PMID: 26985662. PMCID: PMC4795795. DOI: 10.1371/journal.pcbi.1004734.
Abstract
The presumed role of the primate sensorimotor system is to transform reach targets from retinotopic to joint coordinates for producing motor output. However, the interpretation of neurophysiological data within this framework is ambiguous, and has led to the view that the underlying neural computation may lack a well-defined structure. Here, I consider a model of sensorimotor computation in which temporal as well as spatial transformations generate representations of desired limb trajectories, in visual coordinates. This computation is suggested by behavioral experiments, and its modular implementation makes predictions that are consistent with those observed in monkey posterior parietal cortex (PPC). In particular, the model provides a simple explanation for why PPC encodes reach targets in reference frames intermediate between the eye and hand, and further explains why these reference frames shift during movement. Representations in PPC are thus consistent with the orderly processing of information, provided we adopt the view that sensorimotor computation manipulates desired movement trajectories, and not desired movement endpoints.

Does the brain explicitly plan entire movement trajectories or are these emergent properties of motor control? Although behavioral studies support the notion of trajectory planning for visually guided reaches, a neurobiologically plausible mechanism for this observation has been lacking. I discuss a model that generates representations of desired reach trajectories (i.e., paths and speed profiles) for point-to-point reaches. I show that the predictions of this model closely resemble the population responses of neurons in posterior parietal cortex, a visuomotor planning area of the monkey brain. Several aspects of population responses that are puzzling from the point of view of traditional sensorimotor models are coherently explained by this mechanism.
Affiliation(s)
- Cevat Üstün
- Division of Biology, California Institute of Technology, Pasadena, California, United States of America
12
Murdison TS, Leclercq G, Lefèvre P, Blohm G. Computations underlying the visuomotor transformation for smooth pursuit eye movements. J Neurophysiol 2015; 113:1377-99. PMID: 25475344. DOI: 10.1152/jn.00273.2014.
Abstract
Smooth pursuit eye movements are driven by retinal motion and enable us to view moving targets with high acuity. Complicating the generation of these movements is the fact that different eye and head rotations can produce different retinal stimuli that nonetheless give rise to identical smooth pursuit trajectories. However, because our eyes accurately pursue targets regardless of eye and head orientation (Blohm G, Lefèvre P. J Neurophysiol 104: 2103-2115, 2010), the brain must somehow take these signals into account. To learn about the neural mechanisms potentially underlying this visual-to-motor transformation, we trained a physiologically inspired neural network model to combine two-dimensional (2D) retinal motion signals with three-dimensional (3D) eye and head orientation and velocity signals to generate a spatially correct 3D pursuit command. We then simulated conditions of 1) head roll-induced ocular counterroll, 2) oblique gaze-induced retinal rotations, 3) eccentric gazes (invoking the half-angle rule), and 4) optokinetic nystagmus to investigate how units in the intermediate layers of the network accounted for different 3D constraints. Simultaneously, we simulated electrophysiological recordings (visual and motor tunings) and microstimulation experiments to quantify the reference frames of signals at each processing stage. We found a gradual retinal-to-intermediate-to-spatial feedforward transformation through the hidden layers. Our model is the first to describe the general 3D transformation for smooth pursuit mediated by eye- and head-dependent gain modulation. Based on several testable experimental predictions, our model provides a mechanism by which the brain could perform the 3D visuomotor transformation for smooth pursuit.
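One ingredient of this visuomotor transformation can be shown in isolation: retinal slip equals target velocity minus eye velocity, so combining slip with an eye-velocity efference copy recovers target motion in space. The sketch below illustrates only this velocity bookkeeping in a common 2-D frame, not the 3-D rotations and gain modulation the network implements:

```python
import numpy as np

def reconstruct_target_velocity(retinal_slip, eye_velocity):
    """Retinal slip is target velocity minus eye velocity, so adding
    an eye-velocity efference copy recovers target velocity in space.
    (The full model also rotates signals by 3-D eye and head
    orientation; this sketch keeps everything in one 2-D frame.)"""
    return retinal_slip + eye_velocity

target_vel = np.array([12.0, 0.0])   # deg/s, rightward target motion
eye_vel    = np.array([9.0, 0.0])    # eye already pursuing at 9 deg/s
slip = target_vel - eye_vel          # what the retina actually sees
print(reconstruct_target_velocity(slip, eye_vel))
```

Head roll, oblique gaze, and the half-angle rule break the assumption of a shared frame, which is precisely why the paper's network must additionally rotate the retinal signal by eye and head orientation.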
Affiliation(s)
- T Scott Murdison
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN)
- Guillaume Leclercq
- ICTEAM Institute and Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-La-Neuve, Belgium
- Philippe Lefèvre
- ICTEAM Institute and Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-La-Neuve, Belgium
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN)
13
Sajad A, Sadeh M, Keith GP, Yan X, Wang H, Crawford JD. Visual-Motor Transformations Within Frontal Eye Fields During Head-Unrestrained Gaze Shifts in the Monkey. Cereb Cortex 2014; 25:3932-52. [PMID: 25491118] [PMCID: PMC4585524] [DOI: 10.1093/cercor/bhu279]
Abstract
A fundamental question in sensorimotor control concerns the transformation of spatial signals from the retina into eye and head motor commands required for accurate gaze shifts. Here, we investigated these transformations by identifying the spatial codes embedded in visually evoked and movement-related responses in the frontal eye fields (FEFs) during head-unrestrained gaze shifts. Monkeys made delayed gaze shifts to the remembered location of briefly presented visual stimuli, with delay serving to dissociate visual and movement responses. A statistical analysis of nonparametric model fits to response field data from 57 neurons (38 with visual and 49 with movement activities) eliminated most effector-specific, head-fixed, and space-fixed models, but confirmed the dominance of eye-centered codes observed in head-restrained studies. More importantly, the visual response encoded target location, whereas the movement response mainly encoded the final position of the imminent gaze shift (including gaze errors). This spatiotemporal distinction between target and gaze coding was present not only at the population level, but even at the single-cell level. We propose that an imperfect visual–motor transformation occurs during the brief memory interval between perception and action, and further transformations from the FEF's eye-centered gaze motor code to effector-specific codes in motor frames occur downstream in the subcortical areas.
Affiliation(s)
- Amirsaman Sajad
- Centre for Vision Research; Canadian Action and Perception Network (CAPnet); Neuroscience Graduate Diploma Program; Department of Biology
- Morteza Sadeh
- Centre for Vision Research; Canadian Action and Perception Network (CAPnet); Neuroscience Graduate Diploma Program; School of Kinesiology and Health Sciences
- Gerald P Keith
- Centre for Vision Research; Canadian Action and Perception Network (CAPnet); Department of Psychology, York University, Toronto, ON, Canada M3J 1P3
- Xiaogang Yan
- Centre for Vision Research; Canadian Action and Perception Network (CAPnet)
- Hongying Wang
- Centre for Vision Research; Canadian Action and Perception Network (CAPnet)
- John Douglas Crawford
- Centre for Vision Research; Canadian Action and Perception Network (CAPnet); Neuroscience Graduate Diploma Program; Department of Biology; School of Kinesiology and Health Sciences; Department of Psychology, York University, Toronto, ON, Canada M3J 1P3
14
A single functional model of drivers and modulators in cortex. J Comput Neurosci 2013; 36:97-118. [DOI: 10.1007/s10827-013-0471-7]
15
Monteon JA, Wang H, Martinez-Trujillo J, Crawford JD. Frames of reference for eye-head gaze shifts evoked during frontal eye field stimulation. Eur J Neurosci 2013; 37:1754-65. [PMID: 23489744] [DOI: 10.1111/ejn.12175]
Abstract
The frontal eye field (FEF), in the prefrontal cortex, participates in the transformation of visual signals into saccade motor commands and in eye-head gaze control. The FEF is thought to show eye-fixed visual codes in head-restrained monkeys, but it is not known how it transforms these inputs into spatial codes for head-unrestrained gaze commands. Here, we tested if the FEF influences desired gaze commands within a simple eye-fixed frame, like the superior colliculus (SC), or in more complex egocentric frames like the supplementary eye fields (SEFs). We electrically stimulated 95 FEF sites in two head-unrestrained monkeys to evoke 3D eye-head gaze shifts and then mathematically rotated these trajectories into various reference frames. In theory, each stimulation site should specify a specific spatial goal when the evoked gaze shifts are plotted in the appropriate frame. We found that these motor output frames varied site by site, mainly within the eye-to-head frame continuum. Thus, consistent with the intermediate placement of the FEF within the high-level circuits for gaze control, its stimulation-evoked output showed an intermediate trend between the multiple reference frame codes observed in SEF-evoked gaze shifts and the simpler eye-fixed reference frame observed in SC-evoked movements. These results suggest that, although the SC, FEF and SEF carry eye-fixed information at the level of their unit response fields, this information is transformed differently in their output projections to the eye and head controllers.
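The paper's logic, rotate each stimulation-evoked trajectory into candidate frames and ask in which frame the endpoints converge on a single goal, can be sketched in 2-D (a simplified illustration with synthetic data of our own; the authors' analysis is 3-D and spans a continuum of frames):

```python
import math

def rotate2d(v, angle_deg):
    """Rotate a 2-D vector counterclockwise by angle_deg."""
    a = math.radians(angle_deg)
    x, y = v
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

def scatter(points):
    """Mean squared distance of points from their centroid."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    return sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in points) / n

# Simulate a site encoding a fixed goal in EYE coordinates: the
# space-frame endpoint rotates with the initial eye orientation.
goal_in_eye = (10.0, 0.0)
initial_eye_angles = [-20.0, -10.0, 0.0, 10.0, 20.0]
endpoints_space = [rotate2d(goal_in_eye, a) for a in initial_eye_angles]

# Re-express every endpoint in the eye frame of its own trial.
endpoints_eye = [rotate2d(p, -a)
                 for p, a in zip(endpoints_space, initial_eye_angles)]

# The intrinsic frame is the one with the least endpoint scatter.
eye_scatter = scatter(endpoints_eye)      # converges on one goal
space_scatter = scatter(endpoints_space)  # endpoints spread out
```

For an eye-coded site the eye-frame scatter collapses to zero while the space-frame scatter stays large; real sites fall between such extremes, which is what the site-by-site continuum in the abstract refers to.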
Affiliation(s)
- Jachin A Monteon
- Centre for Vision Research, York University, Toronto, ON, Canada
16
Dolk T, Liepelt R, Prinz W, Fiehler K. Visual experience determines the use of external reference frames in joint action control. PLoS One 2013; 8:e59008. [PMID: 23536848] [PMCID: PMC3594222] [DOI: 10.1371/journal.pone.0059008]
Abstract
Vision plays a crucial role in human interaction by facilitating the coordination of one's own actions with those of others in space and time. While previous findings have demonstrated that vision determines the default use of reference frames, little is known about the role of visual experience in coding action-space during joint action. Here, we tested if and how visual experience influences the use of reference frames in joint action control. Dyads of congenitally-blind, blindfolded-sighted, and seeing individuals took part in an auditory version of the social Simon task, which required each participant to respond to one of two sounds presented to the left or right of both participants. To disentangle the contribution of external—agent-based and response-based—reference frames during joint action, participants performed the task with their respective response (right) hands uncrossed or crossed over one another. Although the location of the auditory stimulus was completely task-irrelevant, participants responded overall faster when the stimulus location spatially corresponded to the required response side than when they were spatially non-corresponding: a phenomenon known as the social Simon effect (SSE). In sighted participants, the SSE occurred irrespective of whether hands were crossed or uncrossed, suggesting the use of external, response-based reference frames. Congenitally-blind participants also showed an SSE, but only with uncrossed hands. We argue that congenitally-blind people use both agent-based and response-based reference frames resulting in conflicting spatial information when hands are crossed and, thus, canceling out the SSE. These results imply that joint action control functions on the basis of external reference frames independent of the presence or (transient/permanent) absence of vision. However, the type of external reference frames used for organizing motor control in joint action seems to be determined by visual experience.
Affiliation(s)
- Thomas Dolk
- Department of Psychology, Max-Planck-Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Roman Liepelt
- Institute for Psychology, University of Muenster, Muenster, Germany
- Wolfgang Prinz
- Department of Psychology, Max-Planck-Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Katja Fiehler
- Department of Psychology, Justus-Liebig-University Giessen, Giessen, Germany
17
Simulating the cortical 3D visuomotor transformation of reach depth. PLoS One 2012; 7:e41241. [PMID: 22815979] [PMCID: PMC3397995] [DOI: 10.1371/journal.pone.0041241]
Abstract
We effortlessly perform reach movements to objects in different directions and depths. However, how networks of cortical neurons compute reach depth from binocular visual inputs remains largely unknown. To bridge the gap between behavior and neurophysiology, we trained a feed-forward artificial neural network to uncover potential mechanisms that might underlie the 3D transformation of reach depth. Our physiologically-inspired 4-layer network receives distributed 3D visual inputs (1st layer) along with eye, head and vergence signals. The desired motor plan was coded in a population (3rd layer) that we read out (4th layer) using an optimal linear estimator. After training, our network was able to reproduce all known single-unit recording evidence on depth coding in the parietal cortex. Network analyses predict the presence of eye/head and vergence changes of depth tuning, pointing towards a gain-modulation mechanism of depth transformation. In addition, reach depth was computed directly from eye-centered (relative) visual distances, without explicit absolute depth coding. We suggest that these effects should be observable in parietal and pre-motor areas.
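One of the binocular signals this network receives is vergence. For a target fixated straight ahead, simple triangulation relates vergence angle to absolute fixation distance (a textbook geometric sketch with an assumed interocular distance, not the network's internal code, which the paper argues works from relative rather than absolute depth):

```python
import math

def fixation_distance(vergence_deg, interocular_cm=6.5):
    """Distance (cm) of a fixated target straight ahead, from the
    vergence angle v, using tan(v/2) = (interocular/2) / distance."""
    return (interocular_cm / 2.0) / math.tan(math.radians(vergence_deg) / 2.0)

# Larger vergence angle means a nearer fixation point.
near = fixation_distance(4.0)
far = fixation_distance(2.0)
```

The inverse, nonlinear dependence of distance on vergence is one reason a gain-modulation scheme, rather than a simple additive one, is needed to read depth out of such signals.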
18
Casarotti M, Lisi M, Umiltà C, Zorzi M. Paying Attention through Eye Movements: A Computational Investigation of the Premotor Theory of Spatial Attention. J Cogn Neurosci 2012; 24:1519-31. [DOI: 10.1162/jocn_a_00231]
Abstract
Growing evidence indicates that planning eye movements and orienting visuospatial attention share overlapping brain mechanisms. A tight link between endogenous attention and eye movements is maintained by the premotor theory, in contrast to other accounts that postulate the existence of specific attention mechanisms that modulate the activity of information processing systems. The strong assumption of equivalence between attention and eye movements, however, is challenged by demonstrations that human observers are able to keep attention on a specific location while moving the eyes elsewhere. Here we investigate whether a recurrent model of saccadic planning can account for attentional effects without requiring additional or specific mechanisms separate from the circuits that perform sensorimotor transformations for eye movements. The model builds on the basis function approach and includes a circuit that performs spatial remapping using an “internal forward model” of how visual inputs are modified as a result of saccadic movements. Simulations show that the latter circuit is crucial to account for dissociations between attention and eye movements that may be invoked to disprove the premotor theory. The model provides new insights into how spatial remapping may be implemented in parietal cortex and offers a computational framework for recent proposals that link visual stability with remapping of attention pointers.
19
Intrinsic reference frames of superior colliculus visuomotor receptive fields during head-unrestrained gaze shifts. J Neurosci 2012; 31:18313-26. [PMID: 22171035] [DOI: 10.1523/jneurosci.0990-11.2011]
Abstract
A sensorimotor neuron's receptive field and its frame of reference are easily conflated within the natural variability of spatial behavior. Here, we capitalized on such natural variations in 3-D eye and head positions during head-unrestrained gaze shifts to visual targets in two monkeys to determine whether intermediate/deep layer superior colliculus (SC) receptive fields code visual targets or gaze kinematics, within four different frames of reference. Visuomotor receptive fields were either characterized during gaze shifts to visual targets from a central fixation position (32 units) or were partially characterized from each of three initial fixation points (31 units). Natural variations of initial 3-D gaze and head orientation (including torsion) provided spatial separation between four different coordinate frame models (space, head, eye, fixed-vector relative to fixation), whereas natural saccade errors provided spatial separation between target and gaze positions. Using a new statistical method based on predictive sum-of-squares, we found that in our population of 63 neurons (1) receptive field fits to target positions were significantly better than fits to actual gaze shift locations and (2) eye-centered models gave significantly better fits than the head or space frame. An intermediate frames analysis confirmed that individual neuron fits were distributed in target-in-eye coordinates. Gaze position "gain" effects with the spatial tuning required for a 3-D reference frame transformation were significant in 23% (7/31) of neurons tested. We conclude that the SC primarily represents gaze targets relative to the eye but also carries early signatures of the 3-D sensorimotor transformation.
20
Crawford JD, Henriques DYP, Medendorp WP. Three-dimensional transformations for goal-directed action. Annu Rev Neurosci 2011; 34:309-31. [PMID: 21456958] [DOI: 10.1146/annurev-neuro-061010-113749]
Abstract
Much of the central nervous system is involved in visuomotor transformations for goal-directed gaze and reach movements. These transformations are often described in terms of stimulus location, gaze fixation, and reach endpoints, as viewed through the lens of translational geometry. Here, we argue that the intrinsic (primarily rotational) 3-D geometry of the eye-head-reach systems determines the spatial relationship between extrinsic goals and effector commands, and therefore the required transformations. This approach provides a common theoretical framework for understanding both gaze and reach control. Combined with an assessment of the behavioral, neurophysiological, imaging, and neuropsychological literature, this framework leads us to conclude that (a) the internal representation and updating of visual goals are dominated by gaze-centered mechanisms, but (b) these representations must then be transformed as a function of eye and head orientation signals into effector-specific 3-D movement commands.
Affiliation(s)
- J Douglas Crawford
- York Centre for Vision Research, Canadian Action and Perception Network, and Departments of Psychology, Toronto, Ontario, Canada, M3J 1P3.
21
Abstract
This article presents an approach to understanding human spatial competence that focuses on the representations and processes of spatial cognition and how they are integrated with cognition more generally. The foundational theoretical argument for this research is that spatial information processing is central to cognition more generally, in the sense that it is brought to bear ubiquitously to improve the adaptivity and effectiveness of perception, cognitive processing, and motor action. We describe research spanning multiple levels of complexity to understand both the detailed mechanisms of spatial cognition, and how they are utilized in complex, naturalistic tasks. In the process, we discuss the critical role of cognitive architectures in developing a consistent account that spans this breadth, and we note some areas in which the current version of a popular architecture, ACT-R, may need to be augmented. Finally, we suggest a framework for understanding the representations and processes of spatial competence and their role in human cognition generally.
Affiliation(s)
- Glenn Gunzelmann
- Air Force Research Laboratory; L3 Communications at Air Force Research Laboratory
22
Idiosyncratic and systematic aspects of spatial representations in the macaque parietal cortex. Proc Natl Acad Sci U S A 2010; 107:7951-6. [PMID: 20375282] [DOI: 10.1073/pnas.0913209107]
Abstract
The sensorimotor transformations for visually guided reaching were originally thought to take place in a series of discrete transitions from one systematic frame of reference to the next with neurons coding location relative to the fixation position (gaze-centered) in occipital and posterior parietal areas, relative to the shoulder in dorsal premotor cortex, and in muscle- or joint-based coordinates in motor output neurons. Recent empirical and theoretical work has suggested that spatial encodings that use a range of idiosyncratic representations may increase computational power and flexibility. We now show that neurons in the parietal reach region use nonuniform and idiosyncratic frames of reference. We also show that these nonsystematic reference frames coexist with a systematic compound gain field that modulates activity proportional to the distance between the eyes and the hand. Thus, systematic and idiosyncratic signals may coexist within individual neurons.
23
Abstract
For more than two decades, neuroscientists have debated the role of "gain fields" in sensorimotor transformations. In this issue of Neuron, Chang et al. demonstrate a tight correlation between eye and hand position gain fields in the "parietal reach region," strongly suggesting that they play a functional role in computing the reach command.
Affiliation(s)
- Gunnar Blohm
- Centre for Neuroscience Studies, Department of Physiology and Faculty of Arts and Science, Queen's University, Kingston, Ontario K7L 3N6, Canada.
24
Chang SWC, Papadimitriou C, Snyder LH. Using a compound gain field to compute a reach plan. Neuron 2010; 64:744-55. [PMID: 20005829] [DOI: 10.1016/j.neuron.2009.11.005]
Abstract
A gain field, the scaling of a tuned neuronal response by a postural signal, may help support neuronal computation. Here, we characterize eye and hand position gain fields in the parietal reach region (PRR). Eye and hand gain fields in individual PRR neurons are similar in magnitude but opposite in sign to one another. This systematic arrangement produces a compound gain field that is proportional to the distance between gaze location and initial hand position. As a result, the visual response to a target for an upcoming reach is scaled by the initial gaze-to-hand distance. Such a scaling is similar to what would be predicted in a neural network that mediates between eye- and hand-centered representations of target location. This systematic arrangement supports a role of PRR in visually guided reaching and provides strong evidence that gain fields are used for neural computations.
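The arrangement described, equal-magnitude, opposite-sign eye and hand gain fields that compose a gain proportional to gaze-to-hand distance, can be written out in a few lines (an idealized linear sketch with an assumed gain slope; the function and parameter names are ours, not a fit to the reported data):

```python
def compound_gain(eye_pos_deg, hand_pos_deg, slope=0.01):
    """Eye and hand position gain fields of equal magnitude and
    opposite sign; their combination depends only on the
    gaze-to-hand distance: 1 + slope * (eye - hand)."""
    return 1.0 + slope * eye_pos_deg - slope * hand_pos_deg

def prr_response(target_tuning, eye_pos_deg, hand_pos_deg):
    """Visual response to a reach target, scaled by the compound
    gain field (tuning value assumed already computed in
    eye-centered coordinates)."""
    return target_tuning * compound_gain(eye_pos_deg, hand_pos_deg)
```

Because the two linear gain fields cancel whenever gaze and hand coincide, the scaling carries exactly the quantity, eye-to-hand offset, that a network converting between eye- and hand-centered target representations would need.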
Affiliation(s)
- Steve W C Chang
- Department of Anatomy and Neurobiology, Washington University in St. Louis School of Medicine, St. Louis, MO 63110, USA.
25
Green AM, Angelaki DE. Internal models and neural computation in the vestibular system. Exp Brain Res 2010; 200:197-222. [PMID: 19937232] [PMCID: PMC2853943] [DOI: 10.1007/s00221-009-2054-4]
Abstract
The vestibular system is vital for motor control and spatial self-motion perception. Afferents from the otolith organs and the semicircular canals converge with optokinetic, somatosensory and motor-related signals in the vestibular nuclei, which are reciprocally interconnected with the vestibulocerebellar cortex and deep cerebellar nuclei. Here, we review the properties of the many cell types in the vestibular nuclei, as well as some fundamental computations implemented within this brainstem-cerebellar circuitry. These include the sensorimotor transformations for reflex generation, the neural computations for inertial motion estimation, the distinction between active and passive head movements, as well as the integration of vestibular and proprioceptive information for body motion estimation. A common theme in the solution to such computational problems is the concept of internal models and their neural implementation. Recent studies have provided new insights into important organizational principles that closely resemble those proposed for other sensorimotor systems, where their neural basis has often been more difficult to identify. As such, the vestibular system provides an excellent model to explore common neural processing strategies relevant both for reflexive and for goal-directed, voluntary movement as well as perception.
Affiliation(s)
- Andrea M Green
- Dépt. de Physiologie, Université de Montréal, 2960 Chemin de la Tour, Rm. 4141, Montreal, QC H3T 1J4, Canada.
26
Nagy B, Corneil BD. Representation of horizontal head-on-body position in the primate superior colliculus. J Neurophysiol 2009; 103:858-74. [PMID: 20007503] [DOI: 10.1152/jn.00099.2009]
Abstract
Movement-related activity within the superior colliculus (SC) represents the desired displacement of an impending gaze shift. This representation must ultimately be transformed into position-based reference frames appropriate for coordinated eye-head gaze shifts. Parietal areas that project to the SC are modulated by the initial position of both the eye-re-head and head-re-body, and SC activity is modulated by eye-re-head position. These considerations led us to investigate whether SC activity is modulated by head-re-body position. We recorded activity from movement-related SC neurons while head-restrained monkeys performed a delayed-saccade task. Across blocks of trials, the horizontal position of the body was rotated under a space-fixed head to three to five different positions spanning ±25°. We observed a significant influence of body-under-head position on SC activity in 50/60 neurons. This influence was expressed predominantly as a linear gain field, scaling task-related SC activity without changing the location of the response field (linear gain fields explained ≥20% of the variance in neural activity in approximately 50% of our sample). Smaller nonlinear modulations were also observed in roughly 30% of our sample. SC activity was equally likely to increase or decrease as the body was rotated to the side of neuronal recording, and we found no systematic relationship between the directionality or magnitude of the linear gain field and recording location in the SC. We conclude that a signal conveying head-re-body position is present in the SC. Although the functional significance remains open, our findings are consistent with the SC contributing to a displacement-to-position transformation for oculomotor control.
Affiliation(s)
- Benjamin Nagy
- Canadian Institutes of Health Research Group in Action and Perception, University of Western Ontario, London, Ontario, Canada
27
Keith GP, Blohm G, Crawford JD. Influence of saccade efference copy on the spatiotemporal properties of remapping: a neural network study. J Neurophysiol 2009; 103:117-39. [PMID: 19846615] [DOI: 10.1152/jn.91191.2008]
Abstract
Remapping of gaze-centered target-position signals across saccades has been observed in the superior colliculus and several cortical areas. It is generally assumed that this remapping is driven by saccade-related signals. What is not known is how the different potential forms of this signal (i.e., visual, visuomotor, or motor) might influence this remapping. We trained a three-layer recurrent neural network to update target position (represented as a "hill" of activity in a gaze-centered topographic map) across saccades, using discrete time steps and a backpropagation-through-time algorithm. Updating was driven by an efference copy of one of three saccade-related signals: a transient visual response to the saccade-target in two-dimensional (2-D) topographic coordinates (Vtop), a temporally extended motor burst in 2-D topographic coordinates (Mtop), or a 3-D eye velocity signal in brain stem coordinates (EV). The Vtop model produced presaccadic remapping in the output layer, with a "jumping hill" of activity and intrasaccadic suppression. The Mtop model also produced presaccadic remapping with a dispersed moving hill of activity that closely reproduced the quantitative results of Sommer and Wurtz. The EV model produced a coherent moving hill of activity but failed to produce presaccadic remapping. When eye velocity and a topographic (Vtop or Mtop) updater signal were used together, the remapping relied primarily on the topographic signal. An analysis of the hidden-layer activity revealed that the transient remapping was highly dispersed across hidden-layer units in both Vtop and Mtop models but tightly clustered in the EV model. These results show that the nature of the updater signal influences both the mechanism and final dynamics of remapping. Taken together with the currently known physiology, our simulations suggest that different brain areas might rely on different signals and mechanisms for updating that should be further distinguishable through currently available single- and multiunit recording paradigms.
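Whatever the driving signal, the end state of remapping is the same: the hill of activity must move opposite to the saccade so that the target's gaze-centered position stays correct. A toy discrete version of that end state (our own minimal sketch; the paper's networks achieve this dynamically through recurrent connections rather than by an explicit shift):

```python
def remap(activity, saccade_bins):
    """Shift a 1-D gaze-centered activity map so that, after a
    saccade of `saccade_bins` bins, the hill sits at the target's
    new retinal location (old location minus the saccade vector)."""
    n = len(activity)
    return [activity[(i + saccade_bins) % n] for i in range(n)]

# Hill at bin 10; a rightward saccade of 4 bins moves it to bin 6.
pre = [0.0] * 20
pre[10] = 1.0
post = remap(pre, 4)
```

The interesting questions in the study are not this static endpoint but its dynamics: whether the hill jumps, moves coherently, or disperses in transit, which is what distinguishes the Vtop, Mtop, and EV updater signals.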
Affiliation(s)
- Gerald P Keith
- York Centre for Vision Research, and Canadian Institute of Health Research Group, York University, 4700 Keele St., Toronto, Ontario, Canada
28
Keith GP, DeSouza JFX, Yan X, Wang H, Crawford JD. A method for mapping response fields and determining intrinsic reference frames of single-unit activity: applied to 3D head-unrestrained gaze shifts. J Neurosci Methods 2009; 180:171-84. [PMID: 19427544] [DOI: 10.1016/j.jneumeth.2009.03.004]
Abstract
Natural movements towards a target show metric variations between trials. When movements combine contributions from multiple body-parts, such as head-unrestrained gaze shifts involving both eye and head rotation, the individual body-part movements may vary even more than the overall movement. The goal of this investigation was to develop a general method for both mapping sensory or motor response fields of neurons and determining their intrinsic reference frames, where these movement variations are actually utilized rather than avoided. We used head-unrestrained gaze shifts, three-dimensional (3D) geometry, and naturalistic distributions of eye and head orientation to explore the theoretical relationship between the intrinsic reference frame of a sensorimotor neuron's response field and the coherence of the activity when this response field is fitted non-parametrically using different kernel bandwidths in different reference frames. We measure how well the regression surface predicts unfitted data using the PREdictive Sum-of-Squares (PRESS) statistic. The reference frame with the smallest PRESS statistic was categorized as the intrinsic reference frame if the PRESS statistic was significantly larger in other reference frames. We show that the method works best when targets are at regularly spaced positions within the response field's active region, and that the method identifies the best kernel bandwidth for response field estimation. We describe how gain-field effects may be dealt with, and how to test neurons within a population that fall on a continuum between specific reference frames. This method may be applied to any spatially coherent single-unit activity related to sensation and/or movement during naturally varying behaviors.
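The PRESS logic described here, leave-one-out prediction error of a nonparametric response-field fit, compared across candidate frames, can be sketched in one dimension (a simplified illustration with a Gaussian kernel; the variable names and synthetic data are ours, not the paper's 3-D method):

```python
import math

def press(positions, rates, bandwidth):
    """PREdictive Sum-of-Squares: predict each trial's firing rate
    from all OTHER trials via Gaussian-kernel regression, then sum
    the squared prediction errors."""
    total = 0.0
    for i in range(len(positions)):
        num = den = 0.0
        for j in range(len(positions)):
            if i == j:
                continue
            w = math.exp(-((positions[i] - positions[j]) ** 2)
                         / (2.0 * bandwidth ** 2))
            num += w * rates[j]
            den += w
        total += (rates[i] - num / den) ** 2
    return total

# Synthetic neuron tuned to target position IN EYE coordinates,
# recorded while initial eye position varies across trial blocks.
targets_space = [-20.0, -10.0, 0.0, 10.0, 20.0] * 3
eye_positions = [-10.0] * 5 + [0.0] * 5 + [10.0] * 5
targets_eye = [t - e for t, e in zip(targets_space, eye_positions)]
rates = [math.exp(-((x - 5.0) ** 2) / 100.0) for x in targets_eye]

# Fitting the response field in the eye frame predicts held-out
# trials far better than fitting the same rates in the space frame.
press_eye = press(targets_eye, rates, 5.0)
press_space = press(targets_space, rates, 5.0)
```

As in the paper, the candidate frame with the smallest PRESS is taken as the intrinsic frame, and the same statistic can be used to choose the kernel bandwidth.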
Affiliation(s)
- Gerald P Keith
- Canadian Action and Perception Network, York University, 4700 Keele Street, Toronto, Ontario M3J 1P3, Canada
29
Blohm G, Keith GP, Crawford JD. Decoding the cortical transformations for visually guided reaching in 3D space. Cereb Cortex 2008; 19:1372-93. [PMID: 18842662] [DOI: 10.1093/cercor/bhn177]
Abstract
To explore the possible cortical mechanisms underlying the 3-dimensional (3D) visuomotor transformation for reaching, we trained a 4-layer feed-forward artificial neural network to compute a reach vector (output) from the visual positions of both the hand and target viewed from different eye and head orientations (inputs). The emergent properties of the intermediate layers reflected several known neurophysiological findings, for example, gain field-like modulations and position-dependent shifting of receptive fields (RFs). We performed a reference frame analysis for each individual network unit, simulating standard electrophysiological experiments, that is, RF mapping (unit input), motor field mapping, and microstimulation effects (unit outputs). At the level of individual units (in both intermediate layers), the 3 different electrophysiological approaches identified different reference frames, demonstrating that these techniques reveal different neuronal properties and suggesting that a comparison across these techniques is required to understand the neural code of physiological networks. This analysis showed fixed input-output relationships within each layer and, more importantly, within each unit. These local reference frame transformation modules provide the basic elements for the global transformation; their parallel contributions are combined in a gain field-like fashion at the population level to implement both the linear and nonlinear elements of the 3D visuomotor transformation.
Affiliation(s)
- Gunnar Blohm
- Centre for Vision Research, York University, Toronto, Ontario, Canada
30
Mechanism of gain modulation at single neuron and network levels. J Comput Neurosci 2008; 25:158-68. [PMID: 18214663] [DOI: 10.1007/s10827-007-0070-6]
Abstract
Gain modulation, in which the sensitivity of a neural response to one input is modified by a second input, is studied at single-neuron and network levels. At the single neuron level, gain modulation can arise if the two inputs are subject to a direct multiplicative interaction. Alternatively, these inputs can be summed in a linear manner by the neuron and gain modulation can arise, instead, from a nonlinear input-output relationship. We derive a mathematical constraint that can distinguish these two mechanisms even though they can look very similar, provided sufficient data of the appropriate type are available. Previously, it has been shown in coordinate transformation studies that artificial neurons with sigmoid transfer functions can acquire a nonlinear additive form of gain modulation through learning-driven adjustment of synaptic weights. We use the constraint derived for single-neuron studies to compare responses in this network with those of another network model based on a biologically inspired transfer function that can support approximately multiplicative interactions.
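The abstract's point that the two mechanisms "can look very similar" has a one-line illustration (a toy identity, not the paper's derived constraint): a unit that sums log-scaled inputs and applies an exponential output nonlinearity multiplies them exactly.

```python
import numpy as np

# Toy illustration (not the paper's model or constraint): linear
# summation of log-scaled inputs followed by an exponential output
# nonlinearity is indistinguishable from direct multiplication,
# because exp(a + b) = exp(a) * exp(b).

def multiplicative(x, g):
    """Direct multiplicative interaction between drive x and gain input g."""
    return x * g

def additive_nonlinear(x, g):
    """Linear summation inside, nonlinearity outside (requires x, g > 0)."""
    return np.exp(np.log(x) + np.log(g))

x, g = 3.0, 2.0
print(multiplicative(x, g))      # 6.0
print(additive_nonlinear(x, g))  # 6.0 (up to floating point)
```

Distinguishing the two mechanisms therefore requires the kind of mathematical constraint the paper derives, not just inspection of response curves.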
31
Brozović M, Gail A, Andersen RA. Gain mechanisms for contextually guided visuomotor transformations. J Neurosci 2007; 27:10588-96. [PMID: 17898230] [PMCID: PMC6673148] [DOI: 10.1523/jneurosci.2685-07.2007]
Abstract
A prevailing question in sensorimotor research is how sensory signals are integrated with abstract behavioral rules (contexts), and how this results in decisions about motor actions. We used neural network models to study how context-specific visuomotor remapping may depend on the functional connectivity among multiple layers. Networks were trained to perform different rotational visuomotor associations, depending on the stimulus color (a nonspatial context signal). In network I, the context signal was propagated forward through the network (bottom-up), whereas in network II it was propagated backwards (top-down). During the presentation of the visual cue stimulus, both networks integrated the context with the sensory information via a mechanism similar to the classic gain field. The recurrence in the networks' hidden layers allowed a simulation of the multimodal integration over time. Network I learned to perform the proper visuomotor transformations based on a context-modulated memory of the visual cue in its hidden-layer activity. In network II, a brief visual response, driven by the sensory input, was quickly replaced by a context-modulated motor-goal representation in the hidden layer. This happened because of a dominant feedback signal from the output layer that first conveyed context information and then, after the disappearance of the visual cue, conveyed motor-goal information. We also show that the origin of the context information is not necessarily closely tied to the top-down feedback. However, we suggest that the predominance of motor-goal representations found in the parietal cortex during context-specific movement planning might be the consequence of strong top-down feedback originating from within the parietal lobe or from the frontal lobe.
Affiliation(s)
- Marina Brozović
- Division of Biology, California Institute of Technology, Pasadena, California 91125, USA.
32
Keith GP, Crawford JD. Saccade-related remapping of target representations between topographic maps: a neural network study. J Comput Neurosci 2007; 24:157-78. [PMID: 17636448] [DOI: 10.1007/s10827-007-0046-6]
Abstract
The goal of this study was to explore how a neural network could solve the updating task associated with the double-saccade paradigm, where two targets are flashed in succession and the subject must make saccades to the remembered locations of both targets. Because of the eye rotation of the saccade to the first target, the remembered retinal position of the second target must be updated if an accurate saccade to that target is to be made. We trained a three-layer, feed-forward neural network to solve this updating task using back-propagation. The network's inputs were the initial retinal position of the second target, represented by a hill of activation in a 2D topographic array of units, as well as the initial eye orientation and the motor error of the saccade to the first target, each represented as 3D vectors in brainstem coordinates. The output of the network was the updated retinal position of the second target, also represented in a 2D topographic array of units. The network was trained to perform this updating using the full 3D geometry of eye rotations, and was able to produce the updated second-target position to within 1° RMS accuracy for a set of test points that included saccades of up to 70°. Emergent properties in the network's hidden layer included sigmoidal receptive fields whose orientations formed distinct clusters, and predictive remapping similar to that seen in brain areas associated with saccade generation. Networks with larger numbers of hidden-layer units developed two distinct types of units with different transformation properties: units that preferentially performed the linear remapping of vector subtraction, and units that performed the nonlinear elements of remapping that arise from initial eye orientation.
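The linear, vector-subtraction component of this updating is easy to state concretely. Below is a first-order sketch with made-up values; the eye-orientation-dependent nonlinear terms the network also learned require the full 3-D rotation geometry and are not captured here:

```python
import numpy as np

# First-order sketch (made-up values): remap the remembered retinal
# location of the second target by subtracting the first saccade's
# vector. This is only the linear part of the transformation; the
# eye-orientation-dependent terms require full 3-D rotation geometry.

def update_retinal(target2_retinal, saccade1_vector):
    """Remap a 2-D retinal target location across a saccade (linear approximation)."""
    return np.asarray(target2_retinal, dtype=float) - np.asarray(saccade1_vector, dtype=float)

# Second target initially 10 deg right and 5 deg up; first saccade is 10 deg rightward:
updated = update_retinal([10.0, 5.0], [10.0, 0.0])
print(updated)  # [0. 5.]: after the first saccade the target is 5 deg straight up
```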
Affiliation(s)
- Gerald P Keith
- York Centre for Vision Research, CIHR Group for Action and Perception, Department of Psychology, York University, Toronto, ON M3J 1P3, Canada.
33
Walker MF, Tian J, Zee DS. Kinematics of the rotational vestibuloocular reflex: role of the cerebellum. J Neurophysiol 2007; 98:295-302. [PMID: 17522172] [DOI: 10.1152/jn.00215.2007]
Abstract
We studied the effect of cerebellar lesions on the 3-D control of the rotational vestibuloocular reflex (RVOR) to abrupt yaw-axis head rotation. Using search coils, three-dimensional (3-D) eye movements were recorded from nine patients with cerebellar disease and seven normal subjects during brief chair rotations (200°/s² to 40°/s) and manual head impulses. We determined the amount of eye-position-dependent torsion during yaw-axis rotation by calculating the torsional-horizontal eye-velocity axis for each of three vertical eye positions (0°, ±15°) and performing a linear regression to determine the relationship of the 3-D velocity axis to vertical eye position. The slope of this regression is the tilt angle slope. Overall, cerebellar patients showed a clear increase in the tilt angle slope for both chair rotations and head impulses. For chair rotations, the effect was not seen at the onset of head rotation, when both patients and normal subjects had nearly head-fixed responses (no eye-position-dependent torsion). Over time, however, both groups showed an increasing tilt-angle slope, but to a much greater degree in cerebellar patients. Two important conclusions emerge from these findings: (1) the axis of eye rotation at the onset of head rotation is set to a value close to head-fixed (i.e., optimal for gaze stabilization during head rotation), independent of the cerebellum; and (2) once the head rotation is in progress, the cerebellum plays a crucial role in keeping the axis of eye rotation about halfway between head-fixed and that required for Listing's law to be obeyed.
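The tilt-angle-slope measurement reduces to a simple linear regression. A sketch with fabricated numbers (the real values come from recorded 3-D eye velocities):

```python
import numpy as np

# Sketch of the tilt-angle-slope measurement (fabricated numbers): the
# torsional-horizontal velocity-axis tilt, measured at three vertical
# eye positions, is regressed against vertical eye position; the
# regression slope is the "tilt angle slope".

vertical_eye_pos = np.array([-15.0, 0.0, 15.0])  # degrees
axis_tilt = np.array([-3.0, 0.2, 3.4])           # velocity-axis tilt at each position (deg)

slope, intercept = np.polyfit(vertical_eye_pos, axis_tilt, 1)
print(round(slope, 3))  # 0.213, the tilt angle slope for these fabricated data
```

A head-fixed velocity axis would give a slope near 0; perfect adherence to Listing's law gives the half-angle value of 0.5.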
Affiliation(s)
- Mark F Walker
- Dept of Neurology, The Johns Hopkins University, Baltimore, MD 21287, USA.
34
Constantin AG, Wang H, Martinez-Trujillo JC, Crawford JD. Frames of reference for gaze saccades evoked during stimulation of lateral intraparietal cortex. J Neurophysiol 2007; 98:696-709. [PMID: 17553952] [DOI: 10.1152/jn.00206.2007]
Abstract
Previous studies suggest that stimulation of lateral intraparietal cortex (LIP) evokes saccadic eye movements toward eye- or head-fixed goals, whereas most single-unit studies suggest that LIP uses an eye-fixed frame with eye-position modulations. The goal of our study was to determine the reference frame for gaze shifts evoked during LIP stimulation in head-unrestrained monkeys. Two macaques (M1 and M2) were implanted with recording chambers over the right intraparietal sulcus and with search coils for recording three-dimensional eye and head movements. The LIP region was microstimulated using pulse trains of 300 Hz, 100-150 microA, and 200 ms. Eighty-five putative LIP sites in M1 and 194 putative sites in M2 were used in our quantitative analysis throughout this study. Average amplitude of the stimulation-evoked gaze shifts was 8.67 degrees for M1 and 7.97 degrees for M2 with very small head movements. When these gaze-shift trajectories were rotated into three coordinate frames (eye, head, and body), gaze endpoint distribution for all sites was most convergent to a common point when plotted in eye coordinates. Across all sites, the eye-centered model provided a significantly better fit compared with the head, body, or fixed-vector models (where the latter model signifies no modulation of the gaze trajectory as a function of initial gaze position). Moreover, the probability of evoking a gaze shift from any one particular position was modulated by the current gaze direction (independent of saccade direction). These results provide causal evidence that the motor commands from LIP encode gaze command in eye-fixed coordinates but are also subtly modulated by initial gaze position.
Affiliation(s)
- A G Constantin
- Center for Vision Research, York University, Toronto, Ontario, Canada
35
White RL, Snyder LH. Spatial constancy and the brain: insights from neural networks. Philos Trans R Soc Lond B Biol Sci 2007; 362:375-82. [PMID: 17255021] [PMCID: PMC2323556] [DOI: 10.1098/rstb.2006.1965]
Abstract
To form an accurate internal representation of visual space, the brain must accurately account for movements of the eyes, head or body. Updating of internal representations in response to these movements is especially important when remembering spatial information, such as the location of an object, since the brain must rely on non-visual extra-retinal signals to compensate for self-generated movements. We investigated the computations underlying spatial updating by constructing a recurrent neural network model to store and update a spatial location based on a gaze shift signal, and to do so flexibly based on a contextual cue. We observed a striking similarity between the patterns of behaviour produced by the model and monkeys trained to perform the same task, as well as between the hidden units of the model and neurons in the lateral intraparietal area (LIP). In this report, we describe the similarities between the model and single unit physiology to illustrate the usefulness of neural networks as a tool for understanding specific computations performed by the brain.
36
Klier EM, Wang H, Crawford JD. Interstitial nucleus of Cajal encodes three-dimensional head orientations in Fick-like coordinates. J Neurophysiol 2007; 97:604-17. [PMID: 17079347] [DOI: 10.1152/jn.00379.2006]
Abstract
Two central, related questions in motor control are 1) how the brain represents movement directions of various effectors like the eyes and head and 2) how it constrains their redundant degrees of freedom. The interstitial nucleus of Cajal (INC) integrates velocity commands from the gaze control system into position signals for three-dimensional eye and head posture. It has been shown that the right INC encodes clockwise (CW)-up and CW-down eye and head components, whereas the left INC encodes counterclockwise (CCW)-up and CCW-down components, similar to the sensitivity directions of the vertical semicircular canals. For the eyes, these canal-like coordinates align with Listing’s plane (a behavioral strategy limiting torsion about the gaze axis). By analogy, we predicted that the INC also encodes head orientation in canal-like coordinates, but instead, aligned with the coordinate axes for the Fick strategy (which constrains head torsion). Unilateral stimulation (50 μA, 300 Hz, 200 ms) evoked CW head rotations from the right INC and CCW rotations from the left INC, with variable vertical components. The observed axes of head rotation were consistent with a canal-like coordinate system. Moreover, as predicted, these axes remained fixed in the head, rotating with initial head orientation like the horizontal and torsional axes of a Fick coordinate system. This suggests that the head is ordinarily constrained to zero torsion in Fick coordinates by equally activating CW/CCW populations of neurons in the right/left INC. These data support a simple mechanism for controlling head orientation through the alignment of brain stem neural coordinates with natural behavioral constraints.
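For readers unfamiliar with Fick coordinates, a brief sketch may help (illustrative angles; the gimbal order, yaw then pitch then torsion, is the defining feature, and the zero-torsion strategy fixes the third angle at zero):

```python
import numpy as np

# Illustrative Fick decomposition: yaw about the vertical axis, then
# pitch about the (rotated) horizontal axis, then torsion about the
# naso-occipital axis. The zero-torsion strategy fixes the third angle
# at zero. Angles are arbitrary examples.

def fick_to_matrix(yaw_deg, pitch_deg, torsion_deg):
    y, p, t = np.radians([yaw_deg, pitch_deg, torsion_deg])
    Rz = np.array([[np.cos(y), -np.sin(y), 0.0],
                   [np.sin(y),  np.cos(y), 0.0],
                   [0.0, 0.0, 1.0]])
    Ry = np.array([[np.cos(p), 0.0, np.sin(p)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(p), 0.0, np.cos(p)]])
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(t), -np.sin(t)],
                   [0.0, np.sin(t),  np.cos(t)]])
    return Rz @ Ry @ Rx  # Fick gimbal order: yaw applied first, torsion last

R = fick_to_matrix(20.0, -10.0, 0.0)    # a zero-torsion head orientation
print(np.allclose(R @ R.T, np.eye(3)))  # True: a proper rotation matrix
```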
Affiliation(s)
- Eliana M Klier
- Department of Anatomy and Neurobiology, Box 8108, Washington University School of Medicine, 660 South Euclid Avenue, St. Louis, MO 63110, USA.
37
Keith GP, Smith MA, Crawford JD. Functional organization within a neural network trained to update target representations across 3-D saccades. J Comput Neurosci 2006; 22:191-209. [PMID: 17120151] [DOI: 10.1007/s10827-006-0007-5]
Abstract
The goal of this study was to understand how neural networks solve the 3-D aspects of updating in the double-saccade task, where subjects make sequential saccades to the remembered locations of two targets. We trained a 3-layer, feed-forward neural network, using back-propagation, to calculate the 3-D motor error of the second saccade. Network inputs were a 2-D topographic map of the direction of the second target in retinal coordinates, and 3-D vector representations of initial eye orientation and motor error of the first saccade in head-fixed coordinates. The network learned to account for all 3-D aspects of updating. Hidden-layer units (HLUs) showed retinal-coordinate visual receptive fields that were remapped across the first saccade. Two classes of HLUs emerged from the training: one class primarily implementing the linear aspects of updating using vector subtraction, the second class implementing the eye-orientation-dependent, non-linear aspects of updating. These mechanisms interacted at the unit level through gain-field-like input summations, and through the parallel "tweaking" of optimally-tuned HLU contributions to the output that shifted the overall population output vector to the correct second-saccade motor error. These observations may provide clues for the biological implementation of updating.
Affiliation(s)
- Gerald P Keith
- Department of Psychology, Centre for Vision Research and Canadian Institute of Health Research Group, York University, 4700 Keele Street, Toronto, Ontario, Canada
38
Vesia M, Monteon JA, Sergio LE, Crawford JD. Hemispheric asymmetry in memory-guided pointing during single-pulse transcranial magnetic stimulation of human parietal cortex. J Neurophysiol 2006; 96:3016-27. [PMID: 17005619] [DOI: 10.1152/jn.00411.2006]
Abstract
Dorsal posterior parietal cortex (PPC) has been implicated, through single-unit recordings, neuroimaging data, and studies of brain-damaged humans, in the spatial guidance of reaching and pointing movements. The present study examines the causal effect of single-pulse transcranial magnetic stimulation (TMS) over the left and right dorsal posterior parietal cortex during a memory-guided "reach-to-touch" movement task in six human subjects. Stimulation of the left parietal hemisphere significantly increased endpoint variability, independent of visual field, with no horizontal bias. In contrast, right parietal stimulation did not increase variability, but instead produced a significant, systematic leftward directional shift in pointing (contralateral to the stimulation site) in both visual fields. Furthermore, the same lateralized pattern persisted with left-hand movement, suggesting that these aspects of parietal control of pointing movements are spatially fixed. To test whether the right parietal TMS shift occurs in visual or motor coordinates, we trained subjects to point correctly to optically reversed peripheral targets, viewed through a left-right Dove reversing prism. After prism adaptation, the horizontal pointing direction for a given visual target reversed, but the direction of shift during right parietal TMS did not. Taken together, these data suggest that induction of a focal current reveals a hemispheric asymmetry in the early stages of putative spatial processing in PPC. These results also suggest that a brief TMS pulse modifies the output of the right PPC in motor coordinates, downstream from the adapted visuomotor reversal, rather than modifying the upstream visual coordinates of the memory representation.
Affiliation(s)
- Michael Vesia
- York University, 4700 Keele Street, Toronto, Ontario, Canada M3J 1P3
39
Schoppik D, Lisberger SG. Saccades exert spatial control of motion processing for smooth pursuit eye movements. J Neurosci 2006; 26:7607-18. [PMID: 16855088] [PMCID: PMC2548311] [DOI: 10.1523/jneurosci.1719-06.2006]
Abstract
Saccades modulate the relationship between visual motion and smooth eye movement. Before a saccade, pursuit eye movements reflect a vector average of motion across the visual field. After a saccade, pursuit primarily reflects the motion of the target closest to the endpoint of the saccade. We tested the hypothesis that the saccade produces a spatial weighting of motion around the endpoint of the saccade. Using a moving pursuit stimulus that stepped to a new spatial location just before a targeting saccade, we controlled the distance between the endpoint of the saccade and the position of the moving target. We demonstrate that the smooth eye velocity following the targeting saccade weights the presaccadic visual motion inputs by the distance from their location in space to the endpoint of the saccade, defining the extent of a spatiotemporal filter for driving the eyes. The center of the filter is located at the endpoint of the saccade in space, not at the position of the fovea. The filter is stable in the face of a distracter target, is present for saccades to stationary and moving targets, and affects both the speed and direction of the postsaccadic eye movement. The spatial filter can explain the target-selecting gain change in postsaccadic pursuit, and has intriguing parallels to the process by which perceptual decisions about a restricted region of space are enhanced by attention. The effect of the spatial saccade plan on the pursuit response to a given retinal motion describes the dynamics of a coordinate transformation.
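The endpoint-centered weighting can be sketched as a Gaussian-weighted vector average. This is an illustrative toy, with assumed target positions, motions, and filter width rather than the paper's fitted values:

```python
import numpy as np

# Sketch of the spatial filter idea: presaccadic motion signals are
# weighted by a Gaussian of the distance between each motion stimulus
# and the saccade endpoint (not the fovea). Widths and positions are
# illustrative assumptions, not fitted values from the paper.

def pursuit_drive(motion_vectors, motion_positions, saccade_endpoint, sigma=3.0):
    """Weighted vector average of motion, weights centered on the saccade endpoint."""
    d = np.linalg.norm(np.asarray(motion_positions) - np.asarray(saccade_endpoint), axis=1)
    w = np.exp(-d**2 / (2 * sigma**2))
    return (w[:, None] * np.asarray(motion_vectors)).sum(axis=0) / w.sum()

# Two moving targets; the saccade lands on the first, so its motion dominates:
drive = pursuit_drive(motion_vectors=[[10.0, 0.0], [0.0, 10.0]],
                      motion_positions=[[0.0, 0.0], [8.0, 0.0]],
                      saccade_endpoint=[0.0, 0.0])
print(drive[0] > drive[1])  # True: the motion nearest the endpoint dominates
```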
Affiliation(s)
- David Schoppik
- Howard Hughes Medical Institute, Neuroscience Graduate Program, W. M. Keck Foundation Center for Integrative Neuroscience, and Department of Physiology, University of California, San Francisco, California 94143, USA.
40
Abstract
The response fields of higher cortical neurons are usually approximated with smooth mathematical functions for the purpose of population parameterization or theoretical modeling. We used instead two nonparametric methods (principal component analysis and independent component analysis), which provided a basis for the response field clustering. Although both methods performed satisfactorily, the principal component analysis space is more straightforward to calculate. It also gave a clear preference toward the smallest number of functional response field classes. Clustering was performed with both K-means and superparamagnetic clustering algorithms, with similar results. We also show that the shapes of the eigenvectors remain consistent regardless of the size of the response field data set. This finding reflects the fact that the response fields were generated by the same neural network and encode the same underlying process.
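A minimal version of this nonparametric pipeline can be run on synthetic response fields (PCA via SVD; the two generating shapes and noise level below are assumptions for illustration, not the paper's recordings):

```python
import numpy as np

# Sketch of the nonparametric pipeline on synthetic data: response
# fields built from two underlying shapes are reduced with PCA (via
# SVD); the leading components recover the generating structure, as
# expected when all fields come from the same generative process.

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
basis = np.vstack([np.exp(-x**2 / 0.1), x])   # two assumed generating shapes
weights = rng.normal(size=(40, 2))            # 40 simulated units
fields = weights @ basis + 0.01 * rng.normal(size=(40, 50))

centered = fields - fields.mean(axis=0)       # center before PCA
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = (s**2) / (s**2).sum()             # variance explained per component
print(explained[:2].sum() > 0.99)  # two components suffice, matching the generative model
```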
Affiliation(s)
- Marina Brozović
- Division of Biology, California Institute of Technology, Pasadena, California 91125, USA.
41
Hanes DA, McCollum G. Variables contributing to the coordination of rapid eye/head gaze shifts. Biol Cybern 2006; 94:300-24. [PMID: 16538479] [DOI: 10.1007/s00422-006-0049-9]
Abstract
In this article results of several published studies are synthesized in order to address the neural system for the determination of eye and head movement amplitudes of horizontal eye/head gaze shifts with arbitrary initial head and eye positions. Target position, initial head position, and initial eye position span the space of physical parameters for a planned eye/head gaze saccade. The principal result is that a functional mechanism for determining the amplitudes of the component eye and head movements must use the entire space of variables. Moreover, it is shown that amplitudes cannot be determined additively by summing contributions from single variables. Many earlier models calculate amplitudes as a function of one or two variables and/or restrict consideration to best-fit linear formulae. Our analysis systematically eliminates such models as candidates for a system that can generate appropriate movements for all possible initial conditions. The results of this study are stated in terms of properties of the response system. Certain axiom sets for the intrinsic organization of the response system obey these properties. We briefly provide one example of such an axiomatic model. The results presented in this article help to characterize the actual neural system for the control of rapid eye/head gaze shifts by showing that, in order to account for behavioral data, certain physical quantities must be represented in and used by the neural system. Our theoretical analysis generates predictions and identifies gaps in the data. We suggest needed experiments.
Affiliation(s)
- Douglas A Hanes
- Neuro-otology Department, Legacy Research Center, 1225 NE 2nd Avenue, Portland, OR 97232, USA.
42
Ghasia FF, Angelaki DE. Do motoneurons encode the noncommutativity of ocular rotations? Neuron 2005; 47:281-93. [PMID: 16039569] [DOI: 10.1016/j.neuron.2005.05.031]
Abstract
As we look around, the orientation of our eyes depends on the order of the rotations that are carried out, a mathematical feature of rotatory motions known as noncommutativity. Theorists and experimentalists continue to debate how biological systems deal with this property when generating kinematically appropriate movements. Some believe that this is always done by neural commands to a simplified eye plant. Others have postulated that noncommutativity is implemented solely by the mechanical properties of the eyeball. Here we directly examined what the brain tells the muscles, by recording motoneuron activities as monkeys made eye movements. We found that vertical recti and superior/inferior oblique motoneurons, which drive sensory-generated torsional eye movements, do not modulate their firing rates according to the noncommutative-driven torsion during pursuit. We conclude that part of the solution for kinematically appropriate eye movements is found in the mechanical properties of the eyeball, although neural computations remain necessary and become increasingly important during head movements.
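The noncommutativity at issue is easy to demonstrate directly: composing the same two rotations in opposite orders leaves the eye in different orientations (the 30° yaw and pitch angles here are arbitrary illustrative choices):

```python
import numpy as np

# Direct demonstration of noncommutativity: a yaw (z-axis) and a pitch
# (y-axis) rotation composed in opposite orders produce different final
# orientations. Angles are arbitrary illustrative values.

def rot_z(deg):  # yaw
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

def rot_y(deg):  # pitch
    a = np.radians(deg)
    return np.array([[np.cos(a), 0.0, np.sin(a)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(a), 0.0, np.cos(a)]])

# With column vectors, the right-hand factor is applied first:
yaw_then_pitch = rot_y(30) @ rot_z(30)
pitch_then_yaw = rot_z(30) @ rot_y(30)
print(np.allclose(yaw_then_pitch, pitch_then_yaw))  # False: order matters
```

Whether this bookkeeping is done by neural commands or by the mechanics of the eye plant is exactly the question the recordings address.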
Affiliation(s)
- Fatema F Ghasia
- Department of Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA
43
Crane BT, Tian J, Demer JL. Kinematics of vertical saccades during the yaw vestibulo-ocular reflex in humans. Invest Ophthalmol Vis Sci 2005; 46:2800-9. [PMID: 16043853] [PMCID: PMC1876708] [DOI: 10.1167/iovs.05-0147]
Abstract
PURPOSE Listing's law (LL) constrains the rotational axes of saccades and pursuit eye movements to Listing's plane (LP). In the velocity domain, LL is ordinarily equivalent to a tilt in the ocular velocity axis equal to half the change in eye position, giving a tilt angle ratio (TAR) of 0.5. This study was undertaken to investigate vertical saccade behavior after the yaw vestibulo-ocular reflex (VOR) had driven eye torsion out of LP, an initial condition causing the position and velocity domain formulations of LL to differ. METHODS Binocular eye and head motions were recorded with magnetic search coils in eight humans. With the head immobile, LP was determined for each eye, and mean TAR was 0.50 ± 0.07 (mean ± SD) for horizontal and 0.45 ± 0.11 for vertical saccades. The VOR was evoked by transient, whole-body yaw at 2800°/s² peak acceleration, capable of evoking large, uninterrupted VOR slow phases. Before rotation, subjects viewed a target at eye level, 20° up, or 20° down. In two thirds of the trials, the target moved upward or downward at systematically varying times, triggering a vertical saccade during the horizontal VOR slow phase. RESULTS Because the head rotation axis was generally misaligned with LP, the eye averaged 3.6° out of LP at vertical saccade onset. During the saccade, eye position continued to depart LP by an average 0.8°. The horizontal TAR at saccade onset was 0.29 ± 0.07. At peak saccade velocity, 35 ± 3 ms later, the vertical TAR was 0.45 ± 0.07, statistically similar to that of head-fixed saccades. Saccades did not return to LP. CONCLUSIONS Although vertical saccades during the VOR did not observe the position domain formulation of LL, they did observe the half-angle velocity domain formulation.
Affiliation(s)
- Benjamin T. Crane
- Department of Surgery (Division of Otolaryngology), University of California, Los Angeles, California
- Junru Tian
- Department of Ophthalmology, University of California, Los Angeles, California
- Joseph L. Demer
- Department of Ophthalmology, University of California, Los Angeles, California
- Department of Neurology, University of California, Los Angeles, California
- Department of Neuroscience, University of California, Los Angeles, California
- Department of Bioengineering Interdepartmental Programs, University of California, Los Angeles, California