1
Abstract
Blindsight is a visual phenomenon whereby hemianopic patients can process visual information in their blind visual field without awareness. Previous research demonstrating the existence of blindsight in hemianopic patients has been criticized for the nature of the paradigms used, for the presence of methodological artifacts, and for the possibility that spared islands of visual cortex may have sustained the phenomenon, because the patients generally had small circumscribed lesions. To respond to these criticisms, the authors have for several years been investigating residual visual abilities in the blind field of hemispherectomized patients in whom a whole cerebral hemisphere has been removed or disconnected from the rest of the brain. These patients offer a unique opportunity to establish the existence of blindsight and to investigate its underlying neuronal mechanisms because, in these cases, spared islands of visual cortex cannot be invoked to explain the presence of visual abilities in the blind field. In addition, the authors have used precise behavioral paradigms; strict controls for potential methodological artifacts such as light scatter, fixation, criterion effects, and macular sparing; and new neuroimaging techniques such as diffusion tensor imaging tractography to enhance their understanding of the phenomenon. The following article is a review of their research on the involvement of the superior colliculi in blindsight in hemispherectomized patients. NEUROSCIENTIST 13(5):506-518, 2007.
Affiliation(s)
- Alain Ptito
- Cognitive Neuroscience Unit, Montreal Neurological Institute and Hospital, McGill University, Montreal, Canada.
2
Mohsenzadeh Y, Dash S, Crawford JD. A State Space Model for Spatial Updating of Remembered Visual Targets during Eye Movements. Front Syst Neurosci 2016; 10:39. PMID: 27242452; PMCID: PMC4867689; DOI: 10.3389/fnsys.2016.00039.
Abstract
In the oculomotor system, spatial updating is the ability to aim a saccade toward a remembered visual target position despite intervening eye movements. Although this has been the subject of extensive experimental investigation, there is still no unifying theoretical framework to explain the neural mechanism for this phenomenon or how it influences visual signals in the brain. Here, we propose a unified state-space model (SSM) to account for the dynamics of spatial updating during two types of eye movement: saccades and smooth pursuit. Our proposed model is a non-linear SSM implemented through a recurrent radial-basis-function neural network in a dual extended Kalman filter (EKF) structure. The model parameters and internal states (remembered target position) are estimated sequentially using the EKF method. The proposed model replicates two fundamental experimental observations: continuous gaze-centered updating of visual memory-related activity during smooth pursuit, and predictive remapping of visual memory activity before and during saccades. Moreover, our model makes the new prediction that, when uncertainty of input signals is incorporated in the model, neural population activity and receptive fields expand just before and during saccades. These results suggest that visual remapping and motor updating are part of a common visuomotor mechanism, and that subjective perceptual constancy arises in part from training the visual system on motor tasks.
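The paper's dual-EKF/RBF network is substantial, but the core idea, predictively carrying a remembered target position through gaze motion while fusing noisy visual measurements when they arrive, can be caricatured with a scalar Kalman filter. Everything below (the function name, the noise variances `q` and `r`, the numbers) is illustrative, not the authors' implementation:

```python
def kalman_update_target(x, P, d_gaze, z=None, q=0.01, r=0.05):
    """One step of a scalar Kalman filter tracking a remembered target
    position in gaze-centered coordinates.
    Predict: the retinal position shifts opposite to the gaze
    displacement d_gaze. Update: only when a visual measurement z
    of the target is available."""
    # Predict: gaze-centered coordinates shift opposite to gaze motion
    x_pred = x - d_gaze
    P_pred = P + q            # process noise grows the uncertainty
    if z is None:             # target not visible: memory-only updating
        return x_pred, P_pred
    # Update: fuse the visual measurement (noise variance r)
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

# Smooth pursuit at constant velocity: the memory is updated continuously
x, P = 5.0, 0.1               # remembered retinal position (deg), variance
for _ in range(10):
    x, P = kalman_update_target(x, P, d_gaze=0.5)   # 0.5 deg per step
print(round(x, 2))            # prints 0.0: drifted 5 deg against the pursuit
```

During the memory-only phase the prediction simply shifts the estimate against the gaze displacement while its variance grows, which is the filter analogue of the continuous gaze-centered updating the model reproduces.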
Affiliation(s)
- Yalda Mohsenzadeh
- York Center for Vision Research, Canadian Action and Perception Network, York University, Toronto, ON, Canada
- Suryadeep Dash
- York Center for Vision Research, Canadian Action and Perception Network, York University, Toronto, ON, Canada; Department of Physiology and Pharmacology, Robarts Research Institute, Western University, London, ON, Canada
- J Douglas Crawford
- York Center for Vision Research, Canadian Action and Perception Network, York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, ON, Canada
3
Dash S, Nazari SA, Yan X, Wang H, Crawford JD. Superior Colliculus Responses to Attended, Unattended, and Remembered Saccade Targets during Smooth Pursuit Eye Movements. Front Syst Neurosci 2016; 10:34. PMID: 27147987; PMCID: PMC4828430; DOI: 10.3389/fnsys.2016.00034.
Abstract
In realistic environments, keeping track of multiple visual targets during eye movements likely involves an interaction between vision, top-down spatial attention, memory, and self-motion information. Recently we found that the superior colliculus (SC) visual memory response is attention-sensitive and continuously updated relative to gaze direction. In that study, animals were trained to remember the location of a saccade target across an intervening smooth pursuit (SP) eye movement (Dash et al., 2015). Here, we modified this paradigm to directly compare the properties of visual and memory updating responses to attended and unattended targets. Our analysis shows that during SP, active SC visual vs. memory updating responses share similar gaze-centered spatio-temporal profiles (suggesting a common mechanism), but updating was weaker by ~25%, delayed by ~55 ms, and far more dependent on attention. Further, during SP the sum of passive visual responses (to distracter stimuli) and memory updating responses (to saccade targets) closely resembled the responses for active attentional tracking of visible saccade targets. These results suggest that SP updating signals provide a damped, delayed estimate of attended location that contributes to the gaze-centered tracking of both remembered and visible saccade targets.
Affiliation(s)
- Suryadeep Dash
- Center for Vision Research, York University, Toronto, ON, Canada; Department of Physiology and Pharmacology, Robarts Research Institute, Western University, London, ON, Canada
- Xiaogang Yan
- Center for Vision Research, York University, Toronto, ON, Canada
- Hongying Wang
- Center for Vision Research, York University, Toronto, ON, Canada
- J Douglas Crawford
- Center for Vision Research, York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, ON, Canada
4
Ruhland JL, Jones AE, Yin TCT. Dynamic sound localization in cats. J Neurophysiol 2015; 114:958-68. PMID: 26063772; DOI: 10.1152/jn.00105.2015.
Abstract
Sound localization in cats and humans relies on head-centered acoustic cues. Studies have shown that humans are able to localize sounds during rapid head movements that are directed toward the target or other objects of interest. We studied whether cats are able to utilize similar dynamic acoustic cues to localize acoustic targets delivered during rapid eye-head gaze shifts. We trained cats on visual-auditory two-step tasks in which we presented a brief sound burst during saccadic eye-head gaze shifts toward a prior visual target. No consistent or significant differences in accuracy or precision were found between this dynamic task (two-step saccade) and the comparable static task (a single saccade with the head stable) in either the horizontal or the vertical direction. Cats appear to be able to process dynamic auditory cues and execute complex motor adjustments to accurately localize auditory targets during rapid eye-head gaze shifts.
Affiliation(s)
- Janet L Ruhland
- Department of Neuroscience and Neuroscience Training Program, University of Wisconsin, Madison, Wisconsin
- Amy E Jones
- Department of Neuroscience and Neuroscience Training Program, University of Wisconsin, Madison, Wisconsin
- Tom C T Yin
- Department of Neuroscience and Neuroscience Training Program, University of Wisconsin, Madison, Wisconsin
5
Rath-Wilson K, Guitton D. Oculomotor control after hemidecortication: A single hemisphere encodes corollary discharges for bilateral saccades. Cortex 2015; 63:232-49. DOI: 10.1016/j.cortex.2014.08.020.
6
Continuous updating of visuospatial memory in superior colliculus during slow eye movements. Curr Biol 2015; 25:267-274. PMID: 25601549; DOI: 10.1016/j.cub.2014.11.064.
Abstract
BACKGROUND: Primates can remember and spatially update the visual direction of previously viewed objects during various types of self-motion. It is known that the brain "remaps" visual memory traces relative to gaze just before and after, but not during, discrete gaze shifts called saccades. However, it is not known how visual memory is updated during slow, continuous motion of the eyes.
RESULTS: Here, we recorded the midbrain superior colliculus (SC) of two rhesus monkeys that were trained to spatially update the location of a saccade target across an intervening smooth pursuit (SP) eye movement. Saccade target location was varied across trials so that it passed through the neuron's receptive field at different points of the SP trajectory. Nearly all (99% of) visually responsive neurons, but no motor neurons, showed a transient memory response that continuously updated the saccade goal during SP. These responses were gaze centered (i.e., shifting across the SC's retinotopic map in opposition to gaze). Furthermore, this response was strongly enhanced by attention and/or saccade target selection.
CONCLUSIONS: This is the first demonstration of continuous updating of visual memory responses during eye motion. We expect that this would generalize to other visuomotor structures when gaze shifts in a continuous, unpredictable fashion.
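The gaze-centered updating described in the results amounts to the remembered target's retinal coordinate sliding opposite to gaze during pursuit. A toy one-dimensional sketch (all numbers hypothetical):

```python
def retinal_position(target_space, gaze):
    """Gaze-centered (retinal) position of a space-fixed target."""
    return target_space - gaze

# Smooth pursuit sweeps gaze rightward; the remembered target's
# gaze-centered position moves leftward across the retinotopic map.
target = 10.0                      # deg, fixed in space
trace = [retinal_position(target, g) for g in (0.0, 2.0, 4.0, 6.0)]
print(trace)                       # [10.0, 8.0, 6.0, 4.0]
```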
7
Daye PM, Blohm G, Lefèvre P. Catch-up saccades in head-unrestrained conditions reveal that saccade amplitude is corrected using an internal model of target movement. J Vis 2014; 14(1):12. PMID: 24424378; DOI: 10.1167/14.1.12.
Abstract
This study analyzes how human participants combine saccadic and pursuit gaze movements when they track an oscillating target moving along a randomly oriented straight line with the head free to move. We found that to track the moving target appropriately, participants triggered more saccades with increasing target oscillation frequency to compensate for imperfect tracking gains. Our sinusoidal paradigm allowed us to show that saccade amplitude was better correlated with internal estimates of position and velocity error at saccade onset than with those parameters 100 ms before saccade onset as head-restrained studies have shown. An analysis of saccadic onset time revealed that most of the saccades were triggered when the target was accelerating. Finally, we found that most saccades were triggered when small position errors were combined with large velocity errors at saccade onset. This could explain why saccade amplitude was better correlated with velocity error than with position error. Therefore, our results indicate that the triggering mechanism of head-unrestrained catch-up saccades combines position and velocity error at saccade onset to program and correct saccade amplitude rather than using sensory information 100 ms before saccade onset.
Affiliation(s)
- Pierre M Daye
- ICTEAM Institute, Université catholique de Louvain, Louvain-la-Neuve, Belgium
8
Crawford JD, Henriques DYP, Medendorp WP. Three-dimensional transformations for goal-directed action. Annu Rev Neurosci 2011; 34:309-31. PMID: 21456958; DOI: 10.1146/annurev-neuro-061010-113749.
Abstract
Much of the central nervous system is involved in visuomotor transformations for goal-directed gaze and reach movements. These transformations are often described in terms of stimulus location, gaze fixation, and reach endpoints, as viewed through the lens of translational geometry. Here, we argue that the intrinsic (primarily rotational) 3-D geometry of the eye-head-reach systems determines the spatial relationship between extrinsic goals and effector commands, and therefore the required transformations. This approach provides a common theoretical framework for understanding both gaze and reach control. Combined with an assessment of the behavioral, neurophysiological, imaging, and neuropsychological literature, this framework leads us to conclude that (a) the internal representation and updating of visual goals are dominated by gaze-centered mechanisms, but (b) these representations must then be transformed as a function of eye and head orientation signals into effector-specific 3-D movement commands.
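Conclusion (b), transforming a gaze-centered goal by eye orientation into an effector command, is rotational rather than additive. A minimal sketch of that distinction, assuming a simple rotation-matrix formulation (the axis convention and the 30-degree angle are illustrative):

```python
import numpy as np

def rot_z(theta_deg):
    """Rotation about the vertical axis (a horizontal eye rotation)."""
    t = np.radians(theta_deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# A target straight ahead on the retina (unit vector along gaze) ...
target_eye = np.array([1.0, 0.0, 0.0])
# ... with the eye rotated 30 deg in the head: the head-centered
# direction is the eye-centered one rotated by the eye orientation,
# not the eye-centered vector plus an offset.
R_eye_in_head = rot_z(30.0)
target_head = R_eye_in_head @ target_eye
print(np.round(target_head, 3))    # now 30 deg off the head's midline
```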
Affiliation(s)
- J Douglas Crawford
- York Centre for Vision Research, Canadian Action and Perception Network, and Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Ontario, Canada M3J 1P3.
9
Tanaka M, Kunimatsu J. Contribution of the central thalamus to the generation of volitional saccades. Eur J Neurosci 2011; 33:2046-57. PMID: 21645100; DOI: 10.1111/j.1460-9568.2011.07699.x.
Abstract
Lesions in the motor thalamus can cause deficits in somatic movements. However, the involvement of the thalamus in the generation of eye movements has only recently been elucidated. In this article, we review recent advances into the role of the thalamus in eye movements. Anatomically, the anterior group of the intralaminar nuclei and paralaminar portion of the ventrolateral, ventroanterior and mediodorsal nuclei of the thalamus send massive projections to the frontal eye field and supplementary eye field. In addition, these parts of the thalamus, collectively known as the 'oculomotor thalamus', receive inputs from the cerebellum, the basal ganglia and virtually all stages of the saccade-generating pathways in the brainstem. In their pioneering work in the 1980s, Schlag and Schlag-Rey found a variety of eye movement-related neurons in the oculomotor thalamus, and proposed that this region might constitute a 'central controller' playing a role in monitoring eye movements and generating self-paced saccades. This hypothesis has been evaluated by recent experiments in non-human primates and by clinical observations of subjects with thalamic lesions. In addition, several recent studies have also addressed the involvement of the oculomotor thalamus in the generation of anti-saccades and the selection of targets for saccades. These studies have revealed the impact of subcortical signals on the higher-order cortical processing underlying saccades, and suggest the possibility of future studies using the oculomotor system as a model to explore the neural mechanisms of global cortico-subcortical loops and the neural basis of a local network between the thalamus and cortex.
Affiliation(s)
- Masaki Tanaka
- Department of Physiology, Hokkaido University School of Medicine, Sapporo 060-8638, Japan.
10
Absence of spatial updating when the visuomotor system is unsure about stimulus motion. J Neurosci 2011; 31:10558-68. PMID: 21775600; DOI: 10.1523/jneurosci.0998-11.2011.
Abstract
How does the visuomotor system decide whether a target is moving or stationary in space or whether it moves relative to the eyes or head? A visual flash during a rapid eye-head gaze shift produces a brief visual streak on the retina that could provide information about target motion, when appropriately combined with eye and head self-motion signals. Indeed, double-step experiments have demonstrated that the visuomotor system incorporates actively generated intervening gaze shifts in the final localization response. Also saccades to brief head-fixed flashes during passive whole-body rotation compensate for vestibular-induced ocular nystagmus. However, both the amount of retinal motion to invoke spatial updating and the default strategy in the absence of detectable retinal motion remain unclear. To study these questions, we determined the contribution of retinal motion and the vestibular canals to spatial updating of visual flashes during passive whole-body rotation. Head- and body-restrained humans made saccades toward very brief (0.5 and 4 ms) and long (100 ms) visual flashes during sinusoidal rotation around the vertical body axis in total darkness. Stimuli were either attached to the chair (head-fixed) or stationary in space and were always well localizable. Surprisingly, spatial updating only occurred when retinal stimulus motion provided sufficient information: long-duration stimuli were always appropriately localized, thus adequately compensating for vestibular nystagmus and the passive head movement during the saccade reaction time. For the shortest stimuli, however, the target was kept in retinocentric coordinates, thus ignoring intervening nystagmus and passive head displacement, regardless of whether the target was moving with the head or not.
11
Medendorp WP. Spatial constancy mechanisms in motor control. Philos Trans R Soc Lond B Biol Sci 2011; 366:476-91. PMID: 21242137; DOI: 10.1098/rstb.2010.0089.
Abstract
The success of the human species in interacting with the environment depends on the ability to maintain spatial stability despite the continuous changes in sensory and motor inputs owing to movements of eyes, head and body. In this paper, I will review recent advances in the understanding of how the brain deals with the dynamic flow of sensory and motor information in order to maintain spatial constancy of movement goals. The first part summarizes studies in the saccadic system, showing that spatial constancy is governed by a dynamic feed-forward process, by gaze-centred remapping of target representations in anticipation of and across eye movements. The subsequent sections relate to other oculomotor behaviour, such as eye-head gaze shifts, smooth pursuit and vergence eye movements, and their implications for feed-forward mechanisms for spatial constancy. Work that studied the geometric complexities in spatial constancy and saccadic guidance across head and body movements, distinguishing between self-generated and passively induced motion, indicates that both feed-forward and sensory feedback processing play a role in spatial updating of movement goals. The paper ends with a discussion of the behavioural mechanisms of spatial constancy for arm motor control and their physiological implications for the brain. Taken together, the emerging picture is that the brain computes an evolving representation of three-dimensional action space, whose internal metric is updated in a nonlinear way, by optimally integrating noisy and ambiguous afferent and efferent signals.
Affiliation(s)
- W Pieter Medendorp
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, PO Box 9104, NL-6500 HE Nijmegen, The Netherlands.
12
Daye PM, Blohm G, Lefèvre P. Saccadic Compensation for Smooth Eye and Head Movements During Head-Unrestrained Two-Dimensional Tracking. J Neurophysiol 2010; 103:543-56. DOI: 10.1152/jn.00656.2009.
Abstract
Spatial updating is the ability to keep track of the position of world-fixed objects while we move. In the case of vision, this phenomenon is called spatial constancy and has been studied in head-restraint conditions. During head-restrained smooth pursuit, it has been shown that the saccadic system has access to extraretinal information from the pursuit system to update the objects' position in the surrounding environment. However, during head-unrestrained smooth pursuit, the saccadic system needs to keep track of three different motor commands: the ocular smooth pursuit command, the vestibuloocular reflex (VOR), and the head movement command. The question then arises whether saccades compensate for these movements. To address this question, we briefly presented a target during sinusoidal head-unrestrained smooth pursuit in darkness. Subjects were instructed to look at the flash as soon as they saw it. We observed that subjects were able to orient their gaze to the memorized (and spatially updated) position of the flashed target generally using one to three successive saccades. Similar to the behavior in the head-restrained condition, we found that the longer the gaze saccade latency, the better the compensation for intervening smooth gaze displacements; after about 400 ms, 62% of the smooth gaze displacement had been compensated for. This compensation depended on two independent parameters: the latency of the saccade and the eye contribution to the gaze displacement during this latency period. Separating gaze into eye and head contributions, we show that the larger the eye contribution to the gaze displacement, the better the overall compensation. Finally, we found that the compensation was a function of the head oscillation frequency and we suggest that this relationship is linked to the modulation of VOR gain. We conclude that the general mechanisms of compensation for smooth gaze displacements are similar to those observed in the head-restrained condition.
Affiliation(s)
- P. M. Daye
- Center for Systems Engineering and Applied Mechanics, Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Laboratory of Neurophysiology, Université catholique de Louvain, Brussels, Belgium
- G. Blohm
- Centre for Neurosciences Studies, Queen's University, Kingston, Ontario, Canada
- P. Lefèvre
- Center for Systems Engineering and Applied Mechanics, Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Laboratory of Neurophysiology, Université catholique de Louvain, Brussels, Belgium
13
Keith GP, Blohm G, Crawford JD. Influence of saccade efference copy on the spatiotemporal properties of remapping: a neural network study. J Neurophysiol 2009; 103:117-39. PMID: 19846615; DOI: 10.1152/jn.91191.2008.
Abstract
Remapping of gaze-centered target-position signals across saccades has been observed in the superior colliculus and several cortical areas. It is generally assumed that this remapping is driven by saccade-related signals. What is not known is how the different potential forms of this signal (i.e., visual, visuomotor, or motor) might influence this remapping. We trained a three-layer recurrent neural network to update target position (represented as a "hill" of activity in a gaze-centered topographic map) across saccades, using discrete time steps and the backpropagation-through-time algorithm. Updating was driven by an efference copy of one of three saccade-related signals: a transient visual response to the saccade target in two-dimensional (2-D) topographic coordinates (Vtop), a temporally extended motor burst in 2-D topographic coordinates (Mtop), or a 3-D eye velocity signal in brain stem coordinates (EV). The Vtop model produced presaccadic remapping in the output layer, with a "jumping hill" of activity and intrasaccadic suppression. The Mtop model also produced presaccadic remapping, with a dispersed moving hill of activity that closely reproduced the quantitative results of Sommer and Wurtz. The EV model produced a coherent moving hill of activity but failed to produce presaccadic remapping. When eye velocity and a topographic (Vtop or Mtop) updater signal were used together, the remapping relied primarily on the topographic signal. An analysis of the hidden-layer activity revealed that the transient remapping was highly dispersed across hidden-layer units in both the Vtop and Mtop models but tightly clustered in the EV model. These results show that the nature of the updater signal influences both the mechanism and final dynamics of remapping. Taken together with the currently known physiology, our simulations suggest that different brain areas might rely on different signals and mechanisms for updating that should be further distinguishable through currently available single- and multiunit recording paradigms.
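The "jumping hill" behavior can be caricatured as shifting a hill of activity across a discrete gaze-centered map against the saccade vector. This sketch is a stand-in for the trained recurrent network, not a reproduction of it (the map size and saccade amplitude are arbitrary):

```python
import numpy as np

def gaussian_hill(n, center, sigma=2.0):
    """Hill of activity on a 1-D topographic map of n cells."""
    x = np.arange(n)
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

def remap(activity, saccade_cells):
    """Shift the population activity against the saccade vector,
    as in predictive remapping of a gaze-centered map."""
    return np.roll(activity, -saccade_cells)

act = gaussian_hill(40, center=25)
remapped = remap(act, saccade_cells=10)     # a 10-cell rightward saccade
print(int(np.argmax(act)), int(np.argmax(remapped)))   # prints: 25 15
```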
Affiliation(s)
- Gerald P Keith
- York Centre for Vision Research, and Canadian Institutes of Health Research Group, York University, 4700 Keele St., Toronto, Ontario, Canada
14
Klier EM, Angelaki DE. Spatial updating and the maintenance of visual constancy. Neuroscience 2008; 156:801-18. PMID: 18786618; DOI: 10.1016/j.neuroscience.2008.07.079.
Abstract
Spatial updating is the means by which we keep track of the locations of objects in space even as we move. Four decades of research have shown that humans and non-human primates can take the amplitude and direction of intervening movements into account, including saccades (both head-fixed and head-free), pursuit, whole-body rotations and translations. At the neuronal level, spatial updating is thought to be maintained by receptive field locations that shift with changes in gaze, and evidence for such shifts has been shown in several cortical areas. These regions receive information about the intervening movement from several sources including motor efference copies when a voluntary movement is made and vestibular/somatosensory signals when the body is in motion. Many of these updating signals arise from brainstem regions that monitor our ongoing movements and subsequently transmit this information to the cortex via pathways that likely include the thalamus. Several issues of debate include (1) the relative contribution of extra-retinal sensory and efference copy signals to spatial updating, (2) the source of an updating signal for real life, three-dimensional motion that cannot arise from brain areas encoding only two-dimensional commands, and (3) the reference frames used by the brain to integrate updating signals from various sources. This review highlights the relevant spatial updating studies and provides a summary of the field today. We find that spatial constancy is maintained by a highly evolved neural mechanism that keeps track of our movements, transmits this information to relevant brain regions, and then uses this information to change the way in which single neurons respond. In this way, we are able to keep track of relevant objects in the outside world and interact with them in meaningful ways.
Affiliation(s)
- E M Klier
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA.
15
Ruiz-Ruiz M, Martinez-Trujillo JC. Human updating of visual motion direction during head rotations. J Neurophysiol 2008; 99:2558-76. PMID: 18337365; DOI: 10.1152/jn.00931.2007.
Abstract
Previous studies have demonstrated that human subjects update the location of visual targets for saccades after head and body movements and in the absence of visual feedback. This phenomenon is known as spatial updating. Here we investigated whether a similar mechanism exists for the perception of motion direction. We recorded eye positions in three dimensions and behavioral responses in seven subjects during a motion task in two different conditions: when the subject's head remained stationary and when subjects rotated their heads around an anteroposterior axis (head tilt). We demonstrated that: after head tilt, subjects updated the direction of saccades made in the perceived stimulus direction (direction-of-motion updating); the amount of updating varied across subjects and stimulus directions; the amount of motion-direction updating was highly correlated with the amount of spatial updating during a memory-guided saccade task; subjects updated the stimulus direction during a two-alternative forced-choice direction discrimination task in the absence of saccadic eye movements (perceptual updating); perceptual updating was more accurate than motion-direction updating involving saccades; and subjects updated motion direction similarly during active and passive head rotation. These results demonstrate the existence of an updating mechanism for the perception of motion direction in the human brain that operates during active and passive head rotations and that resembles that of spatial updating. Such a mechanism operates during different tasks involving different motor and perceptual skills (saccades and motion-direction discrimination) with different degrees of accuracy.
Affiliation(s)
- Mario Ruiz-Ruiz
- Cognitive Neurophysiology Laboratory, Department of Physiology, McGill University, Montreal, Quebec, Canada
16
Klier EM, Hess BJM, Angelaki DE. Human visuospatial updating after passive translations in three-dimensional space. J Neurophysiol 2008; 99:1799-809. PMID: 18256164; DOI: 10.1152/jn.01091.2007.
Abstract
To maintain a stable representation of the visual environment as we move, the brain must update the locations of targets in space using extra-retinal signals. Humans can accurately update after intervening active whole-body translations. But can they also update for passive translations (i.e., without efference copy signals of an outgoing motor command)? We asked six head-fixed subjects to remember the location of a briefly flashed target (five possible targets were located at depths of 23, 33, 43, 63, and 150 cm in front of the cyclopean eye) as they moved 10 cm left, right, up, down, forward, or backward while fixating a head-fixed target at 53 cm. After the movement, the subjects made a saccade to the remembered location of the flash with a combination of version and vergence eye movements. We computed an updating ratio where 0 indicates no updating and 1 indicates perfect updating. For lateral and vertical whole-body motion, where updating performance is judged by the size of the version movement, the updating ratios were similar for leftward and rightward translations, averaging 0.84 +/- 0.28 (mean +/- SD) as compared with 0.51 +/- 0.33 for downward and 1.05 +/- 0.50 for upward translations. For forward/backward movements, where updating performance is judged by the size of the vergence movement, the average updating ratio was 1.12 +/- 0.45. Updating ratios tended to be larger for far targets than near targets, although both intra- and intersubject variabilities were smallest for near targets. Thus in addition to self-generated movements, extra-retinal signals involving otolith and proprioceptive cues can also be used for spatial constancy.
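The updating ratio used here is simply the measured corrective component divided by the component required for perfect compensation. A worked example with hypothetical numbers (chosen to land near the 0.84 lateral average reported above):

```python
def updating_ratio(measured_component, required_component):
    """0 = no updating (response ignores the intervening motion),
    1 = perfect updating (response fully compensates for it)."""
    return measured_component / required_component

# A lateral whole-body translation requires some version component to
# reacquire the flashed target; both numbers below are hypothetical.
required_version = 8.0    # deg needed for perfect compensation
measured_version = 6.7    # deg actually produced by the subject
print(round(updating_ratio(measured_version, required_version), 2))  # prints 0.84
```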
Affiliation(s)
- Eliana M Klier
- Department of Neurobiology, Washington University School of Medicine, 660 S. Euclid Ave., St. Louis, MO 63110, USA.
|
17
|
Klier EM, Angelaki DE, Hess BJM. Human visuospatial updating after noncommutative rotations. J Neurophysiol 2007; 98:537-44. [PMID: 17442766 DOI: 10.1152/jn.01229.2006]
Abstract
As we move our bodies in space, we often undergo head and body rotations about different axes: yaw, pitch, and roll. The order in which we rotate about these axes is an important factor in determining the final position of our bodies in space because rotations, unlike translations, do not commute. Does our brain keep track of the noncommutativity of rotations when computing changes in head and body orientation and then use this information when planning subsequent motor commands? We used a visuospatial updating task to investigate whether saccades to remembered visual targets are accurate after intervening, whole-body rotational sequences. The sequences were reversed, either yaw then roll or roll then yaw, such that the final required eye movements to reach the same space-fixed target were different in each case. While each subject performed consistently irrespective of target location and rotational combination, we found great intersubject variability in their capacity to update. The distance between the noncommutative endpoints was, on average, half of that predicted by perfect noncommutativity. Nevertheless, most subjects did make eye movements to distinct final endpoint locations and not to one unique location in space as predicted by a commutative model. In addition, their noncommutative performance significantly improved when their less than ideal updating performance was taken into account. Thus the brain can produce movements that are consistent with the processing of noncommutative rotations, although it is often poor in using internal estimates of rotation for updating.
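That rotations fail to commute can be verified directly with rotation matrices: applying the same two rotations in opposite orders leaves a gaze vector pointing in different directions. A minimal sketch assuming 90° rotations and a forward-pointing unit gaze vector (names and angles are illustrative):

```python
import math

def yaw(deg):  # rotation about the vertical (z) axis
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def roll(deg):  # rotation about the naso-occipital (x) axis
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

gaze = [1.0, 0.0, 0.0]                        # initial forward gaze
a = apply(matmul(roll(90), yaw(90)), gaze)    # yaw first, then roll
b = apply(matmul(yaw(90), roll(90)), gaze)    # roll first, then yaw
# The two orders leave gaze pointing along different axes, so an
# updating mechanism that ignored rotation order would be wrong.
print(a, b)
```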
Affiliation(s)
- Eliana M Klier
- Dept of Neurobiology, Washington University School of Medicine, St Louis, MO 63110, USA.
|
18
|
Schlicht EJ, Schrater PR. Impact of coordinate transformation uncertainty on human sensorimotor control. J Neurophysiol 2007; 97:4203-14. [PMID: 17409174 DOI: 10.1152/jn.00160.2007]
Abstract
Humans build representations of objects and their locations by integrating imperfect information from multiple perceptual modalities (e.g., visual, haptic). Because sensory information is specified in different frames of reference (i.e., eye- and body-centered), it must be remapped into a common coordinate frame before integration and storage in memory. Such transformations require an understanding of body articulation, which is estimated through noisy sensory data. Consequently, target information acquires additional coordinate transformation uncertainty (CTU) during remapping because of errors in joint angle sensing. As a result, CTU creates differences in the reliability of target information depending on the reference frame used for storage. This paper explores whether the brain represents and compensates for CTU when making grasping movements. To address this question, we varied eye position in the head, while participants reached to grasp a spatially fixed object, both when the object was in view and when it was occluded. Varying eye position changes CTU between eye and head, producing additional uncertainty in remapped information away from forward view. The results showed that people adjust their maximum grip aperture to compensate both for changes in visual information and for changes in CTU when the target is occluded. Moreover, the amount of compensation is predicted by a Bayesian model for location inference that uses eye-centered storage.
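The precision-weighted combination that such a Bayesian account builds on can be sketched with the standard inverse-variance fusion rule; this is the generic Gaussian cue-combination rule, not the authors' fitted model, and all numbers are illustrative:

```python
def fuse(mu_a, var_a, mu_b, var_b):
    """Inverse-variance (reliability-weighted) fusion of two Gaussian
    location estimates; the more reliable cue gets the larger weight."""
    w = var_b / (var_a + var_b)
    mu = w * mu_a + (1 - w) * mu_b
    var = (var_a * var_b) / (var_a + var_b)
    return mu, var

# A reliable visual estimate (10 cm, variance 1) combined with a remapped
# estimate corrupted by coordinate transformation uncertainty (14 cm,
# variance 4): the fused estimate stays close to the reliable cue.
mu, var = fuse(10.0, 1.0, 14.0, 4.0)
print(round(mu, 3), round(var, 3))  # 10.8 0.8
```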
Affiliation(s)
- Erik J Schlicht
- Department of Psychology, Univ. of Minnesota, N218 Elliott Hall, 75 East River Rd., Minneapolis, MN 55455, USA
|
19
|
Van Pelt S, Medendorp WP. Gaze-centered updating of remembered visual space during active whole-body translations. J Neurophysiol 2007; 97:1209-20. [PMID: 17135474 DOI: 10.1152/jn.00882.2006]
Abstract
Various cortical and sub-cortical brain structures update the gaze-centered coordinates of remembered stimuli to maintain an accurate representation of visual space across eye rotations and to produce suitable motor plans. A major challenge for the computations by these structures is updating across eye translations. When the eyes translate, objects in front of and behind the eyes’ fixation point shift in opposite directions on the retina due to motion parallax. It is not known if the brain uses gaze coordinates to compute parallax in the translational updating of remembered space or if it uses gaze-independent coordinates to maintain spatial constancy across translational motion. We tested this by having subjects view targets, flashed in darkness in front of or behind fixation, then translate their body sideways, and subsequently reach to the memorized target. Reach responses showed parallax-sensitive updating errors: errors increased with depth from fixation and reversed in lateral direction for targets presented at opposite depths from fixation. In a series of control experiments, we ruled out possible biasing factors such as the presence of a fixation light during the translation, the eyes accompanying the hand to the target, and the presence of visual feedback about hand position. Quantitative geometrical analysis confirmed that updating errors were better described by using gaze-centered than gaze-independent coordinates. We conclude that spatial updating for translational motion operates in gaze-centered coordinates. Neural network simulations are presented suggesting that the brain relies on ego-velocity signals and stereoscopic depth and direction information in spatial updating during self-motion.
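The parallax geometry that updating must undo can be computed directly: after a sideways translation, targets nearer and farther than fixation shift in opposite directions relative to the gaze line, and nearer targets shift more. A minimal sketch with plausible viewing distances (the function name and sign conventions are my own):

```python
import math

def gaze_relative_angle(target_depth, fixation_depth, eye_x):
    """Horizontal angle (deg) between the gaze line (eye -> fixation
    point straight ahead at fixation_depth) and the line from the eye
    to a target straight ahead at target_depth, after the eye has
    translated sideways by eye_x (same units as the depths)."""
    angle_to_target = math.degrees(math.atan2(-eye_x, target_depth))
    angle_to_fixation = math.degrees(math.atan2(-eye_x, fixation_depth))
    return angle_to_target - angle_to_fixation

# 10 cm rightward translation while fixating at 53 cm:
near = gaze_relative_angle(23, 53, 10.0)    # target in front of fixation
far = gaze_relative_angle(150, 53, 10.0)    # target behind fixation
# Opposite signs: the two targets shift in opposite directions relative
# to the fixated point, and the nearer one shifts by the larger angle.
print(round(near, 1), round(far, 1))
```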
Affiliation(s)
- Stan Van Pelt
- Nijmegen Institute for Cognition and Information, Radboud University Nijmegen, NL-6500 HE Nijmegen, The Netherlands.
|
20
|
Wei M, Li N, Newlands SD, Dickman JD, Angelaki DE. Deficits and recovery in visuospatial memory during head motion after bilateral labyrinthine lesion. J Neurophysiol 2006; 96:1676-82. [PMID: 16760354 DOI: 10.1152/jn.00012.2006]
Abstract
To keep a stable internal representation of the environment as we move, extraretinal sensory or motor cues are critical for updating neural maps of visual space. Using a memory-saccade task, we studied whether visuospatial updating uses vestibular information. Specifically, we tested whether trained rhesus monkeys maintain the ability to update the conjugate and vergence components of memory-guided eye movements in response to passive translational or rotational head and body movements after bilateral labyrinthine lesion. We found that lesioned animals were acutely compromised in generating the appropriate horizontal versional responses necessary to update the directional goal of memory-guided eye movements after leftward or rightward rotation/translation. This compromised function recovered in the long term, likely using extravestibular (e.g., somatosensory) signals, such that nearly normal performance was observed 4 mo after the lesion. Animals also lost their ability to adjust memory vergence to account for relative distance changes after motion in depth. Not only were these depth deficits larger than the respective effects on version, but they also showed little recovery. We conclude that intact labyrinthine signals are functionally useful for proper visuospatial memory updating during passive head and body movements.
Affiliation(s)
- Min Wei
- Department of Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA
|
21
|
Blohm G, Optican LM, Lefèvre P. A model that integrates eye velocity commands to keep track of smooth eye displacements. J Comput Neurosci 2006; 21:51-70. [PMID: 16633937 DOI: 10.1007/s10827-006-7199-6]
Abstract
Past results have reported conflicting findings on the oculomotor system's ability to keep track of smooth eye movements in darkness. Whereas some results indicate that saccades cannot compensate for smooth eye displacements, others report that memory-guided saccades during smooth pursuit are spatially correct. Recently, it was shown that the amount of time before the saccade made a difference: short-latency saccades were retinotopically coded, whereas long-latency saccades were spatially coded. Here, we propose a model of the saccadic system that can explain the available experimental data. The novel part of this model consists of a delayed integration of efferent smooth eye velocity commands. Two alternative physiologically realistic neural mechanisms for this integration stage are proposed. Model simulations accurately reproduced prior findings. Thus, this model reconciles the earlier contradictory reports from the literature about compensation for smooth eye movements before saccades because it involves a slow integration process.
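The model's core operation, integrating efferent eye-velocity commands into an estimate of smooth eye displacement, can be sketched as a discrete sum; this is a toy illustration of the integration step, not the paper's neural implementation:

```python
def track_displacement(velocity_samples, dt):
    """Integrate efference copies of smooth eye velocity (deg/s),
    sampled every dt seconds, into an estimate of total smooth eye
    displacement (deg) accumulated in the absence of vision."""
    displacement = 0.0
    for v in velocity_samples:
        displacement += v * dt
    return displacement

# 200 ms of pursuit at a constant 15 deg/s, sampled each millisecond,
# accumulates about 3 deg that a spatially coded saccade must take
# into account.
print(round(track_displacement([15.0] * 200, 0.001), 6))  # 3.0
```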
Affiliation(s)
- Gunnar Blohm
- CESAME, Université catholique de Louvain, 4, avenue G. Lemaître, 1348, Louvain-la-Neuve, Belgium.
|
22
|
Li N, Angelaki DE. Updating visual space during motion in depth. Neuron 2005; 48:149-58. [PMID: 16202715 DOI: 10.1016/j.neuron.2005.08.021]
Abstract
Whether we are riding in a car or walking, our internal map of the environment must be continuously updated to maintain spatial constancy. Using a memory eye movement task, we examined whether nonhuman primates can keep track of changes in the distance of nearby objects when moved toward or away from them. We report that memory-guided eye movements take into account the change in distance traveled, illustrating that monkeys can update retinal disparity information in order to reconstruct three-dimensional visual space during motion in depth. This ability was compromised after destruction of the vestibular labyrinths, suggesting that the extraretinal signals needed for updating can arise from vestibular information signaling self-motion through space.
Affiliation(s)
- Nuo Li
- Department of Neurobiology and Biomedical Engineering, Washington University School of Medicine, St. Louis, Missouri 63110, USA
|
23
|
Li N, Wei M, Angelaki DE. Primate memory saccade amplitude after intervened motion depends on target distance. J Neurophysiol 2005; 94:722-33. [PMID: 15788513 DOI: 10.1152/jn.01339.2004]
Abstract
To keep a stable internal representation of the visual world as our eyes, head, and body move around, humans and monkeys must continuously adjust neural maps of visual space using extraretinal sensory or motor cues. When such movements include translation, the amount of body displacement must be weighted differently in the updating of far versus near targets. Using a memory-saccade task, we have investigated whether nonhuman primates can benefit from this geometry when passively moved sideways. We report that monkeys made appropriate memory saccades, taking into account not only the amplitude and nature (rotation vs. translation) of the movement, but also the distance of the memorized target: i.e., the amplitude of memory saccades was larger for near versus far targets. The scaling by viewing distance, however, was less than geometrically required, such that memory saccades consistently undershot near targets. Such a less-than-ideal scaling of memory saccades is reminiscent of the viewing distance-dependent properties of the vestibuloocular reflex. We propose that a similar viewing distance-dependent vestibular signal is used as an extraretinal compensation for the visuomotor consequences of the geometry of motion parallax by scaling both memory saccades and reflexive eye movements during motion through space.
Affiliation(s)
- Nuo Li
- Department of Anatomy and Neurobiology, Box 8108, Washington University School of Medicine, 660 South Euclid Avenue, St. Louis, Missouri 63110, USA
|
24
|
Blohm G, Missal M, Lefèvre P. Processing of retinal and extraretinal signals for memory-guided saccades during smooth pursuit. J Neurophysiol 2005; 93:1510-22. [PMID: 15483070 DOI: 10.1152/jn.00543.2004]
Abstract
It is an essential feature for the visual system to keep track of self-motion to maintain space constancy. Therefore the saccadic system uses extraretinal information about previous saccades to update the internal representation of memorized targets, an ability that has been identified in behavioral and electrophysiological studies. However, a smooth eye movement induced in the latency period of a memory-guided saccade yielded contradictory results. Indeed some studies described spatially accurate saccades, whereas others reported retinal coding of saccades. Today, it is still unclear how the saccadic system keeps track of smooth eye movements in the absence of vision. Here, we developed an original two-dimensional behavioral paradigm to further investigate how smooth eye displacements could be compensated to ensure space constancy. Human subjects were required to pursue a moving target and to orient their eyes toward the memorized position of a briefly presented second target (flash) once it appeared. The analysis of the first orientation saccade revealed a bimodal latency distribution related to two different saccade programming strategies. Short-latency (<175 ms) saccades were coded using the only available retinal information, i.e., position error. In addition to position error, longer-latency (>175 ms) saccades used extraretinal information about the smooth eye displacement during the latency period to program spatially more accurate saccades. Sensory parameters at the moment of the flash (retinal position error and eye velocity) influenced the choice between both strategies. We hypothesize that this tradeoff between speed and accuracy of the saccadic response reveals the presence of two coupled neural pathways for saccadic programming. A fast striatal-collicular pathway might only use retinal information about the flash location to program the first saccade. The slower pathway could involve the posterior parietal cortex to update the internal representation of the flash once extraretinal smooth eye displacement information becomes available to the system.
Affiliation(s)
- Gunnar Blohm
- Centre for Systems Engineering and Applied Mechanics, Université Catholique de Louvain, 4, Avenue G. Lemaître, 1348 Louvain-la-Neuve, Belgium
|
25
|
Klier EM, Angelaki DE, Hess BJM. Roles of gravitational cues and efference copy signals in the rotational updating of memory saccades. J Neurophysiol 2005; 94:468-78. [PMID: 15716372 DOI: 10.1152/jn.00700.2004]
Abstract
Primates are able to localize a briefly flashed target despite intervening movements of the eyes, head, or body. This ability, often referred to as updating, requires extraretinal signals related to the intervening movement. With active roll rotations of the head from an upright position it has been shown that the updating mechanism is 3-dimensional, robust, and geometrically sophisticated. Here we examine whether such a rotational updating mechanism operates during passive motion both with and without inertial cues about head/body position in space. Subjects were rotated from either an upright or supine position, about a nasal-occipital axis, briefly shown a world-fixed target, rotated back to their original position, and then asked to saccade to the remembered target location. Using this paradigm, we tested subjects' abilities to update from various tilt angles (0, +/-30, +/-45, +/-90 degrees), to 8 target directions and 2 target eccentricities. In the upright condition, subjects accurately updated the remembered locations from all tilt angles independent of target direction or eccentricity. Slopes of directional errors versus tilt angle ranged from -0.011 to 0.15, and were significantly different from a slope of 1 (no compensation for head-in-space roll) and a slope of 0.9 (no compensation for eye-in-space roll). Because the eyes, head, and body were fixed throughout these passive movements, subjects could not use efference copies or neck proprioceptive cues to assess the amount of tilt, suggesting that vestibular signals and/or body proprioceptive cues suffice for updating. In the supine condition, where gravitational signals could not contribute, slopes ranged from 0.60 to 0.82, indicating poor updating performance. Thus information specifying the body's orientation relative to gravity is critical for maintaining spatial constancy and for distinguishing body-fixed versus world-fixed reference frames.
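The compensation index reported above is the least-squares slope of directional error against tilt angle: a slope near 0 means the tilt was fully compensated, while a slope near 1 means it was ignored. A minimal sketch with hypothetical data (the function name and numbers are illustrative):

```python
def compensation_slope(tilt_angles, directional_errors):
    """Least-squares slope of saccade directional error (deg) versus
    body tilt angle (deg): 0 = full compensation for the tilt, 1 = none."""
    n = len(tilt_angles)
    mx = sum(tilt_angles) / n
    my = sum(directional_errors) / n
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(tilt_angles, directional_errors))
    sxx = sum((x - mx) ** 2 for x in tilt_angles)
    return sxy / sxx

# A subject who ignores 60% of the tilt shows errors growing at
# 0.6 deg per deg of tilt (hypothetical data):
tilts = [-90, -45, -30, 0, 30, 45, 90]
errors = [0.6 * t for t in tilts]
print(round(compensation_slope(tilts, errors), 3))  # 0.6
```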
Affiliation(s)
- Eliana M Klier
- Department of Neurobiology, Box 8108, Washington University School of Medicine, 660 South Euclid Avenue, St. Louis, Missouri 63110, USA.
|
26
|
Abstract
As we move through space, stationary objects around us show motion parallax: their directions relative to us change at different rates, depending on their distance. Does the brain incorporate parallax when it updates its stored representations of space? We had subjects fixate a distant target and then we flashed lights, at different distances, onto the retinal periphery. Subjects translated sideways while keeping their gaze on the distant target, and then they looked to the remembered location of the flash. Their responses corrected almost perfectly for parallax: they turned their eyes farther for nearer targets, in the predicted nonlinear patterns. Computer simulations suggest a neural mechanism in which feedback about self-motion updates remembered locations of objects within an internal map of three-dimensional visual space.
|
27
|
Klier EM, Martinez-Trujillo JC, Medendorp WP, Smith MA, Crawford JD. Neural control of 3-D gaze shifts in the primate. Prog Brain Res 2003; 142:109-24. [PMID: 12693257 DOI: 10.1016/s0079-6123(03)42009-8]
Abstract
The neural mechanisms that specify target locations for gaze shifts and then convert these into desired patterns of coordinated eye and head movements are complex. Much of this complexity is only revealed when one takes a realistic three-dimensional (3-D) view of these processes, where fundamental computational problems such as kinematic redundancy, reference-frame transformations, and non-commutativity emerge. Here we review the underlying mechanisms and solutions for these problems, starting with a consideration of the kinematics of 3-D gaze shifts in human and non-human primates. We then consider the neural mechanisms, including cortical representation of gaze targets, the nature of the gaze motor command used by the superior colliculus, and how these gaze commands are decomposed into brainstem motor commands for the eyes and head. A general conclusion is that fairly simple coding mechanisms may be used to represent gaze at the cortical and collicular level, but this then necessitates complexity for the spatial updating of these representations and in the brainstem sensorimotor transformations that convert these signals into eye and head movements.
Affiliation(s)
- Eliana M Klier
- CIHR Group for Action and Perception, Centre for Vision Research, Department of Biology, York University, Toronto, ON M3J 1P3, Canada
|
28
|
Baker JT, Harper TM, Snyder LH. Spatial memory following shifts of gaze. I. Saccades to memorized world-fixed and gaze-fixed targets. J Neurophysiol 2003; 89:2564-76. [PMID: 12740406 DOI: 10.1152/jn.00610.2002]
Abstract
During a shift of gaze, an object can move along with gaze or stay fixed in the world. To examine the effect of an object's reference frame on spatial working memory, we trained monkeys to memorize locations of visual stimuli as either fixed in the world or fixed to gaze. Each trial consisted of an initial reference frame instruction, followed by a peripheral visual flash, a memory-period gaze shift, and finally a memory-guided saccade to the location consistent with the instructed reference frame. The memory-period gaze shift was either rapid (a saccade) or slow (smooth pursuit or whole body rotation). This design allowed a comparison of memory-guided saccade performance under various conditions. Our data indicate that after a rotation or smooth-pursuit eye movement, saccades to memorized world-fixed targets are more variable than saccades to memorized gaze-fixed targets. In contrast, memory-guided saccades to world- and gaze-fixed targets are equally variable following a visually guided saccade. Across all conditions, accuracy, latency, and main sequence characteristics of memory-guided saccades are not influenced by the target's reference frame. Memory-guided saccades are, however, more accurate after fast compared with slow gaze shifts. These results are most consistent with an eye-centered representational system for storing the spatial locations of memorized objects but suggest that the visual system may engage different mechanisms to update the stored signal depending on how gaze is shifted.
Affiliation(s)
- Justin T Baker
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri 63110, USA
|
29
|
|