1
Wu X, Spering M. Tracking and perceiving diverse motion signals: Directional biases in human smooth pursuit and perception. PLoS One 2022; 17:e0275324. PMID: 36174036; PMCID: PMC9522262; DOI: 10.1371/journal.pone.0275324.
Abstract
Human smooth pursuit eye movements and motion perception behave similarly when observers track and judge the motion of simple objects, such as dots. But moving objects in our natural environment are complex and contain internal motion. We ask how pursuit and perception integrate the motion of objects with motion that is internal to the object. Observers (n = 20) tracked a moving random-dot kinematogram with their eyes and reported the object’s perceived direction. Objects moved horizontally with vertical shifts of 0, ±3, ±6, or ±9° and contained internal dots that were static or moved ±90° up/down. Results show that whereas pursuit direction was consistently biased in the direction of the internal dot motion, perceptual biases differed between observers. Interestingly, the perceptual bias was related to the magnitude of the pursuit bias (r = 0.75): perceptual and pursuit biases were directionally aligned in observers that showed a large pursuit bias, but went in opposite directions in observers with a smaller pursuit bias. Dissociations between perception and pursuit might reflect different functional demands of the two systems. Pursuit integrates all available motion signals in order to maximize the ability to monitor and collect information from the whole scene. Perception needs to recognize and classify visual information, thus segregating the target from its context. Ambiguity in whether internal motion is part of the scene or contributes to object motion might have resulted in individual differences in perception. The perception-pursuit correlation suggests shared early-stage motion processing or perception-pursuit interactions.
Affiliation(s)
- Xiuyun Wu
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, BC, Canada
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, BC, Canada
- Miriam Spering
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, BC, Canada
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, BC, Canada
- Djavad Mowafaghian Center for Brain Health, University of British Columbia, Vancouver, BC, Canada
- Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, BC, Canada
2
Murdison TS, Standage DI, Lefèvre P, Blohm G. Effector-dependent stochastic reference frame transformations alter decision-making. J Vis 2022; 22:1. PMID: 35816048; PMCID: PMC9284468; DOI: 10.1167/jov.22.8.1.
Abstract
Psychophysical, motor control, and modeling studies have revealed that sensorimotor reference frame transformations (RFTs) add variability to transformed signals. For perceptual decision-making, this phenomenon could decrease the fidelity of a decision signal's representation or alternatively improve its processing through stochastic facilitation. We investigated these two hypotheses under various sensorimotor RFT constraints. Participants performed a time-limited, forced-choice motion discrimination task under eight combinations of head roll and/or stimulus rotation while responding either with a saccade or button press. This paradigm, together with the use of a decision model, allowed us to parameterize and correlate perceptual decision behavior with eye-, head-, and shoulder-centered sensory and motor reference frames. Misalignments between sensory and motor reference frames produced systematic changes in reaction time and response accuracy. For some conditions, these changes were consistent with a degradation of motion evidence commensurate with a decrease in stimulus strength in our model framework. Differences in participant performance were explained by a continuum of eye–head–shoulder representations of accumulated motion evidence, with an eye-centered bias during saccades and a shoulder-centered bias during button presses. In addition, we observed evidence for stochastic facilitation during head-rolled conditions (i.e., head roll resulted in faster, more accurate decisions in oblique motion for a given stimulus–response misalignment). We show that perceptual decision-making and stochastic RFTs are inseparable within the present context. We show that by simply rolling one's head, perceptual decision-making is altered in a way that is predicted by stochastic RFTs.
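The evidence-degradation account in this abstract can be sketched with a toy simulation. Nothing below comes from the paper's fitted model: the drift-diffusion parameters and the rule scaling drift down with misalignment are illustrative assumptions, used only to show the qualitative slower-and-less-accurate pattern the authors describe.

```python
import numpy as np

rng = np.random.default_rng(1)

def ddm(drift, bound=1.0, sigma=1.0, dt=1e-3, n=2000, tmax=10.0):
    """Simulate n drift-diffusion trials; return (accuracy, mean RT in s)."""
    x = np.zeros(n)                      # accumulated evidence per trial
    rt = np.full(n, tmax)                # reaction times (tmax if no bound hit)
    done = np.zeros(n, dtype=bool)
    hit_upper = np.zeros(n, dtype=bool)  # upper bound = correct choice
    for step in range(int(tmax / dt)):
        active = ~done
        x[active] += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(active.sum())
        crossed = active & (np.abs(x) >= bound)
        rt[crossed] = (step + 1) * dt
        hit_upper[crossed] = x[crossed] >= bound
        done |= crossed
        if done.all():
            break
    return hit_upper.mean(), rt.mean()

# Assumed degradation rule: larger sensory-motor misalignment -> weaker drift,
# as if transformation noise had reduced the effective stimulus strength.
for misalignment in (0, 45, 90):                 # degrees
    drift = 1.5 / (1.0 + 0.01 * misalignment)    # illustrative values
    accuracy, mean_rt = ddm(drift)
    print(f"misalignment {misalignment:3d} deg: accuracy {accuracy:.2f}, mean RT {mean_rt:.2f} s")
```

Under this assumed rule, accuracy falls and reaction time rises monotonically as misalignment grows; stochastic facilitation, by contrast, would show up as added noise occasionally improving rather than degrading the effective drift.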
Affiliation(s)
- T Scott Murdison
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada
- Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
- Dominic I Standage
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada
- Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
- School of Psychology, University of Birmingham, UK
- Philippe Lefèvre
- ICTEAM Institute and Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-La-Neuve, Belgium
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada
- Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
3
Murdison TS, Blohm G, Bremmer F. Saccade-induced changes in ocular torsion reveal predictive orientation perception. J Vis 2019; 19:10. PMID: 31533148; DOI: 10.1167/19.11.10.
Abstract
Natural orienting of gaze often results in a retinal image that is rotated relative to space due to ocular torsion. However, we perceive neither this rotation nor a moving world despite visual rotational motion on the retina. This perceptual stability is often attributed to the phenomenon known as predictive remapping, but the current remapping literature ignores this torsional component. In addition, studies often simply measure remapping across either space or features (e.g., orientation) but in natural circumstances, both components are bound together for stable perception. One natural circumstance in which the perceptual system must account for the current and future eye orientation to correctly interpret the orientation of external stimuli occurs during movements to or from oblique eye orientations (i.e., eye orientations with both a horizontal and vertical angular component relative to the primary position). Here we took advantage of oblique eye orientation-induced ocular torsion to examine perisaccadic orientation perception. First, we found that orientation perception was largely predicted by the rotated retinal image. Second, we observed a presaccadic remapping of orientation perception consistent with maintaining a stable (but spatially inaccurate) retinocentric perception throughout the saccade. These findings strongly suggest that our seamless perceptual stability relies on retinocentric signals that are predictively remapped in all three ocular dimensions with each saccade.
Affiliation(s)
- T Scott Murdison
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada
- Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada
- Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
- Frank Bremmer
- Department of Neurophysics, Philipps-Universität Marburg, Germany
4
Wu X, Spering M. Ocular torsion is related to perceived motion-induced position shifts. J Vis 2019; 19:11. PMID: 31621818; DOI: 10.1167/19.12.11.
Abstract
Ocular torsion (i.e., rotations of the eye about the line of sight) can be induced by visual rotational motion. It remains unclear whether and how such visually induced torsion is related to perception. By using the flash-grab effect, an illusory position shift of a briefly flashed stationary target superimposed on a rotating pattern, we examined the relationship between torsion and perception. In two experiments, 25 observers reported the perceived location of a flash while their three-dimensional eye movements were recorded. In Experiment 1, the flash coincided with a direction reversal of a large, centrally displayed, rotating grating. The grating triggered visually induced torsion in the direction of stimulus rotation. The magnitude of torsional eye rotation correlated with the illusory perceptual position shift. To test whether torsion caused the illusion, in Experiment 2, the flash was superimposed on two peripheral gratings rotating in opposite directions. Even though torsion was eliminated, the illusory position shift persisted. Despite the lack of a causal relationship, the torsion-perception correlations indicate a close link between both systems, either through similar visual-input processing or a boost of visual rotational signal strength via oculomotor feedback.
Affiliation(s)
- Xiuyun Wu
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada
- Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada
- Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, Canada
- Djavad Mowafaghian Center for Brain Health, University of British Columbia, Vancouver, Canada
5
Sadeh M, Sajad A, Wang H, Yan X, Crawford JD. Spatial transformations between superior colliculus visual and motor response fields during head-unrestrained gaze shifts. Eur J Neurosci 2015; 42:2934-51. PMID: 26448341; DOI: 10.1111/ejn.13093.
Abstract
We previously reported that visuomotor activity in the superior colliculus (SC), a key midbrain structure for the generation of rapid eye movements, preferentially encodes target position relative to the eye (Te) during low-latency head-unrestrained gaze shifts (DeSouza et al., 2011). Here, we trained two monkeys to perform head-unrestrained gaze shifts after a variable post-stimulus delay (400-700 ms), to test whether temporally separated SC visual and motor responses show different spatial codes. Target positions, final gaze positions and various frames of reference (eye, head, and space) were dissociated through natural (untrained) trial-to-trial variations in behaviour. 3D eye and head orientations were recorded, and 2D response field data were fitted against multiple models by use of a statistical method reported previously (Keith et al., 2009). Of 60 neurons, 17 showed a visual response, 12 showed a motor response, and 31 showed both visual and motor responses. The combined visual response field population (n = 48) showed a significant preference for Te, which was also preferred in each visual subpopulation. In contrast, the motor response field population (n = 43) showed a preference for final (relative to initial) gaze position models, and the Te model was statistically eliminated in the motor-only population. There was also a significant shift of coding from the visual to motor response within visuomotor neurons. These data confirm that SC response fields are gaze-centred, and show a target-to-gaze transformation between visual and motor responses. Thus, visuomotor transformations can occur between, and even within, neurons within a single frame of reference and brain structure.
Affiliation(s)
- Morteza Sadeh
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada
- York Neuroscience Graduate Diploma Program, Toronto, ON, Canada
- Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada
- Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
- Amirsaman Sajad
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada
- York Neuroscience Graduate Diploma Program, Toronto, ON, Canada
- Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada
- Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
- Hongying Wang
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada
- Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada
- Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
- Xiaogang Yan
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada
- Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada
- Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
- John Douglas Crawford
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada
- York Neuroscience Graduate Diploma Program, Toronto, ON, Canada
- Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada
- Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
6
Dowiasch S, Blohm G, Bremmer F. Neural correlate of spatial (mis-)localization during smooth eye movements. Eur J Neurosci 2016; 44:1846-55. PMID: 27177769; PMCID: PMC5089592; DOI: 10.1111/ejn.13276.
Abstract
The dependence of neuronal discharge on the position of the eyes in the orbit is a functional characteristic of many visual cortical areas of the macaque. It has been suggested that these eye-position signals provide relevant information for a coordinate transformation of visual signals into a non-eye-centered frame of reference. This transformation could be an integral part of achieving visual perceptual stability across eye movements. Previous studies demonstrated close-to-veridical eye-position decoding during stable fixation as well as characteristically erroneous decoding across saccadic eye movements. Here we aimed to decode eye position during smooth pursuit. We recorded neural activity in macaque area VIP during steady fixation, saccades and smooth pursuit and investigated the temporal and spatial accuracy of eye position as decoded from the neuronal discharges. Confirming previous results, the activity of the majority of neurons depended linearly on horizontal and vertical eye position. The application of a previously introduced computational approach (isofrequency decoding) allowed eye-position decoding with considerable accuracy during steady fixation. We applied the same decoder to the activity of the same neurons during smooth pursuit. On average, the decoded signal led the current eye position. A model combining this constant lead of the decoded eye position with a previously described attentional bias ahead of the pursuit target describes the asymmetric mislocalization pattern for briefly flashed stimuli during smooth pursuit eye movements as found in human behavioral studies.
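The linear eye-position dependence and decoding described here can be illustrated with a toy population. This is not the study's isofrequency decoder; the neuron counts, gains, noise level, and least-squares readout are assumptions chosen only to show that a linear decoder recovers eye position from gain-field-like activity.

```python
import numpy as np

rng = np.random.default_rng(2)

n_neurons, n_fixations = 50, 200
gains = rng.normal(0.0, 1.0, (n_neurons, 2))          # Hz/deg, horizontal & vertical
baseline = rng.uniform(5.0, 20.0, n_neurons)          # Hz
eye_pos = rng.uniform(-15.0, 15.0, (n_fixations, 2))  # fixation positions (deg)

# Firing rates depend linearly on eye position, plus additive Gaussian noise
rates = baseline + eye_pos @ gains.T + rng.normal(0.0, 0.5, (n_fixations, n_neurons))

# Fit a linear decoder (rates + intercept -> eye position) by least squares
X = np.hstack([rates, np.ones((n_fixations, 1))])
weights, *_ = np.linalg.lstsq(X, eye_pos, rcond=None)
decoded = X @ weights

error = np.abs(decoded - eye_pos).mean()
print(f"mean absolute decoding error: {error:.3f} deg")
```

With 50 linearly tuned units the decoding error stays well below a degree, which is the sense in which linear gain fields make eye position directly readable from a population.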
Affiliation(s)
- Stefan Dowiasch
- Department of Neurophysics, Philipps-University Marburg, Karl-von-Frisch-Straße 8a, 35043 Marburg, Germany
- Frank Bremmer
- Department of Neurophysics, Philipps-University Marburg, Karl-von-Frisch-Straße 8a, 35043 Marburg, Germany
7
Einarsson EJ, Patel M, Petersen H, Wiebe T, Magnusson M, Moëll C, Fransson PA. Oculomotor Deficits after Chemotherapy in Childhood. PLoS One 2016; 11:e0147703. PMID: 26815789; PMCID: PMC4731397; DOI: 10.1371/journal.pone.0147703.
Abstract
Advances in the diagnosis and treatment of pediatric malignancies have substantially increased the number of childhood cancer survivors. However, reports suggest that some of the chemotherapy agents used for treatment can cross the blood brain barrier which may lead to a host of neurological symptoms including oculomotor dysfunction. Whether chemotherapy at young age causes oculomotor dysfunction later in life is unknown. Oculomotor performance was assessed with traditional and novel methods in 23 adults (mean age 25.3 years, treatment age 10.2 years) treated with chemotherapy for a solid malignant tumor not affecting the central nervous system. Their results were compared to those from 25 healthy, age-matched controls (mean age 25.1 years). Correlation analysis was performed between the subjective symptoms reported by the chemotherapy treated subjects (CTS) and oculomotor performance. In CTS, the temporal control of the smooth pursuit velocity (velocity accuracy) was markedly poorer (p<0.001) and the saccades had disproportionally shorter amplitude than normal for the associated saccade peak velocity (main sequence) (p = 0.004), whereas smooth pursuit and saccade onset times were shorter (p = 0.004) in CTS compared with controls. The CTS treated before 12 years of age manifested more severe oculomotor deficits. CTS frequently reported subjective symptoms of visual disturbances (70%), unsteadiness, light-headedness and that things around them were spinning or moving (87%). Several subjective symptoms were significantly related to deficits in oculomotor performance. To conclude, chemotherapy in childhood or adolescence can result in severe oculomotor dysfunctions in adulthood. The revealed oculomotor dysfunctions were significantly related to the subjects' self-perception of visual disturbances, dizziness, light-headedness and sensing unsteadiness. Assessments of oculomotor function may thus offer an objective method to track and rate the level of neurological complications following chemotherapy.
Affiliation(s)
- Einar-Jón Einarsson
- Department of Clinical Sciences, Lund University, Lund, Sweden
- Faculty of Medicine, University of Iceland, Reykjavik, Iceland
- Mitesh Patel
- School of Biosciences, University of East London, London, United Kingdom
- Division of Brain Sciences, Imperial College London, London, United Kingdom
- Hannes Petersen
- Faculty of Medicine, University of Iceland, Reykjavik, Iceland
- Department of Otorhinolaryngology, Landspitali University Hospital, Reykjavik, Iceland
- Thomas Wiebe
- Department of Paediatrics, Skane University Hospital, Lund, Sweden
- Måns Magnusson
- Department of Clinical Sciences, Lund University, Lund, Sweden
- Department of Otorhinolaryngology, Skane University Hospital, Lund, Sweden
- Christian Moëll
- Department of Paediatrics, Skane University Hospital, Lund, Sweden
8
Daemi M, Crawford JD. A kinematic model for 3-D head-free gaze-shifts. Front Comput Neurosci 2015; 9:72. PMID: 26113816; PMCID: PMC4461827; DOI: 10.3389/fncom.2015.00072.
Abstract
Rotations of the line of sight are mainly implemented by coordinated motion of the eyes and head. Here, we propose a model for the kinematics of three-dimensional (3-D) head-unrestrained gaze-shifts. The model was designed to account for major principles in the known behavior, such as gaze accuracy, spatiotemporal coordination of saccades with vestibulo-ocular reflex (VOR), relative eye and head contributions, the non-commutativity of rotations, and Listing's and Fick constraints for the eyes and head, respectively. The internal design of the model was inspired by known and hypothesized elements of gaze control physiology. Inputs included retinocentric location of the visual target and internal representations of initial 3-D eye and head orientation, whereas outputs were 3-D displacements of eye relative to the head and head relative to shoulder. Internal transformations decomposed the 2-D gaze command into 3-D eye and head commands with the use of three coordinated circuits: (1) a saccade generator, (2) a head rotation generator, (3) a VOR predictor. Simulations illustrate that the model can implement: (1) the correct 3-D reference frame transformations to generate accurate gaze shifts (despite variability in other parameters), (2) the experimentally verified constraints on static eye and head orientations during fixation, and (3) the experimentally observed 3-D trajectories of eye and head motion during gaze-shifts. We then use this model to simulate how 2-D eye-head coordination strategies interact with 3-D constraints to influence 3-D orientations of the eye-in-space, and the implications of this for spatial vision.
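The non-commutativity constraint the model must respect is easy to demonstrate numerically. The sketch below is a generic check with assumed axis conventions (vertical y-axis for horizontal rotations, horizontal x-axis for vertical ones), not part of the authors' model:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# 30 deg horizontal rotation (about the vertical y-axis) and
# 30 deg vertical rotation (about the horizontal x-axis)
horizontal = R.from_euler('y', 30, degrees=True)
vertical = R.from_euler('x', 30, degrees=True)

a = vertical * horizontal    # horizontal first, then vertical
b = horizontal * vertical    # vertical first, then horizontal

# The two orders leave the eye in different 3-D orientations; the angle of
# the residual rotation quantifies the mismatch (including a torsional part)
mismatch_deg = np.degrees((a * b.inv()).magnitude())
print(f"orientation difference between rotation orders: {mismatch_deg:.1f} deg")
```

For two 30° rotations about orthogonal axes the two orders differ by roughly 15°, which is why a 3-D gaze controller cannot simply add 2-D rotation commands and must instead handle order and torsion explicitly.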
Affiliation(s)
- Mehdi Daemi
- Department of Biology and Neuroscience Graduate Diploma, York University, Toronto, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
- CAN-ACT NSERC CREATE Program, Toronto, ON, Canada
- Canadian Action and Perception Network, Toronto, ON, Canada
- J Douglas Crawford
- Department of Biology and Neuroscience Graduate Diploma, York University, Toronto, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
- CAN-ACT NSERC CREATE Program, Toronto, ON, Canada
- Canadian Action and Perception Network, Toronto, ON, Canada
- Department of Psychology, York University, Toronto, ON, Canada
- School of Kinesiology and Health Sciences, York University, Toronto, ON, Canada
- Brain in Action NSERC CREATE/DFG IRTG Program, Canada/Germany
9
Murdison TS, Leclercq G, Lefèvre P, Blohm G. Computations underlying the visuomotor transformation for smooth pursuit eye movements. J Neurophysiol 2015; 113:1377-99. PMID: 25475344; DOI: 10.1152/jn.00273.2014.
Abstract
Smooth pursuit eye movements are driven by retinal motion and enable us to view moving targets with high acuity. Complicating the generation of these movements is the fact that different eye and head rotations can produce different retinal stimuli yet give rise to identical smooth pursuit trajectories. However, because our eyes accurately pursue targets regardless of eye and head orientation (Blohm G, Lefèvre P. J Neurophysiol 104: 2103-2115, 2010), the brain must somehow take these signals into account. To learn about the neural mechanisms potentially underlying this visual-to-motor transformation, we trained a physiologically inspired neural network model to combine two-dimensional (2D) retinal motion signals with three-dimensional (3D) eye and head orientation and velocity signals to generate a spatially correct 3D pursuit command. We then simulated conditions of 1) head roll-induced ocular counterroll, 2) oblique gaze-induced retinal rotations, 3) eccentric gazes (invoking the half-angle rule), and 4) optokinetic nystagmus to investigate how units in the intermediate layers of the network accounted for different 3D constraints. Simultaneously, we simulated electrophysiological recordings (visual and motor tunings) and microstimulation experiments to quantify the reference frames of signals at each processing stage. We found a gradual retinal-to-intermediate-to-spatial feedforward transformation through the hidden layers. Our model is the first to describe the general 3D transformation for smooth pursuit mediated by eye- and head-dependent gain modulation. Based on several testable experimental predictions, our model provides a mechanism by which the brain could perform the 3D visuomotor transformation for smooth pursuit.
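The core of the visual-to-motor transformation described here, that retinal motion must be reinterpreted given the eye's 3-D orientation, can be reduced to a minimal 2-D example. The counterroll angle and velocities below are illustrative, and a plain rotation matrix stands in for the trained network:

```python
import numpy as np

def retinal_to_spatial(retinal_motion, torsion_deg):
    """Rotate a 2-D retinal motion vector (deg/s) by the eye's torsion angle
    to recover the spatially correct motion direction (toy transformation)."""
    t = np.radians(torsion_deg)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    return rot @ np.asarray(retinal_motion, dtype=float)

# With 10 deg of ocular counterroll, a purely horizontal retinal slip actually
# corresponds to oblique motion in space; a pursuit command that ignored
# torsion would therefore be misdirected by 10 deg.
spatial = retinal_to_spatial([10.0, 0.0], 10.0)
print(f"spatial direction: {np.degrees(np.arctan2(spatial[1], spatial[0])):.1f} deg")
```

The network's gain modulation can be read as a learned, distributed implementation of this kind of orientation-dependent rotation, generalized to 3-D and to velocity signals.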
Affiliation(s)
- T Scott Murdison
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada
- Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN)
- Guillaume Leclercq
- ICTEAM Institute and Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-La-Neuve, Belgium
- Philippe Lefèvre
- ICTEAM Institute and Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-La-Neuve, Belgium
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada
- Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN)
10
Abstract
One of the hallmarks of an eye movement that follows Listing's law is the half-angle rule, which says that the angular velocity of the eye tilts by half the angle of eccentricity of the line of sight relative to primary eye position. Since all visually guided eye movements in the regime of far viewing follow Listing's law (with the head still and upright), the question about its origin is of considerable importance. Here, we provide theoretical and experimental evidence that Listing's law results from a unique motor strategy that allows minimizing ocular torsion while smoothly tracking objects of interest along any path in visual space. The strategy consists in compounding conventional ocular rotations in meridian planes, that is, in horizontal, vertical and oblique directions (which are all torsion-free), with small linear displacements of the eye in the frontal plane. Such compound rotation-displacements of the eye can explain the kinematic paradox that the fixation point may rotate in one plane while the eye rotates in other planes. Its unique signature is the half-angle law in the position domain, which means that the rotation plane of the eye tilts by half the angle of gaze eccentricity. We show that this law does not readily generalize to the velocity domain of visually guided eye movements because the angular eye velocity is the sum of two terms, one associated with rotations in meridian planes and one associated with displacements of the eye in the frontal plane. While the first term does not depend on eye position, the second term does. We show that compounded rotation-displacements perfectly predict the average smooth kinematics of the eye during steady-state pursuit in both the position and velocity domain.
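The velocity-domain half-angle behavior discussed here can be verified numerically. The sketch below is a standalone check with an assumed parametrization (rotation vectors confined to Listing's plane, torsion along the z-axis), not the authors' derivation: it moves the eye along a Listing-plane path at horizontal eccentricity E and measures how far the angular-velocity axis tilts out of that plane.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def velocity_axis_tilt(ecc_deg, dt=1e-6):
    """Tilt (deg) of the angular-velocity axis out of Listing's plane for a
    small vertical eye movement at horizontal eccentricity ecc_deg, with all
    eye positions obeying Listing's law (zero torsional rotation-vector
    component along z)."""
    E = np.deg2rad(ecc_deg)
    r0 = R.from_rotvec([0.0, E, 0.0])   # eye rotated horizontally by E
    r1 = R.from_rotvec([dt, E, 0.0])    # slightly later position, still in plane
    omega = (r1 * r0.inv()).as_rotvec() / dt   # finite-difference angular velocity
    # angle between omega and Listing's (x-y) plane
    return np.degrees(np.arctan2(abs(omega[2]), np.linalg.norm(omega[:2])))

print(velocity_axis_tilt(30.0))   # ≈ 15.0: half the 30 deg eccentricity
```

The tilt comes out to half the eccentricity for any E, consistent with the half-angle rule, even though every eye position along the path is itself torsion-free.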
Affiliation(s)
- Bernhard J. M. Hess
- Department of Neurology, University Hospital Zurich, Zurich, Switzerland
- Jakob S. Thomassen
- Department of Neurology, University Hospital Zurich, Zurich, Switzerland
11
Leclercq G, Blohm G, Lefèvre P. Accounting for direction and speed of eye motion in planning visually guided manual tracking. J Neurophysiol 2013; 110:1945-57. DOI: 10.1152/jn.00130.2013.
Abstract
Accurate motor planning in a dynamic environment is a critical skill for humans because we are often required to react quickly and adequately to the visual motion of objects. Moreover, we are often in motion ourselves, and this complicates motor planning. Indeed, the retinal and spatial motions of an object are different because of the retinal motion component induced by self-motion. Many studies have investigated motion perception during smooth pursuit and concluded that eye velocity is partially taken into account by the brain. Here we investigate whether the eye velocity during ongoing smooth pursuit is taken into account for the planning of visually guided manual tracking. We had 10 human participants manually track a target while in steady-state smooth pursuit toward another target such that the difference between the retinal and spatial target motion directions could be large, depending on both the direction and the speed of the eye. We used a measure of initial arm movement direction to quantify whether motor planning occurred in retinal coordinates (not accounting for eye motion) or was spatially correct (incorporating eye velocity). Results showed that the eye velocity was nearly fully taken into account by the neuronal areas involved in the visuomotor velocity transformation (between 75% and 102%). In particular, these neuronal pathways accounted for the nonlinear effects due to the relative velocity between the target and the eye. In conclusion, the brain network transforming visual motion into a motor plan for manual tracking adequately uses extraretinal signals about eye velocity.
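The retinal-versus-spatial geometry in this abstract reduces to simple vector arithmetic. The velocities below are made-up numbers, and the gain parameter is an illustrative stand-in for the paper's estimated 75-102% compensation:

```python
import numpy as np

target_spatial = np.array([8.0, 0.0])   # deg/s: target moves rightward in space
eye_velocity = np.array([0.0, 10.0])    # deg/s: upward smooth pursuit

# The eye's own motion subtracts from the target's motion on the retina
retinal = target_spatial - eye_velocity

def planned_direction(gain):
    """Direction (deg) of the motor plan when a fraction `gain` of eye
    velocity is added back to the retinal signal (1.0 = full compensation)."""
    plan = retinal + gain * eye_velocity
    return np.degrees(np.arctan2(plan[1], plan[0]))

for gain in (0.0, 0.75, 1.0):
    print(f"compensation gain {gain:.2f}: planned direction {planned_direction(gain):6.1f} deg")
```

With zero compensation the plan follows the retinal direction (about -51 deg here), while full compensation recovers the true spatial direction (0 deg); the initial arm-movement direction serves to diagnose where along this continuum the brain's plan falls.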
Affiliation(s)
- Guillaume Leclercq
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Institute of Neuroscience (IoNS), Université catholique de Louvain, Brussels, Belgium
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada
- Philippe Lefèvre
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Institute of Neuroscience (IoNS), Université catholique de Louvain, Brussels, Belgium
12
Murdison TS, Paré-Bingley CA, Blohm G. Evidence for a retinal velocity memory underlying the direction of anticipatory smooth pursuit eye movements. J Neurophysiol 2013; 110:732-47. PMID: 23678014; DOI: 10.1152/jn.00991.2012.
Abstract
To compute spatially correct smooth pursuit eye movements, the brain uses both retinal motion and extraretinal signals about the eyes and head in space (Blohm and Lefèvre 2010). However, when smooth eye movements rely solely on memorized target velocity, such as during anticipatory pursuit, it is unknown whether this velocity memory also accounts for extraretinal information, such as head roll and ocular torsion. To answer this question, we used a novel behavioral updating paradigm in which participants pursued a repetitive, spatially constant fixation-gap-ramp stimulus in a series of five trials. During the first four trials, participants' heads were rolled toward one shoulder, inducing ocular counterroll (OCR). With each repetition, participants increased their anticipatory pursuit gain, indicating a robust encoding of velocity memory. On the fifth trial, they rolled their heads to the opposite shoulder before pursuit, also inducing changes in ocular torsion. Consequently, for spatially accurate anticipatory pursuit, the velocity memory had to be updated across changes in head roll and ocular torsion. We tested how the velocity memory accounted for head roll and OCR by observing the effects of changes to these signals on anticipatory trajectories of the memory decoding (fifth) trials. We found that anticipatory pursuit was updated for changes in head roll; however, we observed no evidence of compensation for OCR, indicating the absence of ocular torsion signals within the velocity memory. This indicated that the directional component of the memory must be coded retinally and updated to account for changes in head roll, but not OCR.
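The updating logic the paradigm tests can be sketched as a rotation of a remembered, retinally coded pursuit direction. This is a toy sketch, not the authors' analysis; the function name, sign convention, and default behavior (compensating for head roll but ignoring OCR, as the results suggest) are assumptions:

```python
def update_memory_direction(memory_dir_deg, head_roll_change_deg,
                            ocr_change_deg=0.0, compensate_ocr=False):
    """Rotate a remembered (retinally coded) pursuit direction so it stays
    spatially accurate after the head rolls to the other shoulder.

    The study found compensation for head roll but not for the ocular
    counterroll (OCR) that accompanies it, so OCR is ignored by default.
    Angles are in degrees; a positive roll change rotates the retinal
    frame, so the stored direction must counter-rotate by the same amount.
    """
    updated = memory_dir_deg - head_roll_change_deg
    if compensate_ocr:
        updated -= ocr_change_deg
    return updated % 360.0
```

Comparing predictions with `compensate_ocr=True` versus `False` against the observed fifth-trial trajectories is, in essence, how the presence or absence of a torsion signal in the memory can be decided.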
Affiliation(s)
- T Scott Murdison
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
13
Hess BJM, Thomassen JS. Quick phases control ocular torsion during smooth pursuit. J Neurophysiol 2011; 106:2151-66. [PMID: 21715669 DOI: 10.1152/jn.00194.2011] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
One of the open questions in oculomotor control of visually guided eye movements is whether it is possible to smoothly track a target along a curvilinear path across the visual field without changing the torsional stance of the eye. We show in an experimental study of three-dimensional eye movements in subhuman primates (Macaca mulatta) that although the pursuit system is able to smoothly change the orbital orientation of the eye's rotation axis, the smooth ocular motion was interrupted every few hundred milliseconds by a small quick phase with amplitude <1.5° while the animal tracked a target along a circle or ellipse. Specifically, during circular pursuit of targets moving at different angular eccentricities (5°, 10°, and 15°) relative to straight ahead at frequencies of 0.067 and 0.1 Hz, the torsional amplitude of the intervening quick phases was typically around 1° or smaller and changed direction for clockwise vs. counterclockwise tracking. Reverse computations of the eye rotation based on the recorded angular eye velocity showed that the quick phases facilitate the overall control of ocular orientation in the roll plane, thereby minimizing torsional disturbances of the visual field. On the basis of a detailed kinematic analysis, we suggest that quick phases during curvilinear smooth tracking serve to minimize deviations from Donders' law, which are inevitable due to the spherical configuration space of smooth eye movements.
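The proposed mechanism — torsion slowly accumulating during curvilinear pursuit and being reset by small quick phases — can be caricatured in a few lines. This is a hedged sketch, not the authors' kinematic model; the reset threshold is a hypothetical parameter loosely motivated by the <1.5° quick-phase amplitudes reported:

```python
def track_torsion(torsional_vel, dt, reset_threshold=1.0):
    """Integrate a torsional eye-velocity trace (deg/s) over time steps of
    dt seconds, inserting a corrective quick phase whenever accumulated
    torsion reaches the threshold (deg).

    Returns the torsion trace after each step and the quick-phase count.
    """
    torsion, quick_phases = 0.0, 0
    trace = []
    for w in torsional_vel:
        torsion += w * dt
        if abs(torsion) >= reset_threshold:
            torsion = 0.0          # quick phase resets the torsional stance
            quick_phases += 1
        trace.append(torsion)
    return trace, quick_phases
```

Without the resets, torsion would grow without bound during sustained circular tracking, which is the deviation from Donders' law that the quick phases are suggested to minimize.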
Collapse
Affiliation(s)
- Bernhard J M Hess
- Department of Neurology, University Hospital Zurich, CH-8091 Zurich, Switzerland