1. Heinen, Chandna, Singh, Watamaniuk. A new oculomotor model demystifies "Remarkable Saccades". bioRxiv 2024:2024.06.14.599100. [PMID: 38915723 PMCID: PMC11195182 DOI: 10.1101/2024.06.14.599100] [Indexed: 06/26/2024]
Abstract
Hering's Law of binocular eye movement control guides most oculomotor research and supports diagnosis and treatment of clinical eye misalignment (strabismus). The law states that all eye movements are controlled by a unitary conjugate signal and a unitary vergence signal that sum. Recent evidence of temporally asynchronous inter-eye rotations during vergence (Chandna et al., 2021) challenges the viability of a unitary vergence signal. An alternative theory that might explain these anomalous results posits that the eyes are controlled independently. Yet independent control fails to explain a phenomenon known as "Remarkable Saccades", where an inappropriate saccade occurs from an eye aligned on a target during asymmetric vergence (Enright, 1992). We introduce a new model, formulated to describe the Chandna et al. (2021) midline vergence result, that generates remarkable saccades as an emergent property. The Hybrid Binocular Control model incorporates independent controllers for each eye, cortical in origin, that interact with a unitary conjugate controller residing in the brainstem. The model also accounts for behavioral variations in remarkable saccades when observers attend to an eye. Furthermore, it suggests more generally how the eyes are controlled during vergence and other voluntary eye movements, thus challenging documented oculomotor neural circuitry and suggesting that refinements are needed for clinical oculomotor interventions.
2. Borot L, Ogden R, Bennett SJ. Prefrontal cortex activity and functional organisation in dual-task ocular pursuit is affected by concurrent upper limb movement. Sci Rep 2024; 14:9996. [PMID: 38693184 PMCID: PMC11063197 DOI: 10.1038/s41598-024-57012-2] [Received: 11/03/2023] [Accepted: 03/13/2024] [Indexed: 05/03/2024]
Abstract
Tracking a moving object with the eyes seems like a simple task but involves areas of prefrontal cortex (PFC) associated with attention, working memory and prediction. Increasing the demand on these processes with secondary tasks can affect eye movements and/or perceptual judgments. This is particularly evident in chronic or acute neurological conditions such as Alzheimer's disease or mild traumatic brain injury. Here, we combined near-infrared spectroscopy and video-oculography to examine the effects of concurrent upper limb movement, which provides additional afference and efference that facilitates tracking of a moving object, in a novel dual-task pursuit protocol. We confirmed the expected effects on judgement accuracy in the primary and secondary tasks, as well as a reduction in eye velocity when the moving object was occluded. Although there was limited evidence of oculo-manual facilitation on behavioural measures, performing concurrent upper limb movement did result in lower activity in left medial PFC, as well as a change in PFC network organisation, which graph analysis showed to be locally and globally more efficient. These findings extend previous work by showing how PFC is functionally organised to support eye-hand coordination when task demands more closely replicate daily activities.
Affiliation(s)
- Lénaïc Borot
- School of Sport and Exercise Sciences, Faculty of Science, Liverpool John Moores University, Liverpool, UK
- Ruth Ogden
- School of Psychology, Faculty of Health, Liverpool John Moores University, Liverpool, UK
- Simon J Bennett
- School of Sport and Exercise Sciences, Faculty of Science, Liverpool John Moores University, Liverpool, UK
3. Li Y, Li X, Grant PR, Zheng B. Quantifying the Impact of Motions on Human Aiming Performance: Evidence from Eye Tracking and Bio-Signals. Sensors (Basel) 2024; 24:1518. [PMID: 38475054 DOI: 10.3390/s24051518] [Received: 01/27/2024] [Revised: 02/21/2024] [Accepted: 02/22/2024] [Indexed: 03/14/2024]
Abstract
Working on a moving platform can significantly impede human performance. Previous studies on moving vehicles have often focused on the overall impact on general task performance, whereas our study's emphasis is on precise hand movements, exploring the interaction between body motion and the escalation of task difficulty. We recruited 28 participants to engage in reciprocal aiming tasks, following Fitts's classic paradigm, under both in-motion and stationary conditions. The task index of difficulty (ID) was manipulated by varying the width of the targets and the distance between the targets. We measured participants' movement time (MT) and performance errors, and monitored their eye movements with an eye-tracking device, along with heart rate (HR) and respiration rate (RR), during the tasks. The measured parameters were compared across the two experimental conditions and three ID levels. Compared to the stationary conditions, the in-motion conditions degraded human aiming performance, resulting in significantly prolonged MT, increased errors, and longer durations of eye fixations and saccades. Furthermore, HR and RR increased under the in-motion conditions. Linear relationships between MT and ID exhibited steeper slopes under the in-motion conditions than under the stationary conditions. This study builds a foundation for exploring the control mechanisms of individuals working in dynamic and demanding environments, such as pilots in airplanes and paramedics in ambulances.
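The ID manipulation and slope comparison described in this abstract follow Fitts's law, ID = log2(2D/W) and MT = a + b·ID. A minimal sketch with synthetic, illustrative numbers (the layouts and coefficients below are invented for demonstration, not values from the study):

```python
import numpy as np

def index_of_difficulty(distance, width):
    """Fitts's index of difficulty (bits): ID = log2(2D / W)."""
    return np.log2(2.0 * distance / width)

# Three hypothetical target layouts (distance, width in mm)
layouts = [(80.0, 20.0), (160.0, 20.0), (160.0, 10.0)]
ids = np.array([index_of_difficulty(d, w) for d, w in layouts])

# Synthetic mean movement times (s); in-motion assumed slower, with a steeper slope
mt_stationary = 0.20 + 0.10 * ids
mt_in_motion = 0.25 + 0.16 * ids

# Fit MT = a + b * ID for each condition and compare the slopes b
b_stat, a_stat = np.polyfit(ids, mt_stationary, 1)
b_mot, a_mot = np.polyfit(ids, mt_in_motion, 1)
print(f"stationary: MT = {a_stat:.2f} + {b_stat:.2f}*ID")
print(f"in-motion:  MT = {a_mot:.2f} + {b_mot:.2f}*ID")
```

A steeper fitted slope under the in-motion condition means each extra bit of task difficulty costs more time, which is how the study quantifies the motion penalty.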
Affiliation(s)
- Yuzhang Li
- Department of Mechanical Engineering, University of Alberta, Edmonton, AB T6G 1H9, Canada
- Xinming Li
- Department of Mechanical Engineering, University of Alberta, Edmonton, AB T6G 1H9, Canada
- Peter R Grant
- Institute for Aerospace Studies, University of Toronto, Toronto, ON M3H 5T6, Canada
- Bin Zheng
- Department of Surgery, University of Alberta, Edmonton, AB T6G 2B7, Canada
4. Rubinstein JF, Singh M, Kowler E. Bayesian approaches to smooth pursuit of random dot kinematograms: effects of varying RDK noise and the predictability of RDK direction. J Neurophysiol 2024; 131:394-416. [PMID: 38149327 DOI: 10.1152/jn.00116.2023] [Received: 03/16/2023] [Revised: 11/30/2023] [Accepted: 12/20/2023] [Indexed: 12/28/2023]
Abstract
Smooth pursuit eye movements respond on the basis of both immediate and anticipated target motion, where anticipations may be derived from either memory or perceptual cues. To study the combined influence of both immediate sensory motion and anticipation, subjects pursued clear or noisy random dot kinematograms (RDKs) whose mean directions were chosen from Gaussian distributions with SDs = 10° (narrow prior) or 45° (wide prior). Pursuit directions were consistent with Bayesian theory in that transitions over time from dependence on the prior to near total dependence on immediate sensory motion (likelihood) took longer with the noisier RDKs and with the narrower, more reliable, prior. Results were fit to Bayesian models in which parameters representing the variability of the likelihood either were or were not constrained to be the same for both priors. The unconstrained model provided a statistically better fit, with the influence of the prior in the constrained model smaller than predicted from strict reliability-based weighting of prior and likelihood. Factors that may have contributed to this outcome include prior variability different from nominal values, low-level sensorimotor learning with the narrow prior, or departures of pursuit from strict adherence to reliability-based weighting. Although modifications of, or alternatives to, the normative Bayesian model will be required, these results, along with previous studies, suggest that Bayesian approaches are a promising framework to understand how pursuit combines immediate sensory motion, past history, and informative perceptual cues to accurately track the target motion that is most likely to occur in the immediate future.
NEW & NOTEWORTHY Smooth pursuit eye movements respond on the basis of anticipated, as well as immediate, target motions. Bayesian models using reliability-based weighting of previous (prior) and immediate target motions (likelihood) accounted for many, but not all, aspects of pursuit of clear and noisy random dot kinematograms with different levels of predictability. Bayesian approaches may solve the long-standing problem of how pursuit combines immediate sensory motion and anticipation of future motion to configure an effective response.
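The reliability-based weighting these models assume is precision-weighted averaging of Gaussian prior and likelihood. A minimal sketch with illustrative numbers (the SDs and directions below are arbitrary choices, not fitted parameters from the paper):

```python
import numpy as np

def bayes_direction(prior_mean, prior_sd, like_mean, like_sd):
    """Posterior direction for a Gaussian prior x Gaussian likelihood.

    Each source is weighted by its precision (1/variance), so the noisier
    the immediate motion signal, the more the estimate leans on the prior.
    """
    w_prior = 1.0 / prior_sd**2
    w_like = 1.0 / like_sd**2
    post_mean = (w_prior * prior_mean + w_like * like_mean) / (w_prior + w_like)
    post_sd = np.sqrt(1.0 / (w_prior + w_like))
    return post_mean, post_sd

# Prior centered at 0 deg; immediate RDK motion at 30 deg, same sensory noise
narrow_est, _ = bayes_direction(0.0, 10.0, 30.0, 20.0)  # narrow (reliable) prior
wide_est, _ = bayes_direction(0.0, 45.0, 30.0, 20.0)    # wide prior
print(narrow_est, wide_est)  # the narrow prior pulls the estimate toward 0 deg
```

This captures the abstract's qualitative result: with the narrower, more reliable prior, the estimate stays biased toward the prior direction longer than strict sensory dominance would predict.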
Affiliation(s)
- Jason F Rubinstein
- Department of Psychology, Rutgers University, Piscataway, New Jersey, United States
- Manish Singh
- Department of Psychology, Rutgers University, Piscataway, New Jersey, United States
- Eileen Kowler
- Department of Psychology, Rutgers University, Piscataway, New Jersey, United States
5. Stolte M, Kraus L, Ansorge U. Visual attentional guidance during smooth pursuit eye movements: Distractor interference is independent of distractor-target similarity. Psychophysiology 2023; 60:e14384. [PMID: 37431573 DOI: 10.1111/psyp.14384] [Received: 09/19/2022] [Revised: 05/31/2023] [Accepted: 06/26/2023] [Indexed: 07/12/2023]
Abstract
In the current study, we used abrupt-onset distractors similar and dissimilar in luminance to the target of a smooth pursuit eye movement to test whether abrupt-onset distractors capture attention in a top-down or bottom-up fashion while the eyes track a moving object. Abrupt-onset distractors were presented at different positions relative to the current position of a pursuit target during the closed-loop phase of smooth pursuit. Across experiments, we varied the duration of the distractors, their motion direction, and their task-relevance. We found that abrupt-onset distractors decreased the gain of horizontally directed smooth pursuit eye movements. This effect, however, was independent of the similarity in luminance between distractor and target. In addition, distracting effects on horizontal gain were the same regardless of the exact duration and position of the distractors, suggesting that capture was relatively unspecific and short-lived (Experiments 1 and 2). This was different with distractors moving in a vertical direction, perpendicular to the horizontally moving target. In line with past findings, these distractors caused suppression of vertical gain (Experiment 3). Finally, making distractors task-relevant by asking observers to report distractor positions increased their effect on pursuit gain. This effect was also independent of target-distractor similarity (Experiment 4). In conclusion, the results suggest that a strong location signal exerted by the pursuit targets led to very brief and largely location-unspecific interference from the abrupt onsets, and that this interference was bottom-up, implying that the control of smooth pursuit is independent of target features other than its motion signal.
Affiliation(s)
- Moritz Stolte
- Department of Cognition, Emotion, and Methods in Psychology, University of Vienna, Vienna, Austria
- Leon Kraus
- Department of Cognition, Emotion, and Methods in Psychology, University of Vienna, Vienna, Austria
- Ulrich Ansorge
- Department of Cognition, Emotion, and Methods in Psychology, University of Vienna, Vienna, Austria
- Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
- Research Platform Mediatised Lifeworlds, University of Vienna, Vienna, Austria
6. Takahashi M, Veale R. Pathways for Naturalistic Looking Behavior in Primate I: Behavioral Characteristics and Brainstem Circuits. Neuroscience 2023; 532:133-163. [PMID: 37776945 DOI: 10.1016/j.neuroscience.2023.09.009] [Received: 06/23/2023] [Revised: 09/09/2023] [Accepted: 09/18/2023] [Indexed: 10/02/2023]
Abstract
Organisms control their visual worlds by moving their eyes, heads, and bodies. This control of "gaze" or "looking" is key to survival and intelligence, but our investigation of the underlying neural mechanisms in natural conditions is hindered by technical limitations. Recent advances have enabled measurement of both brain and behavior in freely moving animals in complex environments, expanding on historical head-fixed laboratory investigations. We juxtapose looking behavior as traditionally measured in the laboratory against looking behavior in naturalistic conditions, finding that behavior changes when animals are free to move or when stimuli have depth or sound. We specifically focus on the brainstem circuits driving gaze shifts and gaze stabilization. The overarching goal of this review is to reconcile historical understanding of the differential neural circuits for different "classes" of gaze shift with two inconvenient truths. (1) "Classes" of gaze behavior are artificial. (2) The neural circuits historically identified to control each "class" of behavior do not operate in isolation during natural behavior. Instead, multiple pathways combine adaptively and non-linearly depending on individual experience. While the neural circuits for reflexive and voluntary gaze behaviors traverse somewhat independent brainstem and spinal cord circuits, both can be modulated by feedback, meaning that most gaze behaviors are learned rather than hardcoded. Despite this flexibility, there are broadly enumerable neural pathways commonly adopted among primate gaze systems. Parallel pathways which carry simultaneous evolutionary and homeostatic drives converge in the superior colliculus, a layered midbrain structure which integrates and relays these volitional signals to brainstem gaze-control circuits.
Affiliation(s)
- Mayu Takahashi
- Department of Systems Neurophysiology, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, Japan
- Richard Veale
- Department of Neurobiology, Graduate School of Medicine, Kyoto University, Japan
7. Feng X, Wang Q, Cong H, Zhang Y, Qiu M. Gaze Point Tracking Based on a Robotic Body-Head-Eye Coordination Method. Sensors (Basel) 2023; 23:6299. [PMID: 37514595 PMCID: PMC10383314 DOI: 10.3390/s23146299] [Received: 06/01/2023] [Revised: 06/29/2023] [Accepted: 07/06/2023] [Indexed: 07/30/2023]
Abstract
When the magnitude of a gaze shift is too large, human beings change the orientation of their head or body to assist their eyes in tracking targets, because a saccade alone is insufficient to keep a target at the center region of the retina. To make a robot gaze at targets rapidly and stably (as a human does), it is necessary to design a body-head-eye coordinated motion control strategy. A robot system equipped with eyes and a head is designed in this paper. Gaze point tracking problems are divided into two sub-problems: in situ gaze point tracking and approaching gaze point tracking. In the in situ gaze tracking state, the desired positions of the eye, head and body are calculated on the basis of minimizing resource consumption and maximizing stability. In the approaching gaze point tracking state, the robot is expected to approach the object at a zero angle. In the process of tracking, the three-dimensional (3D) coordinates of the object are obtained by the bionic eye and then converted to the head coordinate system and the mobile robot coordinate system. The desired positions of the head, eyes and body are obtained according to the object's 3D coordinates. Then, using sophisticated motor control methods, the head, eyes and body are driven to their desired positions. This method avoids the complex process of adjusting control parameters and does not require the design of complex control algorithms. Based on this strategy, in situ gaze point tracking and approaching gaze point tracking experiments are performed by the robot. The experimental results show that body-head-eye coordinated gaze point tracking based on the 3D coordinates of an object is feasible. This paper provides a new method that differs from the traditional two-dimensional image-based method for robotic body-head-eye gaze point tracking.
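The frame conversions this strategy relies on (eye/camera coordinates → head coordinates → mobile-base coordinates) are standard homogeneous transforms. A minimal sketch with made-up link offsets and joint angles (none of these values or frame names come from the paper):

```python
import numpy as np

def transform(rotation_z_deg, translation):
    """4x4 homogeneous transform: rotation about z, then translation."""
    th = np.radians(rotation_z_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]]
    T[:3, 3] = translation
    return T

# Hypothetical kinematics: head panned 30 deg on the base, mounted 1.2 m up;
# eye sits 0.1 m ahead of and 0.05 m above the head frame origin
T_base_head = transform(30.0, [0.0, 0.0, 1.2])  # head frame expressed in base frame
T_head_eye = transform(0.0, [0.1, 0.0, 0.05])   # eye frame expressed in head frame

p_eye = np.array([2.0, 0.5, 0.0, 1.0])          # target seen by the bionic eye
p_base = T_base_head @ T_head_eye @ p_eye       # same target in base coordinates

# Desired base heading for a "zero-angle" approach to the target
heading = np.degrees(np.arctan2(p_base[1], p_base[0]))
print(p_base[:3], heading)
```

Chaining the two transforms is what lets the desired head, eye, and body positions all be computed from a single 3D measurement of the object.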
Affiliation(s)
- Xingyang Feng
- Army Academy of Armored Forces, Beijing 100072, China
- Qingbin Wang
- Research Center of Precision Sensing and Control, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Hua Cong
- Army Academy of Armored Forces, Beijing 100072, China
- Yu Zhang
- Army Academy of Armored Forces, Beijing 100072, China
- Mianhao Qiu
- Army Academy of Armored Forces, Beijing 100072, China
8. Behling S, Lisberger SG. A sensory-motor decoder that transforms neural responses in extrastriate area MT into smooth pursuit eye movements. bioRxiv 2023:2023.05.12.540526. [PMID: 37214841 PMCID: PMC10197634 DOI: 10.1101/2023.05.12.540526] [Indexed: 05/24/2023]
Abstract
Visual motion drives smooth pursuit eye movements through a sensory-motor decoder that uses multiple parallel components and neural pathways to transform the population response in extrastriate area MT into movement. We evaluated the decoder by challenging pursuit in monkeys with reduced motion reliability created by reducing the coherence of motion in patches of dots. Reduced dot coherence caused deficits in both the initiation of pursuit and steady-state tracking, revealing the paradox of steady-state eye speeds that fail to accelerate to target speed in spite of persistent image motion. We recorded neural responses to reduced dot coherence in MT and found a decoder that transforms MT population responses into eye movements. During pursuit initiation, decreased dot coherence reduces MT population response amplitude without changing the preferred speed at the peak of the population response. The successful decoder reproduces the measured eye movements by multiplication of (i) the estimate of target speed from the peak of the population response with (ii) visual-motor gain based on the amplitude of the population response. During steady-state tracking, the decoder that worked for pursuit initiation failed. It predicted eye acceleration to target speed even when monkeys' eye speeds were steady at a level well below target speed. We can account for the effect of dot coherence on steady-state eye speed if sensory-motor gain also modulates the eye velocity positive feedback that normally sustains perfect steady-state tracking. Then, poor steady-state tracking persists because of a balance between deceleration caused by low positive-feedback gain and acceleration driven by MT.
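The two-component readout the abstract describes (speed from the location of the population peak, gain from its amplitude) can be sketched as a decode of a model MT population. The tuning width, speed grid, and gain mapping below are illustrative assumptions, not the fitted decoder from the paper:

```python
import numpy as np

pref_speeds = np.logspace(0, 2, 200)  # preferred speeds, 1-100 deg/s

def mt_population(target_speed, coherence):
    """Gaussian population response on a log-speed axis.

    Lowering dot coherence scales the response amplitude down
    without shifting the preferred speed at the population peak.
    """
    tuning = np.exp(-0.5 * ((np.log(pref_speeds) - np.log(target_speed)) / 0.35) ** 2)
    return coherence * tuning

def decode(response):
    """Eye velocity command = (speed at population peak) x (amplitude-based gain)."""
    speed_estimate = pref_speeds[np.argmax(response)]
    visuomotor_gain = response.max()  # gain grows with population amplitude
    return speed_estimate * visuomotor_gain

for coh in (1.0, 0.5, 0.2):
    r = mt_population(target_speed=20.0, coherence=coh)
    print(coh, decode(r))  # same peak speed, weaker command at low coherence
```

Because the peak location is unchanged while the amplitude falls, the decoded command weakens at low coherence even though the speed estimate itself does not, which is the multiplicative structure the paper's initiation decoder exploits.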
Affiliation(s)
- Stuart Behling
- Department of Neurobiology, Duke University School of Medicine, Durham, North Carolina, USA
- Stephen G Lisberger
- Department of Neurobiology, Duke University School of Medicine, Durham, North Carolina, USA
9. Watamaniuk SNJ, Badler JB, Heinen SJ. Peripheral targets attenuate miniature eye movements during fixation. Sci Rep 2023; 13:7418. [PMID: 37150766 PMCID: PMC10164736 DOI: 10.1038/s41598-023-34066-2] [Received: 09/15/2022] [Accepted: 04/24/2023] [Indexed: 05/09/2023]
Abstract
Fixating a small dot is a universal technique for stabilizing gaze in vision and eye movement research, and for clinical imaging of normal and diseased retinae. During fixation, microsaccades and drifts occur that presumably benefit vision, yet microsaccades compromise image stability and usurp task attention. Previous work suggested that microsaccades and smooth pursuit catch-up saccades are controlled by similar mechanisms. This, together with earlier evidence of fewer catch-up saccades during smooth pursuit of peripheral targets, suggested that a peripheral target might similarly mitigate microsaccades. Here, human observers fixated one of three stimuli: a small central dot, the center of a peripheral, circular array of small dots, or a central/peripheral stimulus created by combining the two. The microsaccade rate was significantly lower with the peripheral array than with the dot. However, inserting the dot into the array increased the microsaccade rate to single-dot levels. Drift speed also decreased with the peripheral array, both with and without the central dot. Eye position variability was higher with the array than with the composite stimulus. The results suggest that, analogous to foveal pursuit, foveating a stationary target engages the saccadic system, likely compromising retinal-image stability. In contrast, fixating a peripheral stimulus improves stability, thereby affording better retinal imaging and freeing attention for experimental tasks.
Affiliation(s)
- Scott N J Watamaniuk
- Wright State University, Dayton, USA
- The Smith-Kettlewell Eye Research Institute, San Francisco, USA
- Jeremy B Badler
- The Smith-Kettlewell Eye Research Institute, San Francisco, USA
- University of Marburg, Marburg, Germany
10. Korai Y, Miura K. A dynamical model of visual motion processing for arbitrary stimuli including type II plaids. Neural Netw 2023; 162:46-68. [PMID: 36878170 DOI: 10.1016/j.neunet.2023.02.039] [Received: 05/15/2022] [Revised: 02/23/2023] [Accepted: 02/25/2023] [Indexed: 03/04/2023]
Abstract
To explore the operating principle of visual motion processing in the brain underlying perception and eye movements, we model, at the algorithmic level, the information processing that estimates the velocity of the visual stimulus, using a dynamical systems approach. In this study, we formulate the model as an optimization process over an appropriately defined objective function. The model is applicable to arbitrary visual stimuli. We find that our theoretical predictions qualitatively agree with the time evolution of eye movements reported in previous work across various stimulus types. Our results suggest that the brain implements the present framework as its internal model of motion vision. We anticipate our model to be a promising building block for a deeper understanding of visual motion processing, as well as for the development of robotics.
Affiliation(s)
- Yusuke Korai
- Integrated Clinical Education Center, Kyoto University Hospital, Kyoto University, Kyoto 606-8507, Japan
- Kenichiro Miura
- Graduate School of Medicine, Kyoto University, Kyoto 606-8501, Japan
- Department of Pathology of Mental Diseases, National Institute of Mental Health, National Center of Neurology and Psychiatry, Tokyo 187-8551, Japan
11. Lisberger SG. Toward a Biomimetic Neural Circuit Model of Sensory-Motor Processing. Neural Comput 2023; 35:384-412. [PMID: 35671470 PMCID: PMC9971833 DOI: 10.1162/neco_a_01516] [Received: 02/02/2022] [Accepted: 03/31/2022] [Indexed: 11/04/2022]
Abstract
Computational models have been a mainstay of research on smooth pursuit eye movements in monkeys. Pursuit is a sensory-motor system that is driven by the visual motion of small targets. It creates a smooth eye movement that accelerates up to target speed and tracks the moving target essentially perfectly. In this review of my laboratory's research, I trace the development of computational models of pursuit eye movements from the early control-theory models to the most recent neural circuit models. I outline a combined experimental and computational plan to move the models to the next level. Finally, I explain why research on nonhuman primates is so critical to the development of the neural circuit models I think we need.
Affiliation(s)
- Stephen G. Lisberger
- Department of Neurobiology, Duke University School of Medicine, Durham, NC 27710, USA
12. Jeong W, Kim S, Park J, Lee J. Multivariate EEG activity reflects the Bayesian integration and the integrated Galilean relative velocity of sensory motion during sensorimotor behavior. Commun Biol 2023; 6:113. [PMID: 36709242 PMCID: PMC9884247 DOI: 10.1038/s42003-023-04481-2] [Received: 03/18/2022] [Accepted: 01/12/2023] [Indexed: 01/29/2023]
Abstract
Humans integrate multiple sources of information for action-taking, using the reliability of each source to allocate weight to the data. This reliability-weighted information integration is a crucial property of Bayesian inference. In this study, participants were asked to perform a smooth pursuit eye movement task in which we independently manipulated the reliability of pursuit target motion and the direction-of-motion cue. Through an analysis of pursuit initiation and multivariate electroencephalography activity, we found neural and behavioral evidence of Bayesian information integration: more attraction toward the cue direction was generated when the target motion was weak and unreliable. Furthermore, using mathematical modeling, we found that the neural signature of Bayesian information integration had extra-retinal origins, although most of the multivariate electroencephalography activity patterns during pursuit were best correlated with the retinal velocity errors accumulated over time. Our results demonstrated neural implementation of Bayesian inference in human oculomotor behavior.
Affiliation(s)
- Woojae Jeong
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA 90089, USA
- Seolmin Kim
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- JeongJun Park
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
- Division of Biology and Biomedical Sciences, Program in Neurosciences, Washington University in St. Louis, St. Louis, MO 63130, USA
- Joonyeol Lee
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Department of Intelligent Precision Healthcare Convergence, Sungkyunkwan University, Suwon 16419, Republic of Korea
13. Qiao H, Chen J, Huang X. A Survey of Brain-Inspired Intelligent Robots: Integration of Vision, Decision, Motion Control, and Musculoskeletal Systems. IEEE Trans Cybern 2022; 52:11267-11280. [PMID: 33909584 DOI: 10.1109/tcyb.2021.3071312] [Indexed: 06/12/2023]
Abstract
Current robotic studies focus on the performance of specific tasks. However, such task-specific solutions cannot be generalized, and some demanding capabilities, such as compliant and precise manipulation, fast and flexible response, and deep collaboration between humans and robots, cannot yet be realized. Brain-inspired intelligent robots imitate humans and animals, from inner mechanisms to external structures, through an integration of visual cognition, decision making, motion control, and musculoskeletal systems. Robots of this kind are more likely to realize the functions that current robots cannot and to become companions to humans. With a focus on the development of brain-inspired intelligent robots, this article reviews cutting-edge research in the areas of brain-inspired visual cognition, decision making, musculoskeletal robots, motion control, and their integration. It aims to provide greater insight into brain-inspired intelligent robots and to attract more attention to this field from the global research community.
14. Lee SU, Kim HJ, Choi JY, Choi JH, Zee DS, Kim JS. Nystagmus only with fixation in the light: a rare central sign due to cerebellar malfunction. J Neurol 2022; 269:3879-3890. [PMID: 35396603 DOI: 10.1007/s00415-022-11108-9] [Received: 03/02/2022] [Revised: 03/24/2022] [Accepted: 03/25/2022] [Indexed: 11/30/2022]
Abstract
Fixation nystagmus refers to nystagmus that appears or markedly increases with fixation. While relatively common in infantile (congenital) nystagmus, acquired fixation nystagmus is unusual and has been ascribed to lesions involving the cerebellar nuclei or the fibers projecting from the cerebellum to the brainstem. We aimed to report the clinical features of patients with acquired fixation nystagmus, to discuss possible mechanisms using a model simulation, and to consider its diagnostic significance. We describe four patients with acquired fixation nystagmus that appeared or markedly increased with visual fixation. All patients had lesions involving the cerebellum or dorsal medulla. All patients showed direction-changing gaze-evoked nystagmus, impaired smooth pursuit, and decreased vestibular responses on head-impulse tests. The clinical implication of fixation nystagmus is that it may occur in central lesions that impair both smooth pursuit and the vestibulo-ocular reflex (VOR) but without creating a spontaneous nystagmus in the dark. We develop a mathematical model that hypothesizes that fixation nystagmus reflects a central tone imbalance due to abnormal function in cerebellar circuits that normally optimize the interaction between visual following (pursuit) and the VOR during attempted fixation. Patients with fixation nystagmus have central lesions involving the cerebellar circuits that are involved in visual-vestibular interactions and normally eliminate biases that cause a spontaneous nystagmus.
Affiliation(s)
- Sun-Uk Lee
- Department of Neurology, Korea University Medical Center, Seoul, South Korea; Department of Neurology, Dizziness Center, Clinical Neuroscience Center, Seoul National University Bundang Hospital, Seongnam, South Korea
- Hyo-Jung Kim
- Research Administration Team, Seoul National University Bundang Hospital, 173-82 Gumi-ro, Bundang-gu, Gyeonggi-do, Seongnam-si, 13620, South Korea
- Jeong-Yoon Choi
- Department of Neurology, Dizziness Center, Clinical Neuroscience Center, Seoul National University Bundang Hospital, Seongnam, South Korea; Department of Neurology, Seoul National University Bundang Hospital, Seongnam, South Korea
- Jae-Hwan Choi
- Department of Neurology, Pusan National University School of Medicine, Pusan National University Yangsan Hospital, Yangsan, South Korea
- David S Zee
- Departments of Neurology, Ophthalmology, Otolaryngology-Head and Neck Surgery, and Neuroscience, Division of Neuro-Visual and Vestibular Disorders, Johns Hopkins Hospital, Baltimore, MD, USA
- Ji-Soo Kim
- Department of Neurology, Dizziness Center, Clinical Neuroscience Center, Seoul National University Bundang Hospital, Seongnam, South Korea; Department of Neurology, Seoul National University Bundang Hospital, Seongnam, South Korea
|
15
|
Robinson DA. Properties of pursuit movements. PROGRESS IN BRAIN RESEARCH 2022; 267:391-410. [PMID: 35074064 DOI: 10.1016/bs.pbr.2021.10.019] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
This chapter describes the dynamic properties of smooth pursuit, visual and non-visual stimuli for pursuit, smooth eye-head tracking movements, and the plastic-adaptive properties of pursuit. Step-ramp visual stimulus motion has revealed important properties of pursuit, including the latency to onset, initial acceleration, accuracy, and transient oscillations, all features that have been used to develop models of the pursuit system, discussed in the chapter "Models of pursuit" by Robinson. The role of predictive neural mechanisms in generating pursuit movements that anticipate target motion, and that enable near-perfect tracking of sinusoidal target motion, is examined. Smooth pursuit can be generated in response to targets that do not move, such as stroboscopic lights and images stabilized in the periphery of vision. The view that, during combined eye-head pursuit, the pursuit signal is used to cancel the vestibulo-ocular reflex is an incomplete hypothesis, contradicted by behavioral and electrophysiological findings. Smooth pursuit also shows adaptive capabilities, evident in individuals who develop extraocular muscle palsies.
Affiliation(s)
- David A Robinson
- Late Professor of Ophthalmology, Biomedical Engineering and Neuroscience, Johns Hopkins University School of Medicine, Baltimore, MD, United States
|
16
|
Models of pursuit. PROGRESS IN BRAIN RESEARCH 2022; 267:411-422. [PMID: 35074065 DOI: 10.1016/bs.pbr.2021.10.020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
This chapter deals with mathematical models of smooth-pursuit eye movements, starting with simple negative-feedback schemes. After pointing out their deficiencies, Robinson developed models that account for specific dynamic properties of pursuit behavior, such as the transient ocular oscillations that may occur at pursuit onset, and the adaptive properties of pursuit. The challenge posed by the inherent latency of visual responses to target motion (specifically, the instability of a negative-feedback model) is resolved by including an internal positive feedback loop carrying an efference copy, and by distributing system delays throughout the model's pathways. A model for smooth combined eye-head tracking is presented in which the brain sends an efference copy of the planned head movement to null out the expected vestibular signal.
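The role of the efference-copy loop can be illustrated with a toy simulation (a minimal sketch in the spirit of Robinson's scheme, not the chapter's actual model). With a pure visual delay in a negative feedback loop, eye velocity settles well below target velocity; adding an internal positive feedback loop that feeds back a copy of (delayed) eye velocity reconstructs target velocity and removes the steady-state error. The delay, plant time constant, and gain values are illustrative assumptions.

```python
import numpy as np

DT = 0.001        # simulation step (s)
TAU = 0.100       # visual feedback delay (s); ~100 ms is typical, assumed here
LAG = int(TAU / DT)
T_PLANT = 0.015   # first-order oculomotor "plant" time constant (s), assumed
STEPS = int(3.0 / DT)
TARGET = 10.0     # constant target velocity (deg/s)

def simulate(gain, use_efference_copy):
    """Eye-velocity trace while pursuing a constant-velocity target."""
    eye = np.zeros(STEPS)
    for t in range(1, STEPS):
        delayed_eye = eye[t - LAG] if t >= LAG else 0.0
        slip = (TARGET - delayed_eye) if t >= LAG else 0.0  # delayed retinal slip
        cmd = gain * slip
        if use_efference_copy:
            # Internal positive feedback: adding back a copy of the (delayed)
            # eye velocity reconstructs target velocity, so accurate tracking
            # no longer requires a sustained retinal error signal.
            cmd += delayed_eye
        # first-order plant dynamics
        eye[t] = eye[t - 1] + DT / T_PLANT * (cmd - eye[t - 1])
    return eye

naive = simulate(0.7, use_efference_copy=False)
robinson = simulate(0.7, use_efference_copy=True)
# Negative feedback alone settles at gain*TARGET/(1+gain), well below target;
# with the efference-copy loop, steady-state eye velocity reaches the target.
print(round(naive[-1], 2), round(robinson[-1], 2))
```

With the copy disabled, the delayed loop also rings at pursuit onset, echoing the transient oscillations the chapter describes.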
|
17
|
Robinson DA. Eye stabilization. PROGRESS IN BRAIN RESEARCH 2022; 267:379-390. [PMID: 35074063 DOI: 10.1016/bs.pbr.2021.10.018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
This chapter summarizes how visual feedback could be used to stabilize the line of sight and optimize vision during attempted fixation of a stationary target. Quantitative features of the oculomotor noise that causes drifts of the eye away from the target are analyzed. The sources of such noise, including the ripples in eye position due to muscle fiber twitches and drifts of the eye away from the visual target due to vestibular imbalance, are examined. Evidence for a promptly responding stabilization system, distinct from optokinetic or pursuit eye movements, is reviewed. The smooth eye movements that negate drifts of the eyes, discussed here, are distinct from microsaccades, which are discussed in the chapter "Behavior of the saccadic system: Metrics of timing and accuracy" by Robinson.
Affiliation(s)
- David A Robinson
- Late Professor of Ophthalmology, Biomedical Engineering and Neuroscience, Johns Hopkins University School of Medicine, Baltimore, MD, United States
|
18
|
Mahanama B, Jayawardana Y, Rengarajan S, Jayawardena G, Chukoskie L, Snider J, Jayarathna S. Eye Movement and Pupil Measures: A Review. FRONTIERS IN COMPUTER SCIENCE 2022. [DOI: 10.3389/fcomp.2021.733531] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Our subjective visual experiences involve complex interactions between our eyes, our brain, and the surrounding world. These interactions give us the sense of sight, color, stereopsis, distance, pattern recognition, motor coordination, and more. The increasing ubiquity of gaze-aware technology brings with it the ability to track gaze and pupil measures with varying degrees of fidelity. With this in mind, a review that considers the various gaze measures becomes increasingly relevant, especially given our ability to make sense of these signals under different spatio-temporal sampling capacities. In this paper, we selectively review prior work on eye movements and pupil measures. We first describe the main oculomotor events studied in the literature and the characteristics of those events that different measures exploit. Next, we review various eye movement and pupil measures from the prior literature. Finally, we discuss our observations based on applications of these measures, the benefits and practical challenges involved, and our recommendations for future eye-tracking research directions.
|
19
|
Abstract
The retrosplenial complex (RSC) plays a crucial role in spatial orientation by computing heading direction and translating between distinct spatial reference frames based on multi-sensory information. While invasive studies allow investigating heading computation in moving animals, established non-invasive analyses of human brain dynamics are restricted to stationary setups. To investigate the role of the RSC in heading computation of actively moving humans, we used a Mobile Brain/Body Imaging approach synchronizing electroencephalography with motion capture and virtual reality. Data from physically rotating participants were contrasted with rotations based only on visual flow. During physical rotation, varying rotation velocities were accompanied by pronounced wide-frequency-band synchronization in the RSC and the parietal and occipital cortices. In contrast, the visual flow rotation condition was associated with pronounced alpha-band desynchronization, replicating previous findings from desktop navigation studies; this desynchronization was notably absent during physical rotation. These results suggest an involvement of the human RSC in heading computation based on visual, vestibular, and proprioceptive input, and warrant revisiting traditional findings of alpha desynchronization in areas of the navigation network during spatial orientation in movement-restricted participants.
|
20
|
Eye tracking to assess concussions: an intra-rater reliability study with healthy youth and adult athletes of selected contact and collision team sports. Exp Brain Res 2021; 239:3289-3302. [PMID: 34467416 DOI: 10.1007/s00221-021-06205-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2021] [Accepted: 08/21/2021] [Indexed: 10/20/2022]
Abstract
Eye movements that are dependent on cognition hold promise for assessing sports-related concussions, but research on the reliability of eye tracking measurements in athletic cohorts is very limited. This observational test-retest study aimed to establish whether eye tracking technology is a reliable tool for assessing sports-related concussions in youth and adult athletes partaking in contact and collision team sports. Forty-three youth (15.4 ± 2.2 years) and 27 adult (22.2 ± 2.9 years) Rugby Union and soccer players completed the study. Eye movements were recorded using an SMI RED250mobile eye tracker while participants completed a test battery, which included self-paced saccade (SPS), fixation stability, memory-guided sequence (MGS), smooth pursuit (SP), and antisaccade (AS) tasks, twice with a 1-week interval. The intra-class correlation coefficient (ICC), standard error of measurement (SEM), and smallest real difference (SRD) were calculated for 47 variables. Seventeen variables achieved an ICC > 0.50. In the adults, saccade count in SPS had good reliability (ICC = 0.86, SRD = 146.6 saccades). In the youth, the average blink duration in MGS had excellent reliability (ICC = 0.99, SRD = 59.4 ms); directional errors in AS tasks and gain of diagonal SP had good reliability (ICC = 0.78 and 0.77, SRD = 25.3 and 395.1%, respectively). Four metrics were thus found to be reliable candidates for further biomarker validity research in contact and collision sport cohorts. Many variables failed to present a sufficient level of robustness for a practical diagnostic tool, possibly because athletic cohorts have higher homogeneity, along with latent adverse effects of undetected concussions and repetitive head impacts. Since the reliability of a measure can influence type II error, effect sizes, and confidence intervals, it is strongly advocated to conduct dedicated reliability evaluations prior to any validity studies.
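The reliability statistics quoted above are related by standard test-retest formulas: SEM = SD·sqrt(1 − ICC), and SRD = 1.96·sqrt(2)·SEM at the 95% level. A small sketch with made-up example numbers (not values from the study, and without claiming these are the exact variants the authors used):

```python
import math

def sem(sd, icc):
    """Standard error of measurement from the sample SD and reliability (ICC)."""
    return sd * math.sqrt(1.0 - icc)

def srd(sem_value, z=1.96):
    """Smallest real difference: the test-retest change that exceeds
    measurement noise at the given confidence level (95% for z=1.96)."""
    return z * math.sqrt(2.0) * sem_value

# Hypothetical metric: between-subject SD of 120 ms and ICC of 0.86
s = sem(120.0, 0.86)
print(round(s, 1), round(srd(s), 1))  # SEM ~44.9 ms, SRD ~124.5 ms
```

The formulas make the paper's point concrete: a high ICC alone is not enough, because a large SD still yields a large SRD, i.e. a large change is needed before a real difference can be claimed.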
|
21
|
Fanning A, Shakhawat A, Raymond JL. Population calcium responses of Purkinje cells in the oculomotor cerebellum driven by non-visual input. J Neurophysiol 2021; 126:1391-1402. [PMID: 34346783 DOI: 10.1152/jn.00715.2020] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The climbing fiber input to the cerebellum conveys instructive signals that can induce synaptic plasticity and learning by triggering complex spikes accompanied by large calcium transients in Purkinje cells. In the cerebellar flocculus, which supports oculomotor learning, complex spikes are driven by image motion on the retina, which could indicate an oculomotor error. In the same neurons, complex spikes can also be driven by non-visual signals. Because the calcium transients accompanying each complex spike can vary in amplitude, even within a given cell, we compared the calcium responses associated with the visual and non-visual inputs to floccular Purkinje cells. The calcium indicator GCaMP6f was selectively expressed in Purkinje cells, and fiber photometry was used to record the calcium responses from a population of Purkinje cells in the flocculus of awake, behaving mice. During visual (optokinetic) stimulation and pairing of vestibular and visual stimuli, the calcium level increased during contraversive retinal image motion. During performance of the vestibulo-ocular reflex in the dark, calcium increased during contraversive head rotation and the associated ipsiversive eye movements. The amplitude of this non-visual calcium response was comparable to that measured in conditions with retinal image motion present that induce oculomotor learning. Thus, the population calcium responses of Purkinje cells in the cerebellar flocculus to visual and non-visual input are similar to what has been reported previously for complex spikes, suggesting that multimodal instructive signals control the synaptic plasticity supporting oculomotor learning.
Affiliation(s)
- Alexander Fanning
- Department of Neurobiology, Stanford University, Stanford, CA, United States
- Amin Shakhawat
- Department of Neurobiology, Stanford University, Stanford, CA, United States
- Jennifer L Raymond
- Department of Neurobiology, Stanford University, Stanford, CA, United States
|
22
|
Gaze Behavior Effect on Gaze Data Visualization at Different Abstraction Levels. SENSORS 2021; 21:s21144686. [PMID: 34300425 PMCID: PMC8309511 DOI: 10.3390/s21144686] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/15/2021] [Revised: 06/28/2021] [Accepted: 07/06/2021] [Indexed: 11/17/2022]
Abstract
Many gaze data visualization techniques intuitively show eye movement together with the visual stimuli. An eye tracker records a large number of eye movements within a short period, so visualizing raw gaze data over the visual stimulus appears cluttered and obscured, making it difficult to gain insight from the visualization. To avoid this complication, fixation identification algorithms are often employed to produce more abstract visualizations. In the past, many scientists have focused on gaze data abstraction with the attention map and analyzed detailed gaze movement patterns with scanpath visualization. Abstract eye movement patterns change dramatically depending on the fixation identification algorithm used in preprocessing, yet it is difficult to determine how fixation identification algorithms affect gaze movement pattern visualizations. Additionally, scientists often spend much time manually adjusting the parameters of fixation identification algorithms. In this paper, we propose a gaze behavior-based data processing method for abstract gaze data visualization. The proposed method classifies raw gaze data using machine learning models for image classification, such as CNN, AlexNet, and LeNet. Additionally, we compare velocity-based identification (I-VT), dispersion-based identification (I-DT), density-based fixation identification, velocity- and dispersion-based identification (I-VDT), and the machine-learning-based, behavior-based models on various visualizations at each abstraction level, such as attention maps, scanpaths, and abstract gaze movement visualizations.
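Of the fixation identification algorithms compared, I-VT is the simplest to state: label each inter-sample movement a fixation if its point-to-point speed falls below a threshold, otherwise a saccade. A minimal sketch with an illustrative threshold and synthetic data (production implementations typically add fixation merging and minimum-duration rules):

```python
import numpy as np

def ivt_classify(x, y, t, velocity_threshold=30.0):
    """I-VT: label each inter-sample interval 'fixation' if its angular
    speed (deg/s) is below the threshold, else 'saccade'."""
    dt = np.diff(t)
    speed = np.hypot(np.diff(x), np.diff(y)) / dt  # point-to-point speed
    return np.where(speed < velocity_threshold, "fixation", "saccade")

# Synthetic 100 Hz gaze trace: steady fixation, one 5-degree jump, fixation
t = np.arange(8) * 0.01
x = np.array([0.0, 0.01, 0.02, 0.03, 5.0, 5.01, 5.02, 5.03])
y = np.zeros(8)
labels = ivt_classify(x, y, t)
print(labels.tolist())  # one 'saccade' interval amid 'fixation' intervals
```

The single `velocity_threshold` parameter is exactly the kind of manually tuned setting the paper's learning-based approach aims to replace.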
|
23
|
Korda A, Zee DS, Wyss T, Zamaro E, Caversaccio MD, Wagner F, Kalla R, Mantokoudis G. Impaired fixation suppression of horizontal vestibular nystagmus during smooth pursuit: pathophysiology and clinical implications. Eur J Neurol 2021; 28:2614-2621. [PMID: 33983645 PMCID: PMC8362184 DOI: 10.1111/ene.14909] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2021] [Revised: 05/04/2021] [Accepted: 05/05/2021] [Indexed: 12/27/2022]
Abstract
Background and purpose: A peripheral spontaneous nystagmus (SN) is typically enhanced or revealed by removing fixation. Conversely, failure of fixation suppression of SN is usually a sign of a central disorder. Based on Luebke and Robinson (Vision Res 1988, vol. 28 (8), pp. 941–946), who suggested that the normal fixation mechanism is disengaged during pursuit, it is hypothesized that vertical tracking in the light would bring out or enhance a horizontal SN. Methods: Eighteen patients with acute vestibular neuritis were studied. Eye movements were recorded using video-oculography at straight-ahead gaze with and without visual fixation, and during smooth pursuit. The slow-phase velocity and the fixation suppression indices of nystagmus (relative to SN in darkness) were compared in each condition. Results: During vertical tracking, the slow-phase velocity of horizontal SN with the eyes near straight-ahead gaze was significantly higher (median 2.7°/s) than under static visual fixation (median 1.2°/s). Likewise, the fixation index was significantly higher (worse suppression) during pursuit (median 48%) than during fixation (median 26%). A release of SN was also suggested during horizontal pursuit, if one assumes superposition of SN on a normal and symmetrical pursuit capability.
Affiliation(s)
- Athanasia Korda
- Department of Otorhinolaryngology, Head and Neck Surgery, Inselspital, University Hospital Bern and University of Bern, Bern, Switzerland
- David S Zee
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Thomas Wyss
- Department of Otorhinolaryngology, Head and Neck Surgery, Inselspital, University Hospital Bern and University of Bern, Bern, Switzerland
- Ewa Zamaro
- Department of Otorhinolaryngology, Head and Neck Surgery, Inselspital, University Hospital Bern and University of Bern, Bern, Switzerland
- Marco D Caversaccio
- Department of Otorhinolaryngology, Head and Neck Surgery, Inselspital, University Hospital Bern and University of Bern, Bern, Switzerland
- Franca Wagner
- University Institute of Diagnostic and Interventional Neuroradiology, Inselspital, University Hospital Bern and University of Bern, Bern, Switzerland
- Roger Kalla
- Department of Neurology, Inselspital, University Hospital Bern and University of Bern, Bern, Switzerland
- Georgios Mantokoudis
- Department of Otorhinolaryngology, Head and Neck Surgery, Inselspital, University Hospital Bern and University of Bern, Bern, Switzerland
|
24
|
A covered eye fails to follow an object moving in depth. Sci Rep 2021; 11:10983. [PMID: 34040063 PMCID: PMC8154899 DOI: 10.1038/s41598-021-90371-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2021] [Accepted: 05/06/2021] [Indexed: 11/08/2022] Open
Abstract
To clearly view approaching objects, the eyes rotate inward (vergence), and the intraocular lenses focus (accommodation). Current ocular control models assume both eyes are driven by a unitary vergence command and a unitary accommodation command that causally interact. The models typically describe discrete gaze shifts to non-accommodative targets performed under laboratory conditions. We probe these unitary signals using a physical stimulus moving in depth on the midline while recording vergence and accommodation simultaneously from both eyes in normal observers. Under monocular viewing, retinal disparity is removed, leaving only monocular cues for interpreting the object's motion in depth. The viewing eye always followed the target's motion. However, the occluded eye did not follow the target and, surprisingly, rotated out of phase with it. In contrast, accommodation in both eyes was synchronized with the target under monocular viewing. The results challenge existing theories of a unitary vergence command and of a causal accommodation-vergence linkage.
|
25
|
Adaptive Response Behavior in the Pursuit of Unpredictably Moving Sounds. eNeuro 2021; 8:ENEURO.0556-20.2021. [PMID: 33875456 PMCID: PMC8116108 DOI: 10.1523/eneuro.0556-20.2021] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Revised: 03/05/2021] [Accepted: 03/13/2021] [Indexed: 11/21/2022] Open
Abstract
Although moving sound-sources abound in natural auditory scenes, it is not clear how the human brain processes auditory motion. Previous studies have indicated that, although ocular localization responses to stationary sounds are quite accurate, ocular smooth pursuit of moving sounds is very poor. We here demonstrate that human subjects faithfully track a sound’s unpredictable movements in the horizontal plane with smooth-pursuit responses of the head. Our analysis revealed that the stimulus–response relation was well described by an under-damped passive, second-order low-pass filter in series with an idiosyncratic, fixed, pure delay. The model contained only two free parameters: the system’s damping coefficient, and its central (resonance) frequency. We found that the latter remained constant at ∼0.6 Hz throughout the experiment for all subjects. Interestingly, the damping coefficient systematically increased with trial number, suggesting the presence of an adaptive mechanism in the auditory pursuit system (APS). This mechanism functions even for unpredictable sound-motion trajectories endowed with fixed, but covert, frequency characteristics in open-loop tracking conditions. We conjecture that the APS optimizes a trade-off between response speed and effort. Taken together, our data support the existence of a pursuit system for auditory head-tracking, which would suggest the presence of a neural representation of a spatial auditory fovea (AF).
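The fitted stimulus-response model (an under-damped second-order low-pass filter in series with a pure delay) can be sketched as follows. The ~0.6 Hz central frequency follows the abstract; the delay value and the two damping coefficients are illustrative assumptions. Raising the damping coefficient, as reportedly happened over trials, removes the overshoot of the under-damped response:

```python
import numpy as np

F0 = 0.6   # central (resonance) frequency in Hz, ~constant across subjects

def head_tracking_response(stimulus, dt, zeta, f0=F0, delay=0.25):
    """Second-order low-pass filter (natural frequency f0, damping zeta)
    in series with a pure delay; semi-implicit Euler integration."""
    w0 = 2.0 * np.pi * f0
    lag = int(round(delay / dt))
    y = np.zeros_like(stimulus)
    v = 0.0  # first derivative of the response
    for i in range(1, len(stimulus)):
        u = stimulus[i - lag] if i >= lag else 0.0  # delayed input
        v += (w0 ** 2 * (u - y[i - 1]) - 2.0 * zeta * w0 * v) * dt
        y[i] = y[i - 1] + v * dt
    return y

dt = 0.001
step = np.ones(int(10.0 / dt))                       # unit step in target azimuth
under = head_tracking_response(step, dt, zeta=0.5)   # under-damped: overshoots
over = head_tracking_response(step, dt, zeta=1.2)    # higher damping: no overshoot
print(round(under.max(), 2), round(over.max(), 2))
```

The under-damped response overshoots the target by the classic factor exp(−πζ/√(1−ζ²)), about 16% for ζ = 0.5, while the over-damped response approaches it monotonically.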
|
26
|
Jin H, Ma M, Liu H, Li M, Zhang H. Saccadic intrusion recognition intelligent algorithms based on different eye movement date classification methods. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2021. [DOI: 10.3233/jifs-189953] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
The saccadic intrusion recognition algorithm proposed by Tokuda does not account for individual differences, and its accuracy is limited; we therefore improve it in three respects. First, the algorithm is tailored to handle three types of missing data (low-confidence samples, gaze off the screen, and missing timestamps), which improves its fault tolerance. Second, following the E&K algorithm, the improved algorithm uses the ratio of saccade speed to overall speed to determine the adaptive speed threshold accurately, which improves its sensitivity. Finally, the algorithm adds an upper amplitude limit for identifying regular saccades, filtering out large-amplitude rapid return movements. Together, these three improvements boost the overall accuracy of the algorithm. In addition, data from an N-back task experiment, processed with the improved algorithm, supported conclusions consistent with previous findings. The experiments also compare the improved algorithm with DBSCAN in identifying fixation points; the results demonstrate that the improved algorithm is significantly better than DBSCAN with regard to the ordering and sensitivity of data points.
Affiliation(s)
- Huibin Jin
- General Aviation College, Civil Aviation University of China, Tianjin, China
- Mingxia Ma
- Flight Technology College, Civil Aviation University of China, Tianjin, China
- Haibo Liu
- School of Civil Aviation and Flight, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Mengjie Li
- Administrative Department, Civil Aviation University of China, Tianjin, China
- Hong Zhang
- General Aviation College, Civil Aviation University of China, Tianjin, China
|
27
|
Souto D, Kerzel D. Visual selective attention and the control of tracking eye movements: a critical review. J Neurophysiol 2021; 125:1552-1576. [DOI: 10.1152/jn.00145.2019] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
People’s eyes are directed at objects of interest with the aim of acquiring visual information. However, processing this information is constrained in capacity, requiring task-driven and salience-driven attentional mechanisms to select a few among the many available objects. A wealth of behavioral and neurophysiological evidence has demonstrated that visual selection and the motor selection of saccade targets rely on shared mechanisms. This coupling supports the premotor theory of visual attention put forth more than 30 years ago, postulating visual selection as a necessary stage in motor selection. In this review, we examine to what extent the coupling of visual and motor selection observed with saccades is replicated during ocular tracking. Ocular tracking combines catch-up saccades and smooth pursuit to foveate a moving object. We find evidence that ocular tracking requires visual selection of the speed and direction of the moving target, but the position of the motion signal may not coincide with the position of the pursuit target. Further, visual and motor selection can be spatially decoupled when pursuit is initiated (open-loop pursuit). We propose that a main function of coupled visual and motor selection is to serve the coordination of catch-up saccades and pursuit eye movements. A simple race-to-threshold model is proposed to explain the variable coupling of visual selection during pursuit, catch-up and regular saccades, while generating testable predictions. We discuss pending issues, such as disentangling visual selection from preattentive visual processing and response selection, and the pinpointing of visual selection mechanisms, which have begun to be addressed in the neurophysiological literature.
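The race-to-threshold idea can be illustrated with a generic two-accumulator race (a hypothetical sketch, not the authors' fitted model): evidence for each candidate response accumulates noisily, and whichever accumulator crosses threshold first determines both the selected response and its latency. The rates, noise level, and threshold below are made up for illustration.

```python
import random

def race_trial(rate_a, rate_b, threshold=1.0, dt=0.001, noise=0.1, rng=None):
    """Two noisy linear accumulators race to a common threshold.
    Returns (winning response, crossing time in seconds)."""
    rng = rng or random.Random()
    a = b = t = 0.0
    while a < threshold and b < threshold:
        # drift plus Gaussian diffusion noise (scaled by sqrt(dt))
        a += rate_a * dt + noise * rng.gauss(0.0, 1.0) * dt ** 0.5
        b += rate_b * dt + noise * rng.gauss(0.0, 1.0) * dt ** 0.5
        t += dt
    return ("pursuit" if a >= threshold else "saccade", t)

rng = random.Random(0)
wins = sum(race_trial(8.0, 4.0, rng=rng)[0] == "pursuit" for _ in range(200))
print(wins)  # the faster-accumulating response wins on nearly every trial
```

A testable prediction of such models is that conditions slowing one accumulator should both shift choice probability and lengthen the winner's latency distribution.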
Affiliation(s)
- David Souto
- Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, United Kingdom
- Dirk Kerzel
- Faculté de Psychologie et des Sciences de l’Education, University of Geneva, Geneva, Switzerland
|
28
|
Fooken J, Kreyenmeier P, Spering M. The role of eye movements in manual interception: A mini-review. Vision Res 2021; 183:81-90. [PMID: 33743442 DOI: 10.1016/j.visres.2021.02.007] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2020] [Revised: 01/28/2021] [Accepted: 02/04/2021] [Indexed: 10/21/2022]
Abstract
When we catch a moving object in mid-flight, our eyes and hands are directed toward the object. Yet, the functional role of eye movements in guiding interceptive hand movements is not yet well understood. This review synthesizes emergent views on the importance of eye movements during manual interception with an emphasis on laboratory studies published since 2015. We discuss the role of eye movements in forming visual predictions about a moving object, and for enhancing the accuracy of interceptive hand movements through feedforward (extraretinal) and feedback (retinal) signals. We conclude by proposing a framework that defines the role of human eye movements for manual interception accuracy as a function of visual certainty and object motion predictability.
Affiliation(s)
- Jolande Fooken
- Department of Psychology and Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada
- Philipp Kreyenmeier
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada
- Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada; Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, Canada; Institute for Computing, Information, and Cognitive Systems, University of British Columbia, Vancouver, Canada
|
29
|
Schröder R, Kasparbauer AM, Meyhöfer I, Steffens M, Trautner P, Ettinger U. Functional connectivity during smooth pursuit eye movements. J Neurophysiol 2020; 124:1839-1856. [PMID: 32997563 DOI: 10.1152/jn.00317.2020] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Smooth pursuit eye movements (SPEM) hold the image of a slowly moving stimulus on the fovea. The neural system underlying SPEM primarily includes visual, parietal, and frontal areas. In the present study, we investigated how these areas are functionally coupled and how these couplings are influenced by target motion frequency. To this end, healthy participants (n = 57) were instructed to follow a sinusoidal target stimulus moving horizontally at two different frequencies (0.2 Hz, 0.4 Hz). Eye movements and blood oxygen level-dependent (BOLD) activity were recorded simultaneously. Functional connectivity of the key areas of the SPEM network was investigated with a psychophysiological interaction (PPI) approach: we analyzed how activity in five eye-movement-related seed regions (lateral geniculate nucleus, V1, V5, posterior parietal cortex, frontal eye fields) relates to activity in other parts of the brain during SPEM. The behavioral results showed clear deterioration of SPEM performance at the higher target frequency. BOLD activity during SPEM versus fixation occurred in a geniculo-occipito-parieto-frontal network, replicating previous findings. The PPI analysis yielded widespread, partially overlapping networks. In particular, the frontal eye fields and posterior parietal cortex showed task-dependent connectivity to large parts of the cortex, whereas other seed regions demonstrated more regionally focused connectivity. Higher target frequency was associated with stronger activations in visual areas but had no effect on functional connectivity. In summary, the results confirm and extend previous knowledge regarding the neural mechanisms underlying SPEM and provide a valuable basis for further investigations, such as in patients with SPEM impairments and known alterations in brain connectivity. NEW & NOTEWORTHY: This study provides a comprehensive investigation of blood oxygen level-dependent (BOLD) functional connectivity during smooth pursuit eye movements. Results from a large sample of healthy participants suggest that key oculomotor regions interact closely with each other but also with regions not primarily associated with eye movements. Understanding functional connectivity during smooth pursuit is important, given its potential role as an endophenotype of psychoses.
Affiliation(s)
- Inga Meyhöfer
- Department of Psychology, University of Bonn, Bonn, Germany
- Maria Steffens
- Department of Psychology, University of Bonn, Bonn, Germany
- Peter Trautner
- Institute for Experimental Epileptology and Cognition Research, University of Bonn, Bonn, Germany; Core Facility MRI, Bonn Technology Campus, University of Bonn, Bonn, Germany
|
30
|
Anderson SR, Porrill J, Dean P. World Statistics Drive Learning of Cerebellar Internal Models in Adaptive Feedback Control: A Case Study Using the Optokinetic Reflex. Front Syst Neurosci 2020; 14:11. [PMID: 32269515 PMCID: PMC7111124 DOI: 10.3389/fnsys.2020.00011] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2019] [Accepted: 02/07/2020] [Indexed: 01/06/2023] Open
Abstract
The cerebellum is widely implicated in having an important role in adaptive motor control. Many of the computational studies on cerebellar motor control to date have focused on the associated architecture and learning algorithms in an effort to further understand cerebellar function. In this paper we switch focus to the signals driving cerebellar adaptation that arise through different motor behavior. To do this, we investigate computationally the contribution of the cerebellum to the optokinetic reflex (OKR), a visual feedback control scheme for image stabilization. We develop a computational model of the adaptation of the cerebellar response to the world velocity signals that excite the OKR (where world velocity signals are used to emulate head velocity signals when studying the OKR in head-fixed experimental laboratory conditions). The results show that the filter learnt by the cerebellar model is highly dependent on the power spectrum of the colored noise world velocity excitation signal. Thus, the key finding here is that the cerebellar filter is determined by the statistics of the OKR excitation signal.
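The key finding, that an adaptive filter converges to different solutions under different excitation statistics, can be illustrated with a generic normalized-LMS sketch (not the authors' cerebellar model): a 2-tap learner identifying a 4-tap "plant" settles on different weights for white versus low-pass ("colored") input, because the least-squares solution of an under-parameterized model depends on the input autocorrelation. All tap values and step sizes are illustrative:

```python
import numpy as np

def lms_identify(x, plant, n_taps=2, mu=0.01, eps=1e-8):
    """Fit an n_taps FIR approximation of `plant` by normalized LMS."""
    w = np.zeros(n_taps)
    d = np.convolve(x, plant)[:len(x)]            # output of the true plant
    for t in range(n_taps - 1, len(x)):
        u = x[t - n_taps + 1:t + 1][::-1]         # [x[t], x[t-1], ...]
        e = d[t] - w @ u                          # prediction error
        w += mu * e * u / (u @ u + eps)           # normalized LMS update
    return w

rng = np.random.default_rng(1)
plant = np.array([1.0, 0.5, 0.25, 0.125])         # 4-tap plant, fit with 2 taps

white = rng.standard_normal(20000)                        # white "world velocity"
colored = np.convolve(white, np.ones(8) / 8.0)[:20000]    # low-pass "world velocity"

w_white = lms_identify(white, plant)
w_colored = lms_identify(colored, plant)
print(w_white, w_colored)   # the learned taps track the excitation spectrum
```

Under white excitation the learner recovers the plant's first two taps; under colored excitation the weights shift to exploit the input's temporal correlations, mirroring the paper's point that the cerebellar filter is set by the statistics of the OKR excitation signal.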
Affiliation(s)
- Sean R. Anderson
- Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, United Kingdom
- John Porrill
- Department of Psychology, University of Sheffield, Sheffield, United Kingdom
- Paul Dean
- Department of Psychology, University of Sheffield, Sheffield, United Kingdom
31
Behling S, Lisberger SG. Different mechanisms for modulation of the initiation and steady-state of smooth pursuit eye movements. J Neurophysiol 2020; 123:1265-1276. [PMID: 32073944 DOI: 10.1152/jn.00710.2019] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Smooth pursuit eye movements are used by primates to track moving objects. They are initiated by sensory estimates of target speed represented in the middle temporal (MT) area of extrastriate visual cortex and then supported by motor feedback to maintain steady-state eye speed at target speed. Here, we show that reducing the motion coherence of a patch of dots used as a tracking target degrades eye speed both at the initiation of pursuit and during steady-state tracking, when eye speed reaches an asymptote well below target speed. The deficits are quantitatively different between the motor-supported steady state of pursuit and the sensory-driven initiation of pursuit, suggesting separate mechanisms. The deficit in visually guided pursuit initiation could not explain the deficit in steady-state tracking. Pulses of target speed during steady-state tracking revealed lower sensitivities to image motion across the retina for lower values of dot coherence. However, sensitivity was not zero, implying that visual motion should still be driving eye velocity toward target velocity. When we changed dot coherence from 100% to lower values during accurate steady-state pursuit, we observed larger eye decelerations for lower coherences, as expected if motor feedback was reduced in gain. A simple pursuit model accounts for our data based on separate modulation of the strength of visual-motor transmission and motor feedback. We suggest that reduced dot coherence allows us to observe evidence for separate modulations of the gain of visual-motor transmission during pursuit initiation and of the motor corollary discharges that comprise eye velocity memory and support steady-state tracking.
NEW & NOTEWORTHY We exploit low-coherence patches of dots to control the initiation and steady state of smooth pursuit eye movements and show that these two phases of movement are modulated separately by the reliability of visual motion signals.
We conclude that the neural circuit for pursuit includes separate modulation of the strength of visual-motor transmission for movement initiation and of eye velocity positive feedback to support steady-state tracking.
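A pursuit model of this general kind can be caricatured in a few lines: eye velocity is driven by visual-motor transmission of retinal slip plus positive feedback of an eye-velocity memory, each path with its own gain. A sketch under assumed gain values (the numbers are illustrative, not fitted to the paper's data):

```python
import numpy as np

def simulate_pursuit(g_vis, g_mem, target=10.0, steps=300):
    """Discrete-time pursuit: eye velocity = memory feedback + visual drive.

    g_vis: gain of visual-motor transmission (retinal slip -> eye velocity)
    g_mem: gain of the eye-velocity memory (corollary discharge) loop
    """
    eye = 0.0
    trace = []
    for _ in range(steps):
        slip = target - eye                  # retinal image velocity
        eye = g_mem * eye + g_vis * slip     # the two paths modulated separately
        trace.append(eye)
    return np.array(trace)

high = simulate_pursuit(g_vis=0.30, g_mem=0.95)   # high coherence: both gains high
low = simulate_pursuit(g_vis=0.15, g_mem=0.80)    # low coherence: both gains reduced

# Steady state is g_vis*target / (1 - g_mem + g_vis): below target, more so at low gains
print(high[-1], low[-1])
```

Reducing g_mem alone also lowers the asymptote, which is the flavor of result seen when coherence is dropped during accurate steady-state tracking.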
Affiliation(s)
- Stuart Behling
- Department of Neurobiology, Duke University School of Medicine, Durham, North Carolina
- Stephen G Lisberger
- Department of Neurobiology, Duke University School of Medicine, Durham, North Carolina
32
Badler JB, Watamaniuk SNJ, Heinen SJ. A common mechanism modulates saccade timing during pursuit and fixation. J Neurophysiol 2019; 122:1981-1988. [PMID: 31533016 DOI: 10.1152/jn.00198.2019] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Smooth pursuit is punctuated by catch-up saccades, which are thought to automatically correct sensory errors in retinal position and velocity. Recent studies have shown that the timing of catch-up saccades is susceptible to cognitive modulation, as is the timing of fixational microsaccades. Is the timing of catch-up saccades and microsaccades thus modulated by the same mechanism? Here, we test directly whether pursuit catch-up saccades and fixational microsaccades exhibit the same temporal pattern of task-related bursts and subsidence. Observers pursued a linear array of 15 alphanumeric characters that translated across the screen and simultaneously performed a character identification task on it. At a fixed time, a cue briefly surrounded the central element to specify it as the pursuit target. After a random delay, a probe (E or 3) appeared briefly at a randomly selected character location, and observers identified it. For comparison, a fixation condition was also tested with trial parameters identical to the pursuit condition, except that the array remained stationary. We found that during both pursuit and fixation tasks, saccades paused after the cue and then rebounded as expected but also subsided in anticipation of the task. The time courses of the reactive pause, rebound, and anticipatory subsidence were similar, and idiosyncratic subject behavior was consistent across pursuit and fixation. The results provide evidence for a common mechanism of saccade control during pursuit and fixation, which is predictive as well as reactive and has an identifiable temporal signature in individual observers.
NEW & NOTEWORTHY During natural scene viewing, voluntary saccades reorient the fovea to different locations for high-acuity viewing. Less is known about small "microsaccades" that also occur when fixating stationary objects and "catch-up saccades" that occur during smooth pursuit of moving objects.
We provide evidence that microsaccade and catch-up saccade frequencies are generally modulated by the same mechanism. Furthermore, on a finer time scale the mechanism operates differently in different observers, suggesting that neural saccade generators are individually unique.
Affiliation(s)
- Jeremy B Badler
- Smith-Kettlewell Eye Research Institute, San Francisco, California
- Scott N J Watamaniuk
- Smith-Kettlewell Eye Research Institute, San Francisco, California; Wright State University, Dayton, Ohio
- Stephen J Heinen
- Smith-Kettlewell Eye Research Institute, San Francisco, California
33
Mcilreavy L, Freeman TCA, Erichsen JT. Two-Dimensional Analysis of Smooth Pursuit Eye Movements Reveals Quantitative Deficits in Precision and Accuracy. Transl Vis Sci Technol 2019; 8:7. [PMID: 31588372 PMCID: PMC6753966 DOI: 10.1167/tvst.8.5.7] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2018] [Accepted: 06/23/2019] [Indexed: 01/30/2023] Open
Abstract
Purpose: Small moving targets are followed by pursuit eye movements, with success ubiquitously defined by gain. Gain quantifies accuracy, rather than precision, and only for eye movements along the target trajectory. Analogous to previous studies of fixation, we analyzed pursuit performance in two dimensions as a function of target direction, velocity, and amplitude. As a subsidiary experiment, we compared pursuit performance against that of fixation.
Methods: Eye position was recorded from 15 observers during pursuit. The target was a 0.4° dot that moved across a large screen at 8°/s or 16°/s, either horizontally or vertically, through peak-to-peak amplitudes of 8°, 16°, or 32°. Two-dimensional eye velocity was expressed relative to the target, and a bivariate probability density function computed to obtain accuracy and precision. As a comparison, identical metrics were derived from fixation data.
Results: For all target directions, eye velocity was less precise along the target trajectory. Eye velocities orthogonal to the target trajectory were more accurate during vertical pursuit than horizontal. Pursuit accuracy and precision along and orthogonal to the target trajectory decreased at the higher target velocity. Accuracy along the target trajectory decreased with smaller target amplitudes.
Conclusions: Orthogonal to the target trajectory, pursuit was inaccurate and imprecise. Compared to fixation, pursuit was less precise and less accurate even when following the stimulus that gave the best performance.
Translational Relevance: This analytical approach may help the detection of subtle deficits in slow-phase eye movements that could be used as biomarkers for disease progression and/or treatment.
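Two-dimensional accuracy and precision measures of this kind can be sketched simply: accuracy as the magnitude of the mean eye-minus-target velocity vector, precision as a bivariate contour ellipse area derived from the sample covariance. A toy example on simulated data; the bias and noise values are invented, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated eye-velocity errors relative to the target (deg/s): a small bias
# along the trajectory (x) plus anisotropic noise, worse along the trajectory
err = rng.standard_normal((5000, 2)) * np.array([1.5, 0.6]) + np.array([0.8, 0.0])

accuracy = np.linalg.norm(err.mean(axis=0))        # magnitude of the mean offset
cov = np.cov(err.T)
k = -2.0 * np.log(1.0 - 0.682)                     # Mahalanobis radius^2 for 68.2%
bcea = np.pi * k * np.sqrt(np.linalg.det(cov))     # bivariate contour ellipse area
print(accuracy, bcea)
```

Because the determinant of the covariance captures spread in both dimensions jointly, this precision index penalizes orthogonal scatter that a one-dimensional gain measure would miss.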
Affiliation(s)
- Lee Mcilreavy
- School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK
34
Stone LS, Tyson TL, Cravalho PF, Feick NH, Flynn-Evans EE. Distinct pattern of oculomotor impairment associated with acute sleep loss and circadian misalignment. J Physiol 2019; 597:4643-4660. [PMID: 31389043 PMCID: PMC6852126 DOI: 10.1113/jp277779] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2019] [Accepted: 06/20/2019] [Indexed: 11/29/2022] Open
Abstract
Key points: Inadequate sleep and irregular work schedules have not only adverse consequences for individual health and well-being, but also enormous economic and safety implications for society as a whole. This study demonstrates that visual motion processing and coordinated eye movements are significantly impaired when performed after sleep loss and during the biological night, and thus may be contributing to human error and accidents. Because affected individuals are often unaware of their sensorimotor and cognitive deficits, there is a critical need for non-invasive, objective indicators of mild, yet potentially unsafe, impairment due to disrupted sleep or biological rhythms. Our findings show that a set of eye-movement measures can be used to provide sensitive and reliable indicators of such mild neural impairments.
Abstract: Sleep loss and circadian misalignment have long been known to impair human cognitive and motor performance, with significant societal and health consequences. It is well known that human reaction time to a visual cue is impaired following sleep loss and circadian misalignment, but it has remained unclear how more complex visuomotor control behaviour is altered under these conditions. In this study, we measured 14 parameters of the voluntary ocular tracking response of 12 human participants (six females) to systematically examine the effects of sleep loss and circadian misalignment using a constant routine 24-h acute sleep-deprivation paradigm. The combination of state-of-the-art oculometric and sleep-research methodologies allowed us to document, for the first time, large changes in many components of pursuit, saccades and visual motion processing as a function of time awake and circadian phase. Further, we observed a pattern of impairment across our set of oculometric measures that is qualitatively different from that observed previously with other mild neural impairments. We conclude that dynamic vision and visuomotor control exhibit a distinct pattern of impairment linked with time awake and circadian phase. Therefore, a sufficiently broad set of oculometric measures could provide a sensitive and specific behavioural biomarker of acute sleep loss and circadian misalignment. We foresee potential applications of such oculometric biomarkers assisting in the assessment of readiness-to-perform higher risk tasks and in the characterization of sub-clinical neural impairment in the face of a multiplicity of potential risk factors, including disrupted sleep and circadian rhythms.
Affiliation(s)
- Leland S Stone
- Visuomotor Control Laboratory, Human Systems Integration Division, NASA Ames Research Center, Moffett Field, CA, USA
- Terence L Tyson
- Visuomotor Control Laboratory, Human Systems Integration Division, NASA Ames Research Center, Moffett Field, CA, USA
- Erin E Flynn-Evans
- Fatigue Countermeasures Laboratory, Human Systems Integration Division, NASA Ames Research Center, Moffett Field, CA, USA
35
Abstract
Smooth pursuit eye movements maintain the line of sight on smoothly moving targets. Although often studied as a response to sensory motion, pursuit anticipates changes in motion trajectories, thus reducing harmful consequences due to sensorimotor processing delays. Evidence for predictive pursuit includes (a) anticipatory smooth eye movements (ASEM) in the direction of expected future target motion that can be evoked by perceptual cues or by memory for recent motion, (b) pursuit during periods of target occlusion, and (c) improved accuracy of pursuit with self-generated or biologically realistic target motions. Predictive pursuit has been linked to neural activity in the frontal cortex and in sensory motion areas. As behavioral and neural evidence for predictive pursuit grows and statistically based models augment or replace linear systems approaches, pursuit is being regarded less as a reaction to immediate sensory motion and more as a predictive response, with retinal motion serving as one of a number of contributing cues.
Affiliation(s)
- Eileen Kowler
- Department of Psychology, Rutgers University, Piscataway, New Jersey 08854, USA
- Jason F Rubinstein
- Department of Psychology, Rutgers University, Piscataway, New Jersey 08854, USA
- Elio M Santos
- Department of Psychology, Rutgers University, Piscataway, New Jersey 08854, USA; Current affiliation: Department of Psychology, State University of New York, College at Oneonta, Oneonta, New York 13820, USA
- Jie Wang
- Department of Psychology, Rutgers University, Piscataway, New Jersey 08854, USA
36
Denes G, Maruszczyk K, Ash G, Mantiuk RK. Temporal Resolution Multiplexing: Exploiting the limitations of spatio-temporal vision for more efficient VR rendering. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2019; 25:2072-2082. [PMID: 30794178 DOI: 10.1109/tvcg.2019.2898741] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Rendering in virtual reality (VR) requires substantial computational power to generate 90 frames per second at high resolution with good-quality antialiasing. The video data sent to a VR headset requires high bandwidth, achievable only on dedicated links. In this paper we explain how rendering requirements and transmission bandwidth can be reduced using a conceptually simple technique that integrates well with existing rendering pipelines. Every even-numbered frame is rendered at a lower resolution, and every odd-numbered frame is kept at high resolution but is modified in order to compensate for the previous loss of high spatial frequencies. When the frames are seen at a high frame rate, they are fused and perceived as high-resolution and high-frame-rate animation. The technique relies on the limited ability of the visual system to perceive high spatio-temporal frequencies. Despite its conceptual simplicity, correct execution of the technique requires a number of non-trivial steps: display photometric temporal response must be modeled, flicker and motion artifacts must be avoided, and the generated signal must not exceed the dynamic range of the display. Our experiments, performed on a high-frame-rate LCD monitor and OLED-based VR headsets, explore the parameter space of the proposed technique and demonstrate that its perceived quality is indistinguishable from full-resolution rendering. The technique is an attractive alternative to reprojection and resolution reduction of all frames.
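The core trick can be demonstrated on a one-dimensional "scanline": render even frames at low resolution L, and set each odd frame to 2H − L (clipped to the display range) so that the temporal average perceived at a high frame rate reconstructs the high-resolution signal H. A minimal sketch; the box-blur model and test signal are placeholders, not the paper's rendering pipeline:

```python
import numpy as np

def low_res(frame, factor=4):
    """Crude low-resolution rendering: box-downsample, then upsample."""
    return np.repeat(frame.reshape(-1, factor).mean(axis=1), factor)

# One scanline of a high-resolution frame, kept mid-range to avoid clipping
high = 0.5 + 0.2 * np.sin(np.linspace(0, 12 * np.pi, 256))

even = low_res(high)                          # even frame: low-resolution render
odd = np.clip(2.0 * high - even, 0.0, 1.0)    # odd frame: compensates lost detail

fused = 0.5 * (even + odd)                    # what the visual system averages
print(np.max(np.abs(fused - high)))           # ~0 where display range not exceeded
```

When 2H − L exceeds the display's dynamic range, the clip makes the compensation incomplete; managing that dynamic-range constraint (along with flicker and motion artifacts) is one of the non-trivial steps the abstract mentions.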
37
Cochrane GD, Christy JB, Almutairi A, Busettini C, Swanson MW, Weise KK. Visuo-oculomotor Function and Reaction Times in Athletes with and without Concussion. Optom Vis Sci 2019; 96:256-265. [PMID: 30907863 PMCID: PMC6445703 DOI: 10.1097/opx.0000000000001364] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
SIGNIFICANCE: Oculomotor tests in concussion commonly show impairment in smooth pursuit and saccadic function. Homing in on the systems likely to be affected by concussion will streamline the use of oculomotor function as a supplemental diagnostic and prognostic tool, as well as improve our understanding of the pathophysiology of concussion.
PURPOSE: This study investigates oculomotor function in concussed and healthy collegiate athletes and determines the test-retest reliability of those measurements.
METHODS: Eighty-seven healthy athletes were recruited from a U.S. Division 1 sports university and completed a 30-minute vestibular ocular testing battery in an enclosed rotary chair system equipped with 100-Hz eye-tracking goggles. Forty-three individuals completed the battery twice. Twenty-eight individuals with a current diagnosis of concussion also completed the battery. All participants were aged 18 to 24 years. Bivariate statistical tests examined differences in scores across groups, and intraclass correlation coefficients were computed to test reliability.
RESULTS: Concussed individuals had significantly longer saccadic, visual, and dual-task reaction times and reduced saccadic accuracy. There was no difference in optokinetic reflex gain, but few concussed individuals tolerated the task. Reaction time latencies and optokinetic gain showed moderate test-retest reliability. Smooth pursuit tasks and saccadic accuracies showed poor test-retest reliability.
CONCLUSIONS: Saccadic latency was the most sensitive oculomotor function to change after concussion and was reliable over time. Saccadic accuracy was significantly lower in the concussed group but had poor retest reliability. Optokinetic gain may warrant more investigation because of its high test-retest reliability and symptom provocation in concussion, despite not showing a significant difference between groups.
Affiliation(s)
- Jennifer B Christy
- Department of Physical Therapy, University of Alabama at Birmingham, Birmingham, Alabama
- Anwar Almutairi
- Department of Physical Therapy, University of Alabama at Birmingham, Birmingham, Alabama
- Claudio Busettini
- Department of Optometry and Vision Science, University of Alabama at Birmingham, Birmingham, Alabama
- Vision Science Research Center, University of Alabama at Birmingham, Birmingham, Alabama
- Mark W Swanson
- Department of Optometry and Vision Science, University of Alabama at Birmingham, Birmingham, Alabama
- Katherine K Weise
- Department of Optometry and Vision Science, University of Alabama at Birmingham, Birmingham, Alabama
38
Crevecoeur F, Gevers M. Filtering Compensation for Delays and Prediction Errors during Sensorimotor Control. Neural Comput 2019; 31:738-764. [DOI: 10.1162/neco_a_01170] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
Compensating for sensorimotor noise and for temporal delays has been identified as a major function of the nervous system. Although these aspects have often been described separately in the frameworks of optimal cue combination or motor prediction during movement planning, control-theoretic models suggest that these two operations are performed simultaneously, and mounting evidence supports that motor commands are based on sensory predictions rather than sensory states. In this letter, we study the benefit of state estimation for predictive sensorimotor control. More precisely, we combine explicit compensation for sensorimotor delays and optimal estimation derived in the context of Kalman filtering. We show, based on simulations of human-inspired eye and arm movements, that filtering sensory predictions improves the stability margin of the system against prediction errors due to low-dimensional predictions or to errors in the delay estimate. These simulations also highlight that prediction errors qualitatively account for a broad variety of movement disorders typically associated with cerebellar dysfunctions. We suggest that adaptive filtering in cerebellum, instead of often-assumed feedforward predictions, may achieve simple compensation for sensorimotor delays and support stable closed-loop control of movements.
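The combination of optimal estimation and explicit delay compensation can be sketched for a scalar system: run a Kalman filter on the delayed measurements, then extrapolate the delayed estimate to "now" through a buffer of the intervening motor commands, Smith-predictor style. This is an illustration of the general scheme, not the authors' model; the plant, noise variances, delay, and command signal are all invented:

```python
import numpy as np

rng = np.random.default_rng(4)
a, b = 0.95, 0.1              # plant: x[t+1] = a*x[t] + b*u[t] + w[t]
q, r = 1e-4, 1e-2             # process / measurement noise variances
delay = 5                     # sensory feedback delay (steps)

T = 400
u = 0.2 * np.sin(0.05 * np.arange(T))        # known motor commands
x = np.zeros(T)                              # true state
for t in range(T - 1):
    x[t + 1] = a * x[t] + b * u[t] + rng.normal(0.0, np.sqrt(q))
y = x + rng.normal(0.0, np.sqrt(r), T)       # y[t] becomes available at t + delay

xhat, P = 0.0, 1.0                           # KF estimate of the *delayed* state
pred = np.zeros(T)
for t in range(delay, T):
    K = P / (P + r)                          # measurement update with y[t-delay]
    xhat += K * (y[t - delay] - xhat)
    P *= 1.0 - K
    z = xhat                                 # extrapolate to the present through
    for k in range(t - delay, t):            # the buffered commands u[t-delay..t-1]
        z = a * z + b * u[k]
    pred[t] = z
    xhat = a * xhat + b * u[t - delay]       # time update of the delayed estimate
    P = a * a * P + q

rms = np.sqrt(np.mean((pred[delay:] - x[delay:]) ** 2))
print(rms)    # well below the measurement noise std (0.1)
```

Filtering before extrapolating is the point: the prediction inherits the filter's reduced variance, rather than amplifying raw measurement noise through the forward model.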
Affiliation(s)
- F. Crevecoeur
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics, University of Louvain, Louvain-la-Neuve 1348, Belgium, and Institute of Neuroscience, University of Louvain, Brussels 1200, Belgium
- M. Gevers
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics, University of Louvain, Louvain-la-Neuve 1348, Belgium
39
Abstract
Opsoclonus/flutter (O/F) is a rare disorder of the saccadic system. Previously, we modeled O/F that developed in a patient following abuse of anabolic steroids. That model, as in all models of the saccadic system, generates commands to make a change in eye position. Recently, we saw a patient who developed a unique form of opsoclonus following a concussion. The patient had postsaccadic ocular flutter in both directions of gaze, and opsoclonus during fixation and pursuit in the left hemifield. A new model of the saccadic system is needed to account for this gaze-position dependent O/F. We started with our prior model, which contains two key elements, mutual inhibition between inhibitory burst neurons on both sides and a prolonged reactivation time of the omnipause neurons (OPNs). We included new inputs to the OPNs from the nucleus prepositus hypoglossi and the frontal eye fields, which contain position-dependent neurons. This provides a mechanism for delaying OPN reactivation, and creating a gaze-position dependence. A simplified pursuit system was also added, the output of which inhibits the OPNs, providing a mechanism for gaze-dependence during pursuit. The rest of the model continues to generate a command to change eye position.
40
Ward BK, Zee DS, Roberts DC, Schubert MC, Pérez-Fernández N, Otero-Millan J. Visual Fixation and Continuous Head Rotations Have Minimal Effect on Set-Point Adaptation to Magnetic Vestibular Stimulation. Front Neurol 2019; 9:1197. [PMID: 30723456 PMCID: PMC6349782 DOI: 10.3389/fneur.2018.01197] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2018] [Accepted: 12/31/2018] [Indexed: 11/13/2022] Open
Abstract
Background: Strong static magnetic fields such as those in an MRI machine can induce sensations of self-motion and nystagmus. The proposed mechanism is a Lorentz force resulting from the interaction between strong static magnetic fields and ionic currents in the inner ear endolymph that causes displacement of the semicircular canal cupulae. Nystagmus persists throughout an individual's exposure to the magnetic field, though its slow-phase velocity partially declines due to adaptation. After leaving the magnetic field an after effect occurs in which the nystagmus and sensations of rotation reverse direction, reflecting the adaptation that occurred while inside the MRI. However, the effects of visual fixation and of head shaking on this early type of vestibular adaptation are unknown. Methods: Three-dimensional infrared video-oculography was performed in six individuals just before, during (5, 20, or 60 min) and after (4, 15, or 20 min) lying supine inside a 7T MRI scanner. Trials began by entering the magnetic field in darkness followed 60 s later, either by light with visual fixation and head still, or by continuous yaw head rotations (2 Hz) in either darkness or light with visual fixation. Subjects were always placed in darkness 10 or 30 s before exiting the bore. In control conditions subjects remained in the dark with the head still for the entire duration. Results: In darkness with head still all subjects developed horizontal nystagmus inside the magnetic field, with slow-phase velocity partially decreasing over time. An after effect followed on exiting the magnet, with nystagmus in the opposite direction. Nystagmus was suppressed during visual fixation; however, after resuming darkness just before exiting the magnet, nystagmus returned with velocity close to the control condition and with a comparable after effect. Similar after effects occurred with continuous yaw head rotations while in the scanner whether in darkness or light. 
Conclusions: Visual fixation and sustained head shaking either in the dark or with fixation inside a strong static magnetic field have minimal impact on the short-term mechanisms that attempt to null unwanted spontaneous nystagmus when the head is still, so called VOR set-point adaptation. This contrasts with the critical influence of vision and slippage of images on the retina on the dynamic (gain and direction) components of VOR adaptation.
Affiliation(s)
- Bryan K Ward
- Department of Otolaryngology-Head and Neck Surgery, The Johns Hopkins University, Baltimore, MD, United States
- David S Zee
- Department of Otolaryngology-Head and Neck Surgery, The Johns Hopkins University, Baltimore, MD, United States; Department of Neurology, The Johns Hopkins University, Baltimore, MD, United States; Department of Neuroscience, The Johns Hopkins University, Baltimore, MD, United States; Department of Ophthalmology, The Johns Hopkins University, Baltimore, MD, United States
- Dale C Roberts
- Department of Otolaryngology-Head and Neck Surgery, The Johns Hopkins University, Baltimore, MD, United States; Department of Neurology, The Johns Hopkins University, Baltimore, MD, United States
- Michael C Schubert
- Department of Otolaryngology-Head and Neck Surgery, The Johns Hopkins University, Baltimore, MD, United States; Department of Physical Medicine and Rehabilitation, The Johns Hopkins University, Baltimore, MD, United States
- Jorge Otero-Millan
- Department of Neurology, The Johns Hopkins University, Baltimore, MD, United States
41
Goffart L, Bourrelly C, Quinton JC. Neurophysiology of visually guided eye movements: critical review and alternative viewpoint. J Neurophysiol 2018; 120:3234-3245. [PMID: 30379628 DOI: 10.1152/jn.00402.2018] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
In this article, we critically examine the assumptions that led to equating measurements of the movement of a rigid body in the physical world with parameters encoded in brain activity. In many neurophysiological studies of goal-directed eye movements, an equivalence has indeed been drawn between the kinematics of the eyes, or of a targeted object, and the associated neuronal processes. Such a way of proceeding recalls the reduction encountered in projective geometry when a multidimensional object is projected onto a one-dimensional segment. The measurement of a movement consists of generating a series of numerical values from which magnitudes such as amplitude, duration, and their ratio (speed) are calculated. By contrast, movement generation consists of activating multiple parallel channels in the brain. Yet, for many years, kinematic parameters were supposed to be encoded in brain activity, even though the neuronal image of most physical events is distributed both spatially and temporally. After explaining why the "neuronalization" of such parameters is questionable for elucidating the neural processes underlying the execution of saccadic and pursuit eye movements, we propose an alternative to the framework that has dominated the last five decades. We present a viewpoint in which these processes follow principles defined by intrinsic properties of the brain (population coding, multiplicity of transmission delays, synchrony of firing, connectivity), and propose reconsidering the time course of saccadic and pursuit eye movements as the restoration of equilibria between neural populations that exert opposing motor tendencies.
Affiliation(s)
- Laurent Goffart
- Aix Marseille Université, Centre National de la Recherche Scientifique, Institut de Neurosciences de la Timone, Marseille, France; Aix Marseille Université, Centre National de la Recherche Scientifique, Centre Gilles Gaston Granger, Aix-en-Provence, France
- Clara Bourrelly
- Aix Marseille Université, Centre National de la Recherche Scientifique, Institut de Neurosciences de la Timone, Marseille, France
- Jean-Charles Quinton
- Université Grenoble Alpes, Centre National de la Recherche Scientifique, Laboratoire Jean Kuntzmann, Grenoble, France
42
Encoding of Reward and Decoding Movement from the Frontal Eye Field during Smooth Pursuit Eye Movements. J Neurosci 2018; 38:10515-10524. [PMID: 30355635 DOI: 10.1523/jneurosci.1654-18.2018] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2018] [Revised: 10/02/2018] [Accepted: 10/05/2018] [Indexed: 11/21/2022] Open
Abstract
Expectation of reward potentiates sensorimotor transformations to drive vigorous movements. One of the main challenges in studying reward is to determine how representations of reward interact with the computations that drive behavior. We recorded activity in smooth pursuit neurons in the frontal eye field (FEF) of two male rhesus monkeys while controlling eye speed by manipulating either reward size or target speed. The neurons encoded the different reward conditions more strongly than the different target speed conditions. This pattern could not be explained by differences in eye speed, since the eye speed sensitivity of the neurons was also larger for the reward conditions. Pooling the responses by the preferred direction of the neurons attenuated the reward modulation and led to a tighter association between neural activity and behavior. Therefore, a plausible decoder such as the population vector could explain how the FEF both drives behavior and encodes reward beyond behavior.
SIGNIFICANCE STATEMENT Motor areas combine sensory and reward information to drive movement. To disambiguate these sources, we manipulated the speed of smooth pursuit eye movements by controlling either the size of the reward or the speed of the visual motion signals. We found that the relationship between activity in the frontal eye field and eye kinematics varied: the eye speed sensitivity was larger for the different reward conditions than for the different target speed conditions. Decoders that pooled signals by the preferred direction of the neurons attenuated the reward modulations. These decoders may indicate how reward can be both encoded beyond eye kinematics at the single-neuron level and drive movement at the population level.
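The population-vector decoder invoked here sums each neuron's preferred-direction unit vector weighted by its firing rate; a multiplicative gain change shared across the population (such as a reward modulation) scales the vector's length but leaves its direction intact, which is why pooling by preferred direction attenuates the reward signal. A toy sketch with cosine tuning (the tuning form, cell count, and gains are illustrative):

```python
import numpy as np

prefs = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)   # preferred directions

def rates(direction, gain=1.0):
    """Cosine-tuned rates on a baseline, scaled by a shared gain (e.g. reward)."""
    return gain * (1.0 + np.cos(prefs - direction))

def population_vector(r):
    """Direction of the rate-weighted sum of preferred-direction unit vectors."""
    return np.arctan2(np.sum(r * np.sin(prefs)), np.sum(r * np.cos(prefs)))

target = 0.7
low_reward = population_vector(rates(target, gain=1.0))
high_reward = population_vector(rates(target, gain=2.0))
print(low_reward, high_reward)   # same decoded direction: the gain cancels
```

The same decoder read out as vector *length* would still carry the gain, so the population can simultaneously drive a direction and encode reward magnitude.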
|
43
|
Markkula G, Boer E, Romano R, Merat N. Sustained sensorimotor control as intermittent decisions about prediction errors: computational framework and application to ground vehicle steering. BIOLOGICAL CYBERNETICS 2018; 112:181-207. [PMID: 29453689 PMCID: PMC6002515 DOI: 10.1007/s00422-017-0743-9] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/23/2017] [Accepted: 12/16/2017] [Indexed: 06/07/2023]
Abstract
A conceptual and computational framework is proposed for modelling of human sensorimotor control and is exemplified for the sensorimotor task of steering a car. The framework emphasises control intermittency and extends existing models by suggesting that the nervous system implements intermittent control using a combination of (1) motor primitives, (2) prediction of sensory outcomes of motor actions, and (3) evidence accumulation of prediction errors. It is shown that approximate but useful sensory predictions in the intermittent control context can be constructed without detailed forward models, as a superposition of simple prediction primitives, resembling neurobiologically observed corollary discharges. The proposed mathematical framework allows straightforward extension to intermittent behaviour from existing one-dimensional continuous models in the linear control and ecological psychology traditions. Empirical data from a driving simulator are used in model-fitting analyses to test some of the framework's main theoretical predictions: it is shown that human steering control, in routine lane-keeping and in a demanding near-limit task, is better described as a sequence of discrete stepwise control adjustments than as continuous control. Results on the possible roles of sensory prediction in control adjustment amplitudes, and of evidence accumulation mechanisms in control onset timing, show trends that match the theoretical predictions; these warrant further investigation. The results for the accumulation-based model align with other recent literature, in a possibly converging case against the type of threshold mechanisms that are often assumed in existing models of intermittent control.
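The accumulation-of-prediction-errors idea can be caricatured in a few lines (a toy sketch under assumed parameter values, not the paper's fitted model). Error evidence builds in a leaky accumulator, and each threshold crossing emits a discrete stepwise control adjustment rather than a continuous correction:

```python
def intermittent_control(errors, dt=0.01, gain=1.0, threshold=1.0, leak=0.5):
    """Toy evidence-accumulation controller: prediction-error evidence
    accumulates over time, and each threshold crossing triggers a
    discrete, stepwise adjustment proportional to the current error,
    after which the accumulator resets."""
    accumulator = 0.0
    control = 0.0
    adjustment_times = []
    for t, err in enumerate(errors):
        accumulator += (abs(err) - leak) * dt   # leaky accumulation of evidence
        accumulator = max(accumulator, 0.0)     # evidence cannot go negative
        if accumulator >= threshold:
            control += gain * err               # discrete stepwise adjustment
            adjustment_times.append(t)
            accumulator = 0.0
    return control, adjustment_times

# A sustained 2-unit error produces sparse, discrete adjustments
# rather than continuous control:
control, times = intermittent_control([2.0] * 200)
```

The point of the sketch is only the qualitative signature the paper tests for: output that changes in steps whose timing depends on accumulated evidence, not on the instantaneous error.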
Affiliation(s)
- Gustav Markkula
- Institute for Transport Studies, University of Leeds, Leeds, UK.
- Erwin Boer
- Institute for Transport Studies, University of Leeds, Leeds, UK
- Richard Romano
- Institute for Transport Studies, University of Leeds, Leeds, UK
- Natasha Merat
- Institute for Transport Studies, University of Leeds, Leeds, UK
|
44
|
Bansal S, Ford JM, Spering M. The function and failure of sensory predictions. Ann N Y Acad Sci 2018; 1426:199-220. [PMID: 29683518 DOI: 10.1111/nyas.13686] [Citation(s) in RCA: 36] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2017] [Revised: 02/26/2018] [Accepted: 02/27/2018] [Indexed: 01/24/2023]
Abstract
Humans and other primates are equipped with neural mechanisms that allow them to automatically make predictions about future events, facilitating processing of expected sensations and actions. Prediction-driven control and monitoring of perceptual and motor acts are vital to normal cognitive functioning. This review provides an overview of corollary discharge mechanisms involved in predictions across sensory modalities and discusses consequences of predictive coding for cognition and behavior. Converging evidence now links impairments in corollary discharge mechanisms to neuropsychiatric symptoms such as hallucinations and delusions. We review studies supporting a prediction-failure hypothesis of perceptual and cognitive disturbances. We also outline neural correlates underlying prediction function and failure, highlighting similarities across the visual, auditory, and somatosensory systems. In linking basic psychophysical and psychophysiological evidence of visual, auditory, and somatosensory prediction failures to neuropsychiatric symptoms, our review furthers our understanding of disease mechanisms.
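The corollary discharge computation at the heart of this review can be sketched as a forward-model subtraction (an illustrative toy, not a model taken from the review; the function name and gain parameter are assumptions):

```python
def reafference_error(motor_command, sensory_input, forward_gain=1.0):
    """Corollary discharge sketch: a copy of the motor command is run
    through a forward model to predict the sensory consequence of the
    movement; the prediction is subtracted from the actual input, and
    the residual is attributed to the external world."""
    predicted = forward_gain * motor_command
    return sensory_input - predicted

# Self-generated retinal slip is cancelled: an eye movement of 5 deg/s
# over a stationary scene yields 5 deg/s of slip, all of it predicted.
residual_still = reafference_error(5.0, 5.0)
# An impaired forward model (gain < 1) under-predicts the slip,
# leaving a spurious "external" signal, as in the prediction-failure
# hypothesis of hallucinations:
residual_impaired = reafference_error(5.0, 5.0, forward_gain=0.6)
```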
Affiliation(s)
- Sonia Bansal
- Maryland Psychiatric Research Center, University of Maryland, Catonsville, Maryland
- Judith M Ford
- University of California and Veterans Affairs Medical Center, San Francisco, California
- Miriam Spering
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
|
45
|
Heinen SJ, Badler JB, Watamaniuk SNJ. Choosing a foveal goal recruits the saccadic system during smooth pursuit. J Neurophysiol 2018; 120:489-496. [PMID: 29668381 DOI: 10.1152/jn.00418.2017] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Models of smooth pursuit eye movements stabilize an object's retinal image, yet pursuit is peppered with small, destabilizing "catch-up" saccades. Catch-up saccades might help follow the small spot stimulus used in most pursuit experiments, since fewer of them occur with large stimuli. However, they can return when a large stimulus has a small central feature. It may be that a central feature on a large object automatically recruits the saccadic system. Alternatively, a cognitive choice is made that the feature is the pursuit goal, and the saccadic system is then recruited to pursue it. Observers pursued a 5-dot stimulus composed of a central dot surrounded by four peripheral dots arranged as a diamond. An attention task specified the pursuit goal as either the central element or the diamond gestalt. Fewer catch-up saccades occurred with the gestalt goal than with the central goal, although the additional saccades with the central goal neither enhanced nor impeded pursuit. Furthermore, removing the central element from the diamond goal further reduced catch-up saccade frequency, indicating that the central element automatically triggered some saccades. The higher saccade frequency was not simply due to narrowly focused attention, since attending a small peripheral diamond during pursuit elicited fewer saccades than attending the diamond positioned foveally. The results suggest that some saccades are automatically elicited by a small central element, but when it is chosen as the pursuit goal the saccadic system is further recruited to pursue it.

NEW & NOTEWORTHY Smooth pursuit eye movements stabilize retinal image motion to prevent blur. Curiously, smooth pursuit is frequently supplemented by small catch-up saccades that could reduce image clarity. Catch-up saccades might only be needed to pursue small laboratory stimuli, as they are infrequent during large-object pursuit. Yet large objects with central features revive them. Here, we show that voluntarily selecting a feature as the pursuit goal elicits saccades that do not help pursuit.
Affiliation(s)
- Stephen J Heinen
- Smith-Kettlewell Eye Research Institute, San Francisco, California
- Jeremy B Badler
- Smith-Kettlewell Eye Research Institute, San Francisco, California
|
46
|
Rey-Martinez J, Batuecas-Caletrio A, Matiño E, Trinidad-Ruiz G, Altuna X, Perez-Fernandez N. Mathematical Methods for Measuring the Visually Enhanced Vestibulo-Ocular Reflex and Preliminary Results from Healthy Subjects and Patient Groups. Front Neurol 2018; 9:69. [PMID: 29483893 PMCID: PMC5816338 DOI: 10.3389/fneur.2018.00069] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2017] [Accepted: 01/29/2018] [Indexed: 12/22/2022] Open
Abstract
Background: Visually enhanced vestibulo–ocular reflex (VVOR) is a well-known bedside clinical test used to evaluate visuo–vestibular interaction, with clinical applications in patients with neurological and vestibular dysfunctions. Owing to recently developed diagnostic technologies, easy and objective measurement of the VVOR has become feasible, but computational methods designed to obtain an objective VVOR measurement are lacking. Objectives: To develop a method for assessing the VVOR that yields a gain value comparing head and eye velocities, and to test this method in patients and healthy subjects. Methods: Two computational methods were developed to measure VVOR test responses: the first based on the area under the curve of the head and eye velocity plots, and the second based on the slope of the linear regression of the head and eye velocity data. VVOR gain and vestibulo–ocular reflex (VOR) gain were analyzed with data from 35 subjects divided into four groups: healthy (N = 10), unilateral vestibular loss after vestibular neurectomy (N = 8), bilateral vestibulopathy (N = 12), and cerebellar ataxia, neuropathy, and vestibular areflexia syndrome (CANVAS) (N = 5). Results: The intra-class correlation index for the two developed VVOR analysis methods was 0.99. Analysis of variance showed significant differences between the healthy group (VVOR mean gain of 1 ± 0) and all other groups; the CANVAS group (VVOR mean gain of 0.4 ± 0.1) also differed from all other groups. VVOR mean gain was 0.8 ± 0.1 in the bilateral vestibular group and 0.6 ± 0.1 in the unilateral group, with a Pearson's correlation of 0.52 between VVOR gain and the VOR gain of the operated side. Conclusion: Two computational methods to measure the gain of VVOR were successfully developed. The VVOR gain values appear to objectively characterize the VVOR alteration observed in CANVAS patients, and also distinguish healthy subjects from patients with some vestibular disorders.
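The two gain computations described can be sketched as follows (an illustrative reimplementation, not the authors' published code; the function names, the uniform-sampling simplification of the area method, and the synthetic 80%-gain trace are assumptions):

```python
import numpy as np

def vvor_gain_area(head_vel, eye_vel):
    """Area method: ratio of the areas under the absolute eye- and
    head-velocity curves (with uniform sampling this reduces to a
    ratio of sums)."""
    return np.sum(np.abs(eye_vel)) / np.sum(np.abs(head_vel))

def vvor_gain_slope(head_vel, eye_vel):
    """Regression method: absolute slope of the linear regression of
    eye velocity on head velocity."""
    slope, _intercept = np.polyfit(head_vel, eye_vel, 1)
    return abs(slope)

# Synthetic 1 Hz sinusoidal head rotation with a compensatory eye
# movement at 80% gain (eye moves opposite to the head):
t = np.linspace(0.0, 2.0, 500)
head = 100.0 * np.sin(2.0 * np.pi * t)       # head velocity, deg/s
eye = -0.8 * head                            # compensatory eye velocity
```

On this clean synthetic trace both methods recover the same gain; on real recordings they would differ in their sensitivity to noise, saccades, and baseline offsets.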
Affiliation(s)
- Jorge Rey-Martinez
- Otorhinolaryngology, Hospital Universitario Donostia, San Sebastian, Spain
- Xabier Altuna
- Otorhinolaryngology, Hospital Universitario Donostia, San Sebastian, Spain
|
47
|
Zhang X, Wang S, Hoagg JB, Seigler TM. The Roles of Feedback and Feedforward as Humans Learn to Control Unknown Dynamic Systems. IEEE TRANSACTIONS ON CYBERNETICS 2018; 48:543-555. [PMID: 28141541 DOI: 10.1109/tcyb.2016.2646483] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
We present results from an experiment in which human subjects interact with an unknown dynamic system 40 times during a two-week period. During each interaction, subjects are asked to perform a command-following (i.e., pursuit tracking) task. Each subject's performance at that task improves from the first trial to the last trial. For each trial, we use subsystem identification to estimate each subject's feedforward (or anticipatory) control, feedback (or reactive) control, and feedback time delay. Over the 40 trials, the magnitudes of the identified feedback controllers and the identified feedback time delays do not change significantly. In contrast, the identified feedforward controllers do change significantly. By the last trial, the average identified feedforward controller approximates the inverse of the dynamic system. This observation provides evidence that a fundamental component of human learning is updating the anticipatory control until it models the inverse dynamics.
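The finding that the learned feedforward controller approximates the inverse of the plant can be illustrated with a first-order discrete plant (a toy example with assumed dynamics, not the study's identified subsystem models): when the feedforward command inverts the plant, the output tracks the reference exactly, with no feedback needed.

```python
def plant_step(y_prev, u, a=0.9, b=0.5):
    """First-order discrete plant: y[k] = a*y[k-1] + b*u[k]."""
    return a * y_prev + b * u

def feedforward_inverse(r, y_prev, a=0.9, b=0.5):
    """Feedforward command from the inverse plant model: choose u so
    that a*y_prev + b*u equals the reference r."""
    return (r - a * y_prev) / b

# With the exact inverse as feedforward, the plant output follows the
# command-following reference step for step:
reference = [1.0, 0.5, -0.2, 0.8]
y, outputs = 0.0, []
for r in reference:
    y = plant_step(y, feedforward_inverse(r, y))
    outputs.append(y)
```

In the study, of course, the subjects only approximate this inverse by the last trial, and feedback control covers the residual error.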
|
48
|
Ma Z, Watamaniuk SNJ, Heinen SJ. Illusory motion reveals velocity matching, not foveation, drives smooth pursuit of large objects. J Vis 2017; 17:20. [PMID: 29090315 PMCID: PMC5665499 DOI: 10.1167/17.12.20] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/04/2022] Open
Abstract
When small objects move in a scene, we keep them foveated with smooth pursuit eye movements. Although large objects such as people and animals are common, it is nonetheless unknown how we pursue them since they cannot be foveated. It might be that the brain calculates an object's centroid, and then centers the eyes on it during pursuit as a foveation mechanism might. Alternatively, the brain merely matches the velocity by motion integration. We test these alternatives with an illusory motion stimulus that translates at a speed different from its retinal motion. The stimulus was a Gabor array that translated at a fixed velocity, with component Gabors that drifted with motion consistent or inconsistent with the translation. Velocity matching predicts different pursuit behaviors across drift conditions, while centroid matching predicts no difference. We also tested whether pursuit can segregate and ignore irrelevant local drifts when motion and centroid information are consistent by surrounding the Gabors with solid frames. Finally, observers judged the global translational speed of the Gabors to determine whether smooth pursuit and motion perception share mechanisms. We found that consistent Gabor motion enhanced pursuit gain while inconsistent, opposite motion diminished it, drawing the eyes away from the center of the stimulus and supporting a motion-based pursuit drive. Catch-up saccades tended to counter the position offset, directing the eyes opposite to the deviation caused by the pursuit gain change. Surrounding the Gabors with visible frames canceled both the gain increase and the compensatory saccades. Perceived speed was modulated analogous to pursuit gain. The results suggest that smooth pursuit of large stimuli depends on the magnitude of integrated retinal motion information, not its retinal location, and that the position system might be unnecessary for generating smooth velocity to large pursuit targets.
Affiliation(s)
- Zheng Ma
- Smith-Kettlewell Eye Research Institute, San Francisco, CA, USA
- Stephen J Heinen
- Smith-Kettlewell Eye Research Institute, San Francisco, CA, USA
|
49
|
A Subconscious Interaction between Fixation and Anticipatory Pursuit. J Neurosci 2017; 37:11424-11430. [PMID: 29061701 DOI: 10.1523/jneurosci.2186-17.2017] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2017] [Revised: 10/06/2017] [Accepted: 10/12/2017] [Indexed: 11/21/2022] Open
Abstract
Ocular smooth pursuit and fixation are typically viewed as separate systems, yet there is evidence that the brainstem fixation system inhibits pursuit. Here we present behavioral evidence that the fixation system modulates pursuit behavior outside of conscious awareness. Human observers (male and female) either pursued a small spot that translated across a screen, or fixated it as it remained stationary. As shown previously, pursuit trials potentiated the oculomotor system, producing anticipatory eye velocity on the next trial before the target moved that mimicked the stimulus-driven velocity. Randomly interleaving fixation trials reduced anticipatory pursuit, suggesting that a potentiated fixation system interacted with pursuit to suppress eye velocity in upcoming pursuit trials. The reduction was not due to passive decay of the potentiated pursuit signal, because interleaving "blank" trials in which no target appeared did not reduce anticipatory pursuit. Interspersed short fixation trials reduced anticipation on long pursuit trials, suggesting that fixation potentiation was stronger than pursuit potentiation. Furthermore, adding more pursuit trials to a block did not restore anticipatory pursuit, suggesting that fixation potentiation was not overridden by certainty of an imminent pursuit trial but rather was immune to conscious intervention. To directly test whether cognition can override fixation suppression, we alternated pursuit and fixation trials to perfectly specify trial identity. Still, anticipatory pursuit did not rise above that observed with an equal number of random fixation trials. The results suggest that potentiated fixation circuitry interacts with pursuit circuitry at a subconscious level to inhibit pursuit.

SIGNIFICANCE STATEMENT When an object moves, we view it with smooth pursuit eye movements. When an object is stationary, we view it with fixational eye movements. Pursuit and fixation are historically regarded as controlled by different neural circuitry, and alternating between them is thought to be guided by a conscious decision. However, our results show that pursuit is actively suppressed by prior fixation of a stationary object. This suppression is involuntary and cannot be avoided even if observers are certain that the object will move. The results suggest that the neural fixation circuitry is potentiated by engaging stationary objects and interacts with pursuit outside of conscious awareness.
|
50
|
The reference frame for encoding and retention of motion depends on stimulus set size. Atten Percept Psychophys 2017; 79:888-910. [PMID: 28092077 DOI: 10.3758/s13414-016-1258-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The goal of this study was to investigate the reference frames used in perceptual encoding and storage of visual motion information. In our experiments, observers viewed multiple moving objects and reported the direction of motion of a randomly selected item. Using a vector-decomposition technique, we computed performance during smooth pursuit with respect to a spatiotopic (nonretinotopic) and to a retinotopic component and compared them with performance during fixation, which served as the baseline. For the stimulus encoding stage, which precedes memory, we found that the reference frame depends on the stimulus set size. For a single moving target, the spatiotopic reference frame had the most significant contribution with some additional contribution from the retinotopic reference frame. When the number of items increased (Set Sizes 3 to 7), the spatiotopic reference frame was able to account for the performance. Finally, when the number of items became larger than 7, the distinction between reference frames vanished. We interpret this finding as a switch to a more abstract nonmetric encoding of motion direction. We found that the retinotopic reference frame was not used in memory. Taken together with other studies, our results suggest that, whereas a retinotopic reference frame may be employed for controlling eye movements, perception and memory use primarily nonretinotopic reference frames. Furthermore, the use of nonretinotopic reference frames appears to be capacity limited. In the case of complex stimuli, the visual system may use perceptual grouping in order to simplify the complexity of stimuli or resort to a nonmetric abstract coding of motion information.
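A decomposition in the spirit of the paper's vector-decomposition technique can be sketched as a least-squares projection of a reported motion vector onto spatiotopic and retinotopic component vectors (an illustrative guess at the approach, with invented numbers; the actual technique may differ):

```python
import numpy as np

def decompose_report(report, world, retinal):
    """Express a reported motion vector as a weighted combination of
    the spatiotopic (world-relative) and retinotopic (eye-relative)
    component vectors; returns (spatiotopic, retinotopic) weights."""
    basis = np.column_stack([world, retinal])
    weights, *_ = np.linalg.lstsq(basis, report, rcond=None)
    return weights

# Rightward pursuit at 5 deg/s while an object moves straight up at
# 5 deg/s in the world: on the retina the object moves up and to the
# left (world motion minus eye motion).
eye = np.array([5.0, 0.0])
world = np.array([0.0, 5.0])
retinal = world - eye                        # [-5.0, 5.0]
# A purely spatiotopic report reproduces the world vector:
w_spatio, w_retino = decompose_report(np.array([0.0, 5.0]), world, retinal)
```

A report dominated by the retinal vector would instead load onto the retinotopic weight, which is the contrast the study measures across set sizes.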
|