1
Ugolini G, Graf W. Pathways from the superior colliculus and the nucleus of the optic tract to the posterior parietal cortex in macaque monkeys: Functional frameworks for representation updating and online movement guidance. Eur J Neurosci 2024;59:2792-2825. PMID: 38544445. DOI: 10.1111/ejn.16314.
Abstract
The posterior parietal cortex (PPC) integrates multisensory and motor-related information for generating and updating body representations and movement plans. We used retrograde transneuronal transfer of rabies virus combined with a conventional tracer in macaque monkeys to identify direct and disynaptic pathways to the arm-related rostral medial intraparietal area (MIP), the ventral lateral intraparietal area (LIPv), belonging to the parietal eye field, and the pursuit-related lateral subdivision of the medial superior temporal area (MSTl). We found that these areas receive major disynaptic pathways via the thalamus from the nucleus of the optic tract (NOT) and the superior colliculus (SC), mainly ipsilaterally. NOT pathways, targeting MSTl most prominently, serve to process the sensory consequences of slow eye movements for which the NOT is the key sensorimotor interface. They potentially contribute to the directional asymmetry of the pursuit and optokinetic systems. MSTl and LIPv receive feedforward inputs from SC visual layers, which are potential correlates for fast detection of motion, perceptual saccadic suppression and visual spatial attention. MSTl is the target of efference copy pathways from saccade- and head-related compartments of SC motor layers and head-related reticulospinal neurons. They are potential sources of extraretinal signals related to eye and head movement in MSTl visual-tracking neurons. LIPv and rostral MIP receive efference copy pathways from all SC motor layers, providing online estimates of eye, head and arm movements. Our findings have important implications for understanding the role of the PPC in representation updating, internal models for online movement guidance, eye-hand coordination and optic ataxia.
Affiliation(s)
- Gabriella Ugolini
- Paris-Saclay Institute of Neuroscience (NeuroPSI), UMR9197 CNRS - Université Paris-Saclay, Campus CEA Saclay, Saclay, France
- Werner Graf
- Department of Physiology and Biophysics, Howard University, Washington, DC, USA
2
Falconbridge M, Stamps RL, Edwards M, Badcock DR. Target motion misjudgments reflect a misperception of the background; revealed using continuous psychophysics. Iperception 2023;14:20416695231214439. PMID: 38680843. PMCID: PMC11046177. DOI: 10.1177/20416695231214439.
Abstract
Determining the velocities of target objects as we navigate complex environments is made more difficult by the fact that our own motion adds systematic motion signals to the visual scene. The flow-parsing hypothesis asserts that the background motion is subtracted from visual scenes in such cases as a way for the visual system to determine target motions relative to the scene. Here, we address the question of why backgrounds are only partially subtracted in lab settings. At the same time, we probe a much-neglected aspect of scene perception in flow-parsing studies, that is, the perception of the background itself. Here, we present results from three experienced psychophysical participants and one inexperienced participant who took part in three continuous psychophysics experiments. We show that, when the background optic flow pattern is composed of local elements whose motions are congruent with the global optic flow pattern, the incompleteness of the background subtraction can be entirely accounted for by a misperception of the background. When the local velocities comprising the background are randomly dispersed around the average global velocity, an additional factor is needed to explain the subtraction incompleteness. We show that a model where background perception is a result of the brain attempting to infer scene motion due to self-motion can account for these results.
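The flow-parsing subtraction described above can be written in a few lines. This is a minimal sketch under our own assumptions, not the authors' model code: the function name and the gain parameter are illustrative, with a gain below 1 standing in for incomplete subtraction (equivalently, a misperceived, slower-than-actual background).

```python
def flow_parse(retinal_target_vel, background_flow, subtraction_gain=0.7):
    """Perceived scene-relative target velocity (2-D): retinal velocity
    minus a scaled estimate of the background flow. A gain below 1 models
    incomplete subtraction -- or, per the paper's account, a background
    that is itself misperceived as slower than it really is."""
    return tuple(t - subtraction_gain * b
                 for t, b in zip(retinal_target_vel, background_flow))

# Target drifting upward on the retina while the background flows rightward:
perceived = flow_parse((0.0, 2.0), (3.0, 0.0), subtraction_gain=0.7)
# Full subtraction (gain = 1) would give (-3.0, 2.0); the leftover
# rightward component is the classic partial-subtraction misjudgment.
```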
Affiliation(s)
- Michael Falconbridge
- School of Psychology, University of Western Australia, Crawley, Western Australia, Australia
- Robert L. Stamps
- Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba, Canada
- Mark Edwards
- Research School of Psychology, Australian National University, Canberra, Australia
- David R. Badcock
- School of Psychology, University of Western Australia, Crawley, Western Australia, Australia
3
Rosenberg A, Thompson LW, Doudlah R, Chang TY. Neuronal Representations Supporting Three-Dimensional Vision in Nonhuman Primates. Annu Rev Vis Sci 2023;9:337-359. PMID: 36944312. DOI: 10.1146/annurev-vision-111022-123857.
Abstract
The visual system must reconstruct the dynamic, three-dimensional (3D) world from ambiguous two-dimensional (2D) retinal images. In this review, we synthesize current literature on how the visual system of nonhuman primates performs this transformation through multiple channels within the classically defined dorsal (where) and ventral (what) pathways. Each of these channels is specialized for processing different 3D features (e.g., the shape, orientation, or motion of objects, or the larger scene structure). Despite the common goal of 3D reconstruction, neurocomputational differences between the channels impose distinct information-limiting constraints on perception. Convergent evidence further points to the little-studied area V3A as a potential branchpoint from which multiple 3D-fugal processing channels diverge. We speculate that the expansion of V3A in humans may have supported the emergence of advanced 3D spatial reasoning skills. Lastly, we discuss future directions for exploring 3D information transmission across brain areas and experimental approaches that can further advance the understanding of 3D vision.
Affiliation(s)
- Ari Rosenberg
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Lowell W Thompson
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Raymond Doudlah
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Ting-Yu Chang
- School of Medicine, National Defense Medical Center, Taipei, Taiwan
4
Nashef A, Spindle MS, Calame DJ, Person AL. A dual Purkinje cell rate and synchrony code sculpts reach kinematics. bioRxiv 2023:2023.07.12.548720 [Preprint]. PMID: 37503038. PMCID: PMC10370034. DOI: 10.1101/2023.07.12.548720.
Abstract
Cerebellar Purkinje cells (PCs) encode movement kinematics in their population firing rates. Firing rate suppression is hypothesized to disinhibit neurons in the cerebellar nuclei, promoting adaptive movement adjustments. Debates persist, however, about whether a second disinhibitory mechanism, PC simple spike synchrony, is a relevant population code. We addressed this question by relating PC rate and synchrony patterns, recorded with high-density probes, to mouse reach kinematics. We discovered behavioral correlates of PC synchrony that align with the known causal relationship between cerebellar output activity and reach kinematics. Reach deceleration was positively correlated with both PC firing rate decreases and synchrony, consistent with both mechanisms disinhibiting target neurons, which are known to adjust reach velocity. Direct tests of the contribution of each coding scheme to nuclear firing, using dynamic clamp to combine physiological rate and synchrony patterns ex vivo, confirmed that physiological levels of PC simple spike synchrony are highly facilitatory for nuclear firing. These findings suggest that PC firing rate and synchrony collaborate to exert fine control of movement.
Affiliation(s)
- Abdulraheem Nashef
- Department of Physiology and Biophysics, Anschutz Medical Campus, University of Colorado, Aurora, 80045, CO, USA
- Michael S Spindle
- Department of Physiology and Biophysics, Anschutz Medical Campus, University of Colorado, Aurora, 80045, CO, USA
- Dylan J Calame
- Department of Physiology and Biophysics, Anschutz Medical Campus, University of Colorado, Aurora, 80045, CO, USA
- Abigail L Person
- Department of Physiology and Biophysics, Anschutz Medical Campus, University of Colorado, Aurora, 80045, CO, USA
5
Zhao Z, Ahissar E, Victor JD, Rucci M. Inferring visual space from ultra-fine extra-retinal knowledge of gaze position. Nat Commun 2023;14:269. PMID: 36650146. PMCID: PMC9845343. DOI: 10.1038/s41467-023-35834-4.
Abstract
It has long been debated how humans resolve fine details and perceive a stable visual world despite the incessant fixational motion of their eyes. Current theories assume these processes to rely solely on the visual input to the retina, without contributions from motor and/or proprioceptive sources. Here we show that contrary to this widespread assumption, the visual system has access to high-resolution extra-retinal knowledge of fixational eye motion and uses it to deduce spatial relations. Building on recent advances in gaze-contingent display control, we created a spatial discrimination task in which the stimulus configuration was entirely determined by oculomotor activity. Our results show that humans correctly infer geometrical relations in the absence of spatial information on the retina and accurately combine high-resolution extraretinal monitoring of gaze displacement with retinal signals. These findings reveal a sensory-motor strategy for encoding space, in which fine oculomotor knowledge is used to interpret the fixational input to the retina.
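The spatial bookkeeping behind this result reduces to one vector sum: a location in the world is the retinal location plus the current gaze direction. A hypothetical sketch (function and variable names are ours, not the study's) of why two probes at an identical retinal locus can still carry spatial information:

```python
def world_location(retinal_loc, gaze_dir):
    """World-coordinate location = retinal location + gaze direction
    (all in the same angular units, small-angle approximation)."""
    return tuple(r + g for r, g in zip(retinal_loc, gaze_dir))

# Two probes delivered at the *same* retinal locus, with gaze displaced by
# a small fixational drift between them: their separation in the world
# equals the extra-retinal gaze displacement, not the zero retinal offset.
p1 = world_location((0.0, 0.0), (0.10, 0.00))
p2 = world_location((0.0, 0.0), (0.25, 0.00))
separation = (p2[0] - p1[0], p2[1] - p1[1])
```

Discriminating such a configuration therefore requires access to a high-resolution extraretinal estimate of the gaze displacement, which is the study's central claim.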
Affiliation(s)
- Zhetuo Zhao
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Center for Visual Science, University of Rochester, Rochester, NY, USA
- Ehud Ahissar
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
- Jonathan D Victor
- Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, New York, NY, USA
- Michele Rucci
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Center for Visual Science, University of Rochester, Rochester, NY, USA
6
Causal contribution of optic flow signal in Macaque extrastriate visual cortex for roll perception. Nat Commun 2022;13:5479. PMID: 36123363. PMCID: PMC9485245. DOI: 10.1038/s41467-022-33245-5.
Abstract
Optic flow is a powerful cue for inferring self-motion, which is critical for postural control, spatial orientation, locomotion and navigation. In primates, neurons in extrastriate visual cortex (area MSTd) are predominantly modulated by high-order optic flow patterns (e.g., spiral), yet a functional link to direct perception has been lacking. Here, we applied electrical microstimulation to selectively manipulate populations of MSTd neurons while macaques discriminated the direction of rotation around the line of sight (roll) or the direction of linear translation (heading), two tasks that are orthogonal in a 3D spiral coordinate frame, using a four-alternative forced-choice paradigm. Microstimulation frequently biased the animals' roll perception toward the labeled lines encoded by the stimulated neurons, in both the spiral and pure-rotation stimulus contexts. Choice frequency was also altered between roll and translation flow patterns. Our results provide direct causal evidence that roll signals in MSTd, although often mixed with translation signals, can be extracted by downstream areas for the perception of rotation relative to the gravity vertical.
7
Jörges B, Harris LR. Object speed perception during lateral visual self-motion. Atten Percept Psychophys 2022;84:25-46. PMID: 34704212. PMCID: PMC8547725. DOI: 10.3758/s13414-021-02372-4.
Abstract
Judging object speed during observer self-motion requires disambiguating retinal stimulation from two sources: self-motion and object motion. According to the flow-parsing hypothesis, observers estimate their own motion, then subtract the corresponding retinal motion from the total retinal stimulation and interpret the remaining stimulation as pertaining to object motion. Subtracting noisier self-motion information from the retinal input should lead to a decrease in precision. Furthermore, when self-motion is only simulated visually, it is likely to be underestimated, yielding an overestimation of target speed when target and observer move in opposite directions and an underestimation when they move in the same direction. We tested this hypothesis with a two-alternative forced-choice task in which participants judged which of two motions, presented in an immersive 3D environment, was faster. One motion interval contained a ball cloud whose speed was selected dynamically according to a PEST staircase, while the other contained one big target travelling laterally at a fixed speed. While viewing the big target, participants were either static or experienced visually simulated lateral self-motion in the same or opposite direction as the target. Participants were not significantly biased under either motion profile, and precision was significantly lower only when participants moved visually in the direction opposite to the target. We conclude that, when immersed in an ecologically valid 3D environment with rich self-motion cues, participants perceive an object's speed accurately at a small precision cost, even when self-motion is simulated only visually.
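The sign of the predicted bias follows from one line of algebra. A hypothetical 1-D sketch (the gain value and the names are ours, not the authors'): underestimating visually simulated self-motion inflates perceived speed for opposite-direction motion and deflates it for same-direction motion.

```python
def perceived_speed(v_target, v_self, self_motion_gain=0.8):
    """1-D flow-parsing sketch for lateral motion at a fixed depth:
    retinal target motion is v_target - v_self, and the observer adds
    back a self-motion estimate scaled by a gain (< 1 when visually
    simulated self-motion is underestimated)."""
    retinal = v_target - v_self
    return retinal + self_motion_gain * v_self

same = perceived_speed(2.0, 1.0)       # same direction: 1.8, underestimated
opposite = perceived_speed(2.0, -1.0)  # opposite direction: 2.2, overestimated
```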
Affiliation(s)
- Björn Jörges
- Center for Vision Research, York University, 4700 Keele Street, Toronto, ON M3J 1P3, Canada
- Laurence R. Harris
- Center for Vision Research, York University, 4700 Keele Street, Toronto, ON M3J 1P3, Canada
8
Luna R, Serrano-Pedraza I, Gegenfurtner KR, Schütz AC, Souto D. Achieving visual stability during smooth pursuit eye movements: Directional and confidence judgements favor a recalibration model. Vision Res 2021;184:58-73. PMID: 33873123. DOI: 10.1016/j.visres.2021.03.003.
Abstract
During smooth pursuit eye movements, the visual system is faced with the task of telling apart reafferent retinal motion from motion in the world. While an efference copy signal can be used to predict the amount of reafference to subtract from the image, an image-based adaptive mechanism can ensure the continued accuracy of this computation. Indeed, repeatedly exposing observers to background motion with a fixed direction relative to that of the target that is pursued leads to a shift in their point of subjective stationarity (PSS). We asked whether the effect of exposure reflects adaptation to motion contingent on pursuit direction, recalibration of a reference signal or both. A recalibration account predicts a shift in reference signal (i.e. predicted reafference), resulting in a shift of PSS, but no change in sensitivity. Results show that both directional judgements and confidence judgements about them favor a recalibration account, whereby there is an adaptive shift in the reference signal caused by the prevailing retinal motion during pursuit. We also found that the recalibration effect is specific to the exposed visual hemifield.
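The recalibration account makes a concrete psychometric prediction that can be written down as a toy model (all names and parameter values here are ours): the probability of judging the background as moving with the pursuit is a cumulative Gaussian centered on the reference signal (the predicted reafference). Recalibration shifts that center, moving the point of subjective stationarity, while leaving the slope, i.e. sensitivity, unchanged.

```python
import math

def p_judge_with_pursuit(bg_velocity, reference, sigma=1.0):
    """Cumulative-Gaussian psychometric function: probability of judging
    the background as moving in the pursuit direction. `reference` is the
    predicted reafference (sets the PSS); `sigma` sets sensitivity."""
    return 0.5 * (1.0 + math.erf((bg_velocity - reference) / (sigma * math.sqrt(2.0))))

before = p_judge_with_pursuit(0.0, reference=0.0)  # PSS at 0: p = 0.5
after = p_judge_with_pursuit(0.0, reference=0.5)   # recalibrated PSS: p < 0.5
```

An adaptation account would instead predict a change in `sigma` (a flattened slope), which is what the directional and confidence data argue against.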
Affiliation(s)
- Raúl Luna
- Department of Experimental Psychology, Faculty of Psychology, Universidad Complutense de Madrid, Madrid, Spain
- School of Psychology, Universidad Autónoma de Madrid, Madrid, Spain
- Ignacio Serrano-Pedraza
- Department of Experimental Psychology, Faculty of Psychology, Universidad Complutense de Madrid, Madrid, Spain
- Alexander C Schütz
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, Marburg, Germany
- David Souto
- Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, United Kingdom
9
Wild B, Treue S. Primate extrastriate cortical area MST: a gateway between sensation and cognition. J Neurophysiol 2021;125:1851-1882. PMID: 33656951. DOI: 10.1152/jn.00384.2020.
Abstract
Primate visual cortex consists of dozens of distinct brain areas, each providing a highly specialized component to the sophisticated task of encoding the incoming sensory information and creating a representation of our visual environment that underlies our perception and action. One such area is the medial superior temporal cortex (MST), a motion-sensitive, direction-selective part of the primate visual cortex. It receives most of its input from the middle temporal (MT) area, but MST cells have larger receptive fields and respond to more complex motion patterns. The finding that MST cells are tuned for optic flow patterns has led to the suggestion that the area plays an important role in the perception of self-motion. This hypothesis has received further support from studies showing that some MST cells also respond selectively to vestibular cues. Furthermore, the area is part of a network that controls the planning and execution of smooth pursuit eye movements and its activity is modulated by cognitive factors, such as attention and working memory. This review of more than 90 studies focuses on providing clarity of the heterogeneous findings on MST in the macaque cortex and its putative homolog in the human cortex. From this analysis of the unique anatomical and functional position in the hierarchy of areas and processing steps in primate visual cortex, MST emerges as a gateway between perception, cognition, and action planning. Given this pivotal role, this area represents an ideal model system for the transition from sensation to cognition.
Affiliation(s)
- Benedict Wild
- Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany
- Goettingen Graduate Center for Neurosciences, Biophysics, and Molecular Biosciences (GGNB), University of Goettingen, Goettingen, Germany
- Stefan Treue
- Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany
- Faculty of Biology and Psychology, University of Goettingen, Goettingen, Germany
- Leibniz-ScienceCampus Primate Cognition, Goettingen, Germany
- Bernstein Center for Computational Neuroscience, Goettingen, Germany
10
Oh SW, Son SJ, Morris JA, Choi JH, Lee C, Rah JC. Comprehensive Analysis of Long-Range Connectivity from and to the Posterior Parietal Cortex of the Mouse. Cereb Cortex 2021;31:356-378. PMID: 32901251. DOI: 10.1093/cercor/bhaa230.
Abstract
The posterior parietal cortex (PPC) is a major multimodal association cortex implicated in a variety of higher order cognitive functions, such as visuospatial perception, spatial attention, categorization, and decision-making. The PPC is known to receive inputs from a collection of sensory cortices as well as various subcortical areas and integrate those inputs to facilitate the execution of functions that require diverse information. Although many recent works have been performed with the mouse as a model system, a comprehensive understanding of long-range connectivity of the mouse PPC is scarce, preventing integrative interpretation of the rapidly accumulating functional data. In this study, we conducted a detailed neuroanatomic and bioinformatic analysis of the Allen Mouse Brain Connectivity Atlas data to summarize afferent and efferent connections to/from the PPC. Then, we analyzed variability between subregions of the PPC, functional/anatomical modalities, and species, and summarized the organizational principle of the mouse PPC. Finally, we confirmed key results by using additional neurotracers. A comprehensive survey of the connectivity will provide an important future reference to comprehend the function of the PPC and allow effective paths forward to various studies using mice as a model system.
Affiliation(s)
- Sook Jin Son
- Laboratory of Neurophysiology, Korea Brain Research Institute, Daegu 41062, Korea
- Joon Ho Choi
- Laboratory of Neurophysiology, Korea Brain Research Institute, Daegu 41062, Korea
- Changkyu Lee
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Jong-Cheol Rah
- Laboratory of Neurophysiology, Korea Brain Research Institute, Daegu 41062, Korea
- Department of Brain and Cognitive Sciences, DGIST, Daegu 42988, Korea
11
Rowe EG, Tsuchiya N, Garrido MI. Detecting (Un)seen Change: The Neural Underpinnings of (Un)conscious Prediction Errors. Front Syst Neurosci 2020;14:541670. PMID: 33262694. PMCID: PMC7686547. DOI: 10.3389/fnsys.2020.541670.
Abstract
Detecting changes in the environment is fundamental for our survival. According to predictive coding theory, detecting these irregularities relies both on incoming sensory information and our top-down prior expectations (or internal generative models) about the world. Prediction errors (PEs), detectable in event-related potentials (ERPs), occur when there is a mismatch between the sensory input and our internal model (i.e., a surprise event). Many changes occurring in our environment are irrelevant for survival and may remain unseen. Such changes, even if subtle, can nevertheless be detected by the brain without emerging into consciousness. What remains unclear is how these changes are processed in the brain at the network level. Here, we used a visual oddball paradigm in which participants engaged in a central letter task during electroencephalographic (EEG) recordings while presented with task-irrelevant high- or low-coherence background, random-dot motion. Critically, once in a while, the direction of the dots changed. After the EEG session, we confirmed that changes in motion direction at high- and low-coherence were visible and invisible, respectively, using psychophysical measurements. ERP analyses revealed that changes in motion direction elicited PE regardless of the visibility, but with distinct spatiotemporal patterns. To understand these responses, we applied dynamic causal modeling (DCM) to the EEG data. Bayesian Model Averaging showed visible PE relied on a release from adaptation (repetition suppression) within bilateral MT+, whereas invisible PE relied on adaptation at bilateral V1 (and left MT+). Furthermore, while feedforward upregulation was present for invisible PE, the visible change PE also included downregulation of feedback between right MT+ to V1. Our findings reveal a complex interplay of modulation in the generative network models underlying visible and invisible motion changes.
Affiliation(s)
- Elise G. Rowe
- School of Psychological Sciences, Faculty of Medicine, Nursing and Health Sciences, Monash University, Clayton, VIC, Australia
- Turner Institute for Brain and Mental Health, Monash University, Clayton, VIC, Australia
- Queensland Brain Institute, The University of Queensland, Saint Lucia, QLD, Australia
- Centre for Advanced Imaging, The University of Queensland, Saint Lucia, QLD, Australia
- Naotsugu Tsuchiya
- School of Psychological Sciences, Faculty of Medicine, Nursing and Health Sciences, Monash University, Clayton, VIC, Australia
- Turner Institute for Brain and Mental Health, Monash University, Clayton, VIC, Australia
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita, Japan
- Advanced Telecommunications Research Computational Neuroscience Laboratories, Kyoto, Japan
- ARC Centre of Excellence for Integrative Brain Function, Clayton, VIC, Australia
- Marta I. Garrido
- Queensland Brain Institute, The University of Queensland, Saint Lucia, QLD, Australia
- Centre for Advanced Imaging, The University of Queensland, Saint Lucia, QLD, Australia
- ARC Centre of Excellence for Integrative Brain Function, Clayton, VIC, Australia
- Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, VIC, Australia
12
Flexible coding of object motion in multiple reference frames by parietal cortex neurons. Nat Neurosci 2020;23:1004-1015. PMID: 32541964. PMCID: PMC7474851. DOI: 10.1038/s41593-020-0656-0.
Abstract
Neurons represent spatial information in diverse reference frames, but it remains unclear whether neural reference frames change with task demands and whether these changes can account for behavior. We examined how neurons represent the direction of a moving object during self-motion, while monkeys switched, from trial to trial, between reporting object direction in head- and world-centered reference frames. Self-motion information is needed to compute object motion in world coordinates, but should be ignored when judging object motion in head coordinates. Neural responses in the ventral intraparietal area are modulated by the task reference frame, such that population activity represents object direction in either reference frame. In contrast, responses in the lateral portion of the medial superior temporal area primarily represent object motion in head coordinates. Our findings demonstrate a neural representation of object motion that changes with task requirements.
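The task logic can be sketched in a few lines under a simplified lateral-motion geometry (names and the geometry simplification are ours): in the head-centered task the self-motion term is ignored, while the world-centered report requires adding the observer's own velocity back to the head-centered object motion.

```python
import math

def object_direction_deg(object_vel_head, self_vel, frame="head"):
    """Direction (deg) of object motion in the requested reference frame.
    Simplified lateral geometry: world velocity = head-centered velocity
    + observer velocity; in the head frame, self-motion is ignored."""
    vx, vy = object_vel_head
    if frame == "world":
        vx += self_vel[0]
        vy += self_vel[1]
    return math.degrees(math.atan2(vy, vx))

# Object moves up-right relative to the head while the observer moves down:
head = object_direction_deg((1.0, 1.0), (0.0, -1.0), frame="head")    # 45.0
world = object_direction_deg((1.0, 1.0), (0.0, -1.0), frame="world")  # 0.0
```

The same retinal input thus maps to different reported directions depending on the task frame, which is the flexibility the VIP population is reported to express.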
13
Cavanagh P, Tse PU. The vector combination underlying the double-drift illusion is based on motion in world coordinates: Evidence from smooth pursuit. J Vis 2020;19:2. PMID: 31826247. DOI: 10.1167/19.14.2.
Abstract
If a Gabor pattern drifts in one direction while its internal texture drifts in the orthogonal direction, its perceived direction deviates strongly from its true direction and is instead some combination of its real external motion and its internal motion (Tse & Hsieh, 2006). In the first experiment, we confirm that, for the stimuli used in our experiment, the direction shifts on a gray background were explained by a vector combination of the internal and external motions whereas for the Gabor on a black background, we find no illusory shifts. These results suggest that the internal motion contributes to the perceived direction but only when the Gabor's positional uncertainty is high. Next, we test whether the vector combination is based on motions on the retina or motions in the world. When participants track a fixation point that moves in tandem with the Gabor, keeping it roughly stable on the retina, the illusion is undiminished. This finding indicates that the vector combination of internal and external motion that produces the double-drift illusion must happen after the eye movement signals have been factored into the stimulus motions to recover motions in the world, in particular, in areas V3A, V6, MSTd, and VIP.
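The vector combination tested in the first experiment can be written down directly. A minimal sketch, where the weight parameter is our shorthand for the positional-uncertainty dependence rather than a fitted value from the paper:

```python
import math

def perceived_direction_deg(external_vel, internal_vel, internal_weight=1.0):
    """Perceived drift direction (deg) as the direction of the vector sum
    of envelope (external) motion and weighted internal texture motion.
    internal_weight -> 0 recovers the veridical case (low positional
    uncertainty, e.g. a Gabor on a black background)."""
    vx = external_vel[0] + internal_weight * internal_vel[0]
    vy = external_vel[1] + internal_weight * internal_vel[1]
    return math.degrees(math.atan2(vy, vx))

# Envelope drifts upward; internal texture drifts rightward at equal speed:
illusory = perceived_direction_deg((0.0, 1.0), (1.0, 0.0), internal_weight=1.0)   # 45.0
veridical = perceived_direction_deg((0.0, 1.0), (1.0, 0.0), internal_weight=0.0)  # 90.0
```

The pursuit experiment then asks whether the two velocity vectors being summed are retinal or world velocities; the undiminished illusion under pursuit argues that the sum operates on world-coordinate motion.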
Affiliation(s)
- Patrick Cavanagh
- Department of Psychology, Glendon College, CVR York University, Toronto, ON, Canada
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Peter U Tse
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
14
Abstract
Smooth pursuit eye movements maintain the line of sight on smoothly moving targets. Although often studied as a response to sensory motion, pursuit anticipates changes in motion trajectories, thus reducing harmful consequences due to sensorimotor processing delays. Evidence for predictive pursuit includes (a) anticipatory smooth eye movements (ASEM) in the direction of expected future target motion that can be evoked by perceptual cues or by memory for recent motion, (b) pursuit during periods of target occlusion, and (c) improved accuracy of pursuit with self-generated or biologically realistic target motions. Predictive pursuit has been linked to neural activity in the frontal cortex and in sensory motion areas. As behavioral and neural evidence for predictive pursuit grows and statistically based models augment or replace linear systems approaches, pursuit is being regarded less as a reaction to immediate sensory motion and more as a predictive response, with retinal motion serving as one of a number of contributing cues.
Affiliation(s)
- Eileen Kowler
- Department of Psychology, Rutgers University, Piscataway, New Jersey 08854, USA
- Jason F Rubinstein
- Department of Psychology, Rutgers University, Piscataway, New Jersey 08854, USA
- Elio M Santos
- Department of Psychology, Rutgers University, Piscataway, New Jersey 08854, USA
- Current affiliation: Department of Psychology, State University of New York, College at Oneonta, Oneonta, New York 13820, USA
- Jie Wang
- Department of Psychology, Rutgers University, Piscataway, New Jersey 08854, USA
15
Sasaki R, Angelaki DE, DeAngelis GC. Processing of object motion and self-motion in the lateral subdivision of the medial superior temporal area in macaques. J Neurophysiol 2019;121:1207-1221. PMID: 30699042. DOI: 10.1152/jn.00497.2018.
Abstract
Multiple areas of macaque cortex are involved in visual motion processing, but their relative functional roles remain unclear. The medial superior temporal (MST) area is typically divided into lateral (MSTl) and dorsal (MSTd) subdivisions that are thought to be involved in processing object motion and self-motion, respectively. Whereas MSTd has been studied extensively with regard to processing visual and nonvisual self-motion cues, little is known about self-motion signals in MSTl, especially nonvisual signals. Moreover, little is known about how self-motion and object motion signals interact in MSTl and how this differs from interactions in MSTd. We compared the visual and vestibular heading tuning of neurons in MSTl and MSTd using identical stimuli. Our findings reveal that both visual and vestibular heading signals are weaker in MSTl than in MSTd, suggesting that MSTl is less well suited to participate in self-motion perception than MSTd. We also tested neurons in both areas with a variety of combinations of object motion and self-motion. Our findings reveal that vestibular signals improve the separability of coding of heading and object direction in both areas, albeit more strongly in MSTd due to the greater strength of vestibular signals. Based on a marginalization technique, population decoding reveals that heading and object direction can be more effectively dissociated from MSTd responses than MSTl responses. Our findings help to clarify the respective contributions that MSTl and MSTd make to processing of object motion and self-motion, although our conclusions may be somewhat specific to the multipart moving objects that we employed. NEW & NOTEWORTHY Retinal image motion reflects contributions from both the observer's self-motion and the movement of objects in the environment. The neural mechanisms by which the brain dissociates self-motion and object motion remain unclear. 
This study provides the first systematic examination of how the lateral subdivision of area MST (MSTl) contributes to dissociating object motion and self-motion. We also examine, for the first time, how MSTl neurons represent translational self-motion based on both vestibular and visual cues.
Affiliation(s)
- Ryo Sasaki
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York
- Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas; Department of Electrical and Computer Engineering, Rice University, Houston, Texas
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York

16
Schindler A, Bartels A. Human V6 Integrates Visual and Extra-Retinal Cues during Head-Induced Gaze Shifts. iScience 2018; 7:191-197. [PMID: 30267680] [PMCID: PMC6153141] [DOI: 10.1016/j.isci.2018.09.004]
Abstract
A key question in vision research concerns how the brain compensates for self-induced eye and head movements to form the world-centered, spatiotopic representations we perceive. Although human V3A and V6 integrate eye movements with vision, it is unclear which areas integrate head motion signals with visual retinotopic representations, as fMRI typically prevents execution of head movements. Here we examined whether human early visual areas V3A and V6 integrate these signals. A previously introduced paradigm allowed participants to move their heads during trials but stabilized the head during data acquisition, exploiting the delay between blood-oxygen-level-dependent (BOLD) and neural signals. Visual stimuli simulated either a stable environment or one with arbitrary head-coupled visual motion. Importantly, both conditions were matched in retinal and head motion. Contrasts revealed differential responses in human V6. Given the lack of vestibular responses in primate V6, these results suggest multimodal integration of visual signals with neck efference copy or proprioception in V6.
Highlights: setup with head-mounted goggles and head movement during fMRI; simulation of forward flow in a stable or unstable world during head rotation; human V6 integrates visual self-motion with head motion signals; likely mediated by efference copy or proprioception, as V6 lacks vestibular input.
Affiliation(s)
- Andreas Schindler
- Vision and Cognition Lab, Centre for Integrative Neuroscience, University of Tübingen, Otfried-Müller-Str. 25, Tübingen 72076, Germany; Department of Psychology, University of Tübingen, Tübingen 72076, Germany; Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany; Centre for Integrative Neuroscience & MEG Center, University of Tübingen, Tübingen 72076, Germany
- Andreas Bartels
- Vision and Cognition Lab, Centre for Integrative Neuroscience, University of Tübingen, Otfried-Müller-Str. 25, Tübingen 72076, Germany; Department of Psychology, University of Tübingen, Tübingen 72076, Germany; Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany

17
Nau M, Schindler A, Bartels A. Real-motion signals in human early visual cortex. Neuroimage 2018; 175:379-387. [PMID: 29649561] [DOI: 10.1016/j.neuroimage.2018.04.012]
Abstract
Eye movements induce visual motion that can complicate stable perception of the world. The visual system compensates for such self-induced visual motion by integrating visual input with efference copies of eye movement commands. This mechanism is central, as it not only supports perceptual stability but also mediates reliable perception of world-centered objective motion. In humans, it remains elusive whether visual motion responses in early retinotopic cortex are driven by objective motion or by the retinal motion associated with it. To address this question, we used fMRI to examine functional responses of sixteen visual areas to combinations of planar objective motion and pursuit eye movements. Observers were exposed to objective motion that was faster than, matched to, or slower than pursuit, allowing us to compare conditions that differed in objective motion velocity while retinal motion and eye movement signals were matched. Our results show that not only higher-level motion regions such as V3A and V6 but also early visual areas signaled the velocity of objective motion, hence the product of integrating retinal with non-retinal signals. These results shed new light on mechanisms that mediate perceptual stability and real-motion perception, and show that extra-retinal signals related to pursuit eye movements influence processing in human early visual cortex.
Affiliation(s)
- Matthias Nau
- Kavli Institute for Systems Neuroscience, Centre for Neural Computation, Trondheim, Norway; Egil & Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, Trondheim, Norway; Norwegian University of Science and Technology, Trondheim, Norway
- Andreas Schindler
- Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany; Department of Psychology, University of Tübingen, Tübingen, Germany; Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Andreas Bartels
- Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany; Department of Psychology, University of Tübingen, Tübingen, Germany; Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Bernstein Centre for Computational Neuroscience, Tübingen, Germany

18
Yang L, Gu Y. Distinct spatial coordinate of visual and vestibular heading signals in macaque FEFsem and MSTd. eLife 2017; 6. [PMID: 29134944] [PMCID: PMC5685470] [DOI: 10.7554/elife.29809]
Abstract
Precise heading estimation requires integration of visual optic flow and vestibular inertial motion, which originate in distinct spatial coordinates (eye- and head-centered, respectively). To explore whether the two heading signals may share a common reference frame along the hierarchy of cortical stages, we examined two multisensory areas in macaques: the smooth pursuit area of the frontal eye field (FEFsem), closer to the motor side, and the dorsal portion of the medial superior temporal area (MSTd), closer to the sensory side. In both areas, vestibular signals are head-centered, whereas visual signals are mainly eye-centered. However, visual signals in FEFsem are more shifted towards head coordinates than in MSTd. These results are robust, being largely independent of (1) smooth pursuit eye movements, (2) motion parallax cues, and (3) behavioral context for active heading estimation, indicating that visual and vestibular heading signals may be represented in distinct spatial coordinates in sensory cortices.
Affiliation(s)
- Lihua Yang
- Key Laboratory of Primate Neurobiology, Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China
- Yong Gu
- Key Laboratory of Primate Neurobiology, Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China

19
A Neural Model of MST and MT Explains Perceived Object Motion during Self-Motion. J Neurosci 2017; 36:8093-102. [PMID: 27488630] [DOI: 10.1523/jneurosci.4593-15.2016]
Abstract
When a moving object cuts in front of a moving observer at a 90° angle, the observer correctly perceives that the object is traveling along a perpendicular path, just as if viewing the moving object from a stationary vantage point. Although the observer's own (self-)motion affects the object's pattern of motion on the retina, the visual system is able to factor out the influence of self-motion and recover the world-relative motion of the object (Matsumiya and Ando, 2009). This is achieved by using information in global optic flow (Rushton and Warren, 2005; Warren and Rushton, 2009; Fajen and Matthis, 2013) and other sensory arrays (Dupin and Wexler, 2013; Fajen et al., 2013; Dokka et al., 2015) to estimate and deduct the component of the object's local retinal motion that is due to self-motion. However, this account (known as "flow parsing") is qualitative and does not shed light on mechanisms in the visual system that recover object motion during self-motion. We present a simple computational account that makes explicit possible mechanisms in visual cortex by which self-motion signals in the medial superior temporal area interact with object motion signals in the middle temporal area to transform object motion into a world-relative reference frame. The model (1) relies on two mechanisms (MST-MT feedback and disinhibition of opponent motion signals in MT) to explain existing data, (2) clarifies how pathways for self-motion and object-motion perception interact, and (3) unifies the existing flow parsing hypothesis with established neurophysiological mechanisms.
SIGNIFICANCE STATEMENT: To intercept targets, we must perceive the motion of objects that move independently from us as we move through the environment. Although our self-motion substantially alters the motion of objects on the retina, compelling evidence indicates that the visual system at least partially compensates for self-motion such that object motion relative to the stationary environment can be more accurately perceived. We have developed a model that sheds light on plausible mechanisms within the visual system that transform retinal motion into a world-relative reference frame. Our model reveals how local motion signals (generated through interactions within the middle temporal area) and global motion signals (feedback from the dorsal medial superior temporal area) contribute and offers a new hypothesis about the connection between pathways for heading and object motion perception.
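The flow-parsing computation summarized in this abstract (estimate the retinal flow caused by self-motion, then deduct it from an object's local retinal motion) can be sketched numerically. This is a minimal illustration under assumed conditions (a pinhole-camera translational flow model, no eye rotation, made-up numbers), not the published MST-MT model:

```python
# Sketch of flow parsing: an object's retinal motion is the sum of the
# flow caused by the observer's self-motion and the object's own
# world-relative motion, so subtracting an estimate of the self-motion
# flow recovers the object's world-relative motion.

def self_motion_flow(x, y, depth, tx, ty, tz, focal=1.0):
    """Optic-flow vector (u, v) at image point (x, y) caused purely by
    observer translation (tx, ty, tz); pinhole model, no rotation."""
    u = (-focal * tx + x * tz) / depth
    v = (-focal * ty + y * tz) / depth
    return u, v

def parse_object_motion(retinal_u, retinal_v, x, y, depth, tx, ty, tz):
    """Deduct the estimated self-motion component from local retinal motion."""
    su, sv = self_motion_flow(x, y, depth, tx, ty, tz)
    return retinal_u - su, retinal_v - sv

# Observer translates forward (tz > 0): a stationary point at (0.2, 0.0)
# and depth 2.0 streams outward on the retina.
su, sv = self_motion_flow(0.2, 0.0, 2.0, 0.0, 0.0, 1.0)
# A moving object adds its own world-relative motion (0.05, 0.0) on top.
ru, rv = su + 0.05, sv + 0.0
# Parsing the combined retinal motion recovers the world-relative motion.
ou, ov = parse_object_motion(ru, rv, 0.2, 0.0, 2.0, 0.0, 0.0, 1.0)
```

Here `ou, ov` returns the object's world-relative motion `(0.05, 0.0)` despite the self-motion component dominating the raw retinal signal.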
20
Abstract
Primates use two types of voluntary eye movements to track objects of interest: pursuit and saccades. Traditionally, these two eye movements have been viewed as distinct systems that are driven automatically by low-level visual inputs. However, two sets of findings argue for a new perspective on the control of voluntary eye movements. First, recent experiments have shown that pursuit and saccades are not controlled by entirely different neural pathways but are controlled by similar networks of cortical and subcortical regions and, in some cases, by the same neurons. Second, pursuit and saccades are not automatic responses to retinal inputs but are regulated by a process of target selection that involves a basic form of decision making. The selection process itself is guided by a variety of complex processes, including attention, perception, memory, and expectation. Together, these findings indicate that pursuit and saccades share a similar functional architecture. These points of similarity may hold the key for understanding how neural circuits negotiate the links between the many higher order functions that can influence behavior and the singular and coordinated motor actions that follow.
Affiliation(s)
- Richard J Krauzlis
- Systems Neurobiology Laboratory, Salk Institute for Biological Studies, 10010 North Torrey Pines Road, La Jolla, CA 92037, USA

21
Zhang M, Ma X, Qin B, Wang G, Guo Y, Xu Z, Wang Y, Li Y. Information fusion control with time delay for smooth pursuit eye movement. Physiol Rep 2016; 4(10):e12775. [PMID: 27230904] [PMCID: PMC4886162] [DOI: 10.14814/phy2.12775]
Abstract
Smooth pursuit eye movement depends on prediction and learning and is subject to time delays in the visual pathways. In this paper, an information fusion control method with time delay is presented, implementing smooth pursuit eye movement with prediction and learning while addressing the problem of time delays in the visual pathways. By fusing the soft constraint information of the target trajectory of the eyes and the ideal control strategy with the hard constraint information of the eye system state equation and the output equation, optimal estimates of the co-state sequence and the control variable are obtained. The proposed control method can track not only constant-velocity and sinusoidal target motion but also arbitrarily moving targets. Moreover, the absolute value of the retinal slip reaches steady state after 0.1 sec. The information fusion control method describes in a functional manner how the brain may deal with arbitrary target velocities and how it may implement smooth pursuit eye movement with prediction, learning, and time delays. These principles allowed us to accurately describe visually guided, predictive, and learning smooth pursuit dynamics observed in a wide variety of tasks within a single theoretical framework. The tracking control performance of the proposed information fusion control with time delays is verified by numerical simulation results.
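The core problem this abstract addresses (retinal slip reaching the controller only after a visual delay, so that pure feedback lags the target unless prediction is added) can be illustrated with a toy simulation. This is not the paper's information-fusion estimator; the first-order plant, gains, delay, and linear-extrapolation predictor below are all illustrative assumptions:

```python
import math

DT = 0.001     # simulation step, s
DELAY = 0.08   # assumed visual pathway delay, s
GAIN = 15.0    # retinal-slip feedback gain, 1/s

def rms_slip(predictive, duration=2.0):
    """RMS velocity error over the second half of a sinusoidal-tracking
    trial, with or without prediction across the visual delay."""
    n_delay = int(DELAY / DT)
    hist, eye_v, errs = [], 0.0, []
    for i in range(int(duration / DT)):
        t = i * DT
        target_v = 10.0 * math.sin(2 * math.pi * 0.5 * t)  # deg/s
        hist.append(target_v)
        # the controller only sees target motion that is DELAY seconds old
        delayed_v = hist[-n_delay - 1] if i > n_delay else 0.0
        if predictive and i > n_delay + 1:
            # extrapolate the delayed samples forward across the delay
            rate = (hist[-n_delay - 1] - hist[-n_delay - 2]) / DT
            estimate = delayed_v + rate * DELAY
        else:
            estimate = delayed_v
        # first-order pursuit plant driven by estimated retinal slip
        eye_v += GAIN * (estimate - eye_v) * DT
        if t > duration / 2:
            errs.append((target_v - eye_v) ** 2)
    return (sum(errs) / len(errs)) ** 0.5
```

Running `rms_slip(True)` versus `rms_slip(False)` shows that extrapolating across the delay substantially reduces steady-state slip, the qualitative point the abstract makes about prediction.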
Affiliation(s)
- Menghua Zhang
- School of Control Science and Engineering, Shandong University, Jinan, China
- Xin Ma
- School of Control Science and Engineering, Shandong University, Jinan, China
- Bin Qin
- School of Control Science and Engineering, Shandong University, Jinan, China
- Guangmao Wang
- School of Control Science and Engineering, Shandong University, Jinan, China
- Yanan Guo
- School of Control Science and Engineering, Shandong University, Jinan, China
- Zhigang Xu
- School of Life Science, Shandong University, Jinan, China
- Yafang Wang
- School of Computer Science and Technology, Shandong University, Jinan, China
- Yibin Li
- School of Control Science and Engineering, Shandong University, Jinan, China

22
Abstract
A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown perception depending on object or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low spatial resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.
23
Filimon F. Are All Spatial Reference Frames Egocentric? Reinterpreting Evidence for Allocentric, Object-Centered, or World-Centered Reference Frames. Front Hum Neurosci 2015; 9:648. [PMID: 26696861] [PMCID: PMC4673307] [DOI: 10.3389/fnhum.2015.00648]
Abstract
The use and neural representation of egocentric spatial reference frames is well-documented. In contrast, whether the brain represents spatial relationships between objects in allocentric, object-centered, or world-centered coordinates is debated. Here, I review behavioral, neuropsychological, neurophysiological (neuronal recording), and neuroimaging evidence for and against allocentric, object-centered, or world-centered spatial reference frames. Based on theoretical considerations, simulations, and empirical findings from spatial navigation, spatial judgments, and goal-directed movements, I suggest that all spatial representations may in fact be dependent on egocentric reference frames.
Affiliation(s)
- Flavia Filimon
- Adaptive Behavior and Cognition, Max Planck Institute for Human Development, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany

24
Ono S. The neuronal basis of on-line visual control in smooth pursuit eye movements. Vision Res 2014; 110:257-64. [PMID: 24995378] [DOI: 10.1016/j.visres.2014.06.008]
Abstract
Smooth pursuit eye movements allow us to maintain the image of a moving target on the fovea. Smooth pursuit consists of separate phases such as initiation and steady-state. These two phases are supported by different visual-motor mechanisms in cortical areas including the middle temporal (MT), the medial superior temporal (MST) areas and the frontal eye field (FEF). Retinal motion signals are responsible for beginning the process of pursuit initiation, whereas extraretinal signals play a role in maintaining tracking speed. Smooth pursuit often requires on-line gain adjustments during tracking in response to a sudden change in target motion. For example, a brief sinusoidal perturbation of target motion induces a corresponding perturbation of eye motion. Interestingly, the perturbation ocular response is enhanced when baseline pursuit velocity is higher, even though the stimulus frequency and amplitude are constant. This on-line gain control mechanism is not simply due to visually driven activity of cortical neurons. Visual and pursuit signals are primarily processed in cortical MT/MST and the magnitude of perturbation responses could be regulated by the internal gain parameter in FEF. Furthermore, the magnitude and the gain slope of perturbation responses are altered by smooth pursuit adaptation using repeated trials of a step-ramp tracking with two different velocities (double-velocity paradigm). Therefore, smooth pursuit adaptation, which is attributed to the cerebellar plasticity mechanism, could affect the on-line gain control mechanism.
Affiliation(s)
- Seiji Ono
- Department of Ophthalmology, Washington National Primate Research Center, University of Washington, Seattle, WA 98195, United States

25
Leclercq G, Blohm G, Lefèvre P. Accounting for direction and speed of eye motion in planning visually guided manual tracking. J Neurophysiol 2013; 110:1945-57. [DOI: 10.1152/jn.00130.2013]
Abstract
Accurate motor planning in a dynamic environment is a critical skill for humans because we are often required to react quickly and adequately to the visual motion of objects. Moreover, we are often in motion ourselves, and this complicates motor planning. Indeed, the retinal and spatial motions of an object are different because of the retinal motion component induced by self-motion. Many studies have investigated motion perception during smooth pursuit and concluded that eye velocity is partially taken into account by the brain. Here we investigate whether the eye velocity during ongoing smooth pursuit is taken into account for the planning of visually guided manual tracking. We had 10 human participants manually track a target while in steady-state smooth pursuit toward another target such that the difference between the retinal and spatial target motion directions could be large, depending on both the direction and the speed of the eye. We used a measure of initial arm movement direction to quantify whether motor planning occurred in retinal coordinates (not accounting for eye motion) or was spatially correct (incorporating eye velocity). Results showed that the eye velocity was nearly fully taken into account by the neuronal areas involved in the visuomotor velocity transformation (between 75% and 102%). In particular, these neuronal pathways accounted for the nonlinear effects due to the relative velocity between the target and the eye. In conclusion, the brain network transforming visual motion into a motor plan for manual tracking adequately uses extraretinal signals about eye velocity.
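The transformation this study quantifies (a spatially correct motor plan must add eye velocity back to the target's retinal velocity) reduces to simple vector addition. A minimal sketch, with a compensation gain to mimic the 75-102% range reported; the geometry and numbers are illustrative assumptions, not the study's stimuli:

```python
import math

def planned_direction(retinal_v, eye_v, compensation=1.0):
    """Direction (deg, counterclockwise from rightward) of the planned
    movement, given 2-D retinal target velocity and eye velocity (deg/s).
    compensation = 1.0 fully incorporates eye velocity; 0.0 ignores it."""
    sx = retinal_v[0] + compensation * eye_v[0]
    sy = retinal_v[1] + compensation * eye_v[1]
    return math.degrees(math.atan2(sy, sx))

# Target moves straight up in space at (0, 8) deg/s while the eye pursues
# rightward at (6, 0) deg/s, so on the retina the target moves up-left.
retinal = (-6.0, 8.0)
eye = (6.0, 0.0)
full_comp = planned_direction(retinal, eye, compensation=1.0)  # spatial plan
no_comp = planned_direction(retinal, eye, compensation=0.0)    # retinal plan
```

With full compensation the plan points straight up (90°, spatially correct); with none it points up-left (about 127°), illustrating how large the direction error can be when eye velocity is ignored.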
Affiliation(s)
- Guillaume Leclercq
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Institute of Neuroscience (IoNS), Université catholique de Louvain, Brussels, Belgium
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada
- Philippe Lefèvre
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Institute of Neuroscience (IoNS), Université catholique de Louvain, Brussels, Belgium

26
Dunkley BT, Freeman TC, Muthukumaraswamy SD, Singh KD. Cortical oscillatory changes in human middle temporal cortex underlying smooth pursuit eye movements. Hum Brain Mapp 2013; 34:837-51. [PMID: 22110021] [PMCID: PMC6869956] [DOI: 10.1002/hbm.21478]
Abstract
Extra-striate regions are thought to receive non-retinal signals from the pursuit system to maintain perceptual stability during eye movements. Here, we used magnetoencephalography (MEG) to study changes in oscillatory power related to smooth pursuit in extra-striate visual areas under three conditions: 'pursuit' of a small target, 'retinal motion' of a large background and 'pursuit + retinal motion' combined. All stimuli moved sinusoidally. MEG source reconstruction was performed using synthetic aperture magnetometry. Broadband alpha-beta suppression (5-25 Hz) was observed over bilateral extra-striate cortex (consistent with middle temporal cortex (MT+)) during all conditions. A functional magnetic resonance imaging study using the same experimental protocols confirmed an MT+ localisation of this extra-striate response. The alpha-beta envelope power in the 'pursuit' condition showed a hemifield-dependent eye-position signal, such that the global minimum in the alpha-beta suppression recorded in extra-striate cortex was greatest when the eyes were at maximum contralateral eccentricity. The 'retinal motion' condition produced sustained alpha-beta power decreases for the duration of stimulus motion, while the 'pursuit + retinal motion' condition revealed a double-dip 'W' shaped alpha-beta envelope profile with the peak suppression contiguous with eye position when at opposing maximum eccentricity. These results suggest that MT+ receives retinal as well as extra-retinal signals from the pursuit system as part of the process that enables the visual system to compensate for retinal motion during eye movement. We speculate that the suppression of the alpha-beta rhythm reflects either the integration of an eye position-dependent signal or one that lags the peak velocity of the sinusoidally moving target.
Affiliation(s)
- Benjamin T. Dunkley
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Park Place, Cardiff, United Kingdom
- Tom C.A. Freeman
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Park Place, Cardiff, United Kingdom
- Suresh D. Muthukumaraswamy
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Park Place, Cardiff, United Kingdom
- Krish D. Singh
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Park Place, Cardiff, United Kingdom

27
Fukushima K, Fukushima J, Warabi T, Barnes GR. Cognitive processes involved in smooth pursuit eye movements: behavioral evidence, neural substrate and clinical correlation. Front Syst Neurosci 2013; 7:4. [PMID: 23515488] [PMCID: PMC3601599] [DOI: 10.3389/fnsys.2013.00004]
Abstract
Smooth-pursuit eye movements allow primates to track moving objects. Efficient pursuit requires appropriate target selection and predictive compensation for inherent processing delays. Prediction depends on expectation of future object motion, storage of motion information and use of extra-retinal mechanisms in addition to visual feedback. We present behavioral evidence of how cognitive processes are involved in predictive pursuit in normal humans and then describe neuronal responses in monkeys and behavioral responses in patients using a new technique to test these cognitive controls. The new technique examines the neural substrate of working memory and movement preparation for predictive pursuit by using a memory-based task in macaque monkeys trained to pursue (go) or not pursue (no-go) according to a go/no-go cue, in a direction based on memory of a previously presented visual motion display. Single-unit task-related neuronal activity was examined in medial superior temporal cortex (MST), supplementary eye fields (SEF), caudal frontal eye fields (FEF), cerebellar dorsal vermis lobules VI–VII, caudal fastigial nuclei (cFN), and floccular region. Neuronal activity reflecting working memory of visual motion direction and go/no-go selection was found predominantly in SEF, cerebellar dorsal vermis and cFN, whereas movement preparation related signals were found predominantly in caudal FEF and the same cerebellar areas. Chemical inactivation produced effects consistent with differences in signals represented in each area. When applied to patients with Parkinson's disease (PD), the task revealed deficits in movement preparation but not working memory. In contrast, patients with frontal cortical or cerebellar dysfunction had high error rates, suggesting impaired working memory. 
We show how neuronal activity may be explained by models of retinal and extra-retinal interaction in target selection and predictive control and thus aid understanding of underlying pathophysiology.
Affiliation(s)
- Kikuro Fukushima
- Department of Neurology, Sapporo Yamanoue Hospital, Sapporo, Japan; Department of Physiology, Hokkaido University School of Medicine, Sapporo, Japan

28
Abstract
How our perceptual experience of the world remains stable and continuous despite frequent repositioning eye movements remains very much a mystery. One possibility is that our brain actively constructs a spatiotopic representation of the world, anchored in external (or at least head-centred) coordinates. In this study, we show that the positional motion aftereffect (the change in apparent position after adaptation to motion) is spatially selective in external rather than retinal coordinates, whereas the classic motion aftereffect (the illusion of motion after prolonged inspection of a moving source) is selective in retinotopic coordinates. The results provide clear evidence for a spatiotopic map in humans: one which can be influenced by image motion.
Affiliation(s)
- Marco Turi
- Department of Physiological Sciences, Università Degli Studi di Pisa, Via S. Zeno 31, Pisa, Italy

29
Fischer E, Bülthoff HH, Logothetis NK, Bartels A. Human areas V3A and V6 compensate for self-induced planar visual motion. Neuron 2012; 73:1228-40. [PMID: 22445349] [DOI: 10.1016/j.neuron.2012.01.022]
Abstract
Little is known about mechanisms mediating a stable perception of the world during pursuit eye movements. Here, we used fMRI to determine to what extent human motion-responsive areas integrate planar retinal motion with nonretinal eye movement signals in order to discard self-induced planar retinal motion and to respond to objective ("real") motion. In contrast to other areas, V3A lacked responses to self-induced planar retinal motion but responded strongly to head-centered motion, even when retinally canceled by pursuit. This indicates a near-complete multimodal integration of visual with nonvisual planar motion signals in V3A. V3A could be mapped selectively and robustly in every single subject on this basis. V6 also reported head-centered planar motion, even when 3D flow was added to it, but was suppressed by retinal planar motion. These findings suggest a dominant contribution of human areas V3A and V6 to head-centered motion perception and to perceptual stability during eye movements.
Affiliation(s)
- Elvira Fischer
- Vision and Cognition Lab, Centre of Integrative Neuroscience, University of Tübingen, 72076 Tübingen, Germany
30.
Furman M, Gur M. And yet it moves: Perceptual illusions and neural mechanisms of pursuit compensation during smooth pursuit eye movements. Neurosci Biobehav Rev 2012; 36:143-51. [DOI: 10.1016/j.neubiorev.2011.05.005] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2010] [Revised: 05/02/2011] [Accepted: 05/11/2011] [Indexed: 10/18/2022]
31.
Kurkin S, Akao T, Shichinohe N, Fukushima J, Fukushima K. Neuronal activity in medial superior temporal area (MST) during memory-based smooth pursuit eye movements in monkeys. Exp Brain Res 2011; 214:293-301. [PMID: 21837438 PMCID: PMC3174374 DOI: 10.1007/s00221-011-2825-6] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2010] [Accepted: 08/02/2011] [Indexed: 11/26/2022]
Abstract
We recently examined neuronal substrates for predictive pursuit using a memory-based smooth pursuit task that distinguishes the discharge related to memory of visual motion-direction from that related to movement preparation. We found that the supplementary eye fields (SEF) contain separate signals coding memory and assessment of visual motion-direction, decision not-to-pursue, and preparation for pursuit. Since medial superior temporal area (MST) is essential for visual motion processing and projects to SEF, we examined whether MST carried similar signals. We analyzed the discharge of 108 MSTd neurons responding to visual motion stimuli. The majority (69/108 = 64%) were also modulated during smooth pursuit. However, in nearly all (104/108 = 96%) of the MSTd neurons tested, there was no significant discharge modulation during the delay periods that required memory of visual motion-direction or preparation for smooth pursuit or not-to-pursue. Only 4 neurons of the 108 (4%) exhibited significantly higher discharge rates during the delay periods; however, their responses were non-directional and not instruction specific. Representative signals in the MSTd clearly differed from those in the SEF during memory-based smooth pursuit. MSTd neurons are unlikely to provide signals for memory of visual motion-direction or preparation for smooth pursuit eye movements.
Affiliation(s)
- Sergei Kurkin
- Department of Physiology, School of Medicine, Hokkaido University, Sapporo, Japan
- Teppei Akao
- Department of Physiology, School of Medicine, Hokkaido University, Sapporo, Japan
- Present Address: Department of Physiology, Asahikawa Medical College, Midorigaoka, Asahikawa, Hokkaido, 078-8510 Japan
- Natsuko Shichinohe
- Department of Physiology, School of Medicine, Hokkaido University, Sapporo, Japan
- Department of Ophthalmology, School of Medicine, Hokkaido University, Sapporo, Japan
- Junko Fukushima
- Faculty of Health Sciences, Hokkaido University, Sapporo, Japan
- Kikuro Fukushima
- Department of Physiology, School of Medicine, Hokkaido University, Sapporo, Japan
- Faculty of Health Sciences, Hokkaido University, Sapporo, Japan
- Clinical Brain Research Laboratory, Department of Neurology, Toyokura Memorial Hall, Sapporo Yamanoue Hospital, Yamanote 6-9-1-1, Nishiku, Sapporo, 063-0006 Japan
32.
Crespi S, Biagi L, d'Avossa G, Burr DC, Tosetti M, Morrone MC. Spatiotopic coding of BOLD signal in human visual cortex depends on spatial attention. PLoS One 2011; 6:e21661. [PMID: 21750720 PMCID: PMC3131281 DOI: 10.1371/journal.pone.0021661] [Citation(s) in RCA: 65] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2011] [Accepted: 06/05/2011] [Indexed: 11/19/2022] Open
Abstract
The neural substrate of the phenomenological experience of a stable visual world remains obscure. One possible mechanism would be to construct spatiotopic neural maps where the response is selective to the position of the stimulus in external space, rather than to retinal eccentricities, but evidence for these maps has been inconsistent. Here we show, with fMRI, that when human subjects concomitantly perform a demanding attentional task on stimuli displayed at the fovea, BOLD responses evoked by moving stimuli irrelevant to the task were mostly tuned in retinotopic coordinates. However, under more unconstrained conditions, where subjects could attend easily to the motion stimuli, BOLD responses were tuned not in retinal but in external coordinates (spatiotopic selectivity) in many visual areas, including MT, MST, LO and V6, agreeing with our previous fMRI study. These results indicate that spatial attention may play an important role in mediating spatiotopic selectivity.
Affiliation(s)
- Sofia Crespi
- Department of Psychology, Università Degli Studi di Firenze, Florence, Italy
- Department of Psychology, Università Vita-Salute San Raffaele, Milan, Italy
- Laura Biagi
- Fondazione Stella Maris, Calambrone, Pisa, Italy
- Giovanni d'Avossa
- School of Psychology, Adeilad Brigantia, Bangor University, Bangor, United Kingdom
- David C. Burr
- Department of Psychology, Università Degli Studi di Firenze, Florence, Italy
- Istituto di Neuroscienze, CNR, Pisa, Italy
- Maria Concetta Morrone
- Department of Physiological Sciences, University of Pisa, Pisa, Italy
- Department of Robotic, Brain and Cognitive Sciences, Istituto Italiano di Tecnologia, Genova, Italy
33.
Adjacent visual representations of self-motion in different reference frames. Proc Natl Acad Sci U S A 2011; 108:11668-73. [PMID: 21709244 DOI: 10.1073/pnas.1102984108] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Recent investigations indicate that retinal motion is not directly available for perception when moving around [Souman JL, et al. (2010) J Vis 10:14], possibly pointing to suppression of retinal speed sensitivity in motion areas. Here, we investigated the distribution of retinocentric and head-centric representations of self-rotation in human lower-tier visual motion areas. Functional MRI responses were measured to a set of visual self-motion stimuli with different levels of simulated gaze and simulated head rotation. A parametric generalized linear model analysis of the blood oxygen level-dependent responses revealed subregions of accessory V3 area, V6(+) area, middle temporal area, and medial superior temporal area that were specifically modulated by the speed of the rotational flow relative to the eye and head. Pursuit signals, which link the two reference frames, were also identified in these areas. To our knowledge, these results are the first demonstration of multiple visual representations of self-motion in these areas. The existence of such adjacent representations points to early transformations of the reference frame for visual self-motion signals and a topography by visual reference frame in lower-order motion-sensitive areas. This suggests that visual decisions for action and perception may take into account retinal and head-centric motion signals according to task requirements.
34.
Brostek L, Eggert T, Ono S, Mustari MJ, Büttner U, Glasauer S. An information-theoretic approach for evaluating probabilistic tuning functions of single neurons. Front Comput Neurosci 2011; 5:15. [PMID: 21503137 PMCID: PMC3071493 DOI: 10.3389/fncom.2011.00015] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2010] [Accepted: 03/07/2011] [Indexed: 11/13/2022] Open
Abstract
Neuronal tuning functions can be expressed by the conditional probability of observing a spike given any combination of explanatory variables. However, accurately determining such probabilistic tuning functions from experimental data poses several challenges such as finding the right combination of explanatory variables and determining their proper neuronal latencies. Here we present a novel approach of estimating and evaluating such probabilistic tuning functions, which offers a solution for these problems. By maximizing the mutual information between the probability distributions of spike occurrence and the variables, their neuronal latency can be estimated, and the dependence of neuronal activity on different combinations of variables can be measured. This method was used to analyze neuronal activity in cortical area MSTd in terms of dependence on signals related to eye and retinal image movement. Comparison with conventional feature detection and regression analysis techniques shows that our method offers distinct advantages, if the dependence does not match the regression model.
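The latency-estimation step described in this abstract can be sketched numerically: discretize the spike train and the candidate explanatory variable, compute their mutual information at each candidate lag, and keep the lag that maximizes it. A minimal illustration, not the authors' implementation; the function names are invented and the histogram-based MI estimator is a simplification of their probabilistic framework:

```python
import numpy as np

def mutual_information(spikes, variable, n_bins=8):
    """Histogram estimate of MI (bits) between a binary spike train
    and a discretized explanatory variable."""
    edges = np.linspace(variable.min(), variable.max(), n_bins + 1)[1:-1]
    v = np.digitize(variable, edges)          # categories 0..n_bins-1
    joint = np.zeros((2, n_bins))
    for s, b in zip(spikes.astype(int), v):
        joint[s, b] += 1
    joint /= joint.sum()
    ps = joint.sum(axis=1, keepdims=True)     # spike marginal
    pv = joint.sum(axis=0, keepdims=True)     # variable marginal
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = joint * np.log2(joint / (ps * pv))
    return float(np.nansum(terms))

def best_latency(spikes, variable, max_lag=20):
    """Lag (in bins) maximizing MI between spikes and the lagged variable."""
    mi = [mutual_information(spikes[lag:], variable[:len(variable) - lag])
          for lag in range(max_lag + 1)]
    return int(np.argmax(mi))
```

On synthetic data in which spiking depends on the variable five bins earlier, the MI curve peaks at lag 5, which is the sense in which maximizing MI recovers neuronal latency.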
Affiliation(s)
- Lukas Brostek
- Clinical Neurosciences, Ludwig-Maximilians-Universität München, Munich, Germany
35.
Abstract
We apply functional magnetic resonance imaging and multivariate analysis methods to study the coordinate frame in which saccades are represented in the human cortex. Subjects performed a memory-guided saccade task in which equal-amplitude eye movements were executed from several starting points to various directions. Response patterns during the memory period for same-vector saccades were correlated in the frontal eye fields and the intraparietal sulcus (IPS), indicating a retinotopic representation. Interestingly, response patterns in the middle aspect of the IPS were also correlated for saccades made to the same destination point, even when their movement vector was different. Thus, this region also contains information about saccade destination in (at least) a head-centered coordinate frame. This finding may explain behavioral and neuropsychological studies demonstrating that eye movements are also anchored to an egocentric or an allocentric representation of space rather than strictly to the retinal visual input and that parietal cortex is involved in maintaining these representations of space.
36.
Braun DI, Schütz AC, Gegenfurtner KR. Localization of speed differences of context stimuli during fixation and smooth pursuit eye movements. Vision Res 2010; 50:2740-9. [DOI: 10.1016/j.visres.2010.07.028] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2010] [Revised: 07/15/2010] [Accepted: 07/27/2010] [Indexed: 10/19/2022]
37.
Schütz AC, Braun DI, Movshon JA, Gegenfurtner KR. Does the noise matter? Effects of different kinematogram types on smooth pursuit eye movements and perception. J Vis 2010; 10:26. [PMID: 21149307 DOI: 10.1167/10.13.26] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
We investigated how the human visual system and the pursuit system react to visual motion noise. We presented three different types of random-dot kinematograms at five different coherence levels. For transparent motion, the signal and noise labels on each dot were preserved throughout each trial, and noise dots moved with the same speed as the signal dots but in fixed random directions. For white noise motion, every 20 ms the signal and noise labels were randomly assigned to each dot and noise dots appeared at random positions. For Brownian motion, signal and noise labels were also randomly assigned, but the noise dots moved at the signal speed in a direction that varied randomly from moment to moment. Neither pursuit latency nor early eye acceleration differed among the different types of kinematograms. Late acceleration, pursuit gain, and perceived speed all depended on kinematogram type, with good agreement between pursuit gain and perceived speed. For transparent motion, pursuit gain and perceived speed were independent of coherence level. For white and Brownian motions, pursuit gain and perceived speed increased with coherence but were higher for white than for Brownian motion. This suggests that under our conditions, the pursuit system integrates across all directions of motion but not across all speeds.
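The three kinematogram update rules described above can be made concrete in a few lines. A hedged sketch (the function name is invented; dot positions and the 20 ms relabeling interval are omitted, and "white" noise velocities are marked undefined because those dots are replotted at random positions rather than displaced):

```python
import numpy as np

def kinematogram_velocities(n_dots, n_frames, coherence, kind, rng,
                            signal_dir=0.0, speed=1.0):
    """Per-frame dot velocities (n_frames, n_dots, 2) for three noise types:
    'transparent': signal/noise labels and noise directions fixed per trial
    'white':       labels redrawn each frame; noise dots replotted (no velocity)
    'brownian':    labels redrawn each frame; noise dots move at signal speed
                   in a fresh random direction every frame"""
    sig = speed * np.array([np.cos(signal_dir), np.sin(signal_dir)])
    labels = rng.random(n_dots) < coherence          # fixed for 'transparent'
    dirs = rng.uniform(0.0, 2.0 * np.pi, n_dots)
    vel = np.empty((n_frames, n_dots, 2))
    for t in range(n_frames):
        if kind != 'transparent':                    # relabel every frame
            labels = rng.random(n_dots) < coherence
            dirs = rng.uniform(0.0, 2.0 * np.pi, n_dots)
        noise = speed * np.stack([np.cos(dirs), np.sin(dirs)], axis=1)
        if kind == 'white':                          # replotted, not displaced
            noise[:] = np.nan
        vel[t] = np.where(labels[:, None], sig, noise)
    return vel
```

The sketch makes the key contrast visible: transparent motion carries a single fixed speed per dot across frames, while Brownian motion keeps all dots at the signal speed but scrambles noise directions frame by frame.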
Affiliation(s)
- Alexander C Schütz
- Abteilung Allgemeine Psychologie, Justus-Liebig-Universität, Giessen, Germany.
38.
Abstract
We perceive the world around us as stable. This is remarkable given that our body parts as well as we ourselves are constantly in motion. Humans and other primates move their eyes more often than their hearts beat. Such eye movements lead to coherent motion of the images of the outside world across the retina. Furthermore, during everyday life, we constantly approach targets, avoid obstacles or otherwise move in space. These movements induce motion across different sensory receptor epithelia: optical flow across the retina, tactile flow across the body surface and even auditory flow as detected from the two ears. It is generally assumed that motion signals as induced by one's own movement have to be identified and differentiated from the real motion in the outside world. In a number of experimental studies we and others have functionally characterized the primate posterior parietal cortex (PPC) and its role in multisensory encoding of spatial and motion information. Extracellular recordings in the macaque monkey showed that during steady fixation the visual, auditory and tactile spatial representations in the ventral intraparietal area (VIP) are congruent. This finding was of major importance given that a functional MRI (fMRI) study determined the functional equivalent of macaque area VIP in humans. Further recordings in other areas of the dorsal stream of the visual cortical system of the macaque pointed towards the neural basis of perceptual phenomena (heading detection during eye movements, saccadic suppression, mislocalization of visual stimuli during eye movements) as determined in psychophysical studies in humans.
Affiliation(s)
- Frank Bremmer
- Department of Neurophysics, Philipps-University Marburg, Karl-v-Frisch-Str. 8a, D-35032 Marburg, Germany.
39.
Blohm G, Lefèvre P. Visuomotor Velocity Transformations for Smooth Pursuit Eye Movements. J Neurophysiol 2010; 104:2103-15. [PMID: 20719930 DOI: 10.1152/jn.00728.2009] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Smooth pursuit eye movements are driven by retinal motion signals. These retinal motion signals are converted into motor commands that obey Listing's law (i.e., no accumulation of ocular torsion). The fact that smooth pursuit follows Listing's law is often taken as evidence that no explicit reference frame transformation between the retinal velocity input and the head-centered motor command is required. Such eye-position-dependent reference frame transformations between eye- and head-centered coordinates have been well-described for saccades to static targets. Here we suggest that such an eye (and head)-position-dependent reference frame transformation is also required for target motion (i.e., velocity) driving smooth pursuit eye movements. Therefore we tested smooth pursuit initiation under different three-dimensional eye positions and compared human performance to model simulations. We specifically tested if the ocular rotation axis changed with vertical eye position, if the misalignment of the spatial and retinal axes during oblique fixations was taken into account, and if ocular torsion (due to head roll) was compensated for. If no eye-position-dependent velocity transformation was used, the pursuit initiation should follow the retinal direction, independently of eye position; in contrast, a correct visuomotor velocity transformation would result in spatially correct pursuit initiation. Overall subjects accounted for all three components of the visuomotor velocity transformation, but we did observe differences in the compensatory gains between individual subjects. We concluded that the brain does perform a visuomotor velocity transformation but that this transformation was prone to noise and inaccuracies of the internal model.
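The eye-position-dependent velocity transformation argued for here can be illustrated in its simplest case: when the eye is torted (e.g. by head roll), a retinal velocity vector must be counter-rotated to yield a spatially correct pursuit command. A 2-D simplification of the full 3-D model (invented function name; the sign convention is chosen for illustration, and the actual model also handles vertical and oblique eye positions under Listing's law):

```python
import numpy as np

def spatial_pursuit_drive(retinal_vel, torsion_deg):
    """Rotate a 2-D retinal velocity vector by the current ocular torsion
    so the pursuit command is correct in head-fixed (spatial) coordinates.
    Without this step, pursuit initiation would follow the retinal
    direction regardless of eye orientation."""
    a = np.deg2rad(torsion_deg)
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    return rot @ np.asarray(retinal_vel, dtype=float)
```

For example, with an (exaggerated) 90° torsion, a rightward retinal slip maps onto an upward spatial drive; the no-transformation hypothesis would instead predict rightward pursuit initiation.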
Affiliation(s)
- Gunnar Blohm
- Centre for Neuroscience Studies, Department of Physiology and Faculty of Arts and Science, Queen's University, Kingston, Ontario, Canada
- Centre for Systems Engineering and Applied Mechanics and Institute of Neuroscience, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
- Philippe Lefèvre
- Centre for Systems Engineering and Applied Mechanics and Institute of Neuroscience, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
40.
O'Connor E, Margrain TH, Freeman TCA. Age, eye movement and motion discrimination. Vision Res 2010; 50:2588-99. [PMID: 20732343 DOI: 10.1016/j.visres.2010.08.015] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2010] [Revised: 08/12/2010] [Accepted: 08/13/2010] [Indexed: 11/27/2022]
Abstract
Age is known to affect sensitivity to retinal motion. However, little is known about how age might affect sensitivity to motion during pursuit. We therefore investigated direction discrimination and speed discrimination when moving stimuli were either fixated or pursued. Our experiments showed: (1) age influences direction discrimination at slow speeds but has little effect on speed discrimination; (2) the faster eye movements made in the pursuit conditions produced poorer direction discrimination at slower speeds, and poorer speed discrimination at all speeds; (3) regardless of eye-movement condition, observers always combined retinal and extra-retinal motion signals to make their judgements. Our results support the idea that performance in these tasks is limited by the internal noise associated with retinal and extra-retinal motion signals, both of which feed into a stage responsible for estimating head-centred motion. Imprecise eye movement, or later noise introduced at the combination stage, could not explain the results.
Affiliation(s)
- Emer O'Connor
- School of Psychology, Cardiff University, Tower Building, Park Place, Cardiff CF10 3YT, UK
41.
Multisensory integration: resolving sensory ambiguities to build novel representations. Curr Opin Neurobiol 2010; 20:353-60. [PMID: 20471245 DOI: 10.1016/j.conb.2010.04.009] [Citation(s) in RCA: 67] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2010] [Revised: 04/10/2010] [Accepted: 04/14/2010] [Indexed: 11/19/2022]
Abstract
Multisensory integration plays several important roles in the nervous system. One is to combine information from multiple complementary cues to improve stimulus detection and discrimination. Another is to resolve peripheral sensory ambiguities and create novel internal representations that do not exist at the level of individual sensors. Here we focus on how ambiguities inherent in vestibular, proprioceptive and visual signals are resolved to create behaviorally useful internal estimates of our self-motion. We review recent studies that have shed new light on the nature of these estimates and how multiple, but individually ambiguous, sensory signals are processed and combined to compute them. We emphasize the need to combine experiments with theoretical insights to understand the transformations that are being performed.
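The first role named here, combining complementary cues to improve detection and discrimination, is often formalized as inverse-variance (reliability-weighted) fusion. A generic textbook sketch, not code from this review (function name invented, values illustrative):

```python
def combine_cues(estimates, variances):
    """Inverse-variance-weighted fusion of independent cues.
    The fused variance is smaller than any single cue's variance,
    which is the sense in which integration improves discrimination."""
    weights = [1.0 / v for v in variances]
    fused = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var
```

For instance, hypothetical visual and vestibular heading estimates of 10° and 14°, each with variance 4, fuse to 12° with variance 2, i.e. the combined estimate is twice as reliable as either cue alone.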
42.
Zambrano D, Falotico E, Manfredi L, Laschi C. A model of the smooth pursuit eye movement with prediction and learning. Appl Bionics Biomech 2010. [DOI: 10.1080/11762321003760944] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022] Open
43.
Fujiwara K, Akao T, Kurkin S, Fukushima K. Activity of pursuit-related neurons in medial superior temporal area (MST) during static roll-tilt. Cereb Cortex 2010; 21:155-65. [PMID: 20421248 PMCID: PMC3000568 DOI: 10.1093/cercor/bhq072] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/04/2022] Open
Abstract
Recent studies have shown that rhesus macaques can perceive visual motion direction in earth-centered coordinates as accurately as humans. We tested whether coordinate frames representing smooth pursuit and/or visual motion signals in medial superior temporal area (MST) are earth centered to better understand its role in coordinating smooth pursuit. In 2 Japanese macaques, we compared preferred directions (re monkeys' head–trunk axis) of pursuit and/or visual motion responses of MSTd neurons while upright and during static whole-body roll-tilt. In the majority (41/51 = 80%) of neurons tested, preferred directions of pursuit and/or visual motion responses were not significantly different while upright and during 40° static roll-tilt. Preferred directions of the remaining 20% of neurons (n = 10) were shifted beyond the range expected from ocular counter-rolling; the maximum shift was 14°, and the mean shift was 12°. These shifts, however, were still less than half of the expected shift if MST signals are coded in the earth-centered coordinates. Virtually, all tested neurons (44/46 = 96%) failed to exhibit a significant difference between resting discharge rate while upright and during static roll-tilt while fixating a stationary spot. These results suggest that smooth pursuit and/or visual motion signals of MST neurons are not coded in the earth-centered coordinates; our results favor the head- and/or trunk-centered coordinates.
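The preferred-direction comparison at the heart of this study can be illustrated with a vector-average estimate from a cosine-tuned response, a generic sketch rather than the authors' analysis (function name invented; tuning parameters are made-up illustrative values):

```python
import numpy as np

def preferred_direction(directions_deg, rates):
    """Vector-average preferred direction (deg) from firing rates
    measured at evenly spaced motion directions."""
    th = np.deg2rad(directions_deg)
    x = np.sum(rates * np.cos(th))
    y = np.sum(rates * np.sin(th))
    return float(np.rad2deg(np.arctan2(y, x)) % 360.0)

# hypothetical cosine-tuned cell preferring 60 deg, probed at 8 directions
dirs = np.arange(0, 360, 45)
rates = 10.0 + 5.0 * np.cos(np.deg2rad(dirs - 60))
```

Comparing such an estimate upright versus during static roll-tilt is, in essence, the shift measurement reported above: an earth-centered code would predict the preferred direction (in head coordinates) to rotate with body tilt, which was not observed.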
Affiliation(s)
- Keishi Fujiwara
- Department of Physiology, Hokkaido University School of Medicine, Sapporo 060-8638, Japan.
44.
Freeman TCA, Champion RA, Warren PA. A Bayesian model of perceived head-centered velocity during smooth pursuit eye movement. Curr Biol 2010; 20:757-62. [PMID: 20399096 PMCID: PMC2861164 DOI: 10.1016/j.cub.2010.02.059] [Citation(s) in RCA: 64] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2009] [Revised: 02/18/2010] [Accepted: 02/18/2010] [Indexed: 11/30/2022]
Abstract
During smooth pursuit eye movement, observers often misperceive velocity. Pursued stimuli appear slower (Aubert-Fleischl phenomenon [1, 2]), stationary objects appear to move (Filehne illusion [3]), the perceived direction of moving objects is distorted (trajectory misperception [4]), and self-motion veers away from its true path (e.g., the slalom illusion [5]). Each illusion demonstrates that eye speed is underestimated with respect to image speed, a finding that has been taken as evidence of early sensory signals that differ in accuracy [4, 6-11]. Here we present an alternative Bayesian account, based on the idea that perceptual estimates are increasingly influenced by prior expectations as signals become more uncertain [12-15]. We show that the speeds of pursued stimuli are more difficult to discriminate than fixated stimuli. Observers are therefore less certain about motion signals encoding the speed of pursued stimuli, a finding we use to quantify the Aubert-Fleischl phenomenon based on the assumption that the prior for motion is centered on zero [16-20]. In doing so, we reveal an important property currently overlooked by Bayesian models of motion perception. Two Bayes estimates are needed at a relatively early stage in processing, one for pursued targets and one for image motion.
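In the simplest Gaussian case, this Bayesian account reduces to shrinkage of the measured speed toward the zero-centered prior, with noisier measurements shrunk more. A sketch under that assumption (the variances are illustrative, not fitted values from the paper):

```python
def bayes_speed_estimate(measured_speed, sigma_meas, sigma_prior):
    """Posterior mean for a Gaussian likelihood and a zero-mean Gaussian
    prior over speed: the noisier the measurement, the stronger the
    shrinkage toward zero."""
    w = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_meas ** 2)
    return w * measured_speed

# pursued stimuli are encoded less precisely than fixated ones,
# so the same physical speed yields a lower perceived speed during
# pursuit -- the Aubert-Fleischl phenomenon in miniature
fixated = bayes_speed_estimate(10.0, 1.0, 5.0)   # mild shrinkage
pursued = bayes_speed_estimate(10.0, 3.0, 5.0)   # stronger shrinkage
```

The ordering pursued < fixated < physical speed falls directly out of the weighting, which is why measuring discrimination thresholds (i.e. sigma) lets the model quantify the illusion.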
Affiliation(s)
- Tom C A Freeman
- School of Psychology, Cardiff University, Park Place, Cardiff CF10 3AT, UK.
45.
Brostek L, Ono S, Mustari MJ, Nuding U, Büttner U, Glasauer S. Neuronal responses in the cortical area MSTd during smooth pursuit and ocular following eye movements. BMC Neurosci 2009. [DOI: 10.1186/1471-2202-10-s1-p367] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022] Open
46.
Dessing JC, Oostwoud Wijdenes L, Peper CE, Beek PJ. Visuomotor transformation for interception: catching while fixating. Exp Brain Res 2009; 196:511-27. [PMID: 19543722 PMCID: PMC2704620 DOI: 10.1007/s00221-009-1882-6] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2008] [Accepted: 05/21/2009] [Indexed: 11/21/2022]
Abstract
Catching a ball involves a dynamic transformation of visual information about ball motion into motor commands for moving the hand to the right place at the right time. We previously formulated a neural model for this transformation to account for the consistent leftward movement biases observed in our catching experiments. According to the model, these biases arise within the representation of target motion as well as within the transformation from a gaze-centered to a body-centered movement command. Here, we examine the validity of the latter aspect of our model in a catching task involving gaze fixation. Gaze fixation should systematically influence biases in catching movements, because in the model movement commands are only generated in the direction perpendicular to the gaze direction. Twelve participants caught balls while gazing at a fixation point positioned either straight ahead or 14° to the right. Four participants were excluded because they could not adequately maintain fixation. We again observed a consistent leftward movement bias, but the catching movements were unaffected by fixation direction. This result refutes our proposal that the leftward bias partly arises within the visuomotor transformation, and suggests instead that the bias predominantly arises within the early representation of target motion, specifically through an imbalance in the represented radial and azimuthal target motion.
Affiliation(s)
- Joost C Dessing
- Research Institute MOVE, Faculty of Human Movement Sciences, VU University Amsterdam, Van der Boechorststraat 9, 1081 BT, Amsterdam, The Netherlands.
47.
Schütz AC, Braun DI, Gegenfurtner KR. Chromatic Contrast Sensitivity During Optokinetic Nystagmus, Visually Enhanced Vestibulo-ocular Reflex, and Smooth Pursuit Eye Movements. J Neurophysiol 2009; 101:2317-27. [DOI: 10.1152/jn.91248.2008] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Recently we showed that sensitivity for chromatic- and high-spatial frequency luminance stimuli is enhanced during smooth-pursuit eye movements (SPEMs). Here we investigated whether this enhancement is a general property of slow eye movements. Besides SPEM there are two other classes of eye movements that operate in a similar range of eye velocities: the optokinetic nystagmus (OKN) is a reflexive pattern of alternating fast and slow eye movements elicited by wide-field visual motion and the vestibulo-ocular reflex (VOR) stabilizes the gaze during head movements. In a natural environment all three classes of eye movements act synergistically to allow clear central vision during self- and object motion. To test whether the same improvement of chromatic sensitivity occurs during all of these eye movements, we measured human detection performance of chromatic and luminance line stimuli during OKN and contrast sensitivity during VOR and SPEM at comparable velocities. For comparison, performance in the same tasks was tested during fixation. During the slow phase of OKN we found a similar enhancement of chromatic detection rate like that during SPEM, whereas no enhancement was observable during VOR. This result indicates similarities between slow-phase OKN and SPEM, which are distinct from VOR.
48.
Lencer R, Trillenberg P. Neurophysiology and neuroanatomy of smooth pursuit in humans. Brain Cogn 2008; 68:219-28. [PMID: 18835076 DOI: 10.1016/j.bandc.2008.08.013] [Citation(s) in RCA: 95] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/26/2008] [Indexed: 11/17/2022]
Affiliation(s)
- Rebekka Lencer
- Klinik für Psychiatrie und Psychotherapie, Universität zu Lübeck, Ratzeburger Allee 160, 23538 Lübeck, Germany.
49.
Ilg UJ, Thier P. The neural basis of smooth pursuit eye movements in the rhesus monkey brain. Brain Cogn 2008; 68:229-40. [DOI: 10.1016/j.bandc.2008.08.014] [Citation(s) in RCA: 43] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/26/2008] [Indexed: 12/28/2022]
50.
Improved visual sensitivity during smooth pursuit eye movements. Nat Neurosci 2008; 11:1211-6. [PMID: 18806785 DOI: 10.1038/nn.2194] [Citation(s) in RCA: 56] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2008] [Accepted: 08/05/2008] [Indexed: 11/09/2022]