1
Causal contribution of optic flow signal in Macaque extrastriate visual cortex for roll perception. Nat Commun 2022; 13:5479. PMID: 36123363; PMCID: PMC9485245; DOI: 10.1038/s41467-022-33245-5
Abstract
Optic flow is a powerful cue for inferring self-motion status, which is critical for postural control, spatial orientation, locomotion and navigation. In primates, neurons in extrastriate visual cortex (MSTd) are predominantly modulated by high-order optic flow patterns (e.g., spiral), yet a functional link to direct perception has been lacking. Here, we applied electrical microstimulation to selectively manipulate populations of MSTd neurons while macaques discriminated the direction of rotation around the line of sight (roll) or the direction of linear translation (heading), two tasks that are orthogonal in a 3D spiral coordinate system, using a four-alternative forced-choice paradigm. Microstimulation frequently biased the animals' roll perception toward the labeled lines coded by the artificially stimulated neurons, in contexts with either spiral or pure-rotation stimuli. Choice frequency was also altered between roll and translation flow patterns. Our results provide direct causal evidence that roll signals in MSTd, although often mixed with translation signals, can be extracted by downstream areas for the perception of rotation relative to the gravitational vertical.
2
Elshout JA, Bergsma DP, van den Berg AV, Haak KV. Functional MRI of visual cortex predicts training-induced recovery in stroke patients with homonymous visual field defects. Neuroimage Clin 2021; 31:102703. PMID: 34062384; PMCID: PMC8173295; DOI: 10.1016/j.nicl.2021.102703
Abstract
- Damage to the visual brain typically leads to vision loss.
- Vision loss may be partially recovered with visual restitution training (VRT).
- Cortical responses to visual stimulation do not always lead to visual awareness.
- A mismatch between Humphrey and neural perimetry predicts training outcome.
- This finding has important implications for better rehabilitation strategies.
Post-chiasmatic damage to the visual system leads to homonymous visual field defects (HVDs), which can severely interfere with daily life activities. Visual Restitution Training (VRT) can recover parts of the affected visual field in patients with chronic HVDs, but training outcome is variable. An untested hypothesis suggests that training potential may be largest in regions with 'neural reserve', where cortical responses to visual stimulation do not lead to visual awareness as assessed by Humphrey perimetry (a standard behavioural visual field test). Here, we tested this hypothesis in a sample of twenty-seven hemianopic stroke patients who participated in an assiduous 80-hour VRT program. For each patient, we collected Humphrey perimetry and wide-field fMRI-based retinotopic mapping data prior to training. In addition, we used Goal Attainment Scaling to assess whether personal activities in daily living improved. After training, we assessed with a second Humphrey perimetry measurement whether the visual field had improved and evaluated which personal goals were attained. Confirming the hypothesis, we found significantly larger improvements of visual sensitivity at field locations with neural reserve. These visual field improvements implicated regions in both primary visual cortex and higher-order visual areas. In addition, improvement in daily life activities correlated with the extent of visual field enlargement. Our findings are an important step toward understanding the mechanisms of visual restitution as well as predicting training efficacy in stroke patients with chronic hemianopia.
Affiliation(s)
- J A Elshout
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Centre, Nijmegen, The Netherlands
- D P Bergsma
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Centre, Nijmegen, The Netherlands
- A V van den Berg
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Centre, Nijmegen, The Netherlands
- K V Haak
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Centre, Nijmegen, The Netherlands.
3
Burlingham CS, Heeger DJ. Heading perception depends on time-varying evolution of optic flow. Proc Natl Acad Sci U S A 2020; 117:33161-33169. PMID: 33328275; PMCID: PMC7776640; DOI: 10.1073/pnas.2022984117
Abstract
There is considerable support for the hypothesis that perception of heading in the presence of rotation is mediated by instantaneous optic flow. This hypothesis, however, has never been tested. We introduce a method, termed "nonvarying phase motion," for generating a stimulus that conveys a single instantaneous optic flow field, even though the stimulus is presented for an extended period of time. In this experiment, observers viewed stimulus videos and performed a forced-choice heading discrimination task. For nonvarying phase motion, observers made large errors in heading judgments. This suggests that instantaneous optic flow is insufficient for heading perception in the presence of rotation. These errors were mostly eliminated when the velocity of phase motion was varied over time to convey the evolving sequence of optic flow fields corresponding to a particular heading. This demonstrates that heading perception in the presence of rotation relies on the time-varying evolution of optic flow. We hypothesize that the visual system accurately computes heading, despite rotation, based on optic acceleration, the temporal derivative of optic flow.
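The ambiguity of instantaneous optic flow described above can be made concrete with the standard pinhole-camera flow equations (the Longuet-Higgins and Prazdny formulation, not code from the paper). In the sketch below, which is only an illustration under those textbook assumptions, a pure forward translation puts the singularity of the flow field at the true heading, whereas adding a small eye rotation shifts that singularity away from it, so a single instantaneous field misidentifies heading:

```python
import numpy as np

def flow_field(points, depths, T, omega):
    """Instantaneous optic flow under observer motion (pinhole model,
    focal length 1; Longuet-Higgins & Prazdny 1980 equations).

    points: (N, 2) image coordinates (x, y); depths: (N,) scene depth Z;
    T: (3,) translation velocity; omega: (3,) rotation velocity (rad/s).
    Returns (N, 2) image velocities (u, v)."""
    x, y = points[:, 0], points[:, 1]
    Tx, Ty, Tz = T
    wx, wy, wz = omega
    u = (-Tx + x * Tz) / depths + x * y * wx - (1 + x**2) * wy + y * wz
    v = (-Ty + y * Tz) / depths + (1 + y**2) * wx - x * y * wy - x * wz
    return np.stack([u, v], axis=1)

# A random 3D cloud of dots, as in typical heading stimuli.
rng = np.random.default_rng(0)
pts = rng.uniform(-0.5, 0.5, size=(2000, 2))
Z = rng.uniform(2.0, 10.0, size=2000)

T = np.array([0.0, 0.0, 1.0])     # forward translation; true heading at (0, 0)
rot = np.array([0.0, 0.02, 0.0])  # small yaw rotation (e.g., a pursuit eye movement)

def pseudo_foe(flow):
    """Image location where the instantaneous flow is slowest (pseudo-FOE)."""
    return pts[np.argmin(np.linalg.norm(flow, axis=1))]

foe_clean = pseudo_foe(flow_field(pts, Z, T, np.zeros(3)))  # near (0, 0)
foe_rot = pseudo_foe(flow_field(pts, Z, T, rot))            # displaced from heading
print(foe_clean, foe_rot)
```

With rotation present, recovering the true heading requires information beyond this single field, such as how the flow evolves over time (optic acceleration), which is the quantity the abstract hypothesizes the visual system uses.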
Affiliation(s)
- David J Heeger
- Department of Psychology, New York University, New York, NY 10003;
- Center for Neural Science, New York University, New York, NY 10003
4
The Effects of Depth Cues and Vestibular Translation Signals on the Rotation Tolerance of Heading Tuning in Macaque Area MSTd. eNeuro 2020; 7:ENEURO.0259-20.2020. PMID: 33127626; PMCID: PMC7688306; DOI: 10.1523/eneuro.0259-20.2020
Abstract
When the eyes rotate during translational self-motion, the focus of expansion (FOE) in optic flow no longer indicates heading, yet heading judgements are largely unbiased. Much emphasis has been placed on the role of extraretinal signals in compensating for the visual consequences of eye rotation. However, recent studies also support a purely visual mechanism of rotation compensation in heading-selective neurons. Computational theories support a visual compensatory strategy but require different visual depth cues. We examined the rotation tolerance of heading tuning in macaque area MSTd using two different virtual environments, a frontoparallel (2D) wall and a 3D cloud of random dots. Both environments contained rotational optic flow cues (i.e., dynamic perspective), but only the 3D cloud stimulus contained local motion parallax cues, which are required by some models. The 3D cloud environment did not enhance the rotation tolerance of heading tuning for individual MSTd neurons, nor the accuracy of heading estimates decoded from population activity, suggesting a key role for dynamic perspective cues. We also added vestibular translation signals to optic flow to test whether rotation tolerance is enhanced by non-visual cues to heading. We found no benefit of vestibular signals overall, but a modest effect for some neurons with significant vestibular heading tuning. We also found that neurons with more rotation-tolerant heading tuning were typically less selective to pure visual rotation cues. Together, our findings help to clarify the types of information that are used to construct heading representations that are tolerant to eye rotations.
5
When flow is not enough: evidence from a lane changing task. Psychol Res 2018; 84:834-849. PMID: 30088078; DOI: 10.1007/s00426-018-1070-z
Abstract
Humans are able to estimate their heading on the basis of optic flow information and it has been argued that we use flow in this way to guide navigation. Consistent with this idea, several studies have reported good navigation performance in flow fields. However, one criticism of these studies is that they have generally focused on the task of walking or steering towards a target, offering an additional, salient directional cue. Hence, it remains a matter of debate as to whether humans are truly able to control steering in the presence of optic flow alone. In this study, we report a set of maneuvers carried out in flow fields in the absence of a physical target. To do this, we studied the everyday task of lane changing, a commonplace multiphase steering maneuver which can be conceptualized without the need for a target. What is more (and here is the crucial quirk), previous literature has found that in the absence of visual feedback, drivers show a systematic, asymmetric steering response, resulting in a systematic final heading error. If optic flow is sufficient for controlling navigation through our environment, we would expect this asymmetry to disappear whenever optic flow is provided. However, our results show that this asymmetry persisted, even in the presence of a flow field, implying that drivers are unable to use flow to guide normal steering responses in this task.
6
Hanna M, Fung J, Lamontagne A. Multisensory control of a straight locomotor trajectory. J Vestib Res 2018; 27:17-25. PMID: 28387689; DOI: 10.3233/ves-170603
Abstract
Locomotor steering is contingent upon orienting oneself spatially in the environment. When the head is turned while walking, the optic flow projected onto the retina is a complex pattern comprising a translational and a rotational component. We created a unique paradigm to simulate different optic flows in a virtual environment. We hypothesized that non-visual (vestibular and somatosensory) cues are required for proper control of a straight trajectory while walking. This study included nine healthy young subjects walking in a large physical space (40 × 25 m²) while the virtual environment was viewed in a helmet-mounted display. They were instructed to walk straight in the physical world while being exposed to three conditions: (1) self-initiated active head turns (AHT: 40° right, left, or none); (2) visually simulated head turns (SHT); and (3) visually simulated head turns with no target element (SHT_NT). Conditions 1 and 2 involved an eye-level target which subjects were instructed to fixate, whereas condition 3 was similar to condition 2 but with no target. Identical retinal flow patterns were present in the AHT and SHT conditions, whereas non-visual cues differed in that a head rotation was sensed only in AHT, not in SHT. Body motions were captured by a 12-camera Vicon system. Horizontal orientations of the head and body segments, as well as the trajectory of the body's centre of mass, were analyzed. SHT and SHT_NT yielded similar results. Heading and body segment orientations changed in the direction opposite to the head turns in the SHT conditions. Heading remained unchanged across head turn directions in AHT. These results suggest that non-visual information is used in the control of heading while being exposed to changing rotational optic flows. The small magnitude of the changes in the SHT conditions suggests that the CNS can re-weight relevant sources of information to minimize heading errors in the presence of sensory conflicts.
Affiliation(s)
- Maxim Hanna
- School of Physical and Occupational Therapy, McGill University, Montreal, QC, Canada; Feil and Oberfeld/CRIR Research Centre, Jewish Rehabilitation Hospital, CISSS-Laval, QC, Canada
- Joyce Fung
- School of Physical and Occupational Therapy, McGill University, Montreal, QC, Canada; Feil and Oberfeld/CRIR Research Centre, Jewish Rehabilitation Hospital, CISSS-Laval, QC, Canada
- Anouk Lamontagne
- School of Physical and Occupational Therapy, McGill University, Montreal, QC, Canada; Feil and Oberfeld/CRIR Research Centre, Jewish Rehabilitation Hospital, CISSS-Laval, QC, Canada
7
Bertin RJV, Israël I. Optic-Flow-Based Perception of Two-Dimensional Trajectories and the Effects of a Single Landmark. Perception 2016; 34:453-75. PMID: 15943053; DOI: 10.1068/p5292
Abstract
Human observers can detect their heading direction on a short time scale on the basis of optic flow. We investigated the visual perception and reconstruction of visually travelled two-dimensional (2-D) trajectories from optic flow, with and without a landmark. As in our previous study, seated, stationary subjects wore a head-mounted display in which optic-flow stimuli were shown that simulated various manoeuvres: linear or curvilinear 2-D trajectories over a horizontal plane, with observer orientation either fixed in space, fixed relative to the path, or changing relative to both. Afterwards, they reproduced the perceived manoeuvre with a model vehicle, whose position and orientation were recorded. Previous results had suggested that our stimuli can induce illusory percepts when translation and yaw are unyoked. We tested that hypothesis and investigated how perception of the travelled trajectory depends on the amount of yaw and the average path-relative orientation. Using a structured visual environment instead of only dots, or making available additional extra-retinal information, can improve perception of ambiguous optic-flow stimuli. We investigated the amount of necessary structuring, specifically the effect of additional visual and/or extra-retinal information provided by a single landmark in conditions where illusory percepts occur. While yaw was perceived correctly, the travelled path was less accurately perceived, but still adequately when the simulated orientation was fixed in space or relative to the trajectory. When the amount of yaw was not equal to the rotation of the path, or in the opposite direction, subjects still perceived orientation as fixed relative to the trajectory. This caused trajectory misperception because yaw was wrongly attributed to a rotation of the path: path perception is governed by the amount of yaw in the manoeuvre. Trajectory misperception also occurs when orientation is fixed relative to a curvilinear path, but not tangential to it. A single landmark could improve perception. Our results confirm and extend previous findings that, for unambiguous perception of ego-motion from optic flow, additional information is required in many cases, which can take the form of fairly minimal, visual information.
Affiliation(s)
- René J V Bertin
- College de France/LPPA, 11 place Marcelin Berthelot, 75005 Paris, France.
8
Arnoldussen DM, Goossens J, van den Berg AV. Dissociation of retinal and headcentric disparity signals in dorsal human cortex. Front Syst Neurosci 2015; 9:16. PMID: 25759642; PMCID: PMC4338660; DOI: 10.3389/fnsys.2015.00016
Abstract
Recent fMRI studies have shown fusion of visual motion and disparity signals for shape perception (Ban et al., 2012), and unmasking camouflaged surfaces (Rokers et al., 2009), but no such interaction is known for typical dorsal motion pathway tasks, like grasping and navigation. Here, we investigate human speed perception of forward motion and its representation in the human motion network. We observe strong interaction in medial (V3ab, V6) and lateral motion areas (MT+), which differ significantly. Whereas the retinal disparity dominates the binocular contribution to the BOLD activity in the anterior part of area MT+, headcentric disparity modulation of the BOLD response dominates in area V3ab and V6. This suggests that medial motion areas not only represent rotational speed of the head (Arnoldussen et al., 2011), but also translational speed of the head relative to the scene. Interestingly, a strong response to vergence eye movements was found in area V1, which showed a dependency on visual direction, just like vertical-size disparity. This is the first report of a vertical-size disparity correlate in human striate cortex.
Affiliation(s)
- David M Arnoldussen
- Section Biophysics, Department of Cognitive Neuroscience, Radboud University Nijmegen Medical Centre, Donders Institute for Brain, Cognition, and Behavior, Nijmegen, Netherlands; School of Psychology, University of Nottingham, Nottingham, UK
- Jeroen Goossens
- Section Biophysics, Department of Cognitive Neuroscience, Radboud University Nijmegen Medical Centre, Donders Institute for Brain, Cognition, and Behavior, Nijmegen, Netherlands
- Albert V van den Berg
- Section Biophysics, Department of Cognitive Neuroscience, Radboud University Nijmegen Medical Centre, Donders Institute for Brain, Cognition, and Behavior, Nijmegen, Netherlands
9
A unified model of heading and path perception in primate MSTd. PLoS Comput Biol 2014; 10:e1003476. PMID: 24586130; PMCID: PMC3930491; DOI: 10.1371/journal.pcbi.1003476
Abstract
Self-motion, steering, and obstacle avoidance during navigation in the real world require humans to travel along curved paths. Many perceptual models have been proposed that focus on heading, which specifies the direction of travel along straight paths, but not on path curvature, which humans accurately perceive and is critical to everyday locomotion. In primates, including humans, dorsal medial superior temporal area (MSTd) has been implicated in heading perception. However, the majority of MSTd neurons respond optimally to spiral patterns, rather than to the radial expansion patterns associated with heading. No existing theory of curved path perception explains the neural mechanisms by which humans accurately assess path, and no functional role for spiral-tuned cells has yet been proposed. Here we present a computational model that demonstrates how the continuum of observed cells (radial to circular) in MSTd can simultaneously code curvature and heading across the neural population. Curvature is encoded through the spirality of the most active cell, and heading is encoded through the visuotopic location of the center of the most active cell's receptive field. Model curvature and heading errors fit those made by humans. Our model challenges the view that the function of MSTd is heading estimation; based on our analysis, we claim that it is primarily concerned with trajectory estimation and the simultaneous representation of both curvature and heading. In our model, temporal dynamics afford time-history in the neural representation of optic flow, which may modulate its structure. This has far-reaching implications for the interpretation of studies that assume that optic flow is, and should be, represented as an instantaneous vector field. Our results suggest that spiral motion patterns that emerge in spatio-temporal optic flow are essential for guiding self-motion along complex trajectories, and that cells in MSTd are specifically tuned to extract complex trajectory estimation from flow.
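The radial-to-circular continuum of flow templates that this abstract describes can be parameterized by a single spirality angle. The sketch below is a minimal illustration of that standard parameterization (not the paper's model): each flow vector is the radial expansion vector rotated by the spirality angle, so 0 gives pure expansion, pi/2 pure rotation, and intermediate angles spiral patterns.

```python
import numpy as np

def spiral_flow(points, center, spirality):
    """Template flow vectors on the radial-to-circular continuum.

    spirality = 0 gives pure expansion (radial flow), pi/2 gives pure
    rotation (circular flow); intermediate angles give spiral patterns.
    Each vector is the radial offset rotated by `spirality`."""
    r = points - center                     # (N, 2) offsets from flow center
    c, s = np.cos(spirality), np.sin(spirality)
    R = np.array([[c, -s], [s, c]])         # 2D rotation matrix
    return r @ R.T

# A small grid of sample locations around the flow center.
pts = np.stack(np.meshgrid(np.linspace(-1, 1, 5),
                           np.linspace(-1, 1, 5)), -1).reshape(-1, 2)
center = np.zeros(2)

expansion = spiral_flow(pts, center, 0.0)        # FOE-style radial outflow
rotation = spiral_flow(pts, center, np.pi / 2)   # pure circular flow
spiral = spiral_flow(pts, center, np.pi / 4)     # intermediate spiral

# Pure rotation is everywhere orthogonal to the radial direction.
print(np.abs((rotation * (pts - center)).sum(axis=1)).max())
```

In the model's terms, the best-matching spirality across such templates would carry path curvature, while the template's center location would carry heading.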
10
Foulkes AJ, Rushton SK, Warren PA. Heading recovery from optic flow: comparing performance of humans and computational models. Front Behav Neurosci 2013; 7:53. PMID: 23801946; PMCID: PMC3689323; DOI: 10.3389/fnbeh.2013.00053
Abstract
Human observers can perceive their direction of heading with a precision of about a degree. Several computational models of the processes underpinning the perception of heading have been proposed. In the present study we set out to assess which of four candidate models best captured human performance; the four models we selected reflected key differences in terms of approach and methods to modelling optic flow processing to recover movement parameters. We first generated a performance profile for human observers by measuring how performance changed as we systematically manipulated both the quantity (number of dots in the stimulus per frame) and quality (amount of 2D directional noise) of the flow field information. We then generated comparable performance profiles for the four candidate models. Models varied markedly in terms of both their performance and similarity to human data. To formally assess the match between the models and human performance, we regressed the output of each of the four models against human performance data. We were able to rule out two models that produced very different performance profiles to human observers. The remaining two shared some similarities with human performance profiles in terms of the magnitude and pattern of thresholds. However, none of the models tested could capture all aspects of the human data.
Affiliation(s)
- Andrew J. Foulkes
- School of Psychological Sciences, The University of Manchester, Manchester, UK
- Paul A. Warren
- School of Psychological Sciences, The University of Manchester, Manchester, UK

11
Duijnhouwer J, Noest AJ, Lankheet MJM, van den Berg AV, van Wezel RJA. Speed and direction response profiles of neurons in macaque MT and MST show modest constraint line tuning. Front Behav Neurosci 2013; 7:22. PMID: 23576963; PMCID: PMC3616296; DOI: 10.3389/fnbeh.2013.00022
Abstract
Several models of heading detection during smooth pursuit rely on the assumption of local constraint line tuning to exist in large scale motion detection templates. A motion detector that exhibits pure constraint line tuning responds maximally to any 2D-velocity in the set of vectors that can be decomposed into the central, or classic, preferred velocity (the shortest vector that still yields the maximum response) and any vector orthogonal to that. To test this assumption, we measured the firing rates of isolated middle temporal (MT) and medial superior temporal (MST) neurons to random dot stimuli moving in a range of directions and speeds. We found that as a function of 2D velocity, the pooled responses were best fit with a 2D Gaussian profile with a factor of elongation, orthogonal to the central preferred velocity, of roughly 1.5 for MST and 1.7 for MT. This means that MT and MST cells are more sharply tuned for speed than they are for direction; and that they indeed show some level of constraint line tuning. However, we argue that the observed elongation is insufficient to achieve behavioral heading discrimination accuracy on the order of 1-2 degrees as reported before.
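The elongated 2D Gaussian tuning profile described in this abstract can be sketched directly. The code below is an illustrative parameterization (not the authors' fitted model): a Gaussian over velocity space whose width orthogonal to the central preferred velocity is scaled by an elongation factor, so elongation = 1 is isotropic tuning and very large elongation approaches pure constraint-line tuning.

```python
import numpy as np

def velocity_tuning(v, v_pref, sigma, elongation):
    """Response of a model MT/MST cell to a 2D velocity (vx, vy).

    A 2D Gaussian over velocity space, elongated by `elongation`
    orthogonal to the central preferred velocity v_pref. elongation = 1
    gives isotropic tuning; as elongation grows, the cell responds nearly
    equally to all velocities on the constraint line through v_pref."""
    v_pref = np.asarray(v_pref, float)
    u_par = v_pref / np.linalg.norm(v_pref)    # unit vector along v_pref
    u_orth = np.array([-u_par[1], u_par[0]])   # orthogonal unit vector
    d = np.asarray(v, float) - v_pref          # offset from preferred velocity
    d_par = d @ u_par
    d_orth = d @ u_orth
    return np.exp(-0.5 * (d_par**2 / sigma**2
                          + d_orth**2 / (elongation * sigma)**2))

v_pref = np.array([8.0, 0.0])   # preferred velocity: 8 deg/s rightward
sigma, k = 2.0, 1.7             # k ~ the elongation factor reported for MT

r_center = velocity_tuning(v_pref, v_pref, sigma, k)
r_orth = velocity_tuning(v_pref + np.array([0.0, 3.0]), v_pref, sigma, k)
r_par = velocity_tuning(v_pref + np.array([3.0, 0.0]), v_pref, sigma, k)
# The same-size displacement costs less orthogonal to v_pref than along it,
# i.e., sharper tuning for speed than for direction.
print(r_center, r_orth, r_par)
```

With k around 1.5-1.7 the orthogonal penalty is reduced but far from zero, which is the paper's point: the tuning shows some constraint-line character, but not enough for template models that assume it outright.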
Affiliation(s)
- Jacob Duijnhouwer
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, USA

12
Zaal PMT, Nieuwenhuizen FM, van Paassen MM, Mulder M. Modeling Human Control of Self-Motion Direction With Optic Flow and Vestibular Motion. IEEE Trans Cybern 2013; 43:544-556. PMID: 22987529; DOI: 10.1109/tsmcb.2012.2212188
Abstract
In this paper, we investigate the effects of visual and motion stimuli on the manual control of one's direction of self-motion. In a flight simulator, subjects conducted an active target-following disturbance-rejection task, using a compensatory display. Simulating a vehicular control task, the direction of vehicular motion was shown on the outside visual display in two ways: an explicit presentation using a symbol and an implicit presentation, namely, through the focus of radial outflow that emerges from optic flow. In addition, the effects of the relative strength of congruent vestibular motion cues were investigated. The dynamic properties of human visual and vestibular motion perception paths were modeled using a control-theoretical approach. As expected, improved tracking performance was found for the configurations that explicitly showed the direction of self-motion. The human visual time delay increased by approximately 150 ms for the optic flow conditions, relative to explicit presentations. Vestibular motion, providing higher order information on the direction of self-motion, allowed subjects to partially compensate for this visual perception delay, improving performance. Parameter estimates of the operator control model show that, with vestibular motion, the visual feedback becomes stronger, indicating that operators are more confident to act on optic flow information when congruent vestibular motion cues are present.
13
Causal links between dorsal medial superior temporal area neurons and multisensory heading perception. J Neurosci 2012; 32:2299-313. PMID: 22396405; DOI: 10.1523/jneurosci.5154-11.2012
Abstract
The dorsal medial superior temporal area (MSTd) in the extrastriate visual cortex is thought to play an important role in heading perception because neurons in this area are tuned to both optic flow and vestibular signals. MSTd neurons also show significant correlations with perceptual judgments during a fine heading direction discrimination task. To test for a causal link with heading perception, we used microstimulation and reversible inactivation techniques to artificially perturb MSTd activity while monitoring behavioral performance. Electrical microstimulation significantly biased monkeys' heading percepts based on optic flow, but did not significantly impact vestibular heading judgments. The latter result may be due to the fact that vestibular heading preferences in MSTd are more weakly clustered than visual preferences and multiunit tuning for vestibular stimuli is weak. Reversible chemical inactivation, however, increased behavioral thresholds when heading judgments were based on either optic flow or vestibular cues, although the magnitude of the effects was substantially stronger for optic flow. Behavioral deficits in a combined visual/vestibular stimulus condition were intermediate between the single-cue effects. Despite deficits in discrimination thresholds, animals were able to combine visual and vestibular cues near optimally, even after large bilateral muscimol injections into MSTd. Simulations show that the overall pattern of results following inactivation is consistent with a mixture of contributions from MSTd and other areas with vestibular-dominant tuning for heading. Our results support a causal link between MSTd neurons and multisensory heading perception but suggest that other multisensory brain areas also contribute.
14
Pitch body orientation influences the perception of self-motion direction induced by optic flow. Neurosci Lett 2010; 482:193-7. PMID: 20647031; DOI: 10.1016/j.neulet.2010.07.028
Abstract
We studied the effect of static pitch body tilts on the perception of self-motion direction induced by a visual stimulus. Subjects were seated in front of a screen on which was projected a 3D cluster of moving dots visually simulating a forward motion of the observer with upward or downward directional biases (relative to a true earth-horizontal direction). The subjects were tilted at various angles relative to gravity and were asked to estimate the direction of the perceived motion (nose-up, as during take-off, or nose-down, as during landing). The data showed that body orientation proportionally affected the amount of error in the reported perceived direction (by 40% of body tilt magnitude in a range of +/-20 degrees), and these errors were systematically in the direction of body tilt. As a consequence, the same visual stimulus was interpreted differently depending on body orientation. While the subjects were required to perform the task in a geocentric reference frame (i.e., relative to a gravity-related direction), they were clearly influenced by egocentric references. These results suggest that the perception of self-motion is not elaborated within an exclusive reference frame (either egocentric or geocentric) but rather results from the combined influence of both.
15
Bremmer F, Kubischik M, Pekel M, Hoffmann KP, Lappe M. Visual selectivity for heading in monkey area MST. Exp Brain Res 2010; 200:51-60. PMID: 19727690; DOI: 10.1007/s00221-009-1990-3
Abstract
The control of self-motion is supported by visual, vestibular, and proprioceptive signals. Recent research has shown how these signals interact in the monkey medio-superior temporal area (area MST) to enhance and disambiguate the perception of heading during self-motion. Area MST is a central stage for self-motion processing from optic flow, and integrates flow field information with vestibular self-motion and extraretinal eye movement information. Such multimodal cue integration is clearly important to solidify perception. However, to understand the information processing capabilities of the brain, one must also ask how much information can be deduced from a single cue alone. This is particularly pertinent for optic flow, where controversies over its usefulness for self-motion control have existed ever since Gibson proposed his direct approach to ecological perception. In our study, we therefore tested macaque MST neurons for their heading selectivity in highly complex flow fields based on purely visual mechanisms. We recorded responses of MST neurons to simple radial flow fields and to distorted flow fields that simulated a self-motion plus an eye movement. About half of the cells compensated for such distortion and kept the same heading selectivity in both cases. Our results strongly support the notion of an involvement of area MST in the computation of heading.
Affiliation(s)
- Frank Bremmer
- Allg. Zoologie und Neurobiologie, Ruhr Universität Bochum, 44780 Bochum, Germany.
|
16
|
Affiliation(s)
- Kenneth H. Britten
- Center for Neuroscience and Department of Neurobiology, Physiology, and Behavior, University of California, Davis, California 95616;
|
17
|
Sarre G, Berard J, Fung J, Lamontagne A. Steering behaviour can be modulated by different optic flows during walking. Neurosci Lett 2008; 436:96-101. [PMID: 18400392 DOI: 10.1016/j.neulet.2008.02.049] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2007] [Revised: 12/21/2007] [Accepted: 02/19/2008] [Indexed: 11/28/2022]
Abstract
Optic flow is a typical pattern of visual motion that can be used to control locomotion. While the ability to discriminate translational or rotational optic flows has been extensively studied, how these flows control steering during locomotion is not known. The goal of this study was to compare the steering behaviour of subjects exposed to rotational, translational, or combined (rotational added to translational) optic flows with a focus of expansion (FOE) located to the right, left, or straight ahead. Ten healthy young subjects were instructed to walk straight in a virtual room viewed through a helmet-mounted display while the location of the FOE was randomly offset. The horizontal trajectory of the body's centre of mass (CoM), as well as rotations of the head, trunk and foot, were recorded in coordinates of both the physical and virtual worlds. Results show that subjects experienced a mediolateral shift in CoM opposite to the FOE location, with larger corrections being observed at more eccentric FOE locations. Head and body segment reorientations were only observed for optic flows containing a rotational component. CoM trajectory corrections in the physical world were also of small magnitude, leading to deviation errors in the virtual world. Altogether, these results suggest a profound influence of vision, especially of the pattern of visual motion, on steering behaviour during locomotion.
Affiliation(s)
- Guillaume Sarre
- Jewish Rehabilitation Hospital Research Site of CRIR, School of Physical & Occupational Therapy, McGill University, Montréal, Quebec, Canada
|
18
|
von Grünau MW, Pilgrim K, Zhou R. Velocity discrimination thresholds for flowfield motions with moving observers. Vision Res 2007; 47:2453-64. [PMID: 17651779 DOI: 10.1016/j.visres.2007.06.008] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2007] [Revised: 06/04/2007] [Accepted: 06/15/2007] [Indexed: 10/23/2022]
Abstract
The visual flow field, produced by forward locomotion, contains useful information about many aspects of visually guided behavior. But locomotion itself also contributes possible distortions by adding head-bobbing motions. Here we examine whether vertical head bobbing affects velocity discrimination thresholds and how the system may compensate for the distortions. Vertical head and eye movements while fixating were recorded during standing, walking or running on a treadmill. Bobbing noise was found to be larger during locomotion. The same observers were equally good at discriminating velocity increases in large accelerating flow fields when standing, walking or running. Simulated head bobbing was compensated for when produced by pursuit eye movements, but not when it was part of the flow field. The results showed that these two contributions are additive and are dealt with independently before they are combined. Distortions produced by body/head oscillations may also be compensated for. Visual performance during running was at least as good as during walking, suggesting more efficient compensation mechanisms for running.
Affiliation(s)
- Michael W von Grünau
- Department of Psychology, Concordia University, 7141 Sherbrooke St. W., Montreal, Que., Canada.
|
19
|
Royden CS, Cahill JM, Conti DM. Factors affecting curved versus straight path heading perception. ACTA ACUST UNITED AC 2006; 68:184-93. [PMID: 16773892 DOI: 10.3758/bf03193668] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Displays commonly used for testing heading judgments in the presence of rotations are ambiguous to observers. They can be interpreted equally well as motion in a straight line while rotating the eyes or as motion on a curved path. This has led to conflicting results from studies that use these displays. In this study, we tested several factors that might influence which of these two interpretations observers see. These factors included the size of the field of view, the duration of the stimulus, textured scenes versus random-dot displays, and whether or not observers were given a description of their path. The only factor that had a significant effect on path perception was whether or not observers were given instructions describing their path of motion. Under all conditions without instructions, we found that observers responded in a way that was consistent with the perception of motion on a curved path.
Affiliation(s)
- Constance S Royden
- Department of Mathematics and Computer Sciences, College of the Holy Cross, Worcester, MA 01610, USA.
|
20
|
Goossens J, Dukelow SP, Menon RS, Vilis T, van den Berg AV. Representation of head-centric flow in the human motion complex. J Neurosci 2006; 26:5616-27. [PMID: 16723518 PMCID: PMC6675273 DOI: 10.1523/jneurosci.0730-06.2006] [Citation(s) in RCA: 39] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Recent neuroimaging studies have identified putative homologs of macaque middle temporal area (area MT) and medial superior temporal area (area MST) in humans. Little is known about the integration of visual and nonvisual signals in human motion areas compared with monkeys. Through extra-retinal signals, the brain can factor out the components of visual flow on the retina that are induced by eye-in-head and head-in-space rotations and achieve a representation of flow relative to the head (head-centric flow) or body (body-centric flow). Here, we used functional magnetic resonance imaging to test whether extra-retinal eye-movement signals modulate responses to visual flow in the human MT+ complex. We distinguished between MT and MST and tested whether subdivisions of these areas may transform the retinal flow into head-centric flow. We report that interactions between eye-movement signals and visual flow are not evenly distributed across MT+. Pursuit hardly influenced the response of MT to flow, whereas the responses in MST to the same retinal stimuli were stronger during pursuit than during fixation. We also identified two subregions in which the flow-related responses were boosted significantly by pursuit, one overlapping part of MST. In addition, we found evidence of a metric relation between rotational flow relative to the head and fMRI signals in a subregion of MST. The latter findings provide an important advance over published single-cell recordings in monkey MST. A visual representation of the rotation of the head in the world derived from head-centric flow may supplement semicircular canal signals and is appropriate for cross-calibrating vestibular and visual signals.
Affiliation(s)
- Jeroen Goossens
- Department of Biophysics, Radboud University Nijmegen Medical Centre, 6500 HB Nijmegen, The Netherlands.
|
21
|
Poljac E, Neggers B, van den Berg AV. Collision judgment of objects approaching the head. Exp Brain Res 2005; 171:35-46. [PMID: 16328256 DOI: 10.1007/s00221-005-0257-x] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2004] [Accepted: 09/28/2005] [Indexed: 10/25/2022]
Abstract
Recent investigations have indicated that human perception of the trajectory of objects approaching in the horizontal plane is precise but biased away from straight ahead. This is remarkable because it could mean that subjects perceive objects that approach on a collision course as missing the head. Approach within the horizontal plane through the eyes and the fixation point (the plane of regard) is special, as general motions will also have a component of motion perpendicular to the plane of regard. Thus, we investigated three-dimensional motion perception in the vicinity of the head, including vertical components. Subjects judged whether an object that moved in the mid-sagittal plane was going to hit below or above a well-known reference point on the face, such as the center of the chin or the forehead (perceptual task). Tactile and proprioceptive information about the reference point significantly improved precision. Precision did not change with distance of the approaching target or with fixation direction. Bias was virtually absent for these vertical motions. When subjects pointed with their index finger to the perceived location of impact on their face (visuo-motor task), they overestimated the horizontal eccentricity of the point of impact by 1.7 cm. Vertical bias, however, was again virtually absent. Interestingly, when trajectories intersected the plane of regard, higher precision was observed in the perceptual task regardless of the other conditions. In contrast, neither bias nor precision of the pointing task changed significantly when the trajectories intersected the plane of regard. When asked to point to the location where a trajectory intersected the plane of regard, subjects overestimated the depth component of this intersection location by about 3 cm. The absence of perceptual and pointing bias in the vertical direction, in contrast to the clear horizontal bias, suggests that different (combinations of) cues are used to judge these components of the trajectory of an approaching object. The results of our perceptual task suggest a role for somatosensory signals in the visual judgment of impending impact.
Affiliation(s)
- E Poljac
- Functional Neurobiology, Helmholtz Institute, Padualaan 8, 3584 Utrecht, The Netherlands
|
22
|
Poljac E, Lankheet MJM, van den Berg AV. Perceptual compensation for eye torsion. Vision Res 2005; 45:485-96. [PMID: 15610752 DOI: 10.1016/j.visres.2004.09.009] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2003] [Revised: 08/31/2004] [Indexed: 11/20/2022]
Abstract
To correctly perceive visual directions relative to the head, one needs to compensate for the eye's orientation in the head. In this study we focus on compensation for the eye's torsion regarding objects that contain the line of sight and objects that do not pass through the fixation point. Subjects judged the location of flashed probe points relative to their binocular plane of regard, the mid-sagittal or the transverse plane of the head, while fixating straight ahead, right upward, or right downward at 30 cm distance, to evoke eye torsion according to Listing's law. In addition, we investigated the effects of head-tilt and monocular versus binocular viewing. Flashed probe points were correctly localized in the plane of regard irrespective of eccentric viewing, head-tilt, and monocular or binocular vision in nearly all subjects and conditions. Thus, eye torsion that varied by +/-9 degrees across these different conditions was in general compensated for. However, the position of probes relative to the midsagittal or the transverse plane, both true head-fixed planes, was misjudged. We conclude that judgment of the orientation of the plane of regard, a plane that contains the line of sight, is veridical, indicating accurate compensation for actual eye torsion. However, when judgment has to be made of a head-fixed plane that is offset with respect to the line of sight, eye torsion that accompanies that eye orientation appears not to be taken into account correctly.
Affiliation(s)
- E Poljac
- Functional Neurobiology, Utrecht University, Helmholtz School Padualaan 8, 3584 CH Utrecht, The Netherlands.
|
23
|
Poljac E, van den Berg AV. Localization of the plane of regard in space. Exp Brain Res 2005; 163:457-67. [PMID: 15657697 DOI: 10.1007/s00221-004-2201-x] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2004] [Accepted: 11/06/2004] [Indexed: 11/28/2022]
Abstract
When we fixate an object in space, the rotation centers of the eyes, together with the object, define a plane of regard. People perceive the elevation of objects relative to this plane accurately, irrespective of eye or head orientation (Poljac et al. (2004) Vision Res, in press). Yet, to create a correct representation of objects in space, the orientation of the plane of regard in space is required. Subjects pointed along an eccentric vertical line on a touch screen to the location where their plane of regard intersected the touch screen positioned on their right. The distance of the vertical line to the subject's eyes varied from 10 to 40 cm. Subjects were sitting upright and fixating one of nine randomly presented directions, ranging from 20 degrees left and down to 20 degrees right and up relative to their straight ahead. The eccentricity of fixations relative to the pointing location varied by up to 40 degrees. Subjects underestimated the elevation of their plane of regard (on average by 3.69 cm, SD=1.44 cm), regardless of the fixation direction or pointing distance. However, when the targets were shown on a display mounted in a table, to provide support of the subject's hand throughout the trial, subjects pointed accurately (average error 0.3 cm, SD=0.8 cm). In addition, head tilt 20 degrees to the left or right did not cause any change in accuracy. The bias observed in the first task could be caused by maintained tonus in the arm muscles when the arm is raised, which might interfere with the transformation from visual to motor signals needed to perform the pointing movement. We conclude that the plane of regard is correctly localized in space. This may be a good starting point for representing objects in head-centric coordinates.
Affiliation(s)
- Ervin Poljac
- Functional Neurobiology, Padualaan 8, 3584 CH Utrecht, The Netherlands.
|
24
|
Hanada M. An algorithmic model of heading perception. BIOLOGICAL CYBERNETICS 2005; 92:8-20. [PMID: 15592681 DOI: 10.1007/s00422-004-0529-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/05/2003] [Accepted: 10/07/2004] [Indexed: 05/24/2023]
Abstract
On the basis of Hanada and Ejima's (2000) model, an algorithmic model was presented to explain psychophysical data of van den Berg and Beintema (2000) that are inconsistent with vector-subtractive compensation for the rotational flow. The earlier model was modified so as not to use vector-subtractive compensation for the rotational flow. The proposed model computes the center of flow first and then estimates self-rotation; finally, heading is recovered from the center of flow and the estimate of self-rotation. The model explains the data of van den Berg and Beintema (2000). A fusion model of rotation estimates from different sources (efferent signals, proprioceptive feedback, vestibular signals about eye and head rotation, and visual motion) was also presented.
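The first stage of such a model, locating the center of flow, can be illustrated with a small least-squares sketch. This is not Hanada's published algorithm, just the standard geometric idea: for a purely radial (translational) field, each flow vector at image point p is parallel to (p - c), so the component of (p - c) perpendicular to the vector must vanish, giving one linear constraint on c per sample:

```python
import numpy as np

def center_of_flow(points, vectors):
    """Least-squares estimate of the point all flow vectors radiate from.

    Illustrative sketch (assumed helper, not from the paper): each flow
    vector v at p satisfies n . (p - c) = 0, where n is perpendicular
    to v. Stacking these constraints gives a linear system for c.
    """
    # perpendicular of each flow vector: (vx, vy) -> (-vy, vx)
    n = np.stack([-vectors[:, 1], vectors[:, 0]], axis=1)
    b = np.sum(n * points, axis=1)          # n . p for each sample
    c, *_ = np.linalg.lstsq(n, b, rcond=None)
    return c

# purely radial flow expanding from a focus of expansion at (1.0, -0.5)
rng = np.random.default_rng(0)
pts = rng.uniform(-5.0, 5.0, size=(200, 2))
foe = np.array([1.0, -0.5])
vecs = pts - foe                             # expansion away from the FOE
assert np.allclose(center_of_flow(pts, vecs), foe)
```

With rotation added to the flow, this estimate is biased, which is exactly the stage at which the self-rotation estimate in the model above would take over.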
Affiliation(s)
- Mitsuhiko Hanada
- Department of Cognitive and Information Sciences, Faculty of Letters, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba, 263-8522, Japan.
|
25
|
Poljac E, van den Berg AV. Representation of heading direction in far and near head space. Exp Brain Res 2003; 151:501-13. [PMID: 12830343 DOI: 10.1007/s00221-003-1498-1] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2002] [Accepted: 04/17/2003] [Indexed: 11/24/2022]
Abstract
Manipulation of objects around the head requires an accurate and stable internal representation of their locations in space, also during movements such as those of the eye or head. For far space, the representation of visual stimuli for goal-directed arm movements relies on retinal updating if eye movements are involved. Recent neurophysiological studies led us to infer that a transformation of visual space from a retinocentric to a head-centric representation may be involved for visual objects in close proximity to the head. The first aim of this study was to investigate whether there is indeed such a representation for remembered visual targets of goal-directed arm movements. Participants had to point toward an initially foveated central target after an intervening saccade. Participants made errors that reflect a bias in the visuomotor transformation that depends on eye displacement rather than on any head-centred variable. The second issue addressed was whether pointing toward the centre of a wide-field expanding motion pattern involves a retinal updating mechanism or a transformation to a head-centric map, and whether that process is distance dependent. The same pattern of pointing errors in relation to gaze displacement was found, independent of depth. We conclude that for goal-directed arm movements, the representation of remembered visual targets is updated in a retinal frame, a mechanism that is actively used regardless of target distance, stimulus characteristics or the requirements of the task.
Affiliation(s)
- Ervin Poljac
- Neuro-Ethology Group, Padualaan 8, 3584 CH, Utrecht, The Netherlands.
|
26
|
Wilkie R, Wann J. Controlling steering and judging heading: retinal flow, visual direction, and extraretinal information. J Exp Psychol Hum Percept Perform 2003; 29:363-78. [PMID: 12760621 DOI: 10.1037/0096-1523.29.2.363] [Citation(s) in RCA: 64] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The contribution of retinal flow (RF), extraretinal (ER), and egocentric visual direction (VD) information in locomotor control was explored. First, the recovery of heading from RF was examined when ER information was manipulated; results confirmed that ER signals affect heading judgments. Then the task was translated to steering curved paths, and the availability and veracity of VD were manipulated with either degraded or systematically biased RF. Large steering errors resulted from selective manipulation of RF and VD, providing strong evidence for the combination of RF, ER, and VD. The relative weighting applied to RF and VD was estimated. A point-attractor model is proposed that combines redundant sources of information for robust locomotor control with flexible trajectory planning through active gaze.
Affiliation(s)
- Richard Wilkie
- Department of Psychology, University of Reading, Whiteknights, United Kingdom.
|
27
|
van den Berg AV, Beintema JA, Frens MA. Heading and path percepts from visual flow and eye pursuit signals. Vision Res 2002; 41:3467-86. [PMID: 11718788 DOI: 10.1016/s0042-6989(01)00023-2] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
Abstract
The percept of self-motion through the environment is supported by visual motion signals and eye-movement signals. We studied the interaction between these signals by decoupling the eye movement from the pattern of retinal motion during brief simulated ego-movement on straight or circular trajectories. A new response method enabled subjects to report perceived destination and perceived curvature of their future path simultaneously. Various combinations of simulated gaze rotation in the retinal flow and eye pursuit were investigated. Simulated gaze rotation ranged from consistent with and larger than eye pursuit, to opponent to and larger than eye pursuit. It was found that the perceived destination shifts non-linearly with the mismatch between simulated gaze rotation and eye pursuit. The non-linearity is also revealed in the perceived tangent heading direction and perceived path curvature, although to different extents in different subjects. For the same retinal flow, eye pursuit that is consistent with the simulated gaze rotation reduces heading error and the perceived path straightens out. In contrast, perceived path and/or heading do not become more curved or more biased in the direction opposite to pursuit when the eye-in-head rotation is opposite to the simulated gaze rotation. These observations point to modulation of the effect of the extra-retinal pursuit signal by the visual evidence for eye rotation. In a second experiment, we presented to a stationary eye the sum of a component of simulated gaze rotation and radial flow. It was found that the bi-circular flow component, which characterizes the change in the pattern of flow directions caused by the gaze rotation, induces a shift of perceived heading without appreciable perceived path curvature. Conversely, the complementary component of simulated gaze rotation (bi-radial flow) evokes a percept of motion on a curved path with a small tangent heading error. This suggests that the bi-circular and bi-radial flow components contribute primarily to percepts of heading and path curvature, respectively.
Affiliation(s)
- A V van den Berg
- Department of Physiology, Helmholtz School for Autonomous Systems Research, Faculty of Medicine, Erasmus University Rotterdam, PO Box 1738, 3000 DR, Rotterdam, The Netherlands.
|
28
|
Haarmeier T, Bunjes F, Lindner A, Berret E, Thier P. Optimizing visual motion perception during eye movements. Neuron 2001; 32:527-35. [PMID: 11709162 DOI: 10.1016/s0896-6273(01)00486-x] [Citation(s) in RCA: 50] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
We usually perceive a stationary, stable world and we are able to correctly estimate the direction of heading from optic flow despite coherent visual motion induced by eye movements. This astonishing example of perceptual invariance results from a comparison of visual information with internal reference signals predicting the visual consequences of an eye movement. Here we demonstrate that the reference signal predicting the consequences of smooth-pursuit eye movements is continuously calibrated on the basis of direction-selective interactions between the pursuit motor command and the rotational flow induced by the eye movement, thereby minimizing imperfections of the reference signal and guaranteeing an ecologically optimal interpretation of visual motion.
Affiliation(s)
- T Haarmeier
- Department of Cognitive Neurology, University of Tübingen, 72076, Tübingen, Germany
|
29
|
Abstract
By adding retinal and pursuit eye-movement velocity one can determine the motion of an object with respect to the head. It would seem likely that the visual system carries out a similar computation by summing extra-retinal, eye-velocity signals with retinal motion signals. Perceived head-centred motion may therefore be determined by differences in the way these signals encode speed. For example, if extra-retinal signals provide the lower estimate of speed then moving objects will appear slower when pursued (Aubert-Fleischl phenomenon) and stationary objects will move opposite to an eye movement (Filehne illusion). Most previous work proposes that these illusions exist because retinal signals encode retinal motion accurately while extra-retinal signals under-estimate eye speed. A more general model is presented in which both signals could be in error. Two types of input/output speed relationship are examined. The first uses linear speed transducers and the second non-linear speed transducers, the latter based on power laws. It is shown that studies of the Aubert-Fleischl phenomenon and Filehne illusion reveal the gain ratio or power ratio alone. We also consider general velocity-matching and show that in theory matching functions are limited by gain ratio in the linear case. However, in the non-linear case individual transducer shapes are revealed albeit up to an unknown scaling factor. The experiments show that the Aubert-Fleischl phenomenon and Filehne illusion are adequately described by linear speed transducers with a gain ratio less than one. For some observers, this is also the case in general velocity-matching experiments. For other observers, however, behaviour is non-linear and, according to the transducer model, indicates the existence of expansive non-linearities in speed encoding. 
This surprising result is discussed in relation to other theories of head-centred motion perception and the possible strategies some observers might adopt when judging stimulus motion during an eye movement.
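The linear-transducer case described above can be made concrete with a short numerical sketch. The gains below are illustrative values chosen so the gain ratio is less than one, as the abstract concludes; the function itself is an assumption, not code from the paper:

```python
def perceived_head_centric(retinal_v, eye_v, g_r=1.0, g_e=0.8):
    """Linear speed transducers: perceived head-centric velocity is the
    sum of a scaled retinal signal and a scaled extra-retinal (eye) signal.
    Gain ratio g_e/g_r = 0.8 < 1 (illustrative values)."""
    return g_r * retinal_v + g_e * eye_v

v = 10.0  # deg/s target or pursuit speed

# Aubert-Fleischl: a pursued target (zero retinal slip) looks slower
# than the same target viewed during fixation.
fixated = perceived_head_centric(v, 0.0)    # 10.0
pursued = perceived_head_centric(0.0, v)    # 8.0

# Filehne illusion: a stationary background during pursuit (retinal
# motion -v, eye velocity +v) appears to drift opposite to the pursuit.
background = perceived_head_centric(-v, v)  # -2.0
```

With a gain ratio of one both illusions vanish, which is why these two phenomena reveal the ratio but not the individual transducer shapes in the linear case.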
Affiliation(s)
- T C Freeman
- School of Psychology, Cardiff University, PO Box 901, CF10 3YG, Cardiff, UK.
|
30
|
Beintema JA, van den Berg AV. Pursuit affects precision of perceived heading for small viewing apertures. Vision Res 2001; 41:2375-91. [PMID: 11459594 DOI: 10.1016/s0042-6989(01)00077-3] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
We investigated the interaction between extra-retinal rotation signals and retinal motion signals in heading perception during pursuit eye movement. For a limited viewing aperture, the variability in perceived heading strongly depends on the pattern of motion directions. Heading towards a point outside the aperture generates nearly parallel aperture flow. This results in lower precision of perceived heading than heading that renders the radial pattern of flow visible. We ask if the precision is limited by the pattern of flow visible on the retina or that on the screen. During fixation, the two patterns are identical. They are decoupled during pursuit, since pursuit changes radial flow within the aperture on the screen into nearly parallel flow on the retina, and vice versa. The extra-retinal signal is known to reduce systematic errors in the direction of pursuit, thus compensating for the rotational flow during pursuit. We now ask if the extra-retinal signal also affects the precision of heading percepts. It might if at the spatial integration stage the rotational flow has been subtracted out already. A compensation beyond the integration stage, however, cannot undo the change in retinal motion directions, so that an effect of pursuit on precision cannot be avoided. We measured the variable and systematic errors in perceived heading during fixation and pursuit for a frontal plane approach, while varying duration, dot lifetime and aperture size. We found that precision is affected by pursuit as much as predicted from the pattern of retinal flow, while compensation is significantly greater than zero. This means that the interaction between the extra-retinal signal and visual motion signals takes place after spatial integration of local motion signals. Furthermore, compensation increased significantly with longer duration (0.5-3.0 s), but not with larger aperture size (10-50 degrees). A larger aperture size did increase the eccentricity of perceived heading.
Affiliation(s)
- J A Beintema
- Department of Zoology and Neurobiology, Ruhr University Bochum, 44780, Bochum, Germany.
|
31
|
Li L, Warren WH. Perception of heading during rotation: sufficiency of dense motion parallax and reference objects. Vision Res 2001; 40:3873-94. [PMID: 11090678 DOI: 10.1016/s0042-6989(00)00196-6] [Citation(s) in RCA: 75] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
Abstract
How do observers perceive the path of self-motion during rotation? Previous research suggests that extra-retinal information about eye movements is necessary at high rotation rates (2-5 degrees /s), but those experiments used sparse random-dot displays. With dense texture-mapped displays, we find the path can be perceived from retinal flow alone at high simulated rotation rates if (a) dense motion parallax and (b) at least one reference object are available. We propose that the visual system determines instantaneous heading from the first-order motion parallax field, and recovers the path of self-motion by updating heading over time with respect to reference objects in the scene.
Affiliation(s)
- L Li
- Department of Cognitive and Linguistic Sciences, Brown University, Box 1978, 02912, Providence, RI, USA
|
32
|
Abstract
We examined human heading judgement from second-order motion, which was generated by random dots whose contrast polarity was determined randomly on each frame. It was found that human observers can judge heading fairly accurately from second-order motion when pure translation is simulated, or when self-motion toward a ground plane with gaze rotation is simulated, but not when self-motion toward cloud-like random dots with gaze rotation is simulated. It is suggested that the human visual system cannot decompose the flow field into rotational and translational components using second-order motion information alone, but that it can do so in some way from the flow field of a ground plane.
Affiliation(s)
- M Hanada
- Graduate School of Human and Environmental Studies, Kyoto University, Yoshida-nihonmatsu-cho, Sakyo-ku, 606-8501, Kyoto, Japan.
|
33
|
Bertin RJ, Israël I, Lappe M. Perception of two-dimensional, simulated ego-motion trajectories from optic flow. Vision Res 2000; 40:2951-71. [PMID: 11000394 DOI: 10.1016/s0042-6989(00)00134-6] [Citation(s) in RCA: 16] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
Abstract
A veridical percept of ego-motion is normally derived from a combination of visual, vestibular, and proprioceptive signals. A previous study showed that blindfolded subjects can accurately perceive passively travelled straight or curved trajectories, provided that the orientation of the head remains constant along the trajectory. When they were turned (whole-body, head-fixed) relative to the trajectory, errors occurred. We asked whether vision allows for better path perception in that situation, to correct or complement vestibular perception. Seated, stationary subjects wore a head-mounted display showing optic flow stimuli which simulated linear or curvilinear 2D trajectories over a horizontal ground plane. The observer's orientation was either fixed in space, fixed relative to the path, or changed relative to both. After presentation, subjects reproduced the perceived movement with a model vehicle, of which position and orientation were recorded. They tended to perceive ego-rotation (yaw) correctly, but they perceived orientation as fixed relative to the trajectory or (unlike in the vestibular study) to space. This caused trajectory misperception when body rotation was wrongly attributed to a rotation of the path. Visual perception was very similar to vestibular perception.
Affiliation(s)
- R J Bertin
- Collège de France/LPPA, 11, place Marcelin Berthelot, 75005, Paris, France.
34
Abstract
A central theme in previous studies of heading judgements has been whether the retinal flow field can be decomposed to recover the translation component of locomotion when flow also contains the effects of gaze rotation. We explored not just the effect of moving gaze, but also moving attention away from the locomotor path by presenting the case of fixating a road sign and completing different attentional tasks during locomotion. Heading errors increased significantly with attentional load, in the absence of extra-retinal gaze information. When we introduced extra-retinal gaze information with the same tasks this resulted in a significant improvement in heading judgements. These results lead us to question whether the decomposition argument translates to real-world judgements of locomotor heading. If observers need to closely attend to roadside information it seems that decomposition is ineffective, whereas if they have the latitude to alternate gaze it is unnecessary.
Affiliation(s)
- J P Wann
- Department of Psychology, University of Reading, PO Box 238, 3 Earley Gate, Whiteknights, RG6 6AL, Reading, UK.
35
Abstract
We investigated effects of the roll (rotation around the line of sight) and pitch (rotation around the horizontal axis) components of retinal flow on heading judgement from visual motion information. The performance of human observers for yaw (rotation around the vertical axis) plus pitch differed little from that for yaw alone, although perceived heading was biased toward the fixation point, and heading judgement was fairly robust to roll. Some observers could perceive heading under combined pitch, yaw and roll at a roll rate of 11.5 degrees/s without extra-retinal information. This suggests that the human visual system has mechanisms that compensate for roll.
Affiliation(s)
- M Hanada
- Graduate School of Human and Environmental Studies, Kyoto University, Yoshida-nihonmatsu-cho, Sakyo-ku, 606-8501, Kyoto, Japan.
36
Freeman TC, Banks MS, Crowell JA. Extraretinal and retinal amplitude and phase errors during Filehne illusion and path perception. PERCEPTION & PSYCHOPHYSICS 2000; 62:900-9. [PMID: 10997037 DOI: 10.3758/bf03212076] [Citation(s) in RCA: 22] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Pursuit eye movements give rise to retinal motion. To judge stimulus motion relative to the head, the visual system must correct for the eye movement by using an extraretinal, eye-velocity signal. Such correction is important in a variety of motion estimation tasks including judgments of object motion relative to the head and judgments of self-motion direction from optic flow. The Filehne illusion (where a stationary object appears to move opposite to the pursuit) results from a mismatch between retinal and extraretinal speed estimates. A mismatch in timing could also exist. Speed and timing errors were investigated using sinusoidal pursuit eye movements. We describe a new illusion--the slalom illusion--in which the perceived direction of self-motion oscillates left and right when the eyes move sinusoidally. A linear model is presented that determines the gain ratio and phase difference of extraretinal and retinal signals accompanying the Filehne and slalom illusions. The speed mismatch and timing differences were measured in the Filehne and self-motion situations using a motion-nulling procedure. Timing errors were very small for the Filehne and slalom illusions. However, the ratios of extraretinal to retinal gain were consistently less than 1, so both illusions are the consequence of a mismatch between estimates of retinal and extraretinal speed. The relevance of the results for recovering the direction of self-motion during pursuit eye movements is discussed.
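The linear model described above can be sketched in a few lines; the gain values and signal names below are illustrative assumptions for the sketch, not the study's fitted parameters:

```python
import numpy as np

# Minimal linear-gain sketch of the Filehne situation.  The gains
# g_ret and g_ext are illustrative assumptions, not measured values.
t = np.linspace(0, 2 * np.pi, 200)
eye_velocity = np.sin(t)          # sinusoidal pursuit (arbitrary units)
retinal_motion = -eye_velocity    # a stationary object slips opposite to pursuit

g_ret, g_ext = 1.0, 0.8           # retinal and extraretinal signal gains
perceived = g_ext * eye_velocity + g_ret * retinal_motion

# g_ext / g_ret < 1: the stationary object appears to move opposite to
# the pursuit (the Filehne illusion); a phase lag between the two
# signals would instead produce a timing error.
```

With equal gains the illusion vanishes; a motion-nulling procedure like the study's effectively searches for the retinal motion at which `perceived` is zero.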
37
Abstract
Observer translation through the environment can be accompanied by rotation of the eye about any axis. For rotation about the vertical axis (horizontal rotation) during translation in the horizontal plane, it is known that the absence of depth in the scene and an extra retinal signal leads to a systematic error in the observer's perceived direction of heading. This heading error is related in magnitude and direction to the shift of the centre of retinal flow (CF) that occurs because of the rotation. Rotation about any axis that deviates from the heading direction results in a CF shift. So far, however, the effect of rotation about the line of sight (torsion) on perceived heading has not been investigated. We simulated observer translation towards a wall or cloud, while simultaneously simulating eye rotation about the vertical axis, the torsional axis or combinations thereof. We find only small systematic effects of torsion on the set of 2D perceived headings, regardless of the simulated horizontal rotation. In proportion to the CF shift, the systematic errors are significantly smaller for pure torsion than for pure horizontal rotation. In contrast to errors caused by horizontal rotation, the torsional errors are hardly reduced by addition of depth to the scene. We suggest the difference in behaviour reflects the difference in symmetry of the field of view relative to the axis of rotation: the higher symmetry in the case of torsion may allow for a more accurate estimation of the rotational flow. Moreover, we report a new phenomenon. Simulated horizontal rotation during simulated wall approach increases the heading-dependency of errors, causing a larger compression of perceived heading in the horizontal direction than in the vertical direction.
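The shift of the centre of flow (CF) discussed above follows from the first-order motion field; a minimal numeric sketch for the yaw-during-wall-approach case (the speeds and distances are illustrative, not the study's stimulus parameters):

```python
# Shift of the centre of flow during wall approach with horizontal
# (yaw) rotation.  All numbers are illustrative.
Z = 2.0      # distance to the frontoparallel wall
Tz = 1.0     # forward translation speed
wy = 0.05    # yaw rate (rad/s)

def flow_x(x):
    """Horizontal flow on the horizontal meridian (y = 0, focal length 1)."""
    return x * Tz / Z - (1 + x**2) * wy

# Near the fovea the flow vanishes where x * Tz / Z is approximately wy,
# so the centre of flow is displaced from the true heading (x = 0) by about:
cf_shift = wy * Z / Tz
```

The study compares heading errors relative to this kind of CF shift for yaw with the analogous shift produced by torsion.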
Affiliation(s)
- J A Beintema
- Medical Faculty, Erasmus Universiteit Rotterdam, The Netherlands.
38
Abstract
We developed a new computational model of human heading judgement from retinal flow. The model uses two assumptions: a large number of sampling points in the flow field and a symmetric sampling region around the origin. The algorithm estimates self-rotation parameters by calculating statistics whose expectations correspond to the rotation parameters. After the rotational components are removed from the retinal flow, the heading direction is recovered from the flow field. Performance of the model was compared with human data in three psychophysical experiments. In the first experiment, we generated stimuli which simulated self-motion toward the ground, a cloud or a frontoparallel plane, and found that the simulation results of the model were consistent with human performance. In the second and third experiments, we measured the slope of the perceived-versus-simulated heading function when a perturbation velocity, weighted according to the distance relative to the fixation distance, was added to the vertical velocity component under the cloud condition. As the magnitude of the perturbation was increased, the slope of the function increased. The characteristics observed in the experiments can be explained well by the proposed model.
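Once the rotational components are removed, recovering heading reduces to finding the singularity of a radial field. A least-squares sketch of that final stage (the function name and setup are illustrative; this is not the model's actual statistics-based estimator):

```python
import numpy as np

def recover_foe(points, flows):
    """Least-squares focus of expansion of a rotation-free flow field.

    Each residual flow vector (u, v) at image point (x, y) must be
    parallel to (x - fx, y - fy), i.e. u*(y - fy) - v*(x - fx) = 0,
    which is linear in the unknown FOE (fx, fy).
    """
    x, y = points[:, 0], points[:, 1]
    u, v = flows[:, 0], flows[:, 1]
    A = np.stack([v, -u], axis=1)     # coefficients of (fx, fy)
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe
```

The model's harder step is estimating the rotation parameters from flow statistics in the first place; this sketch covers only the heading-recovery stage that follows.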
Affiliation(s)
- M Hanada
- Graduate School of Human and Environmental Studies, Kyoto University, Japan.
39
van den Berg AV, Beintema JA. The mechanism of interaction between visual flow and eye velocity signals for heading perception. Neuron 2000; 26:747-52. [PMID: 10896169 DOI: 10.1016/s0896-6273(00)81210-6] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
Abstract
A translating eye receives a radial pattern of motion that is centered on the direction of heading. If the eye is rotating and translating, visual and extraretinal signals help to cancel the rotation and to perceive heading correctly. This involves (1) an interaction between visual and eye movement signals and (2) a motion template stage that analyzes the pattern of visual motion. Early interaction leads to motion templates that integrate head-centered motion signals in the visual field. Integration of retinal motion signals leads to late interaction. Here, we show that retinal flow limits precision of heading. This result argues against an early, vector subtraction type of interaction, but is consistent with a late, gain field type of interaction with eye velocity signals and neurophysiological findings in area MST of the monkey.
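The two interaction schemes contrasted above can be made concrete with the standard first-order motion field (the Longuet-Higgins/Prazdny form); the parameter values in the comment and test are illustrative:

```python
import numpy as np

def motion_field(x, y, Z, T, w):
    """First-order retinal motion field at image point (x, y), focal
    length 1, for a scene point at depth Z, observer translation
    T = (Tx, Ty, Tz) and eye rotation w = (wx, wy, wz)."""
    Tx, Ty, Tz = T
    wx, wy, wz = w
    u = (x * Tz - Tx) / Z + x * y * wx - (1 + x**2) * wy + y * wz
    v = (y * Tz - Ty) / Z + (1 + y**2) * wx - x * y * wy - x * wz
    return u, v

# An "early", vector-subtraction interaction would subtract the
# depth-independent rotational part (set T = 0) predicted from an
# extraretinal eye-velocity signal, leaving a purely radial field
# centred on the heading (Tx/Tz, Ty/Tz).  A "late", gain-field
# interaction instead modulates template responses to the raw flow.
```

The sketch only fixes the geometry; the paper's argument about which scheme the visual system uses rests on how each one limits the precision of heading judgements.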
Affiliation(s)
- A V van den Berg
- Helmholtz School for Autonomous Systems Research, Department of Physiology, Faculty of Medicine, Erasmus University Rotterdam, The Netherlands.
40
Abstract
Cyclovergence is a simultaneously occurring cyclorotation of the two eyes in opposite directions. Cyclovergence can be elicited visually by opposite cyclorotation of the two eyes' images. It can also occur in conjunction with horizontal vergence and vertical version in a stereotyped manner, as described by the extended Listing's law (or L2). We manipulated L2-related and visually evoked cyclovergence independently, using stereoscopic images of three-dimensional (3D) scenes. During pursuit in the midsagittal plane, cyclovergence followed L2. The amount of L2-related cyclovergence during pursuit varied between subjects. Each pursuit trial was repeated three times. Two of the three trials had additional image rotation to visually evoke cyclovergence. We could separate the L2-related and visual components of cyclovergence by subtracting the cyclovergence responses in matched trials that differed only in the image rotation applied during pursuit. This indicates that visual and L2-related contributions to cyclovergence add linearly, suggesting the presence of two independent systems. Visually evoked cyclovergence gains were characteristic for a given subject, little affected by visual stimulus parameters, and usually low (0.1-0.5) when a static target was fixated. Gain and phase lag of visually evoked cyclovergence during vertical pursuit were comparable with those during fixation of a static target. The binocular orientations are in better agreement with the orientations predicted by L2 than with those predicted by nulling of the cyclodisparities. On the basis of our results, we suggest that visually driven and L2-related cyclovergence are independent of each other and superimpose linearly.
Affiliation(s)
- I T Hooge
- Department of Physiology, Helmholtz School for Autonomous Systems Research, Erasmus University Rotterdam, The Netherlands
41
Affiliation(s)
- A V van den Berg
- Helmholtz School for Autonomous Systems Research, Department of Physiology, Faculty of Medicine, Erasmus University, Rotterdam, The Netherlands
42
Abstract
Humans perceive heading accurately when they rotate their eyes. This is remarkable, because (1) the pursuit eye movement makes the retinal flow more complicated; and (2) the eye rotation causes a continuous change of the heading direction on the retina. The first problem prevents a simple association of the centre of flow on the retina with the heading direction. To solve it, the brain needs to take into account the flow associated with the eye's rotation. But even if this is done correctly, the resulting estimate of the heading is retino-centric and changing over time. Thus, the processing time needed to retrieve the heading from the flow field will cause a lag with respect to the actual heading direction. We investigated the latency of heading perception. We presented stepwise changes of the centre of expanding flow to stationary and moving eyes. This mimics the movement of the heading direction across the retina, but avoids the complicating effects of rotational flow. For a stationary eye, we found a bias in perceived heading that corresponds to a latency of 300 ms or more. Yet errors in heading perception are normally marginal, because we found an opposite bias for the moving eye, which counters the errors due to latency and a changing retino-centric heading direction. This suggests that the current heading direction is predicted from the extra-retinal signal and the delayed visual signals.
43
Abstract
Accurate and efficient control of self-motion is an important requirement for our daily behavior. Visual feedback about self-motion is provided by optic flow. Optic flow can be used to estimate the direction of self-motion ('heading') rapidly and efficiently. Analysis of oculomotor behavior reveals that eye movements usually accompany self-motion. Such eye movements introduce additional retinal image motion so that the flow pattern on the retina usually consists of a combination of self-movement and eye movement components. The question of whether this 'retinal flow' alone allows the brain to estimate heading, or whether an additional 'extraretinal' eye movement signal is needed, has been controversial. This article reviews recent studies that suggest that heading can be estimated visually but extraretinal signals are used to disambiguate problematic situations. The dorsal stream of primate cortex contains motion processing areas that are selective for optic flow and self-motion. Models that link the properties of neurons in these areas to the properties of heading perception suggest possible underlying mechanisms of the visual perception of self-motion.
44
Grigo A, Lappe M. Dynamical use of different sources of information in heading judgments from retinal flow. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 1999; 16:2079-2091. [PMID: 10474889 DOI: 10.1364/josaa.16.002079] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
The optic flow arising in the eyes of an observer during self-motion is influenced by the occurrence of eye movements. The determination of heading during eye movements may be based on the pattern of retinal image motion (the retinal flow) or on an additional use of an extraretinal eye-movement signal. Previous research has presented support for either of these hypotheses, depending on the movement geometry and the layout of the visual scene. A special situation in which all previous studies unequivocally have agreed that an extra-retinal signal is required occurs when the visual scene consists of a single frontoparallel plane. In this situation eye movements shift the center of expansion on the retina to a location that does not correspond to the direction of self-movement. Without extraretinal input, human observers confuse the center of expansion with their heading and show a systematic heading estimation error. We reexamined and further investigated this situation. We presented retinal flow stimuli on a large projection screen in the absence of extra-retinal input and varied stimulus size, presentation duration, and orientation of the plane. In contrast to previous studies, we found that in the case of a perpendicular approach toward the plane, heading judgments can be accurate. Accurate judgments were observed when the field of view was large (90 degrees x 90 degrees) and the stimulus duration was short (< or = 0.5 s). For a small field of view or a prolonged stimulus presentation, a systematic and previously described error appeared that is related to the radial structure of the flow field and the location of the center of expansion. An oblique approach toward the plane results in an ambiguous flow field with two mathematically possible solutions for heading. In this situation, when the stimulus duration was short, subjects reported a perceived heading midway between these two solutions. For longer flow sequences, subjects again chose the center of expansion. Our results suggest a dynamic change in the analysis or interpretation of retinal flow during heading perception.
Affiliation(s)
- A Grigo
- Department of Zoology and Neurobiology, Ruhr University Bochum, Germany
45
Abstract
If distance, shape and size are judged independently from the retinal and extra-retinal information at hand, different kinds of information can be expected to dominate each judgement, so that errors in one judgement need not be consistent with errors in other judgements. In order to evaluate how independent these three judgments are, we examined how adding information that improves one judgement influences the others. Subjects adjusted the size and the global shape of a computer-simulated ellipsoid to match a tennis ball. They then indicated manually where they judged the simulated ball to be. Adding information about distance improved the three judgements in a consistent manner, demonstrating that a considerable part of the errors in all three judgements were due to misestimating the distance. Adding information about shape that is independent of distance improved subjects' judgements of shape, but did not influence the set size or the manually indicated distance. Thus, subjects ignored conflicts between the cues when judging the shape, rather than using the conflicts to improve their estimate of the ellipsoid's distance. We conclude that the judgements are quite independent, in the sense that no attempt is made to attain consistency, but that they do rely on some common measures, such as that of distance.
Affiliation(s)
- E Brenner
- Vakgroep Fysiologie, Erasmus Universiteit, Rotterdam, The Netherlands.
46
Crowell JA, Banks MS, Shenoy KV, Andersen RA. Visual self-motion perception during head turns. Nat Neurosci 1998; 1:732-7. [PMID: 10196591 DOI: 10.1038/3732] [Citation(s) in RCA: 109] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Extra-retinal information is critical in the interpretation of visual input during self-motion. Turning our eyes and head to track objects displaces the retinal image but does not affect our ability to navigate because we use extra-retinal information to compensate for these displacements. We showed observers animated displays depicting their forward motion through a scene. They perceived the simulated self-motion accurately while smoothly shifting the gaze by turning the head, but not when the same gaze shift was simulated in the display; this indicates that the visual system also uses extra-retinal information during head turns. Additional experiments compared self-motion judgments during active and passive head turns, passive rotations of the body and rotations of the body with head fixed in space. We found that accurate perception during active head turns is mediated by contributions from three extra-retinal cues: vestibular canal stimulation, neck proprioception and an efference copy of the motor command to turn the head.
Affiliation(s)
- J A Crowell
- Caltech Division of Biology, Pasadena 91125, USA.
47
Ehrlich SM, Beck DM, Crowell JA, Freeman TC, Banks MS. Depth information and perceived self-motion during simulated gaze rotations. Vision Res 1998; 38:3129-45. [PMID: 9893821 DOI: 10.1016/s0042-6989(97)00427-6] [Citation(s) in RCA: 38] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
When presented with random-dot displays with little depth information, observers cannot determine their direction of self-motion accurately in the presence of rotational flow without appropriate extra-retinal information (Royden CS et al. Vis Res 1994;34:3197-3214). On theoretical grounds, one might expect improved performance when depth information is added to the display (van den Berg AV and Brenner E. Nature 1994;371:700-2). We examined this possibility by having observers indicate perceived self-motion paths when the amount of depth information was varied. When stereoscopic cues and a variety of monocular depth cues were added, observers still misperceived the depicted self-motion when the rotational flow in the display was not accompanied by an appropriate extra-retinal, eye-velocity signal. Specifically, they perceived curved self-motion paths with the curvature in the direction of the simulated eye rotation. The distance to the response marker was crucial to the objective measurement of this misperception. When the marker distance was small, the observers' settings were reasonably accurate despite the misperception of the depicted self-motion. When the marker distance was large, the settings exhibited the errors reported previously by Royden CS et al. (Vis Res 1994;34:3197-3214). The path judgement errors observers make during simulated gaze rotations appear to be the result of misattributing path-independent rotation to self-motion along a circular path with path-dependent rotation. An analysis of the information an observer could use to avoid such errors reveals that the addition of depth information is of little use.
Affiliation(s)
- S M Ehrlich
- Department of Psychology, School of Optometry, University of California, Berkeley 94720-2020, USA
48
Abstract
When an expansion flow field of moving dots is overlapped by planar motion, observers perceive an illusory displacement of the focus of expansion (FOE) in the direction of the planar motion (Duffy and Wurtz, Vision Research, 1993;33:1481-1490). The illusion may be a consequence of induced motion, wherein an induced component of motion relative to planar dots is added to the motions of expansion dots to produce the FOE shift. While such a process could be mediated by local 'center-surround' receptive fields, the effect could also be due to a higher level process which detects and subtracts large-field planar motion from the flow field. We probed the mechanisms underlying this illusion by adding varying amounts of rotation to the expansion stimulus, and by varying the speed and size of the planar motion field. The introduction of rotation into the stimulus produces an illusory shift in a direction perpendicular to the planar motion. Larger FOE shifts were perceived for greater speeds and sizes of planar motion fields, although the speed effect saturated at high speeds. While the illusion appears to share a common mechanism with center-surround induced motion, our results also point to involvement of a more global mechanism that subtracts coherent planar motion from the flow field. Such a process might help to maintain visual stability during eye movements.
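The induced-motion account described above has a simple closed form: if each expansion dot acquires an induced component opposite to the planar field, the singularity of the perceived field moves in the direction of the planar motion. A numeric sketch (the rates below are illustrative, not the study's stimulus values):

```python
import numpy as np

k = 2.0                      # expansion rate (1/s), illustrative
c = np.array([0.0, 0.0])     # true focus of expansion
d = np.array([0.5, 0.0])     # superimposed planar motion (deg/s)

def perceived_flow(p):
    """Expansion field plus an induced component -d on every dot."""
    return k * (p - c) - d

# The singularity of the perceived field sits at c + d/k, i.e. the FOE
# appears shifted in the direction of the planar motion, as in the
# illusion; a larger effective d gives a larger shift.
shifted_foe = c + d / k
```

Note the direction: a plain vector sum of the two fields (k*(p - c) + d) would shift the singularity opposite to the planar motion, so the sign of the observed shift is what implicates an induced-motion or subtraction process.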
Affiliation(s)
- C Pack
- Department of Cognitive and Neural Systems, Boston University, MA 02215, USA.
49
Abstract
Eye or head rotation would influence perceived heading direction if heading were coded only by cells tuned to retinal flow patterns that correspond to linear self-movement. We propose a model for heading detection based on motion templates that are also Gaussian-tuned to the amount of rotational flow. Such retinal flow templates allow explicit use of extra-retinal signals to create templates tuned to head-centric flow as seen by the stationary eye. Our model predicts an intermediate layer of 'eye velocity gain fields' in which 'rate-coded' eye velocity is multiplied with the responses of templates sensitive to specific retinal flow patterns. By combining the activities of one retinal flow template and many units with an eye velocity gain field, a new type of unit appears: its preferred retinal flow changes dynamically in accordance with the eye rotation velocity. This unit's activity thereby becomes approximately invariant to the amount of eye rotation. The units with eye velocity gain fields form the motion-analogue of the units with eye position gain fields found in area 7a, which, according to our general approach, are needed to transform position from retino-centric to head-centric coordinates. The rotation-tuned templates can also provide rate-coded visual estimates of eye rotation to allow a purely visual compensation for rotational flow. Our model is consistent with psychophysical data that indicate a role for extra-retinal as well as visual rotation signals in the correct perception of heading.
Affiliation(s)
- J A Beintema
- Helmholtz School for Autonomous Systems Research, Department of Physiology, Erasmus University Rotterdam, The Netherlands
50
Britten KH, van Wezel RJ. Electrical microstimulation of cortical area MST biases heading perception in monkeys. Nat Neurosci 1998; 1:59-63. [PMID: 10195110 DOI: 10.1038/259] [Citation(s) in RCA: 226] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
As we move through the environment, the pattern of visual motion on the retina provides rich information about our movement through the scene. Human subjects can use this information, often termed "optic flow", to accurately estimate their direction of self movement (heading) from relatively sparse displays. Physiological observations on the motion-sensitive areas of monkey visual cortex suggest that the medial superior temporal area (MST) is well suited for the analysis of optic flow information. To test whether MST is involved in extracting heading from optic flow, we perturbed its activity in monkeys trained on a heading discrimination task. Electrical microstimulation of MST frequently biased the monkeys' decisions about their heading, and these induced biases were often quite large. This result suggests that MST has a direct role in the perception of heading from optic flow.
Affiliation(s)
- K H Britten
- UC Davis Center for Neuroscience, California, USA.