1. Décima AP, Barraza JF, López-Moliner J. The perceptual dynamics of the contrast induced speed bias. Vision Res 2021; 191:107966. [PMID: 34808549] [DOI: 10.1016/j.visres.2021.107966]
Abstract
In this article we present a temporal extension of the slow-motion prior model that generates predictions about the temporal evolution of the contrast-induced speed bias. We then tested these predictions using a novel experimental paradigm that measures the dynamic perceptual difference between stimuli through a series of open-loop manual pursuit tasks. The results show good agreement with the model's predictions. The main finding is that hand-speed dynamics are affected by stimulus contrast in a way consistent with a dynamic model of motion perception that assumes a slow-motion prior. The proposed model also confirms observations from previous studies that the motion bias persists even at high contrast, as a consequence of the dynamics of the slow-motion prior.
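The core computation in slow-motion-prior models of this kind is a precision-weighted compromise between a noisy speed measurement and a prior peaked at zero speed; the paper's temporal extension lets the measurement sharpen over time, so the bias decays without vanishing. A minimal sketch of that idea (the Gaussian forms, the contrast-to-noise mapping, and all parameter values are illustrative assumptions, not the paper's fitted model):

```python
import numpy as np

def perceived_speed(true_speed, contrast, t=1.0, prior_sd=1.0, base_noise=0.2):
    """Posterior-mean speed under a zero-centered slow-motion prior (sketch).

    The likelihood widens as contrast drops (illustrative 1/contrast scaling)
    and narrows as evidence accumulates over time t, so the bias toward slow
    speeds is strongest at low contrast and decays, without vanishing, over time.
    """
    likelihood_sd = base_noise / (contrast * np.sqrt(t))
    # Product of Gaussians: the posterior mean is a precision-weighted average
    # of the measurement (true_speed) and the prior mean (0).
    w = prior_sd**2 / (prior_sd**2 + likelihood_sd**2)
    return w * true_speed

for c in (0.05, 0.2, 0.8):
    print(f"contrast {c:.2f}: 4 deg/s is perceived as {perceived_speed(4.0, c):.2f} deg/s")
```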
Affiliation(s)
- José Fernando Barraza
- Dpto. Luminotecnia, Luz y Visión "Herberto C. Bühler" (DLLyV), FACET, UNT, Argentina; Instituto de Investigación en Luz, Ambiente y Visión (ILAV), CONICET-UNT, Argentina
- Joan López-Moliner
- Vision and Control of Action (VISCA) Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Passeig de la Vall d'Hebron 171, 08035 Barcelona, Catalonia, Spain
2. Ma Z, Watamaniuk SNJ, Heinen SJ. Illusory motion reveals velocity matching, not foveation, drives smooth pursuit of large objects. J Vis 2017; 17(12):20. [PMID: 29090315] [PMCID: PMC5665499] [DOI: 10.1167/17.12.20]
Abstract
When small objects move in a scene, we keep them foveated with smooth pursuit eye movements. Although large objects such as people and animals are common, it remains unknown how we pursue them, since they cannot be foveated. The brain might calculate an object's centroid and center the eyes on it during pursuit, as a foveation mechanism would. Alternatively, the brain might merely match the object's velocity through motion integration. We tested these alternatives with an illusory motion stimulus that translates at a speed different from its retinal motion. The stimulus was a Gabor array that translated at a fixed velocity, with component Gabors that drifted with motion either consistent or inconsistent with the translation. Velocity matching predicts different pursuit behaviors across drift conditions, while centroid matching predicts no difference. We also tested whether pursuit can segregate and ignore irrelevant local drifts when motion and centroid information are consistent, by surrounding the Gabors with solid frames. Finally, observers judged the global translational speed of the Gabors, to determine whether smooth pursuit and motion perception share mechanisms. We found that consistent Gabor motion enhanced pursuit gain while inconsistent, opposite motion diminished it, drawing the eyes away from the center of the stimulus and supporting a motion-based pursuit drive. Catch-up saccades tended to counter the position offset, directing the eyes opposite to the deviation caused by the pursuit gain change. Surrounding the Gabors with visible frames canceled both the gain increase and the compensatory saccades. Perceived speed was modulated analogously to pursuit gain. The results suggest that smooth pursuit of large stimuli depends on the magnitude of integrated retinal motion information, not its retinal location, and that the position system might be unnecessary for generating smooth velocity to large pursuit targets.
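The two candidate mechanisms make predictions that are easy to state computationally. A toy sketch of the contrast (the function name, stimulus encoding, and equal-weight averaging are illustrative assumptions, not the paper's analysis):

```python
import numpy as np

def predicted_pursuit_velocity(translation, local_drifts, mode="velocity"):
    """Toy predictions for pursuit of a translating Gabor array.

    velocity matching: drive = integrated retinal motion = translation + mean drift
    centroid matching: drive = motion of the array centroid = translation alone
    """
    translation = np.asarray(translation, dtype=float)
    if mode == "velocity":
        return translation + np.mean(local_drifts, axis=0)
    return translation  # the centroid moves with the translation, drift-independent

drifts_consistent = [(1.0, 0.0)] * 8   # component drift along the translation
drifts_opposite = [(-1.0, 0.0)] * 8    # component drift against it
print(predicted_pursuit_velocity((4.0, 0.0), drifts_consistent))  # gain > 1 predicted
print(predicted_pursuit_velocity((4.0, 0.0), drifts_opposite))    # gain < 1 predicted
```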
Affiliation(s)
- Zheng Ma
- Smith-Kettlewell Eye Research Institute, San Francisco, CA, USA
- Stephen J Heinen
- The Smith-Kettlewell Eye Research Institute, San Francisco, CA, USA
3. Zhu JE, Ma WJ. Orientation-dependent biases in length judgments of isolated stimuli. J Vis 2017; 17(2):20. [PMID: 28245499] [DOI: 10.1167/17.2.20]
Abstract
Vertical line segments tend to be perceived as longer than horizontal ones of the same length, but this may be due in part to configuration effects. To minimize such effects, we used isolated line segments in a two-interval forced-choice paradigm, without limiting ourselves to horizontal and vertical orientations. We fitted psychometric curves using a Bayesian method that assumes that, for a given subject, the lapse rate is the same across all conditions. The closer a line segment's orientation was to vertical, the longer it was perceived to be. Moreover, subjects tended to report the standard line (in the second interval) as longer. The data were well described by a model that contains both an orientation-dependent and an interval-dependent multiplicative bias. Using this model, we estimated that a vertical line was on average perceived as 9.2% ± 2.1% longer than a horizontal line, and a second-interval line was on average perceived as 2.4% ± 0.9% longer than a first-interval line. Moving from a descriptive to an explanatory model, we hypothesized that anisotropy in the polar angle of lines in three dimensions underlies the horizontal-vertical illusion; specifically, that line segments more often have a polar angle of 90° (corresponding to the ground plane) than any other polar angle. This model qualitatively accounts not only for the empirical relationship between projected length and projected orientation that predicts the horizontal-vertical illusion, but also for the empirical distribution of projected orientation in photographs of natural scenes and for paradoxical results reported earlier for slanted surfaces.
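The descriptive model is a standard two-interval psychometric function with a shared lapse rate and two multiplicative biases. A sketch of that structure (the sin² orientation profile, σ, and lapse value are illustrative assumptions; a = 0.092 and b = 0.024 stand in for the paper's ~9.2% and ~2.4% estimates):

```python
import numpy as np
from scipy.stats import norm

def p_report_second_longer(L1, theta1, L2, theta2,
                           a=0.092, b=0.024, sigma=0.05, lapse=0.02):
    """P(subject reports the second-interval line as longer), sketch.

    Perceived length is scaled multiplicatively by an orientation factor
    (1 at horizontal, 1 + a at vertical) and by a second-interval factor 1 + b;
    comparison noise acts on log length, and a lapse rate caps performance.
    """
    def ori_gain(theta_deg):
        return 1.0 + a * np.sin(np.radians(theta_deg))**2
    d = np.log((1.0 + b) * ori_gain(theta2) * L2) - np.log(ori_gain(theta1) * L1)
    return lapse / 2 + (1 - lapse) * norm.cdf(d / sigma)

# Equal lengths, vertical first vs. horizontal second: the vertical bias wins,
# so the second line is rarely judged longer.
print(p_report_second_longer(L1=100, theta1=90, L2=100, theta2=0))
```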
Affiliation(s)
- Jielei Emma Zhu
- Center for Neural Science and Department of Psychology, New York University, New York, NY,
- Wei Ji Ma
- Center for Neural Science and Department of Psychology, New York University, New York, NY,
4. Page WK, Sato N, Froehler MT, Vaughn W, Duffy CJ. Navigational path integration by cortical neurons: origins in higher-order direction selectivity. J Neurophysiol 2015; 113:1896-1906. [PMID: 25589586] [DOI: 10.1152/jn.00197.2014]
Abstract
Navigation relies on the neural processing of sensory cues about observer self-movement and spatial location. Neurons in macaque dorsal medial superior temporal cortex (MSTd) respond to visual and vestibular self-movement cues, potentially contributing to navigation and orientation. We moved monkeys on circular paths around a room while recording the activity of MSTd neurons. MSTd neurons show a variety of sensitivities to the monkey's heading direction, circular path through the room, and place in the room. Changing visual cues alters the relative prevalence of those response properties. Disrupting the continuity of self-movement paths through the environment disrupts path selectivity in a manner linked to the time course of single neuron responses. We hypothesize that sensory cues interact with the spatial and temporal integrative properties of MSTd neurons to derive path selectivity for navigational path integration supporting spatial orientation.
Affiliation(s)
- William K Page
- Departments of Neurology, Neurobiology and Anatomy, Ophthalmology, Brain and Cognitive Sciences, and The Center for Visual Science, The University of Rochester Medical Center, Rochester, New York
- Nobuya Sato
- Departments of Neurology, Neurobiology and Anatomy, Ophthalmology, Brain and Cognitive Sciences, and The Center for Visual Science, The University of Rochester Medical Center, Rochester, New York
- Michael T Froehler
- Departments of Neurology, Neurobiology and Anatomy, Ophthalmology, Brain and Cognitive Sciences, and The Center for Visual Science, The University of Rochester Medical Center, Rochester, New York
- William Vaughn
- Departments of Neurology, Neurobiology and Anatomy, Ophthalmology, Brain and Cognitive Sciences, and The Center for Visual Science, The University of Rochester Medical Center, Rochester, New York
- Charles J Duffy
- Departments of Neurology, Neurobiology and Anatomy, Ophthalmology, Brain and Cognitive Sciences, and The Center for Visual Science, The University of Rochester Medical Center, Rochester, New York
5. Perrinet LU, Masson GS. Motion-based prediction is sufficient to solve the aperture problem. Neural Comput 2012; 24:2726-2750. [PMID: 22734489] [DOI: 10.1162/neco_a_00332]
Abstract
In low-level sensory systems, it is still unclear how the noisy information collected locally by neurons gives rise to a coherent global percept. The aperture problem in motion detection demonstrates this well: because the luminance of an elongated line is uniform along its axis, the velocity component tangential to the line is ambiguous when measured locally. Here, we develop the hypothesis that motion-based predictive coding is sufficient to infer global motion. Our implementation is based on a context-dependent diffusion of a probabilistic representation of motion. In simulations we observe a progressive solution to the aperture problem similar to that seen in physiology and behavior. We demonstrate that this solution results from two underlying mechanisms. First, we demonstrate the formation of a tracking behavior favoring temporally coherent features independent of their texture. Second, we observe that incoherent features are explained away, while coherent information diffuses progressively to the global scale. Most previous models included ad hoc mechanisms, such as end-stopped cells or a selection layer, to track specific luminance-based features as necessary conditions for solving the aperture problem. Here, we show that motion-based predictive coding, as implemented in this functional model, is sufficient to solve the aperture problem. This solution may give insights into the role of prediction in a large class of sensory computations.
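The mechanism can be caricatured in a few lines: a probability distribution over (position, velocity) is transported along each velocity hypothesis (the prediction step) and then reweighted by local, possibly ambiguous measurements (the update step), so only temporally coherent velocities accumulate evidence. A 1D illustrative sketch, not the paper's implementation (grid size, velocity set, and likelihood values are arbitrary):

```python
import numpy as np

n_x, vels = 50, np.array([-2, -1, 0, 1, 2])
p = np.full((n_x, len(vels)), 1.0 / (n_x * len(vels)))  # flat initial belief

def step(p, likelihood):
    pred = np.empty_like(p)
    for j, v in enumerate(vels):              # transport mass along each velocity
        pred[:, j] = np.roll(p[:, j], v)
    post = pred * likelihood                  # reweight by local evidence
    return post / post.sum()

# An aperture-like measurement: evidence at one position that cannot tell
# apart two candidate velocities (+1 and +2).
like = np.ones((n_x, len(vels)))
like[25, [3, 4]] = 10.0
for _ in range(10):
    p = step(p, like)
    like = np.roll(like, 1, axis=0)  # the feature itself moves at +1 per step
# Only the hypothesis that keeps re-predicting the evidence compounds, so the
# coherent velocity (+1) dominates the marginal over velocities.
print(vels[p.sum(axis=0).argmax()])
```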
Affiliation(s)
- Laurent U Perrinet
- Institut de Neurosciences de la Timone, CNRS/Aix-Marseille University, 13385 Marseille Cedex 5, France.
6. Beck C, Neumann H. Combining feature selection and integration: a neural model for MT motion selectivity. PLoS One 2011; 6:e21254. [PMID: 21814543] [PMCID: PMC3140976] [DOI: 10.1371/journal.pone.0021254]
Abstract
Background: The computation of pattern motion in visual area MT based on motion input from area V1 has been investigated in many experiments and models attempting to replicate the main mechanisms. Two different core conceptual approaches were developed to explain the findings. In integrationist models the key mechanism to achieve pattern selectivity is the nonlinear integration of V1 motion activity. In contrast, selectionist models focus on the motion computation at positions with 2D features.
Methodology/Principal Findings: Recent experiments revealed that neither of the two concepts alone is sufficient to explain all experimental data and that most of the existing models cannot account for the complex behaviour found. MT pattern selectivity changes over time for stimuli like type II plaids, from the vector average to the direction computed with an intersection-of-constraints rule or by feature tracking. Also, the spatial arrangement of the stimulus within the receptive field of an MT cell plays a crucial role. We propose a recurrent neural model showing how feature integration and selection can be combined into one common architecture to explain these findings. The key features of the model are the computation of 1D and 2D motion in model area V1 subpopulations that are integrated in model MT cells using feedforward and feedback processing. Our results are also in line with findings concerning the solution of the aperture problem.
Conclusions/Significance: We propose a new neural model for MT pattern computation and motion disambiguation that is based on a combination of feature selection and integration. The model can explain a range of recent neurophysiological findings, including temporally dynamic behaviour.
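The integration/selection tension is easiest to see in the two classic combination rules for plaid components: the vector average of the component normal velocities versus the intersection-of-constraints (IOC) solution. A self-contained sketch of both, using an illustrative type II plaid where the two rules disagree sharply:

```python
import numpy as np

def vector_average(normals):
    """Vector average of the components' 1D normal velocities."""
    return np.mean(np.asarray(normals, dtype=float), axis=0)

def intersection_of_constraints(normals):
    """IOC: the 2D velocity consistent with every component constraint.

    A component with normal velocity u constrains the pattern velocity v to
    the line v . (u/|u|) = |u|; stacking the constraints gives a least-squares
    system solved here with lstsq.
    """
    normals = np.asarray(normals, dtype=float)
    speeds = np.linalg.norm(normals, axis=1)
    units = normals / speeds[:, None]
    v, *_ = np.linalg.lstsq(units, speeds, rcond=None)
    return v

# Type II plaid: both normal velocities lie on the same side of the pattern
# direction, so the vector average is strongly biased away from the IOC answer.
components = [(1.0, 0.2), (1.0, 0.8)]
print(vector_average(components))               # (1.0, 0.5): biased estimate
print(intersection_of_constraints(components))  # (0.84, 1.0): true pattern velocity
```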
Affiliation(s)
- Cornelia Beck
- Institute of Neural Information Processing, University of Ulm, Ulm, Germany.
7. Tsui JMG, Hunter JN, Born RT, Pack CC. The role of V1 surround suppression in MT motion integration. J Neurophysiol 2010; 103:3123-3138. [PMID: 20457860] [DOI: 10.1152/jn.00654.2009]
Abstract
Neurons in the primate extrastriate cortex are highly selective for complex stimulus features such as faces, objects, and motion patterns. One explanation for this selectivity is that neurons in these areas carry out sophisticated computations on the outputs of lower-level areas such as primary visual cortex (V1), where neuronal selectivity is often modeled in terms of linear spatiotemporal filters. However, it has long been known that such simple V1 models are incomplete because they fail to capture important nonlinearities that can substantially alter neuronal selectivity for specific stimulus features. Thus a key step in understanding the function of higher cortical areas is the development of realistic models of their V1 inputs. We have addressed this issue by constructing a computational model of the V1 neurons that provide the strongest input to extrastriate cortical middle temporal (MT) area. We find that a modest elaboration to the standard model of V1 direction selectivity generates model neurons with strong end-stopping, a property that is also found in the V1 layers that provide input to MT. With this computational feature in place, the seemingly complex properties of MT neurons can be simulated by assuming that they perform a simple nonlinear summation of their inputs. The resulting model, which has a very small number of free parameters, can simulate many of the diverse properties of MT neurons. In particular, we simulate the invariance of MT tuning curves to the orientation and length of tilted bar stimuli, as well as the accompanying temporal dynamics. We also show how this property relates to the continuum from component to pattern selectivity observed when MT neurons are tested with plaids. Finally, we confirm several key predictions of the model by recording from MT neurons in the alert macaque monkey. Overall our results demonstrate that many of the seemingly complex computations carried out by high-level cortical neurons can in principle be understood by examining the properties of their inputs.
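The paper's key ingredient, end-stopping, can be caricatured as divisive suppression of a direction-selective response by activity at the ends of the receptive field, with MT then summing such inputs nonlinearly. A toy sketch of that idea (the divisive form, weights, and exponent are illustrative, not the paper's fitted model):

```python
import numpy as np

def end_stopped_response(center, flank_a, flank_b, suppression=1.5):
    """Toy end-stopping: a V1 direction-selective response is divisively
    suppressed by responses at the two ends of its receptive field."""
    return center / (1.0 + suppression * (flank_a + flank_b))

def mt_response(v1_inputs, exponent=2.0):
    """Toy MT cell: expansive nonlinear summation of its V1 afferents."""
    return float(np.sum(np.asarray(v1_inputs) ** exponent))

# A short contour drives the center alone; a long contour also drives the
# flanks, so end-stopped inputs to MT emphasize the contour's terminators,
# whose motion is unambiguous.
short_bar = end_stopped_response(center=1.0, flank_a=0.0, flank_b=0.0)  # 1.0
long_bar = end_stopped_response(center=1.0, flank_a=1.0, flank_b=1.0)   # 0.25
print(short_bar, long_bar, mt_response([short_bar, long_bar]))
```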
Affiliation(s)
- James M G Tsui
- McGill University, Montreal Neurological Institute, 3801 University St., Montreal, QC H3A 2B4, Canada
8. Barthélemy FV, Fleuriet J, Masson GS. Temporal dynamics of 2D motion integration for ocular following in macaque monkeys. J Neurophysiol 2009; 103:1275-1282. [PMID: 20032230] [DOI: 10.1152/jn.01061.2009]
Abstract
Several recent studies have shown that extracting pattern motion direction is a dynamical process in which edge motion is extracted first and pattern-related information is encoded by MT neurons with a small time lag. Similar dynamics were found for human reflexive and voluntary tracking. Here, we supply an essential, but still missing, piece of information by documenting macaque ocular following responses to gratings, unikinetic plaids, and barber-poles. We found that ocular tracking was always initiated first in the grating motion direction, with ultra-short latencies (approximately 55 ms). A second component was driven only 10-15 ms later, rotating tracking toward the pattern motion direction. At the end of the open-loop period, tracking direction was aligned with the pattern motion direction (plaids) or the average of the line-ending motion directions (barber-poles). We characterized the contrast dependency of each component. Both the timing and direction of ocular following were quantitatively consistent with the dynamics of neuronal responses reported by others. Overall, we found a remarkable consistency between neuronal dynamics and monkey behavior, arguing for a direct link between the neuronal solution of the aperture problem and primate perception and action.
Affiliation(s)
- Frédéric V Barthélemy
- Team DyVA, Institut de Neurosciences Cognitives de la Méditerranée, Centre National de la Recherche Scientifique, Aix-Marseille Université, Marseille, France
9. Tavassoli A, Ringach DL. Dynamics of smooth pursuit maintenance. J Neurophysiol 2009; 102:110-118.
Abstract
Smooth pursuit eye movements allow the approximate stabilization of a moving visual target on the retina. To study the dynamics of smooth pursuit, we measured eye velocity during visual tracking of a Gabor target moving at a constant velocity plus a noisy perturbation term. We computed the optimal linear filter linking fluctuations in target velocity to evoked fluctuations in eye velocity. These filters predicted eye velocity for novel stimuli in the 0- to 15-Hz band with good accuracy, showing that pursuit maintenance is approximately linear under these conditions. The shapes of the filters indicated fast dynamics, with pure delays as short as approximately 67 ms, times-to-peak of approximately 115 ms, and effective integration times of approximately 45 ms. The gain of the system, reflected in the amplitude of the filters, was inversely proportional to the size of the velocity fluctuations and independent of the target's mean speed. A modest slowdown of the dynamics was observed as the contrast of the target decreased. Finally, the temporal filters recovered during fixation and pursuit were similar in shape, supporting the notion that they might share a common underlying circuitry. These findings show that the visual tracking of moving objects by the human eye includes a reflexive-like pathway with high contrast sensitivity and fast dynamics.
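Estimating such a filter is a standard system-identification step: regress eye velocity on lagged copies of the target-velocity perturbation. A minimal sketch with synthetic data (the tap count, ridge penalty, and synthetic kernel are illustrative assumptions, not the paper's analysis pipeline):

```python
import numpy as np

def estimate_linear_filter(stim_vel, eye_vel, n_taps=60, ridge=1e-3):
    """Ridge-regularized least-squares estimate of the kernel mapping target
    velocity fluctuations to eye velocity fluctuations (sketch)."""
    X = np.column_stack([np.roll(stim_vel, k) for k in range(n_taps)])
    X[:n_taps] = 0.0  # drop samples contaminated by the circular shift
    gram = X.T @ X + ridge * np.eye(n_taps)
    return np.linalg.solve(gram, X.T @ eye_vel)

# Synthetic check: recover a known delayed low-pass kernel from noisy "tracking".
rng = np.random.default_rng(0)
s = rng.standard_normal(5000)
true_k = np.exp(-np.arange(60) / 8.0) * (np.arange(60) >= 10)  # 10-sample delay
e = np.convolve(s, true_k)[:5000] + 0.1 * rng.standard_normal(5000)
k_hat = estimate_linear_filter(s, e)
print(np.argmax(k_hat))  # peak at the true delay of ~10 samples
```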
Affiliation(s)
- Abtine Tavassoli
- Department of Neurobiology, Jules Stein Eye Institute, David Geffen School of Medicine, University of California, Los Angeles, CA 90095, USA
10. Braun DI, Mennie N, Rasche C, Schütz AC, Hawken MJ, Gegenfurtner KR. Smooth pursuit eye movements to isoluminant targets. J Neurophysiol 2008; 100:1287-1300. [DOI: 10.1152/jn.00747.2007]
Abstract
At slow speeds, chromatic isoluminant stimuli are perceived to move much more slowly than comparable luminance stimuli. We investigated whether smooth pursuit eye movements to isoluminant stimuli show an analogous slowing. Besides pursuit speed and latency, we studied speed judgments of the same stimuli during fixation and pursuit. Stimuli were either large sine-wave gratings or small Gaussian blobs moving horizontally at speeds between 1 and 11°/s. Targets were defined by luminance contrast or color. Confirming prior studies, we found that speed judgments of isoluminant stimuli during fixation showed a substantial slowing compared with luminance stimuli. A similarly strong and significant effect of isoluminance was found for pursuit initiation: compared with luminance targets of matched contrast, latencies of pursuit initiation were delayed by 50 ms at all speeds, and eye accelerations were reduced for isoluminant targets. A small difference was found between steady-state eye velocities for luminance and isoluminant targets. For comparison, we measured latencies of saccades to luminance and isoluminant stimuli under similar conditions, but the effect of isoluminance was found only for pursuit. Parallel psychophysical experiments revealed that, unlike speed judgments of moving isoluminant stimuli made during fixation, judgments made during pursuit are veridical for the same stimuli at all speeds. Information about target speed therefore seems to be available for pursuit eye movements and for speed judgments during pursuit, but is degraded for perceptual speed judgments during fixation and for pursuit initiation.
11. Orban GA. Higher order visual processing in macaque extrastriate cortex. Physiol Rev 2008; 88:59-89.
Abstract
The extrastriate cortex of primates encompasses a substantial portion of the cerebral cortex and is devoted to the higher-order processing of visual signals and their dispatch to other parts of the brain. A first step toward understanding the function of this cortical tissue is a description of the selectivities of its various neuronal populations for higher-order aspects of the image. The selectivities present in the various extrastriate areas support many diverse representations of the scene before the subject. The known selectivities include those for pattern direction and speed gradients in the middle temporal/V5 area; for heading in the dorsal part of the medial superior temporal area; for orientation of non-luminance contours in V2 and V4; for curved boundary fragments in V4 and shape parts in the inferotemporal area (IT); and for curvature and orientation in depth from disparity in IT and the caudal intraparietal area (CIP). The most common putative mechanism for generating such emergent selectivity is a pattern of excitatory and inhibitory linear inputs from the afferent area combined with nonlinear mechanisms in both the afferent and the receiving area.
Affiliation(s)
- Guy A Orban
- Laboratorium voor Neuro- en Psychofysiologie, K. U. Leuven Medical School, Leuven, Belgium.
12. Montagnini A, Spering M, Masson GS. Predicting 2D target velocity cannot help 2D motion integration for smooth pursuit initiation. J Neurophysiol 2006; 96:3545-3550. [PMID: 16928794] [DOI: 10.1152/jn.00563.2006]
Abstract
Smooth pursuit eye movements reflect the temporal dynamics of two-dimensional (2D) visual motion integration. When tracking a single tilted line, initial pursuit direction is biased toward the one-dimensional (1D) edge motion signal, which is orthogonal to the line's orientation. Over 200 ms, tracking direction is slowly corrected to match the 2D object motion during steady-state pursuit. We now show that repetition of line orientation and/or motion direction neither eliminates the transient tracking-direction error nor changes the time course of pursuit correction. Nonetheless, multiple successive presentations of a single orientation/direction condition elicit robust anticipatory pursuit eye movements that always go in the 2D object motion direction, not the 1D edge motion direction. These results demonstrate that predictive signals about target motion cannot be used for efficient integration of ambiguous velocity signals at pursuit initiation.
Affiliation(s)
- Anna Montagnini
- Team DyVA, Institut de Neurosciences Cognitives de la Méditerranée, UMR6193 CNRS, 31 Chemin Joseph Aiguier, 13402 Marseille, France