1. Sheliga BM, FitzGibbon EJ. Manipulating the Fourier spectra of stimuli comprising a two-frame kinematogram to study early visual motion-detecting mechanisms: Perception versus short latency ocular-following responses. J Vis 2023; 23:11. PMID: 37725387; PMCID: PMC10513114; DOI: 10.1167/jov.23.10.11.
Abstract
Two-frame kinematograms have been extensively used to study motion perception in human vision. Measurements of the direction-discrimination performance limits (Dmax) have been the primary subject of such studies, whereas surprisingly little research has asked how variability in the spatial frequency content of individual frames affects motion processing. Here, we used two-frame one-dimensional vertical pink noise kinematograms, in which the images in both frames were bandpass filtered, with the central spatial frequency of the filter manipulated independently for each image. To avoid spatial aliasing, there was no actual leftward or rightward shift of the image: instead, the phases of all Fourier components of the second image were shifted by ±¼ wavelength with respect to those of the first. We recorded ocular-following responses (OFRs) and perceptual direction discrimination in human subjects. OFRs were in the direction of the Fourier components' shift and showed a smooth decline in amplitude, well fit by Gaussian functions, as the difference between the central spatial frequencies of the first and second images increased. In sharp contrast, 100% correct perceptual direction-discrimination performance was observed when the difference between the central spatial frequencies of the first and second images was small, deteriorating rapidly to chance as the difference increased further. The perceptual dependencies moved closer to those of the OFRs when subjects were allowed to grade the strength of perceived motion. Response asymmetries common to the perceptual judgments and the OFRs suggest that they rely on the same early visual processing mechanisms. The OFR data were quantitatively well described by a model that combined two factors: (1) an excitatory drive determined by a power-law sum of the stimulus Fourier components' contributions, scaled by (2) a contrast normalization mechanism. Thus, in addition to traditional studies relying on perceptual reports, the OFRs represent a valuable behavioral tool for studying early motion processing on a fine scale.
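The two-factor model in this abstract (a power-law excitatory drive divided by a contrast normalization pool) can be sketched as follows. The function name, the per-component weights, and the exact functional form are illustrative assumptions, not the fitted model from the paper:

```python
import numpy as np

def ofr_response(contrasts, weights, p=2.0, sigma=0.1):
    """Hedged sketch of a two-factor response model.

    (1) Excitatory drive: a power-law sum of the stimulus Fourier
        components' contributions, weighted by hypothetical per-component
        motion sensitivities.
    (2) Divisive contrast normalization over all components.
    """
    contrasts = np.asarray(contrasts, dtype=float)
    weights = np.asarray(weights, dtype=float)
    drive = np.sum(weights * contrasts**p)   # factor (1): excitatory drive
    norm = sigma**p + np.sum(contrasts**p)   # factor (2): normalization pool
    return drive / norm
```

A characteristic property of such models: a high-contrast component that drives no motion (weight 0) still enlarges the normalization pool and therefore suppresses the response.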
Affiliations:
- Boris M Sheliga: Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Edmond J FitzGibbon: Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
2. Contribution of the slow motion mechanism to global motion revealed by an MAE technique. Sci Rep 2021; 11:3995. PMID: 33597567; PMCID: PMC7889884; DOI: 10.1038/s41598-021-82900-2.
Abstract
Two different motion mechanisms have been identified with the motion aftereffect (MAE): (1) a slow motion mechanism, accessed by the static MAE, which is sensitive to high spatial and low temporal frequencies; and (2) a fast motion mechanism, accessed by the flicker MAE, which is sensitive to low spatial and high temporal frequencies. We examined their respective responses to global motion after adaptation to a global motion pattern constructed of multiple compound Gabor patches arranged circularly. Each compound Gabor patch contained two gratings at different spatial frequencies (0.53 and 2.13 cpd) drifting in opposite directions. The participants reported the direction and duration of the MAE for a variety of global motion patterns. We found that static MAE durations depended on the global motion pattern (e.g., longer MAE durations for patches arranged to depict rotation than for random motion; Experiment 1) and increased with global motion strength (patch number; Experiment 2). In contrast, flicker MAE durations were similar across different patterns and adaptation strengths. Further, the global integration occurred at the adaptation stage rather than at the test stage (Experiment 3). These results suggest that the slow motion mechanism, assessed by the static MAE, integrates motion signals over space, whereas the fast motion mechanism does not, at least under the conditions used.
3. Asher JM, Hibbard PB. No effect of feedback, level of processing or stimulus presentation protocol on perceptual learning when easy and difficult trials are interleaved. Vision Res 2020; 176:100-117. DOI: 10.1016/j.visres.2020.07.011.
4. Shi C, Pundlik S, Luo G. Without low spatial frequencies, high resolution vision would be detrimental to motion perception. J Vis 2020; 20:29. PMID: 32857109; PMCID: PMC7463184; DOI: 10.1167/jov.20.8.29.
Abstract
A normally sighted person can see a grating of 30 cycles per degree or higher, but the spatial frequencies needed for motion perception are much lower than that. For natural images with a wide spectrum, it is unknown how all the visible spatial frequencies contribute to motion speed perception. In this work, we studied the effect of spatial frequency content on motion speed estimation for sequences of natural and stochastic pixel images by simulating different visual conditions: normal vision, low vision (low-pass filtering), and complementary vision (high-pass filtering at the same cutoff frequencies as the corresponding low-vision conditions). Speed was computed using a biological motion energy-based computational model. In natural sequences, there was no difference in speed estimation error between normal vision and low vision conditions, but the error was significantly higher for complementary vision conditions (containing only high-frequency components) at higher speeds. In stochastic sequences, which had a flat frequency distribution, the error in the normal vision condition was significantly larger than in low vision conditions at high speeds. By contrast, no such detrimental effect on speed estimation accuracy was found for low spatial frequencies. The simulation results were consistent with a motion direction detection task performed by human observers viewing stochastic sequences. Together, these results (i) reiterate the importance of low frequencies in motion perception and (ii) indicate that high frequencies may be detrimental to speed estimation when low frequency content is weak or absent.
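The "low vision" and "complementary vision" conditions described above amount to splitting each frame into a low-pass band and its exact high-pass complement at a shared cutoff. A minimal FFT-based sketch of that split (function name and parameters are illustrative, not the paper's code):

```python
import numpy as np

def split_spatial_frequencies(frame, cutoff_cpd, deg_per_pixel):
    """Split a 2D image into low-pass and complementary high-pass bands.

    A hard (ideal) frequency cutoff is used purely for illustration;
    the study's actual filters may differ.
    """
    h, w = frame.shape
    fy = np.fft.fftfreq(h, d=deg_per_pixel)  # cycles/deg along y
    fx = np.fft.fftfreq(w, d=deg_per_pixel)  # cycles/deg along x
    radius = np.sqrt(fy[:, None]**2 + fx[None, :]**2)
    spectrum = np.fft.fft2(frame)
    low = np.real(np.fft.ifft2(spectrum * (radius <= cutoff_cpd)))
    high = np.real(np.fft.ifft2(spectrum * (radius > cutoff_cpd)))
    return low, high
```

Because the two masks partition the spectrum, the two bands sum exactly back to the original frame.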
Affiliations:
- Cong Shi: School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, China; Schepens Eye Research Institute of Mass Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Shrinivas Pundlik: Schepens Eye Research Institute of Mass Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Gang Luo: Schepens Eye Research Institute of Mass Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
5. Asher JM, Romei V, Hibbard PB. Spatial Frequency Tuning and Transfer of Perceptual Learning for Motion Coherence Reflects the Tuning Properties of Global Motion Processing. Vision (Basel) 2019; 3:44. PMID: 31735845; PMCID: PMC6802806; DOI: 10.3390/vision3030044.
Abstract
Perceptual learning is typically highly specific to the stimuli and task used during training. Recently, however, it has been shown that training on global motion can transfer to untrained tasks, reflecting the generalising properties of mechanisms at this level of processing. We investigated (i) whether feedback was required for learning in a motion coherence task, (ii) the transfer of training on a global motion coherence task across spatial frequency and (iii) the transfer of this training to a measure of contrast sensitivity. For our first experiment, two groups, with and without feedback, trained for ten days on a broadband motion coherence task. Results indicated that feedback was a requirement for robust learning. For the second experiment, training consisted of five days of direction discrimination using one of three motion coherence stimuli (where individual elements were comprised of either broadband Gaussian blobs or low- or high-frequency random-dot Gabor patches), with trial-by-trial auditory feedback. A pre- and post-training assessment was conducted for each of the three types of global motion coherence conditions and for high and low spatial frequency contrast sensitivity (both without feedback). Our training paradigm was successful at eliciting improvement in the trained tasks over the five days. Post-training assessments found evidence of transfer for the motion coherence task exclusively for the group trained on low spatial frequency elements. For the contrast sensitivity tasks, improved performance was observed for low- and high-frequency stimuli following motion coherence training with broadband stimuli, and for low-frequency stimuli following low-frequency training. Our findings are consistent with perceptual learning that depends on the global stage of motion processing in higher cortical areas, a stage that is broadly tuned for spatial frequency with a preference for low frequencies.
Affiliations:
- Jordi M. Asher (corresponding author): Department of Psychology, University of Essex, Wivenhoe Park, Colchester CO4 3SQ, UK
- Vincenzo Romei: Department of Psychology, University of Essex, Wivenhoe Park, Colchester CO4 3SQ, UK; Dipartimento di Psicologia and Centro Studi e Ricerche in Neuroscienze Cognitive, Campus di Cesena, Università di Bologna, 47521 Cesena, Italy
- Paul B. Hibbard: Department of Psychology, University of Essex, Wivenhoe Park, Colchester CO4 3SQ, UK
6. Genest W, Hammond R, Carpenter RHS. The random dot tachistogram: a novel task that elucidates the functional architecture of decision. Sci Rep 2016; 6:30787. PMID: 27470436; PMCID: PMC4965790; DOI: 10.1038/srep30787.
Abstract
Reaction times are long and variable, almost certainly because they result from a process that accumulates noisy decision signals over time, rising to a threshold. But the origin of the variability is still disputed: is it because the incoming sensory signals are themselves noisy, or does it arise within the brain? Here we use a stimulus – the random dot tachistogram – which demands spatial integration of information presented essentially instantaneously; with it, we demonstrate three things. First, that the latency distributions still show the variability characteristic of LATER (linear approach to threshold with ergodic rate), implying that there must be two integrators in series. Second, that since this variability persists despite removal of all temporal noise from the stimulus, or even of trial-to-trial spatial variation, it must come from within the nervous system. Finally, that the average rate of rise of the decision signal depends linearly on how many dots move in a given direction. Taken together, this suggests a rather simple two-stage model of the overall process. The first, detection, stage performs local temporal integration of stimuli; the local, binary outcomes are linearly summed and integrated by LATER units in the second stage, which perform the final global decision by a process of racing competition.
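The LATER mechanism invoked above can be simulated in a few lines: a decision signal rises linearly at a rate drawn afresh on each trial from a normal distribution, and a response is triggered at threshold. Parameter values and names here are illustrative, not fitted values from the paper:

```python
import numpy as np

def later_rts(mu_rate, sigma_rate, threshold=1.0, n_trials=10000, seed=0):
    """Simulate latencies from a LATER-style rise-to-threshold unit.

    Rate of rise r ~ Normal(mu_rate, sigma_rate) per trial; latency is
    the time for the signal to climb from baseline to threshold.
    """
    rng = np.random.default_rng(seed)
    rates = rng.normal(mu_rate, sigma_rate, n_trials)
    rates = rates[rates > 0]      # discard trials where the signal never rises
    return threshold / rates      # latencies
```

A signature of this model is that the reciprocal of latency is normally distributed (the "recinormal" distribution), which is how LATER fits are usually checked.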
Affiliations:
- Wilfried Genest: Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge CB2 3EG, UK
- Robert Hammond: Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge CB2 3EG, UK
- R H S Carpenter: Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge CB2 3EG, UK
7. Khuu SK, Khambiye S. The influence of shape-from-shading information on the perception of global motion. Vision Res 2012; 55:1-10. DOI: 10.1016/j.visres.2012.01.004.
8. Burr D, Thompson P. Motion psychophysics: 1985–2010. Vision Res 2011; 51:1431-1456. PMID: 21324335; DOI: 10.1016/j.visres.2011.02.008.
Affiliations:
- David Burr: Department of Psychology, University of Florence, Florence, Italy
9. Calabro FJ, Rana KD, Vaina LM. Two mechanisms for optic flow and scale change processing of looming. J Vis 2011; 11(3):5. PMID: 21385865; DOI: 10.1167/11.3.5.
Abstract
The detection of looming, the motion of objects in depth, underlies many behavioral tasks, including the perception of self-motion and time-to-collision. A number of studies have demonstrated that one of the most important cues for looming detection is optic flow, the pattern of motion across the retina. Schrater et al. have suggested that changes in spatial frequency over time, or scale changes, may also support looming detection in the absence of optic flow (P. R. Schrater, D. C. Knill, & E. P. Simoncelli, 2001). Here we used an adaptation paradigm to determine whether the perception of looming from optic flow and scale changes is mediated by single or separate mechanisms. We show first that when the adaptation and test stimuli were the same (both optic flow or both scale change), observer performance was significantly impaired compared to a dynamic (non-motion, non-scale change) null adaptation control. Second, we found no evidence of cross-cue adaptation, either from optic flow to scale change, or vice versa. Taken together, our data suggest that optic flow and scale changes are processed by separate mechanisms, providing multiple pathways for the detection of looming.
Affiliations:
- Finnegan J Calabro: Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA 02134, USA
10. Visual motion gradient sensitivity shows scale invariant spatial frequency and speed tuning properties. Vision Res 2010; 50:1475-1485. DOI: 10.1016/j.visres.2010.04.021.
11. Hancock S, McGovern DP, Peirce JW. Ameliorating the combinatorial explosion with spatial frequency-matched combinations of V1 outputs. J Vis 2010; 10(8):7. PMID: 20884582; DOI: 10.1167/10.8.7.
Abstract
Little is known about the way in which the outputs of early orientation-selective neurons are combined. One particular problem is that the number of possible combinations of these outputs greatly outweighs the number of processing units available to represent them. Here we consider two of the possible ways in which the visual system might reduce the impact of this problem. First, the visual system might ameliorate the problem by collapsing across some low-level feature coded by previous processing stages, such as spatial frequency. Second, the visual system may combine only a subset of available outputs, such as those with similar receptive field characteristics. Using plaid-selective contrast adaptation and the curvature aftereffect, we found no evidence for the former solution; both aftereffects were clearly tuned to the spatial frequency of the adaptor relative to the test probe. We did, however, find evidence for the latter with both aftereffects; when the components forming our compound stimuli were dissimilar in spatial frequency, the effects of adapting to them were substantially reduced. This has important implications for mid-level visual processing, both for the combinatorial explosion and for the selective "binding" of common features that are perceived as coming from a single visual object.
Affiliations:
- Sarah Hancock: Nottingham Visual Neuroscience, School of Psychology, University of Nottingham, Nottingham, UK
12. Maruya K, Amano K, Nishida S. Conditional spatial-frequency selective pooling of one-dimensional motion signals into global two-dimensional motion. Vision Res 2010; 50:1054-1064. PMID: 20353800; DOI: 10.1016/j.visres.2010.03.016.
Abstract
This study examined spatial-frequency effects on a motion-pooling process in which spatially distributed local one-dimensional motion signals are integrated into the perception of global two-dimensional motion. Motion pooling over two- to three-octave frequency differences was found to be nearly impossible when all Gabor elements had circular envelopes, but possible when the width of high-frequency elements was reduced, and the stimulus as a whole formed a closed contour configuration. These results are consistent with a view that motion pooling is controlled by form information, and that spatial-frequency difference is one, but not an absolute, form cue of segmentation.
Affiliations:
- Kazushi Maruya: NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, 3-1 Morinosato Wakamiya, Atsugi-shi, Kanagawa, Japan
13. Amano K, Edwards M, Badcock DR, Nishida S. Spatial-frequency tuning in the pooling of one- and two-dimensional motion signals. Vision Res 2009; 49:2862-2869. PMID: 19732787; DOI: 10.1016/j.visres.2009.08.026.
14.
Abstract
A moving object elicits responses from V1 neurons tuned to a broad range of locations, directions, and spatiotemporal frequencies. Global pooling of such signals can overcome their intrinsic ambiguity in relation to the object's direction and speed (the "aperture problem"); here we examine the role of low spatial frequencies (SF) and second-order statistics in this process. Subjects made a 2AFC fine direction-discrimination judgement of 'naturally' contoured stimuli viewed rigidly translating behind a series of small circular apertures. This configuration allowed us to manipulate the scene by randomly switching which portion of the stimulus was presented behind each aperture or by occluding certain spatial frequency bands. We report that global motion integration (a) is largely insensitive to the second-order statistics of such stimuli and (b) is rigidly broadband even in the presence of a disrupted low-SF component.
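The aperture problem mentioned above has a classic geometric resolution: each aperture constrains only the velocity component along the contour normal, and pooling several such constraints recovers the global 2D velocity. A generic intersection-of-constraints sketch (not the specific pooling model tested in the paper):

```python
import numpy as np

def pool_aperture_signals(normals, speeds):
    """Recover a global 2D velocity from 1D aperture measurements.

    Each aperture i gives the constraint normals[i] . v = speeds[i]
    (the velocity component along that contour's normal). A least-squares
    solution over all constraints is the intersection-of-constraints
    estimate of the global velocity v.
    """
    A = np.asarray(normals, dtype=float)
    s = np.asarray(speeds, dtype=float)
    v, *_ = np.linalg.lstsq(A, s, rcond=None)
    return v
```

With two or more non-parallel normals the constraints intersect at a unique point, which is why spatial pooling disambiguates what any single aperture cannot.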
Affiliations:
- David Kane: UCL Institute of Ophthalmology, University College London, London EC1V 9EL, United Kingdom
15. Edwards M. Common-fate motion processing: Interaction of the On and Off pathways. Vision Res 2009; 49:429-438. DOI: 10.1016/j.visres.2008.11.010.
16. Aaen-Stockdale C, Hess RF. The amblyopic deficit for global motion is spatial scale invariant. Vision Res 2008; 48:1965-1971. PMID: 18625265; DOI: 10.1016/j.visres.2008.06.012.
Abstract
Humans with amblyopia display anomalous performance for global motion discrimination. Attempts have been made to rule out an explanation based solely on the visibility loss in lower visual areas. However, it remains a possibility that the altered scale over which local motion is processed in V1 might lead to reduced efficiency of global motion processing in extra-striate cortex. We use stimuli composed of spatial frequency bandpass elements, equated for visibility, to show that the global motion deficit in amblyopia for both fellow and amblyopic eyes is still present once impairments in low-level processing have been factored out. This residual deficit appears to be spatial scale invariant and the relative deficit between the eyes shows a dependence on stimulus speed. We believe that this rules out an explanation of the amblyopic global motion deficit based solely on local motion input. We suggest instead that, in addition to low-level deficits, motion processing in a broadband, extra-striate, global motion mechanism is impaired in amblyopia.
Affiliations:
- Craig Aaen-Stockdale: McGill Vision Research, Department of Ophthalmology, McGill University, Royal Victoria Hospital, 687 Pine Avenue West, Montreal, Quebec, Canada
17. Bachthaler S, Weiskopf D. Animation of orthogonal texture patterns for vector field visualization. IEEE Trans Vis Comput Graph 2008; 14:741-755. PMID: 18467751; DOI: 10.1109/tvcg.2008.36.
Abstract
This paper introduces orthogonal vector field visualization on 2D manifolds: a representation by lines that are perpendicular to the input vector field. Line patterns are generated by line integral convolution (LIC). This visualization is combined with animation based on motion along the vector field. This decoupling of the line direction from the direction of animation allows us to choose the spatial frequencies along the direction of motion independently from the length scales along the LIC line patterns. Vision research indicates that local motion detectors are tuned to certain spatial frequencies of textures, and the above decoupling enables us to generate spatial frequencies optimized for motion perception. Furthermore, we introduce a combined visualization that employs orthogonal LIC patterns together with conventional, tangential streamline LIC patterns in order to benefit from the advantages of these two visualization approaches. In addition, a filtering process is described to achieve a consistent and temporally coherent animation of orthogonal vector field visualization. Different filter kernels and filter methods are compared and discussed in terms of visualization quality and speed. We present respective visualization algorithms for 2D planar vector fields and tangential vector fields on curved surfaces, and demonstrate that those algorithms lend themselves to efficient and interactive GPU implementations.
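The line integral convolution (LIC) at the heart of this paper averages a noise texture along short streamlines of a vector field; the paper's orthogonal variant simply applies the same procedure to the field rotated by 90 degrees. A minimal, assumption-laden sketch (fixed-step integration, nearest-pixel sampling, edge clipping; real implementations interpolate and filter more carefully):

```python
import numpy as np

def lic(vx, vy, noise, length=10):
    """Minimal line integral convolution over a 2D vector field.

    For each pixel, average `noise` along the local streamline, stepping
    `length` unit steps forward and backward. For the orthogonal LIC of
    the paper, pass the rotated field (vy, -vx) instead of (vx, vy).
    """
    h, w = noise.shape
    out = np.zeros_like(noise)
    for sign in (1.0, -1.0):                 # integrate both directions
        x, y = np.meshgrid(np.arange(w, dtype=float),
                           np.arange(h, dtype=float))
        for _ in range(length):
            ix = np.clip(x.astype(int), 0, w - 1)
            iy = np.clip(y.astype(int), 0, h - 1)
            out += noise[iy, ix]             # sample texture at current point
            speed = np.hypot(vx[iy, ix], vy[iy, ix]) + 1e-9
            x += sign * vx[iy, ix] / speed   # unit step along the field
            y += sign * vy[iy, ix] / speed
    return out / (2 * length)
```

Averaging along streamlines correlates the texture in the flow direction while leaving it uncorrelated across it, which is what makes the field direction visible.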
Affiliations:
- Sven Bachthaler: VISUS, Visualization Research Center, Universität Stuttgart, Nobelstrasse, Stuttgart, Germany
18. Hess RF, Hutchinson CV, Ledgeway T, Mansouri B. Binocular influences on global motion processing in the human visual system. Vision Res 2007; 47:1682-1692. PMID: 17442362; DOI: 10.1016/j.visres.2007.02.005.
Abstract
This study investigates four key issues concerning the binocular properties of the mechanisms that encode global motion in human vision: (1) the extent of any binocular advantage; (2) the possible site of this binocular summation; (3) whether or not purely monocular inputs exist for global motion perception; and (4) the extent of any dichoptic interaction. Global motion coherence thresholds were measured using random-dot kinematograms as a function of the dot modulation depth (contrast) for translational, radial and circular flow fields. We found a marked binocular advantage of approximately 1.7, comparable for all three types of motion, and the performance benefit was due to a contrast rather than a global motion enhancement. In addition, we found no evidence for any purely monocular influences on global motion detection. The results suggest that the site of binocular combination for global motion perception lies prior to the extra-striate cortex where motion integration occurs. All cells involved are binocular and exhibit dichoptic interactions, suggesting the existence of a neural mechanism that involves more than just simple summation of the two monocular inputs.
Affiliations:
- R F Hess: McGill Vision Research, Department of Ophthalmology, McGill University, Montreal, QC, Canada H3A 1A1
19. Bex PJ, Falkenberg HK. Resolution of complex motion detectors in the central and peripheral visual field. J Opt Soc Am A Opt Image Sci Vis 2006; 23:1598-1607. PMID: 16783422; DOI: 10.1364/josaa.23.001598.
Abstract
We examine how local direction signals are combined to compute the focus of radial motion (FRM) in random dot patterns and examine how this process changes across the visual field. Equivalent noise analysis showed that a loss in FRM accuracy was largely attributable to an increase in local motion detector noise with little or no change in efficiency across the visual field. The minimum separation for discriminating the foci of two overlapping optic flow patterns increased in the periphery faster than predicted from the resolution for a single FRM. This behavior requires that observers average numerous local velocities to estimate the FRM, which enables resistance to internal and external noise and endows the system with the property of position invariance. However, such pooling limits the precision with which multiple looming objects can be discriminated, especially in the peripheral visual field.
Affiliations:
- Peter J Bex: Institute of Ophthalmology, University College London, London EC1V 9EL, UK
20. Rainville SJM, Wilson HR. Global shape coding for motion-defined radial-frequency contours. Vision Res 2005; 45:3189-3201. PMID: 16099014; DOI: 10.1016/j.visres.2005.06.033.
Abstract
The visual system is highly skilled at recovering the shape of complex objects defined exclusively by motion cues. But while the low-level and high-level mechanisms involved in shape-from-motion have been studied extensively, intermediate computational stages remain poorly understood. In the present study, we used motion-defined radial-frequency contours (motion RFs) to probe intermediate stages involved in the computation of motion-defined shape. Motion RFs consisted of a virtual circle of Gabor elements whose carriers drifted at speeds determined by a sinusoidal function of polar angle. Motion RFs elicited vivid percepts of shape, and observers could detect and discriminate radial frequencies up to approximately five cycles. Randomizing Gabor speeds over a small contour segment impaired detection and discrimination performance significantly more than predicted by probability summation. Threshold comparisons between spatial-RF and motion-RF contours ruled out the possibility that motion-induced shifts in perceived position (i.e., the DeValois effect) determine shape perception in motion RFs. Together, the results indicate that the shape of motion RFs is processed by synergistic mechanisms that perform a global analysis of motion cues over space. These results are integrated with data on perceptual interactions between motion RFs and spatial RFs and are discussed in terms of cue-specific and cue-invariant representations of object shape in human vision.
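A radial-frequency contour of the kind underlying these stimuli is simply a circle whose radius is sinusoidally modulated with polar angle. A short generator sketch (parameter names and defaults are illustrative; the paper's stimuli modulated drift speed rather than position, with the same sinusoidal profile):

```python
import numpy as np

def rf_contour(n_points=360, r0=1.0, amp=0.1, freq=5, phase=0.0):
    """Generate (x, y) points of a radial-frequency (RF) contour.

    radius(theta) = r0 * (1 + amp * sin(freq * theta + phase)),
    where `freq` is the number of modulation cycles around the circle.
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    r = r0 * (1.0 + amp * np.sin(freq * theta + phase))
    return r * np.cos(theta), r * np.sin(theta)
```

With freq = 5 this traces a five-lobed, flower-like shape; amp = 0 recovers the unmodulated circle.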
Affiliations:
- Stéphane J M Rainville: Center for Visual Neuroscience, Department of Psychology, North Dakota State University, Fargo, ND 58105-5075, USA
21. Dakin SC, Mareschal I, Bex PJ. Local and global limitations on direction integration assessed using equivalent noise analysis. Vision Res 2005; 45:3027-3049. PMID: 16171844; DOI: 10.1016/j.visres.2005.07.037.
Abstract
We used an equivalent noise (EN) paradigm to examine how the human visual system pools local estimates of direction across space in order to encode global direction. Observers estimated the mean direction (clockwise or counter-clockwise of vertical) of a field of moving band-pass elements whose directions were drawn from a wrapped normal distribution. By measuring discrimination thresholds for mean direction as a function of directional variance, we were able to infer both the precision of observers' representation of each element's direction (i.e., local noise) as well as how many of these estimates they were averaging (i.e., global pooling). We estimated EN for various numbers of moving elements occupying regions of various sizes. We report that both local and global limits on direction integration are determined by the number of elements present in the display (irrespective of their density or the size of region they occupy), and we go on to show how this dependence can be understood in terms of neural noise. Specifically, we use Monte Carlo simulations to show that a maximum-likelihood operator, operating on pooled directional signals from visual cortex corrupted by Poisson noise, accounts for psychophysical data across all conditions tested, as well as motion coherence thresholds (collected under similar experimental conditions). A population vector-averaging scheme (essentially a special case of ML estimation) produces similar predictions but out-performs subjects at high levels of directional variability and fails to predict motion coherence thresholds.
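The equivalent noise (EN) logic described above rests on a simple variance decomposition: the observer's threshold for the mean direction reflects external (stimulus) direction variance plus internal (local) noise, divided by the number of samples pooled. A sketch of that standard prediction (symbol names are generic EN notation, not the paper's exact parameterisation):

```python
import numpy as np

def en_threshold(sigma_ext, sigma_int, n_pooled):
    """Equivalent-noise prediction for a mean-direction threshold.

    Variance of the pooled mean = (external variance + internal noise
    variance) / number of local estimates averaged.
    """
    return np.sqrt((sigma_ext**2 + sigma_int**2) / n_pooled)
```

Fitting this curve to thresholds measured across levels of sigma_ext yields the two quantities the paradigm is designed to recover: the flat, low-variance limb estimates the internal noise sigma_int, and the rising high-variance limb estimates the number of samples pooled.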
Affiliation(s)
- Steven C Dakin
- Department of Visual Science, Institute of Ophthalmology, University College London, 11-43 Bath Street, London EC1V 9EL, UK.
22
Bex PJ, Dakin SC. Spatial interference among moving targets. Vision Res 2005; 45:1385-98. [PMID: 15743609 DOI: 10.1016/j.visres.2004.12.001] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2004] [Revised: 11/18/2004] [Accepted: 12/08/2004] [Indexed: 11/16/2022]
Abstract
Peripheral vision for static form is limited both by reduced spatial acuity and by interference among adjacent features ('crowding'). However, the visibility of acuity-corrected image motion is relatively constant across the visual field. We measured whether spatial interference among nearby moving elements is similarly invariant of retinal eccentricity and assessed if motion integration could account for any observed sensitivity loss. We report that sensitivity to the direction of motion of a central target-highly visible in isolation-was strongly impaired by four drifting flanking elements. The extent of spatial interference increased with eccentricity. Random-direction flanks and flanks whose directions formed global patterns of rotation or expansion were more disruptive than flanks forming global patterns of translation, regardless of the relative direction of the target element. Spatial interference was low-pass tuned for spatial frequency and broadly tuned for temporal frequency. We show that these results challenge the generality of models of spatial interference that are based on retinal image quality, masking, confusions between target and flanks, attentional resolution limits or (simple) "averaging" of element parameters. Instead, the results suggest that spatial interference is a consequence of the integration of meaningful image structure within large receptive fields. The underlying connectivity of this integration favours low spatial frequency structure but is broadly tuned for speed.
Affiliation(s)
- Peter J Bex
- Institute of Ophthalmology, University College London, 11-43 Bath Street, London EC1V 9EL, UK.
23
Rainville SJM, Wilson HR. The influence of motion-defined form on the perception of spatially-defined form. Vision Res 2004; 44:1065-77. [PMID: 15050812 DOI: 10.1016/j.visres.2004.01.003] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2003] [Revised: 11/28/2003] [Indexed: 11/22/2022]
Abstract
It is well established that the visual system is sensitive to the global structure, or "form," of objects defined exclusively by spatial or motion cues, but it remains unclear how form perception combines spatial and motion cues if these are presented concurrently. In the present study, we introduce a novel class of stimuli where spatial-form and motion-form can be superimposed and manipulated independently. In both the spatial and motion domains, global structure consisted of radial-frequency (RF) contours defined by a virtual circle of Gabor elements whose positions and/or drift speeds were sinusoidally modulated at a specified frequency of polar angle. The first two experiments revealed that observers encode the global structure of spatial-RF and motion-RF contours presented in isolation. In a third experiment, observers detected a spatial-RF modulation superimposed on a motion-RF pedestal of identical radial frequency: results showed little facilitation at low pedestal amplitudes but significant masking at higher pedestal amplitudes, especially if the RF modulations of test and pedestal were in anti-phase. Additional experiments demonstrated that masking of the spatial-RF test is abolished if the global structure of the motion-RF pedestal is altered or destroyed while local motion cues are preserved. We argue these results cannot be explained by local neural interactions between spatial and motion cues and propose instead that data reflect higher-level interactions between separate visual pathways encoding spatial-form and motion-form.
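The radial-frequency contour geometry described above can be sketched in a few lines: element positions on a virtual circle whose radius is sinusoidally modulated with polar angle. The function and parameter names below are illustrative assumptions, not the authors' stimulus code.

```python
# Sketch of a radial-frequency (RF) contour: n elements placed on a circle
# whose radius is modulated as r(theta) = r0 * (1 + A * sin(rf*theta + phase)).
import numpy as np

def rf_contour(n_elements=8, radius=1.0, rf=3, amplitude=0.1, phase=0.0):
    """Return (x, y) positions of elements on an RF-modulated virtual circle."""
    theta = np.linspace(0, 2 * np.pi, n_elements, endpoint=False)
    r = radius * (1.0 + amplitude * np.sin(rf * theta + phase))
    return r * np.cos(theta), r * np.sin(theta)

x, y = rf_contour()                    # modulated contour
x0, y0 = rf_contour(amplitude=0.0)     # zero amplitude recovers a circle
```

In a motion-RF analogue, the same sinusoidal modulation would be applied to the drift speeds of the Gabor elements rather than to their radial positions.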
Affiliation(s)
- Stéphane J M Rainville
- Center for Vision Research, York University, 4700 Keele Street, North York, Ont., Canada M1J 1P3.
24
Abstract
We consider how local motion signals are combined to represent the movements of spatially extensive objects. A series of band-pass target dots, whose collective motion defined a moving contour, was positioned within a field of randomly moving noise dots. The visibility of the contours did not depend on the direction of movement relative to local contour orientation unless the contour was constrained to pass through fixation, suggesting that a previously reported advantage for collinear motion trajectories depends on the probability of detecting any of the target elements rather than the integrated contour. Contour visibility was invariant of the spatial frequency of the elements, but it did depend on the speed, number and spacing of elements defining it, as well as the angle and spatial frequency difference between adjacent elements. Local averaging of directional signals is not sufficient to explain these results. The visibility of these moving contours identifies narrow-band grouping processes that are sensitive to the shape defined by the directions of the elements forming the contour.
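The stimulus logic above, a chain of target elements whose shared motion defines a contour, embedded among randomly moving noise elements, can be sketched as follows. All element counts, speeds, and positions are illustrative assumptions.

```python
# Sketch of a moving-contour stimulus: coherently moving target elements
# spaced along a contour, plus noise elements with random directions.
import numpy as np

rng = np.random.default_rng(0)
n_targets, n_noise, speed = 6, 50, 0.02

# Target elements spaced along a horizontal contour, drifting upward together.
targets = np.stack([np.linspace(-0.5, 0.5, n_targets),
                    np.zeros(n_targets)], axis=1)
contour_dir = np.deg2rad(90.0)
target_vel = speed * np.array([np.cos(contour_dir), np.sin(contour_dir)])

# Noise elements at random positions with independent random directions.
noise = rng.uniform(-1, 1, size=(n_noise, 2))
noise_dirs = rng.uniform(0, 2 * np.pi, n_noise)
noise_vel = speed * np.stack([np.cos(noise_dirs), np.sin(noise_dirs)], axis=1)

# One frame update: targets move coherently, noise moves incoherently.
targets_next = targets + target_vel
noise_next = noise + noise_vel
```

Collinear versus orthogonal trajectories would be compared by rotating `contour_dir` relative to the contour's orientation.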
Affiliation(s)
- Peter J Bex
- Institute of Ophthalmology, 11-43 Bath Street, EC1V 9EL, London, UK.
25
Bex PJ, Dakin SC. Motion detection and the coincidence of structure at high and low spatial frequencies. Vision Res 2003; 43:371-83. [PMID: 12535994 DOI: 10.1016/s0042-6989(02)00497-2] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Abstract
We used filtered random dot kinematograms and natural images to examine how motion detection depends on the relative locations of structures defined at low and high spatial frequencies. The upper displacement limit of motion (D(max)), the lower displacement limit (D(min)) and motion coherence thresholds were unaffected by the degree of spatial coincidence between high and low spatial frequency structures, i.e., whether they were consistent or inconsistent with a single feature. However, motion detection was possible between band-pass filtered random dot patterns whose peak frequencies were separated by up to 4 octaves. The first result implicates spatial frequency selective motion detectors that operate independently. The second result implicates a motion system that can integrate the displacements of edges defined by widely separated spatial frequencies. Both are required to account for the two results, and they appear to operate under very similar conditions.
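A band-pass filtered random-dot frame pair of the kind used above can be sketched as follows. This is a minimal illustration under stated assumptions (isotropic log-Gaussian annular filter, rigid pixel shift for the second frame), not the authors' stimulus code.

```python
# Sketch of one frame pair of a band-pass filtered kinematogram: filter white
# noise with a log-Gaussian annulus in the Fourier domain, then displace the
# filtered image to make the second frame.
import numpy as np

def bandpass_noise(size=128, peak_f=8.0, octave_bw=1.0, seed=0):
    """Band-pass noise image; peak_f is in cycles per image."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((size, size))
    fy = np.fft.fftfreq(size)[:, None] * size
    fx = np.fft.fftfreq(size)[None, :] * size
    f = np.hypot(fx, fy)
    f[0, 0] = 1e-9                                   # avoid log(0) at DC
    sigma = octave_bw / 2.0                          # half-bandwidth in octaves
    filt = np.exp(-0.5 * (np.log2(f / peak_f) / sigma) ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(noise) * filt))

frame1 = bandpass_noise()
frame2 = np.roll(frame1, shift=4, axis=1)            # rigid 4-pixel displacement
```

Separating the peak frequencies of the two frames, rather than sharing one filtered image, would correspond to the cross-frequency condition tested in the paper.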
Affiliation(s)
- Peter J Bex
- Institute of Ophthalmology, 11-43 Bath Street, EC1V 9EL, London, UK.