1
Sheliga BM, FitzGibbon EJ. Manipulating the Fourier spectra of stimuli comprising a two-frame kinematogram to study early visual motion-detecting mechanisms: Perception versus short latency ocular-following responses. J Vis 2023; 23:11. [PMID: 37725387] [PMCID: PMC10513114] [DOI: 10.1167/jov.23.10.11]
Abstract
Two-frame kinematograms have been extensively used to study motion perception in human vision. Measurements of the direction-discrimination performance limits (Dmax) have been the primary subject of such studies, whereas surprisingly little research has asked how the variability in the spatial frequency content of individual frames affects motion processing. Here, we used two-frame one-dimensional vertical pink noise kinematograms, in which images in both frames were bandpass filtered, with the central spatial frequency of the filter manipulated independently for each image. To avoid spatial aliasing, there was no actual leftward-rightward shift of the image: instead, the phases of all Fourier components of the second image were shifted by ±¼ wavelength with respect to those of the first. We recorded ocular-following responses (OFRs) and perceptual direction discrimination in human subjects. OFRs were in the direction of the Fourier components' shift and showed a smooth decline in amplitude, well fit by Gaussian functions, as the difference between the central spatial frequencies of the first and second images increased. In sharp contrast, 100% correct perceptual direction-discrimination performance was observed when the difference between the central spatial frequencies of the first and second images was small, deteriorating rapidly to chance as the difference increased further. The perceptual dependencies moved closer to those of the OFRs when subjects were allowed to grade the strength of perceived motion. Response asymmetries common to the perceptual judgments and the OFRs suggest that both rely on the same early visual processing mechanisms. The OFR data were quantitatively well described by a model that combined two factors: (1) an excitatory drive determined by a power law sum of stimulus Fourier components' contributions, scaled by (2) a contrast normalization mechanism. Thus, in addition to traditional studies relying on perceptual reports, the OFRs represent a valuable behavioral tool for studying early motion processing on a fine scale.
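The two-factor model described in the abstract, an excitatory drive given by a power-law sum of Fourier-component contributions, divided by a contrast-normalization term, can be sketched roughly as follows. The functional form and all parameter values here are illustrative assumptions, not the fitted model from the paper:

```python
import numpy as np

def ofr_model(amplitudes, p=2.0, q=2.0, sigma=0.05, gain=1.0):
    """Hedged sketch of a two-factor OFR model.

    amplitudes : array of Fourier-component contrast amplitudes.
    p, q, sigma, gain : free parameters (hypothetical values; the paper
    fits its own parameters, which are not reproduced here).
    """
    # (1) excitatory drive: power-law sum of the components' contributions
    drive = np.sum(amplitudes ** p)
    # (2) divisive contrast normalization scales the drive
    norm = sigma ** q + np.sum(amplitudes ** q)
    return gain * drive / norm
```

With p = q the sketch reduces to a standard saturating (Naka-Rushton-like) contrast-response form: the response grows with contrast but saturates as the normalization pool grows.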
Affiliation(s)
- Boris M Sheliga
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Edmond J FitzGibbon
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
2
Sheliga BM, Quaia C, FitzGibbon EJ, Cumming BG. Weighted summation and contrast normalization account for short-latency disparity vergence responses to white noise stimuli in humans. J Vis 2022; 22:17. [DOI: 10.1167/jov.22.12.17]
Affiliation(s)
- Boris M. Sheliga
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Christian Quaia
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Edmond J. FitzGibbon
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Bruce G. Cumming
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
3
Ocular-following responses in school-age children. PLoS One 2022; 17:e0277443. [DOI: 10.1371/journal.pone.0277443]
Abstract
Ocular following eye movements have provided insights into how the visual system of humans and monkeys processes motion. Recently, it has been shown that they also reliably reveal stereoanomalies and thus might have clinical applications. Their translation from research to clinical settings has, however, been hindered by their small size, which makes them difficult to record, and by a lack of data about their properties in sizable populations. Notably, they have so far only been recorded in adults. We recorded ocular following responses (OFRs), defined as the change in eye position in the 80–160 ms time window following the motion onset of a large textured stimulus, in 14 school-age children (6 to 13 years old, 9 males and 5 females), under recording conditions that closely mimic a clinical setting. The OFRs were acquired non-invasively by a custom-developed high-resolution video-oculography system, described in this study, which allowed us to detect OFRs in all children in short recording sessions. Across subjects, we observed a large variability in the magnitude of the movements (by a factor of 4); OFR magnitude was, however, not correlated with age. A power analysis indicates that even considerably smaller movements could be detected. We conclude that the ocular following system is well developed by age six, and OFRs can be recorded non-invasively in young children in a clinical setting.
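The OFR measure defined above, the change in eye position over the 80-160 ms window after motion onset, can be sketched as follows; the function name and the choice of linear interpolation at the window edges are illustrative assumptions:

```python
import numpy as np

def ofr_magnitude(eye_pos, t_ms, onset_ms=0.0, win=(80.0, 160.0)):
    """Sketch of the OFR measure: change in eye position over the
    80-160 ms window after stimulus motion onset.

    eye_pos : eye-position samples (deg); t_ms : sample times (ms).
    """
    start, end = onset_ms + win[0], onset_ms + win[1]
    # linearly interpolate position at the window edges, then difference
    p0 = np.interp(start, t_ms, eye_pos)
    p1 = np.interp(end, t_ms, eye_pos)
    return p1 - p0
```

A positional difference over a fixed open-loop window like this captures the reflexive response before voluntary tracking can intervene, which is why such measures are used for short-latency eye movements.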
4
Speed Estimation for Visual Tracking Emerges Dynamically from Nonlinear Frequency Interactions. eNeuro 2022; 9:ENEURO.0511-21.2022. [PMID: 35470228] [PMCID: PMC9113919] [DOI: 10.1523/eneuro.0511-21.2022]
Abstract
Sensing the movement of fast objects within our visual environments is essential for controlling actions. It requires online estimation of motion direction and speed. We probed human speed representation using ocular tracking of stimuli with different statistics. First, we compared ocular responses elicited by single drifting gratings (DGs) with a given set of spatiotemporal frequencies to those elicited by broadband motion clouds (MCs) of matched mean frequencies. The motion-energy distributions of gratings and clouds are, respectively, point-like and elliptical, with the ellipses oriented along the constant-speed axis. Sampling frequency space, we found that MCs elicited stronger, less variable, and speed-tuned responses, whereas DGs yielded weaker and more frequency-tuned responses. Second, we measured responses to patterns made of two or three components covering a range of orientations within Fourier space. Early tracking initiation of the patterns was best predicted by a linear combination of components before nonlinear interactions emerged to shape later dynamics. Inputs are integrated supralinearly along an iso-velocity line and sublinearly away from it. A dynamical probabilistic model characterizes these interactions as excitatory pooling along the iso-velocity line and inhibition along the orthogonal “scale” axis. Such crossed patterns of interaction would appropriately integrate or segment moving objects. This study supports the novel idea that speed estimation is better framed as a dynamic channel interaction organized along speed and scale axes.
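The crossed interaction pattern described above can be caricatured with a Minkowski-style pooling rule, where an exponent below 1 yields supra-additive (supralinear) combination and an exponent above 1 yields sub-additive combination. The exponents and the binary same-speed flag are illustrative assumptions, not the paper's dynamical probabilistic model:

```python
def pooled_response(component_responses, same_speed, k_supra=0.7, k_sub=1.5):
    """Sketch of crossed frequency interactions: component drives are
    pooled supralinearly when they fall on a common iso-velocity line and
    sublinearly otherwise. Exponents are illustrative assumptions.
    """
    k = k_supra if same_speed else k_sub
    # Minkowski pooling: for equal inputs r, the pooled value is
    # r * n**(1/k), which exceeds the linear sum when k < 1 (supralinear)
    # and falls short of it when k > 1 (sublinear).
    return sum(r ** k for r in component_responses) ** (1.0 / k)
```

For two equal unit inputs, the pooled response exceeds the linear sum (2.0) when the components share a speed and falls below it otherwise, reproducing the qualitative integrate-versus-segment distinction.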
5
Sheliga BM, Quaia C, FitzGibbon EJ, Cumming BG. Short-latency ocular following responses to motion stimuli are strongly affected by temporal modulations of the visual content during the initial fixation period. J Vis 2021; 21:8. [PMID: 33970195] [PMCID: PMC8114009] [DOI: 10.1167/jov.21.5.8]
Abstract
Neuronal and psychophysical responses to a visual stimulus are known to depend on the preceding history of visual stimulation, but the effect of stimulation history on reflexive eye movements has received less attention. Here, we quantify these effects using short-latency ocular following responses (OFRs), a valuable tool for studying early motion processing. We recorded, in human subjects, the horizontal OFRs induced by drifting vertical 1D pink noise. The stimulus was preceded by 600 to 1000 ms of maintained fixation (on a visible cross), and we explored the effect of different stimuli (“fixation patterns”) presented during the fixation period. We found that any temporal modulation present during the fixation period reduced the magnitude of the subsequent OFRs. Even changes in the overall luminance during the fixation period induced significant suppression. The magnitude of the effect was a function of both spatial and temporal structure of the fixation pattern. Suppression that was selective for both relative orientation and relative spatial frequency accounted for a considerable fraction of total suppression. Finally, changes in stimulus temporal structure alone (i.e. “flicker” versus “transparent motion”) led to changes in the spatial frequency tuning of suppression. In the time domain, the suppression developed quickly: 100 ms of temporal modulation in the fixation pattern produced up to 80% of maximal suppression. Recovery from suppression was instead more gradual, taking up to several seconds. By presenting transparent motion during the fixation period, with opposite motion signals having different spatial frequency content, we also discovered a direction-selective component of suppression, which depended on both the frequency and the direction of the moving stimulus.
Affiliation(s)
- Boris M Sheliga
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Christian Quaia
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Edmond J FitzGibbon
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Bruce G Cumming
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
6
Sheliga BM, Quaia C, FitzGibbon EJ, Cumming BG. Short-latency ocular-following responses: Weighted nonlinear summation predicts the outcome of a competition between two sine wave gratings moving in opposite directions. J Vis 2020; 20:1. [PMID: 31995136] [PMCID: PMC7239641] [DOI: 10.1167/jov.20.1.1]
Abstract
We recorded horizontal ocular-following responses to pairs of superimposed vertical sine wave gratings moving in opposite directions in human subjects. This configuration elicits a nonlinear interaction: when the relative contrast of the gratings is changed, the response transitions abruptly between the responses elicited by either grating alone. We explore this interaction in pairs of gratings that differ in spatial and temporal frequency and show that all cases can be described as a weighted sum of the responses to each grating presented alone, where the weights are a nonlinear function of stimulus contrast: a nonlinear weighted summation model. The weights depended on the spatial and temporal frequency of the component grating. In many cases the dominant component was not the one that produced the strongest response when presented alone, implying that the neuronal circuits assigning weights precede the stages at which motor responses to visual motion are generated. When the stimulus area was reduced, the relationship between spatial frequency and weight shifted to higher frequencies. This finding may reflect a contribution from surround suppression. The nonlinear interaction is strongest when the two components have similar spatial frequencies, suggesting that the nonlinearity may reflect interactions within single spatial frequency channels. This framework can be extended to stimuli composed of more than two components: our model was able to predict the responses to stimuli composed of three gratings. That this relatively simple model successfully captures the ocular-following responses over a wide range of spatial/temporal frequency and contrast parameters suggests that these interactions reflect a simple mechanism.
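The weighted nonlinear summation idea described above can be sketched as a contrast-weighted average of the single-grating responses, with weights growing as a steep power of each grating's contrast. The power-law form and the exponent are illustrative assumptions, not the paper's fitted weight function:

```python
def weighted_sum_response(responses, contrasts, exponent=4.0):
    """Sketch of weighted nonlinear summation: the response to a grating
    pair is a weighted average of the responses to each grating alone,
    with weights that are a steep (here power-law, an illustrative
    assumption) function of each grating's contrast.
    """
    weights = [c ** exponent for c in contrasts]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * r for w, r in zip(weights, responses)) / total
```

A steep weight function produces the abrupt transition the abstract describes: a modest contrast imbalance between the two gratings makes the higher-contrast component almost completely dominate the combined response.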
7
Suppression and Contrast Normalization in Motion Processing. J Neurosci 2017; 37:11051-11066. [PMID: 29018158] [DOI: 10.1523/jneurosci.1572-17.2017]
Abstract
Sensory neurons are activated by a range of stimuli to which they are said to be tuned. Usually, they are also suppressed by another set of stimuli that have little effect when presented in isolation. The interactions between preferred and suppressive stimuli are often quite complex and vary across neurons, even within a single area, making it difficult to infer their collective effect on behavioral responses mediated by activity across populations of neurons. Here, we investigated this issue by measuring, in human subjects (three males), the suppressive effect of static masks on the ocular following responses induced by moving stimuli. We found a wide range of effects, which depend in a nonlinear and nonseparable manner on the spatial frequency, contrast, and spatial location of both stimulus and mask. Under some conditions, the presence of the mask can be seen as scaling the contrast of the driving stimulus. Under other conditions, the effect is more complex, involving also a direct scaling of the behavioral response. All of this complexity at the behavioral level can be captured by a simple model in which stimulus and mask interact nonlinearly at two stages, one monocular and one binocular. The nature of the interactions is compatible with those observed at the level of single neurons in primates, usually broadly described as divisive normalization, without having to invoke any scaling mechanism.
SIGNIFICANCE STATEMENT: The response of sensory neurons to their preferred stimulus is often modulated by stimuli that are not effective when presented alone. Individual neurons can exhibit multiple modulatory effects, with considerable variability across neurons even in a single area. Such diversity has made it difficult to infer the impact of these modulatory mechanisms on behavioral responses. Here, we report the effects of a stationary mask on the reflexive eye movements induced by a moving stimulus. A model with two stages, each incorporating a divisive modulatory mechanism, reproduces our experimental results and suggests that the qualitative variability of masking effects in cortical neurons might arise from differences in the extent to which such effects are inherited from earlier stages.
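The two-stage scheme described above, with the mask acting divisively once at a monocular stage and again at a binocular stage, can be sketched as follows; the function, parameter names, and values are illustrative assumptions, not the paper's fitted model:

```python
def two_stage_response(stim_drive, mask_drive_mon, mask_drive_bin,
                       sigma1=1.0, sigma2=1.0):
    """Sketch of a two-stage divisive-interaction scheme: the mask
    contributes to a divisive (normalization-like) term at a monocular
    stage and again at a binocular stage.
    """
    # stage 1 (monocular): mask divisively suppresses the stimulus drive
    s1 = stim_drive / (sigma1 + mask_drive_mon)
    # stage 2 (binocular): a second divisive term acts on the stage-1 output
    s2 = s1 / (sigma2 + mask_drive_bin)
    return s2
```

Because the two divisive terms cascade, the same mask can look like pure contrast scaling (when it acts mainly at one stage) or like a more complex response scaling (when both stages contribute), matching the range of behaviors the abstract reports.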
8
Sheliga BM, Quaia C, FitzGibbon EJ, Cumming BG. Human short-latency ocular vergence responses produced by interocular velocity differences. J Vis 2016; 16:11. [PMID: 27548089] [PMCID: PMC5015998] [DOI: 10.1167/16.10.11]
Abstract
We studied human short-latency vergence eye movements to a novel stimulus that produces interocular velocity differences without a changing disparity signal. Sinusoidal luminance gratings moved in opposite directions (left vs. right; up vs. down) in the two eyes. The grating seen by each eye underwent ¼-wavelength shifts with each image update. This arrangement eliminated changing disparity cues, since the phase difference between the eyes alternated between 0° and 180°. We nevertheless observed robust short-latency vergence responses (VRs), whose sign was consistent with the interocular velocity differences (IOVDs), indicating that the IOVD cue in isolation can evoke short-latency VRs. The IOVD cue was effective only when the images seen by the two eyes overlapped in space. We observed equally robust VRs for opposite horizontal motions (left in one eye, right in the other) and opposite vertical motions (up in one eye, down in the other). Whereas the former are naturally generated by objects moving in depth, the latter are not part of our normal experience. To our knowledge, this is the first demonstration of a behavioral consequence of vertical IOVD. This may reflect the fact that some neurons in area MT are sensitive to these motion signals (Czuba, Huk, Cormack, & Kohn, 2014). VRs were the strongest for spatial frequencies in the range of 0.35-1 c/°, much higher than the optimal spatial frequencies for evoking ocular-following responses observed during frontoparallel motion. This suggests that the two motion signals are detected by different neuronal populations. We also produced IOVD using moving uncorrelated one-dimensional white-noise stimuli. In this case the most effective stimuli have low speed, as predicted if the drive originates in neurons tuned to high spatial frequencies (Sheliga, Quaia, FitzGibbon, & Cumming, 2016).