1. Sheliga BM, FitzGibbon EJ. Weighted power summation and contrast normalization mechanisms account for short-latency eye movements to motion and disparity of sine-wave gratings and broadband visual stimuli in humans. J Vis 2024; 24:14. PMID: 39186301; PMCID: PMC11363211; DOI: 10.1167/jov.24.8.14.
Abstract
In this paper, we show that the model we proposed earlier to account for disparity vergence eye movements (disparity vergence responses, or DVRs) in response to horizontal and vertical disparity steps of white-noise visual stimuli also provides an excellent description of the short-latency ocular following responses (OFRs) to broadband stimuli in the visual motion domain. In addition, we reanalyzed the data from several earlier studies that used sine-wave gratings (single or a combination of two or three gratings) and white-noise stimuli, and applied the model to them. The model provides a very good account of all of these data. It postulates that short-latency eye movements (OFRs and DVRs) can be accounted for by the operation of two factors: an excitatory drive, determined by a weighted sum of the contributions of the stimulus Fourier components, and a global contrast normalization mechanism that scales this drive. The output of these two factors is then nonlinearly scaled by the total contrast of the stimulus. Despite the different roles of disparity (horizontal and vertical) and motion signals in visual scene analysis, the earliest processing stages for these different signals appear to be very similar.
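The two-factor model summarized above lends itself to a compact numerical sketch. Everything below is illustrative: the function name, the weights, the exponent `p`, and the normalization constant `sigma` are placeholders, not the paper's fitted parameters.

```python
import numpy as np

def short_latency_response(contrasts, weights, p=2.0, sigma=0.05):
    """Hedged sketch of the two-factor account: a weighted power-law sum of
    Fourier-component contributions (excitatory drive), divided by a global
    contrast normalization term. Parameter values are illustrative only."""
    contrasts = np.asarray(contrasts, dtype=float)
    weights = np.asarray(weights, dtype=float)
    drive = np.sum(weights * contrasts**p)          # excitatory drive
    total_contrast = np.sqrt(np.sum(contrasts**2))  # global stimulus contrast
    return drive / (sigma + total_contrast**p)      # divisive normalization
```

With this form the response grows with contrast at low contrasts and saturates at high contrasts, the qualitative signature of contrast normalization.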
Affiliation(s)
- Boris M Sheliga, Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Edmond J FitzGibbon, Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
2. Sheliga BM, FitzGibbon EJ. Manipulating the Fourier spectra of stimuli comprising a two-frame kinematogram to study early visual motion-detecting mechanisms: Perception versus short latency ocular-following responses. J Vis 2023; 23:11. PMID: 37725387; PMCID: PMC10513114; DOI: 10.1167/jov.23.10.11.
Abstract
Two-frame kinematograms have been extensively used to study motion perception in human vision. Measurements of the direction-discrimination performance limits (Dmax) have been the primary subject of such studies, whereas surprisingly little research has asked how variability in the spatial frequency content of the individual frames affects motion processing. Here, we used two-frame one-dimensional vertical pink-noise kinematograms in which the images in both frames were bandpass filtered, with the central spatial frequency of the filter manipulated independently for each image. To avoid spatial aliasing, there was no actual leftward or rightward shift of the image: instead, the phases of all Fourier components of the second image were shifted by ±¼ wavelength with respect to those of the first. We recorded ocular-following responses (OFRs) and perceptual direction discrimination in human subjects. OFRs were in the direction of the Fourier components' shift and showed a smooth decline in amplitude, well fit by Gaussian functions, as the difference between the central spatial frequencies of the first and second images increased. In sharp contrast, perceptual direction discrimination was 100% correct when the difference between the central spatial frequencies of the two images was small, but deteriorated rapidly to chance as the difference increased further. The perceptual dependencies moved closer to those of the OFRs when subjects were allowed to grade the strength of perceived motion. Response asymmetries common to perceptual judgments and OFRs suggest that they rely on the same early visual processing mechanisms. The OFR data were quantitatively well described by a model combining two factors: (1) an excitatory drive, determined by a power-law sum of the contributions of the stimulus Fourier components, scaled by (2) a contrast normalization mechanism.
Thus, in addition to traditional studies relying on perceptual reports, the OFRs represent a valuable behavioral tool for studying early motion processing on a fine scale.
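The Gaussian dependence of OFR amplitude on the spatial-frequency difference described above can be illustrated with a standard curve fit. The data here are synthetic stand-ins, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(delta_sf, amplitude, sigma):
    """OFR amplitude as a Gaussian function of the spatial-frequency
    difference between the two kinematogram frames (illustrative form)."""
    return amplitude * np.exp(-0.5 * (delta_sf / sigma) ** 2)

# Synthetic data standing in for measured OFR amplitudes (not real data):
# a Gaussian plus a small deterministic perturbation.
delta = np.linspace(-2.0, 2.0, 21)            # SF difference, octaves (assumed)
measured = gaussian(delta, 1.0, 0.8) + 0.01 * np.cos(5 * delta)

params, _ = curve_fit(gaussian, delta, measured, p0=(1.0, 1.0))
```

The fit recovers the underlying amplitude and width despite the perturbation, which is the sense in which the abstract's data were "well fit by Gaussian functions."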
Affiliation(s)
- Boris M Sheliga, Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Edmond J FitzGibbon, Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
3. Motor-effector dependent modulation of sensory-motor processes identified by the multivariate pattern analysis of EEG activity. Sci Rep 2023; 13:3161. PMID: 36823312; PMCID: PMC9950042; DOI: 10.1038/s41598-023-30324-5.
Abstract
Sensory information received through the sensory organs is constantly modulated by numerous non-sensory factors. Recent studies have demonstrated that the state of action can modulate sensory representations in cortical areas. Similarly, sensory information can be modulated by the type of action used to report perception; however, this issue has rarely been investigated systematically. In this study, we examined whether the sensorimotor processes represented in electroencephalography (EEG) activity vary depending on the type of effector behavior. Nineteen participants performed motion direction discrimination tasks in which the visual inputs were identical and only the effector behavior used to report the perceived motion direction differed (smooth pursuit, saccadic eye movement, or button press). We used multivariate pattern analysis to compare the EEG activity evoked by identical sensory inputs under the different effector behaviors. The EEG activity patterns for the identical sensory stimulus, before any motor action, varied across the effector conditions, and the choice of motor effector modulated neural direction discrimination differently. We suggest that this motor-effector-dependent modulation of EEG direction discrimination might be caused by effector-specific motor planning or preparation signals, because it was not functionally related to behavioral direction discriminability.
4. Ocular-following responses in school-age children. PLoS One 2022; 17:e0277443. DOI: 10.1371/journal.pone.0277443.
Abstract
Ocular following eye movements have provided insights into how the visual system of humans and monkeys processes motion. Recently, it has been shown that they also reliably reveal stereoanomalies and thus might have clinical applications. Their translation from research to clinical settings has, however, been hindered by their small size, which makes them difficult to record, and by a lack of data about their properties in sizable populations. Notably, they have so far been recorded only in adults. We recorded ocular following responses (OFRs), defined as the change in eye position in the 80-160 ms time window following the motion onset of a large textured stimulus, in 14 school-age children (6 to 13 years old, 9 males and 5 females), under recording conditions that closely mimic a clinical setting. The OFRs were acquired non-invasively by a custom-developed high-resolution video-oculography system, described in this study. With this system we were able to detect OFRs non-invasively in all children in short recording sessions. Across subjects, we observed a large variability in the magnitude of the movements (by a factor of 4); OFR magnitude was, however, not correlated with age. A power analysis indicates that even considerably smaller movements could be detected. We conclude that the ocular following system is well developed by age six, and that OFRs can be recorded non-invasively in young children in a clinical setting.
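The OFR measure defined above (change in eye position over the 80-160 ms open-loop window after motion onset) is straightforward to compute from an eye-position trace. A minimal sketch, with the sampling details assumed rather than taken from the study:

```python
import numpy as np

def ofr_magnitude(eye_position, t, window=(0.080, 0.160)):
    """Change in eye position over the open-loop window following motion
    onset (t = 0 s), per the definition quoted above. `eye_position` and
    `t` are same-length arrays; units and sampling rate are assumptions."""
    i0 = np.searchsorted(t, window[0])  # first sample at/after 80 ms
    i1 = np.searchsorted(t, window[1])  # first sample at/after 160 ms
    return eye_position[i1] - eye_position[i0]
```

For a trace drifting at a constant 2 deg/s, this returns roughly 2 x 0.080 = 0.16 deg, which gives a feel for how small these movements are.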
5. Barthélemy FV, Fleuriet J, Perrinet LU, Masson GS. A behavioral receptive field for ocular following in monkeys: Spatial summation and its spatial frequency tuning. eNeuro 2022; 9:ENEURO.0374-21.2022. PMID: 35760525; PMCID: PMC9275147; DOI: 10.1523/eneuro.0374-21.2022.
Abstract
In human and non-human primates, reflexive tracking eye movements can be initiated at very short latency in response to a rapid shift of the image. Previous studies in humans have shown that only a part of the central visual field is optimal for driving ocular following responses. Here, we investigated spatial summation of motion information across a wide range of spatial frequencies and speeds of drifting gratings by recording short-latency ocular following responses in macaque monkeys. We show that the optimal stimulus size for driving ocular responses covers a small (<20° diameter), central part of the visual field that shrinks with higher spatial frequency. This signature of linear motion integration remains invariant with speed and temporal frequency. For low and medium spatial frequencies, we found a strong suppressive influence from surround motion, evidenced by a decrease of response amplitude for stimulus sizes larger than optimal. Such suppression disappears with gratings at high frequencies. The contribution of peripheral motion was investigated by presenting grating annuli of increasing eccentricity. We observed an exponential decay of response amplitude with grating eccentricity, the decrease being faster for higher spatial frequencies. Weaker surround suppression can thus be explained by sparser eccentric inputs at high frequencies. A Difference-of-Gaussians model best renders the antagonistic contributions of peripheral and central motion. Its best-fit parameters coincide with several well-known spatial properties of area MT neuronal populations. These results describe the mechanism by which central motion information is automatically integrated in a context-dependent manner to drive ocular responses.
Significance Statement
Ocular following is driven by visual motion at ultra-short latency in both humans and monkeys. Its dynamics reflect the properties of low-level motion integration.
Here, we show that a strong center-surround suppression mechanism modulates initial eye velocity. Its spatial properties are dependent upon visual inputs' spatial frequency but are insensitive to either its temporal frequency or speed. These properties are best described with a Difference-of-Gaussian model of spatial integration. The model parameters reflect many spatial characteristics of motion sensitive neuronal populations in monkey area MT. Our results further outline the computational properties of the behavioral receptive field underpinning automatic, context-dependent motion integration.
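A Difference-of-Gaussians account of spatial summation like the one fitted above can be sketched as follows. The gains and space constants below are invented for illustration; they are not the fitted MT-like values reported in the study:

```python
import numpy as np
from scipy.special import erf

def dog_response(diameter, k_e=1.0, s_e=3.0, k_i=0.6, s_i=10.0):
    """Difference-of-Gaussians spatial summation: the response to a centered
    stimulus of a given diameter is the integral of a narrow excitatory
    Gaussian minus a broader inhibitory one (closed form via erf).
    All parameters (gains k, space constants s, in deg) are illustrative."""
    r = np.asarray(diameter, dtype=float) / 2.0
    return k_e * erf(r / s_e) - k_i * erf(r / s_i)
```

This reproduces the two behavioral signatures in the abstract: response peaks at a small optimal size, then declines for larger-than-optimal stimuli (surround suppression).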
Affiliation(s)
- Frédéric V Barthélemy, Institut de Neurosciences de la Timone, UMR7289, CNRS & Aix-Marseille Université, 13385 Marseille, France
- Jérome Fleuriet, Institut de Neurosciences de la Timone, UMR7289, CNRS & Aix-Marseille Université, 13385 Marseille, France; Assistance Publique-Hôpitaux de Paris, Intensive Care Unit, Raymond Poincaré Hospital, Garches, France
- Laurent U Perrinet, Institut de Neurosciences de la Timone, UMR7289, CNRS & Aix-Marseille Université, 13385 Marseille, France
- Guillaume S Masson, Institut de Neurosciences de la Timone, UMR7289, CNRS & Aix-Marseille Université, 13385 Marseille, France
6. Park ASY, Schütz AC. Selective postsaccadic enhancement of motion perception. Vision Res 2021; 188:42-50. PMID: 34280816; PMCID: PMC7611369; DOI: 10.1016/j.visres.2021.06.011.
Abstract
Saccadic eye movements can drastically affect motion perception: during saccades, the stationary surround is swept rapidly across the retina and contrast sensitivity is suppressed. However, after saccades, contrast sensitivity is enhanced for color and high-spatial frequency stimuli and reflexive tracking movements known as ocular following responses (OFR) are enhanced in response to large field motion. Additionally, OFR and postsaccadic enhancement of neural activity in primate motion processing areas are well correlated. It is not yet known how this postsaccadic enhancement arises. Therefore, we tested if the enhancement can be explained by changes in the balance of centre-surround antagonism in motion processing, where spatial summation is favoured at low contrasts and surround suppression is favoured at high contrasts. We found motion perception was selectively enhanced immediately after saccades for high spatial frequency stimuli, consistent with previously reported selective postsaccadic enhancement of contrast sensitivity for flashed high spatial frequency stimuli. The observed enhancement was also associated with changes in spatial summation and suppression, as well as contrast facilitation and inhibition, suggesting that motion processing is augmented to maximise visual perception immediately after saccades. The results highlight that spatial and contrast properties of underlying neural mechanisms for motion processing can be affected by an antecedent saccade for highly detailed stimuli and are in line with studies that show behavioural and neuronal enhancement of motion processing in non-human primates.
Affiliation(s)
- Adela S Y Park, Experimental and Biological Psychology, University of Marburg, Marburg, Germany
- Alexander C Schütz, Experimental and Biological Psychology, University of Marburg, Marburg, Germany; Center for Mind, Brain and Behavior, University of Marburg, Marburg, Germany
7. Parisot K, Zozor S, Guérin-Dugué A, Phlypo R, Chauvin A. Micro-pursuit: A class of fixational eye movements correlating with smooth, predictable, small-scale target trajectories. J Vis 2021; 21:9. PMID: 33444434; PMCID: PMC7838552; DOI: 10.1167/jov.21.1.9.
Abstract
Humans generate ocular pursuit movements when a moving target is tracked across the visual field. In this article, we show that pursuit can be generated and measured at small amplitudes, at the scale of fixational eye movements, and we tag these eye movements as micro-pursuits. During micro-pursuits, gaze direction correlates with the target's smooth, predictable trajectory. We measure the similarity between gaze and target trajectories using a so-called maximally projected correlation and provide results for three experimental data sets. A first observation of micro-pursuit is provided in an implicit pursuit task, where observers were asked to keep their gaze fixed on a static cross at the center of the screen while reporting changes in the perception of an ambiguous, moving (Necker) cube. We then provide two experimental paradigms and their corresponding data sets: the first replicating micro-pursuits in an explicit pursuit task, where observers had to follow a moving fixation cross (Cross), and the second with an unambiguous square (Square). Individual and group analyses provide evidence that micro-pursuits exist in both the Necker and Cross experiments but not in the Square experiment. The interexperiment analysis suggests that the manipulation of target motion, the task, and/or the nature of the stimulus may play a role in the generation of micro-pursuits.
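One simple reading of a "maximally projected correlation" is a grid search over projection directions: project both 2-D trajectories onto a common axis and keep the axis that maximizes the Pearson correlation. The estimator below is a hypothetical sketch and may differ from the authors' exact definition:

```python
import numpy as np

def max_projected_correlation(gaze, target, n_angles=180):
    """Sketch of a maximally projected correlation between two (N, 2)
    trajectories. Grid search over projection angles is illustrative;
    the paper's estimator may be defined differently."""
    best = -1.0
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        u = np.array([np.cos(theta), np.sin(theta)])  # projection direction
        g, t = gaze @ u, target @ u
        if g.std() > 0 and t.std() > 0:
            best = max(best, abs(np.corrcoef(g, t)[0, 1]))
    return best
```

A micro-pursuit episode would then show up as a high value even when gaze excursions are an order of magnitude smaller than the target's path.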
Affiliation(s)
- Kevin Parisot, CNRS, Institute of Engineering, GIPSA-lab & LPNC, University of Grenoble Alpes, Grenoble, France (https://scholar.google.fr/citations?user=WjGkMmIAAAAJ&hl=fr&oi=ao)
- Steeve Zozor, CNRS, Institute of Engineering, GIPSA-lab, University of Grenoble Alpes, Grenoble, France (http://www.gipsa-lab.grenoble-inp.fr/page_pro.php?vid=86)
- Anne Guérin-Dugué, CNRS, Institute of Engineering, GIPSA-lab, University of Grenoble Alpes, Grenoble, France (http://www.gipsa-lab.grenoble-inp.fr/page_pro.php?vid=71)
- Ronald Phlypo, CNRS, Institute of Engineering, GIPSA-lab, University of Grenoble Alpes, Grenoble, France (http://www.gipsa-lab.grenoble-inp.fr/page_pro.php?vid=2173)
- Alan Chauvin, CNRS, LPNC, University of Grenoble Alpes, Grenoble, France (https://lpnc.univ-grenoble-alpes.fr/Alan-Chauvin)
8. Benson PJ, Wallace L, Beedie SA. Sensory auditory interval perception errors in developmental dyslexia. Neuropsychologia 2020; 147:107587. DOI: 10.1016/j.neuropsychologia.2020.107587.
9. Lakshminarasimhan KJ, Avila E, Neyhart E, DeAngelis GC, Pitkow X, Angelaki DE. Tracking the Mind's Eye: Primate Gaze Behavior during Virtual Visuomotor Navigation Reflects Belief Dynamics. Neuron 2020; 106:662-674.e5. PMID: 32171388; PMCID: PMC7323886; DOI: 10.1016/j.neuron.2020.02.023.
Abstract
To take the best actions, we often need to maintain and update beliefs about variables that cannot be directly observed. To understand the principles underlying such belief updates, we need tools to uncover subjects' belief dynamics from natural behavior. We tested whether eye movements could be used to infer subjects' beliefs about latent variables using a naturalistic navigation task. Humans and monkeys navigated to a remembered goal location in a virtual environment that provided optic flow but lacked explicit position cues. We observed eye movements that appeared to continuously track the goal location even when no visible target was present there. Accurate goal tracking was associated with improved task performance, and inhibiting eye movements in humans impaired navigation precision. These results suggest that gaze dynamics play a key role in action selection during challenging visuomotor behaviors and may possibly serve as a window into the subject's dynamically evolving internal beliefs.
Affiliation(s)
- Kaushik J Lakshminarasimhan, Center for Neural Science, New York University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Eric Avila, Center for Neural Science, New York University, New York, NY, USA
- Erin Neyhart, Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Xaq Pitkow, Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
- Dora E Angelaki, Center for Neural Science, New York University, New York, NY, USA; Tandon School of Engineering, New York University, New York, NY, USA
10. Kwon S, Rolfs M, Mitchell JF. Presaccadic motion integration drives a predictive postsaccadic following response. J Vis 2020; 19:12. PMID: 31557762; DOI: 10.1167/19.11.12.
Abstract
Saccadic eye movements sample the visual world and ensure high acuity across the visual field. To compensate for delays in processing, saccades to moving targets require predictions: The eyes must intercept the target's future position to then pursue its direction of motion. Although prediction is crucial to voluntary pursuit, it is unclear whether it is an obligatory feature of saccade planning. Saccade planning involves an involuntary enhanced processing of the target, called presaccadic attention. Does this presaccadic attention recruit smooth eye movements automatically? To test this, we had human participants perform a saccade to one of four apertures, which were static, but each contained a random dot field with motion tangential to the required saccade. In this task, saccades were deviated along the direction of target motion, and the eyes exhibited a following response upon saccade landing. This postsaccadic following response (PFR) increased with spatial uncertainty of the target position and persisted even when we removed the motion stimulus in midflight of the saccade, confirming that it relied on presaccadic information. Motion from 50-100 ms prior to the saccade had the strongest influence on PFR, consistent with the time course of perceptual enhancements reported in presaccadic attention. Finally, the PFR magnitude related linearly to the logarithm of stimulus velocity and generally had low gain, similar to involuntary ocular following movements commonly observed after sudden motion onsets. These results suggest that presaccadic attention selects motion features of targets predictively, presumably to ensure successful immediate tracking of saccade targets in motion.
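The reported linear relation between PFR magnitude and the logarithm of stimulus velocity is easy to illustrate with a simple fit. The speeds and the low gain below are made-up numbers, not the study's data:

```python
import numpy as np

# Illustrative only: PFR magnitude growing linearly with log stimulus speed,
# with a low overall gain, as described in the abstract.
speeds = np.array([2.0, 4.0, 8.0, 16.0, 32.0])  # deg/s (assumed values)
pfr = 0.05 * np.log(speeds) + 0.01              # deg, synthetic responses

# Fitting PFR against log(speed) recovers the (assumed) gain and offset.
slope, intercept = np.polyfit(np.log(speeds), pfr, 1)
```

The small slope here stands in for the "low gain" the authors note, similar to involuntary ocular following after sudden motion onsets.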
Affiliation(s)
- Sunwoo Kwon, Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA; Center for Visual Science, University of Rochester, Rochester, NY, USA
- Martin Rolfs, Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience, Humboldt-Universität zu Berlin, Berlin, Germany
- Jude F Mitchell, Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA; Center for Visual Science, University of Rochester, Rochester, NY, USA
11. Quaia C, FitzGibbon EJ, Optican LM, Cumming BG. Binocular Summation for Reflexive Eye Movements: A Potential Diagnostic Tool for Stereodeficiencies. Invest Ophthalmol Vis Sci 2018; 59:5816-5822. PMID: 30521669; PMCID: PMC6284466; DOI: 10.1167/iovs.18-24520.
Abstract
Purpose: Stereoscopic vision, by detecting interocular correlations, enhances depth perception. Stereodeficiencies often emerge during the first months of life and, left untreated, can lead to severe loss of visual acuity in one eye and/or strabismus. Early treatment results in much better outcomes, yet diagnostic tests for infants are cumbersome and not widely available. We asked whether reflexive eye movements, which in principle can be recorded even in infants, can be used to identify stereodeficiencies.
Methods: Reflexive ocular following eye movements induced by fast drifting noise stimuli were recorded in 10 adult human participants (5 with normal stereoacuity, 5 stereodeficient). To manipulate interocular correlation, the stimuli shown to the two eyes were either identical, different, or of opposite contrast. Monocular presentations were also interleaved. The participants were asked to passively fixate the screen.
Results: In the participants with normal stereoacuity, the responses to binocular identical stimuli were significantly larger than those induced by binocular opposite stimuli. In the stereodeficient participants the responses were indistinguishable. Despite the small size of ocular following responses, 40 trials, corresponding to less than 2 minutes of testing, were sufficient to reliably differentiate normal from stereodeficient participants.
Conclusions: Ocular following eye movements, because of their reliance on cortical neurons sensitive to interocular correlations, are affected by stereodeficiencies. Because these eye movements can be recorded noninvasively and with minimal participant cooperation, they could potentially be measured even in infants and might thus provide a useful screening tool for this currently underserved population.
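The logic of the Results above (identical versus opposite-contrast responses separating stereo-normal from stereodeficient observers within about 40 trials) can be mimicked with synthetic numbers and a two-sample t-test. Every value below is a synthetic stand-in for illustration, not data from the study:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Synthetic OFR amplitudes (arbitrary units), ~20 trials per condition.
# A stereo-normal observer responds more to identical than to
# opposite-contrast binocular stimuli; a stereodeficient one does not.
normal_same = rng.normal(10.0, 3.0, 20)
normal_opposite = rng.normal(4.0, 3.0, 20)
deficient_same = rng.normal(7.0, 3.0, 20)
deficient_opposite = rng.normal(7.0, 3.0, 20)

p_normal = ttest_ind(normal_same, normal_opposite).pvalue
p_deficient = ttest_ind(deficient_same, deficient_opposite).pvalue
```

With modest trial counts the same/opposite contrast difference is readily detectable in the "normal" data and absent in the "deficient" data, which is the basis of the proposed screening test.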
Affiliation(s)
- Christian Quaia, Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, U.S. Department of Health and Human Services, Bethesda, Maryland, United States
- Edmond J FitzGibbon, Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, U.S. Department of Health and Human Services, Bethesda, Maryland, United States
- Lance M Optican, Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, U.S. Department of Health and Human Services, Bethesda, Maryland, United States
- Bruce G Cumming, Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, U.S. Department of Health and Human Services, Bethesda, Maryland, United States
12.
Abstract
Psychophysical studies and our own subjective experience suggest that, in natural viewing conditions (i.e., at medium to high contrasts), monocularly and binocularly viewed scenes appear very similar, with the exception of the improved depth perception provided by stereopsis. This phenomenon is usually described as a lack of binocular summation. We show here that there is an exception to this rule: Ocular following eye movements induced by the sudden motion of a large stimulus, which we recorded from three human subjects, are much larger when both eyes see the moving stimulus, than when only one eye does. We further discovered that this binocular advantage is a function of the interocular correlation between the two monocular images: It is maximal when they are identical, and reduced when the two eyes are presented with different images. This is possible only if the neurons that underlie ocular following are sensitive to binocular disparity.
Affiliation(s)
- Christian Quaia, Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Department of Health and Human Services, Bethesda, MD, USA
- Lance M Optican, Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Department of Health and Human Services, Bethesda, MD, USA
- Bruce G Cumming, Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Department of Health and Human Services, Bethesda, MD, USA
13. Suppression and Contrast Normalization in Motion Processing. J Neurosci 2017; 37:11051-11066. PMID: 29018158; DOI: 10.1523/jneurosci.1572-17.2017.
Abstract
Sensory neurons are activated by a range of stimuli to which they are said to be tuned. Usually, they are also suppressed by another set of stimuli that have little effect when presented in isolation. The interactions between preferred and suppressive stimuli are often quite complex and vary across neurons, even within a single area, making it difficult to infer their collective effect on behavioral responses mediated by activity across populations of neurons. Here, we investigated this issue by measuring, in human subjects (three males), the suppressive effect of static masks on the ocular following responses induced by moving stimuli. We found a wide range of effects, which depend in a nonlinear and nonseparable manner on the spatial frequency, contrast, and spatial location of both stimulus and mask. Under some conditions, the presence of the mask can be seen as scaling the contrast of the driving stimulus. Under other conditions, the effect is more complex, involving also a direct scaling of the behavioral response. All of this complexity at the behavioral level can be captured by a simple model in which stimulus and mask interact nonlinearly at two stages, one monocular and one binocular. The nature of the interactions is compatible with those observed at the level of single neurons in primates, usually broadly described as divisive normalization, without having to invoke any scaling mechanism.
Significance Statement
The response of sensory neurons to their preferred stimulus is often modulated by stimuli that are not effective when presented alone. Individual neurons can exhibit multiple modulatory effects, with considerable variability across neurons even in a single area. Such diversity has made it difficult to infer the impact of these modulatory mechanisms on behavioral responses. Here, we report the effects of a stationary mask on the reflexive eye movements induced by a moving stimulus.
A model with two stages, each incorporating a divisive modulatory mechanism, reproduces our experimental results and suggests that qualitative variability of masking effects in cortical neurons might arise from differences in the extent to which such effects are inherited from earlier stages.
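The two-stage model described above (stimulus and mask interacting nonlinearly first at a monocular stage and again at a binocular stage, each via divisive normalization) can be caricatured in a few lines. The constants and exponents are illustrative placeholders, not the fitted values:

```python
def two_stage_response(stim_contrast, mask_contrast,
                       sigma1=0.1, sigma2=0.1, p=2.0):
    """Caricature of a two-stage divisive-normalization model: the mask
    enters the normalization pool at both a monocular and a binocular
    stage. All constants are illustrative, not fitted parameters."""
    # Stage 1 (monocular): the mask divisively scales the stimulus drive.
    m1 = stim_contrast**p / (sigma1 + stim_contrast**p + mask_contrast**p)
    # Stage 2 (binocular): a second divisive pool, again including the mask.
    return m1 / (sigma2 + m1 + mask_contrast**p)
```

Even this caricature shows the key qualitative behaviors: the response grows with stimulus contrast, and adding a static mask suppresses it through both stages without any explicit response-scaling term.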
14. Matsuura K, Kawano K, Inaba N, Miura K. Contribution of color signals to ocular following responses. Eur J Neurosci 2016; 44:2600-2613. PMID: 27519159; DOI: 10.1111/ejn.13361.
Abstract
Ocular following responses (OFRs) are elicited at ultra-short latencies (<60 ms) by sudden movements of the visual scene. In this study, we investigated the role of color signals in the OFRs of monkeys. To construct physiologically isoluminant sinusoidal color gratings, we estimated the physiologically isoluminant points using OFRs, and found that these points were nearly independent of the spatiotemporal frequency of the gratings. We then recorded OFRs induced by the motion of physiologically isoluminant color gratings and found that they had different spatiotemporal frequency tuning from OFRs elicited by the motion of luminance gratings. Additionally, OFRs to isoluminant color gratings had smaller peak responses, suggesting that color signals contribute only weakly to OFRs compared with luminance signals. OFRs to the motion of stimuli composed of both luminance and color signals were also examined. We found that color signals contributed substantially to OFRs when luminance signals were weak, regardless of whether the color signals moved in the same or the opposite direction to the luminance signals. These results provide evidence of the multichannel visual computations underlying motor responses. We conclude that, in everyday situations, color information contributes cooperatively with luminance information to the generation of ocular tracking behaviors.
Affiliation(s)
- Kiyoto Matsuura
- Department of Integrative Brain Science, Graduate School of Medicine, Kyoto University, Konoe-cho, Yoshida, Kyoto-shi, Kyoto, 606-8501, Japan; Center for the Promotion of Interdisciplinary Education and Research, Research and Educational Unit of Leaders for Integrated Medical System, Kyoto University, Kyoto, Japan
- Kenji Kawano
- Department of Integrative Brain Science, Graduate School of Medicine, Kyoto University, Konoe-cho, Yoshida, Kyoto-shi, Kyoto, 606-8501, Japan; Center for the Promotion of Interdisciplinary Education and Research, Research and Educational Unit of Leaders for Integrated Medical System, Kyoto University, Kyoto, Japan
- Naoko Inaba
- Department of Physiology, Systems Neuroscience Laboratory, Graduate School of Medicine, Hokkaido University, Hokkaido, Japan
- Kenichiro Miura
- Department of Integrative Brain Science, Graduate School of Medicine, Kyoto University, Konoe-cho, Yoshida, Kyoto-shi, Kyoto, 606-8501, Japan.
15
Quaia C, Optican LM, Cumming BG. A Motion-from-Form Mechanism Contributes to Extracting Pattern Motion from Plaids. J Neurosci 2016; 36:3903-18. [PMID: 27053199 PMCID: PMC4821905 DOI: 10.1523/jneurosci.3398-15.2016] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2015] [Revised: 02/22/2016] [Accepted: 02/24/2016] [Indexed: 11/21/2022] Open
Abstract
Since the discovery of neurons selective for pattern motion direction in primate middle temporal area MT (Albright, 1984; Movshon et al., 1985), the neural computation of this signal has been the subject of intense study. The bulk of this work has explored responses to plaids obtained by summing two drifting sinusoidal gratings. Unfortunately, with these stimuli, many different mechanisms are similarly effective at extracting pattern motion. We devised a new set of stimuli, obtained by summing two random line stimuli with different orientations. This allowed several novel manipulations, including generating plaids that do not contain rigid 2D motion. Importantly, these stimuli do not engage most of the previously proposed mechanisms. We then recorded the ocular following responses that such stimuli induce in human subjects. We found that pattern motion is computed even with stimuli that do not cohere perceptually, including those without rigid motion, and even when the two gratings are presented separately to the two eyes. Moderate temporal and/or spatial separation of the gratings impairs the computation. We show that, of the models proposed so far, only those based on the intersection-of-constraints rule, embedding a motion-from-form mechanism (in which orientation signals are used in the computation of motion direction signals), can account for our results. At least for the eye movements reported here, a motion-from-form mechanism is thus involved in one of the most basic functions of the visual motion system: extracting motion direction from complex scenes. SIGNIFICANCE STATEMENT Anatomical considerations led to the proposal that visual function is organized in separate processing streams: one (ventral) devoted to form and one (dorsal) devoted to motion. Several experimental results have challenged this view, arguing in favor of a more integrated view of visual processing. Here we add to this body of work, supporting a role for form information even in a function (extracting pattern motion direction from complex scenes) for which decisive evidence for the involvement of form signals has been lacking.
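As an illustrative aside (our sketch, not the authors' implementation), the intersection-of-constraints rule can be stated compactly: each component grating constrains the pattern velocity v only along its own normal, so v must satisfy n_i · v = s_i for each grating i, and two non-parallel constraints determine v uniquely.

```python
import numpy as np

def ioc_pattern_velocity(theta1, s1, theta2, s2):
    """Intersection-of-constraints (IOC) estimate of 2D pattern velocity.

    theta_i: direction of grating i's normal (radians);
    s_i: speed of grating i measured along that normal.
    Solves the 2x2 linear system n_i . v = s_i for the pattern velocity v.
    """
    normals = np.array([[np.cos(theta1), np.sin(theta1)],
                        [np.cos(theta2), np.sin(theta2)]])
    return np.linalg.solve(normals, np.array([s1, s2]))
```

For instance, gratings whose normals point along the horizontal and vertical axes and that drift at 1 and 2 units/s along those normals intersect at the pattern velocity (1, 2).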
Affiliation(s)
- Christian Quaia
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Department of Health and Human Services, Bethesda, Maryland 20892
- Lance M Optican
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Department of Health and Human Services, Bethesda, Maryland 20892
- Bruce G Cumming
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Department of Health and Human Services, Bethesda, Maryland 20892
16
Nohara S, Kawano K, Miura K. Difference in perceptual and oculomotor responses revealed by apparent motion stimuli presented with an interstimulus interval. J Neurophysiol 2015; 113:3219-28. [PMID: 25810485 DOI: 10.1152/jn.00647.2014] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2014] [Accepted: 03/12/2015] [Indexed: 11/22/2022] Open
Abstract
To understand the mechanisms underlying visual motion analyses for perceptual and oculomotor responses, and their similarities and differences, we analyzed eye movement responses to two-frame animations of dual-grating 3f5f stimuli while subjects performed direction discrimination tasks. The 3f5f stimulus was composed of two sinusoids with a spatial frequency ratio of 3:5 (3f and 5f), creating a pattern with fundamental frequency f. When this stimulus was shifted by 1/4 of the fundamental wavelength, the two components each shifted by 1/4 of their own wavelengths but in opposite directions: the 5f forward and the 3f backward. By presenting the 3f5f stimulus with various interstimulus intervals (ISIs), two visual-motion-analysis mechanisms, low-level energy-based and high-level feature-based, could be effectively distinguished. This is because response direction depends on the relative contrast between the components when the energy-based mechanism operates, but not when the feature-based mechanism does. We found that when the 3f5f stimuli were presented with shorter ISIs (<100 ms) and the 3f component had higher contrast, both perceptual and ocular responses were in the direction of the pattern shift, whereas the responses reversed when the 5f component had higher contrast, suggesting operation of the energy-based mechanism. On the other hand, the ocular responses were almost negligible with longer ISIs (>100 ms), whereas perceived directions were biased toward the direction of the pattern shift. These results suggest that the energy-based mechanism is dominant in oculomotor responses across ISIs, whereas there is a transition from energy-based to feature-tracking mechanisms when we perceive visual motion.
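As an illustrative aside (our own sketch, not the authors' code), the opposite component shifts of the 3f5f stimulus follow directly from phase arithmetic: shifting the whole pattern by 1/4 of the fundamental wavelength advances the 3f and 5f components by 3/4 and 5/4 of their own cycles, which, modulo one cycle, is 1/4 cycle backward and 1/4 cycle forward, respectively.

```python
import numpy as np

f = 1.0            # fundamental spatial frequency (cycles per unit distance)
shift = 0.25 / f   # pattern shift: 1/4 of the fundamental wavelength

def component_phase_step(harmonic, shift, f):
    """Phase step of one harmonic for a given pattern shift,
    wrapped to (-pi, pi]."""
    step = 2 * np.pi * harmonic * f * shift
    return (step + np.pi) % (2 * np.pi) - np.pi

step_3f = component_phase_step(3, shift, f)  # -pi/2: 1/4 cycle backward
step_5f = component_phase_step(5, shift, f)  # +pi/2: 1/4 cycle forward
```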
Affiliation(s)
- Shizuka Nohara
- Department of Integrative Brain Science, Graduate School of Medicine, Kyoto University, Kyoto, Japan; and Faculty of Medicine, Kyoto University, Kyoto, Japan
- Kenji Kawano
- Department of Integrative Brain Science, Graduate School of Medicine, Kyoto University, Kyoto, Japan
- Kenichiro Miura
- Department of Integrative Brain Science, Graduate School of Medicine, Kyoto University, Kyoto, Japan
17
Retinal visual processing constrains human ocular following response. Vision Res 2013; 93:29-42. [PMID: 24125703 DOI: 10.1016/j.visres.2013.10.002] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2013] [Revised: 08/30/2013] [Accepted: 10/03/2013] [Indexed: 11/24/2022]
Abstract
Ocular following responses (OFRs) are the initial tracking eye movements elicited at ultra-short latency by sudden motion of a textured pattern. We wished to evaluate quantitatively the impact that subcortical stages of visual processing might have on the OFRs. In three experiments we recorded the OFRs of human subjects to brief horizontal motion of 1D vertical sine-wave gratings restricted to an elongated horizontal aperture. Gratings were composed of a variable number of abutting horizontal strips in which alternate strips were in counterphase. In one of the experiments we also used gratings occupying a variable number of horizontal strips separated vertically by mean-luminance gaps. We modeled retinal center/surround receptive fields as a difference of two 2-D Gaussian functions. When the characteristics of such local filters were selected in accord with the known properties of primate retinal ganglion cells, a single-layer model was able to account quantitatively for the observed changes in OFR amplitude for stimuli composed of counterphase strips of different heights (Experiment 1), over a wide range of stimulus contrasts (Experiment 2) and spatial frequencies (Experiment 3). A similar model using oriented filters that resemble cortical simple cells was also able to account for these data. Since similar oriented filters can be constructed from the linear summation of retinal filters, and the retinal filters alone can explain the data, we conclude that retinal processing determines the response to these stimuli. Thus, with appropriately chosen stimuli, OFRs can be used to study visual spatial integration processes as early as the retina.
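As an illustrative aside, a center/surround filter of the kind described, the difference of two 2-D Gaussians, can be sketched as follows; the space constants and surround weight here are placeholders, not the paper's fitted values.

```python
import numpy as np

def dog_filter(size, sigma_c, sigma_s, k_s=0.9):
    """Center/surround receptive field as a difference of two 2-D Gaussians.

    sigma_c, sigma_s: center and surround space constants (sigma_s > sigma_c);
    k_s: surround weight (illustrative value, not a fitted parameter).
    Returns a size-by-size array: positive center, negative surround.
    """
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return center - k_s * surround
```

Convolving a stimulus with a bank of such filters, then summing the rectified outputs, is the kind of single-layer computation the abstract describes.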
18
Abstract
Active sensation poses unique challenges to sensory systems because moving the sensor necessarily alters the input sensory stream. Sensory input quality is additionally compromised if the sensor moves rapidly, as during rapid eye movements, making the period immediately after the movement critical for recovering reliable sensation. Here, we studied this immediate postmovement interval for the case of microsaccades during fixation, which rapidly jitter the "sensor" exactly when it is being voluntarily stabilized to maintain clear vision. We characterized retinal-image slip in monkeys immediately after microsaccades by analyzing postmovement ocular drifts. We observed ocular drifts enhanced by up to ~28% relative to premicrosaccade levels, lasting for up to ~50 ms after movement end. Moreover, we used a technique to trigger full-field image motion contingent on real-time microsaccade detection, and we used the initial ocular following response to this motion as a proxy for changes in early visual motion processing caused by microsaccades. When the full-field image motion started during microsaccades, ocular following was strongly suppressed, consistent with detrimental retinal effects of the movements. However, when the motion started after microsaccades, there was up to a ~73% increase in ocular following speed, suggesting enhanced motion sensitivity. These results suggest that the interface between even the smallest possible saccades and "fixation" includes a period of faster-than-usual image slip, as well as an enhanced responsiveness to image motion, and that both of these phenomena need to be considered when interpreting the pervasive neural and perceptual modulations frequently observed around the time of microsaccades.
19
Sheliga BM, Quaia C, Cumming BG, Fitzgibbon EJ. Spatial summation properties of the human ocular following response (OFR): dependence upon the spatial frequency of the stimulus. Vision Res 2012; 68:1-13. [PMID: 22819728 PMCID: PMC3430370 DOI: 10.1016/j.visres.2012.07.006] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2012] [Revised: 07/03/2012] [Accepted: 07/10/2012] [Indexed: 11/17/2022]
Abstract
Ocular following responses (OFRs) are the initial tracking eye movements that can be elicited at ultra-short latency by sudden motion of a textured pattern. The OFR magnitude depends upon stimulus size, and also upon the spatial frequency (SF) of sine-wave gratings. Here we investigate the interaction of size and SF. We recorded initial OFRs in human subjects when 1D vertical sine-wave gratings were subject to horizontal motion. Gratings were restricted to elongated horizontal apertures ("strips") aligned with the axis of motion. In Experiment 1 the SF and the height of a single strip were manipulated. The magnitude of the OFR increased with strip height up to some optimum value, while strip heights greater than this optimum produced smaller responses. This effect was strongly dependent on SF: the optimum strip height was smaller for higher SFs. In order to explore the underlying mechanism, Experiment 2 measured OFRs to stimuli composed of two thin horizontal strips, one in the upper visual field and the other in the lower visual field, whose vertical separation varied 32-fold. Stimuli of different sizes can be reconstructed from the sum of such horizontal strips. We found that the OFRs in Experiment 1 were smaller than the sum of the responses to the component stimuli, but greater than the average of those responses. We defined an averaging coefficient that described whether a given response was closer to the sum or to the average. For any one SF, the averaging coefficients were similar over a wide range of stimulus sizes, while they varied considerably (7-fold) across stimuli of different SFs.
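As an illustrative aside, the abstract does not give the averaging coefficient in closed form; one hypothetical formalization (our assumption, not the paper's definition) maps the response to the combined stimulus onto a scale where 0 means perfect summation of the component responses and 1 means perfect averaging.

```python
def averaging_coefficient(r_combined, r_comp1, r_comp2):
    """Hypothetical averaging coefficient (our formalization):
    0 when the combined response equals the sum of the component responses,
    1 when it equals their average; intermediate values interpolate linearly.
    """
    r_sum = r_comp1 + r_comp2
    r_avg = r_sum / 2.0
    return (r_sum - r_combined) / (r_sum - r_avg)
```

On this scale, a combined response of 3.0 to two components each eliciting 2.0 falls exactly halfway between summation (4.0) and averaging (2.0), giving a coefficient of 0.5.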
Affiliation(s)
- B M Sheliga
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD 20892, USA.