1. Carrasco M, Spering M. Perception-action Dissociations as a Window into Consciousness. J Cogn Neurosci 2024;36:1557-1566. [PMID: 38865201] [DOI: 10.1162/jocn_a_02122]
Abstract
Understanding the neural correlates of unconscious perception stands as a primary goal of experimental research in cognitive psychology and neuroscience. In this Perspectives paper, we explain why experimental protocols probing qualitative dissociations between perception and action provide valuable insights into conscious and unconscious processing, along with their corresponding neural correlates. We present research that utilizes human eye movements as a sensitive indicator of unconscious visual processing. Given the increasing reliance on oculomotor and pupillary responses in consciousness research, these dissociations also provide a cautionary tale about inferring conscious perception solely based on no-report protocols.
2. Kreyenmeier P, Kumbhani R, Movshon JA, Spering M. Shared Mechanisms Drive Ocular Following and Motion Perception. eNeuro 2024;11:ENEURO.0204-24.2024. [PMID: 38834301] [PMCID: PMC11208981] [DOI: 10.1523/eneuro.0204-24.2024]
Abstract
How features of complex visual patterns are combined to drive perception and eye movements is not well understood. Here we simultaneously assessed human observers' perceptual direction estimates and ocular following responses (OFR) evoked by moving plaids made from two summed gratings with varying contrast ratios. When the gratings were of equal contrast, observers' eye movements and perceptual reports followed the motion of the plaid pattern. However, when the contrasts were unequal, eye movements and reports during early phases of the OFR were biased toward the direction of the high-contrast grating component; during later phases, both responses followed the plaid pattern direction. The shift from component- to pattern-driven behavior resembles the shift in tuning seen under similar conditions in neuronal responses recorded from monkey MT. Moreover, for some conditions, pattern tracking and perceptual reports were correlated on a trial-by-trial basis. The OFR may therefore provide a precise behavioral readout of the dynamics of neural motion integration for complex visual patterns.
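The stimulus described in this abstract, a plaid made from two summed drifting gratings with a variable contrast ratio, can be sketched in a few lines. All parameter values below (image size, spatial frequency, component angles, contrasts) are illustrative and not taken from the paper:

```python
import numpy as np

def drifting_plaid(size=128, t=0.0, sf=0.05, speed=2.0,
                   angles_deg=(-60.0, 60.0), contrasts=(0.5, 0.5)):
    """One frame of a plaid: the sum of two drifting sine gratings.

    size: image width/height in pixels; sf: cycles/pixel;
    speed: drift rate of each component in cycles/second; t: time in seconds.
    With unequal `contrasts`, one component dominates, as in the
    unequal-contrast conditions of the study.
    """
    y, x = np.mgrid[0:size, 0:size]
    frame = np.zeros((size, size))
    for ang, c in zip(angles_deg, contrasts):
        a = np.deg2rad(ang)
        # spatial phase along this component's motion direction, drifting over time
        phase = 2 * np.pi * sf * (x * np.cos(a) + y * np.sin(a)) - 2 * np.pi * speed * t
        frame += c * np.sin(phase)
    return frame  # values bounded by +/-(c1 + c2)

plaid = drifting_plaid(contrasts=(0.4, 0.2))  # an unequal-contrast plaid frame
```

Rendering successive frames for increasing `t` yields the moving pattern; the two components here drift along directions 120° apart, so the pattern direction bisects them.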
Affiliation(s)
- Philipp Kreyenmeier
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia V5Z 3N9, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia V6T 1Z3, Canada
- Romesh Kumbhani
- Center for Neural Science, New York University, New York, New York 10003
- J Anthony Movshon
- Center for Neural Science, New York University, New York, New York 10003
- Department of Psychology, New York University, New York, New York 10003
- Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia V5Z 3N9, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia V6T 1Z3, Canada
- Institute for Computing, Information, and Cognitive Systems, University of British Columbia, Vancouver, British Columbia V6T 1Z3, Canada
- Djavad Mowafaghian Center for Brain Health, University of British Columbia, Vancouver, British Columbia V6T 1Z3, Canada
3. Lisi M, Cavanagh P. Different extrapolation of moving object locations in perception, smooth pursuit, and saccades. J Vis 2024;24(3):9. [PMID: 38546586] [PMCID: PMC10996402] [DOI: 10.1167/jov.24.3.9]
Abstract
The ability to accurately perceive and track moving objects is crucial for many everyday activities. In this study, we use a "double-drift stimulus" to explore the processing of visual motion signals that underlie perception, pursuit, and saccade responses to a moving object. Participants were presented with peripheral moving apertures filled with noise that either drifted orthogonally to the aperture's direction or had no net motion. Participants were asked to saccade to and track these targets with their gaze as soon as they appeared and then to report their direction. In the trials with internal motion, the target disappeared at saccade onset so that the first 100 ms of the postsaccadic pursuit response was driven uniquely by peripheral information gathered before saccade onset. This provided independent measures of perceptual, pursuit, and saccadic responses to the double-drift stimulus on a trial-by-trial basis. Our analysis revealed systematic differences between saccadic responses, on one hand, and perceptual and pursuit responses, on the other. These differences are unlikely to be caused by differences in the processing of motion signals because both saccades and pursuits seem to rely on shared target position and velocity information. We conclude that our results are instead due to a difference in how the processing mechanisms underlying perception, pursuit, and saccades combine motor signals with target position. These findings advance our understanding of the mechanisms underlying dissociation in visual processing between perception and eye movements.
Affiliation(s)
- Matteo Lisi
- Department of Psychology, Royal Holloway, University of London, London, UK
- Patrick Cavanagh
- Department of Psychology, Glendon College, Toronto, Ontario, Canada
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
4. Kreyenmeier P, Kumbhani R, Movshon JA, Spering M. Shared mechanisms drive ocular following and motion perception. bioRxiv [Preprint] 2023:2023.10.02.560543. [PMID: 37873151] [PMCID: PMC10592915] [DOI: 10.1101/2023.10.02.560543]
Abstract
How features of complex visual patterns combine to drive perception and eye movements is not well understood. We simultaneously assessed human observers' perceptual direction estimates and ocular following responses (OFR) evoked by moving plaids made from two summed gratings with varying contrast ratios. When the gratings were of equal contrast, observers' eye movements and perceptual reports followed the motion of the plaid pattern. However, when the contrasts were unequal, eye movements and reports during early phases of the OFR were biased toward the direction of the high-contrast grating component; during later phases, both responses more closely followed the plaid pattern direction. The shift from component- to pattern-driven behavior resembles the shift in tuning seen under similar conditions in neuronal responses recorded from monkey MT. Moreover, for some conditions, pattern tracking and perceptual reports were correlated on a trial-by-trial basis. The OFR may therefore provide a precise behavioural read-out of the dynamics of neural motion integration for complex visual patterns.
Affiliation(s)
- Philipp Kreyenmeier
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, BC V5Z 3N9, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, BC V6T 1Z3, Canada
- Romesh Kumbhani
- Center for Neural Science, New York University, New York, NY 10003, USA
- J. Anthony Movshon
- Center for Neural Science, New York University, New York, NY 10003, USA
- Department of Psychology, New York University, New York, NY 10003, USA
- Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, BC V5Z 3N9, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, BC V6T 1Z3, Canada
- Institute for Computing, Information, and Cognitive Systems, University of British Columbia, Vancouver, BC V6T 1Z3, Canada
- Djavad Mowafaghian Center for Brain Health, University of British Columbia, Vancouver, BC V6T 1Z3, Canada
5. Grimaldi A, Perrinet LU. Learning heterogeneous delays in a layer of spiking neurons for fast motion detection. Biol Cybern 2023;117:373-387. [PMID: 37695359] [DOI: 10.1007/s00422-023-00975-8]
Abstract
The precise timing of spikes emitted by neurons plays a crucial role in shaping the response of efferent biological neurons. This temporal dimension of neural activity is central to understanding information processing in neurobiology and to the performance of neuromorphic hardware such as event-based cameras. Nonetheless, many artificial neural models disregard this critical temporal dimension of neural activity. In this study, we present a model designed to efficiently detect temporal spiking motifs using a layer of spiking neurons equipped with heterogeneous synaptic delays. Our model capitalizes on the diverse synaptic delays present on the dendritic tree, enabling specific arrangements of temporally precise synaptic inputs to synchronize upon reaching the basal dendritic tree. We formalize this process as a time-invariant logistic regression, which can be trained using labeled data. To demonstrate its practical efficacy, we apply the model to naturalistic videos transformed into event streams, simulating the output of the biological retina or of event-based cameras. To evaluate the robustness of the model in detecting visual motion, we conduct experiments by selectively pruning weights and demonstrate that the model remains efficient even under significantly reduced workloads. In conclusion, by providing a comprehensive, event-driven computational building block, the incorporation of heterogeneous delays has the potential to greatly improve the performance of future spiking neural network algorithms, particularly in the context of neuromorphic chips.
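As a rough illustration of the time-invariant logistic regression over heterogeneously delayed inputs described above: the detector below scores a spike raster against one learned weight per (synapse, delay) pair, so a motif whose spikes arrive with the right relative delays synchronizes into a large drive. This is a sketch of the idea, not the authors' code; the raster, weights, and bias in any usage are invented for the example:

```python
import numpy as np

def detect_motif(spikes, weights, bias=0.0):
    """Time-invariant logistic readout over heterogeneously delayed inputs.

    spikes:  (n_neurons, n_time) binary spike raster.
    weights: (n_neurons, n_delays) weight per (synapse, delay) pair; a large
             weight at delay d means "this input matters d steps later",
             playing the role of a heterogeneous synaptic delay.
    Returns the logistic probability of the motif at every time step.
    """
    n_neurons, n_time = spikes.shape
    _, n_delays = weights.shape
    drive = np.full(n_time, bias, dtype=float)
    for d in range(n_delays):
        # a spike at time t-d contributes to the drive at time t
        delayed = np.zeros_like(spikes, dtype=float)
        delayed[:, d:] = spikes[:, :n_time - d]
        drive += weights[:, d] @ delayed
    return 1.0 / (1.0 + np.exp(-drive))  # logistic nonlinearity
```

For example, weights that pair "neuron 0 two steps ago" with "neuron 1 now" make the readout peak exactly when that two-step motif occurs; in the full model such weights are learned from labeled event streams.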
Affiliation(s)
- Antoine Grimaldi
- Institut de Neurosciences de la Timone, Aix Marseille Univ, CNRS, 27 boulevard Jean Moulin, 13005 Marseille, France
- Laurent U Perrinet
- Institut de Neurosciences de la Timone, Aix Marseille Univ, CNRS, 27 boulevard Jean Moulin, 13005 Marseille, France
|
6
|
Sheliga BM, FitzGibbon EJ. Manipulating the Fourier spectra of stimuli comprising a two-frame kinematogram to study early visual motion-detecting mechanisms: Perception versus short latency ocular-following responses. J Vis 2023; 23:11. [PMID: 37725387 PMCID: PMC10513114 DOI: 10.1167/jov.23.10.11] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2023] [Accepted: 08/20/2023] [Indexed: 09/21/2023] Open
Abstract
Two-frame kinematograms have been extensively used to study motion perception in human vision. Measurements of the direction-discrimination performance limits (Dmax) have been the primary subject of such studies, whereas surprisingly little research has asked how the variability in the spatial frequency content of individual frames affects motion processing. Here, we used two-frame one-dimensional vertical pink noise kinematograms, in which images in both frames were bandpass filtered, with the central spatial frequency of the filter manipulated independently for each image. To avoid spatial aliasing, there was no actual leftward-rightward shift of the image: instead, the phases of all Fourier components of the second image were shifted by ±¼ wavelength with respect to those of the first. We recorded ocular-following responses (OFRs) and perceptual direction discrimination in human subjects. OFRs were in the direction of the Fourier components' shift and showed a smooth decline in amplitude, well fit by Gaussian functions, as the difference between the central spatial frequencies of the first and second images increased. In sharp contrast, 100% correct perceptual direction-discrimination performance was observed when the difference between the central spatial frequencies of the first and second images was small, deteriorating rapidly to chance when it increased further. The perceptual dependencies moved closer to those of the OFRs when subjects were allowed to grade the strength of perceived motion. Response asymmetries common to perceptual judgments and the OFRs suggest that they rely on the same early visual processing mechanisms. The OFR data were quantitatively well described by a model that combined two factors: (1) an excitatory drive determined by a power-law sum of stimulus Fourier components' contributions, scaled by (2) a contrast normalization mechanism. Thus, in addition to traditional studies relying on perceptual reports, the OFRs represent a valuable behavioral tool for studying early motion processing on a fine scale.
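The phase manipulation described in this abstract, shifting every Fourier component of the second frame by ±¼ wavelength (±π/2) rather than translating the image, can be sketched as follows. The single-sinusoid demo at the end is illustrative: for a pure sinusoid, a π/2 phase shift is exactly a quarter-period displacement, turning sin into cos:

```python
import numpy as np

def quarter_cycle_shift(frame1, sign=+1):
    """Build the second kinematogram frame by shifting the phase of every
    Fourier component of a 1-D image by ±¼ wavelength (±π/2), instead of
    translating the image itself (which would risk spatial aliasing)."""
    F = np.fft.rfft(frame1)
    F_shifted = F * np.exp(sign * 1j * np.pi / 2)
    F_shifted[0] = F[0]  # leave the DC term alone; its "phase" is meaningless
    return np.fft.irfft(F_shifted, n=frame1.size)

# demo: for sin(3x), the +pi/2 shift of the single component yields cos(3x),
# i.e. the pattern displaced by exactly a quarter of its wavelength
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
f2 = quarter_cycle_shift(np.sin(3 * x))
```

For broadband (pink noise) frames, every component is displaced by a quarter of its own wavelength, so there is no single consistent image translation, which is the point of the manipulation.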
Affiliation(s)
- Boris M Sheliga
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Edmond J FitzGibbon
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
7. Ladret HJ, Cortes N, Ikan L, Chavane F, Casanova C, Perrinet LU. Cortical recurrence supports resilience to sensory variance in the primary visual cortex. Commun Biol 2023;6:667. [PMID: 37353519] [PMCID: PMC10290066] [DOI: 10.1038/s42003-023-05042-3]
Abstract
Our daily endeavors occur in a complex visual environment, whose intrinsic variability challenges the way we integrate information to make decisions. By processing myriads of parallel sensory inputs, our brain is theoretically able to compute the variance of its environment, a cue known to guide our behavior. Yet, the neurobiological and computational basis of such variance computations is still poorly understood. Here, we quantify the dynamics of sensory variance modulations of cat primary visual cortex neurons. We report two archetypal neuronal responses, one of which is resilient to changes in variance and co-encodes the sensory feature and its variance, improving the population encoding of orientation. The existence of these variance-specific responses can be accounted for by a model of intracortical recurrent connectivity. We thus propose that local recurrent circuits process uncertainty as a generic computation, advancing our understanding of how the brain handles naturalistic inputs.
Affiliation(s)
- Hugo J Ladret
- Institut de Neurosciences de la Timone, UMR 7289, CNRS and Aix-Marseille Université, Marseille, France
- School of Optometry, Université de Montréal, Montréal, Canada
- Nelson Cortes
- School of Optometry, Université de Montréal, Montréal, Canada
- Lamyae Ikan
- School of Optometry, Université de Montréal, Montréal, Canada
- Frédéric Chavane
- Institut de Neurosciences de la Timone, UMR 7289, CNRS and Aix-Marseille Université, Marseille, France
- Laurent U Perrinet
- Institut de Neurosciences de la Timone, UMR 7289, CNRS and Aix-Marseille Université, Marseille, France
8. Ocular-following responses in school-age children. PLoS One 2022;17:e0277443. [DOI: 10.1371/journal.pone.0277443]
Abstract
Ocular following eye movements have provided insights into how the visual system of humans and monkeys processes motion. Recently, it has been shown that they also reliably reveal stereoanomalies and thus might have clinical applications. Their translation from research to clinical settings has, however, been hindered by their small size, which makes them difficult to record, and by a lack of data about their properties in sizable populations. Notably, they have so far only been recorded in adults. We recorded ocular following responses (OFRs), defined as the change in eye position in the 80–160 ms time window following the motion onset of a large textured stimulus, in 14 school-age children (6 to 13 years old, 9 males and 5 females), under recording conditions that closely mimic a clinical setting. The OFRs were acquired non-invasively by a custom-developed high-resolution video-oculography system, described in this study. With this system we were able to non-invasively detect OFRs in all children in short recording sessions. Across subjects, we observed a large variability in the magnitude of the movements (by a factor of 4); OFR magnitude was, however, not correlated with age. A power analysis indicates that even considerably smaller movements could be detected. We conclude that the ocular following system is well developed by age six, and that OFRs can be recorded non-invasively in young children in a clinical setting.
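The OFR definition given in this abstract, the change in eye position over the 80–160 ms window after motion onset, translates directly into code. The sketch below makes that measurement on a single trial; the synthetic eye trace and sampling details in the demo are illustrative, not from the study:

```python
import numpy as np

def ofr_magnitude(eye_pos, t_ms, onset_ms=0.0, window=(80.0, 160.0)):
    """OFR magnitude for one trial: change in eye position (e.g. degrees)
    between `window[0]` and `window[1]` ms after stimulus motion onset.

    eye_pos: 1-D array of eye position samples.
    t_ms:    sample times in ms (same length, monotonically increasing).
    """
    t = np.asarray(t_ms) - onset_ms
    i0 = np.searchsorted(t, window[0])  # first sample at/after 80 ms
    i1 = np.searchsorted(t, window[1])  # first sample at/after 160 ms
    return eye_pos[i1] - eye_pos[i0]

# demo: a trace that is still at fixation until 100 ms, then drifts at 0.01 deg/ms
t = np.arange(0.0, 300.0)                      # 1 kHz sampling, 300 ms
pos = np.where(t > 100, 0.01 * (t - 100), 0.0)
magnitude = ofr_magnitude(pos, t)
```

Averaging this quantity across many trials per direction is the usual way such small open-loop responses are made measurable.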
9. Wiesbrock C, Musall S, Kampa BM. A flexible Python-based touchscreen chamber for operant conditioning reveals improved visual perception of cardinal orientations in mice. Front Cell Neurosci 2022;16:866109. [PMID: 36299493] [PMCID: PMC9588922] [DOI: 10.3389/fncel.2022.866109]
Abstract
Natural scenes are composed of a wide range of edge angles and spatial frequencies, with a strong overrepresentation of vertical and horizontal edges. Correspondingly, many mammalian species are much better at discriminating these cardinal orientations than obliques. A potential reason for this increased performance could be a larger number of neurons in the visual cortex tuned to cardinal orientations, likely an adaptation to natural scene statistics. Such biased angular tuning has recently been shown in the mouse primary visual cortex. However, it is still unknown whether mice also show a perceptual dominance of cardinal orientations. Here, we describe the design of a novel custom-built touchscreen chamber that allows testing natural scene perception and orientation discrimination performance by applying different task designs. Using this chamber, we applied an iterative convergence towards orientation discrimination thresholds for cardinal or oblique orientations in different cohorts of mice. Surprisingly, expert discrimination performance was similar for both groups but showed large inter-individual differences in performance and training time. To study the discrimination of cardinal and oblique stimuli in the same mice, we therefore applied a different training regime in which mice learned to discriminate cardinal and oblique gratings in parallel. Parallel training revealed a higher task performance for cardinal orientations in an early phase of training. The performance for both orientations became similar after prolonged training, suggesting that learning permits equally high perceptual tuning towards oblique stimuli. In summary, our custom-built touchscreen chamber offers a flexible tool to test natural visual perception in rodents and revealed a training-induced increase in the perception of oblique gratings. The touchscreen chamber is entirely open-source, easy to build, and freely available to the scientific community for visual or multimodal behavioral studies. It is also based on the FAIR principles for data management and sharing and could therefore serve as a catalyst for testing the perception of complex and natural visual stimuli across behavioral labs.
Affiliation(s)
- Christopher Wiesbrock
- Systems Neurophysiology, Institute for Zoology, RWTH Aachen University, Aachen, Germany
- Research Training Group 2416 MultiSenses—MultiScales, RWTH Aachen University, Aachen, Germany
- Simon Musall
- Systems Neurophysiology, Institute for Zoology, RWTH Aachen University, Aachen, Germany
- Bioelectronics, Institute of Biological Information Processing-3, Forschungszentrum Jülich, Jülich, Germany
- Björn M. Kampa
- Systems Neurophysiology, Institute for Zoology, RWTH Aachen University, Aachen, Germany
- Research Training Group 2416 MultiSenses—MultiScales, RWTH Aachen University, Aachen, Germany
- JARA BRAIN, Institute for Neuroscience and Medicine, Forschungszentrum Jülich, Jülich, Germany
- Correspondence: Christopher Wiesbrock, Björn M. Kampa
10.
Abstract
Despite the fundamental importance of visual motion processing, our understanding of how the brain represents basic aspects of motion is incomplete. While it is generally believed that direction is the main representational feature of motion, motion processing is also influenced by nondirectional orientation signals that are present in most motion stimuli. Here, we aimed to test whether this nondirectional motion axis contributes to motion perception even when orientation is completely absent from the stimulus. Using stimuli with and without orientation signals, we found that serial dependence in a simple motion direction estimation task was predominantly determined by the orientation of the previous motion stimulus. Moreover, the observed attraction profiles closely matched the characteristic pattern of serial attraction found in orientation perception. Evidently, the sequential integration of motion signals strongly depends on the orientation of motion, indicating a fundamental role of nondirectional orientation in the coding of visual motion direction.
11. Kwon S, Fahrenthold BK, Cavanaugh MR, Huxlin KR, Mitchell JF. Perceptual restoration fails to recover unconscious processing for smooth eye movements after occipital stroke. eLife 2022;11:e67573. [PMID: 35730931] [PMCID: PMC9255960] [DOI: 10.7554/elife.67573]
Abstract
The visual pathways that guide actions do not necessarily mediate conscious perception. Patients with primary visual cortex (V1) damage lose conscious perception but often retain unconscious abilities (e.g. blindsight). Here, we asked if saccade accuracy and post-saccadic following responses (PFRs) that automatically track target motion upon saccade landing are retained when conscious perception is lost. We contrasted these behaviors in the blind and intact fields of 11 chronic V1-stroke patients, and in 8 visually intact controls. Saccade accuracy was relatively normal in all cases. Stroke patients also had normal PFR in their intact fields, but no PFR in their blind fields. Thus, V1 damage did not spare the unconscious visual processing necessary for automatic, post-saccadic smooth eye movements. Importantly, visual training that recovered motion perception in the blind field did not restore the PFR, suggesting a clear dissociation between pathways mediating perceptual restoration and automatic actions in the V1-damaged visual system.
Affiliation(s)
- Sunwoo Kwon
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, United States
- Matthew R Cavanaugh
- Center for Visual Science, University of Rochester, Rochester, United States
- Krystel R Huxlin
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, United States
- Jude F Mitchell
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, United States
12. Serial dependence for oculomotor control depends on early sensory signals. Curr Biol 2022;32:2956-2961.e3. [DOI: 10.1016/j.cub.2022.05.011]
Abstract
To create an accurate percept of the world, the visual system relies on past experience and prior assumptions [1]. For example, although the retinal projection of an object moving in depth changes drastically, we still perceive the object at a constant size and velocity [2,3]. Consequently, if we see the same object with a constant retinal size at two different depth levels, the perceived size differs (illustrated by the Ponzo illusion). Past experience also directly influences perceptual judgments, an effect known as serial dependence [4,5]. Such sequential effects have also been reported for oculomotor behavior, even on the trial-by-trial level [6-10]. An integration of past experiences seems like a smart and sophisticated mechanism to reduce uncertainty and improve behavior in a world full of statistical regularities. By leveraging the Ponzo illusion to dissociate perceived size and speed from retinal signals, we show that serial-dependence effects for oculomotor control are mediated by retinal error signals. These sequential effects likely take place in early sensory processing because they transfer to different visual stimuli. In contrast to recently reported history effects for perceptual decisions [11], sequential effects for oculomotor control deviate from perceptual mechanisms by not integrating spatial context and by ignoring size and velocity constancy. Although this dissociation might appear suboptimal, we argue that this effect reveals the different goals of the oculomotor and perceptual systems. The oculomotor system tries to reduce retinal error signals to bring and keep the target close to the fovea, whereas the visual system interprets retinal input to achieve an accurate representation of the world [12].
13. Speed Estimation for Visual Tracking Emerges Dynamically from Nonlinear Frequency Interactions. eNeuro 2022;9:ENEURO.0511-21.2022. [PMID: 35470228] [PMCID: PMC9113919] [DOI: 10.1523/eneuro.0511-21.2022]
Abstract
Sensing the movement of fast objects within our visual environments is essential for controlling actions. It requires online estimation of motion direction and speed. We probed human speed representation using ocular tracking of stimuli of different statistics. First, we compared ocular responses to single drifting gratings (DGs) with a given set of spatiotemporal frequencies to broadband motion clouds (MCs) of matched mean frequencies. The motion energy distributions of gratings and clouds are, respectively, point-like and elliptical, with the ellipses oriented along the constant-speed axis. Sampling frequency space, MCs elicited stronger, less variable, and speed-tuned responses. DGs yielded weaker and more frequency-tuned responses. Second, we measured responses to patterns made of two or three components covering a range of orientations within Fourier space. Early tracking initiation of the patterns was best predicted by a linear combination of components before nonlinear interactions emerged to shape later dynamics. Inputs are supralinearly integrated along an iso-velocity line and sublinearly integrated away from it. A dynamical probabilistic model characterizes these interactions as excitatory pooling along the iso-velocity line and inhibition along the orthogonal “scale” axis. Such crossed patterns of interaction would appropriately integrate or segment moving objects. This study supports the novel idea that speed estimation is better framed as a dynamic channel interaction organized along speed and scale axes.
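A minimal sketch of a broadband motion cloud of the kind described above: random-phase noise whose spectral energy forms an elongated blob along an iso-velocity line in (spatial frequency, temporal frequency) space, in contrast to the point-like spectrum of a single grating. The 1-D (space × time) construction, bandwidths, and frequencies below are illustrative and not matched to the study:

```python
import numpy as np

def motion_cloud(n_x=128, n_t=128, fx0=0.05, v=2.0, b_x=0.01, b_t=0.01, seed=0):
    """1-D motion cloud: band-pass noise whose energy is centred on spatial
    frequency fx0 (cycles/pixel) drifting at speed v (pixels/frame), so that
    its mean temporal frequency is ft0 = v * fx0 (cycles/frame).

    Returns a (n_t, n_x) movie, frames x pixels, normalized to [-1, 1].
    """
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n_x)
    ft = np.fft.fftfreq(n_t)
    FX, FT = np.meshgrid(fx, ft)
    # Gaussian amplitude envelope around (+/-fx0, -/+v*fx0): an energy blob
    # lying on the iso-velocity line ft = -v * fx (plus its mirror component)
    env = (np.exp(-(FX - fx0) ** 2 / (2 * b_x ** 2) - (FT + v * fx0) ** 2 / (2 * b_t ** 2))
           + np.exp(-(FX + fx0) ** 2 / (2 * b_x ** 2) - (FT - v * fx0) ** 2 / (2 * b_t ** 2)))
    phases = np.exp(2j * np.pi * rng.random((n_t, n_x)))  # random phase per component
    movie = np.real(np.fft.ifft2(env * phases))
    return movie / np.abs(movie).max()

mc = motion_cloud()
```

Widening `b_x` and `b_t` along the iso-velocity line while keeping a fixed mean frequency is what distinguishes such clouds from a single grating of matched mean spatiotemporal frequency.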
14. Yoshimoto S, Hayasaka T. Common and independent processing of visual motion perception and oculomotor response. J Vis 2022;22(4):6. [PMID: 35293955] [PMCID: PMC8944401] [DOI: 10.1167/jov.22.4.6]
Abstract
Visual motion signals are used not only to drive motion perception but also to elicit oculomotor responses. A fundamental question is whether perceptual and oculomotor processing of motion signals shares a common mechanism. This study aimed to address this question using visual motion priming, in which the perceived direction of a directionally ambiguous stimulus is biased in the same (positive priming) or opposite (negative priming) direction as that of a priming stimulus. The priming effect depends on the duration of the priming stimulus: it is assumed that positive and negative priming are mediated by high- and low-level motion systems, respectively. Participants were asked to judge the perceived direction of a π-phase-shifted test grating after a smoothly drifting priming grating presented for varied durations. Their eye movements were measured while the test grating was presented. Perceptual reports and eye movements were discrepant under positive priming and correlated under negative priming on a trial-by-trial basis when an interstimulus interval was inserted between the priming and test stimuli, indicating that the eye movements were evoked by the test stimulus per se. These findings suggest that perceptual and oculomotor responses are induced by a common mechanism at a low level of motion processing but by independent mechanisms at a high level of motion processing.
Affiliation(s)
- Sanae Yoshimoto
- School of Integrated Arts and Sciences, Hiroshima University, Hiroshima, Japan
- Tomoyuki Hayasaka
- School of Integrated Arts and Sciences, Hiroshima University, Hiroshima, Japan
15. Cloherty SL, Yates JL, Graf D, DeAngelis GC, Mitchell JF. Motion Perception in the Common Marmoset. Cereb Cortex 2021;30:2658-2672. [PMID: 31828299] [DOI: 10.1093/cercor/bhz267]
Abstract
Visual motion processing is a well-established model system for studying neural population codes in primates. The common marmoset, a small new world primate, offers unparalleled opportunities to probe these population codes in key motion processing areas, such as cortical areas MT and MST, because these areas are accessible for imaging and recording at the cortical surface. However, little is currently known about the perceptual abilities of the marmoset. Here, we introduce a paradigm for studying motion perception in the marmoset and compare their psychophysical performance with human observers. We trained two marmosets to perform a motion estimation task in which they provided an analog report of their perceived direction of motion with an eye movement to a ring that surrounded the motion stimulus. Marmosets and humans exhibited similar trade-offs in speed versus accuracy: errors were larger and reaction times were longer as the strength of the motion signal was reduced. Reverse correlation on the temporal fluctuations in motion direction revealed that both species exhibited short integration windows; however, marmosets had substantially less nondecision time than humans. Our results provide the first quantification of motion perception in the marmoset and demonstrate several advantages to using analog estimation tasks.
Affiliation(s)
- Shaun L Cloherty
- Department of Brain and Cognitive Sciences, University of Rochester, New York, NY 14627, USA; Department of Physiology, Monash University, Melbourne, VIC 3800, Australia
- Jacob L Yates
- Department of Brain and Cognitive Sciences, University of Rochester, New York, NY 14627, USA
- Dina Graf
- Department of Brain and Cognitive Sciences, University of Rochester, New York, NY 14627, USA
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, University of Rochester, New York, NY 14627, USA
- Jude F Mitchell
- Department of Brain and Cognitive Sciences, University of Rochester, New York, NY 14627, USA
16
Park ASY, Schütz AC. Selective postsaccadic enhancement of motion perception. Vision Res 2021; 188:42-50. [PMID: 34280816 PMCID: PMC7611369 DOI: 10.1016/j.visres.2021.06.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2020] [Revised: 06/15/2021] [Accepted: 06/20/2021] [Indexed: 11/23/2022]
Abstract
Saccadic eye movements can drastically affect motion perception: during saccades, the stationary surround is swept rapidly across the retina and contrast sensitivity is suppressed. However, after saccades, contrast sensitivity is enhanced for color and high spatial frequency stimuli and reflexive tracking movements known as ocular following responses (OFR) are enhanced in response to large field motion. Additionally, OFR and postsaccadic enhancement of neural activity in primate motion processing areas are well correlated. It is not yet known how this postsaccadic enhancement arises. Therefore, we tested whether the enhancement can be explained by changes in the balance of centre-surround antagonism in motion processing, where spatial summation is favoured at low contrasts and surround suppression is favoured at high contrasts. We found motion perception was selectively enhanced immediately after saccades for high spatial frequency stimuli, consistent with previously reported selective postsaccadic enhancement of contrast sensitivity for flashed high spatial frequency stimuli. The observed enhancement was also associated with changes in spatial summation and suppression, as well as contrast facilitation and inhibition, suggesting that motion processing is augmented to maximise visual perception immediately after saccades. The results highlight that spatial and contrast properties of underlying neural mechanisms for motion processing can be affected by an antecedent saccade for highly detailed stimuli and are in line with studies that show behavioural and neuronal enhancement of motion processing in non-human primates.
Affiliation(s)
- Adela S Y Park
- Experimental and Biological Psychology, University of Marburg, Marburg, Germany.
- Alexander C Schütz
- Experimental and Biological Psychology, University of Marburg, Marburg, Germany; Center for Mind, Brain and Behavior, University of Marburg, Marburg, Germany
17
Isherwood ZJ, Clifford CWG, Schira MM, Roberts MM, Spehar B. Nice and slow: Measuring sensitivity and visual preference toward naturalistic stimuli varying in their amplitude spectra in space and time. Vision Res 2021; 181:47-60. [PMID: 33578184 DOI: 10.1016/j.visres.2021.01.001] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2020] [Revised: 01/06/2021] [Accepted: 01/06/2021] [Indexed: 10/22/2022]
Abstract
The 1/fα amplitude spectrum is a statistical property of natural scenes characterising a specific distribution of spatial and temporal frequencies and their associated luminance intensities. This property has been studied extensively in the spatial domain whereby sensitivity and visual preference overlap and peak for slopes within the natural range (α ≈ 1), but remains relatively less studied in the temporal domain. Here, we used a 4AFC task to measure sensitivity and a 2AFC task to measure visual preference across a wide range of spatial (α = 0.25, 1.25, 2.25) and temporal (α = 0.25 to 2.50, step size: 0.25) slope conditions. Stimuli with a shallow temporal slope modulate rapidly (e.g. 0.25), whereas stimuli with a steep slope modulate slowly (e.g. 2.25). Interestingly, sensitivity and visual preference did not closely overlap. While the sensitivity of the visual system is highest for our stimulus with an intermediate modulation rate (1.25), which is most abundant in nature, the stimulus with the slowest modulation rate (2.25) was most preferred. It seems sensible for the visual system to be sensitive to spatiotemporal spectra that most commonly exist in nature (α ≈ 1). However, it is possible that preference might be related to what these properties signal in the natural world. Consider the cases of waves slowly vs. rapidly crashing on a beach or fast vs. slow animals. In both instances the slowest option is often the safest and preferential, suggesting that the temporal 1/fα amplitude spectrum provides additional information that may indicate preferred environmental conditions.
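The 1/fα manipulation described above can be sketched numerically. The following is an illustrative reconstruction, not the authors' stimulus code; the function name, normalisation, and parameter values are our own assumptions:

```python
import numpy as np

def one_over_f_noise(n_samples, alpha, rng=None):
    """Generate noise whose amplitude spectrum falls off as 1/f**alpha.

    Illustrative sketch only: shape white noise in the frequency domain,
    leaving the DC component untouched, then normalise to unit contrast.
    """
    rng = np.random.default_rng(rng)
    white = rng.standard_normal(n_samples)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples)
    scale = np.ones_like(freqs)
    scale[1:] = freqs[1:] ** (-alpha)      # 1/f**alpha amplitude envelope
    shaped = np.fft.irfft(spectrum * scale, n=n_samples)
    return shaped / shaped.std()           # unit-variance output

# A temporal modulation signal with an intermediate slope (alpha = 1.25)
signal = one_over_f_noise(1024, alpha=1.25, rng=0)
```

A steeper alpha (e.g. 2.25) concentrates power at low frequencies, producing the slowly modulating stimuli the abstract describes.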
Affiliation(s)
- Zoey J Isherwood
- School of Psychology, UNSW Sydney, Sydney, NSW 2052, Australia; School of Psychology, University of Wollongong, Wollongong, NSW 2522, Australia; Department of Psychology, University of Nevada, Reno, NV 89557, USA.
- Mark M Schira
- School of Psychology, University of Wollongong, Wollongong, NSW 2522, Australia; Neuroscience Research Australia, Randwick, NSW 2031, Australia
- Michelle M Roberts
- School of Psychology, UNSW Sydney, Sydney, NSW 2052, Australia; School of Psychology, University of Wollongong, Wollongong, NSW 2522, Australia
- Branka Spehar
- School of Psychology, UNSW Sydney, Sydney, NSW 2052, Australia
18
Badde S, Myers CF, Yuval-Greenberg S, Carrasco M. Oculomotor freezing reflects tactile temporal expectation and aids tactile perception. Nat Commun 2020; 11:3341. [PMID: 32620746 PMCID: PMC7335189 DOI: 10.1038/s41467-020-17160-1] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2020] [Accepted: 06/08/2020] [Indexed: 01/10/2023] Open
Abstract
The oculomotor system keeps the eyes steady in expectation of visual events. Here, recording microsaccades while people performed a tactile frequency discrimination task enabled us to test whether the oculomotor system shows an analogous preparatory response for unrelated tactile events. We manipulated the temporal predictability of tactile targets using tactile cues, which preceded the target by either constant (high predictability) or variable (low predictability) time intervals. We find that microsaccades are inhibited prior to tactile targets and more so for constant than variable intervals, revealing a tight crossmodal link between tactile temporal expectation and oculomotor action. These findings portray oculomotor freezing as a marker of crossmodal temporal expectation. Moreover, microsaccades occurring around the tactile target presentation are associated with reduced task performance, suggesting that oculomotor freezing mitigates potential detrimental, concomitant effects of microsaccades and revealing a crossmodal coupling between tactile perception and oculomotor action.
Affiliation(s)
- Stephanie Badde
- Department of Psychology, New York University, 6 Washington Place, New York, NY, 10003, USA.
- Center for Neural Science, New York University, 6 Washington Place, New York, NY, 10003, USA.
- Caroline F Myers
- Department of Psychology, New York University, 6 Washington Place, New York, NY, 10003, USA
- Shlomit Yuval-Greenberg
- School of Psychological Sciences, Tel-Aviv University, Ramat Aviv, 6997801, Tel Aviv-Yafo, Israel
- Sagol School of Neuroscience, Tel-Aviv University, Ramat Aviv, 6997801, Tel Aviv-Yafo, Israel
- Marisa Carrasco
- Department of Psychology, New York University, 6 Washington Place, New York, NY, 10003, USA
- Center for Neural Science, New York University, 6 Washington Place, New York, NY, 10003, USA
19
Kwon S, Rolfs M, Mitchell JF. Presaccadic motion integration drives a predictive postsaccadic following response. J Vis 2020; 19:12. [PMID: 31557762 DOI: 10.1167/19.11.12] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Saccadic eye movements sample the visual world and ensure high acuity across the visual field. To compensate for delays in processing, saccades to moving targets require predictions: The eyes must intercept the target's future position to then pursue its direction of motion. Although prediction is crucial to voluntary pursuit, it is unclear whether it is an obligatory feature of saccade planning. Saccade planning involves an involuntary enhanced processing of the target, called presaccadic attention. Does this presaccadic attention recruit smooth eye movements automatically? To test this, we had human participants perform a saccade to one of four apertures, which were static, but each contained a random dot field with motion tangential to the required saccade. In this task, saccades were deviated along the direction of target motion, and the eyes exhibited a following response upon saccade landing. This postsaccadic following response (PFR) increased with spatial uncertainty of the target position and persisted even when we removed the motion stimulus in midflight of the saccade, confirming that it relied on presaccadic information. Motion from 50-100 ms prior to the saccade had the strongest influence on PFR, consistent with the time course of perceptual enhancements reported in presaccadic attention. Finally, the PFR magnitude related linearly to the logarithm of stimulus velocity and generally had low gain, similar to involuntary ocular following movements commonly observed after sudden motion onsets. These results suggest that presaccadic attention selects motion features of targets predictively, presumably to ensure successful immediate tracking of saccade targets in motion.
Affiliation(s)
- Sunwoo Kwon
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA; Center for Visual Science, University of Rochester, Rochester, NY, USA
- Martin Rolfs
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience, Humboldt-Universität zu Berlin, Berlin, Germany
- Jude F Mitchell
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA; Center for Visual Science, University of Rochester, Rochester, NY, USA
20
Sheliga BM, Quaia C, FitzGibbon EJ, Cumming BG. Short-latency ocular-following responses: Weighted nonlinear summation predicts the outcome of a competition between two sine wave gratings moving in opposite directions. J Vis 2020; 20:1. [PMID: 31995136 PMCID: PMC7239641 DOI: 10.1167/jov.20.1.1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2019] [Accepted: 11/29/2019] [Indexed: 11/24/2022] Open
Abstract
We recorded horizontal ocular-following responses to pairs of superimposed vertical sine wave gratings moving in opposite directions in human subjects. This configuration elicits a nonlinear interaction: when the relative contrast of the gratings is changed, the response transitions abruptly between the responses elicited by either grating alone. We explore this interaction in pairs of gratings that differ in spatial and temporal frequency and show that all cases can be described as a weighted sum of the responses to each grating presented alone, where the weights are a nonlinear function of stimulus contrast: a nonlinear weighted summation model. The weights depended on the spatial and temporal frequency of the component grating. In many cases the dominant component was not the one that produced the strongest response when presented alone, implying that the neuronal circuits assigning weights precede the stages at which motor responses to visual motion are generated. When the stimulus area was reduced, the relationship between spatial frequency and weight shifted to higher frequencies. This finding may reflect a contribution from surround suppression. The nonlinear interaction is strongest when the two components have similar spatial frequencies, suggesting that the nonlinearity may reflect interactions within single spatial frequency channels. This framework can be extended to stimuli composed of more than two components: our model was able to predict the responses to stimuli composed of three gratings. That this relatively simple model successfully captures the ocular-following responses over a wide range of spatial/temporal frequency and contrast parameters suggests that these interactions reflect a simple mechanism.
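The weighted summation scheme described above can be sketched as follows. The Naka-Rushton-style contrast weighting and all parameter values here are illustrative assumptions of ours, not the paper's fitted model:

```python
def ofr_response(r1, r2, c1, c2, n=2.0, c50=0.1):
    """Normalised weighted sum of two component responses.

    r1, r2 : responses to each grating presented alone (signed; opposite
             directions have opposite signs)
    c1, c2 : grating contrasts in [0, 1]
    Weights follow a hypothetical Naka-Rushton-style contrast
    nonlinearity; exponent n and semi-saturation c50 are made-up values.
    """
    w1 = c1 ** n / (c1 ** n + c50 ** n)
    w2 = c2 ** n / (c2 ** n + c50 ** n)
    return (w1 * r1 + w2 * r2) / (w1 + w2)

# Opposite-direction gratings: +1 (rightward) vs -1 (leftward)
balanced = ofr_response(1.0, -1.0, c1=0.3, c2=0.3)   # equal contrasts cancel
biased = ofr_response(1.0, -1.0, c1=0.5, c2=0.05)    # high-contrast grating wins
```

With a steep contrast nonlinearity, a modest contrast imbalance makes the response switch abruptly toward the higher-contrast component, as the abstract describes.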
21
Kim S, Park J, Lee J. Effect of Prior Direction Expectation on the Accuracy and Precision of Smooth Pursuit Eye Movements. Front Syst Neurosci 2019; 13:71. [PMID: 32038182 PMCID: PMC6988807 DOI: 10.3389/fnsys.2019.00071] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2019] [Accepted: 11/11/2019] [Indexed: 12/23/2022] Open
Abstract
The integration of sensory with top–down cognitive signals for generating appropriate sensory–motor behaviors is an important issue in understanding the brain’s information processes. Recent studies have demonstrated that the interplay between sensory and high-level signals in oculomotor behavior could be explained by Bayesian inference. Specifically, prior knowledge for motion speed introduces a bias in the speed of smooth pursuit eye movements. The other important prediction of Bayesian inference is variability reduction by prior expectation; however, there is insufficient evidence in oculomotor behaviors to support this prediction. In the present study, we trained monkeys to switch the prior expectation about motion direction and independently controlled the strength of the motion stimulus. Under identical sensory stimulus conditions, we tested if prior knowledge about the motion direction reduced the variability of open-loop smooth pursuit eye movements. We observed a significant reduction when the prior expectation was strong; this was consistent with the prediction of Bayesian inference. Taking advantage of the open-loop smooth pursuit, we investigated the temporal dynamics of the effect of the prior on pursuit direction bias and variability. This analysis demonstrated that the strength of the sensory evidence depended not only on the strength of the sensory stimulus but also on the time required for the pursuit system to form a neural sensory representation. Finally, we demonstrated that the changes in variability and directional bias induced by prior knowledge were quantitatively explained by the Bayesian observer model.
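The Bayesian prediction tested here, that a prior expectation reduces response variability, can be illustrated with a standard Gaussian cue-combination sketch; the parameter values below are made up for illustration and are not the authors' fits:

```python
def posterior_direction(sensory_dir, sigma_s, prior_dir, sigma_p):
    """Combine a Gaussian sensory likelihood with a Gaussian prior.

    Returns the posterior mean and variance. The posterior variance is
    always smaller than the likelihood variance, which is the
    variability-reduction prediction tested in the paper.
    """
    w = (1 / sigma_s ** 2) / (1 / sigma_s ** 2 + 1 / sigma_p ** 2)
    mean = w * sensory_dir + (1 - w) * prior_dir
    var = 1 / (1 / sigma_s ** 2 + 1 / sigma_p ** 2)
    return mean, var

# Same weak sensory evidence (10 deg off the prior), strong vs weak prior at 0 deg
mean_strong, var_strong = posterior_direction(10.0, sigma_s=8.0, prior_dir=0.0, sigma_p=4.0)
mean_weak, var_weak = posterior_direction(10.0, sigma_s=8.0, prior_dir=0.0, sigma_p=20.0)
```

A stronger (narrower) prior both pulls the estimate toward the expected direction, producing the directional bias, and shrinks the posterior variance, producing the variability reduction.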
Affiliation(s)
- Seolmin Kim
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, South Korea; Department of Biomedical Engineering, Sungkyunkwan University, Suwon, South Korea
- Jeongjun Park
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, South Korea; Department of Biomedical Engineering, Sungkyunkwan University, Suwon, South Korea
- Joonyeol Lee
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, South Korea; Department of Biomedical Engineering, Sungkyunkwan University, Suwon, South Korea
22
Vetter P, Badde S, Phelps EA, Carrasco M. Emotional faces guide the eyes in the absence of awareness. eLife 2019; 8:43467. [PMID: 30735123 PMCID: PMC6382349 DOI: 10.7554/elife.43467] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2018] [Accepted: 02/07/2019] [Indexed: 12/14/2022] Open
Abstract
The ability to respond quickly to a threat is a key survival skill. When perceived with awareness, threat-related emotional information, such as an angry or fearful face, not only confers perceptual advantages but also guides rapid actions such as eye movements. Emotional information that is suppressed from awareness still confers perceptual and attentional benefits. However, it is unknown whether suppressed emotional information can directly guide actions, or whether emotional information has to enter awareness to do so. We suppressed emotional faces from awareness using continuous flash suppression and tracked eye gaze position. Under successful suppression, as indicated by objective and subjective measures, gaze moved towards fearful faces, but away from angry faces. Our findings reveal that: (1) threat-related emotional stimuli can guide eye movements in the absence of visual awareness; (2) threat-related emotional face information guides distinct oculomotor actions depending on the type of threat conveyed by the emotional expression.
Affiliation(s)
- Petra Vetter
- Department of Psychology, Center for Neural Science, New York University, New York, United States; Department of Psychology, Royal Holloway, University of London, Egham, United Kingdom
- Stephanie Badde
- Department of Psychology, Center for Neural Science, New York University, New York, United States
- Elizabeth A Phelps
- Department of Psychology, Center for Neural Science, New York University, New York, United States; Department of Psychology, Harvard University, Cambridge, United States
- Marisa Carrasco
- Department of Psychology, Center for Neural Science, New York University, New York, United States
23
Speed-Selectivity in Retinal Ganglion Cells is Sharpened by Broad Spatial Frequency, Naturalistic Stimuli. Sci Rep 2019; 9:456. [PMID: 30679564 PMCID: PMC6345785 DOI: 10.1038/s41598-018-36861-8] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2018] [Accepted: 11/09/2018] [Indexed: 11/28/2022] Open
Abstract
Motion detection represents one of the critical tasks of the visual system and has motivated a large body of research. However, it remains unclear precisely why the response of retinal ganglion cells (RGCs) to simple artificial stimuli does not predict their response to complex, naturalistic stimuli. To explore this topic, we use Motion Clouds (MC), which are synthetic textures that preserve properties of natural images and are fully parameterized, in particular by modulating the spatiotemporal spectrum complexity of the stimulus by adjusting the frequency bandwidths. By stimulating the retina of the diurnal rodent Octodon degus with MC, we show that the RGCs respond to increasingly complex stimuli by narrowing their tuning curves in response to motion. At the level of the population, complex stimuli produce a sparser code while preserving movement information; therefore, the stimuli are encoded more efficiently. Interestingly, these properties were observed throughout different populations of RGCs. Thus, our results reveal that the response at the level of RGCs is modulated by the naturalness of the stimulus, in particular for motion, which suggests that the tuning to the statistics of natural images already emerges at the level of the retina.
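Population sparseness of the kind invoked above is often quantified with the Treves-Rolls index; the sketch below is a generic illustration of that measure and may differ from the exact metric the authors computed:

```python
import numpy as np

def treves_rolls_sparseness(rates):
    """Treves-Rolls population sparseness index.

    Equals 1 when all cells fire equally (dense code) and approaches
    1/N when a single cell carries all the activity (sparse code).
    """
    r = np.asarray(rates, dtype=float)
    return (r.mean() ** 2) / np.mean(r ** 2)

dense = treves_rolls_sparseness([5.0, 5.0, 5.0, 5.0])    # broad tuning
sparse = treves_rolls_sparseness([20.0, 0.0, 0.0, 0.0])  # narrowed tuning
```

Under this index, narrower tuning curves across the population yield a lower value, i.e. a sparser and potentially more efficient code, as the abstract argues.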
24
Quaia C, FitzGibbon EJ, Optican LM, Cumming BG. Binocular Summation for Reflexive Eye Movements: A Potential Diagnostic Tool for Stereodeficiencies. Invest Ophthalmol Vis Sci 2018; 59:5816-5822. [PMID: 30521669 PMCID: PMC6284466 DOI: 10.1167/iovs.18-24520] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2018] [Accepted: 10/30/2018] [Indexed: 11/24/2022] Open
Abstract
Purpose Stereoscopic vision, by detecting interocular correlations, enhances depth perception. Stereodeficiencies often emerge during the first months of life, and left untreated can lead to severe loss of visual acuity in one eye and/or strabismus. Early treatment results in much better outcomes, yet diagnostic tests for infants are cumbersome and not widely available. We asked whether reflexive eye movements, which in principle can be recorded even in infants, can be used to identify stereodeficiencies. Methods Reflexive ocular following eye movements induced by fast drifting noise stimuli were recorded in 10 adult human participants (5 with normal stereoacuity, 5 stereodeficient). To manipulate interocular correlation, the stimuli shown to the two eyes were either identical, different, or had opposite contrast. Monocular presentations were also interleaved. The participants were asked to passively fixate the screen. Results In the participants with normal stereoacuity, the responses to binocular identical stimuli were significantly larger than those induced by binocular opposite stimuli. In the stereodeficient participants the responses were indistinguishable. Despite the small size of ocular following responses, 40 trials, corresponding to less than 2 minutes of testing, were sufficient to reliably differentiate normal from stereodeficient participants. Conclusions Ocular-following eye movements, because of their reliance on cortical neurons sensitive to interocular correlations, are affected by stereodeficiencies. Because these eye movements can be recorded noninvasively and with minimal participant cooperation, they can potentially be measured even in infants and might thus provide a useful screening tool for this currently underserved population.
Affiliation(s)
- Christian Quaia
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, U.S. Department of Health and Human Services, Bethesda, Maryland, United States
- Edmond J FitzGibbon
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, U.S. Department of Health and Human Services, Bethesda, Maryland, United States
- Lance M Optican
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, U.S. Department of Health and Human Services, Bethesda, Maryland, United States
- Bruce G Cumming
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, U.S. Department of Health and Human Services, Bethesda, Maryland, United States
25
Vacher J, Meso AI, Perrinet LU, Peyré G. Bayesian Modeling of Motion Perception Using Dynamical Stochastic Textures. Neural Comput 2018; 30:3355-3392. [DOI: 10.1162/neco_a_01142] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
A common practice to account for psychophysical biases in vision is to frame them as consequences of a dynamic process relying on optimal inference with respect to a generative model. The study presented here details the complete formulation of such a generative model intended to probe visual motion perception with a dynamic texture model. It is derived in a set of axiomatic steps constrained by biological plausibility. We extend previous contributions by detailing three equivalent formulations of this texture model. First, the composite dynamic textures are constructed by the random aggregation of warped patterns, which can be viewed as three-dimensional gaussian fields. Second, these textures are cast as solutions to a stochastic partial differential equation (sPDE). This essential step enables real-time, on-the-fly texture synthesis using time-discretized autoregressive processes. It also allows for the derivation of a local motion-energy model, which corresponds to the log likelihood of the probability density. The log likelihoods are essential for the construction of a Bayesian inference framework. We use the dynamic texture model to psychophysically probe speed perception in humans using zoom-like changes in the spatial frequency content of the stimulus. The human data replicate previous findings showing perceived speed to be positively biased by spatial frequency increments. A Bayesian observer who combines a gaussian likelihood centered at the true speed and a spatial frequency dependent width with a “slow-speed prior” successfully accounts for the perceptual bias. More precisely, the bias arises from a decrease in the observer's likelihood width estimated from the experiments as the spatial frequency increases. Such a trend is compatible with the trend of the dynamic texture likelihood width.
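The slow-speed-prior account summarized above can be sketched as a one-line MAP estimate. The widths below are hypothetical, chosen only to show the direction of the effect: a narrower likelihood (as at higher spatial frequency) yields a faster, less biased speed percept:

```python
def perceived_speed(true_speed, sigma_like, sigma_prior):
    """MAP speed estimate under a zero-centred 'slow-speed' Gaussian prior.

    With a prior mean of zero, the posterior mean is simply the true
    speed shrunk by the relative precision of the likelihood.
    """
    w = (1 / sigma_like ** 2) / (1 / sigma_like ** 2 + 1 / sigma_prior ** 2)
    return w * true_speed

# Per the abstract, the likelihood narrows as spatial frequency increases,
# so a high-SF stimulus is perceived as faster than a low-SF one:
low_sf = perceived_speed(10.0, sigma_like=4.0, sigma_prior=5.0)
high_sf = perceived_speed(10.0, sigma_like=2.0, sigma_prior=5.0)
```

Both estimates remain below the true speed (the slow-speed bias), but the high-spatial-frequency case is shrunk less, reproducing the positive bias with spatial frequency increments reported in the abstract.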
Affiliation(s)
- Jonathan Vacher
- Département de Mathématique et Applications, École Normale Supérieure, Paris 75005, France; UNIC, Gif-sur-Yvette 91190, France; and CNRS, France
- Andrew Isaac Meso
- Institut de Neurosciences de la Timone, Marseille 13005, France, and Faculty of Science and Technology, Bournemouth University, Poole BH12 5BB, U.K.
- Laurent U. Perrinet
- Institut de Neurosciences de la Timone, Marseille 13005, France, and CNRS, France
- Gabriel Peyré
- Département de Mathématique et Applications, École Normale Supérieure, Paris 75005, France, and CNRS, France
26
de'Sperati C, Thornton IM. Motion prediction at low contrast. Vision Res 2018; 154:85-96. [PMID: 30471309 DOI: 10.1016/j.visres.2018.11.004] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2018] [Revised: 10/26/2018] [Accepted: 11/06/2018] [Indexed: 11/17/2022]
Abstract
Accurate motion prediction is fundamental for survival. How does this reconcile with the well-known speed underestimation of low-contrast stimuli? Here we asked whether this contrast-dependent perceptual bias is retained in motion prediction under two different saccadic planning conditions: making a saccade to an occluded moving target, and real-time gaze interaction with multiple moving targets. In a first experiment, observers made a saccade to the mentally extrapolated position of a moving target (imagery condition). In a second experiment, observers had to prevent collisions among multiple moving targets by glancing at them through a gaze-contingent display or by hitting them with the touchpad cursor (interaction condition). In both experiments, target contrast was manipulated. We found that, whereas saccades to the imagined moving target were systematically biased by contrast, the gaze interaction performance, as measured by missed collisions, was generally unaffected, even though low-contrast targets looked slower. Interceptive actions increased at low contrast, but only when the gaze was used for interaction. Thus, perceptual speed underestimation transfers to saccades made to imagined low-contrast targets, without necessarily being detrimental to effective performance when real-time interaction with multiple targets is required. This differential effect of stimulus contrast suggests that in complex dynamic conditions saccades are rather tolerant to visual speed biases.
Affiliation(s)
- Claudio de'Sperati
- Faculty of Psychology, Laboratory of Action, Perception and Cognition, Vita-Salute San Raffaele University, via Olgettina 58, 20132 Milano, Italy; Experimental Psychology Unit, Division of Neuroscience, San Raffaele Scientific Institute, via Olgettina 60, 20132 Milano, Italy.
- Ian M Thornton
- Department of Cognitive Science, Faculty of Media and Knowledge Sciences, University of Malta, Msida MSD 2080, Malta
27
Abstract
Psychophysical studies and our own subjective experience suggest that, in natural viewing conditions (i.e., at medium to high contrasts), monocularly and binocularly viewed scenes appear very similar, with the exception of the improved depth perception provided by stereopsis. This phenomenon is usually described as a lack of binocular summation. We show here that there is an exception to this rule: Ocular following eye movements induced by the sudden motion of a large stimulus, which we recorded from three human subjects, are much larger when both eyes see the moving stimulus than when only one eye does. We further discovered that this binocular advantage is a function of the interocular correlation between the two monocular images: It is maximal when they are identical, and reduced when the two eyes are presented with different images. This is possible only if the neurons that underlie ocular following are sensitive to binocular disparity.
Affiliation(s)
- Christian Quaia
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Department of Health and Human Services, Bethesda, MD, USA
- Lance M Optican
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Department of Health and Human Services, Bethesda, MD, USA
- Bruce G Cumming
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Department of Health and Human Services, Bethesda, MD, USA
28
Abstract
Visual motion processing can be conceptually divided into two levels. In the lower level, local motion signals are detected by spatiotemporal-frequency-selective sensors and then integrated into a motion vector flow. Although the model based on V1-MT physiology provides a good computational framework for this level of processing, it needs to be updated to fully explain psychophysical findings about motion perception, including complex motion signal interactions in the spatiotemporal-frequency and space domains. In the higher level, the velocity map is interpreted. Although there are many motion interpretation processes, we highlight the recent progress in research on the perception of material (e.g., specular reflection, liquid viscosity) and on animacy perception. We then consider possible linking mechanisms of the two levels and propose intrinsic flow decomposition as the key problem. To provide insights into computational mechanisms of motion perception, in addition to psychophysics and neurosciences, we review machine vision studies seeking to solve similar problems.
Affiliation(s)
- Shin'ya Nishida
- NTT Communication Science Labs, Nippon Telegraph and Telephone Corporation, Atsugi, Kanagawa 243-0198, Japan
- Takahiro Kawabe
- NTT Communication Science Labs, Nippon Telegraph and Telephone Corporation, Atsugi, Kanagawa 243-0198, Japan
- Masataka Sawayama
- NTT Communication Science Labs, Nippon Telegraph and Telephone Corporation, Atsugi, Kanagawa 243-0198, Japan
- Taiki Fukiage
- NTT Communication Science Labs, Nippon Telegraph and Telephone Corporation, Atsugi, Kanagawa 243-0198, Japan
29
Botschko Y, Yarkoni M, Joshua M. Smooth Pursuit Eye Movement of Monkeys Naive to Laboratory Setups With Pictures and Artificial Stimuli. Front Syst Neurosci 2018; 12:15. [PMID: 29719503 PMCID: PMC5913553 DOI: 10.3389/fnsys.2018.00015] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2018] [Accepted: 03/28/2018] [Indexed: 12/03/2022] Open
Abstract
When animal behavior is studied in a laboratory environment, the animals are often extensively trained to shape their behavior. A crucial question is whether the behavior observed after training is part of the animal's natural repertoire or represents an outlier relative to its natural capabilities. This can be investigated by assessing the extent to which the target behavior is manifested during the initial stages of training and by following the time course of learning. We explored this issue by examining smooth pursuit eye movements in monkeys naive to smooth pursuit tasks. We recorded the eye movements of monkeys from the first days of training on a step-ramp paradigm, using bright spots, monkey pictures, and scrambled versions of those pictures as moving targets. We found that during the initial stages of training, pursuit initiation was largest for the monkey pictures and, in some direction conditions, eye velocity was close to target velocity. When pursuit initiation was strong, the monkeys mostly continued to track the target with smooth pursuit movements while correcting for displacement errors with small saccades. Two weeks of training increased pursuit eye velocity in all stimulus conditions, whereas further extensive training enhanced pursuit only slightly more. Training also decreased the coefficient of variation of eye velocity. Anisotropies that grade pursuit across directions were observed from the first day of training and mostly persisted across training. Thus, smooth pursuit in the step-ramp paradigm appears to be part of the natural repertoire of monkey behavior, and training adjusts this natural predisposition.
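The step-ramp paradigm used here (due to Rashbass) steps the target backward before ramping it forward, so that the target crosses the fixation point at roughly the latency of pursuit and catch-up saccades are minimized. A sketch of the target trajectory, with illustrative parameter values:

```python
import numpy as np

def step_ramp(velocity_deg_s, crossing_time_s=0.2, duration_s=1.0, rate_hz=1000):
    """Target position for a Rashbass step-ramp trial.

    The target steps backward by velocity * crossing_time and then ramps
    forward at constant velocity, crossing the fixation point (position 0)
    at t = crossing_time.
    """
    t = np.arange(0, duration_s, 1.0 / rate_hz)
    step_deg = -velocity_deg_s * crossing_time_s   # initial backward step (deg)
    return t, step_deg + velocity_deg_s * t        # position in deg

t, pos = step_ramp(20.0)                 # 20 deg/s ramp, 200 ms crossing time
crossing_index = int(np.argmin(np.abs(pos)))
print(t[crossing_index])                 # target crosses fixation near 0.2 s
```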
Affiliation(s)
- Yehudit Botschko, Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Merav Yarkoni, Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Mati Joshua, Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
|
30
|
Suppression and Contrast Normalization in Motion Processing. J Neurosci 2017; 37:11051-11066. [PMID: 29018158 DOI: 10.1523/jneurosci.1572-17.2017] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2017] [Revised: 08/11/2017] [Accepted: 08/18/2017] [Indexed: 11/21/2022] Open
Abstract
Sensory neurons are activated by a range of stimuli to which they are said to be tuned. Usually, they are also suppressed by another set of stimuli that have little effect when presented in isolation. The interactions between preferred and suppressive stimuli are often quite complex and vary across neurons, even within a single area, making it difficult to infer their collective effect on behavioral responses mediated by activity across populations of neurons. Here, we investigated this issue by measuring, in human subjects (three males), the suppressive effect of static masks on the ocular following responses induced by moving stimuli. We found a wide range of effects, which depend in a nonlinear and nonseparable manner on the spatial frequency, contrast, and spatial location of both stimulus and mask. Under some conditions, the presence of the mask can be seen as scaling the contrast of the driving stimulus. Under other conditions, the effect is more complex, involving also a direct scaling of the behavioral response. All of this complexity at the behavioral level can be captured by a simple model in which stimulus and mask interact nonlinearly at two stages, one monocular and one binocular. The nature of the interactions is compatible with those observed at the level of single neurons in primates, usually broadly described as divisive normalization, without having to invoke any scaling mechanism.

SIGNIFICANCE STATEMENT The response of sensory neurons to their preferred stimulus is often modulated by stimuli that are not effective when presented alone. Individual neurons can exhibit multiple modulatory effects, with considerable variability across neurons even in a single area. Such diversity has made it difficult to infer the impact of these modulatory mechanisms on behavioral responses. Here, we report the effects of a stationary mask on the reflexive eye movements induced by a moving stimulus. A model with two stages, each incorporating a divisive modulatory mechanism, reproduces our experimental results and suggests that qualitative variability of masking effects in cortical neurons might arise from differences in the extent to which such effects are inherited from earlier stages.
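The two-stage account in this abstract can be caricatured in a few lines: at each stage, a divisive normalization pool that includes the static mask scales down the response to the moving stimulus, so the mask suppresses the response without moving at all. The constants and the contrast-to-drive mapping below are illustrative, not the authors' fitted model:

```python
def divisive_stage(drive, mask, sigma):
    """Divisive normalization: the mask contributes to the pool, not the drive."""
    return drive / (sigma + drive + mask)

def ofr_model(stim_contrast, mask_contrast, sigma1=0.05, sigma2=0.1):
    """Two cascaded normalization stages (monocular, then binocular)."""
    monocular = divisive_stage(stim_contrast, mask_contrast, sigma1)
    return divisive_stage(monocular, mask_contrast, sigma2)

no_mask = ofr_model(0.3, 0.0)
with_mask = ofr_model(0.3, 0.3)
print(no_mask > with_mask)   # the static mask suppresses the response
```

Because the mask enters the pool at both stages, its effect can look like contrast scaling in one regime and like direct response scaling in another, which is the behavioral pattern the abstract describes.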
|
31
|
Probing Early Motion Processing With Eye Movements: Differences of Vestibular Migraine, Migraine With and Without Aura in the Attack Free Interval. Headache 2017; 58:275-286. [DOI: 10.1111/head.13185] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/08/2017] [Indexed: 01/03/2023]
|
32
|
Kreyenmeier P, Fooken J, Spering M. Context effects on smooth pursuit and manual interception of a disappearing target. J Neurophysiol 2017; 118:404-415. [PMID: 28515287 DOI: 10.1152/jn.00217.2017] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2017] [Revised: 04/25/2017] [Accepted: 05/12/2017] [Indexed: 11/22/2022] Open
Abstract
In our natural environment, we interact with moving objects that are surrounded by richly textured, dynamic visual contexts. Yet most laboratory studies on vision and movement show visual objects in front of uniform gray backgrounds. Context effects on eye movements have been widely studied, but it is less well known how visual contexts affect hand movements. Here we ask whether eye and hand movements integrate motion signals from target and context similarly or differently, and whether context effects on eye and hand change over time. We developed a track-intercept task requiring participants to track the initial launch of a moving object ("ball") with smooth pursuit eye movements. The ball disappeared after a brief presentation, and participants had to intercept it in a designated "hit zone." In two experiments (n = 18 human observers each), the ball was shown in front of a uniform or a textured background that either was stationary or moved along with the target. Eye and hand movement latencies and speeds were similarly affected by the visual context, but eye and hand interception (eye position at time of interception, and hand interception timing error) did not differ significantly between context conditions. Eye and hand interception timing errors were strongly correlated on a trial-by-trial basis across all context conditions, highlighting the close relation between these responses in manual interception tasks. Our results indicate that visual contexts similarly affect eye and hand movements but that these effects may be short-lasting, affecting movement trajectories more than movement end points.

NEW & NOTEWORTHY In a novel track-intercept paradigm, human observers tracked a briefly shown object moving across a textured, dynamic context and intercepted it with their finger after it had disappeared. Context motion significantly affected eye and hand movement latency and speed, but not interception accuracy; eye and hand position at interception were correlated on a trial-by-trial basis. Visual context effects may be short-lasting, affecting movement trajectories more than movement end points.
Affiliation(s)
- Philipp Kreyenmeier, Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuro-Cognitive Psychology, Ludwig Maximilian University, Munich, Germany
- Jolande Fooken, Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada
- Miriam Spering, Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada; Center for Brain Health, University of British Columbia, Vancouver, Canada; Institute for Information, Computing and Cognitive Systems, University of British Columbia, Vancouver, Canada; International Collaboration on Repair Discoveries, Vancouver, Canada
|
33
|
Gekas N, Meso AI, Masson GS, Mamassian P. A Normalization Mechanism for Estimating Visual Motion across Speeds and Scales. Curr Biol 2017; 27:1514-1520.e3. [PMID: 28479319 DOI: 10.1016/j.cub.2017.04.022] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2017] [Revised: 03/21/2017] [Accepted: 04/12/2017] [Indexed: 10/19/2022]
Abstract
Interacting with the natural environment leads to complex stimulations of our senses. Here we focus on the estimation of visual speed, a critical source of information for the survival of many animal species as they monitor moving prey or approaching dangers. In mammals, and in particular in primates, speed information is conceived to be represented by a set of channels sensitive to different spatial and temporal characteristics of the optic flow [1-5]. However, it is still largely unknown how the brain accurately infers the speed of complex natural scenes from this set of spatiotemporal channels [6-14]. As complex stimuli, we chose a set of well-controlled moving naturalistic textures called "compound motion clouds" (CMCs) [15, 16] that simultaneously activate multiple spatiotemporal channels. We found that CMC stimuli that have the same physical speed are perceived moving at different speeds depending on which channel combinations are activated. We developed a computational model demonstrating that the activity in a given channel is both boosted and weakened after a systematic pattern over neighboring channels. This pattern of interactions can be understood as a combination of two components oriented in speed (consistent with a slow-speed prior) and scale (sharpening of similar features). Interestingly, the interaction along scale implements a lateral inhibition mechanism, a canonical principle that hitherto was found to operate mainly in early sensory processing. Overall, the speed-scale normalization mechanism may reflect the natural tendency of the visual system to integrate complex inputs into one coherent percept.
Affiliation(s)
- Nikos Gekas, Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, PSL Research University, CNRS, 29 Rue d'Ulm, Paris 75005, France
- Andrew I Meso, Psychology and Interdisciplinary Neuroscience Research, Faculty of Science and Technology, Bournemouth University, Poole BH12 5BB, UK; Institut de Neurosciences de la Timone, UMR 7289, CNRS, Aix-Marseille Université, Marseille 13005, France
- Guillaume S Masson, Institut de Neurosciences de la Timone, UMR 7289, CNRS, Aix-Marseille Université, Marseille 13005, France
- Pascal Mamassian, Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, PSL Research University, CNRS, 29 Rue d'Ulm, Paris 75005, France
|
34
|
Solomon SS, Morley JW, Solomon SG. Spectral Signatures of Feedforward and Recurrent Circuitry in Monkey Area MT. Cereb Cortex 2017; 27:2793-2808. [PMID: 27170655 DOI: 10.1093/cercor/bhw124] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Recordings of local field potential (LFP) in the visual cortex can show rhythmic activity at gamma frequencies (30-100 Hz). While the gamma rhythms in the primary visual cortex have been well studied, the structural and functional characteristics of gamma rhythms in extrastriate visual cortex are less clear. Here, we studied the spatial distribution and functional specificity of gamma rhythms in extrastriate middle temporal (MT) area of visual cortex in marmoset monkeys. We found that moving gratings induced narrowband gamma rhythms across cortical layers that were coherent across much of area MT. Moving dot fields instead induced a broadband increase in LFP in middle and upper layers, with weaker narrowband gamma rhythms in deeper layers. The stimulus dependence of LFP response in middle and upper layers of area MT appears to reflect the presence (gratings) or absence (dot fields and other textures) of strongly oriented contours. Our results suggest that gamma rhythms in these layers are propagated from earlier visual cortex, while those in the deeper layers may emerge in area MT.
Affiliation(s)
- Selina S Solomon, Discipline of Physiology, School of Medical Sciences and Bosch Institute, The University of Sydney, Sydney, NSW 2006, Australia; Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY 10461, USA
- John W Morley, School of Medicine, Western Sydney University, Campbelltown, NSW 2560, Australia
- Samuel G Solomon, Department of Experimental Psychology, University College London, London WC1P 0AH, UK
|
35
|
Khoei MA, Masson GS, Perrinet LU. The Flash-Lag Effect as a Motion-Based Predictive Shift. PLoS Comput Biol 2017; 13:e1005068. [PMID: 28125585 PMCID: PMC5268412 DOI: 10.1371/journal.pcbi.1005068] [Citation(s) in RCA: 31] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2015] [Accepted: 07/21/2016] [Indexed: 11/18/2022] Open
Abstract
Due to its inherent neural delays, the visual system has an outdated access to sensory information about the current position of moving objects. In contrast, living organisms are remarkably able to track and intercept moving objects under a large range of challenging environmental conditions. Physiological, behavioral and psychophysical evidence strongly suggests that position coding is extrapolated using an explicit and reliable representation of the object's motion, but it is still unclear how these two representations interact. For instance, the so-called flash-lag effect supports the idea of a differential processing of position between moving and static objects. Although elucidating such mechanisms is crucial to our understanding of the dynamics of visual processing, a theory is still missing that explains the different facets of this visual illusion. Here, we reconsider several of the key aspects of the flash-lag effect in order to explore the role of motion in the neural coding of objects' position. First, we formalize the problem using a Bayesian modeling framework that includes a graded representation of the degree of belief about visual motion. We introduce a motion-based prediction model as a candidate explanation for the perception of coherent motion. By including the knowledge of a fixed delay, we can model the dynamics of sensory information integration by extrapolating the information acquired at previous instants in time. Next, we simulate the optimal estimation of object position with and without delay compensation and compare it with human perception under a broad range of psychophysical conditions. Our computational study suggests that the explicit, probabilistic representation of velocity information is crucial in explaining position coding, and therefore the flash-lag effect. We discuss these theoretical results in light of the putative corrective mechanisms that can be used to cancel out the detrimental effects of neural delays, and we illuminate the more general question of the dynamical representation, at the present time, of spatial information in the visual pathways.

Visual illusions are powerful tools to explore the limits and constraints of human perception. One of them has received considerable empirical and theoretical interest: the so-called "flash-lag effect". When a visual stimulus moves along a continuous trajectory, it may be seen ahead of its veridical position with respect to an unpredictable event such as a punctate flash. This illusion tells us something important about the visual system: contrary to classical computers, neural activity travels at a relatively slow speed. It is largely accepted that the resulting delays cause this perceived spatial lag of the flash. Still, after three decades of debate, there is no consensus regarding the underlying mechanisms. Here, we re-examine the original hypothesis that this effect may be caused by the extrapolation of the stimulus' motion that is naturally generated in order to compensate for neural delays. Contrary to classical models, we propose a novel theoretical framework, called parodiction, that optimizes this process by explicitly using the precision of both sensory and predicted motion. Using numerical simulations, we show that the parodiction theory subsumes many of the previously proposed models and empirical studies. More generally, the parodiction hypothesis proposes that neural systems implement generic neural computations that can systematically compensate for the existing neural delays in order to represent the predicted visual scene at the present time. It calls for new experimental approaches to directly explore the relationships between neural delays and predictive coding.
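The extrapolation account re-examined here has a simple core: a delayed position estimate for a predictably moving object can be advanced along its motion vector, while an unpredictable flash cannot. A sketch with an illustrative delay and speed:

```python
def represented_position(position, velocity, delay):
    """Extrapolate a delayed position estimate along the motion vector."""
    return position + velocity * delay

neural_delay = 0.08      # s, illustrative value
target_speed = 10.0      # deg/s, illustrative value
flash_position = 0.0     # flash is physically aligned with the mover

# The mover's motion is predictable and can be extrapolated;
# the flash has no motion signal to extrapolate:
mover = represented_position(flash_position, target_speed, neural_delay)
flash = represented_position(flash_position, 0.0, neural_delay)
print(mover - flash)     # predicted flash-lag: 0.8 deg
```

The paper's contribution is to make this extrapolation probabilistic, weighting it by the precision of the sensory and predicted motion rather than applying it deterministically as above.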
Affiliation(s)
- Mina A. Khoei, Institut de Neurosciences de la Timone, UMR7289, CNRS / Aix-Marseille Université, Marseille, France
- Guillaume S. Masson, Institut de Neurosciences de la Timone, UMR7289, CNRS / Aix-Marseille Université, Marseille, France
- Laurent U. Perrinet, Institut de Neurosciences de la Timone, UMR7289, CNRS / Aix-Marseille Université, Marseille, France
|
36
|
Sheliga BM, Quaia C, FitzGibbon EJ, Cumming BG. Ocular-following responses to white noise stimuli in humans reveal a novel nonlinearity that results from temporal sampling. J Vis 2016; 16:8. [PMID: 26762277 PMCID: PMC4743714 DOI: 10.1167/16.1.8] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022] Open
Abstract
White noise stimuli are frequently used to study the visual processing of broadband images in the laboratory. A common goal is to describe how responses are derived from Fourier components in the image. We investigated this issue by recording the ocular-following responses (OFRs) to white noise stimuli in human subjects. For a given speed we compared OFRs to unfiltered white noise with those to noise filtered with band-pass filters and notch filters. Removing components with low spatial frequency (SF) reduced OFR magnitudes, and the SF associated with the greatest reduction matched the SF that produced the maximal response when presented alone. This reduction declined rapidly with SF, compatible with a winner-take-all operation. Removing higher SF components increased OFR magnitudes. For higher speeds this effect became larger and propagated toward lower SFs. All of these effects were quantitatively well described by a model that combined two factors: (a) an excitatory drive that reflected the OFRs to individual Fourier components and (b) a suppression by higher SF channels where the temporal sampling of the display led to flicker. This nonlinear interaction has an important practical implication: Even with high refresh rates (150 Hz), the temporal sampling introduced by visual displays has a significant impact on visual processing. For instance, we show that this distorts speed tuning curves, shifting the peak to lower speeds. Careful attention to spectral content, in the light of this nonlinearity, is necessary to minimize the resulting artifact when using white noise patterns undergoing apparent motion.
|
37
|
Quaia C, Optican LM, Cumming BG. A Motion-from-Form Mechanism Contributes to Extracting Pattern Motion from Plaids. J Neurosci 2016; 36:3903-18. [PMID: 27053199 PMCID: PMC4821905 DOI: 10.1523/jneurosci.3398-15.2016] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2015] [Revised: 02/22/2016] [Accepted: 02/24/2016] [Indexed: 11/21/2022] Open
Abstract
Since the discovery of neurons selective for pattern motion direction in primate middle temporal area MT (Albright, 1984; Movshon et al., 1985), the neural computation of this signal has been the subject of intense study. The bulk of this work has explored responses to plaids obtained by summing two drifting sinusoidal gratings. Unfortunately, with these stimuli, many different mechanisms are similarly effective at extracting pattern motion. We devised a new set of stimuli, obtained by summing two random line stimuli with different orientations. This allowed several novel manipulations, including generating plaids that do not contain rigid 2D motion. Importantly, these stimuli do not engage most of the previously proposed mechanisms. We then recorded the ocular following responses that such stimuli induce in human subjects. We found that pattern motion is computed even with stimuli that do not cohere perceptually, including those without rigid motion, and even when the two gratings are presented separately to the two eyes. Moderate temporal and/or spatial separation of the gratings impairs the computation. We show that, of the models proposed so far, only those based on the intersection-of-constraints rule, embedding a motion-from-form mechanism (in which orientation signals are used in the computation of motion direction signals), can account for our results. At least for the eye movements reported here, a motion-from-form mechanism is thus involved in one of the most basic functions of the visual motion system: extracting motion direction from complex scenes.

SIGNIFICANCE STATEMENT Anatomical considerations led to the proposal that visual function is organized in separate processing streams: one (ventral) devoted to form and one (dorsal) devoted to motion. Several experimental results have challenged this view, arguing in favor of a more integrated view of visual processing. Here we add to this body of work, supporting a role for form information even in a function (extracting pattern motion direction from complex scenes) for which decisive evidence for the involvement of form signals has been lacking.
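The intersection-of-constraints (IOC) rule referred to above reduces to a 2x2 linear system: each component grating constrains only the velocity component along its normal, n_i · v = s_i, and the pattern velocity is the unique 2D vector satisfying both constraints. A sketch with illustrative orientations and speeds:

```python
import numpy as np

def ioc_velocity(normal_angles_deg, normal_speeds):
    """Solve n_i . v = s_i for the 2D pattern velocity v (IOC rule)."""
    angles = np.deg2rad(normal_angles_deg)
    normals = np.column_stack([np.cos(angles), np.sin(angles)])
    return np.linalg.solve(normals, np.asarray(normal_speeds, dtype=float))

# Two gratings whose motion normals point 45 deg above and below horizontal,
# each drifting at 1 deg/s along its normal:
v = ioc_velocity([45.0, -45.0], [1.0, 1.0])
print(v)   # vx = sqrt(2) ~ 1.414, vy ~ 0: horizontal pattern motion
```

Note that the system is solvable whenever the two normals differ, which is why IOC-style models can return a pattern direction even for stimuli without rigid 2D motion, as the experiments above exploit.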
Affiliation(s)
- Christian Quaia, Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Department of Health and Human Services, Bethesda, Maryland 20892
- Lance M Optican, Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Department of Health and Human Services, Bethesda, Maryland 20892
- Bruce G Cumming, Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Department of Health and Human Services, Bethesda, Maryland 20892
|
38
|
Lisi M, Cavanagh P. Dissociation between the Perceptual and Saccadic Localization of Moving Objects. Curr Biol 2015; 25:2535-40. [PMID: 26412133 DOI: 10.1016/j.cub.2015.08.021] [Citation(s) in RCA: 48] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2015] [Revised: 08/07/2015] [Accepted: 08/10/2015] [Indexed: 01/02/2023]
Abstract
Visual processing in the human brain provides the data both for perception and for guiding motor actions. It seems natural that our actions would be directed toward perceived locations of their targets, but it has been proposed that action and perception rely on different visual information [1-4], and this provocative claim has triggered a long-lasting debate [5-7]. Here, in support of this claim, we report a large, robust dissociation between perception and action. We take advantage of a perceptual illusion in which visual motion signals presented within the boundaries of a peripheral moving object can make the object's apparent trajectory deviate by 45° or more from its physical trajectory [8-10], a shift several times larger than the typical discrimination threshold for motion direction [11]. Despite the large perceptual distortion, we found that saccadic eye movements directed to these moving objects clearly targeted locations along their physical rather than apparent trajectories. We show that the perceived trajectory is based on the accumulation of position error determined by prior sensory history, an accumulation of error that is not found for the action toward the same target. We suggest that visual processing for perception and action might diverge in how past information is combined with new visual input, with action relying only on immediate information to track a target, whereas perception builds on previous estimates to construct a conscious representation.
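The proposed divergence (perception accumulating position error from sensory history, action using only the immediate position) can be sketched with a simple accumulator; the gain and leak values are illustrative, not estimates from the paper:

```python
def simulate_trajectory(duration=1.0, dt=0.01, envelope_speed=5.0,
                        internal_speed=5.0, gain=0.5, leak=0.0):
    """Physical vs. perceived positions for a double-drift-style stimulus.

    The stimulus envelope moves along one axis while internal motion,
    orthogonal to it, injects a small position error at each step.
    Perception integrates this error over time; action does not.
    """
    physical, perceived = [], []
    error = 0.0
    for i in range(int(duration / dt)):
        along = envelope_speed * i * dt        # position along the physical path
        error = (1.0 - leak) * error + gain * internal_speed * dt
        physical.append((along, 0.0))          # what a saccade would target
        perceived.append((along, error))       # what perception reports
    return physical, perceived

physical, perceived = simulate_trajectory()
print(physical[-1], perceived[-1])   # action and percept diverge over time
```

With zero leak the orthogonal error grows steadily, so the perceived trajectory tilts away from the physical one by tens of degrees, the size of deviation the abstract reports.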
Affiliation(s)
- Matteo Lisi, Laboratoire Psychologie de la Perception, CNRS UMR 8248, Université Paris Descartes, 75006 Paris, France
- Patrick Cavanagh, Laboratoire Psychologie de la Perception, CNRS UMR 8248, Université Paris Descartes, 75006 Paris, France
|
39
|
Abstract
Object motion in natural scenes results in visual stimuli with a rich and broad spatiotemporal frequency spectrum. While the question of how the visual system detects and senses motion energies at different spatial and temporal frequencies has been fairly well studied, it is unclear how the visual system integrates this information to form coherent percepts of object motion. We applied a combination of tailored psychophysical experiments and predictive modeling to address this question with regard to perceived motion in a given direction (i.e., stimulus speed). We tested human subjects in a discrimination experiment using stimuli that selectively targeted four distinct spatiotemporally tuned channels with center frequencies consistent with a common speed. We first characterized subjects' responses to stimuli that targeted only individual channels. Based on these measurements, we then predicted subjects' psychometric functions for stimuli that targeted multiple channels simultaneously. Specifically, we compared predictions of three Bayesian observer models that either optimally integrated the information across all spatiotemporal channels, or only used information from the most reliable channel, or formed an average percept across channels. Only the model with optimal integration was successful in accounting for the data. Furthermore, the proposed channel model provides an intuitive explanation for the previously reported spatial frequency dependence of perceived speed of coherent object motion. Finally, our findings indicate that a prior expectation for slow speeds is added to the inference process only after the sensory information is combined and integrated.
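The three observer models compared here differ only in how the channel likelihoods are combined; for Gaussian likelihoods, optimal integration amounts to precision (inverse-variance) weighting. A sketch with made-up channel estimates:

```python
def integrate_optimal(estimates, variances):
    """Precision-weighted (inverse-variance) combination of channel estimates."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

def integrate_most_reliable(estimates, variances):
    """Use only the channel with the smallest variance."""
    return estimates[min(range(len(variances)), key=variances.__getitem__)]

def integrate_average(estimates, variances):
    """Unweighted mean across channels (variances are ignored)."""
    return sum(estimates) / len(estimates)

# Four spatiotemporal channels reporting speed (deg/s) with unequal reliability:
speeds = [9.0, 10.0, 10.5, 12.0]
variances = [4.0, 1.0, 1.0, 4.0]
print(integrate_optimal(speeds, variances))       # -> 10.3
print(integrate_most_reliable(speeds, variances)) # -> 10.0
print(integrate_average(speeds, variances))       # -> 10.375
```

The three rules produce distinct estimates from the same channel data, which is what lets psychophysical measurements discriminate between them as the abstract describes.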
|
40
|
Abstract
Are sensory estimates formed centrally in the brain and then shared between perceptual and motor pathways or is centrally represented sensory activity decoded independently to drive awareness and action? Questions about the brain's information flow pose a challenge because systems-level estimates of environmental signals are only accessible indirectly as behavior. Assessing whether sensory estimates are shared between perceptual and motor circuits requires comparing perceptual reports with motor behavior arising from the same sensory activity. Extrastriate visual cortex both mediates the perception of visual motion and provides the visual inputs for behaviors such as smooth pursuit eye movements. Pursuit has been a valuable testing ground for theories of sensory information processing because the neural circuits and physiological response properties of motion-responsive cortical areas are well studied, sensory estimates of visual motion signals are formed quickly, and the initiation of pursuit is closely coupled to sensory estimates of target motion. Here, we analyzed variability in visually driven smooth pursuit and perceptual reports of target direction and speed in human subjects while we manipulated the signal-to-noise level of motion estimates. Comparable levels of variability throughout viewing time and across conditions provide evidence for shared noise sources in the perception and action pathways arising from a common sensory estimate. We found that conditions that create poor, low-gain pursuit create a discrepancy between the precision of perception and that of pursuit. Differences in pursuit gain arising from differences in optic flow strength in the stimulus reconcile much of the controversy on this topic.
|
41
|
Acting without seeing: eye movements reveal visual processing without awareness. Trends Neurosci 2015; 38:247-58. [PMID: 25765322 DOI: 10.1016/j.tins.2015.02.002] [Citation(s) in RCA: 80] [Impact Index Per Article: 8.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2014] [Revised: 02/03/2015] [Accepted: 02/09/2015] [Indexed: 11/23/2022]
Abstract
Visual perception and eye movements are considered to be tightly linked. Diverse fields, ranging from developmental psychology to computer science, utilize eye tracking to measure visual perception. However, this prevailing view has been challenged by recent behavioral studies. Here, we review converging evidence revealing dissociations between the contents of perceptual awareness and different types of eye movement. Such dissociations reveal situations in which eye movements are sensitive to particular visual features that fail to modulate perceptual reports. We also discuss neurophysiological, neuroimaging, and clinical studies supporting the role of subcortical pathways for visual processing without awareness. Our review links awareness to perceptual-eye movement dissociations and furthers our understanding of the brain pathways underlying vision and movement with and without awareness.
|
42
|
Perrinet LU, Adams RA, Friston KJ. Active inference, eye movements and oculomotor delays. BIOLOGICAL CYBERNETICS 2014; 108:777-801. [PMID: 25128318 PMCID: PMC4250571 DOI: 10.1007/s00422-014-0620-8] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/02/2013] [Accepted: 07/08/2014] [Indexed: 05/26/2023]
Abstract
This paper considers the problem of sensorimotor delays in the optimal control of (smooth) eye movements under uncertainty. Specifically, we consider delays in the visuo-oculomotor loop and their implications for active inference. Active inference uses a generalisation of Kalman filtering to provide Bayes optimal estimates of hidden states and action in generalised coordinates of motion. Representing hidden states in generalised coordinates provides a simple way of compensating for both sensory and oculomotor delays. The efficacy of this scheme is illustrated using neuronal simulations of pursuit initiation responses, with and without compensation. We then consider an extension of the generative model to simulate smooth pursuit eye movements, in which the visuo-oculomotor system believes both the target and its centre of gaze are attracted to a (hidden) point moving in the visual field. Finally, the generative model is equipped with a hierarchical structure, so that it can recognise and remember unseen (occluded) trajectories and emit anticipatory responses. These simulations speak to a straightforward and neurobiologically plausible solution to the generic problem of integrating information from different sources with different temporal delays, and to the particular difficulties encountered when a system, like the oculomotor system, tries to control its environment with delayed signals.
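The delay compensation described above is linear in generalised coordinates: shifting a state vector of temporal derivatives (position, velocity, acceleration, ...) by τ amounts to applying the Taylor-shift operator exp(τD), where D maps each derivative order onto the next. A sketch with an illustrative delay and state:

```python
import numpy as np

def taylor_shift(n_orders, tau):
    """exp(tau * D) for the derivative operator D in generalised coordinates.

    D is nilpotent (ones on the superdiagonal), so the exponential series
    terminates exactly after n_orders terms.
    """
    D = np.diag(np.ones(n_orders - 1), k=1)
    S, term = np.eye(n_orders), np.eye(n_orders)
    for k in range(1, n_orders):
        term = term @ (tau * D) / k
        S += term
    return S

# Generalised state of a target: position 0 deg, velocity 10 deg/s, accel 0.
state = np.array([0.0, 10.0, 0.0])
tau = 0.1                      # sensory delay in seconds (illustrative)
compensated = taylor_shift(3, tau) @ state
print(compensated[0])          # position advanced by v * tau = 1.0 deg
```

Because the same matrix applies to any state, the compensation needs no explicit extrapolation machinery, which is the appeal of the generalised-coordinates formulation.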
Affiliation(s)
- Laurent U Perrinet
- Institut de Neurosciences de la Timone, CNRS/Aix-Marseille Université, Marseille, France
|
43
|
Souto D, Kerzel D. Ocular tracking responses to background motion gated by feature-based attention. J Neurophysiol 2014; 112:1074-81. [DOI: 10.1152/jn.00810.2013] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Involuntary ocular tracking responses to background motion offer a window on the dynamics of motion computations. In contrast to spatial attention, we know little about the role of feature-based attention in determining this ocular response. To probe feature-based effects of background motion on involuntary eye movements, we presented human observers with a balanced background perturbation. Two clouds of dots moved in opposite vertical directions while observers tracked a target moving in horizontal direction. Additionally, they had to discriminate a change in the direction of motion (±10° from vertical) of one of the clouds. A vertical ocular following response occurred in response to the motion of the attended cloud. When motion selection was based on motion direction and color of the dots, the peak velocity of the tracking response was 30% of the tracking response elicited in a single task with only one direction of background motion. In two other experiments, we tested the effect of the perturbation when motion selection was based on color, by having motion direction vary unpredictably, or on motion direction alone. Although the gain of pursuit in the horizontal direction was significantly reduced in all experiments, indicating a trade-off between perceptual and oculomotor tasks, ocular responses to perturbations were only observed when selection was based on both motion direction and color. It appears that selection by motion direction can only be effective for driving ocular tracking when the relevant elements can be segregated before motion onset.
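The gating effect reported above reduces to comparing peak vertical eye velocity across conditions. A sketch of that readout on toy velocity traces (the analysis window, the Gaussian traces, and the 30% scaling are illustrative assumptions, not the authors' analysis code):

```python
import numpy as np

def peak_vertical_ofr(t, vy, window=(0.05, 0.2)):
    """Peak absolute vertical eye velocity inside an analysis window
    after perturbation onset. Times in seconds, velocity in deg/s."""
    sel = (t >= window[0]) & (t < window[1])
    return np.max(np.abs(vy[sel]))

# Toy traces: the dual-task response is scaled to 30% of baseline.
t = np.linspace(0.0, 0.3, 301)
baseline = 2.0 * np.exp(-((t - 0.12) / 0.03) ** 2)  # single-task OFR
dual = 0.3 * baseline                                # attended cloud, dual task
gating = peak_vertical_ofr(t, dual) / peak_vertical_ofr(t, baseline)
```

Reporting the ratio rather than the raw peak factors out between-observer differences in overall oculomotor gain.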
Affiliation(s)
- David Souto
- School of Psychology, University of Leicester, Leicester, United Kingdom; and
- Dirk Kerzel
- Faculté de Psychologie et des Sciences de l'Éducation, Université de Genève, Genève, Switzerland
|
44
|
Meso AI, Simoncini C. Towards an understanding of the roles of visual areas MT and MST in computing speed. Front Comput Neurosci 2014; 8:92. [PMID: 25152730 PMCID: PMC4126038 DOI: 10.3389/fncom.2014.00092] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2014] [Accepted: 07/22/2014] [Indexed: 11/13/2022] Open
Affiliation(s)
- Andrew Isaac Meso
- Institut de Neurosciences de la Timone, UMR 7289 CNRS and Aix-Marseille Université, Marseille, France
- Claudio Simoncini
- Institut de Neurosciences de la Timone, UMR 7289 CNRS and Aix-Marseille Université, Marseille, France; Department of Neurobiology, University of Chicago, Chicago, IL, USA
|
45
|
Price NSC, Blum J. Motion perception correlates with volitional but not reflexive eye movements. Neuroscience 2014; 277:435-45. [PMID: 25073044 DOI: 10.1016/j.neuroscience.2014.07.028] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2014] [Accepted: 07/19/2014] [Indexed: 11/17/2022]
Abstract
Visually-driven actions and perception are traditionally ascribed to the dorsal and ventral visual streams of the cortical processing hierarchy. However, motion perception and the control of tracking eye movements both depend on sensory motion analysis by neurons in the dorsal stream, suggesting that the same sensory circuits may underlie both action and perception. Previous studies have suggested that multiple sensory modules may be responsible for the perception of low- and high-level motion, or the detection versus identification of motion direction. However, it remains unclear whether the sensory processing systems that contribute to direction perception and the control of eye movements have the same neuronal constraints. To address this, we examined inter-individual variability across 36 observers, using two tasks that simultaneously assessed the precision of eye movements and direction perception: in the smooth pursuit task, observers volitionally tracked a small moving target and reported its direction; in the ocular following task, observers reflexively tracked a large moving stimulus and reported its direction. We determined perceptual-oculomotor correlations across observers, defined as the correlation between each observer's mean perceptual precision and mean oculomotor precision. Across observers, we found that: (i) mean perceptual precision was correlated between the two tasks; (ii) mean oculomotor precision was correlated between the tasks, and (iii) oculomotor and perceptual precision were correlated for volitional smooth pursuit, but not reflexive ocular following. Collectively, these results demonstrate that sensory circuits with common neuronal constraints subserve motion perception and volitional, but not reflexive eye movements.
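The paper's key statistic is a between-observer correlation: each observer contributes one mean perceptual precision and one mean oculomotor precision per task. A toy simulation of that logic, assuming a shared constraint couples perception and pursuit while ocular following varies independently (all numbers are synthetic; only the correlational analysis mirrors the study):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 36  # observers, as in the study

# Synthetic per-observer precisions (arbitrary units): a shared factor
# couples perception and volitional pursuit; OFR is independent.
shared = rng.normal(1.0, 0.20, n)
perceptual = shared + rng.normal(0, 0.05, n)
pursuit = shared + rng.normal(0, 0.05, n)
ofr = rng.normal(1.0, 0.20, n)

r_pursuit = np.corrcoef(perceptual, pursuit)[0, 1]  # high: shared constraint
r_ofr = np.corrcoef(perceptual, ofr)[0, 1]          # near zero: independent
```

Under this generative assumption the pursuit correlation is large while the ocular-following correlation hovers near zero, reproducing the qualitative pattern of results (i)-(iii).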
Affiliation(s)
- N S C Price
- Department of Physiology, Monash University, VIC 3800, Australia.
- J Blum
- Department of Physiology, Monash University, VIC 3800, Australia
|
46
|
Szpiro SFA, Spering M, Carrasco M. Perceptual learning modifies untrained pursuit eye movements. J Vis 2014; 14:8. [PMID: 25002412 DOI: 10.1167/14.8.8] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023] Open
Abstract
Perceptual learning improves detection and discrimination of relevant visual information in mature humans, revealing sensory plasticity. Whether visual perceptual learning affects motor responses is unknown. Here we implemented a protocol that enabled us to address this question. We tested a perceptual response (motion direction estimation, in which observers overestimate motion direction away from a reference) and a motor response (voluntary smooth pursuit eye movements). Perceptual training led to greater overestimation and, remarkably, it modified untrained smooth pursuit. In contrast, pursuit training did not affect overestimation in either pursuit or perception, even though observers in both training groups were exposed to the same stimuli for the same time period. A second experiment revealed that estimation training also improved discrimination, indicating that overestimation may optimize perceptual sensitivity. Hence, active perceptual training is necessary to alter perceptual responses, and an acquired change in perception suffices to modify pursuit, a motor response.
Affiliation(s)
- Sarit F A Szpiro
- Department of Psychology, New York University, New York, NY, USA
- Miriam Spering
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada; Brain Research Centre, University of British Columbia, Vancouver, Canada
- Marisa Carrasco
- Department of Psychology, New York University, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA
|
47
|
Solomon SS, Chen SC, Morley JW, Solomon SG. Local and Global Correlations between Neurons in the Middle Temporal Area of Primate Visual Cortex. Cereb Cortex 2014; 25:3182-96. [DOI: 10.1093/cercor/bhu111] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
|
48
|
Salinas E, Scerra VE, Hauser CK, Costello MG, Stanford TR. Decoupling speed and accuracy in an urgent decision-making task reveals multiple contributions to their trade-off. Front Neurosci 2014; 8:85. [PMID: 24795559 PMCID: PMC4005963 DOI: 10.3389/fnins.2014.00085] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2014] [Accepted: 04/02/2014] [Indexed: 12/31/2022] Open
Abstract
A key goal in the study of decision making is determining how neural networks involved in perception and motor planning interact to generate a given choice, but this is complicated due to the internal trade-off between speed and accuracy, which confounds their individual contributions. Urgent decisions, however, are special: they may range between random and fully informed, depending on the amount of processing time (or stimulus viewing time) available in each trial, but regardless, movement preparation always starts early on. As a consequence, under time pressure it is possible to produce a psychophysical curve that characterizes perceptual performance independently of reaction time, and this, in turn, makes it possible to pinpoint how perceptual information (which requires sensory input) modulates motor planning (which does not) to guide a choice. Here we review experiments in which, on the basis of this approach, the origin of the speed-accuracy trade-off becomes particularly transparent. Psychophysical, neurophysiological, and modeling results in the "compelled-saccade" task indicate that, during urgent decision making, perceptual information-if and whenever it becomes available-accelerates or decelerates competing motor plans that are already ongoing. This interaction affects both the reaction time and the probability of success in any given trial. In two experiments with reward asymmetries, we find that speed and accuracy can be traded in different amounts and for different reasons, depending on how the particular task contingencies affect specific neural mechanisms related to perception and motor planning. Therefore, from the vantage point of urgent decisions, the speed-accuracy trade-off is not a unique phenomenon tied to a single underlying mechanism, but rather a typical outcome of many possible combinations of internal adjustments within sensory-motor neural circuits.
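The curve described above (accuracy as a function of the time actually available to view the stimulus, independent of reaction time) is straightforward to compute from trial data. A minimal sketch with a toy step from uninformed to informed performance (the variable names, binning, and step profile are illustrative, not the task's actual parameters):

```python
import numpy as np

def tachometric_curve(rpt, correct, bins):
    """Accuracy binned by raw processing time (RPT): the interval
    between stimulus onset and movement onset in an urgent task."""
    acc = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        sel = (rpt >= lo) & (rpt < hi)
        acc.append(correct[sel].mean() if sel.any() else np.nan)
    return np.array(acc)

# Toy trials: performance jumps once >100 ms of viewing time is available.
rpt = np.arange(0, 300, 10)            # ms of stimulus viewing per trial
correct = (rpt > 100).astype(float)    # 0/1 outcome per trial
curve = tachometric_curve(rpt, correct, np.array([0, 100, 200, 300]))
```

Because trials with little viewing time land in the early bins regardless of their reaction times, the resulting curve isolates the perceptual component from motor urgency.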
Affiliation(s)
- Emilio Salinas
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, USA
- Veronica E Scerra
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, USA
- Christopher K Hauser
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, USA
- M Gabriela Costello
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, USA
- Terrence R Stanford
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, USA
|
49
|
Glasser DM, Tadin D. Modularity in the motion system: independent oculomotor and perceptual processing of brief moving stimuli. J Vis 2014; 14:28. [PMID: 24665091 DOI: 10.1167/14.3.28] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
In addition to motion perception per se, we utilize motion information for a wide range of brain functions. These varied functions place different demands on the visual system, and therefore a stimulus that provides useful information for one function may be inadequate for another. For example, the direction of motion of large high-contrast stimuli is difficult to discriminate perceptually, but other studies have shown that such stimuli are highly effective at eliciting directional oculomotor responses such as the ocular following response (OFR). Here, we investigated the degree of independence between perceptual and oculomotor processing by determining whether perceptually suppressed moving stimuli can nonetheless evoke reliable eye movements. We measured reflexively evoked tracking eye movements while observers discriminated the motion direction of large high-contrast stimuli. To quantify the discrimination ability of the oculomotor system, we used signal detection theory to generate associated oculometric functions. The results showed that oculomotor sensitivity to motion direction is not predicted by perceptual sensitivity to the same stimuli. In fact, in several cases oculomotor responses were more reliable than perceptual responses. Moreover, a trial-by-trial analysis indicated that, for stimuli tested in this study, oculomotor processing was statistically independent from perceptual processing. Evidently, perceptual and oculomotor responses reflect the activity of independent dissociable mechanisms despite operating on the same input. While results of this kind have traditionally been interpreted in the framework of perception versus action, we propose that these differences reflect a more general principle of modularity.
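The signal-detection step can be sketched as follows: classify each trial's tracking direction from the sign of mean eye velocity, then convert hit and false-alarm rates into d′. This gives one point of an oculometric function (the function name and toy velocities are illustrative; the paper sweeps stimulus levels to build full functions):

```python
import numpy as np
from statistics import NormalDist

def oculometric_dprime(vel_right_trials, vel_left_trials):
    """d' for direction discrimination read out from the eyes:
    'hit' = eyes moved rightward on rightward-motion trials,
    'false alarm' = eyes moved rightward on leftward-motion trials."""
    hit = np.mean(np.asarray(vel_right_trials) > 0)
    fa = np.mean(np.asarray(vel_left_trials) > 0)
    clip = lambda p: min(max(p, 1e-3), 1 - 1e-3)  # avoid infinite z-scores
    z = NormalDist().inv_cdf
    return z(clip(hit)) - z(clip(fa))

# Toy per-trial mean horizontal eye velocities (deg/s).
d = oculometric_dprime([0.5, 0.8, 0.3, -0.1], [-0.4, -0.6, 0.2, -0.3])
```

Comparing such oculometric d′ values to psychometric d′ from the same trials is what licenses the trial-by-trial independence analysis reported above.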
|
50
|
Gharaei S, Tailby C, Solomon SS, Solomon SG. Texture-dependent motion signals in primate middle temporal area. J Physiol 2013; 591:5671-90. [PMID: 24000175 PMCID: PMC3853503 DOI: 10.1113/jphysiol.2013.257568] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
Abstract
Neurons in the middle temporal (MT) area of primate cortex provide an important stage in the analysis of visual motion. For simple stimuli such as bars and plaids some neurons in area MT – pattern cells – seem to signal motion independent of contour orientation, but many neurons – component cells – do not. Why area MT supports both types of receptive field is unclear. To address this we made extracellular recordings from single units in area MT of anaesthetised marmoset monkeys and examined responses to two-dimensional images with a large range of orientations and spatial frequencies. Component and pattern cell response remained distinct during presentation of these complex spatial textures. Direction tuning curves were sharpest in component cells when a texture contained a narrow range of orientations, but were similar across all neurons for textures containing all orientations. Response magnitude of pattern cells, but not component cells, increased with the spatial bandwidth of the texture. In addition, response variability in all neurons was reduced when the stimulus was rich in spatial texture. Fisher information analysis showed that component cells provide more informative responses than pattern cells when a texture contains a narrow range of orientations, but pattern cells had more informative responses for broadband textures. Component cells and pattern cells may therefore coexist because they provide complementary and parallel motion signals.
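The Fisher information analysis mentioned above has a standard closed form for Poisson spiking, I(θ) = f′(θ)²/f(θ), so a neuron's informativeness depends on tuning-curve slope, not height: information is zero at the preferred direction and peaks on the flanks. A sketch on a synthetic von Mises tuning curve (the curve parameters are illustrative assumptions):

```python
import numpy as np

def fisher_information(theta, rate):
    """Poisson Fisher information of a tuning curve f(theta):
    I(theta) = f'(theta)**2 / f(theta)."""
    df = np.gradient(rate, theta)
    return df**2 / np.maximum(rate, 1e-9)

# Synthetic direction tuning: von Mises, 1 spike/s floor, ~31 spikes/s peak.
theta = np.linspace(-np.pi, np.pi, 361)
rate = 1.0 + 30.0 * np.exp(2.0 * (np.cos(theta) - 1.0))
fi = fisher_information(theta, rate)
```

This slope dependence is why sharply tuned component cells carry more information for narrowband textures, while broader pattern-cell responses win for broadband ones.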
Affiliation(s)
- Saba Gharaei
- S. G. Solomon: 26 Bedford Way, London WC1 0AH, UK.
|