1. Grimaldi A, Perrinet LU. Learning heterogeneous delays in a layer of spiking neurons for fast motion detection. Biol Cybern 2023; 117:373-387. PMID: 37695359. DOI: 10.1007/s00422-023-00975-8.
Abstract
The precise timing of spikes emitted by neurons plays a crucial role in shaping the response of efferent biological neurons. This temporal dimension of neural activity holds significant importance in understanding information processing in neurobiology, especially for the performance of neuromorphic hardware, such as event-based cameras. Nonetheless, many artificial neural models disregard this critical temporal dimension of neural activity. In this study, we present a model designed to efficiently detect temporal spiking motifs using a layer of spiking neurons equipped with heterogeneous synaptic delays. Our model capitalizes on the diverse synaptic delays present on the dendritic tree, enabling specific arrangements of temporally precise synaptic inputs to synchronize upon reaching the basal dendritic tree. We formalize this process as a time-invariant logistic regression, which can be trained using labeled data. To demonstrate its practical efficacy, we apply the model to naturalistic videos transformed into event streams, simulating the output of the biological retina or event-based cameras. To evaluate the robustness of the model in detecting visual motion, we conduct experiments by selectively pruning weights and demonstrate that the model remains efficient even under significantly reduced workloads. In conclusion, by providing a comprehensive, event-driven computational building block, the incorporation of heterogeneous delays has the potential to greatly improve the performance of future spiking neural network algorithms, particularly in the context of neuromorphic chips.
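The core computation described in this abstract (a spiking neuron pooling inputs through a bank of heterogeneous synaptic delays, read out as a time-invariant logistic regression) can be sketched in a few lines. This is a minimal illustration, not the paper's trained model: the kernel `w`, bias `b`, and all dimensions below are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_delays, T = 8, 5, 200                           # presynaptic cells, delay taps, time bins
spikes = (rng.random((n_pre, T)) < 0.05).astype(float)   # binary event stream

# hypothetical "learned" parameters: one weight per (presynaptic cell, delay) pair
w = rng.normal(0.0, 0.5, size=(n_pre, n_delays))
b = -1.0

def motif_detector(spikes, w, b):
    """p(t) = sigmoid(b + sum_{i,d} w[i,d] * spikes[i, t-d]):
    a time-invariant logistic regression over delayed spike inputs."""
    T = spikes.shape[1]
    drive = np.full(T, b, dtype=float)
    for d in range(w.shape[1]):
        shifted = np.zeros_like(spikes)
        shifted[:, d:] = spikes[:, :T - d] if d > 0 else spikes   # delay each train by d bins
        drive += w[:, d] @ shifted
    return 1.0 / (1.0 + np.exp(-drive))                  # probability the motif is present

p = motif_detector(spikes, w, b)
```

Because the weights enter linearly before the sigmoid, such a detector can be trained on labeled data with ordinary logistic-regression machinery, which is what makes the formalization attractive.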
Affiliation(s)
- Antoine Grimaldi
- Institut de Neurosciences de la Timone, Aix Marseille Univ, CNRS, 27 boulevard Jean Moulin, 13005, Marseille, France.
- Laurent U Perrinet
- Institut de Neurosciences de la Timone, Aix Marseille Univ, CNRS, 27 boulevard Jean Moulin, 13005, Marseille, France.
2. Ladret HJ, Cortes N, Ikan L, Chavane F, Casanova C, Perrinet LU. Cortical recurrence supports resilience to sensory variance in the primary visual cortex. Commun Biol 2023; 6:667. PMID: 37353519. PMCID: PMC10290066. DOI: 10.1038/s42003-023-05042-3.
Abstract
Our daily endeavors occur in a complex visual environment, whose intrinsic variability challenges the way we integrate information to make decisions. By processing myriads of parallel sensory inputs, our brain is theoretically able to compute the variance of its environment, a cue known to guide our behavior. Yet, the neurobiological and computational basis of such variance computations are still poorly understood. Here, we quantify the dynamics of sensory variance modulations of cat primary visual cortex neurons. We report two archetypal neuronal responses, one of which is resilient to changes in variance and co-encodes the sensory feature and its variance, improving the population encoding of orientation. The existence of these variance-specific responses can be accounted for by a model of intracortical recurrent connectivity. We thus propose that local recurrent circuits process uncertainty as a generic computation, advancing our understanding of how the brain handles naturalistic inputs.
Affiliation(s)
- Hugo J Ladret
- Institut de Neurosciences de la Timone, UMR 7289, CNRS and Aix-Marseille Université, Marseille, France.
- School of Optometry, Université de Montréal, Montréal, Canada.
- Nelson Cortes
- School of Optometry, Université de Montréal, Montréal, Canada.
- Lamyae Ikan
- School of Optometry, Université de Montréal, Montréal, Canada.
- Frédéric Chavane
- Institut de Neurosciences de la Timone, UMR 7289, CNRS and Aix-Marseille Université, Marseille, France.
- Laurent U Perrinet
- Institut de Neurosciences de la Timone, UMR 7289, CNRS and Aix-Marseille Université, Marseille, France.
3
Abstract
Despite the fundamental importance of visual motion processing, our understanding of how the brain represents basic aspects of motion is incomplete. While it is generally believed that direction is the main representational feature of motion, motion processing is also influenced by nondirectional orientation signals that are present in most motion stimuli. Here, we aimed to test whether this nondirectional motion axis contributes to motion perception even when orientation is completely absent from the stimulus. Using stimuli with and without orientation signals, we found that serial dependence in a simple motion direction estimation task was predominantly determined by the orientation of the previous motion stimulus. Moreover, the observed attraction profiles closely matched the characteristic pattern of serial attraction found in orientation perception. Evidently, the sequential integration of motion signals strongly depends on the orientation of motion, indicating a fundamental role of nondirectional orientation in the coding of visual motion direction.
4. Speed Estimation for Visual Tracking Emerges Dynamically from Nonlinear Frequency Interactions. eNeuro 2022; 9:ENEURO.0511-21.2022. PMID: 35470228. PMCID: PMC9113919. DOI: 10.1523/eneuro.0511-21.2022.
Abstract
Sensing the movement of fast objects within our visual environments is essential for controlling actions. It requires online estimation of motion direction and speed. We probed human speed representation using ocular tracking of stimuli of different statistics. First, we compared ocular responses to single drifting gratings (DGs) with a given set of spatiotemporal frequencies to broadband motion clouds (MCs) of matched mean frequencies. Motion energy distributions of gratings and clouds are point-like, and ellipses oriented along the constant speed axis, respectively. Sampling frequency space, MCs elicited stronger, less variable, and speed-tuned responses. DGs yielded weaker and more frequency-tuned responses. Second, we measured responses to patterns made of two or three components covering a range of orientations within Fourier space. Early tracking initiation of the patterns was best predicted by a linear combination of components before nonlinear interactions emerged to shape later dynamics. Inputs are supralinearly integrated along an iso-velocity line and sublinearly integrated away from it. A dynamical probabilistic model characterizes these interactions as an excitatory pooling along the iso-velocity line and inhibition along the orthogonal “scale” axis. Such crossed patterns of interaction would appropriately integrate or segment moving objects. This study supports the novel idea that speed estimation is better framed as a dynamic channel interaction organized along speed and scale axes.
5. Gekas N, Mamassian P. Adaptation to one perceived motion direction can generate multiple velocity aftereffects. J Vis 2021; 21:17. PMID: 34007990. PMCID: PMC8142737. DOI: 10.1167/jov.21.5.17.
Abstract
Sensory adaptation is a useful tool to identify the links between perceptual effects and neural mechanisms. Even though motion adaptation is one of the earliest and most documented aftereffects, few studies have investigated the perception of direction and speed of the aftereffect at the same time, that is, the perceived velocity. Using a novel experimental paradigm, we simultaneously recorded the perceived direction and speed of leftward or rightward moving random dots before and after adaptation. For the adapting stimulus, we chose a horizontally-oriented broadband grating moving upward behind a circular aperture. Because of the aperture problem, the interpretation of this stimulus is ambiguous, being consistent with multiple velocities, and yet it is systematically perceived as moving in a single direction and at a single speed. Here we ask whether the visual system adapts to the multiple velocities of the adaptor or to just the single perceived velocity. Our results show a strong repulsion aftereffect, away from the adapting velocity (downward and slower), that increases gradually for faster test stimuli as long as these stimuli include some velocities that match some of the ambiguous ones of the adaptor. In summary, the visual system seems to adapt to the multiple velocities of an ambiguous stimulus even though a single velocity is perceived. Our findings can be well described by a computational model that assumes a joint encoding of direction and speed and that includes an extended adaptation component that can represent all the possible velocities of the ambiguous stimulus.
Affiliation(s)
- Nikos Gekas
- School of Psychology, University of Nottingham, Nottingham, UK.
- Laboratoire des Systèmes Perceptifs, Département d'études cognitives, École normale supérieure, PSL University, CNRS, Paris, France.
- Pascal Mamassian
- Laboratoire des Systèmes Perceptifs, Département d'études cognitives, École normale supérieure, PSL University, CNRS, Paris, France.
6
|
Daucé E, Albiges P, Perrinet LU. A dual foveal-peripheral visual processing model implements efficient saccade selection. J Vis 2020; 20:22. [PMID: 38755789 PMCID: PMC7443118 DOI: 10.1167/jov.20.8.22] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2020] [Accepted: 05/07/2020] [Indexed: 11/24/2022] Open
Abstract
We develop a visuomotor model that implements visual search as a focal accuracy-seeking policy, with the target's position and category drawn independently from a common generative process. Consistently with the anatomical separation between the ventral versus dorsal pathways, the model is composed of two pathways that respectively infer what to see and where to look. The "What" network is a classical deep learning classifier that only processes a small region around the center of fixation, providing a "foveal" accuracy. In contrast, the "Where" network processes the full visual field in a biomimetic fashion, using a log-polar retinotopic encoding, which is preserved up to the action selection level. In our model, the foveal accuracy is used as a monitoring signal to train the "Where" network, much like in the "actor/critic" framework. After training, the "Where" network provides an "accuracy map" that serves to guide the eye toward peripheral objects. Finally, the comparison of both networks' accuracies amounts to either selecting a saccade or keeping the eye focused at the center to identify the target. We test this setup on a simple task of finding a digit in a large, cluttered image. Our simulation results demonstrate the effectiveness of this approach, increasing by one order of magnitude the radius of the visual field toward which the agent can detect and recognize a target, either through a single saccade or with multiple ones. Importantly, our log-polar treatment of the visual information exploits the strong compression rate performed at the sensory level, providing ways to implement visual search in a sublinear fashion, in contrast with mainstream computer vision.
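The log-polar retinotopic encoding that gives the "Where" pathway its sublinear scaling can be illustrated with a toy sampler. The grid sizes and nearest-pixel read-out below are illustrative choices for a sketch, not the paper's implementation:

```python
import numpy as np

def log_polar_grid(n_ecc=8, n_theta=16, r_min=2.0, r_max=60.0):
    """Sample centers on a log-polar lattice: radii grow geometrically,
    so the fovea is densely covered and the periphery coarsely."""
    radii = np.geomspace(r_min, r_max, n_ecc)
    angles = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    return (radii[:, None] * np.cos(angles)[None, :],
            radii[:, None] * np.sin(angles)[None, :])

def log_polar_sample(img, cx, cy, **grid_kw):
    """Nearest-pixel read-out of `img` at log-polar positions around (cx, cy)."""
    xs, ys = log_polar_grid(**grid_kw)
    h, w = img.shape
    rows = np.clip(np.round(cy + ys).astype(int), 0, h - 1)
    cols = np.clip(np.round(cx + xs).astype(int), 0, w - 1)
    return img[rows, cols]                       # shape (n_ecc, n_theta)

img = np.arange(128 * 128, dtype=float).reshape(128, 128)
code = log_polar_sample(img, cx=64, cy=64)       # 128 samples for a 16384-pixel field
```

With 8 × 16 = 128 samples covering a 128 × 128 field, the compression rate grows with image size, which is the property that makes peripheral search scale sublinearly with the radius of the visual field.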
Affiliation(s)
- Emmanuel Daucé
- Institut de Neurosciences de la Timone (UMR 7289), Aix Marseille University, CNRS, Marseille, France.
- Pierre Albiges
- Institut de Neurosciences de la Timone (UMR 7289), Aix Marseille University, CNRS, Marseille, France.
- Laurent U Perrinet
- Institut de Neurosciences de la Timone (UMR 7289), Aix Marseille University, CNRS, Marseille, France.
- https://laurentperrinet.github.io/
7
|
Yildizoglu T, Riegler C, Fitzgerald JE, Portugues R. A Neural Representation of Naturalistic Motion-Guided Behavior in the Zebrafish Brain. Curr Biol 2020; 30:2321-2333.e6. [PMID: 32386533 DOI: 10.1016/j.cub.2020.04.043] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2018] [Revised: 03/13/2020] [Accepted: 04/20/2020] [Indexed: 11/20/2022]
Abstract
All animals must transform ambiguous sensory data into successful behavior. This requires sensory representations that accurately reflect the statistics of natural stimuli and behavior. Multiple studies show that visual motion processing is tuned for accuracy under naturalistic conditions, but the sensorimotor circuits extracting these cues and implementing motion-guided behavior remain unclear. Here we show that the larval zebrafish retina extracts a diversity of naturalistic motion cues, and the retinorecipient pretectum organizes these cues around the elements of behavior. We find that higher-order motion stimuli, gliders, induce optomotor behavior matching expectations from natural scene analyses. We then image activity of retinal ganglion cell terminals and pretectal neurons. The retina exhibits direction-selective responses across glider stimuli, and anatomically clustered pretectal neurons respond with magnitudes matching behavior. Peripheral computations thus reflect natural input statistics, whereas central brain activity precisely codes information needed for behavior. This general principle could organize sensorimotor transformations across animal species.
Affiliation(s)
- Tugce Yildizoglu
- Max Planck Institute of Neurobiology, Research Group of Sensorimotor Control, Martinsried 82152, Germany.
- Clemens Riegler
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138, USA.
- Department of Neurobiology, Faculty of Life Sciences, University of Vienna, Althanstrasse 14, 1090 Vienna, Austria.
- James E Fitzgerald
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA.
- Ruben Portugues
- Max Planck Institute of Neurobiology, Research Group of Sensorimotor Control, Martinsried 82152, Germany.
- Institute of Neuroscience, Technical University of Munich, Munich 80802, Germany.
- Munich Cluster for Systems Neurology (SyNergy), Munich 80802, Germany.
8
|
Ananyev E, Yong Z, Hsieh PJ. Center-surround velocity-based segmentation: Speed, eccentricity, and timing of visual stimuli interact to determine interocular dominance. J Vis 2020; 19:3. [PMID: 31689716 DOI: 10.1167/19.13.3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
We used a novel method to capture the spatial dominance pattern of competing motion fields at rivalry onset. When rivaling velocities were different, the participants reported center-surround segmentation: The slower stimuli often dominated in the center while faster motion persisted along the borders. The size of the central static/slow field scaled with the stimulus size. The central dominance was time-locked to the static stimulus onset but was disrupted if the dynamic stimulus was presented later. We then used the same stimuli as masks in an interocular suppression paradigm. The local suppression strengths were probed with targets at different eccentricities. Consistent with the center-surround segmentation, target speed and location interacted with mask velocities. Specifically, suppression power of the slower masks was nonhomogenous with eccentricity, providing a potential explanation for center-surround velocity-based segmentation. This interaction of speed, eccentricity, and timing has implications for motion processing and interocular suppression. The influence of different masks on which target features get suppressed predicts that some "unconscious effects" are not generalizable across masks and, thus, need to be replicated under various masking conditions.
Affiliation(s)
- Egor Ananyev
- Nanyang Technological University, Department of Psychology, Singapore.
- Zixin Yong
- Duke-NUS Medical School, Neuroscience and Behavioural Disorders Program, Singapore.
- Po-Jang Hsieh
- National Taiwan University, Department of Psychology, Taipei, Taiwan.
9
|
Shi Q, Gupta P, Boukhvalova AK, Singer JH, Butts DA. Functional characterization of retinal ganglion cells using tailored nonlinear modeling. Sci Rep 2019; 9:8713. [PMID: 31213620 PMCID: PMC6581951 DOI: 10.1038/s41598-019-45048-8] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2018] [Accepted: 05/31/2019] [Indexed: 01/30/2023] Open
Abstract
The mammalian retina encodes the visual world in action potentials generated by 20-50 functionally and anatomically distinct types of retinal ganglion cell (RGC). Individual RGC types receive synaptic input from distinct presynaptic circuits; therefore, their responsiveness to specific features in the visual scene arises from the information encoded in synaptic input and shaped by postsynaptic signal integration and spike generation. Unfortunately, there is a dearth of tools for characterizing the computations reflected in RGC spike output. Therefore, we developed a statistical model, the separable Nonlinear Input Model, to characterize the excitatory and suppressive components of RGC receptive fields. We recorded RGC responses to a correlated noise ("cloud") stimulus in an in vitro preparation of mouse retina and found that our model accurately predicted RGC responses at high spatiotemporal resolution. It identified multiple receptive fields reflecting the main excitatory and suppressive components of the response of each neuron. Significantly, our model accurately identified ON-OFF cells and distinguished their distinct ON and OFF receptive fields, and it demonstrated a diversity of suppressive receptive fields in the RGC population. In total, our method offers a rich description of RGC computation and sets a foundation for relating it to retinal circuitry.
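A minimal two-subunit version of such a nonlinear model can be sketched as follows. The filters, rectifying subunit nonlinearities, and softplus spiking nonlinearity here are generic stand-ins, not the fitted components of the paper's separable Nonlinear Input Model:

```python
import numpy as np

def nim_rate(stim, k_exc, k_sup, offset=-0.5):
    """Firing rate of a toy two-subunit Nonlinear Input Model: rectified
    excitatory and suppressive projections of the stimulus are subtracted,
    then passed through a softplus spiking nonlinearity."""
    g_exc = np.maximum(stim @ k_exc, 0.0)        # excitatory subunit drive
    g_sup = np.maximum(stim @ k_sup, 0.0)        # suppressive subunit drive
    return np.log1p(np.exp(g_exc - g_sup + offset))

rng = np.random.default_rng(2)
stim = rng.normal(size=(1000, 20))               # 1000 frames of a 20-pixel stimulus
k_exc = rng.normal(size=20)                      # stand-in excitatory receptive field
k_sup = rng.normal(size=20)                      # stand-in suppressive receptive field
rates = nim_rate(stim, k_exc, k_sup)
```

In the fitted model, the subunit filters and nonlinearities are estimated from data, which is how the separate excitatory and suppressive receptive fields described above are recovered.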
Affiliation(s)
- Qing Shi
- Department of Biology, University of Maryland, College Park, MD, United States.
- Pranjal Gupta
- Department of Biology, University of Maryland, College Park, MD, United States.
- Joshua H Singer
- Department of Biology, University of Maryland, College Park, MD, United States.
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, United States.
- Daniel A Butts
- Department of Biology, University of Maryland, College Park, MD, United States.
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, United States.
10. Speed-Selectivity in Retinal Ganglion Cells is Sharpened by Broad Spatial Frequency, Naturalistic Stimuli. Sci Rep 2019; 9:456. PMID: 30679564. PMCID: PMC6345785. DOI: 10.1038/s41598-018-36861-8.
Abstract
Motion detection represents one of the critical tasks of the visual system and has motivated a large body of research. However, it remains unclear precisely why the response of retinal ganglion cells (RGCs) to simple artificial stimuli does not predict their response to complex, naturalistic stimuli. To explore this topic, we use Motion Clouds (MC), synthetic textures that preserve properties of natural images and are parameterized; in particular, the spatiotemporal spectral complexity of the stimulus can be modulated by adjusting its frequency bandwidths. By stimulating the retina of the diurnal rodent Octodon degus with MCs, we show that the RGCs respond to increasingly complex stimuli by narrowing their motion tuning curves. At the level of the population, complex stimuli produce a sparser code while preserving movement information; therefore, the stimuli are encoded more efficiently. Interestingly, these properties were observed throughout different populations of RGCs. Thus, our results reveal that the response at the level of RGCs is modulated by the naturalness of the stimulus - in particular for motion - which suggests that tuning to the statistics of natural images already emerges at the level of the retina.
11
|
Vacher J, Meso AI, Perrinet LU, Peyré G. Bayesian Modeling of Motion Perception Using Dynamical Stochastic Textures. Neural Comput 2018; 30:3355-3392. [DOI: 10.1162/neco_a_01142] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
A common practice to account for psychophysical biases in vision is to frame them as consequences of a dynamic process relying on optimal inference with respect to a generative model. The study presented here details the complete formulation of such a generative model intended to probe visual motion perception with a dynamic texture model. It is derived in a set of axiomatic steps constrained by biological plausibility. We extend previous contributions by detailing three equivalent formulations of this texture model. First, the composite dynamic textures are constructed by the random aggregation of warped patterns, which can be viewed as three-dimensional gaussian fields. Second, these textures are cast as solutions to a stochastic partial differential equation (sPDE). This essential step enables real-time, on-the-fly texture synthesis using time-discretized autoregressive processes. It also allows for the derivation of a local motion-energy model, which corresponds to the log likelihood of the probability density. The log likelihoods are essential for the construction of a Bayesian inference framework. We use the dynamic texture model to psychophysically probe speed perception in humans using zoom-like changes in the spatial frequency content of the stimulus. The human data replicate previous findings showing perceived speed to be positively biased by spatial frequency increments. A Bayesian observer who combines a gaussian likelihood centered at the true speed and a spatial frequency dependent width with a “slow-speed prior” successfully accounts for the perceptual bias. More precisely, the bias arises from a decrease in the observer's likelihood width estimated from the experiments as the spatial frequency increases. Such a trend is compatible with the trend of the dynamic texture likelihood width.
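The Bayesian observer at the end of this abstract reduces to a one-line posterior computation: a Gaussian likelihood centered on the true speed combined with a Gaussian slow-speed prior centered on zero. The numerical likelihood widths below are illustrative assumptions; only their decrease with spatial frequency is taken from the text:

```python
import numpy as np

def perceived_speed(v_true, sigma_lik, sigma_prior=1.0):
    """Posterior mean for a Gaussian likelihood N(v_true, sigma_lik^2)
    combined with a slow-speed prior N(0, sigma_prior^2): the estimate is
    the true speed shrunk toward zero by the relative likelihood width."""
    gain = sigma_prior**2 / (sigma_prior**2 + np.asarray(sigma_lik)**2)
    return gain * v_true

sf = np.array([0.5, 1.0, 2.0, 4.0])              # spatial frequencies (arbitrary units)
sigma_lik = 1.0 / np.sqrt(sf)                    # assumed: width shrinks as frequency grows
v_hat = perceived_speed(5.0, sigma_lik)
# narrower likelihood at high spatial frequency -> less shrinkage toward zero,
# so perceived speed increases with spatial frequency, as reported
```

This makes explicit why a spatial-frequency-dependent likelihood width is sufficient to produce the positive speed bias: the prior pulls the estimate toward zero less strongly when the measurement is more reliable.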
Affiliation(s)
- Jonathan Vacher
- Département de Mathématique et Applications, École Normale Supérieure, Paris 75005, France; UNIC, Gif-sur-Yvette 91190, France; and CNRS, France.
- Andrew Isaac Meso
- Institut de Neurosciences de la Timone, Marseille 13005, France, and Faculty of Science and Technology, Bournemouth University, Poole BH12 5BB, UK.
- Laurent U. Perrinet
- Institut de Neurosciences de la Timone, Marseille 13005, France, and CNRS, France.
- Gabriel Peyré
- Département de Mathématique et Applications, École Normale Supérieure, Paris 75005, France, and CNRS, France.
12. Lawful tracking of visual motion in humans, macaques, and marmosets in a naturalistic, continuous, and untrained behavioral context. Proc Natl Acad Sci U S A 2018; 115:E10486-E10494. PMID: 30322919. PMCID: PMC6217422. DOI: 10.1073/pnas.1807192115.
Abstract
We characterize spatiotemporal integration of naturalistic, continuous visual motion of three primate species (humans, macaques, and marmosets). All three species volitionally, but naturally, track the center of expansion of a dynamic optic flow field. Detailed analysis of this flow-tracking behavior reveals lawful and repeatable dependencies of the behavior on nuances in the stimulus, revealing that even unconstrained and continuous behavior can exhibit the sort of precise dependencies typically studied only in artificial and constrained tasks. Much study of the visual system has focused on how humans and monkeys integrate moving stimuli over space and time. Such assessments of spatiotemporal integration provide fundamental grounding for the interpretation of neurophysiological data, as well as how the resulting neural signals support perceptual decisions and behavior. However, the insights supported by classical characterizations of integration performed in humans and rhesus monkeys are potentially limited with respect to both generality and detail: Standard tasks require extensive amounts of training, involve abstract stimulus–response mappings, and depend on combining data across many trials and/or sessions. It is thus of concern that the integration observed in classical tasks involves the recruitment of brain circuits that might not normally subsume natural behaviors, and that quantitative analyses have limited power for characterizing single-trial or single-session processes. Here we bridge these gaps by showing that three primate species (humans, macaques, and marmosets) track the focus of expansion of an optic flow field continuously and without substantial training. This flow-tracking behavior was volitional and reflected substantial temporal integration. 
Most strikingly, gaze patterns exhibited lawful and nuanced dependencies on random perturbations in the stimulus, such that repetitions of identical flow movies elicited remarkably similar eye movements over long and continuous time periods. These results demonstrate the generality of spatiotemporal integration in natural vision, and offer a means for studying integration outside of artificial tasks while maintaining lawful and highly reliable behavior.
13
|
Kreyenmeier P, Fooken J, Spering M. Context effects on smooth pursuit and manual interception of a disappearing target. J Neurophysiol 2017; 118:404-415. [PMID: 28515287 DOI: 10.1152/jn.00217.2017] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2017] [Revised: 04/25/2017] [Accepted: 05/12/2017] [Indexed: 11/22/2022] Open
Abstract
In our natural environment, we interact with moving objects that are surrounded by richly textured, dynamic visual contexts. Yet most laboratory studies on vision and movement show visual objects in front of uniform gray backgrounds. Context effects on eye movements have been widely studied, but it is less well known how visual contexts affect hand movements. Here we ask whether eye and hand movements integrate motion signals from target and context similarly or differently, and whether context effects on eye and hand change over time. We developed a track-intercept task requiring participants to track the initial launch of a moving object ("ball") with smooth pursuit eye movements. The ball disappeared after a brief presentation, and participants had to intercept it in a designated "hit zone." In two experiments (n = 18 human observers each), the ball was shown in front of a uniform or a textured background that either was stationary or moved along with the target. Eye and hand movement latencies and speeds were similarly affected by the visual context, but eye and hand interception (eye position at time of interception, and hand interception timing error) did not differ significantly between context conditions. Eye and hand interception timing errors were strongly correlated on a trial-by-trial basis across all context conditions, highlighting the close relation between these responses in manual interception tasks. Our results indicate that visual contexts similarly affect eye and hand movements but that these effects may be short-lasting, affecting movement trajectories more than movement end points.
NEW & NOTEWORTHY
In a novel track-intercept paradigm, human observers tracked a briefly shown object moving across a textured, dynamic context and intercepted it with their finger after it had disappeared. Context motion significantly affected eye and hand movement latency and speed, but not interception accuracy; eye and hand position at interception were correlated on a trial-by-trial basis. Visual context effects may be short-lasting, affecting movement trajectories more than movement end points.
Affiliation(s)
- Philipp Kreyenmeier
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada.
- Graduate Program in Neuro-Cognitive Psychology, Ludwig Maximilian University, Munich, Germany.
- Jolande Fooken
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada.
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada.
- Miriam Spering
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada.
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada.
- Center for Brain Health, University of British Columbia, Vancouver, Canada.
- Institute for Information, Computing and Cognitive Systems, University of British Columbia, Vancouver, Canada.
- International Collaboration on Repair Discoveries, Vancouver, Canada.
14. Gekas N, Meso AI, Masson GS, Mamassian P. A Normalization Mechanism for Estimating Visual Motion across Speeds and Scales. Curr Biol 2017; 27:1514-1520.e3. PMID: 28479319. DOI: 10.1016/j.cub.2017.04.022.
Abstract
Interacting with the natural environment leads to complex stimulations of our senses. Here we focus on the estimation of visual speed, a critical source of information for the survival of many animal species as they monitor moving prey or approaching dangers. In mammals, and in particular in primates, speed information is conceived to be represented by a set of channels sensitive to different spatial and temporal characteristics of the optic flow [1-5]. However, it is still largely unknown how the brain accurately infers the speed of complex natural scenes from this set of spatiotemporal channels [6-14]. As complex stimuli, we chose a set of well-controlled moving naturalistic textures called "compound motion clouds" (CMCs) [15, 16] that simultaneously activate multiple spatiotemporal channels. We found that CMC stimuli that have the same physical speed are perceived moving at different speeds depending on which channel combinations are activated. We developed a computational model demonstrating that the activity in a given channel is both boosted and weakened after a systematic pattern over neighboring channels. This pattern of interactions can be understood as a combination of two components oriented in speed (consistent with a slow-speed prior) and scale (sharpening of similar features). Interestingly, the interaction along scale implements a lateral inhibition mechanism, a canonical principle that hitherto was found to operate mainly in early sensory processing. Overall, the speed-scale normalization mechanism may reflect the natural tendency of the visual system to integrate complex inputs into one coherent percept.
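One way to picture the proposed interaction pattern is a grid of channels indexed by speed and scale, with excitatory pooling along the speed axis and divisive, lateral-inhibition-like suppression along the scale axis. The nearest-neighbor structure and gain values below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

def channel_interaction(resp, w_speed=0.4, w_scale=0.4):
    """resp[i, j]: activity of the channel at speed index i, scale index j.
    Each channel is boosted by its speed neighbors (excitatory pooling) and
    divisively suppressed by its scale neighbors (lateral inhibition)."""
    pooled = resp.copy()
    pooled[1:, :] += w_speed * resp[:-1, :]      # pool from slower-speed neighbor
    pooled[:-1, :] += w_speed * resp[1:, :]      # pool from faster-speed neighbor
    inhib = np.zeros_like(resp)
    inhib[:, 1:] += resp[:, :-1]                 # suppression from finer scale
    inhib[:, :-1] += resp[:, 1:]                 # suppression from coarser scale
    return pooled / (1.0 + w_scale * inhib)      # divisive normalization

resp = np.zeros((5, 5))
resp[2, 1] = resp[2, 3] = 1.0                    # two components at the same speed
out = channel_interaction(resp)
```

The two components boost same-speed neighbors while suppressing each other across scale, which is the crossed pattern the abstract describes as sharpening a coherent speed percept.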
Affiliation(s)
- Nikos Gekas
- Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, PSL Research University, CNRS, 29 Rue d'Ulm, Paris 75005, France.
- Andrew I Meso
- Psychology and Interdisciplinary Neuroscience Research, Faculty of Science and Technology, Bournemouth University, Poole BH12 5BB, UK; Institut de Neurosciences de la Timone, UMR 7289, CNRS, Aix-Marseille Université, Marseille 13005, France
- Guillaume S Masson
- Institut de Neurosciences de la Timone, UMR 7289, CNRS, Aix-Marseille Université, Marseille 13005, France
- Pascal Mamassian
- Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, PSL Research University, CNRS, 29 Rue d'Ulm, Paris 75005, France.
15
Taouali W, Benvenuti G, Wallisch P, Chavane F, Perrinet LU. Testing the odds of inherent vs. observed overdispersion in neural spike counts. J Neurophysiol 2016; 115:434-44. [PMID: 26445864 PMCID: PMC4760471 DOI: 10.1152/jn.00194.2015] [Received: 02/27/2015] [Accepted: 10/04/2015] [Indexed: 01/15/2023]
Abstract
The repeated presentation of an identical visual stimulus in the receptive field of a neuron may evoke different spiking patterns at each trial. Probabilistic methods are essential to understand the functional role of this variance within the neural activity. The Poisson process is the most common model of such trial-to-trial variability. For a Poisson process, the variance of the spike count is constrained to be equal to the mean, irrespective of the duration of measurements. Numerous studies have shown that this relationship does not generally hold. Specifically, a majority of electrophysiological recordings show an "overdispersion" effect: responses that exhibit more intertrial variability than expected from a Poisson process alone. A model that is particularly well suited to quantify overdispersion is the Negative-Binomial distribution model. This model is well studied and widely used but has only recently been applied to neuroscience. In this article, we address three main issues. First, we describe how the Negative-Binomial distribution provides a model apt to account for overdispersed spike counts. Second, we propose a statistical test that quantifies, for any neurophysiological data set, the odds that the observed overdispersion could be due to the limited number of repetitions (trials). We apply this test to three neurophysiological data sets along the visual pathway. Finally, we compare the performance of this model to the Poisson model on a population decoding task. We show that the decoding accuracy is improved when accounting for overdispersion, especially under the hypothesis of tuned overdispersion.
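The variance relation at the heart of this argument is easy to check numerically. The sketch below illustrates the dispersion properties of the two count models (it is not the paper's statistical test; the mean count and dispersion parameter are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, mean_count = 10_000, 8.0

# Poisson spike counts: variance is constrained to equal the mean,
# so the Fano factor (variance / mean) stays near 1 whatever the mean.
poisson_counts = rng.poisson(mean_count, size=n_trials)

# Negative-Binomial counts with dispersion parameter r:
# variance = mean + mean**2 / r, i.e. Fano factor = 1 + mean / r > 1.
r = 4.0
p = r / (r + mean_count)  # (r, p) parameterisation that keeps the same mean
nb_counts = rng.negative_binomial(r, p, size=n_trials)

def fano(counts):
    return counts.var() / counts.mean()

print(f"Poisson Fano factor:           {fano(poisson_counts):.2f}")
print(f"Negative-Binomial Fano factor: {fano(nb_counts):.2f}")
```

Counts drawn from the Negative-Binomial model keep a Fano factor well above 1 no matter how many trials are collected, which is what separates inherent overdispersion from finite-sample fluctuations around a Poisson baseline.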
Affiliation(s)
- Wahiba Taouali
- Institut de Neurosciences de la Timone, Centre National de la Recherche Scientifique, Aix-Marseille Université, Marseille, France
- Giacomo Benvenuti
- Institut de Neurosciences de la Timone, Centre National de la Recherche Scientifique, Aix-Marseille Université, Marseille, France
- Pascal Wallisch
- Center for Neural Science, New York University, New York, New York
- Frédéric Chavane
- Institut de Neurosciences de la Timone, Centre National de la Recherche Scientifique, Aix-Marseille Université, Marseille, France
- Laurent U Perrinet
- Institut de Neurosciences de la Timone, Centre National de la Recherche Scientifique, Aix-Marseille Université, Marseille, France
16
Abstract
Are sensory estimates formed centrally in the brain and then shared between perceptual and motor pathways, or is centrally represented sensory activity decoded independently to drive awareness and action? Questions about the brain's information flow pose a challenge because systems-level estimates of environmental signals are only accessible indirectly as behavior. Assessing whether sensory estimates are shared between perceptual and motor circuits requires comparing perceptual reports with motor behavior arising from the same sensory activity. Extrastriate visual cortex both mediates the perception of visual motion and provides the visual inputs for behaviors such as smooth pursuit eye movements. Pursuit has been a valuable testing ground for theories of sensory information processing because the neural circuits and physiological response properties of motion-responsive cortical areas are well studied, sensory estimates of visual motion signals are formed quickly, and the initiation of pursuit is closely coupled to sensory estimates of target motion. Here, we analyzed variability in visually driven smooth pursuit and perceptual reports of target direction and speed in human subjects while we manipulated the signal-to-noise level of motion estimates. Comparable levels of variability throughout viewing time and across conditions provide evidence for shared noise sources in the perception and action pathways arising from a common sensory estimate. We found, however, that conditions producing poor, low-gain pursuit create a discrepancy between the precision of perception and that of pursuit. Differences in pursuit gain arising from differences in optic flow strength in the stimulus reconcile much of the controversy on this topic.
17
Perrinet LU, Adams RA, Friston KJ. Active inference, eye movements and oculomotor delays. Biol Cybern 2014; 108:777-801. [PMID: 25128318 PMCID: PMC4250571 DOI: 10.1007/s00422-014-0620-8] [Received: 01/02/2013] [Accepted: 07/08/2014] [Indexed: 05/26/2023]
Abstract
This paper considers the problem of sensorimotor delays in the optimal control of (smooth) eye movements under uncertainty. Specifically, we consider delays in the visuo-oculomotor loop and their implications for active inference. Active inference uses a generalisation of Kalman filtering to provide Bayes optimal estimates of hidden states and action in generalised coordinates of motion. Representing hidden states in generalised coordinates provides a simple way of compensating for both sensory and oculomotor delays. The efficacy of this scheme is illustrated using neuronal simulations of pursuit initiation responses, with and without compensation. We then consider an extension of the generative model to simulate smooth pursuit eye movements, in which the visuo-oculomotor system believes both the target and its centre of gaze are attracted to a (hidden) point moving in the visual field. Finally, the generative model is equipped with a hierarchical structure, so that it can recognise and remember unseen (occluded) trajectories and emit anticipatory responses. These simulations speak to a straightforward and neurobiologically plausible solution to the generic problem of integrating information from different sources with different temporal delays, and to the particular difficulties encountered when a system, like the oculomotor system, tries to control its environment with delayed signals.
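The delay-compensation idea can be illustrated outside the full active-inference machinery. In generalised coordinates a signal is carried together with its temporal derivatives, so a known delay tau can be absorbed by a truncated Taylor extrapolation. The sketch below is a minimal illustration of that principle only (the sinusoidal target and the 50 ms delay are arbitrary assumptions, not the paper's simulation):

```python
import math

def extrapolate(gen_coords, tau):
    """Predict x(t + tau) from generalised coordinates [x, x', x'', ...]
    via a truncated Taylor series."""
    return sum(d * tau**k / math.factorial(k) for k, d in enumerate(gen_coords))

# A target moving sinusoidally at 1 Hz, sensed with a 50 ms delay.
omega, tau, t = 2 * math.pi, 0.05, 1.0

# Generalised coordinates of sin(omega * t): value plus three derivatives.
coords = [math.sin(omega * t),
          omega * math.cos(omega * t),
          -omega**2 * math.sin(omega * t),
          -omega**3 * math.cos(omega * t)]

predicted = extrapolate(coords, tau)
actual = math.sin(omega * (t + tau))
print(f"predicted {predicted:.4f}, actual {actual:.4f}")
```

Dropping the derivative terms reduces the same function to using the stale sensed value, which lags the target by tau; carrying the derivatives is what lets a controller behave as if the loop had no delay.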
Affiliation(s)
- Laurent U Perrinet
- Institut de Neurosciences de la Timone, CNRS/Aix-Marseille Université, Marseille, France
18
Simoncini C, Perrinet LU, Montagnini A, Mamassian P, Masson GS. More is not always better: adaptive gain control explains dissociation between perception and action. Nat Neurosci 2012; 15:1596-603. [PMID: 23023292 DOI: 10.1038/nn.3229] [Received: 06/26/2012] [Accepted: 09/05/2012] [Indexed: 11/09/2022]
Abstract
Moving objects generate motion information at different scales, which are processed in the visual system with a bank of spatiotemporal frequency channels. It is not known how the brain pools this information to reconstruct object speed and whether this pooling is generic or adaptive; that is, dependent on the behavioral task. We used rich textured motion stimuli of varying bandwidths to decipher how the human visual motion system computes object speed in different behavioral contexts. We found that, although a simple visuomotor behavior such as short-latency ocular following responses takes advantage of the full distribution of motion signals, perceptual speed discrimination is impaired for stimuli with large bandwidths. Such opposite dependencies can be explained by an adaptive gain control mechanism in which the divisive normalization pool is adjusted to meet the different constraints of perception and action.
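The gain-control mechanism invoked here can be caricatured in a few lines. In the toy sketch below (our illustration with arbitrary parameters, not the model fitted in the paper), each channel's motion energy is divided by a constant plus the summed energy of a normalization pool; widening the pool makes the total population response nearly invariant to how the stimulus energy is spread across channels:

```python
import numpy as np

def normalize(energy, pool, sigma=0.1, n=2.0):
    """Divisive normalization: each channel's drive is divided by a
    semi-saturation constant plus a weighted sum over its pool."""
    e = energy ** n
    return e / (sigma ** n + pool @ e)

n_channels = 5
narrow = np.array([0., 0., 1., 0., 0.])        # all drive in one channel
broad = np.full(n_channels, 1.0 / n_channels)  # same total drive, spread out

self_pool = np.eye(n_channels)                 # each channel normalizes itself
wide_pool = np.ones((n_channels, n_channels))  # pool over the whole bank

for label, stim in [("narrow", narrow), ("broad", broad)]:
    print(f"{label}: self-pool total = {normalize(stim, self_pool).sum():.2f}, "
          f"wide-pool total = {normalize(stim, wide_pool).sum():.2f}")
```

With the wide pool, the summed response barely changes between the narrow and broad stimuli, a crude analogue of a normalization pool adjusted to the full distribution of motion signals; with self-normalization only, the broadband stimulus drives a much larger total response.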
Affiliation(s)
- Claudio Simoncini
- Team InViBe, Institut de Neurosciences de la Timone, UMR 7289, CNRS and Aix-Marseille Université, Marseille, France