1
Gur M. Seeing on the fly: Physiological and behavioral evidence show that space-to-space representation and processing enable fast and efficient performance by the visual system. J Vis 2024; 24(11):11. PMID: 39392446; PMCID: PMC11472890; DOI: 10.1167/jov.24.11.11.
Abstract
When we view the world, our eyes saccade quickly between points of interest. Even when fixating a target, our eyes are not completely at rest but execute small fixational eye movements (FEMs). The fact that vision is not blurred despite this ever-present jitter has seemingly motivated an increasingly popular theory denying the reliance of the visual system on pure spatial processing in favor of a space-to-time mechanism generated by the eye drifting across the image. Accordingly, FEMs are not detrimental but rather essential to good visibility. However, the space-to-time theory is incompatible with physiological data showing that all information is conveyed by the short neural volleys generated when the eyes land on a target, and with our faithful perception of briefly displayed objects, during which time FEMs have no effect. Another difficulty in rejecting the idea of image representation by the locations and nature of responding cells in favor of a time code is that somewhere, somehow, this code must be decoded into a parallel spatial one by the time it reaches perception. Thus, in addition to the implausibility of generating meaningful responses during retinal drift, the space-to-time hypothesis calls for replacing efficient point-to-point parallel transmission with a cumbersome, delayed, space-to-time-to-space process. A novel physiological framework is presented here wherein the ability of the visual system to quickly process information is mediated by the short, powerful neural volleys generated by the landing saccades. These volleys are necessary and sufficient for normal perception without any contribution from FEMs. This mechanism enables our excellent perception of brief stimuli and explains why vision is not blurred by FEMs: they do not generate useful information.
Affiliation(s)
- Moshe Gur: Department of Biomedical Engineering, Technion-Israel Institute of Technology, Haifa, Israel
2
Samonds JM, Szinte M, Barr C, Montagnini A, Masson GS, Priebe NJ. Mammals Achieve Common Neural Coverage of Visual Scenes Using Distinct Sampling Behaviors. eNeuro 2024; 11:ENEURO.0287-23.2023. PMID: 38164577; PMCID: PMC10860624; DOI: 10.1523/eneuro.0287-23.2023.
Abstract
Most vertebrates use head and eye movements to quickly change gaze orientation and sample different portions of the environment with periods of stable fixation. Visual information must be integrated across fixations to construct a complete perspective of the visual environment. In concert with this sampling strategy, neurons adapt to unchanging input to conserve energy and ensure that only novel information from each fixation is processed. We demonstrate how adaptation recovery times and saccade properties interact and thus shape spatiotemporal tradeoffs observed in the motor and visual systems of mice, cats, marmosets, macaques, and humans. These tradeoffs predict that in order to achieve similar visual coverage over time, animals with smaller receptive field sizes require faster saccade rates. Indeed, we find comparable sampling of the visual environment by neuronal populations across mammals when integrating measurements of saccadic behavior with receptive field sizes and V1 neuronal density. We propose that these mammals share a common statistically driven strategy of maintaining coverage of their visual environment over time calibrated to their respective visual system characteristics.
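The tradeoff stated here (animals with smaller receptive fields need faster saccade rates to achieve similar coverage over time) reduces to a one-line calculation. The Python sketch below is not the authors' analysis; the receptive field diameters and the coverage target are invented placeholders, and "coverage" is crudely approximated as the area of fresh scene delivered to one receptive field per second.

```python
# Toy illustration (not the paper's analysis): a fixed "coverage budget" links
# receptive field (RF) size to the saccade rate needed to meet it.
# All numbers below are placeholders chosen only for illustration.

def required_saccade_rate(rf_diameter_deg, coverage_deg2_per_s):
    """Saccades/s needed if each saccade refreshes roughly one RF-sized
    patch (area ~ d^2) of new scene content for a given neuron."""
    refreshed_area_per_saccade = rf_diameter_deg ** 2
    return coverage_deg2_per_s / refreshed_area_per_saccade

coverage_target = 4.0  # deg^2 of fresh input per neuron per second (made up)
for species, rf in [("small-RF animal", 1.0),
                    ("mid-RF animal", 2.0),
                    ("large-RF animal", 4.0)]:
    rate = required_saccade_rate(rf, coverage_target)
    print(f"{species}: RF {rf:.1f} deg -> ~{rate:.2f} saccades/s")
```

Under this toy metric the required saccade rate scales inversely with receptive field area, which is the direction of the tradeoff the abstract reports.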
Affiliation(s)
- Jason M Samonds: Center for Learning and Memory and the Institute for Neuroscience, The University of Texas at Austin, Austin, TX 78712
- Martin Szinte: Institut de Neurosciences de la Timone (UMR 7289), Centre National de la Recherche Scientifique and Aix-Marseille Université, 13385 Marseille, France
- Carrie Barr: Center for Learning and Memory and the Institute for Neuroscience, The University of Texas at Austin, Austin, TX 78712
- Anna Montagnini: Institut de Neurosciences de la Timone (UMR 7289), Centre National de la Recherche Scientifique and Aix-Marseille Université, 13385 Marseille, France
- Guillaume S Masson: Institut de Neurosciences de la Timone (UMR 7289), Centre National de la Recherche Scientifique and Aix-Marseille Université, 13385 Marseille, France
- Nicholas J Priebe: Center for Learning and Memory and the Institute for Neuroscience, The University of Texas at Austin, Austin, TX 78712
3
Samonds JM, Szinte M, Barr C, Montagnini A, Masson GS, Priebe NJ. Mammals achieve common neural coverage of visual scenes using distinct sampling behaviors. bioRxiv 2023: 2023.03.20.533210 (preprint of the eNeuro article listed above). PMID: 36993477; PMCID: PMC10055212; DOI: 10.1101/2023.03.20.533210.
Abstract
Most vertebrates use head and eye movements to quickly change gaze orientation and sample different portions of the environment with periods of stable fixation. Visual information must be integrated across several fixations to construct a more complete perspective of the visual environment. In concert with this sampling strategy, neurons adapt to unchanging input to conserve energy and ensure that only novel information from each fixation is processed. We demonstrate how adaptation recovery times and saccade properties interact, and thus shape spatiotemporal tradeoffs observed in the motor and visual systems of different species. These tradeoffs predict that in order to achieve similar visual coverage over time, animals with smaller receptive field sizes require faster saccade rates. Indeed, we find comparable sampling of the visual environment by neuronal populations across mammals when integrating measurements of saccadic behavior with receptive field sizes and V1 neuronal density. We propose that these mammals share a common statistically driven strategy of maintaining coverage of their visual environment over time calibrated to their respective visual system characteristics.
4
Testa S, Sabatini SP, Canessa A. Active fixation as an efficient coding strategy for neuromorphic vision. Sci Rep 2023; 13:7445. PMID: 37156822; PMCID: PMC10167324; DOI: 10.1038/s41598-023-34508-x.
Abstract
Contrary to a photographer, who puts great effort into keeping the lens still, the eyes insistently move even during fixation. This benefits signal decorrelation, which underlies an efficient encoding of visual information. Yet camera motion alone is not sufficient; it must be coupled with a sensor specifically selective to temporal changes. Indeed, motion imposed on standard imagers results only in blurring. Neuromorphic sensors represent a valuable solution. Here we characterize the response of an event-based camera equipped with fixational eye movements (FEMs) on both synthetic and natural images. Our analyses show that the system performs an early stage of redundancy suppression, a precursor of subsequent whitening of the amplitude spectrum. This does not come at the price of corrupting structural information contained in local spatial phase across oriented axes. Isotropy of FEMs ensures proper representations of image features without introducing biases towards specific contrast orientations.
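A minimal emulation of the idea, not the authors' pipeline: a change-detecting sensor samples a static synthetic image along a small random walk and reports "events" only where the log intensity has changed, so stationary pixels produce no output. The image statistics, jitter amplitude, and contrast threshold below are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a natural image: noise shaped to a ~1/f amplitude spectrum.
n = 128
f = np.fft.fftfreq(n)
fx, fy = np.meshgrid(f, f)
rho = np.hypot(fx, fy)
rho[0, 0] = 1.0                                   # avoid division by zero at DC
image = np.real(np.fft.ifft2(np.fft.fft2(rng.standard_normal((n, n))) / rho))
image = (image - image.min()) / (image.max() - image.min()) + 0.1   # positive intensities

# Fixational jitter: a small random walk of the sensor over the static image (pixels).
steps = rng.integers(-1, 2, size=(60, 2))
positions = np.clip(np.cumsum(steps, axis=0), -16, 16)

# Event-camera-like readout: a pixel emits an "event" only when its log intensity
# changed by more than a contrast threshold since the previous time step.
threshold = 0.05
window = 64
previous = None
event_counts = []
for dy, dx in positions:
    patch = np.log(image[32 + dy:32 + dy + window, 32 + dx:32 + dx + window])
    if previous is not None:
        event_counts.append(int((np.abs(patch - previous) > threshold).sum()))
    previous = patch

fraction = np.mean(event_counts) / window**2
print(f"mean active pixels per jitter step: {np.mean(event_counts):.0f} "
      f"({100 * fraction:.1f}% of a full {window}x{window} frame)")
```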
Affiliation(s)
- Simone Testa: Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), University of Genoa, 16145 Genoa, Italy
- Silvio P Sabatini: Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), University of Genoa, 16145 Genoa, Italy
- Andrea Canessa: Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), University of Genoa, 16145 Genoa, Italy
5
Liu W, Liu X. Pre-stimulus network responses affect information coding in neural variability quenching. Neurocomputing 2023. DOI: 10.1016/j.neucom.2023.02.003.
6
The effects of distractors on brightness perception based on a spiking network. Sci Rep 2023; 13:1517. PMID: 36707550; PMCID: PMC9883501; DOI: 10.1038/s41598-023-28326-4.
Abstract
Visual perception can be modified by the surrounding context. In particular, experimental observations have demonstrated that visual perception and primary visual cortical responses can be modified by the properties of surrounding distractors. However, the underlying mechanism remains unclear. To simulate primary visual cortical activity, we design a k-winner-take-all (k-WTA) spiking network whose responses are generated through probabilistic inference. In simulations, images with the same target and various surrounding distractors serve as stimuli. Distractors are designed with several varying properties, including their luminance, size, and distance to the target; each property is varied in turn while the others are held fixed. Each property can modify second-layer neural responses and interactions in the network. For the same target, the modified network responses reproduce distinct brightness percepts consistent with experimental observations. Our model thus provides a possible explanation of how surrounding distractors modify primary visual cortical responses to induce different brightness percepts of a given target.
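A minimal, rate-based sketch of the k-winner-take-all competition named above (it omits the spiking dynamics and the probabilistic-inference stage of the authors' model): only the k most strongly driven units keep responding, so the drive contributed by distractors determines whether the target unit survives the competition. All drives and the value of k below are invented.

```python
import numpy as np

def k_winner_take_all(drive, k):
    """Return a response vector in which only the k most strongly driven
    units stay active; all other units are silenced (set to zero)."""
    out = np.zeros_like(drive)
    winners = np.argsort(drive)[-k:]          # indices of the k largest drives
    out[winners] = drive[winners]
    return out

rng = np.random.default_rng(1)
target_drive = 1.0                             # feedforward drive from the target (made up)
distractor_luminances = [0.3, 0.9, 1.2]        # hypothetical surround conditions

for lum in distractor_luminances:
    # 20 units: unit 0 encodes the target, the rest encode distractors whose
    # drive scales with distractor luminance (plus a little noise).
    drive = np.concatenate(([target_drive],
                            lum * np.ones(19) + 0.02 * rng.standard_normal(19)))
    response = k_winner_take_all(drive, k=5)
    print(f"distractor luminance {lum:.1f}: "
          f"target response after competition = {response[0]:.2f}")
```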
7
Karamanlis D, Schreyer HM, Gollisch T. Retinal encoding of natural scenes. Annu Rev Vis Sci 2022; 8.
Abstract
An ultimate goal in retina science is to understand how the neural circuit of the retina processes natural visual scenes. Yet most studies in laboratories have long been performed with simple, artificial visual stimuli such as full-field illumination, spots of light, or gratings. The underlying assumption is that the features of the retina thus identified carry over to the more complex scenario of natural scenes. As the application of corresponding natural settings is becoming more commonplace in experimental investigations, this assumption is being put to the test and opportunities arise to discover processing features that are triggered by specific aspects of natural scenes. Here, we review how natural stimuli have been used to probe, refine, and complement knowledge accumulated under simplified stimuli, and we discuss challenges and opportunities along the way toward a comprehensive understanding of the encoding of natural scenes.
Affiliation(s)
- Dimokratis Karamanlis: Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany; Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany; International Max Planck Research School for Neurosciences, Göttingen, Germany
- Helene Marianne Schreyer: Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany; Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Tim Gollisch: Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany; Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany; Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany
8
Schottdorf M, Lee BB. A quantitative description of macaque ganglion cell responses to natural scenes: the interplay of time and space. J Physiol 2021; 599:3169-3193. PMID: 33913164; DOI: 10.1113/jp281200.
Abstract
Key points: Responses to natural scenes are the business of the retina. We find primate ganglion cell responses to such scenes consistent with those to simpler stimuli. A biophysical model confirmed this and predicted ganglion cell responses with close to retinal reliability. Primate ganglion cell responses to natural scenes were driven by temporal variations in colour and luminance over the receptive field centre caused by eye movements, and little influenced by interaction of centre and surround with structure in the scene. We discuss implications in the context of efficient coding of the visual environment. Much information in a higher spatiotemporal frequency band is concentrated in the magnocellular pathway.
Abstract: Responses of visual neurons to natural scenes provide a link between classical descriptions of receptive field structure and visual perception of the natural environment. A natural scene video with a movement pattern resembling that of primate eye movements was used to evoke responses from macaque ganglion cells. Cell responses were well described through known properties of cell receptive fields. Different analyses converge to show that responses primarily derive from the temporal pattern of stimulation derived from eye movements, rather than spatial receptive field structure beyond centre size and position. This was confirmed using a model that predicted ganglion cell responses close to retinal reliability, with only a small contribution of the surround relative to the centre. We also found that the spatiotemporal spectrum of the stimulus is modified in ganglion cell responses, and this can reduce redundancy in the retinal signal. This is more pronounced in the magnocellular pathway, which is much better suited to transmit the detailed structure of natural scenes than the parvocellular pathway. Whitening is less important for chromatic channels. Taken together, this shows how a complex interplay across space, time and spectral content sculpts ganglion cell responses.
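A caricature of the model class described here, not the authors' biophysical model: centre-weighted luminance is sampled along an eye-movement trajectory and passed through a biphasic temporal kernel, so the modeled cell is driven mainly by eye-movement-induced changes in centre luminance. The scene, trajectory, and filter parameters below are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic scene: noise shaped to a ~1/f amplitude spectrum (natural-image stand-in).
n = 256
f = np.fft.fftfreq(n)
fx, fy = np.meshgrid(f, f)
rho = np.hypot(fx, fy)
rho[0, 0] = 1.0
scene = np.real(np.fft.ifft2(np.fft.fft2(rng.standard_normal((n, n))) / rho))

# Gaussian receptive-field centre (weights sum to 1).
size, sigma = 31, 4.0
ax = np.arange(size) - size // 2
gx, gy = np.meshgrid(ax, ax)
center = np.exp(-(gx**2 + gy**2) / (2 * sigma**2))
center /= center.sum()

# Eye-movement trajectory: small drift steps plus one larger saccade-like jump.
steps = rng.integers(-1, 2, size=(200, 2))
steps[100] = [15, 10]
path = np.clip(np.cumsum(steps, axis=0) + n // 2, size, n - size)

# Luminance collected by the centre at each point of the trajectory.
half = size // 2
signal = np.array([(scene[y - half:y + half + 1, x - half:x + half + 1] * center).sum()
                   for y, x in path])

# Biphasic temporal kernel: the cell responds mainly to changes in centre luminance.
t = np.arange(20)
kernel = np.exp(-t / 3.0) - 0.7 * np.exp(-t / 6.0)
drive = np.convolve(signal - signal.mean(), kernel, mode="same")

print(f"typical |drive| during drift:              {np.abs(drive[:85]).mean():.3f}")
print(f"peak |drive| around the saccade-like jump: {np.abs(drive[90:115]).max():.3f}")
```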
Affiliation(s)
- Manuel Schottdorf: Max Planck Institute for Dynamics and Self-Organization, D-37077 Göttingen, Germany; Max Planck Institute of Experimental Medicine, D-37075 Göttingen, Germany; Princeton Neuroscience Institute, Princeton, NJ 08544, USA
- Barry B Lee: Graduate Center for Vision Research, Department of Biological Sciences, SUNY College of Optometry, 33 West 42nd St., New York, NY 10036, USA; Department of Neurobiology, Max Planck Institute for Biophysical Chemistry, D-37077 Göttingen, Germany
9
The effects of eye movements on the visual cortical responding variability based on a spiking network. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.01.013.
10
Shah NP, Chichilnisky EJ. Computational challenges and opportunities for a bi-directional artificial retina. J Neural Eng 2020; 17:055002. PMID: 33089827; DOI: 10.1088/1741-2552/aba8b1.
Abstract
A future artificial retina that can restore high acuity vision in blind people will rely on the capability to both read (observe) and write (control) the spiking activity of neurons using an adaptive, bi-directional and high-resolution device. Although current research is focused on overcoming the technical challenges of building and implanting such a device, exploiting its capabilities to achieve more acute visual perception will also require substantial computational advances. Using high-density large-scale recording and stimulation in the primate retina with an ex vivo multi-electrode array lab prototype, we frame several of the major computational problems, and describe current progress and future opportunities in solving them. First, we identify cell types and locations from spontaneous activity in the blind retina, and then efficiently estimate their visual response properties by using a low-dimensional manifold of inter-retina variability learned from a large experimental dataset. Second, we estimate retinal responses to a large collection of relevant electrical stimuli by passing current patterns through an electrode array, spike sorting the resulting recordings and using the results to develop a model of evoked responses. Third, we reproduce the desired responses for a given visual target by temporally dithering a diverse collection of electrical stimuli within the integration time of the visual system. Together, these novel approaches may substantially enhance artificial vision in a next-generation device.
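The third step described above (temporally dithering a dictionary of electrical stimuli to approximate a target response within one integration window) can be sketched as a greedy selection loop. This is a simplified caricature under assumed data, not the authors' algorithm: the stimulus dictionary, the target response, and the additivity of stimulus effects are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical setup: 40 candidate electrical stimuli, each with a known
# expected response (spike count) across 12 recorded ganglion cells.
n_stim, n_cells = 40, 12
dictionary = rng.uniform(0, 1, size=(n_stim, n_cells))

# Desired response to the visual target, to be approximated by summing the
# expected effects of several stimuli delivered within one integration window.
target = rng.uniform(2, 5, size=n_cells)

achieved = np.zeros(n_cells)
sequence = []
for _ in range(15):                      # up to 15 dithered pulses per window
    errors = np.linalg.norm(target - (achieved + dictionary), axis=1)
    best = int(np.argmin(errors))        # stimulus whose addition leaves the smallest error
    if errors[best] >= np.linalg.norm(target - achieved):
        break                            # no remaining stimulus improves the match
    achieved += dictionary[best]
    sequence.append(best)

print("chosen stimulus indices:", sequence)
print(f"residual error: {np.linalg.norm(target - achieved):.3f}")
```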
Affiliation(s)
- Nishal P Shah: Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA, USA; Department of Neurosurgery, Stanford University, Stanford, CA, USA
11
Khademi F, Chen CY, Hafed ZM. Visual feature tuning of superior colliculus neural reafferent responses after fixational microsaccades. J Neurophysiol 2020; 123:2136-2153. PMID: 32347160; DOI: 10.1152/jn.00077.2020.
Abstract
The primate superior colliculus (SC) is causally involved in microsaccade generation. Moreover, visually responsive SC neurons across this structure's topographic map, even at peripheral eccentricities much larger than the tiny microsaccade amplitudes, exhibit significant modulations of evoked response sensitivity when stimuli appear perimicrosaccadically. However, during natural viewing, visual stimuli are normally stably present in the environment and are only shifted on the retina by eye movements. Here we investigated this scenario for the case of microsaccades, asking whether and how SC neurons respond to microsaccade-induced image jitter. We recorded neural activity from two male rhesus macaque monkeys. Within the response field (RF) of a neuron, there was a stable stimulus consisting of a grating of one of three possible spatial frequencies. The grating was stable on the display, but microsaccades periodically jittered the retinotopic RF location over it. We observed clear short-latency visual reafferent responses after microsaccades. These responses were weaker, but earlier (relative to new fixation onset after microsaccade end), than responses to sudden stimulus onsets without microsaccades. The reafferent responses clearly depended on microsaccade amplitude as well as microsaccade direction relative to grating orientation. Our results indicate that one way for microsaccades to influence vision is through modulating how the spatio-temporal landscape of SC visual neural activity represents stable stimuli in the environment. Such representation depends on the specific pattern of temporal luminance modulations expected from the relative relationship between eye movement vector (size and direction) on one hand and spatial visual pattern layout on the other.
New & Noteworthy: Despite being diminutive, microsaccades still jitter retinal images. We investigated how such jitter affects superior colliculus (SC) activity. We found that SC neurons exhibit short-latency visual reafferent bursts after microsaccades. These bursts reflect not only the spatial luminance profiles of visual patterns but also how such profiles are shifted by eye movement size and direction. These results indicate that the SC continuously represents visual patterns, even as they are jittered by the smallest possible saccades.
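The geometric dependence described here (reafferent response strength varying with microsaccade amplitude and direction relative to grating orientation) follows from how far the eye displacement moves the retina along the grating's modulation axis. The sketch below works that out for a few made-up cases; the amplitude, spatial frequency, and orientation are placeholders, not values from the study.

```python
import numpy as np

def grating_phase_shift(amplitude_deg, direction_deg, spatial_freq_cpd, orientation_deg):
    """Phase shift (radians) that an eye displacement of the given vector imposes
    on a stationary sinusoidal grating, as seen at a fixed retinotopic location.
    Orientation is the direction of the grating's bars; luminance varies along
    the orthogonal axis."""
    axis = np.deg2rad(orientation_deg + 90.0)       # grating's modulation axis
    dx = amplitude_deg * np.cos(np.deg2rad(direction_deg))
    dy = amplitude_deg * np.sin(np.deg2rad(direction_deg))
    displacement_along_axis = dx * np.cos(axis) + dy * np.sin(axis)
    return 2.0 * np.pi * spatial_freq_cpd * displacement_along_axis

# Illustrative (made-up) cases: a 0.2 deg microsaccade over a 2 cyc/deg vertical grating.
for direction in (0, 45, 90):
    shift = grating_phase_shift(0.2, direction, 2.0, orientation_deg=90)
    # Peak-to-peak luminance change at a point, for a unit-contrast grating.
    modulation = abs(2 * np.sin(shift / 2))
    print(f"saccade direction {direction:3d} deg: phase shift {np.degrees(shift):7.1f} deg, "
          f"relative luminance modulation {modulation:.2f}")
```

A movement along the modulation axis produces the largest luminance change at each retinotopic location, while a movement parallel to the bars produces none, matching the direction dependence the abstract reports.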
Affiliation(s)
- Fatemeh Khademi: Werner Reichardt Centre for Integrative Neuroscience, Tuebingen University, Tuebingen, Germany; Hertie Institute for Clinical Brain Research, Tuebingen University, Tuebingen, Germany
- Chih-Yang Chen: Werner Reichardt Centre for Integrative Neuroscience, Tuebingen University, Tuebingen, Germany
- Ziad M Hafed: Werner Reichardt Centre for Integrative Neuroscience, Tuebingen University, Tuebingen, Germany; Hertie Institute for Clinical Brain Research, Tuebingen University, Tuebingen, Germany
12
Rucci M, Ahissar E, Burr D. Temporal Coding of Visual Space. Trends Cogn Sci 2018; 22:883-895. PMID: 30266148; DOI: 10.1016/j.tics.2018.07.009.
Abstract
Establishing a representation of space is a major goal of sensory systems. Spatial information, however, is not always explicit in the incoming sensory signals. In most modalities it needs to be actively extracted from cues embedded in the temporal flow of receptor activation. Vision, on the other hand, starts with a sophisticated optical imaging system that explicitly preserves spatial information on the retina. This may lead to the assumption that vision is predominantly a spatial process: all that is needed is to transmit the retinal image to the cortex, like uploading a digital photograph, to establish a spatial map of the world. However, this deceptively simple analogy is inconsistent with theoretical models and experiments that study visual processing in the context of normal motor behavior. We argue here that, as with other senses, vision relies heavily on temporal strategies and temporal neural codes to extract and represent spatial information.
Affiliation(s)
- Michele Rucci: Center for Visual Science, University of Rochester, Rochester, NY 14627, USA; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Ehud Ahissar: Department of Neurobiology, Weizmann Institute, Rehovot, Israel
- David Burr: Department of Neuroscience, University of Florence, Florence 50125, Italy; School of Psychology, University of Sydney, Camperdown, NSW 2006, Australia
13
Matsumoto A, Tachibana M. Global Jitter Motion of the Retinal Image Dynamically Alters the Receptive Field Properties of Retinal Ganglion Cells. Front Neurosci 2019; 13:979. PMID: 31572123; PMCID: PMC6753181; DOI: 10.3389/fnins.2019.00979.
Abstract
Fixational eye movements induce aperiodic motion of the retinal image. However, it is not yet fully understood how fixational eye movements affect retinal information processing. Here we show that global jitter motion, simulating the image motion during fixation, alters the spatiotemporal receptive field properties of retinal ganglion cells. Using multi-electrode and whole-cell recording techniques, we investigated light-evoked responses from ganglion cells in the isolated goldfish retina. Ganglion cells were classified into six groups based on the filtering property of the light stimulus, the membrane properties, and the cell morphology. The spatiotemporal receptive field profiles of retinal ganglion cells were estimated by the reverse correlation method, where the dense noise stimulus was applied on a dark or random-dot background. We found that jitter motion of the random-dot background elongated the receptive field along the rostral-caudal axis and increased temporal sensitivity in a specific group of ganglion cells, the fast-transient ganglion cells. At the newly emerged regions of the receptive field, local light stimulation evoked excitatory postsynaptic currents with large amplitude and fast kinetics without changing the properties of inhibitory postsynaptic currents. Pharmacological experiments suggested two presynaptic mechanisms underlying the receptive field alteration: (i) electrical coupling between bipolar cells, which expands the receptive field in all directions; (ii) GABAergic presynaptic inhibition from amacrine cells, which reduces the dorsal and ventral regions of the expanded receptive field, resulting in elongation along the rostral-caudal axis. Our study demonstrates that the receptive field of fast-transient ganglion cells is not static but dynamically altered depending on the visual inputs. The receptive field elongation during fixational eye movements may contribute to prompt firing to a target in the succeeding saccade.
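The reverse-correlation step mentioned above can be illustrated with a simulated linear-nonlinear cell: drive it with dense noise, then average the stimulus history preceding each spike (the spike-triggered average) to recover the spatiotemporal receptive field. The filter shapes, noise statistics, and spike model below are invented for illustration and are not the authors' recordings.

```python
import numpy as np

rng = np.random.default_rng(4)

# Ground-truth spatiotemporal receptive field: Gaussian in space, biphasic in time.
n_pix, n_lags, n_frames = 16, 8, 20000
space = np.exp(-0.5 * ((np.arange(n_pix) - 8) / 2.0) ** 2)
time = np.exp(-np.arange(n_lags) / 2.0) - 0.6 * np.exp(-np.arange(n_lags) / 4.0)
true_rf = np.outer(time, space)                       # shape (n_lags, n_pix)

# Dense binary noise stimulus driving a Poisson-spiking linear-nonlinear model cell.
stimulus = rng.choice([-1.0, 1.0], size=(n_frames, n_pix))
drive = np.zeros(n_frames)
for lag in range(n_lags):
    drive[n_lags:] += stimulus[n_lags - lag:n_frames - lag] @ true_rf[lag]
rate = np.maximum(drive, 0.0)                         # rectifying nonlinearity
spikes = rng.poisson(0.1 * rate)

# Reverse correlation: average the stimulus history preceding each spike (STA).
sta = np.zeros((n_lags, n_pix))
for frame in range(n_lags, n_frames):
    if spikes[frame]:
        sta += spikes[frame] * stimulus[frame - n_lags + 1:frame + 1][::-1]
sta /= spikes[n_lags:].sum()

# For this model class the STA is proportional to the true filter.
corr = np.corrcoef(sta.ravel(), true_rf.ravel())[0, 1]
print(f"correlation between recovered STA and true receptive field: {corr:.2f}")
```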
Affiliation(s)
- Akihiro Matsumoto: Department of Psychology, Graduate School of Humanities and Sociology, The University of Tokyo, Tokyo, Japan; Ritsumeikan Global Innovation Research Organization (R-GIRO), Ritsumeikan University, Kusatsu, Japan; Danish Research Institute of Translational Neuroscience (DANDRITE), Department of Biomedicine, Aarhus University, Aarhus, Denmark
- Masao Tachibana: Department of Psychology, Graduate School of Humanities and Sociology, The University of Tokyo, Tokyo, Japan; Research Organization of Science and Technology, Ritsumeikan University, Kusatsu, Japan
14
Casile A, Victor JD, Rucci M. Contrast sensitivity reveals an oculomotor strategy for temporally encoding space. eLife 2019; 8:e40924. PMID: 30620333; PMCID: PMC6324884; DOI: 10.7554/elife.40924.
Abstract
The contrast sensitivity function (CSF), how sensitivity varies with the frequency of the stimulus, is a fundamental assessment of visual performance. The CSF is generally assumed to be determined by low-level sensory processes. However, the spatial sensitivities of neurons in the early visual pathways, as measured in experiments with immobilized eyes, diverge from psychophysical CSF measurements in primates. Under natural viewing conditions, as in typical psychophysical measurements, humans continually move their eyes even when looking at a fixed point. Here, we show that the resulting transformation of the spatial scene into temporal modulations on the retina constitutes a processing stage that reconciles human CSF and the response characteristics of retinal ganglion cells under a broad range of conditions. Our findings suggest a fundamental integration between perception and action: eye movements work synergistically with the spatio-temporal sensitivities of retinal neurons to encode spatial information.
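A small numerical illustration of the space-to-time transformation referred to above, using the Lorentzian form commonly used for Brownian-like drift over a scene with a roughly 1/k^2 spatial power spectrum; at temporal frequencies above the drift cutoff, the redistributed power is nearly flat across low spatial frequencies. The diffusion constant, frequencies, and units are placeholder assumptions, and this is not the authors' computation.

```python
import numpy as np

# Placeholder parameters (not from the paper): drift diffusion constant and a
# 5 Hz temporal frequency at which to evaluate the drift-induced modulations.
D = 40.0                                  # arcmin^2 / s
omega = 2 * np.pi * 5.0                   # rad / s

k_cpd = np.array([0.5, 1.0, 2.0, 4.0, 8.0])       # spatial frequencies, cycles/deg
k = 2 * np.pi * k_cpd / 60.0                      # converted to rad/arcmin

P = 1.0 / k_cpd**2                        # natural-scene-like static power, ~1/k^2
width = D * k**2                          # Lorentzian width from Brownian-like drift
Q = P * width / (omega**2 + width**2)     # power moved into temporal modulations

print("k (cyc/deg)   static P(k) (norm.)   drift-induced Q(k, 5 Hz) (norm.)")
for kc, Pi, Qi in zip(k_cpd, P / P[0], Q / Q[0]):
    print(f"{kc:8.1f}      {Pi:12.4f}          {Qi:12.4f}")
```

The static column falls by more than two orders of magnitude over this range, while the drift-induced temporal power stays nearly flat until the drift cutoff is reached; that equalization is the sense in which drift reformats spatial contrast as temporal modulation.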
Affiliation(s)
- Antonino Casile: Center for Translational Neurophysiology, Istituto Italiano di Tecnologia, Ferrara, Italy; Center for Neuroscience and Cognitive Systems, Rovereto, Italy; Department of Neurobiology, Harvard Medical School, Boston, United States
- Jonathan D Victor: Brain and Mind Research Institute, Weill Cornell Medical College, New York, United States; Department of Neurology, Weill Cornell Medical College, New York, United States
- Michele Rucci: Brain and Cognitive Sciences, University of Rochester, Rochester, United States; Center for Visual Science, University of Rochester, Rochester, United States
15
Turner MH, Sanchez Giraldo LG, Schwartz O, Rieke F. Stimulus- and goal-oriented frameworks for understanding natural vision. Nat Neurosci 2019; 22:15-24. PMID: 30531846; PMCID: PMC8378293; DOI: 10.1038/s41593-018-0284-0.
Abstract
Our knowledge of sensory processing has advanced dramatically in the last few decades, but this understanding remains far from complete, especially for stimuli with the large dynamic range and strong temporal and spatial correlations characteristic of natural visual inputs. Here we describe some of the issues that make understanding the encoding of natural images a challenge. We highlight two broad strategies for approaching this problem: a stimulus-oriented framework and a goal-oriented one. Different contexts can call for one framework or the other. Looking forward, recent advances, particularly those based in machine learning, show promise in borrowing key strengths of both frameworks and by doing so illuminating a path to a more comprehensive understanding of the encoding of natural stimuli.
Affiliation(s)
- Maxwell H Turner: Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA; Graduate Program in Neuroscience, University of Washington, Seattle, WA, USA
- Odelia Schwartz: Department of Computer Science, University of Miami, Coral Gables, FL, USA
- Fred Rieke: Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
16
Samonds JM, Geisler WS, Priebe NJ. Natural image and receptive field statistics predict saccade sizes. Nat Neurosci 2018; 21:1591-1599. PMID: 30349110; DOI: 10.1038/s41593-018-0255-5.
Abstract
Humans and other primates sample the visual environment using saccadic eye movements that shift a high-resolution fovea toward regions of interest to create a clear perception of a scene across fixations. Many mammals, however, like mice, lack a fovea, which raises the question of why they make saccades. Here we describe and test the hypothesis that saccades are matched to natural scene statistics and to the receptive field sizes and adaptive properties of neural populations. Specifically, we determined the minimum amplitude of saccades in natural scenes necessary to provide uncorrelated inputs to model neural populations. This analysis predicts the distributions of observed saccade sizes during passive viewing for nonhuman primates, cats, and mice. Furthermore, disrupting the development of receptive field properties by monocular deprivation changed saccade sizes consistent with this hypothesis. Therefore, natural-scene statistics and the neural representation of natural images appear to be critical factors guiding saccadic eye movements.
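A toy, one-dimensional version of the analysis described here, not the authors' computation: centre-surround receptive fields of different sizes filter a 1/f-like signal, and we ask how far the image must shift before the filtered population responses become nearly uncorrelated. The receptive field sizes, the decorrelation criterion, and the synthetic scene are placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)

# One-dimensional stand-in for a natural scene: noise with a ~1/f amplitude spectrum.
n = 4096
freqs = np.fft.rfftfreq(n)
freqs[0] = 1.0                                   # avoid division by zero at DC
scene = np.fft.irfft(np.fft.rfft(rng.standard_normal(n)) / freqs, n)

def dog_responses(signal, sigma):
    """Responses of centre-surround (difference-of-Gaussians) receptive fields of
    centre width sigma, one centred on every sample of the signal."""
    x = np.arange(-8 * int(sigma), 8 * int(sigma) + 1)
    center = np.exp(-0.5 * (x / sigma) ** 2)
    surround = np.exp(-0.5 * (x / (2.0 * sigma)) ** 2)
    kernel = center / center.sum() - surround / surround.sum()
    return np.convolve(signal, kernel, mode="same")

def min_decorrelating_shift(responses, criterion=0.2):
    """Smallest image shift at which the population response correlates with its
    unshifted version below the criterion (a stand-in for 'uncorrelated input')."""
    r = responses - responses.mean()
    for shift in range(1, len(r) // 4):
        if np.corrcoef(r, np.roll(r, shift))[0, 1] < criterion:
            return shift
    return None

for sigma in (2.0, 8.0, 32.0):                   # small, medium, large receptive fields
    shift = min_decorrelating_shift(dog_responses(scene, sigma))
    print(f"RF centre sigma {sigma:5.1f} samples -> minimum decorrelating shift: {shift} samples")
```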
Affiliation(s)
- Jason M Samonds: Department of Neuroscience, University of Texas at Austin, Austin, TX, USA; Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA; Center for Learning and Memory, University of Texas at Austin, Austin, TX, USA
- Wilson S Geisler: Department of Neuroscience, University of Texas at Austin, Austin, TX, USA; Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA; Department of Psychology, University of Texas at Austin, Austin, TX, USA
- Nicholas J Priebe: Department of Neuroscience, University of Texas at Austin, Austin, TX, USA; Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA; Center for Learning and Memory, University of Texas at Austin, Austin, TX, USA
17
Turner MH, Schwartz GW, Rieke F. Receptive field center-surround interactions mediate context-dependent spatial contrast encoding in the retina. eLife 2018; 7:e38841. PMID: 30188320; PMCID: PMC6185113; DOI: 10.7554/elife.38841.
Abstract
Antagonistic receptive field surrounds are a near-universal property of early sensory processing. A key assumption in many models for retinal ganglion cell encoding is that receptive field surrounds are added only to the fully formed center signal. But anatomical and functional observations indicate that surrounds are added before the summation of signals across receptive field subunits that creates the center. Here, we show that this receptive field architecture has an important consequence for spatial contrast encoding in the macaque monkey retina: the surround can control sensitivity to fine spatial structure by changing the way the center integrates visual information over space. The impact of the surround is particularly prominent when center and surround signals are correlated, as they are in natural stimuli. This effect of the surround differs substantially from classic center-surround models and raises the possibility that the surround plays unappreciated roles in shaping ganglion cell sensitivity to natural inputs.
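The architectural point (surround combined before versus after subunit rectification) can be made with a toy example. The two functions below are caricatures, not the authors' model: the subunit count, the grating drive, and the rectifying nonlinearity are invented, and the point is only that the two wiring schemes diverge once the centre contains fine spatial structure.

```python
import numpy as np

def response_surround_after(subunit_inputs, surround):
    """Classic model: subunits are rectified and summed to form the centre,
    and the surround is subtracted only from the fully formed centre signal."""
    center = np.maximum(subunit_inputs, 0).sum()
    return max(center - surround, 0.0)

def response_surround_before(subunit_inputs, surround):
    """Architecture described in the abstract: the surround signal is combined
    with each subunit before rectification, then the subunits are summed."""
    n = len(subunit_inputs)
    return max(np.maximum(subunit_inputs - surround / n, 0).sum(), 0.0)

# Fine spatial structure inside the centre: half the subunits see bright bars,
# half see dark bars (a grating finer than the centre, zero mean overall).
grating = np.array([+1.0, -1.0] * 4)

for surround in (0.0, 2.0, 4.0):          # increasing surround activation (made up)
    after = response_surround_after(grating, surround)
    before = response_surround_before(grating, surround)
    print(f"surround drive {surround:.1f}: "
          f"surround-after = {after:.2f}, surround-before = {before:.2f}")
```

The divergence between the two columns once the surround is active is the architectural difference the abstract refers to: applying the surround before rectification shifts each subunit's operating point and thereby changes how the centre integrates fine spatial structure.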
Affiliation(s)
- Maxwell H Turner: Department of Physiology and Biophysics, University of Washington, Seattle, United States; Graduate Program in Neuroscience, University of Washington, Seattle, United States
- Gregory W Schwartz: Departments of Ophthalmology and Physiology, Feinberg School of Medicine, Northwestern University, Chicago, United States; Department of Neurobiology, Weinberg College of Arts and Sciences, Northwestern University, Chicago, United States
- Fred Rieke: Department of Physiology and Biophysics, University of Washington, Seattle, United States
18
Schneidman E. Towards the design principles of neural population codes. Curr Opin Neurobiol 2016; 37:133-140. PMID: 27016639; DOI: 10.1016/j.conb.2016.03.001.
Abstract
The ability to record the joint activity of large groups of neurons would allow for direct study of information representation and computation at the level of whole circuits in the brain. The combinatorial space of potential population activity patterns and neural noise imply that it would be impossible to directly map the relations between stimuli and population responses. Understanding of large neural population codes therefore depends on identifying simplifying design principles. We review recent results showing that strongly correlated population codes can be explained using minimal models that rely on low order relations among cells. We discuss the implications for large populations, and how such models allow for mapping the semantic organization of the neural codebook and stimulus space, and decoding.
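A minimal, self-contained example of the model class discussed here: a pairwise maximum-entropy ("Ising-like") model over binary population activity words, small enough to enumerate exactly. The parameters below are drawn at random rather than fitted to data.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(7)

# A small population of binary neurons and a pairwise model:
# P(x) ~ exp(sum_i h_i x_i + sum_{i<j} J_ij x_i x_j), with x_i in {0, 1}.
n = 6
h = rng.normal(-1.0, 0.5, size=n)           # biases: cells fire sparsely
J = np.triu(rng.normal(0.0, 0.6, size=(n, n)), k=1)   # keep only i < j couplings

patterns = np.array(list(product([0, 1], repeat=n)), dtype=float)
log_weights = patterns @ h + np.einsum("pi,ij,pj->p", patterns, J, patterns)
probs = np.exp(log_weights)
probs /= probs.sum()                        # normalise over all 2^n activity words

# With only first- and second-order terms, the model already assigns
# structured probabilities to whole population words.
top = np.argsort(probs)[::-1][:5]
for idx in top:
    word = "".join(str(int(b)) for b in patterns[idx])
    print(f"pattern {word}: P = {probs[idx]:.3f}")
print(f"entropy of the population code: {-(probs * np.log2(probs)).sum():.2f} bits")
```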
Affiliation(s)
- Elad Schneidman: Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel