1. Intoy J, Li YH, Bowers NR, Victor JD, Poletti M, Rucci M. Consequences of eye movements for spatial selectivity. Curr Biol 2024; 34:3265-3272.e4. PMID: 38981478; PMCID: PMC11348862; DOI: 10.1016/j.cub.2024.06.016.
Abstract
What determines spatial tuning in the visual system? Standard views rely on the assumption that spatial information is directly inherited from the relative position of photoreceptors and shaped by neuronal connectivity.1,2 However, human eyes are always in motion during fixation,3,4,5,6 so retinal neurons receive temporal modulations that depend on the interaction of the spatial structure of the stimulus with eye movements. It has long been hypothesized that these modulations might contribute to spatial encoding,7,8,9,10,11,12 a proposal supported by several recent observations.13,14,15,16 A fundamental, yet untested, consequence of this encoding strategy is that spatial tuning is not hard-wired in the visual system but critically depends on how the fixational motion of the eye shapes the temporal structure of the signals impinging onto the retina. Here we used high-resolution techniques for eye-tracking17 and gaze-contingent display control18 to quantitatively test this distinctive prediction. We examined how contrast sensitivity, a hallmark of spatial vision, is influenced by fixational motion, both during normal active fixation and when the spatiotemporal stimulus on the retina is altered to mimic changes in fixational control. We showed that visual sensitivity closely follows the strength of the luminance modulations delivered within a narrow temporal bandwidth, so changes in fixational motion have opposite visual effects at low and high spatial frequencies. By identifying a key role for oculomotor activity in spatial selectivity, these findings have important implications for the perceptual consequences of abnormal eye movements, the sources of perceptual variability, and the function of oculomotor control.
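A toy simulation illustrates the encoding strategy at issue: when a random walk (standing in for ocular drift) moves a sinusoidal grating, the temporal luminance modulations delivered to a fixed retinal point grow with spatial frequency, until the drift span exceeds a grating period. All parameters below are illustrative assumptions, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

def drift_modulation_power(spatial_freq_cpd, diffusion=100.0, dt=0.001, n_steps=2000):
    """Variance of the temporal luminance signal at one retinal point while a
    random-walk 'drift' (diffusion in arcmin^2/s) moves a sinusoidal grating.
    A toy stand-in for ocular drift, not the paper's model."""
    # 1D Brownian gaze trajectory, converted from arcmin to degrees
    steps = rng.normal(0.0, np.sqrt(2.0 * diffusion * dt) / 60.0, n_steps)
    gaze = np.cumsum(steps)
    # luminance sampled at a fixed retinal location as the grating slides past
    luminance = np.sin(2.0 * np.pi * spatial_freq_cpd * gaze)
    return luminance.var()

# Averaged over repeats, drift delivers stronger temporal modulations for
# finer gratings, the spatial-frequency dependence at the heart of the claim.
low = np.mean([drift_modulation_power(0.5) for _ in range(50)])
high = np.mean([drift_modulation_power(8.0) for _ in range(50)])
print(low, high)
```

The sketch captures only the dependence of temporal modulation strength on spatial frequency; the paper's account additionally restricts the analysis to a narrow temporal bandwidth of the resulting signal.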
Affiliation(s)
- Janis Intoy
- Center for Visual Science, University of Rochester, Rochester, NY, USA; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Yuanhao H Li
- Center for Visual Science, University of Rochester, Rochester, NY, USA; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Norick R Bowers
- Department of Psychology, Justus-Liebig University, Giessen, Germany
- Jonathan D Victor
- Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, New York City, NY, USA
- Martina Poletti
- Center for Visual Science, University of Rochester, Rochester, NY, USA; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Michele Rucci
- Center for Visual Science, University of Rochester, Rochester, NY, USA; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA.
2. Fitzpatrick MJ, Krizan J, Hsiang JC, Shen N, Kerschensteiner D. A pupillary contrast response in mice and humans: Neural mechanisms and visual functions. Neuron 2024; 112:2404-2422.e9. PMID: 38697114; PMCID: PMC11257825; DOI: 10.1016/j.neuron.2024.04.012.
Abstract
In the pupillary light response (PLR), increases in ambient light constrict the pupil to dampen increases in retinal illuminance. Here, we report that the pupillary reflex arc implements a second input-output transformation; it senses temporal contrast to enhance spatial contrast in the retinal image and increase visual acuity. The pupillary contrast response (PCoR) is driven by rod photoreceptors via type 6 bipolar cells and M1 ganglion cells. Temporal contrast is transformed into sustained pupil constriction by the M1's conversion of excitatory input into spike output. Computational modeling explains how the PCoR shapes retinal images. Pupil constriction improves acuity in gaze stabilization and predation in mice. Humans exhibit a PCoR with similar tuning properties to mice, which interacts with eye movements to optimize the statistics of the visual input for retinal encoding. Thus, we uncover a conserved component of active vision, its cell-type-specific pathway, computational mechanisms, and optical and behavioral significance.
Affiliation(s)
- Michael J Fitzpatrick
- Department of Ophthalmology and Visual Sciences, Washington University School of Medicine in St. Louis, St. Louis, MO 63110, USA; Graduate Program in Neuroscience, Washington University School of Medicine in St. Louis, St. Louis, MO 63110, USA; Medical Scientist Training Program, Washington University School of Medicine in St. Louis, St. Louis, MO 63110, USA
- Jenna Krizan
- Department of Ophthalmology and Visual Sciences, Washington University School of Medicine in St. Louis, St. Louis, MO 63110, USA; Graduate Program in Neuroscience, Washington University School of Medicine in St. Louis, St. Louis, MO 63110, USA
- Jen-Chun Hsiang
- Department of Ophthalmology and Visual Sciences, Washington University School of Medicine in St. Louis, St. Louis, MO 63110, USA
- Ning Shen
- Department of Ophthalmology and Visual Sciences, Washington University School of Medicine in St. Louis, St. Louis, MO 63110, USA
- Daniel Kerschensteiner
- Department of Ophthalmology and Visual Sciences, Washington University School of Medicine in St. Louis, St. Louis, MO 63110, USA; Department of Neuroscience, Washington University School of Medicine in St. Louis, St. Louis, MO 63110, USA; Department of Biomedical Engineering, Washington University School of Medicine in St. Louis, St. Louis, MO 63110, USA.
3. Rault N, Bergmans T, Delfstra N, Kleijnen BJ, Zeldenrust F, Celikel T. Where Top-Down Meets Bottom-Up: Cell-Type Specific Connectivity Map of the Whisker System. Neuroinformatics 2024; 22:251-268. PMID: 38767789; PMCID: PMC11329691; DOI: 10.1007/s12021-024-09658-6.
Abstract
Sensorimotor computation integrates bottom-up world state information with top-down knowledge and task goals to form action plans. In the rodent whisker system, a prime model of active sensing, evidence shows that neuromodulatory neurotransmitters shape whisker control, affecting whisking frequency and amplitude. Since neuromodulatory neurotransmitters are mostly released from subcortical nuclei and have long-range projections that reach the rest of the central nervous system, mapping the circuits of top-down neuromodulatory control of sensorimotor nuclei will help to systematically address the mechanisms of active sensing. Therefore, we developed a neuroinformatic target discovery pipeline to mine the Allen Institute's Mouse Brain Connectivity Atlas. Using network connectivity analysis, we identified new putative connections along the whisker system and anatomically confirmed the existence of 42 previously unknown monosynaptic connections. Using these data, we updated the sensorimotor connectivity map of the mouse whisker system and developed the first cell-type-specific map of the network. The map includes 157 projections across 18 principal nuclei of the whisker system and the neuromodulatory neurotransmitter-releasing nuclei. Performing a graph network analysis of this connectome, we identified cell-type-specific hubs, sources, and sinks, provided anatomical evidence for monosynaptic inhibitory projections into all stages of the ascending pathway, and showed that neuromodulatory projections improve network-wide connectivity. These results argue that beyond the modulatory chemical contributions to information processing and transfer in the whisker system, the circuit connectivity features of the neuromodulatory networks position them as nodes of sensory and motor integration.
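The hub/source/sink classification mentioned here reduces to in-/out-degree bookkeeping on a directed connectivity graph. A minimal sketch on a hypothetical four-node network (node names are placeholders, not the paper's nuclei):

```python
# Hypothetical directed projections: (source nucleus, target nucleus).
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D")]

nodes = sorted({n for edge in edges for n in edge})
out_deg = {n: sum(1 for s, _ in edges if s == n) for n in nodes}
in_deg = {n: sum(1 for _, t in edges if t == n) for n in nodes}

# source: projects out but receives nothing; sink: the reverse;
# hub: highest total degree in the network
sources = [n for n in nodes if in_deg[n] == 0 and out_deg[n] > 0]
sinks = [n for n in nodes if out_deg[n] == 0 and in_deg[n] > 0]
total = {n: in_deg[n] + out_deg[n] for n in nodes}
hubs = [n for n in nodes if total[n] == max(total.values())]
print(sources, sinks, hubs)  # ['A'] ['D'] ['B', 'C']
```

Real connectome analyses typically weight projections and add centrality measures, but the source/sink/hub vocabulary used in the abstract rests on exactly this kind of degree accounting.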
Affiliation(s)
- Nicolas Rault
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands.
- Tido Bergmans
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Natasja Delfstra
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Fleur Zeldenrust
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
4. Yang B, Intoy J, Rucci M. Eye blinks as a visual processing stage. Proc Natl Acad Sci U S A 2024; 121:e2310291121. PMID: 38564641; PMCID: PMC11009678; DOI: 10.1073/pnas.2310291121.
Abstract
Humans blink their eyes frequently during normal viewing, more often than it seems necessary for keeping the cornea well lubricated. Since the closure of the eyelid disrupts the image on the retina, eye blinks are commonly assumed to be detrimental to visual processing. However, blinks also provide luminance transients rich in spatial information to neural pathways highly sensitive to temporal changes. Here, we report that the luminance modulations from blinks enhance visual sensitivity. By coupling high-resolution eye tracking in human observers with modeling of blink transients and spectral analysis of visual input signals, we show that blinking increases the power of retinal stimulation and that this effect significantly enhances visibility despite the time lost in exposure to the external scene. We further show that, as predicted from the spectral content of input signals, this enhancement is selective for stimuli at low spatial frequencies and occurs irrespective of whether the luminance transients are actively generated or passively experienced. These findings indicate that, like eye movements, blinking acts as a computational component of a visual processing strategy that uses motor behavior to reformat spatial information into the temporal domain.
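On the temporal side, the low-spatial-frequency selectivity follows from the spectral content of the transient: a blink-like luminance gate concentrates its modulation power at low temporal frequencies. A rough sketch, assuming a raised-cosine dip (the shape and duration are illustrative assumptions, not the paper's measured transients):

```python
import numpy as np

fs = 1000.0                        # sampling rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)  # 1 s of viewing

# Full-field luminance with an assumed 150 ms raised-cosine "blink" dip.
luminance = np.ones_like(t)
start, dur = 0.425, 0.150
mask = (t >= start) & (t < start + dur)
luminance[mask] = 0.5 * (1.0 + np.cos(2.0 * np.pi * (t[mask] - start) / dur))

# Power spectrum of the blink-induced modulation.
modulation = luminance - luminance.mean()
power = np.abs(np.fft.rfft(modulation)) ** 2
freqs = np.fft.rfftfreq(modulation.size, 1.0 / fs)

low_band = power[(freqs > 0) & (freqs <= 10)].sum()   # 0-10 Hz
high_band = power[freqs > 30].sum()                   # above 30 Hz
print(low_band > high_band)
```

A ~150 ms transient has its spectral main lobe well below ~15 Hz, so virtually all of the injected modulation power sits in the low temporal band to which early visual pathways are most sensitive.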
Affiliation(s)
- Bin Yang
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627
- Center for Visual Science, University of Rochester, Rochester, NY 14627
- Janis Intoy
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627
- Center for Visual Science, University of Rochester, Rochester, NY 14627
- Michele Rucci
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627
- Center for Visual Science, University of Rochester, Rochester, NY 14627
5. Parker PRL, Martins DM, Leonard ESP, Casey NM, Sharp SL, Abe ETT, Smear MC, Yates JL, Mitchell JF, Niell CM. A dynamic sequence of visual processing initiated by gaze shifts. Nat Neurosci 2023; 26:2192-2202. PMID: 37996524; PMCID: PMC11270614; DOI: 10.1038/s41593-023-01481-7.
Abstract
Animals move their head and eyes as they explore the visual scene. Neural correlates of these movements have been found in rodent primary visual cortex (V1), but their sources and computational roles are unclear. We addressed this by combining head and eye movement measurements with neural recordings in freely moving mice. V1 neurons responded primarily to gaze shifts, where head movements are accompanied by saccadic eye movements, rather than to head movements where compensatory eye movements stabilize gaze. A variety of activity patterns followed gaze shifts and together these formed a temporal sequence that was absent in darkness. Gaze-shift responses resembled those evoked by sequentially flashed stimuli, suggesting a large component corresponds to onset of new visual input. Notably, neurons responded in a sequence that matches their spatial frequency bias, consistent with coarse-to-fine processing. Recordings in freely gazing marmosets revealed a similar sequence following saccades, also aligned to spatial frequency preference. Our results demonstrate that active vision in both mice and marmosets consists of a dynamic temporal sequence of neural activity associated with visual sampling.
Affiliation(s)
- Philip R L Parker
- Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, OR, USA
- Behavioral and Systems Neuroscience, Department of Psychology, Rutgers University, New Brunswick, NJ, USA
- Dylan M Martins
- Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, OR, USA
- Emmalyn S P Leonard
- Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, OR, USA
- Nathan M Casey
- Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, OR, USA
- Shelby L Sharp
- Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, OR, USA
- Elliott T T Abe
- Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, OR, USA
- Matthew C Smear
- Institute of Neuroscience and Department of Psychology, University of Oregon, Eugene, OR, USA
- Jacob L Yates
- Department of Biology and Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, USA
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Jude F Mitchell
- Department of Brain and Cognitive Sciences and Center for Visual Sciences, University of Rochester, Rochester, NY, USA.
- Cristopher M Niell
- Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, OR, USA.
6. Poletti M. An eye for detail: Eye movements and attention at the foveal scale. Vision Res 2023; 211:108277. PMID: 37379763; PMCID: PMC10528557; DOI: 10.1016/j.visres.2023.108277.
Abstract
Human vision relies on a tiny region of the retina, the 1-deg foveola, to achieve high spatial resolution. Foveal vision is of paramount importance in daily activities, yet its study is challenging, as eye movements incessantly displace stimuli across this region. Here I will review work that, building on recent advances in eye-tracking and gaze-contingent display, examines how attention and eye movements operate at the foveal level. This research highlights how exploration of fine spatial detail unfolds following visuomotor strategies reminiscent of those occurring at larger scales. It shows that, together with highly precise control of attention, this motor activity is linked to non-homogenous processing within the foveola and selectively modulates sensitivity both in space and time. Overall, the picture emerges of a highly dynamic foveal perception in which fine spatial vision, rather than simply being the result of placing a stimulus at the center of gaze, is the result of a finely tuned and orchestrated synergy of motor, cognitive, and attentional processes.
Affiliation(s)
- Martina Poletti
- Department of Brain and Cognitive Sciences, University of Rochester, United States; Center for Visual Science, University of Rochester, United States; Department of Neuroscience, University of Rochester, United States.
7. Ahissar E, Nelinger G, Assa E, Karp O, Saraf-Sinik I. Thalamocortical loops as temporal demodulators across senses. Commun Biol 2023; 6:562. PMID: 37237075; DOI: 10.1038/s42003-023-04881-4.
Abstract
Sensory information is coded in space and in time. The organization of neuronal activity in space maintains straightforward relationships with the spatial organization of the perceived environment. In contrast, the temporal organization of neuronal activity is not trivially related to external features, owing to sensor motion. Still, the temporal organization shares similar principles across sensory modalities. Likewise, thalamocortical circuits exhibit common features across senses. Focusing on touch, vision, and audition, we review their shared coding principles and suggest that thalamocortical systems include circuits that allow analogous recoding mechanisms in all three senses. These thalamocortical circuits constitute oscillation-based phase-locked loops that translate temporally coded sensory information into rate-coded cortical signals, which can integrate information across sensory and motor modalities. The loop also allows predictive locking to the onset of future modulations of the sensory signal. The paper thus suggests a theoretical framework in which a common thalamocortical mechanism implements temporal demodulation across senses.
Affiliation(s)
- Ehud Ahissar
- Department of Brain Sciences, Weizmann Institute, Rehovot, 76100, Israel.
- Guy Nelinger
- Department of Brain Sciences, Weizmann Institute, Rehovot, 76100, Israel
- Eldad Assa
- Department of Brain Sciences, Weizmann Institute, Rehovot, 76100, Israel
- Ofer Karp
- Department of Brain Sciences, Weizmann Institute, Rehovot, 76100, Israel
- Inbar Saraf-Sinik
- Department of Brain Sciences, Weizmann Institute, Rehovot, 76100, Israel
8. Leszczynski M, Bickel S, Nentwich M, Russ BE, Parra L, Lakatos P, Mehta A, Schroeder CE. Saccadic modulation of neural excitability in auditory areas of the neocortex. Curr Biol 2023; 33:1185-1195.e6. PMID: 36863343; PMCID: PMC10424710; DOI: 10.1016/j.cub.2023.02.018.
Abstract
In natural "active" vision, humans and other primates use eye movements (saccades) to sample bits of information from visual scenes. In the visual cortex, non-retinal signals linked to saccades shift visual cortical neurons into a high excitability state as each saccade ends. The extent of this saccadic modulation outside of the visual system is unknown. Here, we show that during natural viewing, saccades modulate excitability in numerous auditory cortical areas with a temporal pattern complementary to that seen in visual areas. Control somatosensory cortical recordings indicate that the temporal pattern is unique to auditory areas. Bidirectional functional connectivity patterns suggest that these effects may arise from regions involved in saccade generation. We propose that by using saccadic signals to yoke excitability states in auditory areas to those in visual areas, the brain can improve information processing in complex natural settings.
Affiliation(s)
- Marcin Leszczynski
- Departments of Psychiatry and Neurology, Columbia University College of Physicians and Surgeons, New York, NY 10032, USA; Translational Neuroscience Lab Division, Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY 10962, USA; Cognitive Science Department, Institute of Philosophy, Jagiellonian University, Krakow 31-007, Poland.
- Stephan Bickel
- Translational Neuroscience Lab Division, Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY 10962, USA; The Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY 11030, USA; Departments of Neurosurgery and Neurology, Zucker School of Medicine at Hofstra/Northwell, Manhasset, NY 11549, USA
- Maximilian Nentwich
- Biomedical Engineering Department, City College, CUNY, New York, NY 10031, USA
- Brian E Russ
- Translational Neuroscience Lab Division, Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY 10962, USA; Nash Family Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; Department of Psychiatry, New York University at Langone, New York, NY 10016, USA
- Lucas Parra
- Biomedical Engineering Department, City College, CUNY, New York, NY 10031, USA
- Peter Lakatos
- Translational Neuroscience Lab Division, Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY 10962, USA; Department of Psychiatry, New York University at Langone, New York, NY 10016, USA
- Ashesh Mehta
- The Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY 11030, USA; Departments of Neurosurgery and Neurology, Zucker School of Medicine at Hofstra/Northwell, Manhasset, NY 11549, USA
- Charles E Schroeder
- Departments of Psychiatry and Neurology, Columbia University College of Physicians and Surgeons, New York, NY 10032, USA; Translational Neuroscience Lab Division, Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY 10962, USA.
9. Grosu GF, Hopp AV, Moca VV, Bârzan H, Ciuparu A, Ercsey-Ravasz M, Winkel M, Linde H, Mureșan RC. The fractal brain: scale-invariance in structure and dynamics. Cereb Cortex 2023; 33:4574-4605. PMID: 36156074; PMCID: PMC10110456; DOI: 10.1093/cercor/bhac363.
Abstract
The past 40 years have witnessed extensive research on fractal structure and scale-free dynamics in the brain. Although considerable progress has been made, a comprehensive picture has yet to emerge, and needs further linking to a mechanistic account of brain function. Here, we review these concepts, connecting observations across different levels of organization, from both a structural and functional perspective. We argue that, paradoxically, the level of cortical circuits is the least understood from a structural point of view and perhaps the best studied from a dynamical one. We further link observations about scale-freeness and fractality with evidence that the environment provides constraints that may explain the usefulness of fractal structure and scale-free dynamics in the brain. Moreover, we discuss evidence that behavior exhibits scale-free properties, likely emerging from similarly organized brain dynamics, enabling an organism to thrive in an environment that shares the same organizational principles. Finally, we review the sparse evidence for and try to speculate on the functional consequences of fractality and scale-freeness for brain computation. These properties may endow the brain with computational capabilities that transcend current models of neural computation and could hold the key to unraveling how the brain constructs percepts and generates behavior.
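A common operationalization of the scale-free dynamics discussed here is a power spectrum P(f) ∝ 1/f^β, with β estimated by a log-log linear fit. A generic sketch, synthesizing 1/f noise by spectral shaping (an illustration of the concept, not the review's analysis pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2 ** 14
beta = 1.0  # target spectral exponent: P(f) ~ 1/f**beta

# Synthesize scale-free noise: random phases on a 1/f amplitude profile.
freqs = np.fft.rfftfreq(n, d=1.0)
amplitude = np.zeros_like(freqs)
amplitude[1:] = freqs[1:] ** (-beta / 2.0)   # power ~ f^-beta
phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, freqs.size))
signal = np.fft.irfft(amplitude * phases, n)

# Recover the exponent from the periodogram with a log-log linear fit.
power = np.abs(np.fft.rfft(signal)) ** 2
valid = freqs > 0
slope, _ = np.polyfit(np.log(freqs[valid]), np.log(power[valid]), 1)
print(round(-slope, 2))  # close to the target exponent
```

In practice, empirical spectra are noisier than this synthetic case, and robust exponent estimates require averaging over windows or fitting only a restricted frequency range.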
Affiliation(s)
- George F Grosu
- Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania
- Faculty of Electronics, Telecommunications and Information Technology, Technical University of Cluj-Napoca, Str. Memorandumului 28, 400114 Cluj-Napoca, Romania
- Vasile V Moca
- Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania
- Harald Bârzan
- Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania
- Faculty of Electronics, Telecommunications and Information Technology, Technical University of Cluj-Napoca, Str. Memorandumului 28, 400114 Cluj-Napoca, Romania
- Andrei Ciuparu
- Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania
- Faculty of Electronics, Telecommunications and Information Technology, Technical University of Cluj-Napoca, Str. Memorandumului 28, 400114 Cluj-Napoca, Romania
- Maria Ercsey-Ravasz
- Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania
- Faculty of Physics, Babes-Bolyai University, Str. Mihail Kogalniceanu 1, 400084 Cluj-Napoca, Romania
- Mathias Winkel
- Merck KGaA, Frankfurter Straße 250, 64293 Darmstadt, Germany
- Helmut Linde
- Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania
- Merck KGaA, Frankfurter Straße 250, 64293 Darmstadt, Germany
- Raul C Mureșan
- Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania
10. Inferring visual space from ultra-fine extra-retinal knowledge of gaze position. Nat Commun 2023; 14:269. PMID: 36650146; PMCID: PMC9845343; DOI: 10.1038/s41467-023-35834-4.
Abstract
It has long been debated how humans resolve fine details and perceive a stable visual world despite the incessant fixational motion of their eyes. Current theories assume these processes to rely solely on the visual input to the retina, without contributions from motor and/or proprioceptive sources. Here we show that contrary to this widespread assumption, the visual system has access to high-resolution extra-retinal knowledge of fixational eye motion and uses it to deduce spatial relations. Building on recent advances in gaze-contingent display control, we created a spatial discrimination task in which the stimulus configuration was entirely determined by oculomotor activity. Our results show that humans correctly infer geometrical relations in the absence of spatial information on the retina and accurately combine high-resolution extraretinal monitoring of gaze displacement with retinal signals. These findings reveal a sensory-motor strategy for encoding space, in which fine oculomotor knowledge is used to interpret the fixational input to the retina.
11. Alexiev K, Vakarelski T. Can Microsaccades Be Used for Biometrics? Sensors (Basel) 2022; 23:89. PMID: 36616687; PMCID: PMC9824634; DOI: 10.3390/s23010089.
Abstract
Human eyes are in constant motion: even when we fix our gaze on a point, our eyes continue to move. Three types of fixational eye movement (FEM) have been distinguished: microsaccades, drift, and tremor. The main goal of this paper is to investigate one of these FEMs, microsaccades, as a source of information for biometric analysis. The paper argues why microsaccades are preferred for biometric analysis over the other two fixational eye movements. The process of microsaccade extraction is described. Thirteen parameters are defined for microsaccade analysis, and their derivation is given. A gradient algorithm was used to solve the biometric problem, and the weights of the different pairs of parameters in solving the biometric task were assessed.
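Microsaccade extraction is commonly done with a median-based velocity threshold in the style of Engbert & Kliegl (2003); the sketch below applies that generic approach to synthetic data and is not necessarily the authors' exact pipeline:

```python
import numpy as np

def detect_microsaccades(x, y, fs, lam=6.0, min_len=3):
    """Velocity-threshold microsaccade detection in the spirit of
    Engbert & Kliegl (2003). x, y: gaze traces in degrees; fs in Hz.
    Returns a list of (start, end) sample indices, end exclusive."""
    # velocity via a 5-point central difference, in deg/s
    vx = (np.roll(x, -2) + np.roll(x, -1) - np.roll(x, 1) - np.roll(x, 2)) * fs / 6.0
    vy = (np.roll(y, -2) + np.roll(y, -1) - np.roll(y, 1) - np.roll(y, 2)) * fs / 6.0
    vx[:2] = vx[-2:] = 0.0
    vy[:2] = vy[-2:] = 0.0
    # robust, median-based velocity spread sets an elliptic threshold
    sx = np.sqrt(np.median(vx ** 2) - np.median(vx) ** 2) + 1e-12
    sy = np.sqrt(np.median(vy ** 2) - np.median(vy) ** 2) + 1e-12
    above = (vx / (lam * sx)) ** 2 + (vy / (lam * sy)) ** 2 > 1.0
    # group consecutive supra-threshold samples into events
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                events.append((start, i))
            start = None
    return events

# Synthetic 1 kHz trace: slow drift plus one injected ~0.3 deg microsaccade.
rng = np.random.default_rng(2)
fs, n = 1000.0, 1000
x = np.cumsum(rng.normal(0.0, 0.0005, n))
y = np.cumsum(rng.normal(0.0, 0.0005, n))
x[500:510] += np.linspace(0.0, 0.3, 10)  # 10 ms ramp to 0.3 deg
x[510:] += 0.3

events = detect_microsaccades(x, y, fs)
print(events)
```

Once events are segmented this way, per-event parameters (amplitude, peak velocity, duration, direction, and so on) can be computed as biometric features; the thirteen parameters used in the paper are defined in its text.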
12. Demonstration of three-dimensional contact point determination and contour reconstruction during active whisking behavior of an awake rat. PLoS Comput Biol 2022; 18:e1007763. PMID: 36108064; PMCID: PMC9477318; DOI: 10.1371/journal.pcbi.1007763.
Abstract
The rodent vibrissal (whisker) system has been studied for decades as a model of active touch sensing. There are no sensors along the length of a whisker; all sensing occurs at the whisker base. Therefore, a large open question in many neuroscience studies is how an animal could estimate the three-dimensional (3D) location at which a whisker makes contact with an object. In the present work we simulated the shape of a real rat whisker to demonstrate the existence of several unique mappings from triplets of mechanical signals at the whisker base to the three-dimensional whisker-object contact point. We then used high speed video to record whisker deflections as an awake rat whisked against a peg, and used the mechanics resulting from those deflections to extract the contact points along the peg surface. These results demonstrate that measurement of specific mechanical triplets at the base of a biological whisker can enable 3D contact point determination during natural whisking behavior. The approach is viable even though the biological whisker has non-ideal, non-planar curvature, and even given the rat’s real-world choices of whisking parameters. Visual intuition for the quality of the approach is provided in a video that shows the contour of the peg gradually emerging during active whisking behavior.
13. Fixational drift is driven by diffusive dynamics in central neural circuitry. Nat Commun 2022; 13:1697. PMID: 35361753; PMCID: PMC8971408; DOI: 10.1038/s41467-022-29201-y.
Abstract
During fixation and between saccades, our eyes undergo diffusive random motion called fixational drift. The role of fixational drift in visual coding and inference has been debated in the past few decades, but the mechanisms that underlie this motion remained unknown. In particular, it has been unclear whether fixational drift arises from peripheral sources, or from central sources within the brain. Here we show that fixational drift is correlated with neural activity, and identify its origin in central neural circuitry within the oculomotor system, upstream to the ocular motoneurons (OMNs). We analyzed a large data set of OMN recordings in the rhesus monkey, alongside precise measurements of eye position, and found that most of the variance of fixational eye drifts must arise upstream of the OMNs. The diffusive statistics of the motion points to the oculomotor integrator, a memory circuit responsible for holding the eyes still between saccades, as a likely source of the motion. Theoretical modeling, constrained by the parameters of the primate oculomotor system, supports this hypothesis by accounting for the amplitude as well as the statistics of the motion. Thus, we propose that fixational ocular drift provides a direct observation of diffusive dynamics in a neural circuit responsible for storage of continuous parameter memory in persistent neural activity. The identification of a mechanistic origin for fixational drift is likely to advance the understanding of its role in visual processing and inference.
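The diffusive signature invoked here, mean squared displacement growing linearly with time, can be checked on simulated drift. A generic sketch with arbitrary parameters, not values fit to the monkey data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Many simulated "fixations": 2D random-walk drift with diffusion D (deg^2/s).
D, dt, n_steps, n_trials = 0.01, 0.001, 500, 2000
steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), (n_trials, n_steps, 2))
traj = np.cumsum(steps, axis=1)

# Mean squared displacement: diffusive motion in 2D gives MSD(t) = 4 D t.
msd = (traj ** 2).sum(axis=2).mean(axis=0)
t = dt * np.arange(1, n_steps + 1)
ratio = msd / (4.0 * D * t)
print(ratio[-1])  # ≈ 1 for a diffusive process
```

The same check applied to measured eye traces (MSD linear in time, slope giving the diffusion constant) is what distinguishes diffusive drift from, say, ballistic motion, whose MSD grows quadratically.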
|
14
|
Staudigl T, Minxha J, Mamelak AN, Gothard KM, Rutishauser U. Saccade-related neural communication in the human medial temporal lobe is modulated by the social relevance of stimuli. Sci Adv 2022; 8:eabl6037. PMID: 35302856; PMCID: PMC8932656; DOI: 10.1126/sciadv.abl6037.
Abstract
Humans predominantly explore their environment by moving their eyes. To optimally communicate and process visual information, neural activity needs to be coordinated with the execution of eye movements. We investigated the coordination between visual exploration and interareal neural communication by analyzing local field potentials and single neuron activity in patients with epilepsy. We demonstrated that during the free viewing of images, neural communication between the human amygdala and hippocampus is coordinated with the execution of eye movements. The strength and direction of neural communication and hippocampal saccade-related phase alignment were strongest for fixations that landed on human faces. Our results argue that the state of the human medial temporal lobe network is selectively coordinated with motor behavior. Interareal neural communication was facilitated for social stimuli as indexed by the category of the attended information.
Affiliation(s)
- Tobias Staudigl: Department of Neurosurgery, Cedars-Sinai Medical Center, Los Angeles, CA 90048, USA; Department of Psychology, Ludwig-Maximilians-University, Munich, Germany
- Juri Minxha: Department of Neurosurgery, Cedars-Sinai Medical Center, Los Angeles, CA 90048, USA; Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA 91125, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA
- Adam N. Mamelak: Department of Neurosurgery, Cedars-Sinai Medical Center, Los Angeles, CA 90048, USA
- Katalin M. Gothard: Department of Physiology, College of Medicine, University of Arizona, Tucson, AZ 85724, USA
- Ueli Rutishauser: Department of Neurosurgery, Cedars-Sinai Medical Center, Los Angeles, CA 90048, USA; Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA 91125, USA; Department of Neurology, Cedars-Sinai Medical Center, Los Angeles, CA 90048, USA
|
15
|
Microsaccades, Drifts, Hopf Bundle and Neurogeometry. J Imaging 2022; 8:76. PMID: 35324631; PMCID: PMC8953095; DOI: 10.3390/jimaging8030076.
Abstract
The first part of the paper contains a short review of image processing in early vision, both in the static case, when the eyes and the stimulus are stable, and in the dynamic case, when the eyes participate in fixational eye movements. In the second part, we give an interpretation of Donders' and Listing's laws in terms of the Hopf fibration of the 3-sphere over the 2-sphere. In particular, it is shown that the configuration space of the eyeball (when the head is fixed) is the 2-dimensional hemisphere SL+, called the Listing hemisphere, and saccades are described as geodesic segments of SL+ with respect to the standard round metric. We study fixational eye movements (drift and microsaccades) in terms of this model and discuss their role in vision. A model of fixational eye movements is proposed that gives an explanation of the presaccadic shift of receptive fields.
|
16
|
Idiosyncratic selection of active touch for shape perception. Sci Rep 2022; 12:2922. PMID: 35190603; PMCID: PMC8861104; DOI: 10.1038/s41598-022-06807-2.
Abstract
Hand movements are essential for tactile perception of objects. However, the specific functions served by active touch strategies, and their dependence on physiological parameters, are unclear and understudied. Focusing on planar shape perception, we tracked the hands of 11 participants at high resolution during a shape recognition task. Two dominant hand movement strategies were identified: contour following and scanning. Contour following movements were either tangential to the contour or oscillating perpendicular to it. Scanning movements crossed between distant parts of the shapes' contour. Both strategies exhibited non-uniform coverage of the shapes' contours. Idiosyncratic movement patterns were specific to the sensed object. In a second experiment, we measured the participants' spatial and temporal tactile thresholds. Significant portions of the variations in hand speed and in oscillation patterns could be explained by the idiosyncratic thresholds. Using data-driven simulations, we show how specific strategy choices may affect receptor activation. These results suggest that motion strategies of active touch adapt to both the sensed object and the perceiver's physiological parameters.
|
17
|
Leonard BT, Kontos AP, Marchetti GF, Zhang M, Eagle SR, Reecher HM, Bensinger ES, Snyder VC, Holland CL, Sheehy CK, Rossi EA. Fixational eye movements following concussion. J Vis 2021; 21:11. PMID: 34940825; PMCID: PMC8709928; DOI: 10.1167/jov.21.13.11.
Abstract
The purpose of this study was to evaluate fixational eye movements (FEMs) with high spatial and temporal resolution following concussion, where oculomotor symptoms and impairments are common. Concussion diagnosis was determined using current consensus guidelines. A retinal eye-tracking device, the tracking scanning laser ophthalmoscope (TSLO), was used to measure FEMs in adolescents and young adults following a concussion and in an unaffected control population. FEMs were quantified in two fixational paradigms: (1) fixating on the center, or (2) fixating on the corner of the TSLO imaging raster. Fixational saccade amplitude was significantly greater, on average, in recently concussed patients (≤21 days post-injury; mean = 1.03°; SD = 0.36°) than in controls (mean = 0.82°; SD = 0.31°) when fixating on the center of the imaging raster (t = 2.87, df = 82, p = 0.005). These fixational saccades followed the main sequence and therefore also had greater peak velocity (t = 2.86, df = 82, p = 0.006) and peak acceleration (t = 2.80, df = 82, p = 0.006). These metrics significantly differentiated concussed patients from controls (AUC = 0.67-0.68, minimum p = 0.005). No group differences were seen for the drift metrics in either task, or for any of the FEM metrics in the corner-of-raster fixation task. Fixational saccade amplitudes thus differed in the concussion group only when fixating on the center of the raster; this task specificity suggests that task optimization may improve differentiation and warrants further study. FEMs measured in the acute-to-subacute period of concussion recovery may provide a quick (<3 minutes), objective, sensitive, and accurate assessment of ocular dysfunction. Future work should assess the impact of age, mechanism of injury, and post-concussion recovery on FEM alterations following concussion.
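The AUC values reported above quantify how well a single scalar metric separates the two groups. AUC is equivalent to the Mann-Whitney statistic and can be computed directly from two samples without fitting an ROC curve; a minimal sketch, with made-up amplitude values rather than the study's data:

```python
def auc_from_samples(group_a, group_b):
    """Probability that a random draw from group_a exceeds one from group_b,
    with ties counting half -- the Mann-Whitney / ROC AUC for a scalar metric.
    AUC = 0.5 means no separation; 1.0 means perfect separation."""
    wins = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                wins += 1.0
            elif a == b:
                wins += 0.5
    return wins / (len(group_a) * len(group_b))

# Synthetic saccade-amplitude-like samples (degrees); illustrative values only.
concussed = [1.4, 1.1, 0.9, 1.3, 1.0, 1.2]
controls = [0.8, 0.7, 1.0, 0.6, 0.9, 0.75]
auc = auc_from_samples(concussed, controls)
```

For large samples the double loop should be replaced by a rank-based computation, but the pairwise definition is the clearest statement of what an AUC of 0.67-0.68 means: roughly a two-in-three chance that a randomly chosen concussed patient outscores a randomly chosen control on the metric.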
Affiliation(s)
- Bianca T Leonard: Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Anthony P Kontos: Department of Orthopedic Surgery, University of Pittsburgh, Pittsburgh, PA, USA
- Min Zhang: Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Shawn R Eagle: Department of Orthopedic Surgery, University of Pittsburgh, Pittsburgh, PA, USA
- Hope M Reecher: Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Ethan S Bensinger: Vision Science Group, University of California, Berkeley, Berkeley, CA, USA
- Valerie C Snyder: Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Cyndi L Holland: Department of Orthopedic Surgery, University of Pittsburgh, Pittsburgh, PA, USA
- Christy K Sheehy: Department of Neurology, University of California, San Francisco, San Francisco, CA, USA
- Ethan A Rossi: Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh Swanson School of Engineering, Pittsburgh, PA, USA; rossilab.org
|
18
|
Ahissar E. Time in the brain: Encoding does not mean perceiving. Neuron 2021; 109:3542-3544. PMID: 34793705; DOI: 10.1016/j.neuron.2021.11.002.
Abstract
The neural basis of time perception remains an enigma. In rats performing interval judgment tasks, striatal time coding has drawn attention as one potential substrate. Toso et al. (2021b) find that such time coding does not account for stimulus duration perception.
Affiliation(s)
- Ehud Ahissar: Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
|
19
|
Active sensing in a dynamic olfactory world. J Comput Neurosci 2021; 50:1-6. PMID: 34591220; DOI: 10.1007/s10827-021-00798-1.
|
20
|
Leszczynski M, Chaieb L, Staudigl T, Enkirch SJ, Fell J, Schroeder CE. Neural activity in the human anterior thalamus during natural vision. Sci Rep 2021; 11:17480. PMID: 34471183; PMCID: PMC8410783; DOI: 10.1038/s41598-021-96588-x.
Abstract
In natural vision humans and other primates explore the environment by active sensing, using saccadic eye movements to relocate the fovea and sample different bits of information multiple times per second. Saccades induce a phase reset of ongoing neuronal oscillations in primary and higher-order visual cortices and in the medial temporal lobe. As a result, neuron ensembles are shifted to a common state at the time visual input propagates through the system (i.e., just after fixation). The extent of the brain's circuitry that is modulated by saccades is not yet known. Here, we evaluate the possibility that saccadic phase reset impacts the anterior nuclei of the thalamus (ANT). Using recordings in the human thalamus of three surgical patients during natural vision, we found that saccades and visual stimulus onset both modulate neural activity, but with distinct field potential morphologies. Specifically, we found that fixation-locked field potentials had a component that preceded saccade onset. It was followed by an early negativity around 50 ms after fixation onset, which is significantly faster than any response to visual stimulus presentation. The timing of these events suggests that the ANT are predictively modulated before the saccadic eye movement. We also found oscillatory phase concentration, peaking at 3–4 Hz, coincident with suppression of broadband high-frequency activity (BHA; 80–180 Hz), both locked to fixation onset, supporting the idea that neural oscillations in these nuclei are reorganized to a low-excitability state right after fixation onset. These findings show that during real-world natural visual exploration, neural dynamics in the human ANT are influenced by visual and oculomotor events, which supports the idea that the ANT, apart from their contribution to episodic memory, also play a role in natural vision.
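The "oscillatory phase concentration" mentioned above is commonly quantified as inter-trial coherence: the magnitude of the mean unit phasor across trials at a given time point. A minimal sketch on synthetic phases (the distributions and numbers are illustrative, not the study's data):

```python
import cmath
import math
import random

def phase_concentration(phases):
    """Inter-trial coherence: magnitude of the mean unit phasor.
    1.0 means perfectly aligned phases; values near 0 mean uniformly
    scattered phases (no event-locked phase reset)."""
    mean_phasor = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    return abs(mean_phasor)

rng = random.Random(2)
# Phases at fixation onset: tightly clustered (entrained) vs uniform (not).
entrained = [rng.gauss(0.0, 0.3) for _ in range(200)]
scattered = [rng.uniform(-math.pi, math.pi) for _ in range(200)]
itc_entrained = phase_concentration(entrained)
itc_scattered = phase_concentration(scattered)
```

A fixation-locked phase reset shows up as high coherence at the reset frequency (here, the 3-4 Hz band) relative to surrogate or pre-event phases.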
Affiliation(s)
- Marcin Leszczynski: Department of Psychiatry, College of Physicians and Surgeons, Columbia University Medical Center, 1051 Riverside Drive Kolb Annex Rm 561, New York, NY, 10032, USA; Translational Neuroscience Division, Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY, USA
- Leila Chaieb: Department of Epileptology, University Hospital Bonn, Bonn, Germany
- Tobias Staudigl: Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Juergen Fell: Department of Epileptology, University Hospital Bonn, Bonn, Germany
- Charles E Schroeder: Department of Psychiatry, College of Physicians and Surgeons, Columbia University Medical Center, 1051 Riverside Drive Kolb Annex Rm 561, New York, NY, 10032, USA; Translational Neuroscience Division, Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY, USA
|
21
|
Oculo-retinal dynamics can explain the perception of minimal recognizable configurations. Proc Natl Acad Sci U S A 2021; 118:2022792118. PMID: 34417308; DOI: 10.1073/pnas.2022792118.
Abstract
Natural vision is a dynamic and continuous process. Under natural conditions, visual object recognition typically involves continuous interactions between ocular motion and visual contrasts, resulting in dynamic retinal activations. In order to identify the dynamic variables that participate in this process and are relevant for image recognition, we used a set of images that are just above and below the human recognition threshold and whose recognition typically requires >2 s of viewing. We recorded eye movements of participants while attempting to recognize these images within trials lasting 3 s. We then assessed the activation dynamics of retinal ganglion cells resulting from ocular dynamics using a computational model. We found that while the saccadic rate was similar between recognized and unrecognized trials, the fixational ocular speed was significantly larger for unrecognized trials. Interestingly, however, retinal activation level was significantly lower during these unrecognized trials. We used retinal activation patterns and oculomotor parameters of each fixation to train a binary classifier, classifying recognized from unrecognized trials. Only retinal activation patterns could predict recognition, reaching 80% correct classifications on the fourth fixation (on average, ∼2.5 s from trial onset). We thus conclude that the information that is relevant for visual perception is embedded in the dynamic interactions between the oculomotor sequence and the image. Hence, our results suggest that ocular dynamics play an important role in recognition and that understanding the dynamics of retinal activation is crucial for understanding natural vision.
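The classification analysis described above (predicting recognition from per-fixation retinal activation patterns) can be sketched with a plain logistic-regression classifier trained by batch gradient descent. The two-dimensional synthetic "activation" features, sample sizes, and learning parameters below are all hypothetical stand-ins, not the authors' pipeline:

```python
import math
import random

def train_logistic(features, labels, lr=0.5, epochs=200):
    """Plain batch-gradient logistic regression; returns (weights, bias)."""
    n_feat = len(features[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        grad_w = [0.0] * n_feat
        grad_b = 0.0
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(recognized)
            err = p - y
            for j in range(n_feat):
                grad_w[j] += err * x[j]
            grad_b += err
        w = [wi - lr * gw / len(features) for wi, gw in zip(w, grad_w)]
        b -= lr * grad_b / len(features)
    return w, b

def predict(w, b, x):
    """Hard decision at the 0.5 probability boundary (z = 0)."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Synthetic features: recognized trials (label 1) drawn with higher mean
# "retinal activation" than unrecognized trials (label 0).
rng = random.Random(1)
features, labels = [], []
for label, mu in [(1, 1.0), (0, -1.0)]:
    for _ in range(50):
        features.append([rng.gauss(mu, 1.0), rng.gauss(mu, 1.0)])
        labels.append(label)
w, b = train_logistic(features, labels)
accuracy = sum(predict(w, b, x) == y for x, y in zip(features, labels)) / len(labels)
```

In practice one would hold out test trials and cross-validate, as any claim like "80% correct on the fourth fixation" requires; the sketch only shows the shape of the decoding step.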
|
22
|
Zilbershtain-Kra Y, Graffi S, Ahissar E, Arieli A. Active sensory substitution allows fast learning via effective motor-sensory strategies. iScience 2021; 24:101918. PMID: 33392481; PMCID: PMC7773576; DOI: 10.1016/j.isci.2020.101918.
Abstract
We examined the development of new sensing abilities in adults by training participants to perceive remote objects through their fingers. Using an Active-Sensing-based sensory Substitution device (ASenSub), participants quickly learned to perceive remote objects via the new modality and preserved their high performance for more than 20 months. Both sighted and blind participants exhibited almost complete transfer of performance from 2D images to novel 3D physical objects. Perceptual accuracy and speed using the ASenSub were, on average, 300% and 600% better than previous reports for 2D images and 3D objects. This improvement is attributed to the ability of the participants to employ their own motor-sensory strategies. Sighted participants' dominant strategy was based on motor-sensory convergence on the most informative regions of objects, similar to fixation patterns in vision. Congenitally blind participants did not show such a tendency, and many of their exploratory procedures resembled those observed with natural touch.
Affiliation(s)
- Yael Zilbershtain-Kra, Shmuel Graffi, Ehud Ahissar, Amos Arieli: The Department of Neurobiology, Weizmann Institute of Science, 234 Herzl Street, Rehovot 76100, Israel
|
23
|
Gruber LZ, Ahissar E. Closed loop motor-sensory dynamics in human vision. PLoS One 2020; 15:e0240660. PMID: 33057398; PMCID: PMC7561174; DOI: 10.1371/journal.pone.0240660.
Abstract
Vision is obtained with a continuous motion of the eyes. The kinematic analysis of eye motion, during any visual or ocular task, typically reveals two (kinematic) components: saccades, which quickly replace the visual content in the retinal fovea, and drifts, which slowly scan the image after each saccade. While the saccadic exchange of regions of interest (ROIs) is commonly considered to be included in motor-sensory closed-loops, it is commonly assumed that drifts function in an open-loop manner, that is, independent of the concurrent visual input. Accordingly, visual perception is assumed to be based on a sequence of open-loop processes, each initiated by a saccade-triggered retinal snapshot. Here we directly challenged this assumption by testing the dependency of drift kinematics on concurrent visual inputs using real-time gaze-contingent-display. Our results demonstrate a dependency of the trajectory on the concurrent visual input, convergence of speed to condition-specific values and maintenance of selected drift-related motor-sensory controlled variables, all strongly indicative of drifts being included in a closed-loop brain-world process, and thus suggesting that vision is inherently a closed-loop process.
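The "convergence of speed to condition-specific values" reported above is the signature of an error-correcting loop: a controlled variable is repeatedly nudged toward a set-point. A toy proportional controller makes the behavior concrete (the gain, set-point, and step count are hypothetical, not fitted to the paper's data):

```python
def closed_loop_speed(setpoint, gain=0.2, n_steps=100, v0=0.0):
    """Toy closed-loop controller: at each step the measured drift speed is
    nudged toward a condition-specific set-point by a fraction `gain` of the
    current error. Returns the full speed trace."""
    v = v0
    history = [v]
    for _ in range(n_steps):
        v += gain * (setpoint - v)   # error-driven correction
        history.append(v)
    return history

# Speed converges to the set-point regardless of the starting value.
trace = closed_loop_speed(setpoint=1.5)
```

An open-loop process would show no such convergence, which is why condition-specific steady-state values under gaze-contingent manipulation argue for drift being inside a closed brain-world loop.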
Affiliation(s)
- Ehud Ahissar: Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
|
24
|
Predictive whisker kinematics reveal context-dependent sensorimotor strategies. PLoS Biol 2020; 18:e3000571. PMID: 32453721; PMCID: PMC7274460; DOI: 10.1371/journal.pbio.3000571.
Abstract
Animals actively move their sensory organs in order to acquire sensory information. Some rodents, such as mice and rats, employ cyclic scanning motions of their facial whiskers to explore their proximal surroundings, a behavior known as whisking. Here, we investigated the contingency of whisking kinematics on the animal's behavioral context, which arises from both internal processes (attention and expectations) and external constraints (available sensory and motor degrees of freedom). We recorded rat whisking at high temporal resolution in 2 experimental contexts (freely moving or head-fixed) and 2 spatial sensory configurations (a single row or 3 caudal whiskers on each side of the snout). We found that rapid sensorimotor twitches, called pumps, occurring during free-air whisking carry information about the rat's upcoming exploratory direction, as demonstrated by the ability of these pumps to predict consequent head and body locomotion. Specifically, pump behavior during both voluntary motionlessness and imposed head fixation exposed a backward redistribution of sensorimotor exploratory resources. Further, head-fixed rats employed a wide range of whisking profiles to compensate for the loss of head- and body-motor degrees of freedom. Finally, changing the number of intact vibrissae available to a rat resulted in an alteration of whisking strategy consistent with the rat actively reallocating its remaining resources. In sum, this work shows that rats adapt their active exploratory behavior in a homeostatic attempt to preserve sensorimotor coverage under changing environmental conditions and changing sensory capacities, including those imposed by various laboratory conditions.
|
25
|
Staadt R, Philipp ST, Cremers JL, Kornmeier J, Jancke D. Perception of the difference between past and present stimulus: A rare orientation illusion may indicate incidental access to prediction error-like signals. PLoS One 2020; 15:e0232349. PMID: 32365070; PMCID: PMC7197803; DOI: 10.1371/journal.pone.0232349.
Abstract
A popular model for sensory processing, known as predictive coding, proposes that incoming signals are iteratively compared with top-down predictions along a hierarchical processing scheme. At each step, error signals arising from differences between actual input and prediction are forwarded and recurrently minimized by updating internal models, to finally be "explained away". However, the neuronal mechanisms underlying such computations and their limitations in processing speed are largely unknown. Further, it remains unclear at which step of cortical processing prediction errors are explained away, if at all. In the present study, human subjects briefly viewed the superposition of two orthogonally oriented gratings followed by abrupt removal of one orientation after either 33 or 200 milliseconds. Instead of strictly seeing the remaining orientation, observers rarely but highly significantly report an illusory percept of the arithmetic difference between the previous and actual orientations. Previous findings in cats using the identical paradigm suggest that such difference signals are inherited from the first steps of visual cortical processing. In light of early modeling accounts of predictive coding, in which visual neurons were interpreted as residual error detectors signaling the difference between actual input and its temporal prediction based on past input, our data may indicate continued access to residual errors. Such a strategy permits time-critical perceptual decision making across a spectrum of competing internal signals up to the highest levels of processing. Thus, the occasional appearance of a prediction error-like illusory percept may uncover maintained flexibility at perceptual decision stages when subjects cope with highly dynamic and ambiguous visual stimuli.
Affiliation(s)
- Robert Staadt: Optical Imaging Group, Institut für Neuroinformatik, Ruhr University Bochum, Bochum, Germany
- Sebastian T. Philipp: Optical Imaging Group, Institut für Neuroinformatik, Ruhr University Bochum, Bochum, Germany; Institute for Frontier Areas of Psychology and Mental Health, Freiburg, Germany
- Joschka L. Cremers: Institute for Frontier Areas of Psychology and Mental Health, Freiburg, Germany; Department of Psychiatry and Psychotherapy, Medical Center, University of Freiburg, Freiburg, Germany; Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Jürgen Kornmeier: Institute for Frontier Areas of Psychology and Mental Health, Freiburg, Germany; Department of Psychiatry and Psychotherapy, Medical Center, University of Freiburg, Freiburg, Germany; Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Dirk Jancke: Optical Imaging Group, Institut für Neuroinformatik, Ruhr University Bochum, Bochum, Germany
|
26
|
Intoy J, Rucci M. Finely tuned eye movements enhance visual acuity. Nat Commun 2020; 11:795. PMID: 32034165; PMCID: PMC7005897; DOI: 10.1038/s41467-020-14616-2.
Abstract
High visual acuity is essential for many tasks, from recognizing distant friends to driving a car. While much is known about how the eye’s optics and anatomy contribute to spatial resolution, possible influences from eye movements are rarely considered. Yet humans incessantly move their eyes, and it has long been suggested that oculomotor activity enhances fine pattern vision. Here we examine the role of eye movements in the most common assessment of visual acuity, the Snellen eye chart. By precisely localizing gaze and actively controlling retinal stimulation, we show that fixational behavior improves acuity by more than 0.15 logMAR, at least 2 lines of the Snellen chart. This improvement is achieved by adapting both microsaccades and ocular drifts to precisely position the image on the retina and adjust its motion. These findings show that humans finely tune their fixational eye movements so that they greatly contribute to normal visual acuity. Humans are normally not aware that their eyes are always in motion, even when attempting to maintain steady gaze on a point. Here the authors show that these small eye movements are finely controlled and contribute more than two lines in a standard eye-chart test of visual acuity.
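For scale, the 0.15 logMAR figure can be unpacked from the standard definitions: logMAR is the base-10 logarithm of the minimum angle of resolution (MAR, in arcminutes), and a Snellen fraction 20/XX corresponds to XX = 20 × 10^logMAR. A back-of-envelope sketch (the sample values are illustrative; Snellen chart line spacing is irregular, so the exact line count depends on the chart):

```python
def logmar_to_mar(logmar):
    """Minimum angle of resolution in arcminutes from a logMAR score."""
    return 10.0 ** logmar

def snellen_denominator(logmar, distance=20):
    """Denominator of the equivalent Snellen fraction (e.g. 20/XX)."""
    return distance * 10.0 ** logmar

# A 0.15 logMAR improvement shrinks the resolvable angle by 10**0.15 (~1.41x),
# e.g. moving from roughly 20/28 to 20/20 in US Snellen notation.
factor = logmar_to_mar(0.15)
before = snellen_denominator(0.15)
after = snellen_denominator(0.0)
```

So the reported fixational benefit corresponds to about a 41% reduction in the smallest resolvable angle, which is why it translates to multiple chart lines.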
Affiliation(s)
- Janis Intoy: Graduate Program for Neuroscience, Boston University, Boston, MA, 02215, USA; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, 14627, USA; Center for Visual Science, University of Rochester, Rochester, NY, 14627, USA
- Michele Rucci: Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, 14627, USA; Center for Visual Science, University of Rochester, Rochester, NY, 14627, USA
|
27
|
Rucci M, Ahissar E, Burr D. Temporal Coding of Visual Space. Trends Cogn Sci 2019; 22:883-895. PMID: 30266148; DOI: 10.1016/j.tics.2018.07.009.
Abstract
Establishing a representation of space is a major goal of sensory systems. Spatial information, however, is not always explicit in the incoming sensory signals. In most modalities it needs to be actively extracted from cues embedded in the temporal flow of receptor activation. Vision, on the other hand, starts with a sophisticated optical imaging system that explicitly preserves spatial information on the retina. This may lead to the assumption that vision is predominantly a spatial process: all that is needed is to transmit the retinal image to the cortex, like uploading a digital photograph, to establish a spatial map of the world. However, this deceptively simple analogy is inconsistent with theoretical models and experiments that study visual processing in the context of normal motor behavior. We argue here that, as with other senses, vision relies heavily on temporal strategies and temporal neural codes to extract and represent spatial information.
Affiliation(s)
- Michele Rucci: Center for Visual Science, University of Rochester, Rochester, NY 14627, USA; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Ehud Ahissar: Department of Neurobiology, Weizmann Institute, Rehovot, Israel
- David Burr: Department of Neuroscience, University of Florence, Florence 50125, Italy; School of Psychology, University of Sydney, Camperdown, NSW 2006, Australia
|
28
|
Leszczynski M, Schroeder CE. The Role of Neuronal Oscillations in Visual Active Sensing. Front Integr Neurosci 2019; 13:32. PMID: 31396059; PMCID: PMC6664014; DOI: 10.3389/fnint.2019.00032.
Abstract
Visual perception is most often studied as a "passive" process in which an observer fixates steadily at a point in space so that stimuli can be delivered to the system with spatial precision. Analysis of neuronal signals related to vision is generally keyed to stimulus onset, stimulus movement, etc.; i.e., events external to the observer. In natural "active" vision, however, information is systematically acquired by using eye movements, including rapid (saccadic) eye movements as well as smooth ocular pursuit of moving objects and slower drifts. Here we consider the use of alternating saccades and fixations to gather information from a visual scene. The underlying motor sampling plan contains highly reliable information regarding "where" and "when" the eyes will land; this information can be used predictively to modify the firing properties of neurons precisely at the time when this "contextual" information is most useful: when a volley of retinal input enters the system at the onset of each fixation. Analyses focusing on neural events leading to and resulting from shifts in fixation, as well as on visual events external to the observer, can provide a more complete and mechanistic understanding of visual information processing. Studies thus far suggest that active vision may be fundamentally different from the process we usually study with more traditional passive viewing paradigms. In this Perspective we note that active saccadic sampling behavior imposes robust temporal patterning on the activity of neuron ensembles and large-scale neural dynamics throughout the brain's visual pathways, whose mechanistic effects on information processing are not yet fully understood. The spatio-temporal sequence of eye movements elicits a succession of temporally predictable quasi-rhythmic sensory inputs, whose encoding is enhanced by entrainment of low-frequency oscillations to the rate of eye movements. Review of the pertinent findings underscores the fact that temporal coordination between motor and visual cortices is critical for understanding the neural dynamics of active vision, and posits that phase entrainment of neuronal oscillations plays a mechanistic role in this process.
Affiliation(s)
- Marcin Leszczynski
- Department of Neurological Surgery, College of Physicians and Surgeons, Columbia University, New York, NY, United States
- Translational Neuroscience Laboratories, The Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, United States
- Charles E. Schroeder
- Department of Neurological Surgery, College of Physicians and Surgeons, Columbia University, New York, NY, United States
- Translational Neuroscience Laboratories, The Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, United States
29
Affiliation(s)
- Jan Koenderink
- University of Leuven (KU Leuven), Belgium; Justus Liebig University Giessen, Germany; University of California at Berkeley, CA, USA; Utrecht University, the Netherlands
30
Chaudhary R, Rema V. Deficits in Behavioral Functions of Intact Barrel Cortex Following Lesions of Homotopic Contralateral Cortex. Front Syst Neurosci 2018; 12:57. [PMID: 30524251 PMCID: PMC6262316 DOI: 10.3389/fnsys.2018.00057] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2018] [Accepted: 10/17/2018] [Indexed: 12/02/2022] Open
Abstract
Focal unilateral injuries to the somatosensory whisker barrel cortex have been shown to cause long-lasting deficits in the activity and experience-dependent plasticity of neurons in the intact contralateral barrel cortex. However, the long-term effect of these deficits on behavioral functions of the intact contralesional cortex is not clear. In this study, we used the “Gap-crossing task,” a barrel cortex-dependent, whisker-sensitive tactile behavior, to test the hypothesis that unilateral lesions of the somatosensory cortex would affect behavioral functions of the intact somatosensory cortex and degrade the execution of a bilaterally learnt behavior. Adult rats were trained to perform the Gap-crossing task using whiskers on both sides of the face. The barrel cortex was then lesioned unilaterally by subpial aspiration. As observed in other studies, when rats used whiskers that projected directly to the lesioned hemisphere, Gap-crossing performance was drastically compromised, perhaps as a direct effect of the lesion. Significant and persistent deficits were also present when the lesioned rats performed the Gap-crossing task using whiskers that projected to the intact cortex. The deficits were specific to performance of the task at the highest levels of sensitivity. Comparable deficits were seen when normal, bilaterally trained rats performed the Gap-crossing task with only the whiskers on one side of the face, or when they used only two rows of whiskers (D row and E row) left intact on both sides of the face. These findings indicate that the prolonged impairment in execution of the learnt task by rats with unilateral lesions of somatosensory cortex could arise because sensory inputs from one set of whiskers to the intact cortex are insufficient to provide adequate sensory information at higher thresholds of detection.
Our data suggest that optimal performance of somatosensory behavior requires dynamic, activity-driven interhemispheric interactions, drawing on the full complement of somatosensory inputs, between homotopic areas of the cerebral cortex. These results imply that focal unilateral cortical injuries, including those in humans, are likely to have widespread bilateral effects on information processing, including in intact areas of the cortex.
Affiliation(s)
- V Rema
- National Brain Research Centre, Manesar, India
31
Abstract
During development, the eye tunes its size to its optics so that distant objects are in focus, a state known as emmetropia. Although multiple factors contribute to this process, a strong influence appears to be exerted by the visual input signals entering the eye. Much research has been dedicated to the possible roles of specific features of the retinal image, such as the magnitude of blur. However, in humans and other species, the input to the retina is not an image, but a spatiotemporal flow of luminance. Small eye movements occur incessantly during natural fixation, continually transforming the spatial scene into temporal modulations on the retina. An emerging body of evidence suggests that this space-time reformatting is crucial to many aspects of visual processing, including sensitivity to fine spatial detail. The resulting temporal modulations depend not only on ocular dynamics, but also on the optics and shape of the eye, and the spatial statistics of the visual scene. Here we examine the characteristics of these signals and suggest that they may play a role in emmetropization. A direct consequence of this viewpoint is that abnormal oculomotor behavior may contribute to the development of myopia and hyperopia.
Affiliation(s)
- Michele Rucci
- Department of Brain & Cognitive Sciences, University of Rochester, Rochester, NY, USA; Center for Visual Science, University of Rochester, Rochester, NY, USA
- Jonathan D Victor
- Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, New York, NY, USA; Department of Neurology, Weill Cornell Medical College, New York, NY, USA
32
Vaxenburg R, Wyche I, Svoboda K, Efros AL, Hires SA. Dynamic cues for whisker-based object localization: An analytical solution to vibration during active whisker touch. PLoS Comput Biol 2018; 14:e1006032. [PMID: 29584719 PMCID: PMC5889188 DOI: 10.1371/journal.pcbi.1006032] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2017] [Revised: 04/06/2018] [Accepted: 02/08/2018] [Indexed: 11/24/2022] Open
Abstract
Vibrations are important cues for tactile perception across species. Whisker-based sensation in mice is a powerful model system for investigating mechanisms of tactile perception. However, the role vibration plays in whisker-based sensation remains unsettled, in part due to difficulties in modeling the vibration of whiskers. Here, we develop an analytical approach to calculate the vibrations of whiskers striking objects. We use this approach to quantify vibration forces during active whisker touch at a range of locations along the whisker. The frequency and amplitude of vibrations evoked by contact are strongly dependent on the position of contact along the whisker. The magnitude of vibrational shear force and bending moment is comparable to quasi-static forces. The fundamental vibration frequencies are in a detectable range for mechanoreceptor properties and below the maximum spike rates of primary sensory afferents. These results suggest two dynamic cues exist that rodents can use for object localization: vibration frequency and comparison of vibrational to quasi-static force magnitude. These complement the use of quasi-static force angle as a distance cue, particularly for touches close to the follicle, where whiskers are stiff and force angles hardly change during touch. Our approach also provides a general solution to calculation of whisker vibrations in other sensing tasks. Vibrations play an important role in the sense of touch in many species, but exactly how they influence touch perception remains mysterious. An important reason for this mystery is the difficulty in measuring vibrations during touch. Mice are a powerful model system for investigating touch perception because they actively sweep their whiskers into objects and the resulting bending from touch can be video recorded. However, vibrations of the whiskers during touch are usually too small and fast to be seen. 
To overcome this limitation, we develop a new mathematical approach to calculating whisker vibrations from the speed at impact, the maximum whisker bending during touch, and the location of contact along the whisker, all of which are more easily observed. We find that vibration frequency and amplitude are strongly dependent on the location of contact along the whisker, which mice may use to deduce the distance between their face and touched objects. We confirm our calculations with high-speed imaging of whisker vibration during touch.
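The paper's analytical solution for whisker vibration is not reproduced in this abstract, but the qualitative point - that the length of the vibrating span set by the contact location fixes the evoked frequency - can be illustrated with the textbook uniform-cantilever formula. This is a deliberate simplification (real whiskers are tapered cones), and every material value below is a hypothetical order-of-magnitude stand-in, not a parameter from the paper:

```python
import numpy as np

# Fundamental bending frequency of a uniform cantilever clamped at the base:
#   f1 = (beta1^2 / (2*pi)) * sqrt(E*I / (rho*A)) / L^2,  beta1 = 1.8751
# The key behavior is the 1/L^2 scaling: shorter vibrating spans ring at
# much higher frequencies. All material values here are hypothetical.

E = 3.0e9            # Pa, elastic modulus (keratin, order of magnitude)
rho = 1.2e3          # kg/m^3, density
r = 40e-6            # m, radius of the uniform stand-in whisker
I = np.pi * r**4 / 4 # second moment of area of a circular cross-section
A = np.pi * r**2     # cross-sectional area
beta1 = 1.8751       # first root of the cantilever characteristic equation

def f1(L):
    """First resonant frequency (Hz) for a vibrating span of length L (m)."""
    return (beta1**2 / (2 * np.pi)) * np.sqrt(E * I / (rho * A)) / L**2

for L in (0.01, 0.02, 0.04):   # spans of 1, 2 and 4 cm
    print(f"{L * 100:.0f} cm span -> {f1(L):.0f} Hz")
```

Halving the span quadruples the frequency, which is why contact position along the whisker is, in principle, readable from vibration frequency alone.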
Affiliation(s)
- Roman Vaxenburg
- Computational Materials Science Center, George Mason University, Fairfax, Virginia, United States of America
- Isis Wyche
- Section of Neurobiology, Department of Biological Sciences, University of Southern California, Los Angeles, California, United States of America
- Karel Svoboda
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America
- Alexander L. Efros
- Center for Computational Material Science, Code 6390, Naval Research Laboratory, Washington, DC, United States of America
- Samuel Andrew Hires
- Section of Neurobiology, Department of Biological Sciences, University of Southern California, Los Angeles, California, United States of America
33
Isett BR, Feasel SH, Lane MA, Feldman DE. Slip-Based Coding of Local Shape and Texture in Mouse S1. Neuron 2018; 97:418-433.e5. [PMID: 29307709 PMCID: PMC5773356 DOI: 10.1016/j.neuron.2017.12.021] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2017] [Revised: 10/23/2017] [Accepted: 12/14/2017] [Indexed: 01/13/2023]
Abstract
Tactile objects have both local geometry (shape) and broader macroscopic texture, but how these different spatial scales are simultaneously encoded during active touch is unknown. In the whisker system, we tested for a shared code based on localized whisker micromotions (stick-slips) and slip-evoked spikes. We trained mice to discriminate smooth from rough surfaces, including ridged gratings and sandpaper. Whisker slips locked to ridges and evoked temporally precise spikes (<10 ms jitter) in somatosensory cortex (S1) that could resolve ridges with ∼1 mm accuracy. Slip-sensitive neurons also encoded touch and texture. On rough surfaces, both slip-evoked spikes and an additional non-slip signal elevated mean firing rate, allowing accurate rough-smooth texture decoding from population firing rate. Eighteen percent of neurons were selective among rough surfaces. Thus, slips elicit spatially and temporally precise spiking in S1 that simultaneously encodes local shape (ridges) and is integrated into a macroscopic firing rate code for roughness.
Affiliation(s)
- Brian R Isett
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA
- Sierra H Feasel
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA 94720, USA
- Monet A Lane
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA 94720, USA
- Daniel E Feldman
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA 94720, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA.
34
Herrmann CJJ, Metzler R, Engbert R. A self-avoiding walk with neural delays as a model of fixational eye movements. Sci Rep 2017; 7:12958. [PMID: 29021548 PMCID: PMC5636902 DOI: 10.1038/s41598-017-13489-8] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2017] [Accepted: 09/26/2017] [Indexed: 11/09/2022] Open
Abstract
Fixational eye movements show scaling behaviour of the positional mean-squared displacement, with a characteristic transition from persistence to antipersistence for increasing time lag. These statistical patterns were found to be mainly shaped by microsaccades (fast, small-amplitude movements). However, our re-analysis of fixational eye-movement data provides evidence that the slow component (physiological drift) of the eyes exhibits scaling behaviour of the mean-squared displacement that varies across human participants. These results suggest that drift is a correlated movement that interacts with microsaccades. Moreover, on long time scales, the mean-squared displacement of the drift shows oscillations, which are also present in the displacement auto-correlation function. This finding lends support to the presence of time-delayed feedback in the control of drift movements. Based on an earlier non-linear delayed-feedback model of fixational eye movements, we propose and discuss different versions of a new model that combines a self-avoiding walk with time delay. As a result, we identify a model that reproduces oscillatory correlation functions, the transition from persistence to antipersistence, and microsaccades.
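The persistence diagnostic used above - the log-log slope of the mean-squared displacement against time lag - can be reproduced on toy trajectories. This sketch is not the authors' self-avoiding-walk model; it only shows the measurement itself, comparing an uncorrelated (Brownian) walk against one with positively correlated steps (all parameters hypothetical):

```python
import numpy as np

def msd(traj, max_lag):
    """Positional mean-squared displacement as a function of time lag."""
    return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

def scaling_exponent(msd_vals):
    """Log-log slope of MSD vs lag: ~1 Brownian, >1 persistent, <1 antipersistent."""
    lags = np.arange(1, len(msd_vals) + 1)
    return np.polyfit(np.log(lags), np.log(msd_vals), 1)[0]

rng = np.random.default_rng(0)
n = 20000

# Uncorrelated steps: ordinary Brownian motion, exponent close to 1.
brownian = np.cumsum(rng.standard_normal((n, 2)), axis=0)

# Positively correlated (AR(1)) steps: persistent motion, exponent above 1
# at the lags probed here.
steps = rng.standard_normal((n, 2))
for i in range(1, n):
    steps[i] += 0.8 * steps[i - 1]
persistent = np.cumsum(steps, axis=0)

exp_b = scaling_exponent(msd(brownian, 50))
exp_p = scaling_exponent(msd(persistent, 50))
print(exp_b, exp_p)
```

A transition from persistence to antipersistence, as reported for fixational data, would show up as an exponent that drops below 1 at longer lags.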
Affiliation(s)
- Carl J J Herrmann
- Institute of Physics and Astronomy, University of Potsdam, Potsdam, D-14476, Germany
- Ralf Metzler
- Institute of Physics and Astronomy, University of Potsdam, Potsdam, D-14476, Germany.
- Ralf Engbert
- Department of Psychology, University of Potsdam, Potsdam, D-14476, Germany
35
Juusola M, Dau A, Song Z, Solanki N, Rien D, Jaciuch D, Dongre SA, Blanchard F, de Polavieja GG, Hardie RC, Takalo J. Microsaccadic sampling of moving image information provides Drosophila hyperacute vision. eLife 2017; 6:26117. [PMID: 28870284 PMCID: PMC5584993 DOI: 10.7554/elife.26117] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2017] [Accepted: 07/25/2017] [Indexed: 11/13/2022] Open
Abstract
Small fly eyes should not see fine image details. Because flies exhibit saccadic visual behaviors and their compound eyes have relatively few ommatidia (sampling points), their photoreceptors would be expected to generate blurry and coarse retinal images of the world. Here we demonstrate that Drosophila see the world far better than predicted from the classic theories. By using electrophysiological, optical and behavioral assays, we found that R1-R6 photoreceptors’ encoding capacity in time is maximized for fast high-contrast bursts, which resemble their light input during saccadic behaviors, whilst over space R1-R6s resolve moving objects at saccadic speeds beyond the predicted motion-blur limit. Our results show how refractory phototransduction and rapid photomechanical photoreceptor contractions jointly sharpen retinal images of moving objects in space-time, enabling hyperacute vision, and explain how such microsaccadic information sampling exceeds the compound eyes’ optical limits. These discoveries elucidate how acuity depends upon photoreceptor function and eye movements. Fruit flies have five eyes: two large compound eyes which support vision, plus three smaller single-lens eyes which are used for navigation. Each compound eye monitors 180° of space and consists of roughly 750 units, each containing eight light-sensitive cells called photoreceptors. This relatively wide spacing of photoreceptors is thought to limit the sharpness, or acuity, of vision in fruit flies. The area of the human retina (the light-sensitive surface at the back of our eyes) that generates our sharpest vision contains photoreceptors that are 500 times more densely packed. Despite their differing designs, human and fruit fly eyes work via the same general principles. If we, or a fruit fly, were to hold our gaze completely steady, the world would gradually fade from view as the eye adapted to the unchanging visual stimulus.
To ensure this does not happen, animals continuously make rapid, automatic eye movements called microsaccades. These refresh the image on the retina and prevent it from fading. Yet it is not known why they do not also cause blurred vision. Standard accounts of vision assume that the retina and the brain perform most of the information processing required, with photoreceptors simply detecting how much light enters the eye. However, Juusola, Dau, Song et al. now challenge this idea by showing that photoreceptors are specially adapted to detect the fluctuating patterns of light that enter the eye as a result of microsaccades. Moreover, fruit fly eyes resolve small moving objects far better than would be predicted based on the spacing of their photoreceptors. The discovery that photoreceptors are well adapted to deal with eye movements changes our understanding of insect vision. The findings also disprove the 100-year-old dogma that the spacing of photoreceptors limits the sharpness of vision in compound eyes. Further studies are required to determine whether photoreceptors in the retinas of other animals, including humans, have similar properties.
Affiliation(s)
- Mikko Juusola
- National Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China; Department of Biomedical Science, University of Sheffield, Sheffield, United Kingdom
- An Dau
- Department of Biomedical Science, University of Sheffield, Sheffield, United Kingdom
- Zhuoyi Song
- Department of Biomedical Science, University of Sheffield, Sheffield, United Kingdom
- Narendra Solanki
- Department of Biomedical Science, University of Sheffield, Sheffield, United Kingdom
- Diana Rien
- National Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China; Department of Biomedical Science, University of Sheffield, Sheffield, United Kingdom
- David Jaciuch
- Department of Biomedical Science, University of Sheffield, Sheffield, United Kingdom
- Sidhartha Anil Dongre
- Department of Biomedical Science, University of Sheffield, Sheffield, United Kingdom
- Florence Blanchard
- Department of Biomedical Science, University of Sheffield, Sheffield, United Kingdom
- Gonzalo G de Polavieja
- Champalimaud Neuroscience Programme, Champalimaud Center for the Unknown, Lisbon, Portugal
- Roger C Hardie
- Department of Physiology Development and Neuroscience, Cambridge University, Cambridge, United Kingdom
- Jouni Takalo
- Department of Biomedical Science, University of Sheffield, Sheffield, United Kingdom
36
Kagan I, Burr DC. Active Vision: Dynamic Reformatting of Visual Information by the Saccade-Drift Cycle. Curr Biol 2017; 27:R341-R344. [PMID: 28486116 DOI: 10.1016/j.cub.2017.03.042] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
Visual processing depends on rapid parsing of global features followed by analysis of fine detail. A new study suggests that this transformation is enabled by a cycle of saccades and fixational drifts, which reformat visual input to match the spatiotemporal sensitivity of fast and slow neuronal pathways.
Affiliation(s)
- Igor Kagan
- Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Centre, Leibniz Institute for Primate Research, Goettingen 37077, Germany.
- David C Burr
- Department of Neuroscience, University of Florence, Italy
37
Boi M, Poletti M, Victor JD, Rucci M. Consequences of the Oculomotor Cycle for the Dynamics of Perception. Curr Biol 2017; 27:1268-1277. [PMID: 28434862 DOI: 10.1016/j.cub.2017.03.034] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2017] [Revised: 03/11/2017] [Accepted: 03/15/2017] [Indexed: 10/19/2022]
Abstract
Much evidence indicates that humans and other species process large-scale visual information before fine spatial detail. Neurophysiological data obtained with paralyzed eyes suggest that this coarse-to-fine sequence results from spatiotemporal filtering by neurons in the early visual pathway. However, the eyes are normally never stationary: rapid gaze shifts (saccades) incessantly alternate with slow fixational movements. To investigate the consequences of this oculomotor cycle on the dynamics of perception, we combined spectral analysis of visual input signals, neural modeling, and gaze-contingent control of retinal stimulation in humans. We show that the saccade/fixation cycle reformats the flow impinging on the retina in a way that initiates coarse-to-fine processing at each fixation. This finding reveals that the visual system uses oculomotor-induced temporal modulations to sequentially encode different spatial components and suggests that, rather than initiating coarse-to-fine processing, spatiotemporal coupling in the early visual pathway builds on the information dynamics of the oculomotor cycle.
Affiliation(s)
- Marco Boi
- Department of Psychological and Brain Sciences, Boston University, 2 Cummington Mall, Boston, MA 02215, USA
- Martina Poletti
- Department of Psychological and Brain Sciences, Boston University, 2 Cummington Mall, Boston, MA 02215, USA
- Jonathan D Victor
- Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, 1300 York Avenue, New York, NY 10065, USA
- Michele Rucci
- Department of Psychological and Brain Sciences, Boston University, 2 Cummington Mall, Boston, MA 02215, USA; Graduate Program in Neuroscience, Boston University, 2 Cummington Mall, Boston, MA 02215, USA.
38
Amit R, Abeles D, Bar-Gad I, Yuval-Greenberg S. Temporal dynamics of saccades explained by a self-paced process. Sci Rep 2017; 7:886. [PMID: 28428540 PMCID: PMC5430543 DOI: 10.1038/s41598-017-00881-7] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2017] [Accepted: 03/15/2017] [Indexed: 11/08/2022] Open
Abstract
Sensory organs are thought to sample the environment rhythmically, thereby providing periodic perceptual input. Whisking and sniffing are governed by oscillators which impose rhythms on the motor control of sensory acquisition and consequently on sensory input. Saccadic eye movements are the main visual sampling mechanism in primates, and were suggested to constitute part of such a rhythmic exploration system. In this study we characterized saccadic rhythmicity, and examined whether it is consistent with an autonomous oscillatory generator or with self-paced generation. Eye movements were tracked while observers were either free-viewing a movie or fixating a static stimulus. We inspected the temporal dynamics of exploratory and fixational saccades and quantified their first-order and higher-order dependencies. Data were analyzed using methods derived from spike-train analysis, and tested against mathematical models and simulations. The findings show that saccade timings are explained by first-order dependencies, specifically by their refractory period. Saccade timings are inconsistent with an autonomous pacemaker but are consistent with a "self-paced" generator, where each saccade is a link in a chain of neural processes that depend on the outcome of the saccade itself. We propose a mathematical model parsimoniously capturing various facets of saccade timings, and suggest a possible neural mechanism producing the observed dynamics.
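The self-paced account can be caricatured as a renewal process whose only memory is a refractory period. This minimal sketch is a simplification of the paper's model (parameters hypothetical, not fitted to its data); it shows the key signature that separates such a generator from an autonomous oscillator, namely that successive intervals carry no serial correlation:

```python
import numpy as np

rng = np.random.default_rng(1)

def self_paced_intervals(n, refractory=0.15, rate=6.0):
    """Inter-saccade intervals (s): an absolute refractory period followed by
    a memoryless exponential wait -- a first-order renewal process, i.e. each
    interval is generated afresh rather than corrected toward a fixed period."""
    return refractory + rng.exponential(1.0 / rate, size=n)

intervals = self_paced_intervals(10_000)

# A renewal generator leaves successive intervals uncorrelated; an autonomous
# pacemaker would instead pull long intervals back toward its period,
# producing negative serial correlation.
serial_r = np.corrcoef(intervals[:-1], intervals[1:])[0, 1]
print(intervals.mean(), serial_r)
```

With these illustrative numbers the mean interval sits near 0.32 s and the serial correlation hovers around zero, the first-order behavior the abstract describes.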
Affiliation(s)
- Roy Amit
- Sagol School of Neuroscience, Tel Aviv University, 6997801, Tel Aviv, Israel.
- Dekel Abeles
- School of Psychological Sciences, Tel Aviv University, 6997801, Tel Aviv, Israel
- Izhar Bar-Gad
- The Leslie and Susan Goldschmidt (Gonda) Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, 5290002, Israel
- Shlomit Yuval-Greenberg
- Sagol School of Neuroscience, Tel Aviv University, 6997801, Tel Aviv, Israel
- School of Psychological Sciences, Tel Aviv University, 6997801, Tel Aviv, Israel
39
Mishra A, Ghosh R, Principe JC, Thakor NV, Kukreja SL. A Saccade Based Framework for Real-Time Motion Segmentation Using Event Based Vision Sensors. Front Neurosci 2017; 11:83. [PMID: 28316563 PMCID: PMC5334512 DOI: 10.3389/fnins.2017.00083] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2016] [Accepted: 02/06/2017] [Indexed: 11/25/2022] Open
Abstract
Motion segmentation is a critical pre-processing step for autonomous robotic systems to facilitate tracking of moving objects in cluttered environments. Event-based sensors are low-power analog devices that represent a scene by means of asynchronous information updates of only the dynamic details, at high temporal resolution, and hence require significantly fewer computations. However, motion segmentation using spatiotemporal data is a challenging task due to data asynchrony. Prior approaches to object tracking using neuromorphic sensors perform well while the sensor is static or when a known model of the object to be followed is available. To address these limitations, in this paper we develop a technique for generalized motion segmentation based on spatial statistics across time frames. First, we create micromotion on the platform to facilitate the separation of static and dynamic elements of a scene, inspired by human saccadic eye movements. Second, we introduce the concept of spike-groups as a methodology to partition spatio-temporal event groups, which facilitates computation of scene statistics and characterization of the objects in it. Experimental results show that our algorithm is able to classify dynamic objects with a moving camera with a maximum accuracy of 92%.
Affiliation(s)
- Abhishek Mishra
- Singapore Institute for Neurotechnology, National University of Singapore, Singapore
- Rohan Ghosh
- Singapore Institute for Neurotechnology, National University of Singapore, Singapore
- Jose C Principe
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, USA
- Nitish V Thakor
- Singapore Institute for Neurotechnology, National University of Singapore, Singapore; Biomedical Engineering Department, Johns Hopkins University, Baltimore, MD, USA
- Sunil L Kukreja
- Singapore Institute for Neurotechnology, National University of Singapore, Singapore
40
Grant RA, Cielen N, Maes K, Heulens N, Galli GL, Janssens W, Gayan-Ramirez G, Degens H. The effects of smoking on whisker movements: A quantitative measure of exploratory behaviour in rodents. Behav Processes 2016; 128:17-23. [DOI: 10.1016/j.beproc.2016.03.021] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2015] [Revised: 03/30/2016] [Accepted: 03/30/2016] [Indexed: 01/15/2023]
41
Abstract
Perception of external objects involves sensory acquisition via the relevant sensory organs. A widely-accepted assumption is that the sensory organ is the first station in a serial chain of processing circuits leading to an internal circuit in which a percept emerges. This open-loop scheme, in which the interaction between the sensory organ and the environment is not affected by its concurrent downstream neuronal processing, is strongly challenged by behavioral and anatomical data. We present here a hypothesis in which the perception of external objects is a closed-loop dynamical process encompassing loops that integrate the organism and its environment and converging towards organism-environment steady-states. We discuss the consistency of closed-loop perception (CLP) with empirical data and show that it can be synthesized in a robotic setup. Testable predictions are proposed for empirical distinction between open and closed loop schemes of perception.
Affiliation(s)
- Ehud Ahissar
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- Eldad Assa
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
42
Eye movements between saccades: Measuring ocular drift and tremor. Vision Res 2016; 122:93-104. [PMID: 27068415 DOI: 10.1016/j.visres.2016.03.006] [Citation(s) in RCA: 44] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2015] [Revised: 03/17/2016] [Accepted: 03/17/2016] [Indexed: 11/23/2022]
Abstract
Intersaccadic periods of fixation are characterized by incessant retinal motion due to small eye movements. While these movements are often disregarded as noise, the temporal modulations they introduce to retinal receptors are significant. However, analysis of these input modulations is challenging because the intersaccadic eye motion is close to the resolution limits of most eyetrackers, including widespread pupil-based video systems. Here, we analyzed in depth the limits of two high-precision eyetrackers, the Dual-Purkinje Image and the scleral search coil, and compared the intersaccadic eye movements of humans to those of a non-human primate. By means of a model eye we determined that the resolution of both techniques is sufficient to reliably measure intersaccadic ocular activity up to approximately 80 Hz. Our results show that the characteristics of ocular drift are remarkably similar in the two species; a clear deviation from a scale-invariant spectrum occurs in the range between 50 and 100 Hz, generally attributed to ocular tremor, leading to intersaccadic retinal speeds as high as 1.5 deg/s. The amplitude of this deviation differs on the two axes of motion. In addition to our experimental observations, we suggest basic guidelines to evaluate the performance of eyetrackers and to optimize experimental conditions for the measurement of ocular drift and tremor.
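The spectral distinction drawn here between slow drift and higher-frequency tremor can be sketched on synthetic data. All amplitudes and rates below are hypothetical, chosen only to mimic a drift-dominated low-frequency spectrum with a tremor-like deviation in the 50-100 Hz band:

```python
import numpy as np

fs = 1000.0                      # Hz; illustrative eyetracker sampling rate
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(2)

# Synthetic intersaccadic trace (deg): Brownian-like slow drift plus a small
# oscillation near 80 Hz standing in for tremor. Amplitudes are hypothetical.
drift = np.cumsum(rng.standard_normal(t.size)) * 2e-4
tremor = 5e-3 * np.sin(2 * np.pi * 80.0 * t)
pos = drift + tremor

# Instantaneous retinal speed from finite differences (deg/s).
speed = np.abs(np.diff(pos)) * fs

# Power spectrum: drift dominates at low frequencies, while the tremor-like
# component appears as a distinct peak well above the drift background.
spec = np.abs(np.fft.rfft(pos - pos.mean())) ** 2
freqs = np.fft.rfftfreq(pos.size, 1 / fs)
band = (freqs > 30) & (freqs < 150)
peak_hz = freqs[band][np.argmax(spec[band])]
print(peak_hz)  # recovers the injected tremor frequency
```

In real recordings the interesting question is exactly the one the paper addresses: whether such a band-limited deviation reflects the eye or the noise floor of the eyetracker.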
43
Hobbs JA, Towal RB, Hartmann MJZ. Spatiotemporal Patterns of Contact Across the Rat Vibrissal Array During Exploratory Behavior. Front Behav Neurosci 2016; 9:356. [PMID: 26778990 PMCID: PMC4700281 DOI: 10.3389/fnbeh.2015.00356] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2015] [Accepted: 12/08/2015] [Indexed: 11/13/2022] Open
Abstract
The rat vibrissal system is an important model for the study of somatosensation, but the small size and rapid speed of the vibrissae have precluded measuring precise vibrissal-object contact sequences during behavior. We used a laser light sheet to quantify, with 1 ms resolution, the spatiotemporal structure of whisker-surface contact as five naïve rats freely explored a flat, vertical glass wall. Consistent with previous work, we show that the whisk cycle cannot be uniquely defined because different whiskers often move asynchronously, but that quasi-periodic (~8 Hz) variations in head velocity represent a distinct temporal feature on which to lock analysis. Around times of minimum head velocity, whiskers protract to make contact with the surface, and then sustain contact with the surface for extended durations (~25-60 ms) before detaching. This behavior results in discrete temporal windows in which large numbers of whiskers are in contact with the surface. These "sustained collective contact intervals" (SCCIs) were observed on 100% of whisks for all five rats. The overall spatiotemporal structure of the SCCIs can be qualitatively predicted based on information about head pose and the average whisk cycle. In contrast, precise sequences of whisker-surface contact depend on detailed head and whisker kinematics. Sequences of vibrissal contact were highly variable, equally likely to propagate in all directions across the array. Somewhat more structure was found when sequences of contacts were examined on a row-wise basis. In striking contrast to the high variability associated with contact sequences, a consistent feature of each SCCI was that the contact locations of the whiskers on the glass converged and moved more slowly on the sheet. Together, these findings lead us to propose that the rat uses a strategy of "windowed sampling" to extract an object's spatial features: specifically, the rat spatially integrates quasi-static mechanical signals across whiskers during the period of sustained contact, resembling an "enclosing" haptic procedure.
Affiliation(s)
- Jennifer A Hobbs
- Department of Physics and Astronomy, Northwestern University, Evanston, IL, USA
- R Blythe Towal
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA
- Mitra J Z Hartmann
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA; Department of Mechanical Engineering, Northwestern University, Evanston, IL, USA
|
44
|
Affiliation(s)
- Michele Rucci
- Department of Psychological & Brain Sciences, Boston University, Boston, MA, USA
- Paul V McGraw
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
|
45
|
Ahissar E, Ozana S, Arieli A. 1-D Vision: Encoding of Eye Movements by Simple Receptive Fields. Perception 2015; 44:986-94. [PMID: 26562913 DOI: 10.1177/0301006615594946] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Eye movements (eyeM) are an essential component of visual perception. They allow the sampling and scanning of stationary scenes at various spatial scales, primarily at the scene level, via saccades, and at the local level, via fixational eyeM. Given the constant motion of visual images on the retina, a crucial factor in resolving spatial ambiguities related to the external scene is the exact trajectory of eyeM. We show here that the trajectory of eyeM can be encoded at high resolution by simple retinal receptive fields of the symmetrical type. We also show that such encoding can account for motion illusions such as the Ouchi illusion. In addition, encoding of motion projections along horizontal and vertical symmetrical simple retinal receptive fields entails a kind of Cartesian decomposition of the 2-D image into two 1-D projections.
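The Cartesian decomposition mentioned at the end of the abstract above — reducing a 2-D image to two 1-D projections along the horizontal and vertical axes — can be illustrated with a toy sketch. This is not the authors' receptive-field model, only a minimal illustration of the projection idea, with a made-up binary image:

```python
# Toy illustration (not the authors' model): decomposing a 2-D binary image
# into two 1-D projections, one per axis, as in a Cartesian decomposition.

def projections(image):
    """Return (row_sums, col_sums): the two 1-D projections of a 2-D grid."""
    rows = [sum(r) for r in image]          # project onto the vertical axis
    cols = [sum(c) for c in zip(*image)]    # project onto the horizontal axis
    return rows, cols

# A plus-shaped test pattern.
img = [
    [0, 1, 0],
    [1, 1, 1],
    [0, 1, 0],
]
rows, cols = projections(img)
print(rows)  # [1, 3, 1]
print(cols)  # [1, 3, 1]
```

Each 1-D projection discards position along the other axis, which is why motion of the image (here, of the eye) is needed to recover the full 2-D structure from such signals.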
Affiliation(s)
- Ehud Ahissar
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- Shira Ozana
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- Amos Arieli
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
|
46
|
Abstract
Humans and other species explore a visual scene by rapidly shifting their gaze 2-3 times every second. Although the eyes may appear immobile in the brief intervals in between saccades, microscopic (fixational) eye movements are always present, even when attending to a single point. These movements occur during the very periods in which visual information is acquired and processed and their functions have long been debated. Recent technical advances in controlling retinal stimulation during normal oculomotor activity have shed new light on the visual contributions of fixational eye movements and their degree of control. The emerging body of evidence, reviewed in this article, indicates that fixational eye movements are important components of the strategy by which the visual system processes fine spatial details, enabling both precise positioning of the stimulus on the retina and encoding of spatial information into the joint space-time domain.
Affiliation(s)
- Michele Rucci
- Department of Psychological & Brain Sciences, Boston University, Boston, MA 02215; Graduate Program in Neuroscience, Boston University, Boston, MA 02215
- Martina Poletti
- Department of Psychological & Brain Sciences, Boston University, Boston, MA 02215
|
47
|
Motion makes sense: an adaptive motor-sensory strategy underlies the perception of object location in rats. J Neurosci 2015; 35:8777-89. [PMID: 26063912 DOI: 10.1523/jneurosci.4149-14.2015] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Tactile perception is obtained by coordinated motor-sensory processes. We studied the processes underlying the perception of object location in freely moving rats. We trained rats to identify the relative location of two vertical poles placed in front of them and measured at high resolution the motor and sensory variables (19 and 2 variables, respectively) associated with this whiskers-based perceptual process. We found that the rats developed stereotypic head and whisker movements to solve this task, in a manner that can be described by several distinct behavioral phases. During two of these phases, the rats' whiskers coded object position by first temporal and then angular coding schemes. We then introduced wind (in two opposite directions) and remeasured their perceptual performance and motor-sensory variables. Our rats continued to perceive object location in a consistent manner under wind perturbations while maintaining all behavioral phases and relatively constant sensory coding. Constant sensory coding was achieved by keeping one group of motor variables (the "controlled variables") constant, despite the perturbing wind, at the cost of strongly modulating another group of motor variables (the "modulated variables"). The controlled variables included coding-relevant variables, such as head azimuth and whisker velocity. These results indicate that consistent perception of location in the rat is obtained actively, via a selective control of perception-relevant motor variables.
|
48
|
Nortmann N, Rekauzke S, Onat S, König P, Jancke D. Primary visual cortex represents the difference between past and present. Cereb Cortex 2015; 25:1427-40. [PMID: 24343889 PMCID: PMC4428292 DOI: 10.1093/cercor/bht318] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023] Open
Abstract
The visual system is confronted with rapidly changing stimuli in everyday life. It is not well understood how information in such a stream of input is updated within the brain. We performed voltage-sensitive dye imaging across the primary visual cortex (V1) to capture responses to sequences of natural scene contours. We presented vertically and horizontally filtered natural images, and their superpositions, at 10 or 33 Hz. At the low frequency, the encoding was found to represent not the currently presented images, but differences in orientation between consecutive images. This was in sharp contrast to more rapid sequences, for which we found an ongoing representation of the current input, consistent with earlier studies. Our finding that, for slower image sequences, V1 no longer reports actual features but instead represents their relative difference over time counteracts the view that the first cortical processing stage must always transfer complete information. Instead, we show its capacity for change detection, with a new emphasis on the role of automatic computations evolving in the 100-ms range that inevitably affect information transmission further downstream.
Affiliation(s)
- Nora Nortmann
- Optical Imaging Group, Institut für Neuroinformatik, Ruhr-University Bochum, 44780 Bochum, Germany
- Bernstein Group for Computational Neuroscience, Ruhr-University Bochum, 44780 Bochum, Germany
- Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück, Germany
- Sascha Rekauzke
- Optical Imaging Group, Institut für Neuroinformatik, Ruhr-University Bochum, 44780 Bochum, Germany
- Bernstein Group for Computational Neuroscience, Ruhr-University Bochum, 44780 Bochum, Germany
- Selim Onat
- Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück, Germany
- Peter König
- Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück, Germany
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Dirk Jancke
- Optical Imaging Group, Institut für Neuroinformatik, Ruhr-University Bochum, 44780 Bochum, Germany
- Bernstein Group for Computational Neuroscience, Ruhr-University Bochum, 44780 Bochum, Germany
|
49
|
Geva-Sagiv M, Las L, Yovel Y, Ulanovsky N. Spatial cognition in bats and rats: from sensory acquisition to multiscale maps and navigation. Nat Rev Neurosci 2015; 16:94-108. [PMID: 25601780 DOI: 10.1038/nrn3888] [Citation(s) in RCA: 134] [Impact Index Per Article: 14.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
Abstract
Spatial orientation and navigation rely on the acquisition of several types of sensory information. This information is then transformed into a neural code for space in the hippocampal formation through the activity of place cells, grid cells and head-direction cells. These spatial representations, in turn, are thought to guide long-range navigation. But how the representations encoded by these different cell types are integrated in the brain to form a neural 'map and compass' is largely unknown. Here, we discuss this problem in the context of spatial navigation by bats and rats. We review the experimental findings and theoretical models that provide insight into the mechanisms that link sensory systems to spatial representations and to large-scale natural navigation.
Affiliation(s)
- Maya Geva-Sagiv
- Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel
- Edmond and Lily Safra Center for Brain Research, Hebrew University, Jerusalem 91904, Israel
- Liora Las
- Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel
- Yossi Yovel
- Department of Zoology and Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 69978, Israel
- Nachum Ulanovsky
- Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel
|
50
|
Rucci M, Victor JD. The unsteady eye: an information-processing stage, not a bug. Trends Neurosci 2015; 38:195-206. [PMID: 25698649 PMCID: PMC4385455 DOI: 10.1016/j.tins.2015.01.005] [Citation(s) in RCA: 131] [Impact Index Per Article: 14.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2014] [Revised: 01/20/2015] [Accepted: 01/22/2015] [Indexed: 11/25/2022]
Abstract
How is space represented in the visual system? At first glance, the answer to this fundamental question appears straightforward: spatial information is directly encoded in the locations of neurons within maps. This concept has long dominated visual neuroscience, leading to mainstream theories of how neurons encode information. However, an accumulation of evidence indicates that this purely spatial view is incomplete and that, even for static images, the representation is fundamentally spatiotemporal. The evidence for this new understanding centers on recent experimental findings concerning the functional role of fixational eye movements, the tiny movements humans and other species continually perform, even when attending to a single point. We review some of these findings and discuss their functional implications.
Affiliation(s)
- Michele Rucci
- Department of Psychological and Brain Sciences, Boston University, Boston, MA 02215, USA; Graduate Program in Neuroscience, Boston University, Boston, MA 02215, USA
- Jonathan D Victor
- Brain and Mind Research Institute, Weill Cornell Medical College, New York, NY 10065, USA
|