1
Mao J, Rothkopf CA, Stocker AA. Adaptation optimizes sensory encoding for future stimuli. PLoS Comput Biol 2025; 21:e1012746. PMID: 39823517; PMCID: PMC11771873; DOI: 10.1371/journal.pcbi.1012746.
Abstract
Sensory neurons continually adapt their response characteristics according to recent stimulus history. However, it is unclear how such a reactive process can benefit the organism. Here, we test the hypothesis that adaptation actually acts proactively in the sense that it optimally adjusts sensory encoding for future stimuli. We first quantified human subjects' ability to discriminate visual orientation under different adaptation conditions. Using an information theoretic analysis, we found that adaptation leads to a reallocation of coding resources such that encoding accuracy peaks at the mean orientation of the adaptor while total coding capacity remains constant. We then asked whether this characteristic change in encoding accuracy is predicted by the temporal statistics of natural visual input. Analyzing the retinal input of freely behaving human subjects showed that the distribution of local visual orientations in the retinal input stream indeed peaks at the mean orientation of the preceding input history (i.e., the adaptor). We further tested our hypothesis by analyzing the internal sensory representations of a recurrent neural network trained to predict the next frame of natural scene videos (PredNet). Simulating our human adaptation experiment with PredNet, we found that the network exhibited the same change in encoding accuracy as observed in human subjects. Taken together, our results suggest that adaptation-induced changes in encoding accuracy prepare the visual system for future stimuli.
Affiliation(s)
- Jiang Mao
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Alan A Stocker
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
2
Greene MR, Balas BJ, Lescroart MD, MacNeilage PR, Hart JA, Binaee K, Hausamann PA, Mezile R, Shankar B, Sinnott CB, Capurro K, Halow S, Howe H, Josyula M, Li A, Mieses A, Mohamed A, Nudnou I, Parkhill E, Riley P, Schmidt B, Shinkle MW, Si W, Szekely B, Torres JM, Weissmann E. The visual experience dataset: Over 200 recorded hours of integrated eye movement, odometry, and egocentric video. J Vis 2024; 24:6. PMID: 39377740; PMCID: PMC11466363; DOI: 10.1167/jov.24.11.6.
Abstract
We introduce the Visual Experience Dataset (VEDB), a compilation of more than 240 hours of egocentric video combined with gaze- and head-tracking data that offer an unprecedented view of the visual world as experienced by human observers. The dataset consists of 717 sessions, recorded by 56 observers ranging from 7 to 46 years of age. This article outlines the data collection, processing, and labeling protocols undertaken to ensure a representative sample and discusses the potential sources of error or bias within the dataset. The VEDB's potential applications are vast, including improving gaze-tracking methodologies, assessing spatiotemporal image statistics, and refining deep neural networks for scene and activity recognition. The VEDB is accessible through established open science platforms and is intended to be a living dataset with plans for expansion and community contributions. It is released with an emphasis on ethical considerations, such as participant privacy and the mitigation of potential biases. By providing a dataset grounded in real-world experiences and accompanied by extensive metadata and supporting code, the authors invite the research community to use and contribute to the VEDB, facilitating a richer understanding of visual perception and behavior in naturalistic settings.
Affiliation(s)
- Michelle R Greene
- Barnard College, Columbia University, New York, NY, USA
- Bates College, Lewiston, ME, USA
- Kamran Binaee
- University of Nevada, Reno, NV, USA
- Magic Leap, Plantation, FL, USA
- Bharath Shankar
- University of Nevada, Reno, NV, USA
- Unmanned Ground Systems, Chelmsford, MA, USA
- Christian B Sinnott
- University of Nevada, Reno, NV, USA
- Smith-Kettlewell Eye Research Institute, San Francisco, CA, USA
- Annie Li
- Bates College, Lewiston, ME, USA
- Ilya Nudnou
- North Dakota State University, Fargo, ND, USA
3
Walper D, Bendixen A, Grimm S, Schubö A, Einhäuser W. Attention deployment in natural scenes: Higher-order scene statistics rather than semantics modulate the N2pc component. J Vis 2024; 24:7. PMID: 38848099; PMCID: PMC11166226; DOI: 10.1167/jov.24.6.7.
Abstract
Which properties of a natural scene affect visual search? We consider the alternative hypotheses that low-level statistics, higher-level statistics, semantics, or layout affect search difficulty in natural scenes. Across three experiments (n = 20 each), we used four different backgrounds that preserve distinct scene properties: (a) natural scenes (all experiments); (b) 1/f noise (pink noise, which preserves only low-level statistics and was used in Experiments 1 and 2); (c) textures that preserve low-level and higher-level statistics but not semantics or layout (Experiments 2 and 3); and (d) inverted (upside-down) scenes that preserve statistics and semantics but not layout (Experiment 2). We included "split scenes" that contained different backgrounds left and right of the midline (Experiment 1, natural/noise; Experiment 3, natural/texture). Participants searched for a Gabor patch that occurred at one of six locations (all experiments). Reaction times were faster for targets on noise and slower on inverted images, compared to natural scenes and textures. The N2pc component of the event-related potential, a marker of attentional selection, had a shorter latency and a higher amplitude for targets in noise than for all other backgrounds. The background contralateral to the target had an effect similar to that on the target side: noise led to faster reactions and shorter N2pc latencies than natural scenes, although we observed no difference in N2pc amplitude. There were no interactions between the target side and the non-target side. Together, this shows that, at least when searching for simple targets without semantic content of their own, natural scenes are more effective distractors than noise, and that this effect results from higher-order statistics rather than from semantics or layout.
Affiliation(s)
- Daniel Walper
- Physics of Cognition Group, Chemnitz University of Technology, Chemnitz, Germany
- Alexandra Bendixen
- Cognitive Systems Lab, Chemnitz University of Technology, Chemnitz, Germany
- https://www.tu-chemnitz.de/physik/SFKS/index.html.en
- Sabine Grimm
- Physics of Cognition Group, Chemnitz University of Technology, Chemnitz, Germany
- Cognitive Systems Lab, Chemnitz University of Technology, Chemnitz, Germany
- Anna Schubö
- Cognitive Neuroscience of Perception & Action, Philipps University Marburg, Marburg, Germany
- https://www.uni-marburg.de/en/fb04/team-schuboe
- Wolfgang Einhäuser
- Physics of Cognition Group, Chemnitz University of Technology, Chemnitz, Germany
- https://www.tu-chemnitz.de/physik/PHKP/index.html.en
4
Scalabrino ML, Thapa M, Wang T, Sampath AP, Chen J, Field GD. Late gene therapy limits the restoration of retinal function in a mouse model of retinitis pigmentosa. Nat Commun 2023; 14:8256. PMID: 38086857; PMCID: PMC10716155; DOI: 10.1038/s41467-023-44063-8.
Abstract
Retinitis pigmentosa is an inherited photoreceptor degeneration that begins with rod loss followed by cone loss. This cell loss greatly diminishes vision, with most patients becoming legally blind. Gene therapies are being developed, but it is unknown how retinal function depends on the time of intervention. To uncover this dependence, we utilize a mouse model of retinitis pigmentosa capable of artificial genetic rescue. This model enables a benchmark of best-case gene therapy by removing variables that complicate answering this question. Complete genetic rescue was performed at 25%, 50%, and 70% rod loss (early, mid and late, respectively). Early and mid treatment restore retinal output to near wild-type levels. Late treatment retinas exhibit continued, albeit slowed, loss of sensitivity and signal fidelity among retinal ganglion cells, as well as persistent gliosis. We conclude that gene replacement therapies delivered after 50% rod loss are unlikely to restore visual function to normal. This is critical information for administering gene therapies to rescue vision.
Affiliation(s)
- Miranda L Scalabrino
- Stein Eye Institute, Department of Ophthalmology, University of California, Los Angeles, CA, USA
- Department of Neurobiology, Duke University School of Medicine, Durham, NC, USA
- Mishek Thapa
- Stein Eye Institute, Department of Ophthalmology, University of California, Los Angeles, CA, USA
- Department of Neurobiology, Duke University School of Medicine, Durham, NC, USA
- Tian Wang
- Zilkha Neurogenetic Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Alapakkam P Sampath
- Stein Eye Institute, Department of Ophthalmology, University of California, Los Angeles, CA, USA
- Jeannie Chen
- Zilkha Neurogenetic Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Greg D Field
- Stein Eye Institute, Department of Ophthalmology, University of California, Los Angeles, CA, USA
- Department of Neurobiology, Duke University School of Medicine, Durham, NC, USA
5
Takahashi M, Veale R. Pathways for Naturalistic Looking Behavior in Primate I: Behavioral Characteristics and Brainstem Circuits. Neuroscience 2023; 532:133-163. PMID: 37776945; DOI: 10.1016/j.neuroscience.2023.09.009.
Abstract
Organisms control their visual worlds by moving their eyes, heads, and bodies. This control of "gaze" or "looking" is key to survival and intelligence, but our investigation of the underlying neural mechanisms in natural conditions is hindered by technical limitations. Recent advances have enabled measurement of both brain and behavior in freely moving animals in complex environments, expanding on historical head-fixed laboratory investigations. We juxtapose looking behavior as traditionally measured in the laboratory against looking behavior in naturalistic conditions, finding that behavior changes when animals are free to move or when stimuli have depth or sound. We specifically focus on the brainstem circuits driving gaze shifts and gaze stabilization. The overarching goal of this review is to reconcile historical understanding of the differential neural circuits for different "classes" of gaze shift with two inconvenient truths: (1) "classes" of gaze behavior are artificial, and (2) the neural circuits historically identified to control each "class" of behavior do not operate in isolation during natural behavior. Instead, multiple pathways combine adaptively and non-linearly depending on individual experience. While the neural circuits for reflexive and voluntary gaze behaviors traverse somewhat independent brainstem and spinal cord circuits, both can be modulated by feedback, meaning that most gaze behaviors are learned rather than hardcoded. Despite this flexibility, there are broadly enumerable neural pathways commonly adopted among primate gaze systems. Parallel pathways which carry simultaneous evolutionary and homeostatic drives converge in the superior colliculus, a layered midbrain structure which integrates and relays these volitional signals to brainstem gaze-control circuits.
Affiliation(s)
- Mayu Takahashi
- Department of Systems Neurophysiology, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, Japan
- Richard Veale
- Department of Neurobiology, Graduate School of Medicine, Kyoto University, Japan
6
Khazali MF, Daddaoua N, Thier P. Nonhuman primates exploit the prior assumption that the visual world is vertical. J Neurophysiol 2023; 130:1252-1264. PMID: 37823212; DOI: 10.1152/jn.00514.2022.
Abstract
When human subjects tilt their heads in dark surroundings, the noisiness of vestibular information impedes precise reports on objects' orientation with respect to Earth's vertical axis. This difficulty is mitigated if a vertical visual background is available. Tilted visual backgrounds induce feelings of head tilt in subjects who are in fact upright. This is often explained as a result of the brain resorting to the prior assumption that natural visual backgrounds are vertical. Here, we tested whether monkeys show comparable perceptual mechanisms. To this end, we trained two monkeys to align a visual arrow to a vertical reference line that had variable luminance across trials, while including a large, clearly visible background square whose orientation changed from trial to trial. On ∼20% of all trials, the vertical reference line was left out to measure the subjective visual vertical (SVV). When the frame was upright, the monkeys' SVV was aligned with the gravitational vertical. In accordance with the perceptual reports of humans, however, when the frame was tilted it induced an illusion of head tilt, as indicated by a bias in SVV toward the frame orientation. Thus all primates exploit the prior assumption that the visual world is vertical. NEW & NOTEWORTHY: Here we show that the principles that characterize the human perception of the vertical are shared by another Old World primate species, the rhesus monkey, suggesting phylogenetic continuity. In both species the integration of visual and vestibular information on the orientation of the head relative to the world is similarly constrained by the prior assumption that the visual world is vertical, in the sense of having an orientation that is congruent with the gravity vector.
Affiliation(s)
- Mohammad Farhan Khazali
- Epilepsy Center, Medical Center, University of Freiburg, Freiburg, Germany
- Center for Neural Science, New York University, New York, United States
- Nabil Daddaoua
- National Institute on Drug Abuse (NIDA) Intramural Research Program, Baltimore, Maryland, United States
- Peter Thier
- Hertie-Institute for Clinical Brain Research, Cognitive Neurology Laboratory, University of Tübingen, Tübingen, Germany
7
Muller KS, Matthis J, Bonnen K, Cormack LK, Huk AC, Hayhoe M. Retinal motion statistics during natural locomotion. eLife 2023; 12:e82410. PMID: 37133442; PMCID: PMC10156169; DOI: 10.7554/elife.82410.
Abstract
Walking through an environment generates retinal motion, which humans rely on to perform a variety of visual tasks. Retinal motion patterns are determined by an interconnected set of factors, including gaze location, gaze stabilization, the structure of the environment, and the walker's goals. The characteristics of these motion signals have important consequences for neural organization and behavior. However, to date, there are no empirical in situ measurements of how combined eye and body movements interact with real 3D environments to shape the statistics of retinal motion signals. Here, we collect measurements of the eyes, the body, and the 3D environment during locomotion. We describe properties of the resulting retinal motion patterns. We explain how these patterns are shaped by gaze location in the world, as well as by behavior, and how they may provide a template for the way motion sensitivity and receptive field properties vary across the visual field.
Affiliation(s)
- Karl S Muller
- Center for Perceptual Systems, The University of Texas at Austin, Austin, United States
- Jonathan Matthis
- Department of Biology, Northeastern University, Boston, United States
- Kathryn Bonnen
- School of Optometry, Indiana University, Bloomington, United States
- Lawrence K Cormack
- Center for Perceptual Systems, The University of Texas at Austin, Austin, United States
- Alex C Huk
- Center for Perceptual Systems, The University of Texas at Austin, Austin, United States
- Mary Hayhoe
- Center for Perceptual Systems, The University of Texas at Austin, Austin, United States
8
Ellis EM, Paniagua AE, Scalabrino ML, Thapa M, Rathinavelu J, Jiao Y, Williams DS, Field GD, Fain GL, Sampath AP. Cones and cone pathways remain functional in advanced retinal degeneration. Curr Biol 2023; 33:1513-1522.e4. PMID: 36977418; PMCID: PMC10133175; DOI: 10.1016/j.cub.2023.03.007.
Abstract
Most defects causing retinal degeneration in retinitis pigmentosa (RP) are rod-specific mutations, but the subsequent degeneration of cones, which produces loss of daylight vision and high-acuity perception, is the most debilitating feature of the disease. To understand better why cones degenerate and how cone vision might be restored, we have made the first single-cell recordings of light responses from degenerating cones and retinal interneurons after most rods have died and cones have lost their outer-segment disk membranes and synaptic pedicles. We show that degenerating cones have functional cyclic-nucleotide-gated channels and can continue to give light responses, apparently produced by opsin localized either to small areas of organized membrane near the ciliary axoneme or distributed throughout the inner segment. Light responses of second-order horizontal and bipolar cells are less sensitive but otherwise resemble those of normal retina. Furthermore, retinal output as reflected in responses of ganglion cells is less sensitive but maintains spatiotemporal receptive fields at cone-mediated light levels. Together, these findings show that cones and their retinal pathways can remain functional even as degeneration is progressing, an encouraging result for future research aimed at enhancing the light sensitivity of residual cones to restore vision in patients with genetically inherited retinal degeneration.
Affiliation(s)
- Erika M Ellis
- Department of Ophthalmology and Jules Stein Eye Institute, University of California, Los Angeles, Los Angeles, CA 90095-7000, USA
- Antonio E Paniagua
- Department of Ophthalmology and Jules Stein Eye Institute, University of California, Los Angeles, Los Angeles, CA 90095-7000, USA
- Miranda L Scalabrino
- Department of Ophthalmology and Jules Stein Eye Institute, University of California, Los Angeles, Los Angeles, CA 90095-7000, USA
- Department of Neurobiology, Duke University School of Medicine, Durham, NC 27710, USA
- Mishek Thapa
- Department of Ophthalmology and Jules Stein Eye Institute, University of California, Los Angeles, Los Angeles, CA 90095-7000, USA
- Department of Neurobiology, Duke University School of Medicine, Durham, NC 27710, USA
- Jay Rathinavelu
- Department of Neurobiology, Duke University School of Medicine, Durham, NC 27710, USA
- Yuekan Jiao
- Department of Ophthalmology and Jules Stein Eye Institute, University of California, Los Angeles, Los Angeles, CA 90095-7000, USA
- David S Williams
- Department of Ophthalmology and Jules Stein Eye Institute, University of California, Los Angeles, Los Angeles, CA 90095-7000, USA
- Greg D Field
- Department of Ophthalmology and Jules Stein Eye Institute, University of California, Los Angeles, Los Angeles, CA 90095-7000, USA
- Department of Neurobiology, Duke University School of Medicine, Durham, NC 27710, USA
- Gordon L Fain
- Department of Ophthalmology and Jules Stein Eye Institute, University of California, Los Angeles, Los Angeles, CA 90095-7000, USA
- Alapakkam P Sampath
- Department of Ophthalmology and Jules Stein Eye Institute, University of California, Los Angeles, Los Angeles, CA 90095-7000, USA
9
Scalabrino ML, Thapa M, Wang T, Sampath AP, Chen J, Field GD. Late gene therapy limits the restoration of retinal function in a mouse model of retinitis pigmentosa. bioRxiv [Preprint] 2023: 2023.04.07.536035. PMID: 37066264; PMCID: PMC10104154; DOI: 10.1101/2023.04.07.536035.
Abstract
Retinitis pigmentosa is an inherited photoreceptor degeneration that begins with rod loss followed by cone loss and eventual blindness. Gene therapies are being developed, but it is unknown how retinal function depends on the time of intervention. To uncover this dependence, we utilized a mouse model of retinitis pigmentosa capable of artificial genetic rescue. This model enables a benchmark of best-case gene therapy by removing the variables that complicate the ability to answer this vital question. Complete genetic rescue was performed at 25%, 50%, and 70% rod loss (early, mid and late, respectively). Early and mid treatment restored retinal function to near wild-type levels, specifically the sensitivity and signal fidelity of retinal ganglion cells (RGCs), the 'output' neurons of the retina. However, some anatomical defects persisted. Late treatment retinas exhibited continued, albeit slowed, loss of sensitivity and signal fidelity among RGCs, as well as persistent gliosis. We conclude that gene replacement therapies delivered after 50% rod loss are unlikely to restore visual function to normal. This is critical information for administering gene therapies to rescue vision.
Affiliation(s)
- Miranda L Scalabrino
- Stein Eye Institute, Department of Ophthalmology, University of California, Los Angeles, CA
- Department of Neurobiology, Duke University School of Medicine, Durham, NC
- Mishek Thapa
- Stein Eye Institute, Department of Ophthalmology, University of California, Los Angeles, CA
- Department of Neurobiology, Duke University School of Medicine, Durham, NC
- Tian Wang
- Zilkha Neurogenetic Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA
- Alapakkam P Sampath
- Stein Eye Institute, Department of Ophthalmology, University of California, Los Angeles, CA
- Jeannie Chen
- Zilkha Neurogenetic Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA
- Greg D Field
- Stein Eye Institute, Department of Ophthalmology, University of California, Los Angeles, CA
- Department of Neurobiology, Duke University School of Medicine, Durham, NC
10
Gaynes JA, Budoff SA, Grybko MJ, Hunt JB, Poleg-Polsky A. Classical center-surround receptive fields facilitate novel object detection in retinal bipolar cells. Nat Commun 2022; 13:5575. PMID: 36163249; PMCID: PMC9512824; DOI: 10.1038/s41467-022-32761-8.
Abstract
Antagonistic interactions between center and surround receptive field (RF) components lie at the heart of the computations performed in the visual system. Circularly symmetric center-surround RFs are thought to enhance responses to spatial contrasts (i.e., edges), but how visual edges affect motion processing is unclear. Here, we addressed this question in retinal bipolar cells, the first visual neuron with classic center-surround interactions. We found that bipolar glutamate release emphasizes objects that emerge in the RF; their responses to continuous motion are smaller, slower, and cannot be predicted by signals elicited by stationary stimuli. In our hands, the alteration in signal dynamics induced by novel objects was more pronounced than edge enhancement and could be explained by priming of RF surround during continuous motion. These findings echo the salience of human visual perception and demonstrate an unappreciated capacity of the center-surround architecture to facilitate novel object detection and dynamic signal representation.
Affiliation(s)
- John A Gaynes
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO, USA
- Samuel A Budoff
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO, USA
- Michael J Grybko
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO, USA
- Joshua B Hunt
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO, USA
- Alon Poleg-Polsky
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO, USA
11
Scalabrino ML, Thapa M, Chew LA, Zhang E, Xu J, Sampath AP, Chen J, Field GD. Robust cone-mediated signaling persists late into rod photoreceptor degeneration. eLife 2022; 11:e80271. PMID: 36040015; PMCID: PMC9560159; DOI: 10.7554/elife.80271.
Abstract
Rod photoreceptor degeneration causes deterioration in the morphology and physiology of cone photoreceptors along with changes in retinal circuits. These changes could diminish visual signaling at cone-mediated light levels, thereby limiting the efficacy of treatments such as gene therapy for rescuing normal, cone-mediated vision. However, the impact of progressive rod death on cone-mediated signaling remains unclear. To investigate the fidelity of retinal ganglion cell (RGC) signaling throughout disease progression, we used a mouse model of rod degeneration (Cngb1neo/neo). Despite clear deterioration of cone morphology with rod death, cone-mediated signaling among RGCs remained surprisingly robust: spatiotemporal receptive fields changed little and the mutual information between stimuli and spiking responses was relatively constant. This relative stability held until nearly all rods had died and cones had completely lost well-formed outer segments. Interestingly, RGC information rates were higher and more stable for natural movies than checkerboard noise as degeneration progressed. The main change in RGC responses with photoreceptor degeneration was a decrease in response gain. These results suggest that gene therapies for rod degenerative diseases are likely to prolong cone-mediated vision even if there are changes to cone morphology and density.
Affiliation(s)
- Miranda L Scalabrino
- Department of Neurobiology, Duke University School of Medicine, Durham, United States
- Mishek Thapa
- Department of Neurobiology, Duke University School of Medicine, Durham, United States
- Lindsey A Chew
- Department of Neurobiology, Duke University School of Medicine, Durham, United States
- Esther Zhang
- Department of Neurobiology, Duke University School of Medicine, Durham, United States
- Jason Xu
- Department of Statistical Science, Duke University, Durham, United States
- Alapakkam P Sampath
- Jules Stein Eye Institute, University of California, Los Angeles, Los Angeles, United States
- Jeannie Chen
- Zilkha Neurogenetics Institute, Keck School of Medicine, University of Southern California, Los Angeles, United States
- Greg D Field
- Department of Neurobiology, Duke University School of Medicine, Durham, United States
12
Abstract
An ultimate goal in retina science is to understand how the neural circuit of the retina processes natural visual scenes. Yet most studies in laboratories have long been performed with simple, artificial visual stimuli such as full-field illumination, spots of light, or gratings. The underlying assumption is that the features of the retina thus identified carry over to the more complex scenario of natural scenes. As the application of corresponding natural settings is becoming more commonplace in experimental investigations, this assumption is being put to the test and opportunities arise to discover processing features that are triggered by specific aspects of natural scenes. Here, we review how natural stimuli have been used to probe, refine, and complement knowledge accumulated under simplified stimuli, and we discuss challenges and opportunities along the way toward a comprehensive understanding of the encoding of natural scenes. Expected final online publication date for the Annual Review of Vision Science, Volume 8 is September 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Affiliation(s)
- Dimokratis Karamanlis
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- International Max Planck Research School for Neurosciences, Göttingen, Germany
- Helene Marianne Schreyer
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Tim Gollisch
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany
13
Sedigh-Sarvestani M, Fitzpatrick D. What and Where: Location-Dependent Feature Sensitivity as a Canonical Organizing Principle of the Visual System. Front Neural Circuits 2022; 16:834876. PMID: 35498372; PMCID: PMC9039279; DOI: 10.3389/fncir.2022.834876.
Abstract
Traditionally, functional representations in early visual areas are conceived as retinotopic maps that preserve egocentric spatial location information while ensuring that other stimulus features are uniformly represented for all locations in space. Recent results challenge this framework of relatively independent encoding of location and features in the early visual system, emphasizing location-dependent feature sensitivities that reflect specialization of cortical circuits for different locations in visual space. Here we review the evidence for such location-specific encoding, including: (1) systematic variation of functional properties within conventional retinotopic maps in the cortex; (2) novel periodic retinotopic transforms that dramatically illustrate the tight linkage of feature sensitivity, spatial location, and cortical circuitry; and (3) retinotopic biases in cortical areas, and groups of areas, that have been defined by their functional specializations. We propose that location-dependent feature sensitivity is a fundamental organizing principle of the visual system that achieves efficient representation of positional regularities in visual experience and reflects the evolutionary selection of sensory and motor circuits to optimally represent behaviorally relevant information. Future studies are necessary to discover the mechanisms underlying joint encoding of location and functional information, how this encoding relates to behavior, how it emerges during development, and how it varies across species.
|
14
|
Grujic N, Brus J, Burdakov D, Polania R. Rational inattention in mice. Sci Adv 2022; 8:eabj8935. [PMID: 35245128 PMCID: PMC8896787 DOI: 10.1126/sciadv.abj8935] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Indexed: 05/08/2023]
Abstract
Behavior exhibited by humans and other organisms is generally inconsistent and biased and, thus, is often labeled irrational. However, the origins of this seemingly suboptimal behavior remain elusive. We developed a behavioral task and normative framework to reveal how organisms should allocate their limited processing resources such that sensory precision and its related metabolic investment are balanced to guarantee maximal utility. We found that mice act as rational inattentive agents by adaptively allocating their sensory resources in a way that maximizes reward consumption in previously unexperienced stimulus-reward association environments. Unexpectedly, perception of commonly occurring stimuli was relatively imprecise; however, this apparent statistical fallacy implies "awareness" and efficient adaptation to their neurocognitive limitations. Arousal systems carry reward distribution information of sensory signals, and distributional reinforcement learning mechanisms regulate sensory precision via top-down normalization. These findings reveal how organisms efficiently perceive and adapt to previously unexperienced environmental contexts within the constraints imposed by neurobiology.
Affiliation(s)
- Nikola Grujic
- Institute for Neuroscience, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Neuroscience Center Zürich, Zurich, Switzerland
- Jeroen Brus
- Neuroscience Center Zürich, Zurich, Switzerland
- Decision Neuroscience Lab, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Denis Burdakov
- Institute for Neuroscience, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Neuroscience Center Zürich, Zurich, Switzerland
- Corresponding author. (R.P.); (D.B.)
- Rafael Polania
- Neuroscience Center Zürich, Zurich, Switzerland
- Decision Neuroscience Lab, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Corresponding author. (R.P.); (D.B.)
|
15
|
Qiu Y, Zhao Z, Klindt D, Kautzky M, Szatko KP, Schaeffel F, Rifai K, Franke K, Busse L, Euler T. Natural environment statistics in the upper and lower visual field are reflected in mouse retinal specializations. Curr Biol 2021; 31:3233-3247.e6. [PMID: 34107304 DOI: 10.1016/j.cub.2021.05.017] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Received: 01/10/2021] [Revised: 04/06/2021] [Accepted: 05/11/2021] [Indexed: 12/29/2022]
Abstract
Survival pressures adapt sensory circuits to a species' natural habitat and its behavioral challenges. Thus, to advance our understanding of the visual system, it is essential to consider an animal's specific visual environment by capturing natural scenes, characterizing their statistical regularities, and using them to probe visual computations. Mice, a prominent visual system model, have salient visual specializations, being dichromatic with enhanced sensitivity to green and UV in the dorsal and ventral retina, respectively. However, the characteristics of their visual environment that likely have driven these adaptations are rarely considered. Here, we built a UV-green-sensitive camera to record footage from mouse habitats. This footage is publicly available as a resource for mouse vision research. We found that chromatic contrast diverges greatly in the upper, but not the lower, visual field. Moreover, training a convolutional autoencoder on upper, but not lower, visual field scenes was sufficient for the emergence of color-opponent filters, suggesting that this environmental difference might have driven the superior chromatic opponency in the ventral mouse retina, supporting color discrimination in the upper visual field. Furthermore, the upper visual field was biased toward dark UV contrasts, paralleled by more light-offset-sensitive ganglion cells in the ventral retina. Finally, footage recorded at twilight suggests that UV promotes aerial predator detection. Our findings support the view that natural scene statistics shaped early visual processing in evolution.
Affiliation(s)
- Yongrong Qiu
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Graduate Training Centre of Neuroscience (GTC), International Max Planck Research School, University of Tübingen, 72076 Tübingen, Germany
- Zhijian Zhao
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany
- David Klindt
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Graduate Training Centre of Neuroscience (GTC), International Max Planck Research School, University of Tübingen, 72076 Tübingen, Germany
- Magdalena Kautzky
- Division of Neurobiology, Faculty of Biology, LMU Munich, 82152 Planegg-Martinsried, Germany; Graduate School of Systemic Neurosciences (GSN), LMU Munich, 82152 Planegg-Martinsried, Germany
- Klaudia P Szatko
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Graduate Training Centre of Neuroscience (GTC), International Max Planck Research School, University of Tübingen, 72076 Tübingen, Germany; Bernstein Centre for Computational Neuroscience, 72076 Tübingen, Germany
- Frank Schaeffel
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany
- Katharina Rifai
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Carl Zeiss Vision International GmbH, 73430 Aalen, Germany
- Katrin Franke
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Bernstein Centre for Computational Neuroscience, 72076 Tübingen, Germany
- Laura Busse
- Division of Neurobiology, Faculty of Biology, LMU Munich, 82152 Planegg-Martinsried, Germany; Bernstein Centre for Computational Neuroscience, 82152 Planegg-Martinsried, Germany
- Thomas Euler
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Bernstein Centre for Computational Neuroscience, 72076 Tübingen, Germany
|
16
|
Nonlinear spatial integration in retinal bipolar cells shapes the encoding of artificial and natural stimuli. Neuron 2021; 109:1692-1706.e8. [PMID: 33798407 PMCID: PMC8153253 DOI: 10.1016/j.neuron.2021.03.015] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Received: 05/13/2020] [Revised: 01/22/2021] [Accepted: 03/10/2021] [Indexed: 11/21/2022]
Abstract
The retina dissects the visual scene into parallel information channels, which extract specific visual features through nonlinear processing. The first nonlinear stage is typically considered to occur at the output of bipolar cells, resulting from nonlinear transmitter release from synaptic terminals. In contrast, we show here that bipolar cells themselves can act as nonlinear processing elements at the level of their somatic membrane potential. Intracellular recordings from bipolar cells in the salamander retina revealed frequent nonlinear integration of visual signals within bipolar cell receptive field centers, affecting the encoding of artificial and natural stimuli. These nonlinearities provide sensitivity to spatial structure below the scale of bipolar cell receptive fields in both bipolar and downstream ganglion cells and appear to arise at the excitatory input into bipolar cells. Thus, our data suggest that nonlinear signal pooling starts earlier than previously thought: that is, at the input stage of bipolar cells.
Highlights:
- Some retinal bipolar cells represent visual contrast in a nonlinear fashion
- These bipolar cells also nonlinearly integrate visual signals over space
- The spatial nonlinearity affects the encoding of natural stimuli by bipolar cells
- The nonlinearity results from feedforward input, not from feedback inhibition
|
17
|
Straub D, Rothkopf CA. Looking for Image Statistics: Active Vision With Avatars in a Naturalistic Virtual Environment. Front Psychol 2021; 12:641471. [PMID: 33692732 PMCID: PMC7937646 DOI: 10.3389/fpsyg.2021.641471] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Received: 12/14/2020] [Accepted: 02/01/2021] [Indexed: 11/13/2022] Open
Abstract
The efficient coding hypothesis posits that sensory systems are tuned to the regularities of their natural input. The statistics of natural image databases have been the topic of many studies, which have revealed biases in the distribution of orientations that are related to neural representations as well as behavior in psychophysical tasks. However, commonly used natural image databases contain images taken with a camera with a planar image sensor and limited field of view. Thus, these images do not incorporate the physical properties of the visual system and its active use through body and eye movements. Here, we investigate quantitatively whether the active use of the visual system influences image statistics across the visual field by simulating visual behaviors of an avatar in a naturalistic virtual environment. Images with a field of view of 120° were generated during exploration of a virtual forest environment for both a human and a cat avatar. The physical properties of the visual system were taken into account by projecting the images onto idealized retinas according to models of the eyes' geometrical optics. Crucially, different active gaze behaviors were simulated to obtain image ensembles that allow investigating the consequences of active visual behaviors on the statistics of the input to the visual system. In the central visual field, the statistics of the virtual images matched those of photographic images in their power spectra and in a bias of edge orientations toward cardinal directions. At larger eccentricities, the cardinal bias was superimposed with a gradually increasing radial bias, whose strength depends on the active visual behavior and the physical properties of the eye. There were also significant differences between the upper and lower visual field, which became stronger depending on how the environment was actively sampled. Taken together, the results show that quantitatively relating natural image statistics to neural representations and psychophysical behavior requires taking into account not only the structure of the environment but also the physical properties of the visual system and its active use in behavior.
Affiliation(s)
- Dominik Straub
- Institute of Psychology, Technical University of Darmstadt, Darmstadt, Germany
- Centre for Cognitive Science, Technical University of Darmstadt, Darmstadt, Germany
- Constantin A. Rothkopf
- Institute of Psychology, Technical University of Darmstadt, Darmstadt, Germany
- Centre for Cognitive Science, Technical University of Darmstadt, Darmstadt, Germany
|
18
|
Talyansky S, Brinkman BAW. Dysregulation of excitatory neural firing replicates physiological and functional changes in aging visual cortex. PLoS Comput Biol 2021; 17:e1008620. [PMID: 33497380 PMCID: PMC7864437 DOI: 10.1371/journal.pcbi.1008620] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/17/2020] [Revised: 02/05/2021] [Accepted: 12/08/2020] [Indexed: 11/19/2022] Open
Abstract
The mammalian visual system has been the focus of countless experimental and theoretical studies designed to elucidate principles of neural computation and sensory coding. Most theoretical work has focused on networks intended to reflect developing or mature neural circuitry, in both health and disease. Few computational studies have attempted to model changes that occur in neural circuitry as an organism ages non-pathologically. In this work we contribute to closing this gap, studying how physiological changes correlated with advanced age impact the computational performance of a spiking network model of primary visual cortex (V1). Our results demonstrate that deterioration of homeostatic regulation of excitatory firing, coupled with long-term synaptic plasticity, is a sufficient mechanism to reproduce features of observed physiological and functional changes in neural activity data, specifically declines in inhibition and in selectivity to oriented stimuli. This suggests a potential causality between dysregulation of neuron firing and age-induced changes in brain physiology and functional performance. While this does not rule out deeper underlying causes or other mechanisms that could give rise to these changes, our approach opens new avenues for exploring these underlying mechanisms in greater depth and making predictions for future experiments.
Affiliation(s)
- Seth Talyansky
- Catlin Gabel School, Portland, Oregon, United States of America
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York, United States of America
- Braden A. W. Brinkman
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York, United States of America
|
19
|
Ruda K, Zylberberg J, Field GD. Ignoring correlated activity causes a failure of retinal population codes. Nat Commun 2020; 11:4605. [PMID: 32929073 PMCID: PMC7490269 DOI: 10.1038/s41467-020-18436-2] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Received: 03/05/2020] [Accepted: 08/21/2020] [Indexed: 11/25/2022] Open
Abstract
From starlight to sunlight, adaptation alters retinal output, changing both the signal and noise among populations of retinal ganglion cells (RGCs). Here we determine how these light level-dependent changes impact decoding of retinal output, testing the importance of accounting for RGC noise correlations to optimally read out retinal activity. We find that under moonlight conditions, correlated noise is greater, and assuming independent noise severely diminishes decoding performance. In fact, assuming independence among a local population of RGCs produces worse decoding than using a single RGC, demonstrating a failure of population codes when correlated noise is substantial and ignored. We generalize these results with a simple model to determine what conditions dictate this failure of population processing. This work elucidates the circumstances in which accounting for noise correlations is necessary to take advantage of population-level codes and shows that sensory adaptation can strongly impact decoding requirements on downstream brain areas. To see during day and night, the retina adapts to a trillion-fold change in light intensity. The authors show that an accurate read-out of retinal signals over this intensity range requires that brain circuits account for changing noise correlations across populations of retinal neurons.
Affiliation(s)
- Kiersten Ruda
- Department of Neurobiology, Duke University School of Medicine, Durham, NC, USA
- Joel Zylberberg
- Department of Physics and Center for Vision Research, York University, Toronto, Ontario, Canada
- Greg D Field
- Department of Neurobiology, Duke University School of Medicine, Durham, NC, USA
|
20
|
Cafaro J, Zylberberg J, Field GD. Global Motion Processing by Populations of Direction-Selective Retinal Ganglion Cells. J Neurosci 2020; 40:5807-5819. [PMID: 32561674 PMCID: PMC7380974 DOI: 10.1523/jneurosci.0564-20.2020] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Received: 03/09/2020] [Revised: 06/09/2020] [Accepted: 06/12/2020] [Indexed: 11/21/2022] Open
Abstract
Simple stimuli have been critical to understanding neural population codes in sensory systems. Yet it remains necessary to determine the extent to which this understanding generalizes to more complex conditions. To examine this problem, we measured how populations of direction-selective ganglion cells (DSGCs) from the retinas of male and female mice respond to a global motion stimulus whose direction and speed change dynamically. We then examined the encoding and decoding of motion direction in both individual and populations of DSGCs. Individual cells integrated global motion over ∼200 ms, and responses were tuned to direction. However, responses were sparse and broadly tuned, which severely limited decoding performance from small DSGC populations. In contrast, larger populations compensated for response sparsity, enabling decoding with high temporal precision (<100 ms). At these timescales, correlated spiking was minimal and had little impact on decoding performance, unlike results obtained using simpler local motion stimuli decoded over longer timescales. We use these data to define different DSGC population decoding regimes that use or mitigate correlated spiking to achieve high-spatial versus high-temporal resolution. SIGNIFICANCE STATEMENT: ON-OFF direction-selective ganglion cells (ooDSGCs) in the mammalian retina are typically thought to signal local motion to the brain. However, several recent studies suggest they may signal global motion. Here we analyze the fidelity of encoding and decoding global motion in a natural scene across large populations of ooDSGCs. We show that large populations of DSGCs are capable of signaling rapid changes in global motion.
Affiliation(s)
- Jon Cafaro
- Department of Neurobiology, Duke University, Durham, North Carolina, 27710
- Joel Zylberberg
- Department of Physics and Astronomy, York University, Toronto, Ontario, M3J 1P3
- Greg D Field
- Department of Neurobiology, Duke University, Durham, North Carolina, 27710
|
21
|
Probabilistic Representation in Human Visual Cortex Reflects Uncertainty in Serial Decisions. J Neurosci 2019; 39:8164-8176. [PMID: 31481435 DOI: 10.1523/jneurosci.3212-18.2019] [Citation(s) in RCA: 54] [Impact Index Per Article: 9.0] [Received: 12/19/2018] [Revised: 07/24/2019] [Accepted: 07/24/2019] [Indexed: 01/16/2023] Open
Abstract
How does the brain represent the reliability of its sensory evidence? Here, we test whether sensory uncertainty is encoded in cortical population activity as the width of a probability distribution, a hypothesis that lies at the heart of Bayesian models of neural coding. We probe the neural representation of uncertainty by capitalizing on a well-known behavioral bias called serial dependence. Human observers of either sex reported the orientation of stimuli presented in sequence, while activity in visual cortex was measured with fMRI. We decoded probability distributions from population-level activity and found that serial dependence effects in behavior are consistent with a statistically advantageous sensory integration strategy, in which uncertain sensory information is given less weight. More fundamentally, our results suggest that probability distributions decoded from human visual cortex reflect the sensory uncertainty that observers rely on in their decisions, providing critical evidence for Bayesian theories of perception. SIGNIFICANCE STATEMENT: Virtually any decision that people make is based on uncertain and incomplete information. Although uncertainty plays a major role in decision-making, we have but a nascent understanding of its neural basis. Here, we probe the neural code of uncertainty by capitalizing on a well-known perceptual illusion. We developed a computational model to explain the illusion and tested it in behavioral and neuroimaging experiments. This revealed that the illusion is not a mistake of perception but rather reflects a rational decision under uncertainty. No less important, we discovered that the uncertainty that people use in this decision is represented in brain activity as the width of a probability distribution, providing critical evidence for current Bayesian theories of decision-making.
|
22
|
Richard B, Hansen BC, Johnson AP, Shafto P. Spatial summation of broadband contrast. J Vis 2019; 19:16. [PMID: 31100132 DOI: 10.1167/19.5.16] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Indexed: 11/24/2022] Open
Abstract
Spatial summation of luminance contrast signals has historically been measured psychophysically with stimuli isolated in spatial frequency (i.e., narrowband). Here, we revisit the study of spatial summation with noise patterns that contain the naturalistic 1/fα distribution of contrast across spatial frequency. We measured amplitude spectrum slope (α) discrimination thresholds and tested whether sensitivity to α improved with stimulus size. Discrimination thresholds did decrease with increasing stimulus size. These data were modeled with a summation model originally designed for narrowband stimuli (i.e., a single detecting channel; Baker & Meese, 2011; Meese & Baker, 2011) that we modified to include summation across multiple, differently tuned, spatial frequency channels. To fit our data, contrast gain control weights had to be inversely related to spatial frequency (1/f); thus, low spatial frequencies received significantly more divisive inhibition than higher spatial frequencies, a finding similar to previous models of broadband contrast perception (Haun & Essock, 2010; Haun & Peli, 2013). We found that summation across spatial frequency channels occurs prior to summation across space; channel summation was nearly linear, whereas summation across space was nonlinear. Our analysis demonstrates that classical psychophysical models can be adapted to computationally define visual mechanisms under broadband visual input, with the adapted models offering novel insight into the integration of signals across channels and space.
Affiliation(s)
- Bruno Richard
- Department of Mathematics and Computer Science, Rutgers University, Newark, NJ, USA
- Bruce C Hansen
- Department of Psychological and Brain Sciences, Neuroscience Program, Colgate University, Hamilton, NY, USA
- Aaron P Johnson
- Department of Psychology, Concordia University, Montreal, Quebec, Canada
- Patrick Shafto
- Department of Mathematics and Computer Science, Rutgers University, Newark, NJ, USA
|
23
|
Habtegiorgis SW, Jarvers C, Rifai K, Neumann H, Wahl S. The Role of Bottom-Up and Top-Down Cortical Interactions in Adaptation to Natural Scene Statistics. Front Neural Circuits 2019; 13:9. [PMID: 30814934 PMCID: PMC6381060 DOI: 10.3389/fncir.2019.00009] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Received: 08/09/2018] [Accepted: 01/24/2019] [Indexed: 11/16/2022] Open
Abstract
Adaptation is a mechanism by which cortical neurons adjust their responses according to recently viewed stimuli. Visual information is processed in a circuit formed by feedforward (FF) and feedback (FB) synaptic connections of neurons in different cortical layers. Here, the functional role of FF-FB streams and their synaptic dynamics in adaptation to natural stimuli is assessed using psychophysics and a neural model. We propose a cortical model that predicts psychophysically observed motion adaptation aftereffects (MAE) after exposure to geometrically distorted natural image sequences. The model comprises direction-selective neurons in V1 and MT connected by recurrent FF and FB dynamic synapses. Psychophysically plausible model MAEs were obtained from synaptic changes within neurons tuned to salient direction signals of the broadband natural input. We propose that motion disambiguation by FF-FB interactions is critical for encoding this salient information. Moreover, only FF-FB dynamic synapses operating at distinct rates predicted psychophysical MAEs at different adaptation time-scales, which could not be accounted for by single-rate dynamic synapses in either of the streams. Recurrent FF-FB pathways thereby play a role during adaptation in a natural environment, specifically by inducing multilevel cortical plasticity to salient information and by mediating adaptation at different time-scales.
Affiliation(s)
- Christian Jarvers
- Faculty of Engineering, Computer Sciences and Psychology, Institute of Neural Information Processing, Ulm University, Ulm, Germany
- Katharina Rifai
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Carl Zeiss Vision International GmbH, Aalen, Germany
- Heiko Neumann
- Faculty of Engineering, Computer Sciences and Psychology, Institute of Neural Information Processing, Ulm University, Ulm, Germany
- Siegfried Wahl
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Faculty of Engineering, Computer Sciences and Psychology, Institute of Neural Information Processing, Ulm University, Ulm, Germany
|
24
|
Habtegiorgis SW, Rifai K, Lappe M, Wahl S. Experience-dependent long-term facilitation of skew adaptation. J Vis 2018; 18:7. [DOI: 10.1167/18.9.7] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Indexed: 11/24/2022] Open
Affiliation(s)
- Katharina Rifai
- Institute for Ophthalmic Research, University of Tuebingen, Tuebingen, Germany
- Carl Zeiss Vision International GmbH, Aalen, Germany
- Markus Lappe
- Institute of Psychology, University of Muenster, Muenster, Germany
- Siegfried Wahl
- Institute for Ophthalmic Research, University of Tuebingen, Tuebingen, Germany
- Carl Zeiss Vision International GmbH, Aalen, Germany
|
25
|
Wright NC, Hoseini MS, Yasar TB, Wessel R. Coupling of synaptic inputs to local cortical activity differs among neurons and adapts after stimulus onset. J Neurophysiol 2017; 118:3345-3359. [PMID: 28931610 DOI: 10.1152/jn.00398.2017] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Indexed: 11/22/2022] Open
Abstract
Cortical activity contributes significantly to the high variability of sensory responses of interconnected pyramidal neurons, which has crucial implications for sensory coding. Yet, largely because of technical limitations of in vivo intracellular recordings, the coupling of a pyramidal neuron's synaptic inputs to the local cortical activity has evaded full understanding. Here we obtained excitatory synaptic conductance (g) measurements from putative pyramidal neurons and local field potential (LFP) recordings from adjacent cortical circuits during visual processing in the turtle whole brain ex vivo preparation. We found a range of g-LFP coupling across neurons. Importantly, for a given neuron, g-LFP coupling increased at stimulus onset and then relaxed toward intermediate values during continued visual stimulation. A model network with clustered connectivity and synaptic depression reproduced both the diversity and the dynamics of g-LFP coupling. In conclusion, these results establish a rich dependence of single-neuron responses on anatomical, synaptic, and emergent network properties. NEW & NOTEWORTHY: Cortical neurons are strongly influenced by the networks in which they are embedded. To understand sensory processing, we must identify the nature of this influence and its underlying mechanisms. Here we investigate synaptic inputs to cortical neurons, and the nearby local field potential, during visual processing. We find a range of neuron-to-network coupling across cortical neurons. This coupling is dynamically modulated during visual processing via biophysical and emergent network properties.
Affiliation(s)
- Nathaniel C Wright
- Department of Physics, Washington University in St. Louis, St. Louis, Missouri
- Mahmood S Hoseini
- Department of Physics, Washington University in St. Louis, St. Louis, Missouri
- Tansel Baran Yasar
- Department of Physics, Washington University in St. Louis, St. Louis, Missouri
- Ralf Wessel
- Department of Physics, Washington University in St. Louis, St. Louis, Missouri
|
26
|
The brain during free movement - What can we learn from the animal model. Brain Res 2017; 1716:3-15. [PMID: 28893579 DOI: 10.1016/j.brainres.2017.09.003] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Received: 05/04/2017] [Revised: 08/11/2017] [Accepted: 09/04/2017] [Indexed: 11/21/2022]
Abstract
Animals, just like humans, can freely move. They do so for various important reasons, such as finding food and escaping predators. Observing these behaviors can inform us about the underlying cognitive processes. In addition, while humans can convey complicated information easily through speaking, animals need to move their bodies to communicate. This has prompted many creative solutions by animal neuroscientists to enable studying the brain during movement. In this review, we first summarize how animal researchers record from the brain while an animal is moving, by describing the most common neural recording techniques in animals and how they were adapted to record during movement. We further discuss the challenge of controlling or monitoring sensory input during free movement. However, not only is free movement a necessity to reflect the outcome of certain internal cognitive processes in animals, it is also a fascinating field of research since certain crucial behavioral patterns can only be observed and studied during free movement. Therefore, in a second part of the review, we focus on some key findings in animal research that specifically address the interaction between free movement and brain activity. First, focusing on walking as a fundamental form of free movement, we discuss how important such intentional movements are for understanding processes as diverse as spatial navigation, active sensing, and complex motor planning. Second, we propose the idea of regarding free movement as the expression of a behavioral state. This view can help to understand the general influence of movement on brain function. Together, the technological advancements towards recording from the brain during movement, and the scientific questions asked about the brain engaged in movement, make animal research highly valuable to research into the human "moving brain".
27
Wright NC, Wessel R. Network activity influences the subthreshold and spiking visual responses of pyramidal neurons in the three-layer turtle cortex. J Neurophysiol 2017; 118:2142-2155. [PMID: 28747466 DOI: 10.1152/jn.00340.2017]
Abstract
A primary goal of systems neuroscience is to understand cortical function, typically by studying spontaneous and stimulus-modulated cortical activity. Mounting evidence suggests a strong and complex relationship exists between the ongoing and stimulus-modulated cortical state. To date, most work in this area has been based on spiking in populations of neurons. While advantageous in many respects, this approach is limited in scope: it records the activity of a minority of neurons and gives no direct indication of the underlying subthreshold dynamics. Membrane potential recordings can fill these gaps in our understanding, but stable recordings are difficult to obtain in vivo. Here, we recorded subthreshold cortical visual responses in the ex vivo turtle eye-attached whole brain preparation, which is ideally suited for such a study. We found that, in the absence of visual stimulation, the network was "synchronous"; neurons displayed network-mediated transitions between hyperpolarized (Down) and depolarized (Up) membrane potential states. The prevalence of these slow-wave transitions varied across turtles and recording sessions. Visual stimulation evoked similar Up states, which were on average larger and less reliable when the ongoing state was more synchronous. Responses were muted when immediately preceded by large, spontaneous Up states. Evoked spiking was sparse, highly variable across trials, and mediated by concerted synaptic inputs that were, in general, only very weakly correlated with inputs to nearby neurons. Together, these results highlight the multiplexed influence of the cortical network on the spontaneous and sensory-evoked activity of individual cortical neurons.

NEW & NOTEWORTHY Most studies of cortical activity focus on spikes. Subthreshold membrane potential recordings can provide complementary insight, but stable recordings are difficult to obtain in vivo.
Here, we recorded the membrane potentials of cortical neurons during ongoing and visually evoked activity. We observed a strong relationship between network and single-neuron evoked activity spanning multiple temporal scales. The membrane potential perspective of cortical dynamics thus highlights the influence of intrinsic network properties on visual processing.
Affiliation(s)
- Nathaniel C Wright, Department of Physics, Washington University in St. Louis, St. Louis, Missouri
- Ralf Wessel, Department of Physics, Washington University in St. Louis, St. Louis, Missouri
28
Habtegiorgis SW, Rifai K, Lappe M, Wahl S. Adaptation to Skew Distortions of Natural Scenes and Retinal Specificity of Its Aftereffects. Front Psychol 2017; 8:1158. [PMID: 28751870 PMCID: PMC5508008 DOI: 10.3389/fpsyg.2017.01158]
Abstract
Image skew is one of the prominent distortions that exist in optical elements, such as in spectacle lenses. The present study evaluates adaptation to image skew in dynamic natural images. Moreover, the cortical levels involved in skew coding were probed using retinal specificity of skew adaptation aftereffects. Left and right skewed natural image sequences were shown to observers as adapting stimuli. The point of subjective equality (PSE), i.e., the skew amplitude in simple geometrical patterns that is perceived to be unskewed, was used to quantify the aftereffect of each adapting skew direction. The PSE, in a two-alternative forced choice paradigm, shifted toward the adapting skew direction. Moreover, significant adaptation aftereffects were obtained not only at adapted, but also at non-adapted retinal locations during fixation. Skew adaptation information was transferred partially to non-adapted retinal locations. Thus, adaptation to skewed natural scenes induces coordinated plasticity in lower and higher cortical areas of the visual pathway.
Affiliation(s)
- Katharina Rifai, Institute for Ophthalmic Research, University of Tuebingen, Tuebingen, Germany
- Markus Lappe, Institute of Psychology, University of Muenster, Muenster, Germany
- Siegfried Wahl, Institute for Ophthalmic Research, University of Tuebingen, Tuebingen, Germany
29
De Luna P, Veit J, Rainer G. Basal forebrain activation enhances between-trial reliability of low-frequency local field potentials (LFP) and spiking activity in tree shrew primary visual cortex (V1). Brain Struct Funct 2017; 222:4239-4252. [PMID: 28660418 DOI: 10.1007/s00429-017-1468-1]
Abstract
Brain state has profound effects on neural processing and stimulus encoding in sensory cortices. While the synchronized state is dominated by low-frequency local field potential (LFP) activity, low-frequency LFP power is suppressed in the desynchronized state, where a concurrent enhancement in gamma power is observed. Recently, it has been shown that cortical desynchronization co-occurs with enhanced between-trial reliability of spiking activity in sensory neurons, but it is currently unclear whether this effect is also evident in LFP signals. Here, we address this question by recording both spike trains and LFP in primary visual cortex during natural movie stimulation, and using isoflurane anesthesia and basal forebrain (BF) electrical activation as proxies for synchronized and desynchronized brain states. We show that indeed, low-frequency LFP modulations ("LFP events") also occur more reliably following BF activation. Interestingly, while being more reliable, these LFP events are smaller in amplitude compared to those generated in the synchronized brain state. We further demonstrate that differences in reliability of spiking activity between cortical states can be linked to amplitude and probability of LFP events. The correlated temporal dynamics between low-frequency LFP and spiking response reliability in visual cortex suggests that these effects may both be the result of the same neural circuit activation triggered by BF stimulation, which facilitates switching between processing of incoming sensory information in the desynchronized state and reverberation of internal signals in the synchronized state.
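The between-trial reliability measure discussed in this abstract can be illustrated with a toy computation: the mean pairwise Pearson correlation of single-trial responses. This is a minimal sketch, not the authors' analysis pipeline; the simulated signal, noise levels, and the function name `between_trial_reliability` are assumptions for illustration.

```python
import numpy as np

def between_trial_reliability(trials):
    """Mean pairwise Pearson correlation across trials.
    trials : array of shape (n_trials, n_samples), one response per trial."""
    z = trials - trials.mean(axis=1, keepdims=True)
    z /= z.std(axis=1, keepdims=True)
    c = (z @ z.T) / trials.shape[1]          # trial-by-trial correlation matrix
    n = trials.shape[0]
    return float((c.sum() - np.trace(c)) / (n * (n - 1)))

rng = np.random.default_rng(5)
signal = np.sin(np.linspace(0, 4 * np.pi, 400))          # stimulus-locked component
reliable = signal + 0.5 * rng.normal(size=(20, 400))     # desynchronized-like trials
unreliable = signal + 2.0 * rng.normal(size=(20, 400))   # synchronized-like trials
print(between_trial_reliability(reliable), between_trial_reliability(unreliable))
```

With a fixed stimulus-locked component, trials with less trial-to-trial noise yield a higher mean pairwise correlation, mirroring the reliability contrast between brain states.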
Affiliation(s)
- Paolo De Luna, Visual Cognition Laboratory, Department of Medicine, University of Fribourg, Chemin du Musée 5, 1700 Fribourg, Switzerland
- Julia Veit, Visual Cognition Laboratory, Department of Medicine, University of Fribourg, Chemin du Musée 5, 1700 Fribourg, Switzerland; Department of Molecular and Cell Biology, University of California, Berkeley, CA 94720-3200, USA
- Gregor Rainer, Visual Cognition Laboratory, Department of Medicine, University of Fribourg, Chemin du Musée 5, 1700 Fribourg, Switzerland
30
Wright NC, Hoseini MS, Wessel R. Adaptation modulates correlated subthreshold response variability in visual cortex. J Neurophysiol 2017; 118:1257-1269. [PMID: 28592686 DOI: 10.1152/jn.00124.2017]
Abstract
Cortical sensory responses are highly variable across stimulus presentations. This variability can be correlated across neurons (due to some combination of dense intracortical connectivity, cortical activity level, and cortical state), with fundamental implications for population coding. Yet the interpretation of correlated response variability (or "noise correlation") has remained fraught with difficulty, in part because of the restriction to extracellular neuronal spike recordings. Here, we measured response variability and its correlation at the most microscopic level of electrical neural activity, the membrane potential, by obtaining dual whole cell recordings from pairs of cortical pyramidal neurons during visual processing in the turtle whole brain ex vivo preparation. We found that during visual stimulation, correlated variability adapts toward an intermediate level and that this correlation dynamic is likely mediated by intracortical mechanisms. A model network with external inputs, synaptic depression, and structure reproduced the observed dynamics of correlated variability. These results suggest that intracortical adaptation self-organizes cortical circuits toward a balanced regime at which correlated variability is maintained at an intermediate level.

NEW & NOTEWORTHY Correlated response variability has profound implications for stimulus encoding, yet our understanding of this phenomenon is based largely on spike data. Here, we investigate the dynamics and mechanisms of membrane potential-correlated variability (CC) in visual cortex with a combined experimental and computational approach. We observe a visually evoked increase in CC, followed by a fast return to baseline. Our results further suggest a link between this observation and the adaptation-mediated dynamics of emergent network phenomena.
Affiliation(s)
- Nathaniel C Wright, Department of Physics, Washington University in St. Louis, St. Louis, Missouri
- Mahmood S Hoseini, Department of Physics, Washington University in St. Louis, St. Louis, Missouri
- Ralf Wessel, Department of Physics, Washington University in St. Louis, St. Louis, Missouri
31
Selective interhemispheric circuits account for a cardinal bias in spontaneous activity within early visual areas. Neuroimage 2017; 146:971-982. [DOI: 10.1016/j.neuroimage.2016.09.048]
32
Katz ML, Viney TJ, Nikolic K. Receptive Field Vectors of Genetically-Identified Retinal Ganglion Cells Reveal Cell-Type-Dependent Visual Functions. PLoS One 2016; 11:e0147738. [PMID: 26845435 PMCID: PMC4742227 DOI: 10.1371/journal.pone.0147738]
Abstract
Sensory stimuli are encoded by diverse kinds of neurons, but the identities of the recorded neurons are often unknown. We explored in detail the firing patterns of eight previously defined genetically-identified retinal ganglion cell (RGC) types from a single transgenic mouse line. We first introduce a new technique of deriving receptive field vectors (RFVs) which utilises a modified form of mutual information (“Quadratic Mutual Information”). We analysed the firing patterns of RGCs during presentation of short duration (~10 second) complex visual scenes (natural movies). We probed the high dimensional space formed by the visual input for a much smaller dimensional subspace of RFVs that give the most information about the response of each cell. The new technique is efficient and fast, and the derivation of novel types of RFVs formed by the natural scene visual input was possible even with limited numbers of spikes per cell. This approach enabled us to estimate the 'visual memory' of each cell type and the corresponding receptive field area by calculating Mutual Information as a function of the number of frames and radius. Finally, we made predictions of biologically relevant functions based on the RFVs of each cell type. RGC class analysis was complemented with results for the cells’ response to simple visual input in the form of black and white spot stimulation, and their classification on several key physiological metrics. Thus RFVs lead to predictions of biological roles based on limited data and facilitate analysis of sensory-evoked spiking data from defined cell types.
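The "Quadratic Mutual Information" idea can be sketched in a few lines: for a binned stimulus feature x and a binary response y, sum (p(x,y) − p(x)p(y))² over bins. This is an illustrative toy under an assumed binning scheme and simulated data, not the paper's RFV derivation; the function name and parameters are assumptions.

```python
import numpy as np

def quadratic_mi(feature, spikes, n_bins=8):
    """Toy quadratic mutual information between a binned stimulus feature
    and a binary response: sum over bins of (p(x,y) - p(x)p(y))**2."""
    edges = np.histogram_bin_edges(feature, n_bins)
    x = np.digitize(feature, edges[1:-1])            # bin indices 0..n_bins-1
    qmi = 0.0
    for xv in np.unique(x):
        for yv in (0, 1):
            pxy = np.mean((x == xv) & (spikes == yv))
            qmi += (pxy - np.mean(x == xv) * np.mean(spikes == yv)) ** 2
    return qmi

rng = np.random.default_rng(0)
f = rng.normal(size=5000)
informative = ((f + 0.3 * rng.normal(size=5000)) > 0.8).astype(int)  # feature-driven
shuffled = rng.permutation(informative)              # same firing rate, no dependence
print(quadratic_mi(f, informative), quadratic_mi(f, shuffled))
```

A feature-driven response yields a larger value than a rate-matched shuffle, which is the property that lets such a measure pick out informative stimulus subspaces.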
Affiliation(s)
- Matthew L. Katz, Centre for Bio-Inspired Technology, Institute of Biomedical Engineering, Department of Electrical and Electronic Engineering, The Bessemer Building, Imperial College London, London SW7 2AZ, United Kingdom
- Tim J. Viney, Neural Circuit Laboratories, Friedrich Miescher Institute for Biomedical Research, 4058 Basel, Switzerland; University of Basel, 4003 Basel, Switzerland
- Konstantin Nikolic, Centre for Bio-Inspired Technology, Institute of Biomedical Engineering, Department of Electrical and Electronic Engineering, The Bessemer Building, Imperial College London, London SW7 2AZ, United Kingdom
33
Wadehn F, Schieban K, Nikolic K. Motion sensitivity analysis of retinal ganglion cells in mouse retina using natural visual stimuli. Annu Int Conf IEEE Eng Med Biol Soc 2015:1658-62. [PMID: 26736594 DOI: 10.1109/embc.2015.7318694]
Abstract
One of the major objectives in functional studies of the retina is the understanding of neural circuits and identification of the function of the nerve cells involved. Instead of stimulating the retina with light patterns of simple geometrical shapes, we analyze the response of retinal ganglion cells of mouse retina to a black-and-white movie of natural scenery. By correlating measured spike trains with a metric for the velocity of a visual scene, PV0 cells were found to be direction selective, whereas PV5 cells did not show any sensitivity to motion.
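The correlation analysis described here (spike trains vs. a velocity metric of the visual scene) can be sketched as follows, with mean absolute frame-to-frame intensity change standing in for the paper's velocity metric; the simulated cells, gain, and function name are assumptions for illustration.

```python
import numpy as np

def motion_sensitivity(frames, spike_counts):
    """Pearson correlation between per-frame spike counts and a simple
    scene-velocity metric (mean absolute intensity change per frame)."""
    velocity = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
    counts = np.asarray(spike_counts, float)[1:]      # align counts to frame pairs
    v = (velocity - velocity.mean()) / velocity.std()
    c = (counts - counts.mean()) / counts.std()
    return float(np.mean(v * c))

rng = np.random.default_rng(1)
frames = rng.random((200, 16, 16))
vel = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
tuned = rng.poisson(1 + 500 * vel)            # firing rate follows scene velocity
untuned = rng.poisson(np.full(199, 50.0))     # firing rate ignores scene velocity
r_tuned = motion_sensitivity(frames, np.concatenate([[0], tuned]))
r_untuned = motion_sensitivity(frames, np.concatenate([[0], untuned]))
print(r_tuned, r_untuned)
```

A motion-sensitive model cell shows a clearly positive correlation with the velocity metric, while a motion-insensitive cell hovers near zero, mirroring the PV0/PV5 contrast.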
34
Where's the Noise? Key Features of Spontaneous Activity and Neural Variability Arise through Learning in a Deterministic Network. PLoS Comput Biol 2015; 11:e1004640. [PMID: 26714277 PMCID: PMC4694925 DOI: 10.1371/journal.pcbi.1004640]
Abstract
Even in the absence of sensory stimulation the brain is spontaneously active. This background “noise” seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network’s spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network’s behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. 
We conclude that key observations on spontaneous brain activity and the variability of neural responses can be accounted for by a simple deterministic recurrent neural network which learns a predictive model of its sensory environment via a combination of generic neural plasticity mechanisms.

Neural recordings seem very noisy. If the exact same stimulus is shown to an animal multiple times, the neural response will vary substantially. In fact, the activity of a single neuron shows many features of a random process. Furthermore, the spontaneous activity occurring in the absence of any sensory stimulus, which is usually considered a kind of background noise, often has a magnitude comparable to the activity evoked by stimulus presentation and interacts with sensory inputs in interesting ways. Here we show that the key features of neural variability and spontaneous activity can all be accounted for by a simple and completely deterministic neural network learning a predictive model of its sensory inputs. The network’s deterministic dynamics give rise to structured but variable responses matching key experimental findings obtained in different mammalian species with different recording techniques. Our results suggest that the notorious variability of neural recordings and the complex features of spontaneous brain activity could reflect the dynamics of a largely deterministic but highly adaptive network learning a predictive model of its sensory environment.
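A minimal SORN-style update can be sketched as below: binary threshold units with STDP, synaptic normalization, and an intrinsic-plasticity threshold rule. The network size and learning rates are illustrative assumptions, and the published model also includes inhibitory units and structured input, which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(2)
N, target_rate, eta_stdp, eta_ip = 100, 0.1, 0.004, 0.01

# Sparse random excitatory weights, no self-connections, rows normalized
W = rng.random((N, N)) * (rng.random((N, N)) < 0.1)
np.fill_diagonal(W, 0.0)
W /= np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
T = rng.random(N) * 0.5                                  # unit thresholds
x = (rng.random(N) < target_rate).astype(float)          # initial binary state

rates = []
for t in range(2000):
    x_new = (W @ x - T > 0).astype(float)                # deterministic threshold units
    # STDP: potentiate pre-before-post pairs, depress the reverse order
    W += eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new))
    np.clip(W, 0.0, None, out=W)
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1e-12) # synaptic normalization
    T += eta_ip * (x_new - target_rate)                  # intrinsic plasticity
    x = x_new
    rates.append(x.mean())
```

Despite being fully deterministic, the interacting plasticity rules keep activity irregular while homeostasis steers the time-averaged firing rate toward the target, which is the ingredient the abstract credits for noise-like variability.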
35
Schulz DPA, Sahani M, Carandini M. Five key factors determining pairwise correlations in visual cortex. J Neurophysiol 2015; 114:1022-33. [PMID: 26019310 PMCID: PMC4725109 DOI: 10.1152/jn.00094.2015]
Abstract
The responses of cortical neurons to repeated presentation of a stimulus are highly variable, yet correlated. These "noise correlations" reflect a low-dimensional structure of population dynamics. Here, we examine noise correlations in 22,705 pairs of neurons in primary visual cortex (V1) of anesthetized cats, during ongoing activity and in response to artificial and natural visual stimuli. We measured how noise correlations depend on 11 factors. Because these factors are themselves not independent, we distinguished their influences using a nonlinear additive model. The model revealed that five key factors play a predominant role in determining pairwise correlations. Two of these are distance in cortex and difference in sensory tuning: these are known to decrease correlation. A third factor is firing rate: confirming most earlier observations, it markedly increased pairwise correlations. A fourth factor is spike width: cells with a broad spike were more strongly correlated amongst each other. A fifth factor is spike isolation: neurons with worse isolation were more correlated, even if they were recorded on different electrodes. For pairs of neurons with poor isolation, this last factor was the main determinant of correlations. These results were generally independent of stimulus type and timescale of analysis, but there were exceptions. For instance, pairwise correlations depended on difference in orientation tuning more during responses to gratings than to natural stimuli. These results consolidate disjoint observations in a vast literature on pairwise correlations and point towards regularities of population coding in sensory cortex.
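The quantity under study, noise correlation, can be sketched directly: subtract each neuron's mean response per stimulus and correlate the trial-by-trial residuals. The simulated shared fluctuation below is an assumption for illustration; the paper's nonlinear additive model over 11 factors is not reproduced here.

```python
import numpy as np

def noise_correlation(counts_a, counts_b):
    """Noise correlation between two neurons: correlate trial-by-trial
    residuals after removing each neuron's mean response per stimulus.
    counts_* have shape (n_stimuli, n_trials)."""
    res_a = (counts_a - counts_a.mean(axis=1, keepdims=True)).ravel()
    res_b = (counts_b - counts_b.mean(axis=1, keepdims=True)).ravel()
    return float(np.corrcoef(res_a, res_b)[0, 1])

rng = np.random.default_rng(3)
tuning = rng.uniform(5, 20, size=(10, 1))      # stimulus-driven mean rates
shared = rng.normal(size=(10, 50))             # shared trial-to-trial fluctuation
n1 = tuning + 2.0 * shared + rng.normal(size=(10, 50))
n2 = tuning + 2.0 * shared + rng.normal(size=(10, 50))
n3 = tuning + rng.normal(size=(10, 50))        # private noise only
print(noise_correlation(n1, n2), noise_correlation(n1, n3))
```

Subtracting the per-stimulus means is what separates "noise" correlation from signal correlation: neurons sharing trial-to-trial fluctuations come out strongly correlated, neurons with purely private noise do not.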
Affiliation(s)
- David P A Schulz, COMPLeX, London, United Kingdom; Gatsby Computational Neuroscience Unit, London, United Kingdom; Institute of Ophthalmology, University College London, London, United Kingdom
- Maneesh Sahani, Gatsby Computational Neuroscience Unit, London, United Kingdom
- Matteo Carandini, Institute of Ophthalmology, University College London, London, United Kingdom
36
Nortmann N, Rekauzke S, Onat S, König P, Jancke D. Primary visual cortex represents the difference between past and present. Cereb Cortex 2015; 25:1427-40. [PMID: 24343889 PMCID: PMC4428292 DOI: 10.1093/cercor/bht318]
Abstract
The visual system is confronted with rapidly changing stimuli in everyday life. It is not well understood how information in such a stream of input is updated within the brain. We performed voltage-sensitive dye imaging across the primary visual cortex (V1) to capture responses to sequences of natural scene contours. We presented vertically and horizontally filtered natural images, and their superpositions, at 10 or 33 Hz. At low frequency, the encoding was found to represent not the currently presented images, but differences in orientation between consecutive images. This was in sharp contrast to more rapid sequences for which we found an ongoing representation of current input, consistent with earlier studies. Our finding that, for slower image sequences, V1 no longer reports actual features but represents their relative difference in time counteracts the view that the first cortical processing stage must always transfer complete information. Instead, we show its capacities for change detection with a new emphasis on the role of automatic computation evolving in the 100-ms range, inevitably affecting information transmission further downstream.
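The difference-coding idea (responding to the change in orientation content between consecutive images rather than to the current image) can be sketched with a crude gradient-based orientation-energy signal; the filters and stimuli here are illustrative assumptions, not the voltage-sensitive dye analysis.

```python
import numpy as np

def orientation_energy(img):
    """Crude orientation signal: vertical- vs horizontal-edge gradient energy."""
    gy, gx = np.gradient(img.astype(float))
    return np.array([(gx ** 2).sum(), (gy ** 2).sum()])

def change_response(img_prev, img_curr):
    """Respond to the change in orientation content between consecutive
    frames rather than to the current frame itself."""
    return float(np.abs(orientation_energy(img_curr) - orientation_energy(img_prev)).sum())

n = 64
x = np.tile(np.arange(n), (n, 1))
vertical = np.sin(2 * np.pi * 4 * x / n)   # vertically oriented grating
horizontal = vertical.T                     # same grating rotated 90 degrees
same = change_response(vertical, vertical)
switch = change_response(vertical, horizontal)
print(same, switch)
```

A repeated image yields no change signal, while an orientation switch yields a strong one: the defining signature of a difference code as opposed to a faithful representation of the current frame.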
Affiliation(s)
- Nora Nortmann, Optical Imaging Group, Institut für Neuroinformatik, Ruhr-University Bochum, 44780 Bochum, Germany; Bernstein Group for Computational Neuroscience, Ruhr-University Bochum, 44780 Bochum, Germany; Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück, Germany
- Sascha Rekauzke, Optical Imaging Group, Institut für Neuroinformatik, Ruhr-University Bochum, 44780 Bochum, Germany; Bernstein Group for Computational Neuroscience, Ruhr-University Bochum, 44780 Bochum, Germany
- Selim Onat, Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück, Germany
- Peter König, Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück, Germany; Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Dirk Jancke, Optical Imaging Group, Institut für Neuroinformatik, Ruhr-University Bochum, 44780 Bochum, Germany; Bernstein Group for Computational Neuroscience, Ruhr-University Bochum, 44780 Bochum, Germany
37
Fernandes HL, Stevenson IH, Phillips AN, Segraves MA, Kording KP. Saliency and saccade encoding in the frontal eye field during natural scene search. Cereb Cortex 2014; 24:3232-45. [PMID: 23863686 PMCID: PMC4240184 DOI: 10.1093/cercor/bht179]
Abstract
The frontal eye field (FEF) plays a central role in saccade selection and execution. Using artificial stimuli, many studies have shown that the activity of neurons in the FEF is affected by both visually salient stimuli in a neuron's receptive field and upcoming saccades in a certain direction. However, the extent to which visual and motor information is represented in the FEF in the context of the cluttered natural scenes we encounter during everyday life has not been explored. Here, we model the activities of neurons in the FEF, recorded while monkeys were searching natural scenes, using both visual and saccade information. We compare the contribution of bottom-up visual saliency (based on low-level features such as brightness, orientation, and color) and saccade direction. We find that, while saliency is correlated with the activities of some neurons, this relationship is ultimately driven by activities related to movement. Although bottom-up visual saliency contributes to the choice of saccade targets, it does not appear that FEF neurons actively encode the kind of saliency posited by popular saliency map theories. Instead, our results emphasize the FEF's role in the stages of saccade planning directly related to movement generation.
Affiliation(s)
- Hugo L. Fernandes, Department of Physical Medicine and Rehabilitation, Northwestern University and Rehabilitation Institute of Chicago, Chicago, IL 60611, USA; PDBC, Instituto Gulbenkian de Ciência, 2780 Oeiras, Portugal; Instituto de Tecnologia Química e Biológica, Universidade Nova de Lisboa, 2780 Oeiras, Portugal
- Ian H. Stevenson, Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA 94720, USA
- Adam N. Phillips, Tamagawa University, Brain Science Institute, Machida 194-8610, Japan; Department of Neurobiology, Northwestern University, Evanston, IL 60208, USA
- Mark A. Segraves, Department of Neurobiology, Northwestern University, Evanston, IL 60208, USA
- Konrad P. Kording, Department of Physical Medicine and Rehabilitation, Northwestern University and Rehabilitation Institute of Chicago, Chicago, IL 60611, USA; Department of Engineering Sciences and Applied Mathematics, Northwestern University, Evanston, IL 60208, USA; Department of Physiology, Northwestern University, Chicago, IL 60611, USA
38
Augmented saliency model using automatic 3D head pose detection and learned gaze following in natural scenes. Vision Res 2014; 116:113-26. [PMID: 25448115 DOI: 10.1016/j.visres.2014.10.027]
Abstract
Previous studies have shown that gaze direction of actors in a scene influences eye movements of passive observers during free-viewing (Castelhano, Wieth, & Henderson, 2007; Borji, Parks, & Itti, 2014). However, no computational model has been proposed to combine bottom-up saliency with actor's head pose and gaze direction for predicting where observers look. Here, we first learn probability maps that predict fixations leaving head regions (gaze following fixations), as well as fixations on head regions (head fixations), both dependent on the actor's head size and pose angle. We then learn a combination of gaze following, head region, and bottom-up saliency maps with a Markov chain composed of head region and non-head region states. This simple structure allows us to inspect the model and make comments about the nature of eye movements originating from heads as opposed to other regions. Here, we assume perfect knowledge of actor head pose direction (from an oracle). The combined model, which we call the Dynamic Weighting of Cues model (DWOC), explains observers' fixations significantly better than each of the constituent components. Finally, in a fully automatic combined model, we replace the oracle head pose direction data with detections from a computer vision model of head pose. Using these (imperfect) automated detections, we again find that the combined model significantly outperforms its individual components. Our work extends the engineering and scientific applications of saliency models and helps better understand mechanisms of visual attention.
39
Temporal statistics of natural image sequences generated by movements with insect flight characteristics. PLoS One 2014; 9:e110386. [PMID: 25340761 PMCID: PMC4207754 DOI: 10.1371/journal.pone.0110386]
Abstract
Many flying insects, such as flies, wasps and bees, pursue a saccadic flight and gaze strategy. This behavioral strategy is thought to separate the translational and rotational components of self-motion and, thereby, to reduce the computational efforts to extract information about the environment from the retinal image flow. Because of the distinguishing dynamic features of this active flight and gaze strategy of insects, the present study analyzes systematically the spatiotemporal statistics of image sequences generated during saccades and intersaccadic intervals in cluttered natural environments. We show that, in general, rotational movements with saccade-like dynamics elicit fluctuations and overall changes in brightness, contrast and spatial frequency of up to two orders of magnitude larger than translational movements at velocities that are characteristic of insects. Distinct changes in image parameters during translations are only caused by nearby objects. Image analysis based on larger patches in the visual field reveals smaller fluctuations in brightness and spatial frequency composition compared to small patches. The temporal structure and extent of these changes in image parameters define the temporal constraints imposed on signal processing performed by the insect visual system under behavioral conditions in natural environments.
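The per-frame image parameters analyzed here (brightness, contrast, spatial frequency composition) can be sketched as follows; the power-weighted mean radial frequency is one simple choice of spatial-frequency summary and is an assumption, not necessarily the paper's exact metric.

```python
import numpy as np

def frame_statistics(frame):
    """Per-frame brightness, RMS contrast, and a spatial-frequency summary:
    the power-weighted mean radial frequency of the 2-D Fourier spectrum."""
    brightness = frame.mean()
    contrast = frame.std()
    power = np.abs(np.fft.fft2(frame - brightness)) ** 2
    f = np.fft.fftfreq(frame.shape[0])        # cycles per pixel (square frame assumed)
    fy, fx = np.meshgrid(f, f, indexing="ij")
    radius = np.hypot(fx, fy)
    mean_sf = (radius * power).sum() / power.sum()
    return brightness, contrast, mean_sf

n = 64
x = np.tile(np.arange(n), (n, 1))
coarse = np.sin(2 * np.pi * 2 * x / n)   # 2 cycles per image width
fine = np.sin(2 * np.pi * 16 * x / n)    # 16 cycles per image width
print(frame_statistics(coarse), frame_statistics(fine))
```

Tracking these three numbers frame by frame through a simulated saccade or translation is enough to reproduce the kind of fluctuation statistics the study compares across movement types.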
40
Jurica P, Gepshtein S, Tyukin I, van Leeuwen C. Sensory optimization by stochastic tuning. Psychol Rev 2014; 120:798-816. [PMID: 24219849 DOI: 10.1037/a0034192]
Abstract
Individually, visual neurons are each selective for several aspects of stimulation, such as stimulus location, frequency content, and speed. Collectively, the neurons implement the visual system's preferential sensitivity to some stimuli over others, manifested in behavioral sensitivity functions. We ask how the individual neurons are coordinated to optimize visual sensitivity. We model synaptic plasticity in a generic neural circuit and find that stochastic changes in strengths of synaptic connections entail fluctuations in parameters of neural receptive fields. The fluctuations correlate with uncertainty of sensory measurement in individual neurons: The higher the uncertainty the larger the amplitude of fluctuation. We show that this simple relationship is sufficient for the stochastic fluctuations to steer sensitivities of neurons toward a characteristic distribution, from which follows a sensitivity function observed in human psychophysics and which is predicted by a theory of optimal allocation of receptive fields. The optimal allocation arises in our simulations without supervision or feedback about system performance and independently of coupling between neurons, making the system highly adaptive and sensitive to prevailing stimulation.
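The core mechanism, parameter fluctuations whose amplitude grows with measurement uncertainty, can be simulated in a few lines: a population of preferred stimuli performing a random walk with state-dependent step size accumulates where uncertainty is low. The uncertainty profile and all parameters below are illustrative assumptions, not the paper's circuit model.

```python
import numpy as np

rng = np.random.default_rng(4)

def uncertainty(theta):
    """Assumed uncertainty profile: measurement noise grows with theta."""
    return 0.05 + 0.5 * theta

# Preferred stimuli of 5000 model neurons, initially uniform on [0, 1]
theta = rng.random(5000)
for _ in range(5000):
    # fluctuation amplitude proportional to the local measurement uncertainty
    theta += uncertainty(theta) * rng.normal(scale=0.02, size=theta.size)
    np.clip(theta, 0.0, 1.0, out=theta)   # keep preferences in the stimulus range

low = float(np.mean(theta < 0.5))         # fraction settling where uncertainty is low
print(low)
```

No supervision or performance feedback enters the update: neurons simply linger longer where their steps are small, so the population density drifts toward low-uncertainty regions, the unsupervised allocation effect the abstract describes.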
|
41
|
Banerjee B, Dutta JK. SELP: A general-purpose framework for learning the norms from saliencies in spatiotemporal data. Neurocomputing 2014. [DOI: 10.1016/j.neucom.2013.02.044] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
|
42
|
Léveillé J, Hayashi I, Fukushima K. A Probabilistic WKL Rule for Incremental Feature Learning and Pattern Recognition. JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT INFORMATICS 2014. [DOI: 10.20965/jaciii.2014.p0672] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Recent advances in machine learning and computer vision have led to the development of several sophisticated learning schemes for object recognition by convolutional networks. One relatively simple learning rule, the Winner-Kill-Loser (WKL), was shown to be efficient at learning higher-order features in the neocognitron model when used in a written digit classification task. The WKL rule is one variant of incremental clustering procedures that adapt the number of cluster components to the input data. The WKL rule seeks to provide a complete, yet minimally redundant, covering of the input distribution. It is difficult to apply this approach directly to high-dimensional spaces since it leads to a dramatic explosion in the number of clustering components. In this work, a small generalization of the WKL rule is proposed to learn from high-dimensional data. We first show that the learning rule leads mostly to V1-like oriented cells when applied to natural images, suggesting that it captures second-order image statistics not unlike variants of Hebbian learning. We further embed the proposed learning rule into a convolutional network, specifically, the Neocognitron, and show its usefulness on a standard written digit recognition benchmark. Although the new learning rule leads to a small reduction in overall accuracy, this small reduction is accompanied by a major reduction in the number of coding nodes in the network. This in turn confirms that by learning statistical regularities rather than covering an entire input space, it may be possible to incrementally learn and retain most of the useful structure in the input distribution.
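The incremental clustering idea behind the WKL rule can be sketched as follows. This is an illustrative toy, not the paper's generalization: the similarity thresholds, learning rate, and unit-normalization are assumptions.

```python
import numpy as np

def wkl_step(centers, x, match_thresh=0.8, kill_thresh=0.95, lr=0.1):
    """One Winner-Kill-Loser step on unit-normalized inputs.

    If no center matches x well enough, x is recruited as a new
    component (incremental growth). Otherwise the best match (winner)
    absorbs x, and any other center too similar to the updated winner
    is removed (killed), keeping the covering minimally redundant.
    """
    x = x / np.linalg.norm(x)
    if not centers:
        return [x]
    sims = [c @ x for c in centers]
    w = int(np.argmax(sims))
    if sims[w] < match_thresh:
        return centers + [x]                 # recruit a new component
    winner = centers[w] + lr * (x - centers[w])
    winner = winner / np.linalg.norm(winner)
    # kill losers that have become redundant with the winner
    return [winner] + [c for i, c in enumerate(centers)
                       if i != w and c @ winner < kill_thresh]
```

Feeding dissimilar inputs grows the set of components, while repeated similar inputs refine the winner without adding redundant units.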
|
43
|
Finger H, König P. Phase synchrony facilitates binding and segmentation of natural images in a coupled neural oscillator network. Front Comput Neurosci 2014; 7:195. [PMID: 24478685 PMCID: PMC3902207 DOI: 10.3389/fncom.2013.00195] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2013] [Accepted: 12/30/2013] [Indexed: 11/13/2022] Open
Abstract
Synchronization has been suggested as a mechanism of binding distributed feature representations facilitating segmentation of visual stimuli. Here we investigate this concept based on unsupervised learning using natural visual stimuli. We simulate dual-variable neural oscillators with separate activation and phase variables. The binding of a set of neurons is coded by synchronized phase variables. The network of tangential synchronizing connections learned from the induced activations exhibits small-world properties and allows binding even over larger distances. We evaluate the resulting dynamic phase maps using segmentation masks labeled by human experts. Our simulation results show a continuously increasing phase synchrony between neurons within the labeled segmentation masks. The evaluation of the network dynamics shows that the synchrony between network nodes establishes a relational coding of the natural image inputs. This demonstrates that the concept of binding by synchrony is applicable in the context of unsupervised learning using natural visual stimuli.
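Binding by phase synchrony, as studied in this paper, can be illustrated with a Kuramoto-style sketch. This is a simplification of the authors' dual-variable oscillators (it omits the separate activation variable, and the coupling matrix here is hand-set rather than learned):

```python
import numpy as np

def simulate_phases(K, steps=200, dt=0.1, seed=0):
    """Kuramoto-style phase dynamics on coupling matrix K.

    Positively coupled oscillators pull their phases together, so each
    strongly coupled cluster (e.g. one image segment) ends up
    synchronized, while uncoupled clusters stay desynchronized.
    """
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    for _ in range(steps):
        # diff[i, j] = theta[j] - theta[i]
        diff = theta[None, :] - theta[:, None]
        theta = theta + dt * (K * np.sin(diff)).sum(axis=1)
    return theta % (2.0 * np.pi)
```

With a block-diagonal `K` of two groups, each group's phases converge to a common value, which is the relational code evaluated against the segmentation masks in the paper.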
Affiliation(s)
- Holger Finger
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Peter König
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany; Institute of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
|
44
|
Predictions in the light of your own action repertoire as a general computational principle. Behav Brain Sci 2013; 36:219-20. [PMID: 23663324 DOI: 10.1017/s0140525x12002294] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
We argue that brains generate predictions only within the constraints of the action repertoire. This makes the computational complexity tractable and fosters a step-by-step parallel development of sensory and motor systems. Hence, it is more of a benefit than a literal constraint and may serve as a universal normative principle to understand sensorimotor coupling and interactions with the world.
|
45
|
Leveillé J, Hannagan T. Learning spatial invariance with the trace rule in nonuniform distributions. Neural Comput 2013; 25:1261-76. [PMID: 23470122 DOI: 10.1162/neco_a_00435] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Convolutional models of object recognition achieve invariance to spatial transformations largely because of the use of a suitably defined pooling operator. This operator typically takes the form of a max or average function defined across units tuned to the same feature. As a model of the brain's ventral pathway, where computations are carried out by weighted synaptic connections, such pooling can lead to spatial invariance only if the weights that connect similarly tuned units to a given pooling unit are of approximately equal strengths. How identical weights can be learned in the face of nonuniformly distributed data remains unclear. In this letter, we show how various versions of the trace learning rule can help solve this problem. This allows us in turn to explain previously published results and make recommendations as to the optimal rule for invariance learning.
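One version of the trace rule analyzed in this letter can be sketched minimally as follows. This is an illustration under assumed values for the trace decay `eta` and learning rate `alpha`, not the authors' implementation:

```python
import numpy as np

def trace_rule_update(w, x_seq, alpha=0.1, eta=0.8):
    """One pass of a trace learning rule over a temporal input sequence.

    ybar is a decaying trace of the postsynaptic response, so inputs
    that follow each other in time (e.g. shifted views of one object)
    drive the same weight vector, which is what yields spatial
    invariance in the pooling unit.
    """
    ybar = 0.0
    for x in x_seq:
        y = float(w @ x)                     # postsynaptic response
        ybar = (1.0 - eta) * y + eta * ybar  # temporal trace of activity
        w = w + alpha * ybar * x             # Hebbian update gated by trace
        w = w / np.linalg.norm(w)            # weight normalization
    return w
```

Presenting a sequence of spatially shifted versions of one pattern makes the trace bridge the shifts, so the weights come to pool across positions.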
Affiliation(s)
- Jasmin Leveillé
- Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA 02215, USA.
|
46
|
Allen WL, Higham JP. Analyzing visual signals as visual scenes. Am J Primatol 2013; 75:664-82. [PMID: 23440880 DOI: 10.1002/ajp.22129] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2012] [Revised: 12/11/2012] [Accepted: 12/30/2012] [Indexed: 11/07/2022]
Abstract
The study of visual signal design is gaining momentum as techniques for studying signals become more sophisticated and more freely available. In this paper we discuss methods for analyzing the color and form of visual signals, for integrating signal components into visual scenes, and for producing visual signal stimuli for use in psychophysical experiments. Our recommended methods aim to be rigorous, detailed, quantitative, objective, and where possible based on the perceptual representation of the intended signal receiver(s). As methods for analyzing signal color and luminance have been outlined in previous publications, we focus on analyzing form information by discussing how statistical shape analysis (SSA) methods can be used to analyze signal shape, and spatial filtering to analyze repetitive patterns. We also suggest the use of vector-based approaches for integrating multiple signal components. In our opinion, elliptical Fourier analysis (EFA) is the most promising technique for shape quantification, but we await the results of empirical comparison of techniques and the development of new shape analysis methods based on the cognitive and perceptual representations of receivers. Our manuscript should serve as an introductory guide to those interested in measuring visual signals, and while our examples focus on primate signals, the methods are applicable to quantifying visual signals in most taxa.
Affiliation(s)
- William L Allen
- Department of Anthropology, New York University, New York, New York 10003, USA.
|
47
|
A three-layer model of natural image statistics. ACTA ACUST UNITED AC 2013; 107:369-98. [PMID: 23369823 DOI: 10.1016/j.jphysparis.2013.01.001] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2012] [Revised: 12/22/2012] [Accepted: 01/11/2013] [Indexed: 11/21/2022]
Abstract
An important property of visual systems is to be simultaneously both selective to specific patterns found in the sensory input and invariant to possible variations. Selectivity and invariance (tolerance) are opposing requirements. It has been suggested that they could be joined by iterating a sequence of elementary selectivity and tolerance computations. It is, however, unknown what should be selected or tolerated at each level of the hierarchy. We approach this issue by learning the computations from natural images. We propose and estimate a probabilistic model of natural images that consists of three processing layers. Two natural image data sets are considered: image patches, and complete visual scenes downsampled to the size of small patches. For both data sets, we find that in the first two layers, simple and complex cell-like computations are performed. In the third layer, we mainly find selectivity to longer contours; for patch data, we further find some selectivity to texture, while for the downsampled complete scenes, some selectivity to curvature is observed.
|
48
|
Egelhaaf M, Boeddeker N, Kern R, Kurtz R, Lindemann JP. Spatial vision in insects is facilitated by shaping the dynamics of visual input through behavioral action. Front Neural Circuits 2012; 6:108. [PMID: 23269913 PMCID: PMC3526811 DOI: 10.3389/fncir.2012.00108] [Citation(s) in RCA: 71] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2012] [Accepted: 12/03/2012] [Indexed: 11/30/2022] Open
Abstract
Insects such as flies or bees, with their miniature brains, are able to control highly aerobatic flight maneuvers and to solve spatial vision tasks, such as avoiding collisions with obstacles, landing on objects, or even localizing a previously learned inconspicuous goal on the basis of environmental cues. With regard to solving such spatial tasks, these insects still outperform man-made autonomous flying systems. To accomplish their extraordinary performance, flies and bees have been shown, through their characteristic behavioral actions, to actively shape the dynamics of the image flow on their eyes ("optic flow"). The neural processing of information about the spatial layout of the environment is greatly facilitated by segregating the rotational from the translational optic flow component through a saccadic flight and gaze strategy. This active vision strategy thus enables the nervous system to solve apparently complex spatial vision tasks in a particularly efficient and parsimonious way. The key idea of this review is that biological agents, such as flies or bees, acquire at least part of their strength as autonomous systems through active interactions with their environment and not by simply processing passively gained information about the world. These agent-environment interactions lead to adaptive behavior in surroundings of a wide range of complexity. Animals with even tiny brains, such as insects, are capable of performing extraordinarily well in their behavioral contexts by making optimal use of the closed action-perception loop. Model simulations and robotic implementations show that the smart biological mechanisms of motion computation and visually guided flight control might be helpful to find technical solutions, for example, when designing micro air vehicles carrying a miniaturized, low-weight on-board processor.
Affiliation(s)
- Martin Egelhaaf
- Neurobiology and Centre of Excellence “Cognitive Interaction Technology”, Bielefeld University, Bielefeld, Germany
|
49
|
Distinct functional properties of primary and posteromedial visual area of mouse neocortex. J Neurosci 2012; 32:9716-26. [PMID: 22787057 DOI: 10.1523/jneurosci.0110-12.2012] [Citation(s) in RCA: 77] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023] Open
Abstract
Visual input provides important landmarks for navigating in the environment, information that in mammals is processed by specialized areas in the visual cortex. In rodents, the posteromedial area (PM) mediates visual information between primary visual cortex (V1) and the retrosplenial cortex, which further projects to the hippocampus. Understanding the functional role of area PM requires a detailed analysis of its spatial frequency (SF) and temporal frequency (TF) tuning. Here, we applied two-photon calcium imaging to map neuronal tuning for orientation, direction, SF and TF, and speed in response to drifting gratings in V1 and PM of anesthetized mice. The distributions of orientation and direction tuning were similar in V1 and PM. Notably, in both areas we found a preference for cardinal compared to oblique orientations. The overrepresentation of cardinally tuned neurons was particularly strong in PM, showing narrow tuning bandwidths for horizontal and vertical orientations. A detailed analysis of SF and TF tuning revealed a broad range of highly tuned neurons in V1. In contrast, PM contained one subpopulation of neurons with high spatial acuity and a second subpopulation broadly tuned for low SFs. Furthermore, ∼20% of the responding neurons in V1 and only 12% in PM were tuned to the speed of drifting gratings, with PM preferring slower drift rates compared to V1. Together, these results show that PM is tuned for cardinal orientations, high SFs, and low speeds, and is located between V1 and the retrosplenial cortex, consistent with a role in processing natural scenes during spatial navigation.
|
50
|
Altered visual experience induces instructive changes of orientation preference in mouse visual cortex. J Neurosci 2011; 31:13911-20. [PMID: 21957253 DOI: 10.1523/jneurosci.2143-11.2011] [Citation(s) in RCA: 56] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Stripe rearing, the restriction of visual experience to contours of only one orientation, leads to an overrepresentation of the experienced orientation among neurons in the visual cortex. It is unclear, however, how these changes are brought about. Are they caused by silencing of neurons tuned to non-experienced orientations, or do some neurons change their preferred orientation? To address this question, we stripe-reared juvenile mice using cylinder lens goggles. Following stripe rearing, the orientation preference of cortical neurons was determined with two-photon calcium imaging. This allowed us to sample all neurons in a given field of view, including the non-responsive ones, thus overcoming a fundamental limitation of extracellular electrophysiological recordings. Stripe rearing for 3 weeks resulted in a clear overrepresentation of the experienced orientation in cortical layer 2/3. Closer inspection revealed that the stripe rearing effect changed with depth in cortex: The fraction of responsive neurons decreased in upper layer 2/3, but changed very little deeper in this layer. At the same time, the overrepresentation of the experienced orientation was strongest in lower layer 2/3. Thus, diverse mechanisms contribute to the overall stripe rearing effect, but for neurons in lower layer 2/3 the effect is mediated by an instructive mechanism, which alters the orientation tuning of individual neurons.
|