1. Feuerriegel D. Adaptation in the visual system: Networked fatigue or suppressed prediction error signalling? Cortex 2024; 177:302-320. [PMID: 38905873] [DOI: 10.1016/j.cortex.2024.06.003]
Abstract
Our brains are constantly adapting to changes in our visual environments. Neural adaptation exerts a persistent influence on the activity of sensory neurons and our perceptual experience; however, there is a lack of consensus regarding how adaptation is implemented in the visual system. One account describes fatigue-based mechanisms embedded within local networks of stimulus-selective neurons (networked fatigue models). Another depicts adaptation as a product of stimulus expectations (predictive coding models). In this review, I evaluate neuroimaging and psychophysical evidence that poses fundamental problems for predictive coding models of neural adaptation. Specifically, I discuss observations of distinct repetition and expectation effects, as well as incorrect predictions of repulsive adaptation aftereffects made by predictive coding accounts. Based on this evidence, I argue that networked fatigue models provide a more parsimonious account of adaptation effects in the visual system. Although stimulus expectations can be formed based on recent stimulation history, any consequences of these expectations are likely to co-occur (or interact) with effects of fatigue-based adaptation. I conclude by proposing novel, testable hypotheses relating to interactions between fatigue-based adaptation and other predictive processes, focusing on stimulus feature extrapolation phenomena.
Affiliation(s)
- Daniel Feuerriegel
- Melbourne School of Psychological Sciences, The University of Melbourne, Australia.
2. Tipado Z, Kuypers KPC, Sorger B, Ramaekers JG. Visual hallucinations originating in the retinofugal pathway under clinical and psychedelic conditions. Eur Neuropsychopharmacol 2024; 85:10-20. [PMID: 38648694] [DOI: 10.1016/j.euroneuro.2024.04.011]
Abstract
Psychedelics like LSD (lysergic acid diethylamide) and psilocybin are known to modulate perceptual modalities through the activation of receptors (mostly serotonin receptors) in specific cortical (e.g., visual cortex) and subcortical (e.g., thalamus) regions of the brain. In the visual domain, these psychedelic modulations often result in peculiar disturbances of viewed objects and light, and sometimes in hallucinations of non-existent environments, objects, and creatures. Although the underlying processes are poorly understood, research conducted over the past twenty years on the subjective experience of psychedelics has produced theories that attempt to explain these perceptual alterations as the result of disrupted communication between cortical and subcortical regions. However, rare medical conditions of the visual system that cause perceptual distortions, such as Charles Bonnet syndrome, may shed new light on the additional importance of the retinofugal pathway in psychedelic subjective experiences. Interneurons in the retina called amacrine cells could be the first site of visual psychedelic modulation and could aid in disrupting the hierarchical structure of how humans perceive visual information. This paper presents an understanding of how the retinofugal pathway communicates and modulates visual information under psychedelic and clinical conditions, and elucidates a new theory of psychedelic modulation in the retinofugal pathway.
Affiliation(s)
- Zeus Tipado
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, the Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, the Netherlands.
- Kim P C Kuypers
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, the Netherlands
- Bettina Sorger
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, the Netherlands
- Johannes G Ramaekers
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, the Netherlands
3. Manookin MB, Rieke F. Two Sides of the Same Coin: Efficient and Predictive Neural Coding. Annu Rev Vis Sci 2023; 9:293-311. [PMID: 37220331] [DOI: 10.1146/annurev-vision-112122-020941]
Abstract
Some visual properties are consistent across a wide range of environments, while other properties are more labile. The efficient coding hypothesis states that many of these regularities in the environment can be discarded from neural representations, thus allocating more of the brain's dynamic range to properties that are likely to vary. This paradigm is less clear about how the visual system prioritizes different pieces of information that vary across visual environments. One solution is to prioritize information that can be used to predict future events, particularly those that guide behavior. The relationship between the efficient coding and future prediction paradigms is an area of active investigation. In this review, we argue that these paradigms are complementary and often act on distinct components of the visual input. We also discuss how normative approaches to efficient coding and future prediction can be integrated.
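The redundancy-reduction idea summarized in this abstract can be made concrete with a toy decorrelation example. This is my own illustrative sketch, not code or data from the paper: two correlated "photoreceptor" signals are passed through a whitening (ZCA) transform, which discards the predictable shared component so that each output channel's dynamic range is spent on what actually varies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "photoreceptor" signals with strongly correlated fluctuations,
# mimicking the spatial redundancy of natural scenes.
cov = np.array([[1.0, 0.9],
                [0.9, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, size=50_000)

# ZCA whitening: W = C^(-1/2) removes the predictable correlation,
# so each output channel carries non-redundant signal.
evals, evecs = np.linalg.eigh(np.cov(x.T))
W = evecs @ np.diag(evals ** -0.5) @ evecs.T
y = x @ W.T

print(np.round(np.cov(y.T), 2))  # ~ identity matrix: channels decorrelated
```

Because second-order environmental regularities are captured in the covariance, discarding them here is one concrete form of efficient coding; prediction of future input, as the review discusses, targets temporal structure instead.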
Affiliation(s)
- Michael B Manookin
- Department of Ophthalmology, University of Washington, Seattle, Washington, USA;
- Vision Science Center, University of Washington, Seattle, Washington, USA
- Karalis Johnson Retina Center, University of Washington, Seattle, Washington, USA
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, Washington, USA;
- Vision Science Center, University of Washington, Seattle, Washington, USA
4. Maheswaranathan N, McIntosh LT, Tanaka H, Grant S, Kastner DB, Melander JB, Nayebi A, Brezovec LE, Wang JH, Ganguli S, Baccus SA. Interpreting the retinal neural code for natural scenes: From computations to neurons. Neuron 2023; 111:2742-2755.e4. [PMID: 37451264] [PMCID: PMC10680974] [DOI: 10.1016/j.neuron.2023.06.007]
Abstract
Understanding the circuit mechanisms of the visual code for natural scenes is a central goal of sensory neuroscience. We show that a three-layer network model predicts retinal natural scene responses with an accuracy nearing experimental limits. The model's internal structure is interpretable, as interneurons recorded separately and not modeled directly are highly correlated with model interneurons. Models fitted only to natural scenes reproduce a diverse set of phenomena related to motion encoding, adaptation, and predictive coding, establishing their ethological relevance to natural visual computation. A new approach decomposes the computations of model ganglion cells into the contributions of model interneurons, allowing automatic generation of new hypotheses for how interneurons with different spatiotemporal responses are combined to generate retinal computations, including predictive phenomena currently lacking an explanation. Our results demonstrate a unified and general approach to study the circuit mechanisms of ethological retinal computations under natural visual scenes.
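The three-layer network structure described above can be sketched as a stack of convolution-plus-rectification stages feeding a ganglion-cell output nonlinearity. The sketch below uses random, untrained weights and illustrative kernel sizes; the paper's actual model was fitted to recorded retinal responses, so this shows only the shape of the computation, not its parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d_valid(img, kernel):
    """Plain 'valid'-mode 2D correlation, single channel (illustrative)."""
    kh, kw = kernel.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def retina_cnn(stimulus, k1, k2, w_out):
    a1 = np.maximum(0.0, conv2d_valid(stimulus, k1))  # "bipolar-like" subunits
    a2 = np.maximum(0.0, conv2d_valid(a1, k2))        # "interneuron-like" units
    drive = np.sum(a2 * w_out)                        # pooling onto one ganglion cell
    return np.log1p(np.exp(drive))                    # softplus keeps the rate >= 0

stim = rng.standard_normal((16, 16))   # one white-noise stimulus frame
k1 = 0.1 * rng.standard_normal((5, 5))
k2 = 0.1 * rng.standard_normal((3, 3))
w_out = 0.1 * rng.standard_normal((10, 10))

rate = retina_cnn(stim, k1, k2, w_out)
print(rate)  # a single non-negative firing rate
```

The interpretability result in the paper rests on the fact that the hidden units of such a model (here `a1`, `a2`) can be compared directly against recorded interneurons.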
Affiliation(s)
- Lane T McIntosh
- Neuroscience Program, Stanford University School of Medicine, Stanford, CA, USA
- Hidenori Tanaka
- Department of Applied Physics, Stanford University, Stanford, CA, USA; Physics & Informatics Laboratories, NTT Research, Inc., Sunnyvale, CA, USA; Center for Brain Science, Harvard University, Cambridge, MA, USA
- Satchel Grant
- Department of Neurobiology, Stanford University, Stanford, CA, USA
- David B Kastner
- Neuroscience Program, Stanford University School of Medicine, Stanford, CA, USA
- Joshua B Melander
- Neuroscience Program, Stanford University School of Medicine, Stanford, CA, USA
- Aran Nayebi
- Neuroscience Program, Stanford University School of Medicine, Stanford, CA, USA
- Luke E Brezovec
- Neuroscience Program, Stanford University School of Medicine, Stanford, CA, USA
- Surya Ganguli
- Department of Applied Physics, Stanford University, Stanford, CA, USA
- Stephen A Baccus
- Department of Neurobiology, Stanford University, Stanford, CA, USA.
5. Johnson PA, Blom T, van Gaal S, Feuerriegel D, Bode S, Hogendoorn H. Position representations of moving objects align with real-time position in the early visual response. eLife 2023; 12:e82424. [PMID: 36656268] [PMCID: PMC9851612] [DOI: 10.7554/elife.82424]
Abstract
When interacting with the dynamic world, the brain receives outdated sensory information, due to the time required for neural transmission and processing. In motion perception, the brain may overcome these fundamental delays through predictively encoding the position of moving objects using information from their past trajectories. In the present study, we evaluated this proposition using multivariate analysis of high temporal resolution electroencephalographic data. We tracked neural position representations of moving objects at different stages of visual processing, relative to the real-time position of the object. During early stimulus-evoked activity, position representations of moving objects were activated substantially earlier than the equivalent activity evoked by unpredictable flashes, aligning the earliest representations of moving stimuli with their real-time positions. These findings indicate that the predictability of straight trajectories enables full compensation for the neural delays accumulated early in stimulus processing, but that delays still accumulate across later stages of cortical processing.
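The time-resolved multivariate analysis described above, decoding the represented stimulus position separately at each time point, can be illustrated on synthetic data. Everything below is fabricated for illustration (no real EEG): a simple nearest-centroid decoder stands in for the classifiers typically used, and a label-dependent pattern is switched on at a known latency so the decoder's time course can be read off.

```python
import numpy as np

rng = np.random.default_rng(2)

n_trials, n_chan, n_times, onset = 200, 32, 60, 30
labels = rng.integers(0, 2, n_trials)        # stimulus position: 0 or 1
pattern = rng.standard_normal(n_chan)        # position-specific scalp pattern

# Synthetic "EEG": noise everywhere, plus a label-dependent pattern that
# switches on at `onset`, mimicking the stimulus-evoked position signal.
X = rng.standard_normal((n_trials, n_chan, n_times))
sign = np.where(labels == 1, 1.0, -1.0)
X[:, :, onset:] += sign[:, None, None] * pattern[None, :, None]

train, test = slice(0, 100), slice(100, 200)
acc = np.empty(n_times)
for ti in range(n_times):
    # Nearest-centroid decoder trained independently at each time point
    mu0 = X[train][labels[train] == 0, :, ti].mean(axis=0)
    mu1 = X[train][labels[train] == 1, :, ti].mean(axis=0)
    d0 = np.linalg.norm(X[test][:, :, ti] - mu0, axis=1)
    d1 = np.linalg.norm(X[test][:, :, ti] - mu1, axis=1)
    acc[ti] = np.mean((d1 < d0) == labels[test])

print(acc[:onset].mean(), acc[onset:].mean())  # ~chance before onset, high after
```

In the actual study, comparing when such decoding curves rise for moving versus flashed stimuli is what reveals the latency shift attributed to predictive encoding.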
6. The Impact of Multivesicular Release on the Transmission of Sensory Information by Ribbon Synapses. J Neurosci 2022; 42:9401-9414. [PMID: 36344266] [PMCID: PMC9794368] [DOI: 10.1523/jneurosci.0717-22.2022]
Abstract
The statistics of vesicle release determine how synapses transfer information, but the classical Poisson model of independent release does not always hold at the first stages of vision and hearing. There, ribbon synapses also encode sensory signals as events comprising two or more vesicles released simultaneously. The implications of such coordinated multivesicular release (MVR) for spike generation are not known. Here we investigate how MVR alters the transmission of sensory information compared with Poisson synapses using a pure rate code. We used leaky integrate-and-fire models incorporating the statistics of release measured experimentally from glutamatergic synapses of retinal bipolar cells in zebrafish (both sexes) and compared these with models assuming Poisson inputs constrained to operate at the same average rates. We find that MVR can increase the number of spikes generated per vesicle while reducing interspike intervals and latency to first spike. The combined effect was to increase the efficiency of information transfer (bits per vesicle) over a range of conditions mimicking target neurons of different size. MVR was most advantageous in neurons with short time constants and reliable synaptic inputs, when less convergence was required to trigger spikes. In the special case of a single input driving a neuron, as occurs in the auditory system of mammals, MVR increased information transfer whenever spike generation required more than one vesicle. This study demonstrates how presynaptic integration of vesicles by MVR can increase the efficiency with which sensory information is transmitted compared with a rate code described by Poisson statistics.

SIGNIFICANCE STATEMENT: Neurons communicate by the stochastic release of vesicles at the synapse, and the statistics of this process determine how information is represented by spikes. The classical model is that vesicles are released independently by a Poisson process, but this does not hold at ribbon-type synapses specialized to transmit the first electrical signals in vision and hearing, where two or more vesicles can fuse in a single event by a process termed coordinated multivesicular release. This study shows that multivesicular release can increase the number of spikes generated per vesicle and the efficiency of information transfer (bits per vesicle) over a range of conditions found in the retina and peripheral auditory system.
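The modelling logic of this study, comparing leaky integrate-and-fire responses to MVR versus rate-matched Poisson input, can be sketched as follows. All parameter values (vesicle rate, membrane time constant, threshold) are illustrative choices of mine, not the experimentally measured release statistics used in the paper, and MVR is idealised as paired-vesicle events at half the event rate.

```python
import numpy as np

rng = np.random.default_rng(3)

def lif_spike_count(vesicle_times, q=1.0, tau=0.005, thresh=2.5,
                    dt=1e-4, t_max=10.0):
    """Leaky integrate-and-fire neuron driven by vesicle release events.

    Each vesicle deposits charge q, the membrane decays with time
    constant tau, and the cell spikes (and resets) at threshold.
    """
    events = np.sort(vesicle_times)
    v, spikes, idx = 0.0, 0, 0
    decay = np.exp(-dt / tau)
    for step in range(int(t_max / dt)):
        t = step * dt
        v *= decay
        while idx < len(events) and events[idx] <= t:
            v += q
            idx += 1
        if v >= thresh:
            spikes += 1
            v = 0.0
    return spikes

rate, t_max = 100.0, 10.0          # mean vesicles per second, matched below
n_ves = int(rate * t_max)

# Poisson condition: every vesicle is released independently.
poisson_times = rng.uniform(0.0, t_max, n_ves)

# MVR condition: half as many release events, two vesicles per event,
# so the average vesicle rate is identical.
mvr_times = np.repeat(rng.uniform(0.0, t_max, n_ves // 2), 2)

sp_poisson = lif_spike_count(poisson_times)
sp_mvr = lif_spike_count(mvr_times)
print(sp_poisson / n_ves, sp_mvr / n_ves)  # spikes per vesicle, MVR higher here
```

With a threshold above the single-event charge, the coincident vesicles of an MVR event carry the membrane much closer to threshold than the same vesicles arriving independently, which is the intuition behind the spikes-per-vesicle gain reported above.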
7. DePiero VJ, Borghuis BG. Phase advancing is a common property of multiple neuron classes in the mouse retina. eNeuro 2022; 9:ENEURO.0270-22.2022. [PMID: 35995559] [PMCID: PMC9450563] [DOI: 10.1523/eneuro.0270-22.2022]
Abstract
Behavioral interactions with moving objects are challenged by response latencies within the sensory and motor nervous systems. In vision, the combined latency from phototransduction and synaptic transmission from the retina to central visual areas amounts to 50-100 ms, depending on stimulus conditions. Time required for generating appropriate motor output adds to this latency and further compounds the behavioral delay. Neuronal adaptations that help counter sensory latency within the retina have been demonstrated in some species, but how general these specializations are, and where in the circuitry they originate, remains unclear. To address this, we studied the timing of object motion-evoked responses at multiple signaling stages within the mouse retina using two-photon fluorescence calcium and glutamate imaging, targeted whole-cell electrophysiology, and computational modeling. We found that both ON and OFF-type ganglion cells, as well as the bipolar cells that innervate them, temporally advance the position encoding of a moving object and so help counter the inherent signaling delay in the retina. Model simulations show that this predictive capability is a direct consequence of the spatial extent of the cells' linear visual receptive field, with no apparent specialized circuits that help predict beyond it.

Significance Statement: Signal transduction and synaptic transmission within sensory signaling pathways take time. Not a lot of time, just tens to a few hundred milliseconds depending on the sensory system, but enough to challenge fast behavioral interactions under dynamic stimulus conditions, like catching a moving fly. To counter neuronal delays, the nervous systems of many species use anticipatory mechanisms. One such mechanism in the mammalian visual system helps predict the future position of a moving target through a process called phase advancing. Here we ask how common phase advancing is across functionally diverse neuron populations in the mouse retina, and demonstrate that it is common and generated at multiple signaling stages.
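Phase advancing from a spatially extended linear receptive field can be illustrated with a generic toy model (my own sketch, not the paper's fitted model): a bar sweeping across a Gaussian spatial RF drives a response with a transient, partly differentiating temporal component, which shifts the response peak ahead of the moment the bar reaches the RF centre.

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 2.0, dt)                # 2 s sweep
x = -1.0 + 1.0 * t                         # bar position; crosses RF centre at t = 1 s

sigma = 0.1                                # spatial RF width (illustrative units)
drive = np.exp(-x**2 / (2 * sigma**2))     # purely spatial (linear) drive

# Transient temporal stage, idealised as "sustained + derivative":
# the derivative term emphasises the rising (leading) flank of the drive.
c = 0.05                                   # transient strength (s), illustrative
resp = drive + c * np.gradient(drive, dt)

t_centre = t[np.argmax(drive)]             # bar at the RF centre (~1.0 s)
t_peak = t[np.argmax(resp)]                # cell's response peak
print(t_centre, t_peak)                    # response peaks before the bar arrives
```

The advance arises with no dedicated circuit: the RF's leading flank is already being stimulated before the bar reaches the centre, and a transient response emphasises that early drive, consistent with the abstract's linear-RF explanation.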
Affiliation(s)
- Victor J DePiero
- Department of Anatomical Sciences and Neurobiology, University of Louisville School of Medicine, Louisville, KY 40202, USA
- Department of Biology, University of Virginia, Charlottesville, VA 22904, USA
- Bart G Borghuis
- Department of Anatomical Sciences and Neurobiology, University of Louisville School of Medicine, Louisville, KY 40202, USA
8. Wienbar S, Schwartz GW. Differences in spike generation instead of synaptic inputs determine the feature selectivity of two retinal cell types. Neuron 2022; 110:2110-2123.e4. [PMID: 35508174] [DOI: 10.1016/j.neuron.2022.04.012]
Abstract
Retinal ganglion cells (RGCs) are the spiking projection neurons of the eye that encode different features of the visual environment. The circuits providing synaptic input to different RGC types to drive feature selectivity have been studied extensively, but there has been less research aimed at understanding how RGCs' intrinsic properties impact feature selectivity. We introduce an RGC type in the mouse, the Bursty Suppressed-by-Contrast (bSbC) RGC, and compare it to the OFF sustained alpha (OFFsA) RGC. Differences in their contrast response functions arose not from differences in synaptic inputs but from their intrinsic properties. Spike generation was the key intrinsic property behind this functional difference; the bSbC RGC undergoes depolarization block while the OFFsA RGC maintains a high spike rate. Our results demonstrate that differences in intrinsic properties allow these two RGC types to detect and relay distinct features of an identical visual stimulus to the brain.
Affiliation(s)
- Sophia Wienbar
- Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA; Northwestern University Interdepartmental Neuroscience Program, Northwestern University, Evanston, IL 60208, USA
- Gregory William Schwartz
- Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA; Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA; Department of Neurobiology, Weinberg College of Arts and Sciences, Northwestern University, Evanston, IL 60208, USA.
9. Cessac B. Retinal Processing: Insights from Mathematical Modelling. J Imaging 2022; 8:14. [PMID: 35049855] [PMCID: PMC8780400] [DOI: 10.3390/jimaging8010014]
Abstract
The retina is the entrance to the visual system. Although based on common biophysical principles, the dynamics of retinal neurons are quite different from those of their cortical counterparts, raising interesting problems for modellers. In this paper, I address some mathematically stated questions in this spirit, discussing in particular: (1) How could lateral amacrine cell connectivity shape the spatio-temporal spike response of retinal ganglion cells? (2) How could spatio-temporal stimulus correlations and retinal network dynamics shape the spike train correlations at the output of the retina? These questions are addressed, first, by introducing a mathematically tractable model of the layered retina that integrates amacrine cells' lateral connectivity and piecewise linear rectification, allowing the computation of the retinal ganglion cells' receptive fields together with their voltage and spike correlations as they result from the amacrine cell network. I then review some recent results showing how the concept of spatio-temporal Gibbs distributions and linear response theory can be used to characterize the collective spike response of a set of retinal ganglion cells, coupled via effective interactions corresponding to the amacrine cell network, to a spatio-temporal stimulus. On these bases, I briefly discuss several potential consequences of these results at the cortical level.
Affiliation(s)
- Bruno Cessac
- INRIA Biovision Team and Neuromod Institute, Université Côte d'Azur, 2004 Route des Lucioles, BP 93, 06902 Valbonne, France
10. Predictive encoding of motion begins in the primate retina. Nat Neurosci 2021; 24:1280-1291. [PMID: 34341586] [PMCID: PMC8728393] [DOI: 10.1038/s41593-021-00899-1]
Abstract
Predictive motion encoding is an important aspect of visually guided behavior that allows animals to estimate the trajectory of moving objects. Motion prediction is understood primarily in the context of translational motion, but the environment contains other types of behaviorally salient motion correlation such as those produced by approaching or receding objects. However, the neural mechanisms that detect and predictively encode these correlations remain unclear. We report here that four of the parallel output pathways in the primate retina encode predictive motion information, and this encoding occurs for several classes of spatiotemporal correlation that are found in natural vision. Such predictive coding can be explained by known nonlinear circuit mechanisms that produce a nearly optimal encoding, with transmitted information approaching the theoretical limit imposed by the stimulus itself. Thus, these neural circuit mechanisms efficiently separate predictive information from nonpredictive information during the encoding process.
11. Yedutenko M, Howlett MHC, Kamermans M. High Contrast Allows the Retina to Compute More Than Just Contrast. Front Cell Neurosci 2021; 14:595193. [PMID: 33519381] [PMCID: PMC7843368] [DOI: 10.3389/fncel.2020.595193]
Abstract
The goal of sensory processing is to represent the environment of an animal. All sensory systems share a similar constraint: they need to encode a wide range of stimulus magnitudes within their narrow neuronal response range. The most efficient way, exploited by even the simplest nervous systems, is to encode relative changes in stimulus magnitude rather than absolute magnitudes. For instance, the retina encodes contrast, the variations of light intensity occurring in time and in space. From this perspective, it is easy to understand why the bright plumage of a moving bird gains a lot of attention, while an octopus remains motionless and mimics its surroundings for concealment. Stronger contrasts simply cause stronger visual signals. However, the gains in retinal performance associated with higher contrast are far greater than what can be attributed to a trivial linear increase in signal strength. Here we discuss how this improvement in performance is reflected throughout different parts of the neural circuitry and within its neural code, and how high contrast activates many non-linear mechanisms that unlock several sophisticated retinal computations that are virtually impossible under low-contrast conditions.
Affiliation(s)
- Matthew Yedutenko
- Retinal Signal Processing Lab, Netherlands Institute for Neuroscience, Amsterdam, Netherlands
- Marcus H. C. Howlett
- Retinal Signal Processing Lab, Netherlands Institute for Neuroscience, Amsterdam, Netherlands
- Maarten Kamermans
- Retinal Signal Processing Lab, Netherlands Institute for Neuroscience, Amsterdam, Netherlands
- Department of Biomedical Physics and Biomedical Optics, Amsterdam University Medical Center, University of Amsterdam, Amsterdam, Netherlands
12. Souihel S, Cessac B. On the potential role of lateral connectivity in retinal anticipation. J Math Neurosci 2021; 11:3. [PMID: 33420903] [PMCID: PMC7796858] [DOI: 10.1186/s13408-020-00101-z]
Abstract
We analyse the potential effects of lateral connectivity (amacrine cells and gap junctions) on motion anticipation in the retina. Our main result is that lateral connectivity can, under conditions analysed in the paper, trigger a wave of activity enhancing the anticipation mechanism provided by local gain control (Berry et al. in Nature 398(6725):334-338, 1999; Chen et al. in J. Neurosci. 33(1):120-132, 2013). We illustrate these predictions with two examples studied in the experimental literature: differential motion sensitive cells (Baccus and Meister in Neuron 36(5):909-919, 2002) and direction sensitive cells whose direction sensitivity is inherited from asymmetry in gap junction connectivity (Trenholm et al. in Nat. Neurosci. 16:154-156, 2013). We finally present reconstructions of retinal responses to 2D visual inputs to assess the ability of our model to anticipate motion in the case of three different 2D stimuli.
Affiliation(s)
- Selma Souihel
- Biovision Team and Neuromod Institute, Inria, Université Côte d'Azur, Nice, France.
- Bruno Cessac
- Biovision Team and Neuromod Institute, Inria, Université Côte d'Azur, Nice, France
13. Johnston J, Seibel SH, Darnet LSA, Renninger S, Orger M, Lagnado L. A Retinal Circuit Generating a Dynamic Predictive Code for Oriented Features. Neuron 2019; 102:1211-1222.e3. [PMID: 31054873] [PMCID: PMC6591004] [DOI: 10.1016/j.neuron.2019.04.002]
Abstract
Sensory systems must reduce the transmission of redundant information to function efficiently. One strategy is to continuously adjust the sensitivity of neurons to suppress responses to common features of the input while enhancing responses to new ones. Here we image the excitatory synaptic inputs and outputs of retinal ganglion cells to understand how such dynamic predictive coding is implemented in the analysis of spatial patterns. Synapses of bipolar cells become tuned to orientation through presynaptic inhibition, generating lateral antagonism in the orientation domain. Individual ganglion cells receive excitatory synapses tuned to different orientations, but feedforward inhibition generates a high-pass filter that only transmits the initial activation of these inputs, removing redundancy. These results demonstrate how a dynamic predictive code can be implemented by circuit motifs common to many parts of the brain.
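The high-pass filtering by feedforward inhibition described above, transmitting only the initial activation of newly engaged inputs, can be sketched with a minimal rate model. The parameters and the subtractive form of the inhibition are illustrative assumptions of mine, not the measured circuit.

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 1.0, dt)
stim = (t >= 0.2).astype(float)       # an oriented feature appears at t = 0.2 s

tau_inh = 0.05                        # inhibitory time constant (illustrative)
inh = np.zeros_like(stim)
for i in range(1, len(t)):
    # Feedforward inhibition as a leaky integrator of the excitatory drive
    inh[i] = inh[i - 1] + (dt / tau_inh) * (stim[i - 1] - inh[i - 1])

out = np.maximum(0.0, stim - inh)     # rectified excitation minus inhibition

early = out[np.searchsorted(t, 0.21)]  # 10 ms after the feature appears
late = out[np.searchsorted(t, 0.90)]   # feature still present, now "predicted"
print(early, late)                     # strong onset transient, then suppression
```

Because the inhibition tracks the recent excitatory drive, a sustained (and therefore predictable) feature is cancelled while a newly appearing one passes through, which is the redundancy-removing behaviour the abstract describes.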
Affiliation(s)
- Jamie Johnston
- School of Biomedical Sciences, Faculty of Biological Sciences, University of Leeds, Leeds LS2 9JT, UK
- Sofie-Helene Seibel
- Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton BN1 9QG, UK
- Michael Orger
- Champalimaud Centre for the Unknown, Lisbon 1400-038, Portugal
- Leon Lagnado
- Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton BN1 9QG, UK.
14. Wienbar S, Schwartz GW. The dynamic receptive fields of retinal ganglion cells. Prog Retin Eye Res 2018; 67:102-117. [PMID: 29944919] [PMCID: PMC6235744] [DOI: 10.1016/j.preteyeres.2018.06.003]
Abstract
Retinal ganglion cells (RGCs) were one of the first classes of sensory neurons to be described in terms of a receptive field (RF). Over the last six decades, our understanding of the diversity of RGC types and the nuances of their response properties has grown exponentially. We will review the current understanding of RGC RFs mostly from studies in mammals, but including work from other vertebrates as well. We will argue for a new paradigm that embraces the fluidity of RGC RFs with an eye toward the neuroethology of vision. Specifically, we will focus on (1) different methods for measuring RGC RFs, (2) RF models, (3) feature selectivity and the distinction between fluid and stable RF properties, and (4) ideas about the future of understanding RGC RFs.
Affiliation(s)
- Sophia Wienbar
- Departments of Ophthalmology and Physiology, Feinberg School of Medicine, Northwestern University, United States.
- Gregory W Schwartz
- Departments of Ophthalmology and Physiology, Feinberg School of Medicine, Northwestern University, United States.
15.
Affiliation(s)
- Peter A. White
- School of Psychology, Cardiff University, Cardiff, Wales, UK
16. Sağlam M, Hayashida Y. A single retinal circuit model for multiple computations. Biol Cybern 2018; 112:427-444. [PMID: 29951908] [DOI: 10.1007/s00422-018-0767-9]
Abstract
Vision is dependent on extracting intricate features of the visual information from the outside world, and complex visual computations begin to take place as soon as at the retinal level. In multiple studies on salamander retinas, the responses of a subtype of retinal ganglion cells, i.e., fast/biphasic-OFF ganglion cells, have been shown to be able to realize multiple functions, such as the segregation of a moving object from its background, motion anticipation, and rapid encoding of the spatial features of a new visual scene. For each of these visual functions, modeling approaches using extended linear-nonlinear cascade models suggest specific preceding retinal circuitries merging onto fast/biphasic-OFF ganglion cells. However, whether multiple visual functions can be accommodated together in a certain retinal circuitry and how specific mechanisms for each visual function interact with each other have not been investigated. Here, we propose a physiologically consistent, detailed computational model of the retinal circuit based on the spatiotemporal dynamics and connections of each class of retinal neurons to implement object motion sensitivity, motion anticipation, and rapid coding in the same circuit. Simulations suggest that multiple computations can be accommodated together, thereby implying that the fast/biphasic-OFF ganglion cell has potential to output a train of spikes carrying multiple pieces of information on distinct features of the visual stimuli.
Affiliation(s)
- Murat Sağlam
- Department of Advanced Analytics, Supply Chain Wizard LLC, 34870, Istanbul, Turkey.
- Yuki Hayashida
- Graduate School of Engineering, Osaka University, Suita, Osaka, 565-0871, Japan.
17
Murali G. Now you see me, now you don't: dynamic flash coloration as an antipredator strategy in motion. Anim Behav 2018. [DOI: 10.1016/j.anbehav.2018.06.017]
18
Antinucci P, Hindges R. Orientation-Selective Retinal Circuits in Vertebrates. Front Neural Circuits 2018; 12:11. [PMID: 29467629] [PMCID: PMC5808299] [DOI: 10.3389/fncir.2018.00011]
Abstract
Visual information is already processed in the retina before it is transmitted to higher visual centers in the brain. This includes the extraction of salient features from visual scenes, such as motion directionality or contrast, through neurons belonging to distinct neural circuits. Some retinal neurons are tuned to the orientation of elongated visual stimuli. Such ‘orientation-selective’ neurons are present in the retinae of most, if not all, vertebrate species analyzed to date, with species-specific differences in frequency and degree of tuning. In some cases, orientation-selective neurons have very stereotyped functional and morphological properties suggesting that they represent distinct cell types. In this review, we describe the retinal cell types underlying orientation selectivity found in various vertebrate species, and highlight their commonalities and differences. In addition, we discuss recent studies that revealed the cellular, synaptic and circuit mechanisms at the basis of retinal orientation selectivity. Finally, we outline the significance of these findings in shaping our current understanding of how this fundamental neural computation is implemented in the visual systems of vertebrates.
Affiliation(s)
- Paride Antinucci
- Centre for Developmental Neurobiology, King's College London, London, United Kingdom
- Robert Hindges
- Centre for Developmental Neurobiology, King's College London, London, United Kingdom
- MRC Centre for Neurodevelopmental Disorders, King's College London, London, United Kingdom
19
Franke K, Baden T. General features of inhibition in the inner retina. J Physiol 2017; 595:5507-5515. [PMID: 28332227] [PMCID: PMC5556161] [DOI: 10.1113/jp273648]
Abstract
Visual processing starts in the retina. Within only two synaptic layers, a large number of parallel information channels emerge, each encoding a highly processed feature such as edges or the direction of motion. Much of this functional diversity arises in the inner plexiform layer, where inhibitory amacrine cells modulate the excitatory signals of bipolar and ganglion cells. Studies investigating individual amacrine cell circuits, such as the starburst or A17 circuits, have demonstrated that single types can possess specific morphological and functional adaptations that serve a particular function in one or a small number of inner retinal circuits. However, the interconnected and often stereotypical network formed by different types of amacrine cells across the inner plexiform layer suggests that they are also involved in more general computations. In line with this notion, recent studies systematically analysing inner retinal signalling at the population level provide evidence that functions carried out by the ensemble of amacrine cells across types are critical for establishing universal principles of retinal computation, such as parallel processing and motion anticipation. Combining recent advances in the development of indicators for imaging inhibition with large-scale morphological and genetic classifications will help to further our understanding of how single amacrine cell circuits act together to decompose the visual scene into parallel information channels. In this review, we summarise the current state of the art in our understanding of how general features of amacrine cell inhibition lead to general features of computation.
Affiliation(s)
- Katrin Franke
- Centre for Integrative Neuroscience, University of Tübingen, Germany
- Institute for Ophthalmic Research, Tübingen, Germany
- Bernstein Centre for Computational Neuroscience, Tübingen, Germany
- Tom Baden
- Institute for Ophthalmic Research, Tübingen, Germany
- School of Life Sciences, University of Sussex, Brighton, UK
20
Three Small-Receptive-Field Ganglion Cells in the Mouse Retina Are Distinctly Tuned to Size, Speed, and Object Motion. J Neurosci 2017; 37:610-625. [PMID: 28100743] [DOI: 10.1523/jneurosci.2804-16.2016]
Abstract
Retinal ganglion cells (RGCs) are frequently divided into functional types by their ability to extract and relay specific features from a visual scene, such as the capacity to discern local or global motion, direction of motion, stimulus orientation, contrast or uniformity, or the presence of large or small objects. Here we introduce three previously uncharacterized, nondirection-selective ON-OFF RGC types that represent a distinct set of feature detectors in the mouse retina. The three high-definition (HD) RGCs possess small receptive-field centers and strong surround suppression. They respond selectively to objects of specific sizes, speeds, and types of motion. We present comprehensive morphological characterization of the HD RGCs and physiological recordings of their light responses, receptive-field size and structure, and synaptic mechanisms of surround suppression. We also explore the similarities and differences between the HD RGCs and a well-characterized RGC with a comparably small receptive field, the local edge detector, in response to moving objects and textures. We model populations of each RGC type to study how they differ in their performance tracking a moving object. These results, besides introducing three new RGC types that together constitute a substantial fraction of mouse RGCs, provide insights into the role of different circuits in shaping RGC receptive fields and establish a foundation for continued study of the mechanisms of surround suppression and the neural basis of motion detection. SIGNIFICANCE STATEMENT The output cells of the retina, retinal ganglion cells (RGCs), are a diverse group of ∼40 distinct neuron types that are often assigned "feature detection" profiles based on the specific aspects of the visual scene to which they respond. Here we describe, for the first time, morphological and physiological characterization of three new RGC types in the mouse retina, substantially augmenting our understanding of feature selectivity. Experiments and modeling show that while these three "high-definition" RGCs share certain receptive-field properties, they also have distinct tuning to the size, speed, and type of motion on the retina, enabling them to occupy different niches in stimulus space.
21
Matsumoto A, Tachibana M. Rapid and coordinated processing of global motion images by local clusters of retinal ganglion cells. Proc Jpn Acad Ser B Phys Biol Sci 2017; 93:234-249. [PMID: 28413199] [PMCID: PMC5489431] [DOI: 10.2183/pjab.93.015]
Abstract
Even when the body is stationary, the whole retinal image is kept in constant motion by fixational eye movements and by saccades that shift the eyes between fixation points. Accumulating evidence indicates that the brain is equipped with specific mechanisms for compensating for the global motion induced by these eye movements. However, it is not yet fully understood how the retina processes global motion images during eye movements. Here we show that global motion images evoke novel coordinated firing in retinal ganglion cells (GCs). We simultaneously recorded the firing of GCs in the isolated goldfish retina using a multi-electrode array and classified each GC based on the temporal profile of its receptive field (RF). A moving target accompanied by global motion (simulating a saccade following a period of fixational eye movements) modulated the RF properties and evoked synchronized and correlated firing among local clusters of specific GCs. Our findings provide a novel concept for retinal information processing during eye movements.
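The synchronized and correlated firing reported in multi-electrode recordings like these is commonly quantified with a spike-train cross-correlogram. The sketch below is a hypothetical illustration of that generic analysis, not the authors' pipeline; it omits the shuffle correction and normalization a real study would apply, and the bin size, lag range, and example spike times are assumptions.

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, bin_ms=1.0, max_lag_ms=50.0,
                      duration_ms=1000.0):
    """Coincidence counts between two spike trains at each time lag.
    Illustrative only: no shuffle correction or firing-rate normalization."""
    edges = np.arange(0.0, duration_ms + bin_ms, bin_ms)
    a, _ = np.histogram(spikes_a, edges)  # binned spike counts, train A
    b, _ = np.histogram(spikes_b, edges)  # binned spike counts, train B
    max_shift = int(max_lag_ms / bin_ms)
    shifts = np.arange(-max_shift, max_shift + 1)
    # Count coincidences of A with B circularly shifted by each lag
    counts = np.array([np.sum(a * np.roll(b, s)) for s in shifts])
    return shifts * bin_ms, counts

# Hypothetical example: train B fires 2 ms after train A on every cycle,
# so the correlogram peaks at a -2 ms lag (B shifted back aligns with A)
train_a = np.arange(10.0, 990.0, 20.0)  # spike times in ms
train_b = train_a + 2.0
lags, counts = cross_correlogram(train_a, train_b)
```

A sharp peak near zero lag, surviving correction for chance coincidences, is the usual signature of the kind of synchrony described in the abstract.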
Affiliation(s)
- Akihiro Matsumoto
- Department of Psychology, Graduate School of Humanities and Sociology, The University of Tokyo, Tokyo, Japan
- Masao Tachibana
- Department of Psychology, Graduate School of Humanities and Sociology, The University of Tokyo, Tokyo, Japan
- Center for Systems Vision Science, Organization of Science and Technology, Ritsumeikan University, Kusatsu, Shiga, Japan
22
Abstract
Sensorimotor delays decouple behaviors from the events that drive them. The brain compensates for these delays with predictive mechanisms, but the efficacy and timescale over which these mechanisms operate remain poorly understood. Here, we assess how prediction is used to compensate for prey movement that occurs during visuomotor processing. We obtained high-speed video records of freely moving, tongue-projecting salamanders catching walking prey, emulating natural foraging conditions. We found that tongue projections were preceded by a rapid head turn lasting ∼130 ms. This motor lag, combined with the ∼100 ms phototransduction delay at photopic light levels, gave a ∼230 ms visuomotor response delay during which prey typically moved approximately one body length. Tongue projections, however, did not significantly lag prey position but were instead highly accurate. Angular errors in tongue projection accuracy were consistent with a linear extrapolation model that predicted prey position at the time of tongue contact using the average prey motion during a ∼175 ms period one visual latency before the head movement. The model explained successful strikes, where the tongue hit the fly, and unsuccessful strikes, where the fly turned and the tongue hit a phantom location consistent with the fly's earlier trajectory. The model parameters, obtained from the data, agree with the temporal integration and latency of retinal responses proposed to contribute to motion extrapolation. These results show that the salamander predicts future prey position and that prediction significantly improves prey capture success over a broad range of prey speeds and light levels. SIGNIFICANCE STATEMENT Neural processing delays cause actions to lag behind the events that elicit them. To cope with these delays, the brain predicts what will happen in the future. While neural circuits in the retina and beyond have been suggested to participate in such predictions, few behaviors have been explored sufficiently to constrain circuit function. Here we show that salamanders aim their tongues by using extrapolation to estimate future prey position, thereby compensating for internal delays from both visual and motor processing. Predictions made just before a prey turn resulted in the tongue being projected to a position consistent with the prey's pre-turn trajectory. These results define the computations and operating regime for neural circuits that predict target motion.
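The linear extrapolation model described in this abstract can be illustrated with a minimal sketch: average the target's velocity over a trailing integration window, then project the last observed position forward across the visuomotor delay. The ∼175 ms window and ∼230 ms delay are taken from the abstract; the sampling rate, one-dimensional position, and function name are assumptions made for illustration.

```python
import numpy as np

def extrapolate_position(times_ms, positions, window_ms=175.0, delay_ms=230.0):
    """Predict target position one visuomotor delay into the future from
    the average velocity over a trailing integration window.
    Sketch of a linear extrapolation model; not the authors' fitted code."""
    times_ms = np.asarray(times_ms, dtype=float)
    positions = np.asarray(positions, dtype=float)
    # Keep only samples inside the trailing integration window
    in_window = times_ms >= times_ms[-1] - window_ms
    t_w, p_w = times_ms[in_window], positions[in_window]
    # Average velocity over the window (position units per ms)
    velocity = (p_w[-1] - p_w[0]) / (t_w[-1] - t_w[0])
    # Project the last observed position forward across the delay
    return p_w[-1] + velocity * delay_ms

# A target moving at a constant 0.02 units/ms is predicted exactly;
# a turn inside the delay would instead yield a "phantom" prediction
t = np.arange(0.0, 500.0, 10.0)   # sample times, ms
x = 0.02 * t                      # constant-velocity trajectory
predicted = extrapolate_position(t, x)
```

For straight-line motion the prediction lands on the true future position; if the prey turns after the window, the same computation points at the phantom location on the pre-turn trajectory, exactly the error pattern the abstract reports for unsuccessful strikes.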