1. Dabaghian Y. Grid cells, border cells, and discrete complex analysis. Front Comput Neurosci 2023; 17:1242300. PMID: 37881247; PMCID: PMC10595009; DOI: 10.3389/fncom.2023.1242300.
Abstract
We propose a mechanism enabling the appearance of border cells, neurons that fire at the boundaries of the navigated enclosures. The approach is based on the recent discovery of discrete complex analysis on a triangular lattice, which allows constructing discrete epitomes of complex-analytic functions and making use of their inherent ability to attain maximal values at the boundaries of generic lattice domains. As it turns out, certain elements of the discrete-complex framework readily appear in the oscillatory models of grid cells. We demonstrate that these models can extend further, producing cells that increase their activity toward the frontiers of the navigated environments. We also construct a network model of neurons with border-bound firing that conforms with the oscillatory models.
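The boundary-bound behavior invoked above has a familiar continuous counterpart. As background only (a hedged aside stating the classical continuous result, not the discrete lattice construction developed in the paper), the maximum modulus principle reads:

```latex
% Maximum modulus principle (continuous analogue of the boundary-maximization
% property used in the discrete setting; stated here purely as background).
% If f is holomorphic on a bounded domain D and continuous on its closure,
% then |f| attains its maximum on the boundary of D:
\[
  \max_{z \in \overline{D}} \lvert f(z) \rvert
  \;=\;
  \max_{z \in \partial D} \lvert f(z) \rvert .
\]
```

The paper's contribution, as the abstract describes, is to carry a discrete version of this boundary-maximization property over to functions on a triangular lattice, which is what pushes model firing toward the enclosure boundary.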
Affiliation(s)
- Yuri Dabaghian
- Department of Neurology, The University of Texas, McGovern Medical Center at Houston, Houston, TX, United States

2. Dabaghian Y. Grid cells, border cells and discrete complex analysis. bioRxiv [Preprint] 2023: 2023.05.06.539720. PMID: 37214803; PMCID: PMC10197584; DOI: 10.1101/2023.05.06.539720.
Abstract
We propose a mechanism enabling the appearance of border cells, neurons that fire at the boundaries of the navigated enclosures. The approach is based on the recent discovery of discrete complex analysis on a triangular lattice, which allows constructing discrete epitomes of complex-analytic functions and making use of their inherent ability to attain maximal values at the boundaries of generic lattice domains. As it turns out, certain elements of the discrete-complex framework readily appear in the oscillatory models of grid cells. We demonstrate that these models can extend further, producing cells that increase their activity towards the frontiers of the navigated environments. We also construct a network model of neurons with border-bound firing that conforms with the oscillatory models.
Affiliation(s)
- Yuri Dabaghian
- Department of Neurology, The University of Texas McGovern Medical School, 6431 Fannin St, Houston, TX 77030

3. Wang C, Fang C, Zou Y, Yang J, Sawan M. Artificial intelligence techniques for retinal prostheses: a comprehensive review and future direction. J Neural Eng 2023; 20. PMID: 36634357; DOI: 10.1088/1741-2552/acb295.
Abstract
Objective. Retinal prostheses are promising devices to restore vision for patients with severe age-related macular degeneration or retinitis pigmentosa. The visual processing mechanism embodied in retinal prostheses plays an important role in the restoration effect. Its performance depends on our understanding of the retina's working mechanism and on the evolution of computer vision models. Recently, remarkable progress has been made in processing algorithms for retinal prostheses, where new discoveries about the retina's working principles are combined with state-of-the-art computer vision models. Approach. We surveyed the related research on artificial intelligence techniques for retinal prostheses. The processing algorithms in these studies fall into three types: computer vision-related methods, biophysical models, and deep learning models. Main results. In this review, we first illustrate the structure and function of the normal and degenerated retina, then describe the vision rehabilitation mechanisms of three representative retinal prostheses. We summarize the computational frameworks abstracted from the normal retina. In addition, the development and features of the three types of processing algorithms are summarized. Finally, we analyze the bottlenecks of existing algorithms and propose future directions to improve the restoration effect. Significance. This review systematically summarizes existing processing models for predicting the response of the retina to external stimuli. Moreover, the suggested future directions may inspire researchers in this field to design better algorithms for retinal prostheses.
Affiliation(s)
- Chuanqing Wang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Chaoming Fang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Yong Zou
- Beijing Institute of Radiation Medicine, Beijing, People's Republic of China
- Jie Yang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Mohamad Sawan
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China

4. Liu JK, Karamanlis D, Gollisch T. Simple model for encoding natural images by retinal ganglion cells with nonlinear spatial integration. PLoS Comput Biol 2022; 18:e1009925. PMID: 35259159; PMCID: PMC8932571; DOI: 10.1371/journal.pcbi.1009925.
Abstract
A central goal in sensory neuroscience is to understand the neuronal signal processing involved in the encoding of natural stimuli. A critical step towards this goal is the development of successful computational encoding models. For ganglion cells in the vertebrate retina, the development of satisfactory models for responses to natural visual scenes is an ongoing challenge. Standard models typically apply linear integration of visual stimuli over space, yet many ganglion cells are known to show nonlinear spatial integration, in particular when stimulated with contrast-reversing gratings. We here study the influence of spatial nonlinearities in the encoding of natural images by ganglion cells, using multielectrode-array recordings from isolated salamander and mouse retinas. We assess how responses to natural images depend on first- and second-order statistics of spatial patterns inside the receptive field. This leads us to a simple extension of current standard ganglion cell models. We show that taking into account not only the weighted average of light intensity inside the receptive field but also its variance over space can partly account for nonlinear integration and substantially improve predictions of responses to novel images. For salamander ganglion cells, we find that response predictions for cell classes with large receptive fields profit most from including spatial contrast information. Finally, we demonstrate how this model framework can be used to assess the spatial scale of nonlinear integration. Our results underscore that nonlinear spatial stimulus integration translates to stimulation with natural images. Furthermore, the introduced model framework provides a simple, yet powerful extension of standard models and may serve as a benchmark for the development of more detailed models of the nonlinear structure of receptive fields. For understanding how sensory systems operate in the natural environment, an important goal is to develop models that capture neuronal responses to natural stimuli. For retinal ganglion cells, which connect the eye to the brain, current standard models often fail to capture responses to natural visual scenes. This shortcoming is at least partly rooted in the fact that ganglion cells may combine visual signals over space in a nonlinear fashion. We here show that a simple model, which not only considers the average light intensity inside a cell’s receptive field but also the variance of light intensity over space, can partly account for these nonlinearities and thereby improve current standard models. This provides an easy-to-obtain benchmark for modeling ganglion cell responses to natural images.
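The proposed model extension is simple enough to sketch. Below is a minimal, hypothetical illustration (the receptive-field profile and the two weighting coefficients are invented, not the fitted values from the study): the predicted response depends on both the weighted mean light intensity inside the receptive field and its weighted variance over space (spatial contrast).

```python
import numpy as np

def predict_response(image, rf_weights, w_mean=1.0, w_contrast=0.5):
    """Sketch of an LN-type model extended with a spatial-contrast term.

    image      : 2D array of light intensities covering the receptive field
    rf_weights : 2D array of non-negative receptive-field weights
    w_mean, w_contrast : hypothetical coefficients for the local-mean and
                         local-variance (spatial contrast) terms
    """
    w = rf_weights / rf_weights.sum()
    local_mean = np.sum(w * image)                      # weighted average intensity
    local_var = np.sum(w * (image - local_mean) ** 2)   # weighted variance over space
    drive = w_mean * local_mean + w_contrast * np.sqrt(local_var)
    return np.maximum(drive, 0.0)                       # rectifying output nonlinearity

# Toy usage: a contrast-reversing square-wave grating with near-zero mean still
# drives the model through the contrast term, mimicking nonlinear integration.
x = np.linspace(0, 4 * np.pi, 32)
grating = np.outer(np.ones(32), np.sign(np.sin(x)))
gauss = np.exp(-((np.arange(32) - 16) ** 2) / (2 * 6.0 ** 2))
rf = np.outer(gauss, gauss)
print(predict_response(grating, rf))
```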
Affiliation(s)
- Jian K. Liu
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- School of Computing, University of Leeds, Leeds, United Kingdom
- Dimokratis Karamanlis
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- International Max Planck Research School for Neurosciences, Göttingen, Germany
- Tim Gollisch
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Cluster of Excellence “Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells” (MBExC), University of Göttingen, Göttingen, Germany

5. Almasi A, Meffin H, Cloherty SL, Wong Y, Yunzab M, Ibbotson MR. Mechanisms of Feature Selectivity and Invariance in Primary Visual Cortex. Cereb Cortex 2020; 30:5067-5087. PMID: 32368778; DOI: 10.1093/cercor/bhaa102.
Abstract
Visual object identification requires both selectivity for specific visual features that are important to the object's identity and invariance to feature manipulations. For example, a hand can be shifted in position, rotated, or contracted but still be recognized as a hand. How are the competing requirements of selectivity and invariance built into the early stages of visual processing? Typically, cells in the primary visual cortex are classified as either simple or complex. Both show selectivity for edge orientation, but complex cells also develop invariance to edge position within the receptive field (spatial phase). Using a data-driven model that extracts the spatial structures and nonlinearities associated with neuronal computation, we quantitatively describe the balance between selectivity and invariance in complex cells. Phase invariance is frequently partial, while invariance to orientation and spatial frequency is more extensive than expected. The invariance arises due to two independent factors: (1) the structure and number of filters and (2) the form of nonlinearities that act upon the filter outputs. Both vary more than previously considered, so primary visual cortex forms an elaborate set of generic feature sensitivities, providing the foundation for more sophisticated object processing.
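The study recovers filters and nonlinearities directly from data; as a simpler, hedged illustration of how invariance can arise from the structure and number of filters, the textbook energy model of a complex cell combines a quadrature pair of oriented filters so that the output is unchanged when the grating phase shifts (the Gabor filters below are illustrative, not the fitted model from the paper).

```python
import numpy as np

def gabor(size=32, theta=0.0, freq=0.15, phase=0.0, sigma=6.0):
    """Oriented Gabor filter (illustrative receptive-field shape)."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr + phase)

def energy_model_response(stimulus, theta=0.0):
    """Classic complex-cell energy model: squared outputs of a quadrature
    pair (0 and 90 degree phase) are summed, yielding phase invariance."""
    f_even = gabor(theta=theta, phase=0.0)
    f_odd = gabor(theta=theta, phase=np.pi / 2)
    return (stimulus * f_even).sum() ** 2 + (stimulus * f_odd).sum() ** 2

# Responses to gratings of different spatial phase are nearly identical.
y, x = np.mgrid[-16:16, -16:16]
for grating_phase in (0.0, np.pi / 3, np.pi / 2):
    grating = np.cos(2 * np.pi * 0.15 * x + grating_phase)
    print(round(energy_model_response(grating), 2))
```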
Affiliation(s)
- Ali Almasi
- National Vision Research Institute, Australian College of Optometry, Carlton VIC 3053, Australia
- Hamish Meffin
- National Vision Research Institute, Australian College of Optometry, Carlton VIC 3053, Australia
- Department of Biomedical Engineering, The University of Melbourne, Parkville VIC 3010, Australia
- Shaun L Cloherty
- School of Engineering, RMIT University, Melbourne VIC 3001, Australia
- Yan Wong
- Department of Electrical and Computer Systems Engineering and Department of Physiology, Monash University, Clayton VIC 3800, Australia
- Molis Yunzab
- National Vision Research Institute, Australian College of Optometry, Carlton VIC 3053, Australia
- Michael R Ibbotson
- National Vision Research Institute, Australian College of Optometry, Carlton VIC 3053, Australia
- Department of Optometry and Vision Sciences, The University of Melbourne, Parkville VIC 3010, Australia

6. Latimer KW, Rieke F, Pillow JW. Inferring synaptic inputs from spikes with a conductance-based neural encoding model. eLife 2019; 8:e47012. PMID: 31850846; PMCID: PMC6989090; DOI: 10.7554/elife.47012.
Abstract
Descriptive statistical models of neural responses generally aim to characterize the mapping from stimuli to spike responses while ignoring biophysical details of the encoding process. Here, we introduce an alternative approach, the conductance-based encoding model (CBEM), which describes a mapping from stimuli to excitatory and inhibitory synaptic conductances governing the dynamics of sub-threshold membrane potential. Remarkably, we show that the CBEM can be fit to extracellular spike train data and then used to predict excitatory and inhibitory synaptic currents. We validate these predictions with intracellular recordings from macaque retinal ganglion cells. Moreover, we offer a novel quasi-biophysical interpretation of the Poisson generalized linear model (GLM) as a special case of the CBEM in which excitation and inhibition are perfectly balanced. This work forges a new link between statistical and biophysical models of neural encoding and sheds new light on the biophysical variables that underlie spiking in the early visual pathway.
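A rough sketch of the forward model structure described above, with made-up filters and constants purely to show the shape of the computation (not the fitted CBEM parameters): stimulus-driven excitatory and inhibitory conductances push a subthreshold membrane potential toward their respective reversal potentials, and a nonlinearity of the voltage sets the conditional spike rate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ingredients (illustrative only).
T, dt = 2000, 0.001                        # time bins, bin size (s)
stim = rng.standard_normal(T)              # one-dimensional stimulus
k_e = np.exp(-np.arange(30) / 5.0)         # excitatory stimulus filter
k_i = 0.6 * np.exp(-np.arange(30) / 10.0)  # slower inhibitory stimulus filter
E_e, E_i, E_l, g_l = 0.0, -80.0, -65.0, 1.0  # reversal potentials (mV), leak

# Conductances are rectified linear functions of the filtered stimulus.
g_e = np.maximum(np.convolve(stim, k_e)[:T], 0.0)
g_i = np.maximum(np.convolve(stim, k_i)[:T], 0.0)

# Integrate the subthreshold membrane potential (forward Euler, tau ~ 10 ms).
V = np.full(T, E_l)
for t in range(1, T):
    dV = g_l * (E_l - V[t - 1]) + g_e[t] * (E_e - V[t - 1]) + g_i[t] * (E_i - V[t - 1])
    V[t] = V[t - 1] + dt * dV / 0.01       # 0.01 ~ membrane capacitance (arbitrary)

# Output nonlinearity of voltage gives the conditional intensity; Poisson spiking.
rate = 20.0 * np.log1p(np.exp((V + 60.0) / 4.0))   # softplus nonlinearity (Hz)
spikes = rng.poisson(rate * dt)
print(int(spikes.sum()), "spikes")
```

Fitting this structure to extracellular spike trains alone, as the paper reports doing, is the hard part; the sketch only shows the generative direction.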
Affiliation(s)
- Kenneth W Latimer
- Department of Physiology and Biophysics, University of Washington, Seattle, United States
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, United States
- Jonathan W Pillow
- Princeton Neuroscience Institute, Department of Psychology, Princeton University, Princeton, United States

7.
Abstract
With modern neurophysiological methods able to record neural activity throughout the visual pathway in the context of arbitrarily complex visual stimulation, our understanding of visual system function is becoming limited by the available models of visual neurons that can be directly related to such data. Different forms of statistical models are now being used to probe the cellular and circuit mechanisms shaping neural activity, understand how neural selectivity to complex visual features is computed, and derive the ways in which neurons contribute to systems-level visual processing. However, models that are able to more accurately reproduce observed neural activity often defy simple interpretations. As a result, rather than being used solely to connect with existing theories of visual processing, statistical modeling will increasingly drive the evolution of more sophisticated theories.
Affiliation(s)
- Daniel A. Butts
- Department of Biology and Program in Neuroscience and Cognitive Science, University of Maryland, College Park, Maryland 20742, USA

8. Maheswaranathan N, Kastner DB, Baccus SA, Ganguli S. Inferring hidden structure in multilayered neural circuits. PLoS Comput Biol 2018; 14:e1006291. PMID: 30138312; PMCID: PMC6124781; DOI: 10.1371/journal.pcbi.1006291.
Abstract
A central challenge in sensory neuroscience involves understanding how neural circuits shape computations across cascaded cell layers. Here we attempt to reconstruct the response properties of experimentally unobserved neurons in the interior of a multilayered neural circuit, using cascaded linear-nonlinear (LN-LN) models. We combine non-smooth regularization with proximal consensus algorithms to overcome difficulties in fitting such models that arise from the high dimensionality of their parameter space. We apply this framework to retinal ganglion cell processing, learning LN-LN models of retinal circuitry consisting of thousands of parameters, using 40 minutes of responses to white noise. Our models demonstrate a 53% improvement in predicting ganglion cell spikes over classical linear-nonlinear (LN) models. Internal nonlinear subunits of the model match properties of retinal bipolar cells in both receptive field structure and number. Subunits have consistently high thresholds, suppressing all but a small fraction of inputs, leading to sparse activity patterns in which only one subunit drives ganglion cell spiking at any time. From the model’s parameters, we predict that the removal of visual redundancies through stimulus decorrelation across space, a central tenet of efficient coding theory, originates primarily from bipolar cell synapses. Furthermore, the composite nonlinear computation performed by retinal circuitry corresponds to a Boolean OR function applied to bipolar cell feature detectors. Our methods are statistically and computationally efficient, enabling us to rapidly learn hierarchical nonlinear models as well as efficiently compute widely used descriptive statistics such as the spike triggered average (STA) and covariance (STC) for high dimensional stimuli. This general computational framework may aid in extracting principles of nonlinear hierarchical sensory processing across diverse modalities from limited data. Computation in neural circuits arises from the cascaded processing of inputs through multiple cell layers. Each of these cell layers performs operations such as filtering and thresholding in order to shape a circuit’s output. It remains a challenge to describe both the computations and the mechanisms that mediate them given limited data recorded from a neural circuit. A standard approach to describing circuit computation involves building quantitative encoding models that predict the circuit response given its input, but these often fail to map in an interpretable way onto mechanisms within the circuit. In this work, we build two layer linear-nonlinear cascade models (LN-LN) in order to describe how the retinal output is shaped by nonlinear mechanisms in the inner retina. We find that these LN-LN models, fit to ganglion cell recordings alone, identify filters and nonlinearities that are readily mapped onto individual circuit components inside the retina, namely bipolar cells and the bipolar-to-ganglion cell synaptic threshold. This work demonstrates how combining simple prior knowledge of circuit properties with partial experimental recordings of a neural circuit’s output can yield interpretable models of the entire circuit computation, including parts of the circuit that are hidden or not directly observed in neural recordings.
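The LN-LN cascade structure described above can be written compactly; the subunit filters and thresholds below are invented for illustration (the paper fits them to data): bipolar-cell-like subunits each filter the stimulus and pass the result through a high-threshold rectifier, and the ganglion-cell stage applies a second nonlinearity to the pooled subunit outputs.

```python
import numpy as np

rng = np.random.default_rng(1)

def lnln_rate(stimulus, subunit_filters, thresholds, out_gain=5.0):
    """Two-layer LN-LN cascade: linear subunit filters -> threshold
    rectification -> pooling -> output nonlinearity (all illustrative)."""
    drive = subunit_filters @ stimulus                 # first linear stage
    subunit_out = np.maximum(drive - thresholds, 0.0)  # high-threshold rectification
    pooled = subunit_out.sum()                         # ganglion cell pools subunits
    return out_gain * np.log1p(np.exp(pooled))         # output nonlinearity (softplus)

# Toy example: 8 subunits, 40-dimensional stimulus, thresholds set high so that
# typically only a small fraction of subunits is active (sparse subunit activity).
n_sub, dim = 8, 40
filters = rng.standard_normal((n_sub, dim)) / np.sqrt(dim)
thresholds = np.full(n_sub, 1.5)

stim = rng.standard_normal(dim)
rate = lnln_rate(stim, filters, thresholds)
spike_count = rng.poisson(rate * 0.01)   # Poisson spiking in a 10 ms bin
print(round(float(rate), 3), int(spike_count))
```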
Affiliation(s)
- Niru Maheswaranathan
- Neurosciences Graduate Program, Stanford University, Stanford, California, United States of America
- David B. Kastner
- Neurosciences Graduate Program, Stanford University, Stanford, California, United States of America
- Stephen A. Baccus
- Department of Neurobiology, Stanford University, Stanford, California, United States of America
- Surya Ganguli
- Department of Applied Physics, Stanford University, Stanford, California, United States of America

9. Westö J, May PJC. Describing complex cells in primary visual cortex: a comparison of context and multifilter LN models. J Neurophysiol 2018; 120:703-719. PMID: 29718805; PMCID: PMC6139451; DOI: 10.1152/jn.00916.2017.
Abstract
Receptive field (RF) models are an important tool for deciphering neural responses to sensory stimuli. The two currently popular RF models are multifilter linear-nonlinear (LN) models and context models. Models are, however, never correct, and they rely on assumptions to keep them simple enough to be interpretable. As a consequence, different models describe different stimulus-response mappings, which may or may not be good approximations of real neural behavior. In the current study, we take up two tasks: 1) we introduce new ways to estimate context models with realistic nonlinearities, that is, with logistic and exponential functions, and 2) we evaluate context models and multifilter LN models in terms of how well they describe recorded data from complex cells in cat primary visual cortex. Our results, based on single-spike information and correlation coefficients, indicate that context models outperform corresponding multifilter LN models of equal complexity (measured in terms of number of parameters), with the best increase in performance being achieved by the novel context models. Consequently, our results suggest that the multifilter LN-model framework is suboptimal for describing the behavior of complex cells: the context-model framework is clearly superior while still providing interpretable quantifications of neural behavior. NEW & NOTEWORTHY We used data from complex cells in primary visual cortex to estimate a wide variety of receptive field models from two frameworks that have previously not been compared with each other. The models included traditionally used multifilter linear-nonlinear models and novel variants of context models. Using mutual information and correlation coefficients as performance measures, we showed that context models are superior for describing complex cells and that the novel context models performed the best.
Affiliation(s)
- Johan Westö
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Patrick J C May
- Department of Psychology, Lancaster University, Lancaster, United Kingdom

10. Sandler RA, Geng K, Song D, Hampson RE, Witcher MR, Deadwyler SA, Berger TW, Marmarelis VZ. Designing Patient-Specific Optimal Neurostimulation Patterns for Seizure Suppression. Neural Comput 2018; 30:1180-1208. PMID: 29566356; DOI: 10.1162/neco_a_01075.
Abstract
Neurostimulation is a promising therapy for abating epileptic seizures. However, it is extremely difficult to identify optimal stimulation patterns experimentally. In this study, human recordings are used to develop a functional 24-neuron network statistical model of hippocampal connectivity and dynamics. Spontaneous seizure-like activity is induced in silico in this reconstructed neuronal network. The network is then used as a testbed to design and validate a wide range of neurostimulation patterns. Commonly used periodic trains were not able to permanently abate seizures at any frequency. A simulated annealing global optimization algorithm was then used to identify an optimal stimulation pattern, which successfully abated 92% of seizures. Finally, in a fully responsive, or closed-loop, neurostimulation paradigm, the optimal stimulation successfully prevented the network from entering the seizure state. We propose that the framework presented here for algorithmically identifying patient-specific neurostimulation patterns can greatly increase the efficacy of neurostimulation devices for seizures.
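The optimization step named in the abstract (simulated annealing over stimulation patterns) can be sketched generically. Everything below is hypothetical: seizure_cost stands in for a network simulation that scores how much seizure-like activity remains under a candidate binary stimulation pattern, and the cooling schedule is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def seizure_cost(pattern):
    """Placeholder objective: in the real setting this would run the network
    model and return the remaining fraction of seizure-like activity."""
    target = (np.arange(pattern.size) % 3 == 0).astype(float)  # arbitrary structure
    return np.mean((pattern - target) ** 2) + 0.01 * pattern.mean()

def anneal(n_bins=60, n_iter=5000, t0=1.0, t_final=1e-3):
    """Simulated annealing over binary stimulation patterns."""
    pattern = rng.integers(0, 2, n_bins).astype(float)
    cost = seizure_cost(pattern)
    for i in range(n_iter):
        temp = t0 * (t_final / t0) ** (i / n_iter)   # geometric cooling schedule
        candidate = pattern.copy()
        flip = rng.integers(n_bins)
        candidate[flip] = 1.0 - candidate[flip]      # flip one stimulation bin
        new_cost = seizure_cost(candidate)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if new_cost < cost or rng.random() < np.exp(-(new_cost - cost) / temp):
            pattern, cost = candidate, new_cost
    return pattern, cost

best_pattern, best_cost = anneal()
print(best_cost)
```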
Affiliation(s)
- Roman A Sandler
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA 90089, U.S.A.
- Kunling Geng
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA 90089, U.S.A.
- Dong Song
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA 90089, U.S.A.
- Robert E Hampson
- Department of Physiology and Pharmacology, Wake Forest University, Winston-Salem, NC 27109, U.S.A.
- Mark R Witcher
- Department of Neurosurgery, Wake Forest University, Winston-Salem, NC 27109, U.S.A.
- Sam A Deadwyler
- Department of Physiology and Pharmacology, Wake Forest University, Winston-Salem, NC 27109, U.S.A.
- Theodore W Berger
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA 90089, U.S.A.
- Vasilis Z Marmarelis
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA 90089, U.S.A.

11. Weber AI, Pillow JW. Capturing the Dynamical Repertoire of Single Neurons with Generalized Linear Models. Neural Comput 2017; 29:3260-3289. PMID: 28957020; DOI: 10.1162/neco_a_01021.
Abstract
A key problem in computational neuroscience is to find simple, tractable models that are nevertheless flexible enough to capture the response properties of real neurons. Here we examine the capabilities of recurrent point process models known as Poisson generalized linear models (GLMs). These models are defined by a set of linear filters and a point nonlinearity and are conditionally Poisson spiking. They have desirable statistical properties for fitting and have been widely used to analyze spike trains from electrophysiological recordings. However, the dynamical repertoire of GLMs has not been systematically compared to that of real neurons. Here we show that GLMs can reproduce a comprehensive suite of canonical neural response behaviors, including tonic and phasic spiking, bursting, spike rate adaptation, type I and type II excitation, and two forms of bistability. GLMs can also capture stimulus-dependent changes in spike timing precision and reliability that mimic those observed in real neurons, and can exhibit varying degrees of stochasticity, from virtually deterministic responses to greater-than-Poisson variability. These results show that Poisson GLMs can exhibit a wide range of dynamic spiking behaviors found in real neurons, making them well suited for qualitative dynamical as well as quantitative statistical studies of single-neuron and population response properties.
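For reference, a minimal simulator for the model class studied here (filter shapes chosen arbitrarily, not the parameter regimes catalogued in the paper): a stimulus filter and a spike-history filter feed an exponential nonlinearity, and spikes are drawn from a conditional Poisson distribution. The history filter is the ingredient that lets the same model class produce refractoriness, bursting, adaptation, and related dynamics.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_glm(stim, k_stim, h_hist, bias=2.0, dt=0.001):
    """Recurrent Poisson GLM: rate(t) = exp(bias + stimulus drive + history drive)."""
    T = len(stim)
    stim_drive = np.convolve(stim, k_stim)[:T]
    spikes = np.zeros(T)
    hist_drive = np.zeros(T)
    L = len(h_hist)
    for t in range(T):
        rate = np.exp(bias + stim_drive[t] + hist_drive[t])   # conditional intensity (Hz)
        spikes[t] = rng.poisson(rate * dt)
        if spikes[t] > 0:                     # feed spikes back through the history filter
            end = min(T, t + 1 + L)
            hist_drive[t + 1:end] += spikes[t] * h_hist[:end - t - 1]
    return spikes

# Arbitrary illustrative filters: an excitatory stimulus filter, and a history
# filter with brief refractoriness followed by a small rebound (burst-promoting).
k_stim = np.exp(-np.arange(20) / 5.0)
h_hist = np.concatenate([-8.0 * np.ones(3), 1.5 * np.exp(-np.arange(30) / 10.0)])
stim = rng.standard_normal(3000)
spikes = simulate_glm(stim, k_stim, h_hist)
print(int(spikes.sum()), "spikes")
```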
Affiliation(s)
- Alison I Weber
- Graduate Program in Neuroscience, University of Washington, Seattle, WA 98195, U.S.A.
- Jonathan W Pillow
- Princeton Neuroscience Institute, Princeton University, and Department of Psychology, Princeton University, Princeton, NJ 08540, U.S.A.

12. Meyer AF, Williamson RS, Linden JF, Sahani M. Models of Neuronal Stimulus-Response Functions: Elaboration, Estimation, and Evaluation. Front Syst Neurosci 2017; 10:109. PMID: 28127278; PMCID: PMC5226961; DOI: 10.3389/fnsys.2016.00109.
Abstract
Rich, dynamic, and dense sensory stimuli are encoded within the nervous system by the time-varying activity of many individual neurons. A fundamental approach to understanding the nature of the encoded representation is to characterize the function that relates the moment-by-moment firing of a neuron to the recent history of a complex sensory input. This review provides a unifying and critical survey of the techniques that have been brought to bear on this effort thus far—ranging from the classical linear receptive field model to modern approaches incorporating normalization and other nonlinearities. We address separately the structure of the models; the criteria and algorithms used to identify the model parameters; and the role of regularizing terms or “priors.” In each case we consider benefits or drawbacks of various proposals, providing examples for when these methods work and when they may fail. Emphasis is placed on key concepts rather than mathematical details, so as to make the discussion accessible to readers from outside the field. Finally, we review ways in which the agreement between an assumed model and the neuron's response may be quantified. Re-implemented and unified code for many of the methods is made freely available.
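Since the review starts from the classical linear receptive field model, a worked example of the simplest estimator it covers, the spike-triggered average, may be useful (shown here for a simulated LNP neuron with a made-up filter; whitening, regularization, and the other elaborations the review discusses are omitted).

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate an LNP neuron with a known 25-sample temporal filter.
dim, T = 25, 50000
true_k = np.sin(np.arange(dim) / 3.0) * np.exp(-np.arange(dim) / 8.0)
stim = rng.standard_normal(T)

# Design matrix: the recent stimulus history preceding each time bin.
X = np.stack([stim[t - dim:t] for t in range(dim, T)])
rate = np.exp(X @ true_k[::-1] - 1.0)      # exponential output nonlinearity
spikes = rng.poisson(rate * 0.01)

# Spike-triggered average: spike-count-weighted mean of preceding stimuli.
sta = (spikes[:, None] * X).sum(axis=0) / spikes.sum()

# For Gaussian white noise the STA is proportional to the (time-reversed) filter.
corr = np.corrcoef(sta, true_k[::-1])[0, 1]
print(f"correlation between STA and true filter: {corr:.2f}")
```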
Affiliation(s)
- Arne F Meyer
- Gatsby Computational Neuroscience Unit, University College London, London, UK
- Ross S Williamson
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA, USA
- Department of Otology and Laryngology, Harvard Medical School, Boston, MA, USA
- Jennifer F Linden
- Ear Institute, University College London, London, UK
- Department of Neuroscience, Physiology and Pharmacology, University College London, London, UK
- Maneesh Sahani
- Gatsby Computational Neuroscience Unit, University College London, London, UK

13. Campagner D, Evans MH, Bale MR, Erskine A, Petersen RS. Prediction of primary somatosensory neuron activity during active tactile exploration. eLife 2016; 5:e10696. PMID: 26880559; PMCID: PMC4764568; DOI: 10.7554/elife.10696.
Abstract
Primary sensory neurons form the interface between world and brain. Their function is well-understood during passive stimulation but, under natural behaving conditions, sense organs are under active, motor control. In an attempt to predict primary neuron firing under natural conditions of sensorimotor integration, we recorded from primary mechanosensory neurons of awake, head-fixed mice as they explored a pole with their whiskers, and simultaneously measured both whisker motion and forces with high-speed videography. Using Generalised Linear Models, we found that primary neuron responses were poorly predicted by whisker angle, but well-predicted by rotational forces acting on the whisker: both during touch and free-air whisker motion. These results are in apparent contrast to previous studies of passive stimulation, but could be reconciled by differences in the kinematics-force relationship between active and passive conditions. Thus, simple statistical models can predict rich neural activity elicited by natural, exploratory behaviour involving active movement of sense organs.
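In the same spirit as the analysis described above, a hedged sketch of fitting Poisson GLMs that predict spike counts from mechanical variables. The per-bin moment and angle features here are synthetic stand-ins (the study used whisker forces/bending moment and angle measured with high-speed videography), and scikit-learn's PoissonRegressor is used in place of the authors' own fitting code.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(5)

# Synthetic stand-ins for per-bin whisker variables (not real data):
# column 0 ~ rotational force (moment), column 1 ~ whisker angle.
T = 5000
moment = rng.standard_normal(T)
angle = rng.standard_normal(T)
X = np.column_stack([moment, angle])

# Ground truth for the toy data: spikes depend on moment, only weakly on angle.
true_rate = np.exp(0.9 * moment + 0.05 * angle - 1.0)
spike_counts = rng.poisson(true_rate)

# Fit Poisson GLMs with each predictor set and compare held-out fit quality.
train, test = slice(0, 4000), slice(4000, T)
for name, cols in [("angle only", [1]), ("moment only", [0]), ("both", [0, 1])]:
    glm = PoissonRegressor(alpha=1e-4, max_iter=1000)
    glm.fit(X[train][:, cols], spike_counts[train])
    d2 = glm.score(X[test][:, cols], spike_counts[test])  # fraction of deviance explained
    print(f"{name:12s} D^2 = {d2:.3f}")
```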
Affiliation(s)
- Dario Campagner
- Faculty of Life Sciences, The University of Manchester, Manchester, United Kingdom
- Mathew Hywel Evans
- Faculty of Life Sciences, The University of Manchester, Manchester, United Kingdom
- Michael Ross Bale
- Faculty of Life Sciences, The University of Manchester, Manchester, United Kingdom
- School of Life Sciences, University of Sussex, Brighton, United Kingdom
- Andrew Erskine
- Faculty of Life Sciences, The University of Manchester, Manchester, United Kingdom
- Mill Hill Laboratory, The Francis Crick Institute, London, United Kingdom

14. Williamson RS, Sahani M, Pillow JW. The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction. PLoS Comput Biol 2015; 11:e1004141. PMID: 25831448; PMCID: PMC4382343; DOI: 10.1371/journal.pcbi.1004141.
Abstract
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
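The central identity claimed in the abstract can be written out. This is a hedged restatement in my own notation, not an excerpt from the paper: for an LNP model with conditional rate $\lambda_\theta(\mathbf{x})$, the per-spike log-likelihood relative to a homogeneous (constant-rate) Poisson model converges in the large-data limit to the empirical single-spike information optimized by MID, so maximizing one is equivalent to maximizing the other, and the equivalence leans on the Poisson spiking assumption.

```latex
% Single-spike information over the projected stimulus x (Brenner-style form):
\[
  I_{\mathrm{ss}}
  \;=\;
  \int p(\mathbf{x} \mid \mathrm{spike})\,
       \log \frac{p(\mathbf{x} \mid \mathrm{spike})}{p(\mathbf{x})}\, d\mathbf{x} .
\]
% Normalized LNP log-likelihood relative to a constant-rate Poisson model:
\[
  \frac{1}{n_{\mathrm{sp}}}
  \Bigl[ \log L_{\mathrm{LNP}}(\theta) - \log L_{0} \Bigr]
  \;\longrightarrow\;
  I_{\mathrm{ss}}(\theta)
  \qquad \text{(large-data limit)} ,
\]
% so the MID estimate coincides with the maximum-likelihood LNP estimate;
% when spiking is not Poisson (e.g., Bernoulli bins), the equivalence breaks,
% which is the shortcoming the paper quantifies.
```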
Affiliation(s)
- Ross S. Williamson
- Gatsby Computational Neuroscience Unit, University College London, London, UK
- Centre for Mathematics and Physics in the Life Sciences and Experimental Biology, University College London, London, UK
- Maneesh Sahani
- Gatsby Computational Neuroscience Unit, University College London, London, UK
- Jonathan W. Pillow
- Princeton Neuroscience Institute, Department of Psychology, Princeton University, Princeton, New Jersey, USA

15. Kinney JB, Atwal GS. Parametric inference in the large data limit using maximally informative models. Neural Comput 2014; 26:637-653. PMID: 24479782; DOI: 10.1162/neco_a_00568.
Abstract
Motivated by data-rich experiments in transcriptional regulation and sensory neuroscience, we consider the following general problem in statistical inference: when exposed to a high-dimensional signal S, a system of interest computes a representation R of that signal, which is then observed through a noisy measurement M. From a large number of signals and measurements, we wish to infer the "filter" that maps S to R. However, the standard method for solving such problems, likelihood-based inference, requires perfect a priori knowledge of the "noise function" mapping R to M. In practice such noise functions are usually known only approximately, if at all, and using an incorrect noise function will typically bias the inferred filter. Here we show that in the large data limit, this need for a precharacterized noise function can be circumvented by searching for filters that instead maximize the mutual information I[M; R] between observed measurements and predicted representations. Moreover, if the correct filter lies within the space of filters being explored, maximizing mutual information becomes equivalent to simultaneously maximizing every dependence measure that satisfies the data processing inequality. It is important to note that maximizing mutual information will typically leave a small number of directions in parameter space unconstrained. We term these directions diffeomorphic modes and present an equation that allows these modes to be derived systematically. The presence of diffeomorphic modes reflects a fundamental and nontrivial substructure within parameter space, one that is obscured by standard likelihood-based inference.
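A compact restatement of the setup (notation mine, not copied from the paper): signals $S$ are mapped to a representation $R = f_{\theta}(S)$ and observed through noisy measurements $M$; likelihood-based inference needs the noise function $p(M \mid R)$, whereas the estimator advocated here needs only the mutual information between measurements and predicted representations.

```latex
% Likelihood-based inference (requires a pre-characterized noise function p(M|R)):
\[
  \hat\theta_{\mathrm{ML}}
  \;=\;
  \arg\max_{\theta} \sum_{n} \log p\bigl(M_n \mid f_{\theta}(S_n)\bigr) .
\]
% Maximally informative inference (no explicit noise model; large-data limit):
\[
  \hat\theta_{\mathrm{MI}}
  \;=\;
  \arg\max_{\theta} \; I\bigl[ M ;\, f_{\theta}(S) \bigr] .
\]
% Diffeomorphic modes: invertible reparameterizations g of the model output,
% f_theta -> g(f_theta), leave I[M; f_theta(S)] unchanged, so these directions
% in parameter space remain unconstrained by the objective.
```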
Affiliation(s)
- Justin B Kinney
- Simons Center for Quantitative Biology, Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724, U.S.A.

16. Tkačik G, Ghosh A, Schneidman E, Segev R. Adaptation to changes in higher-order stimulus statistics in the salamander retina. PLoS One 2014; 9:e85841. PMID: 24465742; PMCID: PMC3897542; DOI: 10.1371/journal.pone.0085841.
Abstract
Adaptation in the retina is thought to optimize the encoding of natural light signals into sequences of spikes sent to the brain. While adaptive changes in retinal processing to the variations of the mean luminance level and second-order stimulus statistics have been documented before, no such measurements have been performed when higher-order moments of the light distribution change. We therefore measured the ganglion cell responses in the tiger salamander retina to controlled changes in the second (contrast), third (skew) and fourth (kurtosis) moments of the light intensity distribution of spatially uniform, temporally independent stimuli. The skew and kurtosis of the stimuli were chosen to cover the range observed in natural scenes. We quantified adaptation in ganglion cells by studying linear-nonlinear models that capture well the retinal encoding properties across all stimuli. We found that the encoding properties of retinal ganglion cells change only marginally when higher-order statistics change, compared to the changes observed in response to the variation in contrast. By analyzing optimal coding in LN-type models, we showed that neurons can maintain a high information rate without large dynamic adaptation to changes in skew or kurtosis. This is because, for uncorrelated stimuli, spatio-temporal summation within the receptive field averages away non-Gaussian aspects of the light intensity distribution.
Affiliation(s)
- Gašper Tkačik
- Institute of Science and Technology Austria, Klosterneuburg, Austria
- Anandamohan Ghosh
- Indian Institute of Science Education and Research-Kolkata, Mohanpur (Nadia), India
- Elad Schneidman
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- Ronen Segev
- Faculty of Natural Sciences, Department of Life Sciences and Zlotowski Center for Neuroscience, Ben Gurion University of the Negev, Be'er Sheva, Israel

17. Theis L, Chagas AM, Arnstein D, Schwarz C, Bethge M. Beyond GLMs: a generative mixture modeling approach to neural system identification. PLoS Comput Biol 2013; 9:e1003356. PMID: 24278006; PMCID: PMC3836720; DOI: 10.1371/journal.pcbi.1003356.
Abstract
Generalized linear models (GLMs) represent a popular choice for the probabilistic characterization of neural spike responses. While GLMs are attractive for their computational tractability, they also impose strong assumptions and thus only allow for a limited range of stimulus-response relationships to be discovered. Alternative approaches exist that make only very weak assumptions but scale poorly to high-dimensional stimulus spaces. Here we seek an approach which can gracefully interpolate between the two extremes. We extend two frequently used special cases of the GLM—a linear and a quadratic model—by assuming that the spike-triggered and non-spike-triggered distributions can be adequately represented using Gaussian mixtures. Because we derive the model from a generative perspective, its components are easy to interpret as they correspond to, for example, the spike-triggered distribution and the interspike interval distribution. The model is able to capture complex dependencies on high-dimensional stimuli with far fewer parameters than other approaches such as histogram-based methods. The added flexibility comes at the cost of a non-concave log-likelihood. We show that in practice this does not have to be an issue and the mixture-based model is able to outperform generalized linear and quadratic models. An essential goal of sensory systems neuroscience is to characterize the functional relationship between neural responses and external stimuli. Of particular interest are the nonlinear response properties of single cells. Inherently linear approaches such as generalized linear modeling can nevertheless be used to fit nonlinear behavior by choosing an appropriate feature space for the stimulus. This requires, however, that one has already obtained a good understanding of a cell's nonlinear properties, whereas more flexible approaches are necessary for the characterization of unexpected nonlinear behavior. In this work, we present a generalization of some frequently used generalized linear models which enables us to automatically extract complex stimulus-response relationships from recorded data. We show that our model can lead to substantial quantitative and qualitative improvements over generalized linear and quadratic models, which we illustrate on the example of primary afferents of the rat whisker system.
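The generative idea described above can be sketched with off-the-shelf tools. This is a simplified, hedged version (scikit-learn's GaussianMixture in place of the authors' model, toy two-dimensional data, and no interspike-interval component): fit one mixture to the spike-triggered stimuli and one to the full stimulus ensemble, then obtain the spike probability from Bayes' rule.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)

# Toy data: 2D stimuli with a symmetric (quadratic) spike-generation rule,
# so that a purely linear model would miss most of the structure.
X = rng.standard_normal((20000, 2))
p_true = 1.0 / (1.0 + np.exp(-(X[:, 0] ** 2 + 0.5 * X[:, 1] - 2.0)))
spiked = rng.random(len(X)) < p_true

# Fit Gaussian mixtures to the spike-triggered and the full stimulus ensembles.
gmm_spk = GaussianMixture(n_components=3, random_state=0).fit(X[spiked])
gmm_all = GaussianMixture(n_components=3, random_state=0).fit(X)
prior_spk = spiked.mean()

def p_spike_given_stim(x):
    """Bayes rule: P(spike | x) = P(x | spike) P(spike) / P(x)."""
    log_ratio = gmm_spk.score_samples(x) - gmm_all.score_samples(x)
    return np.clip(prior_spk * np.exp(log_ratio), 0.0, 1.0)

x_test = np.array([[2.0, 0.0], [-2.0, 0.0], [0.0, 0.0]])
print(p_spike_given_stim(x_test))   # roughly: high, high, low (captures the symmetry)
```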
Affiliation(s)
- Lucas Theis
- Werner Reichardt Centre for Integrative Neuroscience, Tübingen, Germany
- Graduate School of Neural Information Processing, University of Tübingen, Tübingen, Germany
- Andrè Maia Chagas
- Werner Reichardt Centre for Integrative Neuroscience, Tübingen, Germany
- Hertie Institute for Clinical Brain Research, Tübingen, Germany
- Graduate School of Neural and Behavioural Sciences, University of Tübingen, Tübingen, Germany
- Daniel Arnstein
- Hertie Institute for Clinical Brain Research, Tübingen, Germany
- Graduate School of Neural and Behavioural Sciences, University of Tübingen, Tübingen, Germany
- Cornelius Schwarz
- Werner Reichardt Centre for Integrative Neuroscience, Tübingen, Germany
- Hertie Institute for Clinical Brain Research, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
- Matthias Bethge
- Werner Reichardt Centre for Integrative Neuroscience, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany

18. Rajan K, Bialek W. Maximally informative "stimulus energies" in the analysis of neural responses to natural signals. PLoS One 2013; 8:e71959. PMID: 24250780; PMCID: PMC3826732; DOI: 10.1371/journal.pone.0071959.
Abstract
The concept of feature selectivity in sensory signal processing can be formalized as dimensionality reduction: in a stimulus space of very high dimensions, neurons respond only to variations within some smaller, relevant subspace. But if neural responses exhibit invariances, then the relevant subspace typically cannot be reached by a Euclidean projection of the original stimulus. We argue that, in several cases, we can make progress by appealing to the simplest nonlinear construction, identifying the relevant variables as quadratic forms, or “stimulus energies.” Natural examples include non–phase–locked cells in the auditory system, complex cells in the visual cortex, and motion–sensitive neurons in the visual system. Generalizing the idea of maximally informative dimensions, we show that one can search for kernels of the relevant quadratic forms by maximizing the mutual information between the stimulus energy and the arrival times of action potentials. Simple implementations of this idea successfully recover the underlying properties of model neurons even when the number of parameters in the kernel is comparable to the number of action potentials and stimuli are completely natural. We explore several generalizations that allow us to incorporate plausible structure into the kernel and thereby restrict the number of parameters. We hope that this approach will add significantly to the set of tools available for the analysis of neural responses to complex, naturalistic stimuli.
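The central object in the abstract, the "stimulus energy", is a quadratic form in the recent stimulus. A brief restatement in formulas (notation mine): for the stimulus segment $\mathbf{s}(t)$ preceding time $t$ and a symmetric kernel $Q$ to be learned,

```latex
% Stimulus energy: a quadratic form in the recent stimulus history.
\[
  E(t)
  \;=\;
  \mathbf{s}(t)^{\mathsf T} Q\, \mathbf{s}(t)
  \;=\;
  \sum_{i,j} Q_{ij}\, s(t - \tau_i)\, s(t - \tau_j) .
\]
% Maximally informative "energy": choose the kernel to maximize the mutual
% information between the energy variable and the spike arrival times,
\[
  \hat Q \;=\; \arg\max_{Q}\; I\bigl[\, E \,;\, \{ t_{\mathrm{spike}} \} \,\bigr] ,
\]
% generalizing maximally informative dimensions from linear projections to
% quadratic forms.
```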
Affiliation(s)
- Kanaka Rajan
- Joseph Henry Laboratories of Physics and Lewis–Sigler Institute for Integrative Genomics, Princeton University, Princeton, New Jersey, United States of America
- William Bialek
- Joseph Henry Laboratories of Physics and Lewis–Sigler Institute for Integrative Genomics, Princeton University, Princeton, New Jersey, United States of America