1
Larisch R, Gönner L, Teichmann M, Hamker FH. Sensory coding and contrast invariance emerge from the control of plastic inhibition over emergent selectivity. PLoS Comput Biol 2021; 17:e1009566. PMID: 34843455; PMCID: PMC8629393; DOI: 10.1371/journal.pcbi.1009566.
Abstract
Visual stimuli are represented by a highly efficient code in the primary visual cortex, but the development of this code is still unclear. Two distinct factors control coding efficiency: representational efficiency, which is determined by neuronal tuning diversity, and metabolic efficiency, which is influenced by neuronal gain. How these determinants of coding efficiency are shaped during development, supported by excitatory and inhibitory plasticity, is only partially understood. We investigate a fully plastic spiking network of the primary visual cortex, building on phenomenological plasticity rules. Our results suggest that inhibitory plasticity is key to the emergence of tuning diversity and accurate input encoding. We show that inhibitory feedback (random and specific) increases metabolic efficiency by implementing a gain-control mechanism. Interestingly, this leads to the spontaneous emergence of contrast-invariant tuning curves. Our findings highlight (1) that interneuron plasticity is key to the development of tuning diversity and (2) that efficient sensory representations are an emergent property of the resulting network.

Synaptic plasticity is crucial for the development of efficient input representation in the different sensory cortices, such as the primary visual cortex. Efficient visual representation is determined by two factors: representational efficiency, i.e., how many different input features can be represented, and metabolic efficiency, i.e., how many spikes are required to represent a specific feature. Previous research has pointed out the importance of plasticity at excitatory synapses for achieving high representational efficiency, and of feedback inhibition as a gain-control mechanism for metabolic efficiency. However, it is only partially understood how the influence of inhibitory plasticity on excitatory plasticity can lead to an efficient representation. Using a spiking neural network, we show that plasticity at feed-forward and feedback inhibitory synapses is necessary for the emergence of well-distributed neuronal selectivity, which improves representational efficiency. Further, the emergent balance between excitatory and inhibitory currents improves metabolic efficiency and leads to contrast-invariant tuning as an inherent network property. Extending previous work, our simulation results highlight the importance of plasticity at inhibitory synapses.
Affiliation(s)
- René Larisch
- Department of Computer Science, Artificial Intelligence, TU Chemnitz, Chemnitz, Germany
- * E-mail: (RL); (FHH)
- Lorenz Gönner
- Department of Computer Science, Artificial Intelligence, TU Chemnitz, Chemnitz, Germany
- Faculty of Psychology, Lifespan Developmental Neuroscience, TU Dresden, Dresden, Germany
- Michael Teichmann
- Department of Computer Science, Artificial Intelligence, TU Chemnitz, Chemnitz, Germany
- Fred H. Hamker
- Department of Computer Science, Artificial Intelligence, TU Chemnitz, Chemnitz, Germany
- Bernstein Center Computational Neuroscience, Berlin, Germany
- * E-mail: (RL); (FHH)
2
Wosniack ME, Kirchner JH, Chao LY, Zabouri N, Lohmann C, Gjorgjieva J. Adaptation of spontaneous activity in the developing visual cortex. eLife 2021; 10:e61619. PMID: 33722342; PMCID: PMC7963484; DOI: 10.7554/elife.61619.
Abstract
Spontaneous activity drives the establishment of appropriate connectivity in different circuits during brain development. In the mouse primary visual cortex, two distinct patterns of spontaneous activity occur before vision onset: local low-synchronicity events originating in the retina and global high-synchronicity events originating in the cortex. We sought to determine the contribution of these activity patterns to jointly organize network connectivity through different activity-dependent plasticity rules. We postulated that local events shape cortical input selectivity and topography, while global events homeostatically regulate connection strength. However, to generate robust selectivity, we found that global events should adapt their amplitude to the history of preceding cortical activation. We confirmed this prediction by analyzing in vivo spontaneous cortical activity. The predicted adaptation leads to the sparsification of spontaneous activity on a slower timescale during development, demonstrating the remarkable capacity of the developing sensory cortex to acquire sensitivity to visual inputs after eye-opening.
Affiliation(s)
- Marina E Wosniack
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- School of Life Sciences Weihenstephan, Technical University of Munich, Freising, Germany
- Jan H Kirchner
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- School of Life Sciences Weihenstephan, Technical University of Munich, Freising, Germany
- Ling-Ya Chao
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Nawal Zabouri
- Netherlands Institute for Neuroscience, Amsterdam, Netherlands
- Christian Lohmann
- Netherlands Institute for Neuroscience, Amsterdam, Netherlands
- Center for Neurogenomics and Cognitive Research, Vrije Universiteit, Amsterdam, Netherlands
- Julijana Gjorgjieva
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- School of Life Sciences Weihenstephan, Technical University of Munich, Freising, Germany
3
Talyansky S, Brinkman BAW. Dysregulation of excitatory neural firing replicates physiological and functional changes in aging visual cortex. PLoS Comput Biol 2021; 17:e1008620. PMID: 33497380; PMCID: PMC7864437; DOI: 10.1371/journal.pcbi.1008620.
Abstract
The mammalian visual system has been the focus of countless experimental and theoretical studies designed to elucidate principles of neural computation and sensory coding. Most theoretical work has focused on networks intended to reflect developing or mature neural circuitry, in both health and disease. Few computational studies have attempted to model changes that occur in neural circuitry as an organism ages non-pathologically. In this work we contribute to closing this gap, studying how physiological changes correlated with advanced age impact the computational performance of a spiking network model of primary visual cortex (V1). Our results demonstrate that deterioration of homeostatic regulation of excitatory firing, coupled with long-term synaptic plasticity, is a sufficient mechanism to reproduce features of observed physiological and functional changes in neural activity data, specifically declines in inhibition and in selectivity to oriented stimuli. This suggests a potential causal link between dysregulation of neuronal firing and age-induced changes in brain physiology and functional performance. While this does not rule out deeper underlying causes or other mechanisms that could give rise to these changes, our approach opens new avenues for exploring these underlying mechanisms in greater depth and making predictions for future experiments.
Affiliation(s)
- Seth Talyansky
- Catlin Gabel School, Portland, Oregon, United States of America
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York, United States of America
- Braden A. W. Brinkman
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York, United States of America
4
Drix D, Hafner VV, Schmuker M. Sparse coding with a somato-dendritic rule. Neural Netw 2020; 131:37-49. PMID: 32750603; DOI: 10.1016/j.neunet.2020.06.007.
Abstract
Cortical neurons are silent most of the time: sparse activity enables low-energy computation in the brain, and promises to do the same in neuromorphic hardware. Beyond power efficiency, sparse codes have favourable properties for associative learning, as they can store more information than local codes but are easier to read out than dense codes. Auto-encoders with a sparse constraint can learn sparse codes, and so can single-layer networks that combine recurrent inhibition with unsupervised Hebbian learning. But the latter usually require fast homeostatic plasticity, which could lead to catastrophic forgetting in embodied agents that learn continuously. Here we set out to explore whether plasticity at recurrent inhibitory synapses could take up that role instead, regulating both the population sparseness and the firing rates of individual neurons. We put the idea to the test in a network that employs compartmentalised inputs to solve the task: rate-based dendritic compartments integrate the feedforward input, while spiking integrate-and-fire somas compete through recurrent inhibition. A somato-dendritic learning rule allows somatic inhibition to modulate nonlinear Hebbian learning in the dendrites. Trained on MNIST digits and natural images, the network discovers independent components that form a sparse encoding of the input and support linear decoding. These findings confirm that intrinsic homeostatic plasticity is not strictly required for regulating sparseness: inhibitory synaptic plasticity can have the same effect. Our work illustrates the usefulness of compartmentalised inputs, and makes the case for moving beyond point neuron models in artificial spiking neural networks.
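The core mechanism this abstract appeals to, plasticity at recurrent inhibitory synapses regulating both individual firing rates and population sparseness without intrinsic homeostasis, can be sketched in a rate-based toy model. The update rule and all parameters below are illustrative assumptions, not the paper's somato-dendritic model:

```python
import numpy as np

# Toy sketch (assumed rule and parameters, not the paper's model):
# recurrent inhibitory weights w_inh adapt so that each unit's rate
# approaches a target rho0, which also limits how many units fire
# strongly at once.
rng = np.random.default_rng(0)
n, rho0, eta, dt = 20, 0.1, 0.01, 0.1
drive = rng.uniform(0.5, 1.5, n)      # static feedforward drive per unit
w_inh = np.zeros((n, n))              # inhibitory recurrent weights (>= 0)
rates = np.zeros(n)

for _ in range(30000):
    # leaky rate dynamics: drive minus recurrent inhibition, rectified
    rates += dt * (-rates + np.maximum(drive - w_inh @ rates, 0.0))
    # inhibitory plasticity: potentiate when the postsynaptic rate
    # exceeds rho0, depress when it falls below (gated by the pre rate)
    w_inh += eta * np.outer(rates - rho0, rates)
    np.fill_diagonal(w_inh, 0.0)
    np.maximum(w_inh, 0.0, out=w_inh)

print(rates.mean())  # settles near the target rate rho0
```

Despite heterogeneous drive, every unit is pulled toward the same target rate, which is the sense in which inhibitory synaptic plasticity can stand in for intrinsic homeostatic plasticity.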
Affiliation(s)
- Damien Drix
- Biocomputation Group, Department of Computer Science, University of Hertfordshire, Hatfield, United Kingdom
- Adaptive Systems Laboratory, Institut für Informatik, Humboldt-Universität zu Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience, Berlin, Germany
- Verena V Hafner
- Adaptive Systems Laboratory, Institut für Informatik, Humboldt-Universität zu Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience, Berlin, Germany
- Michael Schmuker
- Biocomputation Group, Department of Computer Science, University of Hertfordshire, Hatfield, United Kingdom
- Bernstein Center for Computational Neuroscience, Berlin, Germany
5
Dodds EM, DeWeese MR. On the Sparse Structure of Natural Sounds and Natural Images: Similarities, Differences, and Implications for Neural Coding. Front Comput Neurosci 2019; 13:39. PMID: 31293408; PMCID: PMC6606779; DOI: 10.3389/fncom.2019.00039.
Abstract
Sparse coding models of natural images and sounds have been able to predict several response properties of neurons in the visual and auditory systems. While the success of these models suggests that the structure they capture is universal across domains to some degree, it is not yet clear which aspects of this structure are universal and which vary across sensory modalities. To address this, we fit complete and highly overcomplete sparse coding models to natural images and spectrograms of speech and report on differences in the statistics learned by these models. We find several types of sparse features in natural images, which all appear in similar, approximately Laplace distributions, whereas the many types of sparse features in speech exhibit a broad range of sparse distributions, many of which are highly asymmetric. Moreover, individual sparse coding units tend to exhibit higher lifetime sparseness for overcomplete models trained on images compared to those trained on speech. Conversely, population sparseness tends to be greater for these networks trained on speech compared with sparse coding models of natural images. To illustrate the relevance of these findings to neural coding, we studied how they impact a biologically plausible sparse coding network's representations in each sensory modality. In particular, a sparse coding network with synaptically local plasticity rules learns different sparse features from speech data than are found by more conventional sparse coding algorithms, but the learned features are qualitatively the same for these models when trained on natural images.
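Lifetime and population sparseness, the two quantities this abstract compares across modalities, are the same index applied along different axes of a stimulus-by-unit response matrix. A common choice is the Treves-Rolls index; using it here is an assumption (the paper may use a different measure), demonstrated on synthetic heavy-tailed responses:

```python
import numpy as np

def treves_rolls(x, axis):
    # Treves-Rolls activity ratio a in (0, 1]; sparseness is 1 - a,
    # higher meaning a few large responses dominate the mean.
    m = x.mean(axis=axis)
    q = (x ** 2).mean(axis=axis)
    return m ** 2 / np.maximum(q, 1e-12)

rng = np.random.default_rng(1)
# rows = stimuli, cols = units; squared exponential draws give the
# heavy-tailed, sparse-looking responses typical of V1-style codes
resp = rng.exponential(1.0, (500, 64)) ** 2

lifetime = 1 - treves_rolls(resp, axis=0)    # per unit, across stimuli
population = 1 - treves_rolls(resp, axis=1)  # per stimulus, across units
print(lifetime.mean(), population.mean())
```

In this i.i.d. toy data the two averages roughly coincide; the abstract's point is precisely that for models trained on images versus speech they dissociate.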
Affiliation(s)
- Eric McVoy Dodds
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA, United States
- Department of Physics, University of California, Berkeley, Berkeley, CA, United States
- Michael Robert DeWeese
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA, United States
- Department of Physics, University of California, Berkeley, Berkeley, CA, United States
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, United States
6
Kindel WF, Christensen ED, Zylberberg J. Using deep learning to probe the neural code for images in primary visual cortex. J Vis 2019; 19:29. PMID: 31026016; PMCID: PMC6485988; DOI: 10.1167/19.4.29.
Abstract
Primary visual cortex (V1) is the first stage of cortical image processing, and a major effort in systems neuroscience is devoted to understanding how it encodes information about visual stimuli. Within V1, many neurons respond selectively to edges of a given preferred orientation: these are known as either simple or complex cells. Other neurons respond to localized center–surround image features. Still others respond selectively to certain image stimuli, but the specific features that excite them are unknown. Moreover, even for the simple and complex cells—the best-understood V1 neurons—it is challenging to predict how they will respond to natural image stimuli. Thus, there are important gaps in our understanding of how V1 encodes images. To fill this gap, we trained deep convolutional neural networks to predict the firing rates of V1 neurons in response to natural image stimuli, and we find that the predicted firing rates are highly correlated (mean normalized correlation coefficient, CC_norm = 0.556 ± 0.01) with the neurons' actual firing rates over a population of 355 neurons. This performance value is quoted for all neurons, with no selection filter. Performance is better for more active neurons: when evaluated only on neurons with mean firing rates above 5 Hz, our predictors achieve correlations of CC_norm = 0.69 ± 0.01 with the neurons' true firing rates. We find that the firing rates of both orientation-selective and non-orientation-selective neurons can be predicted with high accuracy. Additionally, we use a variety of models to benchmark performance and find that our convolutional neural-network model makes more accurate predictions.
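The CC_norm metric quoted in this abstract corrects a raw prediction-response correlation for trial-to-trial noise, so a perfect model of the stimulus-driven signal scores 1 even when single trials are noisy. The sketch below uses one published style of signal-power estimator; the estimator choice and all data are assumptions, not the paper's pipeline:

```python
import numpy as np

# Sketch of a normalized correlation coefficient CC_norm for scoring
# firing-rate predictions against noisy multi-trial recordings.
rng = np.random.default_rng(2)
n_stim, n_trials = 200, 10
signal = rng.gamma(2.0, 2.0, n_stim)        # "true" mean rate per stimulus
trials = rng.poisson(signal, (n_trials, n_stim)).astype(float)
pred = signal + rng.normal(0.0, 1.0, n_stim)  # imperfect model prediction

ybar = trials.mean(axis=0)
# signal power: variance of the trial-summed response minus the summed
# single-trial variances, rescaled (a Schoppe-style estimator)
sp = (np.var(trials.sum(axis=0), ddof=1)
      - trials.var(axis=1, ddof=1).sum()) / (n_trials * (n_trials - 1))
cc_abs = np.corrcoef(pred, ybar)[0, 1]
cc_norm = np.cov(pred, ybar, ddof=1)[0, 1] / np.sqrt(np.var(pred, ddof=1) * sp)
print(cc_abs, cc_norm)  # CC_norm >= CC_abs: it discounts trial noise
```

Because the normalization divides by estimated signal power rather than the total variance of the trial-averaged response, CC_norm is the quantity that approaches 1 for an ideal predictor.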
Affiliation(s)
- William F Kindel
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO, USA
- Elijah D Christensen
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO, USA
- Joel Zylberberg
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO, USA
- Learning in Machines and Brains Program, Canadian Institute for Advanced Research, Toronto, Canada
7
Abstract
We show how a multi-resolution network can model the development of acuity and coarse-to-fine processing in the mammalian visual cortex. The network adapts to input statistics in an unsupervised manner, and learns a coarse-to-fine representation by using cumulative inhibition of nodes within a network layer. We show that a system of such layers can represent input by hierarchically composing larger parts from smaller components. It can also model aspects of top-down processes, such as image regeneration.
Affiliation(s)
- Trond A Tjøstheim
- Lund University Cognitive Science, Lund University, Box 117, 221 00, Lund, Sweden
- Christian Balkenius
- Lund University Cognitive Science, Lund University, Box 117, 221 00, Lund, Sweden
8
Speed-Selectivity in Retinal Ganglion Cells is Sharpened by Broad Spatial Frequency, Naturalistic Stimuli. Sci Rep 2019; 9:456. PMID: 30679564; PMCID: PMC6345785; DOI: 10.1038/s41598-018-36861-8.
Abstract
Motion detection represents one of the critical tasks of the visual system and has motivated a large body of research. However, it remains unclear precisely why the response of retinal ganglion cells (RGCs) to simple artificial stimuli does not predict their response to complex, naturalistic stimuli. To explore this topic, we use Motion Clouds (MCs), synthetic textures that preserve the properties of natural images and are fully parameterized, in particular allowing the spatiotemporal spectral complexity of the stimulus to be modulated by adjusting the frequency bandwidths. By stimulating the retina of the diurnal rodent Octodon degus with MCs, we show that RGCs respond to increasingly complex stimuli by narrowing their tuning curves for motion. At the population level, complex stimuli produce a sparser code while preserving movement information; therefore, the stimuli are encoded more efficiently. Interestingly, these properties were observed across different populations of RGCs. Thus, our results reveal that the response at the level of RGCs is modulated by the naturalness of the stimulus, in particular for motion, which suggests that tuning to the statistics of natural images already emerges at the level of the retina.
9
Renoult JP, Bovet J, Raymond M. Beauty is in the efficient coding of the beholder. R Soc Open Sci 2016; 3:160027. PMID: 27069668; PMCID: PMC4821279; DOI: 10.1098/rsos.160027.
Abstract
Sexual ornaments are often assumed to be indicators of mate quality. Yet it remains poorly known how certain ornaments are chosen before any coevolutionary race makes them indicative. Perceptual biases have been proposed to play this role, but known biases are mostly restricted to a specific taxon, which precludes evaluating their general importance in sexual selection. Here we identify a potentially universal perceptual bias in mate choice. We used an algorithm that models the sparseness of the activity of simple cells in the primary visual cortex (V1) of humans when coding images of female faces. Sparseness was found to be positively correlated with attractiveness as rated by men, explaining up to 17% of the variance in attractiveness. Because V1 is adapted to process signals from natural scenes in general, not faces specifically, our results indicate that attractiveness for female faces is influenced by a visual bias. Sparseness, and more generally efficient neural coding, is ubiquitous, occurring in various animals and sensory modalities, suggesting that the influence of efficient coding on mate choice may be widespread in animals.
Affiliation(s)
- Julien P. Renoult
- Institute for Arts, Creations, Theories and Esthetics, UMR8218 CNRS-University of Paris 1, Paris, France
- Jeanne Bovet
- Institute for Advanced Study in Toulouse, Toulouse, France
- Michel Raymond
- Institute for Evolutionary Sciences, UMR 5554 CNRS-University of Montpellier, Montpellier, France
10
Zylberberg J, Shea-Brown E. Input nonlinearities can shape beyond-pairwise correlations and improve information transmission by neural populations. Phys Rev E Stat Nonlin Soft Matter Phys 2015; 92:062707. PMID: 26764727; DOI: 10.1103/physreve.92.062707.
Abstract
While recent recordings from neural populations show beyond-pairwise, or higher-order, correlations (HOC), we have little understanding of how HOC arise from network interactions and of how they impact encoded information. Here, we show that input nonlinearities imply HOC in spin-glass-type statistical models. We then discuss one such model with parametrized pairwise- and higher-order interactions, revealing conditions under which beyond-pairwise interactions increase the mutual information between a given stimulus type and the population responses. For jointly Gaussian stimuli, coding performance is improved by shaping output HOC only when neural firing rates are constrained to be low. For stimuli with skewed probability distributions (like natural image luminances), performance improves for all firing rates. Our work suggests surprising connections between nonlinear integration of neural inputs, stimulus statistics, and normative theories of population coding. Moreover, it suggests that the inclusion of beyond-pairwise interactions could improve the performance of Boltzmann machines for machine learning and signal processing applications.
Affiliation(s)
- Joel Zylberberg
- Department of Applied Mathematics, University of Washington, Seattle, Washington 98195, USA
- Eric Shea-Brown
- Department of Applied Mathematics, Program in Neuroscience, Department of Physiology and Biophysics, University of Washington, Seattle, Washington 98195, USA
11
Network Anisotropy Trumps Noise for Efficient Object Coding in Macaque Inferior Temporal Cortex. J Neurosci 2015; 35:9889-99. PMID: 26156990; DOI: 10.1523/jneurosci.4595-14.2015.
Abstract
How neuronal ensembles compute information is actively studied in early visual cortex. Much less is known about how local ensembles function in inferior temporal (IT) cortex, the last stage of the ventral visual pathway that supports visual recognition. Previous reports suggested that nearby neurons carry information mostly independently, supporting efficient processing (Barlow, 1961). However, others postulate that noise covariation effects may depend on network anisotropy/homogeneity and on how the covariation relates to representation. Do slow trial-by-trial noise covariations increase or decrease IT's object coding capability, how does encoding capability relate to correlational structure (i.e., the spatial pattern of signal and noise redundancy/homogeneity across neurons), and does knowledge of correlational structure matter for decoding? We recorded simultaneously from ~80 spiking neurons in ~1 mm³ of macaque IT under light neurolept anesthesia. Noise correlations were stronger for neurons with correlated tuning, and noise covariations reduced object encoding capability, including generalization across object pose and illumination. Knowledge of noise covariations did not lead to better decoding performance. However, knowledge of anisotropy/homogeneity improved encoding and decoding efficiency by reducing the number of neurons needed to reach a given performance level. Such correlated neurons were found mostly in supragranular and infragranular layers, supporting theories that link recurrent circuitry to manifold representation. These results suggest that redundancy benefits manifold learning of complex high-dimensional information and that subsets of neurons may be more immune to noise covariation than others.

Significance statement: How noise affects neuronal population coding is poorly understood. By sampling densely from local populations supporting visual object recognition, we show that recurrent circuitry supports useful representations and that subsets of neurons may be more immune to noise covariation than others.
12
Xiong H, Rodríguez-Sánchez AJ, Szedmak S, Piater J. Diversity priors for learning early visual features. Front Comput Neurosci 2015; 9:104. PMID: 26321941; PMCID: PMC4532921; DOI: 10.3389/fncom.2015.00104.
Abstract
This paper investigates how utilizing diversity priors can discover early visual features that resemble their biological counterparts. The study is mainly motivated by the sparsity and selectivity of activations of visual neurons in area V1. Most previous work on computational modeling emphasizes selectivity or sparsity independently. However, we argue that selectivity and sparsity are just two epiphenomena of the diversity of receptive fields, which has rarely been exploited in learning. In this paper, to verify our hypothesis, restricted Boltzmann machines (RBMs) are employed to learn early visual features by modeling the statistics of natural images. Considering RBMs as neural networks, the receptive fields of neurons are formed by the weights between hidden and visible nodes. Due to the conditional independence in RBMs, there is no mechanism to coordinate the activations of individual neurons or the whole population. We therefore introduce a diversity prior for training RBMs. We find that the diversity prior can indeed ensure sparsity and selectivity of neuron activations simultaneously. The learned receptive fields yield a high degree of biological similarity in comparison to physiological data. Moreover, the corresponding visual features display a good generative capability in image reconstruction.
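The effect of a diversity prior can be isolated from the RBM itself: it is a penalty on pairwise receptive-field similarity whose gradient pushes filters apart. The quadratic (frame-potential) form below is an illustrative assumption, not the paper's exact prior:

```python
import numpy as np

# Generic diversity-prior sketch (assumed functional form): penalize
# squared pairwise overlaps between receptive fields (rows of W) and
# descend the gradient, keeping each field unit-norm.
rng = np.random.default_rng(3)
n_units, dim = 12, 64
W = rng.normal(size=(n_units, dim))
W /= np.linalg.norm(W, axis=1, keepdims=True)

def frame_potential(W):
    # sum of squared off-diagonal Gram entries: 0 iff fields orthogonal
    G = W @ W.T
    return (G ** 2).sum() - np.trace(G ** 2 * np.eye(n_units))

before = frame_potential(W)
for _ in range(200):
    G = W @ W.T
    np.fill_diagonal(G, 0.0)
    W -= 0.01 * 4 * G @ W        # gradient of the sum of squared overlaps
    W /= np.linalg.norm(W, axis=1, keepdims=True)
after = frame_potential(W)
print(before, after)  # similarity energy drops: fields become more diverse
```

Since there are fewer units than input dimensions here, the penalty drives the fields toward mutual orthogonality; in an overcomplete RBM it would instead spread them as evenly as possible.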
Affiliation(s)
- Hanchen Xiong
- Intelligent and Interactive Systems Group, Institute of Computer Science, University of Innsbruck, Innsbruck, Austria
- Antonio J Rodríguez-Sánchez
- Intelligent and Interactive Systems Group, Institute of Computer Science, University of Innsbruck, Innsbruck, Austria
- Sandor Szedmak
- Intelligent and Interactive Systems Group, Institute of Computer Science, University of Innsbruck, Innsbruck, Austria
- Justus Piater
- Intelligent and Interactive Systems Group, Institute of Computer Science, University of Innsbruck, Innsbruck, Austria
13
Spanne A, Jörntell H. Questioning the role of sparse coding in the brain. Trends Neurosci 2015; 38:417-27. PMID: 26093844; DOI: 10.1016/j.tins.2015.05.005.
Abstract
Coding principles are central to understanding the organization of brain circuitry. Sparse coding offers several advantages, but a near-consensus has developed that it has only beneficial properties, and that these are partially unique to sparse coding. We find that these advantages come at the cost of several trade-offs, with the lower capacity for generalization being especially problematic, and that both the value of sparse coding as a measure and its experimental support are questionable. Furthermore, silent synapses and inhibitory interneurons can permit the learning speed and memory capacity that were previously ascribed to sparse coding alone. Combining these properties without exaggerated sparse coding improves the capacity for generalization and facilitates learning of models of a complex and high-dimensional reality.
Affiliation(s)
- Anton Spanne
- Neural Basis of Sensorimotor Control, Department of Experimental Medical Science, Biomedical Center F10, Tornavägen 10, 221 84 Lund, Sweden
- Henrik Jörntell
- Neural Basis of Sensorimotor Control, Department of Experimental Medical Science, Biomedical Center F10, Tornavägen 10, 221 84 Lund, Sweden
14
Shapero S, Zhu M, Hasler J, Rozell C. Optimal sparse approximation with integrate and fire neurons. Int J Neural Syst 2014; 24:1440001. DOI: 10.1142/s0129065714400012.
Abstract
Sparse approximation is a hypothesized coding strategy in which a population of sensory neurons (e.g. in V1) encodes a stimulus using as few active neurons as possible. We present the Spiking LCA (locally competitive algorithm), a rate-encoded spiking neural network (SNN) of integrate-and-fire neurons that calculates sparse approximations. The Spiking LCA is designed to be equivalent to the nonspiking LCA, an analog dynamical system that converges exponentially on ℓ1-norm sparse approximations. We show that the firing rate of the Spiking LCA converges on the same solution as the analog LCA, with an error inversely proportional to the sampling time. We simulate in NEURON a network of 128 neuron pairs that encodes 8 × 8 pixel image patches, demonstrating that the network converges to nearly optimal encodings within 20 ms of biological time. We also show that when using more biophysically realistic parameters in the neurons, the gain function encourages additional ℓ0-norm sparsity in the encoding, relative both to ideal neurons and to digital solvers.
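The nonspiking (analog) LCA that the Spiking LCA is built to match can be stated compactly: membrane variables integrate feedforward drive minus lateral inhibition from thresholded neighbors, and the fixed point is an ℓ1-sparse code. A minimal sketch with an assumed random dictionary (parameters are illustrative):

```python
import numpy as np

# Analog LCA sketch: drive u toward a fixed point whose soft-thresholded
# output a minimizes 0.5*||s - Phi a||^2 + lam*||a||_1.
rng = np.random.default_rng(4)
m, n, lam, dt = 32, 64, 0.2, 0.05
Phi = rng.normal(size=(m, n))
Phi /= np.linalg.norm(Phi, axis=0)            # unit-norm dictionary atoms

a_true = np.zeros(n)
a_true[rng.choice(n, 4, replace=False)] = 1.0
s = Phi @ a_true                              # stimulus with a 4-sparse code

b = Phi.T @ s                                 # feedforward drive
Gram = Phi.T @ Phi - np.eye(n)                # lateral inhibition weights
u = np.zeros(n)                               # membrane (internal) state
for _ in range(2000):
    a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # soft threshold
    u += dt * (b - u - Gram @ a)              # LCA membrane dynamics

a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
obj = 0.5 * np.sum((s - Phi @ a) ** 2) + lam * np.abs(a).sum()
print(int((np.abs(a) > 1e-3).sum()), obj)     # few active units, low objective
```

The paper's contribution is showing that integrate-and-fire firing rates converge to this same fixed point, with error shrinking as the spike-counting window grows.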
Affiliation(s)
- Samuel Shapero
- Electronic Systems Laboratory, Georgia Tech Research Institute, 400 10th St NW, Atlanta, Georgia 30318, United States of America
- Mengchen Zhu
- Biomedical Engineering, Georgia Institute of Technology, 313 Ferst Drive, Atlanta, Georgia 30332, United States of America
- Jennifer Hasler
- Electrical and Computer Engineering, Georgia Institute of Technology, 777 Atlantic Dr NW, Atlanta, Georgia 30332, United States of America
- Christopher Rozell
- Electrical and Computer Engineering, Georgia Institute of Technology, 777 Atlantic Dr NW, Atlanta, Georgia 30332, United States of America