51
Xiong H, Rodríguez-Sánchez AJ, Szedmak S, Piater J. Diversity priors for learning early visual features. Front Comput Neurosci 2015;9:104. PMID: 26321941; PMCID: PMC4532921; DOI: 10.3389/fncom.2015.00104.
Abstract
This paper investigates how diversity priors can be used to discover early visual features that resemble their biological counterparts. The study is mainly motivated by the sparsity and selectivity of activations of visual neurons in area V1. Most previous work on computational modeling emphasizes selectivity or sparsity independently. However, we argue that selectivity and sparsity are just two epiphenomena of the diversity of receptive fields, which has rarely been exploited in learning. To verify this hypothesis, restricted Boltzmann machines (RBMs) are employed to learn early visual features by modeling the statistics of natural images. Considering RBMs as neural networks, the receptive fields of neurons are formed by the weights between hidden and visible nodes. Due to the conditional independence in RBMs, there is no mechanism to coordinate the activations of individual neurons or of the whole population. We therefore introduce a diversity prior for training RBMs and find that it can indeed ensure sparsity and selectivity of neuron activations simultaneously. The learned receptive fields yield a high degree of similarity to physiological data, and the corresponding visual features display good generative capability in image reconstruction.
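The paper's exact prior is not reproduced here, but the idea can be sketched with a generic diversity penalty on the RBM weight matrix: pairwise similarity between receptive fields (the columns of W) is penalized, so that adding this term to the training objective pushes fields apart. The penalty's particular form below is an illustrative assumption, not the authors' formulation.

```python
import numpy as np

def diversity_penalty(W):
    """Sum of squared pairwise cosine similarities between receptive fields
    (columns of W): zero when all fields are mutually orthogonal, large when
    many neurons learn near-duplicate filters."""
    Wn = W / np.linalg.norm(W, axis=0, keepdims=True)
    S = Wn.T @ Wn                                  # cosine similarity matrix
    return np.sum((S - np.eye(S.shape[1])) ** 2)

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 16))                      # 64 visible, 16 hidden units
Q, _ = np.linalg.qr(W)                             # orthonormalized fields

p_duplicate = diversity_penalty(np.tile(W[:, :1], (1, 16)) + 0.01 * W)
p_random = diversity_penalty(W)
p_orthogonal = diversity_penalty(Q)
# near-duplicate fields are penalized most; orthogonal fields not at all
```

In actual training this term would be added to the contrastive-divergence objective, so its gradient coordinates the population of receptive fields that the plain RBM cannot coordinate.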
Affiliation(s)
- Hanchen Xiong
- Intelligent and Interactive Systems Group, Institute of Computer Science, University of Innsbruck, Innsbruck, Austria
- Antonio J Rodríguez-Sánchez
- Intelligent and Interactive Systems Group, Institute of Computer Science, University of Innsbruck, Innsbruck, Austria
- Sandor Szedmak
- Intelligent and Interactive Systems Group, Institute of Computer Science, University of Innsbruck, Innsbruck, Austria
- Justus Piater
- Intelligent and Interactive Systems Group, Institute of Computer Science, University of Innsbruck, Innsbruck, Austria
52
Zhu M, Rozell CJ. Modeling inhibitory interneurons in efficient sensory coding models. PLoS Comput Biol 2015;11:e1004353. PMID: 26172289; PMCID: PMC4501572; DOI: 10.1371/journal.pcbi.1004353.
Abstract
There is still much unknown regarding the computational role of inhibitory cells in the sensory cortex. While modeling studies could potentially shed light on the critical role played by inhibition in cortical computation, there is a gap between the simplicity of many models of sensory coding and the biological complexity of the inhibitory subpopulation. In particular, many models do not respect the constraint that inhibition must be implemented by a separate subpopulation, with those inhibitory interneurons having a diversity of tuning properties and characteristic E/I cell ratios. In this study we demonstrate a computational framework for implementing inhibition in dynamical systems models that better respects these biophysical observations about inhibitory interneurons. The main approach leverages recent work on decomposing matrices into low-rank and sparse components via convex optimization, and explicitly exploits the low-dimensional structure that models and input statistics often possess to obtain efficient implementations. While this approach is applicable to a wide range of sensory coding models (including a family of models based on Bayesian inference in a linear generative model), for concreteness we demonstrate it on a network implementing sparse coding. We show that the resulting implementation stays faithful to the original coding goals while using inhibitory interneurons that are much more biophysically plausible.

Cortical function is a result of coordinated interactions between excitatory and inhibitory neural populations. In previous theoretical models of sensory systems, inhibitory neurons are often ignored or modeled too simplistically to contribute to understanding their role in cortical computation. In biophysical reality, inhibition is implemented with interneurons that have characteristics different from those of the excitatory population. In this study, we propose a computational approach for including inhibition in theoretical models of neural coding in a way that respects several of these important characteristics, such as the relative number of inhibitory cells and the diversity of their response properties. The main idea is that the significant structure of the sensory world is reflected in very structured models of sensory coding, which can then be exploited in the implementation of the model using modern computational techniques. We demonstrate this approach on one specific model of sensory coding (called "sparse coding") that has been successful at modeling other aspects of sensory cortex.
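A minimal sketch of the kind of structure this approach exploits (illustrative assumptions throughout; this is not the authors' implementation): when the dictionary lies near a low-dimensional subspace, the all-to-all recurrent interaction matrix of a sparse coding network is approximately low rank, so the same inhibitory signal can be routed through a small interneuron population.

```python
import numpy as np

rng = np.random.default_rng(1)
# 80 excitatory feature vectors lying near a 5-dimensional subspace of a
# 64-dimensional input space (structured input statistics)
subspace = rng.normal(size=(64, 5))
Phi = subspace @ rng.normal(size=(5, 80)) + 0.001 * rng.normal(size=(64, 80))
Phi /= np.linalg.norm(Phi, axis=0)

A = Phi.T @ Phi                      # recurrent interactions a sparse coding
U, s, Vt = np.linalg.svd(A)          # network needs; A is approximately rank 5

k = 5                                # number of model interneurons (vs 80 E cells)
W_ei = np.diag(np.sqrt(s[:k])) @ Vt[:k]     # excitatory -> interneuron weights
W_ie = U[:, :k] @ np.diag(np.sqrt(s[:k]))   # interneuron -> excitatory weights

a = rng.normal(size=80)              # an excitatory activity vector
direct = A @ a                       # all-to-all implementation
via_interneurons = W_ie @ (W_ei @ a)        # same signal through 5 interneurons
rel_err = np.linalg.norm(direct - via_interneurons) / np.linalg.norm(direct)
```

The 5-to-80 interneuron-to-excitatory ratio is in the spirit of the biophysical E/I ratios the paper aims to respect; the convex low-rank-plus-sparse decomposition the authors use is a more principled version of this truncated SVD.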
Affiliation(s)
- Mengchen Zhu
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States of America
- Christopher J. Rozell
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States of America
53
Vanni S, Sharifian F, Heikkinen H, Vigário R. Modeling fMRI signals can provide insights into neural processing in the cerebral cortex. J Neurophysiol 2015;114:768-80. PMID: 25972586; DOI: 10.1152/jn.00332.2014.
Abstract
Every stimulus or task activates multiple areas in the mammalian cortex. These distributed activations can be measured with functional magnetic resonance imaging (fMRI), which has the best spatial resolution among the noninvasive brain imaging methods. Unfortunately, the relationship between fMRI activations and distributed cortical processing has remained unclear, both because the coupling between neural and fMRI activations has remained poorly understood and because fMRI voxels are too large to directly sense the local neural events. To get an idea of the local processing given the macroscopic data, we need models to simulate the neural activity and to provide output that can be compared with fMRI data. Such models can describe neural mechanisms as mathematical functions between input and output in a specific system, with little correspondence to physiological mechanisms. Alternatively, models can be biomimetic, including biological details with straightforward correspondence to experimental data. After careful balancing between complexity, computational efficiency, and realism, a biomimetic simulation should be able to provide insight into how biological structures or functions contribute to actual data processing as well as to promote theory-driven neuroscience experiments. This review analyzes the requirements for validating system-level computational models with fMRI. In particular, we study mesoscopic biomimetic models, which include a limited set of details from real-life networks and enable system-level simulations of neural mass action. In addition, we discuss how recent developments in neurophysiology and biophysics may significantly advance the modeling of fMRI signals.
Affiliation(s)
- Simo Vanni
- Clinical Neurosciences, Neurology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Fariba Sharifian
- Clinical Neurosciences, Neurology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland; Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, Espoo, Finland; Advanced Magnetic Imaging Centre, Aalto Neuroimaging, School of Science, Aalto University, Espoo, Finland
- Hanna Heikkinen
- Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, Espoo, Finland; Advanced Magnetic Imaging Centre, Aalto Neuroimaging, School of Science, Aalto University, Espoo, Finland
- Ricardo Vigário
- Department of Computer Science, School of Science, Aalto University, Espoo, Finland
54
Młynarski W. The opponent channel population code of sound location is an efficient representation of natural binaural sounds. PLoS Comput Biol 2015;11:e1004294. PMID: 25996373; PMCID: PMC4440638; DOI: 10.1371/journal.pcbi.1004294.
Abstract
In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match the specific demands of the sound localization task. This work provides evidence that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left- and right-ear sounds were separately encoded by a population of complex-valued basis functions that separated phase and amplitude, both of which are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. The spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match the tuning characteristics of neurons in the mammalian auditory cortex well. This study connects neuronal coding of auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.
Affiliation(s)
- Wiktor Młynarski
- Max-Planck Institute for Mathematics in the Sciences, Leipzig, Germany.
55
Chen N, Sugihara H, Sur M. An acetylcholine-activated microcircuit drives temporal dynamics of cortical activity. Nat Neurosci 2015;18:892-902. PMID: 25915477; PMCID: PMC4446146; DOI: 10.1038/nn.4002.
Abstract
Cholinergic modulation of cortex powerfully influences information processing and brain states, causing robust desynchronization of local field potentials and strong decorrelation of responses between neurons. We found that intracortical cholinergic inputs to mouse visual cortex specifically and differentially drive a defined cortical microcircuit: they facilitate somatostatin-expressing (SOM) inhibitory neurons that in turn inhibit parvalbumin-expressing inhibitory neurons and pyramidal neurons. Selective optogenetic inhibition of SOM responses blocked desynchronization and decorrelation, demonstrating that direct cholinergic activation of SOM neurons is necessary for this phenomenon. Optogenetic inhibition of vasoactive intestinal peptide-expressing neurons did not block desynchronization, despite these neurons being activated at high levels of cholinergic drive. Direct optogenetic SOM activation, independent of cholinergic modulation, was sufficient to induce desynchronization. Together, these findings demonstrate a mechanistic basis for temporal structure in cortical populations and the crucial role of neuromodulatory drive in specific inhibitory-excitatory circuits in actively shaping the dynamics of neuronal activity.
Affiliation(s)
- Naiyan Chen
- Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Hiroki Sugihara
- Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Mriganka Sur
- Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
56
Hiratani N, Fukai T. Mixed signal learning by spike correlation propagation in feedback inhibitory circuits. PLoS Comput Biol 2015;11:e1004227. PMID: 25910189; PMCID: PMC4409403; DOI: 10.1371/journal.pcbi.1004227.
Abstract
The brain can learn and detect mixed input signals masked by various types of noise, and spike-timing-dependent plasticity (STDP) is a candidate synaptic-level mechanism. Because sensory inputs typically have spike correlations, and local circuits have dense feedback connections, input spikes cause the propagation of spike correlations in lateral circuits; however, it is largely unknown how this secondary correlation generated by lateral circuits influences learning through STDP, or whether it is beneficial for efficient spike-based learning from uncertain stimuli. To explore these questions, we construct models of feedforward networks with lateral inhibitory circuits and study how propagated correlation influences STDP learning, and what kind of learning algorithm such circuits achieve. We derive analytical conditions under which neurons detect minor signals with STDP, and show that, depending on the origin of the noise, different correlation timescales are useful for learning. In particular, we show that non-precise spike correlation is beneficial for learning in the presence of cross-talk noise. We also show that by considering excitatory and inhibitory STDP at lateral connections, the circuit can acquire a lateral structure optimal for signal detection. In addition, we demonstrate that the model performs blind source separation in a manner similar to the sequential sampling approximation of the Bayesian independent component analysis algorithm. Our results provide a basic understanding of STDP learning in feedback circuits by integrating analyses from both dynamical systems and information theory.
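For reference, the pair-based STDP rule such analyses build on can be written with a standard exponential window (parameters below are illustrative, not the paper's):

```python
import numpy as np

def stdp_dw(t_post, t_pre, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for one pre/post spike pair (times in ms):
    pre-before-post potentiates, post-before-pre depresses, both with an
    exponential dependence on the timing difference."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)

rng = np.random.default_rng(2)
pre = np.sort(rng.uniform(0.0, 1000.0, 50))
post = pre + rng.normal(3.0, 1.0, 50)          # post lags pre by ~3 ms

dw_causal = sum(stdp_dw(tq, tp) for tq, tp in zip(post, pre))
dw_acausal = sum(stdp_dw(tq, tp) for tq, tp in zip(post - 6.0, pre))
# correlated, causally ordered spiking potentiates; reversed timing depresses
```

The paper's question is what happens to such updates when the pre/post correlations are not purely feedforward but are partly generated by the lateral circuit itself.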
Affiliation(s)
- Naoki Hiratani
- Department of Complexity Science and Engineering, The University of Tokyo, Kashiwa, Chiba, Japan
- Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Wako, Saitama, Japan
- Tomoki Fukai
- Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Wako, Saitama, Japan
57
Duan H, Wang X. Visual attention model based on statistical properties of neuron responses. Sci Rep 2015;5:8873. PMID: 25747859; PMCID: PMC4352866; DOI: 10.1038/srep08873.
Abstract
Visual attention is a mechanism of the visual system that selects relevant objects from a specific scene. Interactions among neurons in multiple cortical areas are considered to be involved in attentional allocation. However, the characteristics of the encoded features and neuron responses in those attention-related cortices remain unclear. This study therefore aims to demonstrate that unusual regions, which attract more attention, generally evoke distinctive neuron responses. We hypothesize that visual saliency can be computed from neuron responses to contexts in natural scenes. To test this hypothesis, a bottom-up visual attention model is proposed based on the self-information of neuron responses. Four different color spaces are adopted, and a novel entropy-based combination scheme is designed to make full use of color information. In the saliency maps produced by the proposed model, valuable regions are highlighted while redundant backgrounds are suppressed. Comparative results reveal that the proposed model outperforms several state-of-the-art models. This study provides insights into saliency detection based on neuron responses and may shed light on the neural mechanism of early visual cortices for bottom-up visual attention.
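The core self-information step can be sketched as follows (a toy stand-in; the paper's model additionally uses four color spaces and an entropy-based combination scheme):

```python
import numpy as np

rng = np.random.default_rng(3)
# Simulated neuron responses to 1000 scene patches: many common weak
# responses plus a few unusual strong ones (candidate salient regions)
responses = np.concatenate([rng.normal(1.0, 0.2, 990),
                            rng.normal(4.0, 0.2, 10)])

# Estimate the response distribution, then score each patch by its
# self-information: rare responses carry high saliency
counts, edges = np.histogram(responses, bins=30)
p = counts / counts.sum()
bin_idx = np.clip(np.digitize(responses, edges) - 1, 0, 29)
saliency = -np.log(p[bin_idx] + 1e-12)

common_sal = saliency[:990].mean()
rare_sal = saliency[990:].mean()
# the rare, strong responses receive much higher saliency scores
```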
Affiliation(s)
- Haibin Duan
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, P. R. China
- Science and Technology on Aircraft Control Laboratory, School of Automation Science and Electronic Engineering, Beihang University, Beijing 100191, P. R. China
- Xiaohua Wang
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, P. R. China
- Science and Technology on Aircraft Control Laboratory, School of Automation Science and Electronic Engineering, Beihang University, Beijing 100191, P. R. China
58
Abstract
Sensory responses are modulated by internal factors including attention, experience, and brain state. This is partly due to fluctuations in neuromodulatory input from regions such as the noradrenergic locus ceruleus (LC) in the brainstem. LC activity changes with arousal and modulates sensory processing, cognition, and memory. The main olfactory bulb (MOB) is richly targeted by LC fibers, and noradrenaline profoundly influences MOB circuitry and odor-guided behavior. Noradrenaline-dependent plasticity affects the output of the MOB; however, it is unclear whether noradrenergic plasticity also affects the input to the MOB from olfactory sensory neurons (OSNs) in the glomerular layer. Noradrenergic terminals are found in the glomerular layer, but noradrenaline receptors do not seem to acutely modulate OSN terminals in vitro. We investigated whether noradrenaline induces plasticity at the glomerulus. We used wide-field optical imaging to measure changes in odor responses following electrical stimulation of LC in anesthetized mice. Surprisingly, odor-evoked intrinsic optical signals at the glomerulus were persistently weakened after LC activation. Calcium imaging selectively from OSNs confirmed that this effect was due to suppression of presynaptic input and was prevented by noradrenergic antagonists. Finally, suppression of responses to an odor did not require precise coincidence of the odor with LC activation. However, suppression was intensified by LC activation in the absence of odors. We conclude that noradrenaline release from LC has persistent effects on odor processing already at the first synapse of the main olfactory system. This mechanism could contribute to arousal-dependent memories.
59
Ziskind AJ, Emondi AA, Kurgansky AV, Rebrik SP, Miller KD. Neurons in cat V1 show significant clustering by degree of tuning. J Neurophysiol 2015;113:2555-81. PMID: 25652921; DOI: 10.1152/jn.00646.2014.
Abstract
Neighboring neurons in cat primary visual cortex (V1) have similar preferred orientation, direction, and spatial frequency. How diverse is their degree of tuning for these properties? To address this, we used single-tetrode recordings to simultaneously isolate multiple cells at single recording sites and record their responses to flashed and drifting gratings of multiple orientations, spatial frequencies, and, for drifting gratings, directions. Orientation tuning width, spatial frequency tuning width, and direction selectivity index (DSI) all showed significant clustering: pairs of neurons recorded at a single site were significantly more similar in each of these properties than pairs of neurons from different recording sites. The strength of the clustering was generally modest. The percent decrease in the median difference between pairs from the same site, relative to pairs from different sites, was as follows: for different measures of orientation tuning width, 29-35% (drifting gratings) or 15-25% (flashed gratings); for DSI, 24%; and for spatial frequency tuning width measured in octaves, 8% (drifting gratings). The clusterings of all of these measures were much weaker than for preferred orientation (68% decrease) but comparable to that seen for preferred spatial frequency in response to drifting gratings (26%). For the above properties, little difference in clustering was seen between simple and complex cells. In studies of spatial frequency tuning to flashed gratings, strong clustering was seen among simple-cell pairs for tuning width (70% decrease) and preferred frequency (71% decrease), whereas no clustering was seen for simple-complex or complex-complex cell pairs.
Affiliation(s)
- Avi J Ziskind
- Center for Theoretical Neuroscience, Columbia University, New York, New York
- Al A Emondi
- Center for Theoretical Neuroscience, Columbia University, New York, New York
- Andrei V Kurgansky
- Center for Theoretical Neuroscience, Columbia University, New York, New York
- Sergei P Rebrik
- Center for Theoretical Neuroscience, Columbia University, New York, New York
- Kenneth D Miller
- Center for Theoretical Neuroscience, Columbia University, New York, New York
60
Chadwick A, van Rossum MCW, Nolan MF. Independent theta phase coding accounts for CA1 population sequences and enables flexible remapping. eLife 2015;4:e03542. PMID: 25643396; PMCID: PMC4383210; DOI: 10.7554/eLife.03542.
Abstract
Hippocampal place cells encode an animal's past, current, and future location through sequences of action potentials generated within each cycle of the network theta rhythm. These sequential representations have been suggested to result from temporally coordinated synaptic interactions within and between cell assemblies. Instead, we find through simulations and analysis of experimental data that rate and phase coding in independent neurons is sufficient to explain the organization of CA1 population activity during theta states. We show that CA1 population activity can be described as an evolving traveling wave that exhibits phase coding, rate coding, and spike sequences, and that generates an emergent population theta rhythm. We identify measures of global remapping and intracellular theta dynamics as critical for distinguishing mechanisms for pacemaking and coordination of sequential population activity. Our analysis suggests that, unlike synaptically coupled assemblies, independent neurons flexibly generate sequential population activity within the duration of a single theta cycle.

When we explore a new place, we naturally create a mental map of the location as we go. This mental map is stored in a region of the brain called the hippocampus, which contains cells called place cells. These cells can carry information about our past, present, and future location in the form of electrical signals. They connect to each other to form networks, and it has been proposed that these connections can store the information needed for the mental maps.

Real-time maps are represented in the information carried by the electrical signals themselves. A physical location is specified by the individual place cell that is activated, and by the timing of the electrical signal it produces relative to a "brain wave" called the theta rhythm. Brain waves are patterns of electrical signals activated in sets of brain cells, and the theta rhythm is produced in the hippocampus of an animal as it explores its surroundings.

Previous experiments suggested that when a rat explores an area, several sets of brain cells in the hippocampus are activated in sequence within each cycle of the theta rhythm. As the rat moves forward, the sequence shifts to different sets of cells to reflect the upcoming locations ahead of the rat. It has been thought that these sequences are triggered by the individual connections between the place cells.

Here, Chadwick et al. developed mathematical models of the electrical activity in the brains of rats as they explored. They used these models to analyze data from previous experiments and found that the sequences of electrical activity arise from the timing of each cell's activity relative to the theta rhythm, rather than from the connections between the cells.

Chadwick et al.'s findings suggest that the mental map may be highly flexible, allowing vast numbers of distinct memories to be stored within the same network of place cells without interference. Future studies will involve investigating the role of brain waves in forming new mental maps and creating new memories.
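A toy version of the independent-coding idea (illustrative parameters, not the authors' model): each cell's rate is a place-field envelope times a theta modulation whose preferred phase precesses with position, which by itself produces the late-to-early phase shift across the field, with no synaptic coupling between cells.

```python
import numpy as np

f_theta = 8.0                        # theta frequency (Hz)
v, x_c, sigma = 0.25, 1.0, 0.15      # running speed (m/s), field center/width (m)

t = np.arange(0.0, 8.0, 0.001)       # 8 s traversal, 1 ms steps
x = v * t                            # animal position
theta_phase = np.mod(2 * np.pi * f_theta * t, 2 * np.pi)

# Independent coding: a place-field rate envelope multiplied by a theta
# modulation whose preferred phase precesses linearly with position
envelope = 20.0 * np.exp(-((x - x_c) ** 2) / (2 * sigma ** 2))
pref_phase = np.pi - (x - x_c) / sigma * (np.pi / 2)
rate = envelope * (1.0 + np.cos(theta_phase - pref_phase))

# Rate-weighted mean firing phase early vs late in the field: activity
# shifts from late to early theta phases as the animal crosses the field
early = (x > x_c - sigma) & (x < x_c)
late = (x > x_c) & (x < x_c + sigma)
mean_phase_early = np.average(theta_phase[early], weights=rate[early])
mean_phase_late = np.average(theta_phase[late], weights=rate[late])
```

With many such cells at shifted field centers, the population activity in each theta cycle sweeps through the ordered field positions, giving theta sequences without assembly coupling.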
Affiliation(s)
- Angus Chadwick
- Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, United Kingdom
- Mark C W van Rossum
- Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, United Kingdom
- Matthew F Nolan
- Centre for Integrative Physiology, University of Edinburgh, Edinburgh, United Kingdom
61
The development of cortical circuits for motion discrimination. Nat Neurosci 2015;18:252-61. PMID: 25599224; PMCID: PMC4334116; DOI: 10.1038/nn.3921.
Abstract
Stimulus discrimination depends on the selectivity and variability of neural responses, as well as the size and correlation structure of the responsive population. For direction discrimination in visual cortex, only the selectivity of neurons has been well characterized across development. Here we show in ferrets that at eye opening, the cortical response to visual stimulation exhibits several immaturities: a high density of active neurons that display prominent wave-like activity, a high degree of variability, and strong noise correlations. Over the next three weeks, the population response becomes increasingly sparse, wave-like activity disappears, and variability and noise correlations are markedly reduced. Similar changes are observed in identified neuronal populations imaged repeatedly over days. Furthermore, experience with a moving stimulus can drive a reduction in noise correlations over a matter of hours. These changes in variability and correlation contribute significantly to a marked improvement in direction discriminability over development.
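The effect of noise correlations on discriminability can be illustrated with a standard two-neuron linear discriminability calculation (a textbook-style sketch, not the paper's analysis):

```python
import numpy as np

def dprime_sq(dmu, cov):
    """Squared linear discriminability between two stimuli, given the
    difference in mean population responses and the noise covariance."""
    return float(dmu @ np.linalg.solve(cov, dmu))

dmu = np.array([1.0, 1.0])                         # two similarly tuned neurons
cov_immature = np.array([[1.0, 0.6], [0.6, 1.0]])  # strong noise correlation
cov_mature = np.eye(2)                             # decorrelated noise

d2_immature = dprime_sq(dmu, cov_immature)         # 2 / (1 + 0.6) = 1.25
d2_mature = dprime_sq(dmu, cov_mature)             # 2.0
# for similarly tuned neurons, removing positive noise correlations
# improves discriminability, consistent with the developmental trend
```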
62
Hung CP, Cui D, Chen YP, Lin CP, Levine MR. Correlated activity supports efficient cortical processing. Front Comput Neurosci 2015;8:171. PMID: 25610392; PMCID: PMC4285095; DOI: 10.3389/fncom.2014.00171.
Abstract
Visual recognition is a computational challenge that is thought to occur via efficient coding. An important concept is sparseness, a measure of coding efficiency. The prevailing view is that sparseness supports efficiency by minimizing redundancy and correlations in spiking populations. Yet, we recently reported that "choristers", neurons that behave more similarly (have correlated stimulus preferences and spontaneous coincident spiking), carry more generalizable object information than uncorrelated neurons ("soloists") in macaque inferior temporal (IT) cortex. The rarity of choristers (as low as 6% of IT neurons) indicates that they were likely missed in previous studies. Here, we report that correlation strength is distinct from sparseness (choristers are not simply broadly tuned neurons), that choristers are located in non-granular output layers, and that correlated activity predicts human visual search efficiency. These counterintuitive results suggest that a redundant correlational structure supports efficient processing and behavior.
Affiliation(s)
- Chou P Hung
- Department of Neuroscience, Georgetown University, Washington, D.C., USA; Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan
- Ding Cui
- Department of Neuroscience, Georgetown University, Washington, D.C., USA
- Yueh-Peng Chen
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan
- Chia-Pei Lin
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan
- Matthew R Levine
- Department of Neuroscience, Georgetown University, Washington, D.C., USA
63
Spratling MW. Classification using sparse representations: a biologically plausible approach. Biol Cybern 2014;108:61-73. PMID: 24306061; DOI: 10.1007/s00422-013-0579-x.
Abstract
Representing signals as linear combinations of basis vectors sparsely selected from an overcomplete dictionary has proven to be advantageous for many applications in pattern recognition, machine learning, signal processing, and computer vision. While this approach was originally inspired by insights into cortical information processing, biologically plausible approaches have been limited to exploring the functionality of early sensory processing in the brain, while more practical applications have employed non-biologically plausible sparse coding algorithms. Here, a biologically plausible algorithm is proposed that can be applied to practical problems. This algorithm is evaluated using standard benchmark tasks in the domain of pattern classification, and its performance is compared to a wide range of alternative algorithms that are widely used in signal and image processing. The results show that for the classification tasks performed here, the proposed method is competitive with the best of the alternative algorithms that have been evaluated. This demonstrates that classification using sparse representations can be performed in a neurally plausible manner, and hence, that this mechanism of classification might be exploited by the brain.
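A non-biological baseline for classification via sparse representations (the paper's algorithm is different and biologically plausible; everything below is an illustrative assumption): encode the input over a dictionary of labeled training samples with basic iterative soft-thresholding, then assign the class whose atoms best reconstruct the input.

```python
import numpy as np

def ista(D, x, lam=0.1, steps=300):
    """Iterative soft-thresholding: sparse code a minimizing
    0.5 * ||x - D @ a||^2 + lam * ||a||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        g = a + D.T @ (x - D @ a) / L      # gradient step
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return a

rng = np.random.default_rng(5)
proto0, proto1 = rng.normal(size=(2, 32))  # prototype patterns for two classes
D = np.column_stack([proto0[:, None] + 0.1 * rng.normal(size=(32, 10)),
                     proto1[:, None] + 0.1 * rng.normal(size=(32, 10))])
D /= np.linalg.norm(D, axis=0)             # dictionary of 20 training samples
labels = np.array([0] * 10 + [1] * 10)

x = proto1 / np.linalg.norm(proto1) + 0.05 * rng.normal(size=32)  # class-1 input
a = ista(D, x)

# classify by which class's atoms reconstruct x best from its own coefficients
resid = [np.linalg.norm(x - D[:, labels == c] @ a[labels == c]) for c in (0, 1)]
pred = int(np.argmin(resid))
```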
Affiliation(s)
- M W Spratling
- Department of Informatics, King's College London, Strand, London, WC2R 2LS, UK
64
Zylberberg J, DeWeese MR. Sparse coding models can exhibit decreasing sparseness while learning sparse codes for natural images. PLoS Comput Biol 2013;9:e1003182. PMID: 24009489; PMCID: PMC3757070; DOI: 10.1371/journal.pcbi.1003182.
Abstract
The sparse coding hypothesis has enjoyed much success in predicting response properties of simple cells in primary visual cortex (V1) based solely on the statistics of natural scenes. In typical sparse coding models, model neuron activities and receptive fields are optimized to accurately represent input stimuli using the least amount of neural activity. As these networks develop to represent a given class of stimulus, the receptive fields are refined so that they capture the most important stimulus features. Intuitively, this is expected to result in sparser network activity over time. Recent experiments, however, show that stimulus-evoked activity in ferret V1 becomes less sparse during development, presenting an apparent challenge to the sparse coding hypothesis. Here we demonstrate that some sparse coding models, such as those employing homeostatic mechanisms on neural firing rates, can exhibit decreasing sparseness during learning, while still achieving good agreement with mature V1 receptive field shapes and a reasonably sparse mature network state. We conclude that observed developmental trends do not rule out sparseness as a principle of neural coding per se: a mature network can perform sparse coding even if sparseness decreases somewhat during development. To make comparisons between model and physiological receptive fields, we introduce a new nonparametric method for comparing receptive field shapes using image registration techniques.
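The sparseness trend discussed here is typically quantified with a measure such as the Vinje-Gallant index (one common choice for illustration; the paper's exact measures may differ):

```python
import numpy as np

def sparseness(a):
    """Vinje-Gallant sparseness of a nonnegative rate vector: 0 for perfectly
    uniform activity, 1 for a one-hot response."""
    a = np.asarray(a, dtype=float)
    n = a.size
    A = a.mean() ** 2 / np.mean(a ** 2)    # Treves-Rolls activity ratio
    return (1.0 - A) / (1.0 - 1.0 / n)

dense = np.ones(100)                       # every unit equally active
onehot = np.zeros(100)
onehot[0] = 1.0                            # a single active unit
rng = np.random.default_rng(6)
heavy = rng.exponential(size=100)          # intermediate, heavy-tailed rates

s_dense, s_onehot, s_heavy = (sparseness(dense), sparseness(onehot),
                              sparseness(heavy))
```

Tracking such an index over learning epochs is how a model's sparseness trajectory can be compared against the ferret developmental data.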
Affiliation(s)
- Joel Zylberberg
- Department of Physics, University of California, Berkeley, Berkeley, California, United States of America
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, California, United States of America
- Michael Robert DeWeese
- Department of Physics, University of California, Berkeley, Berkeley, California, United States of America
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, California, United States of America
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California, United States of America