1. Tobin M, Sheth JN, Wood KC, Michel EK, Geffen MN. Distinct inhibitory neurons differently shape neuronal codes for sound intensity in the auditory cortex. bioRxiv 2024:2023.02.01.526470. PMID: 36778269; PMCID: PMC9915672; DOI: 10.1101/2023.02.01.526470.
Abstract
Cortical circuits contain multiple types of inhibitory neurons which shape how information is processed within neuronal networks. Here, we asked whether somatostatin-expressing (SST) and vasoactive intestinal peptide-expressing (VIP) inhibitory neurons have distinct effects on population neuronal responses to noise bursts of varying intensities. We optogenetically stimulated SST or VIP neurons while simultaneously measuring the calcium responses of populations of hundreds of neurons in the auditory cortex of male and female awake, head-fixed mice to sounds. Upon SST neuronal activation, noise-burst representations became more discrete across intensity levels, relying on cell identity rather than response strength. By contrast, upon VIP neuronal activation, noise bursts of different intensity levels activated overlapping neuronal populations, albeit at different response strengths. At the single-cell level, SST and VIP neuronal activation differentially modulated the response-level curves of monotonic and nonmonotonic neurons. SST neuronal activation effects were consistent with a shift of the neuronal population responses toward a more localist code, with different cells responding to sounds of different intensity. By contrast, VIP neuronal activation shifted responses towards a more distributed code, in which sounds of different intensity levels are encoded in the relative response of similar populations of cells. These results delineate how distinct inhibitory neurons in the auditory cortex dynamically control cortical population codes. Different inhibitory neuronal populations may be recruited under different behavioral demands, depending on whether categorical or invariant representations are advantageous for the task.
2. Mòdol L, Moissidis M, Selten M, Oozeer F, Marín O. Somatostatin interneurons control the timing of developmental desynchronization in cortical networks. Neuron 2024; 112:2015-2030.e5. PMID: 38599213; DOI: 10.1016/j.neuron.2024.03.014.
Abstract
Synchronous neuronal activity is a hallmark of the developing brain. In the mouse cerebral cortex, activity decorrelates during the second week of postnatal development, progressively acquiring the characteristic sparse pattern underlying the integration of sensory information. The maturation of inhibition seems critical for this process, but the interneurons involved in this crucial transition of network activity in the developing cortex remain unknown. Using in vivo longitudinal two-photon calcium imaging during the period that precedes the change from highly synchronous to decorrelated activity, we identify somatostatin-expressing (SST+) interneurons as critical modulators of this switch in mice. Modulation of the activity of SST+ cells accelerates or delays the decorrelation of cortical network activity, a process that involves regulating the maturation of parvalbumin-expressing (PV+) interneurons. SST+ cells critically link sensory inputs with local circuits, controlling the neural dynamics in the developing cortex while modulating the integration of other interneurons into nascent cortical circuits.
Affiliation(s)
- Laura Mòdol
- Centre for Developmental Neurobiology, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK; MRC Centre for Neurodevelopmental Disorders, King's College London, London, UK
- Monika Moissidis
- Centre for Developmental Neurobiology, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK; MRC Centre for Neurodevelopmental Disorders, King's College London, London, UK
- Martijn Selten
- Centre for Developmental Neurobiology, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK; MRC Centre for Neurodevelopmental Disorders, King's College London, London, UK
- Fazal Oozeer
- Centre for Developmental Neurobiology, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK; MRC Centre for Neurodevelopmental Disorders, King's College London, London, UK
- Oscar Marín
- Centre for Developmental Neurobiology, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK; MRC Centre for Neurodevelopmental Disorders, King's College London, London, UK
3. Eckmann S, Young EJ, Gjorgjieva J. Synapse-type-specific competitive Hebbian learning forms functional recurrent networks. Proc Natl Acad Sci U S A 2024; 121:e2305326121. PMID: 38870059; PMCID: PMC11194505; DOI: 10.1073/pnas.2305326121.
Abstract
Cortical networks exhibit complex stimulus-response patterns that are based on specific recurrent interactions between neurons. For example, the balance between excitatory and inhibitory currents has been identified as a central component of cortical computations. However, it remains unclear how the required synaptic connectivity can emerge in developing circuits where synapses between excitatory and inhibitory neurons are simultaneously plastic. Using theory and modeling, we propose that a wide range of cortical response properties can arise from a single plasticity paradigm that acts simultaneously at all excitatory and inhibitory connections: Hebbian learning that is stabilized by the synapse-type-specific competition for a limited supply of synaptic resources. In plastic recurrent circuits, this competition enables the formation and decorrelation of inhibition-balanced receptive fields. Networks develop an assembly structure with stronger synaptic connections between similarly tuned excitatory and inhibitory neurons and exhibit response normalization and orientation-specific center-surround suppression, reflecting the stimulus statistics during training. These results demonstrate how neurons can self-organize into functional networks and suggest an essential role for synapse-type-specific competitive learning in the development of cortical circuits.
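Concretely, the proposed mechanism can be illustrated with a minimal sketch (not the authors' model; all sizes, rates, and resource budgets below are invented toy values): Hebbian potentiation combined with synapse-type-specific divisive normalization, so that excitatory and inhibitory synapses onto a neuron each compete for their own fixed pool of synaptic resources.

```python
import numpy as np

rng = np.random.default_rng(0)

n_exc, n_inh = 20, 5
w_exc = rng.uniform(0.1, 1.0, n_exc)   # excitatory input weights
w_inh = rng.uniform(0.1, 1.0, n_inh)   # inhibitory input weights
W_E, W_I = 5.0, 2.0                    # fixed resource budget per synapse type
eta = 0.05

for _ in range(500):
    x_e = rng.poisson(2.0, n_exc).astype(float)   # presynaptic rates
    x_i = rng.poisson(2.0, n_inh).astype(float)
    y = max(w_exc @ x_e - w_inh @ x_i, 0.0)       # rectified postsynaptic rate
    # Hebbian potentiation at both synapse types
    w_exc += eta * y * x_e
    w_inh += eta * y * x_i
    # synapse-type-specific competition: each type shares a fixed total weight
    w_exc *= W_E / w_exc.sum()
    w_inh *= W_I / w_inh.sum()
```

Because each synapse type is renormalized to its own budget after every update, potentiating one synapse necessarily depresses its same-type competitors, which is the competition mechanism the abstract refers to.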
Affiliation(s)
- Samuel Eckmann
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, 60438 Frankfurt am Main, Germany
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
- Edward James Young
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
- Julijana Gjorgjieva
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, 60438 Frankfurt am Main, Germany
- School of Life Sciences, Technical University Munich, 85354 Freising, Germany
4. de Brito CSN, Gerstner W. Learning what matters: Synaptic plasticity with invariance to second-order input correlations. PLoS Comput Biol 2024; 20:e1011844. PMID: 38346073; PMCID: PMC10890752; DOI: 10.1371/journal.pcbi.1011844.
Abstract
Cortical populations of neurons develop sparse representations adapted to the statistics of the environment. To learn efficient population codes, synaptic plasticity mechanisms must differentiate relevant latent features from spurious input correlations, which are omnipresent in cortical networks. Here, we develop a theory for sparse coding and synaptic plasticity that is invariant to second-order correlations in the input. Going beyond classical Hebbian learning, our learning objective explains the functional form of observed excitatory plasticity mechanisms, showing how Hebbian long-term depression (LTD) cancels the sensitivity to second-order correlations so that receptive fields become aligned with features hidden in higher-order statistics. Invariance to second-order correlations enhances the versatility of biologically realistic learning models, supporting optimal decoding from noisy inputs and sparse population coding from spatially correlated stimuli. In a spiking model with triplet spike-timing-dependent plasticity (STDP), we show that individual neurons can learn localized oriented receptive fields, circumventing the need for input preprocessing, such as whitening, or population-level lateral inhibition. The theory advances our understanding of local unsupervised learning in cortical circuits, offers new interpretations of the Bienenstock-Cooper-Munro and triplet STDP models, and assigns a specific functional role to synaptic LTD mechanisms in pyramidal neurons.
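The Bienenstock-Cooper-Munro (BCM) model that this theory reinterprets can be illustrated in a few lines: a sliding threshold separates LTP from LTD, and the sub-threshold regime supplies the depressive term discussed above. This is the generic BCM rule with invented toy parameters, not the paper's derived objective.

```python
import numpy as np

rng = np.random.default_rng(1)
patterns = np.array([[1.0, 0.0], [0.0, 1.0]])   # two competing input patterns
w = rng.uniform(0.4, 0.6, 2)
theta = 0.5            # sliding LTP/LTD threshold
eta, tau = 0.05, 50.0

for _ in range(5000):
    x = patterns[rng.integers(2)]
    y = w @ x
    w += eta * x * y * (y - theta)      # LTP above theta, LTD below
    w = np.clip(w, 0.0, None)
    theta += (y**2 - theta) / tau       # threshold tracks E[y^2]

# the sliding threshold typically makes the neuron selective:
# one weight grows while the other is depressed
```

The LTD branch (y below theta) is what removes sensitivity to merely correlated inputs, which is the role the abstract assigns to Hebbian long-term depression.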
Affiliation(s)
- Carlos Stein Naves de Brito
- École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Champalimaud Research, Champalimaud Centre for the Unknown, Lisbon, Portugal
- Wulfram Gerstner
- École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
5. Andrei AR, Akil AE, Kharas N, Rosenbaum R, Josić K, Dragoi V. Rapid compensatory plasticity revealed by dynamic correlated activity in monkeys in vivo. Nat Neurosci 2023; 26:1960-1969. PMID: 37828225; DOI: 10.1038/s41593-023-01446-w.
Abstract
To produce adaptive behavior, neural networks must balance between plasticity and stability. Computational work has demonstrated that network stability requires plasticity mechanisms to be counterbalanced by rapid compensatory processes. However, such processes have yet to be experimentally observed. Here we demonstrate that repeated optogenetic activation of excitatory neurons in monkey visual cortex (area V1) induces a population-wide dynamic reduction in the strength of neuronal interactions over the timescale of minutes during the awake state, but not during rest. This new form of rapid plasticity was observed only in the correlation structure, with firing rates remaining stable across trials. A computational network model operating in the balanced regime confirmed experimental findings and revealed that inhibitory plasticity is responsible for the decrease in correlated activity in response to repeated light stimulation. These results provide the first experimental evidence for rapid homeostatic plasticity that primarily operates during wakefulness, which stabilizes neuronal interactions during strong network co-activation.
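The population-level quantity at stake here, pairwise noise correlation, is straightforward to compute from trial-by-trial responses. Below is a generic illustrative sketch on synthetic data (not the study's recordings): residuals are obtained by removing the stimulus-locked mean, and their pairwise correlations are the noise correlations.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_neurons = 200, 4
# simulated spike counts with a shared gain fluctuation -> positive noise correlations
shared = rng.normal(0, 1, n_trials)
rates = 10 + 2 * shared[:, None] + rng.normal(0, 1, (n_trials, n_neurons))

def noise_correlations(resp):
    """Pairwise correlations of trial-to-trial residuals (stimulus mean removed)."""
    resid = resp - resp.mean(axis=0)
    c = np.corrcoef(resid.T)
    iu = np.triu_indices_from(c, k=1)
    return c[iu]

rsc = noise_correlations(rates)   # one value per neuron pair
```

A drop in this quantity with stable mean rates, as reported above, changes the correlation structure without changing the trial-averaged tuning.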
Affiliation(s)
- Ariana R Andrei
- Department of Neurobiology and Anatomy, University of Texas, Houston, TX, USA
- Alan E Akil
- Departments of Mathematics, Biology and Biochemistry, University of Houston, Houston, TX, USA
- Natasha Kharas
- Department of Neurobiology and Anatomy, University of Texas, Houston, TX, USA
- Robert Rosenbaum
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, IN, USA
- Krešimir Josić
- Departments of Mathematics, Biology and Biochemistry, University of Houston, Houston, TX, USA
- Valentin Dragoi
- Department of Neurobiology and Anatomy, University of Texas, Houston, TX, USA
- Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
6. Halvagal MS, Zenke F. The combination of Hebbian and predictive plasticity learns invariant object representations in deep sensory networks. Nat Neurosci 2023; 26:1906-1915. PMID: 37828226; PMCID: PMC10620089; DOI: 10.1038/s41593-023-01460-y.
Abstract
Recognition of objects from sensory stimuli is essential for survival. To that end, sensory networks in the brain must form object representations invariant to stimulus changes, such as size, orientation and context. Although Hebbian plasticity is known to shape sensory networks, it fails to create invariant object representations in computational models, raising the question of how the brain achieves such processing. In the present study, we show that combining Hebbian plasticity with a predictive form of plasticity leads to invariant representations in deep neural network models. We derive a local learning rule that generalizes to spiking neural networks and naturally accounts for several experimentally observed properties of synaptic plasticity, including metaplasticity and spike-timing-dependent plasticity. Finally, our model accurately captures neuronal selectivity changes observed in the primate inferotemporal cortex in response to altered visual experience. Thus, we provide a plausible normative theory emphasizing the importance of predictive plasticity mechanisms for successful representational learning.
Affiliation(s)
- Manu Srinath Halvagal
- Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
- Faculty of Science, University of Basel, Basel, Switzerland
- Friedemann Zenke
- Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
- Faculty of Science, University of Basel, Basel, Switzerland
7. Chapochnikov NM, Pehlevan C, Chklovskii DB. Normative and mechanistic model of an adaptive circuit for efficient encoding and feature extraction. Proc Natl Acad Sci U S A 2023; 120:e2117484120. PMID: 37428907; PMCID: PMC10629579; DOI: 10.1073/pnas.2117484120.
Abstract
One major question in neuroscience is how to relate connectomes to neural activity, circuit function, and learning. We offer an answer in the peripheral olfactory circuit of the Drosophila larva, composed of olfactory receptor neurons (ORNs) connected through feedback loops with interconnected inhibitory local neurons (LNs). We combine structural and activity data and, using a holistic normative framework based on similarity-matching, we formulate biologically plausible mechanistic models of the circuit. In particular, we consider a linear circuit model, for which we derive an exact theoretical solution, and a nonnegative circuit model, which we examine through simulations. The latter largely predicts the ORN → LN synaptic weights found in the connectome and demonstrates that they reflect correlations in ORN activity patterns. Furthermore, this model accounts for the relationship between ORN → LN and LN-LN synaptic counts and the emergence of different LN types. Functionally, we propose that LNs encode soft cluster memberships of ORN activity, and partially whiten and normalize the stimulus representations in ORNs through inhibitory feedback. Such a synaptic organization could, in principle, autonomously arise through Hebbian plasticity and would allow the circuit to adapt to different environments in an unsupervised manner. We thus uncover a general and potent circuit motif that can learn and extract significant input features and render stimulus representations more efficient. Finally, our study provides a unified framework for relating structure, activity, function, and learning in neural circuits and supports the conjecture that similarity-matching shapes the transformation of neural representations.
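One building block of such circuits, anti-Hebbian lateral inhibition that decorrelates (partially whitens) population activity, can be sketched directly. This is a generic Földiák-style toy circuit standing in for the inhibitory-feedback motif, not the paper's similarity-matching model; the dimensions and learning rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n_samples, eta = 3, 20000, 0.005
A = rng.normal(0, 1, (d, d))
X = rng.normal(0, 1, (n_samples, d)) @ A.T    # correlated inputs (ORN-like)

M = np.zeros((d, d))                          # lateral inhibitory weights
for x in X:
    y = np.linalg.solve(np.eye(d) + M, x)     # recurrent circuit steady state
    M += eta * np.outer(y, y)                 # anti-Hebbian lateral learning
    np.fill_diagonal(M, 0.0)                  # no self-inhibition

Y = np.linalg.solve(np.eye(d) + M, X.T).T
R = np.corrcoef(Y.T)
off_diag = np.abs(R[~np.eye(d, dtype=bool)]).max()
# lateral weights stop changing only when output pairs are decorrelated
```

The fixed point of the lateral rule is zero pairwise output correlation, which is the decorrelating, normalizing role the abstract ascribes to inhibitory feedback.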
Affiliation(s)
- Nikolai M. Chapochnikov
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010
- Department of Neurology, New York University School of Medicine, New York, NY 10016
- Cengiz Pehlevan
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, Cambridge, MA 02138
- Dmitri B. Chklovskii
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010
- Neuroscience Institute, New York University School of Medicine, New York, NY 10016
8. Koren V, Bondanelli G, Panzeri S. Computational methods to study information processing in neural circuits. Comput Struct Biotechnol J 2023; 21:910-922. PMID: 36698970; PMCID: PMC9851868; DOI: 10.1016/j.csbj.2023.01.009.
Abstract
The brain is an information processing machine and thus naturally lends itself to be studied using computational tools based on the principles of information theory. For this reason, computational methods based on or inspired by information theory have been a cornerstone of practical and conceptual progress in neuroscience. In this Review, we address how concepts and computational tools related to information theory are spurring the development of principled theories of information processing in neural circuits and the development of influential mathematical methods for the analyses of neural population recordings. We review how these computational approaches reveal mechanisms of essential functions performed by neural circuits. These functions include efficiently encoding sensory information and facilitating the transmission of information to downstream brain areas to inform and guide behavior. Finally, we discuss how further progress and insights can be achieved, in particular by studying how competing requirements of neural encoding and readout may be optimally traded off to optimize neural information processing.
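As a minimal example of the quantities such methods build on, Shannon mutual information between a stimulus and a response can be computed directly from their joint distribution. This is a standard textbook computation, not code from the review; the joint table below is an invented toy channel.

```python
import numpy as np

def mutual_information(joint):
    """I(S;R) in bits from a joint probability table p(s, r)."""
    ps = joint.sum(axis=1, keepdims=True)   # marginal over stimuli
    pr = joint.sum(axis=0, keepdims=True)   # marginal over responses
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])).sum())

# toy channel: 2 equiprobable stimuli, noisy binary response with p(r=s) = 0.9
# -> I(S;R) = 1 - H(0.9) ≈ 0.531 bits
joint = np.array([[0.45, 0.05],
                  [0.05, 0.45]])
info = mutual_information(joint)
```

Practical analyses of neural recordings add bias corrections and estimators for limited sampling, but they reduce to this same definition.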
Affiliation(s)
- Veronika Koren
- Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Falkenried 94, Hamburg 20251, Germany
- Stefano Panzeri
- Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Falkenried 94, Hamburg 20251, Germany
- Istituto Italiano di Tecnologia, Via Melen 83, Genova 16152, Italy
9. Rohlfs C. A descriptive analysis of olfactory sensation and memory in Drosophila and its relation to artificial neural networks. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.10.068.
10. Mehra M, Mukesh A, Bandyopadhyay S. Separate Functional Subnetworks of Excitatory Neurons Show Preference to Periodic and Random Sound Structures. J Neurosci 2022; 42:3165-3183. PMID: 35241488; PMCID: PMC8994540; DOI: 10.1523/jneurosci.0333-21.2022.
Abstract
Auditory cortex (ACX) neurons are sensitive to spectro-temporal sound patterns and violations in patterns induced by rare stimuli embedded within streams of sounds. We investigate the auditory cortical representation of repeated presentations of sequences of sounds with standard stimuli (common) with an embedded deviant (rare) stimulus in two conditions, Periodic (Fixed deviant position) or Random (Random deviant position). We used extracellular single-unit and two-photon Ca2+ imaging recordings in layer 2/3 neurons of the mouse (Mus musculus) ACX of either sex. Population single-unit average responses increased over repetitions in the Random condition and were suppressed or did not change in the Periodic condition, showing general irregularity preference. A subset of neurons showed the opposite behavior, indicating regularity preference. Furthermore, pairwise noise correlations were higher in the Random condition than in the Periodic condition, suggesting a role of recurrent connections in the observed differential adaptation. Functional two-photon Ca2+ imaging showed that excitatory (EX), and inhibitory (IN) neurons [parvalbumin-positive (PV) and somatostatin-positive (SOM)] also had different categories of long-term adaptation as observed with single-units. However, examination of functional connectivity between pairs of neurons of different categories showed that EX-PV connected pairs behaved opposite to the EX-EX and EX-SOM pairs, with more connections outside category in Random condition than Periodic condition. Finally, considering Regularity, Irregularity, and no preference of connected pairs of neurons showed that EX-EX and EX-SOM pairs were in largely separate functional subnetworks with different preferences, not EX-PV pairs. 
Thus, separate subnetworks underlie the coding of periodic and random sound sequences.

SIGNIFICANCE STATEMENT Studying how auditory cortex (ACX) neurons respond to streams of sound sequences helps us understand the importance of changes in the dynamic, noisy acoustic scenes around us. Humans and animals are sensitive to regularity and its violations in sound sequences. Psychophysical tasks in humans show that the auditory brain responds differentially to Periodic and Random structures, independent of the listener's attentional state. Here, we show that mouse ACX L2/3 neurons detect changes and respond differently to patterns over long time scales. The differential functional-connectivity profiles obtained in response to two different sound contexts suggest the vital role of recurrent connections in the auditory cortical network. Furthermore, excitatory-inhibitory neuronal interactions can contribute to detecting changing sound patterns.
Affiliation(s)
- Muneshwar Mehra
- Information Processing Laboratory, Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology Kharagpur, 721302, India
- Advanced Technology Development Centre, Indian Institute of Technology Kharagpur, 721302, India
- Adarsh Mukesh
- Information Processing Laboratory, Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology Kharagpur, 721302, India
- Advanced Technology Development Centre, Indian Institute of Technology Kharagpur, 721302, India
- Sharba Bandyopadhyay
- Information Processing Laboratory, Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology Kharagpur, 721302, India
- Advanced Technology Development Centre, Indian Institute of Technology Kharagpur, 721302, India
11. Alreja A, Nemenman I, Rozell CJ. Constrained brain volume in an efficient coding model explains the fraction of excitatory and inhibitory neurons in sensory cortices. PLoS Comput Biol 2022; 18:e1009642. PMID: 35061666; PMCID: PMC8809590; DOI: 10.1371/journal.pcbi.1009642.
Abstract
The number of neurons in mammalian cortex varies by multiple orders of magnitude across different species. In contrast, the ratio of excitatory to inhibitory neurons (E:I ratio) varies in a much smaller range, from 3:1 to 9:1, and remains roughly constant for different sensory areas within a species. Despite this structure being important for understanding the function of neural circuits, the reason for this consistency is not yet understood. While recent models of vision based on the efficient coding hypothesis show that increasing the number of both excitatory and inhibitory cells improves stimulus representation, the two cannot increase simultaneously due to constraints on brain volume. In this work, we implement an efficient coding model of vision under a constraint on the volume (using number of neurons as a surrogate) while varying the E:I ratio. We show that the performance of the model is optimal at biologically observed E:I ratios under several metrics. We argue that this happens due to trade-offs between the computational accuracy and the representation capacity for natural stimuli. Further, we make experimentally testable predictions that 1) the optimal E:I ratio should be higher for species with a higher sparsity in the neural activity and 2) the character of inhibitory synaptic distributions and firing rates should change depending on E:I ratio. Our findings, which are supported by our new preliminary analyses of publicly available data, provide the first quantitative and testable hypothesis based on optimal coding models for the distribution of excitatory and inhibitory neural types in the mammalian sensory cortices.

Neurons in the brain come in two main types: excitatory and inhibitory. The interplay between them shapes neural computation. Despite brain sizes varying by several orders of magnitude across species, the ratio of excitatory and inhibitory sub-populations (E:I ratio) remains relatively constant, and we don't know why. Simulations of theoretical models of the brain can help answer such questions, especially when experiments are prohibitive or impossible. Here we placed one such theoretical model of sensory coding ('sparse coding', which minimizes the number of simultaneously active neurons) under a biophysical 'volume' constraint that fixes the total number of neurons available. We vary the E:I ratio in the model (which cannot be done in experiments), and reveal an optimal E:I ratio where the representation of the sensory stimulus and energy consumption within the circuit are concurrently optimal. We also show that varying the population sparsity changes the optimal E:I ratio, spanning the relatively narrow range observed in biology. Crucially, this minimally parameterized theoretical model makes predictions about structure (recurrent connectivity) and activity (population sparsity) in neural circuits with different E:I ratios (i.e., different species), of which we verify the latter in a first-of-its-kind inter-species comparison using newly available public data.
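The sparse coding machinery underlying this model family can be sketched with a standard l1-penalized encoder. Below is a generic ISTA iteration on a random unit-norm dictionary, purely illustrative (the study's dictionary, metrics, and E/I split are not reproduced here; all sizes and the penalty are invented).

```python
import numpy as np

rng = np.random.default_rng(5)
n_pix, n_cells = 16, 32                 # overcomplete dictionary
D = rng.normal(0, 1, (n_pix, n_cells))
D /= np.linalg.norm(D, axis=0)          # unit-norm dictionary elements
x = rng.normal(0, 1, n_pix)             # input "image patch"

lam = 0.1                               # sparsity penalty
L = np.linalg.norm(D.T @ D, 2)          # Lipschitz constant of the gradient
a = np.zeros(n_cells)
for _ in range(500):
    grad = D.T @ (D @ a - x)
    z = a - grad / L
    a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

recon_err = np.linalg.norm(x - D @ a) / np.linalg.norm(x)
active_frac = np.mean(a != 0)           # population sparsity of the code
```

The trade-off the authors optimize over lives in exactly these two numbers: reconstruction accuracy versus the fraction of simultaneously active units, here varied implicitly through the penalty and the dictionary size.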
Affiliation(s)
- Arish Alreja
- Neuroscience Institute, Center for the Neural Basis of Cognition and Machine Learning Department, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Ilya Nemenman
- Department of Physics, Department of Biology and Initiative in Theory and Modeling of Living Systems, Emory University, Atlanta, Georgia, United States of America
- Christopher J. Rozell
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States of America
12. Marino J. Predictive Coding, Variational Autoencoders, and Biological Connections. Neural Comput 2021; 34:1-44. PMID: 34758480; DOI: 10.1162/neco_a_01458.
Abstract
We present a review of predictive coding, from theoretical neuroscience, and variational autoencoders, from machine learning, identifying the common origin and mathematical framework underlying both areas. As each area is prominent within its respective field, more firmly connecting these areas could prove useful in the dialogue between neuroscience and machine learning. After reviewing each area, we discuss two possible correspondences implied by this perspective: cortical pyramidal dendrites as analogous to (nonlinear) deep networks and lateral inhibition as analogous to normalizing flows. These connections may provide new directions for further investigations in each field.
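The shared machinery the review identifies, iterative inference that descends a prediction-error (free-energy) objective, reduces to a few lines for a linear Gaussian generative model. This is a textbook sketch with invented dimensions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(6)
n_obs, n_lat = 8, 3
W = rng.normal(0, 1, (n_obs, n_lat))          # generative (top-down) weights
x = W @ rng.normal(0, 1, n_lat) + 0.01 * rng.normal(0, 1, n_obs)

z = np.zeros(n_lat)                            # latent estimate
step = 0.02
for _ in range(5000):
    eps_x = x - W @ z                          # bottom-up prediction error
    z += step * (W.T @ eps_x - z)              # descend free energy
                                               # (standard normal prior on z)
eps_x = x - W @ z
free_energy = 0.5 * (eps_x @ eps_x + z @ z)

# the fixed point is the MAP estimate (W^T W + I)^{-1} W^T x
z_map = np.linalg.solve(W.T @ W + np.eye(n_lat), W.T @ x)
```

A variational autoencoder replaces this per-input iterative settling with an amortized encoder network, which is the correspondence the review develops.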
Affiliation(s)
- Joseph Marino
- Computation and Neural Systems, California Institute of Technology, Pasadena, CA 91125, USA
13. Larisch R, Gönner L, Teichmann M, Hamker FH. Sensory coding and contrast invariance emerge from the control of plastic inhibition over emergent selectivity. PLoS Comput Biol 2021; 17:e1009566. PMID: 34843455; PMCID: PMC8629393; DOI: 10.1371/journal.pcbi.1009566.
Abstract
Visual stimuli are represented by a highly efficient code in the primary visual cortex, but the development of this code is still unclear. Two distinct factors control coding efficiency: representational efficiency, which is determined by neuronal tuning diversity, and metabolic efficiency, which is influenced by neuronal gain. How these determinants of coding efficiency are shaped during development, supported by excitatory and inhibitory plasticity, is only partially understood. We investigate a fully plastic spiking network of the primary visual cortex, building on phenomenological plasticity rules. Our results suggest that inhibitory plasticity is key to the emergence of tuning diversity and accurate input encoding. We show that inhibitory feedback (random and specific) increases metabolic efficiency by implementing a gain-control mechanism. Interestingly, this led to the spontaneous emergence of contrast-invariant tuning curves. Our findings highlight that (1) interneuron plasticity is key to the development of tuning diversity and (2) efficient sensory representations are an emergent property of the resulting network.

Synaptic plasticity is crucial for the development of efficient input representation in the different sensory cortices, such as the primary visual cortex. Efficient visual representation is determined by two factors: representational efficiency, i.e., how many different input features can be represented, and metabolic efficiency, i.e., how many spikes are required to represent a specific feature. Previous research has pointed out the importance of plasticity at excitatory synapses for achieving high representational efficiency, and of feedback inhibition as a gain-control mechanism for controlling metabolic efficiency. However, it is only partially understood how the influence of inhibitory plasticity on excitatory plasticity can lead to an efficient representation.
Using a spiking neural network, we show that plasticity at feed-forward and feedback inhibitory synapses is necessary for the emergence of well-distributed neuronal selectivity to improve representational efficiency. Further, the emergent balance between excitatory and inhibitory currents improves the metabolic efficiency, and leads to contrast-invariant tuning as an inherent network property. Extending previous work, our simulation results highlight the importance of plasticity at inhibitory synapses.
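The gain-control role of inhibitory plasticity described in this abstract can be illustrated in a few lines. Below is a rate-based toy model using a homeostatic inhibitory rule in the style of Vogels et al. (2011); it is an illustrative stand-in for the paper's spiking rules, and all rates and constants are assumptions, not fitted values.

```python
import numpy as np

# Rate-based sketch of inhibitory plasticity as gain control: inhibitory
# weights onto one excitatory cell adjust until the cell fires at a target
# rate, regardless of the strength of the feedforward drive.
rng = np.random.default_rng(0)
n_inh, rho0, eta = 20, 5.0, 1e-5          # target excitatory rate rho0 (Hz)
w_inh = rng.uniform(0.0, 0.1, n_inh)      # inhibitory weights onto one E cell
r_inh = rng.uniform(5.0, 15.0, n_inh)     # presynaptic inhibitory rates (Hz)

def e_rate(drive, w, r):
    """Excitatory rate: feedforward drive minus inhibition, rectified."""
    return max(drive - w @ r, 0.0)

drive = 40.0                              # strong feedforward excitation
for _ in range(2000):
    r_post = e_rate(drive, w_inh, r_inh)
    # Potentiate inhibition when the E cell fires above target, depress it
    # below target: synaptic plasticity alone regulates the output rate.
    w_inh = np.clip(w_inh + eta * r_inh * (r_post - rho0), 0.0, None)

print(round(float(e_rate(drive, w_inh, r_inh)), 2))  # prints 5.0 (= rho0)
```

Because the rule pushes the postsynaptic rate toward a fixed target, the emergent excitatory-inhibitory balance caps output rates in just the way the abstract describes for metabolic efficiency.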
Collapse
Affiliation(s)
- René Larisch
- Department of Computer Science, Artificial Intelligence, TU Chemnitz, Chemnitz, Germany
- * E-mail: (RL); (FHH)
| | - Lorenz Gönner
- Department of Computer Science, Artificial Intelligence, TU Chemnitz, Chemnitz, Germany
- Faculty of Psychology, Lifespan Developmental Neuroscience, TU Dresden, Dresden, Germany
| | - Michael Teichmann
- Department of Computer Science, Artificial Intelligence, TU Chemnitz, Chemnitz, Germany
| | - Fred H. Hamker
- Department of Computer Science, Artificial Intelligence, TU Chemnitz, Chemnitz, Germany
- Bernstein Center Computational Neuroscience, Berlin, Germany
- * E-mail: (RL); (FHH)
| |
Collapse
|
14
|
Hu X, Zeng Z. Bridging the Functional and Wiring Properties of V1 Neurons Through Sparse Coding. Neural Comput 2021; 34:104-137. [PMID: 34758484 DOI: 10.1162/neco_a_01453] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2021] [Accepted: 07/20/2021] [Indexed: 11/04/2022]
Abstract
The functional properties of neurons in the primary visual cortex (V1) are thought to be closely related to the structural properties of this network, but the specific relationships remain unclear. Previous theoretical studies have suggested that sparse coding, an energy-efficient coding method, might underlie the orientation selectivity of V1 neurons. We thus aimed to delineate how the neurons are wired to produce this feature. We constructed a model and endowed it with a simple Hebbian learning rule to encode images of natural scenes. The excitatory neurons fired sparsely in response to images and developed strong orientation selectivity. After learning, the connectivity between excitatory neuron pairs, inhibitory neuron pairs, and excitatory-inhibitory neuron pairs depended on firing pattern and receptive field similarity between the neurons. The receptive fields (RFs) of excitatory neurons and inhibitory neurons were well predicted by the RFs of presynaptic excitatory neurons and inhibitory neurons, respectively. The excitatory neurons formed a small-world network, in which certain local connection patterns were significantly overrepresented. Bidirectionally manipulating the firing rates of inhibitory neurons caused linear transformations of the firing rates of excitatory neurons, and vice versa. These wiring properties and modulatory effects were congruent with a wide variety of data measured in V1, suggesting that the sparse coding principle might underlie both the functional and wiring properties of V1 neurons.
Collapse
Affiliation(s)
- Xiaolin Hu
- Department of Computer Science and Technology, State Key Laboratory of Intelligent Technology and Systems, BNRist, Tsinghua Laboratory of Brain and Intelligence, and IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
| | - Zhigang Zeng
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China, and Key Laboratory of Image Processing and Intelligent Control, Education Ministry of China, Wuhan 430074, China
| |
Collapse
|
15
|
Teichmann M, Larisch R, Hamker FH. Performance of biologically grounded models of the early visual system on standard object recognition tasks. Neural Netw 2021; 144:210-228. [PMID: 34507042 DOI: 10.1016/j.neunet.2021.08.009] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2021] [Revised: 07/05/2021] [Accepted: 08/04/2021] [Indexed: 11/29/2022]
Abstract
Computational neuroscience models of vision and neural network models for object recognition are often framed by different research agendas. Computational neuroscience mainly aims at replicating experimental data, while (artificial) neural networks target high performance on classification tasks. However, we propose that models of vision should be validated on object recognition tasks. At some point, the mechanisms of realistic neuro-computational models of the visual cortex must prove convincing at object recognition as well. To foster this idea, we report the recognition accuracy of two different neuro-computational models of the visual cortex on several object recognition datasets. The models were trained using unsupervised Hebbian learning rules on natural scene inputs so that receptive fields comparable to their biological counterparts emerge. We assume that the emerged receptive fields form a general codebook of features that should be applicable to a variety of visual scenes. We report performance on datasets of varying difficulty, ranging from the simple MNIST to the more complex CIFAR-10 and ETH-80. We found that both networks show good results on simple digit recognition, comparable with previously published biologically plausible models. We also observed that neurons in our deeper layers provide a better recognition codebook for naturalistic datasets. As recognition results for biologically grounded models are not yet available for most datasets, our results provide a broad basis of performance values against which to compare methodologically similar models.
Collapse
Affiliation(s)
- Michael Teichmann
- Chemnitz University of Technology, Str. der Nationen, 62, 09111, Chemnitz, Germany.
| | - René Larisch
- Chemnitz University of Technology, Str. der Nationen, 62, 09111, Chemnitz, Germany.
| | - Fred H Hamker
- Chemnitz University of Technology, Str. der Nationen, 62, 09111, Chemnitz, Germany.
| |
Collapse
|
16
|
Lipshutz D, Bahroun Y, Golkar S, Sengupta AM, Chklovskii DB. A Biologically Plausible Neural Network for Multichannel Canonical Correlation Analysis. Neural Comput 2021; 33:2309-2352. [PMID: 34412114 DOI: 10.1162/neco_a_01414] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2020] [Accepted: 03/23/2021] [Indexed: 11/04/2022]
Abstract
Cortical pyramidal neurons receive inputs from multiple distinct neural populations and integrate these inputs in separate dendritic compartments. We explore the possibility that cortical microcircuits implement canonical correlation analysis (CCA), an unsupervised learning method that projects the inputs onto a common subspace so as to maximize the correlations between the projections. To this end, we seek a multichannel CCA algorithm that can be implemented in a biologically plausible neural network. For biological plausibility, we require that the network operates in the online setting and its synaptic update rules are local. Starting from a novel CCA objective function, we derive an online optimization algorithm whose optimization steps can be implemented in a single-layer neural network with multicompartmental neurons and local non-Hebbian learning rules. We also derive an extension of our online CCA algorithm with adaptive output rank and output whitening. Interestingly, the extension maps onto a neural network whose neural architecture and synaptic updates resemble neural circuitry and non-Hebbian plasticity observed in the cortex.
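For reference, the classical batch CCA solution that the paper's online, biologically plausible network approximates can be computed by whitening each view and taking an SVD. The synthetic two-channel data below (sizes, noise level) are illustrative assumptions.

```python
import numpy as np

# Batch CCA baseline: two input channels X and Y share k latent sources,
# and the canonical correlations recover how strongly the views are linked.
rng = np.random.default_rng(1)
T, dx, dy, k = 5000, 5, 4, 2
z = rng.standard_normal((T, k))                        # shared latent sources
X = z @ rng.standard_normal((k, dx)) + 0.1 * rng.standard_normal((T, dx))
Y = z @ rng.standard_normal((k, dy)) + 0.1 * rng.standard_normal((T, dy))

def canonical_correlations(X, Y):
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    # Orthonormal bases of each centered view ("whitening") ...
    Ux = np.linalg.svd(Xc, full_matrices=False)[0]
    Uy = np.linalg.svd(Yc, full_matrices=False)[0]
    # ... then the singular values of the cross-projection are the canonical
    # correlations (cosines of the principal angles between the two views).
    return np.linalg.svd(Ux.T @ Uy, compute_uv=False)

rho = canonical_correlations(X, Y)   # descending; first k reflect shared z
```

With strongly shared latents and weak noise, the first k canonical correlations approach 1 while the remainder stay near chance level.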
Collapse
Affiliation(s)
- David Lipshutz
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, U.S.A.
| | - Yanis Bahroun
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, U.S.A.
| | - Siavash Golkar
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, U.S.A.
| | - Anirvan M Sengupta
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, U.S.A., and Department of Physics and Astronomy, Rutgers University, Piscataway, NJ 08854 U.S.A.
| | - Dmitri B Chklovskii
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, U.S.A., and Neuroscience Institute, NYU Medical Center, New York, NY 10016, U.S.A.
| |
Collapse
|
17
|
Burg MF, Cadena SA, Denfield GH, Walker EY, Tolias AS, Bethge M, Ecker AS. Learning divisive normalization in primary visual cortex. PLoS Comput Biol 2021; 17:e1009028. [PMID: 34097695 PMCID: PMC8211272 DOI: 10.1371/journal.pcbi.1009028] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2020] [Revised: 06/17/2021] [Accepted: 04/30/2021] [Indexed: 11/18/2022] Open
Abstract
Divisive normalization (DN) is a prominent computational building block in the brain that has been proposed as a canonical cortical operation. Numerous experimental studies have verified its importance for capturing nonlinear neural response properties to simple, artificial stimuli, and computational studies suggest that DN is also an important component for processing natural stimuli. However, we lack quantitative models of DN that are directly informed by measurements of spiking responses in the brain and applicable to arbitrary stimuli. Here, we propose a DN model that is applicable to arbitrary input images. We test its ability to predict how neurons in macaque primary visual cortex (V1) respond to natural images, with a focus on nonlinear response properties within the classical receptive field. Our model consists of one layer of subunits followed by learned orientation-specific DN. It outperforms linear-nonlinear and wavelet-based feature representations and makes a significant step towards the performance of state-of-the-art convolutional neural network (CNN) models. Unlike deep CNNs, our compact DN model offers a direct interpretation of the nature of normalization. By inspecting the learned normalization pool of our model, we gained insights into a long-standing question about the tuning properties of DN that update the current textbook description: we found that within the receptive field oriented features were normalized preferentially by features with similar orientation rather than non-specifically as currently assumed.
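A minimal divisive-normalization sketch makes the operation concrete: each unit's driving input is divided by a weighted sum of the pool's activity. The orientation-tuned pool below mirrors the paper's finding that similar orientations normalize each other preferentially, but this particular kernel and its constants are illustrative assumptions, not the fitted model.

```python
import numpy as np

def divisive_norm(drive, theta, sigma=1.0, kappa=2.0):
    """drive: rectified filter outputs; theta: preferred orientations (rad)."""
    dtheta = theta[:, None] - theta[None, :]
    # Normalization pool weights peak for units with similar orientation
    # preference (periodic in pi, hence the factor of 2).
    W = np.exp(kappa * (np.cos(2.0 * dtheta) - 1.0))
    return drive / (sigma + W @ drive)

theta = np.linspace(0.0, np.pi, 8, endpoint=False)
drive = 10.0 * np.maximum(np.cos(2.0 * (theta - 0.3)), 0.0)
r_lo = divisive_norm(drive, theta)
r_hi = divisive_norm(2.0 * drive, theta)   # doubled "contrast"
```

The hallmark nonlinearity falls out directly: doubling the drive less than doubles the response, because the normalization pool grows along with it.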
Collapse
Affiliation(s)
- Max F. Burg
- Institute for Theoretical Physics and Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
- Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Göttingen, Germany
- * E-mail:
| | - Santiago A. Cadena
- Institute for Theoretical Physics and Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, Texas, United States of America
| | - George H. Denfield
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, Texas, United States of America
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas, United States of America
| | - Edgar Y. Walker
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, Texas, United States of America
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas, United States of America
| | - Andreas S. Tolias
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, Texas, United States of America
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas, United States of America
- Department of Electrical and Computer Engineering, Rice University, Houston, Texas, United States of America
| | - Matthias Bethge
- Institute for Theoretical Physics and Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, Texas, United States of America
| | - Alexander S. Ecker
- Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Göttingen, Germany
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
| |
Collapse
|
18
|
Talyansky S, Brinkman BAW. Dysregulation of excitatory neural firing replicates physiological and functional changes in aging visual cortex. PLoS Comput Biol 2021; 17:e1008620. [PMID: 33497380 PMCID: PMC7864437 DOI: 10.1371/journal.pcbi.1008620] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2020] [Revised: 02/05/2021] [Accepted: 12/08/2020] [Indexed: 11/19/2022] Open
Abstract
The mammalian visual system has been the focus of countless experimental and theoretical studies designed to elucidate principles of neural computation and sensory coding. Most theoretical work has focused on networks intended to reflect developing or mature neural circuitry, in both health and disease. Few computational studies have attempted to model changes that occur in neural circuitry as an organism ages non-pathologically. In this work we contribute to closing this gap, studying how physiological changes correlated with advanced age impact the computational performance of a spiking network model of primary visual cortex (V1). Our results demonstrate that deterioration of homeostatic regulation of excitatory firing, coupled with long-term synaptic plasticity, is a sufficient mechanism to reproduce features of observed physiological and functional changes in neural activity data, specifically declines in inhibition and in selectivity to oriented stimuli. This suggests a potential causality between dysregulation of neuron firing and age-induced changes in brain physiology and functional performance. While this does not rule out deeper underlying causes or other mechanisms that could give rise to these changes, our approach opens new avenues for exploring these underlying mechanisms in greater depth and making predictions for future experiments.
Collapse
Affiliation(s)
- Seth Talyansky
- Catlin Gabel School, Portland, Oregon, United States of America
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York, United States of America
| | - Braden A. W. Brinkman
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York, United States of America
| |
Collapse
|
19
|
Exploitation of image statistics with sparse coding in the case of stereo vision. Neural Netw 2020; 135:158-176. [PMID: 33388507 DOI: 10.1016/j.neunet.2020.12.016] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2020] [Revised: 12/06/2020] [Accepted: 12/14/2020] [Indexed: 11/23/2022]
Abstract
The sparse coding algorithm has served as a model for early processing in mammalian vision. It has been assumed that the brain uses sparse coding to exploit statistical properties of the sensory stream. We hypothesize that sparse coding discovers patterns from the data set, which can be used to estimate a set of stimulus parameters by simple readout. In this study, we chose a model of stereo vision to test our hypothesis. We used the Locally Competitive Algorithm (LCA), followed by a naïve Bayes classifier, to infer stereo disparity. From the results we report three observations. First, disparity inference was successful with this naturalistic processing pipeline. Second, an expanded, highly redundant representation is required to robustly identify the input patterns. Third, the inference error can be predicted from the number of active coefficients in the LCA representation. We conclude that sparse coding can generate a suitable general representation for subsequent inference tasks.
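A compact sketch of the LCA dynamics used in this pipeline: units leak toward their feedforward drive while inhibiting one another in proportion to the overlap of their dictionary elements, and thresholding the internal state yields a sparse code (Rozell et al., 2008). The random dictionary, sizes, and constants below are illustrative, not the study's stereo-tuned basis.

```python
import numpy as np

def soft(u, lam):
    """Soft threshold: the LCA activation nonlinearity."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(Phi, s, lam=0.3, tau=0.1, dt=0.01, n_steps=300):
    n_units = Phi.shape[1]
    b = Phi.T @ s                          # feedforward drive
    G = Phi.T @ Phi - np.eye(n_units)      # lateral inhibition (atom overlap)
    u = np.zeros(n_units)
    for _ in range(n_steps):
        u += (dt / tau) * (b - u - G @ soft(u, lam))
    return soft(u, lam)

rng = np.random.default_rng(2)
n_in, n_units = 16, 32                     # overcomplete dictionary
Phi = rng.standard_normal((n_in, n_units))
Phi /= np.linalg.norm(Phi, axis=0)         # unit-norm atoms
s = 2.0 * Phi[:, 3] + 0.05 * rng.standard_normal(n_in)  # mostly one atom
a = lca(Phi, s)
active = np.flatnonzero(np.abs(a) > 1e-3)  # small active set, including atom 3
```

The lateral term `G @ a` implements explaining-away: once the matching atom is active, it suppresses its correlated competitors, leaving only a handful of nonzero coefficients to pass to a downstream readout.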
Collapse
|
20
|
Drix D, Hafner VV, Schmuker M. Sparse coding with a somato-dendritic rule. Neural Netw 2020; 131:37-49. [PMID: 32750603 DOI: 10.1016/j.neunet.2020.06.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2020] [Revised: 04/30/2020] [Accepted: 06/04/2020] [Indexed: 10/24/2022]
Abstract
Cortical neurons are silent most of the time: sparse activity enables low-energy computation in the brain, and promises to do the same in neuromorphic hardware. Beyond power efficiency, sparse codes have favourable properties for associative learning, as they can store more information than local codes but are easier to read out than dense codes. Auto-encoders with a sparse constraint can learn sparse codes, and so can single-layer networks that combine recurrent inhibition with unsupervised Hebbian learning. But the latter usually require fast homeostatic plasticity, which could lead to catastrophic forgetting in embodied agents that learn continuously. Here we set out to explore whether plasticity at recurrent inhibitory synapses could take up that role instead, regulating both the population sparseness and the firing rates of individual neurons. We put the idea to the test in a network that employs compartmentalised inputs to solve the task: rate-based dendritic compartments integrate the feedforward input, while spiking integrate-and-fire somas compete through recurrent inhibition. A somato-dendritic learning rule allows somatic inhibition to modulate nonlinear Hebbian learning in the dendrites. Trained on MNIST digits and natural images, the network discovers independent components that form a sparse encoding of the input and support linear decoding. These findings confirm that intrinsic homeostatic plasticity is not strictly required for regulating sparseness: inhibitory synaptic plasticity can have the same effect. Our work illustrates the usefulness of compartmentalised inputs, and makes the case for moving beyond point neuron models in artificial spiking neural networks.
Collapse
Affiliation(s)
- Damien Drix
- Biocomputation group, Department of Computer Science, University of Hertfordshire, Hatfield, United Kingdom; Adaptive Systems laboratory, Institut für Informatik, Humboldt-Universität zu Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience, Berlin, Germany.
| | - Verena V Hafner
- Adaptive Systems laboratory, Institut für Informatik, Humboldt-Universität zu Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience, Berlin, Germany
| | - Michael Schmuker
- Biocomputation group, Department of Computer Science, University of Hertfordshire, Hatfield, United Kingdom; Bernstein Center for Computational Neuroscience, Berlin, Germany
| |
Collapse
|
21
|
Iyer R, Hu B, Mihalas S. Contextual Integration in Cortical and Convolutional Neural Networks. Front Comput Neurosci 2020; 14:31. [PMID: 32390818 PMCID: PMC7192314 DOI: 10.3389/fncom.2020.00031] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2019] [Accepted: 03/24/2020] [Indexed: 11/28/2022] Open
Abstract
It has been suggested that neurons can represent sensory input using probability distributions and that neural circuits can perform probabilistic inference. Lateral connections between neurons have been shown to have non-random connectivity and to modulate responses to stimuli within the classical receptive field. Large-scale efforts mapping local cortical connectivity describe cell-type-specific connections from inhibitory neurons and like-to-like connectivity between excitatory neurons. To relate the observed connectivity to computations, we propose a neuronal network model that approximates Bayesian inference of the probability of different features being present at different image locations. We show that the lateral connections between excitatory neurons in a circuit implementing contextual integration in this model should depend on correlations between unit activities, minus a global inhibitory drive. The model naturally suggests the need for two types of inhibitory gates (normalization, surround inhibition). First, using natural scene statistics and classical receptive fields corresponding to simple cells parameterized with data from mouse primary visual cortex, we show that the predicted connectivity qualitatively matches that measured in mouse cortex: neurons with similar orientation tuning have stronger connectivity, and both excitatory and inhibitory connectivity have a modest spatial extent, comparable to that observed in mouse visual cortex. We incorporate lateral connections learned using this model into convolutional neural networks. Features are defined by supervised learning on the task, and the lateral connections provide unsupervised learning of feature context in multiple layers.
Since the lateral connections provide contextual information when the feedforward input is locally corrupted, we show that incorporating such lateral connections into convolutional neural networks makes them more robust to noise and leads to better performance on noisy versions of the MNIST dataset. Decomposing the predicted lateral connectivity matrices into low-rank and sparse components introduces additional cell types into these networks. We explore the effects of cell-type-specific perturbations on network computation. Our framework can potentially be applied to networks trained on other tasks, with the learned lateral connections aiding the computations implemented by feedforward connections when the input is unreliable; this demonstrates the potential usefulness of combining supervised and unsupervised learning techniques in real-world vision tasks.
Collapse
Affiliation(s)
- Ramakrishnan Iyer
- Modeling and Theory, Allen Institute for Brain Science, Seattle, WA, United States
| | - Brian Hu
- Modeling and Theory, Allen Institute for Brain Science, Seattle, WA, United States
| | - Stefan Mihalas
- Modeling and Theory, Allen Institute for Brain Science, Seattle, WA, United States
| |
Collapse
|
22
|
Orekhova EV, Rostovtseva EN, Manyukhina VO, Prokofiev AO, Obukhova TS, Nikolaeva AY, Schneiderman JF, Stroganova TA. Spatial suppression in visual motion perception is driven by inhibition: Evidence from MEG gamma oscillations. Neuroimage 2020; 213:116753. [PMID: 32194278 DOI: 10.1016/j.neuroimage.2020.116753] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2019] [Revised: 02/14/2020] [Accepted: 03/14/2020] [Indexed: 12/21/2022] Open
Abstract
Spatial suppression (SS) is a visual perceptual phenomenon that is manifest in a reduction of directional sensitivity for drifting high-contrast gratings whose size exceeds the center of the visual field. Gratings moving at faster velocities induce stronger SS. The neural processes that give rise to such size- and velocity-dependent reductions in directional sensitivity are currently unknown, and the role of surround inhibition is unclear. In magnetoencephalography (MEG), large high-contrast drifting gratings induce a strong gamma response (GR), which also attenuates with an increase in the gratings' velocity. It has been suggested that the slope of this GR attenuation is mediated by inhibitory interactions in the primary visual cortex. Herein, we investigate whether SS is related to this inhibition-based MEG measure. We evaluated SS and GR in two independent samples of participants: school-age boys and adult women. The slope of GR attenuation predicted inter-individual differences in SS in both samples. Test-retest reliability of the neuro-behavioral correlation was assessed in the adults and was high between two sessions separated by several days or weeks. Neither the frequencies nor the absolute amplitudes of the GRs correlated with SS, which highlights the functional relevance of velocity-related changes in GR magnitude caused by augmentation of incoming input. Our findings provide evidence that links the psychophysical phenomenon of SS to inhibition-based neural responses in the human primary visual cortex. This supports the role of inhibitory interactions as an important underlying mechanism for spatial suppression.
Collapse
Affiliation(s)
- Elena V Orekhova
- Center for Neurocognitive Research (MEG Center), Moscow State University of Psychology and Education, Moscow, Russian Federation; MedTech West and the Institute of Neuroscience and Physiology, Sahlgrenska Academy, The University of Gothenburg, Gothenburg, Sweden.
| | - Ekaterina N Rostovtseva
- Center for Neurocognitive Research (MEG Center), Moscow State University of Psychology and Education, Moscow, Russian Federation
| | - Viktoriya O Manyukhina
- Center for Neurocognitive Research (MEG Center), Moscow State University of Psychology and Education, Moscow, Russian Federation; National Research University Higher School of Economics, Moscow, Russian Federation
| | - Andrey O Prokofiev
- Center for Neurocognitive Research (MEG Center), Moscow State University of Psychology and Education, Moscow, Russian Federation
| | - Tatiana S Obukhova
- Center for Neurocognitive Research (MEG Center), Moscow State University of Psychology and Education, Moscow, Russian Federation
| | - Anastasia Yu Nikolaeva
- Center for Neurocognitive Research (MEG Center), Moscow State University of Psychology and Education, Moscow, Russian Federation
| | - Justin F Schneiderman
- MedTech West and the Institute of Neuroscience and Physiology, Sahlgrenska Academy, The University of Gothenburg, Gothenburg, Sweden
| | - Tatiana A Stroganova
- Center for Neurocognitive Research (MEG Center), Moscow State University of Psychology and Education, Moscow, Russian Federation
| |
Collapse
|
23
|
Brendel W, Bourdoukan R, Vertechi P, Machens CK, Denève S. Learning to represent signals spike by spike. PLoS Comput Biol 2020; 16:e1007692. [PMID: 32176682 PMCID: PMC7135338 DOI: 10.1371/journal.pcbi.1007692] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2019] [Revised: 04/06/2020] [Accepted: 01/27/2020] [Indexed: 12/31/2022] Open
Abstract
Networks based on coordinated spike coding can encode information with high efficiency in the spike trains of individual neurons. These networks exhibit single-neuron variability and tuning curves as typically observed in cortex, but paradoxically coincide with a precise, non-redundant spike-based population code. However, it has remained unclear whether the specific synaptic connectivities required in these networks can be learnt with local learning rules. Here, we show how to learn the required architecture. Using coding efficiency as an objective, we derive spike-timing-dependent learning rules for a recurrent neural network, and we provide exact solutions for the networks’ convergence to an optimal state. As a result, we deduce an entire network from its input distribution and a firing cost. After learning, basic biophysical quantities such as voltages, firing thresholds, excitation, inhibition, or spikes acquire precise functional interpretations. Spiking neural networks can encode information with high efficiency in the spike trains of individual neurons if the synaptic weights between neurons are set to specific, optimal values. In this regime, the networks exhibit irregular spike trains, high trial-to-trial variability, and stimulus tuning, as typically observed in cortex. The strong variability on the level of single neurons paradoxically coincides with a precise, non-redundant, and spike-based population code. However, it has remained unclear whether the specific synaptic connectivities required in these spiking networks can be learnt with local learning rules. In this study, we show how the required architecture can be learnt. We derive local and biophysically plausible learning rules for recurrent neural networks from first principles. We show both mathematically and using numerical simulations that these learning rules drive the networks into the optimal state, and we show that the optimal state is governed by the statistics of the input signals. 
After learning, the voltages of individual neurons can be interpreted as measuring the instantaneous error of the code, given by the error between the desired output signal and the actual output signal.
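A greedy sketch in the spirit of the coordinated spike-coding networks described above: a neuron "fires" whenever its spike would reduce the error between the signal and the decoded estimate, so the voltage is literally a projection of the coding error. The fixed decoder weights are an illustrative assumption; the paper's contribution is learning the required connectivity with local rules, which this sketch omits.

```python
import numpy as np

# Each spike adds one decoder column to the estimate; a neuron fires only
# when its projected error exceeds half its spike's "cost", which guarantees
# every spike strictly decreases the squared coding error.
rng = np.random.default_rng(4)
dim, n_neurons = 3, 50
D = rng.standard_normal((dim, n_neurons))
D /= np.linalg.norm(D, axis=0)
D *= 0.1                                   # small decoder kick per spike

x = np.array([1.0, -0.5, 0.8])             # target signal
x_hat = np.zeros(dim)                      # decoded estimate
spikes = np.zeros(n_neurons, dtype=int)
thr = 0.5 * np.sum(D**2, axis=0)           # threshold = half spike cost
for _ in range(500):
    V = D.T @ (x - x_hat)                  # voltage = projected coding error
    i = int(np.argmax(V - thr))
    if V[i] <= thr[i]:
        break                              # no spike can reduce the error
    x_hat += D[:, i]
    spikes[i] += 1
```

The run terminates with a small residual error spread across many neurons' spike trains, reproducing in miniature the precise, redundancy-free population code these networks exhibit.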
Collapse
Affiliation(s)
- Wieland Brendel
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Group for Neural Theory, INSERM U960, Département d’Etudes Cognitives, Ecole Normale Supérieure, Paris, France
- Tübingen AI Center, University of Tübingen, Germany
| | - Ralph Bourdoukan
- Group for Neural Theory, INSERM U960, Département d’Etudes Cognitives, Ecole Normale Supérieure, Paris, France
| | - Pietro Vertechi
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Group for Neural Theory, INSERM U960, Département d’Etudes Cognitives, Ecole Normale Supérieure, Paris, France
| | - Christian K. Machens
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- * E-mail: (CKM); (SD)
| | - Sophie Denève
- Group for Neural Theory, INSERM U960, Département d’Etudes Cognitives, Ecole Normale Supérieure, Paris, France
- * E-mail: (CKM); (SD)
| |
Collapse
|
24
|
Orekhova EV, Prokofyev AO, Nikolaeva AY, Schneiderman JF, Stroganova TA. Additive effect of contrast and velocity suggests the role of strong excitatory drive in suppression of visual gamma response. PLoS One 2020; 15:e0228937. [PMID: 32053681 PMCID: PMC7018047 DOI: 10.1371/journal.pone.0228937] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2019] [Accepted: 01/27/2020] [Indexed: 11/19/2022] Open
Abstract
It is commonly acknowledged that gamma-band oscillations arise from interplay between neural excitation and inhibition; however, the neural mechanisms controlling the power of stimulus-induced gamma responses (GR) in the human brain remain poorly understood. A moderate increase in velocity of drifting gratings results in GR power enhancement, while increasing the velocity beyond some 'transition point' leads to GR power attenuation. We tested two alternative explanations for this nonlinear input-output dependency in the GR power. First, the GR power can be maximal at the preferable velocity/temporal frequency of motion-sensitive V1 neurons. This 'velocity tuning' hypothesis predicts that lowering contrast either will not affect the transition point or shift it to a lower velocity. Second, the GR power attenuation at high velocities of visual motion can be caused by changes in excitation/inhibition balance with increasing excitatory drive. Since contrast and velocity both add to excitatory drive, this 'excitatory drive' hypothesis predicts that the 'transition point' for low-contrast gratings would be reached at a higher velocity, as compared to high-contrast gratings. To test these alternatives, we recorded magnetoencephalography during presentation of low (50%) and high (100%) contrast gratings drifting at four velocities. We found that lowering contrast led to a highly reliable shift of the GR suppression transition point to higher velocities, thus supporting the excitatory drive hypothesis. No effects of contrast or velocity were found in the alpha-beta range. The results have implications for understanding the mechanisms of gamma oscillations and developing gamma-based biomarkers of disturbed excitation/inhibition balance in brain disorders.
Affiliation(s)
- Elena V. Orekhova
- Moscow State University of Psychology and Education, Center for Neurocognitive Research (MEG Center), Moscow, Russia
- University of Gothenburg, Sahlgrenska Academy, Institute of Neuroscience & Physiology, Department of Clinical Neuroscience, Gothenburg, Sweden
- MedTech West, Sahlgrenska University Hospital, Gothenburg, Sweden
- Andrey O. Prokofyev
- Moscow State University of Psychology and Education, Center for Neurocognitive Research (MEG Center), Moscow, Russia
- Anastasia Yu. Nikolaeva
- Moscow State University of Psychology and Education, Center for Neurocognitive Research (MEG Center), Moscow, Russia
- Justin F. Schneiderman
- University of Gothenburg, Sahlgrenska Academy, Institute of Neuroscience & Physiology, Department of Clinical Neuroscience, Gothenburg, Sweden
- MedTech West, Sahlgrenska University Hospital, Gothenburg, Sweden
- Tatiana A. Stroganova
- Moscow State University of Psychology and Education, Center for Neurocognitive Research (MEG Center), Moscow, Russia
25
Interfering with a memory without erasing its trace. Neural Netw 2019; 121:339-355. [PMID: 31593840 DOI: 10.1016/j.neunet.2019.09.027]
Abstract
Previous research has shown that performance of a novice skill can be easily interfered with by subsequent training of another skill. We address two open questions: whether extensively trained skills show the same vulnerability to interference as novice skills, and which memory mechanism regulates interference between expert skills. We developed a recurrent neural network model of V1 able to learn from feedback experienced over the course of a long-term orientation discrimination experiment. After first exposing the model to one discrimination task for 3480 consecutive trials, we assessed how its performance was affected by subsequent training in a second, similar task. Training the second task strongly interfered with the first (highly trained) discrimination skill. The magnitude of interference depended on the relative amounts of training devoted to the different tasks. We used these and other model outcomes as predictions for a perceptual learning experiment in which human participants underwent the same training protocol as our model. Specifically, over the course of three months participants underwent baseline training in one orientation discrimination task for 15 sessions before being trained for 15 sessions on a similar task and finally undergoing another 15 sessions of training on the first task (to assess interference). Across all conditions, the pattern of interference observed empirically closely matched model predictions. According to our model, behavioral interference can be explained by antagonistic changes in neuronal tuning induced by the two tasks. Remarkably, this did not stem from erasing the connections formed by earlier learning but rather from a reweighting of lateral inhibition.
26
Capparelli F, Pawelzik K, Ernst U. Constrained inference in sparse coding reproduces contextual effects and predicts laminar neural dynamics. PLoS Comput Biol 2019; 15:e1007370. [PMID: 31581240 PMCID: PMC6793885 DOI: 10.1371/journal.pcbi.1007370]
Abstract
When probed with complex stimuli that extend beyond their classical receptive field, neurons in primary visual cortex display complex and non-linear response characteristics. Sparse coding models reproduce some of the observed contextual effects, but still fail to provide a satisfactory explanation in terms of realistic neural structures and cortical mechanisms, since the connection scheme they propose consists only of interactions among neurons with overlapping input fields. Here we propose an extended generative model for visual scenes that includes spatial dependencies among different features. We derive a neurophysiologically realistic inference scheme under the constraint that neurons have direct access only to local image information. The scheme can be interpreted as a network in primary visual cortex where two neural populations are organized in different layers within orientation hypercolumns that are connected by local, short-range and long-range recurrent interactions. When trained with natural images, the model predicts a connectivity structure linking neurons with similar orientation preferences matching the typical patterns found for long-ranging horizontal axons and feedback projections in visual cortex. Subjected to contextual stimuli typically used in empirical studies, our model replicates several hallmark effects of contextual processing and predicts characteristic differences for surround modulation between the two model populations. In summary, our model provides a novel framework for contextual processing in the visual system proposing a well-defined functional role for horizontal axons and feedback projections.
Affiliation(s)
- Federica Capparelli
- Institute for Theoretical Physics, University of Bremen, Bremen, Germany
- Klaus Pawelzik
- Institute for Theoretical Physics, University of Bremen, Bremen, Germany
- Udo Ernst
- Institute for Theoretical Physics, University of Bremen, Bremen, Germany
27
Heterogeneous network dynamics in an excitatory-inhibitory network model by distinct intrinsic mechanisms in the fast spiking interneurons. Brain Res 2019; 1714:27-44. [DOI: 10.1016/j.brainres.2019.02.013]
28
Dodds EM, DeWeese MR. On the Sparse Structure of Natural Sounds and Natural Images: Similarities, Differences, and Implications for Neural Coding. Front Comput Neurosci 2019; 13:39. [PMID: 31293408 PMCID: PMC6606779 DOI: 10.3389/fncom.2019.00039]
Abstract
Sparse coding models of natural images and sounds have been able to predict several response properties of neurons in the visual and auditory systems. While the success of these models suggests that the structure they capture is universal across domains to some degree, it is not yet clear which aspects of this structure are universal and which vary across sensory modalities. To address this, we fit complete and highly overcomplete sparse coding models to natural images and spectrograms of speech and report on differences in the statistics learned by these models. We find several types of sparse features in natural images, which all appear in similar, approximately Laplace distributions, whereas the many types of sparse features in speech exhibit a broad range of sparse distributions, many of which are highly asymmetric. Moreover, individual sparse coding units tend to exhibit higher lifetime sparseness for overcomplete models trained on images compared to those trained on speech. Conversely, population sparseness tends to be greater for these networks trained on speech compared with sparse coding models of natural images. To illustrate the relevance of these findings to neural coding, we studied how they impact a biologically plausible sparse coding network's representations in each sensory modality. In particular, a sparse coding network with synaptically local plasticity rules learns different sparse features from speech data than are found by more conventional sparse coding algorithms, but the learned features are qualitatively the same for these models when trained on natural images.
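The lifetime versus population sparseness contrast described in this abstract can be quantified with a standard sparseness index. The following is a generic sketch using the Treves-Rolls/Vinje-Gallant measure on a synthetic code matrix, not the authors' code or data:

```python
import numpy as np

def sparseness(r):
    # Treves-Rolls / Vinje-Gallant index: 0 for a uniform response,
    # approaching 1 as activity concentrates on a single element.
    r = np.abs(np.asarray(r, dtype=float))
    if (r ** 2).sum() == 0.0:
        return 0.0
    n = r.size
    return (1.0 - r.mean() ** 2 / (r ** 2).mean()) / (1.0 - 1.0 / n)

# Toy code matrix: rows = stimuli, columns = sparse coding units.
rng = np.random.default_rng(0)
codes = rng.laplace(size=(1000, 64))
codes[np.abs(codes) < 2.0] = 0.0  # crude stand-in for sparse inference

# Lifetime sparseness: each unit's responses across all stimuli.
lifetime = np.mean([sparseness(codes[:, j]) for j in range(codes.shape[1])])
# Population sparseness: all units' responses to each single stimulus.
population = np.mean([sparseness(codes[t, :]) for t in range(codes.shape[0])])
```

Comparing these two averages for models fitted to images versus speech spectrograms is the kind of analysis the abstract reports.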
Affiliation(s)
- Eric McVoy Dodds
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA, United States
- Department of Physics, University of California, Berkeley, Berkeley, CA, United States
- Michael Robert DeWeese
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA, United States
- Department of Physics, University of California, Berkeley, Berkeley, CA, United States
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, United States
29
Haroush N, Marom S. Inhibition increases response variability and reduces stimulus discrimination in random networks of cortical neurons. Sci Rep 2019; 9:4969. [PMID: 30899035 PMCID: PMC6428807 DOI: 10.1038/s41598-019-41220-2]
Abstract
Much of what is known about the contribution of inhibition to stimulus discrimination is due to extensively studied sensory systems, which are highly structured neural circuits. The effect of inhibition on stimulus representation in less structured networks is not as clear. Here we exercise a biosynthetic approach in order to study the impacts of inhibition on stimulus representation in non-specialized network anatomy. Combining pharmacological manipulation, multisite electrical stimulation and recording from ex-vivo randomly rewired networks of cortical neurons, we quantified the effects of inhibition on response variability and stimulus discrimination at the population and single unit levels. We find that blocking inhibition quenches variability of responses evoked by repeated stimuli and enhances discrimination between stimuli that invade the network from different spatial loci. Enhanced stimulus discrimination is reserved for representation schemes that are based on temporal relation between spikes emitted in groups of neurons. Our data indicate that - under intact inhibition - the response to a given stimulus is a noisy version of the response evoked in the absence of inhibition. Spatial analysis suggests that the dispersion effect of inhibition is due to disruption of an otherwise coherent, wave-like propagation of activity.
Affiliation(s)
- Netta Haroush
- Network Biology Research Laboratory, Faculty of Electrical Engineering, Technion - Israel Institute of Technology, Haifa, 32000, Israel
- Department of Physiology, Biophysics and Systems Biology, Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, 32000, Israel
- Shimon Marom
- Network Biology Research Laboratory, Faculty of Electrical Engineering, Technion - Israel Institute of Technology, Haifa, 32000, Israel
- Department of Physiology, Biophysics and Systems Biology, Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, 32000, Israel
30
Zhang Q, Hu X, Hong B, Zhang B. A hierarchical sparse coding model predicts acoustic feature encoding in both auditory midbrain and cortex. PLoS Comput Biol 2019; 15:e1006766. [PMID: 30742609 PMCID: PMC6386396 DOI: 10.1371/journal.pcbi.1006766]
Abstract
The auditory pathway consists of multiple stages, from the cochlear nucleus to the auditory cortex. Neurons acting at different stages have different functions and exhibit different response properties. It is unclear whether these stages share a common encoding mechanism. We trained an unsupervised deep learning model consisting of alternating sparse coding and max pooling layers on cochleogram-filtered human speech. Evaluation of the response properties revealed that computing units in lower layers exhibited spectro-temporal receptive fields (STRFs) similar to those of inferior colliculus neurons measured in physiological experiments, including properties such as sound onset and termination, checkerboard pattern, and spectral motion. Units in upper layers tended to be tuned to phonetic features such as plosivity and nasality, resembling the results of field recording in human auditory cortex. Variation of the sparseness level of the units in each higher layer revealed a positive correlation between the sparseness level and the strength of phonetic feature encoding. The activities of the units in the top layer, but not other layers, correlated with the dynamics of the first two formants (F1, F2) of all phonemes, indicating the encoding of phoneme dynamics in these units. These results suggest that the principles of sparse coding and max pooling may be universal in the human auditory pathway. When speech enters the ear, it is subjected to a series of processing stages prior to arriving at the auditory cortex. Neurons acting at different processing stages have different response properties. For example, at the auditory midbrain, a neuron may specifically detect the onsets of a frequency component in the speech, whereas in the auditory cortex, a neuron may specifically detect phonetic features. The encoding mechanisms underlying these neuronal functions remain unclear. 
To address this issue, we designed a hierarchical sparse coding model, inspired by the sparse activity of neurons in the sensory system, to learn features in speech signals. We found that the computing units in different layers exhibited hierarchical extraction of speech sound features, similar to those of neurons in the auditory midbrain and auditory cortex, although the computational principles in these layers were the same. The results suggest that sparse coding and max pooling represent universal computational principles throughout the auditory pathway.
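A minimal sketch of one alternating sparse-coding/max-pooling stage of the kind this abstract describes. A one-step soft threshold stands in for full sparse inference, and random dictionaries stand in for learned ones; all sizes and names here are illustrative, not the authors' trained model:

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(X, D, lam=1.0, pool=4):
    # One-step soft-thresholded projection onto dictionary D: a crude
    # stand-in for full sparse inference, producing exact zeros.
    A = X @ D.T
    A = np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)
    # Max pooling over non-overlapping windows of `pool` time frames.
    T = A.shape[0] - A.shape[0] % pool
    return A[:T].reshape(-1, pool, A.shape[1]).max(axis=1)

X = rng.normal(size=(64, 32))    # toy "cochleogram": time frames x freq channels
D1 = rng.normal(size=(48, 32))
D1 /= np.linalg.norm(D1, axis=1, keepdims=True)
D2 = rng.normal(size=(24, 48))
D2 /= np.linalg.norm(D2, axis=1, keepdims=True)

H1 = layer(X, D1)    # lower layer: STRF-like features, shape (16, 48)
H2 = layer(H1, D2)   # upper layer: more abstract features, shape (4, 24)
```

Stacking such stages shrinks the temporal resolution while widening the effective receptive field, which is why units in higher layers can become tuned to slower, more abstract structure.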
Affiliation(s)
- Qingtian Zhang
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Xiaolin Hu
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Center for Brain-Inspired Computing Research (CBICR), Tsinghua University, Beijing, China
- Bo Hong
- School of Medicine, Tsinghua University, Beijing, China
- Bo Zhang
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Center for Brain-Inspired Computing Research (CBICR), Tsinghua University, Beijing, China
31
Tavanaei A, Ghodrati M, Kheradpisheh SR, Masquelier T, Maida A. Deep learning in spiking neural networks. Neural Netw 2018; 111:47-63. [PMID: 30682710 DOI: 10.1016/j.neunet.2018.12.002]
Abstract
In recent years, deep learning has revolutionized the field of machine learning, for computer vision in particular. In this approach, a deep (multilayer) artificial neural network (ANN) is trained, most often in a supervised manner using backpropagation. Vast amounts of labeled training examples are required, but the resulting classification accuracy is truly impressive, sometimes outperforming humans. Neurons in an ANN are characterized by a single, static, continuous-valued activation. Yet biological neurons use discrete spikes to compute and transmit information, and the spike times, in addition to the spike rates, matter. Spiking neural networks (SNNs) are thus more biologically realistic than ANNs, and are arguably the only viable option if one wants to understand how the brain computes at the neuronal description level. The spikes of biological neurons are sparse in time and space, and event-driven. Combined with bio-plausible local learning rules, this makes it easier to build low-power, neuromorphic hardware for SNNs. However, training deep SNNs remains a challenge. Spiking neurons' transfer function is usually non-differentiable, which prevents using backpropagation. Here we review recent supervised and unsupervised methods to train deep SNNs, and compare them in terms of accuracy and computational cost. The emerging picture is that SNNs still lag behind ANNs in terms of accuracy, but the gap is decreasing, and can even vanish on some tasks, while SNNs typically require many fewer operations and are the better candidates to process spatio-temporal data.
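The contrast the review draws between static, continuous-valued ANN activations and spiking computation can be made concrete with the simplest spiking unit, a leaky integrate-and-fire neuron. This is a textbook sketch (parameter values are illustrative), not code from the review:

```python
import numpy as np

def lif_spikes(I, dt=1e-3, tau=0.02, v_th=1.0, v_reset=0.0):
    # Leaky integrate-and-fire: tau * dv/dt = -v + I(t); emit a spike and
    # reset the membrane potential whenever v crosses v_th.
    v, spike_times = 0.0, []
    for t, drive in enumerate(I):
        v += (dt / tau) * (-v + drive)
        if v >= v_th:
            spike_times.append(t * dt)
            v = v_reset
    return spike_times

# Unlike a static ANN activation, the output is a sparse event train whose
# rate and timing both depend on the input.
weak = lif_spikes(np.full(1000, 1.2))    # 1 s of weak suprathreshold drive
strong = lif_spikes(np.full(1000, 2.0))  # stronger drive -> higher firing rate
```

The hard threshold is also where the training difficulty mentioned in the abstract comes from: the spike-generation step is non-differentiable, so backpropagation cannot be applied directly.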
Affiliation(s)
- Amirhossein Tavanaei
- School of Computing and Informatics, University of Louisiana at Lafayette, Lafayette, LA 70504, USA
- Masoud Ghodrati
- Department of Physiology, Monash University, Clayton, VIC, Australia
- Saeed Reza Kheradpisheh
- Department of Computer Science, Faculty of Mathematical Sciences and Computer, Kharazmi University, Tehran, Iran
- Anthony Maida
- School of Computing and Informatics, University of Louisiana at Lafayette, Lafayette, LA 70504, USA
32
Neural Correlate of Visual Familiarity in Macaque Area V2. J Neurosci 2018; 38:8967-8975. [PMID: 30181138 DOI: 10.1523/jneurosci.0664-18.2018]
Abstract
Neurons in macaque inferotemporal cortex (ITC) respond less strongly to familiar than to novel images. It is commonly assumed that this effect arises within ITC because its neurons respond selectively to complex images and thus encode in an explicit form information sufficient for identifying a particular image as familiar. However, no prior study has examined whether neurons in low-order visual areas selective for local features also exhibit familiarity suppression. To address this issue, we recorded from neurons in macaque area V2 with semichronic microelectrode arrays while monkeys repeatedly viewed a set of large complex natural images. We report here that V2 neurons exhibit familiarity suppression. The effect develops over several days with a trajectory well fitted by an exponential function with a rate constant of ∼100 exposures. Suppression occurs in V2 at a latency following image onset shorter than its reported latency in ITC. SIGNIFICANCE STATEMENT: Familiarity suppression, the tendency for neurons to respond less strongly to familiar than to novel images, is well known in monkey inferotemporal cortex. Suppression has been thought to arise in inferotemporal cortex because its neurons respond selectively to large complex images and thus explicitly encode information sufficient for identifying a particular image as familiar. No previous study has explored the possibility that familiarity suppression occurs even in early-stage visual areas where neurons are selective for simple features in confined receptive fields. We now report that neurons in area V2 exhibit familiarity suppression. This finding challenges our current understanding of information processing in V2 as well as our understanding of the mechanisms that underlie familiarity suppression.
33
Representation learning using event-based STDP. Neural Netw 2018; 105:294-303. [DOI: 10.1016/j.neunet.2018.05.018]
34
Kong Q, Han J, Zeng Y, Xu B. Efficient coding matters in the organization of the early visual system. Neural Netw 2018; 105:218-226. [PMID: 29870929 DOI: 10.1016/j.neunet.2018.04.019]
Abstract
Individual areas in the brain are organized into a hierarchical network as a result of evolution. Previous work indicated that the receptive fields (RFs) of individual areas have been evolved to favor metabolically efficient neural codes. In this paper, we propose that not only the RFs of individual areas, but also the organization of adjacent neurons and the hierarchical structure composed of these areas have been evolved to support efficient coding. To verify this hypothesis, we introduce a feed-forward three-layer network to simulate the early stages of human visual system. We emphasize that the network is not a purely feed-forward one since it also includes intra-layer connections, which are essential but usually ignored in the literature. Simulation results strongly reveal that (1) the obtained RFs of the simulated retinal ganglion cells (RGCs) or neurons in the lateral geniculate nucleus (LGN) and V1 simple neurons are consistent with the neurophysiological data; (2) the responses of closer RGCs are more correlated, and V1 simple neurons with similar orientations prefer to cluster together; (3) the hierarchical organization of the early visual system is beneficial for saving energy, which accords with the requirement of metabolically efficient neural coding in the process of human brain evolution.
Affiliation(s)
- Qingqun Kong
- Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; University of Chinese Academy of Sciences, Beijing, 100049, China
- Jiuqi Han
- Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; Department of Neural Engineering and Biological Interdisciplinary Studies, Institute of Military Cognition and Brain Sciences, Academy of Military Medical Sciences, Beijing, 100850, China
- Yi Zeng
- Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; University of Chinese Academy of Sciences, Beijing, 100049, China; Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, 200031, China; National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- Bo Xu
- Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; University of Chinese Academy of Sciences, Beijing, 100049, China; Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, 200031, China
35
Input-dependent modulation of MEG gamma oscillations reflects gain control in the visual cortex. Sci Rep 2018; 8:8451. [PMID: 29855596 PMCID: PMC5981429 DOI: 10.1038/s41598-018-26779-6]
Abstract
Gamma-band oscillations arise from the interplay between neural excitation (E) and inhibition (I) and may provide a non-invasive window into the state of cortical circuitry. A bell-shaped modulation of gamma response power by increasing the intensity of sensory input was observed in animals and is thought to reflect neural gain control. Here we sought to find a similar input-output relationship in humans with MEG via modulating the intensity of a visual stimulation by changing the velocity/temporal-frequency of visual motion. In the first experiment, adult participants observed static and moving gratings. The frequency of the MEG gamma response monotonically increased with motion velocity whereas power followed a bell-shape. In the second experiment, on a large group of children and adults, we found that despite drastic developmental changes in frequency and power of gamma oscillations, the relative suppression at high motion velocities was scaled to the same range of values across the life-span. In light of animal and modeling studies, the modulation of gamma power and frequency at high stimulation intensities characterizes the capacity of inhibitory neurons to counterbalance increasing excitation in visual networks. Gamma suppression may thus provide a non-invasive measure of inhibitory-based gain control in the healthy and diseased brain.
36
Pham T, Haas JS. Electrical synapses between inhibitory neurons shape the responses of principal neurons to transient inputs in the thalamus: a modeling study. Sci Rep 2018; 8:7763. [PMID: 29773817 PMCID: PMC5958104 DOI: 10.1038/s41598-018-25956-x]
Abstract
As multimodal sensory information proceeds to the cortex, it is intercepted and processed by the nuclei of the thalamus. The main source of inhibition within thalamus is the reticular nucleus (TRN), which collects signals both from thalamocortical relay neurons and from thalamocortical feedback. Within the reticular nucleus, neurons are densely interconnected by connexin36-based gap junctions, known as electrical synapses. Electrical synapses have been shown to coordinate neuronal rhythms, including thalamocortical spindle rhythms, but their role in shaping or modulating transient activity is less understood. We constructed a four-cell model of thalamic relay and TRN neurons, and used it to investigate the impact of electrical synapses on closely timed inputs delivered to thalamic relay cells. We show that the electrical synapses of the TRN assist cortical discrimination of these inputs through effects of truncation, delay or inhibition of thalamic spike trains. We expect that these are principles whereby electrical synapses play similar roles in regulating the processing of transient activity in excitatory neurons across the brain.
Affiliation(s)
- Tuan Pham
- Department of Biological Sciences, Lehigh University, Bethlehem, PA, USA
- Julie S Haas
- Department of Biological Sciences, Lehigh University, Bethlehem, PA, USA
37
Zhou S, Yu Y. Synaptic E-I Balance Underlies Efficient Neural Coding. Front Neurosci 2018; 12:46. [PMID: 29456491 PMCID: PMC5801300 DOI: 10.3389/fnins.2018.00046]
Abstract
Both theoretical and experimental evidence indicate that synaptic excitation and inhibition in the cerebral cortex are well-balanced during the resting state and sensory processing. Here, we briefly summarize the evidence for how neural circuits are adjusted to achieve this balance. Then, we discuss how such excitatory and inhibitory balance shapes stimulus representation and information propagation, two basic functions of neural coding. We also point out the benefit of adopting such a balance during neural coding. We conclude that excitatory and inhibitory balance may be a fundamental mechanism underlying efficient coding.
Affiliation(s)
- Shanglin Zhou
- State Key Laboratory of Medical Neurobiology, School of Life Science and the Collaborative Innovation Center for Brain Science, Institutes of Brain Science, Center for Computational Systems Biology, Fudan University, Shanghai, China
- Yuguo Yu
- State Key Laboratory of Medical Neurobiology, School of Life Science and the Collaborative Innovation Center for Brain Science, Institutes of Brain Science, Center for Computational Systems Biology, Fudan University, Shanghai, China
38
Faghihi F, Moustafa AA. Sparse and burst spiking in artificial neural networks inspired by synaptic retrograde signaling. Inf Sci (N Y) 2017. [DOI: 10.1016/j.ins.2017.08.073]
39
Xu Z, Skorheim S, Tu M, Berisha V, Yu S, Seo JS, Bazhenov M, Cao Y. Improving efficiency in sparse learning with the feedforward inhibitory motif. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2017.05.016]
40
Zenke F, Gerstner W. Hebbian plasticity requires compensatory processes on multiple timescales. Philos Trans R Soc Lond B Biol Sci 2017; 372:rstb.2016.0259. [PMID: 28093557 PMCID: PMC5247595 DOI: 10.1098/rstb.2016.0259]
Abstract
We review a body of theoretical and experimental research on Hebbian and homeostatic plasticity, starting from a puzzling observation: while homeostasis of synapses found in experiments is a slow compensatory process, most mathematical models of synaptic plasticity use rapid compensatory processes (RCPs). Even worse, with the slow homeostatic plasticity reported in experiments, simulations of existing plasticity models cannot maintain network stability unless further control mechanisms are implemented. To solve this paradox, we suggest that in addition to slow forms of homeostatic plasticity there are RCPs which stabilize synaptic plasticity on short timescales. These rapid processes may include heterosynaptic depression triggered by episodes of high postsynaptic firing rate. While slower forms of homeostatic plasticity are not sufficient to stabilize Hebbian plasticity, they are important for fine-tuning neural circuits. Taken together we suggest that learning and memory rely on an intricate interplay of diverse plasticity mechanisms on different timescales which jointly ensure stability and plasticity of neural circuits. This article is part of the themed issue 'Integrating Hebbian and homeostatic plasticity'.
Affiliation(s)
- Friedemann Zenke
- Department of Applied Physics, Stanford University, Stanford, CA 94305, USA
- Wulfram Gerstner
- Brain Mind Institute, School of Life Sciences and School of Computer and Communication Sciences, Ecole Polytechnique Fédérale de Lausanne, 1015 Lausanne EPFL, Switzerland
41
Sparse synaptic connectivity is required for decorrelation and pattern separation in feedforward networks. Nat Commun 2017; 8:1116. [PMID: 29061964 PMCID: PMC5653655 DOI: 10.1038/s41467-017-01109-y]
Abstract
Pattern separation is a fundamental function of the brain. The divergent feedforward networks thought to underlie this computation are widespread, yet exhibit remarkably similar sparse synaptic connectivity. Marr-Albus theory postulates that such networks separate overlapping activity patterns by mapping them onto larger numbers of sparsely active neurons. But spatial correlations in synaptic input and those introduced by network connectivity are likely to compromise performance. To investigate the structural and functional determinants of pattern separation we built models of the cerebellar input layer with spatially correlated input patterns, and systematically varied their synaptic connectivity. Performance was quantified by the learning speed of a classifier trained on either the input or output patterns. Our results show that sparse synaptic connectivity is essential for separating spatially correlated input patterns over a wide range of network activity, and that expansion and correlations, rather than sparse activity, are the major determinants of pattern separation. Input decorrelation, expansion recoding and sparse activity have been proposed to separate overlapping activity patterns in feedforward networks. Here the authors use reduced and detailed spiking models to elucidate how synaptic connectivity affects the contribution of these mechanisms to pattern separation in cerebellar cortex.
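The Marr-Albus-style expansion recoding discussed in this abstract can be sketched in a few lines: a divergent layer in which each output unit receives only a few synapses, with a threshold that sparsifies activity. The connectivity, sizes, and threshold below are illustrative choices, not the paper's cerebellar input-layer model:

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out, n_syn = 50, 500, 4   # divergent expansion, few synapses per unit

# Two overlapping (correlated) input patterns.
base = rng.random(n_in)
p1 = base + 0.1 * rng.standard_normal(n_in)
p2 = base + 0.1 * rng.standard_normal(n_in)

# Sparse feedforward connectivity: each output unit samples n_syn inputs.
W = np.zeros((n_out, n_in))
for i in range(n_out):
    W[i, rng.choice(n_in, size=n_syn, replace=False)] = 1.0

def recode(p, theta=2.5):
    # Thresholding the summed drive yields a sparse expansion-layer pattern.
    return np.maximum(W @ p - theta, 0.0)

r1, r2 = recode(p1), recode(p2)
corr_in = np.corrcoef(p1, p2)[0, 1]
corr_out = np.corrcoef(r1, r2)[0, 1]   # typically lower: decorrelation
```

Sweeping `n_syn` while measuring how well a linear classifier separates the output patterns is the kind of systematic connectivity variation the study performs.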
42
Recurrent networks with soft-thresholding nonlinearities for lightweight coding. Neural Netw 2017; 94:212-219. [PMID: 28806715 DOI: 10.1016/j.neunet.2017.07.008] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2017] [Revised: 06/18/2017] [Accepted: 07/07/2017] [Indexed: 11/21/2022]
Abstract
A long-standing and influential hypothesis in neural information processing is that early sensory networks adapt themselves to produce efficient codes of afferent inputs. Here, we show how a nonlinear recurrent network provides an optimal solution for the efficient coding of an afferent input and its history. We specifically consider the problem of producing lightweight codes, ones that minimize both ℓ1 and ℓ2 constraints on sparsity and energy, respectively. When embedded in a linear coding paradigm, this problem results in a non-smooth convex optimization problem. We employ a proximal gradient descent technique to develop the solution, showing that the optimal code is realized through a recurrent network endowed with a nonlinear soft thresholding operator. The training of the network connection weights is readily achieved through gradient-based local learning. If such learning is assumed to occur on a slower time-scale than the (faster) recurrent dynamics, then the network as a whole converges to an optimal set of codes and weights via what is, in effect, an alternative minimization procedure. Our results show how the addition of thresholding nonlinearities to a recurrent network may enable the production of lightweight, history-sensitive encoding schemes.
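The recurrent soft-thresholding solution described here is closely related to the classical ISTA proximal-gradient iteration for ℓ1-regularized coding. A minimal sketch under that reading (dictionary size, λ, and iteration count are illustrative, and only the ℓ1 sparsity term is included, not the paper's additional ℓ2 energy constraint):

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam * ||a||_1: shrink each entry toward zero by lam."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ista(x, D, lam, n_iter=200):
    """Minimize 0.5||x - D a||^2 + lam||a||_1 by proximal gradient descent.
    Each iteration is one step of recurrent dynamics: a linear update
    followed by the soft-thresholding nonlinearity."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1/L for the smooth part's gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = soft_threshold(a + step * D.T @ (x - D @ a), step * lam)
    return a

rng = np.random.default_rng(1)
D = rng.standard_normal((20, 50))            # overcomplete linear code
a_true = np.zeros(50)
a_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
x = D @ a_true

lam = 0.1
a_hat = ista(x, D, lam)

def objective(a):
    return 0.5 * np.sum((x - D @ a) ** 2) + lam * np.sum(np.abs(a))
```

Unrolling the loop gives a recurrent network with a soft-thresholding nonlinearity; adding an ℓ2 penalty would only modify the linear part of the update.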
43
Blake DT. Network Supervision of Adult Experience and Learning Dependent Sensory Cortical Plasticity. Compr Physiol 2017. [DOI: 10.1002/cphy.c160036] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
44
Barrett DG, Denève S, Machens CK. Optimal compensation for neuron loss. eLife 2016; 5. [PMID: 27935480 PMCID: PMC5283835 DOI: 10.7554/elife.12454] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2015] [Accepted: 12/08/2016] [Indexed: 11/13/2022] Open
Abstract
The brain has an impressive ability to withstand neural damage. Diseases that kill neurons can go unnoticed for years, and incomplete brain lesions or silencing of neurons often fail to produce any behavioral effect. How does the brain compensate for such damage, and what are the limits of this compensation? We propose that neural circuits instantly compensate for neuron loss, thereby preserving their function as much as possible. We show that this compensation can explain changes in tuning curves induced by neuron silencing across a variety of systems, including the primary visual cortex. We find that compensatory mechanisms can be implemented through the dynamics of networks with a tight balance of excitation and inhibition, without requiring synaptic plasticity. The limits of this compensatory mechanism are reached when excitation and inhibition become unbalanced, thereby demarcating a recovery boundary, where signal representation fails and where diseases may become symptomatic.
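The idea of instant compensation for neuron loss can be caricatured with a redundant linear readout, where the remaining neurons re-solve for rates that preserve the decoded signal. This is a least-squares sketch, not the paper's balanced spiking network; the sizes and decoder below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Redundant linear population code: a 2-D signal read out from 10 neurons.
D = rng.standard_normal((2, 10))             # decoding weights
x = np.array([1.0, -0.5])                    # signal to be represented
r = np.linalg.lstsq(D, x, rcond=None)[0]     # one rate vector encoding x

# "Kill" neuron 0: the remaining neurons re-solve for rates that preserve
# the readout, i.e. instant compensation without synaptic plasticity.
D_lesioned = D[:, 1:]
r_comp = np.linalg.lstsq(D_lesioned, x, rcond=None)[0]
```

Compensation fails only when the lesioned decoder loses rank, a crude analogue of the recovery boundary at which signal representation breaks down.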
Affiliation(s)
- David G. T. Barrett
- Laboratoire de Neurosciences Cognitives, École Normale Supérieure, Paris, France
- Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, Lisbon, Portugal
- Sophie Denève
- Laboratoire de Neurosciences Cognitives, École Normale Supérieure, Paris, France
- Christian K Machens
- Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, Lisbon, Portugal
45
Brito CSN, Gerstner W. Nonlinear Hebbian Learning as a Unifying Principle in Receptive Field Formation. PLoS Comput Biol 2016; 12:e1005070. [PMID: 27690349 PMCID: PMC5045191 DOI: 10.1371/journal.pcbi.1005070] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2015] [Accepted: 07/19/2016] [Indexed: 11/19/2022] Open
Abstract
The development of sensory receptive fields has been modeled in the past by a variety of models, including normative models such as sparse coding or independent component analysis and bottom-up models such as spike-timing dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that the above variety of approaches can all be unified into a single common principle, namely nonlinear Hebbian learning. When nonlinear Hebbian learning is applied to natural images, receptive field shapes were strongly constrained by the input statistics and preprocessing, but exhibited only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity is necessary for the development of localized receptive fields. The analysis of alternative sensory modalities such as auditory models or V2 development leads to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities. The question of how the brain self-organizes to develop precisely tuned neurons has puzzled neuroscientists at least since the discoveries of Hubel and Wiesel. In the past decades, a variety of theories and models have been proposed to describe receptive field formation, notably of V1 simple cells, from natural inputs. We cut through the jungle of candidate explanations by demonstrating that in fact a single principle is sufficient to explain receptive field development. Our results follow from two major insights. First, we show that many representative models of sensory development are in fact implementing variations of a common principle: nonlinear Hebbian learning.
Second, we reveal that nonlinear Hebbian learning is sufficient for receptive field formation through sensory inputs. The surprising result is that our findings are robust to the specific details of a model, which allows for robust predictions of the learned receptive fields. Nonlinear Hebbian learning is therefore general in two senses: it applies to many models developed by theoreticians, and to many sensory modalities studied by experimental neuroscientists.
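A toy version of the rule, Δw = η f(wᵀx) x with multiplicative normalization, can be written in a few lines. The cubic nonlinearity, learning rate, and two-channel input statistics below are one representative, illustrative choice; the paper covers a whole family of nonlinearities:

```python
import numpy as np

rng = np.random.default_rng(2)

def nonlinear_hebb(X, f=lambda y: y ** 3, eta=0.01, n_epochs=200):
    """w <- w + eta * f(w.x) * x, followed by multiplicative normalization."""
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_epochs):
        for x in X:
            w += eta * f(w @ x) * x
            w /= np.linalg.norm(w)   # keep the weight vector on the unit sphere
    return w

# Toy input statistics: a sparse latent source mixed into two channels plus noise.
s = np.sign(rng.standard_normal(500)) * (rng.random(500) < 0.1)
X = np.outer(s, [0.8, 0.6]) + 0.05 * rng.standard_normal((500, 2))
w = nonlinear_hebb(X)
```

With a cubic f this performs a kurtosis-seeking projection, so w tends to align with the sparse source direction; other choices of nonlinearity behave similarly, which is the robustness the abstract emphasizes.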
Affiliation(s)
- Carlos S. N. Brito
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne EPFL, Switzerland
- Gatsby Computational Neuroscience Unit, University College London, London, United Kingdom
- Wulfram Gerstner
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne EPFL, Switzerland
46
Sugihara H, Chen N, Sur M. Cell-specific modulation of plasticity and cortical state by cholinergic inputs to the visual cortex. J Physiol Paris 2016; 110:37-43. [PMID: 27840211 PMCID: PMC5769868 DOI: 10.1016/j.jphysparis.2016.11.004] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/11/2016] [Revised: 11/08/2016] [Accepted: 11/09/2016] [Indexed: 12/18/2022]
Abstract
Acetylcholine (ACh) modulates diverse vital brain functions. Cholinergic neurons from the basal forebrain innervate a wide range of cortical areas, including the primary visual cortex (V1), and multiple cortical cell types have been found to be responsive to ACh. Here we review how different cell types contribute to different cortical functions modulated by ACh. We specifically focus on two major cortical functions: plasticity and cortical state. In layer II/III of V1, ACh acting on astrocytes and somatostatin-expressing inhibitory neurons plays critical roles in these functions. Cell type specificity of cholinergic modulation points towards the growing understanding that even diffuse neurotransmitter systems can mediate specific functions through specific cell classes and receptors.
Affiliation(s)
- Hiroki Sugihara
- Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Naiyan Chen
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Laboratory of Metabolic Medicine, Singapore Bioimaging Consortium, A*STAR, Republic of Singapore
- Mriganka Sur
- Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
47
Garrido JA, Luque NR, Tolu S, D’Angelo E. Oscillation-Driven Spike-Timing Dependent Plasticity Allows Multiple Overlapping Pattern Recognition in Inhibitory Interneuron Networks. Int J Neural Syst 2016; 26:1650020. [DOI: 10.1142/s0129065716500209] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
The majority of operations carried out by the brain require learning complex signal patterns for future recognition, retrieval and reuse. Although learning is thought to depend on multiple forms of long-term synaptic plasticity, how the latter contributes to pattern recognition is still poorly understood. Here, we have used a simple model of afferent excitatory neurons and interneurons with lateral inhibition, reproducing a network topology found in many brain areas from the cerebellum to cortical columns. When endowed with spike-timing dependent plasticity (STDP) at the excitatory input synapses and at the inhibitory interneuron–interneuron synapses, the interneurons rapidly learned complex input patterns. Interestingly, induction of plasticity required that the network be entrained into theta-frequency band oscillations, setting the internal phase reference required to drive STDP. Inhibitory plasticity effectively distributed multiple patterns among available interneurons, thus allowing the simultaneous detection of multiple overlapping patterns. The addition of plasticity in intrinsic excitability made the system more robust, allowing self-adjustment and rescaling in response to a broad range of input patterns. The combination of plasticity in lateral inhibitory connections and homeostatic mechanisms in the inhibitory interneurons optimized mutual information (MI) transfer. The storage of multiple complex patterns in plastic interneuron networks could be critical for the generation of sparse representations of information in excitatory neuron populations falling under their control.
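The pair-based STDP window assumed in such models can be sketched as follows (the amplitudes and time constant are illustrative, not the values used in the paper):

```python
import math

def stdp(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based STDP window. dt_ms = t_post - t_pre: potentiation when the
    presynaptic spike precedes the postsynaptic one, depression otherwise."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

# An oscillation provides the phase reference: pre spikes arriving just before
# the phase-locked postsynaptic spike at t_post are potentiated most strongly.
t_post = 25.0
weight_changes = [stdp(t_post - t_pre) for t_pre in (5.0, 20.0, 30.0, 60.0)]
```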
Affiliation(s)
- Jesús A. Garrido
- Department of Computer Architecture and Technology, University of Granada, Periodista Daniel Saucedo Aranda s/n, Granada, 18071, Spain
- Niceto R. Luque
- Institut National de la Santé et de la Recherche Médicale, U968 and Centre National de la Recherche Scientifique, UMR_7210, Institut de la Vision, rue Moreau, 17, Paris, F75012, France
- Sorbonne Universités, Université Pierre et Marie Curie Paris 06, UMR_S 968, Place Jussieu, 4, Paris, F75252, France
- Silvia Tolu
- Center for Playware, Department of Electrical Engineering, Technical University of Denmark, Richard Petersens Plads, Elektrovej, Building 326, Lyngby, Copenhagen, 2800, Denmark
- Egidio D’Angelo
- Department of Brain and Behavioral Sciences, University of Pavia, Via Forlanini, 6, Pavia, I27100, Italy
- Brain Connectivity Center, Istituto Neurologico IRCCS Fondazione Casimiro Mondino, Via Mondino, 2 Pavia, I27100, Italy
48
Zhang Y, Li X, Samonds JM, Lee TS. Relating functional connectivity in V1 neural circuits and 3D natural scenes using Boltzmann machines. Vision Res 2015; 120:121-31. [PMID: 26712581 DOI: 10.1016/j.visres.2015.12.002] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2014] [Revised: 12/03/2015] [Accepted: 12/07/2015] [Indexed: 11/25/2022]
Abstract
Bayesian theory has provided a compelling conceptualization for perceptual inference in the brain. Central to Bayesian inference is the notion of statistical priors. To understand the neural mechanisms of Bayesian inference, we need to understand the neural representation of statistical regularities in the natural environment. In this paper, we investigated empirically how statistical regularities in natural 3D scenes are represented in the functional connectivity of disparity-tuned neurons in the primary visual cortex of primates. We applied a Boltzmann machine model to learn from 3D natural scenes, and found that the units in the model exhibited cooperative and competitive interactions, forming a "disparity association field", analogous to the contour association field. The cooperative and competitive interactions in the disparity association field are consistent with constraints of computational models for stereo matching. In addition, we simulated neurophysiological experiments on the model, and found the results to be consistent with neurophysiological data in terms of the functional connectivity measurements between disparity-tuned neurons in the macaque primary visual cortex. These findings demonstrate that there is a relationship between the functional connectivity observed in the visual cortex and the statistics of natural scenes. They also suggest that the Boltzmann machine can be a viable model for conceptualizing computations in the visual cortex and, as such, can be used to predict neural circuits in the visual cortex from natural scene statistics.
Affiliation(s)
- Yimeng Zhang
- Center for the Neural Basis of Cognition and Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA.
- Xiong Li
- Center for the Neural Basis of Cognition and Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Automation, Shanghai Jiao Tong University, Shanghai 200240, China.
- Jason M Samonds
- Center for the Neural Basis of Cognition and Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA.
- Tai Sing Lee
- Center for the Neural Basis of Cognition and Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA.
49
Zylberberg J, Shea-Brown E. Input nonlinearities can shape beyond-pairwise correlations and improve information transmission by neural populations. Phys Rev E Stat Nonlin Soft Matter Phys 2015; 92:062707. [PMID: 26764727 DOI: 10.1103/physreve.92.062707] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/14/2012] [Indexed: 06/05/2023]
Abstract
While recent recordings from neural populations show beyond-pairwise, or higher-order, correlations (HOC), we have little understanding of how HOC arise from network interactions and of how they impact encoded information. Here, we show that input nonlinearities imply HOC in spin-glass-type statistical models. We then discuss one such model with parametrized pairwise- and higher-order interactions, revealing conditions under which beyond-pairwise interactions increase the mutual information between a given stimulus type and the population responses. For jointly Gaussian stimuli, coding performance is improved by shaping output HOC only when neural firing rates are constrained to be low. For stimuli with skewed probability distributions (like natural image luminances), performance improves for all firing rates. Our work suggests surprising connections between nonlinear integration of neural inputs, stimulus statistics, and normative theories of population coding. Moreover, it suggests that the inclusion of beyond-pairwise interactions could improve the performance of Boltzmann machines for machine learning and signal processing applications.
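The effect of a beyond-pairwise interaction can be seen by exact enumeration of a tiny spin-glass-type model: adding a triplet term shifts the third-order correlation, which no purely pairwise model with these symmetric couplings can reproduce. The couplings and triplet strength below are illustrative, not taken from the paper:

```python
import itertools
import math

# Three +/-1 units with pairwise couplings J and a triplet ("beyond-pairwise") term K.
J = {(0, 1): 0.5, (0, 2): -0.3, (1, 2): 0.2}   # illustrative couplings
STATES = list(itertools.product([-1, 1], repeat=3))

def energy(s, K):
    e = -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e - K * s[0] * s[1] * s[2]

def boltzmann(K):
    """Exact Boltzmann distribution over the 8 states."""
    w = {s: math.exp(-energy(s, K)) for s in STATES}
    Z = sum(w.values())
    return {s: wi / Z for s, wi in w.items()}

def triplet_corr(p):
    """Third-order correlation <s0 s1 s2> under distribution p."""
    return sum(s[0] * s[1] * s[2] * p[s] for s in STATES)

p_pair = boltzmann(0.0)   # purely pairwise model: triplet correlation vanishes
p_hoc = boltzmann(0.8)    # higher-order interaction: triplet correlation appears
```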
Affiliation(s)
- Joel Zylberberg
- Department of Applied Mathematics, University of Washington, Seattle, Washington 98195, USA
- Eric Shea-Brown
- Department of Applied Mathematics, Program in Neuroscience, Department of Physiology and Biophysics, University of Washington, Seattle, Washington 98195, USA
50
Kee T, Sanda P, Gupta N, Stopfer M, Bazhenov M. Feed-Forward versus Feedback Inhibition in a Basic Olfactory Circuit. PLoS Comput Biol 2015; 11:e1004531. [PMID: 26458212 PMCID: PMC4601731 DOI: 10.1371/journal.pcbi.1004531] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2015] [Accepted: 08/28/2015] [Indexed: 11/23/2022] Open
Abstract
Inhibitory interneurons play critical roles in shaping the firing patterns of principal neurons in many brain systems. Despite differences in the anatomy or functions of neuronal circuits containing inhibition, two basic motifs repeatedly emerge: feed-forward and feedback. In the locust, it was proposed that a subset of lateral horn interneurons (LHNs) provides feed-forward inhibition onto Kenyon cells (KCs) to maintain their sparse firing, a property critical for olfactory learning and memory. But recently it was established that a single inhibitory cell, the giant GABAergic neuron (GGN), is the main and perhaps sole source of inhibition in the mushroom body, and that inhibition from this cell is mediated by a feedback (FB) loop including KCs and the GGN. To clarify basic differences in the effects of feedback vs. feed-forward inhibition on circuit dynamics, we here use a model of the locust olfactory system. We found that both inhibitory motifs were able to maintain sparse KC responses and provide optimal odor discrimination. However, we further found that only FB inhibition could create a phase response consistent with data recorded in vivo. These findings describe general rules for feed-forward versus feedback inhibition and suggest that GGN is potentially capable of providing the primary source of inhibition to the KCs. A better understanding of how inhibitory motifs impact post-synaptic neuronal activity could be used to reveal unknown inhibitory structures within biological networks. Understanding how inhibitory neurons interact with excitatory neurons is critical for understanding the behaviors of neuronal networks. Here we address this question with simple but biologically relevant models based on the anatomy of the locust olfactory pathway. Two ubiquitous and basic inhibitory motifs were tested: feed-forward and feedback.
Feed-forward inhibition typically occurs between different brain areas, when excitatory neurons excite inhibitory cells that then inhibit a group of postsynaptic excitatory neurons outside of the initiating excitatory neurons’ area. The feedback inhibitory motif, by contrast, requires a population of excitatory neurons to drive the inhibitory cells, which in turn inhibit the same population of excitatory cells. We found that the type of inhibitory motif determined the timing with which each group of cells fired action potentials relative to one another. It also affected the range of inhibitory neurons’ activity, with the inhibitory neurons having a wider range in the feedback circuit than in the feed-forward one. These results should allow the type of connectivity structure within unexplored biological circuits to be predicted given only electrophysiological recordings.
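The steady-state difference between the two motifs can be sketched with a linear two-population model (the gains are illustrative, and this is far simpler than the spiking locust model in the paper). Feed-forward inhibition subtracts a term proportional to the input and can silence the excitatory population; feedback inhibition divisively scales it down but never silences it:

```python
def feedforward(inp, w_ei=1.0, g=0.5):
    """Inhibitory rate is set by the input itself (subtractive at steady state)."""
    i = g * inp
    e = max(inp - w_ei * i, 0.0)
    return e, i

def feedback(inp, w_ei=1.0, g=0.5):
    """Inhibition is driven by the excitatory output: e = inp - w_ei*g*e,
    so e = inp / (1 + w_ei * g), a divisive steady state."""
    e = max(inp / (1.0 + w_ei * g), 0.0)
    return e, g * e
```

That strong feed-forward gain can fully suppress the excitatory population, while feedback gain only rescales it, is one simple signature distinguishing the two motifs from rate measurements alone.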
Affiliation(s)
- Tiffany Kee
- Department of Cell Biology and Neuroscience, University of California, Riverside, Riverside, California, United States of America
- Pavel Sanda
- Department of Cell Biology and Neuroscience, University of California, Riverside, Riverside, California, United States of America
- Nitin Gupta
- Department of Biological Sciences and Bioengineering, Indian Institute of Technology Kanpur, Kanpur, India
- Mark Stopfer
- US National Institutes of Health, National Institute of Child Health and Human Development, Bethesda, Maryland, United States of America
- Maxim Bazhenov
- Department of Cell Biology and Neuroscience, University of California, Riverside, Riverside, California, United States of America