1
Homma NY, See JZ, Atencio CA, Hu C, Downer JD, Beitel RE, Cheung SW, Najafabadi MS, Olsen T, Bigelow J, Hasenstaub AR, Malone BJ, Schreiner CE. Receptive-field nonlinearities in primary auditory cortex: a comparative perspective. Cereb Cortex 2024; 34:bhae364. PMID: 39270676; PMCID: PMC11398879; DOI: 10.1093/cercor/bhae364.
Abstract
Cortical processing of auditory information can be affected by interspecies differences as well as brain states. Here we compare multifeature spectro-temporal receptive fields (STRFs) and their associated input/output functions, or nonlinearities (NLs), for neurons in the primary auditory cortex (AC) of four mammalian species. Single-unit recordings were performed in awake animals (female squirrel monkeys; female and male mice) and anesthetized animals (female squirrel monkeys, rats, and cats). Neuronal responses were modeled as consisting of two STRFs and their associated NLs. The NLs for the STRF with the highest information content show a broad distribution between linear and quadratic forms. In awake animals, we find a higher percentage of quadratic-like NLs, as opposed to the more linear NLs found in anesthetized animals. Moderate sex differences in the shape of NLs were observed between male and female unanesthetized mice. These findings indicate that the core AC possesses a rich variety of potential computations, particularly in awake animals, and suggest that multiple computational algorithms are at play to enable the auditory system's robust recognition of auditory events.
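The STRF-plus-nonlinearity (linear-nonlinear) framework described in this abstract is compact enough to sketch. The filter, stimulus, and the two candidate nonlinearity shapes below are illustrative stand-ins, not the models fitted in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spectro-temporal receptive field (STRF): 8 frequency bins x 5 time lags.
strf = rng.normal(size=(8, 5))
# Random spectrogram stimulus: 8 frequency bins x 200 time bins.
stim = rng.normal(size=(8, 200))

def strf_projection(strf, stim):
    """Linear stage: at each time t, dot the STRF with the spectrogram
    window ending at t."""
    n_f, n_lag = strf.shape
    proj = np.zeros(stim.shape[1])
    for t in range(n_lag - 1, stim.shape[1]):
        proj[t] = np.sum(strf * stim[:, t - n_lag + 1 : t + 1])
    return proj

x = strf_projection(strf, stim)

# Two candidate output nonlinearities (NLs) mapping filter output to firing rate:
rate_lin = np.maximum(x, 0.0)   # rectified, "linear" form: one sign only
rate_quad = x ** 2              # quadratic form: responds to both signs

print(rate_lin[:3], rate_quad[:3])
```

The contrast between the two forms is the point of the comparison: a quadratic NL makes the neuron respond to stimulus energy regardless of the sign of the filter match, while a rectified-linear NL preserves sign selectivity.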
Affiliation(s)
- Natsumi Y Homma
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Department of Physiology, Development and Neuroscience, University of Cambridge, Downing Street, Cambridge, UK
- Jermyn Z See
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Craig A Atencio
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Congcong Hu
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Joshua D Downer
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Center of Neuroscience, University of California Davis, Newton Ct, Davis, CA, USA
- Ralph E Beitel
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Steven W Cheung
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Mina Sadeghi Najafabadi
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Timothy Olsen
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- James Bigelow
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Andrea R Hasenstaub
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Brian J Malone
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Center of Neuroscience, University of California Davis, Newton Ct, Davis, CA, USA
- Christoph E Schreiner
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
2
Krüppel S, Khani MH, Schreyer HM, Sridhar S, Ramakrishna V, Zapp SJ, Mietsch M, Karamanlis D, Gollisch T. Applying super-resolution and tomography concepts to identify receptive field subunits in the retina. PLoS Comput Biol 2024; 20:e1012370. PMID: 39226328; PMCID: PMC11398665; DOI: 10.1371/journal.pcbi.1012370.
Abstract
Spatially nonlinear stimulus integration by retinal ganglion cells lies at the heart of various computations performed by the retina. It arises from the nonlinear transmission of signals that ganglion cells receive from bipolar cells, which thereby constitute functional subunits within a ganglion cell's receptive field. Inferring these subunits from recorded ganglion cell activity promises a new avenue for studying the functional architecture of the retina. This calls for efficient methods that leave sufficient experimental time to leverage the acquired knowledge for further investigation of the identified subunits. Here, we combine concepts from super-resolution microscopy and computed tomography and introduce super-resolved tomographic reconstruction (STR), a technique for efficiently stimulating and locating receptive-field subunits. Simulations demonstrate that this approach can reliably identify subunits across a wide range of model variations, and application to recordings of primate parasol ganglion cells validates its experimental feasibility. STR can potentially reveal comprehensive subunit layouts within only a few tens of minutes of recording time, making it ideal for online analysis and closed-loop investigations of receptive-field substructure in retina recordings.
Affiliation(s)
- Steffen Krüppel
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany
- Mohammad H Khani
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Helene M Schreyer
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Shashwat Sridhar
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Varsha Ramakrishna
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- International Max Planck Research School for Neurosciences, Göttingen, Germany
- Sören J Zapp
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Matthias Mietsch
- German Primate Center, Laboratory Animal Science Unit, Göttingen, Germany
- German Center for Cardiovascular Research, Partner Site Göttingen, Göttingen, Germany
- Dimokratis Karamanlis
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Tim Gollisch
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany
- Else Kröner Fresenius Center for Optogenetic Therapies, University Medical Center Göttingen, Göttingen, Germany
3
Lohse M, King AJ, Willmore BDB. Subcortical origin of nonlinear sound encoding in auditory cortex. Curr Biol 2024; 34:3405-3415.e5. PMID: 39032492; DOI: 10.1016/j.cub.2024.06.057.
Abstract
A major challenge in neuroscience is to understand how neural representations of sensory information are transformed by the network of ascending and descending connections in each sensory system. By recording from neurons at several levels of the auditory pathway, we show that much of the nonlinear encoding of complex sounds in auditory cortex can be explained by transformations in the midbrain and thalamus. Modeling cortical neurons in terms of their inputs across these subcortical populations enables their responses to be predicted with unprecedented accuracy. By contrast, subcortical responses cannot be predicted from descending cortical inputs, indicating that ascending transformations are irreversible, resulting in increasingly lossy, higher-order representations across the auditory pathway. Rather, auditory cortex selectively modulates the nonlinear aspects of thalamic auditory responses and the functional coupling between subcortical neurons without affecting the linear encoding of sound. These findings reveal the fundamental role of subcortical transformations in shaping cortical responses.
Affiliation(s)
- Michael Lohse
- Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, London W1T 4JG, UK; Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, UK.
- Andrew J King
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, UK.
- Ben D B Willmore
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, UK.
4
van den Berg MM, Wong AB, Houtak G, Williamson RS, Borst JGG. Sodium salicylate improves detection of amplitude-modulated sound in mice. iScience 2024; 27:109691. PMID: 38736549; PMCID: PMC11088340; DOI: 10.1016/j.isci.2024.109691.
Abstract
Salicylate is commonly used to induce tinnitus in animals, but its underlying mechanism of action is still debated. We therefore tested its effects on the firing properties of neurons in the mouse inferior colliculus (IC). Salicylate induced a large decrease in spontaneous activity and an increase of ∼20 dB SPL in the minimum threshold of single units. In response to sinusoidally amplitude-modulated noise (SAM noise), single units showed both an increase in phase locking and improved rate coding. Mice also became better at detecting amplitude modulations, and a simple threshold model based on the IC population response could reproduce this improvement. The responses to dynamic random chords (DRCs) suggested that the improved AM encoding was due to a linearization of the cochlear output, resulting in larger contrasts during SAM noise. These effects of salicylate are not consistent with the presence of tinnitus but should be taken into account when studying hyperacusis.
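Phase locking to SAM noise of the kind reported here is conventionally quantified with vector strength. In this toy computation the spike times and modulation frequency are invented for illustration; it simply shows the measure separating locked from unlocked firing:

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    """Vector strength: 1 = perfect phase locking to the modulator, 0 = none.
    Each spike becomes a unit vector at its modulation phase; the result is
    the length of the mean vector."""
    phases = 2 * np.pi * mod_freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(1)
# Spikes locked to a 10 Hz envelope (one spike per cycle, small jitter)
# versus spikes at random times over the same 5 s window.
locked = np.arange(50) / 10.0 + rng.normal(0, 0.002, 50)
random_spikes = rng.uniform(0, 5.0, 50)

vs_locked = vector_strength(locked, 10.0)
vs_random = vector_strength(random_spikes, 10.0)
print(f"locked: {vs_locked:.2f}, random: {vs_random:.2f}")
```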
Affiliation(s)
- Maurits M. van den Berg
- Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, NL-3015 GD Rotterdam, the Netherlands
- Aaron B. Wong
- Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, NL-3015 GD Rotterdam, the Netherlands
- Ghais Houtak
- Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, NL-3015 GD Rotterdam, the Netherlands
- Ross S. Williamson
- Pittsburgh Hearing Research Center, Department of Otolaryngology, University of Pittsburgh, Pittsburgh, PA 15213, USA
- J. Gerard G. Borst
- Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, NL-3015 GD Rotterdam, the Netherlands
5
Weng G, Clark K, Akbarian A, Noudoost B, Nategh N. Time-varying generalized linear models: characterizing and decoding neuronal dynamics in higher visual areas. Front Comput Neurosci 2024; 18:1273053. PMID: 38348287; PMCID: PMC10859875; DOI: 10.3389/fncom.2024.1273053.
Abstract
To create a behaviorally relevant representation of the visual world, neurons in higher visual areas exhibit dynamic response changes that account for the time-varying interactions between external (e.g., visual input) and internal (e.g., reward value) factors. The resulting high-dimensional representational space poses challenges for precisely quantifying individual factors' contributions to the representation and readout of sensory information during behavior. The widely used point-process generalized linear model (GLM) approach provides a powerful framework for quantitatively describing neuronal processing as a function of various sensory and non-sensory inputs (encoding), as well as for linking particular response components to particular behaviors (decoding), at the level of single trials and individual neurons. However, most existing variants of the GLM assume the neural system to be time-invariant, making them inadequate for modeling the nonstationary characteristics of neuronal sensitivity in higher visual areas. In this review, we summarize some of the existing GLM variations, with a focus on time-varying extensions. We highlight their applications to understanding neural representations in higher visual areas and to decoding transient neuronal sensitivity, as well as to linking physiology to behavior through manipulation of model components. This time-varying class of statistical models provides valuable insights into the neural basis of various visual behaviors in higher visual areas and holds significant potential for uncovering the fundamental computational principles that govern neuronal processing underlying behavior in different regions of the brain.
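The fixed-weight Poisson GLM that these time-varying extensions generalize can be sketched in a few lines. The stimulus dimensionality, true weights, and fitting schedule below are all invented for illustration; fitting uses Newton's method (IRLS), the standard approach for canonical-link GLMs:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated encoding problem: spike counts in each time bin are Poisson
# with log-rate linear in a 4-dimensional stimulus (true intercept: -1).
n_bins, n_feat = 2000, 4
X = rng.normal(size=(n_bins, n_feat))
w_true = np.array([0.8, -0.5, 0.3, 0.0])
y = rng.poisson(np.exp(X @ w_true - 1.0))

# Fit weights and intercept by Newton's method on the Poisson
# log-likelihood with canonical log link.
X1 = np.column_stack([X, np.ones(n_bins)])  # append intercept column
beta = np.zeros(n_feat + 1)
for _ in range(20):
    mu = np.exp(X1 @ beta)                  # predicted rates
    H = X1.T @ (X1 * mu[:, None])           # observed Fisher information
    g = X1.T @ (y - mu)                     # score (gradient)
    beta += np.linalg.solve(H, g)

print(np.round(beta, 2))   # recovered weights close to w_true, intercept near -1
```

A time-varying GLM replaces the constant `beta` with weights that evolve across the trial, typically expressed on a temporal basis and fitted with the same likelihood machinery.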
Affiliation(s)
- Geyu Weng
- Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, United States
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Kelsey Clark
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Amir Akbarian
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Behrad Noudoost
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Neda Nategh
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT, United States
6
De Clercq P, Vanthornhout J, Vandermosten M, Francart T. Beyond linear neural envelope tracking: a mutual information approach. J Neural Eng 2023; 20. PMID: 36812597; DOI: 10.1088/1741-2552/acbe1d.
Abstract
Objective. The human brain tracks the temporal envelope of speech, which contains essential cues for speech understanding. Linear models are the most common tool to study neural envelope tracking. However, information on how speech is processed can be lost since nonlinear relations are precluded. Analysis based on mutual information (MI), on the other hand, can detect both linear and nonlinear relations and is gradually becoming more popular in the field of neural envelope tracking. Yet, several different approaches to calculating MI are applied, with no consensus on which approach to use. Furthermore, the added value of nonlinear techniques remains a subject of debate in the field. The present paper aims to resolve these open questions. Approach. We analyzed electroencephalography (EEG) data of participants listening to continuous speech and applied MI analyses and linear models. Main results. Comparing the different MI approaches, we conclude that results are most reliable and robust using the Gaussian copula approach, which first transforms the data to standard Gaussians. With this approach, the MI analysis is a valid technique for studying neural envelope tracking. Like linear models, it allows spatial and temporal interpretations of speech processing, peak latency analyses, and applications to multiple EEG channels combined. In a final analysis, we tested whether nonlinear components were present in the neural response to the envelope by first removing all linear components in the data. We robustly detected nonlinear components on the single-subject level using the MI analysis. Significance. We demonstrate that the human brain processes speech in a nonlinear way. Unlike linear models, the MI analysis detects such nonlinear relations, proving its added value for neural envelope tracking. In addition, the MI analysis retains spatial and temporal characteristics of speech processing, an advantage lost when using more complex (nonlinear) deep neural networks.
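The Gaussian copula estimator the authors favor is compact: rank-transform each signal to standard-normal marginals, then apply the closed-form MI of a bivariate Gaussian. The signals below are synthetic placeholders, not EEG or speech envelopes:

```python
import numpy as np
from statistics import NormalDist

def copula_normalize(x):
    """Rank-transform a 1-D sample so its marginal is standard normal."""
    ranks = np.argsort(np.argsort(x)) + 1.0   # 1-based ranks
    u = ranks / (len(x) + 1.0)                # empirical CDF values in (0, 1)
    inv = NormalDist().inv_cdf
    return np.array([inv(p) for p in u])

def gaussian_copula_mi(x, y):
    """MI estimate in bits under the Gaussian-copula assumption:
    -0.5 * log2(1 - rho^2) for the correlation of the transformed signals.
    This is a lower bound on the true MI."""
    rho = np.corrcoef(copula_normalize(x), copula_normalize(y))[0, 1]
    return -0.5 * np.log2(1.0 - rho ** 2)

rng = np.random.default_rng(3)
envelope = rng.normal(size=5000)
response = np.exp(envelope) + 0.5 * rng.normal(size=5000)  # nonlinear but monotone tracking
unrelated = rng.normal(size=5000)

mi_tracked = gaussian_copula_mi(envelope, response)
mi_null = gaussian_copula_mi(envelope, unrelated)
print(f"tracked: {mi_tracked:.3f} bits, unrelated: {mi_null:.4f} bits")
```

One caveat worth knowing: because the estimator works on ranks, it captures monotone nonlinear relations well but, as a lower bound, can still miss purely non-monotonic dependencies.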
Affiliation(s)
- Pieter De Clercq
- Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Belgium
- Jonas Vanthornhout
- Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Belgium
- Maaike Vandermosten
- Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Belgium
- Tom Francart
- Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Belgium
7
Sadagopan S, Kar M, Parida S. Quantitative models of auditory cortical processing. Hear Res 2023; 429:108697. PMID: 36696724; PMCID: PMC9928778; DOI: 10.1016/j.heares.2023.108697.
Abstract
To generate insight from experimental data, it is critical to understand the inter-relationships between individual data points and place them in context within a structured framework. Quantitative modeling can provide the scaffolding for such an endeavor. Our main objective in this review is to provide a primer on the range of quantitative tools available to experimental auditory neuroscientists. Quantitative modeling is advantageous because it can provide a compact summary of observed data, make underlying assumptions explicit, and generate predictions for future experiments. Quantitative models may be developed to characterize or fit observed data, to test theories of how a task may be solved by neural circuits, to determine how observed biophysical details might contribute to measured activity patterns, or to predict how an experimental manipulation would affect neural activity. In complexity, quantitative models can range from those that are highly biophysically realistic and that include detailed simulations at the level of individual synapses, to those that use abstract and simplified neuron models to simulate entire networks. Here, we survey the landscape of recently developed models of auditory cortical processing, highlighting a small selection of models to demonstrate how they help generate insight into the mechanisms of auditory processing. We discuss examples ranging from models that use details of synaptic properties to explain the temporal pattern of cortical responses to those that use modern deep neural networks to gain insight into human fMRI data. We conclude by discussing a biologically realistic and interpretable model that our laboratory has developed to explore aspects of vocalization categorization in the auditory pathway.
Affiliation(s)
- Srivatsun Sadagopan
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA.
- Manaswini Kar
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- Satyabrata Parida
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
8
Strong energy component is more important than spectral selectivity in modeling responses of midbrain auditory neurons to wide-band environmental sounds. Biosystems 2022; 221:104752. PMID: 36028002; DOI: 10.1016/j.biosystems.2022.104752.
Abstract
Modeling the responses of central auditory neurons to complex sounds not only helps in understanding the neural processing of speech signals but can also provide insights for biomimetics in neuro-engineering. While models of midbrain auditory neurons' responses to synthetic tones perform rather well, models of their responses to environmental sounds are less satisfactory. Environmental sounds typically contain a wide range of frequency components, often with strong and transient energy. These stimulus features have not been examined in the conventional approach to auditory modeling, which centers on spectral selectivity. To this end, we first compared the responses of auditory midbrain neurons to an environmental sound across three subpopulations of neurons with frequency selectivity in the low, middle, and high ranges; second, we manipulated the sound energy, both in power and in spectrum, and compared across these subpopulations how their modeled responses were affected. The environmental sound was recorded while a rat drank from a feeding bottle (the 'drinking sound'). The sound spectrum was divided into 20 non-overlapping frequency bands (from 0 to 20 kHz, each 1 kHz wide) and presented to an artificial neural model built on a committee machine with parallel spectral inputs to simulate the known tonotopic organization of the auditory system. The model was trained to predict the empirical response-probability profiles of neurons to the repeated sounds. Results showed that model performance depended more on the strong energy components than on spectral selectivity. The findings were interpreted as reflecting a general sensitivity to rapidly changing sound intensities at the auditory midbrain and in the cortex.
9
Cody PA, Tzounopoulos T. Neuromodulatory mechanisms underlying contrast gain control in mouse auditory cortex. J Neurosci 2022; 42:5564-5579. PMID: 35998293; PMCID: PMC9295830; DOI: 10.1523/jneurosci.2054-21.2022.
Abstract
Neural adaptation enables the brain to efficiently process sensory signals despite large changes in background noise. Previous studies have established that recent background spectro- or spatio-temporal statistics scale neural responses to sensory stimuli via a canonical normalization computation, which is conserved among species and sensory domains. In the auditory pathway, one major form of normalization, termed contrast gain control, presents as decreasing instantaneous firing-rate gain, the slope of the neural input-output relationship, with increasing variability of background sound levels (contrast) across time and frequency. Despite this gain rescaling, mean firing rates in auditory cortex become invariant to sound level contrast, termed contrast invariance. The underlying neuromodulatory mechanisms of these two phenomena remain unknown. To study these mechanisms in male and female mice, we used a 2-photon calcium imaging preparation in layer 2/3 neurons of primary auditory cortex (A1), along with pharmacological and genetic KO approaches. We found that neuromodulatory cortical synaptic zinc signaling is necessary for contrast gain control but not contrast invariance in mouse A1. SIGNIFICANCE STATEMENT: When sound levels in the acoustic environment become more variable across time and frequency, the brain decreases response gain to maintain dynamic range and thus stimulus discriminability. This gain adaptation accounts for changes in perceptual judgments in humans and mice; however, the underlying neuromodulatory mechanisms remain poorly understood. Here, we report context-dependent neuromodulatory effects of synaptic zinc that are necessary for contrast gain control in A1. Understanding context-specific neuromodulatory mechanisms, such as contrast gain control, provides insight into A1 cortical mechanisms of adaptation and also into fundamental aspects of perceptual changes that rely on gain modulation, such as attention.
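Contrast gain control itself has a simple canonical form that is easy to sketch: divisive normalization of response gain by background contrast. The function and parameter values below are a generic textbook-style illustration, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(4)

def gain_controlled_response(level_db, background_db, g0=1.0, sigma0=4.0):
    """Divisive normalization: response gain (the slope of the input-output
    relationship) shrinks as the contrast (standard deviation) of recent
    background sound levels grows."""
    contrast = np.std(background_db)
    gain = g0 / (sigma0 + contrast)
    return gain * (level_db - np.mean(background_db))

low_contrast_bg = 60.0 + rng.normal(0, 2, 100)    # steady background levels (dB)
high_contrast_bg = 60.0 + rng.normal(0, 10, 100)  # highly variable background

# The same 70 dB probe evokes a larger response against the low-contrast
# background, because gain has not been divided down.
r_low = gain_controlled_response(70.0, low_contrast_bg)
r_high = gain_controlled_response(70.0, high_contrast_bg)
print(r_low, r_high)
```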
Affiliation(s)
- Patrick A Cody
- Pittsburgh Hearing Research Center, Department of Otolaryngology, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Thanos Tzounopoulos
- Pittsburgh Hearing Research Center, Department of Otolaryngology, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
10
Zapp SJ, Nitsche S, Gollisch T. Retinal receptive-field substructure: scaffolding for coding and computation. Trends Neurosci 2022; 45:430-445. DOI: 10.1016/j.tins.2022.03.005.
11
Li L, Rehr R, Bruns P, Gerkmann T, Röder B. A survey on probabilistic models in human perception and machines. Front Robot AI 2021; 7:85. PMID: 33501252; PMCID: PMC7805657; DOI: 10.3389/frobt.2020.00085.
Abstract
Extracting information from noisy signals is of fundamental importance for both biological and artificial perceptual systems. To provide tractable solutions to this challenge, the fields of human perception and machine signal processing (SP) have developed powerful computational models, including Bayesian probabilistic models. However, little true integration between these fields exists in their applications of the probabilistic models for solving analogous problems, such as noise reduction, signal enhancement, and source separation. In this mini review, we briefly introduce and compare selective applications of probabilistic models in machine SP and human psychophysics. We focus on audio and audio-visual processing, using examples of speech enhancement, automatic speech recognition, audio-visual cue integration, source separation, and causal inference to illustrate the basic principles of the probabilistic approach. Our goal is to identify commonalities between probabilistic models addressing brain processes and those aiming at building intelligent machines. These commonalities could constitute the closest points for interdisciplinary convergence.
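One of the principles shared by the perceptual and machine models surveyed here, reliability-weighted cue integration, reduces to two lines in the Gaussian case. The location estimates and variances below are invented for illustration:

```python
def fuse(mu_a, var_a, mu_v, var_v):
    """Maximum-likelihood fusion of two Gaussian cue estimates: weight each
    cue by its inverse variance (reliability). The fused estimate is more
    reliable than either cue alone."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    mu = w_a * mu_a + (1.0 - w_a) * mu_v
    var = 1.0 / (1.0 / var_a + 1.0 / var_v)
    return mu, var

# Auditory location estimate: 10 deg, noisy (variance 4).
# Visual location estimate: 2 deg, precise (variance 1).
mu, var = fuse(10.0, 4.0, 2.0, 1.0)
print(mu, var)  # fused estimate sits near the more reliable visual cue
```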
Affiliation(s)
- Lux Li
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Robert Rehr
- Signal Processing (SP), Department of Informatics, University of Hamburg, Hamburg, Germany
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Timo Gerkmann
- Signal Processing (SP), Department of Informatics, University of Hamburg, Hamburg, Germany
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
12
Royer J, Huetz C, Occelli F, Cancela JM, Edeline JM. Enhanced discriminative abilities of auditory cortex neurons for pup calls despite reduced evoked responses in C57BL/6 mother mice. Neuroscience 2020; 453:1-16. PMID: 33253823; DOI: 10.1016/j.neuroscience.2020.11.031.
Abstract
A fundamental task for the auditory system is to process communication sounds according to their behavioral significance. In many mammalian species, pup calls become more significant for mothers than other conspecific and heterospecific communication sounds. To study the cortical consequences of motherhood on the processing of communication sounds, we recorded neuronal responses in the primary auditory cortex of virgin and mother C57BL/6 mice, which had similar ABR thresholds. In mothers, the evoked firing rate in response to pure tones was decreased and the frequency receptive fields were narrower. The responses to pup and adult calls were also reduced, but the amount of mutual information (MI) per spike about the pup call's identity was increased in mother mice. The response latency to pup and adult calls was significantly shorter in mothers. Despite similarly decreased responses to guinea pig whistles, the response latency and the MI per spike did not differ between virgins and mothers for these heterospecific vocalizations. Noise correlations between cortical recordings were decreased in mothers, suggesting that the firing rates of distant neurons were more independent of one another. Together, these results indicate that in the mouse strain most commonly used for behavioral studies, the discrimination of pup calls by auditory cortex neurons is more efficient during motherhood.
Affiliation(s)
- Juliette Royer
- Université Paris-Saclay, CNRS UMR 9197, Institut des neurosciences Paris-Saclay, 91190 Gif-sur-Yvette, France
- Chloé Huetz
- Université Paris-Saclay, CNRS UMR 9197, Institut des neurosciences Paris-Saclay, 91190 Gif-sur-Yvette, France
- Florian Occelli
- Université Paris-Saclay, CNRS UMR 9197, Institut des neurosciences Paris-Saclay, 91190 Gif-sur-Yvette, France
- José-Manuel Cancela
- Université Paris-Saclay, CNRS UMR 9197, Institut des neurosciences Paris-Saclay, 91190 Gif-sur-Yvette, France
- Jean-Marc Edeline
- Université Paris-Saclay, CNRS UMR 9197, Institut des neurosciences Paris-Saclay, 91190 Gif-sur-Yvette, France
13
Rahman M, Willmore BDB, King AJ, Harper NS. Simple transformations capture auditory input to cortex. Proc Natl Acad Sci U S A 2020; 117:28442-28451. [PMID: 33097665 PMCID: PMC7668077 DOI: 10.1073/pnas.1922033117]
Abstract
Sounds are processed by the ear and central auditory pathway. These processing steps are biologically complex, and many aspects of the transformation from sound waveforms to cortical responses remain unclear. To understand this transformation, we combined models of the auditory periphery with various encoding models to predict auditory cortical responses to natural sounds. The cochlear models ranged from detailed biophysical simulations of the cochlea and auditory nerve to simple spectrogram-like approximations of the information processing in these structures. For three different stimulus sets, we tested the capacity of these models to predict the time course of single-unit neural responses recorded in ferret primary auditory cortex. We found that simple models based on a log-spaced spectrogram with approximately logarithmic compression perform similarly to the best-performing biophysically detailed models of the auditory periphery, and do so more consistently across diverse natural and synthetic sounds. Furthermore, we demonstrated that including approximations of the three categories of auditory nerve fiber in these simple models can substantially improve prediction, particularly when combined with a network encoding model. Our findings imply that the properties of the auditory periphery and central pathway may together result in a simpler than expected functional transformation from ear to cortex. Thus, much of the detailed biological complexity seen in the auditory periphery does not appear to be important for understanding the cortical representation of sound.
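The "simple model" this abstract describes — a log-spaced spectrogram with approximately logarithmic compression — can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the filter count, frequency range, window parameters, and compression floor below are all assumed values.

```python
import numpy as np

def log_spaced_spectrogram(wave, sr, n_fft=512, hop=128, n_bands=32,
                           fmin=200.0, fmax=8000.0, floor=1e-3):
    """Crude cochlear approximation: a triangular filterbank with log-spaced
    centre frequencies applied to an STFT power spectrogram, followed by
    approximately logarithmic compression. Parameter values are illustrative."""
    # Short-time Fourier transform (Hann window, power)
    win = np.hanning(n_fft)
    n_frames = 1 + (len(wave) - n_fft) // hop
    frames = np.stack([wave[i*hop:i*hop+n_fft] * win for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2      # (frames, n_fft//2+1)

    # Triangular filters on a logarithmic frequency axis
    fft_freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    centres = np.geomspace(fmin, fmax, n_bands + 2)        # includes edge bands
    fbank = np.zeros((n_bands, len(fft_freqs)))
    for b in range(n_bands):
        lo, c, hi = centres[b], centres[b+1], centres[b+2]
        rise = (fft_freqs - lo) / (c - lo)
        fall = (hi - fft_freqs) / (hi - c)
        fbank[b] = np.clip(np.minimum(rise, fall), 0.0, None)

    band_power = power @ fbank.T                           # (frames, n_bands)
    return np.log(band_power + floor) - np.log(floor)      # compress, zero floor
```

A pure tone should then produce most energy in the band whose centre frequency is nearest the tone, which makes the representation easy to sanity-check.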
Affiliation(s)
- Monzilur Rahman
- Department of Physiology, Anatomy and Genetics, University of Oxford, OX1 3PT Oxford, United Kingdom
- Ben D B Willmore
- Department of Physiology, Anatomy and Genetics, University of Oxford, OX1 3PT Oxford, United Kingdom
- Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, OX1 3PT Oxford, United Kingdom
- Nicol S Harper
- Department of Physiology, Anatomy and Genetics, University of Oxford, OX1 3PT Oxford, United Kingdom
14
Ivanenko A, Watkins P, van Gerven MAJ, Hammerschmidt K, Englitz B. Classifying sex and strain from mouse ultrasonic vocalizations using deep learning. PLoS Comput Biol 2020; 16:e1007918. [PMID: 32569292 PMCID: PMC7347231 DOI: 10.1371/journal.pcbi.1007918]
Abstract
Vocalizations are widely used for communication between animals. Mice use a large repertoire of ultrasonic vocalizations (USVs) in different social contexts. Recognizing the partner's sex is important during social interaction; however, previous research has remained inconclusive as to whether individual USVs contain this information. Using deep neural networks (DNNs) to classify the sex of the emitting mouse from the spectrogram, we obtain unprecedented performance (77%, vs. SVM: 56%, regression: 51%). Performance was even higher (85%) if the DNN could also use each mouse's individual properties during training, which may, however, be of limited practical value. Splitting the estimation into two DNNs, spectrogram-to-features and features-to-sex, using 24 extracted features per USV, failed to reach single-step performance (60%). Extending the features with each USV's spectral line and frequency and time marginals in a semi-convolutional DNN yielded intermediate performance (64%). Analyzing the network structure suggests an increase in sparsity of activation and correlation with sex, specifically in the fully connected layers. A detailed analysis of USV structure reveals a subset of male vocalizations characterized by a few acoustic features, while the majority of sex differences appear to rely on a complex combination of many features. The same network architecture was also able to achieve above-chance classification for cortexless mice, which were previously considered indistinguishable. In summary, spectrotemporal differences between male and female USVs allow at least their partial classification, which enables sexual recognition between mice and automated attribution of USVs during analysis of social interactions.
Affiliation(s)
- A. Ivanenko
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, The Netherlands
- Institute of Biology and Biomedicine, Lobachevsky State University, Nizhny Novgorod, Russia
- M. A. J. van Gerven
- Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, The Netherlands
- K. Hammerschmidt
- Cognitive Ethology Laboratory, German Primate Center, Göttingen, Germany
- B. Englitz
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, The Netherlands
15
Gaucher Q, Panniello M, Ivanov AZ, Dahmen JC, King AJ, Walker KM. Complexity of frequency receptive fields predicts tonotopic variability across species. eLife 2020; 9:53462. [PMID: 32420865 PMCID: PMC7269667 DOI: 10.7554/elife.53462]
Abstract
Primary cortical areas contain maps of sensory features, including sound frequency in primary auditory cortex (A1). Two-photon calcium imaging in mice has confirmed the presence of these global tonotopic maps, while uncovering an unexpected local variability in the stimulus preferences of individual neurons in A1 and other primary regions. Here we show that local heterogeneity of frequency preferences is not unique to rodents. Using two-photon calcium imaging in layers 2/3, we found that local variance in frequency preferences is equivalent in ferrets and mice. Neurons with multipeaked frequency tuning are less spatially organized than those tuned to a single frequency in both species. Furthermore, we show that microelectrode recordings may describe a smoother tonotopic arrangement due to a sampling bias towards neurons with simple frequency tuning. These results help explain previous inconsistencies in cortical topography across species and recording techniques.
Affiliation(s)
- Quentin Gaucher
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, United Kingdom
- Mariangela Panniello
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, United Kingdom
- Aleksandar Z Ivanov
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, United Kingdom
- Johannes C Dahmen
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J King
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, United Kingdom
- Kerry MM Walker
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, United Kingdom
16
Cooke JE, Kahn MC, Mann EO, King AJ, Schnupp JWH, Willmore BDB. Contrast gain control occurs independently of both parvalbumin-positive interneuron activity and shunting inhibition in auditory cortex. J Neurophysiol 2020; 123:1536-1551. [PMID: 32186432 PMCID: PMC7191518 DOI: 10.1152/jn.00587.2019]
Abstract
Contrast gain control is the systematic adjustment of neuronal gain in response to the contrast of sensory input. It is widely observed in sensory cortical areas and has been proposed to be a canonical neuronal computation. Here, we investigated whether shunting inhibition from parvalbumin-positive interneurons, a mechanism involved in gain control in visual cortex, also underlies contrast gain control in auditory cortex. First, we performed extracellular recordings in the auditory cortex of anesthetized male mice and optogenetically manipulated the activity of parvalbumin-positive interneurons while varying the contrast of the sensory input. We found that both activation and suppression of parvalbumin interneuron activity altered the overall gain of cortical neurons. However, despite these changes in overall gain, we found that manipulating parvalbumin interneuron activity did not alter the strength of contrast gain control in auditory cortex. Furthermore, parvalbumin-positive interneurons did not show increases in activity in response to high-contrast stimulation, which would be expected if they drive contrast gain control. Finally, we performed in vivo whole-cell recordings in auditory cortical neurons during high- and low-contrast stimulation and found that no increase in membrane conductance was observed during high-contrast stimulation. Taken together, these findings indicate that while parvalbumin-positive interneuron activity modulates the overall gain of auditory cortical responses, other mechanisms are primarily responsible for contrast gain control in this cortical area. NEW & NOTEWORTHY We investigated whether contrast gain control is mediated by shunting inhibition from parvalbumin-positive interneurons in auditory cortex. We performed extracellular and intracellular recordings in mouse auditory cortex while presenting sensory stimuli with varying contrasts and manipulated parvalbumin-positive interneuron activity using optogenetics. We show that while parvalbumin-positive interneuron activity modulates the gain of cortical responses, this activity is not the primary mechanism for contrast gain control in auditory cortex.
Affiliation(s)
- James E Cooke
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- University College London, London, United Kingdom
- Martin C Kahn
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Edward O Mann
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Jan W H Schnupp
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Department of Biomedical Sciences, City University of Hong Kong, Hong Kong
- Ben D B Willmore
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
17
Lohse M, Bajo VM, King AJ, Willmore BDB. Neural circuits underlying auditory contrast gain control and their perceptual implications. Nat Commun 2020; 11:324. [PMID: 31949136 PMCID: PMC6965083 DOI: 10.1038/s41467-019-14163-5]
Abstract
Neural adaptation enables sensory information to be represented optimally in the brain despite large fluctuations over time in the statistics of the environment. Auditory contrast gain control represents an important example, which is thought to arise primarily from cortical processing. Here we show that neurons in the auditory thalamus and midbrain of mice show robust contrast gain control, and that this is implemented independently of cortical activity. Although neurons at each level exhibit contrast gain control to similar degrees, adaptation time constants become longer at later stages of the processing hierarchy, resulting in progressively more stable representations. We also show that auditory discrimination thresholds in human listeners compensate for changes in contrast, and that the strength of this perceptual adaptation can be predicted from physiological measurements. Contrast adaptation is therefore a robust property of both the subcortical and cortical auditory system and accounts for the short-term adaptability of perceptual judgments.
Affiliation(s)
- Michael Lohse
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Victoria M Bajo
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Andrew J King
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Ben D B Willmore
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, OX1 3PT, UK
18
Mano O, Creamer MS, Matulis CA, Salazar-Gatzimas E, Chen J, Zavatone-Veth JA, Clark DA. Using slow frame rate imaging to extract fast receptive fields. Nat Commun 2019; 10:4979. [PMID: 31672963 PMCID: PMC6823504 DOI: 10.1038/s41467-019-12974-0]
Abstract
In functional imaging, large numbers of neurons are measured during sensory stimulation or behavior. This data can be used to map receptive fields that describe neural associations with stimuli or with behavior. The temporal resolution of these receptive fields has traditionally been limited by image acquisition rates. However, even when acquisitions scan slowly across a population of neurons, individual neurons may be measured at precisely known times. Here, we apply a method that leverages the timing of neural measurements to find receptive fields with temporal resolutions higher than the image acquisition rate. We use this temporal super-resolution method to resolve fast voltage and glutamate responses in visual neurons in Drosophila and to extract calcium receptive fields from cortical neurons in mammals. We provide code to easily apply this method to existing datasets. This method requires no specialized hardware and can be used with any optical indicator of neural activity.
Affiliation(s)
- Omer Mano
- Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, CT, 06511, USA
- Matthew S Creamer
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT, 06511, USA
- Juyue Chen
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT, 06511, USA
- Damon A Clark
- Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, CT, 06511, USA
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT, 06511, USA
- Department of Physics, Yale University, New Haven, CT, 06511, USA
- Department of Neuroscience, Yale University, New Haven, CT, 06511, USA
19
Chang TR, Šuta D, Chiu TW. Responses of midbrain auditory neurons to two different environmental sounds: a new approach on cross-sound modeling. Biosystems 2019; 187:104021. [PMID: 31574292 DOI: 10.1016/j.biosystems.2019.104021]
Abstract
When modeling auditory responses to environmental sounds, results are satisfactory if both training and testing are restricted to datasets of one type of sound. When predicting 'cross-sound' responses (i.e., predicting the response to one type of sound, e.g., the rat Eating sound, after training with another type, e.g., the rat Drinking sound), performance is typically poor. Here we implemented a novel approach to improve such cross-sound modeling (single-unit datasets were collected from the auditory midbrain of anesthetized rats). The method had two key features: (a) population responses (e.g., averages of 32 units) rather than responses of individual units were analyzed; and (b) the long sound segment was first divided into short segments (single sound-bouts), their similarity was then computed using a new response-based metric (the Stimulus Response Model map, or SRM map), and finally similar sound-bouts (regardless of sound type) and their associated responses (peri-stimulus time histograms, PSTHs) were modelled. Specifically, a committee machine model (artificial neural networks with 20 stratified spectral inputs) was trained with datasets from one sound type before predicting PSTH responses to another sound type. Model performance improved markedly, reaching up to 92%. The results also suggested the involvement of different neural mechanisms in generating the early and late responses to amplitude transients in broad-band environmental sounds. We conclude that rather satisfactory cross-sound modeling is possible for datasets grouped together on the basis of their similarity under the new SRM-map metric.
Affiliation(s)
- T R Chang
- Department of Computer Science and Information Engineering, Southern Taiwan University of Science and Technology, Tainan, Taiwan, ROC
- D Šuta
- Department of Cognitive Systems and Neurosciences, Czech Institute of Informatics, Robotics and Cybernetics, Czech Technical University, Prague, Czech Republic; Department of Auditory Neuroscience, Academy of Sciences of the Czech Republic, Czech Republic
- T W Chiu
- Department of Biological Science and Technology, National Chiao-Tung University, Hsinchu, Taiwan, ROC; Center For Intelligent Drug Systems and Smart Bio-devices (IDS2B), National Chiao-Tung University, Hsinchu, Taiwan, ROC
20
Lopez Espejo M, Schwartz ZP, David SV. Spectral tuning of adaptation supports coding of sensory context in auditory cortex. PLoS Comput Biol 2019; 15:e1007430. [PMID: 31626624 PMCID: PMC6821137 DOI: 10.1371/journal.pcbi.1007430]
Abstract
Perception of vocalizations and other behaviorally relevant sounds requires integrating acoustic information over hundreds of milliseconds. Sound-evoked activity in auditory cortex typically has much shorter latency, but the acoustic context, i.e., sound history, can modulate sound-evoked activity over longer periods. Contextual effects are attributed to modulatory phenomena, such as stimulus-specific adaptation and contrast gain control. However, an encoding model that links context to natural sound processing has yet to be established. We tested whether a model in which spectrally tuned inputs undergo adaptation mimicking short-term synaptic plasticity (STP) can account for contextual effects during natural sound processing. Single-unit activity was recorded from primary auditory cortex of awake ferrets during presentation of noise with natural temporal dynamics and fully natural sounds. Encoding properties were characterized by a standard linear-nonlinear spectro-temporal receptive field (LN) model and variants that incorporated STP-like adaptation. In the adapting models, STP was applied either globally across all input spectral channels or locally to subsets of channels. For most neurons, models incorporating local STP predicted neural activity as well or better than LN and global STP models. The strength of nonlinear adaptation varied across neurons. Within neurons, adaptation was generally stronger for spectral channels with excitatory than inhibitory gain. Neurons showing improved STP model performance also tended to undergo stimulus-specific adaptation, suggesting a common mechanism for these phenomena. When STP models were compared between passive and active behavior conditions, response gain often changed, but average STP parameters were stable. Thus, spectrally and temporally heterogeneous adaptation, subserved by a mechanism with STP-like dynamics, may support representation of the complex spectro-temporal patterns that comprise natural sounds across wide-ranging sensory contexts.
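The model class being compared — spectrally tuned inputs that adapt with STP-like dynamics before a linear-nonlinear readout — can be sketched with a generic Tsodyks-Markram-style depression applied per channel. The constants and the rectifier output nonlinearity below are illustrative assumptions, not the fitted model from the paper.

```python
import numpy as np

def stp_channel(x, u=0.3, tau=15.0, dt=1.0):
    """Synaptic-depression-style adaptation of one input channel.
    x: nonnegative drive (e.g., one spectrogram row); u (release fraction)
    and tau (recovery time constant, in bins) are illustrative constants."""
    d, out = 1.0, np.empty_like(x)
    for t, xt in enumerate(x):
        out[t] = d * xt                          # depressed transmission
        d += dt * (1.0 - d) / tau - u * d * xt * dt   # deplete, then recover
        d = min(max(d, 0.0), 1.0)
    return out

def ln_stp_model(spec, weights, u=0.3, tau=15.0):
    """'Local STP' LN sketch: each spectral channel adapts independently,
    then a weighted sum passes through a rectifying output nonlinearity."""
    adapted = np.stack([stp_channel(ch, u, tau) for ch in spec])
    drive = weights @ adapted
    return np.maximum(drive, 0.0)
```

A sustained input then produces a response that decays toward a steady state, which is the signature behavior the adapting channels contribute.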
Affiliation(s)
- Mateo Lopez Espejo
- Neuroscience Graduate Program, Oregon Health and Science University, Portland, OR, United States of America
- Zachary P. Schwartz
- Neuroscience Graduate Program, Oregon Health and Science University, Portland, OR, United States of America
- Stephen V. David
- Oregon Hearing Research Center, Oregon Health and Science University, Portland, OR, United States of America
21
Isomura T, Toyoizumi T. Multi-context blind source separation by error-gated Hebbian rule. Sci Rep 2019; 9:7127. [PMID: 31073206 PMCID: PMC6509167 DOI: 10.1038/s41598-019-43423-z]
Abstract
Animals need to adjust their inferences according to the context they are in. This is required for the multi-context blind source separation (BSS) task, where an agent needs to infer hidden sources from their context-dependent mixtures. The agent is expected to invert this mixing process for all contexts. Here, we show that a neural network that implements the error-gated Hebbian rule (EGHR) with sufficiently redundant sensory inputs can successfully learn this task. After training, the network can perform the multi-context BSS without further updating synapses, by retaining memories of all experienced contexts. This demonstrates an attractive use of the EGHR for dimensionality reduction by extracting low-dimensional sources across contexts. Finally, if there is a common feature shared across contexts, the EGHR can extract it and generalize the task to even inexperienced contexts. The results highlight the utility of the EGHR as a model for perceptual adaptation in animals.
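The defining feature of an error-gated Hebbian rule is that a scalar, globally broadcast gate multiplies an otherwise local Hebbian term. The sketch below shows that structure only; the nonlinearity G(y) = log cosh(y) (so g = tanh), the gate constant E0, and the learning rate are illustrative choices, not the exact published parameterization.

```python
import numpy as np

def eghr_step(W, x, eta=1e-3, E0=1.0):
    """One error-gated Hebbian update (sketch). The scalar gate (E0 - E(y))
    scales the Hebbian outer product g(y) x^T, so each synapse needs only
    local activity plus one broadcast scalar. Constants are illustrative."""
    y = W @ x                                   # linear readout of the mixture
    E = np.sum(np.log(np.cosh(y)))              # output "energy" under prior G
    W = W + eta * (E0 - E) * np.outer(np.tanh(y), x)
    return W, y
```

In the multi-context BSS setting, x would be a context-dependent mixture of hidden sources and W the unmixing matrix; the appeal of the scalar gate is that it is biologically plausible to broadcast network-wide.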
Affiliation(s)
- Takuya Isomura
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Wako, Saitama, 351-0198, Japan
- Taro Toyoizumi
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Wako, Saitama, 351-0198, Japan
- RIKEN CBS-OMRON Collaboration Center, Wako, Saitama, 351-0198, Japan
22
Rahman M, Willmore BDB, King AJ, Harper NS. A dynamic network model of temporal receptive fields in primary auditory cortex. PLoS Comput Biol 2019; 15:e1006618. [PMID: 31059503 PMCID: PMC6534339 DOI: 10.1371/journal.pcbi.1006618]
Abstract
Auditory neurons encode stimulus history, which is often modelled using a span of time-delays in a spectro-temporal receptive field (STRF). We propose an alternative model for the encoding of stimulus history, which we apply to extracellular recordings of neurons in the primary auditory cortex of anaesthetized ferrets. For a linear-non-linear STRF model (LN model) to achieve a high level of performance in predicting single unit neural responses to natural sounds in the primary auditory cortex, we found that it is necessary to include time delays going back at least 200 ms in the past. This is an unrealistic time span for biological delay lines. We therefore asked how much of this dependence on stimulus history can instead be explained by dynamical aspects of neurons. We constructed a neural-network model whose output is the weighted sum of units whose responses are determined by a dynamic firing-rate equation. The dynamic aspect performs low-pass filtering on each unit's response, providing an exponentially decaying memory whose time constant is individual to each unit. We find that this dynamic network (DNet) model, when fitted to the neural data using STRFs of only 25 ms duration, can achieve prediction performance on a held-out dataset comparable to the best performing LN model with STRFs of 200 ms duration. These findings suggest that integration due to the membrane time constants or other exponentially-decaying memory processes may underlie linear temporal receptive fields of neurons beyond 25 ms.
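The core of the proposed DNet model is easy to state: each hidden unit low-pass filters its input drive with its own time constant, giving an exponentially decaying memory, and the output is a weighted sum of unit responses. A minimal discrete-time sketch (sizes and time constants are arbitrary, and the fitted short-STRF input stage is omitted):

```python
import numpy as np

def dnet_unit(x, tau, dt=1.0):
    """Dynamic unit: first-order low-pass filter of the input drive, i.e.
    an exponentially decaying memory with unit-specific time constant tau."""
    r = np.zeros_like(x)
    for t in range(1, len(x)):
        r[t] = r[t-1] + (dt / tau) * (x[t] - r[t-1])
    return r

def dnet_output(drives, taus, readout):
    """Network output: weighted sum of low-pass-filtered unit drives.
    drives: (units, time); taus and readout are per-unit parameters."""
    states = np.stack([dnet_unit(d, tau) for d, tau in zip(drives, taus)])
    return readout @ states
```

Because a unit with time constant tau retains an exponentially weighted trace of its past drive, a bank of such units can stand in for the long delay lines that a 200 ms STRF would otherwise require.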
Affiliation(s)
- Monzilur Rahman
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Ben D. B. Willmore
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J. King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Nicol S. Harper
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
23
Williamson RS, Polley DB. Parallel pathways for sound processing and functional connectivity among layer 5 and 6 auditory corticofugal neurons. eLife 2019; 8:e42974. [PMID: 30735128 PMCID: PMC6384027 DOI: 10.7554/elife.42974]
Abstract
Cortical layers (L) 5 and 6 are populated by intermingled cell-types with distinct inputs and downstream targets. Here, we made optogenetically guided recordings from L5 corticofugal (CF) and L6 corticothalamic (CT) neurons in the auditory cortex of awake mice to discern differences in sensory processing and underlying patterns of functional connectivity. Whereas L5 CF neurons showed broad stimulus selectivity with sluggish response latencies and extended temporal non-linearities, L6 CTs exhibited sparse selectivity and rapid temporal processing. L5 CF spikes lagged behind neighboring units and imposed weak feedforward excitation within the local column. By contrast, L6 CT spikes drove robust and sustained activity, particularly in local fast-spiking interneurons. Our findings underscore a duality among sub-cortical projection neurons, where L5 CF units are canonical broadcast neurons that integrate sensory inputs for transmission to distributed downstream targets, while L6 CT neurons are positioned to regulate thalamocortical response gain and selectivity.
Affiliation(s)
- Ross S Williamson
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
- Department of Otolaryngology, Harvard Medical School, Boston, United States
- Daniel B Polley
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
- Department of Otolaryngology, Harvard Medical School, Boston, United States
24
Cooke JE, King AJ, Willmore BDB, Schnupp JWH. Contrast gain control in mouse auditory cortex. J Neurophysiol 2018; 120:1872-1884. [PMID: 30044164 PMCID: PMC6230796 DOI: 10.1152/jn.00847.2017]
Abstract
The neocortex is thought to employ a number of canonical computations, but little is known about whether these computations rely on shared mechanisms across different neural populations. In recent years, the mouse has emerged as a powerful model organism for the dissection of the circuits and mechanisms underlying various aspects of neural processing and therefore provides an important avenue for research into putative canonical computations. One such computation is contrast gain control, the systematic adjustment of neural gain in accordance with the contrast of sensory input, which helps to construct neural representations that are robust to the presence of background stimuli. Here, we characterized contrast gain control in the mouse auditory cortex. We performed laminar extracellular recordings in the auditory cortex of the anesthetized mouse while varying the contrast of the sensory input. We observed that an increase in stimulus contrast resulted in a compensatory reduction in the gain of neural responses, leading to representations in the mouse auditory cortex that are largely contrast invariant. Contrast gain control was present in all cortical layers but was found to be strongest in deep layers, indicating that intracortical mechanisms may contribute to these gain changes. These results lay a foundation for investigations into the mechanisms underlying contrast adaptation in the mouse auditory cortex. NEW & NOTEWORTHY We investigated whether contrast gain control, the systematic reduction in neural gain in response to an increase in sensory contrast, exists in the mouse auditory cortex. We performed extracellular recordings in the mouse auditory cortex while presenting sensory stimuli with varying contrasts and found this form of processing was widespread. This finding provides evidence that contrast gain control may represent a canonical cortical computation and lays a foundation for investigations into the underlying mechanisms.
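The computation described here — gain reduced as stimulus contrast rises, yielding roughly contrast-invariant representations — can be captured by a one-line divisive model. Using the stimulus standard deviation as "contrast" and the constants below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

def contrast_gain(stim, k=1.0, sigma=0.5):
    """Divisive contrast gain control (sketch): response gain shrinks as
    input contrast (std of the stimulus) grows, compressing the range of
    output variability. k and sigma are illustrative constants."""
    contrast = np.std(stim)
    gain = k / (sigma + contrast)        # higher contrast -> lower gain
    return gain * stim
```

With this form, a 4-fold increase in input contrast produces well under a 4-fold increase in output contrast, which is the compensatory behavior the abstract reports.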
Affiliation(s)
- James E Cooke
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- University College London, United Kingdom
- Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Ben D B Willmore
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Jan W H Schnupp
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Department of Biomedical Sciences, City University of Hong Kong, Hong Kong
25
Abstract
Our ability to make sense of the auditory world results from neural processing that begins in the ear, goes through multiple subcortical areas, and continues in the cortex. The specific contribution of the auditory cortex to this chain of processing is far from understood. Although many of the properties of neurons in the auditory cortex resemble those of subcortical neurons, they show somewhat more complex selectivity for sound features, which is likely to be important for the analysis of natural sounds, such as speech, in real-life listening conditions. Furthermore, recent work has shown that auditory cortical processing is highly context-dependent, integrates auditory inputs with other sensory and motor signals, depends on experience, and is shaped by cognitive demands, such as attention. Thus, in addition to being the locus for more complex sound selectivity, the auditory cortex is increasingly understood to be an integral part of the network of brain regions responsible for prediction, auditory perceptual decision-making, and learning. In this review, we focus on three key areas that are contributing to this understanding: the sound features that are preferentially represented by cortical neurons, the spatial organization of those preferences, and the cognitive roles of the auditory cortex.
Affiliation(s)
- Andrew J King
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Sundeep Teki
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Ben D B Willmore
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
26
Westö J, May PJC. Describing complex cells in primary visual cortex: a comparison of context and multifilter LN models. J Neurophysiol 2018; 120:703-719. [PMID: 29718805 PMCID: PMC6139451 DOI: 10.1152/jn.00916.2017] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2018] [Revised: 04/30/2018] [Accepted: 04/30/2018] [Indexed: 11/24/2022] Open
Abstract
Receptive field (RF) models are an important tool for deciphering neural responses to sensory stimuli. The two currently popular RF models are multifilter linear-nonlinear (LN) models and context models. Models are, however, never correct, and they rely on assumptions to keep them simple enough to be interpretable. As a consequence, different models describe different stimulus-response mappings, which may or may not be good approximations of real neural behavior. In the current study, we take up two tasks: 1) we introduce new ways to estimate context models with realistic nonlinearities, that is, with logistic and exponential functions, and 2) we evaluate context models and multifilter LN models in terms of how well they describe recorded data from complex cells in cat primary visual cortex. Our results, based on single-spike information and correlation coefficients, indicate that context models outperform corresponding multifilter LN models of equal complexity (measured in terms of number of parameters), with the best increase in performance being achieved by the novel context models. Consequently, our results suggest that the multifilter LN-model framework is suboptimal for describing the behavior of complex cells: the context-model framework is clearly superior while still providing interpretable quantizations of neural behavior. NEW & NOTEWORTHY We used data from complex cells in primary visual cortex to estimate a wide variety of receptive field models from two frameworks that have previously not been compared with each other. The models included traditionally used multifilter linear-nonlinear models and novel variants of context models. Using mutual information and correlation coefficients as performance measures, we showed that context models are superior for describing complex cells and that the novel context models performed the best.
Affiliation(s)
- Johan Westö
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Patrick J C May
- Department of Psychology, Lancaster University, Lancaster, United Kingdom
27
Hamilton LS, Huth AG. The revolution will not be controlled: natural stimuli in speech neuroscience. Lang Cogn Neurosci 2018; 35:573-582. [PMID: 32656294 PMCID: PMC7324135 DOI: 10.1080/23273798.2018.1499946] [Citation(s) in RCA: 106] [Impact Index Per Article: 17.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/21/2018] [Accepted: 07/03/2018] [Indexed: 05/22/2023]
Abstract
Humans have a unique ability to produce and consume rich, complex, and varied language in order to communicate ideas to one another. Still, outside of natural reading, the most common methods for studying how our brains process speech or understand language use only isolated words or simple sentences. Recent studies have upset this status quo by employing complex natural stimuli and measuring how the brain responds to language as it is used. In this article we argue that natural stimuli offer many advantages over simplified, controlled stimuli for studying how language is processed by the brain. Furthermore, the downsides of using natural language stimuli can be mitigated using modern statistical and computational techniques.
Affiliation(s)
- Liberty S. Hamilton
- Communication Sciences & Disorders, Moody College of Communication, The University of Texas at Austin, Austin, USA
- Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, USA
- Alexander G. Huth
- Department of Neuroscience, The University of Texas at Austin, Austin, USA
- Department of Computer Science, The University of Texas at Austin, Austin, USA
28
Williams AH, Kim TH, Wang F, Vyas S, Ryu SI, Shenoy KV, Schnitzer M, Kolda TG, Ganguli S. Unsupervised Discovery of Demixed, Low-Dimensional Neural Dynamics across Multiple Timescales through Tensor Component Analysis. Neuron 2018; 98:1099-1115.e8. [PMID: 29887338 DOI: 10.1016/j.neuron.2018.05.015] [Citation(s) in RCA: 132] [Impact Index Per Article: 22.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2017] [Revised: 03/18/2018] [Accepted: 05/08/2018] [Indexed: 01/19/2023]
Abstract
Perceptions, thoughts, and actions unfold over millisecond timescales, while learned behaviors can require many days to mature. While recent experimental advances enable large-scale and long-term neural recordings with high temporal fidelity, it remains a formidable challenge to extract unbiased and interpretable descriptions of how rapid single-trial circuit dynamics change slowly over many trials to mediate learning. We demonstrate a simple tensor component analysis (TCA) can meet this challenge by extracting three interconnected, low-dimensional descriptions of neural data: neuron factors, reflecting cell assemblies; temporal factors, reflecting rapid circuit dynamics mediating perceptions, thoughts, and actions within each trial; and trial factors, describing both long-term learning and trial-to-trial changes in cognitive state. We demonstrate the broad applicability of TCA by revealing insights into diverse datasets derived from artificial neural networks, large-scale calcium imaging of rodent prefrontal cortex during maze navigation, and multielectrode recordings of macaque motor cortex during brain machine interface learning.
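The three-factor decomposition at the heart of this paper can be sketched in a few lines: a (neurons x time x trials) array built from one known "assembly" is factored back into neuron, temporal, and trial components by alternating least squares. The synthetic data and rank-1 fit below are toy assumptions chosen for illustration; the paper's TCA handles higher ranks and real recordings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: one cell assembly with a within-trial temporal profile
# whose amplitude grows slowly across trials (mimicking learning).
n_neurons, n_time, n_trials = 20, 50, 30
u = rng.random(n_neurons)                               # neuron factor
v = np.exp(-((np.arange(n_time) - 20) ** 2) / 50.0)     # temporal factor
w = np.linspace(0.2, 1.0, n_trials)                     # trial factor
X = np.einsum('i,j,k->ijk', u, v, w) \
    + 0.01 * rng.normal(size=(n_neurons, n_time, n_trials))

# Rank-1 CP (tensor component) fit by alternating least squares
a = rng.random(n_neurons)
b = rng.random(n_time)
c = rng.random(n_trials)
for _ in range(50):
    a = np.einsum('ijk,j,k->i', X, b, c) / ((b @ b) * (c @ c))
    b = np.einsum('ijk,i,k->j', X, a, c) / ((a @ a) * (c @ c))
    c = np.einsum('ijk,i,j->k', X, a, b) / ((a @ a) * (b @ b))

# Factors are recovered up to scale; compare directions via cosine similarity
cos_u = abs(a @ u) / (np.linalg.norm(a) * np.linalg.norm(u))
```

With modest noise the neuron, temporal, and trial factors are all recovered almost exactly, illustrating how a single decomposition "demixes" fast within-trial dynamics from slow across-trial change.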
Affiliation(s)
- Alex H Williams
- Neurosciences Graduate Program, Stanford University, Stanford, CA 94305, USA
- Tony Hyun Kim
- Electrical Engineering Department, Stanford University, Stanford, CA 94305, USA
- Forea Wang
- Neurosciences Graduate Program, Stanford University, Stanford, CA 94305, USA
- Saurabh Vyas
- Electrical Engineering Department, Stanford University, Stanford, CA 94305, USA; Bioengineering Department, Stanford University, Stanford, CA 94305, USA
- Stephen I Ryu
- Electrical Engineering Department, Stanford University, Stanford, CA 94305, USA; Department of Neurosurgery, Palo Alto Medical Foundation, Palo Alto, CA 94301, USA
- Krishna V Shenoy
- Electrical Engineering Department, Stanford University, Stanford, CA 94305, USA; Bioengineering Department, Stanford University, Stanford, CA 94305, USA; Neurobiology Department, Stanford University, Stanford, CA 94305, USA; Bio-X Program, Stanford University, Stanford, CA 94305, USA; Stanford Neurosciences Institute, Stanford University, Stanford, CA 94305, USA; Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA
- Mark Schnitzer
- Applied Physics Department, Stanford University, Stanford, CA 94305, USA; Biology Department, Stanford University, Stanford, CA 94305, USA; Bio-X Program, Stanford University, Stanford, CA 94305, USA; Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA; CNC Program, Stanford University, Stanford, CA 94305, USA
- Surya Ganguli
- Applied Physics Department, Stanford University, Stanford, CA 94305, USA; Neurobiology Department, Stanford University, Stanford, CA 94305, USA; Bio-X Program, Stanford University, Stanford, CA 94305, USA; Stanford Neurosciences Institute, Stanford University, Stanford, CA 94305, USA
29
Angeloni C, Geffen MN. Contextual modulation of sound processing in the auditory cortex. Curr Opin Neurobiol 2018; 49:8-15. [PMID: 29125987 PMCID: PMC6037899 DOI: 10.1016/j.conb.2017.10.012] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2017] [Revised: 10/11/2017] [Accepted: 10/13/2017] [Indexed: 12/26/2022]
Abstract
In everyday acoustic environments, we navigate through a maze of sounds that possess a complex spectrotemporal structure, spanning many frequencies and exhibiting temporal modulations that differ within frequency bands. Our auditory system needs to efficiently encode the same sounds in a variety of different contexts, while preserving the ability to separate complex sounds within an acoustic scene. Recent work in auditory neuroscience has made substantial progress in studying how sounds are represented in the auditory system under different contexts, demonstrating that auditory processing of seemingly simple acoustic features, such as frequency and time, is highly dependent on co-occurring acoustic and behavioral stimuli. Through a combination of electrophysiological recordings, computational analysis and behavioral techniques, recent research identified the interactions between external spectral and temporal context of stimuli, as well as the internal behavioral state.
Affiliation(s)
- C Angeloni
- Department of Otorhinolaryngology: HNS, Department of Neuroscience, Psychology Graduate Group, Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA, United States
- M N Geffen
- Department of Otorhinolaryngology: HNS, Department of Neuroscience, Psychology Graduate Group, Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA, United States
30
Cortical Neural Activity Predicts Sensory Acuity Under Optogenetic Manipulation. J Neurosci 2018; 38:2094-2105. [PMID: 29367406 DOI: 10.1523/jneurosci.2457-17.2017] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2017] [Revised: 11/14/2017] [Accepted: 12/15/2017] [Indexed: 11/21/2022] Open
Abstract
Excitatory and inhibitory neurons in the mammalian sensory cortex form interconnected circuits that control cortical stimulus selectivity and sensory acuity. Theoretical studies have predicted that suppression of inhibition in such excitatory-inhibitory networks can lead to either an increase or, paradoxically, a decrease in excitatory neuronal firing, with consequent effects on stimulus selectivity. We tested whether modulation of inhibition or excitation in the auditory cortex of male mice could evoke such a variety of effects in tone-evoked responses and in behavioral frequency discrimination acuity. We found that, indeed, the effects of optogenetic manipulation on stimulus selectivity and behavior varied in both magnitude and sign across subjects, possibly reflecting differences in circuitry or expression of optogenetic factors. Changes in neural population responses consistently predicted behavioral changes for individuals separately, including improvement and impairment in acuity. This correlation between cortical and behavioral change demonstrates that, despite the complex and varied effects that these manipulations can have on neuronal dynamics, the resulting changes in cortical activity account for accompanying changes in behavioral acuity. SIGNIFICANCE STATEMENT: Excitatory and inhibitory interactions determine stimulus specificity and tuning in sensory cortex, thereby controlling perceptual discrimination acuity. Modeling has predicted that suppressing the activity of inhibitory neurons can lead to increased or, paradoxically, decreased excitatory activity depending on the architecture of the network. Here, we capitalized on differences between subjects to test whether suppressing/activating inhibition and excitation can in fact exhibit such paradoxical effects for both stimulus sensitivity and behavioral discriminability. Indeed, the same optogenetic manipulation in the auditory cortex of different mice could improve or impair frequency discrimination acuity, predictable from the effects on cortical responses to tones. The same manipulations sometimes produced opposite changes in the behavior of different individuals, supporting theoretical predictions for inhibition-stabilized networks.
31
Weber AI, Pillow JW. Capturing the Dynamical Repertoire of Single Neurons with Generalized Linear Models. Neural Comput 2017; 29:3260-3289. [PMID: 28957020 DOI: 10.1162/neco_a_01021] [Citation(s) in RCA: 42] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
A key problem in computational neuroscience is to find simple, tractable models that are nevertheless flexible enough to capture the response properties of real neurons. Here we examine the capabilities of recurrent point process models known as Poisson generalized linear models (GLMs). These models are defined by a set of linear filters and a point nonlinearity and are conditionally Poisson spiking. They have desirable statistical properties for fitting and have been widely used to analyze spike trains from electrophysiological recordings. However, the dynamical repertoire of GLMs has not been systematically compared to that of real neurons. Here we show that GLMs can reproduce a comprehensive suite of canonical neural response behaviors, including tonic and phasic spiking, bursting, spike rate adaptation, type I and type II excitation, and two forms of bistability. GLMs can also capture stimulus-dependent changes in spike timing precision and reliability that mimic those observed in real neurons, and can exhibit varying degrees of stochasticity, from virtually deterministic responses to greater-than-Poisson variability. These results show that Poisson GLMs can exhibit a wide range of dynamic spiking behaviors found in real neurons, making them well suited for qualitative dynamical as well as quantitative statistical studies of single-neuron and population response properties.
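The model class examined here, a linear stimulus filter plus a post-spike history filter passed through an exponential point nonlinearity to drive conditionally Poisson spiking, can be simulated directly. The filter shapes and constants below are illustrative choices, not the paper's fitted parameters; the negative history filter is what produces the refractory-like dynamics the paper discusses.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 2000                                   # time bins (think 1 ms each)
stim = rng.normal(size=T)                  # white-noise stimulus

t_k = np.arange(30)                        # 30-bin stimulus filter (biphasic)
k = np.sin(2 * np.pi * t_k / 30) * np.exp(-t_k / 10)

t_h = np.arange(20)                        # 20-bin post-spike history filter
h = -5.0 * np.exp(-t_h / 5)                # self-suppression -> refractoriness

b = -3.5                                   # baseline log-rate per bin
spikes = np.zeros(T)
for t in range(T):
    s = stim[max(0, t - 30):t][::-1]       # recent stimulus, newest first
    r = spikes[max(0, t - 20):t][::-1]     # recent spiking, newest first
    drive = b + k[:len(s)] @ s + h[:len(r)] @ r
    # Exponential nonlinearity, conditionally Poisson spike count per bin
    spikes[t] = rng.poisson(np.exp(drive))
```

Swapping the sign or time constant of `h` is how such a GLM moves between the regimes the paper catalogs (e.g., bursting versus adaptation).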
Affiliation(s)
- Alison I Weber
- Graduate Program in Neuroscience, University of Washington, Seattle, WA 98195, U.S.A.
- Jonathan W Pillow
- Princeton Neuroscience Institute, Princeton University, and Department of Psychology, Princeton University, Princeton, NJ 08540, U.S.A.
32
Keine C, Rübsamen R, Englitz B. Signal integration at spherical bushy cells enhances representation of temporal structure but limits its range. eLife 2017; 6:29639. [PMID: 28945194 PMCID: PMC5626481 DOI: 10.7554/elife.29639] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2017] [Accepted: 09/25/2017] [Indexed: 11/25/2022] Open
Abstract
Neuronal inhibition is crucial for temporally precise and reproducible signaling in the auditory brainstem. Previously we showed that for various synthetic stimuli, spherical bushy cell (SBC) activity in the Mongolian gerbil is rendered sparser and more reliable by subtractive inhibition (Keine et al., 2016). Here, employing environmental stimuli, we demonstrate that the inhibitory gain control becomes even more effective, keeping stimulated response rates equal to spontaneous ones. However, what are the costs of this modulation? We performed dynamic stimulus reconstructions based on neural population responses for auditory nerve (ANF) input and SBC output to assess the influence of inhibition on acoustic signal representation. Compared to ANFs, reconstructions of natural stimuli based on SBC responses were temporally more precise, but the match between acoustic and represented signal decreased. Hence, for natural sounds, inhibition at SBCs plays an even stronger role in achieving sparse and reproducible neuronal activity, while compromising general signal representation.
Affiliation(s)
- Christian Keine
- Carver College of Medicine, Department of Anatomy and Cell Biology, University of Iowa, Iowa City, United States; Faculty of Bioscience, Pharmacy and Psychology, University of Leipzig, Leipzig, Germany
- Rudolf Rübsamen
- Faculty of Bioscience, Pharmacy and Psychology, University of Leipzig, Leipzig, Germany
- Bernhard Englitz
- Donders Center for Neuroscience, Department of Neurophysiology, Radboud University, Nijmegen, Netherlands
33
Ghanbari A, Malyshev A, Volgushev M, Stevenson IH. Estimating short-term synaptic plasticity from pre- and postsynaptic spiking. PLoS Comput Biol 2017; 13:e1005738. [PMID: 28873406 PMCID: PMC5600391 DOI: 10.1371/journal.pcbi.1005738] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2017] [Revised: 09/15/2017] [Accepted: 08/18/2017] [Indexed: 01/27/2023] Open
Abstract
Short-term synaptic plasticity (STP) critically affects the processing of information in neuronal circuits by reversibly changing the effective strength of connections between neurons on time scales from milliseconds to a few seconds. STP is traditionally studied using intracellular recordings of postsynaptic potentials or currents evoked by presynaptic spikes. However, STP also affects the statistics of postsynaptic spikes. Here we present two model-based approaches for estimating synaptic weights and short-term plasticity from pre- and postsynaptic spike observations alone. We extend a generalized linear model (GLM) that predicts postsynaptic spiking as a function of the observed pre- and postsynaptic spikes and allow the connection strength (coupling term in the GLM) to vary as a function of time based on the history of presynaptic spikes. Our first model assumes that STP follows a Tsodyks-Markram description of vesicle depletion and recovery. In a second model, we introduce a functional description of STP where we estimate the coupling term as a biophysically unrestrained function of the presynaptic inter-spike intervals. To validate the models, we test the accuracy of STP estimation using the spiking of pre- and postsynaptic neurons with known synaptic dynamics. We first test our models using the responses of layer 2/3 pyramidal neurons to simulated presynaptic input with different types of STP, and then use simulated spike trains to examine the effects of spike-frequency adaptation, stochastic vesicle release, spike sorting errors, and common input. We find that, using only spike observations, both model-based methods can accurately reconstruct the time-varying synaptic weights of presynaptic inputs for different types of STP. Our models also capture the differences in postsynaptic spike responses to presynaptic spikes following short vs long inter-spike intervals, similar to results reported for thalamocortical connections. These models may thus be useful tools for characterizing short-term plasticity from multi-electrode spike recordings in vivo.
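The Tsodyks-Markram description of vesicle depletion and recovery assumed by the authors' first model can be sketched as follows. This is a depression-only variant with illustrative parameter values (`U`, `tau_rec` are assumptions, not the paper's estimates); the paper's contribution is estimating such quantities from spike observations alone.

```python
import numpy as np

def tm_depression(spike_times, U=0.5, tau_rec=0.8):
    """Tsodyks-Markram depression: each presynaptic spike releases a
    fraction U of the available resources R, and R recovers toward 1
    with time constant tau_rec (seconds)."""
    R = 1.0
    last_t = None
    amps = []
    for t in spike_times:
        if last_t is not None:
            # Exponential recovery since the previous spike
            R = 1.0 - (1.0 - R) * np.exp(-(t - last_t) / tau_rec)
        amps.append(U * R)   # relative synaptic efficacy of this spike
        R -= U * R           # resources consumed by the spike
        last_t = t
    return np.array(amps)

# A regular 10 Hz presynaptic train: efficacy depresses toward steady state
amps = tm_depression(np.arange(10) * 0.1)
```

Each successive spike in the train evokes a smaller relative response, reproducing the short-interval-versus-long-interval differences the abstract refers to.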
Affiliation(s)
- Abed Ghanbari
- Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, United States of America
- Aleksey Malyshev
- Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Science, Moscow, Russia
- Maxim Volgushev
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut, United States of America
- Ian H. Stevenson
- Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, United States of America
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut, United States of America
34
Mechanisms of Saccadic Suppression in Primate Cortical Area V4. J Neurosci 2016; 36:9227-39. [PMID: 27581462 DOI: 10.1523/jneurosci.1015-16.2016] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2016] [Accepted: 07/16/2016] [Indexed: 11/21/2022] Open
Abstract
Psychophysical studies have shown that subjects are often unaware of visual stimuli presented around the time of an eye movement. This saccadic suppression is thought to be a mechanism for maintaining perceptual stability. The brain might accomplish saccadic suppression by reducing the gain of visual responses to specific stimuli or by simply suppressing firing uniformly for all stimuli. Moreover, the suppression might be identical across the visual field or concentrated at specific points. To evaluate these possibilities, we recorded from individual neurons in cortical area V4 of nonhuman primates trained to execute saccadic eye movements. We found that both modes of suppression were evident in the visual responses of these neurons and that the two modes showed different spatial and temporal profiles: while gain changes started earlier and were more widely distributed across visual space, nonspecific suppression was found more often in the peripheral visual field, after the completion of the saccade. Peripheral suppression was also associated with increased noise correlations and stronger local field potential oscillations in the α frequency band. This pattern of results suggests that saccadic suppression shares some of the circuitry responsible for allocating voluntary attention. SIGNIFICANCE STATEMENT: We explore our surroundings by looking at things, but each eye movement that we make causes an abrupt shift of the visual input. Why doesn't the world look like a film recorded on a shaky camera? The answer in part is a brain mechanism called saccadic suppression, which reduces the responses of visual neurons around the time of each eye movement. Here we reveal several new properties of the underlying mechanisms. First, the suppression operates differently in the central and peripheral visual fields. Second, it appears to be controlled by oscillations in the local field potentials at frequencies traditionally associated with attention. These results suggest that saccadic suppression shares the brain circuits responsible for actively ignoring irrelevant stimuli.
35
Boubenec Y, Lawlor J, Górska U, Shamma S, Englitz B. Detecting changes in dynamic and complex acoustic environments. eLife 2017; 6:e24910. [PMID: 28262095 PMCID: PMC5367897 DOI: 10.7554/elife.24910] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2017] [Accepted: 03/04/2017] [Indexed: 01/28/2023] Open
Abstract
Natural sounds, such as wind or rain, are characterized by the statistical occurrence of their constituents. Despite their complexity, listeners readily detect changes in these contexts. We here address the neural basis of statistical decision-making using a combination of psychophysics, EEG and modelling. In a texture-based, change-detection paradigm, human performance and reaction times improved with longer pre-change exposure, consistent with improved estimation of baseline statistics. Change-locked and decision-related EEG responses were found in a centro-parietal scalp location, whose slope depended on change size, consistent with sensory evidence accumulation. The potential's amplitude scaled with the duration of pre-change exposure, suggesting a time-dependent decision threshold. Auditory cortex-related potentials showed no response to the change. A dual timescale, statistical estimation model accounted for subjects' performance. Furthermore, a decision-augmented auditory cortex model accounted for performance and reaction times, suggesting that the primary cortical representation requires little post-processing to enable change-detection in complex acoustic environments.
Affiliation(s)
- Yves Boubenec
- Laboratoire des Systèmes Perceptifs, CNRS UMR 8248, Paris, France; Département d'études cognitives, École normale supérieure, PSL Research University, Paris, France
- Jennifer Lawlor
- Laboratoire des Systèmes Perceptifs, CNRS UMR 8248, Paris, France; Département d'études cognitives, École normale supérieure, PSL Research University, Paris, France
- Urszula Górska
- Department of Neurophysiology, Donders Centre for Neuroscience, Radboud Universiteit, Nijmegen, Netherlands; Psychophysiology Laboratory, Institute of Psychology, Jagiellonian University, Krakow, Poland; Smoluchowski Institute of Physics, Jagiellonian University, Krakow, Poland
- Shihab Shamma
- Laboratoire des Systèmes Perceptifs, CNRS UMR 8248, Paris, France; Département d'études cognitives, École normale supérieure, PSL Research University, Paris, France; Department of Electrical and Computer Engineering, University of Maryland, College Park, United States; Institute for Systems Research, University of Maryland, College Park, United States
- Bernhard Englitz
- Laboratoire des Systèmes Perceptifs, CNRS UMR 8248, Paris, France; Département d'études cognitives, École normale supérieure, PSL Research University, Paris, France; Department of Neurophysiology, Donders Centre for Neuroscience, Radboud Universiteit, Nijmegen, Netherlands
36
Meyer AF, Williamson RS, Linden JF, Sahani M. Models of Neuronal Stimulus-Response Functions: Elaboration, Estimation, and Evaluation. Front Syst Neurosci 2017; 10:109. [PMID: 28127278 PMCID: PMC5226961 DOI: 10.3389/fnsys.2016.00109] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2016] [Accepted: 12/19/2016] [Indexed: 11/13/2022] Open
Abstract
Rich, dynamic, and dense sensory stimuli are encoded within the nervous system by the time-varying activity of many individual neurons. A fundamental approach to understanding the nature of the encoded representation is to characterize the function that relates the moment-by-moment firing of a neuron to the recent history of a complex sensory input. This review provides a unifying and critical survey of the techniques that have been brought to bear on this effort thus far—ranging from the classical linear receptive field model to modern approaches incorporating normalization and other nonlinearities. We address separately the structure of the models; the criteria and algorithms used to identify the model parameters; and the role of regularizing terms or “priors.” In each case we consider benefits or drawbacks of various proposals, providing examples for when these methods work and when they may fail. Emphasis is placed on key concepts rather than mathematical details, so as to make the discussion accessible to readers from outside the field. Finally, we review ways in which the agreement between an assumed model and the neuron's response may be quantified. Re-implemented and unified code for many of the methods are made freely available.
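As a concrete point of reference for the classical linear receptive field model this review starts from, the sketch below simulates a linear-nonlinear-Poisson neuron and recovers its filter with a spike-triggered average, which is valid for white-noise stimuli. The filter shape and scaling are invented for illustration; the review covers why such estimators need regularization and corrections for naturalistic stimuli.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linear receptive field (the "ground truth" to recover)
D = 40                                          # filter length (time lags)
k = np.exp(-np.arange(D) / 8.0) * np.sin(np.arange(D) / 4.0)

T = 50000
stim = rng.normal(size=T)                       # white-noise stimulus

# Linear-nonlinear-Poisson neuron with a rectified-linear nonlinearity
drive = np.convolve(stim, k)[:T]                # causal linear filtering
rate = 0.1 * np.maximum(drive, 0.0)
spikes = rng.poisson(rate)

# Spike-triggered average: mean stimulus history preceding each spike.
# For Gaussian white noise this is proportional to k (Bussgang's theorem).
sta = np.zeros(D)
for t in range(D, T):
    if spikes[t]:
        sta += spikes[t] * stim[t - D + 1:t + 1][::-1]
sta /= spikes[D:].sum()
```

Comparing `sta` to `k` (up to scale) shows the recovery; with correlated natural stimuli the same estimator becomes biased, which motivates the whitened and regularized methods surveyed in the review.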
Affiliation(s)
- Arne F Meyer
- Gatsby Computational Neuroscience Unit, University College London, London, UK
- Ross S Williamson
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA, USA; Department of Otology and Laryngology, Harvard Medical School, Boston, MA, USA
- Jennifer F Linden
- Ear Institute, University College London, London, UK; Department of Neuroscience, Physiology and Pharmacology, University College London, London, UK
- Maneesh Sahani
- Gatsby Computational Neuroscience Unit, University College London, London, UK
37
Yildiz IB, Mesgarani N, Deneve S. Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields. J Neurosci 2016; 36:12338-12350. [PMID: 27927954 PMCID: PMC5148225 DOI: 10.1523/jneurosci.4648-15.2016] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2015] [Revised: 09/18/2016] [Accepted: 09/20/2016] [Indexed: 11/23/2022] Open
Abstract
A primary goal of auditory neuroscience is to identify the sound features extracted and represented by auditory neurons. Linear encoding models, which describe neural responses as a function of the stimulus, have been primarily used for this purpose. Here, we provide theoretical arguments and experimental evidence in support of an alternative approach, based on decoding the stimulus from the neural response. We used a Bayesian normative approach to predict the responses of neurons detecting relevant auditory features, despite ambiguities and noise. We compared the model predictions to recordings from the primary auditory cortex of ferrets and found that: (1) the decoding filters of auditory neurons resemble the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynamics of responses better than a linear encoding model of similar complexity; and (3) the decoding model accounts for the accuracy with which the stimulus is represented in neural activity, whereas the linear encoding model performs very poorly. Most importantly, our model predicts that neuronal responses are fundamentally shaped by "explaining away," a divisive competition between alternative interpretations of the auditory scene. SIGNIFICANCE STATEMENT: Neural responses in the auditory cortex are dynamic, nonlinear, and hard to predict. Traditionally, encoding models have been used to describe neural responses as a function of the stimulus. However, in addition to external stimulation, neural activity is strongly modulated by the responses of other neurons in the network. We hypothesized that auditory neurons aim to collectively decode their stimulus. In particular, a stimulus feature that is decoded (or explained away) by one neuron is not explained by another. We demonstrated that this novel Bayesian decoding model is better at capturing the dynamic responses of cortical neurons in ferrets. Whereas the linear encoding model poorly reflects selectivity of neurons, the decoding model can account for the strong nonlinearities observed in neural data.
Affiliation(s)
- Izzet B Yildiz
- Group for Neural Theory, Laboratoire de Neurosciences Cognitives, Département d'Etudes Cognitives, Ecole Normale Supérieure, 75005 Paris, France
- Nima Mesgarani
- Department of Electrical Engineering, Columbia University, New York, New York 10027
- Sophie Deneve
- Group for Neural Theory, Laboratoire de Neurosciences Cognitives, Département d'Etudes Cognitives, Ecole Normale Supérieure, 75005 Paris, France
38
Keine C, Rübsamen R, Englitz B. Inhibition in the auditory brainstem enhances signal representation and regulates gain in complex acoustic environments. eLife 2016; 5. [PMID: 27855778 PMCID: PMC5148601 DOI: 10.7554/elife.19295] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2016] [Accepted: 11/17/2016] [Indexed: 12/30/2022] Open
Abstract
Inhibition plays a crucial role in neural signal processing, shaping and limiting responses. In the auditory system, inhibition already modulates second-order neurons in the cochlear nucleus, e.g., spherical bushy cells (SBCs). While the physiological basis of inhibition and excitation is well described, their functional interaction in signal processing remains elusive. Using a combination of in vivo loose-patch recordings, iontophoretic drug application, and detailed signal analysis in the Mongolian gerbil, we demonstrate that inhibition is widely co-tuned with excitation and leads only to minor sharpening of the spectral response properties. Combining complex stimuli with neuronal input-output analysis based on spectrotemporal receptive fields revealed that inhibition renders the neuronal output temporally sparser and more reproducible than the input. Overall, inhibition plays a central role in improving the temporal response fidelity of SBCs across a wide range of input intensities and thereby provides the basis for high-fidelity signal processing.
Affiliation(s)
- Christian Keine
- Faculty of Bioscience, Pharmacy and Psychology, University of Leipzig, Leipzig, Germany
- Rudolf Rübsamen
- Faculty of Bioscience, Pharmacy and Psychology, University of Leipzig, Leipzig, Germany
- Bernhard Englitz
- Department of Neurophysiology, Donders Center for Neuroscience, Radboud University, Nijmegen, Netherlands
39
Cui Y, Wang YV, Park SJH, Demb JB, Butts DA. Divisive suppression explains high-precision firing and contrast adaptation in retinal ganglion cells. eLife 2016; 5:e19460. [PMID: 27841746 PMCID: PMC5108594 DOI: 10.7554/elife.19460] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2016] [Accepted: 10/19/2016] [Indexed: 11/13/2022] Open
Abstract
Visual processing depends on specific computations implemented by complex neural circuits. Here, we present a circuit-inspired model of retinal ganglion cell computation, targeted to explain their temporal dynamics and adaptation to contrast. To localize the sources of such processing, we used recordings at the levels of synaptic input and spiking output in the in vitro mouse retina. We found that an ON-Alpha ganglion cell's excitatory synaptic inputs were described by a divisive interaction between excitation and delayed suppression, which explained nonlinear processing that was already present in ganglion cell inputs. Ganglion cell output was further shaped by spike generation mechanisms. The full model accurately predicted spike responses with unprecedented millisecond precision, and accurately described contrast adaptation of the spike train. These results demonstrate how circuit and cell-intrinsic mechanisms interact for ganglion cell function and, more generally, illustrate the power of circuit-inspired modeling of sensory processing.
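The divisive interaction between excitation and delayed suppression described in this abstract can be sketched as follows. All filter shapes, the delay, and the suppression strength `sigma` are illustrative assumptions, not the authors' fitted model:

```python
import numpy as np

def divisive_suppression_response(stimulus, exc_filter, sup_filter, delay, sigma=1.0):
    """Sketch of divisive suppression: filtered excitation divided by
    (1 + delayed, filtered suppression), then half-wave rectified.
    Filters, delay, and sigma are illustrative placeholders."""
    exc = np.convolve(stimulus, exc_filter, mode="full")[: len(stimulus)]
    sup = np.convolve(stimulus, sup_filter, mode="full")[: len(stimulus)]
    sup = np.roll(sup, delay)          # delay the suppressive signal
    sup[:delay] = 0.0                  # no suppression before it arrives
    drive = np.maximum(exc, 0.0) / (1.0 + sigma * np.maximum(sup, 0.0))
    return np.maximum(drive, 0.0)
```

Because the denominator grows with stimulus strength, doubling the input less than doubles the output, which is the signature of contrast adaptation the model is meant to capture.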
Affiliation(s)
- Yuwei Cui
- Department of Biology, University of Maryland, College Park, United States
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, United States
- Yanbin V Wang
- Department of Ophthalmology and Visual Science, Yale University, New Haven, United States
- Department of Cellular and Molecular Physiology, Yale University, New Haven, United States
- Silvia J H Park
- Department of Ophthalmology and Visual Science, Yale University, New Haven, United States
- Jonathan B Demb
- Department of Ophthalmology and Visual Science, Yale University, New Haven, United States
- Department of Cellular and Molecular Physiology, Yale University, New Haven, United States
- Daniel A Butts
- Department of Biology, University of Maryland, College Park, United States
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, United States
40
Harper NS, Schoppe O, Willmore BDB, Cui Z, Schnupp JWH, King AJ. Network Receptive Field Modeling Reveals Extensive Integration and Multi-feature Selectivity in Auditory Cortical Neurons. PLoS Comput Biol 2016; 12:e1005113. [PMID: 27835647 PMCID: PMC5105998 DOI: 10.1371/journal.pcbi.1005113] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2015] [Accepted: 08/22/2016] [Indexed: 11/28/2022] Open
Abstract
Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex, neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1-7 sub-receptive fields that interact nonlinearly, consistently predicts neural responses to auditory stimuli better than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context.
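A minimal forward pass of a network receptive field of the kind described above can be sketched as follows: each hidden unit applies its own sub-STRF and a nonlinearity, and the output unit combines them. The sigmoid nonlinearities, weight layout, and shapes are illustrative assumptions, not the fitted architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def network_receptive_field(spectrogram, sub_strfs, hidden_w, out_bias=0.0):
    """Toy NRF forward pass.

    spectrogram : (n_freq, n_time) array
    sub_strfs   : list of (n_freq, n_lags) filters (the sub-receptive fields)
    hidden_w    : one weight per sub-field (negative -> inhibitory sub-field)
    """
    n_f, n_t = spectrogram.shape
    rates = np.zeros(n_t)
    for t in range(n_t):
        hidden = []
        for strf in sub_strfs:
            lags = strf.shape[1]
            chunk = spectrogram[:, max(0, t - lags + 1): t + 1]   # recent history
            chunk = np.pad(chunk, ((0, 0), (lags - chunk.shape[1], 0)))
            hidden.append(sigmoid(np.sum(strf * chunk)))          # sub-field output
        rates[t] = sigmoid(np.dot(hidden_w, hidden) + out_bias)   # combine nonlinearly
    return rates
```

Because each sub-field has its own nonlinearity before the outputs are combined, the model can express conjunctive ("respond only if several features co-occur") and divisive effects that a single linear STRF cannot.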
Affiliation(s)
- Nicol S. Harper
- Dept. of Physiology, Anatomy and Genetics (DPAG), Sherrington Building, University of Oxford, United Kingdom
- Institute of Biomedical Engineering, Department of Engineering Science, Old Road Campus Research Building, University of Oxford, Headington, United Kingdom
- Oliver Schoppe
- Dept. of Physiology, Anatomy and Genetics (DPAG), Sherrington Building, University of Oxford, United Kingdom
- Bio-Inspired Information Processing, Technische Universität München, Germany
- Ben D. B. Willmore
- Dept. of Physiology, Anatomy and Genetics (DPAG), Sherrington Building, University of Oxford, United Kingdom
- Zhanfeng Cui
- Institute of Biomedical Engineering, Department of Engineering Science, Old Road Campus Research Building, University of Oxford, Headington, United Kingdom
- Jan W. H. Schnupp
- Dept. of Physiology, Anatomy and Genetics (DPAG), Sherrington Building, University of Oxford, United Kingdom
- Department of Biomedical Science, City University of Hong Kong, Kowloon Tong, Hong Kong
- Andrew J. King
- Dept. of Physiology, Anatomy and Genetics (DPAG), Sherrington Building, University of Oxford, United Kingdom
41
Robinson BS, Berger TW, Song D. Identification of Stable Spike-Timing-Dependent Plasticity from Spiking Activity with Generalized Multilinear Modeling. Neural Comput 2016; 28:2320-2351. [PMID: 27557101 DOI: 10.1162/neco_a_00883] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Characterization of long-term activity-dependent plasticity from behaviorally driven spiking activity is important for understanding the underlying mechanisms of learning and memory. In this letter, we present a computational framework for quantifying spike-timing-dependent plasticity (STDP) during behavior by identifying a functional plasticity rule solely from spiking activity. First, we formulate a flexible point-process spiking neuron model structure with STDP, which includes functions that characterize the stationary and plastic properties of the neuron. The STDP model includes a novel function for prolonged plasticity induction, as well as a more typical function for synaptic weight change based on the relative timing of input-output spike pairs. Consideration for system stability is incorporated with weight-dependent synaptic modification. Next, we formalize an estimation technique using a generalized multilinear model (GMLM) structure with basis function expansion. The weight-dependent synaptic modification adds a nonlinearity to the model, which is addressed with an iterative unconstrained optimization approach. Finally, we demonstrate successful model estimation on simulated spiking data and show that all model functions can be estimated accurately with this method across a variety of simulation parameters, such as number of inputs, output firing rate, input firing type, and simulation time. Since this approach requires only naturally generated spikes, it can be readily applied to behaving animal studies to characterize the underlying mechanisms of learning and memory.
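The weight-dependent (stability-preserving) synaptic modification the abstract mentions can be illustrated with a soft-bound STDP rule for a single spike pair. The constants and exponential form are generic textbook choices, not the functions identified by the authors' GMLM:

```python
import math

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0, w_max=1.0):
    """Soft-bound STDP for one pre/post spike pair; dt = t_post - t_pre (ms).

    Potentiation scales with the remaining headroom (w_max - w) and
    depression scales with the current weight w, which keeps weights
    inside [0, w_max] without hard clipping. Constants are illustrative.
    """
    if dt > 0:    # pre before post -> potentiate
        return w + a_plus * (w_max - w) * math.exp(-dt / tau)
    elif dt < 0:  # post before pre -> depress
        return w - a_minus * w * math.exp(dt / tau)
    return w
```

The weight dependence is what makes repeated potentiation saturate rather than diverge, which is the stability consideration the model formalizes.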
Affiliation(s)
- Brian S Robinson
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA 90089, U.S.A.
- Theodore W Berger
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA 90089, U.S.A.
- Dong Song
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA 90089, U.S.A.
42
Rubin J, Ulanovsky N, Nelken I, Tishby N. The Representation of Prediction Error in Auditory Cortex. PLoS Comput Biol 2016; 12:e1005058. [PMID: 27490251 PMCID: PMC4973877 DOI: 10.1371/journal.pcbi.1005058] [Citation(s) in RCA: 55] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2015] [Accepted: 07/07/2016] [Indexed: 11/19/2022] Open
Abstract
To survive, organisms must extract information from the past that is relevant for their future. How this process is expressed at the neural level remains unclear. We address this problem by developing a novel approach from first principles. We show here how to generate low-complexity representations of the past that produce optimal predictions of future events. We then illustrate this framework by studying the coding of ‘oddball’ sequences in auditory cortex. We find that for many neurons in primary auditory cortex, trial-by-trial fluctuations of neuronal responses correlate with the theoretical prediction error calculated from the short-term past of the stimulation sequence, under constraints on the complexity of the representation of this past sequence. In some neurons, the effect of prediction error accounted for more than 50% of response variability. Reliable predictions often depended on a representation of the sequence of the last ten or more stimuli, although the representation kept only a few details of that sequence. A crucial aspect of all life is the ability to use past events to guide future behavior. To do that, creatures need the ability to predict future events. Indeed, predictability has been shown to affect neuronal responses in many animals and under many conditions. Clearly, the quality of predictions should depend on the amount and detail of the past information used to generate them. Here, by using a basic principle from information theory, we show how to derive explicitly the tradeoff between quality of prediction and complexity of the representation of past information. We then apply these ideas to a concrete case: neuronal responses recorded in auditory cortex during the presentation of oddball sequences, consisting of two tones with varying probabilities. We show that the neuronal responses quantitatively fit the prediction errors of optimal predictors derived from our theory, and use that result to deduce the properties of the representations of the past in the auditory system. We conclude that these memory representations have surprisingly long duration (10 stimuli back or more), but keep relatively little detail about this past. Our theory can be applied widely to other sensory systems.
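The idea of a prediction error computed from a limited representation of the recent past can be sketched with a toy predictor for a two-tone oddball sequence. Estimating probability from raw counts in a short window is a deliberately simple stand-in for the paper's information-theoretic predictors:

```python
from collections import deque

def prediction_errors(sequence, memory=10):
    """Prediction error from a low-complexity summary of the recent past.

    The predictor estimates each tone's probability from the counts of the
    last `memory` stimuli only, and scores the observed tone's error as
    1 - p(observed). A toy illustration, not the authors' optimization.
    """
    past = deque(maxlen=memory)   # bounded memory of the stimulation sequence
    errors = []
    for tone in sequence:
        p = past.count(tone) / len(past) if past else 0.5  # uninformed prior
        errors.append(1.0 - p)
        past.append(tone)
    return errors
```

A rare (deviant) tone yields a large error while a frequently repeated (standard) tone yields a small one, mirroring the trial-by-trial fluctuations the authors correlate with neuronal responses.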
Affiliation(s)
- Jonathan Rubin
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem, Israel
- Nachum Ulanovsky
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- Israel Nelken
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem, Israel
- Department of Neurobiology, Institute of Life Sciences, Hebrew University, Jerusalem, Israel
- Naftali Tishby
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem, Israel
- The Benin School of Computer Science and Engineering, Hebrew University, Jerusalem, Israel
43
Westö J, May PJC. Capturing contextual effects in spectro-temporal receptive fields. Hear Res 2016; 339:195-210. [PMID: 27473504 DOI: 10.1016/j.heares.2016.07.012] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/30/2016] [Revised: 06/16/2016] [Accepted: 07/24/2016] [Indexed: 11/25/2022]
Abstract
Spectro-temporal receptive fields (STRFs) are thought to provide descriptive images of the computations performed by neurons along the auditory pathway. However, their validity can be questioned because they rely on a set of assumptions that are probably not fulfilled by real neurons exhibiting contextual effects, that is, nonlinear interactions in the time or frequency dimension that cannot be described with a linear filter. We used a novel approach to investigate how a variety of contextual effects, due to facilitating nonlinear interactions and synaptic depression, affect different STRF models, and if these effects can be captured with a context field (CF). Contextual effects were incorporated in simulated networks of spiking neurons, allowing one to define the true STRFs of the neurons. This, in turn, made it possible to evaluate the performance of each STRF model by comparing the estimations with the true STRFs. We found that currently used STRF models are particularly poor at estimating inhibitory regions. Specifically, contextual effects make estimated STRFs dependent on stimulus density in a contrasting fashion: inhibitory regions are underestimated at lower densities while artificial inhibitory regions emerge at higher densities. The CF was found to provide a solution to this dilemma, but only when it is used together with a generalized linear model. Our results therefore highlight the limitations of the traditional STRF approach and provide useful recipes for how different STRF models and stimuli can be used to arrive at reliable quantifications of neural computations in the presence of contextual effects. The results therefore push the purpose of STRF analysis from simply finding an optimal stimulus toward describing context-dependent computations of neurons along the auditory pathway.
Affiliation(s)
- Johan Westö
- Department of Neuroscience and Biomedical Engineering, Aalto University, FI-00076 Espoo, Finland
- Patrick J C May
- Special Laboratory Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology, D-39118 Magdeburg, Germany
44
Williamson RS, Ahrens MB, Linden JF, Sahani M. Input-Specific Gain Modulation by Local Sensory Context Shapes Cortical and Thalamic Responses to Complex Sounds. Neuron 2016; 91:467-81. [PMID: 27346532 PMCID: PMC4961224 DOI: 10.1016/j.neuron.2016.05.041] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2015] [Revised: 10/25/2015] [Accepted: 05/12/2016] [Indexed: 01/19/2023]
Abstract
Sensory neurons are customarily characterized by one or more linearly weighted receptive fields describing sensitivity in sensory space and time. We show that in auditory cortical and thalamic neurons, the weight of each receptive field element depends on the pattern of sound falling within a local neighborhood surrounding it in time and frequency. Accounting for this change in effective receptive field with spectrotemporal context improves predictions of both cortical and thalamic responses to stationary complex sounds. Although context dependence varies among neurons and across brain areas, there are strong shared qualitative characteristics. In a spectrotemporally rich soundscape, sound elements modulate neuronal responsiveness more effectively when they coincide with sounds at other frequencies, and less effectively when they are preceded by sounds at similar frequencies. This local-context-driven lability in the representation of complex sounds, a modulation of "input-specific gain" rather than "output gain," may be a widespread motif in sensory processing.
Highlights:
- Gain of neuronal responses to sound components varies with immediate acoustic context
- "Contextual gain fields" can be estimated from neuronal responses to complex sounds
- Coincident sound at different frequencies boosts gain in cortex and thalamus
- Preceding sound at similar frequency reduces gain for longer in cortex than thalamus
Affiliation(s)
- Ross S Williamson
- Gatsby Computational Neuroscience Unit, University College London, London W1T 4JG, UK; Centre for Mathematics and Physics in the Life Sciences and Experimental Biology, University College London, London WC1E 6BT, UK
- Misha B Ahrens
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138, USA; Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK
- Jennifer F Linden
- Ear Institute, University College London, London WC1X 8EE, UK; Department of Neuroscience, Physiology and Pharmacology, University College London, London WC1E 6BT, UK
- Maneesh Sahani
- Gatsby Computational Neuroscience Unit, University College London, London W1T 4JG, UK
45
Schoppe O, Harper NS, Willmore BDB, King AJ, Schnupp JWH. Measuring the Performance of Neural Models. Front Comput Neurosci 2016; 10:10. [PMID: 26903851 PMCID: PMC4748266 DOI: 10.3389/fncom.2016.00010] [Citation(s) in RCA: 39] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2015] [Accepted: 01/21/2016] [Indexed: 11/13/2022] Open
Abstract
Good metrics of the performance of a statistical or computational model are essential for model comparison and selection. Here, we address the design of performance metrics for models that aim to predict neural responses to sensory inputs. This is particularly difficult because the responses of sensory neurons are inherently variable, even in response to repeated presentations of identical stimuli. In this situation, standard metrics (such as the correlation coefficient) fail because they do not distinguish between explainable variance (the part of the neural response that is systematically dependent on the stimulus) and response variability (the part of the neural response that is not systematically dependent on the stimulus, and cannot be explained by modeling the stimulus-response relationship). As a result, models which perfectly describe the systematic stimulus-response relationship may appear to perform poorly. Two metrics have previously been proposed which account for this inherent variability: Signal Power Explained (SPE; Sahani and Linden, 2003) and the normalized correlation coefficient (CC_norm; Hsu et al., 2004). Here, we analyze these metrics and show that they are intimately related. However, SPE has no lower bound, and we show that, even for good models, SPE can yield negative values that are difficult to interpret. CC_norm is better behaved in that it is effectively bounded between -1 and 1, and values below zero are very rare in practice and easy to interpret. However, it was hitherto not possible to calculate CC_norm directly; instead, it was estimated using imprecise and laborious resampling techniques. Here, we identify a new approach that can calculate CC_norm quickly and accurately. As a result, we argue that it is now a better choice of metric than SPE to accurately evaluate the performance of neural models.
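The direct calculation described above can be sketched numerically: signal power is estimated from repeated trials, and the raw correlation is then normalized by the best correlation achievable given trial-to-trial variability. Variable names are mine; the estimator follows the standard signal-power formula analyzed in the paper:

```python
import numpy as np

def cc_norm(trials, prediction):
    """Normalized correlation coefficient from repeated trials.

    trials     : (N, T) array of N repeated responses to the same stimulus
    prediction : length-T model prediction

    Signal power SP isolates the stimulus-locked response component:
    SP = (Var(sum over trials) - sum of per-trial variances) / (N (N-1)),
    and CC_norm = Cov(mean response, prediction) / sqrt(SP * Var(prediction)).
    """
    n = trials.shape[0]
    mean_resp = trials.mean(axis=0)
    sp = (np.var(trials.sum(axis=0), ddof=0)
          - trials.var(axis=1, ddof=0).sum()) / (n * (n - 1))
    cov = np.mean((mean_resp - mean_resp.mean())
                  * (prediction - prediction.mean()))
    return cov / np.sqrt(sp * np.var(prediction, ddof=0))
```

A prediction equal to the true underlying signal scores near 1 even when individual trials are noisy, which is exactly the behavior the raw correlation coefficient fails to deliver.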
Affiliation(s)
- Oliver Schoppe
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
- Bio-Inspired Information Processing, Technische Universität München, Garching, Germany
- Nicol S. Harper
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
- Ben D. B. Willmore
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
- Andrew J. King
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
- Jan W. H. Schnupp
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
46
Abstract
Robust representations of sounds with a complex spectrotemporal structure are thought to emerge in hierarchically organized auditory cortex, but the computational advantage of this hierarchy remains unknown. Here, we used computational models to study how such hierarchical structures affect temporal binding in neural networks. We equipped individual units in different types of feedforward networks with local memory mechanisms storing recent inputs and observed how this affected the ability of the networks to process stimuli context dependently. Our findings illustrate that these local memories stack up in hierarchical structures and hence allow network units to exhibit selectivity to spectral sequences longer than the time spans of the local memories. We also illustrate that short-term synaptic plasticity is a potential local memory mechanism within the auditory cortex, and we show that it can bring robustness to context dependence against variation in the temporal rate of stimuli, while introducing nonlinearities to response profiles that are not well captured by standard linear spectrotemporal receptive field models. The results therefore indicate that short-term synaptic plasticity might provide hierarchically structured auditory cortex with computational capabilities important for robust representations of spectrotemporal patterns.
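Short-term synaptic plasticity as a local memory mechanism can be illustrated with a standard Tsodyks-Markram-style depressing synapse: the resource variable carries a trace of recent input, so transmission at each spike depends on the recent past. Parameter values are generic, not fitted to the paper's networks:

```python
import numpy as np

def depressing_synapse(spikes, u=0.4, tau_rec=200.0, dt=1.0):
    """Short-term depression as a local memory of recent input.

    Available resources x recover toward 1 with time constant tau_rec (ms);
    each presynaptic spike consumes a fraction u of them, so the transmitted
    efficacy u*x at spike time reflects how recently the synapse was used.
    """
    x = 1.0
    efficacy = np.zeros(len(spikes))
    for t, spk in enumerate(spikes):
        x += dt * (1.0 - x) / tau_rec      # gradual recovery
        if spk:
            efficacy[t] = u * x            # transmitted amount
            x -= u * x                     # resource depletion
    return efficacy
```

Because the response to a spike depends on the interval since preceding spikes, such a unit is sensitive to temporal context over roughly tau_rec, the "local memory" that the abstract argues stacks up across hierarchical stages.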
Affiliation(s)
- Johan Westö
- Department of Neuroscience and Biomedical Engineering, Aalto University, FI-00076 Espoo, Finland
- Patrick J. C. May
- Special Laboratory Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology, D-39118 Magdeburg, Germany
- Hannu Tiitinen
- Department of Neuroscience and Biomedical Engineering, Aalto University, FI-00076 Espoo, Finland
47
Blackwell JM, Taillefumier TO, Natan RG, Carruthers IM, Magnasco MO, Geffen MN. Stable encoding of sounds over a broad range of statistical parameters in the auditory cortex. Eur J Neurosci 2016; 43:751-64. [PMID: 26663571 PMCID: PMC5021175 DOI: 10.1111/ejn.13144] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2015] [Revised: 11/22/2015] [Accepted: 12/01/2015] [Indexed: 11/29/2022]
Abstract
Natural auditory scenes possess highly structured statistical regularities, which are dictated by the physics of sound production in nature, such as scale-invariance. We recently identified that natural water sounds exhibit a particular type of scale invariance, in which the temporal modulation within spectral bands scales with the centre frequency of the band. Here, we tested how neurons in the mammalian primary auditory cortex encode sounds that exhibit this property, but differ in their statistical parameters. The stimuli varied in spectro-temporal density and cyclo-temporal statistics over several orders of magnitude, corresponding to a range of water-like percepts, from pattering of rain to a slow stream. We recorded neuronal activity in the primary auditory cortex of awake rats presented with these stimuli. The responses of the majority of individual neurons were selective for a subset of stimuli with specific statistics. However, as a neuronal population, the responses were remarkably stable over large changes in stimulus statistics, exhibiting a similar range in firing rate, response strength, variability and information rate, and only minor variation in receptive field parameters. This pattern of neuronal responses suggests a potentially general principle for cortical encoding of complex acoustic scenes: while individual cortical neurons exhibit selectivity for specific statistical features, a neuronal population preserves a constant response structure across a broad range of statistical parameters.
Affiliation(s)
- Jennifer M Blackwell
- Department of Otorhinolaryngology and Head and Neck Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Thibaud O Taillefumier
- Center for Physics and Biology, Rockefeller University, New York, NY, USA
- Lewis-Sigler Institute for Integrative Genomics, Princeton University, Princeton, NJ, USA
- Ryan G Natan
- Department of Otorhinolaryngology and Head and Neck Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Isaac M Carruthers
- Department of Otorhinolaryngology and Head and Neck Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Marcelo O Magnasco
- Center for Physics and Biology, Rockefeller University, New York, NY, USA
- Maria N Geffen
- Department of Otorhinolaryngology and Head and Neck Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Center for Physics and Biology, Rockefeller University, New York, NY, USA
48
Willmore BDB, Schoppe O, King AJ, Schnupp JWH, Harper NS. Incorporating Midbrain Adaptation to Mean Sound Level Improves Models of Auditory Cortical Processing. J Neurosci 2016; 36:280-9. [PMID: 26758822 PMCID: PMC4710761 DOI: 10.1523/jneurosci.2441-15.2016] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2015] [Revised: 11/03/2015] [Accepted: 11/10/2015] [Indexed: 11/21/2022] Open
Abstract
Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear-nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low as it introduces no new free parameters. Incorporating the adaptive coding properties of neurons will likely improve receptive field models in other sensory modalities too.
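The preprocessing stage described in the abstract (per-channel high-pass filtering with frequency-dependent time constants, then half-wave rectification) can be sketched as a running-mean subtraction. The smoothing scheme and time-constant values here are illustrative placeholders, not the fitted IC parameters:

```python
import numpy as np

def ic_adaptation(spectrogram, taus, dt=0.005):
    """Mean-level adaptation stage: subtract a channel-specific running
    mean (tracked with time constant taus[f], in seconds) from each
    frequency channel, then half-wave rectify. The output would feed a
    standard LN model.

    spectrogram : (n_freq, n_time) sound level per channel
    taus        : per-channel adaptation time constants (s), placeholders
    """
    n_f, n_t = spectrogram.shape
    mean_level = spectrogram[:, 0].copy()   # initialize running mean
    out = np.zeros_like(spectrogram)
    alpha = dt / (taus + dt)                # per-channel smoothing factor
    for t in range(n_t):
        mean_level += alpha * (spectrogram[:, t] - mean_level)  # track mean
        out[:, t] = np.maximum(spectrogram[:, t] - mean_level, 0.0)
    return out
```

A sustained step in level therefore produces a transient that decays as the running mean catches up, which is the adaptation-to-mean behavior the stage is meant to replicate.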
Affiliation(s)
- Ben D B Willmore
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, United Kingdom
- Oliver Schoppe
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, United Kingdom
- Bio-Inspired Information Processing, Technische Universität München, 85748 Garching, Germany
- Andrew J King
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, United Kingdom
- Jan W H Schnupp
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, United Kingdom
- Nicol S Harper
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, United Kingdom
49
Rhythmic auditory cortex activity at multiple timescales shapes stimulus-response gain and background firing. J Neurosci 2015; 35:7750-62. [PMID: 25995464 DOI: 10.1523/jneurosci.0268-15.2015] [Citation(s) in RCA: 48] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
The phase of low-frequency network activity in the auditory cortex captures changes in neural excitability, entrains to the temporal structure of natural sounds, and correlates with perceptual performance in acoustic tasks. Although these observations suggest a causal link between network rhythms and perception, it remains unknown how precisely they affect the processes by which neural populations encode sounds. We addressed this question by analyzing neural responses in the auditory cortex of anesthetized rats using stimulus-response models. These models included a parametric dependence on the phase of local field potential rhythms in both stimulus-unrelated background activity and the stimulus-response transfer function. We found that phase-dependent models better reproduced the observed responses than static models, during both stimulation with a series of natural sounds and epochs of silence. This was attributable to two factors: (1) phase-dependent variations in background firing (most prominent for delta, 1-4 Hz); and (2) modulations of response gain that rhythmically amplify and attenuate the responses at specific phases of the rhythm (prominent for frequencies between 2 and 12 Hz). These results provide a quantitative characterization of how slow auditory cortical rhythms shape sound encoding and suggest a differential contribution of network activity at different timescales. In addition, they highlight a putative mechanism that may implement the selective amplification of appropriately timed sound tokens relative to the phase of rhythmic auditory cortex activity.
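The two phase-dependent factors identified above (background firing and stimulus-response gain) can be sketched jointly in a toy model where both are cosine functions of the LFP phase. The cosine parameterization, parameter names, and values are illustrative assumptions, not the fitted model:

```python
import numpy as np

def phase_dependent_rate(drive, phase, b0=1.0, b1=0.8, phi_b=0.0,
                         g1=0.5, phi_g=1.0):
    """Sketch of a phase-dependent stimulus-response model.

    drive : output of some static stimulus filter (e.g. an STRF) over time
    phase : LFP phase (radians) at the same time points

    rate(t) = background(phase) + gain(phase) * drive(t), rectified, with
    both background and gain modulated sinusoidally by phase.
    """
    background = b0 + b1 * np.cos(phase - phi_b)   # phase-dependent baseline
    gain = 1.0 + g1 * np.cos(phase - phi_g)        # phase-dependent input gain
    return np.maximum(background + gain * drive, 0.0)
```

The same stimulus drive thus yields different firing rates depending on the rhythm's phase, and firing varies with phase even during silence, the two effects the models in the paper quantify.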
50
Abstract
Sensory processing involves identification of stimulus features, but also integration with the surrounding sensory and cognitive context. Previous work in animals and humans has shown fine-scale sensitivity to context in the form of learned knowledge about the statistics of the sensory environment, including relative probabilities of discrete units in a stream of sequential auditory input. These statistics are a defining characteristic of one of the most important sequential signals humans encounter: speech. For speech, extensive exposure to a language tunes listeners to the statistics of sound sequences. To address how speech sequence statistics are neurally encoded, we used high-resolution direct cortical recordings from human lateral superior temporal cortex as subjects listened to words and nonwords with varying transition probabilities between sound segments. In addition to their sensitivity to acoustic features (including contextual features, such as coarticulation), we found that neural responses dynamically encoded the language-level probability of both preceding and upcoming speech sounds. Transition probability first negatively modulated neural responses, followed by positive modulation of neural responses, consistent with coordinated predictive and retrospective recognition processes, respectively. Furthermore, transition probability encoding was different for real English words compared with nonwords, providing evidence for online interactions with high-order linguistic knowledge. These results demonstrate that sensory processing of deeply learned stimuli involves integrating physical stimulus features with their contextual sequential structure. Despite not being consciously aware of phoneme sequence statistics, listeners use this information to process spoken input and to link low-level acoustic representations with linguistic information about word identity and meaning.