1. Chen C, Cruces-Solís H, Ertman A, de Hoz L. Subcortical coding of predictable and unsupervised sound-context associations. Curr Res Neurobiol 2023; 5:100110. PMID: 38020811; PMCID: PMC10663128; DOI: 10.1016/j.crneur.2023.100110.
Abstract
Our environment is made of a myriad of stimuli present in combinations often patterned in predictable ways. For example, there is a strong association between where we are and the sounds we hear. Like many environmental patterns, sound-context associations are learned implicitly, in an unsupervised manner, and are highly informative and predictive of normality. Yet, we know little about where and how unsupervised sound-context associations are coded in the brain. Here we measured plasticity in the auditory midbrain of mice living over days in an enriched task-less environment in which entering a context triggered sound with different degrees of predictability. Plasticity in the auditory midbrain, a hub of auditory input and multimodal feedback, developed over days and reflected learning of contextual information in a manner that depended on the predictability of the sound-context association and not on reinforcement. Plasticity manifested as an increase in response gain and tuning shift that correlated with a general increase in neuronal frequency discrimination. Thus, the auditory midbrain is sensitive to unsupervised predictable sound-context associations, revealing a subcortical engagement in the detection of contextual sounds. By increasing frequency resolution, this detection might facilitate the processing of behaviorally relevant foreground information described to occur in cortical auditory structures.
Affiliation(s)
- Chi Chen: Department of Neurogenetics, Max Planck Institute for Multidisciplinary Sciences, Göttingen, Germany; International Max Planck Research School for Neurosciences, Göttingen, Germany; Göttingen Graduate School of Neurosciences and Molecular Biosciences, Germany; Charité Medical University, Neuroscience Research Center, Berlin, Germany
- Hugo Cruces-Solís: Department of Neurogenetics, Max Planck Institute for Multidisciplinary Sciences, Göttingen, Germany; International Max Planck Research School for Neurosciences, Göttingen, Germany; Göttingen Graduate School of Neurosciences and Molecular Biosciences, Germany
- Alexandra Ertman: Charité Medical University, Neuroscience Research Center, Berlin, Germany; International Graduate Program Medical Neurosciences, Charité Medical University, Berlin, Germany
- Livia de Hoz: Department of Neurogenetics, Max Planck Institute for Multidisciplinary Sciences, Göttingen, Germany; Charité Medical University, Neuroscience Research Center, Berlin, Germany; Bernstein Center for Computational Neuroscience, Berlin, Germany
2. Angeloni CF, Młynarski W, Piasini E, Williams AM, Wood KC, Garami L, Hermundstad AM, Geffen MN. Dynamics of cortical contrast adaptation predict perception of signals in noise. Nat Commun 2023; 14:4817. PMID: 37558677; PMCID: PMC10412650; DOI: 10.1038/s41467-023-40477-6.
Abstract
Neurons throughout the sensory pathway adapt their responses depending on the statistical structure of the sensory environment. Contrast gain control is a form of adaptation in the auditory cortex, but it is unclear whether the dynamics of gain control reflect efficient adaptation, and whether they shape behavioral perception. Here, we trained mice to detect a target presented in background noise shortly after a change in the contrast of the background. The observed changes in cortical gain and behavioral detection followed the dynamics of a normative model of efficient contrast gain control; specifically, target detection and sensitivity improved slowly in low contrast, but degraded rapidly in high contrast. Auditory cortex was required for this task, and cortical responses were not only similarly affected by contrast but predicted variability in behavioral performance. Combined, our results demonstrate that dynamic gain adaptation supports efficient coding in auditory cortex and predicts the perception of sounds in noise.
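The asymmetry described here (detection improving slowly after a drop to low contrast, degrading rapidly after a jump to high contrast) can be illustrated with a toy first-order gain-adaptation model. This is only a sketch of the qualitative dynamics: the inverse-contrast gain target and the two time constants below are illustrative assumptions, not the paper's fitted normative model.

```python
import numpy as np

def simulate_gain(contrast, tau_up=50.0, tau_down=5.0, k=1.0):
    """Toy gain-adaptation model: gain relaxes toward a target inversely
    related to stimulus contrast. Adaptation is slow when gain must rise
    (after a switch to low contrast) and fast when it must fall (after a
    switch to high contrast). All constants are illustrative."""
    gain = np.empty_like(contrast)
    g = k / contrast[0]
    for t, c in enumerate(contrast):
        target = k / c                       # target gain ~ 1/contrast
        tau = tau_up if target > g else tau_down
        g += (target - g) / tau              # first-order relaxation
        gain[t] = g
    return gain

# High -> low -> high contrast switches
contrast = np.concatenate([np.full(200, 2.0), np.full(200, 0.5), np.full(200, 2.0)])
gain = simulate_gain(contrast)
```

With these parameters the gain needs most of the low-contrast epoch to approach its target, but collapses within a few steps of the return to high contrast, mirroring the reported asymmetry.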
Affiliation(s)
- Christopher F Angeloni: Psychology Graduate Group, University of Pennsylvania, Philadelphia, PA, USA; Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Wiktor Młynarski: Faculty of Biology, Ludwig Maximilian University of Munich, Munich, Germany; Bernstein Center for Computational Neuroscience, Munich, Germany
- Eugenio Piasini: International School for Advanced Studies (SISSA), Trieste, Italy
- Aaron M Williams: Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA; Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, PA, USA
- Katherine C Wood: Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Linda Garami: Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Ann M Hermundstad: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Maria N Geffen: Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA; Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, PA, USA; Department of Neuroscience, Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA
3. Auksztulewicz R, Rajendran VG, Peng F, Schnupp JWH, Harper NS. Omission responses in local field potentials in rat auditory cortex. BMC Biol 2023; 21:130. PMID: 37254137; DOI: 10.1186/s12915-023-01592-4.
Abstract
BACKGROUND: Non-invasive recordings of gross neural activity in humans often show responses to omitted stimuli in steady trains of identical stimuli. This has been taken as evidence for the neural coding of prediction or prediction error. However, evidence for such omission responses from invasive recordings of cellular-scale responses in animal models is scarce. Here, we sought to characterise omission responses using extracellular recordings in the auditory cortex of anaesthetised rats. We profiled omission responses across local field potentials (LFP), analogue multiunit activity (AMUA), and single/multi-unit spiking activity, using stimuli that were fixed-rate trains of acoustic noise bursts where 5% of bursts were randomly omitted.
RESULTS: Significant omission responses were observed in LFP and AMUA signals, but not in spiking activity. These omission responses had a lower amplitude and longer latency than burst-evoked sensory responses, and omission response amplitude increased as a function of the number of preceding bursts.
CONCLUSIONS: Together, our findings show that omission responses are most robustly observed in LFP and AMUA signals (relative to spiking activity). This has implications for models of cortical processing that require many neurons to encode prediction errors in their spike output.
Affiliation(s)
- Ryszard Auksztulewicz: Center for Cognitive Neuroscience Berlin, Free University Berlin, Berlin, Germany; Department of Neuroscience, City University of Hong Kong, Hong Kong S.A.R.
- Fei Peng: Department of Neuroscience, City University of Hong Kong, Hong Kong S.A.R.
4. Pennington JR, David SV. A convolutional neural network provides a generalizable model of natural sound coding by neural populations in auditory cortex. PLoS Comput Biol 2023; 19:e1011110. PMID: 37146065; DOI: 10.1371/journal.pcbi.1011110.
Abstract
Convolutional neural networks (CNNs) can provide powerful and flexible models of neural sensory processing. However, the utility of CNNs in studying the auditory system has been limited by their requirement for large datasets and the complex response properties of single auditory neurons. To address these limitations, we developed a population encoding model: a CNN that simultaneously predicts activity of several hundred neurons recorded during presentation of a large set of natural sounds. This approach defines a shared spectro-temporal space and pools statistical power across neurons. Population models of varying architecture performed consistently and substantially better than traditional linear-nonlinear models on data from primary and non-primary auditory cortex. Moreover, population models were highly generalizable. The output layer of a model pre-trained on one population of neurons could be fit to data from novel single units, achieving performance equivalent to that of neurons in the original fit data. This ability to generalize suggests that population encoding models capture a complete representational space across neurons in an auditory cortical field.
Affiliation(s)
- Jacob R Pennington: Washington State University, Vancouver, Washington, United States of America
- Stephen V David: Oregon Hearing Research Center, Oregon Health and Science University, Oregon, United States of America
5. Willmore BDB, King AJ. Adaptation in auditory processing. Physiol Rev 2023; 103:1025-1058. PMID: 36049112; PMCID: PMC9829473; DOI: 10.1152/physrev.00011.2022.
Abstract
Adaptation is an essential feature of auditory neurons, which reduces their responses to unchanging and recurring sounds and allows their response properties to be matched to the constantly changing statistics of sounds that reach the ears. As a consequence, processing in the auditory system highlights novel or unpredictable sounds and produces an efficient representation of the vast range of sounds that animals can perceive by continually adjusting the sensitivity and, to a lesser extent, the tuning properties of neurons to the most commonly encountered stimulus values. Together with attentional modulation, adaptation to sound statistics also helps to generate neural representations of sound that are tolerant to background noise and therefore plays a vital role in auditory scene analysis. In this review, we consider the diverse forms of adaptation that are found in the auditory system in terms of the processing levels at which they arise, the underlying neural mechanisms, and their impact on neural coding and perception. We also ask what the dynamics of adaptation, which can occur over multiple timescales, reveal about the statistical properties of the environment. Finally, we examine how adaptation to sound statistics is influenced by learning and experience and changes as a result of aging and hearing loss.
Affiliation(s)
- Ben D. B. Willmore: Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J. King: Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
6. Sadagopan S, Kar M, Parida S. Quantitative models of auditory cortical processing. Hear Res 2023; 429:108697. PMID: 36696724; PMCID: PMC9928778; DOI: 10.1016/j.heares.2023.108697.
Abstract
To generate insight from experimental data, it is critical to understand the inter-relationships between individual data points and place them in context within a structured framework. Quantitative modeling can provide the scaffolding for such an endeavor. Our main objective in this review is to provide a primer on the range of quantitative tools available to experimental auditory neuroscientists. Quantitative modeling is advantageous because it can provide a compact summary of observed data, make underlying assumptions explicit, and generate predictions for future experiments. Quantitative models may be developed to characterize or fit observed data, to test theories of how a task may be solved by neural circuits, to determine how observed biophysical details might contribute to measured activity patterns, or to predict how an experimental manipulation would affect neural activity. In complexity, quantitative models can range from those that are highly biophysically realistic and that include detailed simulations at the level of individual synapses, to those that use abstract and simplified neuron models to simulate entire networks. Here, we survey the landscape of recently developed models of auditory cortical processing, highlighting a small selection of models to demonstrate how they help generate insight into the mechanisms of auditory processing. We discuss examples ranging from models that use details of synaptic properties to explain the temporal pattern of cortical responses to those that use modern deep neural networks to gain insight into human fMRI data. We conclude by discussing a biologically realistic and interpretable model that our laboratory has developed to explore aspects of vocalization categorization in the auditory pathway.
Affiliation(s)
- Srivatsun Sadagopan: Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
- Manaswini Kar: Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- Satyabrata Parida: Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
7. Mischler G, Keshishian M, Bickel S, Mehta AD, Mesgarani N. Deep neural networks effectively model neural adaptation to changing background noise and suggest nonlinear noise filtering methods in auditory cortex. Neuroimage 2023; 266:119819. PMID: 36529203; PMCID: PMC10510744; DOI: 10.1016/j.neuroimage.2022.119819.
Abstract
The human auditory system displays a robust capacity to adapt to sudden changes in background noise, allowing for continuous speech comprehension despite changes in background environments. However, despite comprehensive studies characterizing this ability, the computations that underlie this process are not well understood. The first step towards understanding a complex system is to propose a suitable model, but the classical and easily interpreted model for the auditory system, the spectro-temporal receptive field (STRF), cannot match the nonlinear neural dynamics involved in noise adaptation. Here, we utilize a deep neural network (DNN) to model neural adaptation to noise, illustrating its effectiveness at reproducing the complex dynamics at the levels of both individual electrodes and the cortical population. By closely inspecting the model's STRF-like computations over time, we find that the model alters both the gain and shape of its receptive field when adapting to a sudden noise change. We show that the DNN model's gain changes allow it to perform adaptive gain control, while the spectro-temporal change creates noise filtering by altering the inhibitory region of the model's receptive field. Further, we find that models of electrodes in nonprimary auditory cortex also exhibit noise filtering changes in their excitatory regions, suggesting differences in noise filtering mechanisms along the cortical hierarchy. These findings demonstrate the capability of deep neural networks to model complex neural adaptation and offer new hypotheses about the computations the auditory cortex performs to enable noise-robust speech perception in real-world, dynamic environments.
Affiliation(s)
- Gavin Mischler: Mortimer B. Zuckerman Mind Brain Behavior, Columbia University, New York, United States; Department of Electrical Engineering, Columbia University, New York, United States
- Menoua Keshishian: Mortimer B. Zuckerman Mind Brain Behavior, Columbia University, New York, United States; Department of Electrical Engineering, Columbia University, New York, United States
- Stephan Bickel: Hofstra Northwell School of Medicine, Manhasset, New York, United States
- Ashesh D Mehta: Hofstra Northwell School of Medicine, Manhasset, New York, United States
- Nima Mesgarani: Mortimer B. Zuckerman Mind Brain Behavior, Columbia University, New York, United States; Department of Electrical Engineering, Columbia University, New York, United States
8. Cody PA, Tzounopoulos T. Neuromodulatory Mechanisms Underlying Contrast Gain Control in Mouse Auditory Cortex. J Neurosci 2022; 42:5564-5579. PMID: 35998293; PMCID: PMC9295830; DOI: 10.1523/jneurosci.2054-21.2022.
Abstract
Neural adaptation enables the brain to efficiently process sensory signals despite large changes in background noise. Previous studies have established that recent background spectro- or spatio-temporal statistics scale neural responses to sensory stimuli via a canonical normalization computation, which is conserved among species and sensory domains. In the auditory pathway, one major form of normalization, termed contrast gain control, presents as decreasing instantaneous firing-rate gain, the slope of the neural input-output relationship, with increasing variability of background sound levels (contrast) across time and frequency. Despite this gain rescaling, mean firing-rates in auditory cortex become invariant to sound level contrast, termed contrast invariance. The underlying neuromodulatory mechanisms of these two phenomena remain unknown. To study these mechanisms in male and female mice, we used a 2-photon calcium imaging preparation in layer 2/3 neurons of primary auditory cortex (A1), along with pharmacological and genetic KO approaches. We found that neuromodulatory cortical synaptic zinc signaling is necessary for contrast gain control but not contrast invariance in mouse A1.
SIGNIFICANCE STATEMENT: When sound levels in the acoustic environment become more variable across time and frequency, the brain decreases response gain to maintain dynamic range and thus stimulus discriminability. This gain adaptation accounts for changes in perceptual judgments in humans and mice; however, the underlying neuromodulatory mechanisms remain poorly understood. Here, we report context-dependent neuromodulatory effects of synaptic zinc that are necessary for contrast gain control in A1. Understanding context-specific neuromodulatory mechanisms, such as contrast gain control, provides insight into A1 cortical mechanisms of adaptation and also into fundamental aspects of perceptual changes that rely on gain modulation, such as attention.
Affiliation(s)
- Patrick A Cody: Pittsburgh Hearing Research Center, Department of Otolaryngology, University of Pittsburgh, Pittsburgh, Pennsylvania 15261; Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania 15260; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Thanos Tzounopoulos: Pittsburgh Hearing Research Center, Department of Otolaryngology, University of Pittsburgh, Pittsburgh, Pennsylvania 15261; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
9. Knipper M, Singer W, Schwabe K, Hagberg GE, Li Hegner Y, Rüttiger L, Braun C, Land R. Disturbed Balance of Inhibitory Signaling Links Hearing Loss and Cognition. Front Neural Circuits 2022; 15:785603. PMID: 35069123; PMCID: PMC8770933; DOI: 10.3389/fncir.2021.785603.
Abstract
Neuronal hyperexcitability in the central auditory pathway linked to reduced inhibitory activity is associated with numerous forms of hearing loss, including noise damage, age-dependent hearing loss, and deafness, as well as tinnitus or auditory processing deficits in autism spectrum disorder (ASD). In most cases, the reduced central inhibitory activity and the accompanying hyperexcitability are interpreted as an active compensatory response to the absence of synaptic activity, linked to increased central neural gain control (increased output activity relative to reduced input). We here suggest that hyperexcitability could also be related to an immaturity or impairment of tonic inhibitory strength that typically develops in an activity-dependent process in the ascending auditory pathway with auditory experience. In these cases, high-SR auditory nerve fibers, which are critical for the shortest latencies and lowest sound thresholds, may have either not matured (possibly in congenital deafness or autism) or are dysfunctional (possibly after sudden, stressful auditory trauma or age-dependent hearing loss linked with cognitive decline). Fast auditory processing deficits can occur despite maintained basal hearing. In that case, tonic inhibitory strength is reduced in ascending auditory nuclei, and fast inhibitory parvalbumin positive interneuron (PV-IN) dendrites are diminished in auditory and frontal brain regions. This leads to deficits in central neural gain control linked to hippocampal LTP/LTD deficiencies, cognitive deficits, and unbalanced extra-hypothalamic stress control. Under these conditions, a diminished inhibitory strength may weaken local neuronal coupling to homeostatic vascular responses required for the metabolic support of auditory adjustment processes.
We emphasize the need to distinguish these two states of excitatory/inhibitory imbalance in hearing disorders: (i) under conditions of preserved fast auditory processing and sustained tonic inhibitory strength, an excitatory/inhibitory imbalance following auditory deprivation can maintain precise hearing through a memory-linked, transient disinhibition that leads to enhanced spiking fidelity (central neural gain⇑); (ii) under conditions of critically diminished fast auditory processing and reduced tonic inhibitory strength, hyperexcitability can be part of an increased synchronization over a broader frequency range, linked to reduced spiking reliability (central neural gain⇓). This latter stage mutually reinforces diminished metabolic support for auditory adjustment processes, increasing the risks for canonical dementia syndromes.
Affiliation(s)
- Marlies Knipper: Department of Otolaryngology, Head and Neck Surgery, Tübingen Hearing Research Center (THRC), Molecular Physiology of Hearing, University of Tübingen, Tübingen, Germany
- Wibke Singer: Department of Otolaryngology, Head and Neck Surgery, Tübingen Hearing Research Center (THRC), Molecular Physiology of Hearing, University of Tübingen, Tübingen, Germany
- Kerstin Schwabe: Experimental Neurosurgery, Department of Neurosurgery, Hannover Medical School, Hanover, Germany
- Gisela E. Hagberg: Department of Biomedical Magnetic Resonance, University Hospital Tübingen (UKT), Tübingen, Germany; High-Field Magnetic Resonance, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Yiwen Li Hegner: MEG Center, University of Tübingen, Tübingen, Germany; Center of Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Lukas Rüttiger: Department of Otolaryngology, Head and Neck Surgery, Tübingen Hearing Research Center (THRC), Molecular Physiology of Hearing, University of Tübingen, Tübingen, Germany
- Christoph Braun: MEG Center, University of Tübingen, Tübingen, Germany; Center of Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Rüdiger Land: Department of Experimental Otology, Institute for Audioneurotechnology, Hannover Medical School, Hanover, Germany
10. Onder H. The potential significance of reversed stapes reflex in clinical practice in idiopathic intracranial hypertension. Ann Indian Acad Neurol 2022; 25:214-217. PMID: 35693648; PMCID: PMC9175437; DOI: 10.4103/aian.aian_379_21.
Abstract
Background: The stapes reflex test evaluates the involuntary contraction of the stapedius muscle in response to a high-intensity sound stimulus. The formation of this reflex requires intact function of the 7th nerve, brain stem, 8th nerve, and middle ear. Due to its ease of administration and the information it yields, the stapedial reflex is considered one of the most powerful differential diagnostic audiological procedures. Numerous studies have remarked on the fluid communication between the intracochlear and intracranial spaces through the cochlear aqueduct. Currently, the potential significance of a noninvasive audiological technique in the discrimination of raised intracranial pressure constitutes a crucial topic of interest.
Methods: We performed detailed pre- and post-lumbar puncture (LP) otorhinolaryngological investigations, including detailed inspection, audiometric testing, tympanometry, and stapedial reflex testing, in a total of four consecutive patients with idiopathic intracranial hypertension (IIH).
Results: We found that the stapedial reflex was initially bilaterally absent in two of the patients. However, repeat stapedial reflex testing after LP showed reversal of the reflex responses in both of these patients.
Conclusions: We suggest some hypotheses and propose some clinical applications. Future studies focusing on the potential utility of this reflex in the monitoring of IIH may provide crucial perspectives.
11. Homma NY, Bajo VM. Lemniscal Corticothalamic Feedback in Auditory Scene Analysis. Front Neurosci 2021; 15:723893. PMID: 34489635; PMCID: PMC8417129; DOI: 10.3389/fnins.2021.723893.
Abstract
Sound information is transmitted from the ear to central auditory stations of the brain via several nuclei. In addition to these ascending pathways, there exist descending projections that can influence information processing at each of these nuclei. A major descending pathway in the auditory system is the feedback projection from layer VI of the primary auditory cortex (A1) to the ventral division of the medial geniculate body (MGBv) in the thalamus. The corticothalamic axons have small glutamatergic terminals that can modulate thalamic processing and thalamocortical information transmission. Corticothalamic neurons also provide input to GABAergic neurons of the thalamic reticular nucleus (TRN), which receives collaterals from the ascending thalamic axons. The balance of corticothalamic and TRN inputs has been shown to refine frequency tuning, firing patterns, and gating of MGBv neurons. Therefore, the thalamus is not merely a relay stage in the chain of auditory nuclei but participates in complex aspects of sound processing that include top-down modulations. In this review, we aim (i) to examine how lemniscal corticothalamic feedback modulates responses in MGBv neurons, and (ii) to explore how this feedback contributes to auditory scene analysis, particularly frequency and harmonic perception. Finally, we discuss potential implications of the role of corticothalamic feedback in music and speech perception, where precise spectral and temporal processing is essential.
Affiliation(s)
- Natsumi Y. Homma: Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA, United States; Coleman Memorial Laboratory, Department of Otolaryngology – Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, United States
- Victoria M. Bajo: Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
12. Hosseini M, Rodriguez G, Guo H, Lim HH, Plourde E. The effect of input noises on the activity of auditory neurons using GLM-based metrics. J Neural Eng 2021; 18. PMID: 33626516; DOI: 10.1088/1741-2552/abe979.
Abstract
CONTEXT The auditory system is extremely efficient in extracting auditory information in the presence of background noise. However, people with auditory implants have a hard time understanding speech in noisy conditions. Understanding the mechanisms of perception in noise could lead to better stimulation or preprocessing strategies for such implants. OBJECTIVE The neural mechanisms related to the processing of background noise, especially in the inferior colliculus (IC) where the auditory midbrain implant is located, are still not well understood. We thus wish to investigate if there is a difference in the activity of neurons in the IC when presenting noisy vocalizations with different types of noise (stationary vs. non-stationary), input signal-to-noise ratios (SNR) and signal levels. APPROACH We developed novel metrics based on a generalized linear model (GLM) to investigate the effect of a given input noise on neural activity. We used these metrics to analyze neural data recorded from the IC in ketamine-anesthetized female Hartley guinea pigs while presenting noisy vocalizations. MAIN RESULTS We found that non-stationary noise clearly contributes to the multi-unit neural activity in the IC by causing excitation, regardless of the SNR, input level or vocalization type. However, when presenting white or natural stationary noises, a great diversity of responses was observed for the different conditions, where the multi-unit activity of some sites was affected by the presence of noise and the activity of others was not. SIGNIFICANCE The GLM-based metrics allowed the identification of a clear distinction between the effect of white or natural stationary noises and that of non-stationary noise on the multi-unit activity in the IC. This had not been observed before and indicates that the so-called noise invariance in the IC is dependent on the input noisy conditions. 
This could suggest different preprocessing or stimulation approaches for auditory midbrain implants depending on the noisy conditions.
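The paper's actual GLM-based metrics are defined in the article itself; as a loose illustration of the general approach, the sketch below fits a Poisson GLM to synthetic spike counts with separate vocalization and noise regressors, and reads the fitted noise weight as a crude "noise contribution" measure. All variable names and parameter values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-site data: spike counts driven by a vocalization
# envelope plus a background-noise envelope (both hypothetical regressors).
T = 2000
stim = rng.random(T)                      # vocalization envelope
noise = rng.random(T)                     # noise envelope
X = np.column_stack([np.ones(T), stim, noise])
true_w = np.array([-1.0, 1.2, 0.8])       # bias, stim and noise weights
y = rng.poisson(np.exp(X @ true_w))       # Poisson spike counts

# Fit the Poisson GLM by gradient ascent on its (concave) log-likelihood.
w = np.zeros(3)
for _ in range(2000):
    mu = np.exp(X @ w)                    # predicted rate
    w += 1e-4 * (X.T @ (y - mu))          # log-likelihood gradient

# A GLM-based metric in spirit: how strongly the noise regressor drives
# the fitted activity, relative to the vocalization regressor.
noise_contribution = w[2] / w[1]
```

A positive fitted noise weight corresponds to noise-driven excitation; a weight near zero would indicate a noise-invariant site.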
Collapse
Affiliation(s)
- Maryam Hosseini
- Electrical engineering, Université de Sherbrooke, 2500 Boulevard de l'Université, Sherbrooke, Quebec, J1K 2R1, CANADA
| | - Gerardo Rodriguez
- Biomedical engineering, University of Minnesota, 312 Church St SE, Minneapolis, Minnesota, 55455, UNITED STATES
| | - Hongsun Guo
- Biomedical engineering, University of Minnesota, 312 Church St SE, Minneapolis, Minnesota, 55455, UNITED STATES
| | - Hubert H Lim
- Department of Biomedical Engineering, University of Minnesota, 7-105 Hasselmo Hall, 312 Church Street SE, Minneapolis, Minnesota, 55455, UNITED STATES
| | - Eric Plourde
- Electrical engineering, Université de Sherbrooke, 2500 Boulevard de l'Université, Sherbrooke, Quebec, J1K 2R1, CANADA
| |
Collapse
|
13
|
Li L, Rehr R, Bruns P, Gerkmann T, Röder B. A Survey on Probabilistic Models in Human Perception and Machines. Front Robot AI 2021; 7:85. [PMID: 33501252 PMCID: PMC7805657 DOI: 10.3389/frobt.2020.00085] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2019] [Accepted: 05/29/2020] [Indexed: 11/29/2022] Open
Abstract
Extracting information from noisy signals is of fundamental importance for both biological and artificial perceptual systems. To provide tractable solutions to this challenge, the fields of human perception and machine signal processing (SP) have developed powerful computational models, including Bayesian probabilistic models. However, little true integration between these fields exists in their applications of the probabilistic models for solving analogous problems, such as noise reduction, signal enhancement, and source separation. In this mini review, we briefly introduce and compare selective applications of probabilistic models in machine SP and human psychophysics. We focus on audio and audio-visual processing, using examples of speech enhancement, automatic speech recognition, audio-visual cue integration, source separation, and causal inference to illustrate the basic principles of the probabilistic approach. Our goal is to identify commonalities between probabilistic models addressing brain processes and those aiming at building intelligent machines. These commonalities could constitute the closest points for interdisciplinary convergence.
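The textbook probabilistic model behind the audio-visual cue-integration work this review surveys can be stated in a few lines: with Gaussian likelihoods, the optimal fused estimate weights each cue by its reliability (inverse variance). A minimal sketch, with illustrative numbers:

```python
# Reliability-weighted cue integration: with Gaussian likelihoods, the
# optimal fused estimate weights each cue by its inverse variance, and
# the fused variance is smaller than either cue's alone.
def integrate(mu_a, var_a, mu_v, var_v):
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    mu = w_a * mu_a + (1 - w_a) * mu_v
    var = 1 / (1 / var_a + 1 / var_v)
    return mu, var

# The auditory location estimate is noisier than the visual one, so the
# fused percept is pulled toward vision (the ventriloquism effect).
mu, var = integrate(mu_a=10.0, var_a=4.0, mu_v=2.0, var_v=1.0)
print(mu, var)  # → 3.6 0.8
```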
Collapse
Affiliation(s)
- Lux Li
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
| | - Robert Rehr
- Signal Processing (SP), Department of Informatics, University of Hamburg, Hamburg, Germany
| | - Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
| | - Timo Gerkmann
- Signal Processing (SP), Department of Informatics, University of Hamburg, Hamburg, Germany
| | - Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
| |
Collapse
|
14
|
Pennington JR, David SV. Complementary Effects of Adaptation and Gain Control on Sound Encoding in Primary Auditory Cortex. eNeuro 2020; 7:ENEURO.0205-20.2020. [PMID: 33109632 PMCID: PMC7675144 DOI: 10.1523/eneuro.0205-20.2020] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2020] [Revised: 08/15/2020] [Accepted: 09/05/2020] [Indexed: 11/24/2022] Open
Abstract
An important step toward understanding how the brain represents complex natural sounds is to develop accurate models of auditory coding by single neurons. A commonly used model is the linear-nonlinear spectro-temporal receptive field (STRF; LN model). The LN model accounts for many features of auditory tuning, but it cannot account for long-lasting effects of sensory context on sound-evoked activity. Two mechanisms that may support these contextual effects are short-term plasticity (STP) and contrast-dependent gain control (GC), which have inspired expanded versions of the LN model. Both models improve performance over the LN model, but they have never been compared directly. Thus, it is unclear whether they account for distinct processes or describe one phenomenon in different ways. To address this question, we recorded activity of neurons in primary auditory cortex (A1) of awake ferrets during presentation of natural sounds. We then fit models incorporating one nonlinear mechanism (GC or STP) or both (GC+STP) using this single dataset, and measured the correlation between the models' predictions and the recorded neural activity. Both the STP and GC models performed significantly better than the LN model, but the GC+STP model outperformed both individual models. We also quantified the equivalence of STP and GC model predictions and found only modest similarity. Consistent results were observed for a dataset collected in clean and noisy acoustic contexts. These results establish general methods for evaluating the equivalence of arbitrarily complex encoding models and suggest that the STP and GC models describe complementary processes in the auditory system.
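The LN model named above has a compact form: a linear spectro-temporal filter applied to the recent stimulus history, followed by a static nonlinearity. The sketch below implements that baseline and one simple reading of a gain-control variant; the STRF, the sigmoid, and the contrast-dependent gain are illustrative stand-ins, not the paper's fitted GC or STP models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy spectrogram (frequency channels x time bins) and a random STRF.
n_freq, n_hist, T = 8, 10, 500
spec = rng.random((n_freq, T))
strf = 0.3 * rng.standard_normal((n_freq, n_hist))

def ln_response(spec, strf, gain=1.0):
    """LN model: linear spectro-temporal filtering of the recent stimulus
    history, followed by a static sigmoid nonlinearity."""
    n_hist = strf.shape[1]
    lin = np.zeros(spec.shape[1])
    for t in range(n_hist, spec.shape[1]):
        lin[t] = np.sum(strf * spec[:, t - n_hist:t])
    return 1.0 / (1.0 + np.exp(-gain * lin))

# One simple reading of contrast gain control: scale the nonlinearity's
# gain by the inverse of the stimulus contrast, so high-contrast input
# produces flatter responses.
contrast = spec.std() / spec.mean()
r_ln = ln_response(spec, strf)                        # plain LN prediction
r_gc = ln_response(spec, strf, gain=1.0 / contrast)   # GC-modulated variant
```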
Collapse
Affiliation(s)
- Jacob R Pennington
- Department of Mathematics, Washington State University, Vancouver, WA, 98686
| | - Stephen V David
- Department of Otolaryngology, Oregon Health and Science University, Portland, OR, 97239
| |
Collapse
|
15
|
Cooke JE, Kahn MC, Mann EO, King AJ, Schnupp JWH, Willmore BDB. Contrast gain control occurs independently of both parvalbumin-positive interneuron activity and shunting inhibition in auditory cortex. J Neurophysiol 2020; 123:1536-1551. [PMID: 32186432 PMCID: PMC7191518 DOI: 10.1152/jn.00587.2019] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2019] [Revised: 03/16/2020] [Accepted: 03/18/2020] [Indexed: 12/31/2022] Open
Abstract
Contrast gain control is the systematic adjustment of neuronal gain in response to the contrast of sensory input. It is widely observed in sensory cortical areas and has been proposed to be a canonical neuronal computation. Here, we investigated whether shunting inhibition from parvalbumin-positive interneurons, a mechanism involved in gain control in visual cortex, also underlies contrast gain control in auditory cortex. First, we performed extracellular recordings in the auditory cortex of anesthetized male mice and optogenetically manipulated the activity of parvalbumin-positive interneurons while varying the contrast of the sensory input. We found that both activation and suppression of parvalbumin interneuron activity altered the overall gain of cortical neurons. However, despite these changes in overall gain, we found that manipulating parvalbumin interneuron activity did not alter the strength of contrast gain control in auditory cortex. Furthermore, parvalbumin-positive interneurons did not show increases in activity in response to high-contrast stimulation, which would be expected if they drive contrast gain control. Finally, we performed in vivo whole-cell recordings in auditory cortical neurons during high- and low-contrast stimulation and found that no increase in membrane conductance was observed during high-contrast stimulation. Taken together, these findings indicate that while parvalbumin-positive interneuron activity modulates the overall gain of auditory cortical responses, other mechanisms are primarily responsible for contrast gain control in this cortical area. NEW & NOTEWORTHY We investigated whether contrast gain control is mediated by shunting inhibition from parvalbumin-positive interneurons in auditory cortex. We performed extracellular and intracellular recordings in mouse auditory cortex while presenting sensory stimuli with varying contrasts and manipulated parvalbumin-positive interneuron activity using optogenetics.
We show that while parvalbumin-positive interneuron activity modulates the gain of cortical responses, this activity is not the primary mechanism for contrast gain control in auditory cortex.
Collapse
Affiliation(s)
- James E Cooke
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- University College London, London, United Kingdom
| | - Martin C Kahn
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
| | - Edward O Mann
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
| | - Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
| | - Jan W H Schnupp
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Department of Biomedical Sciences, City University of Hong Kong, Hong Kong
| | - Ben D B Willmore
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
| |
Collapse
|
16
|
Rajendran VG, Harper NS, Schnupp JWH. Auditory cortical representation of music favours the perceived beat. ROYAL SOCIETY OPEN SCIENCE 2020; 7:191194. [PMID: 32269783 PMCID: PMC7137933 DOI: 10.1098/rsos.191194] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/11/2019] [Accepted: 02/03/2020] [Indexed: 06/02/2023]
Abstract
Previous research has shown that musical beat perception is a surprisingly complex phenomenon involving widespread neural coordination across higher-order sensory, motor and cognitive areas. However, the question of how low-level auditory processing must necessarily shape these dynamics, and therefore perception, is not well understood. Here, we present evidence that the auditory cortical representation of music, even in the absence of motor or top-down activations, already favours the beat that will be perceived. Extracellular firing rates in the rat auditory cortex were recorded in response to 20 musical excerpts diverse in tempo and genre, for which musical beat perception had been characterized by the tapping behaviour of 40 human listeners. We found that firing rates in the rat auditory cortex were on average higher on the beat than off the beat. This 'neural emphasis' distinguished the beat that was perceived from other possible interpretations of the beat, was predictive of the degree of tapping consensus across human listeners, and was accounted for by a spectrotemporal receptive field model. These findings strongly suggest that the 'bottom-up' processing of music performed by the auditory system predisposes the timing and clarity of the perceived musical beat.
Collapse
Affiliation(s)
- Vani G. Rajendran
- Auditory Neuroscience Group, Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
- Department of Biomedical Sciences, City University of Hong Kong, Kowloon Tong, Hong Kong
| | - Nicol S. Harper
- Auditory Neuroscience Group, Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
| | - Jan W. H. Schnupp
- Auditory Neuroscience Group, Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
- Department of Biomedical Sciences, City University of Hong Kong, Kowloon Tong, Hong Kong
| |
Collapse
|
17
|
Lazar AA, Ukani NH, Zhou Y. Sparse identification of contrast gain control in the fruit fly photoreceptor and amacrine cell layer. JOURNAL OF MATHEMATICAL NEUROSCIENCE 2020; 10:3. [PMID: 32052209 PMCID: PMC7016054 DOI: 10.1186/s13408-020-0080-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/19/2019] [Accepted: 01/28/2020] [Indexed: 05/05/2023]
Abstract
The fruit fly's natural visual environment is often characterized by light intensities ranging across several orders of magnitude and by rapidly varying contrast across space and time. Fruit fly photoreceptors robustly transduce and, in conjunction with amacrine cells, process visual scenes and provide the resulting signal to downstream targets. Here, we model the first step of visual processing in the photoreceptor-amacrine cell layer. We propose a novel divisive normalization processor (DNP) for modeling the computation taking place in the photoreceptor-amacrine cell layer. The DNP explicitly models the photoreceptor feedforward and temporal feedback processing paths and the spatio-temporal feedback path of the amacrine cells. We then formally characterize the contrast gain control of the DNP and provide sparse identification algorithms that can efficiently identify each of the feedforward and feedback DNP components. The algorithms presented here are the first demonstration of tractable and robust identification of the components of a divisive normalization processor. The sparse identification algorithms can be readily employed in experimental settings, and their effectiveness is demonstrated with several examples.
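The core computation named here, divisive normalization, can be sketched in a few lines. This toy version (a rectified drive divided by a pooled signal plus a semisaturation constant) is far simpler than the paper's DNP, which includes temporal feedforward and spatio-temporal feedback filters; the function name and parameter values are illustrative.

```python
import numpy as np

# Minimal divisive normalization: a rectified feedforward drive divided by
# a pooled normalization signal plus a semisaturation constant sigma.
def divisive_normalization(photoreceptor_input, sigma=0.1):
    drive = np.maximum(photoreceptor_input, 0.0)   # feedforward path
    pool = drive.mean()                            # amacrine-like pooled signal
    return drive / (sigma + pool)

# Contrast gain control falls out of the division: scaling the input 10x
# leaves the output pattern identical and its magnitude nearly unchanged
# once the pool dominates sigma.
x = np.array([0.2, 0.5, 1.0, 0.1])
out_lo = divisive_normalization(x)
out_hi = divisive_normalization(10 * x)
```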
Collapse
Affiliation(s)
- Aurel A. Lazar
- Department of Electrical Engineering, Columbia University, New York, USA
| | - Nikul H. Ukani
- Department of Electrical Engineering, Columbia University, New York, USA
| | - Yiyin Zhou
- Department of Electrical Engineering, Columbia University, New York, USA
| |
Collapse
|
18
|
Lohse M, Bajo VM, King AJ, Willmore BDB. Neural circuits underlying auditory contrast gain control and their perceptual implications. Nat Commun 2020; 11:324. [PMID: 31949136 PMCID: PMC6965083 DOI: 10.1038/s41467-019-14163-5] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2019] [Accepted: 12/19/2019] [Indexed: 11/09/2022] Open
Abstract
Neural adaptation enables sensory information to be represented optimally in the brain despite large fluctuations over time in the statistics of the environment. Auditory contrast gain control represents an important example, which is thought to arise primarily from cortical processing. Here we show that neurons in the auditory thalamus and midbrain of mice show robust contrast gain control, and that this is implemented independently of cortical activity. Although neurons at each level exhibit contrast gain control to similar degrees, adaptation time constants become longer at later stages of the processing hierarchy, resulting in progressively more stable representations. We also show that auditory discrimination thresholds in human listeners compensate for changes in contrast, and that the strength of this perceptual adaptation can be predicted from physiological measurements. Contrast adaptation is therefore a robust property of both the subcortical and cortical auditory system and accounts for the short-term adaptability of perceptual judgments.
Collapse
Affiliation(s)
- Michael Lohse
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, OX1 3PT, UK.
| | - Victoria M Bajo
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, OX1 3PT, UK
| | - Andrew J King
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, OX1 3PT, UK.
| | - Ben D B Willmore
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, OX1 3PT, UK
| |
Collapse
|
19
|
Lopez Espejo M, Schwartz ZP, David SV. Spectral tuning of adaptation supports coding of sensory context in auditory cortex. PLoS Comput Biol 2019; 15:e1007430. [PMID: 31626624 PMCID: PMC6821137 DOI: 10.1371/journal.pcbi.1007430] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2019] [Revised: 10/30/2019] [Accepted: 09/23/2019] [Indexed: 12/19/2022] Open
Abstract
Perception of vocalizations and other behaviorally relevant sounds requires integrating acoustic information over hundreds of milliseconds. Sound-evoked activity in auditory cortex typically has much shorter latency, but the acoustic context, i.e., sound history, can modulate sound evoked activity over longer periods. Contextual effects are attributed to modulatory phenomena, such as stimulus-specific adaptation and contrast gain control. However, an encoding model that links context to natural sound processing has yet to be established. We tested whether a model in which spectrally tuned inputs undergo adaptation mimicking short-term synaptic plasticity (STP) can account for contextual effects during natural sound processing. Single-unit activity was recorded from primary auditory cortex of awake ferrets during presentation of noise with natural temporal dynamics and fully natural sounds. Encoding properties were characterized by a standard linear-nonlinear spectro-temporal receptive field (LN) model and variants that incorporated STP-like adaptation. In the adapting models, STP was applied either globally across all input spectral channels or locally to subsets of channels. For most neurons, models incorporating local STP predicted neural activity as well as or better than LN and global STP models. The strength of nonlinear adaptation varied across neurons. Within neurons, adaptation was generally stronger for spectral channels with excitatory than inhibitory gain. Neurons showing improved STP model performance also tended to undergo stimulus-specific adaptation, suggesting a common mechanism for these phenomena. When STP models were compared between passive and active behavior conditions, response gain often changed, but average STP parameters were stable.
Thus, spectrally and temporally heterogeneous adaptation, subserved by a mechanism with STP-like dynamics, may support representation of the complex spectro-temporal patterns that comprise natural sounds across wide-ranging sensory contexts.
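The STP-like adaptation invoked above can be sketched with a standard depressing-synapse model: a resource is depleted by input and recovers with a time constant, so the effective input adapts to sustained stimulation. Parameter values below are illustrative, not the paper's fits.

```python
import numpy as np

# Minimal short-term depression: a synaptic resource u is depleted by the
# input x and recovers toward 1 with time constant tau, so the effective
# input u * x adapts to sustained sounds.
def stp_depress(x, p=0.05, tau=100.0, dt=10.0):
    u = 1.0
    out = np.zeros(len(x))
    for t, xt in enumerate(x):
        out[t] = u * xt                            # depressed effective input
        u += dt * ((1.0 - u) / tau - p * u * xt)   # recovery minus depletion
    return out

# A sustained input adapts toward a low steady state, while a channel that
# has been silent keeps fresh resources -- which is how spectrally local
# STP can produce stimulus-specific adaptation.
y = stp_depress(np.ones(50))
```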
Collapse
Affiliation(s)
- Mateo Lopez Espejo
- Neuroscience Graduate Program, Oregon Health and Science University, Portland, OR, United States of America
| | - Zachary P. Schwartz
- Neuroscience Graduate Program, Oregon Health and Science University, Portland, OR, United States of America
| | - Stephen V. David
- Oregon Hearing Research Center, Oregon Health and Science University, Portland, OR, United States of America
| |
Collapse
|
20
|
Awad A, Fgaier H, Mustafa I, Elkamel A, Elnashaie S. Pharmacokinetic/Pharmacodynamic modeling and simulation of the effect of medications on β-amyloid aggregates and cholinergic neurocycle. Comput Chem Eng 2019. [DOI: 10.1016/j.compchemeng.2019.04.017] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
21
|
Rahman M, Willmore BDB, King AJ, Harper NS. A dynamic network model of temporal receptive fields in primary auditory cortex. PLoS Comput Biol 2019; 15:e1006618. [PMID: 31059503 PMCID: PMC6534339 DOI: 10.1371/journal.pcbi.1006618] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2018] [Revised: 05/24/2019] [Accepted: 04/13/2019] [Indexed: 11/19/2022] Open
Abstract
Auditory neurons encode stimulus history, which is often modelled using a span of time-delays in a spectro-temporal receptive field (STRF). We propose an alternative model for the encoding of stimulus history, which we apply to extracellular recordings of neurons in the primary auditory cortex of anaesthetized ferrets. For a linear-non-linear STRF model (LN model) to achieve a high level of performance in predicting single unit neural responses to natural sounds in the primary auditory cortex, we found that it is necessary to include time delays going back at least 200 ms in the past. This is an unrealistic time span for biological delay lines. We therefore asked how much of this dependence on stimulus history can instead be explained by dynamical aspects of neurons. We constructed a neural-network model whose output is the weighted sum of units whose responses are determined by a dynamic firing-rate equation. The dynamic aspect performs low-pass filtering on each unit's response, providing an exponentially decaying memory whose time constant is individual to each unit. We find that this dynamic network (DNet) model, when fitted to the neural data using STRFs of only 25 ms duration, can achieve prediction performance on a held-out dataset comparable to the best performing LN model with STRFs of 200 ms duration. These findings suggest that integration due to the membrane time constants or other exponentially-decaying memory processes may underlie linear temporal receptive fields of neurons beyond 25 ms.
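The DNet idea described above (units whose responses are low-pass filtered with individual time constants, summed by output weights) is compact enough to sketch directly. Weights, time constants, and the white-noise drive below are illustrative, not values fitted to neural data.

```python
import numpy as np

rng = np.random.default_rng(2)

# DNet-style sketch: each unit low-pass filters its input drive with its
# own time constant (an exponentially decaying memory); the model output
# is a weighted sum of the unit responses.
T, n_units = 300, 4
drive = rng.standard_normal((n_units, T))     # per-unit input drive
taus = np.array([2.0, 5.0, 20.0, 80.0])       # time constants (in bins)
w_out = np.array([0.5, 0.3, -0.2, 0.4])       # output weights

r = np.zeros((n_units, T))
for t in range(1, T):
    # Euler step of the firing-rate equation dr/dt = (drive - r) / tau
    r[:, t] = r[:, t - 1] + (drive[:, t] - r[:, t - 1]) / taus
output = w_out @ r

# Long-tau units change slowly, carrying stimulus history far beyond short
# filters -- the paper's alternative to long biological delay lines.
```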
Collapse
Affiliation(s)
- Monzilur Rahman
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
| | - Ben D. B. Willmore
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
| | - Andrew J. King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
| | - Nicol S. Harper
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
| |
Collapse
|
22
|
Norman-Haignere SV, McDermott JH. Neural responses to natural and model-matched stimuli reveal distinct computations in primary and nonprimary auditory cortex. PLoS Biol 2018; 16:e2005127. [PMID: 30507943 PMCID: PMC6292651 DOI: 10.1371/journal.pbio.2005127] [Citation(s) in RCA: 45] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2017] [Revised: 12/13/2018] [Accepted: 11/08/2018] [Indexed: 11/19/2022] Open
Abstract
A central goal of sensory neuroscience is to construct models that can explain neural responses to natural stimuli. As a consequence, sensory models are often tested by comparing neural responses to natural stimuli with model responses to those stimuli. One challenge is that distinct model features are often correlated across natural stimuli, and thus model features can predict neural responses even if they do not in fact drive them. Here, we propose a simple alternative for testing a sensory model: we synthesize a stimulus that yields the same model response as each of a set of natural stimuli, and test whether the natural and "model-matched" stimuli elicit the same neural responses. We used this approach to test whether a common model of auditory cortex-in which spectrogram-like peripheral input is processed by linear spectrotemporal filters-can explain fMRI responses in humans to natural sounds. Prior studies have shown that this model has good predictive power throughout auditory cortex, but this finding could reflect feature correlations in natural stimuli. We observed that fMRI responses to natural and model-matched stimuli were nearly equivalent in primary auditory cortex (PAC) but that nonprimary regions, including those selective for music or speech, showed highly divergent responses to the two sound sets. This dissociation between primary and nonprimary regions was less clear from model predictions due to the influence of feature correlations across natural stimuli. Our results provide a signature of hierarchical organization in human auditory cortex, and suggest that nonprimary regions compute higher-order stimulus properties that are not well captured by traditional models. Our methodology enables stronger tests of sensory models and could be broadly applied in other domains.
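For a purely linear model, the model-matching logic above has a clean linear-algebra form: any stimulus with the same filter outputs is "model-matched", and components in the model's null space can differ arbitrarily without changing the model response. The sketch below is a simplified stand-in (the paper works with a spectrotemporal filter model and fMRI data); all dimensions and values are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear model: n_features filters applied to an n_dims stimulus.
n_features, n_dims = 5, 50
F = rng.standard_normal((n_features, n_dims))  # the model's linear filters
s_nat = rng.standard_normal(n_dims)            # "natural" stimulus

# Minimum-norm stimulus with an identical model response F @ s:
s_matched, *_ = np.linalg.lstsq(F, F @ s_nat, rcond=None)

# Add a null-space component: the stimulus changes substantially, but the
# model response does not -- the signature the fMRI test exploits.
null_dir = rng.standard_normal(n_dims)
null_dir -= np.linalg.pinv(F) @ (F @ null_dir)  # remove row-space part
s_matched = s_matched + 3.0 * null_dir
```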
Collapse
Affiliation(s)
- Sam V. Norman-Haignere
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Zuckerman Institute of Mind, Brain and Behavior, Columbia University, New York, New York, United States of America
- Laboratoire des Systèmes Perceptifs, Département d’Études Cognitives, ENS, PSL University, CNRS, Paris, France
| | - Josh H. McDermott
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Program in Speech and Hearing Biosciences and Technology, Harvard University, Cambridge, Massachusetts, United States of America
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
| |
Collapse
|
23
|
Cooke JE, King AJ, Willmore BDB, Schnupp JWH. Contrast gain control in mouse auditory cortex. J Neurophysiol 2018; 120:1872-1884. [PMID: 30044164 PMCID: PMC6230796 DOI: 10.1152/jn.00847.2017] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2017] [Revised: 07/18/2018] [Accepted: 07/20/2018] [Indexed: 11/22/2022] Open
Abstract
The neocortex is thought to employ a number of canonical computations, but little is known about whether these computations rely on shared mechanisms across different neural populations. In recent years, the mouse has emerged as a powerful model organism for the dissection of the circuits and mechanisms underlying various aspects of neural processing and therefore provides an important avenue for research into putative canonical computations. One such computation is contrast gain control, the systematic adjustment of neural gain in accordance with the contrast of sensory input, which helps to construct neural representations that are robust to the presence of background stimuli. Here, we characterized contrast gain control in the mouse auditory cortex. We performed laminar extracellular recordings in the auditory cortex of the anesthetized mouse while varying the contrast of the sensory input. We observed that an increase in stimulus contrast resulted in a compensatory reduction in the gain of neural responses, leading to representations in the mouse auditory cortex that are largely contrast invariant. Contrast gain control was present in all cortical layers but was found to be strongest in deep layers, indicating that intracortical mechanisms may contribute to these gain changes. These results lay a foundation for investigations into the mechanisms underlying contrast adaptation in the mouse auditory cortex. NEW & NOTEWORTHY We investigated whether contrast gain control, the systematic reduction in neural gain in response to an increase in sensory contrast, exists in the mouse auditory cortex. We performed extracellular recordings in the mouse auditory cortex while presenting sensory stimuli with varying contrasts and found this form of processing was widespread. This finding provides evidence that contrast gain control may represent a canonical cortical computation and lays a foundation for investigations into the underlying mechanisms.
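The measurement described above (a compensatory reduction in neural gain as stimulus contrast increases) can be sketched as fitting the slope of a neuron's input-output function separately at low and high contrast. The data and gain values below are synthetic, chosen only to illustrate the analysis.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic rate-vs-drive functions at two contrasts. Compensatory gain
# control predicts a shallower slope (lower gain) at high contrast.
x = np.linspace(0.0, 1.0, 200)                    # stimulus drive
resp_lo = 10.0 * x + rng.normal(0, 0.5, x.size)   # low contrast: steep gain
resp_hi = 4.0 * x + rng.normal(0, 0.5, x.size)    # high contrast: reduced gain

gain_lo = np.polyfit(x, resp_lo, 1)[0]            # fitted slope, low contrast
gain_hi = np.polyfit(x, resp_hi, 1)[0]            # fitted slope, high contrast
gain_ratio = gain_hi / gain_lo                    # < 1 indicates gain control
```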
Collapse
Affiliation(s)
- James E Cooke
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- University College London, London, United Kingdom
| | - Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
| | - Ben D B Willmore
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
| | - Jan W H Schnupp
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Department of Biomedical Sciences, City University of Hong Kong, Hong Kong
| |
Collapse
|
24
|
Rocchi F, Ramachandran R. Neuronal adaptation to sound statistics in the inferior colliculus of behaving macaques does not reduce the effectiveness of the masking noise. J Neurophysiol 2018; 120:2819-2833. [PMID: 30256735 DOI: 10.1152/jn.00875.2017] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The detectability of target sounds embedded within noisy backgrounds is affected by the regularities that summarize acoustic sceneries. Previous studies suggested that the dynamic range of neurons in the inferior colliculus (IC) of anesthetized guinea pigs shifts toward the mean sound pressure level in irregular acoustic environments. Yet, it is unclear how this neuronal adaptation process may influence the effectiveness of sounds as maskers, both behaviorally and in terms of neuronal encoding. To answer this question, we measured the neural response of IC neurons while macaque monkeys performed a Go/No-Go tone detection task. Macaques detected a 50-ms tone that was either simultaneously gated with a burst of noise or embedded within a continuous noise background, whose levels were randomly sampled (every 50 ms) from a probability distribution. The mean of the distribution matched the level of the gated burst of noise. Psychometric and IC neurometric thresholds to tones did not differ between the two masking conditions. However, the neuronal firing rate versus level function was significantly affected by the temporal characteristics of the noise masker. Simultaneously gated noise caused higher baseline responses and greater dynamic range compression compared with the noise distribution condition. The slopes of psychometric and neurometric functions were significantly shallower for higher variance distributions, suggesting that neuronal sensitivity might change with the variability of the sound. Our results suggest that the adaptive response of IC neurons to sound regularities does not affect the effectiveness of the noise-masking signal, which remains invariant to surrounding noise amplitudes. NEW & NOTEWORTHY Auditory neurons adapt to the statistics of sound levels in the acoustic scene. However, it is still unclear to what extent such adaptation influences the effectiveness of the stimulus as a masker.
Our study represents the first attempt to investigate how the adaptation to the statistics of masking stimuli may be related to the effectiveness of masking, and to the single-unit encoding of the midbrain auditory neurons in behaving animals.
Collapse
Affiliation(s)
- Francesca Rocchi
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
| | - Ramnarayan Ramachandran
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
| |
Collapse
|
25
|
Abstract
Our ability to make sense of the auditory world results from neural processing that begins in the ear, goes through multiple subcortical areas, and continues in the cortex. The specific contribution of the auditory cortex to this chain of processing is far from understood. Although many of the properties of neurons in the auditory cortex resemble those of subcortical neurons, they show somewhat more complex selectivity for sound features, which is likely to be important for the analysis of natural sounds, such as speech, in real-life listening conditions. Furthermore, recent work has shown that auditory cortical processing is highly context-dependent, integrates auditory inputs with other sensory and motor signals, depends on experience, and is shaped by cognitive demands, such as attention. Thus, in addition to being the locus for more complex sound selectivity, the auditory cortex is increasingly understood to be an integral part of the network of brain regions responsible for prediction, auditory perceptual decision-making, and learning. In this review, we focus on three key areas that are contributing to this understanding: the sound features that are preferentially represented by cortical neurons, the spatial organization of those preferences, and the cognitive roles of the auditory cortex.
Collapse
Affiliation(s)
- Andrew J King
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
| | - Sundeep Teki
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
| | - Ben D B Willmore
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
| |
Collapse
|
26
|
Abstract
Human listeners appear to represent the textures of sounds through a process of automatic time-averaging that operates beyond volition. This process distils likely background sounds into their summary statistics, a computationally efficient way of dealing with complex auditory scenes.
Collapse
Affiliation(s)
- David McAlpine
- Department of Linguistics, and The Australian Hearing Hub, Macquarie University, 16 University Avenue, Sydney, NSW, 2109, Australia.
| |
Collapse
|
27
|
Madi MK, Karameh FN. Adaptive optimal input design and parametric estimation of nonlinear dynamical systems: application to neuronal modeling. J Neural Eng 2018; 15:046028. [PMID: 29749350 DOI: 10.1088/1741-2552/aac3f7] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
OBJECTIVE Many physical models of biological processes, including neural systems, are characterized by parametric nonlinear dynamical relations between driving inputs, internal states, and measured outputs of the process. Fitting such models using experimental data (data assimilation) is a challenging task, since the physical process often operates in a noisy, possibly non-stationary environment; moreover, conducting multiple experiments under controlled and repeatable conditions can be impractical, time-consuming, or costly. The accuracy of model identification, therefore, is dictated principally by the quality and dynamic richness of the data collected over one or a few experimental sessions. Accordingly, it is highly desirable to design efficient experiments that, by exciting the physical process with smart inputs, yield fast convergence and increased accuracy of the model. APPROACH We herein introduce an adaptive framework in which optimal input design is integrated with square root cubature Kalman filters (OID-SCKF) to develop an online estimation procedure that, first, converges significantly more quickly, thereby permitting model fitting over shorter time windows, and, second, enhances model accuracy when only a few process outputs are accessible. The methodology is demonstrated on common nonlinear models and on a four-area neural mass model with noisy and limited measurements. Estimation quality (speed and accuracy) is benchmarked against high-performance SCKF-based methods that commonly employ dynamically rich informed inputs for accurate model identification. MAIN RESULTS For all the tested models, simulated single-trial and ensemble averages showed that OID-SCKF exhibited (i) faster convergence of parameter estimates and (ii) lower dependence on inter-trial noise variability, with convergence up to around 1000 ms faster and variability reduced by up to 81% for the neural mass models.
In terms of accuracy, OID-SCKF estimation was superior, and exhibited considerably less variability across experiments, in identifying model parameters of (a) systems with challenging model inversion dynamics and (b) systems with fewer measurable outputs that directly relate to the underlying processes. SIGNIFICANCE Fast and accurate identification therefore carries particular promise for modeling of transient (short-lived) neuronal network dynamics using a spatially under-sampled set of noisy measurements, as is commonly encountered in neural engineering applications.
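The square-root cubature Kalman filter at the core of OID-SCKF extends Kalman filtering to nonlinear dynamics by propagating a deterministic set of cubature points. As a rough illustration of the idea only, a plain (non-square-root) cubature filter, not the authors' OID-SCKF, and with hypothetical toy dynamics, can be sketched as:

```python
import numpy as np

def cubature_points(m, P):
    """2n cubature points for N(m, P) (third-degree spherical-radial rule)."""
    n = len(m)
    S = np.linalg.cholesky(P)                 # square-root factor of P
    xi = np.sqrt(n) * np.hstack([S, -S])      # n x 2n spread directions
    return m[:, None] + xi

def ckf_step(m, P, u, y, f, h, Q, R):
    """One predict/update cycle of a cubature Kalman filter.

    m, P : current state mean and covariance; u : input; y : measurement;
    f(x, u), h(x) : process and measurement maps; Q, R : noise covariances.
    """
    n = len(m)
    # Predict: push cubature points through the dynamics
    X = cubature_points(m, P)
    Xp = np.array([f(X[:, i], u) for i in range(2 * n)]).T
    m_pred = Xp.mean(axis=1)
    dX = Xp - m_pred[:, None]
    P_pred = dX @ dX.T / (2 * n) + Q
    # Update: push fresh points through the measurement map
    X2 = cubature_points(m_pred, P_pred)
    Yp = np.array([h(X2[:, i]) for i in range(2 * n)]).T
    y_pred = Yp.mean(axis=1)
    dY = Yp - y_pred[:, None]
    Pyy = dY @ dY.T / (2 * n) + R
    Pxy = (X2 - m_pred[:, None]) @ dY.T / (2 * n)
    K = Pxy @ np.linalg.inv(Pyy)
    return m_pred + K @ (y - y_pred), P_pred - K @ Pyy @ K.T
```

Augmenting the state vector with the unknown parameters turns parameter fitting into recursive filtering; the square-root variant used in the paper instead propagates Cholesky factors directly for better numerical conditioning.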
Collapse
Affiliation(s)
- Mahmoud K Madi
- Department of Electrical and Computer Engineering, American University of Beirut, Beirut, Lebanon
| | | |
Collapse
|
28
|
Angeloni C, Geffen MN. Contextual modulation of sound processing in the auditory cortex. Curr Opin Neurobiol 2018; 49:8-15. [PMID: 29125987 PMCID: PMC6037899 DOI: 10.1016/j.conb.2017.10.012] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2017] [Revised: 10/11/2017] [Accepted: 10/13/2017] [Indexed: 12/26/2022]
Abstract
In everyday acoustic environments, we navigate through a maze of sounds that possess a complex spectrotemporal structure, spanning many frequencies and exhibiting temporal modulations that differ within frequency bands. Our auditory system needs to encode the same sounds efficiently in a variety of different contexts, while preserving the ability to separate complex sounds within an acoustic scene. Recent work in auditory neuroscience has made substantial progress in studying how sounds are represented in the auditory system under different contexts, demonstrating that auditory processing of seemingly simple acoustic features, such as frequency and time, is highly dependent on co-occurring acoustic and behavioral context. Through a combination of electrophysiological recordings, computational analysis, and behavioral techniques, recent research has identified interactions between the external spectral and temporal context of stimuli and the internal behavioral state of the listener.
Collapse
Affiliation(s)
- C Angeloni
- Department of Otorhinolaryngology: HNS, Department of Neuroscience, Psychology Graduate Group, Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA, United States
| | - M N Geffen
- Department of Otorhinolaryngology: HNS, Department of Neuroscience, Psychology Graduate Group, Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA, United States.
| |
Collapse
|
29
|
David SV. Incorporating behavioral and sensory context into spectro-temporal models of auditory encoding. Hear Res 2018; 360:107-123. [PMID: 29331232 PMCID: PMC6292525 DOI: 10.1016/j.heares.2017.12.021] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/27/2017] [Revised: 12/18/2017] [Accepted: 12/26/2017] [Indexed: 01/11/2023]
Abstract
For several decades, auditory neuroscientists have used spectro-temporal encoding models to understand how neurons in the auditory system represent sound. Derived from early applications of systems identification tools to the auditory periphery, the spectro-temporal receptive field (STRF) and more sophisticated variants have emerged as an efficient means of characterizing representation throughout the auditory system. Most of these encoding models describe neurons as static sensory filters. However, auditory neural coding is not static. Sensory context, reflecting the acoustic environment, and behavioral context, reflecting the internal state of the listener, can both influence sound-evoked activity, particularly in central auditory areas. This review explores recent efforts to integrate context into spectro-temporal encoding models. It begins with a brief tutorial on the basics of estimating and interpreting STRFs. Then it describes three recent studies that have characterized contextual effects on STRFs, emerging over a range of timescales, from many minutes to tens of milliseconds. An important theme of this work is not simply that context influences auditory coding, but also that contextual effects span a large continuum of internal states. The added complexity of these context-dependent models introduces new experimental and theoretical challenges that must be addressed before such models can be used effectively. Several new methodological advances promise to address these limitations and allow the development of more comprehensive context-dependent models in the future.
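For readers unfamiliar with STRF estimation, the basic recipe the tutorial refers to reduces to a regularized linear regression from a time-lagged spectrogram to the firing rate. A minimal sketch (ridge regression in NumPy; variable names are illustrative, not taken from the review):

```python
import numpy as np

def estimate_strf(spec, rate, n_lags, lam=1.0):
    """Estimate a spectro-temporal receptive field by ridge regression.

    spec : (T, F) stimulus spectrogram; rate : (T,) firing rate;
    n_lags : number of time-lag bins; lam : ridge penalty.
    Returns an (n_lags, F) filter such that
    rate[t] ~ sum over (lag, freq) of strf[lag, freq] * spec[t - lag, freq].
    """
    T, F = spec.shape
    X = np.zeros((T, n_lags * F))             # lagged design matrix
    for lag in range(n_lags):
        X[lag:, lag * F:(lag + 1) * F] = spec[:T - lag]
    w = np.linalg.solve(X.T @ X + lam * np.eye(n_lags * F), X.T @ rate)
    return w.reshape(n_lags, F)
```

The ridge penalty `lam` plays the role of the regularizing "prior" discussed in such tutorials, trading variance of the estimate against bias toward small filter weights.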
Collapse
Affiliation(s)
- Stephen V David
- Oregon Hearing Research Center, Oregon Health & Science University, 3181 SW Sam Jackson Park Rd, MC L335A, Portland, OR 97239, United States.
| |
Collapse
|
30
|
Atencio CA, Sharpee TO. Multidimensional receptive field processing by cat primary auditory cortical neurons. Neuroscience 2017; 359:130-141. [PMID: 28694174 DOI: 10.1016/j.neuroscience.2017.07.003] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2017] [Revised: 06/03/2017] [Accepted: 07/03/2017] [Indexed: 12/01/2022]
Abstract
The receptive fields of many auditory cortical neurons are multidimensional and are best represented by more than one stimulus feature. The number of these dimensions, their characteristics, and how they differ with stimulus context have been relatively unexplored. Standard methods that are often used to characterize multidimensional stimulus selectivity, such as spike-triggered covariance (STC) or maximally informative dimensions (MIDs), are either limited to Gaussian stimuli or are only able to recover a small number of stimulus features due to data limitations. An information theoretic extension of STC, the maximum noise entropy (MNE) model, can be used with non-Gaussian stimulus distributions to find an arbitrary number of stimulus dimensions. When we applied the MNE model to auditory cortical neurons, we often found more than two stimulus features that influenced neuronal firing. Excitatory and suppressive features coded different acoustic contexts: excitatory features encoded higher temporal and spectral modulations, while suppressive features had lower modulation frequency preferences. We found that the excitatory and suppressive features themselves were sensitive to stimulus context when we employed two stimuli that differed only in their short-term correlation structure: while the linear features were similar, the secondary features were strongly affected by stimulus statistics. These results show that multidimensional receptive field processing is influenced by feature type and stimulus context.
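A second-order MNE model of the kind referenced here can be written as P(spike|s) = sigmoid(a + h·s + s'Js); eigenvectors of the fitted quadratic kernel J with positive and negative eigenvalues play the role of excitatory and suppressive features, respectively. A minimal sketch of this model class, fit here by plain gradient-descent logistic regression rather than the authors' estimation procedure:

```python
import numpy as np

def mne_features(S):
    """Augment each stimulus row s with its quadratic terms vec(s s^T)."""
    T, D = S.shape
    quad = np.einsum('ti,tj->tij', S, S).reshape(T, D * D)
    return np.hstack([S, quad])

def fit_mne(S, spikes, lr=0.05, n_iter=3000, lam=1e-3):
    """Fit P(spike|s) = sigmoid(a + h.s + s'Js) by regularized logistic
    regression (full-batch gradient descent, for illustration only)."""
    X = mne_features(S)
    T, D = S.shape
    w, a = np.zeros(X.shape[1]), 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w + a)))
        w -= lr * (X.T @ (p - spikes) / T + lam * w)
        a -= lr * np.mean(p - spikes)
    h, J = w[:D], w[D:].reshape(D, D)
    J = 0.5 * (J + J.T)                        # symmetrize the quadratic kernel
    evals, evecs = np.linalg.eigh(J)           # columns: candidate features
    return a, h, evals, evecs
```

Because the quadratic term is fit directly rather than from a spike-triggered covariance, this formulation does not require Gaussian stimuli, which is the property the abstract highlights.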
Collapse
Affiliation(s)
- Craig A Atencio
- Coleman Memorial Laboratory, UCSF Center for Integrative Neuroscience, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-HNS, University of California, San Francisco, USA.
| | - Tatyana O Sharpee
- Computational Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA, USA; Center for Theoretical Biological Physics and Department of Physics, University of California, San Diego, La Jolla, CA, USA
| |
Collapse
|
31
|
Single Neurons in the Avian Auditory Cortex Encode Individual Identity and Propagation Distance in Naturally Degraded Communication Calls. J Neurosci 2017; 37:3491-3510. [PMID: 28235893 PMCID: PMC5373131 DOI: 10.1523/jneurosci.2220-16.2017] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2016] [Revised: 01/08/2017] [Accepted: 01/13/2017] [Indexed: 11/21/2022] Open
Abstract
One of the most complex tasks performed by sensory systems is "scene analysis": the interpretation of complex signals as behaviorally relevant objects. The study of this problem, universal to species and sensory modalities, is particularly challenging in audition, where sounds from various sources and locations, degraded by propagation through the environment, sum to form a single acoustical signal. Here we investigated, in a songbird model, the zebra finch, the neural substrate for ranging and identifying a single source. We relied on ecologically and behaviorally relevant stimuli, contact calls, to investigate the neural discrimination of individual vocal signature as well as of sound source distance when calls have been degraded through propagation in a natural environment. Performing electrophysiological recordings in anesthetized birds, we found neurons in the auditory forebrain that discriminate individual vocal signatures despite long-range degradation, as well as neurons discriminating propagation distance, with varying degrees of multiplexing between both information types. Moreover, the neural discrimination performance for individual identity was not affected by propagation-induced degradation beyond what was induced by the decreased intensity. For the first time, neurons with distance-invariant identity discrimination properties as well as distance-discriminant neurons are revealed in the avian auditory cortex. Because these neurons were recorded in animals with no prior experience of either the vocalizers of the stimuli or long-range propagation of calls, we suggest that this neural population is part of a general-purpose system for vocalizer discrimination and ranging. SIGNIFICANCE STATEMENT Understanding how the brain makes sense of the multitude of stimuli that it continually receives in natural conditions is a challenge for scientists.
Here we provide a new understanding of how the auditory system extracts behaviorally relevant information, the vocalizer identity and its distance to the listener, from acoustic signals that have been degraded by long-range propagation in natural conditions. We show, for the first time, that single neurons in the auditory cortex of zebra finches are capable of discriminating the individual identity and sound source distance in conspecific communication calls. The discrimination of identity in propagated calls relies on a neural coding that is robust to changes in intensity, signal quality, and signal-to-noise ratio.
Collapse
|
32
|
Meyer AF, Williamson RS, Linden JF, Sahani M. Models of Neuronal Stimulus-Response Functions: Elaboration, Estimation, and Evaluation. Front Syst Neurosci 2017; 10:109. [PMID: 28127278 PMCID: PMC5226961 DOI: 10.3389/fnsys.2016.00109] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2016] [Accepted: 12/19/2016] [Indexed: 11/13/2022] Open
Abstract
Rich, dynamic, and dense sensory stimuli are encoded within the nervous system by the time-varying activity of many individual neurons. A fundamental approach to understanding the nature of the encoded representation is to characterize the function that relates the moment-by-moment firing of a neuron to the recent history of a complex sensory input. This review provides a unifying and critical survey of the techniques that have been brought to bear on this effort thus far, ranging from the classical linear receptive field model to modern approaches incorporating normalization and other nonlinearities. We address separately the structure of the models; the criteria and algorithms used to identify the model parameters; and the role of regularizing terms or “priors.” In each case we consider the benefits and drawbacks of various proposals, providing examples of when these methods work and when they may fail. Emphasis is placed on key concepts rather than mathematical details, so as to make the discussion accessible to readers from outside the field. Finally, we review ways in which the agreement between an assumed model and the neuron's response may be quantified. Re-implemented and unified code for many of the methods is made freely available.
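On the review's final point, quantifying model-response agreement, one common recipe (a sketch of the general idea, not the code released with the review) compares the prediction's correlation with the trial-averaged response against a reliability ceiling estimated from split-half trial correlations:

```python
import numpy as np

def prediction_score(pred, trials):
    """Raw and noise-corrected agreement between a model prediction and
    repeated-trial responses.

    pred : (T,) model prediction; trials : (N, T) responses over N repeats.
    Returns (r, r_norm): correlation with the trial-averaged response, and
    the same correlation divided by a reliability ceiling obtained from
    split-half trial correlations via the Spearman-Brown formula.
    """
    mean_resp = trials.mean(axis=0)
    r = np.corrcoef(pred, mean_resp)[0, 1]
    n = trials.shape[0] // 2
    r_half = np.corrcoef(trials[:n].mean(0), trials[n:2 * n].mean(0))[0, 1]
    r_full = 2 * r_half / (1 + r_half)        # Spearman-Brown step-up
    ceiling = np.sqrt(max(r_full, 1e-12))     # max attainable correlation
    return r, r / ceiling
```

The normalized score makes models comparable across neurons whose trial-to-trial variability differs, which is one reason such corrections recur in this literature.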
Collapse
Affiliation(s)
- Arne F Meyer
- Gatsby Computational Neuroscience Unit, University College London, London, UK
| | - Ross S Williamson
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA, USA; Department of Otology and Laryngology, Harvard Medical School, Boston, MA, USA
| | - Jennifer F Linden
- Ear Institute, University College London, London, UK; Department of Neuroscience, Physiology and Pharmacology, University College London, London, UK
| | - Maneesh Sahani
- Gatsby Computational Neuroscience Unit, University College London, London, UK
| |
Collapse
|
33
|
Yildiz IB, Mesgarani N, Deneve S. Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields. J Neurosci 2016; 36:12338-12350. [PMID: 27927954 PMCID: PMC5148225 DOI: 10.1523/jneurosci.4648-15.2016] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2015] [Revised: 09/18/2016] [Accepted: 09/20/2016] [Indexed: 11/23/2022] Open
Abstract
A primary goal of auditory neuroscience is to identify the sound features extracted and represented by auditory neurons. Linear encoding models, which describe neural responses as a function of the stimulus, have been primarily used for this purpose. Here, we provide theoretical arguments and experimental evidence in support of an alternative approach, based on decoding the stimulus from the neural response. We used a Bayesian normative approach to predict the responses of neurons detecting relevant auditory features, despite ambiguities and noise. We compared the model predictions to recordings from the primary auditory cortex of ferrets and found that: (1) the decoding filters of auditory neurons resemble the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynamics of responses better than a linear encoding model of similar complexity; and (3) the decoding model accounts for the accuracy with which the stimulus is represented in neural activity, whereas the linear encoding model performs very poorly. Most importantly, our model predicts that neuronal responses are fundamentally shaped by "explaining away," a divisive competition between alternative interpretations of the auditory scene. SIGNIFICANCE STATEMENT Neural responses in the auditory cortex are dynamic, nonlinear, and hard to predict. Traditionally, encoding models have been used to describe neural responses as a function of the stimulus. However, in addition to external stimulation, neural activity is strongly modulated by the responses of other neurons in the network. We hypothesized that auditory neurons aim to collectively decode their stimulus. In particular, a stimulus feature that is decoded (or explained away) by one neuron is not explained by another. We demonstrated that this novel Bayesian decoding model is better at capturing the dynamic responses of cortical neurons in ferrets.
Whereas the linear encoding model poorly reflects the selectivity of neurons, the decoding model can account for the strong nonlinearities observed in neural data.
Collapse
Affiliation(s)
- Izzet B Yildiz
- Group for Neural Theory, Laboratoire de Neurosciences Cognitives, Département d'Etudes Cognitives, Ecole Normale Supérieure, 75005 Paris, France
| | - Nima Mesgarani
- Department of Electrical Engineering, Columbia University, New York, New York 10027
| | - Sophie Deneve
- Group for Neural Theory, Laboratoire de Neurosciences Cognitives, Département d'Etudes Cognitives, Ecole Normale Supérieure, 75005 Paris, France
| |
Collapse
|
34
|
Robinson BL, Harper NS, McAlpine D. Meta-adaptation in the auditory midbrain under cortical influence. Nat Commun 2016; 7:13442. [PMID: 27883088 PMCID: PMC5123015 DOI: 10.1038/ncomms13442] [Citation(s) in RCA: 72] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2016] [Accepted: 10/04/2016] [Indexed: 11/17/2022] Open
Abstract
Neural adaptation is central to sensation. Neurons in the auditory midbrain, for example, rapidly adapt their firing rates to enhance coding precision of common sound intensities. However, it remains unknown whether this adaptation is fixed, or dynamic and dependent on experience. Here, using guinea pigs as animal models, we report that adaptation accelerates when an environment is re-encountered: in response to a sound environment that repeatedly switches between quiet and loud, midbrain neurons accrue experience to find an efficient code more rapidly. This phenomenon, which we term meta-adaptation, suggests a top-down influence on the midbrain. To test this, we inactivate auditory cortex and find that the acceleration of adaptation with experience is attenuated, indicating a role for the cortex, and its little-understood projections to the midbrain, in modulating meta-adaptation. Given the prevalence of adaptation across organisms and senses, meta-adaptation might be similarly common, with extensive implications for understanding how neurons encode the rapidly changing environments of the real world.
Collapse
Affiliation(s)
- Benjamin L. Robinson
- University College London Ear Institute, 332 Gray's Inn Road, London WC1X 8EE, UK
- Southwark and Central Integrated Psychological Therapies Team, The Maudsley Hospital, South London and Maudsley NHS Foundation Trust, Denmark Hill, London SE5 8AZ, UK
| | - Nicol S. Harper
- Department of Physiology, Anatomy, and Genetics, University of Oxford, South Parks Road, Oxford OX1 3QX, UK
- Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Old Road Campus Research Building, Headington, Oxford OX3 7DQ, UK
| | - David McAlpine
- University College London Ear Institute, 332 Gray's Inn Road, London WC1X 8EE, UK
- The Australian Hearing Hub, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia
| |
Collapse
|
35
|
Harper NS, Schoppe O, Willmore BDB, Cui Z, Schnupp JWH, King AJ. Network Receptive Field Modeling Reveals Extensive Integration and Multi-feature Selectivity in Auditory Cortical Neurons. PLoS Comput Biol 2016; 12:e1005113. [PMID: 27835647 PMCID: PMC5105998 DOI: 10.1371/journal.pcbi.1005113] [Citation(s) in RCA: 43] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2015] [Accepted: 08/22/2016] [Indexed: 11/28/2022] Open
Abstract
Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex, neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1-7 sub-receptive fields that interact nonlinearly, consistently predicts neural responses to auditory stimuli better than the standard receptive field. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context.
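Structurally, a network receptive field of this kind is a small feedforward network: each sub-receptive field filters the spectrogram window, passes through its own nonlinearity, and the hidden outputs combine at an output nonlinearity. A toy forward model (hand-picked weights, purely illustrative, not fitted parameters from the paper) shows how two sub-fields can implement conjunctive (AND-like) feature detection:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class NetworkRF:
    """Minimal network receptive field: K sub-receptive fields whose
    nonlinear outputs combine through an output nonlinearity."""

    def __init__(self, sub_rfs, sub_biases, out_w, out_b):
        self.sub_rfs = sub_rfs        # (K, n_lags, F) sub-receptive fields
        self.sub_biases = sub_biases  # (K,) hidden-unit biases
        self.out_w = out_w            # (K,) hidden-to-output weights
        self.out_b = out_b            # output bias

    def response(self, window):
        """Firing-rate response to one (n_lags, F) spectrogram window."""
        drive = np.tensordot(self.sub_rfs, window, axes=([1, 2], [0, 1]))
        hidden = sigmoid(drive + self.sub_biases)
        return sigmoid(self.out_w @ hidden + self.out_b)
```

With a strongly negative output bias, each hidden unit alone cannot drive the output, so the cell fires only when both sub-field features are present, the conjunctive selectivity described in the abstract.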
Collapse
Affiliation(s)
- Nicol S. Harper
- Dept. of Physiology, Anatomy and Genetics (DPAG), Sherrington Building, University of Oxford, United Kingdom
- Institute of Biomedical Engineering, Department of Engineering Science, Old Road Campus Research Building, University of Oxford, Headington, United Kingdom
| | - Oliver Schoppe
- Dept. of Physiology, Anatomy and Genetics (DPAG), Sherrington Building, University of Oxford, United Kingdom
- Bio-Inspired Information Processing, Technische Universität München, Germany
| | - Ben D. B. Willmore
- Dept. of Physiology, Anatomy and Genetics (DPAG), Sherrington Building, University of Oxford, United Kingdom
| | - Zhanfeng Cui
- Institute of Biomedical Engineering, Department of Engineering Science, Old Road Campus Research Building, University of Oxford, Headington, United Kingdom
| | - Jan W. H. Schnupp
- Dept. of Physiology, Anatomy and Genetics (DPAG), Sherrington Building, University of Oxford, United Kingdom
- Department of Biomedical Science, City University of Hong Kong, Kowloon Tong, Hong Kong
| | - Andrew J. King
- Dept. of Physiology, Anatomy and Genetics (DPAG), Sherrington Building, University of Oxford, United Kingdom
| |
Collapse
|
36
|
Maor I, Shalev A, Mizrahi A. Distinct Spatiotemporal Response Properties of Excitatory Versus Inhibitory Neurons in the Mouse Auditory Cortex. Cereb Cortex 2016; 26:4242-4252. [PMID: 27600839 PMCID: PMC5066836 DOI: 10.1093/cercor/bhw266] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2016] [Revised: 07/05/2016] [Accepted: 08/01/2016] [Indexed: 01/31/2023] Open
Abstract
In the auditory system, early neural stations such as the brain stem are characterized by strict tonotopy, which is used to deconstruct sounds into their component frequencies. But higher along the auditory hierarchy, as early as primary auditory cortex (A1), tonotopy begins to break down in local circuits. Here, we studied the response properties of both excitatory and inhibitory neurons in the auditory cortex of anesthetized mice. We used in vivo two-photon targeted cell-attached recordings from identified parvalbumin-positive neurons (PVNs) and their excitatory pyramidal neighbors (PyrNs). We show that PyrNs are locally heterogeneous as characterized by diverse best frequencies, pairwise signal correlations, and response timing. In marked contrast, neighboring PVNs exhibited homogeneous response properties in pairwise signal correlations and temporal responses. The distinct physiological microarchitecture of the different cell types is maintained qualitatively in response to natural sounds. Excitatory heterogeneity and inhibitory homogeneity within the same circuit suggest different roles for each population in coding natural stimuli.
Collapse
Affiliation(s)
- Ido Maor
- Department of Neurobiology
- The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Edmond J. Safra Campus, Givat Ram Jerusalem 91904, Israel
| | - Amos Shalev
- Department of Neurobiology
- The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Edmond J. Safra Campus, Givat Ram Jerusalem 91904, Israel
| | - Adi Mizrahi
- Department of Neurobiology
- The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Edmond J. Safra Campus, Givat Ram Jerusalem 91904, Israel
| |
Collapse
|
37
|
Deneux T, Kempf A, Daret A, Ponsot E, Bathellier B. Temporal asymmetries in auditory coding and perception reflect multi-layered nonlinearities. Nat Commun 2016; 7:12682. [PMID: 27580932 PMCID: PMC5025791 DOI: 10.1038/ncomms12682] [Citation(s) in RCA: 41] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2016] [Accepted: 07/22/2016] [Indexed: 11/10/2022] Open
Abstract
Sound recognition relies not only on spectral cues, but also on temporal cues, as demonstrated by the profound impact of time reversals on perception of common sounds. To address the coding principles underlying such auditory asymmetries, we recorded a large sample of auditory cortex neurons using two-photon calcium imaging in awake mice, while playing sounds ramping up or down in intensity. We observed clear asymmetries in cortical population responses, including stronger cortical activity for up-ramping sounds, which matches perceptual saliency assessments in mice and previous measures in humans. Analysis of cortical activity patterns revealed that auditory cortex implements a map of spatially clustered neuronal ensembles, detecting specific combinations of spectral and intensity modulation features. Comparing different models, we show that cortical responses result from multi-layered nonlinearities, which, contrary to standard receptive field models of auditory cortex function, build divergent representations of sounds with similar spectral content, but different temporal structure. In humans, sounds that increase in intensity over time (up-ramp) are perceived as louder than down-ramping sounds. Here the authors show that in mice this bias also exists and is reflected in the complex nonlinearities of auditory cortex activity.
Collapse
Affiliation(s)
- Thomas Deneux
- Unité de Neuroscience, Information et Complexité (UNIC), Centre National de la Recherche Scientifique, UPR 3293, F-91198 Gif-sur-Yvette, France
| | - Alexandre Kempf
- Unité de Neuroscience, Information et Complexité (UNIC), Centre National de la Recherche Scientifique, UPR 3293, F-91198 Gif-sur-Yvette, France
| | - Aurélie Daret
- Unité de Neuroscience, Information et Complexité (UNIC), Centre National de la Recherche Scientifique, UPR 3293, F-91198 Gif-sur-Yvette, France
| | - Emmanuel Ponsot
- Institut de Recherche et de Coordination Acoustique/Musique (IRCAM), Centre National de la Recherche Scientifique, UMR 9912, F-75004 Paris, France
| | - Brice Bathellier
- Unité de Neuroscience, Information et Complexité (UNIC), Centre National de la Recherche Scientifique, UPR 3293, F-91198 Gif-sur-Yvette, France
| |
Collapse
|
38
|
Rubin J, Ulanovsky N, Nelken I, Tishby N. The Representation of Prediction Error in Auditory Cortex. PLoS Comput Biol 2016; 12:e1005058. [PMID: 27490251 PMCID: PMC4973877 DOI: 10.1371/journal.pcbi.1005058] [Citation(s) in RCA: 55] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2015] [Accepted: 07/07/2016] [Indexed: 11/19/2022] Open
Abstract
To survive, organisms must extract information from the past that is relevant for their future. How this process is expressed at the neural level remains unclear. We address this problem by developing a novel approach from first principles. We show here how to generate low-complexity representations of the past that produce optimal predictions of future events. We then illustrate this framework by studying the coding of ‘oddball’ sequences in auditory cortex. We find that for many neurons in primary auditory cortex, trial-by-trial fluctuations of neuronal responses correlate with the theoretical prediction error calculated from the short-term past of the stimulation sequence, under constraints on the complexity of the representation of this past sequence. In some neurons, the effect of prediction error accounted for more than 50% of response variability. Reliable predictions often depended on a representation of the sequence of the last ten or more stimuli, although the representation kept only a few details of that sequence.

A crucial aspect of all life is the ability to use past events in order to guide future behavior. To do that, creatures need the ability to predict future events. Indeed, predictability has been shown to affect neuronal responses in many animals and under many conditions. Clearly, the quality of predictions should depend on the amount and detail of the past information used to generate them. Here, by using a basic principle from information theory, we show how to derive explicitly the tradeoff between quality of prediction and complexity of the representation of past information. We then apply these ideas to a concrete case: neuronal responses recorded in auditory cortex during the presentation of oddball sequences, consisting of two tones with varying probabilities. We show that the neuronal responses fit quantitatively the prediction errors of optimal predictors derived from our theory, and use that result in order to deduce the properties of the representations of the past in the auditory system. We conclude that these memory representations have a surprisingly long duration (10 stimuli back or more), but keep relatively little detail about this past. Our theory can be applied widely to other sensory systems.
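The central quantity in this entry, a trial-by-trial prediction error computed from the short-term past of the stimulus sequence, can be illustrated with a toy stand-in. The sketch below uses a simple smoothed k-back count predictor, not the paper's information-bottleneck-derived optimal predictors; `kback_surprise` and all parameter choices are illustrative.

```python
import numpy as np

def kback_surprise(seq, k=3, alpha=1.0):
    """Surprise (-log2 probability) of each stimulus under a simple
    empirical predictor conditioned on the previous k stimuli.
    A toy stand-in for the paper's optimal low-complexity predictors,
    which are derived from an information-bottleneck tradeoff."""
    seq = list(seq)
    symbols = sorted(set(seq))
    counts = {}  # context (tuple of last k stimuli) -> per-symbol Laplace counts
    surprises = []
    for t, s in enumerate(seq):
        ctx = tuple(seq[max(0, t - k):t])
        c = counts.setdefault(ctx, {sym: alpha for sym in symbols})
        surprises.append(-np.log2(c[s] / sum(c.values())))
        c[s] += 1  # update the predictor only after scoring the trial
    return np.array(surprises)

# Oddball sequence: frequent tone 'A' (p = 0.9), rare tone 'B' (p = 0.1);
# rare tones should carry a larger average prediction error.
rng = np.random.default_rng(0)
seq = rng.choice(['A', 'B'], size=500, p=[0.9, 0.1])
surp = kback_surprise(seq, k=3)
```

Longer contexts (larger `k`) trade higher representational complexity for potentially better predictions, which is the tradeoff the paper formalizes.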
Collapse
Affiliation(s)
- Jonathan Rubin
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem, Israel
| | - Nachum Ulanovsky
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
| | - Israel Nelken
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem, Israel
- Department of Neurobiology, Institute of Life Sciences, Hebrew University, Jerusalem, Israel
| | - Naftali Tishby
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem, Israel
- The Benin School of Computer Science and Engineering, Hebrew University, Jerusalem, Israel
| |
Collapse
|
39
|
Joosten ERM, Shamma SA, Lorenzi C, Neri P. Dynamic Reweighting of Auditory Modulation Filters. PLoS Comput Biol 2016; 12:e1005019. [PMID: 27398600 PMCID: PMC4939963 DOI: 10.1371/journal.pcbi.1005019] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2015] [Accepted: 06/13/2016] [Indexed: 11/22/2022] Open
Abstract
Sound waveforms convey information largely via amplitude modulations (AM). A large body of experimental evidence has provided support for a modulation (bandpass) filterbank. Details of this model have varied over time, partly reflecting different experimental conditions and diverse datasets from distinct task strategies, contributing uncertainty to the bandwidth measurements and leaving important issues unresolved. We adopt here a solely data-driven measurement approach in which we first demonstrate how different models can be subsumed within a common 'cascade' framework, and then proceed to characterize the cascade via system identification analysis using a single stimulus/task specification and hence stable task rules largely unconstrained by any model or parameters. Observers were required to detect a brief change in level superimposed onto random level changes that served as AM noise; the relationship between trial-by-trial noisy fluctuations and corresponding human responses enables targeted identification of distinct cascade elements. The resulting measurements exhibit a complex, dynamic picture in which human perception of auditory modulations appears adaptive in nature, evolving from an initial lowpass mode to bandpass modes (with broad tuning, Q∼1) following repeated stimulus exposure.
Collapse
Affiliation(s)
- Eva R. M. Joosten
- Laboratoire Psychologie de la Perception (CNRS UMR 8242) and Université Paris Descartes, Sorbonne Paris Cité, Paris, France
| | - Shihab A. Shamma
- Laboratoire des Systèmes Perceptifs (CNRS UMR 8248) and Département d’études cognitives, Ecole Normale Supérieure, PSL Research University, Paris, France
- Department of Electrical and Computer Engineering, Institute for Systems Research, University of Maryland, College Park, Maryland, United States of America
| | - Christian Lorenzi
- Laboratoire des Systèmes Perceptifs (CNRS UMR 8248) and Département d’études cognitives, Ecole Normale Supérieure, PSL Research University, Paris, France
| | - Peter Neri
- Laboratoire des Systèmes Perceptifs (CNRS UMR 8248) and Département d’études cognitives, Ecole Normale Supérieure, PSL Research University, Paris, France
| |
Collapse
|
40
|
Williamson RS, Ahrens MB, Linden JF, Sahani M. Input-Specific Gain Modulation by Local Sensory Context Shapes Cortical and Thalamic Responses to Complex Sounds. Neuron 2016; 91:467-81. [PMID: 27346532 PMCID: PMC4961224 DOI: 10.1016/j.neuron.2016.05.041] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2015] [Revised: 10/25/2015] [Accepted: 05/12/2016] [Indexed: 01/19/2023]
Abstract
Sensory neurons are customarily characterized by one or more linearly weighted receptive fields describing sensitivity in sensory space and time. We show that in auditory cortical and thalamic neurons, the weight of each receptive field element depends on the pattern of sound falling within a local neighborhood surrounding it in time and frequency. Accounting for this change in effective receptive field with spectrotemporal context improves predictions of both cortical and thalamic responses to stationary complex sounds. Although context dependence varies among neurons and across brain areas, there are strong shared qualitative characteristics. In a spectrotemporally rich soundscape, sound elements modulate neuronal responsiveness more effectively when they coincide with sounds at other frequencies, and less effectively when they are preceded by sounds at similar frequencies. This local-context-driven lability in the representation of complex sounds—a modulation of “input-specific gain” rather than “output gain”—may be a widespread motif in sensory processing.
Highlights:
- Gain of neuronal responses to sound components varies with immediate acoustic context
- “Contextual gain fields” can be estimated from neuronal responses to complex sounds
- Coincident sound at different frequencies boosts gain in cortex and thalamus
- Preceding sound at similar frequency reduces gain for longer in cortex than thalamus
Collapse
Affiliation(s)
- Ross S Williamson
- Gatsby Computational Neuroscience Unit, University College London, London W1T 4JG, UK; Centre for Mathematics and Physics in the Life Sciences and Experimental Biology, University College London, London WC1E 6BT, UK
| | - Misha B Ahrens
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138, USA; Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK
| | - Jennifer F Linden
- Ear Institute, University College London, London WC1X 8EE, UK; Department of Neuroscience, Physiology and Pharmacology, University College London, London WC1E 6BT, UK.
| | - Maneesh Sahani
- Gatsby Computational Neuroscience Unit, University College London, London W1T 4JG, UK.
| |
Collapse
|
41
|
Schoppe O, Harper NS, Willmore BDB, King AJ, Schnupp JWH. Measuring the Performance of Neural Models. Front Comput Neurosci 2016; 10:10. [PMID: 26903851 PMCID: PMC4748266 DOI: 10.3389/fncom.2016.00010] [Citation(s) in RCA: 39] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2015] [Accepted: 01/21/2016] [Indexed: 11/13/2022] Open
Abstract
Good metrics of the performance of a statistical or computational model are essential for model comparison and selection. Here, we address the design of performance metrics for models that aim to predict neural responses to sensory inputs. This is particularly difficult because the responses of sensory neurons are inherently variable, even in response to repeated presentations of identical stimuli. In this situation, standard metrics (such as the correlation coefficient) fail because they do not distinguish between explainable variance (the part of the neural response that is systematically dependent on the stimulus) and response variability (the part of the neural response that is not systematically dependent on the stimulus, and cannot be explained by modeling the stimulus-response relationship). As a result, models which perfectly describe the systematic stimulus-response relationship may appear to perform poorly. Two metrics have previously been proposed which account for this inherent variability: Signal Power Explained (SPE; Sahani and Linden, 2003) and the normalized correlation coefficient (CCnorm; Hsu et al., 2004). Here, we analyze these metrics, and show that they are intimately related. However, SPE has no lower bound, and we show that, even for good models, SPE can yield negative values that are difficult to interpret. CCnorm is better behaved in that it is effectively bounded between -1 and 1, and values below zero are very rare in practice and easy to interpret. However, it was hitherto not possible to calculate CCnorm directly; instead, it was estimated using imprecise and laborious resampling techniques. Here, we identify a new approach that can calculate CCnorm quickly and accurately. As a result, we argue that it is now a better choice of metric than SPE to accurately evaluate the performance of neural models.
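The direct estimate argued for here can be sketched in a few lines: CCabs is the correlation between the prediction and the trial-averaged response, and CCmax is derived from the signal power of Sahani and Linden (2003). This is a reading of the abstract, not the authors' code; the function name and the simulation are illustrative.

```python
import numpy as np

def cc_norm(pred, resp):
    """Normalized correlation coefficient CCnorm = CCabs / CCmax.
    pred: (T,) model prediction; resp: (N, T) responses over N trials."""
    N, _ = resp.shape
    ybar = resp.mean(axis=0)
    cc_abs = np.corrcoef(pred, ybar)[0, 1]
    # Signal power: stimulus-dependent part of the response variance,
    # with trial-to-trial noise removed (Sahani and Linden, 2003).
    sp = (np.var(resp.sum(axis=0), ddof=1)
          - resp.var(axis=1, ddof=1).sum()) / (N * (N - 1))
    cc_max = np.sqrt(sp / np.var(ybar, ddof=1))  # best achievable CCabs
    return cc_abs / cc_max

# Simulated check: a model that perfectly predicts the underlying signal
# should score near 1 despite heavy trial-to-trial noise.
rng = np.random.default_rng(1)
signal = rng.normal(size=2000)
resp = signal + rng.normal(size=(20, 2000))
score = cc_norm(signal, resp)
```

Because CCmax < 1 for noisy data, CCnorm exceeds the raw correlation, crediting the model for variance that no model could explain.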
Collapse
Affiliation(s)
- Oliver Schoppe
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
- Bio-Inspired Information Processing, Technische Universität München, Garching, Germany
| | - Nicol S. Harper
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
| | - Ben D. B. Willmore
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
| | - Andrew J. King
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
| | - Jan W. H. Schnupp
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
| |
Collapse
|
42
|
Willmore BDB, Schoppe O, King AJ, Schnupp JWH, Harper NS. Incorporating Midbrain Adaptation to Mean Sound Level Improves Models of Auditory Cortical Processing. J Neurosci 2016; 36:280-9. [PMID: 26758822 PMCID: PMC4710761 DOI: 10.1523/jneurosci.2441-15.2016] [Citation(s) in RCA: 41] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2015] [Revised: 11/03/2015] [Accepted: 11/10/2015] [Indexed: 11/21/2022] Open
Abstract
Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear-nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately.

SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low as it introduces no new free parameters. Incorporating the adaptive coding properties of neurons will likely improve receptive field models in other sensory modalities too.
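The preprocessing stage described here, per-frequency high-pass filtering followed by half-wave rectification, can be sketched as follows. The exponential-running-mean form of the filter and the example time constants are assumptions for illustration, not the paper's fitted values; the output would then feed a standard linear-nonlinear model.

```python
import numpy as np

def ic_adapt_preprocess(spec, taus, dt=0.005):
    """Sketch of IC-like mean-level adaptation: subtract an exponentially
    weighted running mean (with a frequency-dependent time constant) from
    each spectrogram channel, then half-wave rectify.
    spec: (F, T) spectrogram; taus: (F,) time constants in seconds."""
    F, T = spec.shape
    alpha = dt / taus                  # per-channel smoothing factor (dt << tau)
    mean_level = spec[:, 0].copy()     # running estimate of mean sound level
    out = np.zeros_like(spec, dtype=float)
    for t in range(T):
        mean_level = (1 - alpha) * mean_level + alpha * spec[:, t]
        out[:, t] = np.maximum(spec[:, t] - mean_level, 0.0)
    return out

# A step in sound level drives a transient that decays as channels adapt;
# the fast channel (tau = 50 ms) adapts away faster than the slow one.
taus = np.array([0.05, 0.5])
spec = np.zeros((2, 400))
spec[:, 200:] = 1.0
out = ic_adapt_preprocess(spec, taus)
```

Note the stage adds no free parameters once the time constants are fixed, which is the parsimony argument the abstract makes.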
Collapse
Affiliation(s)
- Ben D B Willmore
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, United Kingdom, and
| | - Oliver Schoppe
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, United Kingdom, and Bio-Inspired Information Processing, Technische Universität München, 85748 Garching, Germany
| | - Andrew J King
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, United Kingdom, and
| | - Jan W H Schnupp
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, United Kingdom, and
| | - Nicol S Harper
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, United Kingdom, and
| |
Collapse
|
43
|
Thorson IL, Liénard J, David SV. The Essential Complexity of Auditory Receptive Fields. PLoS Comput Biol 2015; 11:e1004628. [PMID: 26683490 PMCID: PMC4684325 DOI: 10.1371/journal.pcbi.1004628] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2015] [Accepted: 10/26/2015] [Indexed: 12/05/2022] Open
Abstract
Encoding properties of sensory neurons are commonly modeled using linear finite impulse response (FIR) filters. For the auditory system, the FIR filter is instantiated in the spectro-temporal receptive field (STRF), often in the framework of the generalized linear model. Despite widespread use of the FIR STRF, numerous formulations for linear filters are possible that require many fewer parameters, potentially permitting more efficient and accurate model estimates. To explore these alternative STRF architectures, we recorded single-unit neural activity from auditory cortex of awake ferrets during presentation of natural sound stimuli. We compared performance of > 1000 linear STRF architectures, evaluating their ability to predict neural responses to a novel natural stimulus. Many were able to outperform the FIR filter. Two basic constraints on the architecture lead to the improved performance: (1) factorization of the STRF matrix into a small number of spectral and temporal filters and (2) low-dimensional parameterization of the factorized filters. The best parameterized model was able to outperform the full FIR filter in both primary and secondary auditory cortex, despite requiring fewer than 30 parameters, about 10% of the number required by the FIR filter. After accounting for noise from finite data sampling, these STRFs were able to explain an average of 40% of A1 response variance. The simpler models permitted more straightforward interpretation of sensory tuning properties. They also showed greater benefit from incorporating nonlinear terms, such as short term plasticity, that provide theoretical advances over the linear model. Architectures that minimize parameter count while maintaining maximum predictive power provide insight into the essential degrees of freedom governing auditory cortical function. They also maximize statistical power available for characterizing additional nonlinear properties that limit current auditory models. 
Understanding how the brain solves sensory problems can provide useful insight for the development of automated systems such as speech recognizers and image classifiers. Recent developments in nonlinear regression and machine learning have produced powerful algorithms for characterizing the input-output relationship of complex systems. However, the complexity of sensory neural systems, combined with practical limitations on experimental data, make it difficult to apply arbitrarily complex analyses to neural data. In this study we pushed analysis in the opposite direction, toward simpler models. We asked how simple a model can be while still capturing the essential sensory properties of neurons in auditory cortex. We found that substantially simpler formulations of the widely-used spectro-temporal receptive field are able to perform as well as the best current models. These simpler formulations define new basis sets that can be incorporated into state-of-the-art machine learning algorithms for a more exhaustive exploration of sensory processing.
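The first constraint above, factorizing the STRF into spectral and temporal filters, is easy to make concrete. In the rank-1 case below the full F x L filter is an outer product, so the prediction collapses to a spectral projection followed by a 1-D convolution; the paper's additional low-dimensional parameterization of the factors is omitted, and all names are illustrative.

```python
import numpy as np

def separable_strf_predict(spec, w_freq, w_time):
    """Predicted response of a rank-1 (separable) STRF.
    The full filter W = outer(w_freq, w_time) has F*L entries, but the
    factorization needs only F + L parameters and reduces the 2-D
    filtering to a projection plus a causal 1-D convolution.
    spec: (F, T) stimulus spectrogram; w_freq: (F,); w_time: (L,)."""
    _, T = spec.shape
    x = w_freq @ spec                    # (T,) spectrally weighted stimulus
    r = np.zeros(T)
    for tau, w in enumerate(w_time):     # causal temporal filtering
        r[tau:] += w * x[:T - tau]
    return r

rng = np.random.default_rng(2)
spec = rng.normal(size=(4, 60))
w_freq = rng.normal(size=4)
w_time = rng.normal(size=5)
r = separable_strf_predict(spec, w_freq, w_time)
```

Higher-rank variants simply sum several such separable terms, which is one way the architecture count in the study grows.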
Collapse
Affiliation(s)
- Ivar L. Thorson
- Oregon Hearing Research Center, Oregon Health & Science University, Portland, Oregon, United States of America
| | - Jean Liénard
- Department of Mathematics, Washington State University, Vancouver, Washington, United States of America
| | - Stephen V. David
- Oregon Hearing Research Center, Oregon Health & Science University, Portland, Oregon, United States of America
| |
Collapse
|
44
|
Guo W, Hight AE, Chen JX, Klapoetke NC, Hancock KE, Shinn-Cunningham BG, Boyden ES, Lee DJ, Polley DB. Hearing the light: neural and perceptual encoding of optogenetic stimulation in the central auditory pathway. Sci Rep 2015; 5:10319. [PMID: 26000557 PMCID: PMC4441320 DOI: 10.1038/srep10319] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2015] [Accepted: 04/07/2015] [Indexed: 11/26/2022] Open
Abstract
Optogenetics provides a means to dissect the organization and function of neural circuits. Optogenetics also offers the translational promise of restoring sensation, enabling movement or supplanting abnormal activity patterns in pathological brain circuits. However, the inherent sluggishness of evoked photocurrents in conventional channelrhodopsins has hampered the development of optoprostheses that adequately mimic the rate and timing of natural spike patterning. Here, we explore the feasibility and limitations of a central auditory optoprosthesis by photoactivating mouse auditory midbrain neurons that either express channelrhodopsin-2 (ChR2) or Chronos, a channelrhodopsin with ultra-fast channel kinetics. Chronos-mediated spike fidelity surpassed ChR2 and natural acoustic stimulation to support a superior code for the detection and discrimination of rapid pulse trains. Interestingly, this midbrain coding advantage did not translate to a perceptual advantage, as behavioral detection of midbrain activation was equivalent with both opsins. Auditory cortex recordings revealed that the precisely synchronized midbrain responses had been converted to a simplified rate code that was indistinguishable between opsins and less robust overall than acoustic stimulation. These findings demonstrate the temporal coding benefits that can be realized with next-generation channelrhodopsins, but also highlight the challenge of inducing variegated patterns of forebrain spiking activity that support adaptive perception and behavior.
Collapse
Affiliation(s)
- Wei Guo
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston MA 02114
- Center for Computational Neuroscience and Neural Technology, Boston University, Boston, Massachusetts 02215
| | - Ariel E. Hight
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston MA 02114
- Program in Speech and Hearing Bioscience and Technology, Harvard Medical School (HMS), Boston MA 02115
| | - Jenny X. Chen
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston MA 02114
- New Pathway MD Program, HMS 02115
| | - Nathan C. Klapoetke
- The MIT Media Laboratory, Synthetic Neurobiology Group, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, USA
- Department of Biological Engineering, MIT, Cambridge, Massachusetts, USA
| | - Kenneth E. Hancock
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston MA 02114
- Department of Otology and Laryngology, HMS, Boston MA, 02114
| | - Barbara G. Shinn-Cunningham
- Center for Computational Neuroscience and Neural Technology, Boston University, Boston, Massachusetts 02215
- Department of Biomedical Engineering, Boston University 02215
| | - Edward S. Boyden
- The MIT Media Laboratory, Synthetic Neurobiology Group, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, USA
- Department of Biological Engineering, MIT, Cambridge, Massachusetts, USA
| | - Daniel J. Lee
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston MA 02114
- Department of Otology and Laryngology, HMS, Boston MA, 02114
| | - Daniel B. Polley
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston MA 02114
- Center for Computational Neuroscience and Neural Technology, Boston University, Boston, Massachusetts 02215
- Department of Otology and Laryngology, HMS, Boston MA, 02114
| |
Collapse
|
45
|
Bibikov NG. Some features of the sound-signal envelope extracted by cochlear nucleus neurons in grass frog. Biophysics (Nagoya-shi) 2015. [DOI: 10.1134/s0006350915030045] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
|
46
|
Montejo N, Noreña AJ. Dynamic representation of spectral edges in guinea pig primary auditory cortex. J Neurophysiol 2015; 113:2998-3012. [PMID: 25744885 PMCID: PMC4416612 DOI: 10.1152/jn.00785.2014] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2014] [Accepted: 03/02/2015] [Indexed: 11/22/2022] Open
Abstract
The central representation of a given acoustic motif is thought to be strongly context dependent, i.e., to rely on the spectrotemporal past and present of the acoustic mixture in which it is embedded. The present study investigated the cortical representation of spectral edges (i.e., where stimulus energy changes abruptly over frequency) and its dependence on stimulus duration and depth of the spectral contrast in guinea pig. We devised a stimulus ensemble composed of random tone pips with or without an attenuated frequency band (AFB) of variable depth. Additionally, the multitone ensemble with AFB was interleaved with periods of silence or with multitone ensembles without AFB. We have shown that the representation of the frequencies near but outside the AFB is greatly enhanced, whereas the representation of frequencies near and inside the AFB is strongly suppressed. These cortical changes depend on the depth of the AFB: although they are maximal for the largest depth of the AFB, they are also statistically significant for depths as small as 10 dB. Finally, the cortical changes are quick, occurring within a few seconds of stimulus ensemble presentation with AFB, and are very labile, disappearing within a few seconds after the presentation without AFB. Overall, this study demonstrates that the representation of spectral edges is dynamically enhanced in the auditory centers. These central changes may have important functional implications, particularly in noisy environments where they could contribute to preserving the central representation of spectral edges.
Collapse
Affiliation(s)
- Noelia Montejo
- Laboratoire de Neurosciences Intégratives et Adaptatives, Aix Marseille Université, CNRS UMR 7260, Marseille, France
| | - Arnaud J Noreña
- Laboratoire de Neurosciences Intégratives et Adaptatives, Aix Marseille Université, CNRS UMR 7260, Marseille, France
| |
Collapse
|
47
|
Meyer AF, Diepenbrock JP, Ohl FW, Anemüller J. Temporal variability of spectro-temporal receptive fields in the anesthetized auditory cortex. Front Comput Neurosci 2014; 8:165. [PMID: 25566049 PMCID: PMC4274980 DOI: 10.3389/fncom.2014.00165] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2014] [Accepted: 11/30/2014] [Indexed: 11/13/2022] Open
Abstract
Temporal variability of neuronal response characteristics during sensory stimulation is a ubiquitous phenomenon that may reflect processes such as stimulus-driven adaptation, top-down modulation or spontaneous fluctuations. It poses a challenge to functional characterization methods such as the receptive field, since these often assume stationarity. We propose a novel method for estimation of sensory neurons' receptive fields that extends the classic static linear receptive field model to the time-varying case. Here, the long-term estimate of the static receptive field serves as the mean of a probabilistic prior distribution from which the short-term temporally localized receptive field may deviate stochastically with time-varying standard deviation. The derived corresponding generalized linear model permits robust characterization of temporal variability in receptive field structure also for highly non-Gaussian stimulus ensembles. We computed and analyzed short-term auditory spectro-temporal receptive field (STRF) estimates with characteristic temporal resolution 5-30 s based on model simulations and responses from in total 60 single-unit recordings in anesthetized Mongolian gerbil auditory midbrain and cortex. Stimulation was performed with short (100 ms) overlapping frequency-modulated tones. Results demonstrate identification of time-varying STRFs, with obtained predictive model likelihoods exceeding those from baseline static STRF estimation. Quantitative characterization of STRF variability reveals a higher degree thereof in auditory cortex compared to midbrain. Cluster analysis indicates that significant deviations from the long-term static STRF are brief, but reliably estimated. We hypothesize that the observed variability more likely reflects spontaneous or state-dependent internal fluctuations that interact with stimulus-induced processing, rather than experimental or stimulus design.
Collapse
Affiliation(s)
- Arne F Meyer
- Medizinische Physik and Cluster of Excellence Hearing4all, Department of Medical Physics and Acoustics, Carl von Ossietzky University Oldenburg, Germany
| | - Jan-Philipp Diepenbrock
- Department of Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany
| | - Frank W Ohl
- Department of Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany; Department of Neuroprosthetics, Institute of Biology, Otto-von-Guericke University, Magdeburg, Germany
| | - Jörn Anemüller
- Medizinische Physik and Cluster of Excellence Hearing4all, Department of Medical Physics and Acoustics, Carl von Ossietzky University Oldenburg, Germany
| |
Collapse
|
48
|
Willmore BDB, Cooke JE, King AJ. Hearing in noisy environments: noise invariance and contrast gain control. J Physiol 2014; 592:3371-81. [PMID: 24907308 DOI: 10.1113/jphysiol.2014.274886] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023] Open
Abstract
Contrast gain control has recently been identified as a fundamental property of the auditory system. Electrophysiological recordings in ferrets have shown that neurons continuously adjust their gain (their sensitivity to change in sound level) in response to the contrast of sounds that are heard. At the level of the auditory cortex, these gain changes partly compensate for changes in sound contrast. This means that sounds which are structurally similar, but have different contrasts, have similar neuronal representations in the auditory cortex. As a result, the cortical representation is relatively invariant to stimulus contrast and robust to the presence of noise in the stimulus. In the inferior colliculus (an important subcortical auditory structure), gain changes are less reliably compensatory, suggesting that contrast- and noise-invariant representations are constructed gradually as one ascends the auditory pathway. In addition to noise invariance, contrast gain control provides a variety of computational advantages over static neuronal representations; it makes efficient use of neuronal dynamic range, may contribute to redundancy-reducing, sparse codes for sound and allows for simpler decoding of population responses. The circuits underlying auditory contrast gain control are still under investigation. As in the visual system, these circuits may be modulated by factors other than stimulus contrast, forming a potential neural substrate for mediating the effects of attention as well as interactions between the senses.
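The compensatory logic described in this review can be caricatured in a few lines: divide the mean-subtracted input by an estimate of its recent contrast, so stimuli differing only in contrast produce similar outputs. This is a purely illustrative toy, not the fitted gain-control models from the recordings the review discusses; `contrast_normalize` and its parameters are invented for the sketch.

```python
import numpy as np

def contrast_normalize(level, win=100, c0=0.1):
    """Toy divisive contrast gain control: scale each sample by the
    inverse of the local level contrast (standard deviation over a
    trailing window), with a small constant c0 preventing blow-up."""
    out = np.empty(len(level))
    for t in range(len(level)):
        seg = level[max(0, t - win):t + 1]
        out[t] = (level[t] - seg.mean()) / (c0 + seg.std())
    return out

# Two inputs identical up to a 4x contrast difference end up with
# nearly matched output ranges after normalization.
rng = np.random.default_rng(3)
lo = rng.normal(size=2000)
hi = 4.0 * lo
out_lo = contrast_normalize(lo)
out_hi = contrast_normalize(hi)
```

The divisive form also illustrates the dynamic-range argument: whatever the input contrast, the output occupies roughly the same range.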
Collapse
Affiliation(s)
- Ben D B Willmore
- Department of Physiology, Anatomy and Genetics, University of Oxford, Sherrington Building, Parks Road, Oxford, OX1 3PT, UK
| | - James E Cooke
- Department of Physiology, Anatomy and Genetics, University of Oxford, Sherrington Building, Parks Road, Oxford, OX1 3PT, UK
| | - Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Sherrington Building, Parks Road, Oxford, OX1 3PT, UK
| |
Collapse
|
49
|
A new and fast characterization of multiple encoding properties of auditory neurons. Brain Topogr 2014; 28:379-400. [PMID: 24869676 DOI: 10.1007/s10548-014-0375-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2013] [Accepted: 05/07/2014] [Indexed: 10/25/2022]
Abstract
The functional properties of auditory cortex neurons are most often investigated separately, through spectrotemporal receptive fields (STRFs) for frequency tuning and through frequency-sweep sounds for selectivity to velocity and direction. In fact, auditory neurons are sensitive to a multidimensional space of acoustic parameters where spectral, temporal and spatial dimensions interact. We designed a multi-parameter stimulus, the random double sweep (RDS), composed of two uncorrelated random sweeps, which gives easy, fast, and simultaneous access to frequency tuning as well as to frequency-modulation sweep direction and velocity selectivity, frequency interactions, and temporal properties of neurons. Reverse correlation techniques applied to recordings from the primary auditory cortex of guinea pigs and rats in response to RDS stimulation revealed the variety of temporal dynamics of acoustic patterns evoking an enhanced or suppressed firing rate. Group results in these two species revealed less frequent suppression areas in frequency-tuning STRFs, an absence of downward sweep selectivity, and lower phase-locking abilities in the auditory cortex of rats compared to guinea pigs.
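The reverse-correlation step mentioned here is, in its simplest form, a spike-triggered average. The sketch below shows that form on a one-dimensional Gaussian stimulus; the paper's RDS analysis applies the same principle to two uncorrelated random sweeps, and every name and parameter here is illustrative.

```python
import numpy as np

def spike_triggered_average(stim, spikes, n_lags):
    """Reverse correlation: average the n_lags stimulus samples
    preceding each spike. For Gaussian-like stimuli this recovers
    the linear filter driving the cell (up to scale)."""
    sta = np.zeros(n_lags)
    ts = np.flatnonzero(spikes)
    ts = ts[ts >= n_lags]          # need a full stimulus history
    for t in ts:
        sta += stim[t - n_lags:t]
    return sta / max(len(ts), 1)

# Simulated cell: it spikes when a known 5-tap filter's output exceeds
# a threshold; the STA should recover that filter's shape.
rng = np.random.default_rng(4)
stim = rng.normal(size=20000)
k = np.array([0.0, 0.2, 0.5, 1.0, 0.5])
spikes = np.zeros(len(stim))
for t in range(len(k), len(stim)):
    spikes[t] = float(k @ stim[t - len(k):t] > 1.5)
sta = spike_triggered_average(stim, spikes, len(k))
```

With a spectrogram stimulus the same averaging over time-frequency patches yields the STRF directly.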
Collapse
|
50
|
Gold JR, Bajo VM. Insult-induced adaptive plasticity of the auditory system. Front Neurosci 2014; 8:110. [PMID: 24904256 PMCID: PMC4033160 DOI: 10.3389/fnins.2014.00110] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2014] [Accepted: 04/28/2014] [Indexed: 01/10/2023] Open
Abstract
The brain displays a remarkable capacity for both widespread and region-specific modifications in response to environmental challenges, with adaptive processes bringing about the reweighting of connections in neural networks putatively required for optimizing performance and behavior. As an avenue for investigation, studies centered around changes in the mammalian auditory system, extending from the brainstem to the cortex, have revealed a plethora of mechanisms that operate in the context of sensory disruption after insult, be it lesion-, noise trauma-, drug-, or age-related. Of particular interest in recent work are those aspects of auditory processing which, after sensory disruption, change at multiple—if not all—levels of the auditory hierarchy. These include changes in excitatory, inhibitory and neuromodulatory networks, consistent with theories of homeostatic plasticity; functional alterations in gene expression and in protein levels; as well as broader network processing effects with cognitive and behavioral implications. Nevertheless, substantial debate remains regarding which of these processes may only be sequelae of the original insult, and which may, in fact, be maladaptively compelling further degradation of the organism's competence to cope with its disrupted sensory context. In this review, we aim to examine how the mammalian auditory system responds in the wake of particular insults, and to disambiguate how the changes that develop might underlie a correlated class of phantom disorders, including tinnitus and hyperacusis, which putatively are brought about through maladaptive neuroplastic disruptions to auditory networks governing the spatial and temporal processing of acoustic sensory information.
Collapse
Affiliation(s)
- Joshua R Gold
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK
| | - Victoria M Bajo
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK
| |
Collapse
|