1
van den Berg MM, Wong AB, Houtak G, Williamson RS, Borst JGG. Sodium salicylate improves detection of amplitude-modulated sound in mice. iScience 2024; 27:109691. PMID: 38736549; PMCID: PMC11088340; DOI: 10.1016/j.isci.2024.109691.
Abstract
Salicylate is commonly used to induce tinnitus in animals, but its underlying mechanism of action is still debated. We therefore tested its effects on the firing properties of neurons in the mouse inferior colliculus (IC). Salicylate induced a large decrease in spontaneous activity and an increase of ∼20 dB SPL in the minimum threshold of single units. In response to sinusoidally amplitude-modulated (SAM) noise, single units showed both an increase in phase locking and improved rate coding. Mice also became better at detecting amplitude modulations, and a simple threshold model based on the IC population response could reproduce this improvement. The responses to dynamic random chords (DRCs) suggested that the improved AM encoding was due to a linearization of the cochlear output, resulting in larger contrasts during SAM noise. These effects of salicylate are not consistent with the presence of tinnitus, but they should be taken into account when studying hyperacusis.
Affiliation(s)
- Maurits M. van den Berg
- Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, NL-3015 GD Rotterdam, the Netherlands
- Aaron B. Wong
- Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, NL-3015 GD Rotterdam, the Netherlands
- Ghais Houtak
- Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, NL-3015 GD Rotterdam, the Netherlands
- Ross S. Williamson
- Pittsburgh Hearing Research Center, Department of Otolaryngology, University of Pittsburgh, Pittsburgh, PA 15213, USA
- J. Gerard G. Borst
- Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, NL-3015 GD Rotterdam, the Netherlands
2
Ding SS, Fox JL, Gordus A, Joshi A, Liao JC, Scholz M. Fantastic beasts and how to study them: rethinking experimental animal behavior. J Exp Biol 2024; 227:jeb247003. PMID: 38372042; PMCID: PMC10911175; DOI: 10.1242/jeb.247003.
Abstract
Humans have been trying to understand animal behavior at least since the beginning of recorded history. The recent rapid development of new technologies has allowed us to make significant progress in understanding the physiological and molecular mechanisms underlying behavior, a key goal of neuroethology. However, there is a tradeoff when studying animal behavior and its underlying biological mechanisms: common behavior protocols in the laboratory are designed to be replicable and controlled, but they often fail to encompass the variability and breadth of natural behavior. This Commentary proposes a framework of 10 key questions that aim to guide researchers in incorporating a rich natural context into their experimental design or in choosing a new animal study system. The 10 questions cover overarching experimental considerations that can provide a template for interspecies comparisons, help us develop studies in new model organisms, and unlock new experiments in our quest to understand behavior.
Affiliation(s)
- Siyu Serena Ding
- Max Planck Institute of Animal Behavior, 78464 Konstanz, Germany
- Centre for the Advanced Study of Collective Behaviour, University of Konstanz, 78464 Konstanz, Germany
- Jessica L. Fox
- Department of Biology, Case Western Reserve University, Cleveland, OH 44106, USA
- Andrew Gordus
- Department of Biology, Johns Hopkins University, Baltimore, MD 21218, USA
- Abhilasha Joshi
- Departments of Physiology and Psychiatry, University of California, San Francisco, CA 94158, USA
- James C. Liao
- Department of Biology, The Whitney Laboratory for Marine Bioscience, University of Florida, St. Augustine, FL 32080, USA
- Monika Scholz
- Max Planck Research Group Neural Information Flow, Max Planck Institute for Neurobiology of Behavior – caesar, 53175 Bonn, Germany
3
Weng G, Clark K, Akbarian A, Noudoost B, Nategh N. Time-varying generalized linear models: characterizing and decoding neuronal dynamics in higher visual areas. Front Comput Neurosci 2024; 18:1273053. PMID: 38348287; PMCID: PMC10859875; DOI: 10.3389/fncom.2024.1273053.
Abstract
To create a behaviorally relevant representation of the visual world, neurons in higher visual areas exhibit dynamic response changes to account for the time-varying interactions between external (e.g., visual input) and internal (e.g., reward value) factors. The resulting high-dimensional representational space poses challenges for precisely quantifying individual factors' contributions to the representation and readout of sensory information during a behavior. The widely used point process generalized linear model (GLM) approach provides a powerful framework for a quantitative description of neuronal processing as a function of various sensory and non-sensory inputs (encoding), as well as for linking particular response components to particular behaviors (decoding), at the level of single trials and individual neurons. However, most existing variations of GLMs assume the neural system to be time-invariant, making them inadequate for modeling the nonstationary characteristics of neuronal sensitivity in higher visual areas. In this review, we summarize some of the existing GLM variations, with a focus on time-varying extensions. We highlight their applications to understanding neural representations in higher visual areas, to decoding transient neuronal sensitivity, and to linking physiology to behavior through manipulation of model components. This time-varying class of statistical models provides valuable insight into the neural basis of various visual behaviors in higher visual areas and holds significant potential for uncovering the fundamental computational principles that govern neuronal processing in other regions of the brain.
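As a toy illustration of the time-varying extension discussed above, the sketch below contrasts a standard Poisson GLM, whose stimulus filter is fixed, with one whose filter gain drifts over the course of a trial. All variable names and numbers are hypothetical, not taken from the review.

```python
import numpy as np

rng = np.random.default_rng(0)

# A standard point-process GLM uses one fixed stimulus filter; the
# time-varying extension lets the filter (here, just its gain) change
# across the trial. T time bins, K filter lags.
T, K = 200, 10
stim = rng.standard_normal(T)               # 1-D stimulus trace

w_static = np.exp(-np.arange(K) / 3.0)      # fixed temporal kernel
gain = np.linspace(0.5, 2.0, T)             # slow within-trial gain change
w_dynamic = gain[:, None] * w_static[None, :]   # shape (T, K): one kernel per bin

# Design matrix of lagged stimulus values
X = np.zeros((T, K))
for k in range(K):
    X[k:, k] = stim[: T - k]

b = -1.0                                     # baseline log-rate
rate_static = np.exp(b + X @ w_static)       # time-invariant GLM rate
rate_dynamic = np.exp(b + np.sum(X * w_dynamic, axis=1))  # time-varying rate

spikes = rng.poisson(rate_dynamic)           # Poisson spike counts
```

In a real fit the per-bin kernels would of course be estimated from data (e.g., with smoothness constraints across trial time) rather than imposed.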
Affiliation(s)
- Geyu Weng
- Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, United States
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Kelsey Clark
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Amir Akbarian
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Behrad Noudoost
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Neda Nategh
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT, United States
4
López Espejo M, David SV. A sparse code for natural sound context in auditory cortex. Curr Res Neurobiol 2023; 6:100118. PMID: 38152461; PMCID: PMC10749876; DOI: 10.1016/j.crneur.2023.100118.
Abstract
Accurate sound perception can require integrating information over hundreds of milliseconds or even seconds. Spectro-temporal models of sound coding by single neurons in auditory cortex indicate that the majority of sound-evoked activity can be attributed to stimulus features within the preceding few tens of milliseconds. It remains uncertain how the auditory system integrates information about sensory context on a longer timescale. Here we characterized long-lasting contextual effects in auditory cortex (AC) using a diverse set of natural sound stimuli. We measured context effects as the difference in a neuron's response to a single probe sound following two different context sounds. Many AC neurons showed context effects lasting longer than the temporal window of a traditional spectro-temporal receptive field. The duration and magnitude of context effects varied substantially across neurons and stimuli. This diversity of context effects formed a sparse code across the neural population that encoded a wider range of contexts than any constituent neuron. Encoding model analysis indicates that context effects can be explained by activity in the local neural population, suggesting that recurrent local circuits support a long-lasting representation of sensory context in auditory cortex.
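The context-effect measure described above (the difference in response to one probe sound after two different context sounds) can be sketched as follows. The spike counts and the suppression profile are purely simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Trial-by-trial spike counts to the SAME probe sound, following two
# different context sounds. In this simulation, context B suppresses
# the early part of the probe response (first 10 bins).
n_trials, n_bins = 50, 40
probe_after_a = rng.poisson(5.0, size=(n_trials, n_bins))

lam = np.full(n_bins, 5.0)
lam[:10] = 2.0                               # simulated context-driven suppression
probe_after_b = rng.poisson(lam, size=(n_trials, n_bins))

# Context effect: difference between the trial-averaged probe PSTHs
psth_a = probe_after_a.mean(axis=0)
psth_b = probe_after_b.mean(axis=0)
context_effect = psth_a - psth_b

# Scalar summary: integrated absolute difference, normalized
effect_size = np.abs(context_effect).sum() / psth_a.sum()
```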
Affiliation(s)
- Mateo López Espejo
- Neuroscience Graduate Program, Oregon Health & Science University, Portland, OR, USA
- Stephen V. David
- Otolaryngology, Oregon Health & Science University, Portland, OR, USA
5
Steinschneider M. Toward an understanding of vowel encoding in the human auditory cortex. Neuron 2023; 111:1995-1997. PMID: 37413966; DOI: 10.1016/j.neuron.2023.06.004.
Abstract
In this issue of Neuron, Oganian et al. performed intracranial recordings in the auditory cortex of human subjects to clarify how vowels are encoded by the brain. Formant-based tuning curves demonstrated the organization of vowel encoding. The need for population codes and the demonstration of speaker normalization were emphasized.
6
Sadagopan S, Kar M, Parida S. Quantitative models of auditory cortical processing. Hear Res 2023; 429:108697. PMID: 36696724; PMCID: PMC9928778; DOI: 10.1016/j.heares.2023.108697.
Abstract
To generate insight from experimental data, it is critical to understand the inter-relationships between individual data points and place them in context within a structured framework. Quantitative modeling can provide the scaffolding for such an endeavor. Our main objective in this review is to provide a primer on the range of quantitative tools available to experimental auditory neuroscientists. Quantitative modeling is advantageous because it can provide a compact summary of observed data, make underlying assumptions explicit, and generate predictions for future experiments. Quantitative models may be developed to characterize or fit observed data, to test theories of how a task may be solved by neural circuits, to determine how observed biophysical details might contribute to measured activity patterns, or to predict how an experimental manipulation would affect neural activity. In complexity, quantitative models can range from those that are highly biophysically realistic and that include detailed simulations at the level of individual synapses, to those that use abstract and simplified neuron models to simulate entire networks. Here, we survey the landscape of recently developed models of auditory cortical processing, highlighting a small selection of models to demonstrate how they help generate insight into the mechanisms of auditory processing. We discuss examples ranging from models that use details of synaptic properties to explain the temporal pattern of cortical responses to those that use modern deep neural networks to gain insight into human fMRI data. We conclude by discussing a biologically realistic and interpretable model that our laboratory has developed to explore aspects of vocalization categorization in the auditory pathway.
Affiliation(s)
- Srivatsun Sadagopan
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
- Manaswini Kar
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- Satyabrata Parida
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
7
Riad R, Karadayi J, Bachoud-Lévi AC, Dupoux E. Learning spectro-temporal representations of complex sounds with parameterized neural networks. J Acoust Soc Am 2021; 150:353. PMID: 34340514; DOI: 10.1121/10.0005482.
Abstract
Deep learning models have become potential candidates for auditory neuroscience research, thanks to their recent successes in a variety of auditory tasks, yet these models often lack the interpretability needed to fully understand the exact computations they perform. Here, we proposed a parametrized neural network layer that computes specific spectro-temporal modulations based on Gabor filters [learnable spectro-temporal filters (STRFs)] and is fully interpretable. We evaluated this layer on speech activity detection, speaker verification, urban sound classification, and zebra finch call type classification. We found that models based on learnable STRFs are on par with the state of the art on all tasks and obtain the best performance for speech activity detection. As this layer remains a Gabor filter, it is fully interpretable, and we used quantitative measures to describe the distribution of the learned spectro-temporal modulations. Filters adapted to each task and focused mostly on low temporal and spectral modulations. The analyses show that the filters learned on human speech have spectro-temporal parameters similar to those measured directly in the human auditory cortex. Finally, we observed that the tasks were organized in a meaningful way: the human vocalization tasks lay close to each other, while bird vocalizations lay far from both the human vocalization and urban sound tasks.
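A minimal sketch of the kind of spectro-temporal Gabor kernel such a learnable-STRF layer parameterizes: a 2-D Gaussian envelope multiplied by a cosine carrier with a temporal modulation rate and a spectral modulation scale. Parameter names and values here are illustrative, not the paper's.

```python
import numpy as np

def gabor_strf(n_t=32, n_f=32, dt=0.005, df=0.125,
               rate=8.0, scale=1.0, sigma_t=0.03, sigma_f=0.5):
    """2-D Gabor kernel over time (s) and frequency (octaves):
    Gaussian envelope times a cosine carrier with temporal
    modulation `rate` (Hz) and spectral modulation `scale` (cyc/oct)."""
    t = (np.arange(n_t) - n_t // 2) * dt
    f = (np.arange(n_f) - n_f // 2) * df
    tt, ff = np.meshgrid(t, f, indexing="ij")
    envelope = np.exp(-0.5 * (tt / sigma_t) ** 2 - 0.5 * (ff / sigma_f) ** 2)
    carrier = np.cos(2.0 * np.pi * (rate * tt + scale * ff))
    return envelope * carrier          # shape (n_t, n_f)

strf = gabor_strf()
# Applying the filter to a spectrogram amounts to a 2-D correlation
# with this kernel; in the learnable version, rate/scale/sigmas are
# the trainable parameters.
```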
Affiliation(s)
- Rachid Riad
- Ecole des Hautes Etudes en Sciences Sociales, CNRS, Institut National de Recherche informatique et Automatique, Département d'Études Cognitives, Ecole Normale Supérieure-Paris Sciences et Lettres University, 29 Rue d'Ulm, 75005 Paris, France
- Julien Karadayi
- Ecole des Hautes Etudes en Sciences Sociales, CNRS, Institut National de Recherche informatique et Automatique, Département d'Études Cognitives, Ecole Normale Supérieure-Paris Sciences et Lettres University, 29 Rue d'Ulm, 75005 Paris, France
- Anne-Catherine Bachoud-Lévi
- NeuroPsychologie Interventionnelle, Département d'Études Cognitives, Ecole Normale Supérieure, Institut National de la Santé et de la Recherche Médicale, Institut Mondor de Recherche Biomédicale, Neuratris, Université Paris-Est Créteil, Paris Sciences et Lettres University, 29 Rue d'Ulm, 75005 Paris, France
- Emmanuel Dupoux
- Ecole des Hautes Etudes en Sciences Sociales, CNRS, Institut National de Recherche informatique et Automatique, Département d'Études Cognitives, Ecole Normale Supérieure-Paris Sciences et Lettres University, 29 Rue d'Ulm, 75005 Paris, France
8
Gothner T, Gonçalves PJ, Sahani M, Linden JF, Hildebrandt KJ. Sustained Activation of PV+ Interneurons in Core Auditory Cortex Enables Robust Divisive Gain Control for Complex and Naturalistic Stimuli. Cereb Cortex 2021; 31:2364-2381. PMID: 33300581; DOI: 10.1093/cercor/bhaa347.
Abstract
Sensory cortices must flexibly adapt their operations to internal states and external requirements. Sustained modulation of activity levels in different inhibitory interneuron populations may provide network-level mechanisms for adjustment of sensory cortical processing on behaviorally relevant timescales. However, understanding of the computational roles of inhibitory interneuron modulation has mostly been restricted to effects at short timescales, through the use of phasic optogenetic activation and transient stimuli. Here, we investigated how modulation of inhibitory interneurons affects cortical computation on longer timescales, by using sustained, network-wide optogenetic activation of parvalbumin-positive interneurons (the largest class of cortical inhibitory interneurons) to study modulation of auditory cortical responses to prolonged and naturalistic as well as transient stimuli. We found highly conserved spectral and temporal tuning in auditory cortical neurons, despite a profound reduction in overall network activity. This reduction was predominantly divisive, and consistent across simple, complex, and naturalistic stimuli. A recurrent network model with power-law input-output functions replicated our results. We conclude that modulation of parvalbumin-positive interneurons on timescales typical of sustained neuromodulation may provide a means for robust divisive gain control conserving stimulus representations.
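The divisive (as opposed to subtractive) character of the gain control reported above can be illustrated with a toy tuning curve: divisive scaling multiplies the whole curve, preserving its normalized shape, while subtractive inhibition shifts it down and truncates its flanks. All numbers here are arbitrary.

```python
import numpy as np

# Toy tuning curve: Gaussian frequency tuning peaking at 40 spikes/s.
freqs = np.linspace(0.0, 1.0, 101)
tuning = 40.0 * np.exp(-0.5 * ((freqs - 0.5) / 0.1) ** 2)

# Divisive gain control scales the curve multiplicatively, leaving the
# preferred frequency and normalized tuning shape unchanged ...
g = 0.4
divisive = g * tuning

# ... whereas subtractive inhibition shifts the curve down and clips
# its flanks at zero, narrowing the response area.
subtractive = np.clip(tuning - 15.0, 0.0, None)
```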
Affiliation(s)
- Tina Gothner
- Department of Neuroscience, University of Oldenburg, 26126 Oldenburg, Germany
- Pedro J Gonçalves
- Max Planck Research Group Neural Systems Analysis, Center of Advanced European Studies and Research (CAESAR), 53175 Bonn, Germany
- Gatsby Computational Neuroscience Unit, University College London, London W1T 4JG, UK
- Maneesh Sahani
- Gatsby Computational Neuroscience Unit, University College London, London W1T 4JG, UK
- Jennifer F Linden
- Ear Institute, University College London, London WC1X 8EE, UK
- Department of Neuroscience, Physiology, and Pharmacology, University College London, London WC1E 6BT, UK
- K Jannis Hildebrandt
- Department of Neuroscience, University of Oldenburg, 26126 Oldenburg, Germany
- Cluster of Excellence Hearing4all, University of Oldenburg, 26126 Oldenburg, Germany
9
Bondanelli G, Deneux T, Bathellier B, Ostojic S. Network dynamics underlying OFF responses in the auditory cortex. eLife 2021; 10:e53151. PMID: 33759763; PMCID: PMC8057817; DOI: 10.7554/elife.53151.
Abstract
Across sensory systems, complex spatio-temporal patterns of neural activity arise following the onset (ON) and offset (OFF) of stimuli. While ON responses have been widely studied, the mechanisms generating OFF responses in cortical areas have so far not been fully elucidated. We examine here the hypothesis that OFF responses are single-cell signatures of recurrent interactions at the network level. To test this hypothesis, we performed population analyses of two-photon calcium recordings in the auditory cortex of awake mice listening to auditory stimuli, and compared them to linear single-cell and network models. While the single-cell model explained some prominent features of the data, it could not capture the structure across stimuli and trials. In contrast, the network model accounted for the low-dimensional organization of population responses and their global structure across stimuli, where distinct stimuli activated mostly orthogonal dimensions in the neural state-space.
Affiliation(s)
- Giulio Bondanelli
- Laboratoire de Neurosciences Cognitives et Computationnelles, Département d'études cognitives, ENS, PSL University, INSERM, Paris, France
- Neural Computation Laboratory, Center for Human Technologies, Istituto Italiano di Tecnologia (IIT), Genoa, Italy
- Thomas Deneux
- Département de Neurosciences Intégratives et Computationnelles (ICN), Institut des Neurosciences Paris-Saclay (NeuroPSI), UMR 9197 CNRS, Université Paris-Sud, Gif-sur-Yvette, France
- Brice Bathellier
- Département de Neurosciences Intégratives et Computationnelles (ICN), Institut des Neurosciences Paris-Saclay (NeuroPSI), UMR 9197 CNRS, Université Paris-Sud, Gif-sur-Yvette, France
- Institut Pasteur, INSERM, Institut de l'Audition, Paris, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, Département d'études cognitives, ENS, PSL University, INSERM, Paris, France
10
Pennington JR, David SV. Complementary Effects of Adaptation and Gain Control on Sound Encoding in Primary Auditory Cortex. eNeuro 2020; 7:ENEURO.0205-20.2020. PMID: 33109632; PMCID: PMC7675144; DOI: 10.1523/eneuro.0205-20.2020.
Abstract
An important step toward understanding how the brain represents complex natural sounds is to develop accurate models of auditory coding by single neurons. A commonly used model is the linear-nonlinear spectro-temporal receptive field (STRF; LN model). The LN model accounts for many features of auditory tuning, but it cannot account for long-lasting effects of sensory context on sound-evoked activity. Two mechanisms that may support these contextual effects are short-term plasticity (STP) and contrast-dependent gain control (GC), which have inspired expanded versions of the LN model. Both models improve performance over the LN model, but they have never been compared directly. Thus, it is unclear whether they account for distinct processes or describe one phenomenon in different ways. To address this question, we recorded activity of neurons in primary auditory cortex (A1) of awake ferrets during presentation of natural sounds. We then fit models incorporating one nonlinear mechanism (GC or STP) or both (GC+STP) using this single dataset, and measured the correlation between the models' predictions and the recorded neural activity. Both the STP and GC models performed significantly better than the LN model, but the GC+STP model outperformed both individual models. We also quantified the equivalence of STP and GC model predictions and found only modest similarity. Consistent results were observed for a dataset collected in clean and noisy acoustic contexts. These results establish general methods for evaluating the equivalence of arbitrarily complex encoding models and suggest that the STP and GC models describe complementary processes in the auditory system.
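A minimal sketch of the LN prediction step and the prediction-correlation metric used above to compare models. The filter and responses are simulated stand-ins for the recorded data, and the threshold nonlinearity is one simple choice of static nonlinearity.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: T time bins, K lagged spectrogram features per bin.
T, K = 500, 15
X = rng.standard_normal((T, K))
w = 0.3 * rng.standard_normal(K)             # stand-in "STRF" weights

def ln_predict(X, w, threshold=0.0):
    lin = X @ w                               # linear (STRF) stage
    return np.maximum(lin - threshold, 0.0)   # static rectifying nonlinearity

pred = ln_predict(X, w)
observed = pred + 0.5 * rng.standard_normal(T)   # simulated noisy responses

# Competing models (LN, GC, STP, GC+STP) are scored by the correlation
# between their predictions and the recorded activity:
r = np.corrcoef(pred, observed)[0, 1]
```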
Affiliation(s)
- Jacob R Pennington
- Department of Mathematics, Washington State University, Vancouver, WA, 98686
- Stephen V David
- Department of Otolaryngology, Oregon Health and Science University, Portland, OR, 97239
11
Qureshi F, Yan J. Three dimensional rendering of auditory neuronal responses: A novel illustration of receptive field across frequency, intensity & time domains. J Neurosci Methods 2020; 338:108682. PMID: 32165230; DOI: 10.1016/j.jneumeth.2020.108682.
Abstract
BACKGROUND: Neural coding of sound information is often studied through the frequency tuning curve (FTC), spectro-temporal receptive field (STRF), post-stimulus time histogram (PSTH), and other methods such as rate functions. These methods, despite providing a robust characterization of auditory responses in their specific domains, lack a complete description in terms of the three sound fundamentals: frequency, amplitude, and time.
NEW METHOD: Using techniques from electrophysiology, neural signal processing, and medical image processing, a standalone method was created to illustrate the neural processing of the three sound fundamentals in one representation.
RESULTS: The new method comprehensively showed frequency tuning, intensity tuning, and time tuning, as well as a novel representation of frequency- and time-dependent intensity coding. It provides most of the parameters used to quantify neural response properties, such as minimum threshold (MT), frequency tuning, latency, best frequency (BF), characteristic frequency (CF), and bandwidth (BW).
COMPARISON WITH EXISTING METHODS: Our method shows neural responses as a function of all three sound fundamentals in a single representation, which was not possible with previous methods. It covers many functions of the conventional methods and allows extracting novel information, such as intensity coding as a function of the spectrotemporal response area of auditory neurons.
CONCLUSION: This method can be used as a standalone package to study auditory neural responses and to evaluate the performance of hearing-related devices such as cochlear implants and hearing aids in animal models, as well as to study and compare auditory processing in aged and hearing-impaired animal models.
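The three-dimensional representation described above can be sketched as a single spike-count volume over frequency, intensity, and time, from which the conventional views (response area, PSTH, tuning curve) are simple axis reductions. The data here are simulated and the variable names illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# One simulated neuron: spike counts binned jointly over stimulus
# frequency, stimulus intensity (level), and post-stimulus time.
n_freq, n_level, n_time = 20, 8, 50
response = rng.poisson(2.0, size=(n_freq, n_level, n_time)).astype(float)

# Conventional characterizations fall out as reductions of the volume:
fra = response.sum(axis=2)            # frequency response area (freq x level)
psth = response.sum(axis=(0, 1))      # post-stimulus time histogram
ftc_like = response.sum(axis=(1, 2))  # activity as a function of frequency
```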
Affiliation(s)
- Farhad Qureshi
- Department of Physiology and Pharmacology, Cumming School of Medicine, University of Calgary, 3330 Hospital Drive NW, Calgary, Alberta, T2N 4N1, Canada
- Jun Yan
- Department of Physiology and Pharmacology, Cumming School of Medicine, University of Calgary, 3330 Hospital Drive NW, Calgary, Alberta, T2N 4N1, Canada
12
Gourévitch B, Martin C, Postal O, Eggermont JJ. Oscillations in the auditory system and their possible role. Neurosci Biobehav Rev 2020; 113:507-528. PMID: 32298712; DOI: 10.1016/j.neubiorev.2020.03.030.
Abstract
Neural oscillations are thought to have various roles in brain processing, such as attention modulation, neuronal communication, motor coordination, memory consolidation, decision-making, and feature binding. The role of oscillations in the auditory system is less clear, especially given the large discrepancy between human and animal studies. Here we describe several methodological issues that confound the results of oscillation studies in the auditory field. Moreover, we discuss the relationship between neural entrainment and oscillations, which remains unclear. Finally, we aim to identify which kinds of oscillations may be specific or salient to the auditory areas and their processing. We suggest that the role of oscillations might differ dramatically between the primary auditory cortex and the more associative auditory areas. Despite the moderate presence of intrinsic low-frequency oscillations in the primary auditory cortex, rhythmic components in the input seem crucial for auditory processing. This allows entrainment between the oscillatory phase and the rhythmic input, which is an integral part of stimulus selection within the auditory system.
13
Shih JY, Yuan K, Atencio CA, Schreiner CE. Distinct Manifestations of Cooperative, Multidimensional Stimulus Representations in Different Auditory Forebrain Stations. Cereb Cortex 2020; 30:3130-3147. PMID: 32047882; DOI: 10.1093/cercor/bhz299.
Abstract
Classic spectrotemporal receptive fields (STRFs) for auditory neurons are usually expressed as a single linear filter representing a single encoded stimulus feature. Multifilter STRF models represent the stimulus-response relationship of primary auditory cortex (A1) neurons more accurately because they can capture multiple stimulus features. To determine whether multifilter processing is unique to A1, we compared the utility of single-filter versus multifilter STRF models in the medial geniculate body (MGB), anterior auditory field (AAF), and A1 of ketamine-anesthetized cats. We estimated STRFs using both spike-triggered average (STA) and maximally informative dimension (MID) methods. Comparison of basic filter properties of first maximally informative dimension (MID1) and second maximally informative dimension (MID2) in the 3 stations revealed broader spectral integration of MID2s in MGBv and A1 as opposed to AAF. MID2 peak latency was substantially longer than for STAs and MID1s in all 3 stations. The 2-filter MID model captured more information and yielded better predictions in many neurons from all 3 areas but disproportionately more so in AAF and A1 compared with MGBv. Significantly, information-enhancing cooperation between the 2 MIDs was largely restricted to A1 neurons. This demonstrates significant differences in how these 3 forebrain stations process auditory information, as expressed in effective and synergistic multifilter processing.
Affiliation(s)
- Jonathan Y Shih
- Department of Otolaryngology-Head and Neck Surgery, Coleman Memorial Laboratory, UCSF Center for Integrative Neuroscience, University of California, San Francisco, CA 94158-0444, USA
- Kexin Yuan
- Department of Otolaryngology-Head and Neck Surgery, Coleman Memorial Laboratory, UCSF Center for Integrative Neuroscience, University of California, San Francisco, CA 94158-0444, USA
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, China
- Craig A Atencio
- Department of Otolaryngology-Head and Neck Surgery, Coleman Memorial Laboratory, UCSF Center for Integrative Neuroscience, University of California, San Francisco, CA 94158-0444, USA
- Christoph E Schreiner
- Department of Otolaryngology-Head and Neck Surgery, Coleman Memorial Laboratory, UCSF Center for Integrative Neuroscience, University of California, San Francisco, CA 94158-0444, USA
14
Maor I, Shwartz-Ziv R, Feigin L, Elyada Y, Sompolinsky H, Mizrahi A. Neural Correlates of Learning Pure Tones or Natural Sounds in the Auditory Cortex. Front Neural Circuits 2020; 13:82. PMID: 32047424; PMCID: PMC6997498; DOI: 10.3389/fncir.2019.00082.
Abstract
Associative learning of pure tones is known to cause tonotopic map expansion in the auditory cortex (ACx), but the function this plasticity subserves is unclear. We developed an automated training platform called the “Educage,” which was used to train mice on a go/no-go auditory discrimination task to their perceptual limits, for difficult discriminations among pure tones or natural sounds. Spiking responses of excitatory and inhibitory parvalbumin (PV+) L2/3 neurons in mouse ACx revealed learning-induced overrepresentation of the learned frequencies, as expected from previous literature. The coordinated plasticity of excitatory and inhibitory neurons supports a role for PV+ neurons in homeostatic maintenance of excitation–inhibition balance within the circuit. Using a novel computational model to study auditory tuning curves, we show that overrepresentation of the learned tones does not necessarily improve discrimination performance of the network to these tones. In a separate set of experiments, we trained mice to discriminate among natural sounds. Perceptual learning of natural sounds induced “sparsening” and decorrelation of the neural response, consequently improving discrimination of these complex sounds. This signature of plasticity in A1 highlights its role in coding natural sounds.
Affiliation(s)
- Ido Maor: Department of Neurobiology, Alexander Silberman Institute of Life Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel; The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Ravid Shwartz-Ziv: The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Libi Feigin: The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Yishai Elyada: The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Haim Sompolinsky: The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel; The Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem, Israel
- Adi Mizrahi: Department of Neurobiology, Alexander Silberman Institute of Life Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel; The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
15
Lohse M, Bajo VM, King AJ, Willmore BDB. Neural circuits underlying auditory contrast gain control and their perceptual implications. Nat Commun 2020; 11:324. [PMID: 31949136 PMCID: PMC6965083 DOI: 10.1038/s41467-019-14163-5]
Abstract
Neural adaptation enables sensory information to be represented optimally in the brain despite large fluctuations over time in the statistics of the environment. Auditory contrast gain control represents an important example, which is thought to arise primarily from cortical processing. Here we show that neurons in the auditory thalamus and midbrain of mice show robust contrast gain control, and that this is implemented independently of cortical activity. Although neurons at each level exhibit contrast gain control to similar degrees, adaptation time constants become longer at later stages of the processing hierarchy, resulting in progressively more stable representations. We also show that auditory discrimination thresholds in human listeners compensate for changes in contrast, and that the strength of this perceptual adaptation can be predicted from physiological measurements. Contrast adaptation is therefore a robust property of both the subcortical and cortical auditory system and accounts for the short-term adaptability of perceptual judgments.
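The core computation behind contrast gain control can be sketched as divisive normalization by stimulus contrast. This is purely an illustration under that assumption; the function and all names are invented, not the model from the paper:

```python
import numpy as np

def contrast_normalize(stim, eps=1e-12):
    """Divisive gain control: scale a stimulus segment by its contrast
    (standard deviation), keeping the encoded range stable across contexts."""
    return (stim - stim.mean()) / (stim.std() + eps)

rng = np.random.default_rng(2)
base = rng.standard_normal(256)
low, high = 0.1 * base, 3.0 * base  # the same signal at two contrasts
# after gain control, the representations of both versions coincide
assert np.allclose(contrast_normalize(low), contrast_normalize(high), atol=1e-6)
```

A contrast-adapted neuron behaves roughly like this normalization followed by a fixed nonlinearity: its input-output gain rescales with the variance of the recent stimulus, which is why discrimination thresholds can remain stable as contrast changes.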
Affiliation(s)
- Michael Lohse: Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Victoria M Bajo: Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Andrew J King: Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Ben D B Willmore: Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, OX1 3PT, UK
16
Latimer KW, Rieke F, Pillow JW. Inferring synaptic inputs from spikes with a conductance-based neural encoding model. eLife 2019; 8:47012. [PMID: 31850846 PMCID: PMC6989090 DOI: 10.7554/elife.47012]
Abstract
Descriptive statistical models of neural responses generally aim to characterize the mapping from stimuli to spike responses while ignoring biophysical details of the encoding process. Here, we introduce an alternative approach, the conductance-based encoding model (CBEM), which describes a mapping from stimuli to excitatory and inhibitory synaptic conductances governing the dynamics of sub-threshold membrane potential. Remarkably, we show that the CBEM can be fit to extracellular spike train data and then used to predict excitatory and inhibitory synaptic currents. We validate these predictions with intracellular recordings from macaque retinal ganglion cells. Moreover, we offer a novel quasi-biophysical interpretation of the Poisson generalized linear model (GLM) as a special case of the CBEM in which excitation and inhibition are perfectly balanced. This work forges a new link between statistical and biophysical models of neural encoding and sheds new light on the biophysical variables that underlie spiking in the early visual pathway.
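The Poisson generalized linear model that the abstract describes as a special case of the CBEM can be sketched as follows. This is an illustrative toy with invented filter values and data, not the authors' CBEM code:

```python
import numpy as np

def glm_rate(stim_filter, stimulus, bias):
    """Poisson GLM: instantaneous rate = exp(filter . stimulus + bias)."""
    return np.exp(stimulus @ stim_filter + bias)

def neg_log_likelihood(rate, spikes, dt):
    """Poisson negative log-likelihood (up to a spike-count constant)."""
    return np.sum(rate * dt - spikes * np.log(rate * dt))

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 4))    # stimulus design matrix (500 bins, 4 features)
w = np.array([0.5, -0.3, 0.2, 0.0])  # "true" stimulus filter
lam = glm_rate(w, X, bias=-1.0)      # rate in spikes/s per bin
y = rng.poisson(lam * 0.01)          # simulated spike counts in 10 ms bins
nll = neg_log_likelihood(lam, y, dt=0.01)
assert np.isfinite(nll)
```

The CBEM replaces the single exponential nonlinearity with separate stimulus-driven excitatory and inhibitory conductances feeding a membrane-potential equation; in the limit where excitation and inhibition are balanced, the model reduces to a GLM of roughly this form.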
Affiliation(s)
- Kenneth W Latimer: Department of Physiology and Biophysics, University of Washington, Seattle, United States
- Fred Rieke: Department of Physiology and Biophysics, University of Washington, Seattle, United States
- Jonathan W Pillow: Princeton Neuroscience Institute, Department of Psychology, Princeton University, Princeton, United States
17
Roach JP, Eniwaye B, Booth V, Sander LM, Zochowski MR. Acetylcholine Mediates Dynamic Switching Between Information Coding Schemes in Neuronal Networks. Front Syst Neurosci 2019; 13:64. [PMID: 31780905 PMCID: PMC6861375 DOI: 10.3389/fnsys.2019.00064]
Abstract
Rate coding and phase coding are the two major coding modes seen in the brain. For these two modes, network dynamics must either have a wide distribution of frequencies for rate coding, or a narrow one to achieve stability in phase dynamics for phase coding. Acetylcholine (ACh) is a potent regulator of neural excitability. Acting through the muscarinic receptor, ACh reduces the magnitude of the potassium M-current, a hyperpolarizing current that builds up as neurons fire. The M-current contributes to several excitability features of neurons, becoming a major player in facilitating the transition between Type 1 (integrator) and Type 2 (resonator) excitability. In this paper we argue that this transition enables a dynamic switch between rate coding and phase coding as levels of ACh release change. When a network is in a high ACh state, variations in synaptic inputs will lead to a wider distribution of firing rates across the network, and this distribution will reflect the network structure or pattern of external input to the network. When ACh is low, network frequencies become narrowly distributed, and the structure of a network or pattern of external inputs will be represented through phase relationships between firing neurons. This work provides insights into how modulation of neuronal features influences network dynamics and information processing across brain states.
Affiliation(s)
- James P Roach: Neuroscience Graduate Program, University of Michigan, Ann Arbor, MI, United States
- Bolaji Eniwaye: Department of Physics, University of Michigan, Ann Arbor, MI, United States
- Victoria Booth: Neuroscience Graduate Program, University of Michigan, Ann Arbor, MI, United States; Department of Mathematics, University of Michigan, Ann Arbor, MI, United States; Department of Anesthesiology, University of Michigan, Ann Arbor, MI, United States
- Leonard M Sander: Department of Physics, University of Michigan, Ann Arbor, MI, United States; Center for the Study of Complex Systems, University of Michigan, Ann Arbor, MI, United States
- Michal R Zochowski: Neuroscience Graduate Program, University of Michigan, Ann Arbor, MI, United States; Department of Physics, University of Michigan, Ann Arbor, MI, United States; Center for the Study of Complex Systems, University of Michigan, Ann Arbor, MI, United States; Biophysics Program, University of Michigan, Ann Arbor, MI, United States
18
Lopez Espejo M, Schwartz ZP, David SV. Spectral tuning of adaptation supports coding of sensory context in auditory cortex. PLoS Comput Biol 2019; 15:e1007430. [PMID: 31626624 PMCID: PMC6821137 DOI: 10.1371/journal.pcbi.1007430]
Abstract
Perception of vocalizations and other behaviorally relevant sounds requires integrating acoustic information over hundreds of milliseconds. Sound-evoked activity in auditory cortex typically has much shorter latency, but the acoustic context, i.e., sound history, can modulate sound-evoked activity over longer periods. Contextual effects are attributed to modulatory phenomena, such as stimulus-specific adaptation and contrast gain control. However, an encoding model that links context to natural sound processing has yet to be established. We tested whether a model in which spectrally tuned inputs undergo adaptation mimicking short-term synaptic plasticity (STP) can account for contextual effects during natural sound processing. Single-unit activity was recorded from primary auditory cortex of awake ferrets during presentation of noise with natural temporal dynamics and fully natural sounds. Encoding properties were characterized by a standard linear-nonlinear spectro-temporal receptive field (LN) model and variants that incorporated STP-like adaptation. In the adapting models, STP was applied either globally across all input spectral channels or locally to subsets of channels. For most neurons, models incorporating local STP predicted neural activity as well or better than LN and global STP models. The strength of nonlinear adaptation varied across neurons. Within neurons, adaptation was generally stronger for spectral channels with excitatory than inhibitory gain. Neurons showing improved STP model performance also tended to undergo stimulus-specific adaptation, suggesting a common mechanism for these phenomena. When STP models were compared between passive and active behavior conditions, response gain often changed, but average STP parameters were stable. Thus, spectrally and temporally heterogeneous adaptation, subserved by a mechanism with STP-like dynamics, may support representation of the complex spectro-temporal patterns that comprise natural sounds across wide-ranging sensory contexts.
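STP-like adaptation of the kind these models apply to their input channels can be sketched with a simple resource-depletion rule. The rule and all parameter names and values below are assumptions for illustration, not the paper's fitted model:

```python
import numpy as np

def stp_depress(inputs, dt=0.01, tau_rec=0.2, u=0.5):
    """Short-term synaptic depression: each input pulse consumes a fraction u
    of available resources r, which recover toward 1 with time constant tau_rec."""
    r, out = 1.0, []
    for x in inputs:
        out.append(u * r * x)                         # effective (adapted) drive
        r += dt * (1 - r) / tau_rec - u * r * (x > 0)  # deplete on a pulse, recover otherwise
        r = max(r, 0.0)
    return np.array(out)

pulses = np.array([0, 1, 1, 1, 0, 0, 1])
adapted = stp_depress(pulses)
assert adapted[1] > adapted[3]  # later pulses in a train are depressed
```

Applying a rule like this independently per spectral channel ("local STP") lets adaptation strength differ across frequencies, which is the property the study found improved predictions over a single global adaptation stage.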
Affiliation(s)
- Mateo Lopez Espejo: Neuroscience Graduate Program, Oregon Health and Science University, Portland, OR, United States of America
- Zachary P. Schwartz: Neuroscience Graduate Program, Oregon Health and Science University, Portland, OR, United States of America
- Stephen V. David: Oregon Hearing Research Center, Oregon Health and Science University, Portland, OR, United States of America
19
Abstract
With modern neurophysiological methods able to record neural activity throughout the visual pathway in the context of arbitrarily complex visual stimulation, our understanding of visual system function is becoming limited by the available models of visual neurons that can be directly related to such data. Different forms of statistical models are now being used to probe the cellular and circuit mechanisms shaping neural activity, understand how neural selectivity to complex visual features is computed, and derive the ways in which neurons contribute to systems-level visual processing. However, models that are able to more accurately reproduce observed neural activity often defy simple interpretations. As a result, rather than being used solely to connect with existing theories of visual processing, statistical modeling will increasingly drive the evolution of more sophisticated theories.
Affiliation(s)
- Daniel A. Butts: Department of Biology and Program in Neuroscience and Cognitive Science, University of Maryland, College Park, Maryland 20742, USA
20
Kell AJE, McDermott JH. Invariance to background noise as a signature of non-primary auditory cortex. Nat Commun 2019; 10:3958. [PMID: 31477711 PMCID: PMC6718388 DOI: 10.1038/s41467-019-11710-y]
Abstract
Despite well-established anatomical differences between primary and non-primary auditory cortex, the associated representational transformations have remained elusive. Here we show that primary and non-primary auditory cortex are differentiated by their invariance to real-world background noise. We measured fMRI responses to natural sounds presented in isolation and in real-world noise, quantifying invariance as the correlation between the two responses for individual voxels. Non-primary areas were substantially more noise-invariant than primary areas. This primary-nonprimary difference occurred both for speech and non-speech sounds and was unaffected by a concurrent demanding visual task, suggesting that the observed invariance is not specific to speech processing and is robust to inattention. The difference was most pronounced for real-world background noise: both primary and non-primary areas were relatively robust to simple types of synthetic noise. Our results suggest a general representational transformation between auditory cortical stages, illustrating a representational consequence of hierarchical organization in the auditory system.
Affiliation(s)
- Alexander J E Kell: Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, USA; McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, USA; Center for Brains, Minds, and Machines, MIT, Cambridge, MA 02139, USA; Zuckerman Institute of Mind, Brain, and Behavior, Columbia University, New York, NY 10027, USA
- Josh H McDermott: Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, USA; McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, USA; Center for Brains, Minds, and Machines, MIT, Cambridge, MA 02139, USA; Program in Speech and Hearing Biosciences and Technology, Harvard University, Boston, MA, USA
21
Zuk NJ, Delgutte B. Neural coding and perception of auditory motion direction based on interaural time differences. J Neurophysiol 2019; 122:1821-1842. [PMID: 31461376 DOI: 10.1152/jn.00081.2019]
Abstract
While motion is important for parsing a complex auditory scene into perceptual objects, how it is encoded in the auditory system is unclear. Perceptual studies suggest that the ability to identify the direction of motion is limited by the duration of the moving sound, yet we can detect changes in interaural differences at even shorter durations. To understand the source of these distinct temporal limits, we recorded from single units in the inferior colliculus (IC) of unanesthetized rabbits in response to noise stimuli containing a brief segment with linearly time-varying interaural time difference ("ITD sweep") temporally embedded in interaurally uncorrelated noise. We also tested the ability of human listeners to either detect the ITD sweeps or identify the motion direction. Using a point-process model to separate the contributions of stimulus dependence and spiking history to single-neuron responses, we found that the neurons respond primarily by following the instantaneous ITD rather than exhibiting true direction selectivity. Furthermore, using an optimal classifier to decode the single-neuron responses, we found that neural threshold durations of ITD sweeps for both direction identification and detection overlapped with human threshold durations even though the average response of the neurons could track the instantaneous ITD beyond psychophysical limits. Our results suggest that the IC does not explicitly encode motion direction, but internal neural noise may limit the speed at which we can identify the direction of motion.

NEW & NOTEWORTHY Recognizing motion and identifying an object's trajectory are important for parsing a complex auditory scene, but how we do so is unclear. We show that neurons in the auditory midbrain do not exhibit direction selectivity as found in the visual system but instead follow the trajectory of the motion in their temporal firing patterns. Our results suggest that the inherent variability in neural firings may limit our ability to identify motion direction at short durations.
Affiliation(s)
- Nathaniel J Zuk: Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts
- Bertrand Delgutte: Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts; Department of Otolaryngology, Harvard Medical School, Boston, Massachusetts
22
Abstract
Adaptation is a common principle that recurs throughout the nervous system at all stages of processing. This principle manifests in a variety of phenomena, from spike frequency adaptation, to apparent changes in receptive fields with changes in stimulus statistics, to enhanced responses to unexpected stimuli. The ubiquity of adaptation leads naturally to the question: What purpose do these different types of adaptation serve? A diverse set of theories, often highly overlapping, has been proposed to explain the functional role of adaptive phenomena. In this review, we discuss several of these theoretical frameworks, highlighting relationships among them and clarifying distinctions. We summarize observations of the varied manifestations of adaptation, particularly as they relate to these theoretical frameworks, focusing throughout on the visual system and making connections to other sensory systems.
Affiliation(s)
- Alison I Weber: Department of Physiology and Biophysics and Computational Neuroscience Center, University of Washington, Seattle, Washington 98195, USA
- Kamesh Krishnamurthy: Neuroscience Institute and Center for Physics of Biological Function, Department of Physics, Princeton University, Princeton, New Jersey 08544, USA
- Adrienne L Fairhall: Department of Physiology and Biophysics and Computational Neuroscience Center, University of Washington, Seattle, Washington 98195, USA; UW Institute for Neuroengineering, University of Washington, Seattle, Washington 98195, USA
23
Williamson RS, Polley DB. Parallel pathways for sound processing and functional connectivity among layer 5 and 6 auditory corticofugal neurons. eLife 2019; 8:e42974. [PMID: 30735128 PMCID: PMC6384027 DOI: 10.7554/elife.42974]
Abstract
Cortical layers (L) 5 and 6 are populated by intermingled cell-types with distinct inputs and downstream targets. Here, we made optogenetically guided recordings from L5 corticofugal (CF) and L6 corticothalamic (CT) neurons in the auditory cortex of awake mice to discern differences in sensory processing and underlying patterns of functional connectivity. Whereas L5 CF neurons showed broad stimulus selectivity with sluggish response latencies and extended temporal non-linearities, L6 CTs exhibited sparse selectivity and rapid temporal processing. L5 CF spikes lagged behind neighboring units and imposed weak feedforward excitation within the local column. By contrast, L6 CT spikes drove robust and sustained activity, particularly in local fast-spiking interneurons. Our findings underscore a duality among sub-cortical projection neurons, where L5 CF units are canonical broadcast neurons that integrate sensory inputs for transmission to distributed downstream targets, while L6 CT neurons are positioned to regulate thalamocortical response gain and selectivity.
Affiliation(s)
- Ross S Williamson: Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States; Department of Otolaryngology, Harvard Medical School, Boston, United States
- Daniel B Polley: Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States; Department of Otolaryngology, Harvard Medical School, Boston, United States
24
Neural code: Another breach in the wall? Behav Brain Sci 2019; 42:e232. [DOI: 10.1017/s0140525x19001328]
Abstract
Brette presents arguments that query the existence of the neural code. However, he has neglected certain evidence that could be viewed as proof that a neural code operates in the brain. Although these proofs show a link between neural activity and cognition, we discuss why they fail to demonstrate the existence of an invariant neural code.
25
Kopp-Scheinpflug C, Sinclair JL, Linden JF. When Sound Stops: Offset Responses in the Auditory System. Trends Neurosci 2018; 41:712-728. [DOI: 10.1016/j.tins.2018.08.009]
26
Abstract
Our ability to make sense of the auditory world results from neural processing that begins in the ear, goes through multiple subcortical areas, and continues in the cortex. The specific contribution of the auditory cortex to this chain of processing is far from understood. Although many of the properties of neurons in the auditory cortex resemble those of subcortical neurons, they show somewhat more complex selectivity for sound features, which is likely to be important for the analysis of natural sounds, such as speech, in real-life listening conditions. Furthermore, recent work has shown that auditory cortical processing is highly context-dependent, integrates auditory inputs with other sensory and motor signals, depends on experience, and is shaped by cognitive demands, such as attention. Thus, in addition to being the locus for more complex sound selectivity, the auditory cortex is increasingly understood to be an integral part of the network of brain regions responsible for prediction, auditory perceptual decision-making, and learning. In this review, we focus on three key areas that are contributing to this understanding: the sound features that are preferentially represented by cortical neurons, the spatial organization of those preferences, and the cognitive roles of the auditory cortex.
Affiliation(s)
- Andrew J King: Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Sundeep Teki: Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Ben D B Willmore: Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
27
Westö J, May PJC. Describing complex cells in primary visual cortex: a comparison of context and multifilter LN models. J Neurophysiol 2018; 120:703-719. [PMID: 29718805 PMCID: PMC6139451 DOI: 10.1152/jn.00916.2017]
Abstract
Receptive field (RF) models are an important tool for deciphering neural responses to sensory stimuli. The two currently popular RF models are multifilter linear-nonlinear (LN) models and context models. Models are, however, never correct, and they rely on assumptions to keep them simple enough to be interpretable. As a consequence, different models describe different stimulus-response mappings, which may or may not be good approximations of real neural behavior. In the current study, we take up two tasks: 1) we introduce new ways to estimate context models with realistic nonlinearities, that is, with logistic and exponential functions, and 2) we evaluate context models and multifilter LN models in terms of how well they describe recorded data from complex cells in cat primary visual cortex. Our results, based on single-spike information and correlation coefficients, indicate that context models outperform corresponding multifilter LN models of equal complexity (measured in terms of number of parameters), with the best increase in performance being achieved by the novel context models. Consequently, our results suggest that the multifilter LN-model framework is suboptimal for describing the behavior of complex cells: the context-model framework is clearly superior while still providing interpretable quantizations of neural behavior.

NEW & NOTEWORTHY We used data from complex cells in primary visual cortex to estimate a wide variety of receptive field models from two frameworks that have previously not been compared with each other. The models included traditionally used multifilter linear-nonlinear models and novel variants of context models. Using mutual information and correlation coefficients as performance measures, we showed that context models are superior for describing complex cells and that the novel context models performed the best.
Affiliation(s)
- Johan Westö: Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Patrick J C May: Department of Psychology, Lancaster University, Lancaster, United Kingdom
28
Hamilton LS, Huth AG. The revolution will not be controlled: natural stimuli in speech neuroscience. Language, Cognition and Neuroscience 2018; 35:573-582. [PMID: 32656294 PMCID: PMC7324135 DOI: 10.1080/23273798.2018.1499946]
Abstract
Humans have a unique ability to produce and consume rich, complex, and varied language in order to communicate ideas to one another. Still, outside of natural reading, the most common methods for studying how our brains process speech or understand language use only isolated words or simple sentences. Recent studies have upset this status quo by employing complex natural stimuli and measuring how the brain responds to language as it is used. In this article we argue that natural stimuli offer many advantages over simplified, controlled stimuli for studying how language is processed by the brain. Furthermore, the downsides of using natural language stimuli can be mitigated using modern statistical and computational techniques.
Affiliation(s)
- Liberty S. Hamilton: Communication Sciences & Disorders, Moody College of Communication, The University of Texas at Austin, Austin, USA; Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, USA
- Alexander G. Huth: Department of Neuroscience, The University of Texas at Austin, Austin, USA; Department of Computer Science, The University of Texas at Austin, Austin, USA
29
Angeloni C, Geffen MN. Contextual modulation of sound processing in the auditory cortex. Curr Opin Neurobiol 2018; 49:8-15. [PMID: 29125987 PMCID: PMC6037899 DOI: 10.1016/j.conb.2017.10.012]
Abstract
In everyday acoustic environments, we navigate through a maze of sounds that possess a complex spectrotemporal structure, spanning many frequencies and exhibiting temporal modulations that differ within frequency bands. Our auditory system needs to efficiently encode the same sounds in a variety of different contexts, while preserving the ability to separate complex sounds within an acoustic scene. Recent work in auditory neuroscience has made substantial progress in studying how sounds are represented in the auditory system under different contexts, demonstrating that auditory processing of seemingly simple acoustic features, such as frequency and time, is highly dependent on co-occurring acoustic and behavioral stimuli. Through a combination of electrophysiological recordings, computational analysis, and behavioral techniques, recent research has identified interactions between the external spectral and temporal context of stimuli and the internal behavioral state.
Affiliation(s)
- C Angeloni: Department of Otorhinolaryngology: HNS, Department of Neuroscience, Psychology Graduate Group, Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA, United States
- M N Geffen: Department of Otorhinolaryngology: HNS, Department of Neuroscience, Psychology Graduate Group, Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA, United States
30
David SV. Incorporating behavioral and sensory context into spectro-temporal models of auditory encoding. Hear Res 2018; 360:107-123. [PMID: 29331232 PMCID: PMC6292525 DOI: 10.1016/j.heares.2017.12.021]
Abstract
For several decades, auditory neuroscientists have used spectro-temporal encoding models to understand how neurons in the auditory system represent sound. Derived from early applications of systems identification tools to the auditory periphery, the spectro-temporal receptive field (STRF) and more sophisticated variants have emerged as an efficient means of characterizing representation throughout the auditory system. Most of these encoding models describe neurons as static sensory filters. However, auditory neural coding is not static. Sensory context, reflecting the acoustic environment, and behavioral context, reflecting the internal state of the listener, can both influence sound-evoked activity, particularly in central auditory areas. This review explores recent efforts to integrate context into spectro-temporal encoding models. It begins with a brief tutorial on the basics of estimating and interpreting STRFs. Then it describes three recent studies that have characterized contextual effects on STRFs, emerging over a range of timescales, from many minutes to tens of milliseconds. An important theme of this work is not simply that context influences auditory coding, but also that contextual effects span a large continuum of internal states. The added complexity of these context-dependent models introduces new experimental and theoretical challenges that must be addressed in order to be used effectively. Several new methodological advances promise to address these limitations and allow the development of more comprehensive context-dependent models in the future.
Affiliation(s)
- Stephen V David
- Oregon Hearing Research Center, Oregon Health & Science University, 3181 SW Sam Jackson Park Rd, MC L335A, Portland, OR 97239, United States.
31
Blackwell JM, Geffen MN. Progress and challenges for understanding the function of cortical microcircuits in auditory processing. Nat Commun 2017; 8:2165. PMID: 29255268; PMCID: PMC5735136; DOI: 10.1038/s41467-017-01755-2.
Abstract
An important outstanding question in auditory neuroscience is to identify the mechanisms by which specific motifs within inter-connected neural circuits affect auditory processing and, ultimately, behavior. In the auditory cortex, the combination of large-scale electrophysiological recordings and concurrent optogenetic manipulations is improving our understanding of the role of inhibitory–excitatory interactions. At the same time, computational approaches have grown to incorporate diverse neuronal types and connectivity patterns. However, we are still far from understanding how cortical microcircuits encode and transmit information about complex acoustic scenes. In this review, we focus on recent results identifying the specialized functions of different neuron types in the auditory cortex and discuss a computational framework for future work that incorporates ideas from network science and network dynamics toward the coding of complex auditory scenes.
Advances in multi-neuron recordings and optogenetic manipulation have resulted in an interrogation of the function of specific cortical cell types in auditory cortex during sound processing. Here, the authors review this literature and discuss the merits of integrating computational approaches from dynamic network science.
Affiliation(s)
- Jennifer M Blackwell
- Department of Otorhinolaryngology: HNS, Department of Neuroscience, Neuroscience Graduate Group, Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Maria N Geffen
- Department of Otorhinolaryngology: HNS, Department of Neuroscience, Neuroscience Graduate Group, Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA, 19104, USA.
32
Młynarski W, McDermott JH. Learning Midlevel Auditory Codes from Natural Sound Statistics. Neural Comput 2017; 30:631-669. PMID: 29220308; DOI: 10.1162/neco_a_01048.
Abstract
Interaction with the world requires an organism to transform sensory signals into representations in which behaviorally meaningful properties of the environment are made explicit. These representations are derived through cascades of neuronal processing stages in which neurons at each stage recode the output of preceding stages. Explanations of sensory coding may thus involve understanding how low-level patterns are combined into more complex structures. To gain insight into such midlevel representations for sound, we designed a hierarchical generative model of natural sounds that learns combinations of spectrotemporal features from natural stimulus statistics. In the first layer, the model forms a sparse convolutional code of spectrograms using a dictionary of learned spectrotemporal kernels. To generalize from specific kernel activation patterns, the second layer encodes patterns of time-varying magnitude of multiple first-layer coefficients. When trained on corpora of speech and environmental sounds, some second-layer units learned to group similar spectrotemporal features. Others instantiated opponency between distinct sets of features. Such groupings might be instantiated by neurons in the auditory cortex, providing a hypothesis for midlevel neuronal computation.
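The first-layer computation described above, sparse coding of a spectrogram with a kernel dictionary, can be sketched for a single patch using ISTA (iterative soft-thresholding), dropping the convolutional structure and using a random rather than learned dictionary; all sizes and the penalty `lam` are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy first-layer inference: represent a "spectrogram patch" x as a sparse
# combination of dictionary kernels D, found by ISTA.
n_dim, n_atoms = 64, 128
D = rng.normal(size=(n_dim, n_atoms))
D /= np.linalg.norm(D, axis=0)                  # unit-norm kernels
coef = rng.normal(size=5)                       # 5 "active" kernels generate x
x = D[:, rng.choice(n_atoms, size=5, replace=False)] @ coef

lam = 0.05                                      # sparsity penalty
L = np.linalg.norm(D, 2) ** 2                   # Lipschitz constant of the gradient
a = np.zeros(n_atoms)
for _ in range(500):
    a = a - D.T @ (D @ a - x) / L               # gradient step on 0.5*||Da - x||^2
    a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)   # soft threshold

recon_err = np.linalg.norm(D @ a - x) / np.linalg.norm(x)
```

In the full model, the dictionary itself is adapted to natural sound statistics, and a second layer then encodes patterns in the time-varying magnitudes of these first-layer coefficients.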
33
History-Dependent Odor Processing in the Mouse Olfactory Bulb. J Neurosci 2017; 37:12018-12030. PMID: 29109236; PMCID: PMC5719977; DOI: 10.1523/jneurosci.0755-17.2017.
Abstract
In nature, animals normally perceive sensory information on top of backgrounds. Thus, the neural substrate to perceive under background conditions is inherent in all sensory systems. Where and how sensory systems process backgrounds is not fully understood. In olfaction, just a few studies have addressed the issue of odor coding on top of continuous odorous backgrounds. Here, we tested how background odors are encoded by mitral cells (MCs) in the olfactory bulb (OB) of male mice. Using in vivo two-photon calcium imaging, we studied how MCs responded to odors in isolation versus their responses to the same odors on top of continuous backgrounds. We show that MCs adapt to continuous odor presentation and that mixture responses are different when preceded by background. In a subset of odor combinations, this history-dependent processing was useful in helping to identify target odors over background. Other odorous backgrounds were highly dominant such that target odors were completely masked by their presence. Our data are consistent at both low and high odor concentrations and in anesthetized and awake mice. Thus, odor processing in the OB is strongly influenced by the recent history of activity, which could have a powerful impact on how odors are perceived.
SIGNIFICANCE STATEMENT We examined a basic feature of sensory processing in the olfactory bulb. Specifically, we measured how mitral cells adapt to continuous background odors and how target odors are encoded on top of such background. Our results show clear differences in odor coding based on the immediate history of the stimulus. Our results support the argument that odor coding in the olfactory bulb depends on the recent history of the sensory environment.
34
Spatial Processing Is Frequency Specific in Auditory Cortex But Not in the Midbrain. J Neurosci 2017; 37:6588-6599. PMID: 28559383; PMCID: PMC5511886; DOI: 10.1523/jneurosci.3034-16.2017.
Abstract
The cochlea behaves like a bank of band-pass filters, segregating information into different frequency channels. Some aspects of perception reflect processing within individual channels, but others involve the integration of information across them. One instance of this is sound localization, which improves with increasing bandwidth. The processing of binaural cues for sound location has been studied extensively. However, although the advantage conferred by bandwidth is clear, we currently know little about how this additional information is combined to form our percept of space. We investigated the ability of cells in the auditory system of guinea pigs to compare interaural level differences (ILDs), a key localization cue, between tones of disparate frequencies in each ear. Cells in auditory cortex believed to be integral to ILD processing (excitatory from one ear, inhibitory from the other: EI cells) compare ILDs separately over restricted frequency ranges which are not consistent with their monaural tuning. In contrast, cells that are excitatory from both ears (EE cells) show no evidence of frequency-specific processing. Both cell types are explained by a model in which ILDs are computed within separate frequency channels and subsequently combined in a single cortical cell. Interestingly, ILD processing in all inferior colliculus cell types (EE and EI) is largely consistent with processing within single, matched-frequency channels from each ear. Our data suggest a clear constraint on the way that localization cues are integrated: cortical ILD tuning to broadband sounds is a composite of separate, frequency-specific, binaurally sensitive channels. This frequency-specific processing appears after the level of the midbrain.
SIGNIFICANCE STATEMENT For some sensory modalities (e.g., somatosensation, vision), the spatial arrangement of the outside world is inherited by the brain from the periphery. The auditory periphery is arranged spatially by frequency, not spatial location. Therefore, our auditory perception of location must be synthesized from physical cues in separate frequency channels. There are multiple cues (e.g., timing, level, spectral cues), but even single cues (e.g., level differences) are frequency dependent. The synthesis of location must account for this frequency dependence, but it is not known how this might occur. Here, we investigated how interaural level differences are combined across frequency along the ascending auditory system. We found that the integration in auditory cortex preserves the independence of level cues in different frequency regions.
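The model favored by these data, ILDs computed independently within matched-frequency channels and only later combined, can be sketched as follows (the power-ratio definition of ILD and the averaging readout are standard simplifications for illustration, not the paper's fitted model):

```python
import numpy as np

def ild_per_channel(left, right, eps=1e-12):
    """ILD in dB for each frequency channel (positive = right ear louder)."""
    p_left = np.mean(left ** 2, axis=-1)
    p_right = np.mean(right ** 2, axis=-1)
    return 10.0 * np.log10((p_right + eps) / (p_left + eps))

rng = np.random.default_rng(0)
left = rng.normal(size=(3, 1000))                 # 3 frequency channels, left ear
right = left * np.array([[2.0], [1.0], [0.5]])    # per-channel level scaling

# ILD is computed separately in each channel...
ilds = ild_per_channel(left, right)

# ...and only then combined into a composite, EI-like readout
# (here a plain average stands in for the weighted combination).
composite = ilds.mean()
```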
35
Meyer AF, Williamson RS, Linden JF, Sahani M. Models of Neuronal Stimulus-Response Functions: Elaboration, Estimation, and Evaluation. Front Syst Neurosci 2017; 10:109. PMID: 28127278; PMCID: PMC5226961; DOI: 10.3389/fnsys.2016.00109.
Abstract
Rich, dynamic, and dense sensory stimuli are encoded within the nervous system by the time-varying activity of many individual neurons. A fundamental approach to understanding the nature of the encoded representation is to characterize the function that relates the moment-by-moment firing of a neuron to the recent history of a complex sensory input. This review provides a unifying and critical survey of the techniques that have been brought to bear on this effort thus far—ranging from the classical linear receptive field model to modern approaches incorporating normalization and other nonlinearities. We address separately the structure of the models; the criteria and algorithms used to identify the model parameters; and the role of regularizing terms or “priors.” In each case we consider benefits or drawbacks of various proposals, providing examples for when these methods work and when they may fail. Emphasis is placed on key concepts rather than mathematical details, so as to make the discussion accessible to readers from outside the field. Finally, we review ways in which the agreement between an assumed model and the neuron's response may be quantified. Re-implemented and unified code for many of the methods is made freely available.
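One model class surveyed in this review, the linear-nonlinear-Poisson (LNP) cascade, together with one of the estimation criteria it discusses, maximum likelihood, can be sketched on simulated data. The exponential nonlinearity and plain full-batch gradient ascent are simplifying choices for illustration, not the review's recommendation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate an LNP neuron: linear filter k_true, exponential nonlinearity,
# Poisson spike counts.
n_dim, n_samp = 20, 4000
k_true = rng.normal(size=n_dim) / np.sqrt(n_dim)
X = rng.normal(size=(n_samp, n_dim))               # stimulus frames
y = rng.poisson(np.exp(X @ k_true))                # spike counts per frame

# Maximum-likelihood filter estimate by gradient ascent on the Poisson
# log-likelihood (the gradient has this simple closed form for the exp link).
k_hat = np.zeros(n_dim)
lr = 0.1 / n_samp
for _ in range(500):
    k_hat += lr * (X.T @ (y - np.exp(X @ k_hat)))

corr = np.corrcoef(k_hat, k_true)[0, 1]
```

For a Gaussian white-noise stimulus this ML estimate closely matches the simpler spike-triggered average; the likelihood framework pays off when the stimulus is correlated or when priors and nonlinearities of the kinds reviewed here are added.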
Affiliation(s)
- Arne F Meyer
- Gatsby Computational Neuroscience Unit, University College London, London, UK
- Ross S Williamson
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA, USA; Department of Otology and Laryngology, Harvard Medical School, Boston, MA, USA
- Jennifer F Linden
- Ear Institute, University College London, London, UK; Department of Neuroscience, Physiology and Pharmacology, University College London, London, UK
- Maneesh Sahani
- Gatsby Computational Neuroscience Unit, University College London, London, UK
36
Cui Y, Wang YV, Park SJH, Demb JB, Butts DA. Divisive suppression explains high-precision firing and contrast adaptation in retinal ganglion cells. eLife 2016; 5:e19460. PMID: 27841746; PMCID: PMC5108594; DOI: 10.7554/elife.19460.
Abstract
Visual processing depends on specific computations implemented by complex neural circuits. Here, we present a circuit-inspired model of retinal ganglion cell computation, targeted to explain their temporal dynamics and adaptation to contrast. To localize the sources of such processing, we used recordings at the levels of synaptic input and spiking output in the in vitro mouse retina. We found that an ON-Alpha ganglion cell's excitatory synaptic inputs were described by a divisive interaction between excitation and delayed suppression, which explained nonlinear processing that was already present in ganglion cell inputs. Ganglion cell output was further shaped by spike generation mechanisms. The full model accurately predicted spike responses with unprecedented millisecond precision, and accurately described contrast adaptation of the spike train. These results demonstrate how circuit and cell-intrinsic mechanisms interact for ganglion cell function and, more generally, illustrate the power of circuit-inspired modeling of sensory processing.
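The core computation proposed, excitation divided by delayed suppression, can be caricatured in a few lines; the filters, the half-wave rectification, and the contrast-adaptation readout below are invented for illustration and are not the fitted model from the paper:

```python
import numpy as np

def divisive_response(stim, k_exc, k_sup, sigma=1.0):
    """Rectified excitation divided by delayed suppression (toy divisive model)."""
    exc = np.convolve(stim, k_exc, mode="full")[:len(stim)]
    sup = np.convolve(np.abs(stim), k_sup, mode="full")[:len(stim)]
    return np.maximum(exc, 0.0) / (sigma + sup)

rng = np.random.default_rng(0)
lags = np.arange(40)
k_exc = np.exp(-lags[:20] / 3.0)                  # fast excitatory filter
k_sup = np.exp(-((lags - 8) ** 2) / 50.0)         # slower, delayed suppression

low = divisive_response(0.5 * rng.normal(size=5000), k_exc, k_sup)
high = divisive_response(2.0 * rng.normal(size=5000), k_exc, k_sup)

# Contrast adaptation: response per unit stimulus s.d. shrinks at high
# contrast, because the suppressive denominator grows with stimulus level.
gain_low = low.std() / 0.5
gain_high = high.std() / 2.0
```

The divisive denominator is what produces the gain change: no parameter is refit between contrasts, yet the effective gain drops as the suppressive drive grows.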
Affiliation(s)
- Yuwei Cui
- Department of Biology, University of Maryland, College Park, United States
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, United States
- Yanbin V Wang
- Department of Ophthalmology and Visual Science, Yale University, New Haven, United States
- Department of Cellular and Molecular Physiology, Yale University, New Haven, United States
- Silvia J H Park
- Department of Ophthalmology and Visual Science, Yale University, New Haven, United States
- Jonathan B Demb
- Department of Ophthalmology and Visual Science, Yale University, New Haven, United States
- Department of Cellular and Molecular Physiology, Yale University, New Haven, United States
- Daniel A Butts
- Department of Biology, University of Maryland, College Park, United States
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, United States