1
Rançon U, Masquelier T, Cottereau BR. A general model unifying the adaptive, transient and sustained properties of ON and OFF auditory neural responses. PLoS Comput Biol 2024; 20:e1012288. PMID: 39093852. DOI: 10.1371/journal.pcbi.1012288.
Abstract
Sounds are temporal stimuli decomposed into numerous elementary components by the auditory nervous system. For instance, a temporal-to-spectro-temporal transformation modelling the frequency decomposition performed by the cochlea is a widely adopted first processing step in today's computational models of auditory neural responses. Similarly, increments and decrements in sound intensity (i.e., of the raw waveform itself or of its spectral bands) constitute critical features of the neural code, with high behavioural significance. However, despite the scientific community's growing attention to auditory OFF responses, their relationship with transient ON responses, sustained responses and adaptation remains unclear. In this context, we propose a new general model, based on a pair of linear filters and named AdapTrans, that captures both sustained and transient ON and OFF responses within a unifying and easy-to-expand framework. We demonstrate that filtering audio cochleagrams with AdapTrans accurately renders known properties of neural responses measured in different mammalian species, such as the dependence of OFF responses on the stimulus fall time and on the preceding sound duration. Furthermore, by integrating our framework into gold-standard and state-of-the-art machine learning models that predict neural responses from audio stimuli, following supervised training on a large compilation of electrophysiology datasets (ready-to-deploy PyTorch models and pre-processed datasets are shared publicly), we show that AdapTrans systematically improves the prediction accuracy of estimated responses within different cortical areas of the rat and ferret auditory brain. Together, these results motivate the use of our framework by computational and systems neuroscientists seeking to increase the plausibility and performance of their models of audition.
Affiliation(s)
- Ulysse Rançon
- CerCo UMR 5549, CNRS - Université Toulouse III, Toulouse, France
- Benoit R Cottereau
- CerCo UMR 5549, CNRS - Université Toulouse III, Toulouse, France
- IPAL, CNRS IRL62955, Singapore, Singapore
2
Englitz B, Akram S, Elhilali M, Shamma S. Decoding contextual influences on auditory perception from primary auditory cortex. bioRxiv [Preprint] 2024:2023.12.24.573229. PMID: 38187523. PMCID: PMC10769425. DOI: 10.1101/2023.12.24.573229.
Abstract
Perception can be highly dependent on stimulus context, but whether and how sensory areas encode the context remains uncertain. We used an ambiguous auditory stimulus, a tritone pair, to investigate the neural activity associated with a preceding contextual stimulus that strongly influenced the tritone pair's perception: either as an ascending or a descending step in pitch. We recorded single-unit responses from a population of auditory cortical cells in awake ferrets listening to the tritone pairs preceded by the contextual stimulus. We find that the responses adapt locally to the contextual stimulus, consistent with human MEG recordings from the auditory cortex under the same conditions. Decoding the population responses demonstrates that cells responding to pitch-class changes predict the context-sensitive percept of the tritone pairs well. Conversely, decoding the individual pitch-class representations and taking their distance in the circular Shepard tone space predicts the opposite of the percept. The various percepts can be readily captured and explained by a neural model of cortical activity based on populations of adapting, pitch-class and pitch-class-direction cells, aligned with the neurophysiological responses. Together, these decoding and model results suggest that contextual influences on perception may already be encoded at the level of the primary sensory cortices, reflecting basic neural response properties commonly found in these areas.
3
de Hoz L, McAlpine D. Noises on: How the Brain Deals with Acoustic Noise. Biology 2024; 13:501. PMID: 39056695. PMCID: PMC11274191. DOI: 10.3390/biology13070501.
Abstract
What is noise? When does a sound form part of the acoustic background and when might it come to our attention as part of the foreground? Our brain seems to filter out irrelevant sounds in a seemingly effortless process, but how this is achieved remains opaque and, to date, unparalleled by any algorithm. In this review, we discuss how noise can be both background and foreground, depending on what a listener/brain is trying to achieve. We do so by addressing questions concerning the brain's potential bias to interpret certain sounds as part of the background, the extent to which the interpretation of sounds depends on the context in which they are heard, as well as their ethological relevance, task-dependence, and a listener's overall mental state. We explore these questions with specific regard to the implicit, or statistical, learning of sounds and the role of feedback loops between cortical and subcortical auditory structures.
Collapse
Affiliation(s)
- Livia de Hoz
- Neuroscience Research Center, Charité—Universitätsmedizin Berlin, 10117 Berlin, Germany
- Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany
- David McAlpine
- Neuroscience Research Center, Charité—Universitätsmedizin Berlin, 10117 Berlin, Germany
- Department of Linguistics, Macquarie University Hearing, Australian Hearing Hub, Sydney, NSW 2109, Australia
4
Mohammadi M, Carriot J, Mackrous I, Cullen KE, Chacron MJ. Neural populations within macaque early vestibular pathways are adapted to encode natural self-motion. PLoS Biol 2024; 22:e3002623. PMID: 38687807. PMCID: PMC11086886. DOI: 10.1371/journal.pbio.3002623.
Abstract
How the activities of large neural populations are integrated in the brain to ensure accurate perception and behavior remains a central problem in systems neuroscience. Here, we investigated population coding of naturalistic self-motion by neurons within early vestibular pathways in rhesus macaques (Macaca mulatta). While vestibular neurons displayed similar dynamic tuning to self-motion, inspection of their spike trains revealed significant heterogeneity. Further analysis revealed that, during natural but not artificial stimulation, heterogeneity resulted primarily from variability across neurons as opposed to trial-to-trial variability. Interestingly, vestibular neurons displayed different correlation structures during naturalistic and artificial self-motion. Specifically, while correlations due to the stimulus (i.e., signal correlations) did not differ, correlations between the trial-to-trial variabilities of neural responses (i.e., noise correlations) were significantly positive during naturalistic but not artificial stimulation. Using computational modeling, we show that positive noise correlations during naturalistic stimulation benefit information transmission by heterogeneous vestibular neural populations. Taken together, our results provide evidence that neurons within early vestibular pathways are adapted to the statistics of natural self-motion stimuli at the population level. We suggest that similar adaptations will be found in other systems and species.
Affiliation(s)
- Mohammad Mohammadi
- Department of Biological and Biomedical Engineering, McGill University, Montreal, Canada
- Jerome Carriot
- Department of Physiology, McGill University, Montreal, Canada
- Kathleen E. Cullen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, United States of America
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, United States of America
- Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, Maryland, United States of America
- Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, Maryland, United States of America
5
Peng F, Harper NS, Mishra AP, Auksztulewicz R, Schnupp JWH. Dissociable Roles of the Auditory Midbrain and Cortex in Processing the Statistical Features of Natural Sound Textures. J Neurosci 2024; 44:e1115232023. PMID: 38267259. PMCID: PMC10919253. DOI: 10.1523/jneurosci.1115-23.2023.
Abstract
Sound texture perception takes advantage of a hierarchy of time-averaged statistical features of acoustic stimuli, but much remains unclear about how these statistical features are processed along the auditory pathway. Here, we compared the neural representation of sound textures in the inferior colliculus (IC) and auditory cortex (AC) of anesthetized female rats. We recorded responses to texture morph stimuli that gradually add statistical features of increasingly higher complexity. For each texture, several different exemplars were synthesized using different random seeds. An analysis of transient and ongoing multiunit responses showed that the IC units were sensitive to every type of statistical feature, albeit to a varying extent. In contrast, only a small proportion of AC units were overtly sensitive to any statistical features. Differences in texture types explained more of the variance of IC neural responses than did differences in exemplars, indicating a degree of "texture type tuning" in the IC, but the same was, perhaps surprisingly, not the case for AC responses. We also evaluated the accuracy of texture type classification from single-trial population activity and found that IC responses became more informative as more summary statistics were included in the texture morphs, while for AC population responses, classification performance remained consistently very low. These results argue against the idea that AC neurons encode sound type via an overt sensitivity in neural firing rate to fine-grained spectral and temporal statistical features.
Affiliation(s)
- Fei Peng
- Department of Neuroscience, City University of Hong Kong, Hong Kong, China
- Nicol S Harper
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford OX1 2JD, United Kingdom
- Ambika P Mishra
- Department of Neuroscience, City University of Hong Kong, Hong Kong, China
- Ryszard Auksztulewicz
- Department of Neuroscience, City University of Hong Kong, Hong Kong, China
- Center for Cognitive Neuroscience Berlin, Free University Berlin, Berlin 14195, Germany
- Jan W H Schnupp
- Department of Neuroscience, City University of Hong Kong, Hong Kong, China
6
Todd J, Yeark M, Auriac P, Paton B, Winkler I. Order effects in task-free learning: Tuning to information-carrying sound features. Cortex 2024; 172:114-124. PMID: 38295554. DOI: 10.1016/j.cortex.2023.10.026.
Abstract
Event-related potentials (ERPs) acquired during task-free passive listening can be used to study how sensitivity to common pattern repetitions and rare deviations changes over time. These changes are purported to represent the formation and accumulation of precision in internal models that anticipate future states based on probabilistic and/or statistical learning. This study features an unexpected finding: a strong order-dependence in the speed with which deviant responses are elicited, anchored to first learning. Participants heard four repetitions of a sequence in which an equal number of short (30 msec) and long (60 msec) pure tones were arranged into four blocks, in which one tone was common (the standard, p = .875) and the other rare (the deviant, p = .125), with probabilities alternating across blocks. Some participants always heard the sequences commencing with the 30 msec deviant block, and others always with the 60 msec deviant block first. A deviance-detection component known as mismatch negativity (MMN) was extracted from responses, and the point in time at which MMN reached maximum amplitude was used as the dependent variable. The results show that if participants heard sequences commencing with the 60 msec deviant block first, the MMN to the 60 msec and 30 msec deviants peaked at an equivalent latency. However, if participants heard sequences commencing with the 30 msec deviant first, the MMN peaked earlier to the 60 msec deviant. Furthermore, while the 30 msec MMN latency did not differ as a function of sequence composition, the 60 msec MMN latency did, and was earlier when the sequences began with a 30 msec deviant first. By examining MMN latency effects as a function of age and hearing level, it was apparent that the differentiation in 30 msec and 60 msec MMN latency expands with older age and raised hearing threshold due to prolongation of the time taken for the 30 msec MMN to peak.
The observations are discussed with reference to how the initial sound composition may tune the auditory system to be more sensitive to different cues (i.e., offset responses vs perceived loudness). The order effect demonstrates a remarkably powerful anchoring to first learning that might reflect initial tuning to the most valuable discriminating feature within a given listening environment, an effect that defies explanation based on statistical information alone.
Affiliation(s)
- Juanita Todd
- School of Psychological Sciences, University of Newcastle, Callaghan, Australia.
- Mattsen Yeark
- School of Psychological Sciences, University of Newcastle, Callaghan, Australia.
- Paul Auriac
- School of Psychological Sciences, University of Newcastle, Callaghan, Australia.
- Bryan Paton
- School of Psychological Sciences, University of Newcastle, Callaghan, Australia.
- István Winkler
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary.
7
Schmid C, Haziq M, Baese-Berk MM, Murray JM, Jaramillo S. Passive exposure to task-relevant stimuli enhances categorization learning. eLife 2024; 12:RP88406. PMID: 38265440. PMCID: PMC10945695. DOI: 10.7554/elife.88406.
Abstract
Learning to perform a perceptual decision task is generally achieved through sessions of effortful practice with feedback. Here, we investigated how passive exposure to task-relevant stimuli, which is relatively effortless and does not require feedback, influences active learning. First, we trained mice in a sound-categorization task with various schedules combining passive exposure and active training. Mice that received passive exposure exhibited faster learning, regardless of whether this exposure occurred entirely before active training or was interleaved between active sessions. We next trained neural-network models with different architectures and learning rules to perform the task. Networks that use the statistical properties of stimuli to enhance separability of the data via unsupervised learning during passive exposure provided the best account of the behavioral observations. We further found that, during interleaved schedules, there is an increased alignment between weight updates from passive exposure and active training, such that a few interleaved sessions can be as effective as schedules with long periods of passive exposure before active training, consistent with our behavioral observations. These results provide key insights for the design of efficient training schedules that combine active learning and passive exposure in both natural and artificial systems.
Affiliation(s)
- Christian Schmid
- Institute of Neuroscience, University of Oregon, Eugene, United States
- Muhammad Haziq
- Institute of Neuroscience, University of Oregon, Eugene, United States
- James M Murray
- Institute of Neuroscience, University of Oregon, Eugene, United States
8
Angeloni CF, Młynarski W, Piasini E, Williams AM, Wood KC, Garami L, Hermundstad AM, Geffen MN. Dynamics of cortical contrast adaptation predict perception of signals in noise. Nat Commun 2023; 14:4817. PMID: 37558677. PMCID: PMC10412650. DOI: 10.1038/s41467-023-40477-6.
Abstract
Neurons throughout the sensory pathway adapt their responses depending on the statistical structure of the sensory environment. Contrast gain control is a form of adaptation in the auditory cortex, but it is unclear whether the dynamics of gain control reflect efficient adaptation, and whether they shape behavioral perception. Here, we trained mice to detect a target presented in background noise shortly after a change in the contrast of the background. The observed changes in cortical gain and behavioral detection followed the dynamics of a normative model of efficient contrast gain control; specifically, target detection and sensitivity improved slowly in low contrast, but degraded rapidly in high contrast. Auditory cortex was required for this task, and cortical responses were not only similarly affected by contrast but also predicted variability in behavioral performance. Combined, our results demonstrate that dynamic gain adaptation supports efficient coding in auditory cortex and predicts the perception of sounds in noise.
Affiliation(s)
- Christopher F Angeloni
- Psychology Graduate Group, University of Pennsylvania, Philadelphia, PA, USA
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Wiktor Młynarski
- Faculty of Biology, Ludwig Maximilian University of Munich, Munich, Germany
- Bernstein Center for Computational Neuroscience, Munich, Germany
- Eugenio Piasini
- International School for Advanced Studies (SISSA), Trieste, Italy
- Aaron M Williams
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, PA, USA
- Katherine C Wood
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Linda Garami
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Ann M Hermundstad
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Maria N Geffen
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA.
- Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, PA, USA.
- Department of Neuroscience, Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA.
9
Boothalingam S, Peterson A, Powell L, Easwar V. Auditory brainstem mechanisms likely compensate for self-imposed peripheral inhibition. Sci Rep 2023; 13:12693. PMID: 37542191. PMCID: PMC10403563. DOI: 10.1038/s41598-023-39850-8.
Abstract
Feedback networks in the brain regulate auditory function at sites as peripheral as the cochlea. However, the upstream neural consequences of this peripheral regulation are less well understood. For instance, the medial olivocochlear reflex (MOCR) in the brainstem causes putative attenuation of responses generated in the cochlea and cortex, but those generated in the brainstem are perplexingly unaffected. Based on known neural circuitry, we hypothesized that the inhibition of peripheral input is compensated for by positive feedback in the brainstem over time. We predicted that the inhibition could be captured at the brainstem with shorter-duration (1.5 s) stimuli than the long-duration (240 s) stimuli employed previously, for which this inhibition is likely compensated. Results from 16 normal-hearing human listeners support our hypothesis: when the MOCR is activated, there is a robust reduction of responses generated at the periphery, brainstem, and cortex for short-duration stimuli. Such inhibition at the brainstem, however, diminishes for long-duration stimuli, suggesting that compensatory mechanisms are at play. Our findings provide a novel non-invasive window into potential gain-compensation mechanisms in the brainstem that may have implications for auditory disorders such as tinnitus. Our methodology will be useful in the evaluation of efferent function in individuals with hearing loss.
Affiliation(s)
- Sriram Boothalingam
- Waisman Center and Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, 53705, USA.
- Macquarie University, Sydney, NSW, 2109, Australia.
- National Acoustic Laboratories, Sydney, NSW, 2109, Australia.
- Abigayle Peterson
- Waisman Center and Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, 53705, USA
- Macquarie University, Sydney, NSW, 2109, Australia
- Lindsey Powell
- Waisman Center and Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, 53705, USA
- Vijayalakshmi Easwar
- Waisman Center and Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, 53705, USA
- Macquarie University, Sydney, NSW, 2109, Australia
- National Acoustic Laboratories, Sydney, NSW, 2109, Australia
10
Shadron K, Peña JL. Development of frequency tuning shaped by spatial cue reliability in the barn owl's auditory midbrain. eLife 2023; 12:e84760. PMID: 37166099. PMCID: PMC10238092. DOI: 10.7554/elife.84760.
Abstract
Sensory systems preferentially strengthen responses to stimuli based on their reliability at conveying accurate information. While previous reports demonstrate that the brain reweights cues based on dynamic changes in reliability, how the brain may learn and maintain neural responses to sensory statistics expected to be stable over time is unknown. The barn owl's midbrain features a map of auditory space where neurons compute horizontal sound location from the interaural time difference (ITD). Frequency tuning of midbrain map neurons correlates with the most reliable frequencies for the neurons' preferred ITD (Cazettes et al., 2014). Removal of the facial ruff led to a specific decrease in the reliability of high frequencies from frontal space. To directly test whether permanent changes in ITD reliability drive frequency tuning, midbrain map neurons were recorded from adult owls with the facial ruff removed during development, and from juvenile owls before facial ruff development. In both groups, frontally tuned neurons were tuned to frequencies lower than in normal adult owls, consistent with the change in ITD reliability. In addition, juvenile owls exhibited more heterogeneous frequency tuning, suggesting that normal developmental processes refine tuning to match ITD reliability. These results indicate that the long-term statistics of spatial cues play a causal role in the development of midbrain frequency tuning properties, implementing probabilistic coding for sound localization.
Affiliation(s)
- Keanu Shadron
- Dominick P Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, United States
- José Luis Peña
- Dominick P Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, United States
11
Parida S, Liu ST, Sadagopan S. Adaptive mechanisms facilitate robust performance in noise and in reverberation in an auditory categorization model. Commun Biol 2023; 6:456. PMID: 37130918. PMCID: PMC10154343. DOI: 10.1038/s42003-023-04816-z.
Abstract
For robust vocalization perception, the auditory system must generalize over variability in vocalization production as well as variability arising from the listening environment (e.g., noise and reverberation). We previously demonstrated using guinea pig and marmoset vocalizations that a hierarchical model generalized over production variability by detecting sparse intermediate-complexity features that are maximally informative about vocalization category from a dense spectrotemporal input representation. Here, we explore three biologically feasible model extensions to generalize over environmental variability: (1) training in degraded conditions, (2) adaptation to sound statistics in the spectrotemporal stage and (3) sensitivity adjustment at the feature detection stage. All mechanisms improved vocalization categorization performance, but improvement trends varied across degradation type and vocalization type. One or more adaptive mechanisms were required for model performance to approach the behavioral performance of guinea pigs on a vocalization categorization task. These results highlight the contributions of adaptive mechanisms at multiple auditory processing stages to achieve robust auditory categorization.
Affiliation(s)
- Satyabrata Parida
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- Shi Tong Liu
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Srivatsun Sadagopan
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA.
- Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA.
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA.
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA.
12
Lube AJ, Ma X, Carlson BA. Spike timing-dependent plasticity alters electrosensory neuron synaptic strength in vitro but does not consistently predict changes in sensory tuning in vivo. J Neurophysiol 2023; 129:1127-1144. PMID: 37073981. PMCID: PMC10151048. DOI: 10.1152/jn.00498.2022.
Abstract
How do sensory systems optimize detection of behaviorally relevant stimuli when the sensory environment is constantly changing? We addressed the role of spike timing-dependent plasticity (STDP) in driving changes in synaptic strength in a sensory pathway and whether those changes in synaptic strength could alter sensory tuning. It is challenging to precisely control temporal patterns of synaptic activity in vivo and replicate those patterns in vitro in behaviorally relevant ways, which makes it difficult to connect STDP-induced changes in synaptic physiology to plasticity in sensory systems. Using the mormyrid species Brevimyrus niger and Brienomyrus brachyistius, which produce electric organ discharges for electrolocation and communication, we can precisely control the timing of synaptic input in vivo and replicate these same temporal patterns of synaptic input in vitro. In central electrosensory neurons in the electric communication pathway, using whole-cell intracellular recordings in vitro, we paired presynaptic input with postsynaptic spiking at different delays. Using whole-cell intracellular recordings in awake, behaving fish, we paired sensory stimulation with postsynaptic spiking using the same delays. We found that Hebbian STDP predictably alters sensory tuning in vitro and is mediated by NMDA receptors. However, the change in synaptic responses induced by sensory stimulation in vivo did not adhere to the direction predicted by the STDP observed in vitro. Further analysis suggests that this difference is influenced by polysynaptic activity, including inhibitory interneurons. Our findings suggest that STDP rules operating at identified synapses may not drive predictable changes in sensory responses at the circuit level.
NEW & NOTEWORTHY We replicated behaviorally relevant temporal patterns of synaptic activity in vitro and used the same patterns during sensory stimulation in vivo. There was a Hebbian spike timing-dependent plasticity (STDP) pattern in vitro, but sensory responses in vivo did not shift according to STDP predictions. Analysis suggests that this disparity is influenced by differences in polysynaptic activity, including inhibitory interneurons. These results suggest that STDP rules at synapses in vitro do not necessarily apply to circuits in vivo.
Affiliation(s)
- Adalee J Lube
- Department of Biology, Washington University in St. Louis, St. Louis, Missouri, United States
- Xiaofeng Ma
- Department of Biology, Washington University in St. Louis, St. Louis, Missouri, United States
- Bruce A Carlson
- Department of Biology, Washington University in St. Louis, St. Louis, Missouri, United States
13
Song P, Zhai Y, Yu X. Stimulus-Specific Adaptation (SSA) in the Auditory System: Functional Relevance and Underlying Mechanisms. Neurosci Biobehav Rev 2023; 149:105190. PMID: 37085022. DOI: 10.1016/j.neubiorev.2023.105190.
Abstract
Rapid detection of novel stimuli that appear suddenly in the surrounding environment is crucial for an animal's survival. Stimulus-specific adaptation (SSA) may be an important mechanism underlying novelty detection. In this review, we discuss the latest advances in SSA research by addressing four main aspects: 1) the frequency dependence of SSA and the origin of SSA in the auditory cortex; 2) spatial SSA and its comparison with frequency SSA; 3) feature integration in SSA and its implications for novelty detection; and 4) the functional significance and physiological mechanisms of SSA. Although SSA has been extensively investigated, the cognitive insights from SSA studies remain extremely limited. Future work should aim to bridge these gaps.
Affiliation(s)
- Peirun Song
- Department of Anesthesia, Women's Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang Province, China; Zhejiang Provincial Key Laboratory of Precision Diagnosis and Therapy for Major Gynecological Diseases, Women's Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China; Department of Anesthesiology, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, China; Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, Zhejiang Province, China
- Yuying Zhai
- Department of Anesthesia, Women's Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang Province, China
- Xiongjie Yu
- Department of Anesthesia, Women's Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang Province, China; Zhejiang Provincial Key Laboratory of Precision Diagnosis and Therapy for Major Gynecological Diseases, Women's Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China; Department of Anesthesiology, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, China; Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, Zhejiang Province, China
14
Willmore BDB, King AJ. Adaptation in auditory processing. Physiol Rev 2023; 103:1025-1058. [PMID: 36049112 PMCID: PMC9829473 DOI: 10.1152/physrev.00011.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023] Open
Abstract
Adaptation is an essential feature of auditory neurons, which reduces their responses to unchanging and recurring sounds and allows their response properties to be matched to the constantly changing statistics of sounds that reach the ears. As a consequence, processing in the auditory system highlights novel or unpredictable sounds and produces an efficient representation of the vast range of sounds that animals can perceive by continually adjusting the sensitivity and, to a lesser extent, the tuning properties of neurons to the most commonly encountered stimulus values. Together with attentional modulation, adaptation to sound statistics also helps to generate neural representations of sound that are tolerant to background noise and therefore plays a vital role in auditory scene analysis. In this review, we consider the diverse forms of adaptation that are found in the auditory system in terms of the processing levels at which they arise, the underlying neural mechanisms, and their impact on neural coding and perception. We also ask what the dynamics of adaptation, which can occur over multiple timescales, reveal about the statistical properties of the environment. Finally, we examine how adaptation to sound statistics is influenced by learning and experience and changes as a result of aging and hearing loss.
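One simple way to picture the gain adjustments described above is a divisive gain driven by a running estimate of recent stimulus level, so that sensitivity is continually re-matched to the prevailing statistics. A toy sketch, not from the review (the time constant and names are illustrative):

```python
def adaptive_gain(samples, tau=0.1):
    """Divisive gain control driven by an exponentially weighted estimate
    of recent stimulus magnitude: sustained loud input lowers the gain,
    so the output settles into a fixed operating range, while a sudden
    change transiently escapes the adapted gain (a novelty-like response)."""
    level = 1e-6           # running mean absolute stimulus level
    out = []
    for x in samples:
        level += tau * (abs(x) - level)   # leaky estimate of recent level
        out.append(x / (level + 1e-6))    # divisive normalization
    return out
```

With a constant input, the output starts large (onset emphasis) and decays toward a fixed value as the level estimate converges, mirroring the reduced responses to unchanging sounds described above.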
Affiliation(s)
- Ben D. B. Willmore
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J. King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
15
Mischler G, Keshishian M, Bickel S, Mehta AD, Mesgarani N. Deep neural networks effectively model neural adaptation to changing background noise and suggest nonlinear noise filtering methods in auditory cortex. Neuroimage 2023; 266:119819. [PMID: 36529203 PMCID: PMC10510744 DOI: 10.1016/j.neuroimage.2022.119819] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2022] [Revised: 11/28/2022] [Accepted: 12/15/2022] [Indexed: 12/23/2022] Open
Abstract
The human auditory system displays a robust capacity to adapt to sudden changes in background noise, allowing for continuous speech comprehension despite changes in background environments. However, despite comprehensive studies characterizing this ability, the computations that underlie this process are not well understood. The first step towards understanding a complex system is to propose a suitable model, but the classical and easily interpreted model for the auditory system, the spectro-temporal receptive field (STRF), cannot match the nonlinear neural dynamics involved in noise adaptation. Here, we utilize a deep neural network (DNN) to model neural adaptation to noise, illustrating its effectiveness at reproducing the complex dynamics at the levels of both individual electrodes and the cortical population. By closely inspecting the model's STRF-like computations over time, we find that the model alters both the gain and shape of its receptive field when adapting to a sudden noise change. We show that the DNN model's gain changes allow it to perform adaptive gain control, while the spectro-temporal change creates noise filtering by altering the inhibitory region of the model's receptive field. Further, we find that models of electrodes in nonprimary auditory cortex also exhibit noise filtering changes in their excitatory regions, suggesting differences in noise filtering mechanisms along the cortical hierarchy. These findings demonstrate the capability of deep neural networks to model complex neural adaptation and offer new hypotheses about the computations the auditory cortex performs to enable noise-robust speech perception in real-world, dynamic environments.
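For reference, the classical STRF model that this abstract contrasts against predicts a neuron's firing rate as a fixed linear weighting of the recent spectrogram history. A minimal sketch of that linear prediction (array shapes and names are illustrative):

```python
import numpy as np

def strf_predict(strf, spectrogram):
    """Linear STRF prediction: at each time bin the rate is the sum of the
    STRF weights (n_freq x n_lag) applied elementwise to the preceding
    n_lag spectrogram columns, with lag 0 aligned to the current bin."""
    n_freq, n_lag = strf.shape
    n_time = spectrogram.shape[1]
    rates = np.zeros(n_time)
    for t in range(n_lag - 1, n_time):
        history = spectrogram[:, t - n_lag + 1 : t + 1]  # oldest .. current
        rates[t] = np.sum(strf * history[:, ::-1])       # reverse so lag 0 = current
    return rates
```

Because the weights are fixed, this model cannot change its gain or inhibitory structure when the background noise changes, which is exactly the limitation the DNN analysis above addresses.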
Affiliation(s)
- Gavin Mischler
- Mortimer B. Zuckerman Mind Brain Behavior, Columbia University, New York, United States; Department of Electrical Engineering, Columbia University, New York, United States
- Menoua Keshishian
- Mortimer B. Zuckerman Mind Brain Behavior, Columbia University, New York, United States; Department of Electrical Engineering, Columbia University, New York, United States
- Stephan Bickel
- Hofstra Northwell School of Medicine, Manhasset, New York, United States
- Ashesh D Mehta
- Hofstra Northwell School of Medicine, Manhasset, New York, United States
- Nima Mesgarani
- Mortimer B. Zuckerman Mind Brain Behavior, Columbia University, New York, United States; Department of Electrical Engineering, Columbia University, New York, United States
16
Cullen KE, Chacron MJ. Neural substrates of perception in the vestibular thalamus during natural self-motion: A review. Curr Res Neurobiol 2023; 4:100073. [PMID: 36926598 PMCID: PMC10011815 DOI: 10.1016/j.crneur.2023.100073] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2022] [Revised: 12/01/2022] [Accepted: 01/03/2023] [Indexed: 01/13/2023] Open
Abstract
Accumulating evidence across multiple sensory modalities suggests that the thalamus does not simply relay information from the periphery to the cortex. Here we review recent findings showing that vestibular neurons within the ventral posteriolateral area of the thalamus perform nonlinear transformations on their afferent input that determine our subjective awareness of motion. Specifically, these neurons provide a substrate for previous psychophysical observations that perceptual discrimination thresholds are much better than predictions from Weber's law. This is because neural discrimination thresholds, which are determined from both variability and sensitivity, initially increase but then saturate with increasing stimulus amplitude, thereby matching the previously observed dependency of perceptual self-motion discrimination thresholds. Moreover, neural response dynamics give rise to unambiguous and optimized encoding of natural but not artificial stimuli. Finally, vestibular thalamic neurons selectively encode passively applied motion when occurring concurrently with voluntary (i.e., active) movements. Taken together, these results show that the vestibular thalamus plays an essential role towards generating motion perception as well as shaping our vestibular sense of agency that is not simply inherited from afferent input.
Affiliation(s)
- Kathleen E Cullen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, USA; Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, USA; Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, USA; Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, USA
17
Valderrama JT, de la Torre A, McAlpine D. The hunt for hidden hearing loss in humans: From preclinical studies to effective interventions. Front Neurosci 2022; 16:1000304. [PMID: 36188462 PMCID: PMC9519997 DOI: 10.3389/fnins.2022.1000304] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2022] [Accepted: 08/29/2022] [Indexed: 11/30/2022] Open
Abstract
Many individuals experience hearing problems that are hidden under a normal audiogram. This not only impacts individual sufferers, but also clinicians, who can offer little in the way of support. Animal studies using invasive methodologies have produced solid evidence for a range of pathologies underlying this hidden hearing loss (HHL), including cochlear synaptopathy, auditory nerve demyelination, elevated central gain, and neural mal-adaptation. Despite progress in pre-clinical models, evidence supporting the existence of HHL in humans remains inconclusive, and clinicians lack any non-invasive biomarkers sensitive to HHL, as well as a standardized protocol to manage hearing problems in the absence of elevated hearing thresholds. Here, we review animal models of HHL as well as the ongoing research for tools with which to diagnose and manage hearing difficulties associated with HHL. We also discuss new research opportunities facilitated by recent methodological tools that may overcome a series of barriers that have hampered meaningful progress in diagnosing and treating HHL.
Affiliation(s)
- Joaquin T. Valderrama
- National Acoustic Laboratories, Sydney, NSW, Australia
- Department of Linguistics, Macquarie University Hearing, Macquarie University, Sydney, NSW, Australia
- Angel de la Torre
- Department of Signal Theory, Telematics and Communications, University of Granada, Granada, Spain
- Research Centre for Information and Communications Technologies (CITIC-UGR), University of Granada, Granada, Spain
- David McAlpine
- Department of Linguistics, Macquarie University Hearing, Macquarie University, Sydney, NSW, Australia
18
Agrawal T, Schachner A. Hearing water temperature: characterizing the development of nuanced perception of sound sources. Dev Sci 2022; 26:e13321. [PMID: 36068928 DOI: 10.1111/desc.13321] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Revised: 08/18/2022] [Accepted: 08/22/2022] [Indexed: 11/30/2022]
Abstract
Without conscious thought, listeners link events in the world to sounds they hear. We study one surprising example: adults can judge the temperature of water simply from hearing it being poured. We test the development of the ability to hear water temperature, with the goal of informing developmental theories regarding the origins and cognitive bases of nuanced sound-source judgments. We first confirmed that adults accurately distinguished the sounds of hot and cold water (pre-registered Exps. 1, 2; total N = 384), even though many were unaware or uncertain of this ability. By contrast, children showed protracted development of this skill over the course of middle childhood (Exps. 2, 3; total N = 178). In spite of accurately identifying other sounds and hot/cold images, older children (7-11 years) but not younger children (3-6 years) reliably distinguished the sounds of hot and cold water. Accuracy increased with age; 11-year-olds' performance was similar to adults'. Adults also showed individual differences in accuracy that were predicted by their amount of prior relevant experience (Exp. 1). Experience may similarly play a role in children's performance; differences in auditory sensitivity and multimodal integration may also contribute to young children's failures. The ability to hear water temperature thus develops slowly over childhood, such that nuanced auditory information that is easily and quickly accessible to adults is not available to guide young children's behavior.
Affiliation(s)
- Adena Schachner
- Department of Psychology, University of California, San Diego, USA
19
Andreeva IG, Ogorodnikova EA. Auditory Adaptation to Speech Signal Characteristics. J Evol Biochem Physiol 2022. [DOI: 10.1134/s0022093022050027] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
20
Jennings SG, Dominguez J. Firing Rate Adaptation of the Human Auditory Nerve Optimizes Neural Signal-to-Noise Ratios. J Assoc Res Otolaryngol 2022; 23:365-378. [PMID: 35254540 PMCID: PMC9085988 DOI: 10.1007/s10162-022-00841-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2021] [Accepted: 02/14/2022] [Indexed: 10/18/2022] Open
Abstract
Several physiological mechanisms act on the response of the auditory nerve (AN) during acoustic stimulation, resulting in an adjustment in auditory gain. These mechanisms include, but are not limited to, firing rate adaptation, dynamic range adaptation, the middle ear muscle reflex, and the medial olivocochlear reflex. A potential role of these mechanisms is to improve the neural signal-to-noise ratio (SNR) at the output of the AN in real time. This study tested the hypothesis that neural SNRs, inferred from non-invasive assessment of the human AN, improve over the duration of acoustic stimulation. Cochlear potentials were measured in response to a series of six high-level clicks embedded in a series of six lower-level broadband noise bursts. This paradigm elicited a compound action potential (CAP) in response to each click and to the onset of each noise burst. The ratio of CAP amplitudes elicited by each click and noise burst pair (i.e., neural SNR) was tracked over the six click/noise bursts. The main finding was a rapid (< 24 ms) increase in neural SNR from the first to the second click/noise burst, consistent with a real-time adjustment in the response of the auditory periphery toward improving the SNR of the signal transmitted to the brainstem. Analysis of cochlear microphonic and ear canal sound pressure recordings, as well as the time course for this improvement in neural SNR, supports the conclusion that firing rate adaptation is likely the primary mechanism responsible for improving neural SNR, while dynamic range adaptation, the middle ear muscle reflex, and the medial olivocochlear reflex played a secondary role in the effects observed in this study. Real-time improvements in neural SNR are significant because they may be essential for robust encoding of speech and other relevant stimuli in the presence of background noise.
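The neural SNR described here is simply the ratio of the click-evoked to the noise-onset-evoked CAP amplitude within each pair, tracked across the six pairs. A minimal sketch of that computation, with a dB conversion added for convenience (the function name and dB expression are illustrative, not taken from the paper):

```python
import math

def neural_snr_db(cap_click, cap_noise):
    """Per click/noise-burst pair: ratio of CAP amplitudes (click / noise
    onset), expressed in dB. A rising sequence across successive pairs
    indicates improving neural SNR over the course of stimulation."""
    return [20.0 * math.log10(c / n) for c, n in zip(cap_click, cap_noise)]
```

For example, if the click-evoked CAP holds steady while the noise-onset CAP adapts downward across bursts, the returned sequence increases, which is the signature reported in the study.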
Affiliation(s)
- Skyler G Jennings
- Department of Communication Sciences and Disorders, The University of Utah, 390 South, 1530 East, BEHS 1201, Salt Lake City, UT, 84112, USA
- Juan Dominguez
- Department of Communication Sciences and Disorders, The University of Utah, 390 South, 1530 East, BEHS 1201, Salt Lake City, UT, 84112, USA
21
Ivanov AZ, King AJ, Willmore BDB, Walker KMM, Harper NS. Cortical adaptation to sound reverberation. eLife 2022; 11:75090. [PMID: 35617119 PMCID: PMC9213001 DOI: 10.7554/elife.75090] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2021] [Accepted: 05/25/2022] [Indexed: 11/13/2022] Open
Abstract
In almost every natural environment, sounds are reflected by nearby objects, producing many delayed and distorted copies of the original sound, known as reverberation. Our brains usually cope well with reverberation, allowing us to recognize sound sources regardless of their environments. In contrast, reverberation can cause severe difficulties for speech recognition algorithms and hearing-impaired people. The present study examines how the auditory system copes with reverberation. We trained a linear model to recover a rich set of natural, anechoic sounds from their simulated reverberant counterparts. The model neurons achieved this by extending the inhibitory component of their receptive filters for more reverberant spaces, and did so in a frequency-dependent manner. These predicted effects were observed in the responses of auditory cortical neurons of ferrets in the same simulated reverberant environments. Together, these results suggest that auditory cortical neurons adapt to reverberation by adjusting their filtering properties in a manner consistent with dereverberation.
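The dereverberation model described above is, at its core, a regularized linear regression from the recent history of the reverberant cochleagram onto the anechoic target. A simplified sketch of that fitting step (the ridge formulation, names, and shapes are assumptions for illustration, not the authors' exact code):

```python
import numpy as np

def fit_dereverb_filters(reverb, anechoic, n_lag=5, reg=1e-3):
    """Fit linear weights mapping the last n_lag reverberant cochleagram
    frames (all frequency channels) onto the current anechoic frame,
    via ridge regression. Inputs are (n_freq, n_time) arrays; returns
    W of shape (n_freq * n_lag, n_freq)."""
    n_freq, n_time = reverb.shape
    # Design matrix of lagged reverberant frames, one row per time bin.
    X = np.stack([reverb[:, t - n_lag + 1 : t + 1].ravel()
                  for t in range(n_lag - 1, n_time)])
    Y = anechoic[:, n_lag - 1 :].T          # time-aligned anechoic targets
    # Closed-form ridge solution: (X'X + reg*I)^(-1) X'Y
    return np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ Y)
```

Inspecting the fitted weights as a function of lag, per frequency channel, is what reveals the extended inhibitory components for more reverberant rooms that the study compares against cortical receptive fields.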
Affiliation(s)
- Aleksandar Z Ivanov
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Ben D B Willmore
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Kerry M M Walker
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Nicol S Harper
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
22
Carriot J, McAllister G, Hooshangnejad H, Mackrous I, Cullen KE, Chacron MJ. Sensory adaptation mediates efficient and unambiguous encoding of natural stimuli by vestibular thalamocortical pathways. Nat Commun 2022; 13:2612. [PMID: 35551186 PMCID: PMC9098492 DOI: 10.1038/s41467-022-30348-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2021] [Accepted: 04/26/2022] [Indexed: 11/09/2022] Open
Abstract
Sensory systems must continuously adapt to optimally encode stimuli encountered within the natural environment. The prevailing view is that such optimal coding comes at the cost of increased ambiguity, yet to date, prior studies have focused on artificial stimuli. Accordingly, here we investigated whether such a trade-off between optimality and ambiguity exists in the encoding of natural stimuli in the vestibular system. We recorded vestibular nuclei and their target vestibular thalamocortical neurons during naturalistic and artificial self-motion stimulation. Surprisingly, we found no trade-off between optimality and ambiguity. Using computational methods, we demonstrate that thalamocortical neural adaptation in the form of contrast gain control actually reduces coding ambiguity without compromising the optimality of coding under naturalistic but not artificial stimulation. Thus, taken together, our results challenge the common wisdom that adaptation leads to ambiguity and instead suggest an essential role in underlying unambiguous optimized encoding of natural stimuli.
Affiliation(s)
- Jerome Carriot
- Department of Physiology, McGill University, Montréal, Canada
- Hamed Hooshangnejad
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, USA
- Kathleen E Cullen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, USA; Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, USA; Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, USA; Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, USA
23
Crozier RA, Wismer ZQ, Parra-Munevar J, Plummer MR, Davis RL. Amplification of input differences by dynamic heterogeneity in the spiral ganglion. J Neurophysiol 2022; 127:1317-1333. [PMID: 35389760 PMCID: PMC9054264 DOI: 10.1152/jn.00544.2021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2021] [Revised: 03/30/2022] [Accepted: 03/30/2022] [Indexed: 11/22/2022] Open
Abstract
A defining feature of type I primary auditory afferents that compose ∼95% of the spiral ganglion is their intrinsic electrophysiological heterogeneity. This diversity is evident both between and within unitary, rapid, and slow adaptation (UA, RA, and SA) classes indicative of specializations designed to shape sensory receptor input. But to what end? Our initial impulse is to expect the opposite: that auditory afferents fire uniformly to represent acoustic stimuli with accuracy and high fidelity. Yet this is clearly not the case. One explanation for this neural signaling strategy is to coordinate a system in which differences between input stimuli are amplified. If this is correct, then stimulus disparity enhancements within the primary afferents should be transmitted seamlessly into auditory processing pathways that utilize population coding for difference detection. Using sound localization as an example, one would expect to observe separately regulated differences in intensity level compared with timing or spectral cues within a graded tonotopic distribution. This possibility was evaluated by examining the neuromodulatory effects of cAMP on immature neurons with high excitability and slow membrane kinetics. We found that electrophysiological correlates of intensity and timing were indeed independently regulated and tonotopically distributed, depending on intracellular cAMP signaling level. These observations, therefore, are indicative of a system in which differences between signaling elements of individual stimulus attributes are systematically amplified according to auditory processing constraints. Thus, dynamic heterogeneity mediated by cAMP in the spiral ganglion has the potential to enhance the representations of stimulus input disparities transmitted into higher level difference detection circuitry. NEW & NOTEWORTHY: Can changes in intracellular second messenger signaling within primary auditory afferents shift our perception of sound? Results presented herein lead to this conclusion. We found that intracellular cAMP signaling level systematically altered the kinetics and excitability of primary auditory afferents, exemplifying how dynamic heterogeneity can enhance differences between electrophysiological correlates of timing and intensity.
Affiliation(s)
- Zachary Q Wismer
- AtlantiCare Regional Medical Center, Department of Family Medicine, Atlantic City, New Jersey
- Jeffrey Parra-Munevar
- Department of Cell Biology and Neuroscience, Rutgers University, Piscataway, New Jersey
- Mark R Plummer
- Department of Cell Biology and Neuroscience, Rutgers University, Piscataway, New Jersey
- Robin L Davis
- Department of Cell Biology and Neuroscience, Rutgers University, Piscataway, New Jersey
24
Calapai A, Cabrera-Moreno J, Moser T, Jeschke M. Flexible auditory training, psychophysics, and enrichment of common marmosets with an automated, touchscreen-based system. Nat Commun 2022; 13:1648. [PMID: 35347139 PMCID: PMC8960775 DOI: 10.1038/s41467-022-29185-9] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Accepted: 02/28/2022] [Indexed: 11/09/2022] Open
Abstract
Devising new and more efficient protocols to analyze the phenotypes of non-human primates, as well as their complex nervous systems, is rapidly becoming of paramount importance. This is because genome-editing techniques, recently adapted to non-human primates, have established new animal models for fundamental and translational research. One aspect in particular, cognitive hearing, has been difficult to assess compared with visual cognition. To address this, we devised autonomous, standardized, and unsupervised training and testing of the auditory capabilities of common marmosets with a cage-based, standalone, wireless system. All marmosets tested voluntarily operated the device on a daily basis and went from naïve to experienced at their own pace and with ease. Through a series of experiments, we show here that animals autonomously learn to associate sounds with images, to flexibly discriminate sounds, and to detect sounds of varying loudness. The developed platform and training principles combine in-cage training of common marmosets for cognitive and psychoacoustic assessment with an enriched environment that does not rely on dietary restriction or social separation, in compliance with the 3Rs principle.
Affiliation(s)
- A Calapai
- Cognitive Neuroscience Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Göttingen, Germany; Cognitive Hearing in Primates (CHiP) Group, Auditory Neuroscience and Optogenetics Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Göttingen, Germany; Auditory Neuroscience and Optogenetics Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Göttingen, Germany; Leibniz ScienceCampus "Primate Cognition", Göttingen, Germany
- J Cabrera-Moreno
- Cognitive Hearing in Primates (CHiP) Group, Auditory Neuroscience and Optogenetics Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Göttingen, Germany; Auditory Neuroscience and Optogenetics Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Göttingen, Germany; Institute for Auditory Neuroscience and InnerEarLab, University Medical Center Göttingen, 37075, Göttingen, Germany; Göttingen Graduate School for Neurosciences, Biophysics and Molecular Biosciences, University of Göttingen, 37075, Göttingen, Germany
- T Moser
- Auditory Neuroscience and Optogenetics Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Göttingen, Germany; Institute for Auditory Neuroscience and InnerEarLab, University Medical Center Göttingen, 37075, Göttingen, Germany; Göttingen Graduate School for Neurosciences, Biophysics and Molecular Biosciences, University of Göttingen, 37075, Göttingen, Germany; Auditory Neuroscience Group and Synaptic Nanophysiology Group, Max Planck Institute for Multidisciplinary Sciences, 37077, Göttingen, Germany; Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, 37075, Göttingen, Germany
- M Jeschke
- Cognitive Hearing in Primates (CHiP) Group, Auditory Neuroscience and Optogenetics Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Göttingen, Germany; Auditory Neuroscience and Optogenetics Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Göttingen, Germany; Leibniz ScienceCampus "Primate Cognition", Göttingen, Germany; Institute for Auditory Neuroscience and InnerEarLab, University Medical Center Göttingen, 37075, Göttingen, Germany
25
Marrufo-Pérez MI, Lopez-Poveda EA. Adaptation to noise in normal and impaired hearing. J Acoust Soc Am 2022; 151:1741. [PMID: 35364964 DOI: 10.1121/10.0009802] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/06/2021] [Accepted: 02/26/2022] [Indexed: 06/14/2023]
Abstract
Many aspects of hearing function are negatively affected by background noise. Listeners, however, have some ability to adapt to background noise. For instance, the detection of pure tones and the recognition of isolated words embedded in noise can improve gradually as tones and words are delayed a few hundred milliseconds in the noise. While some evidence suggests that adaptation to noise could be mediated by the medial olivocochlear reflex, adaptation can occur for people who do not have a functional reflex. Since adaptation can facilitate hearing in noise, and hearing in noise is often harder for hearing-impaired than for normal-hearing listeners, it is conceivable that adaptation is impaired with hearing loss. It remains unclear, however, if and to what extent this is the case, or whether impaired adaptation contributes to the greater difficulties experienced by hearing-impaired listeners understanding speech in noise. Here, we review adaptation to noise, the mechanisms potentially contributing to this adaptation, and factors that might reduce the ability to adapt to background noise, including cochlear hearing loss, cochlear synaptopathy, aging, and noise exposure. The review highlights few knowns and many unknowns about adaptation to noise, and thus paves the way for further research on this topic.
Affiliation(s)
- Miriam I Marrufo-Pérez
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, 37007 Salamanca, Spain
- Enrique A Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, 37007 Salamanca, Spain
26
Auerbach BD, Gritton HJ. Hearing in Complex Environments: Auditory Gain Control, Attention, and Hearing Loss. Front Neurosci 2022; 16:799787. [PMID: 35221899 PMCID: PMC8866963 DOI: 10.3389/fnins.2022.799787] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2021] [Accepted: 01/18/2022] [Indexed: 12/12/2022] Open
Abstract
Listening in noisy or complex sound environments is difficult for individuals with normal hearing and can be a debilitating impairment for those with hearing loss. Extracting meaningful information from a complex acoustic environment requires the ability to accurately encode specific sound features under highly variable listening conditions and segregate distinct sound streams from multiple overlapping sources. The auditory system employs a variety of mechanisms to achieve this auditory scene analysis. First, neurons across levels of the auditory system exhibit compensatory adaptations to their gain and dynamic range in response to prevailing sound stimulus statistics in the environment. These adaptations allow for robust representations of sound features that are to a large degree invariant to the level of background noise. Second, listeners can selectively attend to a desired sound target in an environment with multiple sound sources. This selective auditory attention is another form of sensory gain control, enhancing the representation of an attended sound source while suppressing responses to unattended sounds. This review will examine both “bottom-up” gain alterations in response to changes in environmental sound statistics as well as “top-down” mechanisms that allow for selective extraction of specific sound features in a complex auditory scene. Finally, we will discuss how hearing loss interacts with these gain control mechanisms, and the adaptive and/or maladaptive perceptual consequences of this plasticity.
Affiliation(s)
- Benjamin D. Auerbach: Department of Molecular and Integrative Physiology, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, United States; Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, United States. *Correspondence: Benjamin D. Auerbach
- Howard J. Gritton: Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, United States; Department of Comparative Biosciences, University of Illinois at Urbana-Champaign, Urbana, IL, United States; Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL, United States
27
Noise exposure levels predict blood levels of the inner ear protein prestin. Sci Rep 2022; 12:1154. [PMID: 35064195] [PMCID: PMC8783004] [DOI: 10.1038/s41598-022-05131-z]
Abstract
Serological biomarkers of inner ear proteins are a promising new approach for studying human hearing. Here, we focus on the serological measurement of prestin, a protein integral to a human’s highly sensitive hearing, expressed in cochlear outer hair cells (OHCs). Building from recent nonhuman studies that associated noise-induced OHC trauma with reduced serum prestin levels, and studies suggesting subclinical hearing damage in humans regularly engaging in noisy activities, we investigated the relation between serum prestin levels and environmental noise levels in young adults with normal clinical audiograms. We measured prestin protein levels from circulating blood and collected noise level data multiple times over the course of the experiment using body-worn sound recorders. Results indicate that serum prestin levels have a negative relation with noise exposure: individuals with higher routine noise exposure levels tended to have lower prestin levels. Moreover, when grouping participants based on their risk for a clinically-significant noise-induced hearing loss, we found that prestin levels differed significantly between groups, even though behavioral hearing thresholds were similar. We discuss possible interpretations for our findings including whether lower serum levels may reflect subclinical levels of OHC damage, or possibly an adaptive, protective mechanism in which prestin expression is downregulated in response to loud environments.
28
Sound level context modulates neural activity in the human brainstem. Sci Rep 2021; 11:22581. [PMID: 34799632] [PMCID: PMC8605015] [DOI: 10.1038/s41598-021-02055-y]
Abstract
Optimal perception requires adaptation to sounds in the environment. Adaptation involves representing the acoustic stimulation history in neural response patterns, for example, by altering response magnitude or latency as sound-level context changes. Neurons in the auditory brainstem of rodents are sensitive to acoustic stimulation history and sound-level context (often referred to as sensitivity to stimulus statistics), but the degree to which the human brainstem exhibits such neural adaptation is unclear. In six electroencephalography experiments with over 125 participants, we demonstrate that the response latency of the human brainstem is sensitive to the history of acoustic stimulation over a few tens of milliseconds. We further show that human brainstem responses adapt to sound-level context over at least the last 44 ms, but that neural sensitivity to sound-level context decreases when the time window over which acoustic stimuli need to be integrated becomes wider. Our study thus provides evidence of adaptation to sound-level context in the human brainstem and of the timescale over which sound-level information affects neural responses to sound. The research delivers an important link to studies on neural adaptation in non-human animals.
29
Abstract
Perception adapts to the properties of prior stimulation, as illustrated by phenomena such as visual color constancy or speech context effects. In the auditory domain, only little is known about adaptive processes when it comes to the attribute of auditory brightness. Here, we report an experiment that tests whether listeners adapt to spectral colorations imposed on naturalistic music and speech excerpts. Our results indicate consistent contrastive adaptation of auditory brightness judgments on a trial-by-trial basis. The pattern of results suggests that these effects tend to grow with an increase in the duration of the adaptor context but level off after around 8 trials of 2 s duration. A simple model of the response criterion yields a correlation of r = .97 with the measured data and corroborates the notion that brightness perception adapts on timescales that fall in the range of auditory short-term memory. Effects turn out to be similar for spectral filtering based on linear spectral filter slopes and filtering based on a measured transfer function from a commercially available hearing device. Overall, our findings demonstrate the adaptivity of auditory brightness perception under realistic acoustical conditions.
Affiliation(s)
- Kai Siedenburg: Department of Medical Physics and Acoustics, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Feline Malin Barg: Department of Medical Physics and Acoustics, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Henning Schepker: Department of Medical Physics and Acoustics, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany; Starkey Hearing, Eden Prairie, MN, USA
30
Abstract
A central goal of neuroscience is to understand the representations formed by brain activity patterns and their connection to behaviour. The classic approach is to investigate how individual neurons encode stimuli and how their tuning determines the fidelity of the neural representation. Tuning analyses often use the Fisher information to characterize the sensitivity of neural responses to small changes of the stimulus. In recent decades, measurements of large populations of neurons have motivated a complementary approach, which focuses on the information available to linear decoders. The decodable information is captured by the geometry of the representational patterns in the multivariate response space. Here we review neural tuning and representational geometry with the goal of clarifying the relationship between them. The tuning induces the geometry, but different sets of tuned neurons can induce the same geometry. The geometry determines the Fisher information, the mutual information and the behavioural performance of an ideal observer in a range of psychophysical tasks. We argue that future studies can benefit from considering both tuning and geometry to understand neural codes and reveal the connections between stimuli, brain activity and behaviour.
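The link between tuning and decodable information described above rests on a standard closed-form result that is easy to state concretely: for a population of independent Poisson neurons with tuning curves f_i(s), the Fisher information is J(s) = Σ_i f_i'(s)² / f_i(s), and the Cramer-Rao bound 1/√J(s) caps the precision of any unbiased decoder. The Python sketch below is a generic textbook illustration, not code from the reviewed article; the function name, Gaussian tuning shape, and parameter values are all assumptions for the example.

```python
import numpy as np

def fisher_info_poisson(s, centers, width=1.0, gain=10.0):
    """Fisher information J(s) = sum_i f_i'(s)^2 / f_i(s) for independent
    Poisson neurons with Gaussian tuning f_i(s) = gain*exp(-(s-c_i)^2/(2w^2))."""
    f = gain * np.exp(-(s - centers) ** 2 / (2.0 * width ** 2))
    f_prime = f * (centers - s) / width ** 2      # derivative of each tuning curve
    return float(np.sum(f_prime ** 2 / f))

centers = np.linspace(-5.0, 5.0, 21)              # preferred stimuli of 21 neurons
J = fisher_info_poisson(0.0, centers)
threshold = 1.0 / np.sqrt(J)                      # Cramer-Rao bound on decoding sd
```

Because each term f_i'(s)²/f_i(s) is linear in the gain, doubling the firing-rate gain doubles J and shrinks the discrimination threshold by √2, one instance of the tuning-determines-geometry relationship the review develops.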
31
Adibi M, Lampl I. Sensory Adaptation in the Whisker-Mediated Tactile System: Physiology, Theory, and Function. Front Neurosci 2021; 15:770011. [PMID: 34776857] [PMCID: PMC8586522] [DOI: 10.3389/fnins.2021.770011]
Abstract
In the natural environment, organisms are constantly exposed to a continuous stream of sensory input. The dynamics of sensory input change with the organism's behaviour and environmental context. The contextual variations may induce a >100-fold change in the parameters of the stimulation that an animal experiences. Thus, it is vital for the organism to adapt to the new diet of stimulation. The response properties of neurons, in turn, dynamically adjust to the prevailing properties of sensory stimulation, a process known as "neuronal adaptation." Neuronal adaptation is a ubiquitous phenomenon across all sensory modalities and occurs at different stages of processing from periphery to cortex. In spite of the wealth of research on contextual modulation and neuronal adaptation in the visual and auditory systems, the neuronal and computational basis of sensory adaptation in the somatosensory system is less understood. Here, we summarise recent findings and views about neuronal adaptation in the rodent whisker-mediated tactile system, together with the functional effects of neuronal adaptation on the response dynamics and encoding efficiency of neurons at the single-cell and population levels. Based on the direct and indirect evidence presented here, we suggest that sensory adaptation provides context-dependent functional mechanisms for noise reduction in sensory processing, salience processing and deviant-stimulus detection, shifts between integration and coincidence detection, band-pass frequency filtering, adjustment of neuronal receptive fields, enhanced neural coding and improved discriminability around adapting stimuli, energy conservation, and disambiguation of the encoding of principal features of tactile stimuli.
Affiliation(s)
- Mehdi Adibi: Department of Physiology and Biomedicine Discovery Institute, Monash University, Clayton, VIC, Australia; Department of Neuroscience and Padova Neuroscience Center (PNC), University of Padova, Padova, Italy
- Ilan Lampl: Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
32
Dodda A, Das S. Demonstration of Stochastic Resonance, Population Coding, and Population Voting Using Artificial MoS2 Based Synapses. ACS Nano 2021; 15:16172-16182. [PMID: 34648278] [DOI: 10.1021/acsnano.1c05042]
Abstract
Fast detection of weak signals at low energy expenditure is a challenging but inescapable task for the evolutionary success of animals that survive in resource-constrained environments. This task is accomplished by the sensory nervous system by exploiting the synergy between three astounding neural phenomena, namely, stochastic resonance (SR), population coding (PC), and population voting (PV). In SR, the constructive role of synaptic noise is exploited for the detection of otherwise invisible signals. In PC, the redundancy in the neural population is exploited to reduce the detection latency. Finally, PV ensures unambiguous signal detection even in the presence of excessive noise. Here we adopt a similar strategy and experimentally demonstrate how a population of stochastic artificial neurons based on monolayer MoS2 field-effect transistors (FETs) can use an optimum amount of white Gaussian noise and population voting to detect invisible signals at a frugal energy expenditure (∼tens of nanojoules). Our findings can aid remote sensing in the emerging era of the Internet of Things (IoT), which thrives on energy efficiency.
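Stochastic resonance, the constructive role of noise described in this abstract, is straightforward to reproduce numerically. The sketch below is a generic illustration (not the paper's MoS2 device model; the sinusoid, threshold, and noise levels are assumed for the example): a subthreshold sinusoid is passed through a hard threshold, and we measure how well the binary output tracks the input. With too little noise the threshold never fires; with too much, the output is essentially random; an intermediate noise level transmits the signal best.

```python
import numpy as np

rng = np.random.default_rng(42)

def sr_correlation(noise_sd, amp=0.8, threshold=1.0, n=20000):
    """Correlation between a subthreshold sinusoid and the binary output
    of a hard-threshold 'neuron' driven by signal + Gaussian noise."""
    t = np.arange(n)
    signal = amp * np.sin(2 * np.pi * t / 100.0)     # peak 0.8 < threshold 1.0
    spikes = (signal + rng.normal(0.0, noise_sd, n) > threshold).astype(float)
    if spikes.std() == 0:                            # no threshold crossings at all
        return 0.0
    return float(np.corrcoef(signal, spikes)[0, 1])

# Signal transmission vs noise level: the hallmark inverted-U of SR
corrs = {sd: sr_correlation(sd) for sd in (0.01, 0.5, 5.0)}
```

With negligible noise the output carries no signal at all, while excessive noise swamps it; the correlation peaks at the intermediate level, which is the "optimum amount of white Gaussian noise" the abstract refers to.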
Affiliation(s)
- Akhil Dodda: Department of Engineering Science and Mechanics, Pennsylvania State University, University Park, Pennsylvania 16802, United States
- Saptarshi Das: Department of Engineering Science and Mechanics, Pennsylvania State University, University Park, Pennsylvania 16802, United States; Department of Materials Science and Engineering, Pennsylvania State University, University Park, Pennsylvania 16802, United States; Materials Research Institute, Pennsylvania State University, University Park, Pennsylvania 16802, United States
33
Salloom WB, Strickland EA. The effect of broadband elicitor laterality on psychoacoustic gain reduction across signal frequency. J Acoust Soc Am 2021; 150:2817. [PMID: 34717476] [PMCID: PMC8520488] [DOI: 10.1121/10.0006662]
Abstract
There are psychoacoustic methods thought to measure gain reduction, which may be from the medial olivocochlear reflex (MOCR), a bilateral feedback loop that adjusts cochlear gain. Although studies have used ipsilateral and contralateral elicitors and have examined strength at different signal frequencies, these factors have not been examined within a single study. Therefore, basic questions about gain reduction, such as the relative strength of ipsilateral vs contralateral elicitation and the relative strength across signal frequency, are not known. In the current study, gain reduction from ipsilateral, contralateral, and bilateral elicitors was measured at 1-, 2-, and 4-kHz signal frequencies using forward masking paradigms at a range of elicitor levels in a repeated measures design. Ipsilateral and bilateral strengths were similar and significantly larger than contralateral strength across signal frequencies. Growth of gain reduction with precursor level tended to differ with signal frequency, although not significantly. Data from previous studies are considered in light of the results of this study. Behavioral results are also considered relative to anatomical and physiological data on the MOCR. These results indicate that, in humans, cochlear gain reduction is broad across frequencies and is robust for ipsilateral and bilateral elicitation but small for contralateral elicitation.
Affiliation(s)
- William B Salloom: Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, Indiana 47907, USA
- Elizabeth A Strickland: Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, Indiana 47907, USA
34
Sun J, Wang Z, Tian X. Manual Gestures Modulate Early Neural Responses in Loudness Perception. Front Neurosci 2021; 15:634967. [PMID: 34539324] [PMCID: PMC8440995] [DOI: 10.3389/fnins.2021.634967]
Abstract
How different sensory modalities interact to shape perception is a fundamental question in cognitive neuroscience. Previous studies in audiovisual interaction have focused on abstract levels such as categorical representation (e.g., McGurk effect). It is unclear whether the cross-modal modulation can extend to low-level perceptual attributes. This study used motional manual gestures to test whether and how the loudness perception can be modulated by visual-motion information. Specifically, we implemented a novel paradigm in which participants compared the loudness of two consecutive sounds whose intensity changes around the just noticeable difference (JND), with manual gestures concurrently presented with the second sound. In two behavioral experiments and two EEG experiments, we investigated our hypothesis that the visual-motor information in gestures would modulate loudness perception. Behavioral results showed that the gestural information biased the judgment of loudness. More importantly, the EEG results demonstrated that early auditory responses around 100 ms after sound onset (N100) were modulated by the gestures. These consistent results in four behavioral and EEG experiments suggest that visual-motor processing can integrate with auditory processing at an early perceptual stage to shape the perception of a low-level perceptual attribute such as loudness, at least under challenging listening conditions.
Affiliation(s)
- Jiaqiu Sun: Division of Arts and Sciences, New York University Shanghai, Shanghai, China; NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, China
- Ziqing Wang: NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, China; Shanghai Key Laboratory of Brain Functional Genomics, Ministry of Education, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Xing Tian: Division of Arts and Sciences, New York University Shanghai, Shanghai, China; NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, China; Shanghai Key Laboratory of Brain Functional Genomics, Ministry of Education, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
35
Meirhaeghe N, Sohn H, Jazayeri M. A precise and adaptive neural mechanism for predictive temporal processing in the frontal cortex. Neuron 2021; 109:2995-3011.e5. [PMID: 34534456] [PMCID: PMC9737059] [DOI: 10.1016/j.neuron.2021.08.025]
Abstract
The theory of predictive processing posits that the brain computes expectations to process information predictively. Empirical evidence in support of this theory, however, is scarce and largely limited to sensory areas. Here, we report a precise and adaptive mechanism in the frontal cortex of non-human primates consistent with predictive processing of temporal events. We found that the speed of neural dynamics is precisely adjusted according to the average time of an expected stimulus. This speed adjustment, in turn, enables neurons to encode stimuli in terms of deviations from expectation. This lawful relationship was evident across multiple experiments and held true during learning: when temporal statistics underwent covert changes, neural responses underwent predictable changes that reflected the new mean. Together, these results highlight a precise mathematical relationship between temporal statistics in the environment and neural activity in the frontal cortex that may serve as a mechanism for predictive temporal processing.
Affiliation(s)
- Nicolas Meirhaeghe: Harvard-MIT Division of Health Sciences & Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Hansem Sohn: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Mehrdad Jazayeri: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA; Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
36
Souffi S, Nodal FR, Bajo VM, Edeline JM. When and How Does the Auditory Cortex Influence Subcortical Auditory Structures? New Insights About the Roles of Descending Cortical Projections. Front Neurosci 2021; 15:690223. [PMID: 34413722] [PMCID: PMC8369261] [DOI: 10.3389/fnins.2021.690223]
Abstract
For decades, the corticofugal descending projections have been anatomically well described, but their functional role remains a puzzling question. In this review, we first describe the contributions of neuronal networks to the representation of communication sounds under various types of degraded acoustic conditions, from the cochlear nucleus to the primary and secondary auditory cortex. In such situations, the discrimination abilities of collicular and thalamic neurons are clearly better than those of cortical neurons, although the latter remain largely unaffected by degraded acoustic conditions. Second, we report the functional effects of activating or inactivating corticofugal projections on the functional properties of subcortical neurons. In general, modest effects have been observed in anesthetized and in awake, passively listening, animals. In contrast, in behavioral tasks involving challenging conditions, behavioral performance was severely reduced by removing or transiently silencing the corticofugal descending projections. This suggests that the discriminative abilities of subcortical neurons may be sufficient in many acoustic situations. It is only in particularly challenging situations, whether due to task difficulty and/or degraded acoustic conditions, that the corticofugal descending connections bring additional abilities. Here, we propose that it is both the top-down influences from the prefrontal cortex and those from the neuromodulatory systems that allow the cortical descending projections to impact behavioral performance by reshaping the functional circuitry of subcortical structures. We propose potential scenarios to explain how, and under which circumstances, these projections impact subcortical processing and behavioral responses.
Affiliation(s)
- Samira Souffi: Department of Integrative and Computational Neurosciences, Paris-Saclay Institute of Neuroscience (NeuroPSI), UMR CNRS 9197, Paris-Saclay University, Orsay, France
- Fernando R Nodal: Department of Physiology, Anatomy and Genetics, Medical Sciences Division, University of Oxford, Oxford, United Kingdom
- Victoria M Bajo: Department of Physiology, Anatomy and Genetics, Medical Sciences Division, University of Oxford, Oxford, United Kingdom
- Jean-Marc Edeline: Department of Integrative and Computational Neurosciences, Paris-Saclay Institute of Neuroscience (NeuroPSI), UMR CNRS 9197, Paris-Saclay University, Orsay, France
37
Martelli C, Storace DA. Stimulus Driven Functional Transformations in the Early Olfactory System. Front Cell Neurosci 2021; 15:684742. [PMID: 34413724] [PMCID: PMC8369031] [DOI: 10.3389/fncel.2021.684742]
Abstract
Olfactory stimuli are encountered across a wide range of odor concentrations in natural environments. Defining the neural computations that support concentration invariant odor perception, odor discrimination, and odor-background segmentation across a wide range of stimulus intensities remains an open question in the field. In principle, adaptation could allow the olfactory system to adjust sensory representations to the current stimulus conditions, a well-known process in other sensory systems. However, surprisingly little is known about how adaptation changes olfactory representations and affects perception. Here we review the current understanding of how adaptation impacts processing in the first two stages of the vertebrate olfactory system, olfactory receptor neurons (ORNs), and mitral/tufted cells.
Affiliation(s)
- Carlotta Martelli: Institute of Developmental Biology and Neurobiology, University of Mainz, Mainz, Germany
- Douglas Anthony Storace: Department of Biological Science, Florida State University, Tallahassee, FL, United States; Program in Neuroscience, Florida State University, Tallahassee, FL, United States
38
Abstract
The perception of sensory events can be enhanced or suppressed by the surrounding spatial and temporal context in ways that facilitate the detection of novel objects and contribute to the perceptual constancy of those objects under variable conditions. In the auditory system, the phenomenon known as auditory enhancement reflects a general principle of contrast enhancement, in which a target sound embedded within a background sound becomes perceptually more salient if the background is presented first by itself. This effect is highly robust, producing an effective enhancement of the target of up to 25 dB (more than two orders of magnitude in intensity), depending on the task. Despite the importance of the effect, neural correlates of auditory contrast enhancement have yet to be identified in humans. Here, we used the auditory steady-state response to probe the neural representation of a target sound under conditions of enhancement. The probe was simultaneously modulated in amplitude with two modulation frequencies to distinguish cortical from subcortical responses. We found robust correlates for neural enhancement in the auditory cortical, but not subcortical, responses. Our findings provide empirical support for a previously unverified theory of auditory enhancement based on neural adaptation of inhibition and point to approaches for improving sensory prostheses for hearing loss, such as hearing aids and cochlear implants.
39
Jennings SG. The role of the medial olivocochlear reflex in psychophysical masking and intensity resolution in humans: a review. J Neurophysiol 2021; 125:2279-2308. [PMID: 33909513] [PMCID: PMC8285664] [DOI: 10.1152/jn.00672.2020]
Abstract
This review addresses the putative role of the medial olivocochlear (MOC) reflex in psychophysical masking and intensity resolution in humans. A framework for interpreting psychophysical results in terms of the expected influence of the MOC reflex is introduced. This framework is used to review the effects of a precursor or contralateral acoustic stimulation on 1) simultaneous masking of brief tones, 2) behavioral estimates of cochlear gain and frequency resolution in forward masking, 3) the buildup and decay of forward masking, and 4) measures of intensity resolution. Support, or lack thereof, for a role of the MOC reflex in psychophysical perception is discussed in terms of studies on estimates of MOC strength from otoacoustic emissions and the effects of resection of the olivocochlear bundle in patients with vestibular neurectomy. Novel, innovative approaches are needed to resolve the dissatisfying conclusion that current results are unable to definitively confirm or refute the role of the MOC reflex in masking and intensity resolution.
Affiliation(s)
- Skyler G Jennings: Department of Communication Sciences and Disorders, The University of Utah, Salt Lake City, Utah
40
Causal inference in environmental sound recognition. Cognition 2021; 214:104627. [PMID: 34044231] [DOI: 10.1016/j.cognition.2021.104627]
Abstract
Sound is caused by physical events in the world. Do humans infer these causes when recognizing sound sources? We tested whether the recognition of common environmental sounds depends on the inference of a basic physical variable - the source intensity (i.e., the power that produces a sound). A source's intensity can be inferred from the intensity it produces at the ear and its distance, which is normally conveyed by reverberation. Listeners could thus use intensity at the ear and reverberation to constrain recognition by inferring the underlying source intensity. Alternatively, listeners might separate these acoustic cues from their representation of a sound's identity in the interest of invariant recognition. We compared these two hypotheses by measuring recognition accuracy for sounds with typically low or high source intensity (e.g., pepper grinders vs. trucks) that were presented across a range of intensities at the ear or with reverberation cues to distance. The recognition of low-intensity sources (e.g., pepper grinders) was impaired by high presentation intensities or reverberation that conveyed distance, either of which imply high source intensity. Neither effect occurred for high-intensity sources. The results suggest that listeners implicitly use the intensity at the ear along with distance cues to infer a source's power and constrain its identity. The recognition of real-world sounds thus appears to depend upon the inference of their physical generative parameters, even generative parameters whose cues might otherwise be separated from the representation of a sound's identity.
41
Abstract
The ability to adapt to changes in stimulus statistics is a hallmark of sensory systems. Here, we developed a theoretical framework that can account for the dynamics of adaptation from an information processing perspective. We use this framework to optimize and analyze adaptive sensory codes, and we show that codes optimized for stationary environments can suffer from prolonged periods of poor performance when the environment changes. To mitigate the adversarial effects of these environmental changes, sensory systems must navigate tradeoffs between the ability to accurately encode incoming stimuli and the ability to rapidly detect and adapt to changes in the distribution of these stimuli. We derive families of codes that balance these objectives, and we demonstrate their close match to experimentally observed neural dynamics during mean and variance adaptation. Our results provide a unifying perspective on adaptation across a range of sensory systems, environments, and sensory tasks.
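A minimal instance of the mean and variance adaptation this abstract analyzes is a code that tracks the stimulus mean and variance with exponential moving averages and encodes each sample relative to them. The Python sketch below is a generic illustration, not the authors' optimized code; the function name and the adaptation time constant `tau` are assumptions for the example. After an abrupt change in stimulus statistics the normalized responses transiently mis-scale, then return to zero mean and unit variance, the adapt-versus-encode tradeoff described above.

```python
import numpy as np

def adaptive_code(x, tau=50.0, eps=1e-6):
    """Encode each sample relative to running estimates of the stimulus
    mean and variance, tracked with exponential moving averages."""
    alpha = 1.0 / tau
    mu, var = 0.0, 1.0
    out = np.empty(len(x))
    for i, xi in enumerate(x):
        out[i] = (xi - mu) / np.sqrt(var + eps)   # response in 'adapted' units
        mu += alpha * (xi - mu)                   # adapt to the new mean...
        var += alpha * ((xi - mu) ** 2 - var)     # ...and the new variance
    return out

rng = np.random.default_rng(1)
# Abrupt switch in stimulus statistics: N(0, 1) -> N(10, 4)
x = np.concatenate([rng.normal(0, 1, 5000), rng.normal(10, 4, 5000)])
y = adaptive_code(x)
```

Right after the switch the responses are far out of range (the "prolonged periods of poor performance" for a code tuned to the old environment), and a smaller `tau` trades encoding accuracy in stationary stretches for faster recovery from such changes.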
42
Contributions of natural signal statistics to spectral context effects in consonant categorization. Atten Percept Psychophys 2021; 83:2694-2708. [PMID: 33987821] [DOI: 10.3758/s13414-021-02310-4]
Abstract
Speech perception, like all perception, takes place in context. Recognition of a given speech sound is influenced by the acoustic properties of surrounding sounds. When the spectral composition of earlier (context) sounds (e.g., a sentence with more energy at lower third formant [F3] frequencies) differs from that of a later (target) sound (e.g., consonant with intermediate F3 onset frequency), the auditory system magnifies this difference, biasing target categorization (e.g., towards higher-F3-onset /d/). Historically, these studies used filters to force context stimuli to possess certain spectral compositions. Recently, these effects were produced using unfiltered context sounds that already possessed the desired spectral compositions (Stilp & Assgari, 2019, Attention, Perception, & Psychophysics, 81, 2037-2052). Here, this natural signal statistics approach is extended to consonant categorization (/g/-/d/). Context sentences were either unfiltered (already possessing the desired spectral composition) or filtered (to imbue specific spectral characteristics). Long-term spectral characteristics of unfiltered contexts were poor predictors of shifts in consonant categorization, but short-term characteristics (last 475 ms) were excellent predictors. This diverges from vowel data, where long-term and shorter-term intervals (last 1,000 ms) were equally strong predictors. Thus, time scale plays a critical role in how listeners attune to signal statistics in the acoustic environment.
43
DeRoy Milvae K, Alexander JM, Strickland EA. The relationship between ipsilateral cochlear gain reduction and speech-in-noise recognition at positive and negative signal-to-noise ratios. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 149:3449. [PMID: 34241110 PMCID: PMC8411890 DOI: 10.1121/10.0003964] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/22/2020] [Revised: 03/09/2021] [Accepted: 03/11/2021] [Indexed: 06/13/2023]
Abstract
Active mechanisms that regulate cochlear gain are hypothesized to influence speech-in-noise perception. However, evidence of a relationship between the amount of cochlear gain reduction and speech-in-noise recognition is mixed. Findings may conflict across studies because different signal-to-noise ratios (SNRs) were used to evaluate speech-in-noise recognition. Also, there is evidence that ipsilateral elicitation of cochlear gain reduction may be stronger than contralateral elicitation; yet most studies have investigated the contralateral descending pathway. The hypothesis that the relationship between ipsilateral cochlear gain reduction and speech-in-noise recognition depends on the SNR was tested. A forward masking technique was used to quantify the ipsilateral cochlear gain reduction in 24 young adult listeners with normal hearing. Speech-in-noise recognition was measured with the PRESTO-R sentence test using speech-shaped noise presented at -3, 0, and +3 dB SNR. Interestingly, greater cochlear gain reduction was associated with lower speech-in-noise recognition, and the strength of this correlation increased as the SNR became more adverse. These findings support the hypothesis that the SNR influences the relationship between ipsilateral cochlear gain reduction and speech-in-noise recognition. Future studies investigating the relationship between cochlear gain reduction and speech-in-noise recognition should consider the SNR and both descending pathways.
Affiliation(s)
- Kristina DeRoy Milvae
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana 47907, USA
- Joshua M Alexander
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana 47907, USA
- Elizabeth A Strickland
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana 47907, USA
44
Homma NY, Hullett PW, Atencio CA, Schreiner CE. Auditory Cortical Plasticity Dependent on Environmental Noise Statistics. Cell Rep 2020; 30:4445-4458.e5. [PMID: 32234479 PMCID: PMC7326484 DOI: 10.1016/j.celrep.2020.03.014] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2019] [Revised: 08/07/2019] [Accepted: 03/05/2020] [Indexed: 01/14/2023] Open
Abstract
During critical periods, neural circuits develop to form receptive fields that adapt to the sensory environment and enable optimal performance of relevant tasks. We hypothesized that early exposure to background noise can improve signal-in-noise processing, and that the resulting receptive field plasticity in the primary auditory cortex can reveal functional principles guiding that important task. We raised rat pups in different spectro-temporal noise statistics during their auditory critical period. As adults, they showed enhanced behavioral performance in detecting vocalizations in noise. Concomitantly, encoding of vocalizations in noise in the primary auditory cortex improved with noise-rearing. Significantly, spectro-temporal modulation plasticity shifted cortical preferences away from the exposed noise statistics, thus reducing noise interference with the foreground sound representation. Auditory cortical plasticity thus shapes receptive field preferences to optimally extract foreground information in noisy environments, with early noise exposure inducing cortical circuits to implement efficient coding in the joint spectral and temporal modulation domain.
In brief: after rearing rats in moderately loud spectro-temporally modulated background noise, Homma et al. investigated signal-in-noise processing in the primary auditory cortex. Noise-rearing improved vocalization-in-noise performance in both behavioral testing and neural decoding, and cortical plasticity shifted neuronal spectro-temporal modulation preferences away from the exposed noise statistics.
Affiliation(s)
- Natsumi Y Homma
- Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California, San Francisco, San Francisco, CA 94143, USA; Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA 94143, USA
- Patrick W Hullett
- Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California, San Francisco, San Francisco, CA 94143, USA; Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA 94143, USA
- Craig A Atencio
- Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California, San Francisco, San Francisco, CA 94143, USA; Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA 94143, USA
- Christoph E Schreiner
- Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California, San Francisco, San Francisco, CA 94143, USA; Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA 94143, USA
45
Slow Resting State Fluctuations Enhance Neuronal and Behavioral Responses to Looming Sounds. Brain Topogr 2021; 35:121-141. [PMID: 33768383 DOI: 10.1007/s10548-021-00826-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2020] [Accepted: 02/17/2021] [Indexed: 01/01/2023]
Abstract
We investigate both experimentally and using a computational model how the power of the electroencephalogram (EEG) recorded in human subjects tracks the presentation of sounds with acoustic intensities that increase exponentially (looming) or remain constant (flat). We focus on the link between this EEG tracking response, behavioral reaction times and the time scale of fluctuations in the resting state, which show considerable inter-subject variability. Looming sounds are shown to generally elicit a sustained power increase in the alpha and beta frequency bands. In contrast, flat sounds only elicit a transient upsurge at frequencies ranging from 7 to 45 Hz. Likewise, reaction times (RTs) in an audio-tactile task at different latencies from sound onset also present significant differences between sound types. RTs decrease with increasing looming intensities, i.e. as the sense of urgency increases, but remain constant with stationary flat intensities. We define the reaction time variation or "gain" during looming sound presentation, and show that higher RT gains are associated with stronger correlations between EEG power responses and sound intensity. Higher RT gain further entails higher relative power differences between loom and flat in the alpha and beta bands. The full-width-at-half-maximum of the autocorrelation function of the eyes-closed resting state EEG also increases with RT gain. The effects are topographically located over the central and frontal electrodes. A computational model reveals that the increase in stimulus-response correlation in subjects with slower resting state fluctuations is expected when EEG power fluctuations at each electrode and in a given band are viewed as simple coupled low-pass filtered noise processes jointly driven by the sound intensity. The model assumes that the strength of stimulus-power coupling is proportional to RT gain in different coupling scenarios, suggesting a mechanism by which slower resting state fluctuations enhance EEG response and shorten reaction times.
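The low-pass-filter account in this abstract can be sketched in a few lines. The following toy simulation (all parameter values, and the `simulate_power` helper itself, are illustrative assumptions, not the authors' fitted model) drives a low-pass filtered noise process with a looming, exponentially rising intensity profile and checks that a slower resting-state time constant yields a stronger stimulus-power correlation:

```python
import numpy as np

def simulate_power(intensity, tau, coupling=1.0, noise_sd=20.0, dt=0.001, seed=0):
    """EEG band power modeled as low-pass filtered noise driven by sound
    intensity: tau * dP/dt = -P + coupling * I(t) + noise (Euler steps)."""
    rng = np.random.default_rng(seed)
    p = np.zeros(len(intensity))
    for t in range(1, len(intensity)):
        drive = coupling * intensity[t] + noise_sd * rng.standard_normal()
        p[t] = p[t - 1] + (dt / tau) * (-p[t - 1] + drive)
    return p

dt = 0.001
time = np.arange(0.0, 2.0, dt)
loom = np.exp(time) - 1.0            # exponentially rising (looming) intensity

# A larger tau (slower resting-state fluctuations) filters out more of the
# internal noise, so band power tracks the looming profile more faithfully.
corr = {tau: np.corrcoef(simulate_power(loom, tau), loom)[0, 1]
        for tau in (0.02, 0.5)}
```

The same noise seed is reused for both time constants, so the comparison isolates the effect of `tau`: the correlation between simulated power and sound intensity is markedly higher for the slow process.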
46
Robustness to Noise in the Auditory System: A Distributed and Predictable Property. eNeuro 2021; 8:ENEURO.0043-21.2021. [PMID: 33632813 PMCID: PMC7986545 DOI: 10.1523/eneuro.0043-21.2021] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2021] [Revised: 02/17/2021] [Accepted: 02/17/2021] [Indexed: 12/30/2022] Open
Abstract
Background noise strongly penalizes auditory perception of speech in humans or of vocalizations in animals. Despite this, auditory neurons are still able to detect communication sounds against considerable levels of background noise. We collected neuronal recordings in the cochlear nucleus (CN), inferior colliculus (IC), auditory thalamus, and primary and secondary auditory cortex in response to vocalizations presented against either a stationary or a chorus noise in anesthetized guinea pigs at three signal-to-noise ratios (SNRs; −10, 0, and 10 dB). We provide evidence that, at each level of the auditory system, five response behaviors in noise exist along a continuum, from neurons with high-fidelity representations of the signal, mostly found in the IC and thalamus, to neurons with high-fidelity representations of the noise, mostly found in the CN for the stationary noise and in similar proportions in each structure for the chorus noise. The two cortical areas displayed fewer robust responses than the IC and thalamus. Furthermore, between 21% and 72% of the neurons (depending on the structure) switched categories from one background noise to the other, even when the initial assignment of these neurons to a category was confirmed by a stringent bootstrap procedure. Importantly, supervised learning showed that the category of a recording can be predicted with up to 70% accuracy from its responses to the signal alone and the noise alone.
47
Siveke I, Myoga MH, Grothe B, Felmy F. Ambient noise exposure induces long-term adaptations in adult brainstem neurons. Sci Rep 2021; 11:5139. [PMID: 33664302 PMCID: PMC7933235 DOI: 10.1038/s41598-021-84230-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2020] [Accepted: 02/12/2021] [Indexed: 11/09/2022] Open
Abstract
To counterbalance long-term environmental changes, neuronal circuits adapt the processing of sensory information. In the auditory system, ongoing background noise drives long-lasting adaptive mechanisms in binaural coincidence detector neurons in the superior olive. However, the compensatory cellular mechanisms by which binaural neurons in the medial superior olive (MSO) respond to long-term background changes are unexplored. Here we investigated the cellular properties of MSO neurons during long-lasting adaptations induced by moderate omnidirectional noise exposure. After noise exposure, the input resistance of MSO neurons of mature Mongolian gerbils was reduced, likely due to an upregulation of hyperpolarisation-activated cation and low voltage-activated potassium currents. Functionally, these long-lasting adaptations increased the action potential current threshold and facilitated high-frequency output generation. Noise exposure also accelerated the occurrence of spontaneous postsynaptic currents. Together, our data suggest that cellular adaptations of coincidence detector neurons in the MSO to continuous noise exposure likely increase their sensitivity to differences in sound pressure levels.
Affiliation(s)
- Ida Siveke
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-University Munich, 82152, Planegg-Martinsried, Germany; Institute of Zoology and Neurobiology, Ruhr-University Bochum, Universitätsstrasse 150, 44780, Bochum, Germany
- Mike H Myoga
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-University Munich, 82152, Planegg-Martinsried, Germany
- Benedikt Grothe
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-University Munich, 82152, Planegg-Martinsried, Germany
- Felix Felmy
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-University Munich, 82152, Planegg-Martinsried, Germany; Institute of Zoology, University of Veterinary Medicine Hannover, Foundation, Bünteweg 17, 30599, Hannover, Germany
48
Hosseini M, Rodriguez G, Guo H, Lim HH, Plourde E. The effect of input noises on the activity of auditory neurons using GLM-based metrics. J Neural Eng 2021; 18. [PMID: 33626516 DOI: 10.1088/1741-2552/abe979] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2020] [Accepted: 02/24/2021] [Indexed: 11/11/2022]
Abstract
CONTEXT The auditory system is extremely efficient in extracting auditory information in the presence of background noise. However, people with auditory implants have a hard time understanding speech in noisy conditions. Understanding the mechanisms of perception in noise could lead to better stimulation or preprocessing strategies for such implants. OBJECTIVE The neural mechanisms related to the processing of background noise, especially in the inferior colliculus (IC) where the auditory midbrain implant is located, are still not well understood. We thus wish to investigate if there is a difference in the activity of neurons in the IC when presenting noisy vocalizations with different types of noise (stationary vs. non-stationary), input signal-to-noise ratios (SNRs) and signal levels. APPROACH We developed novel metrics based on a generalized linear model (GLM) to investigate the effect of a given input noise on neural activity. We used these metrics to analyze neural data recorded from the IC in ketamine-anesthetized female Hartley guinea pigs while presenting noisy vocalizations. MAIN RESULTS We found that non-stationary noise clearly contributes to the multi-unit neural activity in the IC by causing excitation, regardless of the SNR, input level or vocalization type. However, when presenting white or natural stationary noises, a great diversity of responses was observed for the different conditions, where the multi-unit activity of some sites was affected by the presence of noise and the activity of others was not. SIGNIFICANCE The GLM-based metrics allowed the identification of a clear distinction between the effect of white or natural stationary noises and that of non-stationary noise on the multi-unit activity in the IC. This had not been observed before and indicates that the so-called noise invariance in the IC is dependent on the input noisy conditions. This could suggest different preprocessing or stimulation approaches for auditory midbrain implants depending on the noisy conditions.
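The GLM-based approach described above can be illustrated with a minimal sketch on synthetic data (the `fit_poisson_glm` helper, the envelope regressors and the weights are assumptions for illustration, not the authors' actual metrics): regress binned spike counts on both a vocalization envelope and a noise envelope, and read the fitted noise weight as a measure of how much the input noise contributes to a unit's activity.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_poisson_glm(X, y, lr=0.05, n_iter=3000):
    """Log-link Poisson GLM fit by gradient ascent on the log-likelihood."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        rate = np.exp(X @ w)
        w += lr * X.T @ (y - rate) / len(y)
    return w

T = 4000
signal_env = rng.uniform(0.0, 1.0, T)   # vocalization envelope per time bin
noise_env = rng.uniform(0.0, 1.0, T)    # background-noise envelope per time bin
X = np.column_stack([np.ones(T), signal_env, noise_env])

# Simulated unit excited by both signal and noise (cf. non-stationary noise)...
y_driven = rng.poisson(np.exp(0.5 + 1.0 * signal_env + 0.8 * noise_env))
# ...and a noise-invariant unit driven by the signal alone.
y_invariant = rng.poisson(np.exp(0.5 + 1.0 * signal_env))

w_driven = fit_poisson_glm(X, y_driven)
w_invariant = fit_poisson_glm(X, y_invariant)

# The fitted noise weight serves as a metric of how much the input noise
# contributes to the unit's activity.
noise_contribution = {"driven": w_driven[2], "invariant": w_invariant[2]}
```

The noise-driven unit recovers a clearly positive noise weight, while the noise-invariant unit's noise weight stays near zero, which is the kind of distinction such a metric is meant to capture.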
Affiliation(s)
- Maryam Hosseini
- Electrical engineering, Université de Sherbrooke, 2500 Boulevard de l'Université, Sherbrooke, Quebec, J1K 2R1, CANADA
- Gerardo Rodriguez
- Biomedical engineering, University of Minnesota, 312 Church St SE, Minneapolis, Minnesota, 55455, UNITED STATES
- Hongsun Guo
- Biomedical engineering, University of Minnesota, 312 Church St SE, Minneapolis, Minnesota, 55455, UNITED STATES
- Hubert H Lim
- Department of Biomedical Engineering, University of Minnesota, 7-105 Hasselmo Hall, 312 Church Street SE, Minneapolis, MN 55455, USA
- Eric Plourde
- Electrical engineering, Université de Sherbrooke, 2500 Boulevard de l'Université, Sherbrooke, Quebec, J1K 2R1, CANADA
49
Asokan MM, Williamson RS, Hancock KE, Polley DB. Inverted central auditory hierarchies for encoding local intervals and global temporal patterns. Curr Biol 2021; 31:1762-1770.e4. [PMID: 33609455 DOI: 10.1016/j.cub.2021.01.076] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2020] [Revised: 12/01/2020] [Accepted: 01/21/2021] [Indexed: 01/02/2023]
Abstract
In sensory systems, representational features of increasing complexity emerge at successive stages of processing. In the mammalian auditory pathway, the clearest change from brainstem to cortex is defined by what is lost, not by what is gained, in that high-fidelity temporal coding becomes increasingly restricted to slower acoustic modulation rates.1,2 Here, we explore the idea that sluggish temporal processing is more than just an inability for fast processing, but instead reflects an emergent specialization for encoding sound features that unfold on very slow timescales.3,4 We performed simultaneous single-unit ensemble recordings from three hierarchical stages of auditory processing in awake mice: the inferior colliculus (IC), the medial geniculate body of the thalamus (MGB), and the primary auditory cortex (A1). As expected, temporal coding of brief local intervals (0.001-0.1 s) separating consecutive noise bursts was robust in the IC and declined across MGB and A1. By contrast, slowly developing (∼1 s period) global rhythmic patterns of inter-burst interval sequences strongly modulated A1 spiking, were weakly captured by MGB neurons, and not at all by IC neurons. Shifts in stimulus regularity were represented not by changes in A1 spike rates, but rather by how the spikes were arranged in time. These findings show that low-level auditory neurons with fast timescales encode isolated sound features but not the longer gestalt, while the extended timescales in higher-level areas can facilitate sensitivity to slower contextual changes in the sensory environment.
Affiliation(s)
- Meenakshi M Asokan
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston MA 02114 USA; Division of Medical Sciences, Harvard Medical School, Boston MA 02114 USA
- Ross S Williamson
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston MA 02114 USA; Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston MA 02114 USA
- Kenneth E Hancock
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston MA 02114 USA; Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston MA 02114 USA
- Daniel B Polley
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston MA 02114 USA; Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston MA 02114 USA
50
Christensen-Dalsgaard J, Kuokkanen P, Matthews JE, Carr CE. Strongly directional responses to tones and conspecific calls in the auditory nerve of the Tokay gecko, Gekko gecko. J Neurophysiol 2021; 125:887-902. [PMID: 33534648 DOI: 10.1152/jn.00576.2020] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The configuration of lizard ears, where sound can reach both surfaces of the eardrums, produces a strongly directional ear, but the subsequent processing of sound direction by the auditory pathway is unknown. We report here on directional responses from the first stage, the auditory nerve. We used laser vibrometry to measure eardrum responses in Tokay geckos and, in the same animals, recorded 117 auditory nerve single-fiber responses to free-field sound from radially distributed speakers. Responses from all fibers showed strongly lateralized activity at all frequencies, with an ovoidal directivity that resembled the eardrum directivity. Geckos are vocal and showed pronounced nerve fiber directionality to components of the call. To estimate the accuracy with which a gecko could discriminate between sound sources, we computed the Fisher information (FI) for each neuron. FI was highest just contralateral to the midline, front and back. Thus, the auditory nerve could provide a population code for sound source direction, and geckos should have a high capacity to differentiate between midline sound sources. In the brain, binaural comparisons, for example by IE (ipsilateral excitatory, contralateral inhibitory) neurons, should sharpen the lateralized responses and extend the dynamic range of directionality.
NEW & NOTEWORTHY In mammals, the two ears are unconnected pressure receivers, and sound direction is computed from binaural interactions in the brain, but in lizards, the eardrums interact acoustically, producing a strongly directional response. We show strongly lateralized responses from gecko auditory nerve fibers to directional sound stimulation and high Fisher information on either side of the midline. Thus, the auditory nerve already provides a population code for sound source direction in the gecko.
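For a Poisson-spiking neuron with direction tuning curve f(θ), the Fisher information about sound direction is FI(θ) = f′(θ)² / f(θ), so FI peaks where the tuning curve is steep, not where firing is maximal. A toy sketch with a hypothetical lateralized tuning curve (the `BASE`/`MOD` values are illustrative, not gecko data) reproduces the qualitative finding that FI is highest near the front and back midline:

```python
import numpy as np

BASE, MOD = 5.0, 40.0   # hypothetical firing-rate floor and modulation (spikes/s)

def tuning(theta):
    """Ovoidal, lateralized direction tuning: strong on one side, weak on the other."""
    return BASE + MOD * (1.0 + np.sin(theta)) / 2.0

def tuning_slope(theta):
    return MOD * np.cos(theta) / 2.0

def fisher_information(theta):
    """FI of a Poisson neuron about direction: f'(theta)^2 / f(theta)."""
    return tuning_slope(theta) ** 2 / tuning(theta)

azimuth = np.linspace(-np.pi, np.pi, 721)   # 0 = frontal midline, +/-pi = rear
fi = fisher_information(azimuth)
best = azimuth[np.argmax(fi)]               # direction best discriminated
```

With this tuning curve the two FI maxima sit just on the low-rate (contralateral) side of the front and back midline (at sin θ = −1/2), while FI vanishes at the tuning peak and trough (θ = ±90°), echoing the paper's observation that FI is highest just contralateral to the midline.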
Affiliation(s)
- Paula Kuokkanen
- Department of Biology, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- Catherine E Carr
- Department of Biology, University of Maryland, College Park, Maryland
|