1. Boothalingam S, Peterson A, Powell L, Easwar V. Auditory brainstem mechanisms likely compensate for self-imposed peripheral inhibition. Sci Rep 2023; 13:12693. [PMID: 37542191] [PMCID: PMC10403563] [DOI: 10.1038/s41598-023-39850-8]
Abstract
Feedback networks in the brain regulate auditory function at sites as peripheral as the cochlea. However, the upstream neural consequences of this peripheral regulation are less understood. For instance, the medial olivocochlear reflex (MOCR) in the brainstem causes putative attenuation of responses generated in the cochlea and cortex, but those generated in the brainstem are perplexingly unaffected. Based on known neural circuitry, we hypothesized that the inhibition of peripheral input is compensated for by positive feedback in the brainstem over time. We predicted that this inhibition could be captured at the brainstem with shorter-duration stimuli (1.5 s) than the long-duration stimuli (240 s) employed previously, for which the inhibition is likely already compensated. Results from 16 normal-hearing human listeners support our hypothesis in that when the MOCR is activated, there is a robust reduction of responses generated at the periphery, brainstem, and cortex for short-duration stimuli. Such inhibition at the brainstem, however, diminishes for long-duration stimuli, suggesting compensatory mechanisms at play. Our findings provide a novel non-invasive window into potential gain compensation mechanisms in the brainstem that may have implications for auditory disorders such as tinnitus. Our methodology will be useful in the evaluation of efferent function in individuals with hearing loss.
Affiliation(s)
- Sriram Boothalingam
- Waisman Center and Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, 53705, USA.
- Macquarie University, Sydney, NSW, 2109, Australia.
- National Acoustic Laboratories, Sydney, NSW, 2109, Australia.
- Abigayle Peterson
- Waisman Center and Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, 53705, USA
- Macquarie University, Sydney, NSW, 2109, Australia
| | - Lindsey Powell
- Waisman Center and Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, 53705, USA
| | - Vijayalakshmi Easwar
- Waisman Center and Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, 53705, USA
- Macquarie University, Sydney, NSW, 2109, Australia
- National Acoustic Laboratories, Sydney, NSW, 2109, Australia
2. Auksztulewicz R, Rajendran VG, Peng F, Schnupp JWH, Harper NS. Omission responses in local field potentials in rat auditory cortex. BMC Biol 2023; 21:130. [PMID: 37254137] [DOI: 10.1186/s12915-023-01592-4]
Abstract
BACKGROUND: Non-invasive recordings of gross neural activity in humans often show responses to omitted stimuli in steady trains of identical stimuli. This has been taken as evidence for the neural coding of prediction or prediction error. However, evidence for such omission responses from invasive recordings of cellular-scale responses in animal models is scarce. Here, we sought to characterise omission responses using extracellular recordings in the auditory cortex of anaesthetised rats. We profiled omission responses across local field potentials (LFP), analogue multiunit activity (AMUA), and single/multi-unit spiking activity, using stimuli that were fixed-rate trains of acoustic noise bursts where 5% of bursts were randomly omitted. RESULTS: Significant omission responses were observed in LFP and AMUA signals, but not in spiking activity. These omission responses had a lower amplitude and longer latency than burst-evoked sensory responses, and omission response amplitude increased as a function of the number of preceding bursts. CONCLUSIONS: Together, our findings show that omission responses are most robustly observed in LFP and AMUA signals (relative to spiking activity). This has implications for models of cortical processing that require many neurons to encode prediction errors in their spike output.
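The stimulus design above — a fixed-rate train of noise bursts with 5% randomly omitted — can be sketched in a few lines. Burst rate, burst duration, and sampling rate below are illustrative assumptions, not the paper's exact values:

```python
import numpy as np

def omission_train(n_bursts=200, rate_hz=8.0, burst_ms=25,
                   p_omit=0.05, fs=44100, seed=0):
    """Fixed-rate train of noise bursts with a fraction randomly omitted.

    Returns the waveform and a boolean mask marking omitted slots.
    """
    rng = np.random.default_rng(seed)
    period = int(fs / rate_hz)               # samples between burst onsets
    burst_len = int(fs * burst_ms / 1000)    # samples per noise burst
    omitted = rng.random(n_bursts) < p_omit  # True where a burst is dropped
    wave = np.zeros(n_bursts * period)
    for i in range(n_bursts):
        if not omitted[i]:
            start = i * period
            wave[start:start + burst_len] = rng.standard_normal(burst_len)
    return wave, omitted

wave, omitted = omission_train()
```

The returned mask makes it straightforward to epoch neural recordings around omitted versus presented bursts.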
Affiliation(s)
- Ryszard Auksztulewicz
- Center for Cognitive Neuroscience Berlin, Free University Berlin, Berlin, Germany.
- Department of Neuroscience, City University of Hong Kong, Hong Kong, Hong Kong S.A.R.
- Fei Peng
- Department of Neuroscience, City University of Hong Kong, Hong Kong, Hong Kong S.A.R.
3. Auerbach BD, Gritton HJ. Hearing in Complex Environments: Auditory Gain Control, Attention, and Hearing Loss. Front Neurosci 2022; 16:799787. [PMID: 35221899] [PMCID: PMC8866963] [DOI: 10.3389/fnins.2022.799787]
Abstract
Listening in noisy or complex sound environments is difficult for individuals with normal hearing and can be a debilitating impairment for those with hearing loss. Extracting meaningful information from a complex acoustic environment requires the ability to accurately encode specific sound features under highly variable listening conditions and segregate distinct sound streams from multiple overlapping sources. The auditory system employs a variety of mechanisms to achieve this auditory scene analysis. First, neurons across levels of the auditory system exhibit compensatory adaptations to their gain and dynamic range in response to prevailing sound stimulus statistics in the environment. These adaptations allow for robust representations of sound features that are to a large degree invariant to the level of background noise. Second, listeners can selectively attend to a desired sound target in an environment with multiple sound sources. This selective auditory attention is another form of sensory gain control, enhancing the representation of an attended sound source while suppressing responses to unattended sounds. This review will examine both “bottom-up” gain alterations in response to changes in environmental sound statistics as well as “top-down” mechanisms that allow for selective extraction of specific sound features in a complex auditory scene. Finally, we will discuss how hearing loss interacts with these gain control mechanisms, and the adaptive and/or maladaptive perceptual consequences of this plasticity.
Affiliation(s)
- Benjamin D. Auerbach
- Department of Molecular and Integrative Physiology, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Howard J. Gritton
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Department of Comparative Biosciences, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL, United States
4. Mesik J, Wojtczak M. Effects of noise precursors on the detection of amplitude and frequency modulation for tones in noise. J Acoust Soc Am 2020; 148:3581. [PMID: 33379905] [PMCID: PMC8097715] [DOI: 10.1121/10.0002879]
Abstract
Recent studies on amplitude modulation (AM) detection for tones in noise reported that AM-detection thresholds improve when the AM stimulus is preceded by a noise precursor. The physiological mechanisms underlying this AM unmasking are unknown. One possibility is that adaptation to the level of the noise precursor facilitates AM encoding by causing a shift in neural rate-level functions to optimize level encoding around the precursor level. The aims of this study were to investigate whether such a dynamic-range adaptation is a plausible mechanism for the AM unmasking and whether frequency modulation (FM), thought to be encoded via AM, also exhibits the unmasking effect. Detection thresholds for AM and FM of tones in noise were measured with and without a fixed-level precursor. Listeners showing the unmasking effect were then tested with the precursor level roved over a wide range to modulate the effect of adaptation to the precursor level on the detection of the subsequent AM. It was found that FM detection benefits from a precursor and the magnitude of FM unmasking correlates with that of AM unmasking. Moreover, consistent with dynamic-range adaptation, the unmasking magnitude weakens as the level difference between the precursor and simultaneous masker of the tone increases.
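A minimal sketch of the stimulus described above — a sinusoidally amplitude-modulated tone in noise, preceded by a noise precursor — follows. Carrier frequency, modulation rate, depth, durations, and levels are illustrative assumptions, not the study's parameters:

```python
import numpy as np

def am_tone_with_precursor(fc=1000.0, fm=8.0, depth=0.5, tone_s=0.5,
                           precursor_s=0.5, noise_rms=0.05, fs=44100, seed=3):
    """Amplitude-modulated tone in noise preceded by a noise precursor.

    depth is the AM depth (0..1); noise_rms sets the background/precursor
    noise level relative to the unit-amplitude carrier.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * tone_s)) / fs
    # Sinusoidal AM applied to a pure-tone carrier
    am = (1 + depth * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)
    noise = noise_rms * rng.standard_normal(am.size)
    precursor = noise_rms * rng.standard_normal(int(fs * precursor_s))
    return np.concatenate([precursor, am + noise])

stim = am_tone_with_precursor()
```

Roving the precursor level across trials, as in the study's second experiment, would amount to varying `noise_rms` for the precursor segment only.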
Affiliation(s)
- Juraj Mesik
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455, USA
- Magdalena Wojtczak
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455, USA
5. Herrmann B, Augereau T, Johnsrude IS. Neural Responses and Perceptual Sensitivity to Sound Depend on Sound-Level Statistics. Sci Rep 2020; 10:9571. [PMID: 32533068] [PMCID: PMC7293331] [DOI: 10.1038/s41598-020-66715-1]
Abstract
Sensitivity to sound-level statistics is crucial for optimal perception, but research has focused mostly on neurophysiological recordings, whereas behavioral evidence is sparse. We use electroencephalography (EEG) and behavioral methods to investigate how sound-level statistics affect neural activity and the detection of near-threshold changes in sound amplitude. We presented noise bursts with sound levels drawn from distributions with either a low or a high modal sound level. One participant group listened to the stimulation while EEG was recorded (Experiment I). A second group performed a behavioral amplitude-modulation detection task (Experiment II). Neural activity depended on sound-level statistical context in two different ways. Consistent with an account positing that the sensitivity of neurons to sound intensity adapts to ambient sound level, responses for higher-intensity bursts were larger in low-mode than high-mode contexts, whereas responses for lower-intensity bursts did not differ between contexts. In contrast, a concurrent slow neural response indicated prediction-error processing: The response was larger for bursts at intensities that deviated from the predicted statistical context compared to those not deviating. Behavioral responses were consistent with prediction-error processing, but not with neural adaptation. Hence, neural activity adapts to sound-level statistics, but fine-tuning of perceptual sensitivity appears to involve neural prediction-error responses.
Affiliation(s)
- Björn Herrmann
- Department of Psychology and Brain & Mind Institute, University of Western Ontario, N6A 3K7, London, ON, Canada.
- Rotman Research Institute, Baycrest, M6A 2E1, Toronto, ON, Canada.
- Department of Psychology, University of Toronto, M5S 1A1, Toronto, ON, Canada.
- Thomas Augereau
- Department of Psychology and Brain & Mind Institute, University of Western Ontario, N6A 3K7, London, ON, Canada
- Ingrid S Johnsrude
- Department of Psychology and Brain & Mind Institute, University of Western Ontario, N6A 3K7, London, ON, Canada.
- School of Communication Sciences & Disorders, University of Western Ontario, N6A 5B7, London, ON, Canada.
6.
Abstract
Human listeners appear to represent the textures of sounds through a process of automatic time averaging that exists beyond volition. This process distils likely background sounds into their summary statistics, a computationally efficient way of dealing with complex auditory scenes.
Affiliation(s)
- David McAlpine
- Department of Linguistics, and The Australian Hearing Hub, Macquarie University, 16 University Avenue, Sydney, NSW, 2109, Australia.
7. Górska U, Rupp A, Boubenec Y, Celikel T, Englitz B. Evidence Integration in Natural Acoustic Textures during Active and Passive Listening. eNeuro 2018; 5:ENEURO.0090-18.2018. [PMID: 29662943] [PMCID: PMC5898696] [DOI: 10.1523/eneuro.0090-18.2018]
Abstract
Many natural sounds can be well described on a statistical level, for example, wind, rain, or applause. Even though the spectro-temporal profile of these acoustic textures is highly dynamic, changes in their statistics are indicative of relevant changes in the environment. Here, we investigated the neural representation of change detection in natural textures in humans, and specifically addressed whether active task engagement is required for the neural representation of this change in statistics. Subjects listened to natural textures whose spectro-temporal statistics were modified at variable times by a variable amount. Subjects were instructed to either report the detection of changes (active) or to passively listen to the stimuli. A subset of passive subjects had performed the active task before (passive-aware vs passive-naive). Psychophysically, longer exposure to pre-change statistics was correlated with faster reaction times and better discrimination performance. EEG recordings revealed that the build-up rate and size of parieto-occipital (PO) potentials reflected change size and change time. Reduced effects were observed in the passive conditions. While P2 responses were comparable across conditions, slope and height of PO potentials scaled with task involvement. Neural source localization identified a parietal source as the main contributor of change-specific potentials, in addition to more limited contributions from auditory and frontal sources. In summary, the detection of statistical changes in natural acoustic textures is predominantly reflected in parietal locations both on the skull and source level. The scaling in magnitude across different levels of task involvement suggests a context-dependent degree of evidence integration.
Affiliation(s)
- Urszula Górska
- Department of Neurophysiology, Donders Institute, Radboud University Nijmegen, The Netherlands
- Psychophysiology Laboratory, Institute of Psychology, Jagiellonian University, Krakow, Poland
- Smoluchowski Institute of Physics, Jagiellonian University, Krakow, Poland
- Andre Rupp
- Section of Biomagnetism, Department of Neurology, University of Heidelberg, Heidelberg, Germany
- Yves Boubenec
- Laboratoire des Systèmes Perceptifs, CNRS UMR 8248, Paris, France
- Département d'Études Cognitives, École Normale Supérieure, PSL Research University, Paris, France
- Tansu Celikel
- Department of Neurophysiology, Donders Institute, Radboud University Nijmegen, The Netherlands
- Bernhard Englitz
- Department of Neurophysiology, Donders Institute, Radboud University Nijmegen, The Netherlands
8. Aging Affects Adaptation to Sound-Level Statistics in Human Auditory Cortex. J Neurosci 2018; 38:1989-1999. [PMID: 29358362] [DOI: 10.1523/jneurosci.1489-17.2018]
Abstract
Optimal perception requires efficient and adaptive neural processing of sensory input. Neurons in nonhuman mammals adapt to the statistical properties of acoustic feature distributions such that they become sensitive to sounds that are most likely to occur in the environment. However, whether human auditory responses adapt to stimulus statistical distributions and how aging affects adaptation to stimulus statistics is unknown. We used MEG to study how exposure to different distributions of sound levels affects adaptation in auditory cortex of younger (mean: 25 years; n = 19) and older (mean: 64 years; n = 20) adults (male and female). Participants passively listened to two sound-level distributions with different modes (either 15 or 45 dB sensation level). In a control block with long interstimulus intervals, allowing neural populations to recover from adaptation, neural response magnitudes were similar between younger and older adults. Critically, both age groups demonstrated adaptation to sound-level stimulus statistics, but adaptation was altered for older compared with younger people: in the older group, neural responses continued to be sensitive to sound level under conditions in which responses were fully adapted in the younger group. The lack of full adaptation to the statistics of the sensory environment may be a physiological mechanism underlying the known difficulty that older adults have with filtering out irrelevant sensory information. SIGNIFICANCE STATEMENT: Behavior requires efficient processing of acoustic stimulation. Animal work suggests that neurons accomplish efficient processing by adjusting their response sensitivity depending on statistical properties of the acoustic environment. Little is known about the extent to which this adaptation to stimulus statistics generalizes to humans, particularly to older humans. We used MEG to investigate how aging influences adaptation to sound-level statistics. Listeners were presented with sounds drawn from sound-level distributions with different modes (15 vs 45 dB). Auditory cortex neurons adapted to sound-level statistics in younger and older adults, but adaptation was incomplete in older people. The data suggest that the aging auditory system does not fully capitalize on the statistics available in sound environments to tune the perceptual system dynamically.
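Drawing stimuli from sound-level distributions with different modes, as in the paradigm above, can be sketched as follows. The modal levels (15 and 45 dB SL) come from the abstract; the discrete level grid, the distribution shape, and the mode probability are illustrative assumptions:

```python
import numpy as np

def sample_levels(mode_db, n_trials=1000, lo=0, hi=60, step=5,
                  p_mode=0.8, seed=1):
    """Draw sound levels (dB SL) from a distribution peaked at mode_db.

    The modal level occurs with probability p_mode; the remaining mass is
    spread uniformly over the other levels (an assumed shape, not the
    paper's exact design).
    """
    rng = np.random.default_rng(seed)
    levels = np.arange(lo, hi + step, step)
    probs = np.full(levels.size, (1 - p_mode) / (levels.size - 1))
    probs[levels == mode_db] = p_mode  # concentrate mass at the mode
    return rng.choice(levels, size=n_trials, p=probs)

low_mode = sample_levels(15)   # low-mode statistical context
high_mode = sample_levels(45)  # high-mode statistical context
```

Comparing neural responses to the same physical level across the two contexts is what reveals adaptation to the distribution rather than to the level itself.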
9. Nozaradan S, Mouraux A, Cousineau M. Frequency tagging to track the neural processing of contrast in fast, continuous sound sequences. J Neurophysiol 2017; 118:243-253. [PMID: 28381494] [DOI: 10.1152/jn.00971.2016]
Abstract
The human auditory system presents a remarkable ability to detect rapid changes in fast, continuous acoustic sequences, as best illustrated in speech and music. However, the neural processing of rapid auditory contrast remains largely unclear, probably due to the lack of methods to objectively dissociate the response components specifically related to the contrast from the other components in response to the sequence of fast continuous sounds. To overcome this issue, we tested a novel use of the frequency-tagging approach allowing contrast-specific neural responses to be tracked based on their expected frequencies. The EEG was recorded while participants listened to 40-s sequences of sounds presented at 8 Hz. A tone or interaural time contrast was embedded every fifth sound (AAAAB), such that a response observed in the EEG at exactly 8 Hz/5 (1.6 Hz) or harmonics should be the signature of contrast processing by neural populations. Contrast-related responses were successfully identified, even in the case of very fine contrasts. Moreover, analysis of the time course of the responses revealed a stable amplitude over repetitions of the AAAAB patterns in the sequence, except for the response to perceptually salient contrasts, which showed a buildup and decay across repetitions of the sounds. Overall, this new combination of frequency tagging with an oddball design provides a valuable complement to the classic transient evoked-potential approach, especially in the context of rapid auditory information. Specifically, we provide objective evidence on the neural processing of contrast embedded in fast, continuous sound sequences. NEW & NOTEWORTHY: Recent theories suggest that the basis of neurodevelopmental auditory disorders such as dyslexia might be an impaired processing of fast auditory changes, highlighting how the encoding of rapid acoustic information is critical for auditory communication. Here, we present a novel electrophysiological approach to capture in humans neural markers of contrasts in fast continuous tone sequences. Contrast-specific responses were successfully identified, even for very fine contrasts, providing direct insight on the encoding of rapid auditory information.
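The frequency-tagging logic above — an 8 Hz sound sequence with a contrast every fifth sound, producing a spectral signature at 8/5 = 1.6 Hz — can be illustrated with a toy simulation. The "EEG" below is a synthetic signal, and the sampling rate, component amplitudes, and noise level are assumptions:

```python
import numpy as np

fs = 500.0                  # EEG sampling rate (Hz), illustrative
dur = 40.0                  # sequence duration from the paper (s)
f_base, n_pattern = 8.0, 5  # 8 Hz stimulation, contrast every 5th sound
f_tag = f_base / n_pattern  # expected contrast signature: 1.6 Hz

t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(2)
# Toy "EEG": base-rate response + smaller contrast-related component + noise
eeg = (np.sin(2 * np.pi * f_base * t)
       + 0.3 * np.sin(2 * np.pi * f_tag * t)
       + 0.5 * rng.standard_normal(t.size))

# With a 40-s window the spectrum has 1/40 = 0.025 Hz resolution, so both
# 8 Hz and 1.6 Hz fall exactly on frequency bins.
spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
tag_bin = int(np.argmin(np.abs(freqs - f_tag)))
```

The contrast-specific response then shows up as a peak at `tag_bin` that stands clear of the neighboring noise bins.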
Affiliation(s)
- Sylvie Nozaradan
- Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
- MARCS Institute for Brain, Behavior, and Development, Sydney, Australia
- International Laboratory for Brain, Music, and Sound Research (Brams), Montreal, Quebec, Canada
- André Mouraux
- Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
- Marion Cousineau
- International Laboratory for Brain, Music, and Sound Research (Brams), Montreal, Quebec, Canada
10. Stilp CE, Kluender KR. Stimulus Statistics Change Sounds from Near-Indiscriminable to Hyperdiscriminable. PLoS One 2016; 11:e0161001. [PMID: 27508391] [PMCID: PMC4979885] [DOI: 10.1371/journal.pone.0161001]
Abstract
Objects and events in the sensory environment are generally predictable, making most of the energy impinging upon sensory transducers redundant. Given this fact, efficient sensory systems should detect, extract, and exploit predictability in order to optimize sensitivity to less predictable inputs that are, by definition, more informative. Not only are perceptual systems sensitive to changes in physical stimulus properties, but growing evidence reveals sensitivity both to relative predictability of stimuli and to co-occurrence of stimulus attributes within stimuli. Recent results revealed that auditory perception rapidly reorganizes to efficiently capture covariance among stimulus attributes. Acoustic properties per se were perceptually abandoned, and sounds were instead processed relative to patterns of co-occurrence. Here, we show that listeners' ability to distinguish sounds from one another is driven primarily by the extent to which they are consistent or inconsistent with patterns of covariation among stimulus attributes and, to a lesser extent, whether they are heard frequently or infrequently. When sounds were heard frequently and deviated minimally from the prevailing pattern of covariance among attributes, they were poorly discriminated from one another. In stark contrast, when sounds were heard rarely and markedly violated the pattern of covariance, they became hyperdiscriminable with discrimination performance beyond apparent limits of the auditory system. Plausible cortical candidates underlying these dramatic changes in perceptual organization are discussed. These findings support efficient coding of stimulus statistical structure as a model for both perceptual and neural organization.
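The core idea above — that discriminability tracks consistency with the prevailing pattern of covariation among stimulus attributes, more than frequency of occurrence — can be illustrated with a toy Mahalanobis-distance computation. The attribute space, the numbers, and the distance metric are illustrative assumptions, not the study's model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two acoustic attributes (arbitrary units) sampled so that they strongly
# covary across the exposure set, mimicking experienced co-occurrence.
n = 500
shared = rng.standard_normal(n)
attr = np.stack([shared, shared], axis=1) + 0.1 * rng.standard_normal((n, 2))

mean = attr.mean(axis=0)
cov = np.cov(attr.T)  # learned covariance of attribute co-occurrence

def mahalanobis(x, mean, cov):
    """Distance of a sound from the prevailing attribute distribution;
    large values mark sounds that violate the covariance pattern."""
    d = x - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

consistent = np.array([1.0, 1.0])   # follows the covariance pattern
violating = np.array([1.0, -1.0])   # same magnitude, violates the pattern

d_cons = mahalanobis(consistent, mean, cov)
d_viol = mahalanobis(violating, mean, cov)
```

Although both probe sounds are equally far from the mean in Euclidean terms, the pattern-violating sound is far more deviant under the learned covariance, paralleling the reported hyperdiscriminability of covariance-violating sounds.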
Affiliation(s)
- Christian E. Stilp
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, Kentucky, United States of America
| | - Keith R. Kluender
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, United States of America
11. Shuai L, Elhilali M. Task-dependent neural representations of salient events in dynamic auditory scenes. Front Neurosci 2014; 8:203. [PMID: 25100934] [PMCID: PMC4104552] [DOI: 10.3389/fnins.2014.00203]
Abstract
Selecting pertinent events in the cacophony of sounds that impinge on our ears every day is regulated by the acoustic salience of sounds in the scene as well as their behavioral relevance as dictated by top-down task-dependent demands. The current study aims to explore the neural signature of both facets of attention, as well as their possible interactions in the context of auditory scenes. Using a paradigm with dynamic auditory streams with occasional salient events, we recorded neurophysiological responses of human listeners using EEG while manipulating the subjects' attentional state as well as the presence or absence of a competing auditory stream. Our results showed that salient events caused an increase in the auditory steady-state response (ASSR) irrespective of attentional state or complexity of the scene. Such increase supplemented ASSR increases due to task-driven attention. Salient events also evoked a strong N1 peak in the ERP response when listeners were attending to the target sound stream, accompanied by an MMN-like component in some cases and changes in the P1 and P300 components under all listening conditions. Overall, bottom-up attention induced by a salient change in the auditory stream appears to mostly modulate the amplitude of the steady-state response and certain event-related potentials to salient sound events; though this modulation is affected by top-down attentional processes and the prominence of these events in the auditory scene as well.
Affiliation(s)
- Mounya Elhilali
- Laboratory of Computational Audio Perception, Department of Electrical and Computer Engineering, Center for Speech and Language Processing, Johns Hopkins University, Baltimore, MD, USA