1. Snir A, Cieśla K, Ozdemir G, Vekslar R, Amedi A. Localizing 3D motion through the fingertips: Following in the footsteps of elephants. iScience 2024; 27:109820. [PMID: 38799571 PMCID: PMC11126990 DOI: 10.1016/j.isci.2024.109820]
Abstract
Each sense serves a different specific function in spatial perception, and they all form a joint multisensory spatial representation. For instance, hearing enables localization in the entire 3D external space, while touch traditionally only allows localization of objects on the body (i.e., within the peripersonal space alone). We use an in-house touch-motion algorithm (TMA) to evaluate individuals' capability to understand externalized 3D information through touch, a skill that was not acquired during an individual's development or in evolution. Four experiments demonstrate quick learning and high accuracy in localization of motion using vibrotactile inputs on fingertips and successful audio-tactile integration in background noise. Subjective responses in some participants imply spatial experiences through visualization and perception of tactile "moving" sources beyond reach. We discuss our findings with respect to developing new skills in an adult brain, including combining a newly acquired "sense" with an existing one and computation-based brain organization.
Affiliation(s)
- Adi Snir
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Katarzyna Cieśla
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Mokra 17, 05-830 Kajetany, Nadarzyn, Poland
- Gizem Ozdemir
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Rotem Vekslar
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Amir Amedi
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
2. van der Heijden K, Patel P, Bickel S, Herrero JL, Mehta AD, Mesgarani N. Joint population coding and temporal coherence link an attended talker's voice and location features in naturalistic multi-talker scenes. bioRxiv 2024:2024.05.13.593814. [PMID: 38798551 PMCID: PMC11118436 DOI: 10.1101/2024.05.13.593814]
Abstract
Listeners readily extract multi-dimensional auditory objects, such as a 'localized talker', from complex acoustic scenes with multiple talkers. Yet the neural mechanisms underlying simultaneous encoding and linking of different sound features - for example, a talker's voice and location - are poorly understood. We analyzed invasive intracranial recordings in neurosurgical patients attending to a localized talker in real-life cocktail party scenarios. We found that sensitivity to an individual talker's voice and location features was distributed throughout auditory cortex and that neural sites exhibited a gradient from sensitivity to a single feature to joint sensitivity to both features. On a population level, cortical response patterns of dual-feature sensitive sites as well as single-feature sensitive sites revealed simultaneous encoding of an attended talker's voice and location features. However, for single-feature sensitive sites, the representation of the primary feature was more precise. Further, sites that selectively tracked an attended speech stream concurrently encoded an attended talker's voice and location features, indicating that such sites combine selective tracking of an attended auditory object with encoding of the object's features. Finally, we found that attending a localized talker selectively enhanced temporal coherence between single-feature voice sensitive sites and single-feature location sensitive sites, providing an additional mechanism for linking voice and location in multi-talker scenes. These results demonstrate that a talker's voice and location features are linked during multi-dimensional object formation in naturalistic multi-talker scenes by joint population coding as well as by temporal coherence between neural sites.
SIGNIFICANCE STATEMENT Listeners effortlessly extract auditory objects from complex acoustic scenes consisting of multiple sound sources in naturalistic, spatial sound scenes. Yet, how the brain links different sound features to form a multi-dimensional auditory object is poorly understood. We investigated how neural responses encode and integrate an attended talker's voice and location features in spatial multi-talker sound scenes to elucidate which neural mechanisms underlie simultaneous encoding and linking of different auditory features. Our results show that joint population coding as well as temporal coherence mechanisms contribute to distributed multi-dimensional auditory object encoding. These findings shed new light on cortical functional specialization and multi-dimensional auditory object formation in complex, naturalistic listening scenes.
HIGHLIGHTS
- Cortical responses to a single talker exhibit a distributed gradient, ranging from sites that are sensitive to both a talker's voice and location (dual-feature sensitive sites) to sites that are sensitive to either voice or location (single-feature sensitive sites).
- Population response patterns of dual-feature sensitive sites encode voice and location features of the attended talker in multi-talker scenes jointly and with equal precision.
- Despite their sensitivity to a single feature at the level of individual cortical sites, population response patterns of single-feature sensitive sites also encode location and voice features of a talker jointly, but with higher precision for the feature they are primarily sensitive to.
- Neural sites that selectively track an attended speech stream concurrently encode the attended talker's voice and location features.
- Attention selectively enhances temporal coherence between voice and location selective sites over time.
- Joint population coding as well as temporal coherence mechanisms underlie distributed multi-dimensional auditory object encoding in auditory cortex.
3. Alamatsaz N, Rosen MJ, Ihlefeld A. Increased reliance on temporal coding when target sound is softer than the background. Sci Rep 2024; 14:4457. [PMID: 38396044 PMCID: PMC10891139 DOI: 10.1038/s41598-024-54865-5]
Abstract
Everyday environments often contain multiple concurrent sound sources that fluctuate over time. Normally hearing listeners can benefit from high signal-to-noise ratios (SNRs) in energetic dips of temporally fluctuating background sound, a phenomenon called dip-listening. Specialized mechanisms of dip-listening exist across the entire auditory pathway. Both the instantaneous fluctuating and the long-term overall SNR shape dip-listening. An unresolved issue regarding cortical mechanisms of dip-listening is how target perception remains invariant to overall SNR, specifically, across different tone levels with an ongoing fluctuating masker. Equivalent target detection over both positive and negative overall SNRs (SNR invariance) is reliably achieved in highly trained listeners. Dip-listening is correlated with the ability to resolve temporal fine structure, which involves temporally varying spike patterns. Thus, the current work tests the hypothesis that at negative SNRs, neuronal readout mechanisms need to increasingly rely on decoding strategies based on temporal spike patterns, as opposed to spike count. Recordings from chronically implanted electrode arrays in the core auditory cortex of trained, awake Mongolian gerbils engaged in a tone detection task in 10 Hz amplitude-modulated background sound reveal that rate-based decoding is not SNR-invariant, whereas temporal coding is informative at both negative and positive SNRs.
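The distinction between the long-term overall SNR and the instantaneous SNR inside the masker's energetic dips can be illustrated with a short numerical sketch. The amplitudes, durations, and frame size below are hypothetical, not the stimulus parameters used in the study; only the 10 Hz amplitude modulation echoes the background described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                     # sample rate in Hz (illustrative)
t = np.arange(0, 1, 1 / fs)   # one second of signal

# 10 Hz amplitude-modulated noise background (hypothetical amplitudes)
masker = (1 + np.sin(2 * np.pi * 10 * t)) * rng.standard_normal(t.size) * 0.5
tone = 0.2 * np.sin(2 * np.pi * 440 * t)  # low-level target tone

def snr_db(signal, noise):
    """Long-term SNR in dB: mean signal power over mean noise power."""
    return 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

overall_snr = snr_db(tone, masker)  # negative: the target is softer overall

# Instantaneous SNR is far higher inside the masker's energetic dips,
# which are the windows that dip-listening exploits
frame = 50  # 50-ms analysis frames
noise_power = np.array([np.mean(masker[i:i + frame] ** 2)
                        for i in range(0, t.size, frame)])
best_frame_snr = 10 * np.log10(np.mean(tone ** 2) / noise_power.min())
```

Even though the overall SNR is negative, the best-frame SNR is substantially higher, which is why frame-by-frame (temporal) readout can remain informative when count-based readout fails.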
Affiliation(s)
- Nima Alamatsaz
- Graduate School of Biomedical Sciences, Rutgers University, Newark, NJ, USA
- Department of Biomedical Engineering, New Jersey Institute of Technology, Newark, NJ, USA
- Merri J Rosen
- Northeast Ohio Medical University (NEOMED), Rootstown, OH, USA.
- University Hospitals Hearing Research Center at NEOMED, Rootstown, OH, USA.
- Brain Health Research Institute, Kent State University, Kent, OH, USA.
4. Yamoah EN, Pavlinkova G, Fritzsch B. The Development of Speaking and Singing in Infants May Play a Role in Genomics and Dementia in Humans. Brain Sci 2023; 13:1190. [PMID: 37626546 PMCID: PMC10452560 DOI: 10.3390/brainsci13081190]
Abstract
The development of the central auditory system, including the auditory cortex and other areas involved in processing sound, is shaped by genetic and environmental factors, enabling infants to learn how to speak. Before explaining hearing in humans, a short overview of auditory dysfunction is provided. Environmental factors such as exposure to sound and language can impact the development and function of auditory system sound processing, including speech perception, singing, and language processing. Infants can hear before birth, and sound exposure sculpts the structure and functions of their developing auditory system. Exposing infants to singing and speaking can support their auditory and language development. In aging humans, the hippocampus and auditory nuclear centers are affected by neurodegenerative diseases such as Alzheimer's, resulting in memory and auditory processing difficulties. As the disease progresses, overt damage to the auditory nuclear centers occurs, leading to problems in processing auditory information. In conclusion, combined memory and auditory processing difficulties significantly impact people's ability to communicate and engage with society.
Affiliation(s)
- Ebenezer N. Yamoah
- Department of Physiology and Cell Biology, School of Medicine, University of Nevada, Reno, NV 89557, USA;
- Bernd Fritzsch
- Department of Neurological Sciences, University of Nebraska Medical Center, Omaha, NE 68198, USA
5. Willmore BDB, King AJ. Adaptation in auditory processing. Physiol Rev 2023; 103:1025-1058. [PMID: 36049112 PMCID: PMC9829473 DOI: 10.1152/physrev.00011.2022]
Abstract
Adaptation is an essential feature of auditory neurons, which reduces their responses to unchanging and recurring sounds and allows their response properties to be matched to the constantly changing statistics of sounds that reach the ears. As a consequence, processing in the auditory system highlights novel or unpredictable sounds and produces an efficient representation of the vast range of sounds that animals can perceive by continually adjusting the sensitivity and, to a lesser extent, the tuning properties of neurons to the most commonly encountered stimulus values. Together with attentional modulation, adaptation to sound statistics also helps to generate neural representations of sound that are tolerant to background noise and therefore plays a vital role in auditory scene analysis. In this review, we consider the diverse forms of adaptation that are found in the auditory system in terms of the processing levels at which they arise, the underlying neural mechanisms, and their impact on neural coding and perception. We also ask what the dynamics of adaptation, which can occur over multiple timescales, reveal about the statistical properties of the environment. Finally, we examine how adaptation to sound statistics is influenced by learning and experience and changes as a result of aging and hearing loss.
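The core computation this review describes, suppressing responses to unchanging or recurring input by tracking recent stimulus statistics, can be caricatured in a few lines. The update rule and time constant below are illustrative toys, not a model taken from the review.

```python
def adapt(stimuli, tau=0.9):
    """Toy adaptation: an exponentially weighted running mean models the
    neuron's expectation of the recent input; the response is the deviation
    from that expectation, so an unchanging input is progressively suppressed
    while a novel input evokes a large response again."""
    expectation = 0.0
    responses = []
    for s in stimuli:
        responses.append(s - expectation)                # unpredicted part of the input
        expectation = tau * expectation + (1 - tau) * s  # update the running estimate
    return responses

# A repeated sound adapts away; a deviant at the end pops out again
r = adapt([1.0] * 10 + [3.0])
```

The first presentation evokes the full response, repeats are increasingly suppressed, and the deviant at the end recovers a large response, mirroring the novelty-highlighting behavior described above.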
Affiliation(s)
- Ben D. B. Willmore
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J. King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
6. Makov S, Pinto D, Har-Shai Yahav P, Miller LM, Zion Golumbic E. "Unattended, distracting or irrelevant": Theoretical implications of terminological choices in auditory selective attention research. Cognition 2023; 231:105313. [PMID: 36344304 DOI: 10.1016/j.cognition.2022.105313]
Abstract
For seventy years, auditory selective attention research has focused on studying the cognitive mechanisms of prioritizing the processing of a 'main' task-relevant stimulus in the presence of 'other' stimuli. However, a closer look at this body of literature reveals deep empirical inconsistencies and theoretical confusion regarding the extent to which this 'other' stimulus is processed. We argue that many key debates regarding attention arise, at least in part, from inappropriate terminological choices for experimental variables that may not accurately map onto the cognitive constructs they are meant to describe. Here we critically review the most common or disruptive terminological ambiguities, differentiate between methodology-based and theory-derived terms, and unpack the theoretical assumptions underlying different terminological choices. In particular, we offer an in-depth analysis of the terms 'unattended' and 'distractor' and demonstrate how their use can lead to conflicting theoretical inferences. We also offer a framework for thinking about terminology in a more productive and precise way, in the hope of fostering more constructive debates and promoting more nuanced and accurate cognitive models of selective attention.
Affiliation(s)
- Shiri Makov
- The Gonda Multidisciplinary Center for Brain Research, Bar Ilan University, Israel
- Danna Pinto
- The Gonda Multidisciplinary Center for Brain Research, Bar Ilan University, Israel
- Paz Har-Shai Yahav
- The Gonda Multidisciplinary Center for Brain Research, Bar Ilan University, Israel
- Lee M Miller
- The Center for Mind and Brain, University of California, Davis, CA, United States of America; Department of Neurobiology, Physiology, & Behavior, University of California, Davis, CA, United States of America; Department of Otolaryngology / Head and Neck Surgery, University of California, Davis, CA, United States of America
- Elana Zion Golumbic
- The Gonda Multidisciplinary Center for Brain Research, Bar Ilan University, Israel.
7. Paciello F, Ripoli C, Fetoni AR, Grassi C. Redox Imbalance as a Common Pathogenic Factor Linking Hearing Loss and Cognitive Decline. Antioxidants (Basel) 2023; 12:332. [PMID: 36829891 PMCID: PMC9952092 DOI: 10.3390/antiox12020332]
Abstract
Experimental and clinical data suggest a tight link between hearing and cognitive functions under both physiological and pathological conditions. Indeed, hearing perception requires high-level cognitive processes, and its alterations have been considered a risk factor for cognitive decline. Thus, identifying common pathogenic determinants of hearing loss and neurodegenerative disease is challenging. Here, we focused on redox status imbalance as a possible common pathological mechanism linking hearing and cognitive dysfunctions. Oxidative stress plays a critical role in cochlear damage occurring during aging, as well as in that induced by exogenous factors, including noise. At the same time, increased oxidative stress in medio-temporal brain regions, including the hippocampus, is a hallmark of neurodegenerative disorders like Alzheimer's disease. As such, antioxidant therapy seems to be a promising approach to prevent and/or counteract both sensory and cognitive neurodegeneration. Here, we review experimental evidence suggesting that redox imbalance is a key pathogenetic factor underlying the association between sensorineural hearing loss and neurodegenerative diseases. A greater understanding of the pathophysiological mechanisms shared by these two diseased conditions will hopefully provide relevant information to develop innovative and effective therapeutic strategies.
Affiliation(s)
- Fabiola Paciello
- Department of Neuroscience, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
- Fondazione Policlinico Universitario A. Gemelli IRCCS, 00168 Rome, Italy
- Cristian Ripoli
- Department of Neuroscience, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
- Fondazione Policlinico Universitario A. Gemelli IRCCS, 00168 Rome, Italy
- Correspondence: ; Tel.: +39-0630154966
- Anna Rita Fetoni
- Unit of Audiology, Department of Neuroscience, Università degli Studi di Napoli Federico II, 80138 Naples, Italy
- Claudio Grassi
- Department of Neuroscience, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
- Fondazione Policlinico Universitario A. Gemelli IRCCS, 00168 Rome, Italy
8. Cody PA, Tzounopoulos T. Neuromodulatory Mechanisms Underlying Contrast Gain Control in Mouse Auditory Cortex. J Neurosci 2022; 42:5564-5579. [PMID: 35998293 PMCID: PMC9295830 DOI: 10.1523/jneurosci.2054-21.2022]
Abstract
Neural adaptation enables the brain to efficiently process sensory signals despite large changes in background noise. Previous studies have established that recent background spectro- or spatio-temporal statistics scale neural responses to sensory stimuli via a canonical normalization computation, which is conserved among species and sensory domains. In the auditory pathway, one major form of normalization, termed contrast gain control, presents as decreasing instantaneous firing-rate gain, the slope of the neural input-output relationship, with increasing variability of background sound levels (contrast) across time and frequency. Despite this gain rescaling, mean firing-rates in auditory cortex become invariant to sound level contrast, termed contrast invariance. The underlying neuromodulatory mechanisms of these two phenomena remain unknown. To study these mechanisms in male and female mice, we used a 2-photon calcium imaging preparation in layer 2/3 neurons of primary auditory cortex (A1), along with pharmacological and genetic KO approaches. We found that neuromodulatory cortical synaptic zinc signaling is necessary for contrast gain control but not contrast invariance in mouse A1.
SIGNIFICANCE STATEMENT When sound levels in the acoustic environment become more variable across time and frequency, the brain decreases response gain to maintain dynamic range and thus stimulus discriminability. This gain adaptation accounts for changes in perceptual judgments in humans and mice; however, the underlying neuromodulatory mechanisms remain poorly understood. Here, we report context-dependent neuromodulatory effects of synaptic zinc that are necessary for contrast gain control in A1. Understanding context-specific neuromodulatory mechanisms, such as contrast gain control, provides insight into A1 cortical mechanisms of adaptation and also into fundamental aspects of perceptual changes that rely on gain modulation, such as attention.
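The contrast gain control computation described above, a slope of the input-output function that shrinks as the contrast of recent sound levels grows, can be sketched as a divisive gain rule. The functional form, parameter values, and level statistics below are hypothetical illustrations, not the model or data from the paper.

```python
import numpy as np

def contrast_gain(background_levels_db, sigma=10.0):
    """Divisive-normalization-style gain: gain falls as the contrast
    (variability) of recent background sound levels rises. `sigma` is a
    hypothetical semi-saturation constant."""
    contrast = np.std(background_levels_db)  # spread of recent levels
    return 1.0 / (1.0 + contrast / sigma)    # higher contrast -> lower gain

def firing_rate(tone_level_db, background_levels_db, threshold=20.0):
    """Rectified-linear response whose slope is rescaled by the gain."""
    g = contrast_gain(background_levels_db)
    return g * max(tone_level_db - threshold, 0.0)

rng = np.random.default_rng(1)
low_contrast = np.full(100, 60.0)                       # steady background
high_contrast = 60.0 + 15.0 * rng.standard_normal(100)  # variable background

r_low = firing_rate(70.0, low_contrast)    # full-gain response
r_high = firing_rate(70.0, high_contrast)  # same tone, reduced gain
```

The same 70 dB tone evokes a smaller gain-scaled response against the high-contrast background, which is the rescaling that preserves dynamic range as level variability grows.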
Affiliation(s)
- Patrick A Cody
- Pittsburgh Hearing Research Center, Department of Otolaryngology, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Thanos Tzounopoulos
- Pittsburgh Hearing Research Center, Department of Otolaryngology, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
9. Auerbach BD, Gritton HJ. Hearing in Complex Environments: Auditory Gain Control, Attention, and Hearing Loss. Front Neurosci 2022; 16:799787. [PMID: 35221899 PMCID: PMC8866963 DOI: 10.3389/fnins.2022.799787]
Abstract
Listening in noisy or complex sound environments is difficult for individuals with normal hearing and can be a debilitating impairment for those with hearing loss. Extracting meaningful information from a complex acoustic environment requires the ability to accurately encode specific sound features under highly variable listening conditions and segregate distinct sound streams from multiple overlapping sources. The auditory system employs a variety of mechanisms to achieve this auditory scene analysis. First, neurons across levels of the auditory system exhibit compensatory adaptations to their gain and dynamic range in response to prevailing sound stimulus statistics in the environment. These adaptations allow for robust representations of sound features that are to a large degree invariant to the level of background noise. Second, listeners can selectively attend to a desired sound target in an environment with multiple sound sources. This selective auditory attention is another form of sensory gain control, enhancing the representation of an attended sound source while suppressing responses to unattended sounds. This review will examine both “bottom-up” gain alterations in response to changes in environmental sound statistics as well as “top-down” mechanisms that allow for selective extraction of specific sound features in a complex auditory scene. Finally, we will discuss how hearing loss interacts with these gain control mechanisms, and the adaptive and/or maladaptive perceptual consequences of this plasticity.
Affiliation(s)
- Benjamin D. Auerbach
- Department of Molecular and Integrative Physiology, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- *Correspondence: Benjamin D. Auerbach,
- Howard J. Gritton
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Department of Comparative Biosciences, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL, United States
10. Nussbaum C, von Eiff CI, Skuk VG, Schweinberger SR. Vocal emotion adaptation aftereffects within and across speaker genders: Roles of timbre and fundamental frequency. Cognition 2021; 219:104967. [PMID: 34875400 DOI: 10.1016/j.cognition.2021.104967]
Abstract
While the human perceptual system constantly adapts to the environment, some of the underlying mechanisms are still poorly understood. For instance, although previous research demonstrated perceptual aftereffects in emotional voice adaptation, the contribution of different vocal cues to these effects is unclear. In two experiments, we used parameter-specific morphing of adaptor voices to investigate the relative roles of fundamental frequency (F0) and timbre in vocal emotion adaptation, using angry and fearful utterances. Participants adapted to voices containing emotion-specific information in either F0 or timbre, with all other parameters kept constant at an intermediate 50% morph level. Full emotional voices and ambiguous voices were used as reference conditions. All adaptor stimuli were of either the same (Experiment 1) or the opposite speaker gender (Experiment 2) as the subsequently presented target voices. In Experiment 1, we found consistent aftereffects in all adaptation conditions. Crucially, aftereffects following timbre adaptation were much larger than those following F0 adaptation and were only marginally smaller than those following full adaptation. In Experiment 2, adaptation aftereffects were markedly and proportionally reduced, with differences between morph types no longer significant. These results suggest that timbre plays a larger role than F0 in vocal emotion adaptation, and that vocal emotion adaptation is compromised by eliminating gender correspondence between adaptor and target stimuli. Our findings also add to mounting evidence suggesting a major role of timbre in auditory adaptation.
Affiliation(s)
- Christine Nussbaum
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany.
- Celina I von Eiff
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany
- Verena G Skuk
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany
- Stefan R Schweinberger
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany.
11. Song C, Zhao Y, Bai L. [Effects of background noise on auditory response characteristics of primary auditory cortex neurons in awake mice]. Nan Fang Yi Ke Da Xue Xue Bao (Journal of Southern Medical University) 2021; 41:1672-1679. [PMID: 34916193 PMCID: PMC8685701 DOI: 10.12122/j.issn.1673-4254.2021.11.11]
Abstract
OBJECTIVE To study the effects of different continuous background noises on the auditory response characteristics of primary auditory cortex (A1) neurons in awake mice.
METHODS We performed in vivo cell-attached recordings in layer 4 neurons of the A1 of awake mice to investigate how continuous background noises of different levels affected the intensity tuning, frequency tuning, and temporal characteristics of individual A1 neurons. According to their intensity tuning characteristics and the type of stimulation, 44 neurons were divided into 4 groups: a monotonic-intensity group (20 monotonic neurons), a nonmonotonic-intensity group (6 nonmonotonic neurons), a monotonic-frequency group (25 monotonic neurons), and a monotonic-latency group (15 monotonic neurons).
RESULTS The A1 neurons showed only transient spike responses within 10 to 40 ms after the onset of continuous wide-band noise stimulation. Noise intensity had no significant effect on the background firing rates of the A1 neurons (P > 0.05). Increasing the background noise produced a significant linear elevation of the intensity threshold of monotonic and nonmonotonic neurons for tone-evoked responses (R2 > 0.90, P < 0.05). No significant difference was observed in the slopes of the threshold changes between monotonic and nonmonotonic neurons (P > 0.05). The best intensity of nonmonotonic neurons increased with the intensity of the background noise, and the change in best intensity was positively correlated with the change in threshold of the same neuron (r=0.944, P < 0.001). The frequency response bandwidth and the firing rate of the A1 neurons decreased as noise intensity increased (P < 0.001), but the best frequency remained almost unchanged (P < 0.001). Increasing the background noise intensity prolonged the first-spike latency of the neurons to the same tone stimulus (P < 0.05) without affecting the temporal precision of the first action potential (P > 0.05).
CONCLUSION The acoustic response threshold of A1 neurons increases linearly with background noise intensity. Increased background noise leads to a compressed frequency bandwidth, a decreased firing rate, and a prolonged spike latency, but frequency selectivity and the temporal precision of the auditory response remain stable.
Affiliation(s)
- Changbao Song
- Department of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Department of Physiology, School of Basic Medical Science, Southern Medical University, Guangzhou 510515, China
- Yan Zhao
- Department of Physiology, School of Basic Medical Science, Southern Medical University, Guangzhou 510515, China
- Lin Bai
- Department of Physiology, School of Basic Medical Science, Southern Medical University, Guangzhou 510515, China
12. Compression and amplification algorithms in hearing aids impair the selectivity of neural responses to speech. Nat Biomed Eng 2021; 6:717-730. [PMID: 33941898 PMCID: PMC7612903 DOI: 10.1038/s41551-021-00707-y]
Abstract
In quiet environments, hearing aids improve the perception of low-intensity sounds. However, for high-intensity sounds in background noise, the aids often fail to provide a benefit to the wearer. Here, by using large-scale single-neuron recordings from hearing-impaired gerbils — an established animal model of human hearing — we show that hearing aids restore the sensitivity of neural responses to speech, but not their selectivity. Rather than reflecting a deficit in supra-threshold auditory processing, the low selectivity is a consequence of hearing-aid compression (which decreases the spectral and temporal contrasts of incoming sound) and of amplification (which distorts neural responses, regardless of whether hearing is impaired). Processing strategies that avoid the trade-off between neural sensitivity and selectivity should improve the performance of hearing aids.
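The contrast-reducing effect of hearing-aid compression described above can be illustrated with a toy wide-dynamic-range compressor. The knee threshold, compression ratio, and frame levels below are hypothetical, not the device settings used in the study.

```python
import numpy as np

def compress(levels_db, threshold=50.0, ratio=3.0):
    """Toy wide-dynamic-range compression: input levels above the knee
    threshold are scaled down by the compression ratio, which shrinks the
    level contrasts of the output."""
    out = np.asarray(levels_db, dtype=float).copy()
    above = out > threshold
    out[above] = threshold + (out[above] - threshold) / ratio
    return out

speech_frames = np.array([40.0, 55.0, 70.0, 85.0])  # hypothetical frame levels, dB SPL
aided_frames = compress(speech_frames)

# The level range (contrast) across frames is smaller after compression
input_range = np.ptp(speech_frames)
output_range = np.ptp(aided_frames)
```

The aided frames span a much narrower level range than the input, a simple analogue of the reduced spectral and temporal contrast that the study links to lower neural selectivity.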