1
Li Y, Zhang J. Binaural advantages in sound temporal information processing by neurons in the rat inferior colliculus. Front Neurosci 2023; 17:1308052. PMID: 38125407; PMCID: PMC10731313; DOI: 10.3389/fnins.2023.1308052.
Abstract
Previous studies of the advantages of binaural hearing have long focused on sound localization and spatial stream segregation. Binaural advantages have also been observed in speech perception in reverberation. Both human speech and animal vocalizations contain temporal features that are critical for speech perception and animal communication. However, whether there are binaural advantages for sound temporal information processing in the central auditory system has not been elucidated. The gap detection threshold (GDT), the shortest silent interval detectable in a sound, has been widely used to measure auditory temporal resolution. In the present study, we determined the GDTs of rat inferior collicular neurons under both monaural and binaural hearing conditions. We found that the majority of inferior collicular neurons in adult rats exhibited binaural advantages in gap detection, i.e., better neural gap detection ability under binaural than under monaural hearing conditions. However, this binaural advantage in sound temporal information processing was not significant in the inferior collicular neurons of P14-21 and P22-30 rats. We also observed age-related changes in neural temporal acuity in the rat inferior colliculus. These results demonstrate a new advantage of binaural hearing (a binaural advantage in temporal processing) in the central auditory system, in addition to sound localization and spatial stream segregation.
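The gap-detection paradigm in this abstract is easy to illustrate: a silent interval is embedded in an otherwise continuous noise burst, and the GDT is the shortest detectable gap. The following is an illustrative stimulus sketch only, not the study's stimulus code; the burst duration, sampling rate, and gap placement are assumptions.

```python
import numpy as np

def gap_stimulus(gap_ms, dur_ms=200.0, fs=44100, seed=0):
    """Synthesize a noise burst with a silent gap at its midpoint.

    In the gap-detection-threshold (GDT) paradigm, a silent interval
    of gap_ms is embedded in a noise burst; the GDT is the shortest
    gap that a listener (or neuron) can detect.
    """
    rng = np.random.default_rng(seed)
    n_total = int(dur_ms / 1000 * fs)
    noise = rng.standard_normal(n_total)
    gap_len = int(gap_ms / 1000 * fs)
    start = (n_total - gap_len) // 2
    noise[start:start + gap_len] = 0.0  # the silent interval
    return noise
```

Sweeping `gap_ms` downward while measuring detection performance (behavioral or neural) traces out the psychometric function from which the GDT is read off.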
Affiliation(s)
- Jiping Zhang
- Key Laboratory of Brain Functional Genomics, Ministry of Education, NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, School of Life Sciences, East China Normal University, Shanghai, China
2
Lansbergen SE, Versfeld N, Dreschler WA. Exploring Factors That Contribute to the Success of Rehabilitation With Hearing Aids. Ear Hear 2023; 44:1514-1525. PMID: 37792897; PMCID: PMC10583950; DOI: 10.1097/aud.0000000000001393.
Abstract
OBJECTIVES Hearing aids are an essential and important part of hearing rehabilitation. The combination of technical data on hearing aids and individual rehabilitation needs can give insight into the factors that contribute to the success of rehabilitation. This study sets out to investigate if different subgroups of (comparable) hearing aids lead to differences in the success of rehabilitation, and whether these differences vary between different domains of auditory functioning. DESIGN This study explored the advantages of including patient-reported outcome measures (PROMs) in the process of purchasing new hearing aids in a large sample of successful hearing aid users. Subject data were obtained from 64 (commercial) hearing aid dispensers and 10 (noncommercial) audiological centers in the Netherlands. The PROM was a 32-item questionnaire and was used to determine the success of rehabilitation using hearing aids by measuring auditory disability over time. The items were mapped on six domains of auditory functioning: detection, discrimination, localization, speech in quiet, speech in noise, and noise tolerance, encompassing a variety of daily-life listening situations. Hearing aids were grouped by means of cluster analysis, resulting in nine subgroups. In total, 1149 subjects were included in this study. A general linear model was used to model the final PROM results. Model results were analyzed via a multifactor Analysis of Variance. Post hoc analyses provided detailed information on model variables. RESULTS Results showed a strong statistically significant effect of hearing aids on self-perceived auditory functioning in general. Clinically relevant differences were found for auditory domains including detection, speech in quiet, speech in noise, and localization. There was only a small, but significant, effect of the different subgroups of hearing aids on the final PROM results, where no differences were found between the auditory domains. 
Minor differences were found between results obtained in commercial and noncommercial settings, and between novice and experienced users. Severity of hearing loss, age, gender, and hearing aid style (i.e., behind-the-ear versus receiver-in-canal type) did not have a clinically relevant effect on the final PROM results. CONCLUSIONS The use of hearing aids has a large positive effect on self-perceived auditory functioning. There was, however, no salient effect of the different subgroups of hearing aids on the final PROM results, indicating that the technical properties of hearing aids play only a limited role in this respect. This study challenges the belief that premium devices outperform basic ones, highlighting the need for personalized rehabilitation strategies and the importance of evaluating the factors that contribute to successful rehabilitation in clinical practice.
Affiliation(s)
- Simon E. Lansbergen
- Clinical and Experimental Audiology, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Niek Versfeld
- Otolaryngology Head and Neck Surgery, Ear and Hearing, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam Public Health Research Institute, Amsterdam, The Netherlands
- Wouter A. Dreschler
- Clinical and Experimental Audiology, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
3
Farahbod H, Rogalsky C, Keator LM, Cai J, Pillay SB, Turner K, LaCroix A, Fridriksson J, Binder JR, Middlebrooks JC, Hickok G, Saberi K. Informational Masking in Aging and Brain-lesioned Individuals. J Assoc Res Otolaryngol 2023; 24:67-79. PMID: 36471207; PMCID: PMC9971540; DOI: 10.1007/s10162-022-00877-9.
Abstract
Auditory stream segregation and informational masking were investigated in brain-lesioned individuals, age-matched controls with no neurological disease, and young college-age students. A psychophysical paradigm known as rhythmic masking release (RMR) was used to examine the ability of participants to identify a change in the rhythmic sequence of 20-ms Gaussian noise bursts presented through headphones and filtered through generalized head-related transfer functions to produce the percept of an externalized auditory image (i.e., a 3D virtual reality sound). The target rhythm was temporally interleaved with a masker sequence comprising similar noise bursts in a manner that resulted in a uniform sequence with no information remaining about the target rhythm when the target and masker were presented from the same location (an impossible task). Spatially separating the target and masker sequences allowed participants to determine whether there was a change in the target rhythm midway through its presentation. RMR thresholds were defined as the minimum spatial separation between target and masker sequences that resulted in a 70.7%-correct performance level in a single-interval 2-alternative forced-choice adaptive tracking procedure. The main findings were (1) significantly higher RMR thresholds for individuals with brain lesions (especially those with damage to parietal areas) and (2) a left-right spatial asymmetry in performance for lesion (but not control) participants. These findings contribute to a better understanding of spatiotemporal relations in informational masking and the neural bases of auditory scene analysis.
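The 70.7%-correct tracking target mentioned in this abstract corresponds to the classic two-down/one-up transformed up-down rule (Levitt, 1971): step down after two consecutive correct responses, step up after each error. The sketch below simulates such a track against a synthetic observer; the starting separation, step size, and the observer's psychometric function are illustrative assumptions, not values from the study.

```python
import random

def staircase_2down1up(p_correct_at, start=40.0, step=2.0,
                       n_trials=200, seed=1):
    """Two-down/one-up adaptive track.

    Decreasing the tracked variable (here, spatial separation in
    degrees) after two consecutive correct responses and increasing
    it after each error converges on the 70.7%-correct point of the
    psychometric function.
    """
    rng = random.Random(seed)
    level, streak, reversals = start, 0, []
    last_dir = None
    for _ in range(n_trials):
        correct = rng.random() < p_correct_at(level)
        if correct:
            streak += 1
            if streak < 2:
                continue          # need two in a row to step down
            streak, direction = 0, -1
        else:
            streak, direction = 0, +1
        if last_dir is not None and direction != last_dir:
            reversals.append(level)  # track direction reversals
        last_dir = direction
        level = max(0.0, level + direction * step)
    # threshold estimate: mean of the last few reversal levels
    tail = reversals[-6:]
    return sum(tail) / max(1, len(tail))
```

Run against a simulated 2AFC observer whose psychometric function is known, the track settles near that observer's 70.7%-correct separation.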
Affiliation(s)
- Haleh Farahbod
- Department of Cognitive Sciences, University of California, Irvine, USA
- Corianne Rogalsky
- College of Health Solutions, Arizona State University, Tempe, USA
- Lynsey M. Keator
- Department of Communication Sciences and Disorders, University of South Carolina, Columbia, USA
- Julia Cai
- College of Health Solutions, Arizona State University, Tempe, USA
- Sara B. Pillay
- Department of Neurology, Medical College of Wisconsin, Milwaukee, USA
- Katie Turner
- Department of Cognitive Sciences, University of California, Irvine, USA
- Arianna LaCroix
- College of Health Sciences, Midwestern University, Glendale, USA
- Julius Fridriksson
- Department of Communication Sciences and Disorders, University of South Carolina, Columbia, USA
- Jeffrey R. Binder
- Department of Neurology, Medical College of Wisconsin, Milwaukee, USA
- John C. Middlebrooks
- Department of Cognitive Sciences, University of California, Irvine, USA
- Department of Otolaryngology, University of California, Irvine, USA
- Department of Language Science, University of California, Irvine, USA
- Gregory Hickok
- Department of Cognitive Sciences, University of California, Irvine, USA
- Department of Language Science, University of California, Irvine, USA
- Kourosh Saberi
- Department of Cognitive Sciences, University of California, Irvine, USA
4
Williams IR, Filimontseva A, Connelly CJ, Ryugo DK. The lateral superior olive in the mouse: Two systems of projecting neurons. Front Neural Circuits 2022; 16:1038500. PMID: 36338332; PMCID: PMC9630946; DOI: 10.3389/fncir.2022.1038500.
Abstract
The lateral superior olive (LSO) is a key structure in the central auditory system of mammals that exerts efferent control on cochlear sensitivity and is involved in the processing of binaural level differences for sound localization. Understanding how the LSO contributes to these processes requires knowledge about the resident cells and their connections with other auditory structures. We used standard histological stains and retrograde tracer injections into the inferior colliculus (IC) and cochlea in order to characterize two basic groups of neurons: (1) Principal and periolivary (PO) neurons have projections to the IC as part of the ascending auditory pathway; and (2) lateral olivocochlear (LOC) intrinsic and shell efferents have descending projections to the cochlea. Principal and intrinsic neurons are intermixed within the LSO, exhibit fusiform somata, and have disk-shaped dendritic arborizations. The principal neurons have bilateral, symmetric, and tonotopic projections to the IC. The intrinsic efferents have strictly ipsilateral projections, known to be tonotopic from previous publications. PO and shell neurons represent much smaller populations (<10% of principal and intrinsic neurons, respectively), have multipolar somata, reside outside the LSO, and have non-topographic, bilateral projections. PO and shell neurons appear to have widespread projections to their targets that imply a more diffuse modulatory function. The somata and dendrites of principal and intrinsic neurons form a laminar matrix within the LSO and share quantifiably similar alignment to the tonotopic axis. Their restricted projections emphasize the importance of frequency in binaural processing and efferent control for auditory perception. This study addressed and expanded on previous findings of cell types, circuit laterality, and projection tonotopy in the LSO of the mouse.
Affiliation(s)
- Isabella R. Williams
- Garvan Institute of Medical Research, Darlinghurst, NSW, Australia
- School of Medical Sciences, University of New South Wales, Kensington, NSW, Australia
- Anastasia Filimontseva
- Catherine J. Connelly
- Garvan Institute of Medical Research, Darlinghurst, NSW, Australia
- School of Medical Sciences, University of New South Wales, Kensington, NSW, Australia
- David K. Ryugo
- Garvan Institute of Medical Research, Darlinghurst, NSW, Australia
- School of Medical Sciences, University of New South Wales, Kensington, NSW, Australia
- Department of Otolaryngology-Head, Neck and Skull Base Surgery, St. Vincent’s Hospital, Darlinghurst, NSW, Australia
5
Gibbs BE, Bernstein JGW, Brungart DS, Goupell MJ. Effects of better-ear glimpsing, binaural unmasking, and spectral resolution on spatial release from masking in cochlear-implant users. J Acoust Soc Am 2022; 152:1230. PMID: 36050186; PMCID: PMC9420049; DOI: 10.1121/10.0013746.
Abstract
Bilateral cochlear-implant (BICI) listeners obtain less spatial release from masking (SRM; speech-recognition improvement for spatially separated vs co-located conditions) than normal-hearing (NH) listeners, especially for symmetrically placed maskers that produce similar long-term target-to-masker ratios at the two ears. Two experiments examined possible causes of this deficit, including limited better-ear glimpsing (using speech information from the more advantageous ear in each time-frequency unit), limited binaural unmasking (using interaural differences to improve signal-in-noise detection), or limited spectral resolution. Listeners had NH (presented with unprocessed or vocoded stimuli) or BICIs. Experiment 1 compared natural symmetric maskers, idealized monaural better-ear masker (IMBM) stimuli that automatically performed better-ear glimpsing, and hybrid stimuli that added worse-ear information, potentially restoring binaural cues. BICI and NH-vocoded SRM was comparable to NH-unprocessed SRM for idealized stimuli but was 14%-22% lower for symmetric stimuli, suggesting limited better-ear glimpsing ability. Hybrid stimuli improved SRM for NH-unprocessed listeners but degraded SRM for BICI and NH-vocoded listeners, suggesting they experienced across-ear interference instead of binaural unmasking. In experiment 2, increasing the number of vocoder channels did not change NH-vocoded SRM. BICI SRM deficits likely reflect a combination of across-ear interference, limited better-ear glimpsing, and poorer binaural unmasking that stems from cochlear-implant-processing limitations other than reduced spectral resolution.
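The better-ear glimpsing idealized by the IMBM stimuli in this abstract has a simple core: in each time-frequency unit, use whichever ear offers the more favorable target-to-masker ratio. The sketch below shows only that selection step on precomputed per-ear SNR grids; it is not the authors' stimulus-processing chain, and the array layout (time x frequency, values in dB) is an assumption.

```python
import numpy as np

def better_ear_glimpse(snr_left_db, snr_right_db):
    """Idealized better-ear selection over time-frequency units.

    snr_left_db and snr_right_db are same-shaped arrays of
    target-to-masker ratios (dB), one cell per time-frequency unit.
    The glimpsing listener is modeled as always having access to the
    better of the two ears in every unit.
    """
    return np.maximum(snr_left_db, snr_right_db)
```

With symmetric maskers, the long-term SNR is similar at both ears, so any glimpsing benefit must come from these moment-to-moment, per-band selections; comparing performance with natural versus pre-glimpsed (IMBM-style) stimuli isolates that ability.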
Affiliation(s)
- Bobby E Gibbs
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Joshua G W Bernstein
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
- Douglas S Brungart
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
6
Dellaferrera G, Asabuki T, Fukai T. Modeling the Repetition-Based Recovering of Acoustic and Visual Sources With Dendritic Neurons. Front Neurosci 2022; 16:855753. PMID: 35573290; PMCID: PMC9097820; DOI: 10.3389/fnins.2022.855753.
Abstract
In natural auditory environments, acoustic signals originate from the temporal superimposition of different sound sources. The problem of inferring individual sources from ambiguous mixtures of sounds is known as blind source decomposition. Experiments on humans have demonstrated that the auditory system can identify sound sources as repeating patterns embedded in the acoustic input. Source repetition produces temporal regularities that can be detected and used for segregation. Specifically, listeners can identify sounds occurring more than once across different mixtures, but not sounds heard only in a single mixture. However, whether such a behavior can be computationally modeled has not yet been explored. Here, we propose a biologically inspired computational model to perform blind source separation on sequences of mixtures of acoustic stimuli. Our method relies on a somatodendritic neuron model trained with a Hebbian-like learning rule which was originally conceived to detect spatio-temporal patterns recurring in synaptic inputs. We show that the segregation capabilities of our model are reminiscent of the features of human performance in a variety of experimental settings involving synthesized sounds with naturalistic properties. Furthermore, we extend the study to investigate the properties of segregation on task settings not yet explored with human subjects, namely natural sounds and images. Overall, our work suggests that somatodendritic neuron models offer a promising neuro-inspired learning strategy to account for the characteristics of the brain segregation capabilities as well as to make predictions on yet untested experimental settings.
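The principle that a Hebbian-like rule can pull a repeating source out of variable mixtures can be shown with a far simpler stand-in than the paper's somatodendritic model: a single linear unit trained with Oja's rule. The repeating pattern, gains, and noise level below are all invented for illustration; this is a toy demonstration of the learning principle, not the authors' method.

```python
import numpy as np

def learn_repeated_source(pattern, n_steps=3000, lr=0.01, seed=0):
    """Toy Hebbian (Oja-rule) learner.

    A single linear unit receives mixtures in which one source
    pattern repeats across presentations with random gain, plus
    independent noise. The Hebbian update with Oja's decay term
    aligns the weight vector with the repeating source, i.e., the
    dominant direction of the input covariance.
    """
    rng = np.random.default_rng(seed)
    pattern = pattern / np.linalg.norm(pattern)
    w = rng.standard_normal(pattern.size) * 0.1
    for _ in range(n_steps):
        # mixture = repeated source (random gain) + background noise
        x = (rng.normal(0.0, 2.0) * pattern
             + 0.3 * rng.standard_normal(pattern.size))
        y = w @ x                        # unit's linear response
        w += lr * y * (x - y * w)        # Oja's rule: Hebb + decay
    return w
```

After training, the weight vector points (up to sign) along the repeated pattern even though no single presentation reveals it cleanly, which is the repetition-based cue the abstract describes.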
Affiliation(s)
- Giorgia Dellaferrera
- Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology, Okinawa, Japan
- Institute of Neuroinformatics, University of Zurich and Swiss Federal Institute of Technology Zurich (ETH), Zurich, Switzerland
- Toshitake Asabuki
- Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology, Okinawa, Japan
- Tomoki Fukai
- Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology, Okinawa, Japan
7
Shared cognitive resources between memory and attention during sound-sequence encoding. Atten Percept Psychophys 2022; 84:739-759. PMID: 35106682; DOI: 10.3758/s13414-021-02390-2.
Abstract
You are on the phone, walking down a street. This everyday situation calls for selective attention, allowing you to ignore surrounding irrelevant sounds while trying to encode in memory the relevant information from the phone. Attention and memory are two cognitive functions that interact constantly. However, their interaction is not yet well characterized during sound-sequence encoding. We independently manipulated both selective attention and working memory in a delayed matching-to-sample task with two tone series played successively in one ear. During the first melody presentation (memory encoding), weakly or highly distracting melodies were played in the other ear. Detection of the difference between the two comparison melodies could be easy or difficult, requiring low- or high-precision encoding, i.e., low or high memory load. Sixteen non-musician and 16 musician participants performed this new task. As expected, both groups were less accurate in the difficult memory task and in difficult-to-ignore distractor conditions. Importantly, an interaction between memory-task difficulty and distractor difficulty was found in both groups. Non-musicians showed less difference between easy and difficult-to-ignore distractors in the difficult than in the easy memory task. In contrast, musicians, who performed better than non-musicians, showed a greater difference between easy and difficult-to-ignore distractors in the difficult than in the easy memory task. In a second experiment including trials without a distractor, we showed that these effects are in line with cognitive load theory. Taken together, these results support shared cognitive resources between working memory and attention during sound-sequence encoding.
8
Liu J, Huang X, Zhang J. Unilateral Conductive Hearing Loss Disrupts the Developmental Refinement of Binaural Processing in the Rat Primary Auditory Cortex. Front Neurosci 2021; 15:762337. PMID: 34867170; PMCID: PMC8640238; DOI: 10.3389/fnins.2021.762337.
Abstract
Binaural hearing is critically important for the perception of sound spatial locations. The primary auditory cortex (AI) has been demonstrated to be necessary for sound localization. However, how the processing of binaural cues by AI neurons develops after hearing onset, and how the binaural processing of AI neurons is affected by reversible unilateral conductive hearing loss (RUCHL), are not fully elucidated. Here, we determined the binaural processing of AI neurons in four groups of rats: postnatal day (P) 14–18 rats, P19–30 rats, P57–70 adult rats, and RUCHL rats (P57–70) with RUCHL during P14–30. We recorded the responses of AI neurons to both monaural and binaural stimuli with variations in interaural level differences (ILDs) and average binaural levels. We found that the monaural response types, the binaural interaction types, and the distributions of the best ILDs of AI neurons in P14–18 rats are already adult-like. However, after hearing onset there are developmental refinements in the binaural processing of AI neurons, exhibited as an increase in the degree of binaural interaction and an increase in the sensitivity and selectivity to ILDs. RUCHL during early hearing development affects monaural response types, decreases the degree of binaural interaction, and decreases both the selectivity and sensitivity to ILDs of AI neurons in adulthood. This new evidence helps us understand the refinement and plasticity of binaural processing by AI neurons during hearing development, and might enhance our understanding of the neuronal mechanisms underlying developmental changes in auditory spatial perception.
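The two stimulus dimensions varied in this study reduce to simple arithmetic on the per-ear levels. The helpers below make the definitions concrete; the left-minus-right sign convention is an assumption for illustration (conventions differ across labs), not necessarily the one used in the paper.

```python
def ild_db(level_left_db, level_right_db):
    """Interaural level difference (ILD) in dB.

    Assumed convention: left minus right, so a positive ILD means
    the sound is more intense at the left ear.
    """
    return level_left_db - level_right_db

def average_binaural_level(level_left_db, level_right_db):
    """Average binaural level (ABL) in dB: the mean of the two ear
    levels, typically held fixed while ILD is varied so that changes
    in response reflect the interaural cue rather than loudness."""
    return 0.5 * (level_left_db + level_right_db)
```

For example, presenting 60 dB SPL to the left ear and 50 dB SPL to the right gives an ILD of +10 dB at an ABL of 55 dB, and sweeping the ILD at constant ABL maps a neuron's ILD sensitivity.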
Affiliation(s)
- Jing Liu
- Key Laboratory of Brain Functional Genomics, Ministry of Education, NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, School of Life Sciences, East China Normal University, Shanghai, China
- Xinyi Huang
- Key Laboratory of Brain Functional Genomics, Ministry of Education, NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, School of Life Sciences, East China Normal University, Shanghai, China
- Jiping Zhang
- Key Laboratory of Brain Functional Genomics, Ministry of Education, NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, School of Life Sciences, East China Normal University, Shanghai, China
9
Middlebrooks JC. A Search for a Cortical Map of Auditory Space. J Neurosci 2021; 41:5772-5778. PMID: 34011526; PMCID: PMC8265804; DOI: 10.1523/jneurosci.0501-21.2021.
Abstract
This is the story of a search for a cortical map of auditory space. The search began with a study that was reported in the first issue of The Journal of Neuroscience (Middlebrooks and Pettigrew, 1981). That paper described some unexpected features of spatial sensitivity in the auditory cortex while failing to demonstrate the expected map. In the ensuing 40 years, we have encountered the following: panoramic spatial coding by single neurons; a rich variety of response patterns that are unmasked in the absence of general anesthesia; sharpening of spatial sensitivity when an animal is engaged in a listening task; and reorganization of spatial sensitivity in the presence of competing sounds. We have not encountered a map, but not through lack of trying. On the basis of years of negative results by our group and others, and positive results that are inconsistent with static point-to-point topography, we are confident in concluding that there just ain't no map. Instead, we have come to appreciate the highly dynamic spatial properties of cortical neurons, which serve the needs of listeners in a changing sonic environment.
Affiliation(s)
- John C Middlebrooks
- Department of Otolaryngology
- Department of Neurobiology and Behavior
- Department of Cognitive Sciences
- Department of Biomedical Engineering, University of California at Irvine, Irvine, California 92697-5310