1. Effects of Age at Implantation on Outcomes of Cochlear Implantation in Children with Short Durations of Single-Sided Deafness. Otol Neurotol 2023; 44:233-240. PMID: 36728258; PMCID: PMC9924958; DOI: 10.1097/mao.0000000000003811.
Abstract
OBJECTIVE Children with single-sided deafness (SSD) show reduced language and academic development and report hearing challenges. We aim to improve outcomes in children with SSD by providing bilateral hearing through cochlear implantation of the deaf ear with minimal delay. STUDY DESIGN Prospective cohort study of 57 children with SSD provided with a cochlear implant (CI) between May 13, 2013, and June 25, 2021. SETTING Tertiary children's hospital. PARTICIPANTS Children with early onset (n = 40) or later onset (n = 17) of SSD received CIs at ages 2.47 ± 1.58 years (early onset group) and 11.67 ± 3.91 years (late onset group) (mean ± SD). Duration of unilateral deafness was limited (mean ± SD = 1.93 ± 1.56 yr). INTERVENTION Cochlear implantation of the deaf ear. MAIN OUTCOMES/MEASURES Evaluations of device use (data logging) and hearing (speech perception, effects of spatial release from masking on speech detection, localization of stationary and moving sound, self-reported hearing questionnaires). RESULTS Daily device use was variable (mean ± SD = 5.60 ± 2.97 h/d, range = 0.0-14.7 h/d), with particular challenges during extended COVID-19 lockdowns, including school closures (daily use reduced by a mean of 1.73 h). Speech perception with the CI alone improved (mean ± SD = 65.7 ± 26.4 RAU) but, in the late onset group, remained poorer than in the normal-hearing ear. Measures of spatial release from masking also showed asymmetric hearing in the late onset group (t(13) = 5.14, p = 0.001). Localization of both stationary and moving sound was poor (mean ± SD error = 34.6° ± 16.7°) but slightly improved on the deaf side with CI use (F(1,36) = 3.95, p = 0.05). Decreased sound localization accuracy correlated significantly with poorer self-reported hearing. CONCLUSIONS AND RELEVANCE Benefits of CI in children with limited durations of SSD may be more restricted for older children/adolescents. Spatial hearing challenges remain. Efforts to increase CI acceptance and consistent use are needed.
2. Persistence and generalization of adaptive changes in auditory localization behavior following unilateral conductive hearing loss. Front Neurosci 2023; 17:1067937. PMID: 36816127; PMCID: PMC9929551; DOI: 10.3389/fnins.2023.1067937.
Abstract
Introduction Sound localization relies on the neural processing of binaural and monaural spatial cues generated by the physical properties of the head and body. Hearing loss in one ear compromises binaural computations, impairing the ability to localize sounds in the horizontal plane. With appropriate training, adult individuals can adapt to this binaural imbalance and largely recover their localization accuracy. However, it remains unclear how long this learning is retained or whether it generalizes to other stimuli. Methods We trained ferrets to localize broadband noise bursts in quiet conditions and measured their initial head orienting responses and approach-to-target behavior. To evaluate the persistence of auditory spatial learning, we tested the sound localization performance of the animals over repeated periods of monaural earplugging that were interleaved with short or long periods of normal binaural hearing. To explore learning generalization to other stimulus types, we measured the localization accuracy before and after adaptation using different bandwidth stimuli presented against constant or amplitude-modulated background noise. Results Retention of learning resulted in a smaller initial deficit when the same ear was occluded on subsequent occasions. Each time, the animals' performance recovered with training to near pre-plug levels of localization accuracy. By contrast, switching the earplug to the contralateral ear resulted in less adaptation, indicating that the capacity to learn a new strategy for localizing sound is more limited if the animals have previously adapted to conductive hearing loss in the opposite ear. Moreover, the degree of adaptation to the training stimulus for individual animals was significantly correlated with the extent to which learning extended to untrained octave band target sounds presented in silence and to broadband targets presented in background noise, suggesting that adaptation and generalization go hand in hand. 
Conclusions Together, these findings provide further evidence for plasticity in the weighting of monaural and binaural cues during adaptation to unilateral conductive hearing loss, and show that the training-dependent recovery in spatial hearing can generalize to more naturalistic listening conditions, so long as the target sounds provide sufficient spatial information.
3. Hearing Asymmetry Biases Spatial Hearing in Bimodal Cochlear-Implant Users Despite Bilateral Low-Frequency Hearing Preservation. Trends Hear 2023; 27:23312165221143907. PMID: 36605011; PMCID: PMC9829999; DOI: 10.1177/23312165221143907.
Abstract
Many cochlear implant users with binaural residual (acoustic) hearing benefit from combining electric and acoustic stimulation (EAS) in the implanted ear with acoustic amplification in the other. These bimodal EAS listeners can potentially use low-frequency binaural cues to localize sounds. However, their hearing is generally asymmetric for mid- and high-frequency sounds, perturbing or even abolishing binaural cues. Here, we investigated the effect of a frequency-dependent binaural asymmetry in hearing thresholds on sound localization by seven bimodal EAS listeners. Frequency dependence was probed by presenting sounds with power in low-, mid-, high-, or mid-to-high-frequency bands. Frequency-dependent hearing asymmetry was present in the bimodal EAS listening condition (when using both devices) but was also induced by independently switching devices on or off. Using both devices, hearing was near symmetric for low frequencies, asymmetric for mid frequencies with better hearing thresholds in the implanted ear, and monaural for high frequencies with no hearing in the non-implanted ear. Results show that sound-localization performance was generally poor. Typically, localization was strongly biased toward the better-hearing ear, and hearing asymmetry was a good predictor of these biases. Notably, even when hearing was symmetric, a preferential bias toward the ear with the hearing aid was revealed. We discuss how the frequency dependence of any hearing asymmetry may lead to binaural cues that are spatially inconsistent as the spectrum of a sound changes. We speculate that this inconsistency may prevent accurate sound localization even after long-term exposure to the hearing asymmetry.
4. Differential Effects of Task-Irrelevant Monaural and Binaural Classroom Scenarios on Children's and Adults' Speech Perception, Listening Comprehension, and Visual-Verbal Short-Term Memory. Int J Environ Res Public Health 2022; 19:15998. PMID: 36498071; PMCID: PMC9738007; DOI: 10.3390/ijerph192315998.
Abstract
Most studies investigating the effects of environmental noise on children's cognitive performance examine the impact of monaural noise (i.e., same signal to both ears), oversimplifying multiple aspects of binaural hearing (i.e., adequately reproducing interaural differences and spatial information). In the current study, the effects of a realistic classroom-noise scenario presented either monaurally or binaurally on tasks requiring processing of auditory and visually presented information were analyzed in children and adults. In Experiment 1, across age groups, word identification was more impaired by monaural than by binaural classroom noise, whereas listening comprehension (acting out oral instructions) was equally impaired in both noise conditions. In both tasks, children were more affected than adults. Disturbance ratings were unrelated to the actual performance decrements. Experiment 2 revealed detrimental effects of classroom noise on short-term memory (serial recall of words presented pictorially), which did not differ with age or presentation mode (monaural vs. binaural). The present results add to the evidence for detrimental effects of noise on speech perception and cognitive performance, and their interactions with age, using a realistic classroom-noise scenario. Binaural simulations of real-world auditory environments can improve the external validity of studies on the impact of noise on children's and adults' learning.
5. Sensitivity to Envelope Interaural Time Differences: Modeling Auditory Modulation Filtering. J Assoc Res Otolaryngol 2022; 23:35-57. PMID: 34741225; PMCID: PMC8782955; DOI: 10.1007/s10162-021-00816-0.
Abstract
For amplitude-modulated sound, the envelope interaural time difference (ITD_ENV) is a potential cue for sound-source location. ITD_ENV is encoded in the lateral superior olive (LSO) of the auditory brainstem by excitatory-inhibitory (EI) neurons receiving ipsilateral excitation and contralateral inhibition. Between human listeners, sensitivity to ITD_ENV varies considerably, but ultimately decreases with increasing stimulus carrier frequency, and decreases more strongly with increasing modulation rate. The mechanisms underlying this variation in behavioral sensitivity remain unclear. Here we computationally model the variable sensitivity across human listeners and modulation rates (32-800 Hz) as a decreasing range of membrane frequency responses in LSO neurons, while phenomenologically modeling the decrease in ITD_ENV sensitivity with increasing carrier frequency (4-10 kHz) using arbitrarily fewer neurons, consistently across populations. Transposed tones stimulate a bilateral auditory-periphery model, driving model EI neurons in which electrical membrane impedance filters the frequency content of inputs driven by amplitude-modulated sound, producing modulation filtering. Just-noticeable differences in ITD_ENV, calculated from Fisher information in spike-rate functions of ITD_ENV for model EI neuronal populations that distinctly reflect the LSO range in membrane frequency responses, collectively reproduce the largest variation in ITD_ENV sensitivity across human listeners. These slow-to-fast model populations each generally match the best human ITD_ENV sensitivity at a progressively higher modulation rate, via membrane-filtering and spike-generation properties that produce realistically less-than-Poisson variance. Non-resonant model EI neurons are also sensitive to interaural intensity differences. With peripheral filters centered between the carrier frequency and a modulation sideband, fast resonant model EI neurons extend ITD_ENV sensitivity above 500-Hz modulation.
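The Fisher-information step in this modeling approach can be illustrated with a small sketch. For a Poisson-spiking neuron with rate function r(ITD), the Fisher information is I(ITD) = r'(ITD)^2 / r(ITD), and the just-noticeable difference scales as 1/sqrt(I). The sigmoidal rate function and all parameter values below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fisher_jnd(rate_fn, itd_us, d_itd=1.0):
    """JND (in microseconds) from Fisher information of a Poisson rate function.

    For Poisson spiking, I(itd) = r'(itd)**2 / r(itd), and JND ~ 1/sqrt(I).
    The derivative is estimated with a central difference.
    """
    r = rate_fn(itd_us)
    dr = (rate_fn(itd_us + d_itd) - rate_fn(itd_us - d_itd)) / (2.0 * d_itd)
    return 1.0 / np.sqrt(dr**2 / r)

# Illustrative sigmoidal spike-rate function (spikes/s) of envelope ITD (us)
def rate(itd_us):
    return 20.0 + 80.0 / (1.0 + np.exp(-itd_us / 200.0))

# Discrimination is finest (smallest JND) where the rate function is steepest
jnd_steep = fisher_jnd(rate, 0.0)     # near the steepest slope
jnd_flat = fisher_jnd(rate, 1500.0)   # near saturation
print(jnd_steep < jnd_flat)           # True
```

The same logic extends to populations: summing Fisher information across model neurons before taking 1/sqrt(I) yields the collective JND the abstract describes.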
6. Interaural Place-of-Stimulation Mismatch Estimates Using CT Scans and Binaural Perception, But Not Pitch, Are Consistent in Cochlear-Implant Users. J Neurosci 2021; 41:10161-10178. PMID: 34725189; PMCID: PMC8660045; DOI: 10.1523/jneurosci.0359-21.2021.
Abstract
Bilateral cochlear implants (BI-CIs) or a CI for single-sided deafness (SSD-CI; one normally functioning acoustic ear) can partially restore spatial-hearing abilities, including sound localization and speech understanding in noise. For these populations, however, interaural place-of-stimulation mismatch can occur and thus diminish binaural sensitivity that relies on interaurally frequency-matched neurons. This study examined whether plasticity (reorganization of central neural pathways over time) can compensate for peripheral interaural place mismatch. We hypothesized differential plasticity across two systems: none for binaural processing but adaptation for pitch perception toward frequencies delivered by the specific electrodes. Interaural place mismatch was evaluated in 19 BI-CI and 23 SSD-CI human subjects (both sexes) using binaural processing (interaural-time-difference discrimination with simultaneous bilateral stimulation), pitch perception (pitch ranking for single electrodes or acoustic tones with sequential bilateral stimulation), and physical electrode-location estimates from computed-tomography (CT) scans. On average, CT scans revealed relatively little BI-CI interaural place mismatch (26° insertion-angle mismatch) but a relatively large SSD-CI mismatch, particularly at low frequencies (166° for an electrode tuned to 300 Hz, decreasing to 14° at 7000 Hz). For BI-CI subjects, the three metrics were in agreement because there was little mismatch. For SSD-CI subjects, binaural and CT measurements were in agreement, suggesting little binaural-system plasticity induced by mismatch. The pitch measurements disagreed with binaural and CT measurements, suggesting place-pitch plasticity or a procedural bias. These results suggest that reducing interaural place mismatch and potentially improving binaural processing by reprogramming the CI frequency allocation would be better done using CT-scan than pitch information.
SIGNIFICANCE STATEMENT Electrode-array placement for cochlear implants (bionic prostheses that partially restore hearing) does not explicitly align neural representations of frequency information. The resulting interaural place-of-stimulation mismatch can diminish spatial-hearing abilities. In this study, adults with two cochlear implants showed reasonable interaural alignment, whereas those with one cochlear implant but normal hearing in the other ear often showed mismatch. In cases of mismatch, binaural sensitivity was best when the same cochlear locations were stimulated in both ears, suggesting that binaural brainstem pathways do not experience plasticity to compensate for mismatch. In contrast, interaurally pitch-matched electrodes deviated from cochlear-location estimates and did not optimize binaural sensitivity. Clinical correction of interaural place mismatch using binaural or computed-tomography (but not pitch) information may improve spatial-hearing benefits.
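The place-frequency mapping that underlies estimates like "166° for an electrode tuned to 300 Hz" is commonly approximated by the Greenwood function. The sketch below uses the standard human constants (A = 165.4, a = 2.1, k = 0.88) with position expressed as a fraction of cochlear length; the paper's own angle-to-place conversion may differ, so this is only a generic illustration.

```python
def greenwood_hz(x):
    """Greenwood place-frequency map for the human cochlea.

    x is the fractional distance from apex (0.0) to base (1.0);
    A = 165.4, a = 2.1, k = 0.88 are the standard human constants.
    """
    return 165.4 * (10 ** (2.1 * x) - 0.88)

# Apical places map to low frequencies, basal places to high frequencies
print(round(greenwood_hz(0.2)))   # a fairly apical place: a few hundred Hz
print(round(greenwood_hz(1.0)))   # the base: roughly 20 kHz
```

Because the map is logarithmic, a fixed insertion-angle error corresponds to a much larger frequency mismatch (in Hz) at the base than at the apex.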
7.
Abstract
OBJECTIVES Binaural pitch fusion is the perceptual integration of stimuli that evoke different pitches between the ears into a single auditory image. Adults who use hearing aids (HAs) or cochlear implants (CIs) often experience abnormally broad binaural pitch fusion, such that sounds differing in pitch by as much as 3 to 4 octaves are fused across ears, leading to spectral averaging and speech perception interference. The main goal of this study was to measure binaural pitch fusion in children with different hearing device combinations and to compare results across groups and with adults. A second goal was to examine the relationship of binaural pitch fusion to interaural pitch differences or pitch match range, a measure of sequential pitch discriminability. DESIGN Binaural pitch fusion was measured in children between the ages of 6.1 and 11.1 years with bilateral HAs (n = 9), bimodal CI (n = 10), or bilateral CIs (n = 17), as well as in normal-hearing (NH) children (n = 21). Depending on device combination, stimuli were pure tones or electric pulse trains delivered to individual electrodes. Fusion ranges were measured using simultaneous, dichotic presentation of reference and comparison stimuli in opposite ears, varying the comparison stimulus to find the range that fused with the reference stimulus. Interaural pitch match functions were measured using sequential presentation of reference and comparison stimuli, varying the comparison stimulus to find the pitch match center and range. RESULTS Children with bilateral HAs had significantly broader binaural pitch fusion than children with NH, bimodal CI, or bilateral CIs. Children with NH and bilateral HAs, but not children with bimodal or bilateral CIs, had significantly broader fusion than adults with the same hearing status and device configuration. In children with bilateral CIs, fusion range was correlated with several variables that were also correlated with each other: pure-tone average in the second implanted ear before CI, and duration of prior bilateral HA, bimodal CI, or bilateral CI experience. No relationship was observed between fusion range and pitch match differences or range. CONCLUSIONS The findings suggest that binaural pitch fusion is still developing in this age range and depends on hearing device combination but not on interaural pitch differences or discriminability.
8. Physiological Diversity Influences Detection of Stimulus Envelope and Fine Structure in Neurons of the Medial Superior Olive. J Neurosci 2021; 41:6234-6245. PMID: 34083255; PMCID: PMC8287997; DOI: 10.1523/jneurosci.2354-20.2021.
Abstract
The neurons of the medial superior olive (MSO) of mammals extract azimuthal information from the delays between sounds reaching the two ears [interaural time differences (ITDs)]. Traditionally, all models of sound localization have assumed that MSO neurons represent a single population of cells with specialized and homogeneous intrinsic and synaptic properties that enable the detection of synaptic coincidence on a timescale of tens to hundreds of microseconds. Here, using patch-clamp recordings from large populations of anatomically labeled neurons in brainstem slices from male and female Mongolian gerbils (Meriones unguiculatus), we show that MSO neurons are far more physiologically diverse than previously appreciated, with properties that depend regionally on cell position along the topographic map of frequency. Despite exhibiting a similar morphology, neurons in the MSO exhibit subthreshold oscillations of differing magnitudes that drive action potentials at rates between 100 and 800 Hz. These oscillations are driven primarily by voltage-gated sodium channels and are distinct from resonant properties derived from other active membrane properties. We show that graded differences in these and other physiological properties across the MSO neuron population enable the MSO to duplex the encoding of ITD information in both fast, submillisecond time-varying signals and slower envelopes.
SIGNIFICANCE STATEMENT Neurons in the medial superior olive (MSO) encode sound localization cues by detecting microsecond differences in the arrival times of inputs from the left and right ears, and it has been assumed that this computation is made possible by highly stereotyped structural and physiological specializations. Here, using a large sample (>400 neurons), we report that MSO neurons show a strikingly large continuum of functional properties despite exhibiting similar morphologies. We demonstrate that subthreshold oscillations mediated by voltage-gated Na+ channels play a key role in conferring graded differences in firing properties. This functional diversity likely confers capabilities of processing both fast, submillisecond-scale synaptic activity (acoustic "fine structure") and slow-rising envelope information that is found in amplitude-modulated sounds and speech patterns.
9. Auditory Brainstem Models: Adapting Cochlear Nuclei Improve Spatial Encoding by the Medial Superior Olive in Reverberation. J Assoc Res Otolaryngol 2021; 22:289-318. PMID: 33861395; DOI: 10.1007/s10162-021-00797-0.
Abstract
Listeners typically perceive a sound as originating from the direction of its source, even as direct sound is followed milliseconds later by reflected sound from multiple different directions. Early-arriving sound is emphasised in the ascending auditory pathway, including the medial superior olive (MSO) where binaural neurons encode the interaural-time-difference (ITD) cue for spatial location. Perceptually, weighting of ITD conveyed during rising sound energy is stronger at 600 Hz than at 200 Hz, consistent with the minimum stimulus rate for binaural adaptation, and with the longer reverberation times at 600 Hz, compared with 200 Hz, in many natural outdoor environments. Here, we computationally explore the combined efficacy of adaptation prior to the binaural encoding of ITD cues, and excitatory binaural coincidence detection within MSO neurons, in emphasising ITDs conveyed in early-arriving sound. With excitatory inputs from adapting, nonlinear model spherical bushy cells (SBCs) of the bilateral cochlear nuclei, a nonlinear model MSO neuron with low-threshold potassium channels reproduces the rate-dependent emphasis of rising vs. peak sound energy in ITD encoding; adaptation is equally effective in the model MSO. Maintaining adaptation in model SBCs, and adjusting membrane speed in model MSO neurons, 'left' and 'right' populations of computationally efficient, linear model SBCs and MSO neurons reproduce this stronger weighting of ITD conveyed during rising sound energy at 600 Hz compared to 200 Hz. This hemispheric population model demonstrates a link between strong weighting of spatial information during rising sound energy, and correct unambiguous lateralisation of a speech source in reverberation.
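The excitatory binaural coincidence detection at the core of such MSO models can be caricatured with a toy sketch: each ear contributes a phase-locked spike train, and the detector responds only when left and right spikes arrive within a short window of each other. The jitter, window, and delay values below are illustrative assumptions, not parameters from the model described above.

```python
import numpy as np

def coincidence_rate(itd_us, internal_delay_us=0.0, window_us=100.0,
                     n_spikes=2000, jitter_us=50.0, seed=0):
    """Fraction of input spike pairs arriving within the coincidence window.

    Right-ear spike times are modeled as left-ear times shifted by the
    stimulus ITD plus the cell's internal delay, with Gaussian timing jitter;
    only the pairwise timing offsets matter for coincidence counting.
    """
    rng = np.random.default_rng(seed)
    offset = itd_us + internal_delay_us + rng.normal(0.0, jitter_us, n_spikes)
    return float(np.mean(np.abs(offset) < window_us))

# The detector responds maximally when the ITD cancels its internal delay
matched = coincidence_rate(itd_us=0.0)
mismatched = coincidence_rate(itd_us=300.0)
print(matched > mismatched)  # True: this cell is tuned to ITDs near zero
```

The adaptation the paper adds upstream (in the model bushy cells) would scale the effective input rates over time, which is what lets early-arriving sound dominate the ITD code; that stage is omitted here for brevity.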
10. Impaired Binaural Hearing in Adults: A Selected Review of the Literature. Front Neurosci 2021; 15:610957. PMID: 33815037; PMCID: PMC8017161; DOI: 10.3389/fnins.2021.610957.
Abstract
Despite over 100 years of study, there are still many fundamental questions about binaural hearing that remain unanswered, including how impairments of binaural function are related to the mechanisms of binaural hearing. This review focuses on a number of studies that are fundamental to understanding what is known about the effects of peripheral hearing loss, aging, traumatic brain injury, strokes, brain tumors, and multiple sclerosis (MS) on binaural function. The literature reviewed makes clear that while each of these conditions has the potential to impair the binaural system, the specific abilities of a given patient cannot be known without performing multiple behavioral and/or neurophysiological measurements of binaural sensitivity. Future work in this area has the potential to bring awareness of binaural dysfunction to patients and clinicians as well as a deeper understanding of the mechanisms of binaural hearing, but it will require the integration of clinical research with animal and computational modeling approaches.
11. The gap prepulse inhibition of the acoustic startle (GPIAS) paradigm to assess auditory temporal processing: Monaural versus binaural presentation. Psychophysiology 2020; 58:e13755. PMID: 33355931; DOI: 10.1111/psyp.13755.
Abstract
The Gap Prepulse Inhibition of the Acoustic Startle Reflex (GPIAS) is a paradigm used to assess auditory temporal processing in both animals and humans. It consists of the presentation of a silent gap embedded in noise a few milliseconds before a startle sound. The silent gap produces inhibition of the startle reflex, a phenomenon called gap-prepulse inhibition (GPI). This paradigm is also used to detect tinnitus in animal models: a lack of inhibition by the silent gaps is taken to indicate tinnitus "filling in" the gaps. The current study aims to improve the GPIAS technique by comparing the GPI produced by monaural versus binaural silent gaps in 29 normal-hearing subjects. Two gap durations (5 or 50 ms) were used, each embedded in two different frequency backgrounds (centered around 500 Hz or 4 kHz); both the low- and high-frequency narrowband noises had a bandwidth of half an octave. Overall, the startle magnitude was greater for the binaural than for the monaural presentation, which might reflect binaural loudness summation. In addition, the GPI was similar between the monaural and binaural presentations for the high-frequency background noise, but was greater for the binaural than for the monaural presentation with the low-frequency background noise. These findings suggest that monaural GPIAS might be better suited to detecting tinnitus than the binaural presentation.
12.
Abstract
Auditory frisson is the experience of a feeling of cold or shivering related to sound in the absence of a physical cold stimulus. Multiple examples of frisson-inducing sounds have been reported, but the mechanism of auditory frisson remains elusive. Typical frisson-inducing sounds may contain a looming effect, in which a sound appears to approach the listener's peripersonal space. Previous studies on sound in peripersonal space have provided objective measurements of sound-induced effects, but few have investigated the subjective experience of frisson-inducing sounds. Here we explored whether subjective feelings of frisson can be produced by moving a noise stimulus (white noise, rolling-beads noise, or the frictional noise of rubbing a plastic bag) around a listener's head. Our results demonstrate that sound-induced frisson is experienced more strongly when auditory stimuli are rotated around the head (binaural moving sounds) than when they are not (monaural static sounds), regardless of the source of the noise. Pearson's correlation analysis showed that several acoustic features of the auditory stimuli, such as the variance of the interaural level difference (ILD), loudness, and sharpness, were correlated with the magnitude of subjective frisson. We also observed that subjective feelings of frisson were stronger for a moving musical sound than for a static musical sound.
13. Re-weighting of Sound Localization Cues by Audiovisual Training. Front Neurosci 2019; 13:1164. PMID: 31802997; PMCID: PMC6873890; DOI: 10.3389/fnins.2019.01164.
Abstract
Sound localization requires the integration in the brain of auditory spatial cues generated by interactions with the external ears, head and body. Perceptual learning studies have shown that the relative weighting of these cues can change in a context-dependent fashion if their relative reliability is altered. One factor that may influence this process is vision, which tends to dominate localization judgments when both modalities are present and induces a recalibration of auditory space if they become misaligned. It is not known, however, whether vision can alter the weighting of individual auditory localization cues. Using virtual acoustic space stimuli, we measured changes in subjects' sound localization biases and binaural localization cue weights after ∼50 min of training on audiovisual tasks in which visual stimuli were either informative or not about the location of broadband sounds. Four different spatial configurations were used in which we varied the relative reliability of the binaural cues: interaural time differences (ITDs) and frequency-dependent interaural level differences (ILDs). In most subjects and experiments, ILDs were weighted more highly than ITDs before training. When visual cues were spatially uninformative, some subjects showed a reduction in auditory localization bias and the relative weighting of ILDs increased after training with congruent binaural cues. ILDs were also upweighted if they were paired with spatially-congruent visual cues, and the largest group-level improvements in sound localization accuracy occurred when both binaural cues were matched to visual stimuli. These data suggest that binaural cue reweighting reflects baseline differences in the relative weights of ILDs and ITDs, but is also shaped by the availability of congruent visual stimuli. Training subjects with consistently misaligned binaural and visual cues produced the ventriloquism aftereffect, i.e., a corresponding shift in auditory localization bias, without affecting the inter-subject variability in sound localization judgments or their binaural cue weights. Our results show that the relative weighting of different auditory localization cues can be changed by training in ways that depend on their reliability as well as the availability of visual spatial information, with the largest improvements in sound localization likely to result from training with fully congruent audiovisual information.
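A standard way to formalize the reliability-based cue weighting discussed above is inverse-variance (maximum-likelihood) combination of the ITD- and ILD-based location estimates. The sketch below illustrates that scheme; the azimuth values and noise levels are illustrative numbers, not fitted weights from the study.

```python
def combine_cues(itd_deg, itd_sigma, ild_deg, ild_sigma):
    """Inverse-variance weighted combination of two azimuth estimates (degrees).

    Each cue's weight is proportional to 1/sigma**2, so the more
    reliable (less variable) cue dominates the combined estimate.
    """
    w_itd = 1.0 / itd_sigma**2
    w_ild = 1.0 / ild_sigma**2
    return (w_itd * itd_deg + w_ild * ild_deg) / (w_itd + w_ild)

# With a more reliable ILD estimate (smaller sigma), the combined azimuth
# lands much closer to the ILD estimate than to the ITD estimate
az = combine_cues(itd_deg=-10.0, itd_sigma=8.0, ild_deg=5.0, ild_sigma=2.0)
print(abs(az - 5.0) < abs(az + 10.0))  # True
```

Training that changes a cue's effective reliability (for instance, by pairing it with congruent visual feedback) would, under this model, shift its weight and hence the combined localization estimate.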
14. Audiovisual Interactions in Stereo Sound Localization for Individuals With Unilateral Hearing Loss. Trends Hear 2019; 23:2331216519846232. PMID: 31035906; PMCID: PMC6572873; DOI: 10.1177/2331216519846232.
Abstract
This study investigated the effects of unilateral hearing loss (UHL), of either conductive or sensorineural origin, on stereo sound localization and related visual bias in listeners with normal hearing, short-term (acute) UHL, and chronic UHL. Time-delay-based stereophony was used to isolate interaural-time-difference cues for sound source localization in free field. Listeners with acute moderate (<40 dB for tens of minutes) and chronic severe (>50 dB for more than 10 years) UHL showed poor localization and compressed auditory space that favored the intact ear. Listeners with chronic moderate (<50 dB for more than 12 years) UHL performed near normal. These results show that the auditory spatial mechanisms that allow stereo localization become less sensitive to moderate UHL in the long term. Presenting LED flashes at either the same or a different location as the sound source elicited visual bias in all groups but to different degrees. Hearing loss led to increased visual bias, especially on the impaired side, for the severe and acute UHL listeners, suggesting that vision plays a compensatory role in restoring perceptual spatial symmetry.
15
Modeling Sluggishness in Binaural Unmasking of Speech for Maskers With Time-Varying Interaural Phase Differences. Trends Hear 2019; 22:2331216517753547. [PMID: 29338577 PMCID: PMC5774735 DOI: 10.1177/2331216517753547]
Abstract
In studies investigating binaural processing in human listeners, relatively long and task-dependent time constants of a binaural window ranging from 10 ms to 250 ms have been observed. Such time constants are often thought to reflect “binaural sluggishness.” In this study, the effect of binaural sluggishness on binaural unmasking of speech in stationary speech-shaped noise is investigated in 10 listeners with normal hearing. In order to design a masking signal with temporally varying binaural cues, the interaural phase difference of the noise was modulated sinusoidally with frequencies ranging from 0.25 Hz to 64 Hz. The lowest, that is the best, speech reception thresholds (SRTs) were observed for the lowest modulation frequency. SRTs increased with increasing modulation frequency up to 4 Hz. For higher modulation frequencies, SRTs remained constant in the range of 1 dB to 1.5 dB below the SRT determined in the diotic situation. The outcome of the experiment was simulated using a short-term binaural speech intelligibility model, which combines an equalization–cancellation (EC) model with the speech intelligibility index. This model segments the incoming signal into 23.2-ms time frames in order to predict release from masking in modulated noises. In order to predict the results from this study, the model required a further time constant applied to the EC mechanism representing binaural sluggishness. The best agreement with perceptual data was achieved using a temporal window of 200 ms in the EC mechanism.
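The masker manipulation described above can be sketched in code. Below is a minimal illustration (using white noise rather than the speech-shaped noise employed in the study; the sample rate and function names are our own choices) of imposing a sinusoidally modulated interaural phase difference via the analytic signal:

```python
import numpy as np
from scipy.signal import hilbert

FS = 44_100  # sample rate in Hz; an arbitrary illustrative choice


def ipd_modulated_noise(duration=1.0, f_mod=4.0, max_ipd=np.pi, seed=0):
    """Dichotic noise whose interaural phase difference varies sinusoidally.

    The analytic signal of a common noise carrier is phase-rotated by
    +/- half the instantaneous IPD in each ear, so the IPD between the
    ears sweeps sinusoidally at f_mod (Hz) between -max_ipd and +max_ipd.
    """
    rng = np.random.default_rng(seed)
    n = int(duration * FS)
    t = np.arange(n) / FS
    analytic = hilbert(rng.standard_normal(n))        # complex analytic noise
    ipd = max_ipd * np.sin(2 * np.pi * f_mod * t)     # instantaneous IPD
    left = np.real(analytic * np.exp(+1j * ipd / 2))
    right = np.real(analytic * np.exp(-1j * ipd / 2))
    return left, right
```

With `max_ipd=0` the two ear signals are identical (the diotic reference condition); increasing `f_mod` produces the faster binaural fluctuations that the study used to probe sluggishness.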
16
The Calyx of Held: A Hypothesis on the Need for Reliable Timing in an Intensity-Difference Encoder. Neuron 2018; 100:534-549. [PMID: 30408442 PMCID: PMC6263157 DOI: 10.1016/j.neuron.2018.10.026]
Abstract
The calyx of Held is the preeminent model for the study of synaptic function in the mammalian CNS. Despite much work on the synapse and associated circuit, its role in hearing remains enigmatic. We propose that the calyx is one of the key adaptations that enables an animal to lateralize transient sounds. The calyx is part of a binaural circuit that is biased toward high sound frequencies and is sensitive to intensity differences between the ears. This circuit also shows marked sensitivity to interaural time differences, but only for brief sound transients ("clicks"). In a natural environment, such transients are rare except as adventitious sounds generated by other animals moving at close range. We argue that the calyx, and associated temporal specializations, evolved to enable spatial localization of sound transients, through a neural code congruent with the circuit's sensitivity to interaural intensity differences, thereby conferring a key benefit to survival.
17
Bimodal Hearing in Individuals with Severe-to-Profound Hearing Loss: Benefits, Challenges, and Management. Semin Hear 2018; 39:405-413. [PMID: 30374211 DOI: 10.1055/s-0038-1670706]
Abstract
Binaural hearing offers numerous advantages over monaural hearing. While bilateral implants are a successful treatment option for some patients, many individuals choose to achieve binaural hearing by using a cochlear implant with a contralateral hearing aid. Compared with monaural hearing, benefits of bimodal hearing include improved speech perception in quiet and in noise, improved localization, and more natural sound quality. Despite the advantages, there exist disadvantages to bimodal hearing, primarily related to binaural integration. Management of these devices can be challenging in that the hearing aid and cochlear implant may be managed by different clinicians. When fitting devices, strategies are recommended to optimize the integration of input from both devices. In managing bimodal devices, recommended outcome measures include those that would reflect bimodal benefit, such as speech understanding in noise and spatial sound quality perception.
18
Across Species "Natural Ablation" Reveals the Brainstem Source of a Noninvasive Biomarker of Binaural Hearing. J Neurosci 2018; 38:8563-8573. [PMID: 30126974 DOI: 10.1523/jneurosci.1211-18.2018]
Abstract
The binaural interaction component (BIC) of the auditory brainstem response is a noninvasive electroencephalographic signature of neural processing of binaural sounds. Despite its potential as a clinical biomarker, the neural structures and mechanism that generate the BIC are not known. We explore here the hypothesis that the BIC emerges from excitatory-inhibitory interactions in auditory brainstem neurons. We measured the BIC in response to click stimuli while varying interaural time differences (ITDs) in subjects of either sex from five animal species. Species had head sizes spanning a 3.5-fold range and correspondingly large variations in the sizes of the auditory brainstem nuclei known to process binaural sounds [the medial superior olive (MSO) and the lateral superior olive (LSO)]. The BIC was reliably elicited in all species, including those that have small or nonexistent MSOs. In addition, the range of ITDs where the BIC was elicited was independent of animal species, suggesting that the BIC is not a reflection of the processing of ITDs per se. Finally, we provide a model of the amplitude and latency of the BIC peak, which is based on excitatory-inhibitory synaptic interactions, without assuming any specific arrangement of delay lines. Our results show that the BIC is preserved across species ranging from mice to humans. We argue that this is the result of generic excitatory-inhibitory synaptic interactions at the level of the LSO, and thus best seen as reflecting the integration of binaural inputs as opposed to their spatial properties. SIGNIFICANCE STATEMENT: Noninvasive electrophysiological measures of sensory system activity are critical for the objective clinical diagnosis of human sensory processing deficits. The binaural component of sound-evoked auditory brainstem responses is one such measure of binaural auditory coding fidelity in the early stages of the auditory system. Yet, the precise neurons that lead to this evoked potential are not fully understood.
This paper provides a comparative study of this potential in different mammals and shows that it is preserved across species, from mice to humans, despite large variations in morphology and neuroanatomy. Our results confirm its relevance to the assessment of binaural hearing integrity in humans and demonstrate how it can be used to bridge the gap between rodent models and humans.
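The BIC discussed above is conventionally obtained as a difference waveform between the binaural response and the sum of the two monaural responses; a minimal sketch of that computation (array contents below are synthetic; the real measurement uses averaged click-evoked ABRs):

```python
import numpy as np


def binaural_interaction_component(abr_binaural, abr_left, abr_right):
    """BIC(t) = binaural ABR - (left monaural ABR + right monaural ABR).

    A purely linear brainstem would yield BIC == 0 at every time point;
    a nonzero (typically negative) deflection indicates binaural
    interaction somewhere along the ascending pathway.
    """
    return np.asarray(abr_binaural) - (np.asarray(abr_left) + np.asarray(abr_right))


# Synthetic illustration: if the binaural response is exactly the sum of
# the two monaural responses, the BIC is flat.
t = np.linspace(0.0, 0.01, 100)
left = np.sin(2 * np.pi * 500 * t)
right = 0.8 * np.sin(2 * np.pi * 500 * t)
flat_bic = binaural_interaction_component(left + right, left, right)
```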
19
Bimodal benefit for cochlear implant listeners with different grades of hearing loss in the opposite ear. Acta Otolaryngol 2018; 138:713-721. [PMID: 29553839 DOI: 10.1080/00016489.2018.1444281]
Abstract
OBJECTIVE To determine speech perception in quiet and noise of adult cochlear implant listeners retaining a hearing aid contralaterally. Second, to investigate the influence of contralateral hearing thresholds and speech perception on bimodal hearing. PATIENTS AND METHODS Sentence recognition with hearing aid alone, cochlear implant alone and bimodally at 6 months after cochlear implantation were assessed in 148 postlingually deafened adults. Data were analyzed for bimodal summation using measures of speech perception in quiet and in noise. RESULTS Most of the subjects showed improved sentence recognition in quiet and in noise in the bimodal condition compared to the hearing aid-only or cochlear implant-only mode. The large variability of bimodal benefit in quiet can be partially explained by the degree of pure tone loss. Also, subjects with better hearing on the acoustic side experience significant benefit from the additional electrical input. CONCLUSIONS Bimodal summation shows different characteristics in quiet and noise. Bimodal benefit in quiet depends on hearing thresholds at higher frequencies as well as in the lower- and middle-frequency ranges. For the bimodal benefit in noise, no correlation with hearing threshold in any frequency range was found.
20
Principal cells of the brainstem's interaural sound level detector are temporal differentiators rather than integrators. eLife 2018; 7:33854. [PMID: 29901438 PMCID: PMC6063729 DOI: 10.7554/elife.33854]
Abstract
The brainstem’s lateral superior olive (LSO) is thought to be crucial for localizing high-frequency sounds by coding interaural sound level differences (ILD). Its neurons weigh contralateral inhibition against ipsilateral excitation, making their firing rate a function of the azimuthal position of a sound source. Since the very first in vivo recordings, LSO principal neurons have been reported to give sustained and temporally integrating ‘chopper’ responses to sustained sounds. Neurons with transient responses were observed but largely ignored and even considered a sign of pathology. Using the Mongolian gerbil as a model system, we have obtained the first in vivo patch clamp recordings from labeled LSO neurons and find that principal LSO neurons, the most numerous projection neurons of this nucleus, only respond at sound onset and show fast membrane features suggesting an importance for timing. These results provide a new framework to interpret previously puzzling features of this circuit.
21
Abstract
OBJECTIVE Binaural cues such as interaural level differences (ILDs) are used to organise auditory perception and to segregate sound sources in complex acoustical environments. In bilaterally fitted hearing aids, dynamic-range compression operating independently at each ear potentially alters these ILDs, thus distorting binaural perception and sound source segregation. DESIGN A binaurally linked, model-based, fast-acting dynamic compression algorithm designed to approximate the normal-hearing basilar membrane (BM) input-output function in hearing-impaired listeners is suggested. A multi-center evaluation in comparison with an alternative binaural and two bilateral fittings was performed to assess the effect of binaural synchronisation on (a) speech intelligibility and (b) perceived quality in realistic conditions. STUDY SAMPLE Thirty and 12 hearing-impaired (HI) listeners were aided individually with the algorithms for the two experimental parts, respectively. RESULTS A small preference for the proposed model-based algorithm was found in the direct quality comparison. However, no benefit of binaural synchronisation regarding speech intelligibility was found, suggesting a dominant role of the better ear in all experimental conditions. CONCLUSION The suggested binaural synchronisation of compression algorithms had a limited effect on the tested outcome measures; however, linking could be situationally beneficial for preserving a natural binaural perception of the acoustical environment.
22
Local and Global Spatial Organization of Interaural Level Difference and Frequency Preferences in Auditory Cortex. Cereb Cortex 2018; 28:350-369. [PMID: 29136122 PMCID: PMC5991210 DOI: 10.1093/cercor/bhx295]
Abstract
Despite decades of microelectrode recordings, fundamental questions remain about how auditory cortex represents sound-source location. Here, we used in vivo 2-photon calcium imaging to measure the sensitivity of layer II/III neurons in mouse primary auditory cortex (A1) to interaural level differences (ILDs), the principal spatial cue in this species. Although most ILD-sensitive neurons preferred ILDs favoring the contralateral ear, neurons with either midline or ipsilateral preferences were also present. An opponent-channel decoder accurately classified ILDs using the difference in responses between populations of neurons that preferred contralateral-ear-greater and ipsilateral-ear-greater stimuli. We also examined the spatial organization of binaural tuning properties across the imaged neurons with unprecedented resolution. Neurons driven exclusively by contralateral ear stimuli or by binaural stimulation occasionally formed local clusters, but their binaural categories and ILD preferences were not spatially organized on a more global scale. In contrast, the sound frequency preferences of most neurons within local cortical regions fell within a restricted frequency range, and a tonotopic gradient was observed across the cortical surface of individual mice. These results indicate that the representation of ILDs in mouse A1 is comparable to that of most other mammalian species, and appears to lack systematic or consistent spatial order.
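The opponent-channel decoding idea used above can be illustrated with a toy model (the sigmoid tuning curves and slope parameter below are our illustrative assumptions, not quantities fitted to the imaging data):

```python
import numpy as np


def opponent_channel_readout(ild_db, slope=0.2):
    """Toy opponent-channel code for ILD (all parameters illustrative).

    Two populations with mirror-symmetric sigmoid tuning: one prefers
    contralateral-ear-greater ILDs, the other ipsilateral-ear-greater.
    The difference of the summed population responses is a monotonic
    function of ILD and can therefore be inverted to classify the ILD.
    """
    contra = 1.0 / (1.0 + np.exp(-slope * ild_db))  # contra-preferring channel
    ipsi = 1.0 / (1.0 + np.exp(slope * ild_db))     # ipsi-preferring channel
    return contra - ipsi


# The difference signal grows monotonically with ILD: its sign gives the
# hemifield and its magnitude the eccentricity.
ilds = np.array([-20.0, -5.0, 0.0, 5.0, 20.0])
readout = opponent_channel_readout(ilds)
```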
23
Neural Correlates of the Binaural Masking Level Difference in Human Frequency-Following Responses. J Assoc Res Otolaryngol 2017; 18:355-369. [PMID: 27896486 PMCID: PMC5352611 DOI: 10.1007/s10162-016-0603-7]
Abstract
The binaural masking level difference (BMLD) is an auditory phenomenon where binaural tone-in-noise detection is improved when the phase of either signal or noise is inverted in one of the ears (SπNo or SoNπ, respectively), relative to detection when signal and noise are in identical phase at each ear (SoNo). Processing related to BMLDs and interaural time differences has been confirmed in the auditory brainstem of non-human mammals; in the human auditory brainstem, phase-locked neural responses elicited by BMLD stimuli have not been systematically examined across signal-to-noise ratio. Behavioral and physiological testing was performed in three binaural stimulus conditions: SoNo, SπNo, and SoNπ. BMLDs at 500 Hz were obtained from 14 young, normal-hearing adults (ages 21-26). Physiological BMLDs used the frequency-following response (FFR), a scalp-recorded auditory evoked potential dependent on sustained phase-locked neural activity; FFR tone-in-noise detection thresholds were used to calculate physiological BMLDs. FFR BMLDs were significantly smaller (poorer) than behavioral BMLDs, and FFR BMLDs did not reflect a physiological release from masking, on average. Raw FFR amplitude showed substantial reductions in the SπNo condition relative to SoNo and SoNπ conditions, consistent with negative effects of phase summation from left and right ear FFRs. FFR amplitude differences between stimulus conditions (e.g., SoNo amplitude - SπNo amplitude) were significantly predictive of behavioral SπNo BMLDs; individuals with larger amplitude differences had larger (better) behavioral BMLDs and individuals with smaller amplitude differences had smaller (poorer) behavioral BMLDs. These data indicate a role for sustained phase-locked neural activity in BMLDs of humans and are the first to show predictive relationships between behavioral BMLDs and human brainstem responses.
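The three stimulus conditions above are defined purely by phase inversions at one ear; a minimal sketch of their construction (sample rate, duration, and unit amplitudes are arbitrary illustrative choices, and "Spi"/"Npi" stand in for Sπ/Nπ):

```python
import numpy as np

FS = 48_000  # sample rate in Hz; illustrative


def bmld_stimulus(condition, duration=0.5, f_signal=500.0, seed=0):
    """Build left/right waveforms for the three BMLD conditions.

    'SoNo'  : signal and noise in phase at both ears (diotic reference)
    'SpiNo' : signal phase-inverted in one ear, noise diotic
    'SoNpi' : noise phase-inverted in one ear, signal diotic
    """
    rng = np.random.default_rng(seed)
    n = int(duration * FS)
    t = np.arange(n) / FS
    signal = np.sin(2 * np.pi * f_signal * t)
    noise = rng.standard_normal(n)
    s_right = -signal if condition == "SpiNo" else signal
    n_right = -noise if condition == "SoNpi" else noise
    left = signal + noise
    right = s_right + n_right
    return left, right
```

Detection thresholds measured in the SπNo and SoNπ conditions, minus the SoNo threshold, give the behavioral BMLD.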
24
Cortical Measures of Binaural Processing Predict Spatial Release from Masking Performance. Front Hum Neurosci 2017; 11:124. [PMID: 28377706 PMCID: PMC5359282 DOI: 10.3389/fnhum.2017.00124]
Abstract
Binaural sensitivity is an important contributor to the ability to understand speech in adverse acoustical environments such as restaurants and other social gatherings. The ability to accurately report on binaural percepts is not commonly measured, however, as extensive training is required before reliable measures can be obtained. Here, we investigated the use of auditory evoked potentials (AEPs) as a rapid physiological indicator of detection of interaural phase differences (IPDs) by assessing cortical responses to 180° IPDs embedded in amplitude-modulated carrier tones. We predicted that decrements in encoding of IPDs would be evident in middle age, with further declines found with advancing age and hearing loss. Thus, participants in experiment #1 were young to middle-aged adults with relatively good hearing thresholds while participants in experiment #2 were older individuals with typical age-related hearing loss. Results revealed that while many of the participants in experiment #1 could encode IPDs in stimuli up to 1,000 Hz, few of the participants in experiment #2 had discernable responses to stimuli above 750 Hz. These results are consistent with previous studies that have found that aging and hearing loss impose frequency limits on the ability to encode interaural phase information present in the fine structure of auditory stimuli. We further hypothesized that AEP measures of binaural sensitivity would be predictive of participants' ability to benefit from spatial separation between sound sources, a phenomenon known as spatial release from masking (SRM) which depends upon binaural cues. Results indicate that not only were objective IPD measures well correlated with and predictive of behavioral SRM measures in both experiments, but that they provided much stronger predictive value than age or hearing loss. 
Overall, the present work shows that objective measures of the encoding of interaural phase information can be readily obtained using commonly available AEP equipment, allowing accurate determination of the degree to which binaural sensitivity has been reduced in individual listeners due to aging and/or hearing loss. Indeed, objective AEP measures of interaural phase encoding are better predictors of SRM in speech-in-speech conditions than age, hearing loss, or the two combined.
25
Sensitivity to Interaural Time Differences Conveyed in the Stimulus Envelope: Estimating Inputs of Binaural Neurons Through the Temporal Analysis of Spike Trains. J Assoc Res Otolaryngol 2016; 17:313-330. [PMID: 27294694 DOI: 10.1007/s10162-016-0573-9]
Abstract
Sound-source localization in the horizontal plane relies on detecting small differences in the timing and level of the sound at the two ears, including differences in the timing of the modulated envelopes of high-frequency sounds (envelope interaural time differences (ITDs)). We investigated responses of single neurons in the inferior colliculus (IC) to a wide range of envelope ITDs and stimulus envelope shapes. By a novel means of visualizing neural activity relative to different portions of the periodic stimulus envelope at each ear, we demonstrate the role of neuron-specific excitatory and inhibitory inputs in creating ITD sensitivity (or the lack of it) depending on the specific shape of the stimulus envelope. The underlying binaural brain circuitry and synaptic parameters were modeled individually for each neuron to account for neuron-specific activity patterns. The model explains the effects of envelope shapes on sensitivity to envelope ITDs observed in both normal-hearing listeners and in neural data, and has consequences for understanding how ITD information in stimulus envelopes might be maximized in users of bilateral cochlear implants-for whom ITDs conveyed in the stimulus envelope are the only ITD cues available.
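The envelope ITD cue discussed above can be estimated from a pair of ear signals by cross-correlating their Hilbert envelopes. This is only a sketch of the cue itself, not of the paper's spike-train analysis; the function name and sign convention are ours:

```python
import numpy as np
from scipy.signal import hilbert


def envelope_itd(left, right, fs):
    """Estimate the envelope ITD (in seconds) between two ear signals.

    The envelope of each high-frequency carrier is extracted with the
    Hilbert transform, and the lag that maximizes their cross-correlation
    is taken as the envelope ITD (positive when the right-ear envelope is
    delayed relative to the left).
    """
    env_l = np.abs(hilbert(left))
    env_r = np.abs(hilbert(right))
    env_l = env_l - env_l.mean()
    env_r = env_r - env_r.mean()
    xcorr = np.correlate(env_l, env_r, mode="full")
    lag = (len(left) - 1) - np.argmax(xcorr)  # samples of right-ear delay
    return lag / fs
```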
26
Abstract
This special issue contains a collection of 13 papers highlighting the collaborative research and engineering project entitled Advancing Binaural Cochlear Implant Technology (ABCIT), as well as research spin-offs from the project. In this introductory editorial, a brief history of the project is provided, alongside an overview of the studies.
27
Abstract
In a collaborative research project, several monaural and binaural noise reduction algorithms have been comprehensively evaluated. In this article, eight selected noise reduction algorithms were assessed using instrumental measures, with a focus on the instrumental evaluation of speech intelligibility. Four distinct, reverberant scenarios were created to reflect everyday listening situations: a stationary speech-shaped noise, a multitalker babble noise, a single interfering talker, and a realistic cafeteria noise. Three instrumental measures were employed to assess predicted speech intelligibility and predicted sound quality: the intelligibility-weighted signal-to-noise ratio, the short-time objective intelligibility measure, and the perceptual evaluation of speech quality. The results show substantial improvements in predicted speech intelligibility as well as sound quality for the proposed algorithms. The evaluated coherence-based noise reduction algorithm was able to provide improvements in predicted audio signal quality. For the tested single-channel noise reduction algorithm, improvements in intelligibility-weighted signal-to-noise ratio were observed in all but the nonstationary cafeteria ambient noise scenario. Binaural minimum variance distortionless response beamforming algorithms performed particularly well in all noise scenarios.
28
A Binaural CI Research Platform for Oticon Medical SP/XP Implants Enabling ITD/ILD and Variable Rate Processing. Trends Hear 2015; 19:2331216515618655. [PMID: 26721923 PMCID: PMC4771037 DOI: 10.1177/2331216515618655]
Abstract
We present the first portable, binaural, real-time research platform compatible with Oticon Medical SP and XP generation cochlear implants. The platform consists of (a) a pair of behind-the-ear devices, each containing front and rear calibrated microphones, (b) a four-channel USB analog-to-digital converter, (c) real-time PC-based sound processing software called the Master Hearing Aid, and (d) USB-connected hardware and output coils capable of driving two implants simultaneously. The platform is capable of processing signals from the four microphones simultaneously and producing synchronized binaural cochlear implant outputs that drive two (bilaterally implanted) SP or XP implants. Both audio signal preprocessing algorithms (such as binaural beamforming) and novel binaural stimulation strategies (within the implant limitations) can be programmed by researchers. When the whole research platform is combined with Oticon Medical SP implants, interaural electrode timing can be controlled on individual electrodes to within ±1 µs and interaural electrode energy differences can be controlled to within ±2%. Hence, this new platform is particularly well suited to performing experiments related to interaural time differences in combination with interaural level differences in real time. The platform also supports instantaneously variable stimulation rates and thereby enables investigations such as the effect of changing the stimulation rate on pitch perception. Because the processing can be changed on the fly, researchers can use this platform to study perceptual changes resulting from different processing strategies acutely.
29
Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs. Front Neural Circuits 2014; 8:85. [PMID: 25120437 PMCID: PMC4111082 DOI: 10.3389/fncir.2014.00085]
Abstract
Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3-10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32% but returned to baseline over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and varied forms of modulation: modulation lasted <2 s and, in different cells, excitability either decreased, increased or shifted in latency. Within cells, the modulatory effect of sound-by-sound electrical stimulation varied between different acoustic stimuli, including for different male calls, suggesting modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene.
30
Subcollicular projections to the auditory thalamus and collateral projections to the inferior colliculus. Front Neuroanat 2014; 8:70. [PMID: 25100950 PMCID: PMC4103406 DOI: 10.3389/fnana.2014.00070]
Abstract
Experiments in several species have identified direct projections to the medial geniculate nucleus (MG) from cells in subcollicular auditory nuclei. Moreover, many cochlear nucleus cells that project to the MG send collateral projections to the inferior colliculus (IC) (Schofield et al., 2014). We conducted three experiments to characterize projections to the MG from the superior olivary and the lateral lemniscal regions in guinea pigs. For experiment 1, we made large injections of retrograde tracer into the MG. Labeled cells were most numerous in the superior paraolivary nucleus, ventral nucleus of the trapezoid body, lateral superior olivary nucleus, ventral nucleus of the lateral lemniscus, ventrolateral tegmental nucleus, paralemniscal region and sagulum. Additional sources include other periolivary nuclei and the medial superior olivary nucleus. The projections are bilateral with an ipsilateral dominance (66%). For experiment 2, we injected tracer into individual MG subdivisions. The results show that the subcollicular projections terminate primarily in the medial MG, with the dorsal MG a secondary target. The variety of projecting nuclei suggests a range of functions, including monaural and binaural aspects of hearing. These direct projections could provide the thalamus with some of the earliest (i.e., fastest) information regarding acoustic stimuli. For experiment 3, we made large injections of different retrograde tracers into one MG and the homolateral IC to identify cells that project to both targets. Such cells were numerous and distributed across many of the nuclei listed above, mostly ipsilateral to the injections. The prominence of the collateral projections suggests that the same information is delivered to both the IC and the MG, or perhaps that a common signal is being delivered as a preparatory indicator or temporal reference point. The results are discussed from functional and evolutionary perspectives.
31
Relating age and hearing loss to monaural, bilateral, and binaural temporal sensitivity. Front Neurosci 2014; 8:172. [PMID: 25009458 PMCID: PMC4070059 DOI: 10.3389/fnins.2014.00172]
Abstract
Older listeners are more likely than younger listeners to have difficulties in making temporal discriminations among auditory stimuli presented to one or both ears. In addition, the performance of older listeners is often observed to be more variable than that of younger listeners. The aim of this work was to relate age and hearing loss to temporal processing ability in a group of younger and older listeners with a range of hearing thresholds. Seventy-eight listeners were tested on a set of three temporal discrimination tasks (monaural gap discrimination, bilateral gap discrimination, and binaural discrimination of interaural differences in time). To examine the role of temporal fine structure in these tasks, four types of brief stimuli were used: tone bursts, broad-frequency chirps with rising or falling frequency contours, and random-phase noise bursts. Between-subject group analyses conducted separately for each task revealed substantial increases in temporal thresholds for the older listeners across all three tasks, regardless of stimulus type, as well as significant correlations among the performance of individual listeners across most combinations of tasks and stimuli. Differences in performance were associated with the stimuli in the monaural and binaural tasks, but not the bilateral task. Temporal fine structure differences among the stimuli had the greatest impact on monaural thresholds. Threshold estimate values across all tasks and stimuli did not show any greater variability for the older listeners as compared to the younger listeners. A linear mixed model applied to the data suggested that age and hearing loss are independent factors responsible for temporal processing ability, thus supporting the increasingly accepted hypothesis that temporal processing can be impaired in older listeners compared with younger listeners who have similar hearing thresholds and/or amounts of hearing loss.
32.
Towards a unifying basis of auditory thresholds: binaural summation. J Assoc Res Otolaryngol 2014; 15:219-34. PMID: 24385083; PMCID: PMC3946133; DOI: 10.1007/s10162-013-0432-x.
Abstract
Absolute auditory threshold decreases with increasing sound duration, a phenomenon explainable by the assumptions that the sound evokes neural events whose probabilities of occurrence are proportional to the sound's amplitude raised to an exponent of about 3 and that a constant number of events are required for threshold (Heil and Neubauer, Proc Natl Acad Sci USA 100:6151-6156, 2003). Based on this probabilistic model and on the assumption of perfect binaural summation, an equation is derived here that provides an explicit expression of the binaural threshold as a function of the two monaural thresholds, irrespective of whether they are equal or unequal, and of the exponent in the model. For exponents >0, the predicted binaural advantage is largest when the two monaural thresholds are equal and decreases towards zero as the monaural threshold difference increases. This equation is tested and the exponent derived by comparing binaural thresholds with those predicted on the basis of the two monaural thresholds for different values of the exponent. The thresholds, measured in a large sample of human subjects with equal and unequal monaural thresholds and for stimuli with different temporal envelopes, are compatible only with an exponent close to 3. An exponent of 3 predicts a binaural advantage of 2 dB when the two ears are equally sensitive. Thus, listening with two (equally sensitive) ears rather than one has the same effect on absolute threshold as doubling duration. The data suggest that perfect binaural summation occurs at threshold and that peripheral neural signals are governed by an exponent close to 3. They might also shed new light on mechanisms underlying binaural summation of loudness.
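The threshold-combination rule described in this abstract can be written out explicitly. A minimal sketch, assuming the probabilistic event model (event rate proportional to amplitude raised to an exponent n, with perfect summation of events from the two ears): the binaural threshold amplitude is (A1^-n + A2^-n)^(-1/n), which for n = 3 and equal monaural thresholds yields the ~2 dB advantage reported.

```python
import math

def binaural_threshold_db(l1_db, l2_db, n=3.0):
    """Predicted binaural threshold (dB) from two monaural thresholds,
    assuming perfect summation of neural events whose probability grows
    as amplitude**n (n close to 3 in the model described above)."""
    a1 = 10 ** (l1_db / 20)   # monaural thresholds as linear amplitudes
    a2 = 10 ** (l2_db / 20)
    ab = (a1 ** -n + a2 ** -n) ** (-1 / n)
    return 20 * math.log10(ab)

# Equal ears: binaural threshold is 20*log10(2)/3 ~= 2.0 dB below either ear's
print(binaural_threshold_db(10.0, 10.0))   # ~7.99 dB
# Very unequal ears: the advantage shrinks towards zero
print(binaural_threshold_db(10.0, 40.0))   # ~10.0 dB (better ear dominates)
```

As the abstract notes, the predicted advantage is largest for matched ears and vanishes as the monaural thresholds diverge, which the two example calls illustrate.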
33.
Anatomical limits on interaural time differences: an ecological perspective. Front Neurosci 2014; 8:34. PMID: 24592209; PMCID: PMC3937989; DOI: 10.3389/fnins.2014.00034.
Abstract
Human listeners, and other animals too, use interaural time differences (ITD) to localize sounds. If the sounds are pure tones, a simple frequency factor relates the ITD to the interaural phase difference (IPD), for which there are known iso-IPD boundaries, 90°, 180°… defining regions of spatial perception. In this article, iso-IPD boundaries for humans are translated into azimuths using a spherical head model (SHM), and the calculations are checked by free-field measurements. The translated boundaries provide quantitative tests of an ecological interpretation for the dramatic onset of ITD insensitivity at high frequencies. According to this interpretation, the insensitivity serves as a defense against misinformation and can be attributed to limits on binaural processing in the brainstem. Calculations show that the ecological explanation passes the tests only if the binaural brainstem properties evolved or developed consistent with heads that are 50% smaller than current adult heads. Measurements on more realistic head shapes relax that requirement only slightly. The problem posed by the discrepancy between the current head size and a smaller, ideal head size was apparently solved by the evolution or development of central processes that discount large IPDs in favor of interaural level differences. The latter become more important with increasing head size.
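The translation from iso-IPD boundaries to azimuth can be illustrated with a low-frequency spherical-head approximation (ITD ≈ 3(a/c)·sin θ). The specific formula, head radius, and frequencies below are illustrative assumptions, not the article's exact SHM calculation or free-field measurements:

```python
import math

C = 343.0      # speed of sound, m/s
A = 0.0875     # assumed adult head radius, m

def itd_spherical(azimuth_deg, a=A, c=C):
    """Low-frequency spherical-head ITD: ITD = 3*(a/c)*sin(azimuth)."""
    return 3 * a / c * math.sin(math.radians(azimuth_deg))

def iso_ipd_azimuth(ipd_deg, freq_hz, a=A, c=C):
    """Azimuth at which a pure tone reaches the given IPD,
    or None if no natural azimuth produces that IPD for this head size."""
    itd = (ipd_deg / 360.0) / freq_hz       # required ITD in seconds
    s = itd * c / (3 * a)
    return math.degrees(math.asin(s)) if s <= 1.0 else None

print(1e6 * itd_spherical(90))       # maximum natural ITD, ~765 us
print(iso_ipd_azimuth(180, 1000))    # 180-deg iso-IPD boundary at 1 kHz, ~41 deg
print(iso_ipd_azimuth(180, 500))     # unreachable at 500 Hz -> None
```

The last call shows the ecological point at issue: below some frequency, no physical source direction can produce a 180° IPD for a head of a given size, so the usable IPD range depends directly on head radius.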
34.
Developmental plasticity of spatial hearing following asymmetric hearing loss: context-dependent cue integration and its clinical implications. Front Syst Neurosci 2013; 7:123. PMID: 24409125; PMCID: PMC3873525; DOI: 10.3389/fnsys.2013.00123.
Abstract
Under normal hearing conditions, comparisons of the sounds reaching each ear are critical for accurate sound localization. Asymmetric hearing loss should therefore degrade spatial hearing and has become an important experimental tool for probing the plasticity of the auditory system, both during development and adulthood. In clinical populations, hearing loss affecting one ear more than the other is commonly associated with otitis media with effusion, a disorder experienced by approximately 80% of children before the age of two. Asymmetric hearing may also arise in other clinical situations, such as after unilateral cochlear implantation. Here, we consider the role played by spatial cue integration in sound localization under normal acoustical conditions. We then review evidence for adaptive changes in spatial hearing following a developmental hearing loss in one ear, and show that adaptation may be achieved either by learning a new relationship between the altered cues and directions in space or by changing the way different cues are integrated in the brain. We next consider developmental plasticity as a source of vulnerability, describing maladaptive effects of asymmetric hearing loss that persist even when normal hearing is provided. We also examine the extent to which the consequences of asymmetric hearing loss depend upon its timing and duration. Although much of the experimental literature has focused on the effects of a stable unilateral hearing loss, some of the most common hearing impairments experienced by children tend to fluctuate over time. We therefore propose that there is a need to bridge this gap by investigating the effects of recurring hearing loss during development, and outline recent steps in this direction. We conclude by arguing that this work points toward a more nuanced view of developmental plasticity, in which plasticity may be selectively expressed in response to specific sensory contexts, and consider the clinical implications of this.
35.
Frequency response areas in the inferior colliculus: nonlinearity and binaural interaction. Front Neural Circuits 2013; 7:90. PMID: 23675323; PMCID: PMC3650518; DOI: 10.3389/fncir.2013.00090.
Abstract
The tuning, binaural properties, and encoding characteristics of neurons in the central nucleus of the inferior colliculus (CNIC) were investigated to shed light on nonlinearities in the responses of these neurons. Results were analyzed for three types of neurons (I, O, and V) in the CNIC of decerebrate cats. Rate responses to binaural stimuli were characterized using a 1st- plus 2nd-order spectral integration model. Parameters of the model were derived using broadband stimuli with random spectral shapes (RSS). This method revealed four characteristics of CNIC neurons: (1) Tuning curves derived from broadband stimuli have fixed (i.e., level tolerant) bandwidths across a 50-60 dB range of sound levels; (2) 1st-order contralateral weights (particularly for type I and O neurons) were usually larger in magnitude than corresponding ipsilateral weights; (3) contralateral weights were more important than ipsilateral weights when using the model to predict responses to untrained noise stimuli; and (4) 2nd-order weight functions demonstrate frequency selectivity different from that of 1st-order weight functions. Furthermore, while the inclusion of 2nd-order terms in the model usually improved response predictions related to untrained RSS stimuli, they had limited impact on predictions related to other forms of filtered broadband noise [e.g., virtual-space stimuli (VS)]. The accuracy of the predictions varied considerably by response type. Predictions were most accurate for I neurons, and less accurate for O and V neurons, except at the lowest stimulus levels. These differences in prediction performance support the idea that type I, O, and V neurons encode different aspects of the stimulus: while type I neurons are most capable of producing linear representations of spectral shape, type O and V neurons may encode spectral features or temporal stimulus properties in a manner not easily explained with the low-order model. Supported by NIH grant DC00115.
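A 1st- plus 2nd-order spectral integration model of this kind can be sketched as an ordinary least-squares fit to responses to random spectral shapes. The synthetic neuron, channel count, and stimulus statistics below are illustrative assumptions, not the study's recording parameters or its exact estimation procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
n_chan, n_stim = 8, 500

# Random spectral shapes (RSS): per-channel level perturbations in dB
S = rng.normal(0.0, 5.0, size=(n_stim, n_chan))

# Synthetic neuron: rate = r0 + 1st-order term + symmetric 2nd-order term
r0 = 20.0
w1 = rng.normal(0.0, 1.0, n_chan)
W2 = 0.02 * rng.normal(0.0, 1.0, (n_chan, n_chan))
W2 = (W2 + W2.T) / 2
rates = r0 + S @ w1 + np.einsum('si,ij,sj->s', S, W2, S)

# Fit by least squares: constant, linear terms, and unique pairwise products
iu = np.triu_indices(n_chan)
X = np.hstack([np.ones((n_stim, 1)), S, S[:, iu[0]] * S[:, iu[1]]])
coef, *_ = np.linalg.lstsq(X, rates, rcond=None)

# The fitted linear coefficients recover the neuron's 1st-order weights
print(np.allclose(coef[1:1 + n_chan], w1, atol=1e-6))
```

Because the synthetic response is exactly linear in the chosen features, the fit recovers the generating weights; for real neurons, the residual error is what distinguishes type I cells (nearly linear) from type O and V cells.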
36.
Patterns of convergence in the central nucleus of the inferior colliculus of the Mongolian gerbil: organization of inputs from the superior olivary complex in the low frequency representation. Front Neural Circuits 2013; 7:29. PMID: 23509001; PMCID: PMC3589697; DOI: 10.3389/fncir.2013.00029.
Abstract
Projections to the inferior colliculus (IC) from the lateral and medial superior olivary nuclei (LSO and MSO) were studied in the gerbil (Meriones unguiculatus) with neuroanatomical tract-tracing methods. The terminal fields of projecting axons were labeled via anterograde transport of biotinylated dextran amine (BDA) and were localized on series of horizontal sections through the IC. In addition, to make the results easier to visualize in three dimensions and to facilitate comparisons among cases, the data were also reconstructed into the transverse plane. The results show that the terminal fields from the low frequency parts of the LSO and MSO are concentrated in a dorsal, lateral, and rostral area that is referred to as the "pars lateralis" of the central nucleus by analogy with the cat. This region also receives substantial input from both the contralateral and ipsilateral cochlear nuclei (Cant and Benson, 2008) and presumably plays a major role in processing binaural, low frequency information. The basic pattern of organization in the gerbil IC is similar to that of other rodents, although the low frequency part of the central nucleus in gerbils appears to be relatively greater than in the rat, consistent with differences in the audiograms of the two species.
37.
Informational masking and spatial hearing in listeners with and without unilateral hearing loss. J Speech Lang Hear Res 2012; 55:511-531. PMID: 22215037; PMCID: PMC3320681; DOI: 10.1044/1092-4388(2011/10-0205).
Abstract
PURPOSE This study assessed selective listening for speech in individuals with and without unilateral hearing loss (UHL) and the potential relationship between spatial release from informational masking and localization ability in listeners with UHL. METHOD Twelve adults with UHL and 12 normal-hearing controls completed a series of monaural and binaural speech tasks that were designed to measure informational masking. They also completed a horizontal localization task. RESULTS Monaural performance by participants with UHL was comparable to that of normal-hearing participants. Unlike the normal-hearing participants, the participants with UHL did not exhibit a true spatial release from informational masking. Rather, their performance could be predicted by head shadow effects. Performance among participants with UHL in the localization task was quite variable, with some showing near-normal abilities and others demonstrating no localization ability. CONCLUSION Individuals with UHL did not show deficits in all listening situations but were at a significant disadvantage when listening to speech in environments where normal-hearing listeners benefit from spatial separation between target and masker. This inability to capitalize on spatial cues for selective listening does not appear to be related to localization ability.
38.
Neural circuits underlying adaptation and learning in the perception of auditory space. Neurosci Biobehav Rev 2011; 35:2129-39. PMID: 21414354; PMCID: PMC3198863; DOI: 10.1016/j.neubiorev.2011.03.008.
Abstract
Sound localization mechanisms are particularly plastic during development, when the monaural and binaural acoustic cues that form the basis for spatial hearing change in value as the body grows. Recent studies have shown that the mature brain retains a surprising capacity to relearn to localize sound in the presence of substantially altered auditory spatial cues. In addition to the long-lasting changes that result from learning, behavioral and electrophysiological studies have demonstrated that auditory spatial processing can undergo rapid adjustments in response to changes in the statistics of recent stimulation, which help to maintain sensitivity over the range where most stimulus values occur. Through a combination of recording studies and methods for selectively manipulating the activity of specific neuronal populations, progress is now being made in identifying the cortical and subcortical circuits in the brain that are responsible for the dynamic coding of auditory spatial information.
39.
Spatial tuning to sound-source azimuth in the inferior colliculus of unanesthetized rabbit. J Neurophysiol 2011; 106:2698-708. PMID: 21849611; PMCID: PMC3214120; DOI: 10.1152/jn.00532.2011.
Abstract
Despite decades of research devoted to the study of inferior colliculus (IC) neurons' tuning to sound-source azimuth, there remain many unanswered questions because no previous study has examined azimuth tuning over a full range of 360° azimuths at a wide range of stimulus levels in an unanesthetized preparation. Furthermore, a comparison of azimuth tuning to binaural and contralateral ear stimulation over ranges of full azimuths and widely varying stimulus levels has not previously been reported. To fill this void, we have conducted a study of azimuth tuning in the IC of the unanesthetized rabbit over a 300° range of azimuths at stimulus levels of 10-50 dB above neural threshold to both binaural and contralateral ear stimulation using virtual auditory space stimuli. This study provides systematic evidence for neural coding of azimuth. We found the following: 1) level-tolerant azimuth tuning was observed in the top 35% regarding vector strength and in the top 15% regarding vector angle of IC neurons; 2) preserved azimuth tuning to binaural stimulation at high stimulus levels was created as a consequence of binaural facilitation in the contralateral sound field and binaural suppression in the ipsilateral sound field; 3) the direction of azimuth tuning to binaural stimulation was primarily in the contralateral sound field, and its center shifted laterally toward -90° with increasing stimulus level; 4) at 10 dB, azimuth tuning to binaural and contralateral stimulation was similar, indicating that it was mediated by monaural mechanisms; and 5) at higher stimulus levels, azimuth tuning to contralateral ear stimulation was severely degraded. These findings form a foundation for understanding neural mechanisms of localizing sound-source azimuth.
40.
Chronic detachable headphones for acoustic stimulation in freely moving animals. J Neurosci Methods 2010; 189:44-50. PMID: 20346981; PMCID: PMC2877876; DOI: 10.1016/j.jneumeth.2010.03.017.
Abstract
A growing number of studies of auditory processing are being carried out in awake, behaving animals, creating a need for precisely controlled sound delivery without restricting head movements. We have designed a system for closed-field stimulus presentation in freely moving ferrets, which comprises lightweight, adjustable headphones that can be consistently positioned over the ears via a small, skull-mounted implant. The invasiveness of the implant was minimized by simplifying its construction and using dental adhesive only for attaching it to the skull, thereby reducing the surgery required and avoiding the use of screws or other anchoring devices. Attaching the headphones to a chronic implant also reduced the amount of contact they had with the head and ears, increasing the willingness of the animals to wear them. We validated sound stimulation via the headphones in ferrets trained previously in a free-field task to localize stimuli presented from one of two loudspeakers. Noise bursts were delivered binaurally over the headphones and interaural level differences (ILDs) were introduced to allow the sound to be lateralized. Animals rapidly transferred from the free-field task to indicate the perceived location of the stimulus presented over headphones. They showed near perfect lateralization with a 5 dB ILD, matching the scores achieved in the free-field task. As expected, the ferrets' performance declined when the ILD was reduced in value. This closed-field system can easily be adapted for use in other species, and provides a reliable means of presenting closed-field stimuli whilst monitoring behavioral responses in freely moving animals.
41.
Interaural time sensitivity dominated by cochlea-induced envelope patterns. J Neurosci 2003; 23:6345-50. PMID: 12867519; PMCID: PMC6740541.
Abstract
To localize sounds in space, humans heavily depend on minute interaural time differences (ITDs) generated by path-length differences to the two ears. Physiological studies of ITD sensitivity have mostly used deterministic, periodic sounds, in which either the waveform fine structure or a sinusoidal envelope is delayed interaurally. For natural broadband stimuli, however, auditory frequency selectivity causes individual channels to have their own envelopes; the temporal code in these channels is thus a mixture of fine structure and envelope. This study introduces a method to disentangle the contributions of fine structure and envelope in both binaural and monaural responses to broadband noise. In the inferior colliculus (IC) of the cat, a population of neurons was found in which envelope fluctuations dominate ITD sensitivity. This population extends over a surprisingly wide range of frequencies, including low frequencies for which fine-structure information is also available. A comparison with the auditory nerve suggests that an elaboration of envelope coding occurs between the nerve and the IC. These results suggest that internally generated envelopes play a more important role in binaural hearing than is commonly thought.
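The split of a narrowband channel's temporal code into envelope and fine structure can be illustrated with the analytic signal. This is a generic Hilbert-based decomposition applied to one cochlear-like filter band, not the specific disentangling method introduced in the study; the band edges and sample rate are arbitrary illustrative choices:

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT (equivalent to scipy.signal.hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

fs = 48000
rng = np.random.default_rng(1)
noise = rng.standard_normal(4096)

# Narrowband channel around 500 Hz, mimicking one cochlear filter
spec = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(noise.size, 1 / fs)
spec[(freqs < 400) | (freqs > 600)] = 0
band = np.fft.irfft(spec, noise.size)

z = analytic(band)
envelope = np.abs(z)                    # slow modulation imposed by the filter
fine_structure = np.cos(np.angle(z))    # rapid carrier near 500 Hz
# The channel signal is exactly the product: band == envelope * fine_structure
```

The point made in the abstract falls out of this construction: even though the input noise has no imposed modulation, the filtered channel acquires its own envelope, which can carry ITD information of its own.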
42.
Frequency-specific interaural level difference tuning predicts spatial response patterns of space-specific neurons in the barn owl inferior colliculus. J Neurosci 2003; 23:4677-88. PMID: 12805307; PMCID: PMC6740778.
Abstract
Space-specific neurons in the barn owl's inferior colliculus have spatial receptive fields (RFs) because of sensitivity to interaural time difference and frequency-specific interaural level difference (ILD). These neurons are assumed to be tuned to the frequency-specific ILDs occurring at their spatial RFs, but attempts to assess this tuning with traditional narrowband stimuli have had limited success. Indeed, tuning assessed in this manner, when processed via a linear model of spectral integration, typically explains only approximately half the variance in spatial response patterns. Here we report our findings that frequency-specific ILD tuning of space-specific neurons, when assessed from responses to broadband stimuli, predicted nearly 75% of the variance in spatial responses, using a linear model of spectral integration (p < 0.0001; n = 97 neurons). Furthermore, when we tested neurons using only those frequencies we found to be spatially relevant, we saw that their responses were similar to those elicited by broadband stimuli. When we used frequencies not identified as spatially relevant, such similarity was lacking. Furthermore, spectral components that elicited high firing rates when presented as narrowband stimuli were found in several cases to be irrelevant for or detrimental to the definition of spatial RFs. Thus, neurons achieved sharp spatial tuning by selecting for ILDs of a subset of spectral components in noise, some of which were not identified using narrowband stimuli.
43.
The coding of spatial location by single units in the lateral superior olive of the cat. I. Spatial receptive fields in azimuth. J Neurosci 2002; 22:1454-67. PMID: 11850472; PMCID: PMC6757576.
Abstract
The lateral superior olive (LSO) is one of the most peripheral auditory nuclei receiving inputs from both ears, and LSO neurons are sensitive to interaural level differences (ILDs), one of the primary acoustical cues for sound location. We used the virtual space (VS) technique to present over earphones broadband stimuli containing natural combinations of localization cues as a function of azimuth while recording extracellular responses from single LSO cells. The responses of LSO cells exhibited spatial receptive fields (SRFs) in azimuth consonant with their sensitivity to ILDs of stimuli presented dichotically: high discharge rates for ipsilateral azimuths where stimulus amplitude to the excitatory ear exceeded that to the inhibitory ear, rapidly declining rates near the midline, and low rates for contralateral azimuths where the amplitude to the inhibitory ear exceeded that to the excitatory ear. Relative to binaural stimulation, presentations of the VS stimuli to the ipsilateral ear alone yielded increased rates, particularly in the contralateral field, confirming that the binaural SRFs were shaped by contralateral inhibition. Our finding that LSO neurons respond to azimuth consistent with their ILD sensitivity supports the long-held hypothesis that LSO neurons compute a correlate of the ILD present in free-field stimuli. Only weak correlations between the properties of pure-tone ILD functions and the SRFs were found, indicating that ILD sensitivity measured at only one sound level is not sufficient to predict sensitivity to azimuth. Sensitivity to spatial location was also retained over a wide range of stimulus levels under binaural, but not monaural, conditions.
44.
Detection of large interaural delays and its implication for models of binaural interaction. J Assoc Res Otolaryngol 2002; 3:80-8. PMID: 12083726; PMCID: PMC3202365; DOI: 10.1007/s101620020006.
Abstract
The interaural time difference (ITD) is a major cue to sound localization along the horizontal plane. The maximum natural ITD occurs when a sound source is positioned opposite to one ear. We examined the ability of owls and humans to detect large ITDs in sounds presented through headphones. Stimuli consisted of either broad or narrow bands of Gaussian noise, 100 ms in duration. Using headphones allowed presentation of ITDs that are greater than the maximum natural ITD. Owls were able to discriminate a sound leading to the left ear from one leading to the right ear, for ITDs that are 5 times the maximum natural delay. Neural recordings from optic-tectum neurons, however, show that best ITDs are usually well within the natural range and are never as large as ITDs that are behaviorally discriminable. A model of binaural crosscorrelation with short delay lines is shown to explain behavioral detection of large ITDs. The model uses curved trajectories of a cross-correlation pattern as the basis for detection. These trajectories represent side peaks of neural ITD-tuning curves and successfully predict localization reversals by both owls and human subjects.
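The core of a binaural cross-correlation model can be sketched as follows. This shows only the basic delay-line correlation step, with an assumed sample rate and imposed delay; it does not reproduce the curved side-peak trajectories the study uses to explain detection of delays beyond the natural range:

```python
import numpy as np

rng = np.random.default_rng(2)
itd_samples = 30                   # imposed delay (300 us at 100 kHz sampling)

x = rng.standard_normal(5000)
left = x
right = np.roll(x, itd_samples)    # right-ear signal lags the left (circular
                                   # shift keeps the sketch simple)

# Cross-correlate over a limited range of internal delays ("short delay lines")
max_lag = 100
lags = np.arange(-max_lag, max_lag + 1)
cc = [np.dot(left, np.roll(right, -lag)) for lag in lags]

print(lags[int(np.argmax(cc))])    # recovered ITD in samples: 30
```

For narrowband inputs the correlation pattern is periodic, producing the side peaks that the model interprets as the basis for localization reversals at large imposed ITDs.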
45.
From spectrum to space: the contribution of level difference cues to spatial receptive fields in the barn owl inferior colliculus. J Neurosci 2002; 22:284-93. PMID: 11756512; PMCID: PMC6757622.
Abstract
Space-specific neurons in the owl's inferior colliculus have spatial receptive fields (RFs) computed from interaural time (ITD) and level (ILD) differences. Because of the shape of the owl's head, these cues vary with frequency in a manner specific for each location. We sought to determine the contribution of ILD to spatial selectivity. We measured the normal spatial receptive fields of space-specific neurons using virtual sound sources (i.e., noises filtered to simulate external sound sources, presented using headphones). The virtual-source filters were then altered so that ITD was fixed while frequency-specific ILDs varied according to location in the usual manner. The resulting "ILD-alone" RF typically revealed a horizontal band of excitation that included the normal RF. Above and below, the neurons were inhibited. Interestingly, the maxima of ILD-alone RFs were generally outside the normal RF, suggesting that space-specific neurons are not optimally tuned to the ILD spectrum occurring at the normal RF location. Congruously, frequency-specific ILD tuning, assessed with tones, better matched the ILDs at the peak of the ILD-alone RF than those at the peak of the normal RF. The firing evoked from the normal RF may thus reflect the balance of excitatory and inhibitory inputs needed to appropriately restrict the receptive field. Frequency-specific ILD tuning curves were combined with measured head-filtering characteristics to predict responses to the frequency-specific ILDs at each location. The predicted ILD-alone RFs, which are based on a simple sum of frequency-specific inputs, accounted for 56% of the variance in our measured ILD-alone RFs.
46.
Interaural intensity difference processing in auditory midbrain neurons: effects of a transient early inhibitory input. J Neurosci 1999; 19:1149-63. PMID: 9920676; PMCID: PMC6782152.
Abstract
Interaural intensity differences (IIDs) are important cues that animals use to localize high-frequency sounds. Neurons sensitive to IIDs are excited by stimulation of one ear and inhibited by stimulation of the other ear, such that the response magnitude of the cell depends on the relative strengths of the two inputs, which in turn depends on the sound intensities at the ears. In the auditory midbrain nucleus, the inferior colliculus (IC), many IID-sensitive neurons have response functions that decline steeply from maximum to zero spikes as a function of IID. However, there are also many neurons with much more shallow response functions that do not decline to zero spikes. We present evidence from single-unit recordings in the free-tailed bat's IC that this partially inhibited response pattern is a result of the inhibitory input to these cells being very brief (approximately 2 msec). Of the cells sampled, 54 of 137 (40%) achieved partial inhibition when tested with 60 msec tones, and the inhibition to these 54 cells occurred primarily during the first few milliseconds of the excitatory response. Consequently, the initial component of the response was highly sensitive to IIDs, whereas the later component was primarily insensitive to IIDs. Each of the 54 "partially inhibited" cells was able to reach complete inhibition with very short stimuli, such as simulated bat echolocation calls that invoked only the initial, IID-sensitive component. Local application of inhibitory transmitter antagonists disabled the short inhibitory input, indicating that this response pattern is created within the IC.
47.
Temporal and binaural properties in dorsal cochlear nucleus and its output tract. J Neurosci 1998; 18:10157-70. PMID: 9822769; PMCID: PMC6793293.
Abstract
The dorsal cochlear nucleus (DCN) is one of three nuclei at the terminal zone of the auditory nerve. Axons of its projection neurons course via the dorsal acoustic stria (DAS) to the inferior colliculus (IC), where their signals are integrated with inputs from various other sources. The DCN presumably conveys sensitivity to spectral features, and it has been hypothesized that it plays a role in sound localization based on pinna cues. To account for its remarkable spectral properties, a DCN circuit scheme was developed in which three inputs converge onto projection neurons: auditory nerve fibers, inhibitory interneurons, and wide-band inhibitors, which possibly consist of Onset-chopper (Oc) cells. We studied temporal and binaural properties in DCN and DAS and examined whether the temporal properties are consistent with the model circuit. Interneurons (type II) and projection (types III and IV) neurons differed from Oc cells by their longer latencies and temporally nonlinear responses to amplitude-modulated tones. They also showed evidence of early inhibition to clicks. All projection neurons examined were inhibited by stimulation of the contralateral ear, particularly by broadband noise, and this inhibition also had short latency. Because Oc cells had short-latency responses and were well driven by broadband stimuli, we propose that they provide short-latency inhibition to DCN for both ipsilateral and contralateral stimuli. These results indicate more complex temporal behavior in DCN than has previously been emphasized, but they are consistent with the recently described nonlinear behavior to spectral manipulations and with the connectivity scheme deduced from such manipulations.