51. Tolnai S, Beutelmann R, Klump GM. Effect of preceding stimulation on sound localization and its representation in the auditory midbrain. Eur J Neurosci 2017;45:460-471. PMID: 27891687. DOI: 10.1111/ejn.13491.
Affiliations (all authors): Cluster of Excellence Hearing4all, Animal Physiology and Behaviour Group, Department of Neuroscience, School of Medicine and Health Sciences, University of Oldenburg, D-26111 Oldenburg, Germany.

52. Learning to echolocate in sighted people: a correlational study on attention, working memory and spatial abilities. Exp Brain Res 2016;235:809-818. PMID: 27888324. PMCID: PMC5315722. DOI: 10.1007/s00221-016-4833-z.
Abstract
Echolocation can be beneficial for the orientation and mobility of visually impaired people. Research has shown considerable individual differences in acquiring this skill. However, the individual characteristics that affect the learning of echolocation are largely unknown. In the present study, we examined individual factors that are likely to affect learning to echolocate: sustained and divided attention, working memory, and spatial abilities. To that end, sighted participants with normal hearing performed an echolocation task that was adapted from a previously reported size-discrimination task. In line with existing studies, we found large individual differences in echolocation ability. We also found indications that participants were able to improve their echolocation ability. Furthermore, we found a significant positive correlation between improvement in echolocation and sustained and divided attention, as measured by the Paced Auditory Serial Addition Test (PASAT). No significant correlations were found with our measures of working memory and spatial abilities. These findings may have implications for the development of echolocation training guidelines tailored to the individual with a visual impairment.

53. Statistics of natural reverberation enable perceptual separation of sound and space. Proc Natl Acad Sci U S A 2016;113:E7856-E7865. PMID: 27834730. DOI: 10.1073/pnas.1612524113.
Abstract
In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us.

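The entry above reports that room impulse responses decay exponentially at frequency-dependent rates, conventionally summarized by the reverberation time RT60 (the time for energy to fall by 60 dB). As a rough illustration of both ideas, the sketch below synthesizes a toy impulse response with band-dependent decay and recovers RT60 from the Schroeder energy-decay curve. Function names, band centers, and decay values are illustrative, not taken from the study.

```python
import numpy as np

def synth_ir(fs=16000, dur=1.0, band_rt60=None):
    """Toy impulse response: noise carriers at a few band centers, each with
    its own exponential decay rate (RT60). A crude stand-in for the
    frequency-dependent decay reported for real rooms."""
    if band_rt60 is None:
        band_rt60 = {250: 0.4, 1000: 0.8, 4000: 0.5}
    t = np.arange(int(fs * dur)) / fs
    rng = np.random.default_rng(0)
    ir = np.zeros_like(t)
    for fc, rt60 in band_rt60.items():
        # -60 dB at t = rt60 => amplitude envelope exp(-3 ln(10) t / rt60)
        env = np.exp(-6.908 * t / rt60)
        carrier = rng.standard_normal(t.size) * np.sin(2 * np.pi * fc * t)
        ir += env * carrier
    return ir / np.max(np.abs(ir))

def rt60_from_edc(ir, fs):
    """Estimate RT60 from the Schroeder energy-decay curve: linear fit of the
    decay between -5 and -35 dB, extrapolated to -60 dB."""
    edc = np.cumsum(ir[::-1] ** 2)[::-1]          # remaining energy at each t
    edc_db = 10 * np.log10(edc / edc[0] + 1e-12)
    sel = (edc_db < -5) & (edc_db > -35)
    t = np.arange(ir.size) / fs
    slope, _ = np.polyfit(t[sel], edc_db[sel], 1)  # dB per second, negative
    return -60.0 / slope
```

With a single band the estimator should recover the decay rate it was given, up to noise in the carrier.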
54. Zeitooni M, Mäki-Torkko E, Stenfelt S. Binaural hearing ability with bilateral bone conduction stimulation in subjects with normal hearing: implications for bone conduction hearing aids. Ear Hear 2016;37:690-702. DOI: 10.1097/aud.0000000000000336.

55. Reichert MS, Symes LB, Höbel G. Lighting up sound preferences: cross-modal influences on the precedence effect in treefrogs. Anim Behav 2016. DOI: 10.1016/j.anbehav.2016.07.003.

56. Montagne C, Zhou Y. Visual capture of a stereo sound: interactions between cue reliability, sound localization variability, and cross-modal bias. J Acoust Soc Am 2016;140:471. PMID: 27475171. DOI: 10.1121/1.4955314.
Abstract
Multisensory interactions involve coordination and sometimes competition between multiple senses. Vision usually dominates audition in spatial judgments when light and sound stimuli are presented from two different physical locations. This study investigated the influence of vision on the perceived location of a phantom sound source placed in a stereo sound field using a pair of loudspeakers emitting identical signals that were delayed or attenuated relative to each other. Results show that although a similar horizontal range (±45°) was reported for timing-modulated and level-modulated signals, listeners' localization performance showed greater variability for the timing signals. When visual stimuli were presented simultaneously with the auditory stimuli, listeners showed stronger visual bias for timing-modulated signals than for level-modulated and single-speaker control signals. Trial-to-trial errors remained relatively stable over time, suggesting that sound localization uncertainty has an immediate and long-lasting effect on the cross-modal bias. Binaural signal analyses further reveal that interaural differences of time and intensity (the two primary cues for sound localization in the azimuthal plane) are inherently more ambiguous for signals placed using timing. These results suggest that binaural ambiguity is intrinsically linked with localization variability and the strength of cross-modal bias in sound localization.
Affiliations: Christopher Montagne and Yi Zhou: Laboratory of Auditory Computation & Neurophysiology, Department of Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe, Arizona 85287, USA.

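The entry above pits timing-based against level-based stereo panning. The two classical binaural cues it analyzes can be read out from a two-channel signal in a few lines: ITD from the lag of the cross-correlation peak, ILD from the RMS ratio. This is a simplified broadband sketch, not the binaural analysis used in the study; `make_stereo` and its parameters are hypothetical.

```python
import numpy as np

def make_stereo(sig, fs, delay_ms=0.0, level_db=0.0):
    """Stereo pair from one signal: the right channel is delayed by delay_ms
    and attenuated by level_db relative to the left, mimicking timing- and
    level-based panning over a loudspeaker pair."""
    d = int(round(fs * delay_ms / 1000.0))
    left = np.concatenate([sig, np.zeros(d)])
    right = np.concatenate([np.zeros(d), sig]) * 10 ** (-level_db / 20.0)
    return left, right

def itd_ild(left, right, fs):
    """Crude broadband cue readout. ITD (ms) from the cross-correlation peak
    lag (negative = left leads); ILD (dB) from the RMS ratio
    (positive = left louder)."""
    xc = np.correlate(left, right, mode="full")
    lag = np.argmax(xc) - (len(right) - 1)   # zero lag sits at len(right)-1
    itd_ms = 1000.0 * lag / fs
    ild_db = 20 * np.log10(np.sqrt(np.mean(left ** 2)) /
                           np.sqrt(np.mean(right ** 2)))
    return itd_ms, ild_db
```

A pure timing delay should show up only in the ITD estimate, and a pure attenuation only in the ILD estimate.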
57. Zahorik P, Brandewie EJ. Speech intelligibility in rooms: effect of prior listening exposure interacts with room acoustics. J Acoust Soc Am 2016;140:74. PMID: 27475133. PMCID: PMC6497457. DOI: 10.1121/1.4954723.
Abstract
There is now converging evidence that a brief period of prior listening exposure to a reverberant room can influence speech understanding in that environment. Although the effect appears to depend critically on the amplitude modulation characteristic of the speech signal reaching the ear, the extent to which the effect may be influenced by room acoustics has not been thoroughly evaluated. This study seeks to fill this gap in knowledge by testing the effect of prior listening exposure or listening context on speech understanding in five different simulated sound fields, ranging from anechoic space to a room with broadband reverberation time (T60) of approximately 3 s. Although substantial individual variability in the effect was observed and quantified, the context effect was, on average, strongly room dependent. At threshold, the effect was minimal in anechoic space, increased to a maximum of 3 dB on average in moderate reverberation (T60 = 1 s), and returned to minimal levels again in high reverberation. This interaction suggests that the functional effects of prior listening exposure may be limited to sound fields with moderate reverberation (0.4 ≤ T60 ≤ 1 s).
Affiliations: Pavel Zahorik: Department of Otolaryngology and Communicative Disorders, University of Louisville, Louisville, Kentucky 40292, USA. Eugene J. Brandewie: Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455, USA.

58. Li XT, Wang NY, Wang YJ, Xu ZQ, Liu JF, Bai YF, Dai JS, Zhao JY. Responses from two firing patterns in inferior colliculus neurons to stimulation of the lateral lemniscus dorsal nucleus. Neural Regen Res 2016;11:787-794. PMID: 27335563. PMCID: PMC4904470. DOI: 10.4103/1673-5374.182706.
Abstract
γ-Aminobutyric acid-releasing (GABAergic) neurons in the inferior colliculus are classified into various firing patterns based on their intrinsic electrical responses to constant current injection. Although this classification is associated with physiological function, the exact role of neurons with various firing patterns in acoustic processing remains poorly understood. In the present study, we analyzed characteristics of inferior colliculus neurons in vitro, and recorded responses to stimulation of the dorsal nucleus of the lateral lemniscus using the whole-cell patch clamp technique. Seven inferior colliculus neurons were tested and classified into two firing patterns: sustained-regular (n = 4) and sustained-adapting (n = 3). The majority of inferior colliculus neurons exhibited slight changes in response to stimulation and bicuculline. The responses of one neuron with a sustained-adapting firing pattern were suppressed after stimulation, but recovered to normal levels following application of the γ-aminobutyric acid receptor antagonist. One neuron with a sustained-regular pattern showed suppressed stimulation responses, which were not affected by bicuculline. Results suggest that GABAergic neurons in the inferior colliculus exhibit sustained-regular or sustained-adapting firing patterns. Additionally, GABAergic projections from the dorsal nucleus of the lateral lemniscus to the inferior colliculus are associated with sound localization. The differing responses of neurons with the two firing patterns suggest distinct roles in sound localization. A better understanding of these mechanisms and functions will provide better clinical treatment paradigms for hearing deficiencies.
Affiliations: Xiao-Ting Li, Ning-Yu Wang, Yan-Jun Wang, Jin-Feng Liu, Jin-Sheng Dai, and Jing-Yi Zhao: Department of Otorhinolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China. Zhi-Qing Xu and Yun-Fei Bai: Department of Neurophysiology, Capital Medical University, Beijing, China.

59. Pastore MT, Trahiotis C, Braasch J. The import of within-listener variability to understanding the precedence effect. J Acoust Soc Am 2016;139:1235-1240. PMID: 27036259. DOI: 10.1121/1.4944571.
Abstract
The purpose of this study was to gather behavioral data concerning the precedence effect as manifested by the localization-dominance of the leading elements of compound stimuli. This investigation was motivated by recent findings of Shackleton and Palmer [(2006). J. Assoc. Res. Otolaryngol. 7, 425-442], who measured the electrophysiological responses of single units in the inferior colliculus of the guinea pig. The neural data from Shackleton and Palmer indicated that processing of binaural cues like those relevant to understanding localization dominance is greatly affected by internal, neural noise. In order to evaluate the generality of their physiological results to human perception, the present study measured localization dominance so that behavioral responses within and across sets of samples (i.e., tokens) of frozen noises could be compared. Conceptually consistent with Shackleton and Palmer's neural data, the variability of perceived intracranial lateral positions produced by repeated presentations of the same tokens of noise was greater than the variability of intracranial lateral positions measured across different tokens of noise. This was true for each of the four individual listeners and for each of the 72 stimulus conditions studied. Thus, measured either neurophysiologically (Shackleton and Palmer, 2006) or behaviorally (this study), the import of within-listener variability appears to be a general, intrinsic aspect of binaural information processing.
Affiliations: M. Torben Pastore and Jonas Braasch: Center for Cognition, Communication & Culture, School of Architecture, Rensselaer Polytechnic Institute, Troy, New York 12180, USA. Constantine Trahiotis: Departments of Neuroscience and Surgery (Otolaryngology), University of Connecticut Health Center, Farmington, Connecticut 06030, USA.

60. Blind people are more sensitive than sighted people to binaural sound-location cues, particularly inter-aural level differences. Hear Res 2016;332:223-232. DOI: 10.1016/j.heares.2015.09.012.

61. Hossain S, Montazeri V, Assmann PF, Litovsky RY. Precedence based speech segregation in bilateral cochlear implant users. J Acoust Soc Am 2015;138:EL545-EL550. PMID: 26723365. PMCID: PMC4691255. DOI: 10.1121/1.4937906.
Abstract
The precedence effect (PE) enables perceptual dominance of a source (lead) over its echo (lag) in reverberant environments. In addition to facilitating sound localization, the PE can play an important role in spatial unmasking of speech. Listeners attending to binaural vocoder simulations with identical channel center frequencies and phase demonstrated PE-based benefits in a closed-set speech segregation task. When presented with the same stimuli, bilateral cochlear implant users did not derive such benefits. These findings suggest that envelope extraction in itself may not lead to a breakdown of the PE benefits, and that other factors may play a role.
Affiliations: Shaikat Hossain, Vahid Montazeri, and Peter F. Assmann: School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, Texas 75083-0688, USA. Ruth Y. Litovsky: Department of Communication Disorders and Waisman Center, University of Wisconsin, Madison, Wisconsin 53706, USA.

62. Brown AD, Jones HG, Kan A, Thakkar T, Stecker GC, Goupell MJ, Litovsky RY. Evidence for a neural source of the precedence effect in sound localization. J Neurophysiol 2015;114:2991-3001. PMID: 26400253. PMCID: PMC4737417. DOI: 10.1152/jn.00243.2015.
Abstract
Normal-hearing human listeners and a variety of studied animal species localize sound sources accurately in reverberant environments by responding to the directional cues carried by the first-arriving sound rather than spurious cues carried by later-arriving reflections, which are not perceived discretely. This phenomenon is known as the precedence effect (PE) in sound localization. Despite decades of study, the biological basis of the PE remains unclear. Though the PE was once widely attributed to central processes such as synaptic inhibition in the auditory midbrain, a more recent hypothesis holds that the PE may arise essentially as a by-product of normal cochlear function. Here we evaluated the PE in a unique human patient population with demonstrated sensitivity to binaural information but without functional cochleae. Users of bilateral cochlear implants (CIs) were tested in a psychophysical task that assessed the number and location(s) of auditory images perceived for simulated source-echo (lead-lag) stimuli. A parallel experiment was conducted in a group of normal-hearing (NH) listeners. Key findings were as follows: 1) Subjects in both groups exhibited lead-lag fusion. 2) Fusion was marginally weaker in CI users than in NH listeners but could be augmented by systematically attenuating the amplitude of the lag stimulus to coarsely simulate adaptation observed in acoustically stimulated auditory nerve fibers. 3) Dominance of the lead in localization varied substantially among both NH and CI subjects but was evident in both groups. Taken together, data suggest that aspects of the PE can be elicited in CI users, who lack functional cochleae, thus suggesting that neural mechanisms are sufficient to produce the PE.
Affiliations: Andrew D. Brown, Heath G. Jones, Alan Kan, Tanvi Thakkar, and Ruth Y. Litovsky: Waisman Center, University of Wisconsin, Madison, Wisconsin. G. Christopher Stecker: Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, Tennessee. Matthew J. Goupell: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland.

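Lead-lag (source-echo) stimuli like those in the entry above are straightforward to synthesize over headphones: a lead click carrying one ITD, followed after an inter-stimulus delay by a lag click carrying the mirror-image ITD, with the lag optionally attenuated (as the study did to coarsely mimic auditory-nerve adaptation). The sketch below is a generic construction with illustrative parameter values; it is not the stimulus code of any cited experiment.

```python
import numpy as np

def _delayed(click, fs, delay_ms, total):
    """Place a copy of `click` starting at delay_ms within a buffer of
    `total` samples."""
    d = int(round(fs * delay_ms / 1000.0))
    out = np.zeros(total)
    out[d:d + len(click)] = click
    return out

def lead_lag_pair(click, fs, isi_ms=2.0, itd_ms=0.3, lag_atten_db=0.0):
    """Headphone lead-lag stimulus: the lead click carries a left-leading
    ITD; the lag click, delayed by isi_ms and attenuated by lag_atten_db,
    carries the mirror-image right-leading ITD. Returns (left, right)."""
    total = len(click) + int(round(fs * (isi_ms + itd_ms) / 1000.0)) + 1
    g = 10 ** (-lag_atten_db / 20.0)          # linear lag attenuation
    left = (_delayed(click, fs, 0.0, total)
            + g * _delayed(click, fs, isi_ms + itd_ms, total))
    right = (_delayed(click, fs, itd_ms, total)
             + g * _delayed(click, fs, isi_ms, total))
    return left, right
```

With a unit impulse as the click, each ear's buffer should contain the lead at full amplitude and the lag attenuated by the requested amount.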
63. Pastore MT, Braasch J. The precedence effect with increased lag level. J Acoust Soc Am 2015;138:2079-2089. PMID: 26520291. DOI: 10.1121/1.4929940.
Abstract
When a pair of sounds arrive from different directions with a sufficiently short delay between them, listeners hear a perceptually fused image with a perceived location that is dominated by the first arriving sound. This is called the precedence effect. To test the limits of this phenomenon, 200-ms noise stimuli were presented over headphones to model a temporally overlapping direct sound (lead) with a single reflection (lag) at inter-stimulus intervals (ISIs) of 0-5 ms. Lag intensity exceeded that of the lead by 0-10 dB. Results for 16 listeners show that lateralization shifted from the position of the lead towards the lag as lag level increased. Response variability also increased with lag level. An oscillatory pattern emerged across ISIs as lag level increased, to a degree that varied greatly between listeners. Analysis of modeled binaural cues suggests that these oscillatory patterns are correlated with ILDs produced by the physical interference of lead and lag during the ongoing portion of the stimulus, especially in the 764-Hz frequency band. Different listeners apparently weighted cues from the onset versus ongoing portions of the stimulus differently, as evidenced by the varying degree of influence the ongoing ILD cues had on listeners' perceived lateralization.
Affiliations: M. Torben Pastore and Jonas Braasch: Center for Cognition, Communication and Culture, School of Architecture, Rensselaer Polytechnic Institute, Troy, New York 12180, USA.

64. A neural model of auditory space compatible with human perception under simulated echoic conditions. PLoS One 2015;10:e0137900. PMID: 26355676. PMCID: PMC4565656. DOI: 10.1371/journal.pone.0137900.
Abstract
In a typical auditory scene, sounds from different sources and reflective surfaces summate in the ears, causing spatial cues to fluctuate. Prevailing hypotheses of how spatial locations may be encoded and represented across auditory neurons generally disregard these fluctuations and must therefore invoke additional mechanisms for detecting and representing them. Here, we consider a different hypothesis in which spatial perception corresponds to an intermediate or sub-maximal firing probability across spatially selective neurons within each hemisphere. The precedence or Haas effect presents an ideal opportunity for examining this hypothesis, since the temporal superposition of an acoustical reflection with sounds arriving directly from a source can cause otherwise stable cues to fluctuate. Our findings suggest that subjects' experiences may simply reflect the spatial cues that momentarily arise under various acoustical conditions and how these cues are represented. We further suggest that auditory objects may acquire "edges" under conditions when interaural time differences are broadly distributed.

65. Gai Y, Ruhland JL, Yin TCT. Behavior and modeling of two-dimensional precedence effect in head-unrestrained cats. J Neurophysiol 2015;114:1272-1285. PMID: 26133795. DOI: 10.1152/jn.00214.2015.
Abstract
The precedence effect (PE) is an auditory illusion that occurs when listeners localize nearly coincident and similar sounds from different spatial locations, such as a direct sound and its echo. It has mostly been studied in humans and animals with immobile heads in the horizontal plane; speaker pairs were often symmetrically located in the frontal hemifield. The present study examined the PE in head-unrestrained cats for a variety of paired-sound conditions along the horizontal, vertical, and diagonal axes. Cats were trained with operant conditioning to direct their gaze to the perceived sound location. Stereotypical PE-like behaviors were observed for speaker pairs placed in azimuth or diagonally in the frontal hemifield as the interstimulus delay was varied. For speaker pairs in the median sagittal plane, no clear PE-like behavior occurred. Interestingly, when speakers were placed diagonally in front of the cat, certain PE-like behavior emerged along the vertical dimension. However, PE-like behavior was not observed when both speakers were located in the left hemifield. A Hodgkin-Huxley model was used to simulate responses of neurons in the medial superior olive (MSO) to sound pairs in azimuth. The novel simulation incorporated a low-threshold potassium current and frequency mismatches to generate internal delays. The model exhibited distinct PE-like behavior, such as summing localization and localization dominance. The simulation indicated that certain encoding of the PE could have occurred before information reaches the inferior colliculus, and MSO neurons with binaural inputs having mismatched characteristic frequencies may play an important role.
Affiliations: Yan Gai: Department of Neuroscience, University of Wisconsin, Madison, Wisconsin, and Department of Biomedical Engineering, Saint Louis University, St. Louis, Missouri. Janet L. Ruhland and Tom C. T. Yin: Department of Neuroscience, University of Wisconsin, Madison, Wisconsin.

66. Diedesch AC, Stecker GC. Temporal weighting of binaural information at low frequencies: discrimination of dynamic interaural time and level differences. J Acoust Soc Am 2015;138:125-133. PMID: 26233013. PMCID: PMC4499054. DOI: 10.1121/1.4922327.
Abstract
The importance of sound onsets in binaural hearing has been addressed in many studies, particularly at high frequencies, where the onset of the envelope may carry much of the useful binaural information. Some studies suggest that sound onsets might play a similar role in the processing of binaural cues [e.g., fine-structure interaural time differences (ITD)] at low frequencies. This study measured listeners' sensitivity to ITD and interaural level differences (ILD) present in early (i.e., onset) and late parts of 80-ms pure tones of 250-, 500-, and 1000-Hz frequency. Following previous studies, tones carried static interaural cues or dynamic cues that peaked at sound onset and diminished to zero at sound offset or vice versa. Although better thresholds were observed in static than dynamic conditions overall, ITD discrimination was especially impaired, regardless of frequency, when cues were not available at sound onset. Results for ILD followed a similar pattern at 1000 Hz; at lower frequencies, ILD thresholds did not differ significantly between dynamic-cue conditions. The results support the "onset" hypothesis of Houtgast and Plomp [(1968). J. Acoust. Soc. Am. 44, 807-812] for ITD discrimination, but not necessarily ILD discrimination, in low-frequency pure tones.
Affiliations: Anna C. Diedesch and G. Christopher Stecker: Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, 1215 21st Avenue South, Nashville, Tennessee 37232, USA.
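The dynamic-cue stimuli in the entry above can be approximated by a pure tone whose fine-structure ITD ramps between a peak value and zero over the tone's duration. Below is a minimal sketch under that assumption (a linear ramp and no onset/offset windowing); the actual stimulus construction in the study may differ.

```python
import numpy as np

def dynamic_itd_tone(f=500.0, fs=48000, dur=0.08, itd_peak_us=500.0,
                     at_onset=True):
    """Pure tone whose fine-structure ITD ramps linearly between itd_peak_us
    and zero over the tone: peak at onset and zero at offset when at_onset is
    True, the reverse otherwise. Returns (left, right)."""
    t = np.arange(int(fs * dur)) / fs
    w = t / t[-1]                              # 0 -> 1 across the tone
    itd = (itd_peak_us * 1e-6) * ((1 - w) if at_onset else w)
    left = np.sin(2 * np.pi * f * t)
    right = np.sin(2 * np.pi * f * (t - itd))  # right delayed => left leads
    return left, right
```

At 500 Hz a 500-us ITD is a quarter cycle, so with the cue at onset the right channel starts a quarter cycle behind the left and converges to it by the offset.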