1
Joris PX, Verschooten E, Mc Laughlin M, Versteegh C, van der Heijden M. Frequency selectivity in monkey auditory nerve studied with suprathreshold multicomponent stimuli. Hear Res 2024; 443:108964. [PMID: 38277882] [DOI: 10.1016/j.heares.2024.108964]
Abstract
Data from non-human primates can help extend observations from non-primate species to humans. Here we report measurements on the auditory nerve of macaque monkeys in the context of a controversial topic important to human hearing. A range of techniques has been used to examine the claim, not generally accepted, that human frequency tuning is sharper than traditionally thought, and sharper than in commonly used animal models. Data from single auditory-nerve fibers occupy a pivotal position in examining this claim, but are not available for humans. A previous study reported sharper tuning in auditory-nerve fibers of macaque relative to the cat. A limitation of these and other single-fiber data is that frequency selectivity was measured with tonal threshold-tuning curves, which do not directly assess spectral filtering and whose shape is sharpened by cochlear nonlinearity. Our aim was to measure spectral filtering with wideband suprathreshold stimuli in the macaque auditory nerve. We obtained responses of single nerve fibers of anesthetized macaque monkeys and cats to a suprathreshold, wideband, multicomponent stimulus designed to allow characterization of spectral filtering at any cochlear locus. Quantitatively, the differences between the two species are smaller than in previous studies, but, consistent with those studies, the filters show a trend of sharper tuning in macaque than in cat for fibers in the basal half of the cochlea. We also examined differences in group delay measured on the phase data near the characteristic frequency versus in the low-frequency tail. The phase data are consistent with the interpretation of sharper frequency tuning in monkey in the basal half of the cochlea. We conclude that the use of suprathreshold, wideband stimuli supports the interpretation of sharper frequency selectivity in macaque nerve fibers relative to the cat, although the difference is less marked than apparent from assessments with tonal threshold-based data.
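In studies like this one, the sharpness of a frequency filter is commonly summarized by a quality factor such as Q10: the characteristic frequency divided by the tuning-curve bandwidth 10 dB above the tip. A minimal sketch (the V-shaped threshold curve and all values below are invented for illustration, not data from this study):

```python
import numpy as np

def q10(freqs_hz, thresh_db):
    """Quality factor Q10: characteristic frequency divided by the
    bandwidth of the tuning curve 10 dB above the tip threshold."""
    tip = np.argmin(thresh_db)
    cf = freqs_hz[tip]
    cutoff = thresh_db[tip] + 10.0
    # frequencies where the threshold is still within 10 dB of the tip
    inside = freqs_hz[thresh_db <= cutoff]
    bw10 = inside.max() - inside.min()
    return cf / bw10

# Hypothetical V-shaped threshold curve around a 4 kHz CF,
# rising 1 dB per 50 Hz away from the tip.
f = np.linspace(2000, 8000, 601)
thr = 20 + np.abs(f - 4000) / 50
print(round(q10(f, thr), 2))   # 4000 Hz CF / 1000 Hz 10-dB bandwidth -> 4.0
```

Sharper tuning at a given CF corresponds to a larger Q10.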
Affiliation(s)
- P X Joris
- Lab of Auditory Neurophysiology, KU Leuven, O&N2 KU Leuven, Herestraat 49 bus 1021, Leuven B-3000, Belgium
- E Verschooten
- Lab of Auditory Neurophysiology, KU Leuven, O&N2 KU Leuven, Herestraat 49 bus 1021, Leuven B-3000, Belgium
- M Mc Laughlin
- Lab of Auditory Neurophysiology, KU Leuven, O&N2 KU Leuven, Herestraat 49 bus 1021, Leuven B-3000, Belgium
- C P C Versteegh
- Department of Neuroscience, Erasmus MC, Rotterdam, the Netherlands
2
van der Heijden M, Vavakou A. Rectifying and sluggish: Outer hair cells as regulators rather than amplifiers. Hear Res 2021; 423:108367. [PMID: 34686384] [DOI: 10.1016/j.heares.2021.108367]
Abstract
In the cochlea, mechano-electrical transduction is preceded by dynamic range compression. Outer hair cells (OHCs) and their voltage-dependent length changes, known as electromotility, play a central role in this compression process, but the exact mechanisms are poorly understood. Here we review old and new experimental findings and show that (1) just audible high-frequency tones evoke an ∼1-microvolt AC receptor potential in basal OHCs; (2) any mechanical amplification of soft high-frequency tones by OHC motility would have an adverse effect on their audibility; (3) having a higher basolateral K+ conductance, while increasing the OHC corner frequency, does not boost the magnitude of the high-frequency AC receptor potential; (4) OHC receptor currents display a substantial rectified (DC) component; (5) mechanical DC responses (baseline shifts) to acoustic stimuli, while insignificant on the basilar membrane, can be comparable in magnitude to AC responses when recorded in the organ of Corti, both in the apex and the base. In the basal turn, the DC component may even exceed the AC component, lending support to Dallos' suggestion that both apical and basal OHCs display a significant degree of rectification. We further show that (6) low-intensity cochlear traveling waves, by virtue of their abrupt transition from fast to slow propagation, are well suited to transport high-frequency energy with minimal losses (∼2-dB loss for 16-kHz tones in the gerbil); (7) a 90-dB, 16-kHz tone, if transmitted without loss to its tonotopic place, would evoke a destructive displacement amplitude of 564 nm. We interpret these findings in a framework in which local dissipation is regulated by OHC motility.
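The AC/DC distinction in points (1)-(5) can be made concrete with a toy computation: for a half-wave rectified tone, the DC component is the mean (baseline shift) and the AC component is the Fourier magnitude at the stimulus frequency. The sketch below uses an idealized rectifier and invented parameters, not a quantitative OHC model:

```python
import numpy as np

fs, f0 = 100_000, 16_000   # sample rate and tone frequency (Hz), assumed
t = np.arange(1000) / fs   # 10 ms, an integer number of stimulus cycles

# Toy receptor current: half-wave rectification of a 16 kHz tone, standing
# in for a saturating, rectifying transducer nonlinearity.
response = np.maximum(np.sin(2 * np.pi * f0 * t), 0.0)

dc = response.mean()       # rectified (baseline-shift) component, ~1/pi
# AC component: magnitude of the single-bin Fourier component at f0, ~0.5
ac = 2 * np.abs(np.exp(-2j * np.pi * f0 * t) @ response) / t.size
print(dc, ac)
```

For an ideal half-wave rectified unit sine, the DC term is 1/π ≈ 0.32 and the fundamental AC term is 0.5, so rectification makes the two components comparable in size, as in point (5).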
Affiliation(s)
- Anna Vavakou
- Department of Neuroscience, Erasmus MC, Rotterdam, the Netherlands
3
Moheimanian L, Paraskevopoulou SE, Adamek M, Schalk G, Brunner P. Modulation in cortical excitability disrupts information transfer in perceptual-level stimulus processing. Neuroimage 2021; 243:118498. [PMID: 34428572] [DOI: 10.1016/j.neuroimage.2021.118498]
Abstract
Despite significant interest in the neural underpinnings of behavioral variability, little light has been shed on the cortical mechanism underlying the failure to respond to perceptual-level stimuli. We hypothesized that cortical activity resulting from perceptual-level stimuli is sensitive to the moment-to-moment fluctuations in cortical excitability, and thus may not suffice to produce a behavioral response. We tested this hypothesis using electrocorticographic recordings to follow the propagation of cortical activity in six human subjects who responded to perceptual-level auditory stimuli. Here we show that for presentations that did not result in a behavioral response, the likelihood of cortical activity decreased from auditory cortex to motor cortex, and was related to reduced local cortical excitability. Cortical excitability was quantified using instantaneous voltage during a short window prior to cortical activity onset. Therefore, when humans are presented with an auditory stimulus close to perceptual threshold, moment-by-moment fluctuations in cortical excitability determine whether cortical responses to sensory stimulation successfully connect auditory input to a resultant behavioral response.
Affiliation(s)
- Ladan Moheimanian
- National Center for Adaptive Neurotechnologies, Albany, NY, USA; Department of Biomedical Sciences, State University of New York at Albany, Albany, NY, USA
- Markus Adamek
- National Center for Adaptive Neurotechnologies, Albany, NY, USA; Department of Neuroscience, Washington University School of Medicine, St. Louis, MO, USA
- Gerwin Schalk
- National Center for Adaptive Neurotechnologies, Albany, NY, USA; Department of Biomedical Sciences, State University of New York at Albany, Albany, NY, USA
- Peter Brunner
- National Center for Adaptive Neurotechnologies, Albany, NY, USA; Department of Biomedical Sciences, State University of New York at Albany, Albany, NY, USA; Department of Neurology, Albany Medical College, Albany, NY, USA; Department of Neurosurgery, Washington University School of Medicine, St. Louis, MO, USA
4
Temporal Correlates to Monaural Edge Pitch in the Distribution of Interspike Interval Statistics in the Auditory Nerve. eNeuro 2021; 8:ENEURO.0292-21.2021. [PMID: 34281977] [PMCID: PMC8387151] [DOI: 10.1523/eneuro.0292-21.2021]
Abstract
Pitch is a perceptual attribute enabling perception of melody. There is no consensus regarding the fundamental nature of pitch and its underlying neural code. A stimulus which has received much interest in psychophysical and computational studies is noise with a sharp spectral edge. High-pass (HP) or low-pass (LP) noise gives rise to a pitch near the edge frequency (monaural edge pitch; MEP). The simplicity of this stimulus, combined with its spectral and autocorrelation properties, makes it well suited for examining spectral versus temporal cues that could underlie its pitch. We recorded responses of single auditory nerve (AN) fibers in chinchilla to MEP stimuli varying in edge frequency. Temporal cues were examined with shuffled autocorrelogram (SAC) analysis. Correspondence between the population's dominant interspike interval and reported pitch estimates was poor. A fuller analysis of the population interspike interval distribution, which incorporates not only the dominant interval but all intervals, results in good matches with behavioral results, but not for the entire range of edge frequencies that generates pitch. Finally, we also examined temporal structure over a slower time scale, intermediate between average firing rate and interspike intervals, by studying the SAC envelope. We found that, in response to a given MEP stimulus, this feature also varies systematically with edge frequency across fibers with different characteristic frequency (CF). Because neural mechanisms to extract envelope cues are well established, and because this cue is not limited by coding of stimulus fine-structure, this newly identified slower temporal cue is a more plausible basis for pitch than cues based on fine-structure.
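The core of SAC analysis is tallying all-order intervals between spikes drawn from different presentations of the same stimulus, so the resulting correlogram reflects stimulus-locked timing rather than refractoriness. A minimal sketch with fabricated, phase-locked spike trains (all values invented, not this study's data):

```python
import numpy as np

def sac_counts(trains, binwidth, maxlag):
    """Shuffled autocorrelogram: histogram of all-order intervals between
    spikes of *different* repetitions of the same stimulus."""
    edges = np.arange(-maxlag, maxlag + binwidth / 2, binwidth)
    counts = np.zeros(len(edges) - 1)
    for i, a in enumerate(trains):
        for j, b in enumerate(trains):
            if i == j:
                continue  # the "shuffle": never pair a train with itself
            diffs = np.subtract.outer(a, b).ravel()
            counts += np.histogram(diffs, bins=edges)[0]
    return edges, counts

# Toy data: 5 repetitions phase-locked to a 100 Hz tone, one spike per
# 10 ms cycle with 0.5 ms jitter.
rng = np.random.default_rng(0)
trains = [np.arange(10) * 0.010 + rng.normal(0.0, 0.0005, 10)
          for _ in range(5)]
edges, counts = sac_counts(trains, binwidth=0.002, maxlag=0.005)
# Phase-locking shows up as a dominant peak at zero lag.
print(int(np.argmax(counts)) == len(counts) // 2)
```

In practice the counts are normalized by the number of pairings, spike rate, binwidth, and duration to yield a dimensionless correlogram, which is omitted here for brevity.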
5
Turner MD, Berg BG. Transition bandwidths for stimuli with sparse spectral densities (L). J Acoust Soc Am 2020; 147:794. [PMID: 32113300] [DOI: 10.1121/10.0000651]
Abstract
Transition bandwidths, observed as peaks in threshold functions in band-widening discrimination experiments, exhibit a number of notable features, such as a tenfold range of individual differences. The transition from a discrimination process based on temporal features to a process akin to profile analysis occurs automatically when the stimulus becomes wide enough to support across channel comparisons. A challenging finding is that transition bandwidths are unaffected by spectral density, tolerating frequency differences between spectral components as great as 400 Hz. Theoretical considerations based on this fact favor distinguishing between spectral and temporal processes as early as the initial stage of peripheral filtering.
Affiliation(s)
- Matthew D Turner
- Department of Cognitive Sciences, University of California Irvine, Irvine, California 92697-5100, USA
- Bruce G Berg
- Department of Cognitive Sciences, University of California Irvine, Irvine, California 92697-5100, USA
6
Effect of sound level on virtual and free-field localization of brief sounds in the anterior median plane. Hear Res 2018; 365:28-35. [DOI: 10.1016/j.heares.2018.06.004]
7
Berg BG, Zhu J, Tan AY, Borucki EM. Discrimination bandwidths for amplitude modulated and quasi-frequency modulated tones with spectral cues degraded by a roving-level. J Acoust Soc Am 2018; 143:3639. [PMID: 29960508] [DOI: 10.1121/1.5042541]
Abstract
Theoretically, discriminating an amplitude modulated tone (AM) from a quasi-frequency modulated tone (QFM) is an ideal task for measuring the bandwidth of phase sensitivity because the stimuli have identical amplitude spectra but different phase spectra. The stimuli are perfectly discriminable at narrow bandwidths, but become indistinguishable at wide bandwidths. Measurements, however, are thought to be compromised by auditory distortion products, particularly a cubic distortion tone which interacts with the lower sideband of the stimulus to create an intensity cue. The results and implications of using a roving-level procedure to eliminate distortion product effects are discussed.
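The AM/QFM pair is the classic three-component construction in which only the carrier phase differs: shifting the AM carrier by 90° relative to its sidebands yields QFM, leaving the amplitude spectrum untouched. A sketch (carrier, modulator, and depth values are invented) verifying the identical amplitude spectra:

```python
import numpy as np

fs, dur = 48_000, 0.5
t = np.arange(int(fs * dur)) / fs
fc, fm, m = 2000, 100, 0.5            # carrier, modulator (Hz), depth

side = (m / 2) * (np.sin(2 * np.pi * (fc - fm) * t)
                  + np.sin(2 * np.pi * (fc + fm) * t))
am = np.sin(2 * np.pi * fc * t) + side    # sidebands in AM phase
qfm = np.cos(2 * np.pi * fc * t) + side   # carrier shifted 90 deg -> QFM

# Same three-component amplitude spectrum, different phase spectrum:
assert np.allclose(np.abs(np.fft.rfft(am)), np.abs(np.fft.rfft(qfm)),
                   atol=1e-6)
```

Because the spectra match, any reliable discrimination must rest on phase (temporal fine-structure) cues, which is the logic of the task.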
Affiliation(s)
- Bruce G Berg
- Department of Cognitive Sciences, University of California Irvine, Irvine, California 92697-5100, USA
- Joann Zhu
- Department of Cognitive Sciences, University of California Irvine, Irvine, California 92697-5100, USA
- Alison Y Tan
- Department of Cognitive Sciences, University of California Irvine, Irvine, California 92697-5100, USA
- Ewa M Borucki
- Department of Cognitive Sciences, University of California Irvine, Irvine, California 92697-5100, USA
8
Wei L, Karino S, Verschooten E, Joris PX. Enhancement of phase-locking in rodents. I. An axonal recording study in gerbil. J Neurophysiol 2017; 118:2009-2023. [PMID: 28701535] [DOI: 10.1152/jn.00194.2016]
Abstract
The trapezoid body (TB) contains axons of neurons in the anteroventral cochlear nucleus projecting to monaural and binaural nuclei in the superior olivary complex (SOC). Characterization of these monaural inputs is important for the interpretation of response properties of SOC neurons. In particular, understanding of the sensitivity to interaural time differences (ITDs) in neurons of the medial and lateral superior olive requires knowledge of the temporal firing properties of the monaural excitatory and inhibitory inputs to these neurons. In recent years, studies of ITD sensitivity of SOC neurons have made increasing use of small animal models with good low-frequency hearing, particularly the gerbil. We presented stimuli as used in binaural studies to monaural neurons in the TB and studied their temporal coding. We found that general trends as have been described in the cat are present in gerbil, but with some important differences. Phase-locking to pure tones tends to be higher in TB axons and in neurons of the medial nucleus of the TB (MNTB) than in the auditory nerve for neurons with characteristic frequencies (CFs) below 1 kHz, but this enhancement is quantitatively more modest than in cat. Stronger enhancement is common when TB neurons are stimulated at low frequencies below CF. It is rare for TB neurons in gerbil to entrain to low-frequency stimuli, i.e., to discharge a well-timed spike on every stimulus cycle. Also, complex phase-locking behavior, with multiple modes of increased firing probability per stimulus cycle, is common in response to low frequencies below CF. NEW & NOTEWORTHY: Phase-locking is an important property of neurons in the early auditory pathway: it is critical for the sensitivity to time differences between the two ears enabling spatial hearing. Studies in cat have shown an improvement in phase-locking from the peripheral to the central auditory nervous system. We recorded from axons in an output tract of the cochlear nucleus and show that a similar but more limited form of temporal enhancement is present in gerbil.
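Phase-locking in such studies is conventionally quantified by vector strength: the length of the mean resultant vector of spike times expressed as phases of the stimulus cycle. A sketch with fabricated spike times (not data from this study):

```python
import numpy as np

def vector_strength(spike_times, freq):
    """Vector strength: 0 means no phase-locking, 1 means every spike
    falls at exactly the same phase of the stimulus cycle."""
    phases = 2 * np.pi * freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

# Hypothetical spikes locked to a 500 Hz tone (2 ms period, 0.1 ms jitter)
rng = np.random.default_rng(1)
locked = np.arange(200) * 0.002 + rng.normal(0, 0.0001, 200)
unlocked = rng.uniform(0, 0.4, 200)   # spikes at random times, as control
print(vector_strength(locked, 500), vector_strength(unlocked, 500))
```

"Enhancement" of phase-locking, as in the abstract, means a higher vector strength in TB/MNTB responses than in auditory-nerve responses at the same frequency.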
Affiliation(s)
- Liting Wei
- Laboratory of Auditory Neurophysiology, KU Leuven, Leuven, Belgium
- Shotaro Karino
- Laboratory of Auditory Neurophysiology, KU Leuven, Leuven, Belgium
- Eric Verschooten
- Laboratory of Auditory Neurophysiology, KU Leuven, Leuven, Belgium
- Philip X Joris
- Laboratory of Auditory Neurophysiology, KU Leuven, Leuven, Belgium
9
Happel MFK, Ohl FW. Compensating Level-Dependent Frequency Representation in Auditory Cortex by Synaptic Integration of Corticocortical Input. PLoS One 2017; 12:e0169461. [PMID: 28046062] [PMCID: PMC5207691] [DOI: 10.1371/journal.pone.0169461]
Abstract
Robust perception of auditory objects over a large range of sound intensities is a fundamental feature of the auditory system. However, firing characteristics of single neurons across the entire auditory system, such as frequency tuning, can change significantly with stimulus intensity. Physiological correlates of level-constancy of auditory representations should hence be manifested at the level of larger neuronal assemblies or population patterns. In this study we investigated how information about frequency and sound level is integrated at the circuit level in the primary auditory cortex (AI) of the Mongolian gerbil. We used a combination of pharmacological silencing of corticocortically relayed activity and laminar current source density (CSD) analysis. Our data demonstrate that, with increasing stimulus intensity, progressively lower frequencies evoke the maximal response within the cortical input layers at a given cortical site, a property inherited from thalamocortical synaptic inputs. We further identified a temporally precise intercolumnar synaptic convergence of early thalamocortical and horizontal corticocortical inputs. Later tone-evoked activity in upper layers showed a preservation of broad tonotopic tuning across sound levels without shifts towards lower frequencies. Synaptic integration within corticocortical circuits may hence contribute to a level-robust representation of auditory information at the neuronal population level in auditory cortex.
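Laminar CSD analysis conventionally estimates net transmembrane current as the negative second spatial derivative of the field potential across equidistant contacts. A sketch on a synthetic depth profile (geometry and the conductivity placeholder are invented, not this study's data):

```python
import numpy as np

def csd(lfp, dz, sigma=1.0):
    """One-dimensional CSD estimate for interior channels:
    CSD_i = -sigma * (phi[i-1] - 2*phi[i] + phi[i+1]) / dz**2,
    with sigma a placeholder tissue conductivity."""
    phi = np.asarray(lfp, dtype=float)
    return -sigma * (phi[:-2] - 2 * phi[1:-1] + phi[2:]) / dz**2

# Synthetic laminar LFP profile: a Gaussian potential trough centered on
# channel 5 of 11 equidistant contacts (all values invented).
z = np.arange(11) * 0.1                      # electrode depths, mm
phi = -np.exp(-((z - 0.5) ** 2) / (2 * 0.02))
profile = csd(phi, dz=0.1)
# The most negative CSD (a current sink) falls at the trough's center,
# i.e. interior-channel index 4 (= contact 5 overall).
print(int(np.argmin(profile)))               # 4
```

Sinks identified this way in the input layers are what the abstract's "maximal impulse response" refers to.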
Affiliation(s)
- Max F. K. Happel
- Leibniz Institute for Neurobiology, D-39118 Magdeburg, Germany
- Institute of Biology, Otto-von-Guericke-University, D-39120 Magdeburg, Germany
- Frank W. Ohl
- Leibniz Institute for Neurobiology, D-39118 Magdeburg, Germany
- Institute of Biology, Otto-von-Guericke-University, D-39120 Magdeburg, Germany
- Center for Behavioral Brain Sciences (CBBS), Magdeburg, Germany
10
Colin D, Micheyl C, Girod A, Truy E, Gallégo S. Binaural Diplacusis and Its Relationship with Hearing-Threshold Asymmetry. PLoS One 2016; 11:e0159975. [PMID: 27536884] [PMCID: PMC4990190] [DOI: 10.1371/journal.pone.0159975]
Abstract
Binaural pitch diplacusis refers to a perceptual anomaly whereby the same sound is perceived as having a different pitch depending on whether it is presented in the left or the right ear. Results in the literature suggest that this phenomenon is more prevalent, and larger, in individuals with asymmetric hearing loss than in individuals with symmetric hearing. However, because studies devoted to this effect have thus far involved small samples, the prevalence of the effect, and its relationship with interaural asymmetries in hearing thresholds, remain unclear. In this study, psychometric functions for interaural pitch comparisons were measured in 55 subjects, including 12 normal-hearing and 43 hearing-impaired participants. Statistically significant pitch differences between the left and right ears were observed in normal-hearing participants, but the effect was usually small (less than 1.5/16 octave, or about 7%). For the hearing-impaired participants, statistically significant interaural pitch differences were found in about three-quarters of the cases. Moreover, for about half of these participants, the difference exceeded 1.5/16 octaves and, in some participants, was as large as or larger than 1/4 octave. This was the case even for the lowest frequency tested, 500 Hz. The pitch differences were weakly, but significantly, correlated with the difference in hearing thresholds between the two ears, such that larger threshold asymmetries were statistically associated with larger pitch differences. For the vast majority of the hearing-impaired participants, the direction of the pitch differences was such that pitch was perceived as higher on the side with the higher (i.e., ‘worse’) hearing thresholds than on the opposite side. These findings are difficult to reconcile with purely temporal models of pitch perception, but may be accounted for by place-based or spectrotemporal models.
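Octave fractions translate to percent frequency differences through the ratio 2^n; for example, the 1.5/16-octave criterion corresponds to the "about 7%" quoted above. A quick conversion sketch:

```python
# A pitch difference of n octaves corresponds to a frequency ratio of 2**n,
# hence a percent frequency difference of (2**n - 1) * 100.
def octaves_to_percent(n_octaves):
    return (2 ** n_octaves - 1) * 100

print(round(octaves_to_percent(1.5 / 16), 1))   # 6.7, i.e. "about 7%"
print(round(octaves_to_percent(0.25), 1))       # 1/4 octave -> 18.9%
```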
Affiliation(s)
- David Colin
- Lyon Neuroscience Research Center, IMPACT Team, CRNL, INSERM U1028, CNRS UMR5292, Lyon, France
- Institut des Sciences et Techniques de la Réadaptation, Lyon, France
- University Lyon 1, Lyon, France
- Anneline Girod
- Institut des Sciences et Techniques de la Réadaptation, Lyon, France
- Eric Truy
- Lyon Neuroscience Research Center, IMPACT Team, CRNL, INSERM U1028, CNRS UMR5292, Lyon, France
- Departement ORL, Hôpital Edouard Herriot, Centre Hospitalier et Universitaire, Lyon, France
- University Lyon 1, Lyon, France
- Stéphane Gallégo
- Institut des Sciences et Techniques de la Réadaptation, Lyon, France
- University Lyon 1, Lyon, France
11
Heil P, Peterson AJ. Spike timing in auditory-nerve fibers during spontaneous activity and phase locking. Synapse 2016; 71:5-36. [DOI: 10.1002/syn.21925]
Affiliation(s)
- Peter Heil
- Department of Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg 39118, Germany
- Center for Behavioral Brain Sciences, Magdeburg, Germany
- Adam J. Peterson
- Department of Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg 39118, Germany
12
Lewis JD, Kopun J, Neely ST, Schmid KK, Gorga MP. Tone-burst auditory brainstem response wave V latencies in normal-hearing and hearing-impaired ears. J Acoust Soc Am 2015; 138:3210-3219. [PMID: 26627795] [PMCID: PMC4662677] [DOI: 10.1121/1.4935516]
Abstract
The metric used to equate stimulus level [sound pressure level (SPL) or sensation level (SL)] between ears with normal hearing (NH) and ears with hearing loss (HL) in comparisons of auditory function can influence interpretation of results. When stimulus level is equated in dB SL, higher SPLs are presented to ears with HL due to their reduced sensitivity. As a result, it may be difficult to determine if differences between ears with NH and ears with HL are due to cochlear pathology or level-dependent changes in cochlear mechanics. To the extent that level-dependent changes in cochlear mechanics contribute to auditory brainstem response latencies, comparisons between normal and pathologic ears may depend on the stimulus levels at which comparisons are made. To test this hypothesis, wave V latencies were measured in 16 NH ears and 15 ears with mild-to-moderate HL. When stimulus levels were equated in SL, latencies were shorter in HL ears. However, latencies were similar for NH and HL ears when stimulus levels were equated in SPL. These observations demonstrate that the effect of stimulus level on wave V latency is large relative to the effect of HL, at least in cases of mild-to-moderate HL.
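The SPL/SL distinction is simple arithmetic: sensation level is level above the ear's own threshold, so equating levels in dB SL necessarily presents higher SPLs to ears with elevated thresholds. A sketch (the threshold values below are invented for illustration):

```python
def sl_to_spl(level_sl_db, threshold_spl_db):
    """Sensation level (SL) is stimulus level re the ear's own threshold,
    so the physical level is the SL plus that threshold."""
    return level_sl_db + threshold_spl_db

# Same 30 dB SL stimulus, two hypothetical ears:
print(sl_to_spl(30, 10))   # NH ear, 10 dB SPL threshold -> 40 dB SPL
print(sl_to_spl(30, 45))   # HL ear, 45 dB SPL threshold -> 75 dB SPL
```

The 35 dB SPL gap between the two ears is exactly the level-dependent confound the abstract describes.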
Affiliation(s)
- James D Lewis
- Boys Town National Research Hospital, 555 North 30th Street, Omaha, Nebraska 68131, USA
- Judy Kopun
- Boys Town National Research Hospital, 555 North 30th Street, Omaha, Nebraska 68131, USA
- Stephen T Neely
- Boys Town National Research Hospital, 555 North 30th Street, Omaha, Nebraska 68131, USA
- Kendra K Schmid
- Boys Town National Research Hospital, 555 North 30th Street, Omaha, Nebraska 68131, USA
- Michael P Gorga
- Boys Town National Research Hospital, 555 North 30th Street, Omaha, Nebraska 68131, USA
13
van der Heijden M, Versteegh CPC. Energy Flux in the Cochlea: Evidence Against Power Amplification of the Traveling Wave. J Assoc Res Otolaryngol 2015; 16:581-97. [PMID: 26148491] [PMCID: PMC4569608] [DOI: 10.1007/s10162-015-0529-5]
Abstract
Traveling waves in the inner ear exhibit an amplitude peak that shifts with frequency. The peaking is commonly believed to rely on motile processes that amplify the wave by inserting energy. We recorded the vibrations at adjacent positions on the basilar membrane in sensitive gerbil cochleae and tested the putative power amplification in two ways. First, we determined the energy flux of the traveling wave at its peak and compared it to the acoustic power entering the ear, thereby obtaining the net cochlear power gain. For soft sounds, the energy flux at the peak was 1 ± 0.6 dB less than the middle ear input power. For more intense sounds, increasingly smaller fractions of the acoustic power actually reached the peak region. Thus, we found no net power amplification of soft sounds and a strong net attenuation of intense sounds. Second, we analyzed local wave propagation on the basilar membrane. We found that the waves slowed down abruptly when approaching their peak, causing an energy densification that quantitatively matched the amplitude peaking, similar to the growth of sea waves approaching the beach. Thus, we found no local power amplification of soft sounds and strong local attenuation of intense sounds. The most parsimonious interpretation of these findings is that cochlear sensitivity is not realized by amplifying acoustic energy, but by spatially focusing it, and that dynamic compression is realized by adjusting the amount of dissipation to sound intensity.
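The sea-wave analogy can be made quantitative: for lossless propagation the power flux P = v_g · E (group velocity times energy density per unit length) is conserved, so slowing concentrates energy and the amplitude, proportional to √E, grows. A sketch with an invented velocity ratio (not the paper's measurements):

```python
import math

# Lossless wave: P = v_g * E is constant along the propagation path, so
# E ~ 1/v_g and amplitude ~ sqrt(E) ~ sqrt(v_fast / v_slow).
def amplitude_gain_db(v_fast, v_slow):
    """Amplitude gain from the slow-down alone, assuming no dissipation
    and no change in the transverse mode shape."""
    return 10 * math.log10(v_fast / v_slow)  # 20*log10 of the sqrt ratio

# Invented example: propagation speed drops 100-fold near the peak.
print(round(amplitude_gain_db(100.0, 1.0), 1))   # 20.0 dB from focusing alone
```

This illustrates how a large amplitude peak can arise from spatial focusing of energy, with no power inserted into the wave.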
Affiliation(s)
- Marcel van der Heijden
- Department of Neuroscience, Erasmus MC, Room Ee 1285, P.O. Box 2040, 3000 CA, Rotterdam, The Netherlands
- Corstiaen P C Versteegh
- Department of Neuroscience, Erasmus MC, Room Ee 1285, P.O. Box 2040, 3000 CA, Rotterdam, The Netherlands
14
Heil P, Peterson AJ. Basic response properties of auditory nerve fibers: a review. Cell Tissue Res 2015; 361:129-58. [PMID: 25920587] [DOI: 10.1007/s00441-015-2177-9]
Abstract
All acoustic information from the periphery is encoded in the timing and rates of spikes in the population of spiral ganglion neurons projecting to the central auditory system. Considerable progress has been made in characterizing the physiological properties of type-I and type-II primary auditory afferents and understanding the basic properties of type-I afferents in response to sounds. Here, we review some of these properties, with emphasis placed on issues such as the stochastic nature of spike timing during spontaneous and driven activity, frequency tuning curves, spike-rate-versus-level functions, dynamic-range and spike-rate adaptation, and phase locking to stimulus fine structure and temporal envelope. We also review effects of acoustic trauma on some of these response properties.
Affiliation(s)
- Peter Heil
- Leibniz Institute for Neurobiology, Brenneckestrasse 6, 39118 Magdeburg, Germany
15
Bones O, Plack CJ. Subcortical representation of musical dyads: individual differences and neural generators. Hear Res 2015; 323:9-21. [PMID: 25636498] [DOI: 10.1016/j.heares.2015.01.009]
Abstract
When two notes are played simultaneously they form a musical dyad. The sensation of pleasantness, or "consonance", of a dyad is likely driven by the harmonic relation of the frequency components of the combined spectrum of the two notes. Previous work has demonstrated a relation between individual preference for consonant over dissonant dyads, and the strength of neural temporal coding of the harmonicity of consonant relative to dissonant dyads as measured using the electrophysiological "frequency-following response" (FFR). However, this work also demonstrated that both these variables correlate strongly with musical experience. The current study was designed to determine whether the relation between consonance preference and neural temporal coding is maintained when controlling for musical experience. The results demonstrate that strength of neural coding of harmonicity is predictive of individual preference for consonance even for non-musicians. An additional purpose of the current study was to assess the cochlear generation site of the FFR to low-frequency dyads. By comparing the reduction in FFR strength when high-pass masking noise was added to the output of a model of the auditory periphery, the results provide evidence that the FFR to low-frequency dyads arises in part from basal cochlear generators.
Affiliation(s)
- Oliver Bones
- School of Psychological Sciences, University of Manchester, Manchester M13 9PL, UK
- Christopher J Plack
- School of Psychological Sciences, University of Manchester, Manchester M13 9PL, UK
16
17
Lopez-Poveda EA. Why do I hear but not understand? Stochastic undersampling as a model of degraded neural encoding of speech. Front Neurosci 2014; 8:348. [PMID: 25400543] [PMCID: PMC4214224] [DOI: 10.3389/fnins.2014.00348]
Abstract
Hearing impairment is a serious disease with increasing prevalence. It is defined based on increased audiometric thresholds but increased thresholds are only partly responsible for the greater difficulty understanding speech in noisy environments experienced by some older listeners or by hearing-impaired listeners. Identifying the additional factors and mechanisms that impair intelligibility is fundamental to understanding hearing impairment but these factors remain uncertain. Traditionally, these additional factors have been sought in the way the speech spectrum is encoded in the pattern of impaired mechanical cochlear responses. Recent studies, however, are steering the focus toward impaired encoding of the speech waveform in the auditory nerve. In our recent work, we gave evidence that a significant factor might be the loss of afferent auditory nerve fibers, a pathology that comes with aging or noise overexposure. Our approach was based on a signal-processing analogy whereby the auditory nerve may be regarded as a stochastic sampler of the sound waveform and deafferentation may be described in terms of waveform undersampling. We showed that stochastic undersampling simultaneously degrades the encoding of soft and rapid waveform features, and that this degrades speech intelligibility in noise more than in quiet without significant increases in audiometric thresholds. Here, we review our recent work in a broader context and argue that the stochastic undersampling analogy may be extended to study the perceptual consequences of various different hearing pathologies and their treatment.
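The sampling analogy can be illustrated with a toy simulation: treat each fiber as a Poisson sampler of the waveform and reconstruct the waveform by bin-averaging; fewer fibers means sparser sampling and a worse reconstruction. All parameters below (rates, bin width, the 50 Hz "waveform") are invented for illustration and are not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(2)
DUR, NBINS, BIN = 0.2, 40, 0.005   # duration (s), bins, bin width (s)

def undersampled_error(n_fibers, rate_hz=200):
    """Each 'fiber' samples a 50 Hz toy waveform at Poisson-distributed
    times; the waveform is reconstructed by averaging samples per bin,
    and the RMS error against the true waveform is returned."""
    n = rng.poisson(rate_hz * DUR * n_fibers)
    times = rng.uniform(0, DUR, n)
    values = np.sin(2 * np.pi * 50 * times)   # waveform at the spike times
    idx = np.minimum((times // BIN).astype(int), NBINS - 1)
    recon = np.array([values[idx == b].mean() if np.any(idx == b) else 0.0
                      for b in range(NBINS)])
    target = np.sin(2 * np.pi * 50 * (np.arange(NBINS) * BIN + BIN / 2))
    return np.sqrt(np.mean((recon - target) ** 2))

# Fewer fibers -> sparser stochastic sampling -> poorer reconstruction.
print(undersampled_error(50), undersampled_error(2))
```

Soft and rapid waveform features are hit hardest because they attract the fewest samples per bin, which is the intuition behind the intelligibility deficit described above.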
Affiliation(s)
- Enrique A. Lopez-Poveda
- Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain
- Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Salamanca, Spain
18
Alves-Pinto A, Palmer AR, Lopez-Poveda EA. Perception and coding of high-frequency spectral notches: potential implications for sound localization. Front Neurosci 2014; 8:112. [PMID: 24904258 PMCID: PMC4034511 DOI: 10.3389/fnins.2014.00112]
Abstract
The interaction of sound waves with the human pinna introduces high-frequency notches (5-10 kHz) in the stimulus spectrum that are thought to be useful for vertical sound localization. A common view is that these notches are encoded as rate profiles in the auditory nerve (AN). Here, we review previously published psychoacoustical evidence in humans and computer-model simulations of inner hair cell responses to noises with and without high-frequency spectral notches that dispute this view. We also present new recordings from guinea pig AN and "ideal observer" analyses of these recordings that suggest that discrimination between noises with and without high-frequency spectral notches is probably based on the information carried in the temporal pattern of AN discharges. The exact nature of the neural code involved nevertheless remains uncertain: computer-model simulations suggest that high-frequency spectral notches are encoded in spike timing patterns that may be operative in the 4-7 kHz frequency regime, while "ideal observer" analysis of experimental neural responses suggests that an effective cue for high-frequency spectral discrimination may be based on sampling rates of spike arrivals of AN fibers using non-overlapping time binwidths of between 4 and 9 ms. Neural responses show that sensitivity to high-frequency notches is greater for fibers with low and medium spontaneous rates than for fibers with high spontaneous rates. Based on this evidence, we conjecture that inter-subject variability in high-frequency spectral notch detection and, consequently, in vertical sound localization may partly reflect individual differences in the available number of functional medium- and low-spontaneous-rate fibers.
Affiliation(s)
- Ana Alves-Pinto
- Klinikum rechts der Isar, Technische Universität München, Munich, Germany
- Alan R. Palmer
- Medical Research Council Institute of Hearing Research, University Park, Nottingham, UK
- Enrique A. Lopez-Poveda
- Departamento de Cirugía, Facultad de Medicina, Instituto de Neurociencias de Castilla y León, Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
19
Bones O, Hopkins K, Krishnan A, Plack CJ. Phase locked neural activity in the human brainstem predicts preference for musical consonance. Neuropsychologia 2014; 58:23-32. [PMID: 24690415 PMCID: PMC4040538 DOI: 10.1016/j.neuropsychologia.2014.03.011]
Abstract
When musical notes are combined to make a chord, the closeness of fit of the combined spectrum to a single harmonic series (the 'harmonicity' of the chord) predicts the perceived consonance (how pleasant and stable the chord sounds; McDermott, Lehr, & Oxenham, 2010). The distinction between consonance and dissonance is central to Western musical form. Harmonicity is represented in the temporal firing patterns of populations of brainstem neurons. The current study investigates the role of brainstem temporal coding of harmonicity in the perception of consonance. Individual preference for consonant over dissonant chords was measured using a rating scale for pairs of simultaneous notes. In order to investigate the effects of cochlear interactions, notes were presented in two ways: both notes to both ears, or each note to a different ear. The electrophysiological frequency following response (FFR), reflecting sustained neural activity in the brainstem synchronised to the stimulus, was also measured. When both notes were presented to both ears, the perceptual distinction between consonant and dissonant chords was stronger than when the notes were presented to different ears. In the condition in which both notes were presented to both ears, additional low-frequency components, corresponding to difference tones resulting from nonlinear cochlear processing, were observable in the FFR, effectively enhancing the neural harmonicity of consonant chords but not of dissonant chords. Suppressing the cochlear envelope component of the FFR also suppressed the additional frequency components. This suggests that, in the case of consonant chords, difference tones generated by interactions between notes in the cochlea enhance the perception of consonance. Furthermore, individuals with a greater distinction between consonant and dissonant chords in the FFR to individual harmonics had a stronger preference for consonant over dissonant chords.
Overall, the results provide compelling evidence for the role of neural temporal coding in the perception of consonance, and suggest that the representation of harmonicity in phase locked neural firing drives the perception of consonance.
Affiliation(s)
- Oliver Bones
- School of Psychological Sciences, The University of Manchester, Manchester M13 9PL, UK.
- Kathryn Hopkins
- School of Psychological Sciences, The University of Manchester, Manchester M13 9PL, UK
- Ananthanarayan Krishnan
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN 47907, USA
- Christopher J Plack
- School of Psychological Sciences, The University of Manchester, Manchester M13 9PL, UK
20
Cone B, Whitaker R. Dynamics of infant cortical auditory evoked potentials (CAEPs) for tone and speech tokens. Int J Pediatr Otorhinolaryngol 2013; 77:1162-73. [PMID: 23722003 PMCID: PMC3700622 DOI: 10.1016/j.ijporl.2013.04.030]
Abstract
OBJECTIVES Cortical auditory evoked potentials (CAEPs) to tones and speech sounds were obtained in infants to: (1) further knowledge of auditory development above the level of the brainstem during the first year of life; (2) establish CAEP input-output functions for tonal and speech stimuli as a function of stimulus level; and (3) expand the database of CAEPs recorded in awake infants with clinically relevant stimuli, thus providing methodology that could translate to pediatric audiological assessment. Hypotheses concerning CAEP development were that the latency and amplitude input-output functions would reflect immaturity in encoding stimulus level. In a second experiment, infants were tested with the same stimuli used to evoke the CAEPs. Thresholds for these stimuli were determined using observer-based psychophysical techniques. The hypothesis was that the behavioral thresholds would be correlated with CAEP input-output functions because of shared cortical response areas known to be active in sound detection. DESIGN 36 infants between the ages of 4 and 12 months (mean = 8 months, s.d. = 1.8 months) and 9 young adults (mean age 21 years) with normal hearing were tested. First, CAEP amplitude and latency input-output functions were obtained for 4 tone bursts and 7 speech tokens. The tone-burst stimuli were 50 ms tokens of pure tones at 0.5, 1.0, 2.0 and 4.0 kHz. The speech sound tokens, /a/, /i/, /o/, /u/, /m/, /s/, and /∫/, were created from natural speech samples and were also 50 ms in duration. CAEPs were obtained for tone burst and speech token stimuli at 10 dB level decrements in descending order from 70 dB SPL. All CAEP tests were completed while the infants were awake and engaged in quiet play. For the second experiment, observer-based psychophysical methods were used to establish perceptual thresholds for the same speech sound and tone tokens.
RESULTS Infant CAEP component latencies were prolonged by 100-150 ms in comparison to adults. CAEP latency-intensity input-output functions were steeper in infants compared to adults. CAEP amplitude growth functions with respect to stimulus SPL were adult-like at this age, particularly for the earliest component, P1-N1. Infant perceptual thresholds were elevated with respect to those found in adults. Furthermore, perceptual thresholds were higher, on average, than levels at which CAEPs could be obtained. When CAEP amplitudes were plotted with respect to perceptual threshold (dB SL), the infant CAEP amplitude growth slopes were steeper than in adults. CONCLUSIONS Although CAEP latencies indicate immaturity in neural transmission at the level of the cortex, amplitude growth with respect to stimulus SPL is adult-like at this age, particularly for the earliest component, P1-N1. The latency and amplitude input-output functions may provide additional information as to how infants perceive stimulus level. The discrepancy between electrophysiologic and perceptual thresholds may be due to immaturity in perceptual temporal resolution abilities and the broad-band listening strategy employed by infants. The findings from the current study can be translated to the clinical setting. It is possible to use tonal or speech sound tokens to evoke CAEPs in an awake, passively alert infant, and thus determine whether these sounds activate the auditory cortex. This could be beneficial in the verification of hearing aid or cochlear implant benefit.
Affiliation(s)
- Barbara Cone
- University of Arizona, Department of Speech, Language and Hearing Sciences, PO Box 210071, Tucson, AZ 85721, United States.
- Richard Whitaker
- Hearing Science of Rancho Cucamonga, 6283 Grove Avenue, Suite 104, Rancho Cucamonga, CA 91730, United States
21
Phillips DJ, Schei JL, Meighan PC, Rector DM. State-dependent changes in cortical gain control as measured by auditory evoked responses to varying intensity stimuli. Sleep 2011; 34:1527-37. [PMID: 22043124 DOI: 10.5665/sleep.1392]
Abstract
STUDY OBJECTIVES Auditory evoked potential (AEP) components correspond to sequential activation of brain structures within the auditory pathway and reveal neural activity during sensory processing. To investigate state-dependent modulation of stimulus intensity response profiles within different brain structures, we assessed AEP components across both stimulus intensity and state. DESIGN We implanted adult female Sprague-Dawley rats (N = 6) with electrodes to measure EEG, EKG, and EMG. Intermittent auditory stimuli (6-12 s) varying from 50 to 75 dBA were delivered over a 24-h period. Data were parsed into 2-s epochs and scored for wake/sleep state. RESULTS All AEP components increased in amplitude with increased stimulus intensity during wake. During quiet sleep, however, only the early latency response (ELR) showed this relationship, while the middle latency response (MLR) increased only at the highest intensity (75 dBA), and the late latency response (LLR) showed no significant change across the stimulus intensities tested. During rapid eye movement (REM) sleep, both ELR and LLR increased, similar to wake, but MLR was severely attenuated. CONCLUSIONS Stimulation intensity and the corresponding AEP response profile were dependent on both brain structure and sleep state. Lower brain structures maintained the relationship between stimulus intensity and neural response during sleep. This relationship was not observed in the cortex, implying state-dependent modification of stimulus intensity coding. Since cortical AEP amplitude is not reliably modulated by stimulus intensity during sleep, differences between paired 75/50 dBA stimuli could be used to determine state better than individual intensities could.
Affiliation(s)
- Derrick J Phillips
- Department of Veterinary and Comparative Anatomy, Pharmacology and Physiology, Washington State University, Pullman, WA 99164, USA
22
Effect of instantaneous frequency glides on interaural time difference processing by auditory coincidence detectors. Proc Natl Acad Sci U S A 2011; 108:18138-43. [PMID: 22006305 DOI: 10.1073/pnas.1108921108]
Abstract
Detecting interaural time difference (ITD) is crucial for sound localization. The temporal accuracy required to detect ITD, and how ITD is initially encoded, continue to puzzle scientists. A fundamental question is whether the monaural inputs to the binaural ITD detectors differ only in their timing, when temporal and spectral tunings are largely inseparable in the auditory pathway. Here, we investigate the spectrotemporal selectivity of the monaural inputs to ITD detector neurons of the owl. We found that these inputs are selective for instantaneous frequency glides. Modeling shows that ITD tuning depends strongly on whether the monaural inputs are spectrotemporally matched, an effect that may generalize to mammals. We compare the spectrotemporal selectivity of the monaural inputs to ITD detector neurons in vivo, demonstrating that their selectivities match. Finally, we show that this refinement can develop through spike timing-dependent plasticity. Our findings raise the unexplored issue of time-dependent frequency tuning in auditory coincidence detectors and offer a unifying perspective.
23
Frequency selectivity in Old-World monkeys corroborates sharp cochlear tuning in humans. Proc Natl Acad Sci U S A 2011; 108:17516-20. [PMID: 21987783 DOI: 10.1073/pnas.1105867108]
Abstract
Frequency selectivity in the inner ear is fundamental to hearing and is traditionally thought to be similar across mammals. Although direct measurements are not possible in humans, estimates of frequency tuning based on noninvasive recordings of sound evoked from the cochlea (otoacoustic emissions) have suggested substantially sharper tuning in humans but remain controversial. We report measurements of frequency tuning in macaque monkeys, Old-World primates phylogenetically closer to humans than the laboratory animals often taken as models of human hearing (e.g., cats, guinea pigs, chinchillas). We find that measurements of tuning obtained directly from individual auditory-nerve fibers and indirectly using otoacoustic emissions both indicate that at characteristic frequencies above about 500 Hz, peripheral frequency selectivity in macaques is significantly sharper than in these common laboratory animals, matching that inferred for humans above 4-5 kHz. Compared with the macaque, the human otoacoustic estimates thus appear neither prohibitively sharp nor exceptional. Our results validate the use of otoacoustic emissions for noninvasive measurement of cochlear tuning and corroborate the finding of sharp tuning in humans. The results have important implications for understanding the mechanical and neural coding of sound in the human cochlea, and thus for developing strategies to compensate for the degradation of tuning in the hearing-impaired.
24
Laback B, Zimmermann I, Majdak P, Baumgartner WD, Pok SM. Effects of envelope shape on interaural envelope delay sensitivity in acoustic and electric hearing. J Acoust Soc Am 2011; 130:1515-29. [PMID: 21895091 DOI: 10.1121/1.3613704]
Abstract
The envelope shape is important for the perception of interaural time difference (ITD) in the envelope as supported by the improved sensitivity for transposed tones compared to sinusoidally amplitude-modulated (SAM) tones. The present study investigated the effects of specific envelope parameters in nine normal-hearing (NH) and seven cochlear-implant (CI) listeners, using high-rate carriers with 27-Hz trapezoidal modulation. In NH listeners, increasing the off time (the silent interval in each modulation cycle) up to 12 ms, increasing the envelope slope from 6 to 8 dB/ms, and increasing the peak level improved ITD sensitivity. The combined effect of the off time and slope accounts for the gain in sensitivity for transposed tones relative to SAM tones. In CI listeners, increasing the off time up to 20 ms improved sensitivity, but increasing the slope showed no systematic effect. A 27-pulses/s electric pulse train, representing a special case of modulation with infinitely steep slopes and maximum possible off time, yielded considerably higher sensitivity compared to the best condition with trapezoidal modulation. Overall, the results of this study indicate that envelope-ITD sensitivity could be improved by using CI processing schemes that simultaneously increase the off time and the peak level of the signal envelope.
Affiliation(s)
- Bernhard Laback
- Acoustics Research Institute, Austrian Academy of Sciences, Wohllebengasse 12-14, A-1040 Vienna, Austria.
25
Schnee ME, Santos-Sacchi J, Castellano-Muñoz M, Kong JH, Ricci AJ. Calcium-dependent synaptic vesicle trafficking underlies indefatigable release at the hair cell afferent fiber synapse. Neuron 2011; 70:326-38. [PMID: 21521617 DOI: 10.1016/j.neuron.2011.01.031]
Abstract
Sensory hair cell ribbon synapses respond to graded stimulation in a linear, indefatigable manner, requiring that vesicle trafficking to synapses be rapid and non-rate-limiting. Real-time monitoring of vesicle fusion identified two release components. The first was saturable, with both release rate and magnitude varying linearly with Ca(2+); however, the magnitude was too small to account for sustained afferent firing rates. A second, superlinear release component required recruitment, in a Ca(2+)-dependent manner, of vesicles not in the immediate vicinity of the synapse. The superlinear component had a constant rate, with its onset varying with Ca(2+) load. High-speed Ca(2+) imaging revealed a nonlinear increase in internal Ca(2+) correlating with the superlinear capacitance change, implicating release of stored Ca(2+) in driving vesicle recruitment. These data, supported by a mass action model, suggest that sustained release at the hair cell afferent fiber synapse is dictated by Ca(2+)-dependent vesicle recruitment from a reserve pool.
Affiliation(s)
- Michael E Schnee
- Department of Otolaryngology, Stanford University School of Medicine, Stanford, CA 94304, USA
26
Lütkenhöner B. Auditory signal detection appears to depend on temporal integration of subthreshold activity in auditory cortex. Brain Res 2011; 1385:206-16. [PMID: 21316353 DOI: 10.1016/j.brainres.2011.02.011]
Abstract
The threshold of hearing decreases with increasing sound duration up to a limit of a few hundred milliseconds, whereas other auditory time constants are orders of magnitude shorter. A possible solution to this resolution-integration paradox is that temporal integration occurs more centrally than computations depending on high temporal resolution. But this would require information about subthreshold events in the periphery to reach higher centers. Here we show that this prerequisite is fulfilled. The auditory evoked response to a just-perceptible pulse series is essentially independent of whether the individual pulses are below or above behavioral threshold. The failure to find evidence of temporal integration up to response latencies of 30 ms suggests that the integrator is located more centrally than primary auditory cortex. By using noise to its advantage, the auditory system apparently has established a central integration mechanism that is about as efficient as the peripheral one in the visual system.
Affiliation(s)
- Bernd Lütkenhöner
- Section of Experimental Audiology, ENT Clinic, Münster University Hospital, Münster, Germany.
27
Versteegh CPC, Meenderink SWF, van der Heijden M. Response characteristics in the apex of the gerbil cochlea studied through auditory nerve recordings. J Assoc Res Otolaryngol 2011; 12:301-16. [PMID: 21213012 PMCID: PMC3085685 DOI: 10.1007/s10162-010-0255-y]
Abstract
In this study, we analyze the processing of low-frequency sounds in the cochlear apex through responses of auditory nerve fibers (ANFs) that innervate the apex. Single tones and irregularly spaced tone complexes were used to evoke ANF responses in Mongolian gerbil. The spike arrival times were analyzed in terms of phase locking, peripheral frequency selectivity, group delays, and the nonlinear effects of sound pressure level (SPL). Phase locking to single tones was similar to that in cat. Vector strength was maximal for stimulus frequencies around 500 Hz, decreased above 1 kHz, and became insignificant above 4 to 5 kHz. We used the responses to tone complexes to determine amplitude and phase curves of ANFs having a characteristic frequency (CF) below 5 kHz. With increasing CF, amplitude curves gradually changed from broadly tuned and asymmetric with a steep low-frequency flank to more sharply tuned and asymmetric with a steep high-frequency flank. Over the same CF range, phase curves gradually changed from a concave-upward shape to a concave-downward shape. Phase curves consisted of two or three approximately straight segments. Group delay was analyzed separately for these segments. Generally, the largest group delay was observed near CF. With increasing SPL, most amplitude curves broadened, sometimes accompanied by a downward shift of best frequency, and group delay changed along the entire range of stimulus frequencies. We observed considerable across-ANF variation in the effects of SPL on both amplitude and phase. Overall, our data suggest that mechanical responses in the apex of the cochlea are considerably nonlinear and that these nonlinearities are of a different character than those known from the base of the cochlea.
29
Spatiotemporal representation of the pitch of harmonic complex tones in the auditory nerve. J Neurosci 2010; 30:12712-24. [PMID: 20861376 DOI: 10.1523/jneurosci.6365-09.2010]
Abstract
The pitch of harmonic complex tones plays an important role in speech and music perception and the analysis of auditory scenes, yet traditional rate-place and temporal models for pitch processing provide only an incomplete description of the psychophysical data. To test physiologically a model based on spatiotemporal pitch cues created by the cochlear traveling wave (Shamma, 1985), we recorded from single fibers in the auditory nerve of anesthetized cat in response to harmonic complex tones with missing fundamentals and equal-amplitude harmonics. We used the principle of scaling invariance in cochlear mechanics to infer the spatiotemporal response pattern to a given stimulus from a series of measurements made in a single fiber as a function of fundamental frequency F0. We found that spatiotemporal cues to resolved harmonics are available for F0 values between 350 and 1100 Hz and that these cues are more robust than traditional rate-place cues at high stimulus levels. The lower F0 limit is determined by the limited frequency selectivity of the cochlea, whereas the upper limit is caused by the degradation of phase locking to the stimulus fine structure at high frequencies. The spatiotemporal representation is consistent with the upper F0 limit to the perception of the pitch of complex tones with a missing fundamental, and its effectiveness does not depend on the relative phase between resolved harmonics. The spatiotemporal representation is thus consistent with key trends in human psychophysics.
30
Temchin AN, Ruggero MA. Phase-locked responses to tones of chinchilla auditory nerve fibers: implications for apical cochlear mechanics. J Assoc Res Otolaryngol 2009; 11:297-318. [PMID: 19921334 DOI: 10.1007/s10162-009-0197-4]
Abstract
Responses to tones with frequencies ≤ 5 kHz were recorded from auditory nerve fibers (ANFs) of anesthetized chinchillas. With increasing stimulus level, discharge rate-frequency functions shift toward higher and lower frequencies, respectively, for ANFs with characteristic frequencies (CFs) lower and higher than approximately 0.9 kHz. With increasing frequency separation from CF, rate-level functions are less steep and/or saturate at lower rates than at CF, indicating a CF-specific nonlinearity. The strength of phase locking has lower high-frequency cutoffs for CFs > 4 kHz than for CFs < 3 kHz. Phase-frequency functions of ANFs with CFs lower and higher than approximately 0.9 kHz have inflections, respectively, at frequencies higher and lower than CF. For CFs > 2 kHz, the inflections coincide with the tip-tail transitions of threshold tuning curves. ANF responses to CF tones exhibit cumulative phase lags of 1.5 periods for CFs of 0.7-3 kHz and lesser amounts for lower CFs. With increases of stimulus level, responses increasingly lag (lead) lower-level responses at frequencies lower (higher) than CF, so that group delays are maximal at, or slightly above, CF. The CF-specific magnitude and phase nonlinearities of ANFs with CFs < 2.5 kHz span their entire response bandwidths. Several properties of ANFs undergo sharp transitions in the cochlear region with CFs of 2-5 kHz. Overall, the responses of chinchilla ANFs resemble those in other mammalian species but contrast with available measurements of apical cochlear vibrations in chinchilla, implying either that the latter are flawed or that a nonlinear "second filter" is interposed between vibrations and ANF excitation.
Affiliation(s)
- Andrei N Temchin
- Hugh Knowles Center (Department of Communication Sciences and Disorders), Northwestern University, 2240 Campus Drive, Evanston, IL 60208-3550, USA
31
Michalewski HJ, Starr A, Zeng FG, Dimitrijevic A. N100 cortical potentials accompanying disrupted auditory nerve activity in auditory neuropathy (AN): effects of signal intensity and continuous noise. Clin Neurophysiol 2009; 120:1352-63. [PMID: 19535287 DOI: 10.1016/j.clinph.2009.05.013]
Abstract
OBJECTIVE Auditory temporal processing in quiet is impaired in auditory neuropathy (AN), resembling that of normal-hearing subjects tested in noise. N100 latencies were measured from AN subjects at several tone intensities in quiet and in noise for comparison with a group of normal-hearing individuals. METHODS Subjects were tested with brief 100 ms tones (1.0 kHz, 100-40 dB SPL) in quiet and in continuous noise (90 dB SPL). N100 latency and amplitude were analyzed as a function of signal intensity and audibility. RESULTS N100 latency in AN in quiet was delayed and amplitude was reduced compared to the normal group; the extent of the latency delay was related to psychoacoustic measures of gap detection threshold and speech recognition scores, but not to audibility. Noise in normal-hearing subjects was accompanied by N100 latency delays and amplitude reductions paralleling those found in AN tested in quiet. Additional N100 latency delays and amplitude reductions occurred in AN with noise. CONCLUSIONS N100 latency to tones and performance on auditory temporal tasks were related in AN subjects. Noise masking in normal-hearing subjects affected N100 latency to resemble AN in quiet. SIGNIFICANCE N100 latency to tones may serve as an objective measure of the efficiency of auditory temporal processes.
Affiliation(s)
- Henry J Michalewski
- Department of Neurology, Med. Surge I, Room 150, University of California, Irvine, CA 92697-4290, USA.
32
Rhode WS, Roth GL, Recio-Spinoso A. Response properties of cochlear nucleus neurons in monkeys. Hear Res 2009; 259:1-15. [PMID: 19531377 DOI: 10.1016/j.heares.2009.06.004]
Abstract
Much of what is known about how the cochlear nuclei participate in mammalian hearing comes from studies of non-primate mammalian species. To determine to what extent the cochlear nuclei of primates resemble those of other mammalian orders, we have recorded responses to sound in three primate species: marmosets, cynomolgus macaques, and squirrel monkeys. These recordings show that the same types of temporal firing patterns are found in primates as have been described in other mammals. Responses to tones of neurons in the ventral cochlear nucleus have tuning, latencies, and post-stimulus time and interspike interval histograms similar to those recorded in non-primate cochlear nucleus neurons. Responses in the dorsal cochlear nucleus were similar as well. From these results it is evident that insights gained from non-primate studies can be applied to the peripheral auditory system of primates.
Affiliation(s)
- William S Rhode
- Department of Physiology, University of Wisconsin, 1300 University Avenue, Madison, WI 53706, USA.
| | | | | |
Collapse
33. Dimitrijevic A, Lolli B, Michalewski HJ, Pratt H, Zeng FG, Starr A. Intensity changes in a continuous tone: auditory cortical potentials comparison with frequency changes. Clin Neurophysiol 2009; 120:374-83. DOI: 10.1016/j.clinph.2008.11.009.
34. Alves-Pinto A, Lopez-Poveda EA. Psychophysical assessment of the level-dependent representation of high-frequency spectral notches in the peripheral auditory system. J Acoust Soc Am 2008; 124:409-421. PMID: 18646986. DOI: 10.1121/1.2920957.
Abstract
To discriminate between broadband noises with and without a high-frequency spectral notch is more difficult at 70-80 dB sound pressure level than at lower or higher levels [Alves-Pinto, A. and Lopez-Poveda, E. A. (2005). "Detection of high-frequency spectral notches as a function of level," J. Acoust. Soc. Am. 118, 2458-2469]. One possible explanation is that the notch is less clearly represented internally at 70-80 dB SPL than at any other level. To test this hypothesis, forward-masking patterns were measured for flat-spectrum and notched noise maskers for masker levels of 50, 70, 80, and 90 dB SPL. Masking patterns were measured in two conditions: (1) fixing the masker-probe time interval at 2 ms and (2) varying the interval to achieve similar masked thresholds for different masker levels. The depth of the spectral notch remained approximately constant in the fixed-interval masking patterns and gradually decreased with increasing masker level in the variable-interval masking patterns. This difference probably reflects the effects of peripheral compression. These results are inconsistent with the nonmonotonic level-dependent performance in spectral discrimination. Assuming that a forward-masking pattern is a reasonable psychoacoustical correlate of the auditory-nerve rate-profile representation of the stimulus spectrum, these results undermine the common view that high-frequency spectral notches must be encoded in the rate-profile of auditory-nerve fibers.
Affiliation(s)
- Ana Alves-Pinto
- Unidad de Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Avenida Alfonso X "El Sabio" s/n, 37007 Salamanca, Spain.
35. Binaural interactions shape binaural response structures and frequency response functions in primary auditory cortex. Hear Res 2008; 238:68-76. PMID: 18295994. DOI: 10.1016/j.heares.2008.01.003.
Abstract
The overall purpose of this study is to examine the behavior of primary auditory cortex (AI) units in the three-dimensional stimulus space that resembles normal listening conditions, viz., level at the two ears and frequency. A binaural-level response area (LRA) is the response to a matrix of contralateral and ipsilateral stimuli presented at a single frequency. LRAs have been examined in the inferior colliculus and AI and found to be highly organized response patterns that are shaped by binaural interactions. The aggregate of LRAs across frequency is the binaural response structure (BRS), a new concept that captures unit behavior in this three-dimensional stimulus space. Since binaural interactions contribute greatly to configuring component LRAs, it is clear that binaural interactions help shape the aggregate BRS. The BRS contains the data required to generate binaural frequency response functions. The frequency range and magnitude of these functions depend on the level of the stimulus at each ear and the configuration of the BRS. Changing either level can greatly alter the binaural frequency response function. Thus, in addition to their classic role in localization, binaural interactions play a fundamentally important role in determining the frequency domain of units in AI.
36. Vongpaisal T, Pichora-Fuller MK. Effect of age on F0 difference limen and concurrent vowel identification. J Speech Lang Hear Res 2007; 50:1139-56. PMID: 17905901. DOI: 10.1044/1092-4388(2007/079).
Abstract
PURPOSE To investigate the effect of age on voice fundamental frequency (F0) difference limen (DL) and identification of concurrently presented vowels. METHOD Fifteen younger and 15 older adults with normal audiometric thresholds in the speech range participated in 2 experiments. In Experiment 1, F0 DLs were measured for a synthesized vowel. In Experiment 2, accuracy in identifying concurrently presented vowel pairs was measured. Vowel pairs were formed from 5 synthesized vowels with F0 separations ranging from 0 to 4 semitones. RESULTS Younger adults had smaller (better) F0 DLs than older adults. For the older group, age was significantly correlated with F0 DLs. Younger adults identified concurrent vowels more accurately than older adults. When the vowels in the pairs had different formants, both age groups benefited similarly from F0 separation. Interestingly, when both constituent vowels had identical formants, F0 separation was deleterious, especially for older adults. Pure-tone average threshold did not correlate significantly with either F0 DL or accuracy in concurrent vowel identification. CONCLUSION Age-related declines were confirmed for F0 DLs, identification of concurrently spoken vowels, and benefit from F0 separation between vowels with identical formants. This pattern of findings is consistent with age-related deficits in periodicity coding.
Affiliation(s)
- Tara Vongpaisal
- Department of Psychology, University of Toronto at Mississauga, 3359 Mississauga Road North, Mississauga, Ontario L5L 1C6, Canada
37. He NJ, Mills JH, Dubno JR. Frequency modulation detection: effects of age, psychophysical method, and modulation waveform. J Acoust Soc Am 2007; 122:467-77. PMID: 17614504. DOI: 10.1121/1.2741208.
Abstract
As part of an ongoing study of auditory aging, detection of sinusoidal and quasitrapezoidal frequency modulation (FM) was measured with a 5-Hz modulation frequency and 500- and 4000-Hz carriers in two experiments. In Experiment 1, psychometric functions for FM detection were measured with several modulation waveform time patterns in younger adults with normal hearing. Detection of a three-cycle modulated signal improved when its duration was extended by a preceding unmodulated cycle, an effect similar to adding a modulated cycle. In Experiment 2, FM detection was measured for younger and older adults with normal hearing using two psychophysical methods. Similar to frequency discrimination, FM detection was poorer in older than younger subjects and age-related differences were larger at 500 Hz than at 4000 Hz, suggesting that FM detection with low modulation frequencies and frequency discrimination may share common underlying mechanisms. One mechanism is likely related to temporal information coded by neural phase locking which is strong at low frequencies and decreases with increasing frequency, as observed in animals. The frequency-dependent aging effect suggests that this temporal mechanism may be affected by age. The effect of psychophysical method was sizable and frequency dependent, whereas the effect of modulation waveform was minimal.
Affiliation(s)
- Ning-ji He
- Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina 29425, USA.
38. Lopez-Poveda EA, Barrios LF, Alves-Pinto A. Psychophysical estimates of level-dependent best-frequency shifts in the apical region of the human basilar membrane. J Acoust Soc Am 2007; 121:3646-54. PMID: 17552716. DOI: 10.1121/1.2722046.
Abstract
It is now undisputed that the best frequency (BF) of basal basilar-membrane (BM) sites shifts downwards as the stimulus level increases. The direction of the shift for apical sites is, by contrast, less well established. Auditory nerve studies suggest that the BF shifts in opposite directions for apical and basal BM sites with increasing stimulus level. This study attempts to determine if this is the case in humans. Psychophysical tuning curves (PTCs) were measured using forward masking for probe frequencies of 125, 250, 500, and 6000 Hz. The level of a masker tone required to just mask a fixed low-level probe tone was measured for different masker-probe time intervals. The duration of the intervals was adjusted as necessary to obtain PTCs for the widest possible range of masker levels. The BF was identified from function fits to the measured PTCs and it almost always decreased with increasing level. This result is inconsistent with most auditory-nerve observations obtained from other mammals. Several explanations are discussed, including that it may be erroneous to assume that low-frequency PTCs reflect the tuning of apical BM sites exclusively and that the inherent frequency response of the inner hair cell may account for the discrepancy.
Affiliation(s)
- Enrique A Lopez-Poveda
- Unidad de Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Av. Alfonso X El Sabio s/n, 37007 Salamanca, Spain.
39. Lakie M, Loram ID. Manually controlled human balancing using visual, vestibular and proprioceptive senses involves a common, low frequency neural process. J Physiol 2006; 577:403-16. PMID: 16959857. PMCID: PMC2000668. DOI: 10.1113/jphysiol.2006.116772.
Abstract
Ten subjects balanced their own body or a mechanically equivalent unstable inverted pendulum by hand, through a compliant spring linkage. Their balancing process was always characterized by repeated small reciprocating hand movements. These bias adjustments were an observable sign of intermittent alterations in neural output. On average, the adjustments occurred at intervals of approximately 400 ms. To generate appropriate stabilizing bias adjustments, sensory information about body or load movement is needed. Subjects used visual, vestibular or proprioceptive sensation alone and in combination to perform the tasks. We first ask: is the time between adjustments (bias duration) sensory specific? Vision is associated with slow responses. Other senses involved with balance are known to be faster. Our second question is: does bias duration depend on sensory abundance? An appropriate bias adjustment cannot occur until unplanned motion is unambiguously perceived (a sensory threshold). The addition of more sensory data should therefore expedite action, decreasing the mean bias adjustment duration. Statistical analysis showed that (1) the mean bias adjustment duration was remarkably independent of the sensory modality and (2) the addition of one or two sensory modalities made a small, but significant, decrease in the mean bias adjustment duration. Thus, a threshold effect can alter only a very minor part of the bias duration. The bias adjustment duration in manual balancing must reflect something more than visual sensation and perceptual thresholds; our suggestion is that it is a common central motor planning process. We predict that similar processes may be identified in the control of standing.
Affiliation(s)
- Martin Lakie
- Applied Physiology Research Group, School of Sport and Exercise Sciences, University of Birmingham, UK.
40. Ishizuka K, Nakatani T, Minami Y, Miyazaki N. Speech feature extraction method using subband-based periodicity and nonperiodicity decomposition. J Acoust Soc Am 2006; 120:443-52. PMID: 16875240. DOI: 10.1121/1.2205131.
Abstract
This paper proposes a speech feature extraction method that utilizes periodicity and nonperiodicity for robust automatic speech recognition. The method was motivated by the auditory comb filtering hypothesis proposed in speech perception research. The method divides input signals into subband signals, which it then decomposes into their periodic and nonperiodic components using comb filters independently designed in each subband. Both features are used as feature parameters. This representation exploits the robustness of periodicity measurements as regards noise while preserving the overall speech information content. In addition, periodicity is estimated independently in each subband, providing robustness as regards noise spectrum bias. The framework is similar to that of a previous study [Jackson et al., Proc. of Eurospeech. (2003), pp. 2321-2324], which is based on cascade processing motivated by speech production. However, the proposed method differs in its design philosophy, which is based on parallel distributed processing motivated by speech perception. Continuous digit speech recognition experiments in the presence of noise confirmed that the proposed method performs better than conventional methods when the noise in the training and test data sets differs.
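The comb-filter decomposition described in this abstract can be illustrated with a minimal sketch. This is not the authors' implementation: here a single full-band FIR comb is used instead of independently designed subband filters, the pitch period is assumed known in advance, and the function name `comb_decompose` is invented for the example.

```python
import numpy as np

def comb_decompose(x, period):
    """Split x into periodic and nonperiodic parts with an FIR comb filter.

    The periodic estimate averages x with a copy delayed by one pitch
    period, which reinforces period-locked content and halves the variance
    of uncorrelated noise; the nonperiodic part is the residual.
    """
    delayed = np.concatenate([np.zeros(period), x[:-period]])
    periodic = 0.5 * (x + delayed)
    nonperiodic = x - periodic
    return periodic, nonperiodic

# Toy input: a periodic sawtooth (the "voiced" part) plus white noise.
rng = np.random.default_rng(0)
period = 80                                  # pitch period in samples (assumed known)
n = np.arange(4 * period)
voiced = (n % period) / period - 0.5
noisy = voiced + 0.3 * rng.standard_normal(n.size)

periodic, nonperiodic = comb_decompose(noisy, period)
```

The two components sum back to the input exactly, and past the first period the periodic estimate tracks the clean sawtooth more closely than the raw noisy signal does; the paper applies this idea per subband, with a comb filter designed independently in each band.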
Affiliation(s)
- Kentaro Ishizuka
- NTT Communication Science Laboratories, NTT Corporation, Hikaridai 2-4, Seikacho, Sourakugun, Kyoto 619-0237, Japan.
41. Gorga MP, Johnson TA, Kaminski JR, Beauchaine KL, Garner CA, Neely ST. Using a combination of click- and tone burst-evoked auditory brain stem response measurements to estimate pure-tone thresholds. Ear Hear 2006; 27:60-74. PMID: 16446565. PMCID: PMC2441480. DOI: 10.1097/01.aud.0000194511.14740.9c.
Abstract
DESIGN A retrospective medical record review of evoked potential and audiometric data was used to determine the accuracy with which click-evoked and tone burst-evoked auditory brain stem response (ABR) thresholds predict pure-tone audiometric thresholds. METHODS The medical records of a consecutive group of patients referred for ABR testing for audiometric purposes over the preceding 4 years were reviewed. ABR thresholds were measured for clicks and for several tone bursts, including a single-cycle, Blackman-windowed, 250-Hz tone burst, which has a broad spectrum with little energy above 600 Hz. Typically, the ABR data were collected because the patients were unable to provide reliable estimates of hearing sensitivity, based on behavioral test techniques, due to developmental level. Data were included only if subsequently obtained behavioral audiometric data were available to which the ABR data could be compared. Almost invariably, the behavioral data were collected after the ABR results were obtained. Because of this, data were included on only those ears for which middle ear tests (tympanometry, otoscopic examination, pure-tone air- and bone-conduction thresholds) indicated that middle ear status was similar at the times of both tests. With these inclusion criteria, data were available on 140 ears of 77 subjects. RESULTS Correlation was 0.94 between click-evoked ABR thresholds and the average pure-tone threshold at 2 and 4 kHz. Correlations exceeded 0.92 between ABR thresholds for the 250-Hz tone burst and low-frequency behavioral thresholds (250 Hz, 500 Hz, and the average pure-tone thresholds at 250 and 500 Hz). Similar or higher correlations were observed when ABR thresholds at other frequencies were compared with the pure-tone thresholds at corresponding frequencies.
Differences between ABR and behavioral threshold depended on behavioral threshold, with ABR thresholds overestimating behavioral threshold in cases of normal hearing and underestimating behavioral threshold in cases of hearing loss. CONCLUSIONS These results suggest that ABR thresholds can be used to predict pure-tone behavioral thresholds for a wide range of frequencies. Although controversial, the data reviewed in this paper suggest that click-evoked ABR thresholds result in reasonable predictions of the average behavioral thresholds at 2 and 4 kHz. However, there were cases for which click-evoked ABR thresholds underestimated hearing loss at these frequencies. There are several other reasons why click-evoked ABR measurements were made, including that they (1) generally result in well-formed responses, (2) assist in determining whether auditory neuropathy exists, and (3) can be obtained in a relatively brief amount of time. Low-frequency thresholds were predicted well by ABR thresholds to a single-cycle, 250-Hz tone burst. In combination, click-evoked and low-frequency tone burst-evoked ABR threshold measurements might be used to quickly provide important clinical information for both ends of the audiogram. These measurements could be supplemented by ABR threshold measurements at other frequencies, if time permits. However, it may be possible to plan initial intervention strategies based on data for these two stimuli.
Affiliation(s)
- Michael P Gorga
- Boys Town National Research Hospital, Omaha, Nebraska 68131, USA.
42. Pichora-Fuller MK, Singh G. Effects of age on auditory and cognitive processing: implications for hearing aid fitting and audiologic rehabilitation. Trends Amplif 2006; 10:29-59. PMID: 16528429. PMCID: PMC4111543. DOI: 10.1177/108471380601000103.
Abstract
Recent advances in research and clinical practice concerning aging and auditory communication have been driven by questions about age-related differences in peripheral hearing, central auditory processing, and cognitive processing. A "site-of-lesion" view based on anatomic levels inspired research to test competing hypotheses about the contributions of changes at these three levels of the nervous system. A "processing" view based on psychologic functions inspired research to test alternative hypotheses about how lower-level sensory processes and higher-level cognitive processes interact. In the present paper, we suggest that these two views can begin to be unified following the example set by the cognitive neuroscience of aging. The early pioneers of audiology anticipated such a unified view, but today, advances in science and technology make it both possible and necessary. Specifically, we argue that a synthesis of new knowledge concerning the functional neuroscience of auditory cognition is necessary to inform the design and fitting of digital signal processing in "intelligent" hearing devices, as well as to inform best practices for resituating hearing aid fitting in a broader context of audiologic rehabilitation. Long-standing approaches to rehabilitative audiology should be revitalized to emphasize the important role that training and therapy play in promoting compensatory brain reorganization as older adults acclimatize to new technologies. The purpose of the present paper is to provide an integrated framework for understanding how auditory and cognitive processing interact when older adults listen, comprehend, and communicate in realistic situations, to review relevant models and findings, and to suggest how new knowledge about age-related changes in audition and cognition may influence future developments in hearing aid fitting and audiologic rehabilitation.
Affiliation(s)
- M Kathleen Pichora-Fuller
- Department of Psychology, University of Toronto, 3359 Mississauga Road, Mississauga, Ontario, Canada L5L 1C6.
43. Melzer P, Champney GC, Maguire MJ, Ebner FF. Rate code and temporal code for frequency of whisker stimulation in rat primary and secondary somatic sensory cortex. Exp Brain Res 2006; 172:370-86. PMID: 16456683. DOI: 10.1007/s00221-005-0334-1.
Abstract
We recorded responses to frequencies of whisker stimulation from 479 neurons in primary (S1) and secondary (S2) somatic sensory cortex of 26 urethane-anesthetized rats. Five whiskers on the right side of the snout were deflected with air puffs at seven frequencies between 1 and 18/s. In left S1 (barrels and septa) and S2, subsets of neurons (5%) responded to whisker stimulation across the entire range of frequencies with ≥1 electrical discharge per ten stimuli (full responders). In contrast, 60% of the recorded cells responded above threshold only at stimulus frequencies below 6/s and 35% remained subthreshold at all frequencies tested. Thus, the full responders are unique in that they were always responsive and appeared particularly suited to facilitate a dynamic, broadband processing of stimulus frequency. Full responders were most responsive at 1 stimulus/s, and showed greatest synchrony with whisker motion at 18 stimuli/s. The barrel cells responded with the greatest temporal accuracy between 3 and 15 stimuli/s. The septum cells responded less accurately, but maintained their accuracy at all frequencies. Only septum cells continued to increase their discharge rate with increasing stimulus frequency. The S2 cells discharged with lowest temporal accuracy modulated only by stimulus frequencies ≤6/s and exhibited the steepest decrease in discharge/stimulus with increasing stimulus frequency. Our observations suggest that full responders in the septa are well suited to encode high frequencies of whisker stimulation in timing and rate of discharge. The barrel cells, in contrast, showed the strongest temporal coding at stimulus frequencies in the middle range, and S2 cells were most sensitive to differences in low frequencies. The ubiquitous decline in discharge/stimulus in S1 and S2 may explain the decrease in blood flow observed at increasing stimulus frequency with functional imaging.
Affiliation(s)
- Peter Melzer
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, 111 21st Ave. S, Nashville, TN 37203, USA.
44. Ruggero MA, Temchin AN. Unexceptional sharpness of frequency tuning in the human cochlea. Proc Natl Acad Sci U S A 2005; 102:18614-9. PMID: 16344475. PMCID: PMC1311742. DOI: 10.1073/pnas.0509323102.
Abstract
The responses to sound of auditory-nerve fibers are well known in many animals but are topics of conjecture for humans. Some investigators have claimed that the auditory-nerve fibers of humans are more sharply tuned than are those of various experimental animals. Here we invalidate such claims. First, we show that forward-masking psychophysical tuning curves, which were used as the principal support for those claims, greatly overestimate the sharpness of cochlear tuning in experimental animals and, hence, also probably in humans. Second, we calibrate compound action potential tuning curves against the tuning of auditory-nerve fibers in experimental animals and use compound action potential tuning curves recorded in humans to show that the sharpness of tuning in human cochleae is not exceptional and that it is actually similar to tuning in all mammals and birds for which comparisons are possible. Third, we note that the similarity of frequency tuning across species with widely diverse cochlear lengths and auditory bandwidths implies that for any given stimulus frequency the "cochlear amplifier" is confined to a highly localized region of the cochlea.
Affiliation(s)
- Mario A Ruggero
- Department of Communication Sciences and Disorders, The Hugh Knowles Center, and Institute for Neuroscience, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA.
45. Johnson TA, Brown CJ. Threshold prediction using the auditory steady-state response and the tone burst auditory brain stem response: a within-subject comparison. Ear Hear 2005; 26:559-76. PMID: 16377993. DOI: 10.1097/01.aud.0000188105.75872.a3.
Abstract
OBJECTIVE The purpose of this study was to evaluate the accuracy with which auditory steady-state response (ASSR) and tone burst auditory brain stem response (ABR) thresholds predict behavioral thresholds, using a within-subjects design. Because the spectra of the stimuli used to evoke the ABR and the ASSR differ, it was hypothesized that the predictive accuracy also would differ, particularly in subjects with steeply sloping hearing losses. DESIGN ASSR and ABR thresholds were recorded in a group of 14 adults with normal hearing, 10 adults with flat, sensorineural hearing losses, and 10 adults with steeply sloping, high-frequency, sensorineural hearing losses. Evoked-potential thresholds were recorded at 1, 1.5, and 2 kHz and were compared with behavioral, pure-tone thresholds. The predictive accuracy of two ABR protocols was evaluated: Blackman-gated tone bursts and linear-gated tone bursts presented in a background of notched noise. Two ASSR stimulation protocols also were evaluated: 100% amplitude-modulated (AM) sinusoids and 100% AM plus 25% frequency-modulated (FM) sinusoids. RESULTS The results suggested there was no difference in the accuracy with which either ABR protocol predicted behavioral threshold, nor was there any difference in the predictive accuracy of the two ASSR protocols. On average, ABR thresholds were recorded 3 dB closer to behavioral threshold than ASSR thresholds. However, in the subjects with the most steeply sloping hearing losses, ABR thresholds were recorded as much as 25 dB below behavioral threshold, whereas ASSR thresholds were never recorded more than 5 dB below behavioral threshold, which may reflect more spread of excitation for the ABR than for the ASSR. In contrast, the ASSR overestimated behavioral threshold in two subjects with normal hearing, where the ABR provided a more accurate prediction of behavioral threshold. 
CONCLUSIONS Both the ABR and the ASSR provided reasonably accurate predictions of behavioral threshold across the three subject groups. There was no evidence that the predictive accuracy of the ABR evoked using Blackman-gated tone bursts differed from the predictive accuracy observed when linear-gated tone bursts were presented in conjunction with notched noise. Similarly, there was no evidence that the predictive accuracy of the AM ASSR differed from the AM/FM ASSR. In general, ABR thresholds were recorded at levels closer to behavioral threshold than the ASSR. For certain individuals with steeply sloping hearing losses, the ASSR may be a more accurate predictor of behavioral thresholds; however, the ABR may be a more appropriate choice when predicting behavioral thresholds in a population where the incidence of normal hearing is expected to be high.
Affiliation(s)
- Tiffany A Johnson
- Department of Speech Pathology and Audiology, University of Iowa, Iowa City, Iowa, USA.
46. Alves-Pinto A, Lopez-Poveda EA. Detection of high-frequency spectral notches as a function of level. J Acoust Soc Am 2005; 118:2458-69. PMID: 16266167. DOI: 10.1121/1.2032067.
Abstract
High-frequency spectral notches are important cues for sound localization. Our ability to detect them must depend on their representation as auditory nerve (AN) rate profiles. Because of the low threshold and the narrow dynamic range of most AN fibers, these rate profiles deteriorate at high levels. The system may compensate by using onset rate profiles whose dynamic range is wider, or by using low-spontaneous-rate fibers, whose threshold is higher. To test these hypotheses, the threshold notch depth necessary to discriminate between a flat spectrum broadband noise and a similar noise with a spectral notch centered at 8 kHz was measured at levels from 32 to 100 dB SPL. The importance of the onset rate-profile representation of the notch was estimated by varying the stimulus duration and its rise time. For a large proportion of listeners, threshold notch depth varied nonmonotonically with level, increasing for levels up to 70-80 dB SPL and decreasing thereafter. The nonmonotonic aspect of the function was independent of notch bandwidth and stimulus duration. Thresholds were independent of stimulus rise time but increased for the shorter noise bursts. Results are discussed in terms of the ability of the AN to convey spectral notch information at different levels.
Affiliation(s)
- Ana Alves-Pinto
- Unidad de Computación Auditiva y Psicoacústica: Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Avenida Alfonso X El Sabio, Salamanca, Spain
47. Karino S, Yamasoba T, Ito K, Kaga K. Alteration of frequency range for binaural beats in acute low-tone hearing loss. Audiol Neurootol 2005; 10:201-8. PMID: 15809499. DOI: 10.1159/000084841.
Abstract
The effect of acute low-tone sensorineural hearing loss (ALHL) on the interaural frequency difference (IFD) required for perception of binaural beats (BBs) was investigated in 12 patients with unilateral ALHL and 7 patients in whom ALHL had lessened. A continuous pure tone of 30 dB sensation level at 250 Hz was presented to the contralateral, normal-hearing ear. The presence of BBs was determined by a subjective yes-no procedure as the frequency of a loudness-balanced test tone was gradually adjusted around 250 Hz in the affected ear. The frequency range in which no BBs were perceived (FRNB) was significantly wider in the patients with ALHL than in the controls, and FRNBs became narrower in the recovered ALHL group. Specifically, detection of slow BBs with a small IFD was impaired in this limited (10 s) observation period. The significant correlation between the hearing level at 250 Hz and FRNBs suggests that FRNBs represent the degree of cochlear damage caused by ALHL.
Affiliation(s)
- Shotaro Karino
- Department of Otolaryngology, Head and Neck Surgery, Faculty of Medicine, University of Tokyo, Tokyo 113-8655, Japan
48
Lopez-Poveda EA. Spectral processing by the peripheral auditory system: facts and models. Int Rev Neurobiol 2005; 70:7-48. [PMID: 16472630 DOI: 10.1016/s0074-7742(05)70001-5] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/02/2022]
Affiliation(s)
- Enrique A Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca 37007, Spain
49
Abstract
Level-invariant detection refers to findings that thresholds in tone-in-noise detection are unaffected by roving-level procedures that degrade energy cues. Such data are inconsistent with the idea that detection is based on the energy passed by an auditory filter. A hypothesis that detection is based on a level-invariant temporal cue is advanced. Simulations of a leaky-integrator model, consisting of a bandpass filter, half-wave rectification, and a lowpass filter, account for thresholds in band-widening experiments. The decision variable is calculated from the discrete Fourier transform of the leaky-integrator output. A counterintuitive finding is the apparent dissociation between the phenomenon of critical bands estimated from band-widening experiments and the theory of auditory filters. Physiological plausibility is demonstrated by showing that a leaky integrator describes the discharge cadence of primary afferents for tone-in-noise stimuli as well as for complex periodic sounds.
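The three model stages named in the abstract (bandpass filter, half-wave rectification, lowpass filter), followed by a DFT of the output, can be sketched numerically as below. This is a minimal illustration, not the paper's implementation: the function name, filter orders, center frequency, bandwidth, and cutoff are all assumed values chosen for the example.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def leaky_integrator_output(x, fs, cf, bw, cutoff):
    """Bandpass -> half-wave rectify -> lowpass (the three model stages)."""
    # Bandpass filter centered on the assumed channel frequency cf
    sos_bp = butter(2, [cf - bw / 2, cf + bw / 2], btype="bandpass",
                    fs=fs, output="sos")
    y = sosfilt(sos_bp, x)
    y = np.maximum(y, 0.0)  # half-wave rectification
    # Lowpass stage: the "leaky integrator"
    sos_lp = butter(1, cutoff, btype="lowpass", fs=fs, output="sos")
    return sosfilt(sos_lp, y)

fs = 20_000
t = np.arange(0, 0.3, 1 / fs)
rng = np.random.default_rng(0)
# Tone-in-noise stimulus: 1 kHz tone plus Gaussian noise (assumed levels)
stim = np.sin(2 * np.pi * 1000 * t) + 0.5 * rng.standard_normal(t.size)
out = leaky_integrator_output(stim, fs, cf=1000, bw=200, cutoff=150)
# The decision variable is derived from the DFT of the integrator output
spectrum = np.abs(np.fft.rfft(out))
```

The specific decision statistic computed from `spectrum` is not given in the abstract, so only the magnitude spectrum is formed here.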
Affiliation(s)
- Bruce G Berg
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA 92697, USA.
50
Chertoff ME. Analytic treatment of the compound action potential: estimating the summed post-stimulus time histogram and unit response. J Acoust Soc Am 2004; 116:3022-3030. [PMID: 15603147 DOI: 10.1121/1.1791911] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
Convolving an equation representing a summed post-stimulus time histogram computed across auditory nerve fibers [P(t)] with an equation representing a single-unit waveform [U(t)] yielded an analytic expression for the compound action potential (CAP). The solution was fit to CAPs recorded to low- and high-frequency stimuli at various signal levels. The correlation between the CAP and the analytic expression was generally greater than 0.90. At high levels the width of P(t) was broader for low-frequency stimuli than for high-frequency signals, but delays were comparable. This indicates that at high signal levels there is an overlap in the population of auditory nerve fibers contributing to the CAP for both low- and high-frequency stimuli, but low frequencies include contributions from more apical regions. At low signal levels the width of P(t) decreased for most frequencies and delays increased. The frequency of oscillation of U(t) was largest for high-frequency stimuli and decreased for low-frequency stimuli. The decay of U(t) was largest at 8 kHz and smallest at 1 kHz. These results indicate that the hair cell or neural mechanisms involved in the generation of action potentials may differ along the cochlear partition.
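The core relation in the abstract, CAP(t) = P(t) * U(t), can be sketched numerically as below. The Gaussian shape for P(t) and the damped sinusoid for U(t) are assumed illustrative forms, not the paper's fitted equations, and all parameter values are placeholders.

```python
import numpy as np

fs = 100_000                      # sample rate in Hz (assumed)
t = np.arange(0, 0.005, 1 / fs)   # 5 ms analysis window

# P(t): summed post-stimulus time histogram, modeled here as a
# Gaussian burst of firing (assumed shape; width ~ fiber population spread)
mu, sigma = 0.0015, 0.0003
P = np.exp(-0.5 * ((t - mu) / sigma) ** 2)

# U(t): single-unit waveform, modeled here as a damped sinusoid
# (assumed shape; oscillation frequency and decay are placeholders)
f_u, tau = 1000.0, 0.0005
U = np.sin(2 * np.pi * f_u * t) * np.exp(-t / tau)

# The CAP is the convolution of the two, scaled by the sample period
cap = np.convolve(P, U)[: t.size] / fs
```

Under this picture, a broader P(t) (more fibers firing over a longer span, as reported for low-frequency stimuli at high levels) smears the unit waveform over a wider interval, broadening the resulting CAP.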
Affiliation(s)
- Mark E Chertoff
- Department of Hearing and Speech, University of Kansas Medical Center, Kansas City, Kansas 66103-0001, USA.