1.
Kuo CY, Liu JW, Wang CH, Juan CH, Hsieh IH. The role of carrier spectral composition in the perception of musical pitch. Atten Percept Psychophys 2023; 85:2083-2099. PMID: 37479873. DOI: 10.3758/s13414-023-02761-x.
Abstract
Temporal envelope fluctuations of natural sounds convey information critical to speech and music processing. In particular, musical pitch perception is assumed to be primarily underpinned by temporal envelope encoding. While increasing evidence demonstrates the importance of carrier fine structure to complex pitch perception, how carrier spectral information affects musical pitch perception is less clear. Here, transposed tones designed to convey identical envelope information across different carriers were used to assess the effects of carrier spectral composition on pitch discrimination and on musical-interval and melody identification. Results showed that pitch discrimination thresholds became lower (better) as carrier frequency increased from 1 to 10 kHz, with performance comparable to that for pure sinusoids. Musical intervals and melodies defined by the periodicity of sinusoidal or harmonic-complex envelopes were identified with greater than 85% accuracy even on a 10-kHz carrier. Moreover, interval and melody identification improved with increasing carrier frequency up to 6 kHz. These findings suggest a perceptual enhancement of temporal envelope information with increasing carrier spectral region in musical pitch processing, at least for frequencies up to 6 kHz. For carriers in the extended high-frequency region (8-20 kHz), the use of temporal envelope information in musical pitch processing may vary with task requirements. Collectively, these results suggest that the contribution of temporal envelope information to musical pitch perception is greater than previously considered, with ecological implications.
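The transposed-tone construction referred to above has a simple signal-processing recipe: a low-frequency sinusoid is half-wave rectified, low-pass filtered, and used to modulate a high-frequency carrier, so stimuli on different carriers share the same envelope periodicity. A minimal sketch (the 125-Hz envelope rate, 4-kHz carrier, and crude FFT-based low-pass are illustrative assumptions, not the study's exact parameters):

```python
import numpy as np

def transposed_tone(fc, fm, dur=0.5, fs=48000):
    """Transposed tone: half-wave-rectified low-frequency sinusoid (fm),
    low-pass filtered, multiplied onto a high-frequency carrier (fc)."""
    t = np.arange(int(dur * fs)) / fs
    # half-wave rectified modulator at the envelope frequency
    env = np.maximum(np.sin(2 * np.pi * fm * t), 0.0)
    # crude low-pass: zero FFT components above 0.2 * fc
    spec = np.fft.rfft(env)
    freqs = np.fft.rfftfreq(env.size, 1 / fs)
    spec[freqs > 0.2 * fc] = 0.0
    env = np.fft.irfft(spec, n=env.size)
    return env * np.sin(2 * np.pi * fc * t)

x = transposed_tone(fc=4000.0, fm=125.0)
```

The spectrum is centered on the carrier with sidebands spaced at the envelope rate, so envelope periodicity can be delivered to arbitrary cochlear regions.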
Affiliation(s)
- Chao-Yin Kuo
  - Institute of Cognitive Neuroscience, National Central University, No. 300, Zhongda Rd., Zhongli District, Taoyuan City, 320317, Taiwan
  - Department of Otolaryngology-Head and Neck Surgery, Tri-Service General Hospital, National Defense Medical Center, Taipei City, Taiwan
- Jia-Wei Liu
  - Institute of Cognitive Neuroscience, National Central University, No. 300, Zhongda Rd., Zhongli District, Taoyuan City, 320317, Taiwan
- Chih-Hung Wang
  - Department of Otolaryngology-Head and Neck Surgery, Tri-Service General Hospital, National Defense Medical Center, Taipei City, Taiwan
- Chi-Hung Juan
  - Institute of Cognitive Neuroscience, National Central University, No. 300, Zhongda Rd., Zhongli District, Taoyuan City, 320317, Taiwan
  - Cognitive Intelligence and Precision Healthcare Center, National Central University, No. 300, Zhongda Rd., Zhongli District, Taoyuan City, 320317, Taiwan
- I-Hui Hsieh
  - Institute of Cognitive Neuroscience, National Central University, No. 300, Zhongda Rd., Zhongli District, Taoyuan City, 320317, Taiwan
  - Cognitive Intelligence and Precision Healthcare Center, National Central University, No. 300, Zhongda Rd., Zhongli District, Taoyuan City, 320317, Taiwan
2.
Wagner JD, Gelman A, Hancock KE, Chung Y, Delgutte B. Rabbits use both spectral and temporal cues to discriminate the fundamental frequency of harmonic complexes with missing fundamentals. J Neurophysiol 2022; 127:290-312. PMID: 34879207. PMCID: PMC8759963. DOI: 10.1152/jn.00366.2021.
Abstract
The pitch of harmonic complex tones (HCTs), common in speech, music, and animal vocalizations, plays a key role in the perceptual organization of sound. Unraveling the neural mechanisms of pitch perception requires animal models, but little is known about complex pitch perception by animals, and some species appear to use different pitch mechanisms than humans. Here, we tested rabbits' ability to discriminate the fundamental frequency (F0) of HCTs with missing fundamentals, using a behavioral paradigm inspired by foraging behavior in which rabbits learned to harness a spatial gradient in F0 to find the location of a virtual target within a room for a food reward. Rabbits were initially trained to discriminate HCTs with F0s in the range 400-800 Hz and with harmonics covering a wide frequency range (800-16,000 Hz), and were then tested with stimuli differing in spectral composition to probe the role of harmonic resolvability (experiment 1), in F0 range (experiment 2), or in both F0 and spectral content (experiment 3). Together, these experiments show that rabbits can discriminate HCTs over a wide F0 range (200-1,600 Hz) encompassing the range of conspecific vocalizations, and can use either the spectral pattern of harmonics resolved by the cochlea for higher F0s or temporal envelope cues resulting from interaction between unresolved harmonics for lower F0s. The qualitative similarity of these results to human performance supports the use of rabbits as an animal model for studies of pitch mechanisms, provided species differences in cochlear frequency selectivity and in the F0 range of vocalizations are taken into account.

New & Noteworthy: Understanding the neural mechanisms of pitch perception requires experiments in animal models, but little is known about pitch perception by animals. Here we show that rabbits, a popular animal in auditory neuroscience, can discriminate complex sounds differing in pitch using either spectral or temporal cues. The results suggest that the role of spectral cues in pitch perception by animals may have been underestimated by predominantly testing low frequencies in the range of the human voice.
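The missing-fundamental stimuli described above can be illustrated with a short synthesis sketch. The F0 (400 Hz) and harmonic band (800-16,000 Hz) follow the training condition in the abstract; equal component amplitudes and sine phase are assumptions for illustration:

```python
import numpy as np

def missing_f0_complex(f0, fmin, fmax, dur=0.3, fs=48000):
    """Harmonic complex tone containing only the harmonics of f0 that
    fall inside [fmin, fmax]; the fundamental itself is absent."""
    t = np.arange(int(dur * fs)) / fs
    harmonics = [n * f0 for n in range(1, int(fmax // f0) + 1)
                 if fmin <= n * f0 <= fmax]
    x = sum(np.sin(2 * np.pi * f * t) for f in harmonics)
    return x / len(harmonics), harmonics

# Harmonics 2-40 of 400 Hz (800-16,000 Hz); the 400-Hz fundamental is missing,
# yet the waveform still repeats at the 400-Hz period.
x, comps = missing_f0_complex(f0=400.0, fmin=800.0, fmax=16000.0)
```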
Affiliation(s)
- Joseph D. Wagner
- 1Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts,3Department of Biomedical Engineering, Boston University, Boston, Massachusetts
| | - Alice Gelman
- 1Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts
| | - Kenneth E. Hancock
- 1Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts,2Department of Otolaryngology, Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts
| | - Yoojin Chung
- 1Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts,2Department of Otolaryngology, Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts
| | - Bertrand Delgutte
- 1Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts,2Department of Otolaryngology, Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts
| |
3.
Temporal Correlates to Monaural Edge Pitch in the Distribution of Interspike Interval Statistics in the Auditory Nerve. eNeuro 2021; 8:ENEURO.0292-21.2021. PMID: 34281977. PMCID: PMC8387151. DOI: 10.1523/eneuro.0292-21.2021.
Abstract
Pitch is a perceptual attribute enabling the perception of melody. There is no consensus regarding the fundamental nature of pitch or its underlying neural code. A stimulus that has received much interest in psychophysical and computational studies is noise with a sharp spectral edge. High-pass (HP) or low-pass (LP) noise gives rise to a pitch near the edge frequency (monaural edge pitch; MEP). The simplicity of this stimulus, combined with its spectral and autocorrelation properties, makes it well suited for examining the spectral versus temporal cues that could underlie its pitch. We recorded responses of single auditory nerve (AN) fibers in chinchilla to MEP stimuli varying in edge frequency. Temporal cues were examined with shuffled autocorrelogram (SAC) analysis. Correspondence between the population's dominant interspike interval and reported pitch estimates was poor. A fuller analysis of the population interspike interval distribution, which incorporates not only the dominant interval but all intervals, yields good matches with behavioral results, though not for the entire range of edge frequencies that generates pitch. Finally, we also examined temporal structure over a slower time scale, intermediate between average firing rate and interspike intervals, by studying the SAC envelope. We found that, in response to a given MEP stimulus, this feature also varies systematically with edge frequency across fibers with different characteristic frequencies (CFs). Because neural mechanisms to extract envelope cues are well established, and because this cue is not limited by coding of stimulus fine structure, this newly identified slower temporal cue is a more plausible basis for pitch than cues based on fine structure.
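The MEP stimulus itself is straightforward to generate: broadband Gaussian noise with all energy on one side of a sharp spectral edge removed. A sketch of one common approach, zeroing FFT bins (the edge frequency, duration, and FFT-masking method here are illustrative choices, not the study's exact synthesis):

```python
import numpy as np

def edge_noise(f_edge, kind="lowpass", dur=0.5, fs=48000, seed=0):
    """Gaussian noise with a sharp spectral edge at f_edge, made by
    zeroing FFT bins on one side of the edge."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(int(dur * fs))
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    if kind == "lowpass":
        spec[freqs > f_edge] = 0.0   # keep energy below the edge
    else:
        spec[freqs < f_edge] = 0.0   # keep energy above the edge
    return np.fft.irfft(spec, n=x.size)

lp = edge_noise(1000.0, "lowpass")   # evokes a pitch near 1 kHz
```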
4.
Hoover EC, Kinney BN, Bell KL, Gallun FJ, Eddins DA. A comparison of behavioral methods for indexing the auditory processing of temporal fine structure cues. J Speech Lang Hear Res 2019; 62:2018-2034. PMID: 31145649. PMCID: PMC6808371. DOI: 10.1044/2019_jslhr-h-18-0217.
Abstract
Purpose: Growing evidence supports the inclusion of perceptual tests that quantify the processing of temporal fine structure (TFS) in clinical hearing assessment. Many tasks have been used to evaluate TFS in the laboratory, varying greatly in the stimuli used and in whether the judgments require monaural or binaural comparisons of TFS. The purpose of this study was to compare laboratory measures of TFS for inclusion in a battery of suprathreshold auditory tests. A subset of available TFS tasks was selected on the basis of potential clinical utility and evaluated using metrics that focus on characteristics important for clinical use.
Method: TFS measures were implemented in replication of studies that demonstrated clinical utility. Monaural, diotic, and dichotic measures were evaluated in 11 young listeners with normal hearing. Measures included frequency modulation (FM) tasks, harmonic frequency-shift detection, interaural phase difference (TFS-low frequency), interaural time difference (ITD), monaural gap-duration discrimination, and tone detection in noise with and without a difference in interaural phase (N0S0, N0Sπ). Data were compared with published results and evaluated with metrics of consistency and efficiency.
Results: Thresholds obtained were consistent with published data. There was no evidence of predictive relationships among the measures, consistent with a homogeneous group. The most stable tasks across repeated testing were TFS-low frequency, diotic and dichotic FM, and N0Sπ. Monaural and diotic FM had the lowest normalized variance and were the most efficient after accounting for differences in total test duration, followed by ITD.
Conclusions: Despite a long stimulus duration, FM tasks dominated comparisons of consistency and efficiency. Small differences separated the dichotic tasks FM, ITD, and N0Sπ. Future comparisons following procedural optimization of the tasks will evaluate clinical efficiency in populations with impairment.
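Of the tasks listed, tone-in-noise detection with and without an interaural phase inversion of the signal (N0S0 vs. N0Sπ) has a particularly compact stimulus construction: the noise is identical at the two ears, and only the tone's interaural phase changes. A sketch under assumed parameters (tone frequency, duration, and signal-to-noise ratio here are illustrative):

```python
import numpy as np

def tone_in_noise(f_tone, interaural_phase=0.0, dur=0.3, fs=48000,
                  snr_db=0.0, seed=1):
    """Diotic noise plus a tone presented either in phase across ears
    (N0S0, interaural_phase=0) or inverted in one ear (N0Spi, pi)."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * fs)) / fs
    noise = rng.standard_normal(t.size)
    noise /= np.sqrt(np.mean(noise ** 2))      # unit-RMS noise
    amp = 10 ** (snr_db / 20)
    left = noise + amp * np.sin(2 * np.pi * f_tone * t)
    right = noise + amp * np.sin(2 * np.pi * f_tone * t + interaural_phase)
    return np.stack([left, right])             # shape (2, n_samples)

n0s0 = tone_in_noise(500.0, interaural_phase=0.0)
n0spi = tone_in_noise(500.0, interaural_phase=np.pi)
```

The masking-level difference is the detection advantage for N0Sπ over N0S0, a classic index of binaural TFS processing.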
Affiliation(s)
- Eric C. Hoover
  - Department of Communication Sciences and Disorders, University of South Florida, Tampa
- Brianna N. Kinney
  - Department of Communication Sciences and Disorders, University of South Florida, Tampa
- Karen L. Bell
  - Department of Communication Sciences and Disorders, University of South Florida, Tampa
- Frederick J. Gallun
  - National Center for Rehabilitative Auditory Research, Portland VA Medical Center, Oregon
  - Department of Otolaryngology–Head and Neck Surgery, Oregon Health and Science University, Portland
- David A. Eddins
  - Department of Communication Sciences and Disorders, University of South Florida, Tampa
5.
Bianchi F, Carney LH, Dau T, Santurette S. Effects of Musical Training and Hearing Loss on Fundamental Frequency Discrimination and Temporal Fine Structure Processing: Psychophysics and Modeling. J Assoc Res Otolaryngol 2019; 20:263-277. PMID: 30693416. PMCID: PMC6513935. DOI: 10.1007/s10162-018-00710-2.
Abstract
Several studies have shown that musical training leads to improved fundamental-frequency (F0) discrimination for young listeners with normal hearing (NH). It is unclear whether a comparable effect of musical training occurs for listeners whose sensory encoding of F0 is degraded. To address this question, the effect of musical training was investigated for three groups of listeners (young NH, older NH, and older listeners with hearing impairment, HI). In a first experiment, F0 discrimination was investigated using complex tones that differed in harmonic content and phase configuration (sine, positive, or negative Schroeder phase). Musical training was associated with significantly better F0 discrimination of complex tones containing low-numbered harmonics for all groups of listeners. Part of this effect arose because musicians were more robust than non-musicians to harmonic roving. Despite the benefit relative to their non-musician counterparts, the older musicians, with or without HI, performed worse than the young musicians. In a second experiment, binaural sensitivity to temporal fine structure (TFS) cues was assessed for the same listeners by estimating the highest frequency at which an interaural phase difference was perceived. Performance was better for musicians in all groups of listeners, and the use of TFS cues was degraded for the two older groups. These findings suggest that musical training is associated with an enhancement of both TFS encoding and F0 discrimination in young and older listeners with or without HI, although the musicians' benefit decreased with increasing hearing loss. Additionally, models of the auditory periphery and midbrain were used to examine the effect of HI on F0 encoding. The model predictions reflected the worsening in F0 discrimination with increasing HI and accounted for up to 80% of the variance in the data.
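The sine- versus Schroeder-phase manipulation above changes only component phases, not the power spectrum. A sketch using the common Schroeder-phase form θ_n = ±π·n(n−1)/N (the exact phase convention, F0, and harmonic count used in the study may differ):

```python
import numpy as np

def schroeder_complex(f0, n_harm, sign=+1, dur=0.2, fs=48000):
    """Equal-amplitude harmonic complex with Schroeder phases
    theta_n = sign * pi * n * (n - 1) / N, which flattens the
    temporal envelope relative to sine phase."""
    t = np.arange(int(dur * fs)) / fs
    x = np.zeros(t.size)
    for n in range(1, n_harm + 1):
        phase = sign * np.pi * n * (n - 1) / n_harm
        x += np.sin(2 * np.pi * n * f0 * t + phase)
    return x / n_harm

# Same power spectrum, different phase configurations:
t = np.arange(9600) / 48000
sine_phase = sum(np.sin(2 * np.pi * n * 100.0 * t) for n in range(1, 21)) / 20
schroeder = schroeder_complex(100.0, 20, sign=+1)
```

Positive or negative Schroeder phases spread each period's energy in time (a flat-envelope chirp), giving a lower crest factor than the peaky sine-phase complex.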
Affiliation(s)
- Federica Bianchi
  - Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, Ørsteds Plads, Building 352, 2800, Lyngby, Denmark
  - Current affiliation: Oticon Medical, Kongebakken 9, Smørum, Denmark
- Laurel H Carney
  - Departments of Biomedical Engineering and Neuroscience, University of Rochester, Rochester, NY, USA
- Torsten Dau
  - Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, Ørsteds Plads, Building 352, 2800, Lyngby, Denmark
- Sébastien Santurette
  - Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, Ørsteds Plads, Building 352, 2800, Lyngby, Denmark
  - Department of Otorhinolaryngology, Head and Neck Surgery & Audiology, Rigshospitalet, 2100, Copenhagen, Denmark
6.
Petitpré C, Wu H, Sharma A, Tokarska A, Fontanet P, Wang Y, Helmbacher F, Yackle K, Silberberg G, Hadjab S, Lallemend F. Neuronal heterogeneity and stereotyped connectivity in the auditory afferent system. Nat Commun 2018; 9:3691. PMID: 30209249. PMCID: PMC6135759. DOI: 10.1038/s41467-018-06033-3.
Abstract
Spiral ganglion (SG) neurons of the cochlea convey all auditory inputs to the brain, yet the cellular and molecular complexity necessary to decode the various acoustic features in the SG has remained unresolved. Using single-cell RNA sequencing, we identify four types of SG neurons, including three novel subclasses of type I neurons and the type II neurons, and provide a comprehensive genetic framework that defines their potential synaptic communication patterns. The connectivity patterns of the three subclasses of type I neurons with inner hair cells, and their electrophysiological profiles, suggest that they represent the intensity-coding properties of auditory afferents. Moreover, neuron-type specification is already established at birth, indicating a neuronal diversification process independent of neuronal activity. Thus, this work provides a transcriptional catalog of neuron types in the cochlea, which serves as a valuable resource for dissecting cell-type-specific functions of dedicated afferents in auditory perception and in hearing disorders.
Affiliation(s)
- Charles Petitpré
  - Department of Neuroscience, Karolinska Institutet, Biomedicum, Stockholm, 171 77, Sweden
- Haohao Wu
  - Department of Neuroscience, Karolinska Institutet, Biomedicum, Stockholm, 171 77, Sweden
- Anil Sharma
  - Department of Neuroscience, Karolinska Institutet, Biomedicum, Stockholm, 171 77, Sweden
- Anna Tokarska
  - Department of Neuroscience, Karolinska Institutet, Biomedicum, Stockholm, 171 77, Sweden
- Paula Fontanet
  - Department of Neuroscience, Karolinska Institutet, Biomedicum, Stockholm, 171 77, Sweden
- Yiqiao Wang
  - Department of Neuroscience, Karolinska Institutet, Biomedicum, Stockholm, 171 77, Sweden
- Françoise Helmbacher
  - Aix-Marseille Université, CNRS UMR7288, Institut de Biologie du Développement de Marseille (IBDM), 13009, Marseille, France
- Kevin Yackle
  - Department of Physiology, University of California-San Francisco, San Francisco, CA, 94158, USA
- Gilad Silberberg
  - Department of Neuroscience, Karolinska Institutet, Biomedicum, Stockholm, 171 77, Sweden
- Saida Hadjab
  - Department of Neuroscience, Karolinska Institutet, Biomedicum, Stockholm, 171 77, Sweden
- François Lallemend
  - Department of Neuroscience, Karolinska Institutet, Biomedicum, Stockholm, 171 77, Sweden
7.
Meister H, Schreitmüller S, Ortmann M, Rählmann S, Walger M. Effects of Hearing Loss and Cognitive Load on Speech Recognition with Competing Talkers. Front Psychol 2016; 7:301. PMID: 26973585. PMCID: PMC4777916. DOI: 10.3389/fpsyg.2016.00301.
Abstract
Everyday communication frequently involves situations in which more than one talker speaks at a time. These situations are challenging, since they pose high attentional and memory demands and place cognitive load on the listener. Hearing impairment additionally exacerbates communication problems under these circumstances. We examined the effects of hearing loss and attention tasks on speech recognition with competing talkers in older adults with and without hearing impairment. We hypothesized that hearing loss would affect word identification, talker separation, and word recall, and that the difficulties experienced by the hearing-impaired listeners would be especially pronounced in a task with high attentional and memory demands. Two listener groups, closely matched for age and neuropsychological profile but differing in hearing acuity, were examined regarding their speech recognition with competing talkers in two different tasks. One task required repeating back words from one target talker (1TT) while ignoring the competing talker, whereas the other required repeating back words from both talkers (2TT). The competing talkers differed with respect to their voice characteristics. Moreover, sentences with either low or high context were used in order to consider linguistic properties. Compared with their normal-hearing peers, listeners with hearing loss showed limited speech recognition in both tasks. Their difficulties were especially pronounced in the more demanding 2TT task. To shed light on the underlying mechanisms, different error sources, namely misunderstood, confused, or omitted words, were investigated. Misunderstanding and omitting words were observed more frequently in the hearing-impaired than in the normal-hearing listeners. In line with common speech perception models, it is suggested that these effects are related to impaired object formation and taxed working memory capacity (WMC). In a post-hoc analysis, the listeners were further separated with respect to their WMC. Higher capacity appeared to act as a compensatory mechanism against the adverse effects of hearing loss, especially with low-context speech.
Affiliation(s)
- Hartmut Meister
  - Jean-Uhrmacher-Institute for Clinical ENT-Research, University of Cologne, Cologne, Germany
- Stefan Schreitmüller
  - Jean-Uhrmacher-Institute for Clinical ENT-Research, University of Cologne, Cologne, Germany
- Magdalene Ortmann
  - Jean-Uhrmacher-Institute for Clinical ENT-Research, University of Cologne, Cologne, Germany
- Sebastian Rählmann
  - Jean-Uhrmacher-Institute for Clinical ENT-Research, University of Cologne, Cologne, Germany
- Martin Walger
  - Clinic of Otorhinolaryngology, Head and Neck Surgery, University of Cologne, Cologne, Germany
8.
Bidelman GM, Alain C. Hierarchical neurocomputations underlying concurrent sound segregation: Connecting periphery to percept. Neuropsychologia 2015; 68:38-50. DOI: 10.1016/j.neuropsychologia.2014.12.020.
9.
Sayles M, Stasiak A, Winter IM. Reverberation impairs brainstem temporal representations of voiced vowel sounds: challenging "periodicity-tagged" segregation of competing speech in rooms. Front Syst Neurosci 2015; 8:248. PMID: 25628545. PMCID: PMC4290552. DOI: 10.3389/fnsys.2014.00248.
Abstract
The auditory system typically processes information from concurrently active sound sources (e.g., two voices speaking at once) in the presence of multiple delayed, attenuated, and distorted sound-wave reflections (reverberation). Brainstem circuits help segregate these complex acoustic mixtures into "auditory objects." Psychophysical studies demonstrate a strong interaction between reverberation and fundamental-frequency (F0) modulation, leading to impaired segregation of competing vowels when segregation is based on F0 differences. Neurophysiological studies of complex-sound segregation have concentrated on sounds with steady F0s in anechoic environments. However, F0 modulation and reverberation are quasi-ubiquitous. We examined the ability of 129 single units in the ventral cochlear nucleus (VCN) of the anesthetized guinea pig to segregate the concurrent synthetic vowel sounds /a/ and /i/ on the basis of temporal discharge patterns under closed-field conditions. We address the effects of added real-room reverberation, F0 modulation, and the interaction of these two factors on brainstem neural segregation of voiced speech sounds. A firing-rate representation of single vowels' spectral envelopes is robust to the combination of F0 modulation and reverberation: local firing-rate maxima and minima across the tonotopic array code vowel-formant structure. However, single-vowel F0-related periodicity information in shuffled inter-spike interval distributions is significantly degraded in the combined presence of reverberation and F0 modulation. Hence, segregation of double vowels' spectral energy into two streams (corresponding to the two vowels) on the basis of temporal discharge patterns is impaired by reverberation, specifically when F0 is modulated. All unit types (primary-like, chopper, onset) are similarly affected. These results offer neurophysiological insights into the perceptual organization of complex acoustic scenes under realistically challenging listening conditions.
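The shuffled inter-spike interval analysis mentioned above tallies all-order intervals between spikes drawn from different presentations of the same stimulus, excluding within-train pairs. A toy sketch with synthetic phase-locked spike trains (the 4-ms period, jitter, and bin settings are illustrative assumptions, not the study's parameters):

```python
import numpy as np

def shuffled_autocorrelogram(spike_trains, binwidth=5e-5, maxlag=0.002):
    """All-order interval histogram between spikes from *different*
    presentations of the same stimulus; within-train intervals are
    excluded, hence 'shuffled'."""
    edges = np.arange(-maxlag, maxlag + binwidth, binwidth)
    counts = np.zeros(edges.size - 1)
    for i, a in enumerate(spike_trains):
        for j, b in enumerate(spike_trains):
            if i == j:
                continue  # skip within-train pairs
            d = np.subtract.outer(a, b).ravel()
            counts += np.histogram(d[np.abs(d) <= maxlag], bins=edges)[0]
    return counts, edges

# Five repetitions of a 'fiber' firing every 4 ms with 0.2-ms jitter:
rng = np.random.default_rng(0)
base = np.arange(125) * 0.004
trains = [base + rng.normal(0.0, 2e-4, base.size) for _ in range(5)]
sac, edges = shuffled_autocorrelogram(trains)
```

Periodicity in the stimulus-locked discharge appears as peaks at lags equal to the stimulus period and its multiples; degraded temporal coding flattens those peaks.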
Affiliation(s)
- Mark Sayles
  - Centre for the Neural Basis of Hearing, The Physiological Laboratory, Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, UK
- Arkadiusz Stasiak
  - Centre for the Neural Basis of Hearing, The Physiological Laboratory, Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, UK
- Ian M Winter
  - Centre for the Neural Basis of Hearing, The Physiological Laboratory, Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, UK
10.
Kale S, Micheyl C, Heinz MG. Effects of sensorineural hearing loss on temporal coding of harmonic and inharmonic tone complexes in the auditory nerve. Adv Exp Med Biol 2013; 787:109-118. PMID: 23716215. DOI: 10.1007/978-1-4614-1590-9_13.
Abstract
Listeners with sensorineural hearing loss (SNHL) often show poorer thresholds for fundamental-frequency (F0) discrimination, and poorer discrimination between harmonic and frequency-shifted (inharmonic) complex tones, than normal-hearing (NH) listeners, especially when these tones contain resolved or partially resolved components. It has been suggested that these perceptual deficits reflect reduced access to temporal-fine-structure (TFS) information and could be due to degraded phase locking in the auditory nerve (AN) with SNHL. In the present study, TFS and temporal-envelope (ENV) cues in single AN-fiber responses to band-pass-filtered harmonic and inharmonic complex tones were measured in chinchillas with either normal hearing or noise-induced SNHL. The stimuli were comparable to those used in recent psychophysical studies of F0 and harmonic/inharmonic discrimination. As in those studies, the rank of the center component was manipulated to produce different resolvability conditions, different phase relationships (cosine and random phase) were tested, and background noise was present. Neural TFS and ENV cues were quantified using cross-correlation coefficients computed from shuffled cross-correlograms between neural responses to REF (harmonic) and TEST (F0- or frequency-shifted) stimuli. In animals with SNHL, AN-fiber tuning curves showed elevated thresholds, broadened tuning, best-frequency shifts, and downward shifts in the dominant TFS response component; however, no significant degradation in the ability of AN fibers to encode TFS or ENV cues was found. Consistent with optimal-observer analyses, the results indicate that TFS and ENV cues depended only on the relevant frequency shift in Hz and thus were not degraded, because phase locking remained intact. These results suggest that perceptual "TFS-processing" deficits do not simply reflect degraded phase locking at the level of the AN. To the extent that performance in F0- and harmonic/inharmonic discrimination tasks depends on TFS cues, it likely involves a more complicated (suboptimal) decoding mechanism, possibly based on "spatiotemporal" (place-time) neural representations.
Affiliation(s)
- Sushrut Kale
  - Department of Otolaryngology-Head & Neck Surgery, Columbia University, New York, NY 10032, USA