1
Saddler MR, McDermott JH. Models optimized for real-world tasks reveal the task-dependent necessity of precise temporal coding in hearing. bioRxiv [Preprint] 2024:2024.04.21.590435. doi: 10.1101/2024.04.21.590435. PMID: 38712054; PMCID: PMC11071365.
Abstract
Neurons encode information in the timing of their spikes in addition to their firing rates. Spike timing is particularly precise in the auditory nerve, where action potentials phase lock to sound with sub-millisecond precision, but its behavioral relevance remains uncertain. We optimized machine learning models to perform real-world hearing tasks with simulated cochlear input, assessing the precision of auditory nerve spike timing needed to reproduce human behavior. Models with high-fidelity phase locking exhibited more human-like sound localization and speech perception than models without, consistent with an essential role in human hearing. However, the temporal precision needed to reproduce human-like behavior varied across tasks, as did the precision that benefited real-world task performance. These effects suggest that perceptual domains incorporate phase locking to different extents depending on the demands of real-world hearing. The results illustrate how optimizing models for realistic tasks can clarify the role of candidate neural codes in perception.
Affiliation(s)
- Mark R Saddler
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
- Center for Brains, Minds, and Machines, MIT, Cambridge, MA, USA
- Josh H McDermott
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
- Center for Brains, Minds, and Machines, MIT, Cambridge, MA, USA
- Program in Speech and Hearing Biosciences and Technology, Harvard, Cambridge, MA, USA
2
Moore BC. The perception of emotion in music by people with hearing loss and people with cochlear implants. Philos Trans R Soc Lond B Biol Sci 2024; 379:20230258. doi: 10.1098/rstb.2023.0258. PMID: 39005027; PMCID: PMC11444223.
Abstract
Music is an important part of life for many people. It can evoke a wide range of emotions, including sadness, happiness, anger, tension, relief and excitement. People with hearing loss and people with cochlear implants have reduced abilities to discriminate some of the features of musical sounds that may be involved in evoking emotions. This paper reviews these changes in perceptual abilities and describes how they affect the perception of emotion in music. For people with acquired partial hearing loss, it appears that the perception of emotion in music is almost normal, whereas congenital partial hearing loss is associated with impaired perception of music emotion. For people with cochlear implants, the ability to discriminate changes in fundamental frequency (associated with perceived pitch) is much worse than normal and musical harmony is hardly perceived. As a result, people with cochlear implants appear to judge emotion in music primarily using tempo and rhythm cues, and this limits the range of emotions that can be judged. This article is part of the theme issue 'Sensing and feeling: an integrative approach to sensory processing and emotional experience'.
Affiliation(s)
- Brian C. J. Moore
- Cambridge Hearing Group, Department of Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, UK
3
Kamerer AM, Harris SE, Wichman CS, Rasetshwane DM, Neely ST. The relationship and interdependence of auditory thresholds, proposed behavioural measures of hidden hearing loss, and physiological measures of auditory function. Int J Audiol 2024:1-14. doi: 10.1080/14992027.2024.2391986. PMID: 39180321.
Abstract
OBJECTIVES: Standard diagnostic measures focus on threshold elevation, but hearing concerns may occur independently of threshold elevation, referred to as "hidden hearing loss" (HHL). A deeper understanding of HHL requires measurements that locate dysfunction along the auditory pathway. This study aimed to describe the relationship and interdependence between certain behavioural and physiological measures of auditory function that are thought to be indicative of HHL. DESIGN: Data were collected on a battery of behavioural and physiological measures of hearing. Threshold-dependent variance was removed from each measure prior to generating a multiple regression model of the behavioural measures using the physiological measures. STUDY SAMPLE: 224 adults in the United States with audiometric thresholds ≤65 dB HL. RESULTS: Thresholds accounted for between 21% and 60% of the variance in our behavioural measures and 5-51% in our physiological measures of hearing. There was no evidence that the behavioural measures of hearing could be predicted by the selected physiological measures. CONCLUSIONS: Several proposed behavioural measures of HHL (thresholds-in-noise, frequency-modulation detection, and speech recognition in difficult listening conditions) are influenced by hearing sensitivity and are not predicted by outer hair cell or auditory nerve physiology. Therefore, these measures may not be able to assess threshold-independent hearing disorders.
Affiliation(s)
- Sara E Harris
- Boys Town National Research Hospital, Omaha, NE, USA
4
Guest DR, Rajappa N, Oxenham AJ. Limitations in human auditory spectral analysis at high frequencies. J Acoust Soc Am 2024; 156:326-340. doi: 10.1121/10.0026475. PMID: 38990035; PMCID: PMC11240212.
Abstract
Humans are adept at identifying spectral patterns, such as vowels, in different rooms, at different sound levels, or produced by different talkers. How this feat is achieved remains poorly understood. Two psychoacoustic analogs of spectral pattern recognition are spectral profile analysis and spectrotemporal ripple direction discrimination. This study tested whether pattern-recognition abilities observed previously at low frequencies are also observed at extended high frequencies. At low frequencies (center frequency ∼500 Hz), listeners were able to achieve accurate profile-analysis thresholds, consistent with prior literature. However, at extended high frequencies (center frequency ∼10 kHz), listeners' profile-analysis thresholds were either unmeasurable or could not be distinguished from performance based on overall loudness cues. A similar pattern of results was observed with spectral ripple discrimination, where performance was again considerably better at low than at high frequencies. Collectively, these results suggest a severe deficit in listeners' ability to analyze patterns of intensity across frequency in the extended high-frequency region that cannot be accounted for by cochlear frequency selectivity. One interpretation is that the auditory system is not optimized to analyze such fine-grained across-frequency profiles at extended high frequencies, as they are not typically informative for everyday sounds.
Affiliation(s)
- Daniel R Guest
- Department of Biomedical Engineering, University of Rochester, Rochester, New York 14642, USA
- Neha Rajappa
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455, USA
5
Roy A, Bradlow A, Souza P. Effect of frequency compression on fricative perception between normal-hearing English and Mandarin listeners. J Acoust Soc Am 2024; 155:3957-3967. doi: 10.1121/10.0026435. PMID: 38921646.
Abstract
High-frequency speech information is susceptible to inaccurate perception in even mild to moderate forms of hearing loss. Some hearing aids employ frequency-lowering methods such as nonlinear frequency compression (NFC) to help hearing-impaired individuals access high-frequency speech information in more accessible lower-frequency regions. As such techniques cause significant spectral distortion, tests such as the S-Sh Confusion Test help optimize NFC settings to provide high-frequency audibility with the least distortion. Such tests have been traditionally based on speech contrasts pertinent to English. Here, the effects of NFC processing on fricative perception between English and Mandarin listeners are assessed. Small but significant differences in fricative discrimination were observed between the groups. The study demonstrates possible need for language-specific clinical fitting procedures for NFC.
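The core idea of NFC can be illustrated with a simple mapping: energy above a start (cutoff) frequency is compressed toward that cutoff, commonly on a log-frequency scale. The sketch below is a generic illustration under assumed parameter values (2 kHz cutoff, 2:1 ratio), not the algorithm of any particular hearing aid or the settings used in this study.

```python
import numpy as np

def nfc_map(freq_hz, cutoff_hz=2000.0, ratio=2.0):
    """Map input frequencies to output frequencies with a simple
    log-domain nonlinear frequency compression (NFC) rule.

    Frequencies below the cutoff are left unchanged; frequencies above
    it are compressed toward the cutoff by `ratio` on a log scale.
    (cutoff_hz and ratio are illustrative assumptions.)
    """
    freq_hz = np.asarray(freq_hz, dtype=float)
    above = freq_hz > cutoff_hz
    compressed = cutoff_hz * (freq_hz / cutoff_hz) ** (1.0 / ratio)
    return np.where(above, compressed, freq_hz)

# With these assumed settings, the high-frequency frication energy of /s/
# (roughly 6-9 kHz) lands below about 4.3 kHz, closer to the /sh/ region,
# which is why fitting tests probe s-sh confusions.
print(nfc_map([1000, 4000, 6000, 9000]))  # ≈ [1000, 2828, 3464, 4243] Hz
```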
Affiliation(s)
- Abhijit Roy
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois 60208, USA
- Ann Bradlow
- Department of Linguistics, Northwestern University, Evanston, Illinois 60208, USA
- Pamela Souza
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois 60208, USA
6
El Sawaf O, Effa F, Arz JP, Grimault N. Modeling alarm detection in noise for normal and hearing-impaired listeners: the effect of elevated thresholds and enlarged auditory filters. Int J Occup Saf Ergon 2024; 30:264-271. doi: 10.1080/10803548.2023.2294624. PMID: 38124394.
Abstract
A model was developed to assess how elevated absolute thresholds and enlarged auditory filters can impede the ability to detect alarms in a noisy background, such alarms being of paramount importance to ensure the safety of workers. Based on previously measured masked thresholds of 80 listeners in five groups (normal hearing to strongly impaired), the model was derived from signal detection theory (SDT) applied to Glasberg and Moore's excitation pattern model. The model can describe the influence of absolute thresholds and enlarged auditory filters together or separately on the detection ability for normal hearing and hearing-impaired listeners with various hearing profiles. Furthermore, it suggests that enlarged auditory filters alone can explain all of the impairment in this specific alarm detection task. Finally, the possibility of further development of the model into an alarm detection model is discussed.
Affiliation(s)
- Ossen El Sawaf
- French Research and Safety Institute for the Prevention of Occupational Accidents and Diseases (INRS), Work Equipment Engineering Division, France
- French National Centre for Scientific Research (CNRS) UMR5292, Centre Hospitalier Le Vinatier, France
- Mechanics and Acoustics Laboratory (LMA), Aix-Marseille University Centrale Méditerranée, France
- François Effa
- French Research and Safety Institute for the Prevention of Occupational Accidents and Diseases (INRS), Work Equipment Engineering Division, France
- French National Centre for Scientific Research (CNRS) UMR5292, Centre Hospitalier Le Vinatier, France
- Jean-Pierre Arz
- French Research and Safety Institute for the Prevention of Occupational Accidents and Diseases (INRS), Work Equipment Engineering Division, France
- Nicolas Grimault
- French National Centre for Scientific Research (CNRS) UMR5292, Centre Hospitalier Le Vinatier, France
7
Lively S, Agrawal S, Stewart M, Dwyer RT, Strobel L, Marcinkevich P, Hetlinger C, Croce J. CROS or hearing aid? Selecting the ideal solution for unilateral CI patients with limited aidable hearing in the contralateral ear. PLoS One 2024; 19:e0293811. doi: 10.1371/journal.pone.0293811. PMID: 38394286; PMCID: PMC10890777.
Abstract
A hearing aid or a contralateral routing of signal device is an option for unilateral cochlear implant listeners with limited hearing in the unimplanted ear; however, it is uncertain which device provides greater benefit beyond unilateral listening alone. Eighteen unilateral cochlear implant listeners participated in this prospective, within-participants, repeated measures study. Participants were tested with the cochlear implant alone, cochlear implant + hearing aid, and cochlear implant + contralateral routing of signal device configurations with a one-month take-home period between each in-person visit. Audiograms, speech perception in noise, and lateralization were evaluated. Subjective feedback was obtained via questionnaires. Marked improvements in speech perception in noise and non-implanted-ear lateralization accuracy were observed with the addition of a contralateral hearing aid. There were no significant differences in speech recognition between listening configurations. However, the chronic device use questionnaires and the final device selection showed a clear preference for the hearing aid in spatial awareness and communication domains. Individuals with limited hearing in their unimplanted ears demonstrate significant improvement with the addition of a contralateral device. Subjective questionnaires somewhat contrast with clinic-based outcome measures, highlighting the delicate decision-making process involved in clinically advising one device or another to maximize communication benefits.
Affiliation(s)
- Sarah Lively
- Department of Otolaryngology, Thomas Jefferson University Hospital, Philadelphia, PA, United States of America
- Smita Agrawal
- Collaborative Research Group, Clinical Research, Advanced Bionics, Valencia, CA, United States of America
- Matthew Stewart
- Department of Otolaryngology, Thomas Jefferson University Hospital, Philadelphia, PA, United States of America
- Robert T. Dwyer
- Collaborative Research Group, Clinical Research, Advanced Bionics, Valencia, CA, United States of America
- Laura Strobel
- Department of Otolaryngology, Thomas Jefferson University Hospital, Philadelphia, PA, United States of America
- Paula Marcinkevich
- Department of Otolaryngology, Thomas Jefferson University Hospital, Philadelphia, PA, United States of America
- Chris Hetlinger
- Research and Technology Group, Advanced Bionics, Valencia, CA, United States of America
- Julia Croce
- Department of Otolaryngology, Thomas Jefferson University Hospital, Philadelphia, PA, United States of America
8
Rajappa N, Guest DR, Oxenham AJ. Benefits of Harmonicity for Hearing in Noise Are Limited to Detection and Pitch-Related Discrimination Tasks. Biology (Basel) 2023; 12:1522. doi: 10.3390/biology12121522. PMID: 38132348; PMCID: PMC10740545.
Abstract
Harmonic complex tones are easier to detect in noise than inharmonic complex tones, providing a potential perceptual advantage in complex auditory environments. Here, we explored whether the harmonic advantage extends to other auditory tasks that are important for navigating a noisy auditory environment, such as amplitude- and frequency-modulation detection. Sixty young normal-hearing listeners were tested, divided into two equal groups with and without musical training. Consistent with earlier studies, harmonic tones were easier to detect in noise than inharmonic tones, with a signal-to-noise ratio (SNR) advantage of about 2.5 dB, and the pitch discrimination of the harmonic tones was more accurate than that of inharmonic tones, even after differences in audibility were accounted for. In contrast, neither amplitude- nor frequency-modulation detection was superior with harmonic tones once differences in audibility were accounted for. Musical training was associated with better performance only in pitch-discrimination and frequency-modulation-detection tasks. The results confirm a detection and pitch-perception advantage for harmonic tones but reveal that the harmonic benefits do not extend to suprathreshold tasks that do not rely on extracting the fundamental frequency. A general theory is proposed that may account for the effects of both noise and memory on pitch-discrimination differences between harmonic and inharmonic tones.
Affiliation(s)
- Neha Rajappa
- Department of Psychology, University of Minnesota, Minneapolis, MN 55455, USA
- Daniel R. Guest
- Department of Biomedical Engineering, University of Rochester, Rochester, NY 14627, USA
- Andrew J. Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, MN 55455, USA
9
Kreft HA, Oxenham AJ. Auditory enhancement in younger and older listeners with normal and impaired hearing. J Acoust Soc Am 2023; 154:3821-3832. doi: 10.1121/10.0023937. PMID: 38109406; PMCID: PMC10730236.
Abstract
Auditory enhancement is a spectral contrast aftereffect that can facilitate the detection of novel events in an ongoing background. A single-interval paradigm combined with roved frequency content between trials can yield as much as 20 dB enhancement in young normal-hearing listeners. This study compared such enhancement in 15 listeners with sensorineural hearing loss with that in 15 age-matched adults and 15 young adults with normal audiograms. All groups were presented with stimulus levels of 70 dB sound pressure level (SPL) per component. The two groups with normal hearing were also tested at 45 dB SPL per component. The hearing-impaired listeners showed very little enhancement overall. However, when tested at the same high (70-dB) level, both young and age-matched normal-hearing listeners also showed substantially reduced enhancement, relative to that found at 45 dB SPL. Some differences in enhancement emerged between young and older normal-hearing listeners at the lower sound level. The results suggest that enhancement is highly level-dependent and may also decrease somewhat with age or slight hearing loss. Implications for hearing-impaired listeners may include a poorer ability to adapt to real-world acoustic variability, due in part to the higher levels at which sound must be presented to be audible.
Affiliation(s)
- Heather A Kreft
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
10
Füllgrabe C, Fontan L, Vidal É, Massari H, Moore BCJ. Effects of hearing loss, age, noise exposure, and listening skills on envelope regularity discrimination. J Acoust Soc Am 2023; 154:2453-2461. doi: 10.1121/10.0021884. PMID: 37850836.
Abstract
The envelope regularity discrimination (ERD) test assesses the ability to discriminate irregular from regular amplitude modulation (AM). The measured threshold is called the irregularity index (II). It was hypothesized that the II at threshold should be almost unaffected by the loudness recruitment that is associated with cochlear hearing loss because the effect of recruitment is similar to multiplying the AM depth by a certain factor, and II values depend on the amount of envelope irregularity relative to the baseline modulation depth. To test this hypothesis, the ERD test was administered to 60 older adults with varying degrees of hearing loss, using carrier frequencies of 1 and 4 kHz. The II values for the two carrier frequencies were highly correlated, indicating that the ERD test was measuring a consistent characteristic of each subject. The II values at 1 and 4 kHz were not significantly correlated with the audiometric thresholds at the corresponding frequencies, consistent with the hypothesis. The II values at 4 kHz were significantly positively correlated with age. There was an unexpected negative correlation between II values and a measure of noise exposure. This is argued to reflect the confounding effects of listening skills.
Affiliation(s)
- Christian Füllgrabe
- Ear Institute, University College London, 332 Gray's Inn Road, London, WC1X 8EE, United Kingdom
- Brian C J Moore
- Department of Psychology, University of Cambridge, Downing Street, Cambridge, CB2 3EB, United Kingdom
11
Vinay, Moore BCJ. Exploiting individual differences to assess the role of place and phase locking cues in auditory frequency discrimination at 2 kHz. Sci Rep 2023; 13:13801. doi: 10.1038/s41598-023-40571-1. PMID: 37612303; PMCID: PMC10447419.
Abstract
The relative role of place and temporal mechanisms in auditory frequency discrimination was assessed for a centre frequency of 2 kHz. Four measures of frequency discrimination were obtained for 63 normal-hearing participants: detection of frequency modulation using modulation rates of 2 Hz (FM2) and 20 Hz (FM20); detection of a change in frequency across successive pure tones (difference limen for frequency, DLF); and detection of changes in the temporal fine structure of bandpass filtered complex tones centred at 2 kHz (TFS). Previous work has suggested that: FM2 depends on the use of both temporal and place cues; FM20 depends primarily on the use of place cues because the temporal mechanism cannot track rapid changes in frequency; DLF depends primarily on temporal cues; TFS depends exclusively on temporal cues. This led to the following predicted patterns of the correlations of scores across participants: DLF and TFS should be highly correlated; FM2 should be correlated with DLF and TFS; FM20 should not be correlated with DLF or TFS. The results were broadly consistent with these predictions and with the idea that frequency discrimination at 2 kHz depends partly or primarily on temporal cues except for frequency modulation detection at a high rate.
Affiliation(s)
- Vinay
- Audiology Group, Department of Neuromedicine and Movement Science, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology (NTNU), Tungasletta 2, 7491, Trondheim, Norway
- Brian C J Moore
- Cambridge Hearing Group, Department of Psychology, University of Cambridge, Cambridge, UK
12
Gockel HE, Carlyon RP. Effect of diotic versus dichotic presentation on the pitch perception of tone complexes at medium and very high frequencies. Sci Rep 2023; 13:13247. doi: 10.1038/s41598-023-40122-8. PMID: 37582928; PMCID: PMC10427668.
Abstract
Difference limens for fundamental frequency (F0), F0DLs, are usually small for complex tones containing low harmonics that are resolved in the auditory periphery, but worsen when the rank of the lowest harmonic increases above about 6-8 and harmonics become less resolved. The traditional explanation for this, in terms of resolvability, has been challenged and an alternative explanation in terms of harmonic rank was suggested. Here, to disentangle the effects of resolvability and harmonic rank the complex tones were presented either diotically (all harmonics to both ears) or dichotically (even and odd harmonics to opposite ears); the latter increases resolvability but does not affect harmonic rank. F0DLs were measured for 14 listeners for complex tones containing harmonics 6-10 with F0s of 280 and 1400 Hz, presented diotically or dichotically. For the low F0, F0DLs were significantly lower for the dichotic than for the diotic condition. This is consistent with a benefit of increased resolvability of harmonics for F0 discrimination and extends previous results to harmonics as low as the sixth. In contrast, for the high F0, F0DLs were similar for the two presentation modes, adding to evidence for differences in pitch perception between tones with low-to-medium and very-high frequency content.
Affiliation(s)
- Hedwig E Gockel
- MRC Cognition and Brain Sciences Unit, Cambridge Hearing Group, University of Cambridge, 15 Chaucer Road, Cambridge, CB2 7EF, UK
- Robert P Carlyon
- MRC Cognition and Brain Sciences Unit, Cambridge Hearing Group, University of Cambridge, 15 Chaucer Road, Cambridge, CB2 7EF, UK
13
Moore BCJ, Vinay. Assessing mechanisms of frequency discrimination by comparison of different measures over a wide frequency range. Sci Rep 2023; 13:11379. doi: 10.1038/s41598-023-38600-0. PMID: 37452119; PMCID: PMC10349105.
Abstract
It has been hypothesized that auditory detection of frequency modulation (FM) for low FM rates depends on the use of both temporal (phase locking) and place cues, depending on the carrier frequency, while detection of FM at high rates depends primarily on the use of place cues. To test this, FM detection for 2 and 20 Hz rates was measured over a wide frequency range, 1-10 kHz, including high frequencies for which temporal cues are assumed to be very weak. Performance was measured over the same frequency range for a task involving detection of changes in the temporal fine structure (TFS) of bandpass filtered complex tones, for which performance is assumed to depend primarily on the use of temporal cues. FM thresholds were better for the 2- than for the 20-Hz rate for center frequencies up to 4 kHz, while the reverse was true for higher center frequencies. For both FM rates, the thresholds, expressed as a proportion of the center frequency, were roughly constant for center frequencies from 6 to 10 kHz, consistent with the use of place cues. For the TFS task, thresholds worsened progressively with increasing frequency above 4 kHz, consistent with the weakening of temporal cues.
Affiliation(s)
- Brian C J Moore
- Cambridge Hearing Group, Department of Psychology, University of Cambridge, Cambridge, UK
- Vinay
- Audiology Group, Department of Neuromedicine and Movement Science, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology (NTNU), Tungasletta 2, 7491, Trondheim, Norway
14
Bravard R, Demany L, Pressnitzer D. Controlling audibility with noise for online experiments using sound. JASA Express Lett 2023; 3:064402. doi: 10.1121/10.0019807. PMID: 37379207.
Abstract
Online auditory experiments use the sound delivery equipment of each participant, with no practical way to calibrate sound level or frequency response. Here, a method is proposed to control sensation level across frequencies: embedding stimuli in threshold-equalizing noise. In a cohort of 100 online participants, noise could equate detection thresholds from 125 to 4000 Hz. Equalization was successful even for participants with atypical thresholds in quiet, due either to poor quality equipment or unreported hearing loss. Moreover, audibility in quiet was highly variable, as overall level was uncalibrated, but variability was much reduced with noise. Use cases are discussed.
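The key ingredient is noise spectrally shaped so that masked thresholds are roughly equal across frequency. The following is a minimal sketch of that idea, assuming the Glasberg and Moore ERB formula and ignoring the frequency-dependent detection-efficiency term of the published threshold-equalizing noise; it is a generic illustration, not the authors' calibrated stimulus.

```python
import numpy as np

def erb(f_hz):
    """Equivalent rectangular bandwidth (Glasberg & Moore, 1990), in Hz."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def simple_threshold_equalizing_noise(duration_s, fs, lo=80.0, hi=8000.0, seed=None):
    """Gaussian noise shaped so the power falling in one ERB is roughly
    constant across frequency (simplified stand-in for TEN)."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    spec = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    gain = np.zeros_like(freqs)
    band = (freqs >= lo) & (freqs <= hi)
    # Power spectral density ∝ 1/ERB(f)  =>  amplitude gain ∝ 1/sqrt(ERB(f))
    gain[band] = 1.0 / np.sqrt(erb(freqs[band]))
    noise = np.fft.irfft(spec * gain, n)
    return noise / np.max(np.abs(noise))  # absolute level is set by the (uncalibrated) playback chain

# A probe tone mixed with this noise at a fixed tone-to-noise ratio is then
# masked by the noise rather than limited by each participant's equipment.
```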
Affiliation(s)
- Rodrigue Bravard
- Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, Paris Sciences et Lettres University, Centre National de la Recherche Scientifique, 75005 Paris, France
- Laurent Demany
- Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, Centre National de la Recherche Scientifique, École Pratique des Hautes Études, and Université de Bordeaux, 33076 Bordeaux, France
- Daniel Pressnitzer
- Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, Paris Sciences et Lettres University, Centre National de la Recherche Scientifique, 75005 Paris, France
15
Narne VK, Jain S, Ravi SK, Almudhi A, Krishna Y, Moore BCJ. The effect of recreational noise exposure on amplitude-modulation detection, hearing sensitivity at frequencies above 8 kHz, and perception of speech in noise. J Acoust Soc Am 2023; 153:2562. doi: 10.1121/10.0017973. PMID: 37129676.
Abstract
Psychoacoustic and speech perception measures were compared for a group who were exposed to noise regularly through listening to music via personal music players (PMP) and a control group without such exposure. Lifetime noise exposure, quantified using the NESI questionnaire, averaged ten times higher for the exposed group than for the control group. Audiometric thresholds were similar for the two groups over the conventional frequency range up to 8 kHz, but for higher frequencies, the exposed group had higher thresholds than the control group. Amplitude modulation detection (AMD) thresholds were measured using a 4000-Hz sinusoidal carrier presented in threshold-equalizing noise at 30, 60, and 90 dB sound pressure level (SPL) for modulation frequencies of 8, 16, 32, and 64 Hz. At 90 dB SPL but not at the lower levels, AMD thresholds were significantly higher (worse) for the exposed than for the control group, especially for low modulation frequencies. The exposed group required significantly higher signal-to-noise ratios than the control group to understand sentences in noise. Otoacoustic emissions did not differ for the two groups. It is concluded that listening to music via PMP can have subtle deleterious effects on speech perception, AM detection, and hearing sensitivity over the extended high-frequency range.
Affiliation(s)
- Vijaya Kumar Narne
- Department of Medical Rehabilitation Sciences, College of Applied Medical Sciences, King Khalid University, Abha 61481, Saudi Arabia
- Saransh Jain
- All India Institute of Speech and Hearing, University of Mysore, Mysuru, India
- Sunil Kumar Ravi
- Department of Medical Rehabilitation Sciences, College of Applied Medical Sciences, King Khalid University, Abha 61481, Saudi Arabia
- Abdulaziz Almudhi
- Department of Medical Rehabilitation Sciences, College of Applied Medical Sciences, King Khalid University, Abha 61481, Saudi Arabia
- Yerraguntla Krishna
- Department of Medical Rehabilitation Sciences, College of Applied Medical Sciences, King Khalid University, Abha 61481, Saudi Arabia
- All India Institute of Speech and Hearing, University of Mysore, Mysuru, India
- Brian C J Moore
- Cambridge Hearing Group, Department of Psychology, University of Cambridge, Cambridge, United Kingdom
16
An overview of factors affecting bimodal and electric-acoustic stimulation (EAS) speech understanding outcomes. Hear Res 2023; 431:108736. doi: 10.1016/j.heares.2023.108736. PMID: 36931019.
Abstract
Improvements in device technology, surgical technique, and patient outcomes have resulted in a broadening of cochlear implantation criteria to consider those with increasing levels of useful low-to-mid frequency residual acoustic hearing. Residual acoustic hearing allows for the addition of a hearing aid (HA) to complement the cochlear implant (CI) and has demonstrated enhanced listening outcomes. However, wide inter-subject outcome variability exists and thus identification of contributing factors would be of clinical interest and may aid with pre-operative patient counselling. The optimal fitting procedure and frequency assignments for the two hearing devices used in combination to enhance listening outcomes also remains unclear. The understanding of how acoustic and electric speech information is fundamentally combined and utilised by the listener may allow for the optimisation of device fittings and frequency allocations to provide best bimodal and electric-acoustic stimulation (EAS) patient outcomes. This article will provide an overview of contributing factors to bimodal and EAS listening outcomes, explore areas of contention, and discuss common study limitations.
17
Perugia E, Marmel F, Kluk K. Feasibility of Diagnosing Dead Regions Using Auditory Steady-State Responses to an Exponentially Amplitude Modulated Tone in Threshold Equalizing Notched Noise, Assessed Using Normal-Hearing Participants. Trends Hear 2023; 27:23312165231173234. doi: 10.1177/23312165231173234. PMID: 37384583; PMCID: PMC10336760.
Abstract
The aim of this study was to assess feasibility of using electrophysiological auditory steady-state response (ASSR) masking for detecting dead regions (DRs). Fifteen normally hearing adults were tested using behavioral and electrophysiological tasks. In the electrophysiological task, ASSRs were recorded to a 2 kHz exponentially amplitude-modulated tone (AM2) presented within a notched threshold equalizing noise (TEN) whose center frequency (CFNOTCH) varied. We hypothesized that, in the absence of DRs, ASSR amplitudes would be largest for CFNOTCH at/or near the signal frequency. In the presence of a DR at the signal frequency, the largest ASSR amplitude would occur at a frequency (fmax) far away from the signal frequency. The AM2 and the TEN were presented at 60 and 75 dB SPL, respectively. In the behavioral task, for the same maskers as above, the masker level at which an AM and a pure tone could just be distinguished, denoted AM2ML, was determined, for low (10 dB above absolute AM2 threshold) and high (60 dB SPL) signal levels. We also hypothesized that the value of fmax would be similar for both techniques. The ASSR fmax values obtained from grand average ASSR amplitudes, but not from individual amplitudes, were consistent with our hypotheses. The agreement between the behavioral fmax and ASSR fmax was poor. The within-session ASSR-amplitude repeatability was good for AM2 alone, but poor for AM2 in notched TEN. The ASSR-amplitude variability between and within participants seems to be a major roadblock to developing our approach into an effective DR detection method.
Affiliation(s)
- Emanuele Perugia
- Manchester Centre for Audiology and Deafness, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Frederic Marmel
- Manchester Centre for Audiology and Deafness, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Karolina Kluk
- Manchester Centre for Audiology and Deafness, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
18
Mehta AH, Oxenham AJ. Role of perceptual integration in pitch discrimination at high frequencies. JASA Express Lett 2022; 2:084402. doi: 10.1121/10.0013429. PMID: 37311192; PMCID: PMC10264831.
Abstract
At very high frequencies, fundamental-frequency difference limens (F0DLs) for five-component harmonic complex tones can be better than predicted by optimal integration of information, assuming performance is limited by noise at the peripheral level, but are in line with predictions based on more central sources of noise. This study investigates whether there is a minimum number of harmonic components needed for such super-optimal integration effects and if harmonic range or inharmonicity affects this super-optimal integration. Results show super-optimal integration, even with two harmonic components and for most combinations of consecutive harmonic, but not inharmonic, components.
Affiliation(s)
- Anahita H Mehta
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455, USA
19
Gockel HE, Carlyon RP. On mistuning detection and beat perception for harmonic complex tones at low and very high frequencies. J Acoust Soc Am 2022; 152:226. doi: 10.1121/10.0012351. PMID: 35931513.
Abstract
This study assessed the detection of mistuning of a single harmonic in complex tones (CTs) containing either low-frequency harmonics or very high-frequency harmonics, for which phase locking to the temporal fine structure is weak or absent. CTs had F0s of either 280 or 1400 Hz and contained harmonics 6-10, the 8th of which could be mistuned. Harmonics were presented either diotically or dichotically (odd and even harmonics to different ears). In the diotic condition, mistuning-detection thresholds were very low for both F0s and consistent with detection of temporal interactions (beats) produced by peripheral interactions of components. In the dichotic condition, for which the components in each ear were more widely spaced and beats were not reported, the mistuned component was perceptually segregated from the complex for the low F0, but subjects reported no "popping out" for the high F0 and performance was close to chance. This is consistent with the idea that phase locking is required for perceptual segregation to occur. For diotic presentation, the perceived beat rate corresponded to the amount of mistuning (in Hz). It is argued that the beat percept cannot be explained solely by interactions between the mistuned component and its two closest harmonic neighbours.
Affiliation(s)
- Hedwig E Gockel
- Cambridge Hearing Group, MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF, United Kingdom
- Robert P Carlyon
- Cambridge Hearing Group, MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF, United Kingdom
20
Threshold Equalizing Noise Test Reveals Suprathreshold Loss of Hearing Function, Even in the "Normal" Audiogram Range. Ear Hear 2022; 43:1208-1221. doi: 10.1097/aud.0000000000001175. PMID: 35276701; PMCID: PMC9197144.
Abstract
Objectives: The threshold equalizing noise (TEN(HL)) is a clinically administered test to detect cochlear “dead regions” (i.e., regions of loss of inner hair cell [IHC] connectivity), using a “pass/fail” criterion based on the degree of elevation of a masked threshold in a tone-detection task. With sensorineural hearing loss, some elevation of the masked threshold is commonly observed but usually insufficient to create a “fail” diagnosis. The experiment reported here investigated whether the gray area between pass and fail contained information that correlated with factors such as age or cumulative high-level noise exposure (>100 dBA sound pressure levels), possibly indicative of damage to cochlear structures other than the more commonly implicated outer hair cells. Design: One hundred and twelve participants (71 female) who underwent audiometric screening for a sensorineural hearing loss, classified as either normal or mild, were recruited. Their age range was 32 to 74 years. They were administered the TEN test at four frequencies, 0.75, 1, 3, and 4 kHz, and at two sensation levels, 12 and 24 dB above their pure-tone absolute threshold at each frequency. The test frequencies were chosen to lie either distinctly away from, or within, the 2 to 6 kHz region where noise-induced hearing loss is first clinically observed as a notch in the audiogram. Cumulative noise exposure was assessed by the Noise Exposure Structured Interview (NESI). Elements of the NESI also permitted participant stratification by music experience. Results: Across all frequencies and testing levels, a strong positive correlation was observed between elevation of TEN threshold and absolute threshold. These correlations were little-changed even after noise exposure and music experience were factored out. The correlations were observed even within the range of “normal” hearing (absolute thresholds ≤15 dB HL). Conclusions: Using a clinical test, sensorineural hearing deficits were observable even within the range of clinically “normal” hearing. Results from the TEN test residing between “pass” and “fail” are dominated by processes not related to IHCs. The TEN test for IHC-related function should therefore only be considered for its originally designed function, to generate a binary decision, either pass or fail.
21
Holder JT, Holcomb MA, Snapp H, Labadie RF, Vroegop J, Rocca C, Elgandy MS, Dunn C, Gifford RH. Guidelines for Best Practice in the Audiological Management of Adults Using Bimodal Hearing Configurations. Otol Neurotol Open 2022; 2:e011. doi: 10.1097/ono.0000000000000011. PMID: 36274668; PMCID: PMC9581116.
Abstract
Clinics are treating a growing number of patients with greater amounts of residual hearing. These patients often benefit from a bimodal hearing configuration in which acoustic input from a hearing aid on 1 ear is combined with electrical stimulation from a cochlear implant on the other ear. The current guidelines aim to review the literature and provide best practice recommendations for the evaluation and treatment of individuals with bilateral sensorineural hearing loss who may benefit from bimodal hearing configurations. Specifically, the guidelines review: benefits of bimodal listening, preoperative and postoperative cochlear implant evaluation and programming, bimodal hearing aid fitting, contralateral routing of signal considerations, bimodal treatment for tinnitus, and aural rehabilitation recommendations.
Affiliation(s)
- Christine Rocca
- Guy’s and St. Thomas’ Hearing Implant Centre, London, United Kingdom
22
Brungart DS, Sherlock LP, Kuchinsky SE, Perry TT, Bieber RE, Grant KW, Bernstein JGW. Assessment methods for determining small changes in hearing performance over time. J Acoust Soc Am 2022; 151:3866. doi: 10.1121/10.0011509. PMID: 35778214.
Abstract
Although the behavioral pure-tone threshold audiogram is considered the gold standard for quantifying hearing loss, assessment of speech understanding, especially in noise, is more relevant to quality of life but is only partly related to the audiogram. Metrics of speech understanding in noise are therefore an attractive target for assessing hearing over time. However, speech-in-noise assessments have more potential sources of variability than pure-tone threshold measures, making it a challenge to obtain results reliable enough to detect small changes in performance. This review examines the benefits and limitations of speech-understanding metrics and their application to longitudinal hearing assessment, and identifies potential sources of variability, including learning effects, differences in item difficulty, and between- and within-individual variations in effort and motivation. We conclude by recommending the integration of non-speech auditory tests, which provide information about aspects of auditory health that have reduced variability and fewer central influences than speech tests, in parallel with the traditional audiogram and speech-based assessments.
Affiliation(s)
- Douglas S Brungart
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- LaGuinn P Sherlock
- Hearing Conservation and Readiness Branch, U.S. Army Public Health Center, E1570 8977 Sibert Road, Aberdeen Proving Ground, Maryland 21010, USA
- Stefanie E Kuchinsky
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Trevor T Perry
- Hearing Conservation and Readiness Branch, U.S. Army Public Health Center, E1570 8977 Sibert Road, Aberdeen Proving Ground, Maryland 21010, USA
- Rebecca E Bieber
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Ken W Grant
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Joshua G W Bernstein
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
23
Abstract
Hearing in noise is a core problem in audition, and a challenge for hearing-impaired listeners, yet the underlying mechanisms are poorly understood. We explored whether harmonic frequency relations, a signature property of many communication sounds, aid hearing in noise for normal hearing listeners. We measured detection thresholds in noise for tones and speech synthesized to have harmonic or inharmonic spectra. Harmonic signals were consistently easier to detect than otherwise identical inharmonic signals. Harmonicity also improved discrimination of sounds in noise. The largest benefits were observed for two-note up-down "pitch" discrimination and melodic contour discrimination, both of which could be performed equally well with harmonic and inharmonic tones in quiet, but which showed large harmonic advantages in noise. The results show that harmonicity facilitates hearing in noise, plausibly by providing a noise-robust pitch cue that aids detection and discrimination.
24
Guest DR, Oxenham AJ. Human discrimination and modeling of high-frequency complex tones shed light on the neural codes for pitch. PLoS Comput Biol 2022; 18:e1009889. doi: 10.1371/journal.pcbi.1009889. PMID: 35239639; PMCID: PMC8923464.
Abstract
Accurate pitch perception of harmonic complex tones is widely believed to rely on temporal fine structure information conveyed by the precise phase-locked responses of auditory-nerve fibers. However, accurate pitch perception remains possible even when spectrally resolved harmonics are presented at frequencies beyond the putative limits of neural phase locking, and it is unclear whether residual temporal information, or a coarser rate-place code, underlies this ability. We addressed this question by measuring human pitch discrimination at low and high frequencies for harmonic complex tones, presented either in isolation or in the presence of concurrent complex-tone maskers. We found that concurrent complex-tone maskers impaired performance at both low and high frequencies, although the impairment introduced by adding maskers at high frequencies relative to low frequencies differed between the tested masker types. We then combined simulated auditory-nerve responses to our stimuli with ideal-observer analysis to quantify the extent to which performance was limited by peripheral factors. We found that the worsening of both frequency discrimination and F0 discrimination at high frequencies could be well accounted for (in relative terms) by optimal decoding of all available information at the level of the auditory nerve. A Python package is provided to reproduce these results, and to simulate responses to acoustic stimuli from the three previously published models of the human auditory nerve used in our analyses.
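For readers wanting a feel for the ideal-observer logic, the sketch below computes a rate-place Fisher-information bound on F0 discrimination, assuming independent Poisson spike counts and a toy tuning model. It is a generic illustration, not the published auditory-nerve models or the Python package mentioned in the abstract; the function names and parameters are hypothetical.

```python
import numpy as np

def ideal_observer_f0_threshold(rate_fn, f0, rel_step=1e-3, criterion_dprime=1.0):
    """Crude ideal-observer bound on relative F0 discrimination from a
    rate-place profile, assuming independent Poisson counts per channel."""
    r0 = np.asarray(rate_fn(f0), dtype=float)
    r1 = np.asarray(rate_fn(f0 * (1.0 + rel_step)), dtype=float)
    slope = (r1 - r0) / rel_step                     # d(count)/d(relative F0)
    info = np.sum(slope**2 / np.maximum(r0, 1e-9))   # Fisher information (Poisson)
    return criterion_dprime / np.sqrt(info)          # relative F0 change at d' = 1

# Toy rate profile: channels with Gaussian tuning to the first few harmonics.
cfs = np.linspace(100, 4000, 200)
def toy_rates(f0, harmonics=(1, 2, 3, 4, 5), bw=0.1, peak=50.0):
    resp = sum(np.exp(-0.5 * ((cfs - h * f0) / (bw * h * f0))**2) for h in harmonics)
    return 5.0 + peak * resp  # spontaneous + driven counts per trial

print(ideal_observer_f0_threshold(toy_rates, 200.0))  # relative F0 threshold (dimensionless)
```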
Affiliation(s)
- Daniel R. Guest
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota, United States of America
- Andrew J. Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota, United States of America
25
Zhang M, Stern RM, Moncrieff D, Palmer C, Brown CA. Effect of Titrated Exposure to Non-Traumatic Noise on Unvoiced Speech Recognition in Human Listeners with Normal Audiological Profiles. Trends Hear 2022; 26:23312165221117081. doi: 10.1177/23312165221117081. PMID: 35929144; PMCID: PMC9403458.
Abstract
Non-traumatic noise exposure has been shown in animal models to impact the processing of envelope cues. However, evidence in human studies has been conflicting, possibly because the measures have not been specifically parameterized based on listeners' exposure profiles. The current study examined young dental-school students, whose exposure to high-frequency non-traumatic dental-drill noise during their course of study is systematic and precisely quantifiable. Twenty-five dental students and twenty-seven non-dental participants were recruited. The listeners were asked to recognize unvoiced sentences that were processed to contain only envelope cues useful for recognition and have been filtered to frequency regions inside or outside the dental noise spectrum. The sentences were presented either in quiet or in one of the noise maskers, including a steady-state noise, a 16-Hz or 32-Hz temporally modulated noise, or a spectrally modulated noise. The dental students showed no difference from the control group in demographic information, audiological screening outcomes, extended high-frequency thresholds, or unvoiced speech in quiet, but consistently performed more poorly for unvoiced speech recognition in modulated noise. The group difference in noise depended on the filtering conditions. The dental group's degraded performances were observed in temporally modulated noise for high-pass filtered condition only and in spectrally modulated noise for low-pass filtered condition only. The current findings provide the most direct evidence to date of a link between non-traumatic noise exposure and supra-threshold envelope processing issues in human listeners despite the normal audiological profiles.
Affiliation(s)
- Mengchao Zhang
- Audiology Department, School of Life and Health Sciences, Aston University, Birmingham, B4 7ET, UK
- Richard M Stern
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA
- Deborah Moncrieff
- School of Communication Sciences and Disorders, University of Memphis, Memphis, Tennessee 38152, USA
- Catherine Palmer
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA
- Christopher A Brown
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA
26
An implicit representation of stimulus ambiguity in pupil size. Proc Natl Acad Sci U S A 2021; 118:2107997118. doi: 10.1073/pnas.2107997118. PMID: 34819369.
Abstract
To guide behavior, perceptual systems must operate on intrinsically ambiguous sensory input. Observers are usually able to acknowledge the uncertainty of their perception, but in some cases, they critically fail to do so. Here, we show that a physiological correlate of ambiguity can be found in pupil dilation even when the observer is not aware of such ambiguity. We used a well-known auditory ambiguous stimulus, known as the tritone paradox, which can induce the perception of an upward or downward pitch shift within the same individual. In two experiments, behavioral responses showed that listeners could not explicitly access the ambiguity in this stimulus, even though their responses varied from trial to trial. However, pupil dilation was larger for the more ambiguous cases. The ambiguity of the stimulus for each listener was indexed by the entropy of behavioral responses, and this entropy was also a significant predictor of pupil size. In particular, entropy explained additional variation in pupil size independent of the explicit judgment of confidence in the specific situation that we investigated, in which the two measures were decoupled. Our data thus suggest that stimulus ambiguity is implicitly represented in the brain even without explicit awareness of this ambiguity.
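The entropy index can be made concrete with a small sketch: for binary up/down judgments, the ambiguity of each stimulus is the Shannon entropy of its response proportions, which can then be regressed against mean pupil size. The numbers below are hypothetical and for illustration only, not the study's data.

```python
import numpy as np

def response_entropy(p_up):
    """Shannon entropy (bits) of a binary up/down response distribution."""
    p = np.clip(np.asarray(p_up, dtype=float), 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Hypothetical data: per tritone pair, proportion of "up" responses and mean pupil dilation.
p_up = np.array([0.05, 0.20, 0.50, 0.65, 0.90])
pupil = np.array([0.10, 0.18, 0.31, 0.26, 0.12])   # arbitrary units

entropy = response_entropy(p_up)      # 0 bits = unambiguous, 1 bit = maximally ambiguous
slope, intercept = np.polyfit(entropy, pupil, 1)
r = np.corrcoef(entropy, pupil)[0, 1]
print(f"pupil ≈ {intercept:.2f} + {slope:.2f} * entropy (r = {r:.2f})")
```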
27
Predicting the Cochlear Dead Regions Using a Machine Learning-Based Approach with Oversampling Techniques. Medicina (Kaunas) 2021; 57(11):1192. doi: 10.3390/medicina57111192. PMID: 34833410; PMCID: PMC8625869.
Abstract
Background and Objectives: Determining the presence or absence of cochlear dead regions (DRs) is essential in clinical practice. This study proposes a machine learning (ML)-based model that applies oversampling techniques to predict DRs in patients. Materials and Methods: Recursive-partitioning classification trees (CT) and logistic regression (LR) were used as prediction models. To address the imbalanced dataset, the synthetic minority oversampling technique (SMOTE), which synthesizes new minority-class examples from existing ones, was adopted. Results: With the original data, the 10-fold cross-validation accuracies of the LR and CT models were 0.82 (±0.02) and 0.93 (±0.01), respectively; with the oversampled data, they were 0.66 (±0.02) and 0.86 (±0.01). Conclusions: This study is the first to apply SMOTE to an audiological dataset to assess the role of oversampling in developing an ML-based model. Given that SMOTE did not improve the models' performance, a more flexible model or additional clinical features may be needed.
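For readers unfamiliar with the workflow, the sketch below combines SMOTE oversampling with logistic regression and a classification tree under 10-fold cross-validation on synthetic data; the dataset, features, and hyperparameters are placeholders, not those of the study. Applying SMOTE inside a pipeline ensures that oversampling happens only on the training folds.

```python
# Illustrative SMOTE + classifier workflow on synthetic data (not the study's),
# assuming scikit-learn and imbalanced-learn are installed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline

X, y = make_classification(n_samples=400, n_features=8, weights=[0.85, 0.15],
                           random_state=0)          # imbalanced toy data
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("CT", DecisionTreeClassifier(random_state=0))]:
    plain = cross_val_score(clf, X, y, cv=cv).mean()
    smote = Pipeline([("smote", SMOTE(random_state=0)), ("clf", clf)])
    over = cross_val_score(smote, X, y, cv=cv).mean()
    print(f"{name}: accuracy plain={plain:.2f}, with SMOTE={over:.2f}")
```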
Collapse
|
28
|
Roverud E, Dubno JR, Richards VM, Kidd G. Cross-frequency weights in normal and impaired hearing: Stimulus factors, stimulus dimensions, and associations with speech recognition. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 150:2327. [PMID: 34717459 PMCID: PMC8637742 DOI: 10.1121/10.0006450] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/09/2021] [Revised: 09/02/2021] [Accepted: 09/08/2021] [Indexed: 06/13/2023]
Abstract
Previous studies of level discrimination reported that listeners with high-frequency sensorineural hearing loss (SNHL) place greater weight on high frequencies than normal-hearing (NH) listeners. It is not clear whether these results are influenced by stimulus factors (e.g., group differences in presentation levels, cross-frequency discriminability of level differences used to measure weights) and whether such weights generalize to other tasks. Here, NH and SNHL weights were measured for level, duration, and frequency discrimination of two-tone complexes after measuring discriminability just-noticeable differences for each frequency and stimulus dimension. Stimuli were presented at equal sensation level (SL) or equal sound pressure level (SPL). Results showed that weights could change depending on which frequency contained the more discriminable level difference with uncontrolled cross-frequency discriminability. When cross-frequency discriminability was controlled, weights were consistent for level and duration discrimination, but not for frequency discrimination. Comparing equal SL and equal SPL weights indicated greater weight on the higher-level tone for level and duration discrimination. Weights were unrelated to improvements in recognition of low-pass-filtered speech with increasing cutoff frequency. These results suggest that cross-frequency weights and NH and SNHL weighting differences are influenced by stimulus factors and may not generalize to the use of speech cues in specific frequency regions.
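Cross-frequency decision weights of the kind discussed above are often estimated by regressing trial-by-trial responses on the per-tone cue values; the sketch below illustrates that generic approach on simulated data and is not necessarily the analysis used in this paper.

```python
# Illustrative decision-weight estimate: regress simulated trial responses on
# the cue difference carried by each tone via logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials = 500
# cue difference (e.g., level increment in dB) for the low- and high-frequency
# tone on each trial
cues = rng.normal(0.0, 1.0, size=(n_trials, 2))
true_w = np.array([0.4, 1.2])                 # simulated listener weights
p_resp = 1 / (1 + np.exp(-(cues @ true_w)))   # probability of "louder" response
resp = rng.random(n_trials) < p_resp

coef = LogisticRegression().fit(cues, resp).coef_.ravel()
weights = coef / coef.sum()                   # normalized relative weights
print("relative weights (low, high):", np.round(weights, 2))
```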
Collapse
Affiliation(s)
- Elin Roverud
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
| | - Judy R Dubno
- Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 550, Charleston, South Carolina 29425-5500, USA
| | - Virginia M Richards
- Department of Cognitive Sciences, 2201 Social and Behavioral Sciences Gateway, University of California-Irvine, Irvine, California 92697-5100, USA
| | - Gerald Kidd
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
| |
Collapse
|
29
|
Dias GFM, Souza MRFD, Iorio MCM. Hearing aid fitting in the elderly: prescription of acoustic gain through frequency thresholds obtained with pure tone and narrow band stimuli. Codas 2021; 33:e20200192. [PMID: 34586327 DOI: 10.1590/2317-1782/20202020192] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2020] [Accepted: 10/15/2020] [Indexed: 11/21/2022] Open
Abstract
PURPOSE To verify the benefit of prescribing acoustic gain based on auditory thresholds obtained with frequency-modulated pure tones versus narrow-band noise (NB). METHODS The sample consisted of 30 elderly people, aged 60 years or over, with symmetrical, moderate-to-severe, sloping sensorineural hearing loss and thresholds at 4 kHz no greater than 70 dB HL. Participants were divided into two groups of 15: the pure-tone group (GTP), whose hearing aids were fitted based on auditory thresholds obtained with pure tones, and the narrow-band group (GNB), whose hearing aids were fitted based on auditory thresholds obtained with NB. The COSI, the Word Recognition Score (WRS), and the signal-to-noise ratio were assessed before hearing aid fitting and after three months of amplification use; the International Outcome Inventory for Hearing Aids (IOI-HA) was applied only after three months of hearing aid use. RESULTS Compared with the GTP group, the group whose hearing aids were fitted with gain prescribed from thresholds obtained with the narrow-band stimulus showed better performance on the WRS in the right ear, a higher total IOI-HA score, better COSI outcomes, and longer daily hearing aid use. CONCLUSION Greater benefit from hearing aid use, reflected in the total IOI-HA score, the COSI scale, and longer daily use time, was obtained in the group whose acoustic gain prescription was based on auditory thresholds obtained with narrow-band noise.
Collapse
|
30
|
Sanchez-Lopez R, Nielsen SG, El-Haj-Ali M, Bianchi F, Fereczkowski M, Cañete OM, Wu M, Neher T, Dau T, Santurette S. Auditory Tests for Characterizing Hearing Deficits in Listeners With Various Hearing Abilities: The BEAR Test Battery. Front Neurosci 2021; 15:724007. [PMID: 34658768 PMCID: PMC8512168 DOI: 10.3389/fnins.2021.724007] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Accepted: 08/11/2021] [Indexed: 11/15/2022] Open
Abstract
The Better hEAring Rehabilitation (BEAR) project aims to provide a new clinical profiling tool-a test battery-for hearing loss characterization. Although the loss of sensitivity can be efficiently measured using pure-tone audiometry, the assessment of supra-threshold hearing deficits remains a challenge. In contrast to the classical "attenuation-distortion" model, the proposed BEAR approach is based on the hypothesis that the hearing abilities of a given listener can be characterized along two dimensions, reflecting independent types of perceptual deficits (distortions). A data-driven approach provided evidence for the existence of different auditory profiles with different degrees of distortions. Ten tests were included in a test battery, based on their clinical feasibility, time efficiency, and related evidence from the literature. The tests were divided into six categories: audibility, speech perception, binaural processing abilities, loudness perception, spectro-temporal modulation sensitivity, and spectro-temporal resolution. Seventy-five listeners with symmetric, mild-to-severe sensorineural hearing loss were selected from a clinical population. The analysis of the results showed interrelations among outcomes related to high-frequency processing and outcome measures related to low-frequency processing abilities. The results showed the ability of the tests to reveal differences among individuals and their potential use in clinical settings.
Collapse
Affiliation(s)
- Raul Sanchez-Lopez
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark; Interacoustics Research Unit, Kgs. Lyngby, Denmark
| | - Silje Grini Nielsen
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark
| | - Mouhamad El-Haj-Ali
- Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
| | - Federica Bianchi
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark; Oticon Medical, Smørum, Denmark
| | - Michal Fereczkowski
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark; Institute of Clinical Research, University of Southern Denmark, Odense, Denmark; Research Unit for ORL-Head & Neck Surgery and Audiology, Odense University Hospital & University of Southern Denmark, Odense, Denmark
| | - Oscar M. Cañete
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark
| | - Mengfan Wu
- Institute of Clinical Research, University of Southern Denmark, Odense, Denmark; Research Unit for ORL-Head & Neck Surgery and Audiology, Odense University Hospital & University of Southern Denmark, Odense, Denmark
| | - Tobias Neher
- Institute of Clinical Research, University of Southern Denmark, Odense, Denmark; Research Unit for ORL-Head & Neck Surgery and Audiology, Odense University Hospital & University of Southern Denmark, Odense, Denmark
| | - Torsten Dau
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark
| | - Sébastien Santurette
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark; Centre for Applied Audiology Research, Oticon A/S, Smørum, Denmark
| |
Collapse
|
31
|
Decreased Reemerging Auditory Brainstem Responses Under Ipsilateral Broadband Masking as a Marker of Noise-Induced Cochlear Synaptopathy. Ear Hear 2021; 42:1062-1071. [PMID: 33625059 DOI: 10.1097/aud.0000000000001009] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
Abstract
OBJECTIVES In mammals, a 2-hr exposure to an octave-band noise (OBN) at 100 to 108 dB SPL induces loss of synaptic ribbons between inner hair cells and auditory nerve fibers with high thresholds of response (hiT neurons), which encode high-intensity sounds. Here, we tackle the challenge of diagnosing this synaptopathy by a noninvasive functional audiological test, ultimately in humans, despite the expected absence of auditory-threshold elevation and of clear electrophysiological abnormality, because hiT neuron contributions are hidden by those of more sensitive and robust neurons. DESIGN The noise-induced synaptopathy was replicated in mice (at 94, 97, and 100 dB SPL; n = 7, 7, and 8, respectively, against 8 unexposed controls), without long-lasting auditory-threshold elevation despite a twofold decrease in ribbon-synapse number for the 100-dB OBN exposure. Auditory brainstem responses (ABRs) were collected using a simultaneous broadband noise masker just able to erase the ABR response to a 60-dB tone burst. Tone-burst intensity was then increased up to 100 dB SPL to elicit reemerging ABRs (R-ABRs), which depend on hiT neurons because more sensitive neurons are masked. RESULTS In most ears exposed to 97-dB-SPL and all ears exposed to 100-dB-SPL OBN, contrary to controls, R-ABRs from the overexposed region have vanished, whereas standard ABR distributions widely overlap. CONCLUSIONS R-ABRs afford an individual noninvasive marker of normal-auditory-threshold cochlear synaptopathy. A simple modification of standard ABRs would allow hidden auditory synaptopathy to be searched for in a patient. ABBREVIATIONS ABR: auditory brainstem response; dB SPL: decibel sound pressure level; DPOAE: distortion-product otoacoustic emission; hiT neuron: high-threshold neuron; IHC: inner hair cell; loT neuron: low-threshold neuron; OBN: octave-band noise; OHC: outer hair cell; PBS: phosphate buffer saline; R-ABR: reemerging ABR.
Collapse
|
32
|
Causal inference in environmental sound recognition. Cognition 2021; 214:104627. [PMID: 34044231 DOI: 10.1016/j.cognition.2021.104627] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2020] [Revised: 01/28/2021] [Accepted: 02/05/2021] [Indexed: 11/23/2022]
Abstract
Sound is caused by physical events in the world. Do humans infer these causes when recognizing sound sources? We tested whether the recognition of common environmental sounds depends on the inference of a basic physical variable - the source intensity (i.e., the power that produces a sound). A source's intensity can be inferred from the intensity it produces at the ear and its distance, which is normally conveyed by reverberation. Listeners could thus use intensity at the ear and reverberation to constrain recognition by inferring the underlying source intensity. Alternatively, listeners might separate these acoustic cues from their representation of a sound's identity in the interest of invariant recognition. We compared these two hypotheses by measuring recognition accuracy for sounds with typically low or high source intensity (e.g., pepper grinders vs. trucks) that were presented across a range of intensities at the ear or with reverberation cues to distance. The recognition of low-intensity sources (e.g., pepper grinders) was impaired by high presentation intensities or reverberation that conveyed distance, either of which imply high source intensity. Neither effect occurred for high-intensity sources. The results suggest that listeners implicitly use the intensity at the ear along with distance cues to infer a source's power and constrain its identity. The recognition of real-world sounds thus appears to depend upon the inference of their physical generative parameters, even generative parameters whose cues might otherwise be separated from the representation of a sound's identity.
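The inference described above rests on the free-field inverse-square law: received level falls by roughly 6 dB per doubling of distance, so source level (referenced to 1 m) can be estimated as the level at the ear plus 20 log10 of the distance. A back-of-envelope sketch, ignoring reverberation and air absorption:

```python
# Back-of-envelope inverse-square relation between level at the ear and source
# level referenced to 1 m (illustrative only; ignores reverberation).
import math

def inferred_source_level_db(level_at_ear_db, distance_m):
    """Free-field point source: ~6 dB drop per doubling of distance."""
    return level_at_ear_db + 20.0 * math.log10(distance_m)

# The same 60 dB SPL at the ear implies very different source powers:
print(inferred_source_level_db(60.0, 1.0))    # nearby source  -> 60 dB
print(inferred_source_level_db(60.0, 16.0))   # distant source -> ~84 dB
```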
Collapse
|
33
|
Gockel HE, Carlyon RP. On musical interval perception for complex tones at very high frequencies. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 149:2644. [PMID: 33940917 PMCID: PMC7612123 DOI: 10.1121/10.0004222] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/16/2020] [Accepted: 03/17/2021] [Indexed: 06/12/2023]
Abstract
Listeners appear able to extract a residue pitch from high-frequency harmonics for which phase locking to the temporal fine structure is weak or absent. The present study investigated musical interval perception for high-frequency harmonic complex tones using the same stimuli as Lau, Mehta, and Oxenham [J. Neurosci. 37, 9013-9021 (2017)]. Nine young musically trained listeners with especially good high-frequency hearing adjusted various musical intervals using harmonic complex tones containing harmonics 6-10. The reference notes had fundamental frequencies (F0s) of 280 or 1400 Hz. Interval matches were possible, albeit markedly worse, even when all harmonic frequencies were above the presumed limit of phase locking. Matches showed significantly larger systematic errors and higher variability, and subjects required more trials to finish a match for the high than for the low F0. Additional absolute pitch judgments from one subject with absolute pitch, for complex tones containing harmonics 1-5 or 6-10 with a wide range of F0s, were perfect when the lowest frequency component was below about 7 kHz, but at least 50% of responses were incorrect when it was 8 kHz or higher. The results are discussed in terms of the possible effects of phase-locking information and familiarity with high-frequency stimuli on pitch.
Collapse
|
34
|
Bifurcation in brain dynamics reveals a signature of conscious processing independent of report. Nat Commun 2021; 12:1149. [PMID: 33608533 PMCID: PMC7895979 DOI: 10.1038/s41467-021-21393-z] [Citation(s) in RCA: 65] [Impact Index Per Article: 21.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2019] [Accepted: 01/21/2021] [Indexed: 12/05/2022] Open
Abstract
An outstanding challenge for consciousness research is to characterize the neural signature of conscious access independently of any decisional processes. Here we present a model-based approach that uses inter-trial variability to identify the brain dynamics associated with stimulus processing. We demonstrate that, even in the absence of any task or behavior, the electroencephalographic response to auditory stimuli shows bifurcation dynamics around 250–300 milliseconds post-stimulus. Namely, the same stimulus gives rise to late sustained activity on some trials, and not on others. This late neural activity is predictive of task-related reports, and also of reports of conscious contents that are randomly sampled during task-free listening. Source localization further suggests that task-free conscious access recruits the same neural networks as those associated with explicit report, except for frontal executive components. Studying brain dynamics through variability could thus play a key role for identifying the core signatures of conscious access, independent of report. Current knowledge on the neural basis of consciousness mostly relies on situations where people report their perception. Here, the authors provide evidence for the idea that bifurcation in brain dynamics reflects conscious perception independent of report.
Collapse
|
35
|
Demany L, Monteiro G, Semal C, Shamma S, Carlyon RP. The perception of octave pitch affinity and harmonic fusion have a common origin. Hear Res 2021; 404:108213. [PMID: 33662686 PMCID: PMC7614450 DOI: 10.1016/j.heares.2021.108213] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/20/2020] [Revised: 02/05/2021] [Accepted: 02/10/2021] [Indexed: 02/06/2023]
Abstract
Musicians say that the pitches of tones with a frequency ratio of 2:1 (one octave) have a distinctive affinity, even if the tones do not have common spectral components. It has been suggested, however, that this affinity judgment has no biological basis and originates instead from an acculturation process ‒ the learning of musical rules unrelated to auditory physiology. We measured, in young amateur musicians, the perceptual detectability of octave mistunings for tones presented alternately (melodic condition) or simultaneously (harmonic condition). In the melodic condition, mistuning was detectable only by means of explicit pitch comparisons. In the harmonic condition, listeners could use a different and more efficient perceptual cue: in the absence of mistuning, the tones fused into a single sound percept; mistunings decreased fusion. Performance was globally better in the harmonic condition, in line with the hypothesis that listeners used a fusion cue in this condition; this hypothesis was also supported by results showing that an illusory simultaneity of the tones was much less advantageous than a real simultaneity. In the two conditions, mistuning detection was generally better for octave compressions than for octave stretchings. This asymmetry varied across listeners, but crucially the listener-specific asymmetries observed in the two conditions were highly correlated. Thus, the perception of the melodic octave appeared to be closely linked to the phenomenon of harmonic fusion. As harmonic fusion is thought to be determined by biological factors rather than factors related to musical culture or training, we argue that octave pitch affinity also has, at least in part, a biological basis.
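Octave mistunings of the kind used in this study are conveniently expressed in cents relative to the exact 2:1 ratio; the small helper below is purely illustrative.

```python
# Deviation of a tone pair from an exact 2:1 octave, in cents (illustrative).
import math

def octave_mistuning_cents(f_low_hz, f_high_hz):
    """Positive = stretched octave, negative = compressed octave."""
    return 1200.0 * math.log2(f_high_hz / f_low_hz) - 1200.0

print(octave_mistuning_cents(440.0, 880.0))   # 0.0 (pure octave)
print(octave_mistuning_cents(440.0, 870.0))   # about -19.8 (compressed)
```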
Collapse
Affiliation(s)
- Laurent Demany
- Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, CNRS, EPHE, and Université de Bordeaux, Bordeaux, France.
| | - Guilherme Monteiro
- Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, CNRS, EPHE, and Université de Bordeaux, Bordeaux, France
| | - Catherine Semal
- Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, CNRS, EPHE, and Université de Bordeaux, Bordeaux, France; Bordeaux INP, Bordeaux, France.
| | - Shihab Shamma
- Institute for Systems Research, University of Maryland, College Park, MD, United States; Département d'Etudes Cognitives, Ecole Normale Supérieure, Paris, France.
| | - Robert P Carlyon
- Cambridge Hearing Group, MRC Cognition and Brain Sciences Unit, Cambridge, United Kingdom.
| |
Collapse
|
36
|
Mesik J, Wojtczak M. Effects of noise precursors on the detection of amplitude and frequency modulation for tones in noise. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2020; 148:3581. [PMID: 33379905 PMCID: PMC8097715 DOI: 10.1121/10.0002879] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/01/2020] [Revised: 11/05/2020] [Accepted: 11/16/2020] [Indexed: 06/12/2023]
Abstract
Recent studies on amplitude modulation (AM) detection for tones in noise reported that AM-detection thresholds improve when the AM stimulus is preceded by a noise precursor. The physiological mechanisms underlying this AM unmasking are unknown. One possibility is that adaptation to the level of the noise precursor facilitates AM encoding by causing a shift in neural rate-level functions to optimize level encoding around the precursor level. The aims of this study were to investigate whether such a dynamic-range adaptation is a plausible mechanism for the AM unmasking and whether frequency modulation (FM), thought to be encoded via AM, also exhibits the unmasking effect. Detection thresholds for AM and FM of tones in noise were measured with and without a fixed-level precursor. Listeners showing the unmasking effect were then tested with the precursor level roved over a wide range to modulate the effect of adaptation to the precursor level on the detection of the subsequent AM. It was found that FM detection benefits from a precursor and the magnitude of FM unmasking correlates with that of AM unmasking. Moreover, consistent with dynamic-range adaptation, the unmasking magnitude weakens as the level difference between the precursor and simultaneous masker of the tone increases.
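For concreteness, sinusoidally amplitude- and frequency-modulated tones of the general kind used in such detection tasks can be generated as follows; the carrier, rate, depth, and excursion values are placeholders, not the study's parameters.

```python
# Sinusoidal AM and FM tones (carrier fc, modulation rate fm); all values are
# illustrative placeholders, not the stimuli used in the study.
import numpy as np

fs, dur = 48000, 0.5
t = np.arange(int(fs * dur)) / fs
fc, fm = 1000.0, 8.0          # carrier and modulation rate (Hz)
m = 0.1                       # AM depth (linear)
df = 20.0                     # FM frequency excursion (Hz)

am_tone = (1 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# FM: integrate the instantaneous frequency fc + df*sin(2*pi*fm*t) to get phase
phase = 2 * np.pi * fc * t - (df / fm) * (np.cos(2 * np.pi * fm * t) - 1)
fm_tone = np.sin(phase)
```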
Collapse
Affiliation(s)
- Juraj Mesik
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455, USA
| | - Magdalena Wojtczak
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455, USA
| |
Collapse
|
37
|
Vinay, Sandhya, Moore BCJ. Effect of age, test frequency and level on thresholds for the TEN(HL) test for people with normal hearing. Int J Audiol 2020; 59:915-920. [DOI: 10.1080/14992027.2020.1783584] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Affiliation(s)
- Vinay
- Department of Health Sciences, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
| | - Sandhya
- Department of Health Sciences, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
| | - Brian C. J. Moore
- Department of Experimental Psychology, University of Cambridge, Cambridge, UK
| |
Collapse
|
38
|
Kan A, Meng Q. The Temporal Limits Encoder as a Sound Coding Strategy for Bilateral Cochlear Implants. IEEE/ACM TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING 2020; 29:265-273. [PMID: 33409339 PMCID: PMC7781292 DOI: 10.1109/taslp.2020.3039601] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/29/2023]
Abstract
The difference in binaural benefit between bilateral cochlear implant (CI) users and normal-hearing (NH) listeners has typically been attributed to CI sound coding strategies not encoding acoustic fine-structure (FS) interaural time differences (ITDs). The Temporal Limits Encoder (TLE) strategy is proposed as a potential way of improving binaural hearing benefits for CI users in noisy situations. TLE works by downward transposition of mid-frequency, band-limited channel information and can theoretically provide FS-ITD cues. In this work, the effect of the choice of the lower limit of the modulator in TLE was examined by measuring performance on a word recognition task and computing the magnitude of binaural benefit in bilateral CI users. Listening performance with the TLE strategy was compared with that for the commonly used Advanced Combination Encoder (ACE) CI sound coding strategy. Results showed that setting the lower limit to ≥200 Hz maintained word recognition performance comparable to that of ACE. While most CI listeners exhibited a large binaural benefit (≥6 dB) in at least one of the conditions tested, there was no systematic relationship between the lower limit of the modulator and performance. These results indicate that the TLE strategy has potential to improve binaural hearing abilities in CI users, but further work is needed to understand how binaural benefit can be maximized.
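As a generic illustration of downward transposition (not necessarily how TLE itself is implemented), a band-limited channel can be shifted down in frequency via its analytic (Hilbert) signal; the band edges and shift below are arbitrary placeholders, and a sampling rate comfortably above 5 kHz is assumed.

```python
# Generic downward frequency transposition of a band-limited signal via the
# analytic signal -- one way to realize the kind of shift the abstract
# describes, not the TLE implementation itself.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def transpose_down(x, fs, band_hz=(1500.0, 2500.0), shift_hz=1300.0):
    sos = butter(4, [band_hz[0] / (fs / 2), band_hz[1] / (fs / 2)],
                 btype="bandpass", output="sos")
    band = sosfiltfilt(sos, x)                      # isolate the channel band
    analytic = hilbert(band)                        # complex analytic signal
    t = np.arange(len(x)) / fs
    shifted = analytic * np.exp(-2j * np.pi * shift_hz * t)
    return shifted.real                             # spectrum moved down by shift_hz
```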
Collapse
Affiliation(s)
- Alan Kan
- Waisman Center, University of Wisconsin-Madison at the time this work was conducted. He is now with the School of Engineering, Macquarie University, NSW, Australia, 2109
| | - Qinglin Meng
- Acoustics Laboratory, School of Physics and Optoelectronics, South China University of Technology, Guangzhou, China, 510641
| |
Collapse
|
39
|
Gockel HE, Moore BC, Carlyon RP. Pitch perception at very high frequencies: On psychometric functions and integration of frequency information. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2020; 148:3322. [PMID: 33261392 PMCID: PMC7613188 DOI: 10.1121/10.0002668] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/21/2020] [Accepted: 10/30/2020] [Indexed: 06/12/2023]
Abstract
Lau et al. [J. Neurosci. 37, 9013-9021 (2017)] showed that discrimination of the fundamental frequency (F0) of complex tones with components in a high-frequency region was better than predicted from the optimal combination of information from the individual harmonics. The predictions depend on the assumption that psychometric functions for frequency discrimination have a slope of 1 at high frequencies. This was tested by measuring psychometric functions for F0 discrimination and frequency discrimination. Difference limens for F0 (F0DLs) and difference limens for frequency for each frequency component were also measured. Complex tones contained harmonics 6-10 and had F0s of 280 or 1400 Hz. Thresholds were measured using 210-ms tones presented diotically in diotic threshold-equalizing noise (TEN), and 1000-ms tones presented diotically in dichotic TEN. The slopes of the psychometric functions were close to 1 for all frequencies and F0s. The ratio of predicted to observed F0DLs was around 1 or smaller for both F0s, i.e., not super-optimal, and was significantly smaller for the low than for the high F0. The results are consistent with the idea that place information alone can convey pitch, but pitch is more salient when phase-locking information is available.
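The "optimal combination" prediction referred to above is commonly computed by treating each harmonic's frequency difference limen as an independent, unbiased estimate of F0 (with standard deviation DL/n for harmonic number n) and combining the estimates with inverse-variance weights, under the unit-slope assumption mentioned in the abstract. A hedged sketch with made-up values:

```python
# Inverse-variance combination of per-harmonic frequency DLs into a predicted
# F0 difference limen, under independence and unit-slope assumptions
# (illustrative values, not data from the study).
import numpy as np

def predicted_f0dl(harmonic_numbers, freq_dls_hz):
    """Each harmonic n contributes an F0 estimate with sd = freq_DL / n."""
    n = np.asarray(harmonic_numbers, dtype=float)
    sigma = np.asarray(freq_dls_hz, dtype=float)
    return 1.0 / np.sqrt(np.sum((n / sigma) ** 2))

# Hypothetical frequency DLs (Hz) for harmonics 6-10 of a 280-Hz F0:
harmonics = [6, 7, 8, 9, 10]
dls = [8.0, 10.0, 12.0, 15.0, 18.0]
print(f"predicted F0DL ~ {predicted_f0dl(harmonics, dls):.2f} Hz")
```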
Collapse
Affiliation(s)
- Hedwig E. Gockel
- Cambridge Hearing Group, MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Rd., Cambridge CB2 7EF, UK
| | - Brian C.J. Moore
- Cambridge Hearing Group, Department of Experimental Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, UK
| | - Robert P. Carlyon
- Cambridge Hearing Group, MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Rd., Cambridge CB2 7EF, UK
| |
Collapse
|
40
|
Whiteford KL, Kreft HA, Oxenham AJ. The role of cochlear place coding in the perception of frequency modulation. eLife 2020; 9:58468. [PMID: 32996463 PMCID: PMC7556860 DOI: 10.7554/elife.58468] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2020] [Accepted: 09/29/2020] [Indexed: 12/17/2022] Open
Abstract
Natural sounds convey information via frequency and amplitude modulations (FM and AM). Humans are acutely sensitive to the slow rates of FM that are crucial for speech and music. This sensitivity has long been thought to rely on precise stimulus-driven auditory-nerve spike timing (time code), whereas a coarser code, based on variations in the cochlear place of stimulation (place code), represents faster FM rates. We tested this theory in listeners with normal and impaired hearing, spanning a wide range of place-coding fidelity. Contrary to predictions, sensitivity to both slow and fast FM correlated with place-coding fidelity. We also used incoherent AM on two carriers to simulate place coding of FM and observed poorer sensitivity at high carrier frequencies and fast rates, two properties of FM detection previously ascribed to the limits of time coding. The results suggest a unitary place-based neural code for FM across all rates and carrier frequencies.
Collapse
Affiliation(s)
- Kelly L Whiteford
- Department of Psychology, University of Minnesota, Minneapolis, United States
| | - Heather A Kreft
- Department of Psychology, University of Minnesota, Minneapolis, United States
| | - Andrew J Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, United States
| |
Collapse
|
41
|
Simulations with FADE of the effect of impaired hearing on speech recognition performance cast doubt on the role of spectral resolution. Hear Res 2020; 395:107995. [DOI: 10.1016/j.heares.2020.107995] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/28/2020] [Revised: 04/06/2020] [Accepted: 05/12/2020] [Indexed: 11/18/2022]
|
42
|
Turton L, Souza P, Thibodeau L, Hickson L, Gifford R, Bird J, Stropahl M, Gailey L, Fulton B, Scarinci N, Ekberg K, Timmer B. Guidelines for Best Practice in the Audiological Management of Adults with Severe and Profound Hearing Loss. Semin Hear 2020; 41:141-246. [PMID: 33364673 PMCID: PMC7744249 DOI: 10.1055/s-0040-1714744] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022] Open
Abstract
Individuals with severe to profound hearing loss are likely to present with complex listening needs that require evidence-based solutions. This document is intended to inform the practice of hearing care professionals who are involved in the audiological management of adults with a severe to profound degree of hearing loss and will highlight the special considerations and practices required to optimize outcomes for these individuals.
Collapse
Affiliation(s)
- Laura Turton
- Department of Audiology, South Warwickshire NHS Foundation Trust, Warwick, United Kingdom
| | - Pamela Souza
- Communication Sciences and Disorders and Knowles Hearing Center, Northwestern University, Evanston, Illinois
| | - Linda Thibodeau
- University of Texas at Dallas, Callier Center for Communication Disorders, Dallas, Texas
| | - Louise Hickson
- School of Health and Rehabilitation Sciences, The University of Queensland, Australia
| | - René Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
| | - Judith Bird
- Cambridge University Hospital NHS Foundation Trust, United Kingdom
| | - Maren Stropahl
- Department of Science and Technology, Sonova AG, Stäfa, Switzerland
| | | | | | - Nerina Scarinci
- School of Health and Rehabilitation Sciences, The University of Queensland, Australia
| | - Katie Ekberg
- School of Health and Rehabilitation Sciences, The University of Queensland, Australia
| | - Barbra Timmer
- School of Health and Rehabilitation Sciences, The University of Queensland, Australia
| |
Collapse
|
43
|
Tinnitus Does Not Interfere with Auditory and Speech Perception. J Neurosci 2020; 40:6007-6017. [PMID: 32554549 DOI: 10.1523/jneurosci.0396-20.2020] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Revised: 05/31/2020] [Accepted: 06/12/2020] [Indexed: 12/13/2022] Open
Abstract
Tinnitus is a sound heard by 15% of the general population in the absence of any external sound. Because external sounds can sometimes mask tinnitus, tinnitus is assumed to affect the perception of external sounds, leading to hypotheses such as "tinnitus filling in the temporal gap" in animal models and "tinnitus inducing hearing difficulty" in human subjects. Here we compared performance in temporal, spectral, intensive, masking and speech-in-noise perception tasks between 45 human listeners with chronic tinnitus (18 females and 27 males with a range of ages and degrees of hearing loss) and 27 young, normal-hearing listeners without tinnitus (11 females and 16 males). After controlling for age, hearing loss, and stimulus variables, we discovered that, contradictory to the widely held assumption, tinnitus does not interfere with the perception of external sounds in 32 of the 36 measures. We interpret the present result to reflect a bottom-up pathway for the external sound and a separate top-down pathway for tinnitus. We propose that these two perceptual pathways can be independently modulated by attention, which leads to the asymmetrical interaction between external and internal sounds, and several other puzzling tinnitus phenomena such as discrepancy in loudness between tinnitus rating and matching. The present results suggest not only a need for new theories involving attention and central noise in animal tinnitus models but also a shift in focus from treating tinnitus to managing its comorbid conditions when addressing complaints about hearing difficulty in individuals with tinnitus.SIGNIFICANCE STATEMENT Tinnitus, or ringing in the ears, is a neurologic disorder that affects 15% of the general population. Here we discovered an asymmetrical relationship between tinnitus and external sounds: although external sounds have been widely used to cover up tinnitus, tinnitus does not impair, and sometimes even improves, the perception of external sounds. This counterintuitive discovery contradicts the general belief held by scientists, clinicians, and even individuals with tinnitus themselves, who often report hearing difficulty, especially in noise. We attribute the counterintuitive discovery to two independent pathways: the bottom-up perception of external sounds and the top-down perception of tinnitus. Clinically, the present work suggests a shift in focus from treating tinnitus itself to treating its comorbid conditions and secondary effects.
Collapse
|
44
|
Füllgrabe C, Moody M, Moore BCJ. No evidence for a link between noise exposure and auditory temporal processing for young adults with normal audiograms. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2020; 147:EL465. [PMID: 32611153 DOI: 10.1121/10.0001346] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/30/2020] [Accepted: 05/16/2020] [Indexed: 06/11/2023]
Abstract
The link between lifetime noise exposure and temporal processing abilities was investigated for 45 normal-hearing participants, recruited from a population of undergraduate students, aged 18 to 23 years. A self-report instrument was employed to assess the amount of neuropathic noise (here defined as sounds with levels exceeding approximately 80 dBA) to which each participant had been exposed, and sensitivity to temporal-fine-structure and temporal-envelope information was determined using frequency discrimination and envelope irregularity detection tasks, respectively. Despite sizable individual variability in all measures, correlations between noise exposure and the ability to process temporal cues were small and non-significant.
Collapse
Affiliation(s)
- Christian Füllgrabe
- School of Sport, Exercise and Health Sciences, Loughborough University, Ashby Road, Loughborough LE11 3TU, United Kingdom
| | - Matthew Moody
- School of Sport, Exercise and Health Sciences, Loughborough University, Ashby Road, Loughborough LE11 3TU, United Kingdom
| | - Brian C J Moore
- Department of Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom
| |
Collapse
|
45
|
Wei W, Shi X, Xiong W, He L, Du ZD, Qu T, Qi Y, Gong SS, Liu K, Ma X. RNA-seq Profiling and Co-expression Network Analysis of Long Noncoding RNAs and mRNAs Reveal Novel Pathogenesis of Noise-induced Hidden Hearing Loss. Neuroscience 2020; 434:120-135. [PMID: 32201268 DOI: 10.1016/j.neuroscience.2020.03.023] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2019] [Revised: 02/25/2020] [Accepted: 03/15/2020] [Indexed: 12/16/2022]
Abstract
Noise-induced hidden hearing loss (NIHHL), one of the family of conditions described as noise-induced hearing loss (NIHL), is characterized by synaptopathy following moderate noise exposure that causes only temporary threshold elevation. Long noncoding RNAs (lncRNAs) mediate several essential regulatory functions in a wide range of biological processes and diseases, but their roles in NIHHL remain largely unknown. To determine the potential roles of lncRNAs in the pathogenesis of NIHHL, we first evaluated their expression in a mouse model of NIHHL and mapped possible regulatory functions and targets using RNA sequencing (RNA-seq). In total, we identified 133 lncRNAs and 522 mRNAs that were significantly dysregulated in the NIHHL model. Gene Ontology (GO) analysis showed that these lncRNAs were involved in multiple cellular components and systems, including synapses and the nervous and sensory systems. In addition, a lncRNA-mRNA network was constructed to identify core regulatory lncRNAs and transcription factors. KEGG analysis was also used to identify the potential pathways affected in NIHHL. These analyses allowed us to identify the guanine nucleotide binding protein alpha stimulating (GNAS) gene as a key transcription factor and the adrenergic signaling pathway as a key pathway in the regulation of NIHHL pathogenesis. Our study is the first, to our knowledge, to isolate a lncRNA-mediated regulatory pathway associated with NIHHL pathogenesis; these observations may provide fresh insight into the pathogenesis of NIHHL and may pave the way for therapeutic intervention in the future.
Collapse
Affiliation(s)
- Wei Wei
- Department of Otology, Shengjing Hospital, China Medical University, Shenyang 110004, China
| | - Xi Shi
- Department of Otolaryngology-Head and Neck, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China; The Institute of Audiology and Speech Science of Xuzhou Medical College, Xuzhou 221004, China
| | - Wei Xiong
- Department of Otolaryngology-Head and Neck, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
| | - Lu He
- Department of Otolaryngology-Head and Neck, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
| | - Zheng-De Du
- Department of Otolaryngology-Head and Neck, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
| | - Tengfei Qu
- Department of Otolaryngology-Head and Neck, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
| | - Yue Qi
- Department of Otolaryngology-Head and Neck, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
| | - Shu-Sheng Gong
- Department of Otolaryngology-Head and Neck, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
| | - Ke Liu
- Department of Otolaryngology-Head and Neck, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China.
| | - Xiulan Ma
- Department of Otology, Shengjing Hospital, China Medical University, Shenyang 110004, China.
| |
Collapse
|
46
|
Dirks C, Nelson PB, Sladen DP, Oxenham AJ. Mechanisms of Localization and Speech Perception with Colocated and Spatially Separated Noise and Speech Maskers Under Single-Sided Deafness with a Cochlear Implant. Ear Hear 2020; 40:1293-1306. [PMID: 30870240 PMCID: PMC6732049 DOI: 10.1097/aud.0000000000000708] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
OBJECTIVES This study tested listeners with a cochlear implant (CI) in one ear and acoustic hearing in the other ear, to assess their ability to localize sound and to understand speech in collocated or spatially separated noise or speech maskers. DESIGN Eight CI listeners with contralateral acoustic hearing ranging from normal hearing to moderate sensorineural hearing loss were tested. Localization accuracy was measured in five of the listeners using stimuli that emphasized the separate contributions of interaural level differences (ILDs) and interaural time differences (ITD) in the temporal envelope and/or fine structure. Sentence recognition was tested in all eight CI listeners, using collocated and spatially separated speech-shaped Gaussian noise and two-talker babble. Performance was compared with that of age-matched normal-hearing listeners via loudspeakers or via headphones with vocoder simulations of CI processing. RESULTS Localization improved with the CI but only when high-frequency ILDs were available. Listeners experienced no additional benefit via ITDs in the stimulus envelope or fine structure using real or vocoder-simulated CIs. Speech recognition in two-talker babble improved with a CI in seven of the eight listeners when the target was located at the front and the babble was presented on the side of the acoustic-hearing ear, but otherwise showed little or no benefit of a CI. CONCLUSION Sound localization can be improved with a CI in cases of significant residual hearing in the contralateral ear, but only for sounds with high-frequency content, and only based on ILDs. In speech understanding, the CI contributed most when it was in the ear with the better signal to noise ratio with a speech masker.
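For orientation, the size of the ITD cue discussed above can be approximated with the textbook Woodworth spherical-head model; this is purely illustrative and not part of the study's analysis, and the head radius and sound speed are generic assumed values.

```python
# Woodworth spherical-head approximation of the interaural time difference
# (ITD) as a function of source azimuth (illustrative only).
import numpy as np

def woodworth_itd_s(azimuth_deg, head_radius_m=0.0875, c_m_s=343.0):
    theta = np.deg2rad(azimuth_deg)
    return (head_radius_m / c_m_s) * (theta + np.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:2d} deg -> ITD ~ {woodworth_itd_s(az) * 1e6:6.0f} us")
```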
Collapse
Affiliation(s)
- Coral Dirks
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN, USA
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA
| | - Peggy B. Nelson
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN, USA
| | - Douglas P. Sladen
- Department of Communication Sciences and Disorders, Western Washington University, Bellingham, WA, USA
| | - Andrew J. Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA
| |
Collapse
|
47
|
The ongoing search for cochlear synaptopathy in humans: Masked thresholds for brief tones in Threshold Equalizing Noise. Hear Res 2020; 392:107960. [PMID: 32334105 DOI: 10.1016/j.heares.2020.107960] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/20/2019] [Revised: 03/31/2020] [Accepted: 04/01/2020] [Indexed: 12/28/2022]
Abstract
This study aimed to advance towards a clinical diagnostic method for detection of cochlear synaptopathy with the hypothesis that synaptopathy should be manifested in elevated masked thresholds for brief tones. This hypothesis was tested in tinnitus sufferers, as they are thought to have some degree of synaptopathy. Near-normal-hearing tinnitus sufferers and their matched controls were asked to detect pure tones with durations of 5, 10, 100, and 200 ms presented in low- and high-level Threshold Equalizing Noise. In addition, lifetime noise exposure was estimated for all participants. Contrary to the hypothesis, there was no significant difference in masked thresholds for brief tones between tinnitus sufferers and their matched controls. Masked thresholds were also not related to lifetime noise exposure. There are two possible explanations of the results: 1) the participants in our study did not have cochlear synaptopathy, or 2) synaptopathy does not lead to elevated masked thresholds for brief tones. This study adds a new approach to the growing list of behavioral methods that attempted to detect potential signs of cochlear synaptopathy in humans.
Collapse
|
48
|
Van Eeckhoutte M, Folkeard P, Glista D, Scollie S. Speech recognition, loudness, and preference with extended bandwidth hearing aids for adult hearing aid users. Int J Audiol 2020; 59:780-791. [PMID: 32309996 DOI: 10.1080/14992027.2020.1750718] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
Objective: In contrast to the past, some current hearing aids can provide gain for frequencies above 4-5 kHz. This study assessed the effect of wider bandwidth on outcome measures using hearing aids fitted with the DSL v5.0 prescription. Design: There were two conditions: an extended bandwidth condition, for which the maximum available bandwidth was provided, and a restricted bandwidth condition, in which gain was reduced for frequencies above 4.5 kHz. Outcome measures were assessed in both conditions. Study sample: Twenty-four participants with mild-to-moderately-severe sensorineural high-frequency sloping hearing loss. Results: Providing extended bandwidth resulted in maximum audible output frequency values of 7.5 kHz on average for an input level of 65 dB SPL. An improvement in consonant discrimination scores (4.1%), attributable to better perception of /s/, /z/, and /t/ phonemes, was found in the extended bandwidth condition, but no significant change in loudness perception or preferred listening levels was found. Most listeners (79%) had either no preference (33%) or some preference for the extended bandwidth condition (46%). Conclusions: The results suggest that providing the maximum bandwidth available with modern hearing aids fitted with DSL v5.0, using targets from 0.25 to 8 kHz, can be beneficial for the tested population.
Collapse
Affiliation(s)
| | - Paula Folkeard
- National Centre for Audiology, Western University, London, Canada
| | - Danielle Glista
- National Centre for Audiology, Western University, London, Canada; Communication Sciences and Disorders, Faculty of Health Sciences, Western University, London, Canada
| | - Susan Scollie
- National Centre for Audiology, Western University, London, Canada; Communication Sciences and Disorders, Faculty of Health Sciences, Western University, London, Canada
| |
Collapse
|
49
|
Mehta AH, Oxenham AJ. Effect of lowest harmonic rank on fundamental-frequency difference limens varies with fundamental frequency. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2020; 147:2314. [PMID: 32359332 PMCID: PMC7166120 DOI: 10.1121/10.0001092] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/19/2019] [Revised: 03/25/2020] [Accepted: 03/27/2020] [Indexed: 06/11/2023]
Abstract
This study investigated the relationship between fundamental frequency difference limens (F0DLs) and the lowest harmonic number present over a wide range of F0s (30-2000 Hz) for 12-component harmonic complex tones that were presented in either sine or random phase. For fundamental frequencies (F0s) between 100 and 400 Hz, a transition from low (∼1%) to high (∼5%) F0DLs occurred as the lowest harmonic number increased from about seven to ten, in line with earlier studies. At lower and higher F0s, the transition between low and high F0DLs occurred at lower harmonic numbers. The worsening performance at low F0s was reasonably well predicted by the expected decrease in spectral resolution below about 500 Hz. At higher F0s, the degradation in performance at lower harmonic numbers could not be predicted by changes in spectral resolution but remained relatively good (<2%-3%) in some conditions, even when all harmonics were above 8 kHz, confirming that F0 can be extracted from harmonics even when temporal envelope or fine-structure cues are weak or absent.
Collapse
Affiliation(s)
- Anahita H Mehta
- Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
| | - Andrew J Oxenham
- Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
| |
Collapse
|
50
|
Kara E, Aydın K, Akbulut AA, Karakol SN, Durmaz S, Yener HM, Gözen ED, Kara H. Assessment of Hidden Hearing Loss in Normal Hearing Individuals with and Without Tinnitus. J Int Adv Otol 2020; 16:87-92. [PMID: 32209515 PMCID: PMC7224424 DOI: 10.5152/iao.2020.7062] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2019] [Revised: 06/19/2019] [Accepted: 06/30/2019] [Indexed: 01/31/2023] Open
Abstract
OBJECTIVES To evaluate the functions of cochlear structures and the distal part of the auditory nerve, as well as dead regions within the cochlea, in individuals with normal hearing with or without tinnitus by using electrophysiological tests. MATERIALS AND METHODS Nine individuals (ages: 21-59 years) with normal hearing with tinnitus were included in the study group. Thirteen individuals (ages: 25-60 years) with normal hearing without tinnitus were included in the control group. Immittancemetric examination, pure-tone audiometry (125 Hz-16 kHz), speech audiometry in quiet and noise environments, transient evoked otoacoustic emissions (TEOAEs), distortion product otoacoustic emissions (DPOAEs), the threshold equalizing noise (TEN) test (500 Hz-4 kHz), electrocochleography (ECochG), the Beck Depression Questionnaire, the Tinnitus Handicap Questionnaire, and the Visual Analog Scale were performed. RESULTS In the study group, three patients were found to have minimal depression and six to have mild depression. In pure-tone audiometry, thresholds (6-16 kHz) in the study group were significantly higher than those of the control group at all frequencies. In the study group, lower performance scores were obtained in speech discrimination in noise in both ears. In the control group, no dead regions were detected in the TEN test, whereas 75% of subjects in the study group had dead regions. DPOAE and TEOAE responses did not differ between the study and control groups. In the ECochG test, subjects in the study group showed an increase in the summating potential/action potential (SP/AP) ratio in both ears. CONCLUSION Determination of the SP/AP ratio in patients with tinnitus may be useful in diagnosing hidden hearing loss. Detection of dead regions in 75% of patients in the TEN test may indicate that inner hair cells may be responsible for tinnitus.
Collapse
Affiliation(s)
- Eyyup Kara
- Department of Audiology, İstanbul University-Cerrahpaşa School of Health Sciences, İstanbul, Turkey
| | - Kübra Aydın
- Department of Audiology, İstanbul University-Cerrahpaşa School of Health Sciences, İstanbul, Turkey
| | - A Alperen Akbulut
- Department of Audiology, İstanbul University-Cerrahpaşa School of Health Sciences, İstanbul, Turkey
| | - Sare Nur Karakol
- Department of Audiology, İstanbul University-Cerrahpaşa School of Health Sciences, İstanbul, Turkey
| | - Serkan Durmaz
- Department of Audiology, İstanbul University-Cerrahpaşa School of Health Sciences, İstanbul, Turkey
| | - H Murat Yener
- Department of Otorhinolaryngology, İstanbul University-Cerrahpaşa, Cerrahpaşa School of Medicine, İstanbul, Turkey
| | - E Deniz Gözen
- Department of Otorhinolaryngology, İstanbul University-Cerrahpaşa, Cerrahpaşa School of Medicine, İstanbul, Turkey
| | - Halide Kara
- Department of Audiology, İstanbul University-Cerrahpaşa School of Health Sciences, İstanbul, Turkey
| |
Collapse
|