1. Cychosz M, Winn MB, Goupell MJ. How to vocode: Using channel vocoders for cochlear-implant research. J Acoust Soc Am 2024;155:2407-2437. [PMID: 38568143] [PMCID: PMC10994674] [DOI: 10.1121/10.0025274]
Abstract
The channel vocoder has become a useful tool to understand the impact of specific forms of auditory degradation, particularly the spectral and temporal degradation that reflects cochlear-implant processing. Vocoders have many parameters that allow researchers to answer questions about cochlear-implant processing in ways that overcome some logistical complications of controlling for factors in individual cochlear implant users. However, there is such a large variety in the implementation of vocoders that the term "vocoder" is not specific enough to describe the signal processing used in these experiments. Misunderstanding vocoder parameters can result in experimental confounds or unexpected stimulus distortions. This paper highlights the signal processing parameters that should be specified when describing vocoder construction. The paper also provides guidance on how to determine vocoder parameters within perception experiments, given the experimenter's goals and research questions, to avoid common signal processing mistakes. Throughout, we assume that experimenters are interested in vocoders with the specific goal of better understanding cochlear implants.
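The construction the paper parameterizes can be made concrete with a short sketch. Below is a minimal noise-band channel vocoder in Python (numpy/scipy); the channel count, frequency range, filter order, and envelope cutoff are illustrative choices, not the paper's recommendations:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_band_vocoder(x, fs, n_ch=8, f_lo=200.0, f_hi=7000.0, env_cut=160.0):
    """Minimal noise-band channel vocoder (all parameter values illustrative)."""
    edges = np.geomspace(f_lo, f_hi, n_ch + 1)       # log-spaced channel edges
    noise = np.random.default_rng(0).standard_normal(len(x))  # noise carrier
    lp = butter(4, env_cut, btype="lowpass", fs=fs, output="sos")
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        bp = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(bp, x)                    # analysis filter
        env = sosfiltfilt(lp, np.abs(band))          # rectify + smooth envelope
        env = np.maximum(env, 0.0)                   # clip filter undershoot
        out += env * sosfiltfilt(bp, noise)          # modulate band-limited noise
    return out / np.max(np.abs(out))                 # normalize output level
```

Each named parameter here (number of channels, band edges, filter order and slopes, envelope cutoff, carrier type) is one of the specifications the authors argue must be reported.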
Affiliation(s)
- Margaret Cychosz: Department of Linguistics, University of California, Los Angeles, Los Angeles, California 90095, USA
- Matthew B Winn: Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, Minnesota 55455, USA
- Matthew J Goupell: Department of Hearing and Speech Sciences, University of Maryland, College Park, College Park, Maryland 20742, USA

2. Newman RS, Morini G, Shroads E, Chatterjee M. Toddlers' fast-mapping from noise-vocoded speech. J Acoust Soc Am 2020;147:2432. [PMID: 32359241] [PMCID: PMC7176458] [DOI: 10.1121/10.0001129]
Abstract
The ability to recognize speech that is degraded spectrally is a critical skill for successfully using a cochlear implant (CI). Previous research has shown that toddlers with normal hearing can successfully recognize noise-vocoded words as long as the signal contains at least eight spectral channels [Newman and Chatterjee (2013). J. Acoust. Soc. Am. 133(1), 483-494; Newman, Chatterjee, Morini, and Remez (2015). J. Acoust. Soc. Am. 138(3), EL311-EL317], although they have difficulty with signals that contain only four channels of information. Young children with CIs not only need to match a degraded speech signal to a stored representation (word recognition), but they also need to create new representations (word learning), a task that is likely to be more cognitively demanding. Normal-hearing toddlers aged 34 months were tested on their ability to initially learn (fast-map) new words from noise-vocoded stimuli. While children were successful at fast-mapping new words from 16-channel noise-vocoded stimuli, they failed to do so from 8-channel noise-vocoded speech. The level of degradation imposed by 8-channel vocoding appears sufficient to disrupt fast-mapping in young children. Recent results indicate that only CI patients with high spectral resolution can benefit from more than eight active electrodes. This suggests that for many children with CIs, reduced spectral resolution may limit their acquisition of novel words.
Affiliation(s)
- Rochelle S Newman: Department of Hearing and Speech Sciences, University of Maryland, 0100 Lefrak Hall, College Park, Maryland 20742, USA
- Giovanna Morini: Department of Communication Sciences and Disorders, University of Delaware, 100 Discovery Boulevard, Newark, Delaware 19713, USA
- Emily Shroads: Department of Hearing and Speech Sciences, University of Maryland, 0100 Lefrak Hall, College Park, Maryland 20742, USA
- Monita Chatterjee: Boys Town National Research Hospital, 555 North 30th Street, Omaha, Nebraska 68131, USA

3. Patro C, Mendel LL. Semantic influences on the perception of degraded speech by individuals with cochlear implants. J Acoust Soc Am 2020;147:1778. [PMID: 32237796] [DOI: 10.1121/10.0000934]
Abstract
This study investigated whether speech intelligibility in cochlear implant (CI) users is affected by semantic context. Three groups participated in two experiments: two groups of listeners with normal hearing (NH) listened to either full-spectrum speech or vocoded speech, and one CI group listened to full-spectrum speech. Experiment 1 measured participants' sentence recognition as a function of target-to-masker ratio (four-talker babble masker), and experiment 2 measured perception of interrupted speech as a function of duty cycle (long/short uninterrupted speech segments). Listeners were presented with both semantically congruent and incongruent targets. Results from the two experiments suggested that NH listeners benefitted more from the semantic cues as the listening conditions became more challenging (lower signal-to-noise ratios and interrupted speech with longer silent intervals). The CI group, however, received minimal benefit from context and therefore performed poorly in such conditions. Conversely, in the less challenging conditions, CI users benefitted greatly from the semantic context, whereas NH listeners did not rely on such cues. The results also suggested that this differential use of semantic cues originates from the spectro-temporal degradations experienced by CI users, which could be a contributing factor in their poor performance in suboptimal environments.
Affiliation(s)
- Chhayakanta Patro: Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55414, USA
- Lisa Lucks Mendel: School of Communication Sciences and Disorders, University of Memphis, Memphis, Tennessee 38152, USA

4. Patro C, Mendel LL. Gated word recognition by postlingually deafened adults with cochlear implants: Influence of semantic context. J Speech Lang Hear Res 2018;61:145-158. [PMID: 29242894] [DOI: 10.1044/2017_jslhr-h-17-0141]
Abstract
Purpose: The main goal of this study was to investigate the minimum amount of sensory information required to recognize spoken words (isolation points [IPs]) in listeners with cochlear implants (CIs) and to investigate facilitative effects of semantic context on the IPs.
Method: Listeners with CIs as well as those with normal hearing (NH) participated in the study. In Experiment 1, the CI users listened to unprocessed (full-spectrum) stimuli, and individuals with NH listened to full-spectrum or vocoder-processed speech. IPs were determined for both groups, who listened to gated consonant-nucleus-consonant words selected on the basis of their lexical properties. In Experiment 2, the role of semantic context on IPs was evaluated. Target stimuli were chosen from the Revised Speech Perception in Noise corpus based on the lexical properties of the final words.
Results: The results indicated that spectrotemporal degradation adversely affected IPs for gated words, and CI users as well as NH participants listening to vocoded speech had longer IPs than NH participants who listened to full-spectrum speech. In addition, there was a clear disadvantage due to lack of semantic context in all groups, regardless of the spectral composition of the target speech (full spectrum or vocoded). Finally, we showed that CI users (and NH listeners presented with vocoded speech) can overcome such word-processing difficulties with the help of semantic context and perform as well as NH listeners.
Conclusion: Word recognition occurs even before the entire word is heard because listeners with NH associate an acoustic input with its mental representation to understand speech. The results of this study provide insight into the role of spectral degradation in the processing of spoken words in isolation and the potential benefits of semantic context. These results may also explain why CI users rely substantially on semantic context.
Affiliation(s)
- Lisa Lucks Mendel: School of Communication Sciences & Disorders, University of Memphis, TN, USA

5. Patro C, Mendel LL. Role of contextual cues on the perception of spectrally reduced interrupted speech. J Acoust Soc Am 2016;140:1336. [PMID: 27586760] [DOI: 10.1121/1.4961450]
Abstract
Understanding speech within an auditory scene is constantly challenged by interfering noise in suboptimal listening environments when noise hinders the continuity of the speech stream. In such instances, a typical auditory-cognitive system perceptually integrates available speech information and "fills in" missing information in the light of semantic context. However, individuals with cochlear implants (CIs) find it difficult and effortful to understand interrupted speech compared to their normal-hearing counterparts. This inefficiency in perceptual integration of speech could be attributed to further degradations in the spectral-temporal domain imposed by CIs, making it difficult to utilize contextual evidence effectively. To address these issues, 20 normal-hearing adults listened to speech that was spectrally reduced, or both spectrally reduced and interrupted, in a manner similar to CI processing. The Revised Speech Perception in Noise test, which includes contextually rich and contextually poor sentences, was used to evaluate the influence of semantic context on speech perception. Results indicated that listeners benefited more from semantic context when they listened to spectrally reduced speech alone. For the spectrally reduced interrupted speech, contextual information was not as helpful under significant spectral reductions but became beneficial as the spectral resolution improved. These results suggest that top-down processing facilitates speech perception up to a point, and fails to facilitate speech understanding when the speech signals are significantly degraded.
Affiliation(s)
- Chhayakanta Patro: School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, Tennessee 38152, USA
- Lisa Lucks Mendel: School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, Tennessee 38152, USA

6. Bhargava P, Gaudrain E, Başkent D. The intelligibility of interrupted speech: Cochlear implant users and normal hearing listeners. J Assoc Res Otolaryngol 2016;17:475-91. [PMID: 27090115] [PMCID: PMC5023536] [DOI: 10.1007/s10162-016-0565-9]
Abstract
Compared with normal-hearing listeners, cochlear implant (CI) users display a loss of intelligibility of speech interrupted by silence or noise, possibly due to a reduced ability to integrate and restore speech glimpses across silence or noise intervals. The present study was conducted to establish the extent of the deficit typical CI users have in understanding interrupted high-context sentences as a function of a range of interruption rates (1.5 to 24 Hz) and duty cycles (50 and 75%). Further, factors such as reduced signal quality of CI signal transmission and advanced age, as well as potentially lower speech intelligibility of CI users even in the absence of the interruption manipulation, were explored by presenting young, as well as age-matched, normal-hearing (NH) listeners with full-spectrum and vocoded speech (eight-channel, and matched to the CI users' baseline speech intelligibility). While the actual CI users had more difficulty understanding interrupted speech and taking advantage of faster interruption rates and increased duty cycle than the eight-channel noise-band vocoded listeners, their performance was similar to that of the intelligibility-matched noise-band vocoded listeners. These results suggest that while loss of spectro-temporal resolution indeed plays an important role in reduced intelligibility of interrupted speech, this factor alone cannot entirely explain the deficit. Other factors associated with real CIs, such as aging or failure in transmission of essential speech cues, seem to contribute additionally to the poor intelligibility of interrupted speech.
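The interruption manipulation used here (periodic on/off gating at a given rate and duty cycle) is easy to sketch. A minimal Python version with numpy; the 5-ms ramp is an assumed smoothing choice to avoid gating clicks, not a parameter reported above:

```python
import numpy as np

def interrupt(x, fs, rate_hz=8.0, duty=0.5, ramp_ms=5.0):
    """Periodically gate a signal on and off (e.g., 1.5-24 Hz rate,
    0.5 or 0.75 duty cycle, matching the study's condition ranges)."""
    t = np.arange(len(x)) / fs
    gate = ((t * rate_hz) % 1.0 < duty).astype(float)   # 1 = on, 0 = silent gap
    n = int(fs * ramp_ms / 1000)                        # smooth the gate edges
    kernel = np.hanning(2 * n + 1)
    gate = np.convolve(gate, kernel / kernel.sum(), mode="same")
    return x * gate
```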

7. Aguiar DE, Taylor NE, Li J, Gazanfari DK, Talavage TM, Laflen JB, Neuberger H, Svirsky MA. Information theoretic evaluation of a noiseband-based cochlear implant simulator. Hear Res 2015;333:185-193. [PMID: 26409068] [DOI: 10.1016/j.heares.2015.09.008]
Abstract
Noise-band vocoders are often used to simulate the signal processing algorithms used in cochlear implants (CIs), producing acoustic stimuli that may be presented to normal-hearing (NH) subjects. Such evaluations circumvent the heterogeneity of CI user populations, achieving greater experimental control than when testing CI subjects. However, it remains an open question whether advancements in algorithms developed on NH subjects using a simulator will necessarily improve performance in CI users. This study assessed the similarity in vowel identification of CI subjects and NH subjects using an 8-channel noise-band vocoder simulator configured either to match input and output frequencies or to mimic the output after a basalward shift of input frequencies. Under each stimulus condition, NH subjects performed the task both with and without feedback/training. Similarity of NH subjects to CI users was evaluated using correct identification rates and information theoretic approaches. Feedback/training produced higher rates of correct identification, as expected, but also resulted in error patterns that were closer to those of the CI users. Further evaluation remains necessary to determine how patterns of confusion at the token level are affected by the various parameters in CI simulators, providing insight into how a true CI simulation may be developed to facilitate more rapid prototyping and testing of novel CI signal processing and electrical stimulation strategies.
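The abstract does not spell out its information theoretic measure; a common choice for comparing confusion patterns is the relative transmitted information of Miller and Nicely, sketched below in Python as an assumption about the style of analysis rather than the study's exact metric:

```python
import numpy as np

def relative_info_transmitted(confusion):
    """Relative transmitted information (Miller & Nicely, 1955 style)
    computed from a stimulus-by-response confusion-count matrix."""
    p = confusion / confusion.sum()                  # joint probabilities
    px = p.sum(axis=1, keepdims=True)                # stimulus marginals
    py = p.sum(axis=0, keepdims=True)                # response marginals
    nz = p > 0                                       # avoid log(0)
    mi = np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz]))  # mutual information
    hx = -np.sum(px[px > 0] * np.log2(px[px > 0]))   # stimulus entropy
    return mi / hx                                   # 1.0 = perfect transmission
```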
Affiliation(s)
- Daniel E Aguiar: School of Electrical & Computer Engineering, Purdue University, West Lafayette, IN, USA
- N Ellen Taylor: Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Jing Li: School of Electrical & Computer Engineering, Purdue University, West Lafayette, IN, USA
- Daniel K Gazanfari: School of Electrical & Computer Engineering, Purdue University, West Lafayette, IN, USA
- Thomas M Talavage: School of Electrical & Computer Engineering, Purdue University, West Lafayette, IN, USA; Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- J Brandon Laflen: School of Electrical & Computer Engineering, Purdue University, West Lafayette, IN, USA
- Heidi Neuberger: DeVault Otologic Research Laboratory, Department of Otolaryngology/Head and Neck Surgery, Indiana University School of Medicine, Indianapolis, IN, USA
- Mario A Svirsky: DeVault Otologic Research Laboratory, Department of Otolaryngology/Head and Neck Surgery, Indiana University School of Medicine, Indianapolis, IN, USA; Department of Otolaryngology-Head & Neck Surgery, New York University School of Medicine, New York, NY, USA

8. Shannon RV. Auditory implant research at the House Ear Institute 1989-2013. Hear Res 2015;322:57-66. [PMID: 25449009] [PMCID: PMC4380593] [DOI: 10.1016/j.heares.2014.11.003]
Abstract
The House Ear Institute (HEI) had a long and distinguished history of auditory implant innovation and development. Early clinical innovations included being one of the first cochlear implant (CI) centers, being the first center in the US to implant a child with a cochlear implant, developing the auditory brainstem implant (ABI), and developing multiple surgical approaches and tools for otology. This paper reviews the second stage of auditory implant research at House: in-depth basic research on perceptual capabilities and signal processing for both cochlear implants and auditory brainstem implants. Psychophysical studies characterized the loudness and temporal perceptual properties of electrical stimulation as a function of electrical parameters. Speech studies with the noise-band vocoder showed that only four bands of tonotopically arrayed information were sufficient for speech recognition, and that most implant users were receiving the equivalent of 8-10 bands of information. The noise-band vocoder allowed us to evaluate the effects of manipulating the number of bands, the alignment of the bands with the original tonotopic map, and distortions in the tonotopic mapping, including holes in the neural representation. Stimulation pulse rate was shown to have only a small effect on speech recognition. Electric fields were manipulated in position and sharpness, showing the potential benefit of improved tonotopic selectivity. Auditory training shows great promise for improving speech recognition for all patients. The auditory brainstem implant was developed, improved, and expanded in application to new populations. Overall, the last 25 years of research at HEI helped increase the basic scientific understanding of electrical stimulation of hearing and contributed to improved outcomes for patients with CI and ABI devices. This article is part of a Special Issue entitled <Lasker Award>.
Affiliation(s)
- Robert V Shannon: Department of Otolaryngology, University of Southern California, Keck School of Medicine of USC, 806 W. Adams Blvd, Los Angeles, CA 90007-2505, USA

9. Won JH, Jones GL, Moon IJ, Rubinstein JT. Spectral and temporal analysis of simulated dead regions in cochlear implants. J Assoc Res Otolaryngol 2015;16:285-307. [PMID: 25740402] [DOI: 10.1007/s10162-014-0502-8]
Abstract
A cochlear implant (CI) electrode in a "cochlear dead region" will excite neighboring neural populations. In previous research that simulated such dead regions, stimulus information in the simulated dead region was either added to the immediately adjacent frequency regions or dropped entirely, and there was little difference in speech perception ability between the two conditions. This may imply that there is little benefit in ensuring that stimulus information on an electrode in a suspected cochlear dead region is transmitted. Alternatively, performance may be enhanced by a broader frequency redistribution, rather than adding stimuli from the dead region to the edges. In the current experiments, cochlear dead regions were introduced by excluding selected CI electrodes or vocoder noise-bands. Participants were assessed for speech understanding as well as spectral and temporal sensitivities as a function of the size of the simulated dead regions. In one set of tests, the normal input frequency range of the sound processor was distributed among the active electrodes in bands with approximately logarithmic spacing ("redistributed" maps); in the remaining tests, information in simulated dead regions was dropped ("dropped" maps). Word recognition and Schroeder-phase discrimination performance, which require both spectral and temporal sensitivities, decreased as the size of the simulated dead regions increased, but the redistributed and dropped remappings showed similar performance in these two tasks. Psychoacoustic experiments showed that the near match in word scores may reflect a tradeoff between spectral and temporal sensitivity: spectral-ripple discrimination was substantially degraded in the redistributed condition relative to the dropped condition, while performance on a temporal modulation detection task degraded in the dropped condition but remained constant in the redistributed condition.
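The two remapping strategies can be sketched directly. A minimal Python illustration of "redistributed" versus "dropped" analysis bands; the 188-7938 Hz input range, 16 channels, and dead-channel indices are assumed values for illustration only:

```python
import numpy as np

def band_edges(f_lo, f_hi, n_bands):
    """Approximately logarithmic band edges between f_lo and f_hi."""
    return np.geomspace(f_lo, f_hi, n_bands + 1)

# Example: 16-channel map; channels 7-9 (zero-based indices 6-8) fall
# in a simulated dead region. Frequency range is illustrative.
all_edges = band_edges(188.0, 7938.0, 16)
dead = {6, 7, 8}
n_active = 16 - len(dead)

# "Redistributed" map: full input range re-spaced over the active channels.
redistributed = band_edges(188.0, 7938.0, n_active)

# "Dropped" map: keep original analysis bands, discard dead-region bands.
dropped = [(all_edges[i], all_edges[i + 1]) for i in range(16) if i not in dead]
```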
Affiliation(s)
- Jong Ho Won: Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology-Head and Neck Surgery, University of Washington, Seattle, WA 98195, USA

10. Gresele ADP, Costa MJ, Garcia MV. Frequency compression in speech recognition of elderly individuals with possible cochlear dead regions [Compressão de frequências no reconhecimento de fala de idosos com possíveis zonas mortas na cóclea]. Rev CEFAC 2015. [DOI: 10.1590/1982-021620155414]
Abstract
Objective: to evaluate and compare the performance of elderly individuals with and without cochlear dead regions on speech recognition tests, in quiet and in noise, using hearing aids with and without nonlinear frequency compression.
Methods: 38 elderly individuals with mild-to-moderate, sloping hearing loss participated. Based on the results of a white-noise masking technique, they were divided into Group A (24 individuals with no evidence of cochlear dead regions) and Group B (14 individuals with possible cochlear dead regions). The Portuguese Sentence Lists test was administered, measuring the Percentage Indices of Sentence Recognition in Quiet and in Noise. Measures were obtained with hearing aids, with and without frequency compression.
Results: both groups showed significant improvement in quiet with frequency-compression hearing aids; in noise, neither group differed with versus without frequency compression. Comparing the groups, there was no difference in quiet with or without frequency compression. In noise without compression activated, there was a significant difference, with Group B performing better; in noise with the feature activated, there was no significant difference.
Conclusion: in quiet, both groups performed better using hearing aids with frequency compression. In noise, there was no difference between results with and without frequency compression. Comparing the groups, the measure obtained in noise with hearing aids without frequency compression showed a difference, with the dead-region group performing better.

11. Svirsky MA, Talavage TM, Sinha S, Neuburger H, Azadpour M. Gradual adaptation to auditory frequency mismatch. Hear Res 2014;322:163-70. [PMID: 25445816] [DOI: 10.1016/j.heares.2014.10.008]
Abstract
What is the best way to help humans adapt to a distorted sensory input? Interest in this question is more than academic. The answer may help facilitate auditory learning by people who became deaf after learning language and later received a cochlear implant (a neural prosthesis that restores hearing through direct electrical stimulation of the auditory nerve). There is evidence that some cochlear implants (which provide information that is spectrally degraded to begin with) stimulate neurons with higher characteristic frequency than the acoustic frequency of the original stimulus. In other words, the stimulus is shifted in frequency with respect to what the listener expects to hear. This frequency misalignment may have a negative influence on speech perception by CI users. However, a perfect frequency-place alignment may result in the loss of important low frequency speech information. A trade-off may involve a gradual approach: start with correct frequency-place alignment to allow listeners to adapt to the spectrally degraded signal first, and then gradually increase the frequency shift to allow them to adapt to it over time. We used an acoustic model of a cochlear implant to measure adaptation to a frequency-shifted signal, using either the gradual approach or the "standard" approach (sudden imposition of the frequency shift). Listeners in both groups showed substantial auditory learning, as measured by increases in speech perception scores over the course of fifteen one-hour training sessions. However, the learning process was faster for listeners who were exposed to the gradual approach. These results suggest that gradual rather than sudden exposure may facilitate perceptual learning in the face of a spectrally degraded, frequency-shifted input. This article is part of a Special Issue entitled <Lasker Award>.
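The gradual schedule itself is simple to express. The sketch below assumes a linear progression of the frequency shift over the fifteen sessions and an arbitrary final shift magnitude; the abstract specifies neither detail, so both are assumptions:

```python
def session_shift_mm(session, n_sessions=15, final_shift_mm=3.0):
    """Frequency-place shift applied at each one-hour training session:
    no shift at session 0, full shift by the last session.
    Linear schedule and 3-mm endpoint are illustrative assumptions."""
    return final_shift_mm * session / (n_sessions - 1)

shifts = [session_shift_mm(s) for s in range(15)]    # 0.0, 0.21, ..., 3.0 mm
```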
Affiliation(s)
- Mario A Svirsky: Dept. of Otolaryngology-HNS, New York University School of Medicine, New York, NY, USA; Center of Neural Science, New York University, New York, NY, USA
- Thomas M Talavage: ECE and BME Departments, Purdue University, West Lafayette, IN, USA
- Heidi Neuburger: Dept. of Otolaryngology-HNS, Indiana University School of Medicine, Indianapolis, IN, USA
- Mahan Azadpour: Dept. of Otolaryngology-HNS, New York University School of Medicine, New York, NY, USA

12. Benard MR, Başkent D. Perceptual learning of temporally interrupted spectrally degraded speech. J Acoust Soc Am 2014;136:1344. [PMID: 25190407] [DOI: 10.1121/1.4892756]
Abstract
Normal-hearing (NH) listeners make use of context, speech redundancy, and top-down linguistic processes to perceptually restore inaudible or masked portions of speech. Previous research has shown poorer perception and restoration of interrupted speech in cochlear-implant (CI) users and in NH listeners tested with acoustic simulations of CIs. Three hypotheses were investigated: (1) training with CI simulations of interrupted sentences can teach listeners to use the high-level restoration mechanisms more effectively, (2) the phonemic restoration benefit, an increase in intelligibility of interrupted sentences once their silent gaps are filled with noise, can be induced with training, and (3) perceptual learning of interrupted sentences can be reflected in clinical speech audiometry. To test these hypotheses, NH listeners were trained using periodically interrupted sentences that were also spectrally degraded with a noiseband vocoder as a CI simulation. Feedback was presented by displaying the sentence text and playing back both the intact and the interrupted CI simulation of the sentence. Training induced no phonemic restoration benefit, and learning was not transferred to speech audiometry measured with words. However, a significant improvement was observed in the overall intelligibility of interrupted spectrally degraded sentences, with or without filler noise, suggesting possibly better use of restoration mechanisms as a result of training.
Affiliation(s)
- Michel Ruben Benard: Pento Audiology Center Zwolle, Oosterlaan 20, 8011 GC Zwolle, The Netherlands
- Deniz Başkent: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands

13. Fuller CD, Galvin JJ, Maat B, Free RH, Başkent D. The musician effect: does it persist under degraded pitch conditions of cochlear implant simulations? Front Neurosci 2014;8:179. [PMID: 25071428] [PMCID: PMC4075350] [DOI: 10.3389/fnins.2014.00179]
Abstract
Cochlear implants (CIs) are auditory prostheses that restore hearing via electrical stimulation of the auditory nerve. Compared to normal acoustic hearing, sounds transmitted through the CI are spectro-temporally degraded, causing difficulties in challenging listening tasks such as speech intelligibility in noise and perception of music. In normal hearing (NH), musicians have been shown to perform better than non-musicians in auditory processing and perception, especially for challenging listening tasks. This "musician effect" was attributed to better processing of pitch cues, as well as better overall auditory cognitive functioning in musicians. Does the musician effect persist when pitch cues are degraded, as they would be in signals transmitted through a CI? To answer this question, NH musicians and non-musicians were tested while listening to unprocessed signals or to signals processed by an acoustic CI simulation. The tasks depended increasingly on pitch perception: (1) speech intelligibility (words and sentences) in quiet or in noise, (2) vocal emotion identification, and (3) melodic contour identification (MCI). For speech perception, there was no musician effect with the unprocessed stimuli, and a small musician effect only for word identification in one noise condition with the CI simulation. For emotion identification, there was a small musician effect in both processing conditions. For MCI, there was a large musician effect in both. Overall, the effect was stronger as the importance of pitch in the listening task increased. This suggests that the musician effect may be rooted more in pitch perception than in a global advantage in cognitive processing (in which case musicians would have performed better in all tasks). The results further suggest that musical training before (and possibly after) implantation might offer some advantage in pitch processing that could partially benefit speech perception, and more strongly benefit emotion and music perception.
Affiliation(s)
- Christina D Fuller: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands
- John J Galvin: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands; Division of Communication and Auditory Neuroscience, House Research Institute, Los Angeles, CA, USA; Department of Head and Neck Surgery, David Geffen School of Medicine, UCLA, Los Angeles, CA, USA
- Bert Maat: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands
- Rolien H Free: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands
- Deniz Başkent: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands

14. Bhargava P, Gaudrain E, Başkent D. Top-down restoration of speech in cochlear-implant users. Hear Res 2014;309:113-23. [DOI: 10.1016/j.heares.2013.12.003]

15. Bartlett EL. The organization and physiology of the auditory thalamus and its role in processing acoustic features important for speech perception. Brain Lang 2013;126:29-48. [PMID: 23725661] [PMCID: PMC3707394] [DOI: 10.1016/j.bandl.2013.03.003]
Abstract
The auditory thalamus, or medial geniculate body (MGB), is the primary sensory input to auditory cortex. Therefore, it plays a critical role in the complex auditory processing necessary for robust speech perception. This review will describe the functional organization of the thalamus as it relates to processing acoustic features important for speech perception, focusing on thalamic nuclei that relate to auditory representations of language sounds. The MGB can be divided into three main subdivisions, the ventral, dorsal, and medial subdivisions, each with different connectivity, auditory response properties, neuronal properties, and synaptic properties. Together, the MGB subdivisions actively and dynamically shape complex auditory processing and form ongoing communication loops with auditory cortex and subcortical structures.

16. Newman R, Chatterjee M. Toddlers' recognition of noise-vocoded speech. J Acoust Soc Am 2013;133:483-94. [PMID: 23297920] [PMCID: PMC3548833] [DOI: 10.1121/1.4770241]
Abstract
Despite their remarkable clinical success, cochlear-implant listeners today still receive spectrally degraded information. Much research has examined normally hearing adult listeners' ability to interpret spectrally degraded signals, primarily using noise-vocoded speech to simulate cochlear implant processing. Far less research has explored infants' and toddlers' ability to interpret spectrally degraded signals, despite the fact that children in this age range are frequently implanted. This study examines 27-month-old typically developing toddlers' recognition of noise-vocoded speech using a language-guided looking procedure. Children saw two images on each trial and heard a voice instructing them to look at one item ("Find the cat!"). Full-spectrum sentences or their noise-vocoded versions were presented with varying numbers of spectral channels. Toddlers showed equivalent proportions of looking to the target object with full-spectrum speech and 24- or 8-channel noise-vocoded speech; they failed to look appropriately with 2-channel noise-vocoded speech and showed variable performance with 4-channel noise-vocoded speech. Despite accurate looking performance for speech with at least eight channels, children were slower to respond appropriately as the number of channels decreased. These results indicate that 2-yr-olds have developed the ability to interpret vocoded speech, even without practice, but that doing so requires additional processing. These findings have important implications for pediatric cochlear implantation.
Affiliation(s)
- Rochelle Newman: Department of Hearing and Speech Sciences, 0100 Lefrak Hall, University of Maryland, College Park, Maryland 20742, USA

17. Whitmal NA, DeRoy K. Use of an adaptive-bandwidth protocol to measure importance functions for simulated cochlear implant frequency channels. J Acoust Soc Am 2012;131:1359-1370. [PMID: 22352509] [PMCID: PMC3292607] [DOI: 10.1121/1.3672684]
Abstract
The Articulation Index and Speech Intelligibility Index predict intelligibility scores from measurements of speech and hearing parameters. One component in the prediction is the frequency-importance function, a weighting function that characterizes contributions of particular spectral regions of speech to speech intelligibility. The purpose of this study was to determine whether such importance functions could similarly characterize contributions of electrode channels in cochlear implant systems. Thirty-eight subjects with normal hearing listened to vowel-consonant-vowel tokens, either as recorded or as output from vocoders that simulated aspects of cochlear implant processing. Importance functions were measured using the method of Whitmal and DeRoy [J. Acoust. Soc. Am. 130, 4032-4043 (2011)], in which signal bandwidths were varied adaptively to produce specified token recognition scores in accordance with the transformed up-down rules of Levitt [J. Acoust. Soc. Am. 49, 467-477 (1971)]. Psychometric functions constructed from recognition scores were subsequently converted into importance functions. Comparisons of the resulting importance functions indicate that vocoder processing causes peak importance regions to shift downward in frequency. This shift is attributed to changes in strategy and capability for detecting voicing in speech, and is consistent with previously measured data.
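The adaptive protocol named above follows Levitt's transformed up-down rules. One member of that family, a 2-down/1-up track on signal bandwidth that converges near 70.7% correct, is sketched below in Python; the starting bandwidth, step proportion, and trial count are illustrative, and the study itself used rules targeting several specified score levels:

```python
def staircase_bandwidth(run_trial, bw_hz=2000.0, step=0.2, n_trials=60):
    """Transformed up-down (2-down/1-up) track on signal bandwidth
    (Levitt, 1971). `run_trial(bw_hz)` returns True when the listener
    identifies the token correctly. Narrower bandwidth = harder."""
    streak = 0
    for _ in range(n_trials):
        if run_trial(bw_hz):
            streak += 1
            if streak == 2:            # two consecutive correct: harder
                bw_hz *= 1.0 - step    # narrow the band
                streak = 0
        else:                          # any error: easier
            bw_hz *= 1.0 + step        # widen the band
            streak = 0
    return bw_hz                       # tracks the ~70.7 %-correct bandwidth
```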
Affiliation(s)
- Nathaniel A Whitmal: Department of Communication Disorders, University of Massachusetts, Amherst, Massachusetts 01003, USA

19. Zhou N, Xu L, Lee CY. The effects of frequency-place shift on consonant confusion in cochlear implant simulations. J Acoust Soc Am 2010;128:401-9. [PMID: 20649234] [PMCID: PMC2921437] [DOI: 10.1121/1.3436558]
Abstract
The effects of frequency-place shift on consonant recognition and confusion matrices were examined. Frequency-place shift was manipulated using a noise-excited vocoder with 4 to 16 channels. In the vocoder processing, the location of the most apical carrier band varied from the matched condition (i.e., 28 mm from the base of the cochlea) to a basal shift (i.e., 22 mm from the base) in 1-mm steps. Ten normal-hearing subjects participated in the 20-alternative forced-choice test, where the consonants were presented in a /Ca/ context. Shifts of 3 mm or more caused the consonant recognition scores to decrease significantly, and the effects of spectral resolution disappeared once the shift reached 3 mm or more. Information transmitted for voicing and place of articulation varied with both spectral shift and spectral resolution, while information transmitted for manner was affected only by spectral shift, not spectral resolution. Spectral shift showed specific effects on the confusion patterns of the consonants: the direction of errors reversed as spectral shift increased, and the patterns of reversal were consistent across channel conditions. Overall, transmission of the consonant features can be accounted for by the acoustic features of the speech signal.
Affiliation(s)
- Ning Zhou: School of Hearing, Speech and Language Sciences, Ohio University, Athens, Ohio 45701, USA

20. Garadat SN, Litovsky RY, Yu G, Zeng FG. Effects of simulated spectral holes on speech intelligibility and spatial release from masking under binaural and monaural listening. J Acoust Soc Am 2010;127:977-89. [PMID: 20136220] [PMCID: PMC2830263] [DOI: 10.1121/1.3273897]
Abstract
The possibility that "dead regions" or "spectral holes" can account for some differences in performance between bilateral cochlear implant (CI) users and normal-hearing listeners was explored. Using a 20-band noise-excited vocoder to simulate CI processing, this study examined the effects of spectral holes on speech reception thresholds (SRTs) and spatial release from masking (SRM) in difficult listening conditions. Prior to processing, stimuli were convolved with head-related transfer functions to provide listeners with free-field directional cues. Processed stimuli were presented over headphones under binaural or monaural (right ear) conditions. Using Greenwood's frequency-position function [J. Acoust. Soc. Am. 87, 2592-2605 (1990)] and assuming a cochlear length of 35 mm, spectral holes of variable sizes (6 and 10 mm) and locations (base, middle, and apex) were created. Results show that middle-frequency spectral holes were the most disruptive to SRTs, whereas high-frequency spectral holes were the most disruptive to SRM. Spectral holes generally reduced binaural advantages in difficult listening conditions. These results suggest the importance of measuring dead regions in CI users. It is possible that customized programming of bilateral CI processors based on knowledge about dead regions can enhance performance in adverse listening situations.
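Greenwood's frequency-position map, with the 35-mm cochlear length assumed above, converts hole sizes in millimeters into frequency extents. A small Python sketch using the standard human constants; the mid-cochlea hole center is an arbitrary example:

```python
def greenwood_hz(x_mm, cochlea_mm=35.0):
    """Greenwood (1990) frequency-position function for humans:
    x_mm is distance from the apex; returns frequency in Hz."""
    x = x_mm / cochlea_mm                    # proportional distance, 0 to 1
    return 165.4 * (10.0 ** (2.1 * x) - 0.88)

# Example: frequency extent of a 6-mm spectral hole centered mid-cochlea.
center_mm = 17.5                             # arbitrary mid-cochlea location
lo = greenwood_hz(center_mm - 3.0)           # ~1.1 kHz
hi = greenwood_hz(center_mm + 3.0)           # ~2.7 kHz
```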
Affiliation(s)
- Soha N Garadat: Waisman Center, University of Wisconsin, 1500 Highland Avenue, Madison, Wisconsin 53705, USA

21. Apoux F, Healy EW. On the number of auditory filter outputs needed to understand speech: further evidence for auditory channel independence. Hear Res 2009;255:99-108. [PMID: 19539016] [DOI: 10.1016/j.heares.2009.06.005]
Abstract
The number of auditory filter outputs required to identify phonemes was estimated in two experiments. Stimuli were divided into 30 contiguous equivalent rectangular bandwidths (ERB(N)) spanning 80-7563 Hz. Normal-hearing listeners were presented with limited numbers of bands having frequency locations determined randomly from trial to trial to provide a general view, i.e., irrespective of specific band location, of the number of 1-ERB(N)-wide speech bands needed to identify phonemes. The first experiment demonstrated that 20 such bands are required to accurately identify vowels, and 16 are required to identify consonants. In the second experiment, speech-shaped noise or time-reversed speech was introduced to the non-speech bands at various signal-to-noise ratios. Considerably elevated noise levels were necessary to substantially affect phoneme recognition, confirming a high degree of channel independence in the auditory system. The independence observed between auditory filter outputs supports current views of speech recognition in noise in which listeners extract and combine pieces of information randomly distributed both in time and frequency. These findings also suggest that the ability to partition incoming sounds into a large number of narrow bands, an ability often lost in cases of hearing impairment or cochlear implantation, is critical for speech recognition in noise.
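The 30 one-ERB(N)-wide bands can be reconstructed from the ERB-number scale of Glasberg and Moore (1990), since 80-7563 Hz spans almost exactly 30 ERB(N). A Python sketch:

```python
import numpy as np

def hz_to_erb_number(f_hz):
    """ERB-number (Cam) scale of Glasberg & Moore (1990)."""
    return 21.4 * np.log10(4.37 * f_hz / 1000.0 + 1.0)

def erb_number_to_hz(e):
    """Inverse of the ERB-number scale."""
    return (10.0 ** (e / 21.4) - 1.0) * 1000.0 / 4.37

# 31 edges give 30 contiguous 1-ERB(N)-wide bands spanning 80-7563 Hz.
edges_hz = erb_number_to_hz(
    np.linspace(hz_to_erb_number(80.0), hz_to_erb_number(7563.0), 31))
```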
Affiliation(s)
- Frédéric Apoux: Department of Speech and Hearing Science, The Ohio State University, Columbus, OH 43210, USA

22. Prates LPCS, Silva FJFD, Iório MCM. Frequency compression and its effects in speech recognition. Pro Fono 2009;21:149-154. [PMID: 19629326] [DOI: 10.1590/s0104-56872009000200011]
Abstract
Background: frequency compression.
Aim: to evaluate the percentage speech recognition index (IPRF) using frequency compression at three different ratios.
Methods: monosyllabic words were recorded using a frequency-compression algorithm at three ratios (1:1, 2:1, and 3:1), generating three word lists. Eighteen listeners completed the IPRF task with the modified words. They were subdivided into two groups according to familiarity with the speech material: a group of audiologists (F) and a group of patients (P).
Results: a statistically significant decrease in accuracy was observed when using frequency compression. Group F performed better than Group P at all of the applied frequency compression ratios.
Conclusion: frequency compression hinders speech recognition; as the compression ratio increases, so does the level of difficulty. Familiarity with the words facilitates recognition in any hearing condition.

23. Zhou N, Xu L. Lexical tone recognition with spectrally mismatched envelopes. Hear Res 2008;246:36-43. [PMID: 18848614] [DOI: 10.1016/j.heares.2008.09.006]
Abstract
It has been shown that frequency-place mismatch has detrimental effects on English speech recognition. The present study investigated the effects of mismatched spectral distribution of envelopes on Mandarin Chinese tone recognition using a noise-excited vocoder. In Experiment 1, speech samples were processed to simulate a cochlear implant at various insertion depths: the carrier bands were shifted basally relative to the analysis bands by 1-7 mm in the cochlea. Nine normal-hearing Mandarin Chinese listeners participated in this experiment. Basal shift of the carriers only slightly affected tone recognition. The resistance of tone recognition to spectral shift can be attributed to overall amplitude contour cues that are independent of spectral manipulations. Experiment 2 examined the effects of frequency compression, where analysis bands widened by 2, 6, and 10 mm were compressively allocated to narrower carrier bands. Five of the nine subjects participated in Experiment 2. It appears that the expanded frequency information, especially at the low-frequency end, can compensate for the distortion from frequency compression. Thus, spectral shift might not pose a severe problem for tone recognition, and allocating a wider frequency range to include more low-frequency information might be beneficial for tone recognition.
Affiliation(s)
- Ning Zhou: School of Hearing, Speech and Language Sciences, Ohio University, Athens, OH 45701, USA

24. Apoux F, Bacon SP. Differential contribution of envelope fluctuations across frequency to consonant identification in quiet. J Acoust Soc Am 2008;123:2792. [PMID: 18529195] [PMCID: PMC2811548] [DOI: 10.1121/1.2897916]
Abstract
Two experiments investigated the effects of critical bandwidth and frequency region on the use of temporal envelope cues for speech. In both experiments, spectral details were reduced using vocoder processing. In experiment 1, consonant identification scores were measured in a condition for which the cutoff frequency of the envelope extractor was half the critical bandwidth (HCB) of the auditory filters centered on each analysis band. Results showed that performance was similar to that obtained in conditions for which the envelope cutoff was set to 160 Hz or above. Experiment 2 evaluated the impact of setting the cutoff frequency of the envelope extractor to values of 4, 8, and 16 Hz or to HCB in one or two contiguous bands of an eight-band vocoder; the cutoff was set to 16 Hz for all the other bands. Overall, consonant identification was not affected by removing envelope fluctuations above 4 Hz in the low- and high-frequency bands. In contrast, speech intelligibility decreased as the cutoff frequency was decreased from 16 to 4 Hz in the mid-frequency region. The behavioral results were fairly consistent with a physical analysis of the stimuli, suggesting that clearly measurable envelope fluctuations cannot be attenuated without affecting speech intelligibility.
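The envelope-extraction step being manipulated here is straightforward to sketch in Python. Treating the study's "critical bandwidth" as the ERB of Glasberg and Moore (1990), and the second-order filter, are assumptions for illustration:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def half_critical_bandwidth_hz(fc_hz):
    """Half the ERB at center frequency fc (Glasberg & Moore, 1990);
    using the ERB as the critical-bandwidth estimate is an assumption."""
    return 0.5 * 24.7 * (4.37 * fc_hz / 1000.0 + 1.0)

def extract_envelope(band, fs, cutoff_hz):
    """Envelope of one analysis band: half-wave rectify, then low-pass
    at the chosen cutoff (4, 8, 16, 160 Hz, or HCB of the channel)."""
    lp = butter(2, cutoff_hz, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(lp, np.maximum(band, 0.0))
```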
Affiliation(s)
- Frédéric Apoux: Psychoacoustics Laboratory, Department of Speech and Hearing Science, Arizona State University, PO Box 870102, Tempe, Arizona 85287-0102, USA

25. Goupell MJ, Laback B, Majdak P, Baumgartner WD. Effects of upper-frequency boundary and spectral warping on speech intelligibility in electrical stimulation. J Acoust Soc Am 2008;123:2295-309. [PMID: 18397034] [PMCID: PMC3061454] [DOI: 10.1121/1.2831738]
Abstract
Speech understanding was tested for seven listeners using 12-electrode Med-El cochlear implants (CIs) and six normal-hearing listeners using a CI simulation. Eighteen different types of processing were evaluated, which varied the frequency-to-tonotopic place mapping and the upper boundary of the frequency and stimulation range. Spectrally unwarped and warped conditions were included. Unlike previous studies on this topic, the lower boundary of the frequency and stimulation range was fixed while the upper boundary was varied. For the unwarped conditions, only eight to ten channels were needed in both quiet and noise to achieve no significant degradation in speech understanding compared to the normal 12-electrode speech processing. The unwarped conditions were often the best conditions for understanding speech; however, small changes in frequency-to-place mapping (<0.77 octaves for the most basal electrode) yielded no significant degradation in performance from the nearest unwarped condition. A second experiment measured the effect of feedback training for both the unwarped and warped conditions. Improvements were found for the unwarped and frequency-expanded conditions, but not for the compressed condition. These results have implications for new CI processing strategies, such as the inclusion of spectral localization cues.
Affiliation(s)
- Matthew J Goupell: Acoustics Research Institute, Austrian Academy of Sciences, Wohllebengasse 12-14, A-1040 Vienna, Austria

26. Moore BCJ. Speech recognition as a function of high-pass filter cutoff frequency for people with and without low-frequency cochlear dead regions. J Acoust Soc Am 2007;122:542-53. [PMID: 17622189] [DOI: 10.1121/1.2722055]
Abstract
Regions in the cochlea with no (or very few) functioning inner hair cells and/or neurons are called "dead regions" (DRs). The recognition of high-pass filtered nonsense syllables was measured as a function of filter cutoff frequency for hearing-impaired people with and without low-frequency (apical) cochlear DRs. The diagnosis of any DR was made using the TEN(HL) test, and psychophysical tuning curves were used to define the edge frequency (f(e)) more precisely. Stimuli were amplified differently for each ear, using the "Cambridge formula." For subjects with low-frequency hearing loss but without DRs, scores were high (about 78%) for low cutoff frequencies, remained approximately constant for cutoff frequencies up to 862 Hz, and then worsened with increasing cutoff frequency. For subjects with low-frequency DRs, performance was typically poor for the lowest cutoff frequency (100 Hz), improved as the cutoff frequency was increased to about 0.57f(e), and worsened with further increases. These results indicate that people with low-frequency DRs are able to make effective use of frequency components that fall in the range 0.57f(e) to f(e), but that frequency components below 0.57f(e) have deleterious effects. The results have implications for the fitting of hearing aids to people with low-frequency DRs.