1
Yoon YS, Whitaker R, White N. Frequency importance functions in simulated bimodal cochlear-implant users with spectral holes. J Acoust Soc Am 2024; 155:3589-3599. [PMID: 38829154] [PMCID: PMC11151433] [DOI: 10.1121/10.0026220]
Abstract
Frequency importance functions (FIFs) for simulated bimodal hearing were derived using sentence perception scores measured in quiet and in noise. Acoustic hearing was simulated with low-pass filtering. Electric hearing was simulated with a six-channel vocoder using three input frequency ranges, producing overlap, meet, and gap maps relative to the acoustic cutoff frequency. Spectral holes in the speech spectra were created within the electric stimulation by setting the amplitude(s) of selected channels to zero. FIFs differed significantly between frequency maps. In quiet, the three FIFs were similar, with weights increasing gradually toward channels 5 and 6 relative to the first three channels, although the most and least heavily weighted channels varied slightly across maps. In noise, the patterns of the three FIFs resembled those in quiet, with weights increasing more steeply toward channels 5 and 6 relative to the first four channels. Thus, channels 5 and 6 contributed most to speech perception and channels 1 and 2 contributed least, regardless of frequency map. Results suggest that the contribution of cochlear-implant frequency bands to bimodal speech perception depends on the degree of frequency overlap between acoustic and electric stimulation and on whether noise is absent or present.
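To make the simulated electric hearing concrete, the following is a minimal sketch (not the authors' code) of a noise vocoder with spectral holes: each analysis band's envelope modulates a band-limited noise carrier, and a channel is silenced by setting its amplitude to zero. The band edges, filter order, and envelope extraction are illustrative assumptions, not the study's frequency maps.

```python
# Illustrative noise vocoder with spectral holes; band edges and filter
# settings are assumptions, not the maps used in the cited study.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def vocode(signal, fs, band_edges_hz, dead_channels=()):
    """Noise-vocode `signal`; channels listed in `dead_channels` are zeroed."""
    out = np.zeros(len(signal))
    rng = np.random.default_rng(0)
    for ch, (lo, hi) in enumerate(band_edges_hz, start=1):
        if ch in dead_channels:          # spectral hole: channel amplitude set to zero
            continue
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))                        # temporal envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        out += envelope * carrier
    return out

# Example: six roughly log-spaced bands up to ~8 kHz, with channels 3 and 4 removed
edges = [(188, 438), (438, 938), (938, 1813), (1813, 3063), (3063, 5063), (5063, 7938)]
# vocoded = vocode(speech, fs=16000, band_edges_hz=edges, dead_channels=(3, 4))
```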
Affiliation(s)
- Yang-Soo Yoon
- Department of Communication Sciences and Disorders, Baylor University, Waco, Texas 76798, USA
- Reagan Whitaker
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, Tennessee 37232, USA
- Naomi White
- Department of Communication Sciences and Disorders, Baylor University, Waco, Texas 76798, USA
2
Kocabay AP, Aslan F, Yüce D, Turkyilmaz D. Speech in Noise: Implications of Age, Hearing Loss and Cognition. Folia Phoniatr Logop 2022; 74:345-351. [PMID: 35738235] [DOI: 10.1159/000525580]
Abstract
INTRODUCTION Individuals with hearing loss have reduced hearing sensitivity and may not adequately process the temporal cues in acoustic signals. Cognitive skills, which decline with aging and hearing loss, further reduce the ability to understand speech, so these individuals may experience communication problems in noisy environments. The aim of the study was to investigate the effect of sloping high-frequency hearing loss on speech perception in noise and to examine the impact of temporal and cognitive processing in young and middle-aged adults. METHODS Speech-in-noise (SIN), temporal processing, and cognitive tests were administered to individuals with hearing loss and with normal hearing aged 18-59 years. The measurements included the Matrix Sentence Test, the Binaural Temporal Fine Structure Sensitivity (TFS) Test, the Visual Aural Digit Span (VADS) test, and the Auditory Verbal Learning Test (AVLT). Twenty participants with normal hearing formed the control group, and 20 participants with high-frequency hearing loss formed the study group. RESULTS Hierarchical regression analysis for SIN was performed by entering three separate blocks of independent variables. Age and hearing loss entered the first block and explained a significant amount of variability in SIN (R2=0.72, p<0.001). Block 2 comprised scores from the TFS sensitivity test, characterizing temporal processing (R2 change=0.002, p<0.001). Block 3 consisted of scores from the VADS test and AVLT, characterizing cognitive processing and accounting for an additional portion of the SIN variance (R2 change=0.04, p<0.001). Age, hearing loss, and VADS contributed independently in the presence of all other independent variables. CONCLUSION The final model accounted for 76.2% of the variance in SIN. The results suggest that sloping hearing loss, aging, and cognitive decline affect auditory performance, and that poor performance begins at an early age. The findings also indicate that a more comprehensive approach may be needed to evaluate listening skills and identify communication problems.
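As a rough illustration of the blockwise (hierarchical) regression reported above, the sketch below adds each block of predictors in turn and reports the change in R². The column names are hypothetical placeholders standing in for the study's variables, not their actual labels.

```python
# Sketch of hierarchical regression with R-squared change per block;
# column names are hypothetical placeholders for the study's variables.
import pandas as pd
import statsmodels.api as sm

BLOCKS = [
    ["age", "hearing_loss"],        # block 1: demographics / audiometric
    ["tfs_score"],                  # block 2: temporal processing (TFS test)
    ["vads_score", "avlt_score"],   # block 3: cognitive processing (VADS, AVLT)
]

def hierarchical_r2(df: pd.DataFrame, outcome: str = "sin_score"):
    predictors, previous_r2 = [], 0.0
    for i, block in enumerate(BLOCKS, start=1):
        predictors += block
        model = sm.OLS(df[outcome], sm.add_constant(df[predictors])).fit()
        print(f"Block {i}: R2 = {model.rsquared:.3f}, "
              f"delta R2 = {model.rsquared - previous_r2:.3f}")
        previous_r2 = model.rsquared
```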
Affiliation(s)
- Filiz Aslan
- Department of Audiology, Hacettepe University, Ankara, Turkey
- Deniz Yüce
- Department of Preventive Oncology, Hacettepe University, Ankara, Turkey
3
Abstract
OBJECTIVES Current hearing aids have a limited bandwidth, which limits the intelligibility and quality of their output, and inhibits their uptake. Recent advances in signal processing, as well as novel methods of transduction, allow for a greater useable frequency range. Previous studies have shown a benefit for this extended bandwidth in consonant recognition, talker-sex identification, and separating sound sources. To explore whether there would be any direct spatial benefits to extending bandwidth, we used a dynamic localization method in a realistic situation. DESIGN Twenty-eight adult participants with minimal hearing loss reoriented themselves as quickly and accurately as comfortable to a new, off-axis near-field talker continuing a story in a background of far-field talkers of the same overall level in a simulated large room with common building materials. All stimuli were low-pass filtered at either 5 or 10 kHz on each trial. To further simulate current hearing aids, participants wore microphones above the pinnae and insert earphones adjusted to provide a linear, zero-gain response. RESULTS Each individual trajectory was recorded with infra-red motion-tracking and analyzed for accuracy, duration, start time, peak velocity, peak velocity time, complexity, reversals, and misorientations. Results across listeners showed a significant increase in peak velocity and significant decrease in start and peak velocity time with greater (10 kHz) bandwidth. CONCLUSIONS These earlier, swifter orientations demonstrate spatial benefits beyond static localization accuracy in plausible conditions; extended bandwidth without pinna cues provided more salient cues in a realistic mixture of talkers.
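For readers unfamiliar with dynamic localization measures, the sketch below shows one plausible way to derive start time, peak velocity, and peak-velocity time from a motion-tracked head-yaw trajectory; the sample rate and movement-onset threshold are assumptions, not the values used in the study.

```python
# Hypothetical extraction of orientation metrics from a head-yaw trajectory;
# sample rate and onset threshold are assumptions, not the study's settings.
import numpy as np

def orientation_metrics(yaw_deg, fs_hz=100.0, onset_threshold_deg_s=20.0):
    yaw = np.asarray(yaw_deg, dtype=float)
    t = np.arange(len(yaw)) / fs_hz
    velocity = np.gradient(yaw, t)                          # angular velocity, deg/s
    moving = np.abs(velocity) > onset_threshold_deg_s
    start_time = t[moving][0] if moving.any() else np.nan   # movement onset
    peak_idx = int(np.argmax(np.abs(velocity)))
    return {
        "start_time_s": start_time,
        "peak_velocity_deg_s": float(velocity[peak_idx]),
        "peak_velocity_time_s": float(t[peak_idx]),
    }
```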
4
Best V, Roverud E, Baltzell L, Rennies J, Lavandier M. The importance of a broad bandwidth for understanding "glimpsed" speech. J Acoust Soc Am 2019; 146:3215. [PMID: 31795657] [PMCID: PMC6847933] [DOI: 10.1121/1.5131651]
Abstract
When a target talker speaks in the presence of competing talkers, the listener must not only segregate the voices but also understand the target message based on a limited set of spectrotemporal regions ("glimpses") in which the target voice dominates the acoustic mixture. Here, the hypothesis that a broad audible bandwidth is more critical for these sparse representations of speech than it is for intact speech is tested. Listeners with normal hearing were presented with sentences that were either intact, or progressively "glimpsed" according to a competing two-talker masker presented at various levels. This was achieved by using an ideal binary mask to exclude time-frequency units in the target that would be dominated by the masker in the natural mixture. In each glimpsed condition, speech intelligibility was measured for a range of low-pass conditions (cutoff frequencies from 500 to 8000 Hz). Intelligibility was poorer for sparser speech, and the bandwidth required for optimal intelligibility increased with the sparseness of the speech. The combined effects of glimpsing and bandwidth reduction were well captured by a simple metric based on the proportion of audible target glimpses retained. The findings may be relevant for understanding the impact of high-frequency hearing loss on everyday speech communication.
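The "glimpsing" manipulation can be pictured as applying an ideal binary mask to the target's spectrogram: only time-frequency units in which the target would dominate the mixture are kept, and a low-pass cutoff then limits the audible bandwidth. The sketch below is an illustrative reading of that procedure; the STFT settings and the 0 dB local-SNR criterion are assumptions, not the study's parameters.

```python
# Illustrative ideal-binary-mask "glimpsing" plus bandwidth restriction;
# STFT settings and the local-SNR criterion are assumptions.
import numpy as np
from scipy.signal import stft, istft

def glimpsed_target(target, masker, fs, snr_criterion_db=0.0, lp_cutoff_hz=8000.0):
    nperseg = 512
    freqs, _, T = stft(target, fs=fs, nperseg=nperseg)
    _, _, M = stft(masker, fs=fs, nperseg=nperseg)
    local_snr_db = 20 * np.log10((np.abs(T) + 1e-12) / (np.abs(M) + 1e-12))
    ibm = local_snr_db > snr_criterion_db          # keep target-dominated units only
    ibm[freqs > lp_cutoff_hz, :] = False           # low-pass bandwidth restriction
    _, glimpsed = istft(T * ibm, fs=fs, nperseg=nperseg)
    return glimpsed
```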
Affiliation(s)
- Virginia Best
- Department of Speech, Language and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
- Elin Roverud
- Department of Speech, Language and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
- Lucas Baltzell
- Department of Speech, Language and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
- Jan Rennies
- Department of Speech, Language and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
- Mathieu Lavandier
- Department of Speech, Language and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
5
Ricketts TA, Picou EM, Shehorn J, Dittberner AB. Degree of Hearing Loss Affects Bilateral Hearing Aid Benefits in Ecologically Relevant Laboratory Conditions. J Speech Lang Hear Res 2019; 62:3834-3850. [PMID: 31596645] [PMCID: PMC7201333] [DOI: 10.1044/2019_jslhr-h-19-0013]
Abstract
Purpose Previous evidence supports benefits of bilateral hearing aids, relative to unilateral hearing aid use, in laboratory environments using audio-only (AO) stimuli and relatively simple tasks. The purpose of this study was to evaluate bilateral hearing aid benefits in ecologically relevant laboratory settings, with and without visual cues. In addition, we evaluated the relationship between bilateral benefit and clinically viable predictive variables. Method Participants included 32 adult listeners with hearing loss ranging from mild-moderate to severe-profound. Test conditions varied by hearing aid fitting type (unilateral, bilateral) and modality (AO, audiovisual). We tested participants in complex environments that evaluated the following domains: sentence recognition, word recognition, behavioral listening effort, gross localization, and subjective ratings of spatialization. Signal-to-noise ratio was adjusted to provide similar unilateral speech recognition performance in both modalities and across procedures. Results Significant and similar bilateral benefits were measured for both modalities on all tasks except listening effort, where bilateral benefits were not identified in either modality. Predictive variables were related to bilateral benefits in some conditions. With audiovisual stimuli, increasing hearing loss, unaided speech recognition in noise, and unaided subjective spatial ability were significantly correlated with increased benefits for many outcomes. With AO stimuli, these same predictive variables were not significantly correlated with outcomes. No predictive variables were correlated with bilateral benefits for sentence recognition in either modality. Conclusions Hearing aid users can expect significant bilateral hearing aid advantages for ecologically relevant, complex laboratory tests. Although future confirmatory work is necessary, these data indicate the presence of vision strengthens the relationship between bilateral benefits and degree of hearing loss.
Affiliation(s)
- Todd A. Ricketts
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Erin M. Picou
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
6
Mukari SZMS, Yusof Y, Ishak WS, Maamor N, Chellapan K, Dzulkifli MA. Relative contributions of auditory and cognitive functions on speech recognition in quiet and in noise among older adults. Braz J Otorhinolaryngol 2018; 86:149-156. [PMID: 30558985] [PMCID: PMC9422634] [DOI: 10.1016/j.bjorl.2018.10.010]
Abstract
Introduction Hearing acuity, central auditory processing, and cognition all contribute to the speech recognition difficulty experienced by older adults. Quantifying the contribution of these factors to speech recognition problems is therefore important for formulating holistic and effective rehabilitation. Objective To examine the relative contributions of auditory functioning and cognitive status to speech recognition in quiet and in noise. Methods We measured speech recognition in quiet and in composite noise using the Malay Hearing in Noise Test in 72 older adults (60–82 years), all native Malay speakers with normal hearing to mild hearing loss. Auditory function was assessed with pure-tone audiometry and the gaps-in-noise and dichotic digits tests. Cognitive function was assessed using the Malay Montreal Cognitive Assessment. Results Linear regression analyses using a backward elimination technique revealed that the better-ear four-frequency average (0.5–4 kHz), the high-frequency average, and the Malay Montreal Cognitive Assessment contributed to speech recognition in quiet (total r2 = 0.499). The high-frequency average, the Malay Montreal Cognitive Assessment, and the dichotic digits test contributed significantly to speech recognition in noise (total r2 = 0.307). Whereas speech recognition in quiet was accounted for primarily by the better-ear high-frequency average, speech recognition in noise was accounted for mainly by cognitive function. Conclusions These findings highlight that, besides hearing sensitivity, cognition plays an important role in speech recognition among older adults, especially in noisy environments. Therefore, in addition to hearing aids, rehabilitation that trains cognition may have a role in improving older adults' ability to recognize speech in noise.
Affiliation(s)
- Yusmeera Yusof
- Universiti Kebangsaan Malaysia, Faculty of Health Sciences, Kuala Lumpur, Malaysia; Ministry of Health, Putrajaya, Malaysia
- Wan Syafira Ishak
- Universiti Kebangsaan Malaysia, Faculty of Health Sciences, Kuala Lumpur, Malaysia
- Nashrah Maamor
- Universiti Kebangsaan Malaysia, Faculty of Health Sciences, Kuala Lumpur, Malaysia
- Kalaivani Chellapan
- Universiti Kebangsaan Malaysia, Faculty of Engineering & Built Environment, Bangi, Malaysia
- Mariam Adawiah Dzulkifli
- International Islamic University, Kuliyyah of Islamic Revealed Knowledge and Human Sciences, Kuala Lumpur, Malaysia
7
Spratford M, McLean HH, McCreery R. Relationship of Grammatical Context on Children's Recognition of s/z-Inflected Words. J Am Acad Audiol 2018; 28:799-809. [PMID: 28972469] [DOI: 10.3766/jaaa.16151]
Abstract
BACKGROUND Access to aided high-frequency speech information is currently assessed behaviorally using recognition of plural monosyllabic words. Because of semantic and grammatical cues that support word+morpheme recognition in sentence materials, the contribution of high-frequency audibility to sentence recognition is less than that for isolated words. However, young children may not yet have the linguistic competence to take advantage of these cues. A low-predictability sentence recognition task that controls for language ability could be used to assess the impact of high-frequency audibility in a context that more closely represents how children learn language. PURPOSE To determine if differences exist in recognition of s/z-inflected monosyllabic words for children with normal hearing (CNH) and children who are hard of hearing (CHH) across stimuli context (presented in isolation versus embedded medially within a sentence that has low semantic and syntactic predictability) and varying levels of high-frequency audibility (4- and 8-kHz low-pass filtered for CNH and 8-kHz low-pass filtered for CHH). RESEARCH DESIGN A prospective, cross-sectional design was used to analyze word+morpheme recognition in noise for stimuli varying in grammatical context and high-frequency audibility. Low-predictability sentence stimuli were created so that the target word+morpheme could not be predicted by semantic or syntactic cues. Electroacoustic measures of aided access to high-frequency speech sounds were used to predict individual differences in recognition for CHH. STUDY SAMPLE Thirty-five children, aged 5-12 yrs, were recruited to participate in the study; 24 CNH and 11 CHH (bilateral mild to severe hearing loss) who wore hearing aids (HAs). All children were native speakers of English. DATA COLLECTION AND ANALYSIS Monosyllabic word+morpheme recognition was measured in isolated and sentence-embedded conditions at a +10 dB signal-to-noise ratio using steady state, speech-shaped noise. Real-ear probe microphone measures of HAs were obtained for CHH. To assess the effects of high-frequency audibility on word+morpheme recognition for CNH, a repeated-measures ANOVA was used with bandwidth (8 kHz, 4 kHz) and context (isolated, sentence embedded) as within-subjects factors. To compare recognition between CNH and CHH, a mixed-model ANOVA was completed with context (isolated, sentence-embedded) as a within-subjects factor and hearing status as a between-subjects factor. Bivariate correlations between word+morpheme recognition scores and electroacoustic measures of high-frequency audibility were used to assess which measures might be sensitive to differences in perception for CHH. RESULTS When high-frequency audibility was maximized, CNH and CHH had better word+morpheme recognition in the isolated condition compared with sentence-embedded. When high-frequency audibility was limited, CNH had better word+morpheme recognition in the sentence-embedded condition compared with the isolated condition. CHH whose HAs had greater high-frequency speech bandwidth, as measured by the maximum audible frequency, had better word+morpheme recognition in sentences. CONCLUSIONS High-frequency audibility supports word+morpheme recognition within low-predictability sentences for both CNH and CHH. Maximum audible frequency can be used to estimate word+morpheme recognition for CHH. Low-predictability sentences that do not contain semantic or grammatical context may be of clinical use in estimating children's use of high-frequency audibility in a manner that approximates how they learn language.
Affiliation(s)
- Meredith Spratford
- Audibility, Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE
- Ryan McCreery
- Audibility, Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE
8
Miller CW, Stewart EK, Wu YH, Bishop C, Bentler RA, Tremblay K. Working Memory and Speech Recognition in Noise Under Ecologically Relevant Listening Conditions: Effects of Visual Cues and Noise Type Among Adults With Hearing Loss. J Speech Lang Hear Res 2017; 60:2310-2320. [PMID: 28744550] [PMCID: PMC5829805] [DOI: 10.1044/2017_jslhr-h-16-0284]
Abstract
PURPOSE This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types, as well as in the presence of visual cues. METHOD Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. Using a cross-sectional design, two measures of WM were taken: a reading span measure and the Word Auditory Recognition and Recall Measure (Smith, Pichora-Fuller, & Alexander, 2016). Speech recognition was measured with the Multi-Modal Lexical Sentence Test for Adults (Kirk et al., 2012) in steady-state noise and in 4-talker babble, with and without visual cues. Testing was conducted under unaided conditions. RESULTS A linear mixed model revealed visual cues and pure-tone average as the only significant predictors of Multi-Modal Lexical Sentence Test outcomes. Neither WM measure nor noise type showed a significant effect. CONCLUSION The contribution of WM to explaining unaided speech recognition in noise was negligible and was not influenced by noise type or visual cues. We anticipate that, with audibility partially restored by hearing aids, the effects of WM will increase. For clinical practice to be affected, larger effect sizes are needed.
Affiliation(s)
- Christi W. Miller
- Department of Speech and Hearing Sciences, University of Washington, Seattle
- Erin K. Stewart
- Department of Speech and Hearing Sciences, University of Washington, Seattle
- Yu-Hsiang Wu
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Christopher Bishop
- Department of Speech and Hearing Sciences, University of Washington, Seattle
- Ruth A. Bentler
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Kelly Tremblay
- Department of Speech and Hearing Sciences, University of Washington, Seattle
9
Moore BCJ. A review of the perceptual effects of hearing loss for frequencies above 3 kHz. Int J Audiol 2016; 55:707-714. [DOI: 10.1080/14992027.2016.1204565]
Affiliation(s)
- Brian C. J. Moore
- Department of Experimental Psychology, University of Cambridge, Cambridge, UK
10
Miller CW, Bates E, Brennan M. The effects of frequency lowering on speech perception in noise with adult hearing-aid users. Int J Audiol 2016; 55:305-312. [PMID: 26938846] [DOI: 10.3109/14992027.2015.1137364]
Abstract
OBJECTIVE Frequency lowering (FL) strategies move high-frequency sound into a lower frequency range. This study determined whether speech perception differs among several of the available frequency-lowering strategies. DESIGN A cross-sectional, repeated-measures design was used to compare three hearing aids that used wide-dynamic range compression (WDRC) and either non-linear frequency compression (NFC), linear frequency transposition (LFT), or frequency translation (FT). The hearing aids were matched to prescriptive real-ear targets for WDRC. The settings for each FL strategy were adjusted to provide audibility for a 6300 Hz filtered speech signal. Sentence recognition in noise, subjective measures of sound quality, and a modified version of the speech intelligibility index (SII) were measured. STUDY SAMPLE Ten adults between the ages of 63 and 82 years with bilateral, high-frequency hearing loss. RESULTS LFT and FT led to poorer sentence recognition than WDRC for most individuals. Sentence recognition did not differ with and without NFC. The quality questionnaire and SII showed few differences between conditions. CONCLUSION Under fitting and testing conditions similar to those of this study, FL techniques may not provide a speech-understanding benefit in certain background-noise situations.
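As background on how non-linear frequency compression remaps the input spectrum, the sketch below shows one common characterization: input frequencies above a start (knee) frequency are compressed on a log scale by a fixed ratio, so that energy near 6300 Hz lands within the usable output bandwidth. The knee and ratio shown are illustrative choices, not the hearing-aid fittings used in the study.

```python
# Illustrative NFC-style frequency mapping; knee and ratio are assumed values,
# not the hearing-aid settings used in the cited study.
import numpy as np

def nfc_map(f_in_hz, knee_hz=2000.0, ratio=2.0):
    """Map input frequencies to output frequencies under log-domain compression."""
    f_in = np.asarray(f_in_hz, dtype=float)
    f_out = f_in.copy()
    above = f_in > knee_hz
    # compress the log-frequency distance above the knee by the compression ratio
    f_out[above] = knee_hz * (f_in[above] / knee_hz) ** (1.0 / ratio)
    return f_out

print(nfc_map([1000.0, 3000.0, 6300.0]))   # 6300 Hz maps to roughly 3550 Hz here
```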
Affiliation(s)
- Christi W Miller
- Department of Speech and Hearing Sciences, University of Washington, Seattle, USA
- Emily Bates
- Department of Speech and Hearing Sciences, University of Washington, Seattle, USA
- Marc Brennan
- Boys Town National Research Hospital, Omaha, USA