1
Saleh HK, Folkeard P, Kuehnel V, Voss S, Qian J, Scollie S. Directionality in BiCROS hearing aids: an investigation of objective and subjective outcomes. Int J Audiol 2024:1-13. PMID: 39396231. DOI: 10.1080/14992027.2024.2414096.
Abstract
OBJECTIVES To assess the effect of forward directional and omnidirectional microphone configurations in BiCROS versus monaural hearing aids on objective and subjective outcomes in different noise conditions. DESIGN After fitting and a 4-week acclimatisation period, speech recognition and sound quality were measured using forward directional, omnidirectional, and unaided settings. Two noise configurations were used: surrounding noise, and noise presented from the aided (better) ear. Subjective outcomes were assessed using the SSQ-B and BBSS questionnaires and participant interviews. STUDY SAMPLE Eighteen adult participants (mean age: 74.6 years; range: 61-94 years; ten males, eight females) with mild to moderately severe SNHL in their better ear (PTA0.5-4 kHz > 20 dB HL) and limited usable hearing in their poorer ear (average PTA0.5-4 kHz > 100 dB HL). RESULTS Speech recognition and sound quality improved significantly with both BiCROS and monaural directional settings over omnidirectional and unaided conditions, in both noise configurations. No significant differences were observed between monaural and BiCROS directional settings. CONCLUSION Speech-in-noise recognition and sound quality scores demonstrated a significant directional benefit for both BiCROS and monaural directional fittings over omnidirectional and unaided conditions. Unique BiCROS-specific experiences were identified through a patient-oriented approach and can inform the development of BiCROS-tailored tools.
Affiliation(s)
- Hasan K Saleh
- School of Speech and Hearing Sciences, College of Nursing and Health Professions, The University of Southern Mississippi, Hattiesburg, MS, USA
- Paula Folkeard
- National Centre for Audiology, Western University, London, Canada
- Solveig Voss
- Sonova Innovation Centre Toronto, Mississauga, Canada
- Jinyu Qian
- Sonova Innovation Centre Toronto, Mississauga, Canada
- Susan Scollie
- National Centre for Audiology, Western University, London, Canada
- School of Communication Sciences and Disorders, Western University, London, Canada
2
Saleh HK, Folkeard P, Van Eeckhoutte M, Scollie S. Premium versus entry-level hearing aids: using group concept mapping to investigate the drivers of preference. Int J Audiol 2021; 61:1003-1017. PMID: 34883040. DOI: 10.1080/14992027.2021.2009923.
Abstract
OBJECTIVES To investigate the difference in outcome measures and drivers of user preference between premium and entry-level hearing aids using group concept mapping. DESIGN A single-blind crossover trial was conducted. Aided behavioural outcomes measured were loudness rating, speech/consonant recognition, and speech quality. Preference between hearing aids was measured with a 7-point Likert scale. Group concept mapping was utilised to investigate preference results. Participants generated statements based on what influenced their preferences. These were sorted into categories with underlying themes. Participants rated each statement on a 5-point Likert scale of importance. STUDY SAMPLE Twenty-three adult participants (mean: 62.4 years; range: 24-78) with mild to moderately severe bilateral SNHL (PTA500-4000 Hz > 20 dB HL). RESULTS A total of 83 unique statements and nine distinct clusters, with underlying themes driving preference, were generated. Clusters that differed significantly in importance between entry-level and premium hearing aid choosers were: having access to smartphone application-based user-controlled settings, the ability to stream calls and music, and convenience features such as accessory compatibility. CONCLUSION This study identified non-signal-processing factors that significantly influenced preference for a premium hearing aid over an entry-level hearing aid, indicating the importance of these features as drivers of user preference.
Affiliation(s)
- Hasan K Saleh
- Health & Rehabilitation Sciences, Western University, London, Ontario, Canada; National Centre for Audiology, Western University, London, Ontario, Canada
- Paula Folkeard
- National Centre for Audiology, Western University, London, Ontario, Canada
- Maaike Van Eeckhoutte
- National Centre for Audiology, Western University, London, Ontario, Canada; Hearing Systems, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark; Ear, Nose, Throat (ENT) & Audiology Clinic, Rigshospitalet, Copenhagen University Hospital, Denmark
- Susan Scollie
- National Centre for Audiology, Western University, London, Ontario, Canada; Communication Sciences and Disorders, Faculty of Health Sciences, Western University, London, Ontario, Canada
3
Ricketts TA, Picou EM. Symmetrical and asymmetrical directional benefits are present for talkers at the front and side. Int J Audiol 2021; 61:177-186. PMID: 34106803. DOI: 10.1080/14992027.2021.1931488.
Abstract
OBJECTIVE The purpose of the study was to examine the effects of symmetrical and asymmetrical directional microphone settings on speech recognition, localisation and microphone preference in listening conditions with on- and off-axis talkers. DESIGN A within-subjects, repeated-measures evaluation of three hearing aid microphone settings (bilateral omnidirectional, bilateral directional, asymmetrical directional) was completed in a moderately reverberant laboratory. An exploratory analysis of the potential relationship between microphone preference and unaided measures was also completed. STUDY SAMPLE Twenty adult listeners with mild to moderately severe bilateral hearing loss participated. RESULTS The directional and asymmetric microphone settings resulted in equivalent benefits for sentence recognition in noise, word recall, and localisation speed regardless of the speech loudspeaker location (on- or off-axis). However, localisation accuracy was significantly worse with the asymmetric fitting than with the directional setting when speech was presented from the rear hemisphere. Listeners who always preferred directional microphones had significantly poorer unaided speech recognition than those who preferred the omnidirectional setting for one or more listening conditions. CONCLUSIONS Benefits from directional and asymmetric processing were small in the current study, but generally similar to each other. Unaided speech recognition in noise performance may have utility as a clinical predictor of preference for directional processing.
Affiliation(s)
- Todd A Ricketts
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Erin M Picou
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
4
Dorman MF, Natale SC, Agrawal S. The Benefit of Remote and On-Ear Directional Microphone Technology Persists in the Presence of Visual Information. J Am Acad Audiol 2020; 32:39-44. PMID: 33296930. DOI: 10.1055/s-0040-1718893.
Abstract
BACKGROUND Both the Roger remote microphone and on-ear, adaptive beamforming technologies (e.g., Phonak UltraZoom) have been shown to improve speech understanding in noise for cochlear implant (CI) listeners when tested in audio-only (A-only) test environments. PURPOSE Our aim was to determine if adult and pediatric CI recipients benefited from these technologies in a more common environment-one in which both audio and visual cues were available and when overall performance was high. STUDY SAMPLE Ten adult CI listeners (Experiment 1) and seven pediatric CI listeners (Experiment 2) were tested. DESIGN Adults were tested in quiet and in two levels of noise (level 1 and level 2) in A-only and audio-visual (AV) environments. There were four device conditions: (1) an ear canal-level, omnidirectional microphone (T-mic) in quiet, (2) the T-mic in noise, (3) an adaptive directional mic (UltraZoom) in noise, and (4) a wireless, remote mic (Roger Pen) in noise. Pediatric listeners were tested in quiet and in level 1 noise in A-only and AV environments. The test conditions were: (1) a behind-the-ear level omnidirectional mic (processor mic) in quiet, (2) the processor mic in noise, (3) the T-mic in noise, and (4) the Roger Pen in noise. DATA COLLECTION AND ANALYSES In each test condition, sentence understanding was assessed (percent correct) and ease of listening ratings were obtained. The sentence understanding data were entered into repeated-measures analyses of variance. RESULTS For both adult and pediatric listeners in the AV test conditions in level 1 noise, performance with the Roger Pen was significantly higher than with the T-mic. For both populations, performance in level 1 noise with the Roger Pen approached the level of baseline performance in quiet. Ease of listening in noise was rated higher in the Roger Pen conditions than in the T-mic or processor mic conditions in both A-only and AV test conditions. 
CONCLUSION The Roger remote mic and on-ear directional mic technologies benefit both speech understanding and ease of listening in a realistic laboratory test environment and are likely to do the same in real-world listening environments.
Affiliation(s)
- Michael F Dorman
- Department of Speech and Hearing Science, Arizona State University, Tempe, Arizona
- Sarah Cook Natale
- Department of Speech and Hearing Science, Arizona State University, Tempe, Arizona
5
Wu YH, Stangl E, Chipara O, Hasan SS, DeVries S, Oleson J. Efficacy and Effectiveness of Advanced Hearing Aid Directional and Noise Reduction Technologies for Older Adults With Mild to Moderate Hearing Loss. Ear Hear 2020; 40:805-822. PMID: 30379683. PMCID: PMC6491270. DOI: 10.1097/aud.0000000000000672.
Abstract
OBJECTIVES The purpose of the present study was to investigate the laboratory efficacy and real-world effectiveness of advanced directional microphones (DM) and digital noise reduction (NR) algorithms (i.e., premium DM/NR features) relative to basic-level DM/NR features of contemporary hearing aids (HAs). The study also examined the effect of premium HAs relative to basic HAs and the effect of DM/NR features relative to no features. DESIGN Fifty-four older adults with mild-to-moderate hearing loss completed a single-blinded crossover trial. Two HA models, one a less-expensive, basic-level device (basic HA) and the other a more-expensive, advanced-level device (premium HA), were used. The DM/NR features of the basic HAs (i.e., basic features) were adaptive DMs and gain-reduction NR with fewer channels. In contrast, the DM/NR features of the premium HAs (i.e., premium features) included adaptive DMs and gain-reduction NR with more channels, bilateral beamformers, speech-seeking DMs, pinna-simulation directivity, reverberation reduction, impulse NR, wind NR, and spatial NR. The trial consisted of four conditions, which were factorial combinations of HA model (premium versus basic) and DM/NR feature status (on versus off). To blind participants regarding the HA technology, no technology details were disclosed and minimal training on how to use the features was provided. In each condition, participants wore bilateral HAs for 5 weeks. Outcomes regarding speech understanding, listening effort, sound quality, localization, and HA satisfaction were measured using laboratory tests, retrospective self-reports (i.e., standardized questionnaires), and in-situ self-reports (i.e., self-reports completed in the real world in real time). A smartphone-based ecological momentary assessment system was used to collect in-situ self-reports. 
RESULTS Laboratory efficacy data generally supported the benefit of premium DM/NR features relative to basic DM/NR, premium HAs relative to basic HAs, and DM/NR features relative to no DM/NR in improving speech understanding and localization performance. Laboratory data also indicated that DM/NR features could improve listening effort and sound quality compared with no features for both basic- and premium-level HAs. For real-world effectiveness, in-situ self-reports first indicated that noisy or very noisy situations did not occur very often in participants' daily lives (10.9% of the time). Although both retrospective and in-situ self-reports indicated that participants were more satisfied with HAs equipped with DM/NR features than without, there was no strong evidence to support the benefit of premium DM/NR features and premium HAs over basic DM/NR features and basic HAs, respectively. CONCLUSIONS Although premium DM/NR features and premium HAs outperformed their basic-level counterparts in well-controlled laboratory test conditions, the benefits were not observed in the real world. In contrast, the effect of DM/NR features relative to no features was robust both in the laboratory and in the real world. Therefore, the present study suggests that although both premium and basic DM/NR technologies evaluated in the study have the potential to improve HA outcomes, older adults with mild-to-moderate hearing loss are unlikely to perceive the additional benefits provided by the premium DM/NR features in their daily lives. Limitations concerning the study's generalizability (e.g., participant's lifestyle) are discussed.
Affiliation(s)
- Yu-Hsiang Wu
- Department of Communication Sciences and Disorders, The University of Iowa
- Elizabeth Stangl
- Department of Communication Sciences and Disorders, The University of Iowa
- Octav Chipara
- Department of Computer Science, The University of Iowa
- Sean DeVries
- Department of Biostatistics, The University of Iowa
- Jacob Oleson
- Department of Biostatistics, The University of Iowa
6
Chung K. Perceived sound quality of different signal processing algorithms by cochlear implant listeners in real-world acoustic environments. J Commun Disord 2020; 83:105973. PMID: 31901876. DOI: 10.1016/j.jcomdis.2019.105973.
Abstract
Well-documented benefits of noise-reduction technologies in laboratories do not always yield a significant difference in real-world acoustic environments. Many possible reasons have been proposed and studied to address this discrepancy. The purpose of this study was to examine the effectiveness of different noise reduction strategies for cochlear implants in real-world acoustic environments. Sixteen listeners were fit with hearing aid preprocessors with electrical outputs and cochlear implant speech processors receiving the electrical inputs. The preprocessors were programmed to: 1) no noise reduction: omnidirectional microphone (OMNI); 2) moderate noise reduction: a combination of omnidirectional and adaptive directional microphone modes with modulation-based noise reduction (TRI+NR); and 3) maximum noise reduction: adaptive directional microphone in all frequency channels with NR (ADM+NR). Listeners heard sentences in a noisy café, a noisy restaurant, and a quiet hotel lobby, and rated the overall sound quality preference, ease of listening, speech intelligibility, and listening comfort of the sentences using a paired-comparison categorical rating paradigm. Results indicated that cochlear implant listeners had no microphone preference in quiet but preferred adaptive directional microphones in noisy environments. The paired-comparison categorical rating paradigm is a viable means to evaluate the benefits of signal processing strategies in real-world acoustic environments.
Affiliation(s)
- King Chung
- Department of Allied Health and Communication Disorders, Northern Illinois University, 323 Wirtz Hall, DeKalb, IL 60115, USA.
7
Ricketts TA, Picou EM, Shehorn J, Dittberner AB. Degree of Hearing Loss Affects Bilateral Hearing Aid Benefits in Ecologically Relevant Laboratory Conditions. J Speech Lang Hear Res 2019; 62:3834-3850. PMID: 31596645. PMCID: PMC7201333. DOI: 10.1044/2019_jslhr-h-19-0013.
Abstract
Purpose Previous evidence supports benefits of bilateral hearing aids, relative to unilateral hearing aid use, in laboratory environments using audio-only (AO) stimuli and relatively simple tasks. The purpose of this study was to evaluate bilateral hearing aid benefits in ecologically relevant laboratory settings, with and without visual cues. In addition, we evaluated the relationship between bilateral benefit and clinically viable predictive variables. Method Participants included 32 adult listeners with hearing loss ranging from mild-moderate to severe-profound. Test conditions varied by hearing aid fitting type (unilateral, bilateral) and modality (AO, audiovisual). We tested participants in complex environments that evaluated the following domains: sentence recognition, word recognition, behavioral listening effort, gross localization, and subjective ratings of spatialization. Signal-to-noise ratio was adjusted to provide similar unilateral speech recognition performance in both modalities and across procedures. Results Significant and similar bilateral benefits were measured for both modalities on all tasks except listening effort, where bilateral benefits were not identified in either modality. Predictive variables were related to bilateral benefits in some conditions. With audiovisual stimuli, increasing hearing loss, unaided speech recognition in noise, and unaided subjective spatial ability were significantly correlated with increased benefits for many outcomes. With AO stimuli, these same predictive variables were not significantly correlated with outcomes. No predictive variables were correlated with bilateral benefits for sentence recognition in either modality. Conclusions Hearing aid users can expect significant bilateral hearing aid advantages for ecologically relevant, complex laboratory tests. 
Although future confirmatory work is necessary, these data indicate the presence of vision strengthens the relationship between bilateral benefits and degree of hearing loss.
Affiliation(s)
- Todd A. Ricketts
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Erin M. Picou
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
9
Characteristics of Real-World Signal to Noise Ratios and Speech Listening Situations of Older Adults With Mild to Moderate Hearing Loss. Ear Hear 2019; 39:293-304. PMID: 29466265. DOI: 10.1097/aud.0000000000000486.
Abstract
OBJECTIVES The first objective was to determine the relationship between speech level, noise level, and signal to noise ratio (SNR), as well as the distribution of SNR, in real-world situations wherein older adults with hearing loss are listening to speech. The second objective was to develop a set of prototype listening situations (PLSs) that describe the speech level, noise level, SNR, availability of visual cues, and locations of speech and noise sources of typical speech listening situations experienced by these individuals. DESIGN Twenty older adults with mild to moderate hearing loss carried digital recorders for 5 to 6 weeks to record sounds for 10 hours per day. They also repeatedly completed in situ surveys on smartphones several times per day to report the characteristics of their current environments, including the locations of the primary talker (if they were listening to speech) and noise source (if it was noisy) and the availability of visual cues. For surveys where speech listening was indicated, the corresponding audio recording was examined. Speech-plus-noise and noise-only segments were extracted, and the SNR was estimated using a power subtraction technique. SNRs and the associated survey data were subjected to cluster analysis to develop PLSs. RESULTS The speech level, noise level, and SNR of 894 listening situations were analyzed to address the first objective. Results suggested that as noise levels increased from 40 to 74 dBA, speech levels systematically increased from 60 to 74 dBA, and SNR decreased from 20 to 0 dB. Most SNRs (62.9%) of the collected recordings were between 2 and 14 dB. Very noisy situations that had SNRs below 0 dB comprised 7.5% of the listening situations. To address the second objective, recordings and survey data from 718 observations were analyzed. Cluster analysis suggested that the participants' daily listening situations could be grouped into 12 clusters (i.e., 12 PLSs). 
The most frequently occurring PLSs were characterized as having the talker in front of the listener with visual cues available, either in quiet or in diffuse noise. The mean speech level of the PLSs that described quiet situations was 62.8 dBA, and the mean SNR of the PLSs that represented noisy environments was 7.4 dB (speech = 67.9 dBA). A subset of observations (n = 280), which was obtained by excluding the data collected from quiet environments, was further used to develop PLSs that represent noisier situations. From this subset, two PLSs were identified. These two PLSs had lower SNRs (mean = 4.2 dB), but the most frequent situations still involved speech from in front of the listener in diffuse noise with visual cues available. CONCLUSIONS The present study indicated that visual cues and diffuse noise were exceedingly common in real-world speech listening situations, while environments with negative SNRs were relatively rare. The characteristics of speech level, noise level, and SNR, together with the PLS information reported by the present study, can be useful for researchers aiming to design ecologically valid assessment procedures to estimate real-world speech communicative functions for older adults with hearing loss.
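The power-subtraction SNR estimation mentioned in this abstract can be sketched in a few lines. This is an illustrative reconstruction under the usual stationary-noise assumption, not the authors' implementation; the function name `estimate_snr_db` and the synthetic signals are hypothetical:

```python
import numpy as np

def estimate_snr_db(speech_plus_noise, noise_only):
    """Estimate SNR (dB) via power subtraction.

    Speech power is approximated as the mean power of a speech-plus-noise
    segment minus the mean power of an adjacent noise-only segment,
    assuming the noise is stationary across both segments.
    """
    p_sn = np.mean(np.square(speech_plus_noise))  # speech + noise power
    p_n = np.mean(np.square(noise_only))          # noise power
    p_speech = max(p_sn - p_n, 1e-12)             # floor to avoid log of <= 0
    return 10.0 * np.log10(p_speech / p_n)

# Synthetic check with known powers: a 440 Hz tone (power 0.3**2 / 2 = 0.045)
# in Gaussian noise (power ~0.01), true SNR ~6.5 dB
rng = np.random.default_rng(0)
noise = 0.1 * rng.standard_normal(48000)
speech = 0.3 * np.sin(2 * np.pi * 440 * np.arange(48000) / 16000)
snr = estimate_snr_db(speech + noise, noise)
```

The estimate is only as good as the stationarity assumption: if the noise level changes between the noise-only and speech-plus-noise segments, the subtraction is biased.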
10
Brody L, Wu YH, Stangl E. A Comparison of Personal Sound Amplification Products and Hearing Aids in Ecologically Relevant Test Environments. Am J Audiol 2018; 27:581-593. PMID: 30458521. DOI: 10.1044/2018_aja-18-0027.
Abstract
PURPOSE The aim of this study was to compare the benefit of self-adjusted personal sound amplification products (PSAPs) to audiologist-fitted hearing aids based on speech recognition, listening effort, and sound quality in ecologically relevant test conditions to estimate real-world effectiveness. METHOD Twenty-five older adults with bilateral mild-to-moderate hearing loss completed the single-blinded, crossover study. Participants underwent aided testing using 3 PSAPs and a traditional hearing aid, as well as unaided testing. PSAPs were adjusted based on participant preference, whereas the hearing aid was configured using best-practice verification protocols. Audibility provided by the devices was quantified using the Speech Intelligibility Index (American National Standards Institute, 2012). Outcome measures assessing speech recognition, listening effort, and sound quality were administered in ecologically relevant laboratory conditions designed to represent real-world speech listening situations. RESULTS All devices significantly improved Speech Intelligibility Index compared to unaided listening, with the hearing aid providing more audibility than all PSAPs. Results further revealed that, in general, the hearing aid improved speech recognition performance and reduced listening effort significantly more than all PSAPs. Few differences in sound quality were observed between devices. All PSAPs improved speech recognition and listening effort compared to unaided testing. CONCLUSIONS Hearing aids fitted using best-practice verification protocols were capable of providing more aided audibility, better speech recognition performance, and lower listening effort compared to the PSAPs tested in the current study. Differences in sound quality between the devices were minimal. 
However, because all PSAPs tested in the study significantly improved participants' speech recognition performance and reduced listening effort compared to unaided listening, PSAPs could serve as a budget-friendly option for those who cannot afford traditional amplification.
Affiliation(s)
- Lisa Brody
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Yu-Hsiang Wu
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Elizabeth Stangl
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
11
Wu YH, Stangl E, Zhang X, Bentler RA. Construct Validity of the Ecological Momentary Assessment in Audiology Research. J Am Acad Audiol 2015; 26:872-884. PMID: 26554491. DOI: 10.3766/jaaa.15034.
Abstract
BACKGROUND Ecological momentary assessment (EMA) is a methodology involving repeated assessments/surveys to collect data describing respondents' current or very recent experiences and related contexts in their natural environments. The use of EMA in audiology research is growing. PURPOSE This study examined the construct validity (i.e., the degree to which a measurement reflects what it is intended to measure) of EMA in terms of measuring speech understanding and related listening context. Experiment 1 investigated the extent to which individuals can accurately report their speech recognition performance and characterize the listening context in controlled environments. Experiment 2 investigated whether the data aggregated across multiple EMA surveys conducted in uncontrolled, real-world environments would reveal a valid pattern that was consistent with the established relationships between speech understanding, hearing aid use, listening context, and lifestyle. RESEARCH DESIGN This is an observational study. STUDY SAMPLE Twelve and twenty-seven adults with hearing impairment participated in Experiments 1 and 2, respectively. DATA COLLECTION AND ANALYSIS In the laboratory testing of Experiment 1, participants estimated their speech recognition performance in settings wherein the signal-to-noise ratio was fixed or constantly varied across sentences. In the field testing the participants reported the listening context (e.g., noisiness level) of several semicontrolled real-world conversations. Their reports were compared to (1) the context described by normal-hearing observers and (2) the background noise level measured using a sound level meter. In Experiment 2, participants repeatedly reported the degree of speech understanding, hearing aid use, and listening context using paper-and-pencil journals in their natural environments for 1 week. They also carried noise dosimeters to measure the sound level. 
The associations between (1) speech understanding, hearing aid use, and listening context, (2) dosimeter sound level and self-reported noisiness level, and (3) dosimeter data and lifestyle quantified using the journals were examined. RESULTS For Experiment 1, the reported and measured speech recognition scores were highly correlated across all test conditions (r = 0.94 to 0.97). The field testing results revealed that most listening context properties reported by the participants were highly consistent with those described by the observers (74-95% consistency), except for noisiness rating (58%). Nevertheless, higher noisiness rating was associated with higher background noise level. For Experiment 2, the EMA results revealed several associations: better speech understanding was associated with the use of hearing aids, front-located speech, and lower dosimeter sound level; higher noisiness rating was associated with higher dosimeter sound level; and listeners with more diverse lifestyles tended to have higher dosimeter sound levels. CONCLUSIONS Adults with hearing impairment were able to report their listening experiences, such as speech understanding, and characterize listening context in controlled environments with reasonable accuracy. The pattern of the data aggregated across multiple EMA surveys conducted in a wide range of uncontrolled real-world environments was consistent with established knowledge in audiology. The two experiments suggested that, regarding speech understanding and related listening contexts, EMA reflects what it is intended to measure, supporting its construct validity in audiology research.
Affiliation(s)
- Yu-Hsiang Wu
- Department of Communication Sciences and Disorders, The University of Iowa, Iowa City, IA 52242
- Elizabeth Stangl
- Department of Communication Sciences and Disorders, The University of Iowa, Iowa City, IA 52242
- Xuyang Zhang
- Department of Communication Sciences and Disorders, The University of Iowa, Iowa City, IA 52242
- Ruth A Bentler
- Department of Communication Sciences and Disorders, The University of Iowa, Iowa City, IA 52242
12
Miller CW, Stewart EK, Wu YH, Bishop C, Bentler RA, Tremblay K. Working Memory and Speech Recognition in Noise Under Ecologically Relevant Listening Conditions: Effects of Visual Cues and Noise Type Among Adults With Hearing Loss. J Speech Lang Hear Res 2017; 60:2310-2320. PMID: 28744550. PMCID: PMC5829805. DOI: 10.1044/2017_jslhr-h-16-0284.
Abstract
PURPOSE This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types as well as in the presence of visual cues. METHOD Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. Using a cross-sectional design, two measures of WM were taken: a reading span measure and the Word Auditory Recognition and Recall Measure (Smith, Pichora-Fuller, & Alexander, 2016). Speech recognition was measured with the Multi-Modal Lexical Sentence Test for Adults (Kirk et al., 2012) in steady-state noise and 4-talker babble, with and without visual cues. Testing was conducted under unaided conditions. RESULTS A linear mixed model revealed visual cues and pure-tone average as the only significant predictors of Multi-Modal Lexical Sentence Test outcomes. Neither WM measure nor noise type showed a significant effect. CONCLUSION The contribution of WM in explaining unaided speech recognition in noise was negligible and was not influenced by noise type or visual cues. We anticipate that with audibility partially restored by hearing aids, the effects of WM will increase. For clinical practice to be affected, larger effect sizes are needed.
Affiliation(s)
- Christi W. Miller, Department of Speech and Hearing Sciences, University of Washington, Seattle
- Erin K. Stewart, Department of Speech and Hearing Sciences, University of Washington, Seattle
- Yu-Hsiang Wu, Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Christopher Bishop, Department of Speech and Hearing Sciences, University of Washington, Seattle
- Ruth A. Bentler, Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Kelly Tremblay, Department of Speech and Hearing Sciences, University of Washington, Seattle
13
Picou EM, Ricketts TA. How directional microphones affect speech recognition, listening effort and localisation for listeners with moderate-to-severe hearing loss. Int J Audiol 2017; 56:909-918. [DOI: 10.1080/14992027.2017.1355074]
Affiliation(s)
- Erin M. Picou, Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Todd A. Ricketts, Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
14
Benefit from directional microphone hearing aids: objective and subjective evaluations. Clin Exp Otorhinolaryngol 2015; 8:237-42. [PMID: 26330918] [PMCID: PMC4553354] [DOI: 10.3342/ceo.2015.8.3.237]
Abstract
OBJECTIVES The aims of this study were to measure and compare the effect of directional (DIR) processing in two different hearing aids using both subjective and objective methods, to determine the association between the subjective and objective results, and to identify individual factors predicting DIR benefit. METHODS Twenty-six hearing aid users, each fitted unilaterally with two different experimental hearing aids, performed a modified Korean Hearing in Noise Test (K-HINT) in three DIR conditions: omnidirectional (OMNI) mode, OMNI mode plus a noise reduction feature, and fixed DIR mode. To quantify DIR benefit within each hearing aid and to compare DIR performance between hearing aids, a subjective questionnaire covering the speech quality (SQ) and discomfort in noise (DN) domains was administered. Correlation analysis of factors influencing DIR benefit was performed. RESULTS The benefit of switching from OMNI to DIR mode in the K-HINT was about 2.8 dB SNR (signal-to-noise ratio; standard deviation [SD], 3.5) for one hearing aid and 2.1 dB SNR (SD, 2.5) for the other, but no significant difference in K-HINT results was found between OMNI mode and OMNI mode plus the noise reduction algorithm. The subjective evaluation yielded better SQ and DN scores in DIR mode than in OMNI mode; however, the difference in SQ and DN scores between the two hearing aids in DIR mode was not statistically significant. No individual factor significantly affected subjective or objective DIR benefit. CONCLUSION DIR benefit was found both in the objective laboratory measurement and in the subjective questionnaires, but the subjective results did not correlate significantly with the DIR benefit obtained in the K-HINT. Factors underlying individual variation in perceived DIR benefit remain difficult to explain.
15
Silberer AB, Bentler R, Wu YH. The importance of high-frequency audibility with and without visual cues on speech recognition for listeners with normal hearing. Int J Audiol 2015; 54:865-72. [PMID: 26068537] [DOI: 10.3109/14992027.2015.1051666]
Abstract
OBJECTIVE To examine the impact of visual cues, speech materials, age, and listening condition on the frequency bandwidth necessary for optimizing speech recognition performance. DESIGN Using a randomized repeated-measures design, speech recognition performance was assessed with four speech perception tests presented in quiet and in noise in 13 low-pass (LP) filter conditions and in multiple modalities. Participants' performance data were fitted with a Boltzmann function to determine optimal performance (defined as 10% below the performance achieved in the full-bandwidth [FBW] condition). STUDY SAMPLE Thirty adults (18-63 years) and thirty children (7-12 years) with normal hearing. RESULTS Visual cues significantly reduced the bandwidth required for optimizing speech recognition performance. The type of speech material also significantly impacted the required bandwidth. Both groups required significantly less bandwidth in quiet, although children required significantly more than adults. The widest bandwidth was required for the phoneme detection task in noise, where children required 7399 Hz and adults 6674 Hz. CONCLUSIONS Listeners require significantly less bandwidth for optimizing speech recognition performance when assessed using sentence materials with visual cues; the required bandwidth systematically decreased as a function of increased contextual, linguistic, and visual content.
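The Boltzmann fit described in this abstract maps percent-correct scores onto a sigmoid of low-pass cutoff frequency, and the "performance 10% below full bandwidth" criterion can then be inverted in closed form. A minimal sketch of that idea (the logistic form and the illustrative parameter values below are assumptions for demonstration, not the study's fitted values):

```python
import math

def boltzmann(f, p_max, f50, k):
    """Boltzmann (logistic) psychometric function: percent correct as a
    function of low-pass cutoff frequency f (Hz). p_max is asymptotic
    performance, f50 the midpoint, k the slope parameter."""
    return p_max / (1.0 + math.exp((f50 - f) / k))

def optimal_bandwidth(p_max, f50, k, drop=10.0):
    """Cutoff frequency at which performance first reaches `drop` points
    below p_max, i.e. the paper's optimal-performance criterion.
    Inverts: target = p_max / (1 + exp((f50 - f)/k))."""
    target = p_max - drop
    return f50 - k * math.log(p_max / target - 1.0)
```

With p_max = 100, a fitted midpoint f50 and slope k, `optimal_bandwidth` returns the cutoff where the fitted curve crosses 90% of asymptote, which is how a single "required bandwidth" number per condition can be read off the fit.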
Affiliation(s)
- Amanda B Silberer
- a * Department of Communication Sciences and Disorders , The University of Iowa , Iowa City , USA.,b Department of Communication Sciences and Disorders , Western Illinois University , Macomb, Illinois , USA
| | - Ruth Bentler
- a * Department of Communication Sciences and Disorders , The University of Iowa , Iowa City , USA
| | - Yu-Hsiang Wu
- a * Department of Communication Sciences and Disorders , The University of Iowa , Iowa City , USA
| |
Collapse
|
16
How neuroscience relates to hearing aid amplification. Int J Otolaryngol 2014; 2014:641652. [PMID: 25045354] [PMCID: PMC4086374] [DOI: 10.1155/2014/641652]
Abstract
Hearing aids are used to improve sound audibility for people with hearing loss, but the ability to make use of the amplified signal, especially in the presence of competing noise, can vary across people. Here we review how neuroscientists, clinicians, and engineers are using various types of physiological information to improve the design and use of hearing aids.
17
Wu YH, Stangl E, Bentler RA. Hearing-aid users' voices: a factor that could affect directional benefit. Int J Audiol 2013; 52:789-94. [PMID: 23777478] [DOI: 10.3109/14992027.2013.802381]
Abstract
OBJECTIVE Backward-facing directional processing (Back-DIR) is an algorithm that employs an anti-cardioid directivity pattern to enhance speech arriving from behind the listener. An experiment originally designed to evaluate Back-DIR, together with its follow-up experiment, is reported to illustrate how hearing-aid users' own voices can affect directional benefit. DESIGN Speech recognition performance was measured in a speech-180°/noise-0° configuration, with the aids programmed to Back-DIR or omnidirectional processing. In the original experiment, the conventional Hearing in Noise Test (HINT) was used, wherein listeners repeated the sentences they heard. In the follow-up experiment, a modified HINT was used, wherein a carrier phrase was presented before each sentence. STUDY SAMPLE Fifteen adults with sensorineural hearing loss participated in both experiments. RESULTS Significant Back-DIR benefit (relative to omnidirectional processing) was observed in the follow-up experiment but not in the original experiment. CONCLUSIONS In the original experiment, the hearing aids were affected by the listeners' voices such that Back-DIR was not always activated when the target speech was presented. In the follow-up experiment, the voice effect was eliminated by the carrier phrase, which activated Back-DIR before the sentences were presented. The results suggest that the effect of hearing-aid technologies is highly dependent on the characteristics of the listening conditions.
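The anti-cardioid pattern named in this abstract is the front-back mirror of the familiar cardioid: a first-order directivity pattern with its null at 0° (front) and maximum sensitivity at 180° (behind the wearer). A minimal sketch of the textbook polar responses (illustrative only, not the tested hearing aids' actual processing):

```python
import math

def first_order_pattern(theta_deg, alpha):
    """Textbook first-order directivity: D(theta) = alpha + (1 - alpha)*cos(theta).
    alpha = 0.5 gives a cardioid, with its null at 180 degrees (behind)."""
    return alpha + (1.0 - alpha) * math.cos(math.radians(theta_deg))

def anti_cardioid(theta_deg):
    """Anti-cardioid: the cardioid flipped front-to-back. The sign of the
    cosine term is inverted, so the null sits at 0 degrees (front) and
    sensitivity is maximal at 180 degrees, favouring talkers behind the
    listener."""
    return 0.5 - 0.5 * math.cos(math.radians(theta_deg))
```

In the speech-180°/noise-0° configuration used in the study, such a pattern passes the rear talker at full sensitivity while attenuating the frontal noise, which is why Back-DIR can yield benefit only when it is actually activated.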
Affiliation(s)
- Yu-Hsiang Wu, Department of Communication Sciences and Disorders, The University of Iowa, Iowa City, USA
18
Internet video telephony allows speech reading by deaf individuals and improves speech perception by cochlear implant users. PLoS One 2013; 8:e54770. [PMID: 23359119] [PMCID: PMC3554620] [DOI: 10.1371/journal.pone.0054770]
Abstract
Objective To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. Methods Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair-Schulz-Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280×720, 640×480, 320×240, 160×120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcams (Logitech Pro9000, C600, and C500), and image/sound delays (0-500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for a live Skype™ video connection and live face-to-face communication were assessed. Results Higher frame rates (>7 fps), higher camera resolution (>640×480 px), and shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by the physical properties of the camera optics or by full-screen mode. There was a significant median gain of +8.5 percentage points (p = 0.009) in speech perception for all 21 CI users when visual cues were additionally shown. CI users with poor open-set speech perception scores (n = 11) showed the greatest benefit under combined audiovisual presentation (median speech perception gain +11.8 percentage points, p = 0.032). Conclusion Webcams have the potential to improve telecommunication for hearing-impaired individuals.
19
Hasan SS, Lai F, Chipara O, Wu YH. AudioSense: enabling real-time evaluation of hearing aid technology in-situ. Proc IEEE Int Symp Comput Based Med Syst 2013; 2013:167-172. [PMID: 25013874] [PMCID: PMC4087026] [DOI: 10.1109/cbms.2013.6627783]
Abstract
AudioSense integrates mobile phones and web technology to measure hearing aid performance in real time and in situ. Measuring the performance of hearing aids in the real world poses significant challenges because performance depends on the patient's listening context. AudioSense uses Ecological Momentary Assessment methods both to evaluate perceived hearing aid performance and to characterize the listening environment using electronic surveys. AudioSense further characterizes a patient's listening context by recording their GPS location and sound samples. By creating a time-synchronized record of listening performance and listening contexts, AudioSense allows researchers to understand the relationship between listening context and hearing aid performance. Performance evaluation shows that AudioSense is reliable and energy-efficient and can estimate signal-to-noise ratio (SNR) levels from captured audio samples.
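The abstract does not describe AudioSense's SNR algorithm, but the general idea of estimating SNR from a field recording can be sketched by treating the quietest frames as noise and the loudest as speech-plus-noise. A hypothetical illustration (the frame length, the 20% tails, and the two-point level difference are all assumptions for demonstration, not AudioSense's actual method):

```python
import math

def frame_rms(samples, frame_len=160):
    """RMS energy of each non-overlapping frame of the recording."""
    return [
        math.sqrt(sum(x * x for x in samples[i:i + frame_len]) / frame_len)
        for i in range(0, len(samples) - frame_len + 1, frame_len)
    ]

def estimate_snr_db(samples, frame_len=160):
    """Crude SNR estimate: average the quietest 20% of frames as the noise
    floor and the loudest 20% as speech-plus-noise, then take the level
    difference in dB."""
    rms = sorted(frame_rms(samples, frame_len))
    k = max(1, len(rms) // 5)
    noise = sum(rms[:k]) / k
    speech = sum(rms[-k:]) / k
    return 20.0 * math.log10(speech / max(noise, 1e-12))
```

For a recording alternating between quiet stretches of amplitude 0.1 and loud stretches of amplitude 1.0, this estimator reports roughly a 20 dB level difference; real implementations would add voice-activity detection and subtract the noise power from the speech frames.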
20
The influence of audiovisual ceiling performance on the relationship between reverberation and directional benefit. Ear Hear 2012; 33:604-14. [DOI: 10.1097/aud.0b013e31825641e4]
21
McCreery RW, Venediktov RA, Coleman JJ, Leech HM. An evidence-based systematic review of directional microphones and digital noise reduction hearing aids in school-age children with hearing loss. Am J Audiol 2012; 21:295-312. [PMID: 22858614] [DOI: 10.1044/1059-0889(2012/12-0014)]
Abstract
PURPOSE The purpose of this evidence-based systematic review was to evaluate the efficacy of digital noise reduction and directional microphones for outcome measures of audibility, speech recognition, speech and language, and self- or parent-report in pediatric hearing aid users. METHOD The authors searched 26 databases for experimental studies published after 1980 addressing one or more clinical questions and meeting all inclusion criteria. The authors evaluated studies for methodological quality and reported or calculated p values and effect sizes when possible. RESULTS A systematic search of the literature resulted in the inclusion of 4 digital noise reduction and 7 directional microphone studies (in 9 journal articles) that addressed speech recognition, speech and language, and/or self- or parent-report outcomes. No digital noise reduction or directional microphone studies addressed audibility outcomes. CONCLUSIONS On the basis of a moderate level of evidence, digital noise reduction was not found to improve or degrade speech understanding. Additional research is needed before conclusions can be drawn regarding the impact of digital noise reduction on important speech, language, hearing, and satisfaction outcomes. Moderate evidence also indicates that directional microphones resulted in improved speech recognition in controlled optimal settings; however, additional research is needed to determine the effectiveness of directional microphones in actual everyday listening environments.
22
Mens LHM. Speech understanding in noise with an eyeglass hearing aid: asymmetric fitting and the head shadow benefit of anterior microphones. Int J Audiol 2010; 50:27-33. [DOI: 10.3109/14992027.2010.521199]