1. Fitzgerald MB, Ward KM, Gianakas SP, Smith ML, Blevins NH, Swanson AP. Speech-in-Noise Assessment in the Routine Audiologic Test Battery: Relationship to Perceived Auditory Disability. Ear Hear 2024;45:816-826. PMID: 38414136; PMCID: PMC11175785; DOI: 10.1097/aud.0000000000001472.
Abstract
OBJECTIVES: Self-assessment of perceived communication difficulty has been used in clinical and research practice for decades. Such questionnaires routinely assess the perceived ability of an individual to understand speech, particularly in background noise. Despite this emphasis on perceived performance in noise, speech recognition in routine audiologic practice is measured by word recognition in quiet (WRQ). Moreover, surprisingly little data exist comparing speech-in-noise (SIN) abilities to perceived communication difficulty. Here, we address these issues by examining audiometric thresholds, WRQ scores, QuickSIN signal-to-noise ratio (SNR) loss, and perceived auditory disability as measured by the five questions on the Speech Spatial Questionnaire-12 (SSQ12) devoted to speech understanding (SSQ12-Speech5).
DESIGN: We examined data from 1633 patients who underwent audiometric assessment at the Stanford Ear Institute. All individuals completed the SSQ12 questionnaire, pure-tone audiometry, and speech assessment consisting of ear-specific WRQ and ear-specific QuickSIN. Only individuals with hearing threshold asymmetries ≤10 dB HL in their high-frequency pure-tone average (HFPTA) were included. Our primary objectives were to (1) examine the relationship between audiometric variables and SSQ12-Speech5 scores, (2) determine the amount of variance in SSQ12-Speech5 scores that could be predicted from audiometric variables, and (3) predict which patients were likely to report greater perceived auditory disability according to the SSQ12-Speech5.
RESULTS: Performance on the SSQ12-Speech5 indicated greater perceived auditory disability with more severe degrees of hearing loss and greater QuickSIN SNR loss. Degree of hearing loss and QuickSIN SNR loss accounted for modest but significant variance in SSQ12-Speech5 scores after accounting for age. In contrast, WRQ scores did not significantly contribute to the predictive power of the model. Degree of hearing loss and QuickSIN SNR loss also showed moderate diagnostic accuracy for determining which patients were likely to report SSQ12-Speech5 scores indicating greater perceived auditory disability.
CONCLUSIONS: Taken together, these data indicate that audiometric factors, including degree of hearing loss (i.e., HFPTA) and QuickSIN SNR loss, are predictive of SSQ12-Speech5 scores, though notable variance remains unaccounted for after considering these factors. HFPTA and QuickSIN SNR loss, but not WRQ scores, accounted for a significant amount of variance in SSQ12-Speech5 scores and were largely effective at predicting which patients are likely to report greater perceived auditory disability on the SSQ12-Speech5. This provides further evidence that speech-in-noise measures have greater clinical utility than WRQ in most instances, as they relate more closely to measures of perceived auditory disability.
Affiliation(s)
- Matthew B. Fitzgerald: Department of Otolaryngology—Head and Neck Surgery, Stanford University, Palo Alto, California, USA
- Kristina M. Ward: Department of Otolaryngology—Head and Neck Surgery, Stanford University, Palo Alto, California, USA
- Steven P. Gianakas: Department of Otolaryngology—Head and Neck Surgery, Stanford University, Palo Alto, California, USA; Department of Speech-Language-Hearing, Boys Town National Research Hospital, Omaha, Nebraska, USA
- Michael L. Smith: Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN, USA
- Nikolas H. Blevins: Department of Otolaryngology—Head and Neck Surgery, Stanford University, Palo Alto, California, USA
- Austin P. Swanson: Department of Otolaryngology—Head and Neck Surgery, Stanford University, Palo Alto, California, USA
2. Witte E, Köbler S, Ekeroot J, Smeds K, Mäki-Torkko E. Test-retest reliability of the urban outdoor situated phoneme (SiP) test. Int J Audiol 2023:1-8. PMID: 38008994; DOI: 10.1080/14992027.2023.2281880.
Abstract
OBJECTIVE: To introduce the urban outdoor version of the Situated Phoneme (SiP) test and investigate its test-retest reliability.
DESIGN: Phonemic discrimination scores in matched-spectrum real-world (MSRW) maskers from an urban outdoor environment were measured using a three-alternative forced-choice test paradigm at different phoneme-to-noise ratios (PNR). Each measurement was repeated twice. Test-retest scores for the full 84-trial SiP-test, as well as for four types of contrasting phonemes, were analysed and compared to critical difference scores based on binomial confidence intervals.
STUDY SAMPLE: Seventy-two adult native speakers of Swedish (26-83 years) with symmetric hearing threshold levels ranging from normal hearing to severe sensorineural hearing loss.
RESULTS: Test-retest scores did not differ significantly, either for the whole test or for the subtests analysed. Fewer test-retest score differences than expected exceeded the bounds of the corresponding critical difference intervals.
CONCLUSIONS: The urban outdoor SiP-test has high test-retest reliability. This information can help audiologists interpret test scores attained with the urban outdoor SiP-test.
Affiliation(s)
- Erik Witte: Audiological Research Centre, Örebro University, Örebro, Sweden; School of Health Sciences, Örebro University, Örebro, Sweden
- Susanne Köbler: School of Health Sciences, Örebro University, Örebro, Sweden
- Jonas Ekeroot: Section of Otorhinolaryngology, Head and Neck Surgery, Department of Surgical Sciences, Uppsala University, Uppsala, Sweden; Hearing Implant Unit, Department of ENT, Karolinska University Hospital, Stockholm, Sweden
- Elina Mäki-Torkko: Audiological Research Centre, Örebro University, Örebro, Sweden; School of Medical Sciences, Örebro University, Örebro, Sweden
3. Warren SE, Atcherson SR. Evaluation of a clinical method for selective electrode deactivation in cochlear implant programming. Front Hum Neurosci 2023;17:1157673. PMID: 37063101; PMCID: PMC10101326; DOI: 10.3389/fnhum.2023.1157673.
Abstract
Background: Cochlear implants are neural prostheses used to restore the perception of hearing in individuals with severe-to-profound hearing loss by stimulating the auditory nerve with electrical current through a surgically implanted electrode array. The integrity of the interface between the implanted electrode array and the auditory nerve contributes to the variability in outcomes experienced by cochlear implant users. Strategies to identify and eliminate poorly encoding electrodes have been found effective in improving outcomes with the device, but their application in a clinical setting is limited.
Objective: The purpose of this study was to evaluate a clinical method for identifying and selectively deactivating cochlear implant (CI) electrodes associated with a poor electrode-neural interface.
Methods: Thirteen adult CI users participated in a pitch-ranking task to identify indiscriminable electrode pairs. Electrodes associated with indiscriminable pairs were selectively deactivated, creating an individualized experimental program. Speech perception was evaluated in the baseline condition and with the experimental program before and after an acclimation period. Participant preference responses were recorded at each visit.
Results: Statistically significant improvements with the experimental program were found in at least one measure of speech perception at the individual level in four out of 13 participants when tested before acclimation. Following an acclimation period, ten out of 13 participants demonstrated statistically significant improvements in at least one measure of speech perception. Statistically significant improvements were found with the experimental program at the group level for both monosyllabic words (p = 0.006) and sentences in noise (p = 0.020). Additionally, ten participants preferred the experimental program prior to the acclimation period, and eleven preferred it afterward.
Conclusion: Results from this study suggest that electrode deactivation may yield improvement in speech perception following an acclimation period. A majority of CI users in our study reported a preference for the experimental program. This method proved to be a suitable clinical strategy for identifying and deactivating poorly encoding electrodes in adult CI users.
Affiliation(s)
- Sarah E. Warren: Cochlear Implant Research Laboratory, School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States; Department of Audiology, Arkansas Children’s Hospital, Little Rock, AR, United States; Department of Audiology and Speech Pathology, University of Arkansas for Medical Sciences, Little Rock, AR, United States
- Samuel R. Atcherson: Department of Audiology and Speech Pathology, University of Arkansas for Medical Sciences, Little Rock, AR, United States; Department of Otolaryngology–Head and Neck Surgery, University of Arkansas for Medical Sciences, Little Rock, AR, United States
4. Gianakas SP, Fitzgerald MB, Winn MB. Identifying Listeners Whose Speech Intelligibility Depends on a Quiet Extra Moment After a Sentence. J Speech Lang Hear Res 2022;65:4852-4865. PMID: 36472938; PMCID: PMC9934912; DOI: 10.1044/2022_jslhr-21-00622.
Abstract
PURPOSE: An extra moment after a sentence is spoken may be important for listeners with hearing loss to mentally repair misperceptions during listening. The current audiologic test battery cannot distinguish between a listener who repaired a misperception and a listener who heard the speech accurately with no need for repair. This study aims to develop a behavioral method to identify individuals who are at risk for relying on a quiet moment after a sentence.
METHOD: Forty-three individuals with hearing loss (32 cochlear implant users, 11 hearing aid users) heard sentences that were followed by either 2 s of silence or 2 s of babble noise. Both high- and low-context sentences were used in the task.
RESULTS: Some individuals showed notable benefit in accuracy scores (particularly for high-context sentences) when given an extra moment of silent time following the sentence. This benefit was highly variable across individuals and sometimes absent altogether. However, the group-level patterns of results were mainly explained by the use of context and successful perception of the words preceding sentence-final words.
CONCLUSIONS: These results suggest that some, but not all, individuals improve their speech recognition score by relying on a quiet moment after a sentence, and that this fragility of speech recognition cannot be assessed using one isolated utterance at a time. Reliance on a quiet moment to repair perceptions would potentially impede the perception of an upcoming utterance, making continuous communication in real-world scenarios difficult, especially for individuals with hearing loss. The methods used in this study, along with some simple modifications if necessary, could potentially identify patients with hearing loss who retroactively repair mistakes, using clinically feasible methods that can ultimately lead to better patient-centered hearing health care.
SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.21644801.
5. Margolis RH, Wilson RH. Evaluation of binomial distribution estimates of confidence intervals of speech-recognition test scores. J Acoust Soc Am 2022;152:1404. PMID: 36182306; DOI: 10.1121/10.0013826.
Abstract
Speech-recognition tests are a routine component of the clinical hearing evaluation. The most common type of test uses recorded monosyllabic words presented in quiet. The interpretation of test scores relies on an understanding of the variance of repeated tests. Confidence intervals are useful for determining whether two scores are significantly different or whether the difference is due to the variability of test scores. Because the response to each test item is binary, either correct or incorrect, the binomial distribution has been used to estimate confidence intervals. This method requires that test scores be independent. If the scores are not independent, the binomial distribution will not accurately estimate the variance of repeated scores. A previously published dataset with repeated scores from normal-hearing and hearing-impaired listeners was used to derive confidence intervals from actual test scores, in contrast to the predicted confidence intervals in earlier reports. This analysis indicates that confidence intervals predicted by the binomial distribution substantially overestimate the variance of repeated scores, resulting in erroneously broad confidence intervals. High correlations were found for repeated scores, indicating that scores are not independent. The interdependence of repeated scores invalidates confidence intervals predicted by the binomial distribution. Confidence intervals and confidence levels for repeated measures were determined empirically from measured test scores to assist in interpreting differences between repeat scores.
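The binomial prediction this study evaluates can be illustrated with a short stdlib-only sketch (illustrative only; the function names are ours, not from the article). Treating each of n words as an independent Bernoulli trial with per-word probability equal to the observed proportion correct, the model's interval of plausible retest scores is obtained by accumulating the most probable score counts until 95% of the probability mass is covered:

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k correct responses in n independent trials."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def binomial_score_interval(score, n_words, level=0.95):
    """Interval of repeated scores predicted by the binomial model, taking the
    observed proportion correct as the per-word probability.  Greedily collects
    the most probable score counts until `level` of the mass is covered."""
    p = score / n_words
    ranked = sorted(range(n_words + 1),
                    key=lambda k: binom_pmf(k, n_words, p), reverse=True)
    total, kept = 0.0, []
    for k in ranked:
        kept.append(k)
        total += binom_pmf(k, n_words, p)
        if total >= level:
            break
    return min(kept), max(kept)

# 40/50 words correct (80%): the range of retest scores the binomial model
# deems consistent with the first score, assuming independence.
low, high = binomial_score_interval(40, 50)
```

The study's point is precisely that such intervals are too broad in practice: because repeated clinical scores are highly correlated rather than independent, empirically derived intervals are narrower than this model predicts.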
Affiliation(s)
- Richard H Wilson: Speech and Hearing Science Program, Arizona State University, Tempe, Arizona 85287, USA
6. Hayes NA, Davidson LS, Uchanski RM. Considerations in pediatric device candidacy: An emphasis on spoken language. Cochlear Implants Int 2022;23:300-308. PMID: 35637623; PMCID: PMC9339525; DOI: 10.1080/14670100.2022.2079189.
Abstract
As cochlear implant (CI) candidacy expands to consider children with more residual hearing, the use of a CI and a hearing aid (HA) at the non-implanted ear (bimodal devices) is increasing. This case study examines the contributions of acoustic and electric input to speech perception performance for a pediatric bimodal device user (S1) who is a borderline bilateral cochlear implant candidate. S1 completed a battery of perceptual tests in CI-only, HA-only, and bimodal conditions. Since CIs and HAs differ in their ability to transmit cues related to segmental and suprasegmental perception, both types of perception were tested. Performance in all three device conditions was generally similar across tests, showing no clear device-condition benefit. Further, S1's spoken language performance was compared to that of a large group of children with prelingual severe-profound hearing loss who used two devices from a young age, at least one of which was a CI. S1's speech perception and language scores were average or above average compared to these other pediatric CI recipients. Both segmental and suprasegmental speech perception, as well as spoken language skills, should be examined to determine the broad-scale performance level of bimodal recipients, especially when deciding whether to move from bimodal devices to bilateral CIs.
Affiliation(s)
- Natalie A Hayes: Program in Audiology and Communication Science, Department of Otolaryngology, Washington University School of Medicine, St. Louis, MO, USA
- Lisa S Davidson: Program in Audiology and Communication Science, Department of Otolaryngology, Washington University School of Medicine, St. Louis, MO, USA
- Rosalie M Uchanski: Program in Audiology and Communication Science, Department of Otolaryngology, Washington University School of Medicine, St. Louis, MO, USA
7. Brungart DS, Sherlock LP, Kuchinsky SE, Perry TT, Bieber RE, Grant KW, Bernstein JGW. Assessment methods for determining small changes in hearing performance over time. J Acoust Soc Am 2022;151:3866. PMID: 35778214; DOI: 10.1121/10.0011509.
Abstract
Although the behavioral pure-tone threshold audiogram is considered the gold standard for quantifying hearing loss, assessment of speech understanding, especially in noise, is more relevant to quality of life but is only partly related to the audiogram. Metrics of speech understanding in noise are therefore an attractive target for assessing hearing over time. However, speech-in-noise assessments have more potential sources of variability than pure-tone threshold measures, making it a challenge to obtain results reliable enough to detect small changes in performance. This review examines the benefits and limitations of speech-understanding metrics and their application to longitudinal hearing assessment, and identifies potential sources of variability, including learning effects, differences in item difficulty, and between- and within-individual variations in effort and motivation. We conclude by recommending the integration of non-speech auditory tests, which provide information about aspects of auditory health that have reduced variability and fewer central influences than speech tests, in parallel with the traditional audiogram and speech-based assessments.
Affiliation(s)
- Douglas S Brungart: Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- LaGuinn P Sherlock: Hearing Conservation and Readiness Branch, U.S. Army Public Health Center, E1570 8977 Sibert Road, Aberdeen Proving Ground, Maryland 21010, USA
- Stefanie E Kuchinsky: Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
- Trevor T Perry: Hearing Conservation and Readiness Branch, U.S. Army Public Health Center, Aberdeen Proving Ground, Maryland 21010, USA
- Rebecca E Bieber: Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
- Ken W Grant: Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
- Joshua G W Bernstein: Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
8. Perreau AE, Tyler RS, Frank V, Watts A, Mancini PC. Use of a Smartphone App for Cochlear Implant Patients With Tinnitus. Am J Audiol 2021;30:676-687. PMID: 34314254; DOI: 10.1044/2021_aja-20-00195.
Abstract
Purpose: Smartphone apps for tinnitus relief are now emerging; however, research supporting their use and effectiveness is lacking. Research has shown that Tinnitus Therapy sounds intended for individuals with acoustic hearing provide relief to some patients using cochlear implants (CIs) with tinnitus. Here, we evaluated the use and acceptability of a smartphone app to help CI patients with tinnitus.
Method: Participants completed a laboratory trial (n = 19) and an at-home trial (n = 14) using the ReSound Tinnitus Relief app to evaluate its acceptability and effectiveness in reducing their tinnitus. During the laboratory trial, participants selected a sound that was most acceptable in managing their tinnitus (termed the chosen sound). Word recognition scores in quiet were obtained before and after sound therapy. Participants were randomly assigned to one of two groups for the at-home trial, that is, AB or BA, using (A) the chosen sound for 2 weeks and (B) the study sound (i.e., broadband noise at hearing threshold) for another 2 weeks. Ratings were collected weekly to determine acceptability and effectiveness of the app in reducing tinnitus loudness and annoyance.
Results: Results indicated that some, but not all, participants found their chosen sound to be acceptable and/or effective in reducing their tinnitus. A majority of the participants rated the chosen sound or the study sound as acceptable in reducing their tinnitus. Word recognition scores for most participants were not adversely affected by the chosen sound; however, a significant decrease was observed for three participants. All 14 participants had a positive experience with the app during the at-home trial on tests of sound therapy acceptability, effectiveness, and word recognition.
Conclusions: Sound therapy using a smartphone app can be effective for many tinnitus patients using CIs. Audiologists should recommend a sound and a level for tinnitus masking that do not interfere with speech perception.
Affiliation(s)
- Ann E. Perreau: Department of Communication Sciences and Disorders, Augustana College, Rock Island, IL; Department of Otolaryngology—Head and Neck Surgery, The University of Iowa, Iowa City
- Richard S. Tyler: Department of Otolaryngology—Head and Neck Surgery, The University of Iowa, Iowa City
- Victoria Frank: Department of Communication Sciences and Disorders, Augustana College, Rock Island, IL
- Alexandra Watts: Department of Otolaryngology—Head and Neck Surgery, The University of Iowa, Iowa City
- Patricia C. Mancini: Department of Otolaryngology—Head and Neck Surgery, The University of Iowa, Iowa City; Department of Speech Pathology and Audiology, Federal University of Minas Gerais, Belo Horizonte, Brazil
9. Hunter JB, Tolisano AM. When to Refer a Hearing-impaired Patient for a Cochlear Implant Evaluation. Otol Neurotol 2021;42:e530-e535. PMID: 33394941; DOI: 10.1097/mao.0000000000003023.
Abstract
OBJECTIVES: To explore the predictive value of utilizing routine audiometry to best determine cochlear implant (CI) candidacy using AzBio sentences.
METHODS: A retrospective chart review was performed between 2011 and 2018 for 206 adult patients who underwent CI evaluation assessed with AzBio sentences. Better hearing ear word recognition score (WRS) using Northwestern University-6 word lists and presentation level in decibels hearing level (HL) from a standard audiogram were used to determine when best to refer a patient for CI evaluation. Predicted AzBio scores from multivariate regression models were calculated and compared with actual CI candidacy to assess accuracy of the regression models.
RESULTS: Race, marital status, hearing aid type, better hearing ear WRS, and HL were all independently and significantly associated with AzBio testing in quiet on univariate analyses. Better hearing ear WRS and better hearing ear HL predicted AzBio Quiet on multivariate regression analysis. For AzBio at +10 dB signal-to-noise ratio (SNR), sex and better hearing ear WRS each significantly predicted speech perception testing. Predicted CI candidacy was based on an AzBio sentence score of ≤60% for ease of statistical analysis. Regression models for AzBio sentence testing in quiet and at +10 dB SNR agreed with actual testing most of the time (85.0% and 87.9%, respectively). A generalized linear model was built for both AzBio testing in quiet and at +10 dB SNR.
CONCLUSION: A WRS of <60% in the better hearing ear derived from a routine audiogram will identify 83.1% of CI candidates while appropriately excluding 63.8% of patients.
Affiliation(s)
- Jacob B Hunter: Department of Otolaryngology, University of Texas Southwestern Medical Center, Dallas, Texas
- Anthony M Tolisano: Department of Otolaryngology-Head and Neck Surgery, Walter Reed National Military Medical Center, Bethesda, Maryland
10. Interhemispheric Auditory Cortical Synchronization in Asymmetric Hearing Loss. Ear Hear 2021;42:1253-1262. PMID: 33974786; PMCID: PMC8378543; DOI: 10.1097/aud.0000000000001027.
Abstract
Objectives: Auditory cortical activation of the two hemispheres to monaurally presented tonal stimuli has been shown to be asynchronous in normal hearing (NH) but synchronous in the extreme case of adult-onset asymmetric hearing loss (AHL) with single-sided deafness. We addressed the wide knowledge gap between these two anchoring states of interhemispheric temporal organization. The objectives of this study were as follows: (1) to map the trajectory of interhemispheric temporal reorganization from asynchrony to synchrony using magnitude of interaural threshold difference as the independent variable in a cross-sectional study and (2) to evaluate reversibility of interhemispheric synchrony, in association with hearing in noise performance, by amplifying the aidable poorer ear in a repeated-measures longitudinal study.
Design: The cross-sectional and longitudinal cohorts comprised 49 subjects: AHL (N = 21; 11 male, 10 female; mean age = 48 years) and NH (N = 28; 16 male, 12 female; mean age = 45 years). The maximum interaural threshold difference of the two cohorts spanned from 0 to 65 dB. Magnetoencephalography analyses focused on latency of the M100 peak response from auditory cortex in both hemispheres between 50 msec and 150 msec following monaural tonal stimulation at the frequency (0.5, 1, 2, 3, or 4 kHz) corresponding to the maximum and minimum interaural threshold difference for better and poorer ears separately. The longitudinal AHL cohort was drawn from three subjects in the cross-sectional AHL cohort (all male; ages 49 to 60 years; varied AHL etiologies; no amplification for at least 2 years). All longitudinal study subjects were treated by monaural amplification of the poorer ear and underwent repeated-measures examination of the M100 response latency and Quick Speech-in-Noise hearing in noise performance at baseline and at postamplification months 3, 6, and 12.
Results: The M100 response peak latency values in the ipsilateral hemisphere lagged those in the contralateral hemisphere for all stimulation conditions. The mean (SD) interhemispheric latency difference values (ipsilateral less contralateral) to better ear stimulation for three categories of maximum interaural threshold difference were as follows: NH (≤ 10 dB)—8.6 (3.0) msec; AHL (15 to 40 dB)—3.0 (1.2) msec; AHL (≥ 45 dB)—1.4 (1.3) msec. In turn, these difference magnitudes were used to define interhemispheric temporal organization states of asynchrony; mixed asynchrony and synchrony; and synchrony, respectively. Amplification of the poorer ear in longitudinal subjects drove interhemispheric organization change from baseline synchrony to postamplification asynchrony, with improvement in hearing in noise performance in those with baseline impairment over a 12-month period.
Conclusions: Interhemispheric temporal organization in AHL was anchored between states of asynchrony in NH and synchrony in single-sided deafness. For asymmetry magnitudes between 15 and 40 dB, the intermediate mixed state of asynchrony and synchrony was continuous and reversible. Amplification of the poorer ear in AHL improved hearing in noise performance and restored normal temporal organization of auditory cortices in the two hemispheres. The return to normal interhemispheric asynchrony from baseline synchrony and improvement in hearing following monaural amplification of the poorer ear evolved progressively over a 12-month period.
11. Speech Perception Changes in the Acoustically Aided, Nonimplanted Ear after Cochlear Implantation: A Multicenter Study. J Clin Med 2020;9:1758. PMID: 32517138; PMCID: PMC7356938; DOI: 10.3390/jcm9061758.
Abstract
In recent years there has been an increasing percentage of cochlear implant (CI) users who have usable residual hearing in the contralateral, nonimplanted ear, typically aided by acoustic amplification. This raises the issue of the extent to which the signal presented through the cochlear implant may influence how listeners process information in the acoustically stimulated ear. This multicenter retrospective study examined pre- to postoperative changes in speech perception in the nonimplanted ear, the implanted ear, and both together. Results in the latter two conditions showed the expected increases, but speech perception in the nonimplanted ear showed a modest yet meaningful decrease that could not be completely explained by changes in unaided thresholds, hearing aid malfunction, or several other demographic variables. Decreases in speech perception in the nonimplanted ear were more likely in individuals who had better levels of speech perception in the implanted ear, and in those who had better speech perception in the implanted than in the nonimplanted ear. This raises the possibility that, in some cases, bimodal listeners may rely on the higher quality signal provided by the implant and may disregard or even neglect the input provided by the nonimplanted ear.
12.
Abstract
This article reviews the use of human neuroimaging for chronic subjective tinnitus. Evidence-based guidance on the clinical use of imaging to identify relevant auditory lesions when evaluating tinnitus patients is given. After introducing the anatomy and imaging modalities most pertinent to the neuroscience of tinnitus, the article reviews tinnitus-associated alterations in key auditory and nonauditory networks in the central nervous system. Emphasis is placed on how these findings support proposed models of tinnitus and how this line of investigation is relevant to practicing clinicians.
Affiliation(s)
- Meredith E Adams
- Department of Otolaryngology-Head and Neck Surgery, University of Minnesota, 420 Delaware Street Southeast, MMC 395, Minneapolis, MN 55455, USA
- Tina C Huang
- Department of Otolaryngology-Head and Neck Surgery, University of Minnesota, 420 Delaware Street Southeast, MMC 395, Minneapolis, MN 55455, USA
- Srikantan Nagarajan
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 513 Parnassus Avenue S362, San Francisco, CA 94143-0628, USA; Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, 2233 Post Street Suite 341, San Francisco, CA 94115-1225, USA
- Steven W Cheung
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, 2233 Post Street Suite 341, San Francisco, CA 94115-1225, USA

13
Steffens T, Steffens LM, Marcrum SC. Chance-level hit rates in closed-set, forced-choice audiometry and a novel utility for the significance test-based detection of malingering. PLoS One 2020; 15:e0231715. [PMID: 32315326 PMCID: PMC7173938 DOI: 10.1371/journal.pone.0231715]
Abstract
The primary aim of this study was to extend existing theory on the relationship between chance-level performance and the number of alternatives and trials in closed-set, forced-choice speech audiometry and sound localization methods. When calculating chance performance for closed-set, forced-choice experiments with multiple trials, the binomial distribution should be preferred over the simple 1/a probability, as the latter is appropriate only for single-trial experiments. The historical use of constant hit rates for determining chance performance has been based upon the assumption that random hits are distributed evenly across multiple trials. For any closed-set, forced-choice task with 2 to 10 alternatives and 2 to 100 trials, we calculated the probability of obtaining any given hit rate due to random guessing alone according to the binomial distribution. Hit rates with probabilities p > 0.05 were interpreted as being likely to occur due to random chance alone, whereas hit rates with probabilities p ≤ 0.05 were interpreted as being unlikely to occur due to chance alone. For sound localization experiments with speakers at fixed positions, the expected probability of a random hit was also calculated using the binomial distribution. The expected angular root mean square (rms) error in sound localization resulting from the random selection of sound sources was investigated using Monte Carlo simulations. A new aspect in the interpretation of test results was identified for situations in which the observed number of hits is much lower than would be expected due to chance alone. For test methods incorporating a relatively low number of alternatives and a sufficiently high, yet clinically feasible, number of trials, both upper and lower thresholds for chance-level performance could be identified. The lower threshold represents the lowest hit rate that can be expected through random chance alone. Extending interpretation of results to include this lower threshold affords the ability to identify not only performance significantly superior to chance, but also performance significantly poorer than chance, and thereby represents a simple method for the objective detection of malingering.
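The binomial logic the abstract describes can be sketched briefly in code. The snippet below is an illustrative reconstruction, not the authors' published utility: it finds, for a given number of trials and alternatives, the smallest hit count that is significantly above chance and the largest hit count that is significantly below chance. Function and parameter names are my own.

```python
from scipy.stats import binom

def chance_thresholds(n_trials, n_alternatives, alpha=0.05):
    """Upper and lower hit-count thresholds for chance-level performance.

    Scores at or above `upper` (or at or below `lower`) are unlikely
    (p <= alpha) to arise from random guessing alone. Returns None for
    a bound when no such threshold exists for the given test length.
    """
    p = 1.0 / n_alternatives  # single-trial guessing probability

    # Smallest k with P(X >= k) <= alpha: significantly above chance.
    upper = next((k for k in range(n_trials + 1)
                  if binom.sf(k - 1, n_trials, p) <= alpha), None)

    # Largest k with P(X <= k) <= alpha: significantly below chance
    # (suspiciously poor, the paper's malingering indicator).
    lower = next((k for k in range(n_trials, -1, -1)
                  if binom.cdf(k, n_trials, p) <= alpha), None)
    return lower, upper
```

For example, with 25 two-alternative trials, fewer than 8 or more than 17 hits would both be unlikely under pure guessing, so a score of, say, 3/25 would flag below-chance responding.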
Affiliation(s)
- Thomas Steffens
- Department of Otolaryngology, University Hospital Regensburg, Regensburg, Germany
- Lisa M. Steffens
- Center for Cognitive Sciences, University of Bremen, Bremen, Germany
- Steven C. Marcrum
- Department of Otolaryngology, University Hospital Regensburg, Regensburg, Germany

14
The Effect of Hearing Aid Bandwidth and Configuration of Hearing Loss on Bimodal Speech Recognition in Cochlear Implant Users. Ear Hear 2019; 40:621-635. [PMID: 30067559 DOI: 10.1097/aud.0000000000000638]
Abstract
OBJECTIVES (1) To determine the effect of hearing aid (HA) bandwidth on bimodal speech perception in a group of unilateral cochlear implant (CI) patients with diverse degrees and configurations of hearing loss in the nonimplanted ear, (2) to determine whether there are demographic and audiometric characteristics that would help to determine the appropriate HA bandwidth for a bimodal patient. DESIGN Participants were 33 experienced bimodal device users with postlingual hearing loss. Twenty-three of them had better speech perception with the CI than the HA (CI>HA group) and 10 had better speech perception with the HA than the CI (HA>CI group). Word recognition in sentences (AzBio sentences at +10 dB signal-to-noise ratio presented at 0° azimuth) and in isolation [CNC (consonant-nucleus-consonant) words] was measured in unimodal conditions [CI alone or HAWB, which indicates HA alone in the wideband (WB) condition] and in bimodal conditions (BMWB, BM2k, BM1k, and BM500) as the bandwidth of an actual HA was reduced from WB to 2 kHz, 1 kHz, and 500 Hz. Linear mixed-effect modeling was used to quantify the relationship between speech recognition and listening condition and to assess how audiometric or demographic covariates might influence this relationship in each group. RESULTS For the CI>HA group, AzBio scores were significantly higher (on average) in all bimodal conditions than in the best unimodal condition (CI alone) and were highest at the BMWB condition. For CNC scores, on the other hand, there was no significant improvement over the CI-alone condition in any of the bimodal conditions. The opposite pattern was observed in the HA>CI group. CNC word scores were significantly higher in the BM2k and BMWB conditions than in the best unimodal condition (HAWB), but none of the bimodal conditions were significantly better than the best unimodal condition for AzBio sentences (and some of the restricted bandwidth conditions were actually worse).
Demographic covariates did not interact significantly with bimodal outcomes, but some of the audiometric variables did. For CI>HA participants with a flatter audiometric configuration and better mid-frequency hearing, bimodal AzBio scores were significantly higher than the CI-alone score with the WB setting (BMWB) but not with other bandwidths. In contrast, CI>HA participants with more steeply sloping hearing loss and poorer mid-frequency thresholds (≥82.5 dB) had significantly higher bimodal AzBio scores in all bimodal conditions, and the BMWB did not differ significantly from the restricted bandwidth conditions. HA>CI participants with mild low-frequency hearing loss showed the highest levels of bimodal improvement over the best unimodal condition on CNC words. They were also less affected by HA bandwidth reduction compared with HA>CI participants with poorer low-frequency thresholds. CONCLUSIONS The pattern of bimodal performance as a function of the HA bandwidth was found to be consistent with the degree and configuration of hearing loss for both patients with CI>HA performance and for those with HA>CI performance. Our results support fitting the HA for all bimodal patients with the widest bandwidth consistent with effective audibility.
15
Cheung SW, Racine CA, Henderson-Sabes J, Demopoulos C, Molinaro AM, Heath S, Nagarajan SS, Bourne AL, Rietcheck JE, Wang SS, Larson PS. Phase I trial of caudate deep brain stimulation for treatment-resistant tinnitus. J Neurosurg 2019; 133:992-1001. [PMID: 31553940 PMCID: PMC7089839 DOI: 10.3171/2019.4.jns19347]
Abstract
OBJECTIVE The objective of this open-label, nonrandomized trial was to evaluate the efficacy and safety of bilateral caudate nucleus deep brain stimulation (DBS) for treatment-resistant tinnitus. METHODS Six participants underwent DBS electrode implantation. One participant was removed from the study for suicidality unrelated to brain stimulation. Participants underwent a stimulation optimization period that ranged from 5 to 13 months, during which the most promising stimulation parameters for tinnitus reduction were determined for each individual. These individual optimal stimulation parameters were then used during 24 weeks of continuous caudate stimulation to reach the endpoint. The primary outcome for efficacy was the Tinnitus Functional Index (TFI); the primary outcome for executive function (EF) safety was a composite z-score from multiple neuropsychological tests (EF score). The secondary outcome for efficacy was the Tinnitus Handicap Inventory (THI); for neuropsychiatric safety, the Frontal Systems Behavior Scale (FrSBe); and for hearing safety, pure-tone audiometry at 0.5, 1, 2, 3, 4, and 6 kHz and word recognition score (WRS). Other monitored outcomes included surgery- and device-related adverse events (AEs). Five participants provided fully analyzable data sets. Primary and secondary outcomes were based on differences in measurements between baseline and endpoint. RESULTS The treatment effect size of caudate DBS for tinnitus was assessed by TFI [mean (SE), 23.3 (12.4)] and THI [30.8 (10.4)] scores, both of which were statistically significant (Wilcoxon signed-rank test, 1-tailed; alpha = 0.05). Based on clinically significant treatment response categorical analysis, there were 3 responders determined by TFI (≥ 13-point decrease) and 4 by THI (≥ 20-point decrease) scores. Safety outcomes according to EF score, FrSBe, audiometric thresholds, and WRS showed no significant change with continuous caudate stimulation.
Surgery-related and device-related AEs were expected, transient, and reversible. There was only one serious AE, a suicide attempt unrelated to caudate neuromodulation in a participant in whom stimulation was in the off mode for 2 months prior to the event. CONCLUSIONS Bilateral caudate nucleus neuromodulation by DBS for severe, refractory tinnitus in this phase I trial showed very encouraging results. Primary and secondary outcomes revealed a highly variable treatment effect size and 60%-80% treatment response rate for clinically significant benefit, and no safety concerns. The design of a phase II trial may benefit from targeting refinement for final DBS lead placement to decrease the duration of the stimulation optimization period and to increase treatment effect size uniformity.Clinical trial registration no.: NCT01988688 (clinicaltrials.gov).
Affiliation(s)
- Steven W. Cheung
- Department of Otolaryngology – Head and Neck Surgery, UCSF, San Francisco, USA
- Surgical Services, Veterans Affairs Health Care System, San Francisco, USA
- Carly Demopoulos
- Department of Psychiatry, UCSF, San Francisco, USA
- Department of Radiology and Biomedical Imaging, UCSF, San Francisco, USA
- Susan Heath
- Surgical Services, Veterans Affairs Health Care System, San Francisco, USA
- Srikantan S. Nagarajan
- Department of Otolaryngology – Head and Neck Surgery, UCSF, San Francisco, USA
- Department of Radiology and Biomedical Imaging, UCSF, San Francisco, USA
- Andrea L. Bourne
- Audiology and Speech Pathology Service, Veterans Affairs Health Care System, San Francisco, USA
- John E. Rietcheck
- Audiology and Speech Pathology Service, Veterans Affairs Health Care System, San Francisco, USA
- Paul S. Larson
- Surgical Services, Veterans Affairs Health Care System, San Francisco, USA
- Department of Neurological Surgery, UCSF, San Francisco, USA

16
Yu TLJ, Schlauch RS. Diagnostic Precision of Open-Set Versus Closed-Set Word Recognition Testing. J Speech Lang Hear Res 2019; 62:2035-2047. [PMID: 31194914 DOI: 10.1044/2019_jslhr-h-18-0317]
Abstract
Purpose The aim of the study was to examine the precision of forced-choice (closed-set) and open-ended (open-set) word recognition (WR) tasks for identifying a change in hearing. Method WR performance for closed-set (4 and 6 choices) and open-set tasks was obtained from 70 listeners with normal hearing. Speech recognition was degraded by presenting monosyllabic words in noise (-8, -4, 0, and 4 dB signal-to-noise ratios) or processed by a sine wave vocoder (2, 4, 6, and 8 channels). Results The 2 degraded speech understanding conditions yielded similarly shaped, monotonically increasing psychometric functions, with the closed-set tasks having shallower slopes and higher scores than the open-set task for the same listening condition. Psychometric functions fitted to the average data were the input to a computer simulation conducted to assess the ability of each task to identify a change in hearing. Individual data were also analyzed using 95% confidence intervals for significant changes in scores for words and phonemes. These analyses ranked the conditions, from most to least efficient: open-set (phoneme), open-set (word), closed-set (6 choices), and closed-set (4 choices). Conclusions Closed-set WR testing has distinct advantages for implementation, but its poorer precision for identifying a change than open-set WR testing must be considered.
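The confidence-interval analysis above rests on binomial variability: the more scored items (phonemes vs. words), the narrower the critical range for a retest score. The following is a deliberately simplified Python sketch of that idea, not the study's actual analysis; unlike a full Thornton & Raffin-style computation, it treats the first score as the true probability and therefore understates real clinical ranges. All names are assumptions.

```python
from scipy.stats import binom

def retest_range(score_correct, n_items, conf=0.95):
    """Simplified critical range for a retest score (count correct).

    Treats the first score as the true per-item probability and returns
    the central `conf` interval of retest counts under a binomial model.
    A retest outside this range suggests a genuine change in performance.
    """
    p_hat = score_correct / n_items
    lo, hi = binom.interval(conf, n_items, p_hat)
    return int(lo), int(hi)
```

Scoring the same 50-word list by its roughly 150 phonemes triples the item count, so the critical range shrinks as a proportion of the test, which is one way to see why phoneme scoring ranked as most efficient.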
Affiliation(s)
- Tzu-Ling J Yu
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis
- Robert S Schlauch
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis

17
Alexander JM. The S-SH Confusion Test and the Effects of Frequency Lowering. J Speech Lang Hear Res 2019; 62:1486-1505. [PMID: 31063023 DOI: 10.1044/2018_jslhr-h-18-0267]
Abstract
Purpose Frequency lowering in hearing aids can cause listeners to perceive [s] as [ʃ]. The S-SH Confusion Test, which consists of 66 minimal word pairs spoken by 6 female talkers, was designed to help clinicians and researchers document these negative side effects. This study's purpose was to use this new test to evaluate the hypothesis that these confusions will increase to the extent that low frequencies are altered. Method Twenty-one listeners with normal hearing were each tested on 7 conditions. Three were control conditions that were low-pass filtered at 3.3, 5.0, and 9.1 kHz. Four conditions were processed with nonlinear frequency compression (NFC): 2 had a 3.3-kHz maximum audible output frequency (MAOF), with a start frequency (SF) of 1.6 or 2.2 kHz; 2 had a 5.0-kHz MAOF, with an SF of 1.6 or 4.0 kHz. Listeners' responses were analyzed using concepts from signal detection theory. Response times were also collected as a measure of cognitive processing. Results Overall, [s] for [ʃ] confusions were minimal. As predicted, [ʃ] for [s] confusions increased for NFC conditions with a lower versus higher MAOF and with a lower versus higher SF. Response times for trials with correct [s] responses were shortest for the 9.1-kHz control and increased for the 5.0- and 3.3-kHz controls. NFC response times were also significantly longer as MAOF and SF decreased. The NFC condition with the highest MAOF and SF had statistically shorter response times than its control condition, indicating that, under some circumstances, NFC may ease cognitive processing. Conclusions Large differences in the S-SH Confusion Test across frequency-lowering conditions show that it can be used to document a major negative side effect associated with frequency lowering. Smaller but significant differences in response times for correct [s] trials indicate that NFC can help or hinder cognitive processing, depending on its settings.
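The abstract notes that listeners' responses were analyzed using concepts from signal detection theory. As a hedged illustration only (this is not the study's analysis code), a sensitivity index d' for [ʃ]-for-[s] confusions could be computed from hit and false-alarm rates as follows; the 1/(2N) correction rule and all names are assumptions.

```python
from scipy.stats import norm

def dprime(hits, n_signal, false_alarms, n_noise):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    with a common 1/(2N) correction because rates of exactly
    0 or 1 would make the z-transform infinite."""
    def rate(k, n):
        return min(max(k / n, 1 / (2 * n)), 1 - 1 / (2 * n))
    h = rate(hits, n_signal)         # P("sh" response | [sh] trial)
    f = rate(false_alarms, n_noise)  # P("sh" response | [s] trial)
    return norm.ppf(h) - norm.ppf(f)
```

Equal hit and false-alarm rates give d' = 0 (no sensitivity to the [s]/[ʃ] contrast), while a high hit rate with few false alarms gives a large positive d'.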
Affiliation(s)
- Joshua M Alexander
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana

18
Kelley KS, Littenberg B. Dichotic Listening Test-Retest Reliability in Children. J Speech Lang Hear Res 2019; 62:169-176. [PMID: 30950751 DOI: 10.1044/2018_jslhr-h-17-0158]
Abstract
Purpose The objective of the study was to compare test-retest reliability of three dichotic listening tests: SCAN-3 Competing Words Test (Words; Keith, 2009a, 2009b), Double Dichotic Digits Test (Digits; Musiek, 1983a), and Bergen Dichotic Listening Test With Consonant-Vowel Syllables (Syllables; Hugdahl & Hammar, 1997). Method Sixty English-speaking children, 7-14 years old with normal hearing, had a single study visit during which each test was administered twice. Changes on retest were summarized by within-subject standard deviation (Sw), compared among tests, and compared with binomial model predictions. Correlates of variance were explored. Results Scores based on 40 items were more precise (Sw = 5%) than those based on 20-30 items (Sw = 6%-8%). All 3 tests had reliability within the bounds predicted by the binomial model. Changes on retest for the Words and Digits tests were weakly associated with age, but this is confounded by the trend for older children to have higher Words and Digits scores. Conclusions Digits Right, Digits Left, and Words Total scores (each based on 40 items) had the best reliability among the clinically used scores. Scores based on fewer items were less precise. Poor precision may contribute to misdiagnosis in the clinic and to nondifferential misclassification in research. More precise estimates of dichotic listening ability require longer tests.
Affiliation(s)
- Kairn Stetler Kelley
- Program in Clinical and Translational Science, University of Vermont, Burlington
- Benjamin Littenberg
- Program in Clinical and Translational Science, University of Vermont, Burlington
- Robert Larner, MD College of Medicine, University of Vermont, Burlington

19
Durakovic N, Valente M, Goebel JA, Wick CC. What defines asymmetric sensorineural hearing loss? Laryngoscope 2018; 129:1023-1024. [DOI: 10.1002/lary.27504]
Affiliation(s)
- Nedim Durakovic
- Department of Otolaryngology–Head and Neck Surgery, Washington University School of Medicine, St. Louis, Missouri, USA
- Michael Valente
- Department of Otolaryngology–Head and Neck Surgery, Washington University School of Medicine, St. Louis, Missouri, USA
- Joel A. Goebel
- Department of Otolaryngology–Head and Neck Surgery, Washington University School of Medicine, St. Louis, Missouri, USA
- Cameron C. Wick
- Department of Otolaryngology–Head and Neck Surgery, Washington University School of Medicine, St. Louis, Missouri, USA

20
Davidson LS, Firszt JB, Brenner C, Cadieux JH. Evaluation of hearing aid frequency response fittings in pediatric and young adult bimodal recipients. J Am Acad Audiol 2018; 26:393-407. [PMID: 25879243 DOI: 10.3766/jaaa.26.4.7]
Abstract
BACKGROUND A coordinated fitting of a cochlear implant (CI) and contralateral hearing aid (HA) for bimodal device use should emphasize balanced audibility and loudness across devices. However, guidelines for allocating frequency information to the CI and HA are not well established for the growing population of bimodal recipients. PURPOSE The study aim was to compare the effects of three different HA frequency responses, when fitting a CI and an HA for bimodal use, on speech recognition and localization in children/young adults. Specifically, the three frequency responses were wideband, restricted high frequency, and nonlinear frequency compression (NLFC), which were compared with measures of word recognition in quiet, sentence recognition in noise, talker discrimination, and sound localization. RESEARCH DESIGN The HA frequency responses were evaluated using an A B₁ A B₂ test design: wideband frequency response (baseline-A), restricted high-frequency response (experimental-B₁), and NLFC activated (experimental-B₂). All participants were allowed 3-4 weeks between each test session for acclimatization to each new HA setting. Bimodal benefit was determined by comparing the bimodal score to the CI-alone score. STUDY SAMPLE Participants were 14 children and young adults (ages 7-21 yr) who were experienced users of bimodal devices. All had been unilaterally implanted with a Nucleus CI24 internal system and used either a Freedom or CP810 speech processor. All received a Phonak Naida IX UP behind-the-ear HA at the beginning of the study. DATA COLLECTION AND ANALYSIS Group results for the three bimodal conditions (HA frequency response with wideband, restricted high frequency, and NLFC) on each outcome measure were analyzed using a repeated measures analysis of variance. Group results using the individual "best bimodal" score were analyzed and confirmed using a resampling procedure.
Correlation analyses examined the effects of audibility (aided and unaided hearing) in each bimodal condition for each outcome measure. Individual data were analyzed for word recognition in quiet, sentence recognition in noise, and localization. Individual preference for the three bimodal conditions was also assessed. RESULTS Group data revealed no significant difference between the three bimodal conditions for word recognition in quiet, sentence recognition in noise, and talker discrimination. However, group data for the localization measure revealed that both wideband and NLFC resulted in significantly improved bimodal performance. The condition that yielded the "best bimodal" score varied across participants. Because of this individual variability, the "best bimodal" score was chosen for each participant to reassess group data within word recognition in quiet, sentence recognition in noise, and talker discrimination. This method revealed a bimodal benefit for word recognition in quiet after a randomization test was used to confirm significance. The majority of the participants preferred NLFC at the conclusion of the study, although a few preferred a restricted high-frequency response or reported no preference. CONCLUSIONS These results support consideration of restricted high-frequency and NLFC HA responses in addition to traditional wideband response for bimodal device users.
Affiliation(s)
- Lisa S Davidson
- Department of Otolaryngology, Washington University School of Medicine, St. Louis, Missouri
- Jill B Firszt
- Department of Otolaryngology, Washington University School of Medicine, St. Louis, Missouri
- Chris Brenner
- Department of Otolaryngology, Washington University School of Medicine, St. Louis, Missouri

21
Nonlinguistic Outcome Measures in Adult Cochlear Implant Users Over the First Year of Implantation. Ear Hear 2018; 37:354-364. [PMID: 26656317 DOI: 10.1097/aud.0000000000000261]
Abstract
OBJECTIVES Postlingually deaf cochlear implant users' speech perception improves over several months after implantation due to a learning process which involves integration of the new acoustic information presented by the device. Basic tests of hearing acuity might evaluate sensitivity to the new acoustic information and be less sensitive to learning effects. It was hypothesized that, unlike speech perception, basic spectral and temporal discrimination abilities will not change over the first year of implant use. If there were limited change over time and the test scores were correlated with clinical outcome, the tests might be useful for acute diagnostic assessments of hearing ability and also useful for testing speakers of any language, many of which do not have validated speech tests. DESIGN Ten newly implanted cochlear implant users were tested for speech understanding in quiet and in noise at 1 and 12 months postactivation. Spectral-ripple discrimination, temporal-modulation detection, and Schroeder-phase discrimination abilities were evaluated at 1, 3, 6, 9, and 12 months postactivation. RESULTS Speech understanding in quiet improved between 1 and 12 months postactivation (mean 8% improvement). Speech in noise performance showed no statistically significant improvement. Mean spectral-ripple discrimination thresholds and temporal-modulation detection thresholds for modulation frequencies of 100 Hz and above also showed no significant improvement. Spectral-ripple discrimination thresholds were significantly correlated with speech understanding. Low FM detection and Schroeder-phase discrimination abilities improved over the period. Individual learning trends varied, but the majority of listeners followed the same stable pattern as group data. 
CONCLUSIONS Spectral-ripple discrimination ability and temporal-modulation detection at 100-Hz modulation and above might serve as a useful diagnostic tool for early acute assessment of cochlear implant outcome for listeners speaking any native language.
22
Dyer RK, Spearman M, Spearman B, McCraney A. Evaluating speech perception of the MAXUM middle ear implant versus speech perception under inserts. Laryngoscope 2017; 128:456-460. [PMID: 28581120 DOI: 10.1002/lary.26605]
Abstract
OBJECTIVES/HYPOTHESIS To evaluate the speech perception of the Ototronix MAXUM middle ear implant relative to patients' cochlear potential for speech perception. STUDY DESIGN Clinical study chart review. METHODS We performed an evaluation of data from a prospective clinical study of 10 MAXUM patients. Primary outcome measures included comparison of word recognition (WR) scores with MAXUM (WRMAXUM) versus word recognition under inserts (WRinserts), and the functional gain improvement for pure-tone average (PTA) (0.5, 1, and 2 kHz) and high-frequency pure-tone average (2, 3, and 4 kHz). RESULTS Ten ears in 10 adult patients (six female; average age 68.7 years) were included. The average speech perception gap (difference between WRinserts and WRMAXUM) with MAXUM was -9.2% (range, -26% to 4%); a negative number indicates that WRMAXUM was higher than WRinserts. The average PTA with MAXUM was 23.1 dB (range, 18.7-30 dB), a 38.0-dB gain over the preoperative unaided condition (range, 20-53.3 dB). The average high-frequency pure-tone average with MAXUM was 34.4 dB (range, 26-43.3 dB), a 42.8-dB gain over the preoperative unaided condition (range, 32.3-58.7 dB). CONCLUSIONS A significant, very strong correlation was observed between WRinserts and WRMAXUM scores (r = 0.86, P = .001), so a patient's WRinserts score may be used to reasonably predict word recognition outcomes with MAXUM. LEVEL OF EVIDENCE 4. Laryngoscope, 128:456-460, 2018.
Affiliation(s)
- R Kent Dyer
- Department of Surgery and Board of Directors, Hough Ear Institute, Oklahoma City, Oklahoma, USA

23
Myles AJ. The clinical use of Arthur Boothroyd (AB) word lists in Australia: exploring evidence-based practice. Int J Audiol 2017; 56:870-875. [DOI: 10.1080/14992027.2017.1327123]
24
Pedersen ER, Juhl PM. Simulated Critical Differences for Speech Reception Thresholds. J Speech Lang Hear Res 2017; 60:238-250. [PMID: 28114613 DOI: 10.1044/2016_jslhr-h-15-0445]
Abstract
PURPOSE Critical differences state by how much 2 test results have to differ in order to be significantly different. Critical differences for discrimination scores have been available for several decades, but they do not exist for speech reception thresholds (SRTs). This study presents and discusses how critical differences for SRTs can be estimated by Monte Carlo simulations. As an application of this method, critical differences are proposed for a 5-word sentences test (a matrix test) using 2 widely implemented adaptive test procedures. METHOD For each procedure, simulations were performed for different parameters: the number of test sentences, the j factor, the distribution of the subjects' true SRTs, and the slope of the discrimination function. For 1 procedure and 1 parameter setting, simulation data are compared with results found by listening tests (experimental data). RESULTS The critical differences were found to depend on the parameters tested, including interactive effects. The critical differences found by simulation agree with data found experimentally. CONCLUSIONS As the critical differences for SRTs rely on multiple parameters, they must be determined for each parameter setting individually. However, with knowledge of the test setup, rules of thumb can be derived.
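The Monte Carlo idea above can be sketched compactly. The snippet below is a much-simplified stand-in for the adaptive matrix-test procedures the paper evaluates (a crude per-word SNR step rule rather than a published procedure, with assumed slope and step parameters): it simulates pairs of SRT measurements for an unchanged listener and takes the 95th percentile of their absolute difference as the critical difference.

```python
import numpy as np

rng = np.random.default_rng(1)

def word_pc(snr, srt, slope=0.15):
    """Logistic psychometric function: per-word probability correct,
    with `slope` the proportion-correct change per dB at the SRT
    (0.15/dB is an assumed, matrix-test-like value)."""
    return 1 / (1 + np.exp(-4 * slope * (snr - srt)))

def run_srt_track(true_srt, n_sentences=20, slope=0.15):
    """Crude adaptive track: 5-word sentences; step the SNR by 0.5 dB
    per word below/above 2.5 correct (real procedures are more refined).
    SRT estimate = mean presented SNR over the later sentences."""
    snr, history = 0.0, []
    for _ in range(n_sentences):
        n_correct = rng.binomial(5, word_pc(snr, true_srt, slope))
        snr += (2.5 - n_correct) * 0.5  # adaptive step toward the SRT
        history.append(snr)
    return np.mean(history[n_sentences // 2:])

def critical_difference(n_sim=2000, **kw):
    """95th percentile of |SRT1 - SRT2| for a listener whose true SRT
    has not changed; larger observed differences would be significant."""
    d = [abs(run_srt_track(-8.0, **kw) - run_srt_track(-8.0, **kw))
         for _ in range(n_sim)]
    return float(np.percentile(d, 95))
```

Re-running this with different sentence counts or slopes shows the paper's central point: the critical difference is not a single constant but depends on the parameter setting being simulated.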
Affiliation(s)
- Peter Møller Juhl
- The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Odense

25
Pedersen ER, Juhl PM, Wetke R, Andersen TD. Speech perception in medico-legal assessment of hearing disabilities. Int J Audiol 2016; 55:547-555. [PMID: 27379376 DOI: 10.1080/14992027.2016.1198967]
Abstract
OBJECTIVE To examine Danish data on medico-legal compensation for hearing disabilities. The study purposes are: (1) to investigate whether discrimination scores (DSs) relate to patients' subjective experience of their hearing and communication ability (the latter referring to audio-visual perception), (2) to compare DSs from different discrimination tests (auditory/audio-visual perception and without/with noise), and (3) to relate different handicap measures in the scaling used for compensation purposes in Denmark. DESIGN Data from a 15-year period (1999-2014) were collected and analysed. STUDY SAMPLE The data set includes 466 patients, of whom 50 were omitted because of suspected exaggeration of their hearing disabilities. RESULTS The DSs relate well to the patients' subjective experience of their speech perception ability. Comparing DSs for different test setups showed that adding noise entails a relatively more difficult listening condition than removing visual cues. The hearing and communication handicap degrees were found to agree, whereas the measured handicap degrees tended to be higher than the self-assessed handicap degrees. CONCLUSIONS The DSs can be used to assess patients' hearing and communication abilities. The difference in the obtained handicap degrees emphasizes the importance of collecting self-assessed as well as measured handicap degrees.
Affiliation(s)
- Ellen Raben Pedersen
- The Maersk McKinney Moller Institute, University of Southern Denmark, Odense, Denmark
- Peter Møller Juhl
- The Maersk McKinney Moller Institute, University of Southern Denmark, Odense, Denmark
- Randi Wetke
- Department of Audiology, Odense University Hospital, Odense, Denmark; Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
- Ture Dammann Andersen
- Department of Audiology, Odense University Hospital, Odense, Denmark; Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
26
Rakszawski B, Wright R, Cadieux JH, Davidson LS, Brenner C. The Effects of Preprocessing Strategies for Pediatric Cochlear Implant Recipients. J Am Acad Audiol 2016; 27:85-102. [PMID: 26905529] [DOI: 10.3766/jaaa.14058]
Abstract
BACKGROUND Cochlear implants (CIs) have been shown to improve children's speech recognition over traditional amplification when severe-to-profound sensorineural hearing loss is present. Despite improvements, understanding speech at low-level intensities or in the presence of background noise remains difficult. In an effort to improve speech understanding in challenging environments, Cochlear Ltd. offers preprocessing strategies that apply various algorithms before mapping the signal to the internal array. Two of these strategies include Autosensitivity Control™ (ASC) and Adaptive Dynamic Range Optimization (ADRO®). Based on previous research, the manufacturer's default preprocessing strategy for pediatric recipients' everyday programs combines ASC + ADRO®. PURPOSE The purpose of this study is to compare pediatric speech perception performance across various preprocessing strategies while applying a specific programming protocol using increased threshold levels to ensure access to very low-level sounds. RESEARCH DESIGN This was a prospective, cross-sectional, observational study. Participants completed speech perception tasks in four preprocessing conditions: no preprocessing, ADRO®, ASC, and ASC + ADRO®. STUDY SAMPLE Eleven pediatric Cochlear Ltd. CI users were recruited: six bilateral, one unilateral, and four bimodal. INTERVENTION Four programs, with the participants' everyday map, were loaded into the processor with different preprocessing strategies applied in each of the four programs: no preprocessing, ADRO®, ASC, and ASC + ADRO®. DATA COLLECTION AND ANALYSIS Participants repeated consonant-nucleus-consonant (CNC) words presented at 50 and 70 dB SPL in quiet and Hearing in Noise Test (HINT) sentences presented adaptively with competing R-Space™ noise at 60 and 70 dB SPL. Each measure was completed as participants listened with each of the four preprocessing strategies listed above. Test order and conditions were randomized.
A repeated-measures analysis of variance (ANOVA) was used to compare each preprocessing strategy for the group. Critical differences were used to determine significant score differences between each preprocessing strategy for individual participants. RESULTS For CNC words presented at 50 dB SPL, the group data revealed significantly better scores using ASC + ADRO® compared to all other preprocessing conditions, while ASC resulted in poorer scores compared to ADRO® and ASC + ADRO®. Group data for HINT sentences presented in 70 dB SPL of R-Space™ noise revealed significantly improved scores using ASC and ASC + ADRO® compared to no preprocessing, with ASC + ADRO® scores being better than ADRO®-alone scores. Group data for CNC words presented at 70 dB SPL and adaptive HINT sentences presented in 60 dB SPL of R-Space™ noise showed no significant difference among conditions. Individual data showed that the preprocessing strategy yielding the best scores varied across measures and participants. CONCLUSIONS Group data reveal an advantage with ASC + ADRO® for speech perception presented at lower levels and in higher levels of background noise. Individual data revealed that the optimal preprocessing strategy varied among participants, indicating that a variety of preprocessing strategies should be explored for each CI user considering his or her performance in challenging listening environments.
Affiliation(s)
- Bernadette Rakszawski
- Program in Audiology and Communication Sciences, Washington University School of Medicine, St. Louis, MO; St. Louis Children's Hospital, St. Louis, MO
- Rose Wright
- St. Louis Children's Hospital, St. Louis, MO; Central Institute for the Deaf, St. Louis, MO
- Lisa S Davidson
- Program in Audiology and Communication Sciences, Washington University School of Medicine, St. Louis, MO; Department of Otolaryngology, Washington University School of Medicine, St. Louis, MO
- Christine Brenner
- Department of Otolaryngology, Washington University School of Medicine, St. Louis, MO
27
Potts LG, Litovsky RY. Transitioning from bimodal to bilateral cochlear implant listening: speech recognition and localization in four individuals. Am J Audiol 2015; 23:79-92. [PMID: 24018578] [DOI: 10.1044/1059-0889(2013/11-0031)]
Abstract
PURPOSE The use of bilateral stimulation is becoming common for cochlear implant (CI) recipients with either (a) a CI in one ear and a hearing aid (HA) in the nonimplanted ear (CI&HA-bimodal) or (b) CIs in both ears (CI&CI-bilateral). The objective of this study was to evaluate 4 individuals who transitioned from bimodal to bilateral stimulation. METHOD Participants had completed a larger study of bimodal hearing and subsequently received a second CI. Test procedures from the bimodal study, including roaming speech recognition, localization, and a questionnaire (the Speech, Spatial, and Qualities of Hearing Scale; Gatehouse & Noble, 2004) were repeated after 6-7 months of bilateral CI experience. RESULTS Speech recognition and localization were not significantly different between bimodal and unilateral CI. In contrast, performance was significantly better with CI&CI compared with unilateral CI. Speech recognition with CI&CI was significantly better than with CI&HA for 2 of 4 participants. Localization was significantly better for all participants with CI&CI compared with CI&HA. CI&CI performance was rated as significantly better on the Speech, Spatial, and Qualities of Hearing Scale compared with CI&HA. CONCLUSIONS There was a strong preference for CI&CI for all participants. The variability in speech recognition and localization, however, suggests that performance under these stimulus conditions is individualized. Differences in hearing and/or HA history may explain performance differences.
Affiliation(s)
- Lisa G. Potts
- Washington University School of Medicine, St. Louis, MO
28
Silberer AB, Bentler R, Wu YH. The importance of high-frequency audibility with and without visual cues on speech recognition for listeners with normal hearing. Int J Audiol 2015; 54:865-72. [PMID: 26068537] [DOI: 10.3109/14992027.2015.1051666]
Abstract
OBJECTIVE To examine the impact of visual cues, speech materials, age, and listening condition on the frequency bandwidth necessary for optimizing speech recognition performance. DESIGN Using a randomized repeated-measures design, speech recognition performance was assessed with four speech perception tests presented in quiet and in noise under 13 low-pass (LP) filter conditions, with and without visual cues. Participants' performance data were fitted with a Boltzmann function to determine optimal performance (defined as 10% below the performance achieved with the full bandwidth, FBW). STUDY SAMPLE Thirty adults (18-63 years) and thirty children (7-12 years) with normal hearing. RESULTS Visual cues significantly reduced the bandwidth required for optimizing speech recognition performance for listeners. The type of speech material significantly impacted the bandwidth required for optimizing performance. Both groups required significantly less bandwidth in quiet, although children required significantly more than adults. The widest bandwidth required was for the phoneme detection task in noise, where children required a bandwidth of 7399 Hz and adults 6674 Hz. CONCLUSIONS Listeners require significantly less bandwidth for optimizing speech recognition performance when assessed using sentence materials with visual cues. That is, the amount of bandwidth required systematically decreased as a function of increased contextual, linguistic, and visual content.
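The Boltzmann-function fit used above to define "optimal performance" can be sketched as follows. The asymptote, midpoint, and width values are invented for illustration; the study fitted these parameters to each participant's data.

```python
import math

def boltzmann(x, a1, a2, x0, dx):
    # Boltzmann sigmoid: a1 = lower asymptote (%), a2 = upper asymptote (%),
    # x0 = midpoint (Hz), dx = width (Hz).
    return a2 + (a1 - a2) / (1.0 + math.exp((x - x0) / dx))

def bandwidth_for(score, a1, a2, x0, dx):
    # Invert the sigmoid: the low-pass cutoff at which the fitted curve
    # reaches a criterion score (e.g., 10 points below the full-bandwidth
    # performance).
    return x0 + dx * math.log((a1 - a2) / (score - a2) - 1.0)

# Illustrative fit: 20% floor, 95% full-bandwidth score.
a1, a2, x0, dx = 20.0, 95.0, 2000.0, 800.0
criterion = a2 - 10.0          # "optimal performance" criterion
cutoff = bandwidth_for(criterion, a1, a2, x0, dx)
print(f"bandwidth needed ~ {cutoff:.0f} Hz")
```

With these invented parameters the criterion is reached around 3.5 kHz; steeper or shallower fits shift that cutoff, which is how the study compared conditions and groups.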
Affiliation(s)
- Amanda B Silberer
- Department of Communication Sciences and Disorders, The University of Iowa, Iowa City, USA; Department of Communication Sciences and Disorders, Western Illinois University, Macomb, Illinois, USA
- Ruth Bentler
- Department of Communication Sciences and Disorders, The University of Iowa, Iowa City, USA
- Yu-Hsiang Wu
- Department of Communication Sciences and Disorders, The University of Iowa, Iowa City, USA
29
Effects of frequency compression and frequency transposition on fricative and affricate perception in listeners with normal hearing and mild to moderate hearing loss. Ear Hear 2015; 35:519-32. [PMID: 24699702] [DOI: 10.1097/aud.0000000000000040]
Abstract
OBJECTIVES The authors have previously demonstrated that the limited bandwidth associated with conventional hearing aid amplification prevents useful high-frequency speech information from being transmitted. The purpose of this study was to examine the efficacy of two popular frequency-lowering algorithms and one novel algorithm (spectral envelope decimation) in adults with mild to moderate sensorineural hearing loss and in normal-hearing controls. DESIGN Participants listened monaurally through headphones to recordings of nine fricatives and affricates spoken by three women in a vowel-consonant context. Stimuli were mixed with speech-shaped noise at 10 dB SNR and recorded through a Widex Inteo IN-9 and a Phonak Naída UP V behind-the-ear (BTE) hearing aid. Frequency transposition (FT) is used in the Inteo, and nonlinear frequency compression (NFC) is used in the Naída. Both devices were programmed to lower frequencies above 4 kHz, but neither device could lower frequencies above 6 to 7 kHz. Each device was tested under four conditions: frequency lowering deactivated (FT-off and NFC-off), frequency lowering activated (FT and NFC), wideband (WB), and a fourth condition unique to each hearing aid. The WB condition was constructed by mixing recordings from the first condition with high-pass filtered versions of the source stimuli. For the Inteo, the fourth condition consisted of recordings made with the same settings as the first, but with the noise-reduction feature activated (FT-off). For the Naída, the fourth condition was the same as the first condition except that source stimuli were preprocessed by a novel frequency compression algorithm, spectral envelope decimation (SED), designed in MATLAB, which allowed for a more complete lowering of the 4 to 10 kHz input band. A follow-up experiment with NFC used Phonak's Naída SP V BTE, which could also lower a greater range of input frequencies.
RESULTS For normal-hearing and hearing-impaired listeners, performance with FT was significantly worse compared with that in the other conditions. Consistent with previous findings, performance for the hearing-impaired listeners in the WB condition was significantly better than in the FT-off condition. In addition, performance in the SED and WB conditions were both significantly better than in the NFC-off condition and the NFC condition with 6 kHz input bandwidth. There were no significant differences between SED and WB, indicating that improvements in fricative identification obtained by increasing bandwidth can also be obtained using this form of frequency compression. Significant differences between most conditions could be largely attributed to an increase or decrease in confusions for the phonemes /s/ and /z/. In the follow-up experiment, performance in the NFC condition with 10 kHz input bandwidth was significantly better than NFC-off, replicating the results obtained with SED. Furthermore, listeners who performed poorly with NFC-off tended to show the most improvement with NFC. CONCLUSIONS Improvements in the identification of stimuli chosen to be sensitive to the effects of frequency lowering have been demonstrated using two forms of frequency compression (NFC and SED) in individuals with mild to moderate high-frequency sensorineural hearing loss. However, negative results caution against using FT for this population. Results also indicate that the advantage of an extended bandwidth as reported here and elsewhere applies to the input bandwidth for frequency compression (NFC/SED) when the start frequency is ≥4 kHz.
30
Shi LF. How "proficient" is proficient? Bilingual listeners' recognition of English words in noise. Am J Audiol 2015; 24:53-65. [PMID: 25551364] [DOI: 10.1044/2014_aja-14-0041]
Abstract
PURPOSE Shi (2011, 2013) obtained sensitivity/specificity measures of bilingual listeners' English and relative proficiency ratings as predictors of English word recognition in quiet. The current study investigated how relative proficiency predicted word recognition in noise. METHOD Forty-two monolingual and 168 bilingual normal-hearing listeners were included. Bilingual listeners rated their proficiency in listening, speaking, and reading in English and in the other language using an 11-point scale. Listeners were presented with 50 English monosyllabic words in quiet at 45 dB HL and in multitalker babble with a signal-to-noise ratio of +6 and 0 dB. RESULTS Data in quiet confirmed Shi's (2013) finding that relative proficiency with or without dominance predicted well whether bilinguals performed on par with the monolingual norm. Predicting the outcome was difficult for the 2 noise conditions. To identify bilinguals whose performance fell below the normative range, dominance per se or a combination of dominance and average relative proficiency rating yielded the best sensitivity/specificity and summary measures, including Youden's index. CONCLUSION Bilinguals' word recognition is more difficult to predict in noise than in quiet; however, proficiency and dominance variables can predict reasonably well whether bilinguals may perform at a monolingual normative level.
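Youden's index, mentioned above, is a standard one-number summary of a cutoff's predictive value: J = sensitivity + specificity − 1. A minimal sketch, with invented ratings rather than the study's data:

```python
def cutoff_performance(ratings, below_norm, cutoff):
    # Treat "self-rating < cutoff" as a flag for below-normative word
    # recognition; return sensitivity, specificity, and Youden's index
    # (J = sensitivity + specificity - 1).
    tp = sum(1 for r, b in zip(ratings, below_norm) if r < cutoff and b)
    fn = sum(1 for r, b in zip(ratings, below_norm) if r >= cutoff and b)
    tn = sum(1 for r, b in zip(ratings, below_norm) if r >= cutoff and not b)
    fp = sum(1 for r, b in zip(ratings, below_norm) if r < cutoff and not b)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec, sens + spec - 1.0

# Invented example: 0-10 self-ratings and whether each listener scored
# below the monolingual normative range.
ratings = [9, 10, 8, 5, 4, 7]
below = [False, False, False, True, True, False]
print(cutoff_performance(ratings, below, cutoff=8))  # → (1.0, 0.75, 0.75)
```

Sweeping the cutoff and keeping the one with the largest J is how a "best combination of sensitivity/specificity" is typically chosen.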
31
Han HJ, Schlauch RS, Rao A. The effect of visual cues on scoring of clinical word-recognition tests. Am J Audiol 2014; 23:385-93. [PMID: 25166267] [DOI: 10.1044/2014_aja-14-0024]
Abstract
PURPOSE During routine clinical speech assessment, if the person being tested were to write down what he or she heard, it would not always match what the audiologist heard while scoring the listener's vocal responses (Nelson & Chaiklin, 1970). This study demonstrated a method to assess examiner accuracy and whether speechreading cues reduce writedown-talkback errors. METHOD Examiners were divided into 3 categories: normal-hearing native speakers of English, normal-hearing nonnative speakers of English, and native speakers with hearing loss. Each examiner assessed 4 normal-hearing listeners. Two NU-6 lists were presented to each listener; one was scored without visual cues and one with visual cues. Lists were presented at 50 dB HL in the presence of speech noise at 0 dB signal-to-noise ratio (SNR). RESULTS Results analyzed by percentage of correct phonemes and words revealed fewer writedown-talkback discrepancies for all 3 examiner groups when visual cues were added, with a substantial improvement for examiners with hearing loss. CONCLUSION The finding of discrepancies between talkback and writedown scoring for all of the examiners, even with visual cues, suggests a need for modification of the clinical word-recognition procedure for applications that potentially affect diagnosis, rehabilitation choices, or financial compensation.
32
Dorman MF, Cook S, Spahr A, Zhang T, Loiselle L, Schramm D, Whittingham J, Gifford R. Factors constraining the benefit to speech understanding of combining information from low-frequency hearing and a cochlear implant. Hear Res 2014; 322:107-11. [PMID: 25285624] [DOI: 10.1016/j.heares.2014.09.010]
Abstract
Many studies have documented the benefits to speech understanding when cochlear implant (CI) patients can access low-frequency acoustic information from the ear opposite the implant. In this study we assessed the role of three factors in determining the magnitude of bimodal benefit: (i) the level of CI-only performance, (ii) the magnitude of the hearing loss in the ear with low-frequency acoustic hearing, and (iii) the type of test material. The patients had low-frequency PTAs (average of 125, 250 and 500 Hz) varying over a large range (<30 dB HL to >70 dB HL) in the ear contralateral to the implant. The patients were tested with (i) CNC words presented in quiet (n = 105), (ii) AzBio sentences presented in quiet (n = 102), (iii) AzBio sentences in noise at +10 dB signal-to-noise ratio (SNR) (n = 69), and (iv) AzBio sentences at +5 dB SNR (n = 64). We find maximum bimodal benefit when (i) CI scores are less than 60 percent correct, (ii) hearing loss is less than 60 dB HL in the low frequencies, and (iii) the test material is sentences presented against a noise background. When these criteria are met, some bimodal patients can gain 40-60 percentage points in performance relative to performance with a CI alone. This article is part of a Special Issue entitled "Lasker Award".
Affiliation(s)
- Michael F Dorman
- Arizona State University, Department of Speech and Hearing Science, Tempe, AZ 85287, USA
- Sarah Cook
- Arizona State University, Department of Speech and Hearing Science, Tempe, AZ 85287, USA
- Anthony Spahr
- Advanced Bionics, 28515 Westinghouse Pl, Valencia, CA 91355, USA
- Ting Zhang
- Arizona State University, Department of Speech and Hearing Science, Tempe, AZ 85287, USA
- Louise Loiselle
- Arizona State University, Department of Speech and Hearing Science, Tempe, AZ 85287, USA
- David Schramm
- University of Ottawa Faculty of Medicine, 451 Smyth Rd, Ottawa, Ontario, Canada K1H 8M5
- JoAnne Whittingham
- University of Ottawa Faculty of Medicine, 451 Smyth Rd, Ottawa, Ontario, Canada K1H 8M5
- Rene Gifford
- Vanderbilt University, Department of Hearing and Speech Sciences, Nashville, TN 37232, USA
33
Schlauch RS, Anderson ES, Micheyl C. A demonstration of improved precision of word recognition scores. J Speech Lang Hear Res 2014; 57:543-555. [PMID: 24686502] [DOI: 10.1044/2014_jslhr-h-13-0017]
Abstract
PURPOSE The purpose of this study was to demonstrate improved precision of word recognition scores (WRSs) by increasing list length and analyzing phonemic errors. METHOD Pure-tone thresholds (frequencies between 0.25 and 8.0 kHz) and WRSs were measured in 3 levels of speech-shaped noise (50, 52, and 54 dB HL) for 24 listeners with normal hearing. WRSs were obtained for half-lists and full lists of Northwestern University Test No. 6 (Tillman & Carhart, 1966) words presented at 48 dB HL. A resampling procedure was used to derive dimensionless effect sizes for identifying a change in hearing using the data. This allowed the direct comparison of the magnitude of shifts in WRS (%) and in the average pure-tone threshold (dB), which provided a context for interpreting the WRS. RESULTS WRSs based on a 50-word list analyzed by the percentage of correct phonemes were significantly more sensitive for identifying a change in hearing than the WRSs based on 25-word lists analyzed by percentage of correct words. CONCLUSION Increasing the number of items that contribute to a WRS significantly increased the test's ability to identify a change in hearing. Clinical and research applications could potentially benefit from a more precise word recognition test, the only basic audiologic measure that estimates directly the distortion component of hearing loss and its effect on communication.
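The precision gain from longer lists that this abstract demonstrates follows from the binomial standard error of a proportion-correct score. A sketch (the 70% true score is an arbitrary example, and phoneme responses within a word are not fully independent, so the 150-item value is an optimistic bound):

```python
import math

def score_se(p, n_items):
    # Binomial standard error of a recognition score, in percentage
    # points: p = true proportion correct, n_items = scored items.
    return 100.0 * math.sqrt(p * (1.0 - p) / n_items)

# 25-word half list vs. 50-word full list vs. ~150 scored phonemes
# (3 phonemes per NU-6 word).
for n in (25, 50, 150):
    print(n, round(score_se(0.7, n), 1))
```

Doubling the list from 25 to 50 words shrinks the standard error by a factor of √2, and phoneme scoring shrinks it further, which is why the 50-word phoneme-scored condition was the most sensitive to a change in hearing.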
34
Gelfand JT, Christie RE, Gelfand SA. Large-corpus phoneme and word recognition and the generality of lexical context in CVC word perception. J Speech Lang Hear Res 2014; 57:297-307. [PMID: 24687475] [DOI: 10.1044/1092-4388(2013/12-0183)]
Abstract
PURPOSE Speech recognition may be analyzed in terms of recognition probabilities for perceptual wholes (e.g., words) and parts (e.g., phonemes), where j or the j-factor reveals the number of independent perceptual units required for recognition of the whole (Boothroyd, 1968b; Boothroyd & Nittrouer, 1988; Nittrouer & Boothroyd, 1990). For consonant-vowel-consonant (CVC) nonsense syllables, j ∼ 3 because all 3 phonemes are needed to identify the syllable, but j ∼ 2.5 for real-word CVCs (revealing ∼2.5 independent perceptual units) because higher level contributions such as lexical knowledge enable word recognition even if less than 3 phonemes are accurately received. These findings were almost exclusively determined with the 120-word corpus of the isophonemic word lists (Boothroyd, 1968a; Boothroyd & Nittrouer, 1988), presented one word at a time. It is therefore possible that its generality or applicability may be limited. This study thus determined j by using a much larger and less restricted corpus of real-word CVCs presented in 3-word groups as well as whether j is influenced by test size. METHOD The j-factor for real-word CVCs was derived from the recognition performance of 223 individuals with a broad range of hearing sensitivity by using the Tri-Word Test (Gelfand, 1998), which involves 50 three-word presentations and a corpus of 450 words. The influence of test size was determined from a subsample of 96 participants with separate scores for the first 10, 20, and 25 (and all 50) presentation sets of the full test. RESULTS The mean value of j was 2.48 with a 95% confidence interval of 2.44-2.53, which is in good agreement with values obtained with isophonemic word lists, although its value varies among individuals. A significant correlation was found between percent-correct scores and j, but it was small and accounted for only 12.4% of the variance in j for phoneme scores ≥60%. 
Mean j-factors for the 10-, 20-, 25-, and 50-set test sizes were between 2.49 and 2.53 and were not significantly different from one another. CONCLUSIONS The j-factor based on a 450-word corpus and tri-word testing confirms and expands on findings from single-word presentations of isophonemic lists and a 120-word corpus. This enhances the generality (external validity) of the notions that j ∼ 2.5 for real-word CVCs, and lexical knowledge enables CVC word recognition based on ∼2.5 independent perceptual units. The robust nature of isophonemic word test outcomes is confirmed by close agreement with those provided by the high-reliability Tri-Word Test. Percent-correct performance was correlated with j but appeared to account for less than 13% of j-factor variance for most scores likely to be encountered in practice. Variability in the size of j suggests individual differences in the ability to take advantage of lexical knowledge in word recognition. The j-factor may be useful to inform rehabilitation needs, intervention content, and outcome assessment, as well as for other clinical applications.
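The j-factor relation underlying this abstract is p_whole = p_part^j (Boothroyd & Nittrouer, 1988), so j can be computed directly from paired word and phoneme scores. A minimal sketch with illustrative scores, not data from the study:

```python
import math

def j_factor(p_whole, p_part):
    # Number of independent perceptual units:
    # p_whole = p_part ** j  =>  j = ln(p_whole) / ln(p_part).
    return math.log(p_whole) / math.log(p_part)

# Nonsense CVCs: the word score equals the phoneme score cubed, so j = 3.
print(round(j_factor(0.8 ** 3, 0.8), 2))   # → 3.0
# Real-word CVCs: lexical context raises the word score above the cube
# of the phoneme score, lowering j toward ~2.5.
print(round(j_factor(0.57, 0.80), 2))      # → 2.52
```

Because j falls as the word score rises relative to the phoneme score, a lower j for a given listener suggests greater use of lexical knowledge, which is the individual-differences point made in the conclusions.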
35
Cochlear implantation in nontraditional candidates: preliminary results in adolescents with asymmetric hearing loss. Otol Neurotol 2013; 34:408-15. [PMID: 23222962] [DOI: 10.1097/mao.0b013e31827850b8]
Abstract
OBJECTIVE Traditionally, children are cochlear implant (CI) candidates if bilateral severe to profound hearing loss is present and amplification benefit is limited. The current study investigated abilities of adolescents with asymmetric hearing loss (one ear with severe to profound hearing loss and better hearing contralaterally), where the poorer ear received a CI and the better ear maintained amplification. STUDY DESIGN Within-subject case study. SETTING Pediatric hospital, outpatient clinic. PATIENTS Participants were 5 adolescents who had not met traditional CI candidacy because of one better hearing ear but did have 1 ear that met criteria and was implanted. All maintained hearing aid (HA) use in the contralateral ear. In the poorer ear, before implant, 3 participants had used amplification, and the other 2 had no HA experience. MAIN OUTCOME MEASURE Participants were assessed in 3 listening conditions: HA alone, CI alone, and both devices together (bimodal) for speech recognition in quiet and noise and sound localization. RESULTS Three participants had CI open-set speech recognition and significant bimodal improvement for speech recognition and localization compared with the HA or CI alone. Two participants had no CI speech recognition and limited bimodal improvement. CONCLUSION Some adolescents with asymmetric hearing loss who are not typical CI candidates can benefit from a CI in the poorer ear, compared with a HA in the better ear alone. Additional study is needed to determine outcomes for this population, especially those who have early onset profound hearing loss in one ear and limited HA experience.
36
Shi LF. How "Proficient" Is Proficient? Comparison of English and Relative Proficiency Rating as a Predictor of Bilingual Listeners' Word Recognition. Am J Audiol 2013; 22:40-52. [DOI: 10.1044/1059-0889(2012/12-0029)]
Abstract
Purpose
The current study attempted to validate that English proficiency self-ratings predict bilinguals' recognition of English words as reported in Shi (2011) and to explore whether relative proficiency ratings (English vs. first language) improve prediction.
Method
One hundred and twenty-four participants in Shi (2011) and an additional set of 145 participants were included (Groups 1 and 2, respectively) in this study. All listeners rated their proficiency in listening, speaking, and reading (English and first language) on an 11-point scale and listened to a list of words from the Northwestern University Auditory Tests No. 6 (Tillman & Carhart, 1966) at 45 dB HL in quiet.
Results
English proficiency ratings by Group 2 yielded sensitivity/specificity values comparable to those of Group 1 (Shi, 2011) in predicting word recognition. A cutoff of 8 or 9 in minimum English proficiency rating across listening, speaking, and reading resulted in the best combination of prediction sensitivity/specificity. When relative proficiency was used, prediction of Group 1 performance significantly improved as compared to English proficiency. Improvement was slight for Group 2, mainly due to low specificity.
Conclusion
Self-rated English proficiency provides clinically acceptable sensitivity/specificity values as a predictor of bilinguals' English word recognition. Relative proficiency has the potential to further improve predictive power, but the size of improvement depends on the characteristics of the test population.
37
Abstract
Purpose
American Spanish dialects have substantial phonetic and lexical differences. This study investigated how dialectal differences affect Spanish/English bilingual individuals' performance on a clinical Spanish word recognition test.
Method
Forty Spanish/English bilinguals participated in the study: 20 dominant in Spanish and 20 in English. Within each group, 10 listeners spoke the Highland dialect, and 10 spoke the Caribbean/Coastal dialect. Participants were maximally matched between the 2 dialectal groups regarding their demographic and linguistic background. Listeners were randomly presented with 4 lists of Auditec Spanish bisyllabic words at 40 dB SL re: pure-tone average. Each list was randomly assigned to one of four listening conditions: quiet or a signal-to-noise ratio (SNR) of +6, +3, or 0 dB in the presence of speech-spectrum noise. Listeners responded orally and in writing.
Results
Dialect and language dominance both significantly affected listener performance on the word recognition test. Higher performance levels were obtained with Highland than Caribbean/Coastal listeners and with Spanish-dominant than English-dominant listeners. The dialectal difference was particularly evident in favorable listening conditions (i.e., quiet and +6 dB SNR) and could not be explained by listeners' familiarity with the test words.
Conclusion
Dialects significantly affect the clinical assessment of Spanish-speaking clients' word recognition. Clinicians are advised to consider the phonetic features of the dialect when scoring a client's performance.
38
Holden LK, Brenner C, Reeder RM, Firszt JB. Postlingual adult performance in noise with HiRes 120 and ClearVoice Low, Medium, and High. Cochlear Implants Int 2013; 14:276-86. [PMID: 23683298] [DOI: 10.1179/1754762813y.0000000034]
Abstract
OBJECTIVES The study's objectives were to evaluate speech recognition in multiple listening conditions using several noise types with HiRes 120 and ClearVoice (Low, Medium, High) and to determine which ClearVoice program was most beneficial for everyday use. METHODS Fifteen postlingual adults attended four sessions; speech recognition was assessed at sessions 1 and 3 with HiRes 120 and at sessions 2 and 4 with all ClearVoice programs. Test measures included sentences presented in restaurant noise (R-SPACE), in speech-spectrum noise, in four- and eight-talker babble, and connected discourse presented in 12-talker babble. Participants completed a questionnaire comparing ClearVoice programs. RESULTS Significant group differences in performance between HiRes 120 and ClearVoice were present only in the R-SPACE; performance was better with ClearVoice High than HiRes 120. Among ClearVoice programs, no significant group differences were present for any measure. Individual results revealed most participants performed better in the R-SPACE with ClearVoice than HiRes 120. For other measures, significant individual differences between HiRes 120 and ClearVoice were not prevalent. Individual results among ClearVoice programs differed and overall preferences varied. Questionnaire data indicated increased understanding with High and Medium in certain environments. DISCUSSION R-SPACE and questionnaire results indicated an advantage for ClearVoice High and Medium. Individual test and preference data showed mixed results between ClearVoice programs making global recommendations difficult; however, results suggest providing ClearVoice High and Medium and HiRes 120 as processor options for adults willing to change settings. For adults unwilling or unable to change settings, ClearVoice Medium is a practical choice for daily listening.
Affiliation(s)
- Laura K Holden
- Washington University School of Medicine in St Louis, St Louis, MO, USA
|
39
|
Botros A, Banna R, Maruthurkkara S. The next generation of Nucleus® fitting: a multiplatform approach towards universal cochlear implant management. Int J Audiol 2013; 52:485-94. [PMID: 23617610] [PMCID: PMC3696341] [DOI: 10.3109/14992027.2013.781277]
Abstract
OBJECTIVE This article provides a detailed description and evaluation of the next-generation Nucleus® cochlear implant fitting suite. A new fitting methodology is presented that, at its simplest level, requires a single volume adjustment, and at its advanced level, provides access to 22-channel fitting. It is implemented on multiple platforms, including a mobile platform (Remote Assistant Fitting) and an accessible PC application (Nucleus Fitting Software). Additional tools for home care and surgical care are also described. DESIGN Two trials were conducted, comparing the fitting methodology with the existing Custom Sound™ methodology, as fitted by the recipient and by an experienced cochlear implant audiologist. STUDY SAMPLE Thirty-seven subjects participated in the trials. RESULTS No statistically significant differences were observed between the group mean scores, whether fitted by the recipient or by an experienced audiologist. The lower bounds of the 95% confidence intervals of the differences represented clinically insignificant differences. No statistically significant differences were found in the subjective program preferences of the subjects. CONCLUSIONS Equivalent speech perception outcomes were demonstrated when compared to current best practice. As such, the new technology has the potential to expand the capacity of audiological care without compromising efficacy.
|
40
|
Auditory abilities after cochlear implantation in adults with unilateral deafness: a pilot study. Otol Neurotol 2013; 33:1339-46. [PMID: 22935813] [DOI: 10.1097/mao.0b013e318268d52d]
Abstract
OBJECTIVE This pilot study examined speech recognition, localization, temporal and spectral discrimination, and subjective reports of cochlear implant (CI) recipients with unilateral deafness. STUDY DESIGN Three adult male participants with short-term unilateral deafness (<5 yr) participated. All had sudden onset of severe-to-profound hearing loss in 1 ear, which then received a CI, and normal or near normal hearing in the other ear. Speech recognition in quiet and noise, localization, discrimination of temporal and spectral cues, and a subjective questionnaire were obtained over several days. Listening conditions were CI, normal hearing (NH) ear, and bilaterally (CI and NH). RESULTS All participants had open-set speech recognition and excellent audibility (250-6,000 Hz) with the CI. Localization improved bilaterally compared with the NH ear alone. Word recognition in noise was significantly better bilaterally than with the NH ear for 2 participants. Sentence recognition in various noise conditions did not show significant bilateral improvement; however, the CI did not hinder performance in noise even when noise was toward the CI side. The addition of the CI improved temporal difference discrimination for 2 participants and spectral difference discrimination for all participants. Participants wore the CI full time, and subjective reports were positive. CONCLUSION Overall, the CI recipients with unilateral deafness obtained open-set speech recognition, improved localization, improved word recognition in noise, and improved perception of their ability to hear in everyday life. A larger study is warranted to further quantify the benefits and limitations of cochlear implantation in individuals with unilateral deafness.
|
41
|
Abstract
OBJECTIVE Bilateral severe to profound sensorineural hearing loss is a standard criterion for cochlear implantation. Increasingly, patients are implanted in one ear and continue to use a hearing aid in the nonimplanted ear to improve abilities such as sound localization and speech understanding in noise. Patients with severe to profound hearing loss in one ear and a more moderate hearing loss in the other ear (i.e., asymmetric hearing) are not typically considered candidates for cochlear implantation. Amplification in the poorer ear is often unsuccessful because of limited benefit, restricting the patient to unilateral listening from the better ear alone. The purpose of this study was to determine whether patients with asymmetric hearing loss could benefit from cochlear implantation in the poorer ear with continued use of a hearing aid in the better ear. DESIGN Ten adults with asymmetric hearing between ears participated. In the poorer ear, all participants met cochlear implant candidacy guidelines; seven had postlingual onset, and three had pre/perilingual onset of severe to profound hearing loss. All had open-set speech recognition in the better-hearing ear. Assessment measures included word and sentence recognition in quiet, sentence recognition in fixed noise (four-talker babble) and in diffuse restaurant noise using an adaptive procedure, localization of word stimuli, and a hearing handicap scale. Participants were evaluated preimplant with hearing aids and postimplant with the implant alone, the hearing aid alone in the better ear, and bimodally (the implant and hearing aid in combination). Postlingual participants were evaluated at 6 mo postimplant, and pre/perilingual participants were evaluated at 6 and 12 mo postimplant. 
Data analysis compared the following results: (1) the poorer-hearing ear preimplant (with hearing aid) and postimplant (with cochlear implant); (2) the device(s) used for everyday listening pre- and postimplant; and (3) the hearing aid-alone and bimodal listening conditions postimplant. RESULTS The postlingual participants showed significant improvements in speech recognition after 6 mo of cochlear implant use in the poorer ear. Five postlingual participants had a bimodal advantage over the hearing aid-alone condition on at least one test measure. On average, the postlingual participants had significantly improved localization with bimodal input compared with the hearing aid alone. Only one pre/perilingual participant had open-set speech recognition with the cochlear implant. This participant had better hearing than the other two pre/perilingual participants in both the poorer and better ear. Localization abilities were not significantly different between the bimodal and hearing aid-alone conditions for the pre/perilingual participants. Mean hearing handicap ratings improved postimplant for all participants, indicating perceived benefit in everyday life with the addition of the cochlear implant. CONCLUSIONS Patients with asymmetric hearing loss who are not typical cochlear implant candidates can benefit from using a cochlear implant in the poorer ear with continued use of a hearing aid in the better ear. For this group of 10, the 7 postlingually deafened participants showed greater benefits with the cochlear implant than the pre/perilingual participants; however, further study is needed to determine maximum benefit for those with early onset of hearing loss.
|
42
|
Reiss LAJ, Perreau AE, Turner CW. Effects of lower frequency-to-electrode allocations on speech and pitch perception with the hybrid short-electrode cochlear implant. Audiol Neurootol 2012; 17:357-72. [PMID: 22907151] [PMCID: PMC3519932] [DOI: 10.1159/000341165]
Abstract
Because some users of a Hybrid short-electrode cochlear implant (CI) lose their low-frequency residual hearing after receiving the CI, we tested whether increasing the CI speech processor frequency allocation range to include lower frequencies improves speech perception in these individuals. A secondary goal was to see if pitch perception changed after experience with the new CI frequency allocation. Three subjects who had lost all residual hearing in the implanted ear were recruited to use an experimental CI frequency allocation with a lower frequency cutoff than their current clinical frequency allocation. Speech and pitch perception results were collected at multiple time points throughout the study. In general, subjects showed little or no improvement for speech recognition with the experimental allocation when the CI was worn with a hearing aid in the contralateral ear. However, all 3 subjects showed changes in pitch perception that followed the changes in frequency allocations over time, consistent with previous studies showing that pitch perception changes upon provision of a CI.
Affiliation(s)
- Lina A J Reiss
- Oregon Health and Science University, Portland, OR 97239, USA.
|
43
|
Baudhuin J, Cadieux J, Firszt JB, Reeder RM, Maxson JL. Optimization of programming parameters in children with the Advanced Bionics cochlear implant. J Am Acad Audiol 2012; 23:302-12. [PMID: 22533974] [DOI: 10.3766/jaaa.23.5.2]
Abstract
BACKGROUND Cochlear implants provide access to soft intensity sounds and therefore improved audibility for children with severe-to-profound hearing loss. Speech processor programming parameters, such as threshold (or T-level), input dynamic range (IDR), and microphone sensitivity, contribute to the recipient's program and influence audibility. When soundfield thresholds obtained through the speech processor are elevated, programming parameters can be modified to improve soft sound detection. Adult recipients show improved detection for low-level sounds when T-levels are set at raised levels and show better speech understanding in quiet when wider IDRs are used. Little is known about the effects of parameter settings on detection and speech recognition in children using today's cochlear implant technology. PURPOSE The overall study aim was to assess optimal T-level, IDR, and sensitivity settings in pediatric recipients of the Advanced Bionics cochlear implant. RESEARCH DESIGN Two experiments were conducted. Experiment 1 examined the effects of two T-level settings on soundfield thresholds and detection of the Ling 6 sounds. One program set T-levels at 10% of most comfortable levels (M-levels) and another at 10 current units (CUs) below the level judged as "soft." Experiment 2 examined the effects of IDR and sensitivity settings on speech recognition in quiet and noise. STUDY SAMPLE Participants were 11 children 7-17 yr of age (mean 11.3) implanted with the Advanced Bionics High Resolution 90K or CII cochlear implant system who had speech recognition scores of 20% or greater on a monosyllabic word test. DATA COLLECTION AND ANALYSIS Two T-level programs were compared for detection of the Ling sounds and frequency modulated (FM) tones. 
Differing IDR/sensitivity programs (50/0, 50/10, 70/0, 70/10) were compared using Ling and FM tone detection thresholds, CNC (consonant-vowel nucleus-consonant) words at 50 dB SPL, and Hearing in Noise Test for Children (HINT-C) sentences at 65 dB SPL in the presence of four-talker babble (+8 dB signal-to-noise ratio). Outcomes were analyzed using a paired t-test and a mixed-model repeated measures analysis of variance (ANOVA). RESULTS T-levels set 10 CUs below "soft" resulted in significantly lower detection thresholds for all six Ling sounds and FM tones at 250, 1000, 3000, 4000, and 6000 Hz. When comparing programs differing by IDR and sensitivity, a 50 dB IDR with a 0 sensitivity setting showed significantly poorer thresholds for low-frequency FM tones and voiced Ling sounds. Analysis of group mean scores for CNC words in quiet or HINT-C sentences in noise indicated no significant differences across IDR/sensitivity settings. Individual data, however, showed significant differences between IDR/sensitivity programs in noise; the optimal program differed across participants. CONCLUSIONS In pediatric recipients of the Advanced Bionics cochlear implant device, manually setting T-levels with ascending loudness judgments should be considered when possible or when low-level sounds are inaudible. Study findings confirm the need to determine program settings on an individual basis as well as the importance of speech recognition verification measures in both quiet and noise. Clinical guidelines are suggested for selection of programming parameters in both young and older children.
Affiliation(s)
- Jacquelyn Baudhuin
- TL1 Multidisciplinary Clinical Research Program, Washington University in St. Louis School of Medicine, St. Louis, MO 63110, USA
|
44
|
Gelfand SA, Gelfand JT. Psychometric functions for shortened administrations of a speech recognition approach using tri-word presentations and phonemic scoring. J Speech Lang Hear Res 2012; 55:879-891. [PMID: 22337493] [DOI: 10.1044/1092-4388(2011/11-0123)]
Abstract
METHOD Complete psychometric functions for phoneme and word recognition scores at 8 signal-to-noise ratios from -15 dB to 20 dB were generated for the first 10, 20, and 25, as well as all 50, three-word presentations of the Tri-Word or Computer Assisted Speech Recognition Assessment (CASRA) Test (Gelfand, 1998) based on the results of 12 normal-hearing young adult participants from the original study. RESULTS The psychometric functions for both phoneme and word scores were very similar and essentially overlapping for all set sizes. Performance on the shortened tests accounted for 98.8% to 99.5% of the full (50-set) test variance with phoneme scoring, and 95.8% to 99.2% of the full test variance with word scoring. Shortening the tests accounted for little if any of the variance in the slopes of the functions. CONCLUSIONS The psychometric functions for abbreviated versions of the Tri-Word speech recognition test using 10, 20, and 25 presentation sets were described and are comparable to those of the original 50-presentation approach for both phoneme and word scoring in healthy, normal-hearing, young adult participants.
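The comparison described above lends itself to a small simulation: scores from a shortened administration can be checked against the full 50-set test across the same -15 to 20 dB SNR range. Below is a minimal sketch, assuming an illustrative logistic psychometric function (the midpoint, slope, and random seed are arbitrary stand-ins, not values fitted to the study's data):

```python
import math
import random

def p_correct(snr_db, midpoint=0.0, slope=0.4):
    """Logistic psychometric function: probability of a correct
    response at a given SNR (midpoint and slope are illustrative)."""
    return 1.0 / (1.0 + math.exp(-slope * (snr_db - midpoint)))

def simulated_scores(n_sets, rng, snrs=range(-15, 21, 5), words_per_set=3):
    """Proportion-correct score at each SNR from n_sets tri-word sets."""
    scores = []
    for snr in snrs:
        p = p_correct(snr)
        n_words = n_sets * words_per_set
        correct = sum(rng.random() < p for _ in range(n_words))
        scores.append(correct / n_words)
    return scores

rng = random.Random(7)
full = simulated_scores(50, rng)    # all 50 presentation sets
short = simulated_scores(10, rng)   # shortened 10-set administration

# Variance in the full-test scores accounted for by the shortened test
mean_full = sum(full) / len(full)
ss_tot = sum((f - mean_full) ** 2 for f in full)
ss_res = sum((f - s) ** 2 for f, s in zip(full, short))
r_squared = 1 - ss_res / ss_tot
```

Even with only 10 tri-word sets (30 scored words per SNR), the shortened scores track the full-test function closely, which is the pattern the abstract reports.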
|
45
|
Shi LF, Morozova N. Understanding native Russian listeners’ errors on an English word recognition test: Model-based analysis of phoneme confusion. Int J Audiol 2012; 51:597-605. [DOI: 10.3109/14992027.2012.680075]
|
46
|
Shi LF. How “Proficient” Is Proficient? Subjective Proficiency as a Predictor of Bilingual Listeners’ Recognition of English Words. Am J Audiol 2011; 20:19-32. [DOI: 10.1044/1059-0889(2011/10-0013)]
Abstract
Purpose
English proficiency must be considered when a bilingual individual is to be evaluated clinically with English speech material. This study describes the minimum level of self-reported English proficiency that identifies bilingual individuals who may perform on par with monolingual listeners on an English word recognition test.
Method
A total of 125 normal hearing bilingual listeners rated their English proficiency in listening, speaking, and reading on an 11-point scale. Other related linguistic variables were also obtained. A randomly selected Northwestern University Auditory Test No. 6 (NU-6) list (50 English monosyllabic words) was presented to all participants at 45 dB HL in quiet.
Results
Over 90% of the listeners rated themselves as having at least “good” proficiency in English listening, speaking, or reading. Of these participants, more than 30% did not achieve a monolingual normative level in English as delimited by the binomial distribution. Composite proficiency ratings across language domains better predicted word recognition performance than self-ratings for listening proficiency only. Combining language dominance and age of English acquisition with proficiency ratings further improved prediction specificity.
Conclusions
Self-rated English proficiency can predict bilingual listeners’ performance on the NU-6 test. For desirable sensitivity and specificity in predicting monolingual-like performance, a minimum rating of 8 out of 10 across all language domains is recommended.
Affiliation(s)
- Lu-Feng Shi
- Long Island University—Brooklyn Campus, Brooklyn, NY
|
47
|
Schlauch RS, Carney E. Are false-positive rates leading to an overestimation of noise-induced hearing loss? J Speech Lang Hear Res 2011; 54:679-692. [PMID: 20844255] [DOI: 10.1044/1092-4388(2010/09-0132)]
Abstract
PURPOSE To estimate false-positive rates for rules proposed to identify early noise-induced hearing loss (NIHL) using the presence of notches in audiograms. METHOD Audiograms collected from school-age children in a national survey of health and nutrition (the Third National Health and Nutrition Examination Survey [NHANES III]; National Center for Health Statistics, 1994) were examined using published rules for identifying noise notches at various pass-fail criteria. These results were compared with computer-simulated "flat" audiograms. The proportion of these identified as having a noise notch is an estimate of the false-positive rate for a particular rule. RESULTS Audiograms from the NHANES III for children 6-11 years of age yielded notched audiograms at rates consistent with simulations, suggesting that this group does not have significant NIHL. Further, pass-fail criteria for rules suggested by expert clinicians, applied to NHANES III audiometric data, yielded unacceptably high false-positive rates. CONCLUSIONS Computer simulations provide an effective method for estimating false-positive rates for protocols used to identify notched audiograms. Audiometric precision could possibly be improved by (a) eliminating systematic calibration errors, including a possible problem with reference levels for TDH-style earphones; (b) repeating and averaging threshold measurements; and (c) using earphones that yield lower variability for 6.0 and 8.0 kHz, two frequencies critical for identifying noise notches.
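The simulation strategy the abstract describes can be sketched in a few lines: generate "flat" audiograms whose only structure is measurement error, apply a notch rule, and count the flags. A minimal sketch follows; the notch rule, the 5-dB test-retest standard deviation, and the frequency set here are hypothetical stand-ins for illustration, not the published rules the paper evaluated:

```python
import random

FREQS = [500, 1000, 2000, 3000, 4000, 6000, 8000]  # Hz

def simulate_flat_audiogram(rng, true_level=5.0, sd=5.0):
    """One 'flat' audiogram: a single true threshold plus Gaussian
    measurement error, rounded to the 5-dB steps used clinically."""
    return {f: 5 * round((true_level + rng.gauss(0.0, sd)) / 5) for f in FREQS}

def has_notch(audiogram, depth=10):
    """Illustrative notch rule (hypothetical; not the exact criteria
    evaluated in the paper): a threshold at 3, 4, or 6 kHz at least
    `depth` dB poorer than at both 1 kHz and 8 kHz."""
    return any(
        audiogram[f] - audiogram[1000] >= depth
        and audiogram[f] - audiogram[8000] >= depth
        for f in (3000, 4000, 6000)
    )

def false_positive_rate(n=20_000, seed=1):
    """Proportion of truly flat audiograms flagged as notched."""
    rng = random.Random(seed)
    return sum(has_notch(simulate_flat_audiogram(rng)) for _ in range(n)) / n
```

Because every simulated audiogram is flat by construction, any flag is a false positive, so the returned proportion directly estimates the rule's false-positive rate, which is nonzero here purely from measurement noise.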
|
48
|
Shi LF, Sánchez D. Spanish/English bilingual listeners on clinical word recognition tests: what to expect and how to predict. J Speech Lang Hear Res 2010; 53:1096-1110. [PMID: 20689035] [DOI: 10.1044/1092-4388(2010/09-0199)]
Abstract
PURPOSE The current study was an attempt to provide initial evidence on how to predict the optimal language in which to conduct speech perception testing for Spanish/English (S/E) bilingual listeners. METHOD Thirty normal-hearing S/E listeners differing in age of language acquisition, length of immersion, daily language use, self-rated listening proficiency, and language dominance were evaluated on the English and Spanish word recognition tests in quiet and in speech-spectrum noise. RESULTS Performance on the English and Spanish tests was not correlated for any conditions. English word recognition was most significantly correlated with age of English acquisition. Logistic regression analyses further demonstrated age of English acquisition to be a good predictor of listeners' relative success on the 2 tests in quiet and at +6 dB signal-to-noise ratio (SNR). At 0 dB SNR, language dominance had the highest predictive specificity, whereas the combination of age of English acquisition and Spanish listening proficiency had the highest sensitivity. CONCLUSIONS A Spanish word recognition test would likely yield more favorable results for S/E bilingual listeners who were Spanish-dominant or who acquired English at 10 years of age or older. It may be necessary for listeners who acquired English at 7-10 years of age to be evaluated in both English and Spanish.
Affiliation(s)
- Lu-Feng Shi
- Department of Communication Sciences and Disorders, Long Island University-Brooklyn Campus, One University Plaza, Brooklyn, NY 11201, USA.
|
49
|
Oleson JJ. Bayesian credible intervals for binomial proportions in a single patient trial. Stat Methods Med Res 2010; 19:559-74. [PMID: 20181779] [DOI: 10.1177/0962280209349008]
Abstract
Practitioners often ask whether a treatment successfully improved performance. Many times this question is directed towards the outcome of a single individual. In this article, we develop a method to assess the improvement of a single individual who is administered a test of percent correct at pre-treatment and post-treatment. A Bayesian approach is taken where the number correct is modelled as a binomial random variable and the percent correct is assigned a beta prior distribution. The first model assumes percent correct at pre-test is equal to the percent correct at post-test and the posterior predictive distribution is used to evaluate the change in the number correct. We subsequently model the proportions correct at pre-test and post-test as unequal. The second model then assumes independent proportions and the third assumes correlated beta distributions for the two proportions. 95% credible intervals for the number correct at post-test, given a particular level at pre-test, are calculated under each method. An example using data from a cochlear implant clinical trial is presented where clinicians recorded percent correct in a consonant-nucleus-consonant test.
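The conjugate machinery underlying this approach can be sketched directly: with a binomial likelihood and a Beta(a, b) prior, the posterior for the proportion correct is Beta(a + x, b + n - x). Below is a minimal sketch, assuming a uniform Beta(1, 1) prior and stdlib Monte Carlo sampling in place of exact quantiles; the paper's correlated-proportions model is not reproduced here:

```python
import random

def credible_interval(correct, total, a=1.0, b=1.0, mass=0.95,
                      draws=200_000, seed=1):
    """Equal-tailed credible interval for a binomial proportion under a
    Beta(a, b) prior, via Monte Carlo draws from the Beta posterior
    (scipy.stats.beta.ppf would give the exact quantiles instead)."""
    rng = random.Random(seed)
    post = sorted(rng.betavariate(a + correct, b + total - correct)
                  for _ in range(draws))
    lo = post[int(((1 - mass) / 2) * draws)]
    hi = post[int((1 - (1 - mass) / 2) * draws) - 1]
    return lo, hi
```

For example, a score of 40 out of 50 on a consonant-nucleus-consonant list yields a 95% interval of roughly 0.67 to 0.89 under this uniform prior, making explicit how much uncertainty remains around a single-session score of 80%.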
Affiliation(s)
- Jacob J Oleson
- Department of Biostatistics, The University of Iowa, 200 Hawkins Drive, The University of Iowa, Iowa City, IA 52242-1009, USA.
|