1. Fogerty D, Ahlstrom JB, Dubno JR. Sentence recognition with modulation-filtered speech segments for younger and older adults: Effects of hearing impairment and cognition. J Acoust Soc Am 2023; 154:3328-3343. PMID: 37983296; PMCID: PMC10663055; DOI: 10.1121/10.0022445
Abstract
This study investigated word recognition for sentences temporally filtered within and across acoustic-phonetic segments providing primarily vocalic or consonantal cues. Amplitude modulation was filtered at syllabic (0-8 Hz) or slow phonemic (8-16 Hz) rates. Sentence-level modulation properties were also varied by amplifying or attenuating segments. Participants were older adults with normal or impaired hearing. Their speech recognition was compared to that of younger normal-hearing adults who heard speech either unmodified or spectrally shaped, with and without threshold-matching noise that matched audibility to hearing-impaired thresholds. Participants also completed cognitive and speech recognition measures. Overall, results confirm the primary contribution of syllabic speech modulations to recognition and demonstrate the importance of these modulations across vowel and consonant segments. Group differences demonstrated a hearing loss-related impairment in processing modulation-filtered speech, particularly at 8-16 Hz, which could not be fully explained by age or poorer audibility. Principal components analysis identified a single factor score that summarized speech recognition across modulation-filtered conditions; analysis of individual differences explained 81% of the variance in this summary factor among the older adults with hearing loss. These results suggest that a combination of cognitive and speech glimpsing abilities contributes to speech recognition in this group.
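The summary-factor analysis described in the abstract (a principal component collapsing performance across modulation-filtered conditions into one score per listener) can be sketched as follows. This is a minimal illustration under stated assumptions: the scores are simulated, and the number of listeners and conditions is invented, since the study's data are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical proportion-correct scores for 20 listeners across four
# modulation-filtered conditions (e.g., 0-8 Hz vs. 8-16 Hz filtering,
# applied over vowel vs. consonant segments).
scores = rng.uniform(0.2, 0.9, size=(20, 4))

# Standardize each condition, then take the first principal component
# as a single summary factor of speech recognition ability.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
cov = np.cov(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
pc1 = eigvecs[:, -1]                     # direction of greatest variance
factor = z @ pc1                         # one summary score per listener

# Proportion of total variance captured by the single summary factor.
explained = eigvals[-1] / eigvals.sum()
print(f"variance explained by factor 1: {explained:.2f}")
```

Individual-difference measures (cognitive scores, audibility) could then be regressed on `factor` to ask how much of the summary score they explain, analogous to the 81% figure reported above.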
Affiliation(s)
- Daniel Fogerty
- Department of Speech and Hearing Science, University of Illinois Urbana-Champaign, Champaign, Illinois 61820, USA
- Jayne B Ahlstrom
- Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina 29425, USA
- Judy R Dubno
- Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina 29425, USA
2. Ratnanather JT, Wang LC, Bae SH, O'Neill ER, Sagi E, Tward DJ. Visualization of Speech Perception Analysis via Phoneme Alignment: A Pilot Study. Front Neurol 2022; 12:724800. PMID: 35087462; PMCID: PMC8787339; DOI: 10.3389/fneur.2021.724800
Abstract
Objective: Speech tests assess the ability of people with hearing loss to comprehend speech with a hearing aid or cochlear implant. The tests are usually at the word or sentence level; few analyze errors at the phoneme level, so there is a need for an automated program to visualize in real time the accuracy of phonemes in these tests. Method: The program reads in stimulus-response pairs and obtains their phonemic representations from an open-source digital pronouncing dictionary. The stimulus phonemes are aligned with the response phonemes via a modification of the Levenshtein Minimum Edit Distance algorithm. Alignment is achieved via dynamic programming with costs for insertions, deletions, and substitutions modified on the basis of phonological features. The accuracy for each phoneme is based on the F1-score. Accuracy is visualized with respect to place and manner (consonants) or height (vowels). Confusion matrices for the phonemes are used in an information transfer analysis of ten phonological features. A histogram of the information transfer for the features over a frequency-like range is presented as a phonemegram. Results: The program was applied to two datasets. One consisted of test data at the sentence and word levels. Stimulus-response sentence pairs from six volunteers with different degrees of hearing loss and modes of amplification were analyzed. Four volunteers listened to sentences from a mobile auditory training app while two listened to sentences from a clinical speech test. Stimulus-response word pairs from three lists were also analyzed. The other dataset consisted of published stimulus-response pairs from experiments in which 31 participants with cochlear implants listened to 400 Basic English Lexicon sentences via different talkers at four different SNR levels. In all cases, visualization was obtained in real time. Analysis of 12,400 actual and random pairs showed that the program was robust to the nature of the pairs.
Conclusion: It is possible to automate the alignment of phonemes extracted from stimulus-response pairs from speech tests in real time. The alignment then makes it possible to visualize the accuracy of responses via phonological features in two ways. Such visualization of phoneme alignment and accuracy could aid clinicians and scientists.
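The alignment step described in the Method can be sketched as a Levenshtein-style dynamic program whose substitution cost is cheaper when two phonemes share phonological features. This is a minimal sketch under stated assumptions: the feature sets, costs, and phoneme labels below are illustrative inventions, not the paper's actual dictionary or cost tables.

```python
# Tiny illustrative feature inventory (ARPAbet-like labels, made up here).
FEATURES = {
    "B": {"stop", "labial", "voiced"},
    "P": {"stop", "labial"},
    "T": {"stop", "alveolar"},
    "AE": {"vowel", "low", "front"},
    "EH": {"vowel", "mid", "front"},
}

def sub_cost(a, b):
    """Substitution cost in [0, 1]; phonemes sharing more features cost less."""
    if a == b:
        return 0.0
    fa, fb = FEATURES.get(a, set()), FEATURES.get(b, set())
    union = fa | fb
    return 1.0 - len(fa & fb) / len(union) if union else 1.0

def align(stim, resp, indel=1.0):
    """Align two phoneme lists; return (total edit cost, aligned pairs)."""
    n, m = len(stim), len(resp)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * indel
    for j in range(1, m + 1):
        d[0][j] = j * indel
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(
                d[i - 1][j] + indel,                                   # deletion
                d[i][j - 1] + indel,                                   # insertion
                d[i - 1][j - 1] + sub_cost(stim[i - 1], resp[j - 1]),  # substitution
            )
    # Trace back through the table to recover the aligned phoneme pairs.
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + sub_cost(stim[i - 1], resp[j - 1]):
            pairs.append((stim[i - 1], resp[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + indel:
            pairs.append((stim[i - 1], "-"))   # phoneme deleted in the response
            i -= 1
        else:
            pairs.append(("-", resp[j - 1]))   # phoneme inserted in the response
            j -= 1
    return d[n][m], pairs[::-1]

# "BAT" heard as "PET": B/P and AE/EH align as feature-sharing substitutions.
cost, pairs = align(["B", "AE", "T"], ["P", "EH", "T"])
```

Feature-weighted costs make the aligner prefer matching a stimulus phoneme to a perceptually similar response phoneme over an insertion-deletion pair, which is what makes per-phoneme accuracy scoring (e.g., the F1-based measure above) meaningful.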
Affiliation(s)
- J Tilak Ratnanather
- Center for Imaging Science and Institute for Computational Medicine, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Lydia C Wang
- Center for Imaging Science and Institute for Computational Medicine, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Seung-Ho Bae
- Center for Imaging Science and Institute for Computational Medicine, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Erin R O'Neill
- Center for Applied and Translational Sensory Sciences, University of Minnesota, Minneapolis, MN, United States
- Elad Sagi
- Department of Otolaryngology, New York University School of Medicine, New York, NY, United States
- Daniel J Tward
- Center for Imaging Science and Institute for Computational Medicine, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States; Departments of Computational Medicine and Neurology, University of California, Los Angeles, Los Angeles, CA, United States
3. Elkins E, Harvey A, Hillyer J, Hazlewood C, Watson S, Parbery-Clark A. Estimating Real-World Performance of Percutaneously Coupled Bone-Conduction Device Users With Severe-to-Profound Unilateral Hearing Loss. Am J Audiol 2020; 29:170-187. PMID: 32286081; DOI: 10.1044/2019_aja-19-00088
Abstract
Purpose The bone-conduction device attached to a percutaneous screw (BCD) is an important treatment option for individuals with severe-to-profound unilateral hearing loss (UHL). Clinicians may use subjective questionnaires and speech-in-noise measures to evaluate BCD use in this patient population; however, how these metrics translate to real-world aided performance is unclear. The purpose of this study was twofold: first, to measure speech-in-noise performance of BCD users with severe-to-profound UHL in a simulated real-world environment, relative to individuals with normal hearing bilaterally; second, to determine whether BCD users' subjective reports of aided performance relate to simulated real-world performance. Method A between-subjects design compared 14 adults with severe-to-profound UHL (BCD group) and 10 age-matched participants with normal hearing bilaterally (control group). Speech-in-noise tests were administered in an eight-speaker R-Space simulating a real-world environment. To further explore speech-in-noise evaluation methods for this population, testing was also completed in a clinically common two-speaker array. The effects of various microphone settings on performance were explored for BCD users. Subjective performance was measured with the Abbreviated Profile of Hearing Aid Benefit (APHAB; Cox & Alexander, 1995) and the Speech, Spatial and Qualities of Hearing Scale (Gatehouse & Noble, 2004). Statistical analyses to explore relationships between variables included repeated-measures analysis of variance, regression analyses, independent-samples t tests, nonparametric Mann-Whitney tests, and correlations. Results In the simulated real-world environment, BCD group participants showed poorer speech-in-noise understanding than control group participants. BCD benefit was observed for all microphone settings when speech stimuli were presented to the side with the BCD.
When adaptive directional or fixed directional microphone settings were used, a relationship was noted between simulated real-world speech-in-noise performance for speech stimuli presented to the side with the BCD and subjective reports on the Background Noise subscale of the APHAB. Conclusions The Background Noise subscale of the APHAB may help estimate real-world speech-in-noise performance for BCD users with severe-to-profound UHL for signals of interest presented to the implanted side, specifically when adaptive or fixed directional microphone settings are used. This subscale may provide an efficient and accessible alternative to assessing real-world speech-in-noise performance in lieu of less clinically available measurement tools, such as an R-Space.
Affiliation(s)
- Elizabeth Elkins
- Auditory Research Laboratory, Center for Hearing and Skull Base Surgery, Swedish Neuroscience Institute, Seattle, WA
- Anne Harvey
- University of California San Francisco Audiology Clinic
- Jake Hillyer
- School of Medicine, Oregon Health and Science University, Portland
- Chantel Hazlewood
- Auditory Research Laboratory, Center for Hearing and Skull Base Surgery, Swedish Neuroscience Institute, Seattle, WA
- Stacey Watson
- Auditory Research Laboratory, Center for Hearing and Skull Base Surgery, Swedish Neuroscience Institute, Seattle, WA
- Alexandra Parbery-Clark
- Auditory Research Laboratory, Center for Hearing and Skull Base Surgery, Swedish Neuroscience Institute, Seattle, WA
4. Dwyer RT, Roberts J, Gifford RH. Effect of Microphone Configuration and Sound Source Location on Speech Recognition for Adult Cochlear Implant Users with Current-Generation Sound Processors. J Am Acad Audiol 2020; 31:578-589. PMID: 32340055; DOI: 10.1055/s-0040-1709449
Abstract
BACKGROUND Microphone location has been shown to influence speech recognition, with a microphone placed at the entrance to the ear canal yielding higher speech recognition than top-of-the-pinna placement. Although this work currently influences cochlear implant programming practices, prior studies were completed with previous-generation microphone and sound processor technology; consequently, their applicability to current clinical practice is unclear. PURPOSE To investigate how microphone location (e.g., at the entrance to the ear canal, at the top of the pinna), speech-source location, and microphone configuration (e.g., omnidirectional, directional) influence speech recognition for adult cochlear implant (CI) recipients using the latest sound processor technology. RESEARCH DESIGN Single-center prospective study using a within-subjects, repeated-measures design. STUDY SAMPLE Eleven experienced adult Advanced Bionics cochlear implant recipients (five bilateral, six bimodal) using a Naída CI Q90 sound processor were recruited for this study. DATA COLLECTION AND ANALYSIS Sentences were presented from a single loudspeaker at 65 dBA for source azimuths of 0°, 90°, or 270°, with semidiffuse noise originating from the remaining loudspeakers in the R-SPACE array. Individualized signal-to-noise ratios were determined to obtain 50% correct in the unilateral cochlear implant condition with the signal at 0°. Performance was compared across the following microphone sources: T-Mic 2, integrated processor microphone (formerly behind-the-ear mic), processor microphone + T-Mic 2, and two types of beamforming: monaural adaptive beamforming (UltraZoom) and binaural beamforming (StereoZoom). Repeated-measures analyses were completed for both speech recognition and microphone output for each microphone location and configuration as well as sound source location.
A two-way analysis of variance using mic and azimuth as the independent variables and output for pink noise as the dependent variable was used to characterize the acoustic output characteristics of each microphone source. RESULTS No significant differences in speech recognition across omnidirectional mic location at any source azimuth or listening condition were observed. Secondary findings were (1) omnidirectional microphone configurations afforded significantly higher speech recognition for conditions in which speech was directed to ± 90° (when compared with directional microphone configurations), (2) omnidirectional microphone output was significantly greater when the signal was presented off-axis, and (3) processor microphone output was significantly greater than T-Mic 2 when the sound originated from 0°, which contributed to better aided detection at 2 and 6 kHz with the processor microphone in this group. CONCLUSIONS Unlike previous-generation microphones, we found no statistically significant effect of microphone location on speech recognition in noise from any source azimuth. Directional microphones significantly improved speech recognition in the most difficult listening environments.
Affiliation(s)
- Robert T Dwyer
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
- Jillian Roberts
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
- René H Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee; Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, Tennessee
5. Tamati TN, Janse E, Başkent D. Perceptual Discrimination of Speaking Style Under Cochlear Implant Simulation. Ear Hear 2019; 40:63-76. PMID: 29742545; PMCID: PMC6319584; DOI: 10.1097/aud.0000000000000591
Abstract
OBJECTIVES Real-life, adverse listening conditions involve a great deal of speech variability, including variability in speaking style. Depending on the speaking context, talkers may use a more casual, reduced speaking style or a more formal, careful speaking style. Attending to fine-grained acoustic-phonetic details characterizing different speaking styles facilitates the perception of the speaking style used by the talker. These acoustic-phonetic cues are poorly encoded in cochlear implants (CIs), potentially rendering the discrimination of speaking style difficult. As a first step to characterizing CI perception of real-life speech forms, the present study investigated the perception of different speaking styles in normal-hearing (NH) listeners with and without CI simulation. DESIGN The discrimination of three speaking styles (conversational reduced speech, speech from retold stories, and carefully read speech) was assessed using a speaking style discrimination task in two experiments. NH listeners classified sentence-length utterances, produced in one of the three styles, as either formal (careful) or informal (conversational). Utterances were presented with unmodified speaking rates in experiment 1 (31 NH, young adult Dutch speakers) and with modified speaking rates set to the average rate across all utterances in experiment 2 (28 NH, young adult Dutch speakers). In both experiments, acoustic noise-vocoder simulations of CIs were used to produce 12-channel (CI-12) and 4-channel (CI-4) vocoder simulation conditions, in addition to a no-simulation condition. RESULTS In both experiments 1 and 2, NH listeners were able to reliably discriminate the speaking styles without CI simulation. However, this ability was reduced under CI simulation. In experiment 1, participants showed poor discrimination of speaking styles under CI simulation. Listeners used speaking rate as a cue to make their judgements, even though it was not a reliable cue to speaking style in the study materials. In experiment 2, without differences in speaking rate among speaking styles, listeners showed better discrimination of speaking styles under CI simulation, using additional cues to complete the task. CONCLUSIONS The findings from the present study demonstrate that perceiving differences in three speaking styles under CI simulation is a difficult task because some important cues to speaking style are not fully available in these conditions. While some cues like speaking rate are available, this information alone may not always be a reliable indicator of a particular speaking style. Some other reliable speaking style cues, such as degraded acoustic-phonetic information and variability in speaking rate within an utterance, may be available but less salient. However, as in experiment 2, listeners' perception of speaking styles may be modified if they are constrained or trained to use these additional cues, which were more reliable in the context of the present study. Taken together, these results suggest that dealing with speech variability in real-life listening conditions may be a challenge for CI users.
Affiliation(s)
- Terrin N. Tamati
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, The Netherlands
- Esther Janse
- Centre for Language Studies, Radboud University Nijmegen, Nijmegen, The Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, The Netherlands
6. Results in Adult Cochlear Implant Recipients With Varied Asymmetric Hearing: A Prospective Longitudinal Study of Speech Recognition, Localization, and Participant Report. Ear Hear 2019; 39:845-862. PMID: 29373326; DOI: 10.1097/aud.0000000000000548
Abstract
OBJECTIVES Asymmetric hearing, with severe to profound hearing loss (SPHL) in one ear and better hearing in the other, requires increased listening effort and is detrimental to understanding speech in noise and to sound localization. Although a cochlear implant (CI) is the only treatment that can restore hearing to an ear with SPHL, current candidacy criteria often disallow this option for patients with asymmetric hearing. The present study aimed to evaluate longitudinal performance outcomes in a relatively large group of adults with asymmetric hearing who received a CI in the poor ear. DESIGN Forty-seven adults with postlingual hearing loss participated. Test materials included objective and subjective measures meant to elucidate communication challenges encountered by those with asymmetric hearing. Test intervals included preimplant and 6 and 12 months postimplant. Preimplant testing was completed in participants' everyday listening condition: bilateral hearing aids (HAs), n = 9; better ear HA, n = 29; no HA, n = 9. Postimplant, each ear was tested separately and in the bimodal condition. RESULTS Group mean longitudinal results in the bimodal condition postimplant, compared with the preimplant everyday listening condition, indicated significantly improved sentence scores at soft levels and in noise, improved localization, and higher ratings of communication function by 6 months postimplant. Group mean 6-month postimplant results were significantly better in the bimodal condition compared with either ear alone. Audibility and speech recognition for the poor ear alone improved significantly with a CI compared with preimplant. Most participants had clinically meaningful benefit on most measures. Contributory factors reported for traditional CI candidates also impacted results for this population. In general, older participants had poorer bimodal speech recognition in noise and localization abilities than younger participants.
Participants with early SPHL onset had better bimodal localization than those with later SPHL onset, and participants with longer SPHL duration had poorer CI alone speech understanding in noise but not in quiet. Better ear pure-tone average (PTA) correlated with all speech recognition measures in the bimodal condition. To understand the impact of better ear hearing on bimodal performance, participants were grouped by better ear PTA: group 1 PTA ≤40 dB HL (n = 19), group 2 PTA = 41 to 55 dB HL (n = 14), and group 3 PTA = 56 to 70 dB HL (n = 14). All groups showed bimodal benefit on speech recognition measures in quiet and in noise; however, only group 3 obtained benefit when noise was toward the CI ear. All groups showed improved localization and ratings of perceived communication. CONCLUSIONS Receiving a CI for the poor ear was an effective treatment for this population. Improved audibility and speech recognition were evident by 6 months postimplant. Improvements in sound localization and self-reports of communication benefit were significant and not related to better ear hearing. Participants with more hearing in the better ear (group 1) showed less bimodal benefit but greater bimodal performance for speech recognition than groups 2 and 3. Test batteries for this population should include quality of life measures, sound localization, and adaptive speech recognition measures with spatially separated noise to capture the hearing loss deficits and treatment benefits reported by this patient population.
7. Gifford RH, Loiselle L, Natale S, Sheffield SW, Sunderhaus LW, Dietrich MS, Dorman MF. Speech Understanding in Noise for Adults With Cochlear Implants: Effects of Hearing Configuration, Source Location Certainty, and Head Movement. J Speech Lang Hear Res 2018; 61:1306-1321. PMID: 29800361; PMCID: PMC6195075; DOI: 10.1044/2018_jslhr-h-16-0444
Abstract
Purpose The primary purpose of this study was to assess speech understanding in quiet and in diffuse noise for adult cochlear implant (CI) recipients utilizing bimodal hearing or bilateral CIs. Our primary hypothesis was that bilateral CI recipients would demonstrate less effect of source azimuth in the bilateral CI condition due to symmetric interaural head shadow. Method Sentence recognition was assessed for adult bilateral (n = 25) CI users and bimodal listeners (n = 12) in three conditions: (1) source location certainty regarding fixed target azimuth, (2) source location uncertainty regarding roving target azimuth, and (3) Condition 2 repeated, allowing listeners to turn their heads, as needed. Results (a) Bilateral CI users exhibited relatively similar performance regardless of source azimuth in the bilateral CI condition; (b) bimodal listeners exhibited higher performance for speech directed to the better hearing ear even in the bimodal condition; (c) the unilateral, better ear condition yielded higher performance for speech presented to the better ear versus speech to the front or to the poorer ear; (d) source location certainty did not affect speech understanding performance; and (e) head turns did not improve performance. The results confirmed our hypothesis that bilateral CI users exhibited less effect of source azimuth than bimodal listeners. That is, they exhibited similar performance for speech recognition irrespective of source azimuth, whereas bimodal listeners exhibited significantly poorer performance with speech originating from the poorer hearing ear (typically the nonimplanted ear). Conclusions Bilateral CI users overcame ear and source location effects observed for the bimodal listeners. 
Bilateral CI users have access to head shadow on both sides, whereas bimodal listeners generally have interaural asymmetry in both speech understanding and audible bandwidth limiting the head shadow benefit obtained from the poorer ear (generally the nonimplanted ear). In summary, we found that, in conditions with source location uncertainty and increased ecological validity, bilateral CI performance was superior to bimodal listening.
Affiliation(s)
- Louise Loiselle
- Arizona State University, Tempe, AZ
- MED-EL Corporation, Durham, NC
8. Blankenship C, Zhang F, Keith R. Behavioral Measures of Temporal Processing and Speech Perception in Cochlear Implant Users. J Am Acad Audiol 2018; 27:701-713. PMID: 27718347; DOI: 10.3766/jaaa.15026
Abstract
BACKGROUND Although most cochlear implant (CI) users achieve improvements in speech perception, there is still wide variability in speech perception outcomes. A growing body of literature supports a relationship between individual differences in temporal processing and speech perception performance in CI users. Previous psychophysical studies have emphasized the importance of temporal acuity for overall speech perception performance. Measurement of gap detection thresholds (GDTs) is the most common measure currently used to assess temporal resolution. However, most GDT studies with CI participants used direct electrical stimulation rather than acoustic stimulation, and used psychoacoustic research paradigms that are not easy to administer clinically. Therefore, it is necessary to determine whether the variance in GDTs assessed with clinical measures of temporal processing, such as the Randomized Gap Detection Test (RGDT), can explain the variability in speech perception performance. PURPOSE The primary goal of this study was to investigate the relationship between temporal processing and speech perception performance in CI users. RESEARCH DESIGN A correlational study investigating the relationship between behavioral GDTs (assessed with the RGDT or the Expanded Randomized Gap Detection Test) and commonly used speech perception measures (the Speech Recognition Test [SRT], Central Institute for the Deaf W-22 Word Recognition Test [W-22], Consonant-Nucleus-Consonant Test [CNC], Arizona Biomedical Sentence Recognition Test [AzBio], and Bamford-Kowal-Bench Speech-in-Noise Test [BKB-SIN]). STUDY SAMPLE Twelve postlingually deafened adult CI users (24-83 yr) and ten normal-hearing (NH; 22-30 yr) adults participated in the study. DATA COLLECTION AND ANALYSIS The data were collected in a sound-attenuated test booth. After measuring pure-tone thresholds, GDTs and speech perception performance were measured. The difference in performance between participant groups on the aforementioned tests, as well as the correlation between GDTs and speech perception performance, was examined. Correlations between participants' biographic factors and performance on the RGDT and speech perception measures were also explored. RESULTS Although some CI participants performed as well as the NH listeners, the majority displayed temporal processing impairments (GDTs > 20 msec) and poorer speech perception performance than NH participants. A statistically significant difference was found between the NH and CI groups in GDTs and some speech tests (SRT, W-22, and BKB-SIN). For the CI group, there were significant correlations between GDTs and some measures of speech perception (CNC Phoneme, AzBio, BKB-SIN); however, no significant correlations were found between biographic factors and GDTs or speech perception performance. CONCLUSIONS Results support the theory that variability in temporal acuity in CI users contributes to variability in speech performance. Results also indicate that it is reasonable to use the clinically available RGDT to identify CI users with temporal processing impairments for further appropriate rehabilitation.
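The kind of GDT-speech correlation analysis described above can be illustrated in a few lines. This is a sketch with invented numbers: the thresholds and scores below are not the study's data, and only the 20-ms impairment criterion comes from the abstract.

```python
import numpy as np

# Hypothetical data: gap detection thresholds (ms) and speech-in-noise
# scores (% correct) for ten CI users; values are illustrative only.
gdt = np.array([2, 5, 8, 10, 15, 20, 25, 40, 60, 100], dtype=float)
speech = np.array([92, 88, 85, 80, 76, 70, 62, 50, 45, 30], dtype=float)

# Flag clinically impaired temporal processing per the criterion above
# (GDT > 20 msec).
impaired = gdt > 20
print(f"{impaired.sum()} of {len(gdt)} listeners exceed the 20 ms criterion")

# Pearson correlation between gap detection threshold and speech score.
r = np.corrcoef(gdt, speech)[0, 1]
print(f"r = {r:.2f}")  # negative: longer gap thresholds, poorer speech scores
```

With real clinical data, a rank-based correlation (e.g., Spearman) may be preferable, since GDTs are bounded below and often skewed.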
Affiliation(s)
- Chelsea Blankenship
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH
- Fawen Zhang
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH
- Robert Keith
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH
9. Faulkner KF, Tamati TN, Gilbert JL, Pisoni DB. List Equivalency of PRESTO for the Evaluation of Speech Recognition. J Am Acad Audiol 2018; 26:582-594. PMID: 26134725; DOI: 10.3766/jaaa.14082
Abstract
BACKGROUND There is a pressing clinical need for the development of ecologically valid and robust assessment measures of speech recognition. Perceptually Robust English Sentence Test Open-set (PRESTO) is a new high-variability sentence recognition test that is sensitive to individual differences and was designed for use with several different clinical populations. PRESTO differs from other sentence recognition tests because the target sentences differ in talker, gender, and regional dialect. Increasing interest in using PRESTO as a clinical test of spoken word recognition dictates the need to establish equivalence across test lists. PURPOSE The purpose of this study was to establish list equivalency of PRESTO for clinical use. RESEARCH DESIGN PRESTO sentence lists were presented to three groups of normal-hearing listeners in noise (multitalker babble [MTB] at 0 dB signal-to-noise ratio) or under eight-channel cochlear implant simulation (CI-Sim). STUDY SAMPLE Ninety-one young native speakers of English who were undergraduate students from the Indiana University community participated in this study. DATA COLLECTION AND ANALYSIS Participants completed a sentence recognition task using different PRESTO sentence lists. They listened to sentences presented over headphones and typed in the words they heard on a computer. Keyword scoring was completed offline. Equivalency for sentence lists was determined based on the list intelligibility (mean keyword accuracy for each list compared with all other lists) and listener consistency (the relation between mean keyword accuracy on each list for each listener). RESULTS Based on measures of list equivalency and listener consistency, ten PRESTO lists were found to be equivalent in the MTB condition, nine lists were equivalent in the CI-Sim condition, and six PRESTO lists were equivalent in both conditions. 
CONCLUSIONS PRESTO is a valuable addition to the clinical toolbox for assessing sentence recognition across different populations. Because the test condition influenced the overall intelligibility of lists, researchers and clinicians should take the presentation conditions into consideration when selecting the best PRESTO lists for their research or clinical protocols.
Affiliation(s)
- Kathleen F Faulkner
- Speech Research Laboratory, Department of Psychological and Brain Sciences, Indiana University, Bloomington, 1101 East Tenth Street, Bloomington, IN 47405; DeVault Otologic Research Laboratory, Department of Otolaryngology-Head and Neck Surgery, Indiana University School of Medicine, 699 Riley Research Drive, RR044, Indianapolis, IN 46202
- Terrin N Tamati
- Speech Research Laboratory, Department of Psychological and Brain Sciences, Indiana University, Bloomington, 1101 East Tenth Street, Bloomington, IN 47405
- Jaimie L Gilbert
- Department of Communicative Disorders, University of Wisconsin, Stevens Point, 1901 Fourth Avenue, Stevens Point, WI 54481
- David B Pisoni
- Speech Research Laboratory, Department of Psychological and Brain Sciences, Indiana University, Bloomington, 1101 East Tenth Street, Bloomington, IN 47405; DeVault Otologic Research Laboratory, Department of Otolaryngology-Head and Neck Surgery, Indiana University School of Medicine, 699 Riley Research Drive, RR044, Indianapolis, IN 46202
10
Sladen DP, Gifford RH, Haynes D, Kelsall D, Benson A, Lewis K, Zwolan T, Fu QJ, Gantz B, Gilden J, Westerberg B, Gustin C, O'Neil L, Driscoll CL. Evaluation of a revised indication for determining adult cochlear implant candidacy. Laryngoscope 2017; 127:2368-2374. [PMID: 28233910 DOI: 10.1002/lary.26513] [Citation(s) in RCA: 59] [Impact Index Per Article: 8.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2016] [Revised: 10/29/2016] [Accepted: 12/14/2016] [Indexed: 11/07/2022]
Abstract
OBJECTIVE To evaluate the use of monosyllabic word recognition versus sentence recognition to determine candidacy and long-term benefit for cochlear implantation. STUDY DESIGN Prospective multi-center single-subject design. METHODS A total of 21 adults aged 18 years and older with bilateral moderate to profound sensorineural hearing loss and low monosyllabic word scores received unilateral cochlear implantation. The consonant-nucleus-consonant (CNC) word test was the central measure of pre- and postoperative performance. Additional speech understanding tests included the Hearing in Noise Test sentences in quiet and AzBio sentences in +5 dB signal-to-noise ratio (SNR). Quality of life (QoL) was measured using the Abbreviated Profile of Hearing Aid Benefit and Health Utilities Index. RESULTS Performance on sentence recognition reached the ceiling of the test after only 3 months of implant use. In contrast, none of the participants in this study reached a score of 80% on CNC word recognition, even at the 12-month postoperative test interval. Measures of QoL related to hearing were also significantly improved following implantation. CONCLUSION Results of this study demonstrate that monosyllabic words are appropriate for determining preoperative candidacy and for measuring long-term postoperative speech recognition performance. LEVEL OF EVIDENCE 2c. Laryngoscope, 127:2368-2374, 2017.
Affiliation(s)
- Douglas P Sladen
- Department of Otolaryngology, Mayo Clinic, Rochester, Minnesota, U.S.A
- René H Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, Tennessee, U.S.A
- David Haynes
- Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, Tennessee
- David Kelsall
- Rocky Mountain Ear Center, Englewood, Colorado, U.S.A
- Aaron Benson
- Department of Otolaryngology, The Ohio State University Wexner Medical Center, Ohio, U.S.A
- Teresa Zwolan
- University of Michigan Cochlear Implant Program, Ann Arbor, Michigan, U.S.A
- Qian-Jie Fu
- Department of Head and Neck Surgery, University of California, Los Angeles, California, U.S.A
- Bruce Gantz
- Department of Otolaryngology, University of Iowa, Iowa City, Iowa, U.S.A
- Jan Gilden
- Houston Ear Research Foundation, Houston, Texas
- Brian Westerberg
- Department of Otolaryngology - Head and Neck Surgery, Vancouver Children's Hospital, Vancouver, British Columbia, Canada
- Cindy Gustin
- Department of Otolaryngology, Vancouver Children's Hospital, Vancouver, British Columbia, Canada
- Lori O'Neil
- Cochlear Americas, Centennial, Colorado, U.S.A
- Colin L Driscoll
- Department of Otolaryngology, Mayo Clinic, Rochester, Minnesota, U.S.A
11
Srinivasan NK, Tobey EA, Loizou PC. Prior exposure to a reverberant listening environment improves speech intelligibility in adult cochlear implant listeners. Cochlear Implants Int 2016; 17:98-104. [PMID: 26843090 DOI: 10.1080/14670100.2015.1102455] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
OBJECTIVES The goal of this study was to investigate whether prior exposure to a reverberant listening environment improves the speech intelligibility of adult cochlear implant (CI) users. METHODS Six adult CI users participated in this study. Speech intelligibility was measured in five different simulated reverberant listening environments with two different speech corpora. Within each listening environment, prior exposure was varied by either keeping the environment the same across all trials (blocked presentation) or changing the environment from trial to trial (unblocked presentation). RESULTS Speech intelligibility decreased as reverberation time increased. Although substantial individual variability was observed, all CI listeners showed higher intelligibility in the blocked presentation condition than in the unblocked presentation condition for both speech corpora. CONCLUSION Prior listening exposure to a reverberant listening environment improves speech intelligibility in adult CI listeners. Further research is required to understand the underlying mechanism of adaptation to the listening environment.
Affiliation(s)
- Nirmal Kumar Srinivasan
- Department of Electrical Engineering, University of Texas at Dallas, Richardson, TX, USA; National Center for Rehabilitative Auditory Research, Portland VA Medical Center, OR, USA; Department of Otolaryngology, Oregon Health and Science University, Portland, OR, USA
- Emily A Tobey
- Callier Advanced Hearing Research Center, University of Texas at Dallas, Richardson, TX, USA
- Philipos C Loizou
- Department of Electrical Engineering, University of Texas at Dallas, Richardson, TX, USA
12
Kolberg ER, Sheffield SW, Davis TJ, Sunderhaus LW, Gifford RH. Cochlear implant microphone location affects speech recognition in diffuse noise. J Am Acad Audiol 2015; 26:51-8; quiz 109-10. [PMID: 25597460 DOI: 10.3766/jaaa.26.1.6] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
BACKGROUND Despite improvements in cochlear implants (CIs), CI recipients continue to experience significant communicative difficulty in background noise. Many potential solutions have been proposed to help increase signal-to-noise ratio in noisy environments, including signal processing and external accessories. To date, however, research on the effect of microphone location on speech recognition in noise has focused primarily on hearing aid users. PURPOSE The purpose of this study was (1) to measure physical output for the T-Mic as compared with the integrated behind-the-ear (BTE) processor mic for various source azimuths, and (2) to investigate the effect of CI processor mic location on speech recognition in semi-diffuse noise with speech originating from various source azimuths as encountered in everyday communicative environments. RESEARCH DESIGN A repeated-measures, within-participant design was used to compare performance across listening conditions. STUDY SAMPLE A total of 11 adults with Advanced Bionics CIs were recruited for this study. DATA COLLECTION AND ANALYSIS Physical acoustic output was measured on a Knowles Electronics Manikin for Acoustic Research (KEMAR) for the T-Mic and BTE mic, with broadband noise presented at 0 and 90° (directed toward the implant processor). In addition to physical acoustic measurements, we also assessed recognition of sentences constructed by researchers at Texas Instruments, the Massachusetts Institute of Technology, and the Stanford Research Institute (TIMIT sentences) at 60 dBA for speech source azimuths of 0, 90, and 270°. Sentences were presented in a semi-diffuse restaurant noise originating from the R-SPACE 8-loudspeaker array. Signal-to-noise ratio was determined individually to achieve approximately 50% correct in the unilateral implanted listening condition with speech at 0°. Performance was compared across the T-Mic, 50/50, and the integrated BTE processor mic.
RESULTS The integrated BTE mic provided approximately 5 dB attenuation from 1500-4500 Hz for signals presented at 0° as compared with 90° (directed toward the processor). The T-Mic output was essentially equivalent for sources originating from 0 and 90°. Mic location also significantly affected sentence recognition as a function of source azimuth, with the T-Mic yielding the highest performance for speech originating from 0°. CONCLUSIONS These results have clinical implications for (1) future implant processor design with respect to mic location, (2) mic settings for implant recipients, and (3) execution of advanced speech testing in the clinic.
Affiliation(s)
- Elizabeth R Kolberg
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN
- Timothy J Davis
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN
- Linsey W Sunderhaus
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN
- René H Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN
13
Reeder RM, Firszt JB, Holden LK, Strube MJ. A longitudinal study in adults with sequential bilateral cochlear implants: time course for individual ear and bilateral performance. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2014; 57:1108-1126. [PMID: 24686778 PMCID: PMC4057980 DOI: 10.1044/2014_jslhr-h-13-0087] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
PURPOSE The purpose of this study was to examine the rate of progress in the 2nd implanted ear as it relates to the 1st implanted ear and to bilateral performance in adult sequential cochlear implant recipients. In addition, this study aimed to identify factors that contribute to patient outcomes. METHOD The authors performed a prospective longitudinal study in 21 adults who received bilateral sequential cochlear implants. Testing occurred at 6 intervals: prebilateral through 12 months postbilateral implantation. Measures evaluated speech recognition in quiet and noise, localization, and perceived benefit. RESULTS Second ear performance was similar to 1st ear performance by 6 months postbilateral implantation. Bilateral performance was generally superior to either ear alone; however, participants with shorter 2nd ear length of deafness (<20 years) had more rapid early improvement and better overall outcomes than those with longer 2nd ear length of deafness (>30 years). All participants reported bilateral benefit. CONCLUSIONS Adult cochlear implant recipients demonstrated benefit from 2nd ear implantation for speech recognition, localization, and perceived communication function. Because performance outcomes were related to length of deafness, shorter time between surgeries may be warranted to reduce negative length-of-deafness effects. Future study may clarify the impact of other variables, such as preimplant hearing aid use, particularly for individuals with longer periods of deafness.
14
Abstract
OBJECTIVE Bilateral severe to profound sensorineural hearing loss is a standard criterion for cochlear implantation. Increasingly, patients are implanted in one ear and continue to use a hearing aid in the nonimplanted ear to improve abilities such as sound localization and speech understanding in noise. Patients with severe to profound hearing loss in one ear and a more moderate hearing loss in the other ear (i.e., asymmetric hearing) are not typically considered candidates for cochlear implantation. Amplification in the poorer ear is often unsuccessful because of limited benefit, restricting the patient to unilateral listening from the better ear alone. The purpose of this study was to determine whether patients with asymmetric hearing loss could benefit from cochlear implantation in the poorer ear with continued use of a hearing aid in the better ear. DESIGN Ten adults with asymmetric hearing between ears participated. In the poorer ear, all participants met cochlear implant candidacy guidelines; seven had postlingual onset, and three had pre/perilingual onset of severe to profound hearing loss. All had open-set speech recognition in the better-hearing ear. Assessment measures included word and sentence recognition in quiet, sentence recognition in fixed noise (four-talker babble) and in diffuse restaurant noise using an adaptive procedure, localization of word stimuli, and a hearing handicap scale. Participants were evaluated preimplant with hearing aids and postimplant with the implant alone, the hearing aid alone in the better ear, and bimodally (the implant and hearing aid in combination). Postlingual participants were evaluated at 6 mo postimplant, and pre/perilingual participants were evaluated at 6 and 12 mo postimplant. 
Data analysis compared the following results: (1) the poorer-hearing ear preimplant (with hearing aid) and postimplant (with cochlear implant); (2) the device(s) used for everyday listening pre- and postimplant; and (3) the hearing aid-alone and bimodal listening conditions postimplant. RESULTS The postlingual participants showed significant improvements in speech recognition after 6 mo of cochlear implant use in the poorer ear. Five postlingual participants had a bimodal advantage over the hearing aid-alone condition on at least one test measure. On average, the postlingual participants had significantly improved localization with bimodal input compared with the hearing aid alone. Only one pre/perilingual participant had open-set speech recognition with the cochlear implant. This participant had better hearing than the other two pre/perilingual participants in both the poorer and better ear. Localization abilities were not significantly different between the bimodal and hearing aid-alone conditions for the pre/perilingual participants. Mean hearing handicap ratings improved postimplant for all participants, indicating perceived benefit in everyday life with the addition of the cochlear implant. CONCLUSIONS Patients with asymmetric hearing loss who are not typical cochlear implant candidates can benefit from using a cochlear implant in the poorer ear with continued use of a hearing aid in the better ear. For this group of 10, the 7 postlingually deafened participants showed greater benefits with the cochlear implant than the pre/perilingual participants; however, further study is needed to determine maximum benefit for those with early onset of hearing loss.