1
Yoon YS, Straw S. Interactions Between Slopes of Residual Hearing and Frequency Maps in Simulated Bimodal and Electric-Acoustic Stimulation Hearing. J Speech Lang Hear Res 2024;67:282-295. [PMID: 38092067] [PMCID: PMC11000803] [DOI: 10.1044/2023_jslhr-22-00629]
Abstract
PURPOSE The aim of this study was to determine the effects of residual hearing slopes and cochlear implant frequency map settings on bimodal and electric-acoustic stimulation (EAS) benefits in speech perception. METHOD Adults with normal hearing were recruited for simulated bimodal and EAS hearing. Sentence perception was measured unilaterally and bilaterally. For the acoustic stimulation, three slopes of high-frequency hearing loss were created using low-pass filters with a cutoff frequency of 500 Hz: steep (96 dB/octave), medium (48 dB/octave), and shallow (24 dB/octave). For the electric stimulation, an eight-channel sinewave vocoder was used with a fixed output frequency range (1000-7938 Hz) and three input frequency ranges that created overlap (188-7938 Hz), meet (500-7938 Hz), and gap (750-7938 Hz) frequency maps relative to the cutoff frequency of the acoustic stimulation. RESULTS The largest bimodal/EAS benefit occurred with the shallow slope, and the smallest occurred with the steep slope. The effects of the slopes on bimodal/EAS benefit were greatest with the meet or gap map and least with the overlap map. EAS benefit was greater than bimodal benefit at higher signal-to-noise ratios regardless of frequency map. CONCLUSIONS The results indicate that the correlation between bimodal/EAS benefit and residual hearing could potentially improve if slopes were considered. The optimal frequency map differed with different slopes, suggesting that the slopes of residual hearing should be carefully considered in fitting bimodal and EAS hearing. EAS hearing provided greater benefit than bimodal hearing, suggesting that spectrotemporal integration was better within one ear (i.e., EAS) than across ears (i.e., bimodal).
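To make the stimulus manipulations concrete, below is a minimal Python sketch (numpy/scipy) of this kind of simulation. It is illustrative rather than the authors' code: the Butterworth orders are an assumption derived from the stated slopes (a Butterworth filter rolls off roughly 6 dB/octave per order), and the log-spaced band edges and Hilbert-envelope extraction are generic vocoder choices.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

FS = 44100  # sampling rate in Hz (assumed)

def acoustic_sim(x, slope_db_per_oct, fc=500):
    """Low-pass filter simulating sloping residual hearing.
    ~6 dB/octave per Butterworth order, so 24/48/96 dB/octave
    map to orders 4/8/16."""
    order = slope_db_per_oct // 6
    sos = butter(order, fc, btype="low", fs=FS, output="sos")
    return sosfilt(sos, x)

def sinewave_vocoder(x, f_in=(188, 7938), f_out=(1000, 7938), n_ch=8):
    """Eight-channel sinewave vocoder: analyze envelopes over the
    input range, re-synthesize as sinewaves at output-band centers."""
    e_in = np.geomspace(*f_in, n_ch + 1)    # analysis band edges
    e_out = np.geomspace(*f_out, n_ch + 1)  # synthesis band edges
    t = np.arange(len(x)) / FS
    y = np.zeros_like(x, dtype=float)
    for k in range(n_ch):
        sos = butter(4, (e_in[k], e_in[k + 1]), btype="band",
                     fs=FS, output="sos")
        env = np.abs(hilbert(sosfilt(sos, x)))     # channel envelope
        fc_out = np.sqrt(e_out[k] * e_out[k + 1])  # geometric center
        y += env * np.sin(2 * np.pi * fc_out * t)  # modulated carrier
    return y
```

Changing f_in to (500, 7938) or (750, 7938) while holding f_out fixed gives the meet and gap maps described above.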
Affiliation(s)
- Yang-Soo Yoon
- Department of Communication Sciences and Disorders, Baylor University, Waco, TX
- Shea Straw
- Department of Communication Sciences and Disorders, Baylor University, Waco, TX
2
Xu C, Cheng FY, Medina S, Eng E, Gifford R, Smith S. Objective discrimination of bimodal speech using frequency following responses. Hear Res 2023;437:108853. [PMID: 37441879] [DOI: 10.1016/j.heares.2023.108853]
Abstract
Bimodal hearing, in which a contralateral hearing aid is combined with a cochlear implant (CI), provides greater speech recognition benefits than using a CI alone. Factors predicting individual bimodal patient success are not fully understood. Previous studies have shown that bimodal benefits may be driven by a patient's ability to extract fundamental frequency (f0) and/or temporal fine structure cues (e.g., F1). Both of these features may be represented in frequency following responses (FFR) to bimodal speech. Thus, the goals of this study were to: 1) parametrically examine neural encoding of f0 and F1 in simulated bimodal speech conditions; 2) examine objective discrimination of FFRs to bimodal speech conditions using machine learning; 3) explore whether FFRs are predictive of perceptual bimodal benefit. Three vowels (/ε/, /i/, and /ʊ/) with identical f0 were manipulated by a vocoder (right ear) and low-pass filters (left ear) to create five bimodal simulations for evoking FFRs: Vocoder-only, Vocoder +125 Hz, Vocoder +250 Hz, Vocoder +500 Hz, and Vocoder +750 Hz. Perceptual performance on the BKB-SIN test was also measured using the same five configurations. Results suggested that neural representations of the f0 and F1 FFR components were enhanced with increasing acoustic bandwidth in the simulated "non-implanted" ear. As spectral differences between vowels emerged in the FFRs with increased acoustic bandwidth, FFRs were more accurately classified and discriminated using a machine learning algorithm. Enhancements of f0 and F1 neural encoding with increasing bandwidth were collectively predictive of perceptual bimodal benefit on a speech-in-noise task. Given these results, FFR may be a useful tool for objectively assessing individual variability in bimodal hearing.
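A minimal sketch of the objective-discrimination step, assuming FFR waveforms are reduced to low-frequency spectral magnitudes (the region carrying f0 and F1 energy) and classified with a linear SVM; the authors' actual features, classifier, and cross-validation scheme may differ.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def ffr_features(ffr, fs=10000, fmax=1500):
    """Magnitude spectrum up to fmax Hz as the feature vector."""
    spec = np.abs(np.fft.rfft(ffr))
    freqs = np.fft.rfftfreq(len(ffr), 1 / fs)
    return spec[freqs <= fmax]

def classification_accuracy(ffrs, labels, fs=10000):
    """ffrs: iterable of equal-length FFR waveforms;
    labels: condition names, e.g. "Vocoder+250Hz"."""
    X = np.array([ffr_features(w, fs) for w in ffrs])
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    return cross_val_score(clf, X, labels, cv=5).mean()
```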
Affiliation(s)
- Can Xu
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, 2504A Whitis Ave. (A1100), Austin 78712-0114, TX, USA
- Fan-Yin Cheng
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, 2504A Whitis Ave. (A1100), Austin 78712-0114, TX, USA
- Sarah Medina
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, 2504A Whitis Ave. (A1100), Austin 78712-0114, TX, USA
- Erica Eng
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, 2504A Whitis Ave. (A1100), Austin 78712-0114, TX, USA
- René Gifford
- Department of Speech, Language, and Hearing Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Spencer Smith
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, 2504A Whitis Ave. (A1100), Austin 78712-0114, TX, USA.
3
Buz E, Dwyer NC, Lai W, Watson DG, Gifford RH. Integration of fundamental frequency and voice-onset-time to voicing categorization: Listeners with normal hearing and bimodal hearing configurations. J Acoust Soc Am 2023;153:1580. [PMID: 37002096] [PMCID: PMC9995168] [DOI: 10.1121/10.0017429]
Abstract
This study investigates the integration of word-initial fundamental frequency (F0) and voice-onset-time (VOT) in stop voicing categorization for adult listeners with normal hearing (NH) and unilateral cochlear implant (CI) recipients utilizing a bimodal hearing configuration [CI + contralateral hearing aid (HA)]. Categorization was assessed for ten adults with NH and ten adult bimodal listeners, using synthesized consonant stimuli interpolating between /ba/ and /pa/ exemplars with five-step VOT and F0 conditions. All participants demonstrated the expected categorization pattern by reporting /ba/ for shorter VOTs and /pa/ for longer VOTs, with NH listeners showing more use of VOT as a voicing cue than CI listeners in general. When VOT becomes ambiguous between voiced and voiceless stops, NH users make more use of F0 as a cue to voicing than CI listeners, and CI listeners showed greater utilization of initial F0 during voicing identification in their bimodal (CI + HA) condition than in the CI-alone condition. The results demonstrate the adjunctive benefit of acoustic hearing from the non-implanted ear for listening conditions involving spectrotemporally complex stimuli. This finding may lead to the development of a clinically feasible perceptual weighting task that could inform clinicians about bimodal efficacy and the risk-benefit profile associated with bilateral CI recommendation.
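Cue integration of this sort is commonly quantified by fitting a logistic model of voicing responses against the two cues; the sketch below follows that convention (it is not necessarily the authors' analysis), using standardized coefficients to index relative reliance on VOT versus F0.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cue_weights(vot_ms, f0_hz, resp_pa):
    """Fit P(/pa/) = sigmoid(b0 + b1*VOT + b2*F0) on z-scored cues.
    resp_pa is 0/1 per trial; |b1| vs |b2| compares cue reliance."""
    X = np.column_stack([vot_ms, f0_hz])
    X = (X - X.mean(axis=0)) / X.std(axis=0)  # z-score: comparable betas
    model = LogisticRegression().fit(X, resp_pa)
    return model.coef_[0]  # (b_vot, b_f0)
```

On this analysis, the reported pattern would appear as a larger F0 coefficient for NH listeners than for bimodal CI users, and a larger F0 coefficient in the CI + HA condition than in the CI-alone condition.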
Affiliation(s)
- Esteban Buz
- Department of Psychology and Human Development, Vanderbilt University, Nashville, Tennessee 37203, USA
- Nichole C Dwyer
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida 33620, USA
- Wei Lai
- Department of Psychology and Human Development, Vanderbilt University, Nashville, Tennessee 37203, USA
- Duane G Watson
- Department of Psychology and Human Development, Vanderbilt University, Nashville, Tennessee 37203, USA
- René H Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37203, USA
4
Fleming JT, Winn MB. Strategic perceptual weighting of acoustic cues for word stress in listeners with cochlear implants, acoustic hearing, or simulated bimodal hearing. J Acoust Soc Am 2022;152:1300. [PMID: 36182279] [PMCID: PMC9439712] [DOI: 10.1121/10.0013890]
Abstract
Perception of word stress is an important aspect of recognizing speech, guiding the listener toward candidate words based on the perceived stress pattern. Cochlear implant (CI) signal processing is likely to disrupt some of the available cues for word stress, particularly vowel quality and pitch contour changes. In this study, we used a cue weighting paradigm to investigate differences in stress cue weighting patterns between participants listening with CIs and those with normal hearing (NH). We found that participants with CIs gave less weight to frequency-based pitch and vowel quality cues than NH listeners but compensated by upweighting vowel duration and intensity cues. Nonetheless, CI listeners' stress judgments were also significantly influenced by vowel quality and pitch, and they modulated their usage of these cues depending on the specific word pair in a manner similar to NH participants. In a series of separate online experiments with NH listeners, we simulated aspects of bimodal hearing by combining low-pass filtered speech with a vocoded signal. In these conditions, participants upweighted pitch and vowel quality cues relative to a fully vocoded control condition, suggesting that bimodal listening holds promise for restoring the stress cue weighting patterns exhibited by listeners with NH.
Affiliation(s)
- Justin T Fleming
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, Minnesota 55455, USA
- Matthew B Winn
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, Minnesota 55455, USA
5
Hayes NA, Davidson LS, Uchanski RM. Considerations in pediatric device candidacy: An emphasis on spoken language. Cochlear Implants Int 2022;23:300-308. [PMID: 35637623] [PMCID: PMC9339525] [DOI: 10.1080/14670100.2022.2079189]
Abstract
As cochlear implant (CI) candidacy expands to consider children with more residual hearing, the use of a CI and a hearing aid (HA) at the non-implanted ear (bimodal devices) is increasing. This case study examines the contributions of acoustic and electric input to speech perception performance for a pediatric bimodal device user (S1) who is a borderline bilateral cochlear implant candidate. S1 completed a battery of perceptual tests in CI-only, HA-only and bimodal conditions. Since CIs and HAs differ in their ability to transmit cues related to segmental and suprasegmental perception, both types of perception were tested. Performance in all three device conditions was generally similar across tests, showing no clear device-condition benefit. Further, S1's spoken language performance was compared to that of a large group of children with prelingual severe-profound hearing loss who used two devices from a young age, at least one of which was a CI. S1's speech perception and language scores were average or above average compared to these other pediatric CI recipients. Both segmental and suprasegmental speech perception, and spoken language skills, should be examined to determine the broad-scale performance level of bimodal recipients, especially when deciding whether to move from bimodal devices to bilateral CIs.
Affiliation(s)
- Natalie A Hayes
- Program in Audiology and Communication Science, Department of Otolaryngology, Washington University School of Medicine, St. Louis, MO, USA
- Lisa S Davidson
- Program in Audiology and Communication Science, Department of Otolaryngology, Washington University School of Medicine, St. Louis, MO, USA
- Rosalie M Uchanski
- Program in Audiology and Communication Science, Department of Otolaryngology, Washington University School of Medicine, St. Louis, MO, USA
6
Yoon YS, Drew C. Effects of the intensified frequency and time ranges on consonant enhancement in bilateral cochlear implant and hearing aid users. Front Psychol 2022;13:918914. [PMID: 36051201] [PMCID: PMC9426545] [DOI: 10.3389/fpsyg.2022.918914]
Abstract
A previous study demonstrated that consonant recognition improved significantly in normal-hearing listeners when useful frequency and time ranges were intensified by 6 dB. The goal of this study was to determine whether bilateral cochlear implant (BCI) and bilateral hearing aid (BHA) users experienced similar enhancement of consonant recognition with these intensified spectral and temporal cues in noise. In total, 10 BCI and 10 BHA users participated in a recognition test using 14 consonants. For each consonant, we used the frequency and time ranges that are critical for its recognition (called "target frequency and time ranges"), identified from normal-hearing listeners. Then, a signal processing tool called the articulation-index gram (AI-Gram) was utilized to add a 6 dB gain to the target frequency and time ranges. Consonant recognition was measured monaurally and binaurally under two signal processing conditions (unprocessed and with intensified target frequency and time ranges) at +5 and +10 dB signal-to-noise ratios and in quiet. We focused on three comparisons between the BCI and BHA groups: (1) AI-Gram benefits (i.e., before and after intensifying target ranges by 6 dB), (2) enhancement in binaural benefits (better performance with bilateral devices compared to the better ear alone) via the AI-Gram processing, and (3) reduction in binaural interference (poorer performance with bilateral devices compared to the better ear alone) via the AI-Gram processing. The results showed that the mean AI-Gram benefit improved significantly for the BCI (max 5.9%) and BHA (max 5.2%) groups. However, the mean binaural benefit was not improved after AI-Gram processing. Individual data showed wide ranges of AI-Gram benefit (max −1 to 23%) and binaural benefit (max −7.6 to 13%) for both groups. Individual data also showed a decrease in binaural interference in both groups after AI-Gram processing. These results suggest that the frequency and time ranges intensified by the AI-Gram processing contribute to consonant enhancement for monaural and binaural listening with both BCI and BHA technologies. The intensified frequency and time ranges helped to reduce binaural interference but contributed less to the synergistic binaural benefit in consonant recognition for both groups.
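The core AI-Gram operation described here, a 6 dB boost confined to a target time-frequency region, can be approximated by STFT modification. A rough sketch with placeholder parameters follows; the actual AI-Gram tool differs in detail.

```python
import numpy as np
from scipy.signal import stft, istft

def boost_region(x, fs, t_range, f_range, gain_db=6.0):
    """Apply gain_db to STFT bins inside the target time (s)
    and frequency (Hz) ranges, then resynthesize."""
    f, t, Z = stft(x, fs=fs, nperseg=512)
    fmask = (f >= f_range[0]) & (f <= f_range[1])
    tmask = (t >= t_range[0]) & (t <= t_range[1])
    Z[np.ix_(fmask, tmask)] *= 10 ** (gain_db / 20)  # +6 dB ~ x2 amplitude
    _, y = istft(Z, fs=fs, nperseg=512)
    return y
```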
7
Yoon YS, Whitaker G, Lee YS. Effects of the Configuration of Hearing Loss on Consonant Perception between Simulated Bimodal and Electric Acoustic Stimulation Hearing. J Am Acad Audiol 2021;32:521-527. [PMID: 34965598] [DOI: 10.1055/s-0041-1731699]
Abstract
BACKGROUND Cochlear implant technology allows for acoustic and electric stimulations to be combined across ears (bimodal) and within the same ear (electric acoustic stimulation [EAS]). The mechanisms used to integrate speech acoustics may differ between bimodal and EAS hearing, and the configuration of hearing loss might be an important factor for integration. Thus, it is important to differentiate the effects of different configurations of hearing loss on bimodal or EAS benefit in speech perception (the difference in performance between combined acoustic and electric stimulation and the better single stimulation alone). PURPOSE Using acoustic simulation, we determined how consonant recognition was affected by different configurations of hearing loss in bimodal and EAS hearing. RESEARCH DESIGN A mixed design was used with one between-subject variable (simulated bimodal group vs. simulated EAS group) and one within-subject variable (acoustic stimulation alone, electric stimulation alone, and combined acoustic and electric stimulations). STUDY SAMPLE Twenty adult subjects (10 per group) with normal hearing were recruited. DATA COLLECTION AND ANALYSIS Consonant perception was measured unilaterally or bilaterally in quiet. For the acoustic stimulation, four different simulations of hearing loss were created by band-pass filtering consonants with a fixed lower cutoff frequency of 100 Hz and one of four upper cutoff frequencies: 250, 500, 750, or 1,000 Hz. For the electric stimulation, an eight-channel noise vocoder was used to generate a typical spectral mismatch by using fixed input (200-7,000 Hz) and output (1,000-7,000 Hz) frequency ranges. The effects of simulated hearing loss on consonant recognition were compared between the two groups. RESULTS Significant bimodal and EAS benefits occurred regardless of the configuration of hearing loss and hearing technology (bimodal vs. EAS). Place information was better transmitted in EAS hearing than in bimodal hearing. CONCLUSION These results suggest that the configuration of hearing loss is not a significant factor in integrating consonant information between acoustic and electric stimulations. The results also suggest that the mechanisms used to integrate consonant information may be similar between bimodal and EAS hearing.
Affiliation(s)
- Yang-Soo Yoon
- Department of Communication Sciences and Disorders, Baylor University, Waco, Texas
- George Whitaker
- Division of Otolaryngology, Baylor Scott & White Medical Center, Temple, Texas
- Yune S Lee
- Department of Speech, Language, and Hearing, School of Behavioral and Brain Sciences, Callier Center for Communication Disorders, The University of Texas at Dallas, Richardson, Texas
8
King K, Dillon MT, O'Connell BP, Brown KD, Park LR. Spatial Release From Masking in Bimodal and Bilateral Pediatric Cochlear Implant Recipients. Am J Audiol 2021;30:67-75. [PMID: 33259722] [DOI: 10.1044/2020_aja-20-00051]
Abstract
Purpose Traditional clinical measures of cochlear implant (CI) recipient performance may not fully evaluate the benefit of bimodal listening (hearing aid contralateral to a CI). The clinical assessment of spatial release from masking (SRM) may be a sensitive measure of the benefit of listening with bimodal stimulation. This study compared the SRM of pediatric bimodal and bilateral CI listeners using a clinically feasible method, and investigated variables that may contribute to speech recognition performance with spatially separated maskers. Method Forty pediatric bimodal (N = 20) and bilateral CI (N = 20) participants were assessed in their best aided listening condition on sentence recognition in a four-talker masker. Testing was completed with target and masker colocated at 0° azimuth, and with the masker directed at 90° to either ear. SRM was calculated as the difference in performance between the colocated and each 90° condition. A two-way mixed-design analysis of variance was used to compare performance between groups in the three masker conditions. Multiple regression analyses were conducted to investigate potential predictors for SRM asymmetry including hearing history, unaided thresholds, word recognition, duration of device use, and acoustic bandwidth. Results Both groups demonstrated SRM, with significantly better recognition in each 90° condition as compared to the colocated condition. The groups did not differ significantly in SRM. The multiple regression analyses did not reveal any significant predictors of SRM asymmetry. Conclusions Bimodal and bilateral CI listeners demonstrated similar amounts of SRM. While no specific variables predicted SRM asymmetry in bimodal listeners, pediatric bimodal and bilateral CI recipients should expect similar amounts of SRM regardless of the side of the masker. SRM asymmetry in pediatric bimodal listeners may signal a need for consideration of a second CI.
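SRM as defined here is a simple difference score; a small sketch assuming percent-correct scores, so positive values indicate benefit from spatial separation (the numbers are hypothetical).

```python
def spatial_release(colocated, separated):
    """SRM = separated-masker score minus colocated score."""
    return separated - colocated

srm_left = spatial_release(colocated=45.0, separated=62.0)   # 17 points
srm_right = spatial_release(colocated=45.0, separated=58.0)  # 13 points
asymmetry = abs(srm_left - srm_right)                        # 4 points
```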
Affiliation(s)
- Kaylene King
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill
- Margaret T. Dillon
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill
- Brendan P. O'Connell
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill
- Kevin D. Brown
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill
- Lisa R. Park
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill
9
Speech Segregation in Active Middle Ear Stimulation: Masking Release With Changing Fundamental Frequency. Ear Hear 2020;42:709-717. [PMID: 33369941] [DOI: 10.1097/aud.0000000000000973]
Abstract
OBJECTIVES Temporal fine structure information such as low-frequency sounds including the fundamental frequency (F0) is important to separate different talkers in noisy environments. Speech perception in noise is negatively affected by reduced temporal fine structure resolution in cochlear hearing loss. It has been shown that normal-hearing (NH) people as well as cochlear implant patients with preserved acoustic low-frequency hearing benefit from F0 differences between concurrent talkers. Though patients with an active middle ear implant (AMEI) report better sound quality compared with hearing aids, they often struggle when listening in noise. The primary objective was to evaluate whether or not patients with a Vibrant Soundbridge AMEI were able to benefit from F0 differences in a concurrent talker situation and if the effect was comparable to NH individuals. DESIGN A total of 13 AMEI listeners and 13 NH individuals were included. A modified variant of the Oldenburg sentence test was used to emulate a concurrent talker scenario. One sentence from the test corpus served as the masker and the remaining sentences as target speech. The F0 of the masker sentence was shifted upward by 4, 8, and 12 semitones. The target and masker sentences were presented simultaneously to the study subjects and the speech reception threshold was assessed by adaptively varying the masker level. To evaluate any impact of the occlusion effect on speech perception, AMEI listeners were tested in two configurations: with a plugged ear-canal contralateral to the implant side, indicated as AMEIcontra, or with both ears plugged, indicated as AMEIboth. RESULTS In both study groups, speech perception improved when the F0 difference between target and masker increased. This was significant when the difference was at least 8 semitones; the F0-based release from masking was 3.0 dB in AMEIcontra (p = 0.009) and 2.9 dB in AMEIboth (p = 0.015), compared with 5.6 dB in NH listeners (p < 0.001). A difference of 12 semitones revealed an F0-based release from masking of 3.5 dB in the AMEIcontra (p = 0.002) and 3.4 dB in the AMEIboth (p = 0.003) condition, compared with 5.0 dB in NH individuals (p < 0.001). CONCLUSIONS Though AMEI users deal with problems resulting from cochlear damage, hearing amplification with the implant enables a masking release based on F0 differences when the F0 difference between target and masker sentences is at least 8 semitones. Additional occlusion of the ear canal on the implant side did not affect speech performance. The current results complement what is known about the benefit of F0 cues in acoustic low-frequency hearing.
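The semitone manipulation follows the standard relation f' = f0 · 2^(s/12); a quick illustration:

```python
def shift_semitones(f0_hz, semitones):
    """Shift a fundamental frequency upward by the given semitones."""
    return f0_hz * 2 ** (semitones / 12)

# A 120 Hz masker F0 shifted by 4, 8, and 12 semitones:
print([round(shift_semitones(120, s), 1) for s in (4, 8, 12)])
# -> [151.2, 190.5, 240.0]
```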
10
Abstract
OBJECTIVE To assess the benefits of bimodal listening (i.e., addition of contralateral hearing aid) for cochlear implant (CI) users on real-world tasks involving high-talker variability speech materials, environmental sounds, and self-reported quality of life (quality of hearing) in listeners' own best-aided conditions. STUDY DESIGN Cross-sectional study between groups. SETTING Outpatient hearing clinic. PATIENTS Fifty experienced adult CI users divided into groups based on normal daily listening conditions (i.e., best-aided conditions): unilateral CI (CI), unilateral CI with contralateral HA (bimodal listening; CIHA), or bilateral CI (CICI). INTERVENTION Task-specific measures of speech recognition with low (Harvard Standard Sentences) and high (Perceptually Robust English Sentence Test Open-set corpus) talker variability, environmental sound recognition (Familiar Environmental Sounds Test-Identification), and hearing-related quality of life (Nijmegen Cochlear Implant Questionnaire). MAIN OUTCOME MEASURES Test group differences among CI, CIHA, and CICI conditions. RESULTS No group effect was observed for speech recognition with low or high-talker variability, or hearing-related quality of life. Bimodal listeners demonstrated a benefit in environmental sound recognition compared with unilateral CI listeners, with a trend of greater benefit than the bilateral CI group. There was also a visual trend for benefit on high-talker variability speech recognition. CONCLUSIONS Findings provide evidence that bimodal listeners demonstrate stronger environmental sound recognition compared with unilateral CI listeners, and support the idea that there are additional advantages to bimodal listening after implantation other than speech recognition measures, which are at risk of being lost if considering bilateral implantation.
11
Factors Affecting Bimodal Benefit in Pediatric Mandarin-Speaking Chinese Cochlear Implant Users. Ear Hear 2020;40:1316-1327. [PMID: 30882534] [DOI: 10.1097/aud.0000000000000712]
Abstract
OBJECTIVES While fundamental frequency (F0) cues are important to both lexical tone perception and multitalker segregation, F0 cues are poorly perceived by cochlear implant (CI) users. Adding low-frequency acoustic hearing via a hearing aid in the contralateral ear may improve CI users' F0 perception. For English-speaking CI users, contralateral acoustic hearing has been shown to improve perception of target speech in noise and in competing talkers. For tonal languages such as Mandarin Chinese, F0 information is lexically meaningful. Given competing F0 information from multiple talkers and lexical tones, contralateral acoustic hearing may be especially beneficial for Mandarin-speaking CI users' perception of competing speech. DESIGN Bimodal benefit (CI+hearing aid - CI-only) was evaluated in 11 pediatric Mandarin-speaking Chinese CI users. In experiment 1, speech recognition thresholds (SRTs) were adaptively measured using a modified coordinated response measure test; subjects were required to correctly identify 2 keywords from among 10 choices in each category. SRTs were measured with CI-only or bimodal listening in the presence of steady state noise (SSN) or competing speech with the same (M+M) or different voice gender (M+F). Unaided thresholds in the non-CI ear and demographic factors were compared with speech performance. In experiment 2, SRTs were adaptively measured in SSN for recognition of 5 keywords, a more difficult listening task than the 2-keyword recognition task in experiment 1. RESULTS In experiment 1, SRTs were significantly lower for SSN than for competing speech in both the CI-only and bimodal listening conditions. There was no significant difference between CI-only and bimodal listening for SSN and M+F (p > 0.05); SRTs were significantly lower for CI-only than for bimodal listening for M+M (p < 0.05), suggesting bimodal interference. Subjects were able to make use of voice gender differences for bimodal listening (p < 0.05) but not for CI-only listening (p > 0.05). Unaided thresholds in the non-CI ear were positively correlated with bimodal SRTs for M+M (p < 0.006) but not for SSN or M+F. No significant correlations were observed between any demographic variables and SRTs (p > 0.05 in all cases). In experiment 2, SRTs were significantly lower with two than with five keywords (p < 0.05). A significant bimodal benefit was observed only for the 5-keyword condition (p < 0.05). CONCLUSIONS With the CI alone, subjects experienced greater interference with competing speech than with SSN and were unable to use voice gender difference to segregate talkers. For the coordinated response measure task, subjects experienced no bimodal benefit and even bimodal interference when competing talkers were the same voice gender. A bimodal benefit in SSN was observed for the five-keyword condition but not for the two-keyword condition, suggesting that bimodal listening may be more beneficial as the difficulty of the listening task increases. The present data suggest that bimodal benefit may depend on the type of masker and/or the difficulty of the listening task.
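A minimal sketch of an adaptive SRT track of the kind described; the 1-up/1-down rule, step size, simulated listener, and stopping rule here are generic assumptions, not the study's exact procedure.

```python
import random

def adaptive_srt(trial_correct, start_snr=10.0, step=2.0, n_trials=20):
    """1-up/1-down staircase on SNR: decrease after a correct trial,
    increase after an error; converges near 50% correct."""
    snr, track = start_snr, []
    for _ in range(n_trials):
        track.append(snr)
        snr += -step if trial_correct(snr) else step
    return sum(track[-8:]) / 8  # SRT estimate: mean of last 8 SNRs

# Demo: simulated listener whose true SRT is 2 dB SNR
sim = lambda snr: random.random() < 1 / (1 + 10 ** ((2.0 - snr) / 4))
print(adaptive_srt(sim, n_trials=40))
```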
12
The Effect of Hearing Aid Bandwidth and Configuration of Hearing Loss on Bimodal Speech Recognition in Cochlear Implant Users. Ear Hear 2019;40:621-635. [PMID: 30067559] [DOI: 10.1097/aud.0000000000000638]
Abstract
OBJECTIVES (1) To determine the effect of hearing aid (HA) bandwidth on bimodal speech perception in a group of unilateral cochlear implant (CI) patients with diverse degrees and configurations of hearing loss in the nonimplanted ear, (2) to determine whether there are demographic and audiometric characteristics that would help to determine the appropriate HA bandwidth for a bimodal patient. DESIGN Participants were 33 experienced bimodal device users with postlingual hearing loss. Twenty-three of them had better speech perception with the CI than the HA (CI>HA group) and 10 had better speech perception with the HA than the CI (HA>CI group). Word recognition in sentences (AzBio sentences at +10 dB signal-to-noise ratio presented at 0° azimuth) and in isolation [CNC (consonant-nucleus-consonant) words] was measured in unimodal conditions [CI alone or HAWB, which indicates HA alone in the wideband (WB) condition] and in bimodal conditions (BMWB, BM2k, BM1k, and BM500) as the bandwidth of an actual HA was reduced from WB to 2 kHz, 1 kHz, and 500 Hz. Linear mixed-effects modeling was used to quantify the relationship between speech recognition and listening condition and to assess how audiometric or demographic covariates might influence this relationship in each group. RESULTS For the CI>HA group, AzBio scores were significantly higher (on average) in all bimodal conditions than in the best unimodal condition (CI alone) and were highest at the BMWB condition. For CNC scores, on the other hand, there was no significant improvement over the CI-alone condition in any of the bimodal conditions. The opposite pattern was observed in the HA>CI group. CNC word scores were significantly higher in the BM2k and BMWB conditions than in the best unimodal condition (HAWB), but none of the bimodal conditions were significantly better than the best unimodal condition for AzBio sentences (and some of the restricted bandwidth conditions were actually worse). Demographic covariates did not interact significantly with bimodal outcomes, but some of the audiometric variables did. For CI>HA participants with a flatter audiometric configuration and better mid-frequency hearing, bimodal AzBio scores were significantly higher than the CI-alone score with the WB setting (BMWB) but not with other bandwidths. In contrast, CI>HA participants with more steeply sloping hearing loss and poorer mid-frequency thresholds (≥82.5 dB) had significantly higher bimodal AzBio scores in all bimodal conditions, and the BMWB did not differ significantly from the restricted bandwidth conditions. HA>CI participants with mild low-frequency hearing loss showed the highest levels of bimodal improvement over the best unimodal condition on CNC words. They were also less affected by HA bandwidth reduction compared with HA>CI participants with poorer low-frequency thresholds. CONCLUSIONS The pattern of bimodal performance as a function of the HA bandwidth was found to be consistent with the degree and configuration of hearing loss for both patients with CI>HA performance and for those with HA>CI performance. Our results support fitting the HA for all bimodal patients with the widest bandwidth consistent with effective audibility.
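A sketch of the kind of linear mixed-effects analysis described, using statsmodels with synthetic data and hypothetical column names (the study's model additionally included audiometric covariates and interactions).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per subject x condition
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "subject": np.repeat(np.arange(33), 5),
    "condition": np.tile(["CI", "BM500", "BM1k", "BM2k", "BMWB"], 33),
    "score": rng.uniform(30, 90, 165),
})
# Random intercept per subject; fixed effect of listening condition
result = smf.mixedlm("score ~ C(condition)", df, groups="subject").fit()
print(result.summary())
```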
13
Bilateral Cochlear Implantation Versus Bimodal Hearing in Patients With Functional Residual Hearing: A Within-subjects Comparison of Audiologic Performance and Quality of Life. Otol Neurotol 2019. [PMID: 29533331] [DOI: 10.1097/mao.0000000000001750]
Abstract
OBJECTIVE Evaluate performance and quality of life changes after sequential bilateral cochlear implantation in patients with preoperative residual hearing functioning in a bimodal hearing configuration. STUDY DESIGN Retrospective analysis using within-subjects repeated measures design. SETTING Tertiary otologic center. PATIENTS Twenty-two adult patients with bilateral sensorineural hearing loss who used bimodal hearing before a second cochlear implant (CI), meeting the following criteria: 1) preoperative residual hearing (≤80 dB HL at 250 Hz) in the ear to be implanted, 2) implantation with current CI technology (2013-2016), 3) consonant-nucleus-consonant (CNC) speech recognition testing in the bimodal condition preoperatively and bilateral CI condition postoperatively. INTERVENTION Cochlear implantation. MAIN OUTCOME MEASURES CNC and AzBio sentence scores in quiet and noise (+5 SNR). Subjective measures of communication difficulty and sound quality were also administered. RESULTS Twenty-two patients (mean 64 yr, 68% men) were included. At an average follow-up of 11.8 months, CNC scores in the bilateral CI condition (mean 63%, standard deviation [SD] = 22) were significantly better than preoperative bimodal scores with repeated measures analysis (mean 55%, SD = 22) (p = 0.03). AzBio scores in quiet were also higher with bilateral CI (mean 76%, SD = 24) compared with bimodal listening (mean 69%, SD = 29) (p = 0.0007). Global abbreviated profile of hearing aid benefit (APHAB) and overall speech, spatial, and qualities of hearing (SSQ) scores exhibited significant improvement following bilateral implantation (p = 0.006 for both analyses). CONCLUSIONS For patients using a bimodal hearing configuration with substantial residual hearing in the non-CI ear, bilateral cochlear implantation yields improved audiologic performance and better subjective quality of life, irrespective of the ability to preserve acoustic hearing during second-side implantation.
14
Rødvik AK, Tvete O, Torkildsen JVK, Wie OB, Skaug I, Silvola JT. Consonant and Vowel Confusions in Well-Performing Children and Adolescents With Cochlear Implants, Measured by a Nonsense Syllable Repetition Test. Front Psychol 2019;10:1813. [PMID: 31474900] [PMCID: PMC6702790] [DOI: 10.3389/fpsyg.2019.01813]
Abstract
Although the majority of early implanted, profoundly deaf children with cochlear implants (CIs), will develop correct pronunciation if they receive adequate oral language stimulation, many of them have difficulties with perceiving minute details of speech. The main aim of this study is to measure the confusion of consonants and vowels in well-performing children and adolescents with CIs. The study also aims to investigate how age at onset of severe to profound deafness influences perception. The participants are 36 children and adolescents with CIs (18 girls), with a mean (SD) age of 11.6 (3.0) years (range: 5.9-16.0 years). Twenty-nine of them are prelingually deaf and seven are postlingually deaf. Two reference groups of normal-hearing (NH) 6- and 13-year-olds are included. Consonant and vowel perception is measured by repetition of 16 bisyllabic vowel-consonant-vowel nonsense words and nine monosyllabic consonant-vowel-consonant nonsense words in an open-set design. For the participants with CIs, consonants were mostly confused with consonants with the same voicing and manner, and the mean (SD) voiced consonant repetition score, 63.9 (10.6)%, was considerably lower than the mean (SD) unvoiced consonant score, 76.9 (9.3)%. There was a devoicing bias for the stops; unvoiced stops were confused with other unvoiced stops and not with voiced stops, and voiced stops were confused with both unvoiced stops and other voiced stops. The mean (SD) vowel repetition score was 85.2 (10.6)% and there was a bias in the confusions of [i:] and [y:]; [y:] was perceived as [i:] twice as often as [y:] was repeated correctly. Subgroup analyses showed no statistically significant differences between the consonant scores for pre- and postlingually deaf participants. For the NH participants, the consonant repetition scores were substantially higher and the difference between voiced and unvoiced consonant repetition scores considerably lower than for the participants with CIs. The participants with CIs obtained scores close to ceiling on vowels and real-word monosyllables, but their perception was substantially lower for voiced consonants. This may partly be related to limitations in the CI technology for the transmission of low-frequency sounds, such as insertion depth of the electrode and ability to convey temporal information.
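Confusion analyses like this reduce to cross-tabulating target against repeated consonant; a small sketch with hypothetical trial data:

```python
import pandas as pd

# Hypothetical repetition-task responses: target vs. repeated consonant
trials = pd.DataFrame({
    "target":   ["k", "k", "m", "p", "b", "b"],
    "response": ["t", "k", "n", "t", "p", "b"],
})
confusions = pd.crosstab(trials["target"], trials["response"])
score = (trials["target"] == trials["response"]).mean() * 100
print(confusions)
print(f"repetition score: {score:.1f}%")
```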
Affiliation(s)
- Arne Kirkhorn Rødvik
- Department of Special Needs Education, Institute of Educational Sciences, University of Oslo, Oslo, Norway; Cochlear Implant Unit, Department of Otorhinolaryngology, Division of Surgery and Clinical Neuroscience, Oslo University Hospital, Oslo, Norway
- Ole Tvete
- Cochlear Implant Unit, Department of Otorhinolaryngology, Division of Surgery and Clinical Neuroscience, Oslo University Hospital, Oslo, Norway
- Janne von Koss Torkildsen
- Department of Special Needs Education, Institute of Educational Sciences, University of Oslo, Oslo, Norway
- Ona Bø Wie
- Department of Special Needs Education, Institute of Educational Sciences, University of Oslo, Oslo, Norway; Cochlear Implant Unit, Department of Otorhinolaryngology, Division of Surgery and Clinical Neuroscience, Oslo University Hospital, Oslo, Norway
- Juha Tapio Silvola
- Department of Special Needs Education, Institute of Educational Sciences, University of Oslo, Oslo, Norway; Cochlear Implant Unit, Department of Otorhinolaryngology, Division of Surgery and Clinical Neuroscience, Oslo University Hospital, Oslo, Norway; Ear, Nose, and Throat Department, Division of Surgery, Akershus University Hospital, Lørenskog, Norway
15
Tao DD, Liu JS, Yang ZD, Wilson BS, Zhou N. Bilaterally Combined Electric and Acoustic Hearing in Mandarin-Speaking Listeners: The Population With Poor Residual Hearing. Trends Hear 2019;22:2331216518757892. [PMID: 29451107] [PMCID: PMC5818091] [DOI: 10.1177/2331216518757892]
Abstract
The hearing loss criterion for cochlear implant candidacy in mainland China is extremely stringent (bilateral severe to profound hearing loss), resulting in few patients with substantial residual hearing in the nonimplanted ear. The main objective of the current study was to examine the benefit of bimodal hearing in typical Mandarin-speaking implant users, who have poorer residual hearing in the nonimplanted ear than participants in English-language studies. Seventeen Mandarin-speaking bimodal users with pure-tone averages of ∼80 dB HL participated in the study. Sentence recognition in quiet and in noise as well as tone and word recognition in quiet were measured in monaural and bilateral conditions. There was no significant bimodal effect for word and sentence recognition in quiet. Small bimodal effects were observed for sentence recognition in noise (6%) and tone recognition (4%). The magnitude of both effects was correlated with unaided thresholds at frequencies near voice fundamental frequencies (F0s). A weak correlation between the bimodal effect for word recognition and unaided thresholds at frequencies higher than F0s was identified. These results were consistent with previous findings that showed more robust bimodal benefits for speech recognition tasks that require higher spectral resolution than speech recognition in quiet. The significant but small F0-related bimodal benefit was also consistent with the limited acoustic hearing in the nonimplanted ear of the current subject sample, who are representative of the bimodal users in mainland China. These results advocate for a more relaxed implant candidacy criterion to be used in mainland China.
Affiliation(s)
- Duo-Duo Tao
- Department of Ear, Nose, and Throat, The First Affiliated Hospital of Soochow University, Suzhou, China
- Ji-Sheng Liu
- Department of Ear, Nose, and Throat, The First Affiliated Hospital of Soochow University, Suzhou, China
- Zhen-Dong Yang
- Department of Ear, Nose, and Throat, The First Affiliated Hospital of Soochow University, Suzhou, China
- Blake S Wilson
- Departments of Surgery, Biomedical Engineering, and Electrical and Computer Engineering, Duke University, Durham, NC, USA
- Ning Zhou
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, NC, USA
16
Di Stadio A, Dipietro L, Toffano R, Burgio F, De Lucia A, Ippolito V, Garofalo S, Ricci G, Martines F, Trabalzini F, Della Volpe A. Working Memory Function in Children with Single Side Deafness Using a Bone-Anchored Hearing Implant: A Case-Control Study. Audiol Neurootol 2018;23:238-244. [PMID: 30439708] [DOI: 10.1159/000493722]
Abstract
The importance of good hearing function for preserving memory and cognitive abilities has been shown in the adult population, but studies on the pediatric population are currently lacking. This study aims to evaluate the effects of a bone-anchored hearing implant (BAHI) on speech perception, speech processing, and memory abilities in children with single side deafness (SSD). We enrolled n = 25 children with SSD and assessed them prior to BAHI implantation and at 1-month and 3-month follow-ups after BAHI implantation, using tests of perception in silence and perception in phonemic confusion, dictation in silence and noise, and working memory and short-term memory function in conditions of silence and noise. We also enrolled and evaluated n = 15 children with normal hearing. We found a statistically significant difference in performance between healthy children and children with SSD before BAHI implantation in the scores of all tests. Three months after BAHI implantation, the performance of children with SSD was comparable to that of healthy subjects as assessed by tests of speech perception, working memory, and short-term memory function in the silence condition, while differences persisted in the scores of the dictation test (both in silence and noise conditions) and of the working memory function test in the noise condition. Our data suggest that, in children with SSD, BAHI improves speech perception and memory. Speech rehabilitation may be necessary to further improve speech processing.
Affiliation(s)
- Arianna Di Stadio
- Neurology and Neuropsychology Unit, IRCCS, San Camillo Hospital, Venice, Italy
- Roberta Toffano
- Neurology and Neuropsychology Unit, IRCCS, San Camillo Hospital, Venice, Italy
- Francesca Burgio
- Neurology and Neuropsychology Unit, IRCCS, San Camillo Hospital, Venice, Italy
- Antonietta De Lucia
- Cochlear Implant Unit, Children Hospital Santobono-Pausilipon, Naples, Italy
- Valentina Ippolito
- Cochlear Implant Unit, Children Hospital Santobono-Pausilipon, Naples, Italy
- Sabina Garofalo
- Cochlear Implant Unit, Children Hospital Santobono-Pausilipon, Naples, Italy
- Giampietro Ricci
- Otolaryngology Department, University of Perugia, Perugia, Italy
- Antonio Della Volpe
- Cochlear Implant Unit, Children Hospital Santobono-Pausilipon, Naples, Italy
17
Wenrich KA, Davidson LS, Uchanski RM. Segmental and Suprasegmental Perception in Children Using Hearing Aids. J Am Acad Audiol 2018;28:901-912. [PMID: 29130438] [PMCID: PMC5726292] [DOI: 10.3766/jaaa.16105]
Abstract
BACKGROUND Suprasegmental perception (perception of stress, intonation, "how something is said" and "who says it") and segmental speech perception (perception of individual phonemes or perception of "what is said") are perceptual abilities that provide the foundation for the development of spoken language and effective communication. While there are numerous studies examining segmental perception in children with hearing aids (HAs), there are far fewer studies examining suprasegmental perception, especially for children with greater degrees of residual hearing. Examining the relation between acoustic hearing thresholds, and both segmental and suprasegmental perception for children with HAs, may ultimately enable better device recommendations (bilateral HAs, bimodal devices [one CI and one HA in opposite ears], bilateral CIs) for a particular degree of residual hearing. Examining both types of speech perception is important because segmental and suprasegmental cues are affected differentially by the type of hearing device(s) used (i.e., cochlear implant [CI] and/or HA). Additionally, suprathreshold measures, such as frequency resolution ability, may partially predict benefit from amplification and may assist audiologists in making hearing device recommendations. PURPOSE The purpose of this study is to explore the relationship between audibility (via hearing thresholds and speech intelligibility indices), and segmental and suprasegmental speech perception for children with HAs. A secondary goal is to explore the relationships among frequency resolution ability (via spectral modulation detection [SMD] measures), segmental and suprasegmental speech perception, and receptive language in these same children. RESEARCH DESIGN A prospective cross-sectional design. STUDY SAMPLE Twenty-three children, ages 4 yr 11 mo to 11 yr 11 mo, participated in the study. Participants were recruited from pediatric clinic populations, oral schools for the deaf, and mainstream schools. DATA COLLECTION AND ANALYSIS Audiological history and hearing device information were collected from participants and their families. Segmental and suprasegmental speech perception, SMD, and receptive vocabulary skills were assessed. Correlations were calculated to examine the significance (p < 0.05) of relations between audibility and outcome measures. RESULTS Measures of audibility and segmental speech perception are not significantly correlated, while low-frequency pure-tone average (unaided) is significantly correlated with suprasegmental speech perception. SMD is significantly correlated with all measures (measures of audibility, segmental and suprasegmental perception and vocabulary). Lastly, although age is not significantly correlated with measures of audibility, it is significantly correlated with all other outcome measures. CONCLUSIONS The absence of a significant correlation between audibility and segmental speech perception might be attributed to overall audibility being maximized through well-fit HAs. The significant correlation between low-frequency unaided audibility and suprasegmental measures is likely due to the strong, predominantly low-frequency nature of suprasegmental acoustic properties. Frequency resolution ability, via SMD performance, is significantly correlated with all outcomes and requires further investigation; its significant correlation with vocabulary suggests that linguistic ability may be partially related to frequency resolution ability. Last, all of the outcome measures are significantly correlated with age, suggestive of developmental effects.
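Spectral modulation detection stimuli are typically rippled noises, i.e., noise whose spectral envelope varies sinusoidally on a log-frequency axis. A rough sketch of generating such a stimulus follows; all parameters are illustrative, not those of the test battery used here.

```python
import numpy as np

def ripple_noise(dur=0.5, fs=44100, ripples_per_oct=1.0, depth_db=10.0,
                 f_lo=400.0, f_hi=5000.0, seed=0):
    """Sum log-spaced random-phase tones whose levels follow a
    sinusoid across log2(frequency): a spectrally rippled noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * fs)) / fs
    freqs = np.geomspace(f_lo, f_hi, 200)
    phase = rng.uniform(0, 2 * np.pi, freqs.size)
    level_db = (depth_db / 2) * np.sin(
        2 * np.pi * ripples_per_oct * np.log2(freqs / f_lo))
    amps = 10 ** (level_db / 20)
    x = (amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t
                                + phase[:, None])).sum(axis=0)
    return x / np.abs(x).max()  # peak-normalize
```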
Affiliation(s)
- Kaitlyn A. Wenrich
- Program in Audiology and Communication Science, Washington University School of Medicine, St. Louis, MO
- Lisa S. Davidson
- Program in Audiology and Communication Science, Washington University School of Medicine, St. Louis, MO
- Department of Otolaryngology, Washington University School of Medicine, St. Louis, MO
- Central Institute for the Deaf, St. Louis, MO
- Rosalie M. Uchanski
- Program in Audiology and Communication Science, Washington University School of Medicine, St. Louis, MO
- Department of Otolaryngology, Washington University School of Medicine, St. Louis, MO
18
Combined Electric and Acoustic Stimulation With Hearing Preservation: Effect of Cochlear Implant Low-Frequency Cutoff on Speech Understanding and Perceived Listening Difficulty. Ear Hear 2018;38:539-553. [PMID: 28301392] [DOI: 10.1097/aud.0000000000000418]
Abstract
OBJECTIVE The primary objective of this study was to assess the effect of electric and acoustic overlap for speech understanding in typical listening conditions using semidiffuse noise. DESIGN This study used a within-subjects, repeated measures design including 11 experienced adult implant recipients (13 ears) with functional residual hearing in the implanted and nonimplanted ear. The aided acoustic bandwidth was fixed and the low-frequency cutoff for the cochlear implant (CI) was varied systematically. Assessments were completed in the R-SPACE sound-simulation system, which includes a semidiffuse restaurant noise originating from eight loudspeakers placed circumferentially about the subject's head. AzBio sentences were presented at 67 dBA with the signal-to-noise ratio varying between +10 and 0 dB, determined individually to yield approximately 50 to 60% correct for the CI-alone condition with full CI bandwidth. Listening conditions for all subjects included CI alone, bimodal (CI + contralateral hearing aid), and bilateral-aided electric and acoustic stimulation (EAS; CI + bilateral hearing aid). Low-frequency cutoffs both below and above the original "clinical software recommendation" frequency were tested for all patients, in all conditions. Subjects estimated listening difficulty for all conditions using listener ratings based on a visual analog scale. RESULTS Three primary findings were that (1) there was statistically significant benefit of preserved acoustic hearing in the implanted ear for most overlap conditions, (2) the default clinical software recommendation rarely yielded the highest level of speech recognition (1 of 13 ears), and (3) greater EAS overlap than that provided by the clinical recommendation yielded significant improvements in speech understanding. CONCLUSIONS For standard-electrode CI recipients with preserved hearing, spectral overlap of acoustic and electric stimuli yielded significantly better speech understanding and less listening effort in a laboratory-based, restaurant-noise simulation. In conclusion, EAS patients may derive more benefit from greater acoustic and electric overlap than given in current software fitting recommendations, which are based solely on audiometric threshold. These data have larger scientific implications, as previous studies may not have assessed outcomes with optimized EAS parameters, thereby underestimating the benefit afforded by hearing preservation.
19
Rødvik AK, von Koss Torkildsen J, Wie OB, Storaker MA, Silvola JT. Consonant and Vowel Identification in Cochlear Implant Users Measured by Nonsense Words: A Systematic Review and Meta-Analysis. J Speech Lang Hear Res 2018;61:1023-1050. [PMID: 29623340] [DOI: 10.1044/2018_jslhr-h-16-0463]
Abstract
PURPOSE The purpose of this systematic review and meta-analysis was to establish a baseline of the vowel and consonant identification scores in prelingually and postlingually deaf users of multichannel cochlear implants (CIs) tested with consonant-vowel-consonant and vowel-consonant-vowel nonsense syllables. METHOD Six electronic databases were searched for peer-reviewed articles reporting consonant and vowel identification scores in CI users measured by nonsense words. Relevant studies were independently assessed and screened by 2 reviewers. Consonant and vowel identification scores were presented in forest plots and compared between studies in a meta-analysis. RESULTS Forty-seven articles with 50 studies, including 647 participants (581 postlingually deaf and 66 prelingually deaf), met the inclusion criteria of this study. The mean performance on vowel identification tasks for the postlingually deaf CI users was 76.8% (N = 5), which was higher than the mean performance for the prelingually deaf CI users (67.7%; N = 1). The mean performance on consonant identification tasks for the postlingually deaf CI users was higher (58.4%; N = 44) than for the prelingually deaf CI users (46.7%; N = 6). The most common consonant confusions were found between those with same manner of articulation (/k/ as /t/, /m/ as /n/, and /p/ as /t/). CONCLUSIONS Baseline consonant identification scores were established for both prelingually and postlingually deaf CI users, with no statistically significant differences between the two groups. The consonants that were incorrectly identified were typically confused with other consonants with the same acoustic properties, namely, voicing, duration, nasality, and silent gaps. A univariate metaregression model, although not statistically significant, indicated that duration of implant use in postlingually deaf adults predicts a substantial portion of their consonant identification ability. As there is no ceiling effect, a nonsense syllable identification test may be a useful addition to the standard test battery in audiology clinics when assessing the speech perception of CI users.
Affiliation(s)
- Arne Kirkhorn Rødvik: Department of Special Needs Education, Faculty of Educational Sciences, University of Oslo, Norway
- Ona Bø Wie: Department of Special Needs Education, Faculty of Educational Sciences, University of Oslo, Norway; Oslo University Hospital, Norway
- Marit Aarvaag Storaker: Institute of Basic Medical Sciences, Faculty of Medicine, University of Oslo, Norway; Lillehammer Hospital, Norway
- Juha Tapio Silvola: Oslo University Hospital, Norway; Institute of Basic Medical Sciences, Faculty of Medicine, University of Oslo, Norway; Akershus University Hospital, Lørenskog, Norway
20
Sheffield SW, Jahn K, Gifford RH. Preserved acoustic hearing in cochlear implantation improves speech perception. J Am Acad Audiol 2015; 26:145-54. [PMID: 25690775] [DOI: 10.3766/jaaa.26.2.5]
Abstract
BACKGROUND With improved surgical techniques and electrode design, an increasing number of cochlear implant (CI) recipients have preserved acoustic hearing in the implanted ear, thereby resulting in bilateral acoustic hearing. There are currently no guidelines, however, for clinicians with respect to audiometric criteria and the recommendation of amplification in the implanted ear. The acoustic bandwidth necessary to obtain speech perception benefit from acoustic hearing in the implanted ear is unknown. Additionally, it is important to determine whether, and in which listening environments, acoustic hearing in both ears provides more benefit than hearing in just one ear, even with limited residual hearing. PURPOSE The purposes of this study were to (1) determine whether acoustic hearing in an ear with a CI provides as much speech perception benefit as an equivalent bandwidth of acoustic hearing in the nonimplanted ear, and (2) determine whether acoustic hearing in both ears provides more benefit than hearing in just one ear. RESEARCH DESIGN A repeated-measures, within-participant design was used to compare performance across listening conditions. STUDY SAMPLE Seven adults with CIs and bilateral residual acoustic hearing (hearing preservation) were recruited for the study. DATA COLLECTION AND ANALYSIS Consonant-nucleus-consonant word recognition was tested in four conditions: CI alone, CI + acoustic hearing in the nonimplanted ear, CI + acoustic hearing in the implanted ear, and CI + bilateral acoustic hearing. A series of low-pass filters were used to examine the effects of acoustic bandwidth through an insert earphone with amplification. Benefit was defined as the difference among conditions. The benefit of bilateral acoustic hearing was tested in both diffuse and single-source background noise. Data were analyzed using repeated-measures analysis of variance. RESULTS Similar benefit was obtained for equivalent acoustic frequency bandwidth in either ear. Acoustic hearing in the nonimplanted ear provided more benefit than the implanted ear only in the wideband condition, most likely because of better audiometric thresholds (above 500 Hz) in the nonimplanted ear. Bilateral acoustic hearing provided more benefit than unilateral hearing in either ear alone, but only in diffuse background noise. CONCLUSIONS Results support the use of amplification in the implanted ear if residual hearing is present. The benefit of bilateral acoustic hearing (hearing preservation) should not be tested in quiet or with spatially coincident speech and noise, but rather in spatially separated speech and noise (e.g., diffuse background noise).
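The low-pass filtering used here to restrict acoustic bandwidth can be sketched as follows; the Butterworth order and the `lowpass` helper are assumptions for illustration, not the study's reported filter:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass(x, fs, cutoff_hz, order=8):
    # Restrict the acoustic bandwidth of a signal (illustrative filter;
    # the study's exact cutoffs and slopes are not reproduced here).
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

fs = 22050
x = np.random.randn(fs)  # stand-in for a recorded word
conditions = {f"<{fc} Hz": lowpass(x, fs, fc) for fc in (250, 500, 750)}
```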
Affiliation(s)
- Kelly Jahn: Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN
- René H Gifford: Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN
21
Top-Down Processes in Simulated Electric-Acoustic Hearing: The Effect of Linguistic Context on Bimodal Benefit for Temporally Interrupted Speech. Ear Hear 2016; 37:582-92. [PMID: 27007220] [DOI: 10.1097/aud.0000000000000298]
Abstract
OBJECTIVES Previous studies have documented the benefits of bimodal hearing as compared with a cochlear implant alone, but most have focused on the importance of bottom-up, low-frequency cues. The purpose of the present study was to evaluate the role of top-down processing in bimodal hearing by measuring the effect of sentence context on bimodal benefit for temporally interrupted sentences. It was hypothesized that low-frequency acoustic cues would facilitate the use of contextual information in the interrupted sentences, resulting in greater bimodal benefit for the higher context (CUNY) sentences than for the lower context (IEEE) sentences. DESIGN Young normal-hearing listeners were tested in simulated bimodal listening conditions in which noise band vocoded sentences were presented to one ear with or without low-pass (LP) filtered speech or LP harmonic complexes (LPHCs) presented to the contralateral ear. Speech recognition scores were measured in three listening conditions: vocoder-alone, vocoder combined with LP speech, and vocoder combined with LPHCs. Temporally interrupted versions of the CUNY and IEEE sentences were used to assess listeners' ability to fill in missing segments of speech by using top-down linguistic processing. Sentences were square-wave gated at a rate of 5 Hz with a 50% duty cycle. Three vocoder channel conditions were tested for each type of sentence (8, 12, and 16 channels for CUNY; 12, 16, and 32 channels for IEEE) and bimodal benefit was compared for similar amounts of spectral degradation (matched-channel comparisons) and similar ranges of baseline performance. Two gain measures, percentage-point gain and normalized gain, were examined. RESULTS Significant effects of context on bimodal benefit were observed when LP speech was presented to the residual-hearing ear. For the matched-channel comparisons, CUNY sentences showed significantly higher normalized gains than IEEE sentences for both the 12-channel (20 points higher) and 16-channel (18 points higher) conditions. For the individual gain comparisons that used a similar range of baseline performance, CUNY sentences showed bimodal benefits that were significantly higher (7 percentage points, or 15 points normalized gain) than those for IEEE sentences. The bimodal benefits observed here for temporally interrupted speech were considerably smaller than those observed in an earlier study that used continuous speech. Furthermore, unlike previous findings for continuous speech, no bimodal benefit was observed when LPHCs were presented to the residual-hearing ear. CONCLUSIONS Findings indicate that linguistic context has a significant influence on bimodal benefit for temporally interrupted speech and support the hypothesis that low-frequency acoustic information presented to the residual-hearing ear facilitates the use of top-down linguistic processing in bimodal hearing. However, bimodal benefit is reduced for temporally interrupted speech as compared with continuous speech, suggesting that listeners' ability to restore missing speech information depends not only on top-down linguistic knowledge but also on the quality of the bottom-up sensory input.
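The 5-Hz, 50%-duty-cycle square-wave gating described above is straightforward to reproduce. A minimal sketch (`interrupt` is a hypothetical helper; no onset/offset ramps are applied, although studies often add them):

```python
import numpy as np

def interrupt(x, fs, rate_hz=5.0, duty=0.5):
    # Square-wave gate a signal: on during the first `duty` fraction of each
    # gating period, off during the remainder.
    t = np.arange(len(x)) / fs
    gate = ((t * rate_hz) % 1.0) < duty
    return x * gate

fs = 16000
sentence = np.random.randn(2 * fs)  # stand-in for a recorded sentence
gated = interrupt(sentence, fs)     # 5 Hz, 50% duty cycle by default
```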
22
The Effects of Acoustic Bandwidth on Simulated Bimodal Benefit in Children and Adults with Normal Hearing. Ear Hear 2016; 37:282-8. [PMID: 26901264] [DOI: 10.1097/aud.0000000000000281]
Abstract
OBJECTIVES The primary purpose of this study was to examine the effect of acoustic bandwidth on bimodal benefit for speech recognition in normal-hearing children with a cochlear implant (CI) simulation in one ear and low-pass filtered stimuli in the contralateral ear. The effect of acoustic bandwidth on bimodal benefit in children was compared with the pattern of adults with normal hearing. Our hypothesis was that children would require a wider acoustic bandwidth than adults to (1) derive bimodal benefit, and (2) obtain asymptotic bimodal benefit. DESIGN Nineteen children (6 to 12 years) and 10 adults with normal hearing participated in the study. Speech recognition was assessed via recorded sentences presented in a 20-talker babble. The AzBio female-talker sentences were used for the adults and the pediatric AzBio sentences (BabyBio) were used for the children. A CI simulation was presented to the right ear and low-pass filtered stimuli were presented to the left ear with the following cutoff frequencies: 250, 500, 750, 1000, and 1500 Hz. RESULTS The primary findings were (1) adults achieved higher performance than children when presented with only low-pass filtered acoustic stimuli, (2) adults and children performed similarly in all the simulated CI and bimodal conditions, (3) children gained significant bimodal benefit with the addition of low-pass filtered speech at 250 Hz, and (4) unlike previous studies completed with adult bimodal patients, adults and children with normal hearing gained additional significant bimodal benefit with cutoff frequencies up to 1500 Hz, with most of the additional benefit gained from energy below 750 Hz. CONCLUSIONS Acoustic bandwidth effects on simulated bimodal benefit were similar in children and adults with normal hearing. Should the current results generalize to children with CIs, pediatric CI recipients may derive significant benefit from minimal acoustic hearing (<250 Hz) in the nonimplanted ear, with increasing benefit from broader bandwidth. Knowledge of the effect of acoustic bandwidth on bimodal benefit in children may help direct clinical decisions regarding a second CI, continued bimodal hearing, and even optimizing acoustic amplification for the nonimplanted ear.
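The CI simulation in such studies is typically a noise-band vocoder. A minimal eight-channel sketch, assuming log-spaced bands and Hilbert envelopes; corner frequencies, filter orders, envelope smoothing, and band-level normalization vary across studies and are simplified here:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocoder(x, fs, n_ch=8, lo=200.0, hi=7000.0):
    # Split into log-spaced bands, extract each band's temporal envelope,
    # and use it to modulate band-limited noise (band levels unnormalized
    # for brevity; real implementations usually match output band levels).
    edges = np.geomspace(lo, hi, n_ch + 1)
    out = np.zeros_like(x)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))
        carrier = sosfiltfilt(sos, np.random.randn(len(x)))
        out += env * carrier
    return out

fs = 22050
sim = noise_vocoder(np.random.randn(fs), fs)  # stand-in input signal
```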
23
Integration of acoustic and electric hearing is better in the same ear than across ears. Sci Rep 2017; 7:12500. [PMID: 28970567] [PMCID: PMC5624923] [DOI: 10.1038/s41598-017-12298-3]
Abstract
Advances in cochlear implant (CI) technology allow for acoustic and electric hearing to be combined within the same ear (electric-acoustic stimulation, or EAS) and/or across ears (bimodal listening). Integration efficiency (IE; the ratio between observed and predicted performance for acoustic-electric hearing) can be used to estimate how well acoustic and electric hearing are combined. The goal of this study was to evaluate factors that affect IE in EAS and bimodal listening. Vowel recognition was measured in normal-hearing subjects listening to simulations of unimodal, EAS, and bimodal listening. The input/output frequency range for acoustic hearing was 0.1–0.6 kHz. For CI simulations, the output frequency range was 1.2–8.0 kHz to simulate a shallow insertion depth and the input frequency range was varied to provide increasing amounts of speech information and tonotopic mismatch. Performance was best when acoustic and electric hearing were combined in the same ear. IE was significantly better for EAS than for bimodal listening; IE was sensitive to tonotopic mismatch for EAS, but not for bimodal listening. These simulation results suggest acoustic and electric hearing may be more effectively and efficiently combined within rather than across ears, and that tonotopic mismatch should be minimized to maximize the benefit of acoustic-electric hearing, especially for EAS.
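Integration efficiency can be sketched directly from its definition as observed over predicted performance. The prediction below assumes independent errors across the two inputs, which is one common choice; the study's exact predicted-performance model may differ:

```python
def integration_efficiency(p_acoustic, p_electric, p_combined):
    # Predicted combined score under an independent-errors assumption
    # (an illustrative model, not necessarily the study's).
    p_predicted = 1 - (1 - p_acoustic) * (1 - p_electric)
    return p_combined / p_predicted

# IE > 1 suggests super-additive integration; IE < 1 suggests a deficit.
print(integration_efficiency(0.40, 0.55, 0.80))  # ~1.10 (hypothetical scores)
```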
25
Masking release with changing fundamental frequency: Electric acoustic stimulation resembles normal hearing subjects. Hear Res 2017; 350:226-234. [DOI: 10.1016/j.heares.2017.05.004]
26
Yang HI, Zeng FG. Bimodal benefits in Mandarin-speaking cochlear implant users with contralateral residual acoustic hearing. Int J Audiol 2017; 56:S17-S22. [DOI: 10.1080/14992027.2017.1321789]
Affiliation(s)
- Hsin-I Yang: Department of Biomedical Engineering and Center for Hearing Research, University of California Irvine, Irvine, CA, USA
- Fan-Gang Zeng: Department of Biomedical Engineering and Center for Hearing Research, University of California Irvine, Irvine, CA, USA
27
Zhou X, Li H, Galvin JJ, Fu QJ. Effects of insertion depth on spatial speech perception in noise for simulations of cochlear implants and single-sided deafness. Int J Audiol 2016; 56:S41-S48. [PMID: 27367147] [DOI: 10.1080/14992027.2016.1197426]
Abstract
OBJECTIVE This study evaluated the effects of insertion depth on spatial speech perception in noise for simulations of cochlear implants (CIs) and single-sided deafness (SSD). DESIGN Mandarin speech recognition thresholds were adaptively measured in five listening conditions and four spatial configurations. The original signal was delivered to the left ear. The right ear received either no input, one of three CI simulations in which the insertion depth was varied, or the original signal. Speech and noise were presented at either front, left, or right. STUDY SAMPLE Ten Mandarin-speaking normal-hearing listeners with pure-tone thresholds less than 20 dB HL. RESULTS Relative to no input in the right ear, the CI simulations provided significant improvements in head shadow benefit for all insertion depths, as well as better spatial release from masking (SRM) for the deepest simulated insertion. There were no significant improvements in summation or squelch for any of the CI simulations. CONCLUSIONS The benefits of cochlear implantation were largely limited to head shadow, with some benefit for SRM. The greatest benefits were observed for the deepest simulated CI insertion, suggesting that reducing mismatch between acoustic and electric hearing may increase the benefit of cochlear implantation.
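The spatial measures reported here are simple differences between speech reception thresholds (SRTs). A sketch under common definitions (the study's exact formulas may differ; function names and values are hypothetical, and lower SRT means better performance):

```python
def srm(srt_colocated_db, srt_separated_db):
    # Spatial release from masking: SRT improvement when the noise moves
    # away from the target (positive = benefit).
    return srt_colocated_db - srt_separated_db

def head_shadow(srt_noise_near_db, srt_noise_far_db):
    # Head-shadow benefit: SRT improvement when the noise is on the far
    # side of the head relative to the tested ear.
    return srt_noise_near_db - srt_noise_far_db

print(srm(-2.0, -6.5))  # 4.5 dB of spatial release (hypothetical values)
```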
Affiliation(s)
- Xiaoqing Zhou: Department of Otolaryngology, Southwest Hospital, Third Military Medical University, Gao Tan Yan Street, Shaping Ba District, Chongqing 400038, China
- Huajun Li: Department of Otolaryngology, Southwest Hospital, Third Military Medical University, Gao Tan Yan Street, Shaping Ba District, Chongqing 400038, China
- John J Galvin: Department of Head and Neck Surgery, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA 90095, USA
- Qian-Jie Fu: Department of Head and Neck Surgery, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA 90095, USA
- Wei Yuan: Department of Otolaryngology, Southwest Hospital, Third Military Medical University, Gao Tan Yan Street, Shaping Ba District, Chongqing 400038, China
28
Kong YY, Winn MB, Poellmann K, Donaldson GS. Discriminability and Perceptual Saliency of Temporal and Spectral Cues for Final Fricative Consonant Voicing in Simulated Cochlear-Implant and Bimodal Hearing. Trends Hear 2016; 20:2331216516652145. [PMID: 27317666] [PMCID: PMC5562340] [DOI: 10.1177/2331216516652145]
Abstract
Multiple redundant acoustic cues can contribute to the perception of a single phonemic contrast. This study investigated the effect of spectral degradation on the discriminability and perceptual saliency of acoustic cues for identification of word-final fricative voicing in "loss" versus "laws", and possible changes that occurred when low-frequency acoustic cues were restored. Three acoustic cues that contribute to the word-final /s/-/z/ contrast (first formant frequency [F1] offset, vowel-consonant duration ratio, and consonant voicing duration) were systematically varied in synthesized words. A discrimination task measured listeners' ability to discriminate differences among stimuli within a single cue dimension. A categorization task examined the extent to which listeners make use of a given cue to label a syllable as "loss" versus "laws" when multiple cues are available. Normal-hearing listeners were presented with stimuli that were either unprocessed, processed with an eight-channel noise-band vocoder to approximate spectral degradation in cochlear implants, or low-pass filtered. Listeners were tested in four listening conditions: unprocessed, vocoder, low-pass, and a combined vocoder + low-pass condition that simulated bimodal hearing. Results showed a negative impact of spectral degradation on F1 cue discrimination and a trading relation between spectral and temporal cues in which listeners relied more heavily on the temporal cues for "loss-laws" identification when spectral cues were degraded. Furthermore, the addition of low-frequency fine-structure cues in simulated bimodal hearing increased the perceptual saliency of the F1 cue for "loss-laws" identification compared with vocoded speech. Findings suggest an interplay between the quality of sensory input and cue importance.
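Perceptual cue weighting in categorization tasks like this one is often indexed by standardized logistic-regression coefficients. A sketch with simulated trial data (all names and values hypothetical; this is not the study's analysis):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical trials: columns are (F1 offset, vowel/consonant duration
# ratio, voicing duration), z-scored; y = 1 for "laws" responses.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = (X @ np.array([0.3, 1.2, 0.9]) + 0.5 * rng.standard_normal(200)) > 0

# With standardized predictors, coefficient magnitudes give one rough
# index of relative cue weight.
weights = LogisticRegression().fit(X, y).coef_[0]
print(dict(zip(["F1_offset", "dur_ratio", "voicing_dur"], weights.round(2))))
```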
Affiliation(s)
- Ying-Yee Kong: Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, USA
- Matthew B Winn: Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- Katja Poellmann: Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, USA
- Gail S Donaldson: Department of Communication Sciences & Disorders, University of South Florida, Tampa, FL, USA
29
Oh SH, Donaldson GS, Kong YY. The role of continuous low-frequency harmonicity cues for interrupted speech perception in bimodal hearing. J Acoust Soc Am 2016; 139:1747. [PMID: 27106322] [PMCID: PMC4833731] [DOI: 10.1121/1.4945747]
Abstract
Low-frequency acoustic cues have been shown to enhance speech perception by cochlear-implant users, particularly when target speech occurs in a competing background. The present study examined the extent to which a continuous representation of low-frequency harmonicity cues contributes to bimodal benefit in simulated bimodal listeners. Experiment 1 examined the benefit of restoring a continuous temporal envelope to the low-frequency ear while the vocoder ear received a temporally interrupted stimulus. Experiment 2 examined the effect of providing continuous harmonicity cues in the low-frequency ear as compared to restoring a continuous temporal envelope in the vocoder ear. Findings indicate that bimodal benefit for temporally interrupted speech increases when continuity is restored to either or both ears. The primary benefit appears to stem from the continuous temporal envelope in the low-frequency region providing additional phonetic cues related to manner and F1 frequency; a secondary contribution is provided by low-frequency harmonicity cues when a continuous representation of the temporal envelope is present in the low-frequency ear or in both ears. The continuous temporal envelope and harmonicity cues of low-frequency speech are thought to support bimodal benefit by facilitating identification of word and syllable boundaries, and by restoring partial phonetic cues that occur during gaps in the temporally interrupted stimulus.
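The "continuous temporal envelope" cue discussed above can be illustrated with a standard Hilbert-envelope extraction; the smoothing cutoff is an assumption for illustration:

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def temporal_envelope(x, fs, smooth_hz=50.0):
    # Magnitude of the analytic signal, low-pass smoothed: one standard way
    # to obtain a slowly varying temporal envelope (parameters illustrative).
    env = np.abs(hilbert(x))
    sos = butter(4, smooth_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, env)

env = temporal_envelope(np.random.randn(16000), 16000)  # stand-in signal
```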
Affiliation(s)
- Soo Hee Oh: Department of Communication Sciences and Disorders, University of South Florida, PCD 1017, 4202 East Fowler Avenue, Tampa, Florida 33620, USA
- Gail S Donaldson: Department of Communication Sciences and Disorders, University of South Florida, PCD 1017, 4202 East Fowler Avenue, Tampa, Florida 33620, USA
- Ying-Yee Kong: Department of Communication Sciences and Disorders, Northeastern University, 226 Forsyth Building, 360 Huntington Avenue, Boston, Massachusetts 02115, USA
30
A Within-Subject Comparison of Bimodal Hearing, Bilateral Cochlear Implantation, and Bilateral Cochlear Implantation With Bilateral Hearing Preservation: High-Performing Patients. Otol Neurotol 2015; 36:1331-7. [PMID: 26164443] [DOI: 10.1097/mao.0000000000000804]
Abstract
OBJECTIVE To compare speech understanding with bimodal hearing and bilateral cochlear implants (CIs). STUDY DESIGN Within-subjects, repeated-measures. METHODS Speech understanding was assessed in the following conditions: unilateral hearing aid (HA) in the non-implanted ear, unilateral CI, bimodal (CI + HA), and bilateral CI. In addition, three participants had bilateral hearing preservation and were also tested with bilateral CIs and bilateral HAs (BiBi). SETTING Tertiary academic CI center. PATIENTS Eight adult sequential bilateral recipients who, despite achieving very high performance with the first CI, self-selected bilateral cochlear implantation. INTERVENTION(S) Bilateral cochlear implantation. MAIN OUTCOME MEASURE(S) Speech understanding on the adult minimum speech test battery as well as sentences in semidiffuse noise using the R-SPACE system. RESULTS Bilateral CIs afforded significant individual improvement in a complex listening environment, even for individuals demonstrating near-perfect sentence scores with both the first CI alone and the bimodal condition. The three BiBi participants demonstrated additional significant benefit over the bilateral CI condition, presumably because of the availability of interaural time difference cues. CONCLUSIONS These data suggest that, for noisy environments, adding a second implant can significantly improve speech understanding, even for high-performing unilateral CI users with bimodal hearing. In diffuse noise conditions, bilateral acoustic hearing can yield benefits beyond those offered by bilateral implantation.
31
Oosthuizen DJJ, Hanekom JJ. Fuzzy information transmission analysis for continuous speech features. J Acoust Soc Am 2015; 137:1983-1994. [PMID: 25920849] [DOI: 10.1121/1.4916198]
Abstract
Feature information transmission analysis (FITA) estimates information transmitted by an acoustic feature by assigning tokens to categories according to the feature under investigation and comparing within-category to between-category confusions. FITA was initially developed for categorical features (e.g., voicing) for which the category assignments arise from the feature definition. When used with continuous features (e.g., formants), it may happen that pairs of tokens in different categories are more similar than pairs of tokens in the same category. The estimated transmitted information may be sensitive to category boundary location and the selected number of categories. This paper proposes a fuzzy approach to FITA that provides a smoother transition between categories and compares its sensitivity to grouping parameters with that of the traditional approach. The fuzzy FITA was found to be sufficiently robust to boundary location to allow automation of category boundary selection. Traditional and fuzzy FITA were found to be sensitive to the number of categories. This is inherent to the mechanism of isolating a feature by dividing tokens into categories, so that transmitted information values calculated using different numbers of categories should not be compared. Four categories are recommended for continuous features when twelve tokens are used.
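The core of FITA is a transmitted-information computation over a stimulus-response confusion matrix. A sketch of the classic (non-fuzzy) version, after Miller and Nicely, with a hypothetical four-category F1 confusion matrix as recommended above; the fuzzy variant proposed in the paper replaces hard category membership with graded weights:

```python
import numpy as np

def transmitted_information(confusions):
    # Relative transmitted information T(x;y)/H(x) from a confusion matrix
    # (rows = stimulus categories, columns = response categories).
    p = confusions / confusions.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    mask = p > 0
    t = np.sum(p[mask] * np.log2(p[mask] / np.outer(px, py)[mask]))
    hx = -np.sum(px[px > 0] * np.log2(px[px > 0]))
    return t / hx

# Hypothetical confusion counts for four F1 categories:
c = np.array([[18, 2, 0, 0], [3, 14, 3, 0], [0, 4, 13, 3], [0, 0, 2, 18]])
print(round(transmitted_information(c), 2))
```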
Affiliation(s)
- Dirk J J Oosthuizen: Department of Electrical, Electronic and Computer Engineering, University of Pretoria, University Road, Pretoria 0002, South Africa
- Johan J Hanekom: Department of Electrical, Electronic and Computer Engineering, University of Pretoria, University Road, Pretoria 0002, South Africa
32
Li Y, Zhang G, Galvin JJ, Fu QJ. Mandarin speech perception in combined electric and acoustic stimulation. PLoS One 2014; 9:e112471. [PMID: 25386962] [PMCID: PMC4227806] [DOI: 10.1371/journal.pone.0112471]
Abstract
For deaf individuals with residual low-frequency acoustic hearing, combined use of a cochlear implant (CI) and hearing aid (HA) typically provides better speech understanding than with either device alone. Because of coarse spectral resolution, CIs do not provide fundamental frequency (F0) information that contributes to understanding of tonal languages such as Mandarin Chinese. The HA can provide good representation of F0 and, depending on the range of aided acoustic hearing, first and second formant (F1 and F2) information. In this study, Mandarin tone, vowel, and consonant recognition in quiet and noise was measured in 12 adult Mandarin-speaking bimodal listeners with the CI-only and with the CI+HA. Tone recognition was significantly better with the CI+HA in noise, but not in quiet. Vowel recognition was significantly better with the CI+HA in quiet, but not in noise. There was no significant difference in consonant recognition between the CI-only and the CI+HA in quiet or in noise. There was a wide range in bimodal benefit, with improvements often greater than 20 percentage points in some tests and conditions. The bimodal benefit was compared to CI subjects’ HA-aided pure-tone average (PTA) thresholds between 250 and 2000 Hz; subjects were divided into two groups: “better” PTA (<50 dB HL) or “poorer” PTA (>50 dB HL). The bimodal benefit differed significantly between groups only for consonant recognition. The bimodal benefit for tone recognition in quiet was significantly correlated with CI experience, suggesting that bimodal CI users learn to better combine low-frequency spectro-temporal information from acoustic hearing with temporal envelope information from electric hearing. Given the small number of subjects in this study (n = 12), further research with Chinese bimodal listeners may provide more information regarding the contribution of acoustic and electric hearing to tonal language perception.
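The PTA grouping used above is simple arithmetic over the aided thresholds. A sketch (the helper name, frequencies, and threshold values are illustrative):

```python
import numpy as np

def aided_pta(thresholds_db_hl):
    # Average aided thresholds over 250-2000 Hz and assign the
    # "better"/"poorer" group used above (50 dB HL split).
    pta = float(np.mean(list(thresholds_db_hl.values())))
    return pta, ("better" if pta < 50 else "poorer")

print(aided_pta({250: 30, 500: 40, 1000: 55, 2000: 65}))  # (47.5, 'better')
```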
Affiliation(s)
- Yongxin Li: Department of Otolaryngology, Head and Neck Surgery, Beijing TongRen Hospital, Capital Medical University, Beijing, 100730, P. R. China
- Guoping Zhang: Department of Otolaryngology, Head and Neck Surgery, Beijing TongRen Hospital, Capital Medical University, Beijing, 100730, P. R. China
- John J. Galvin: Department of Head and Neck Surgery, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California, United States of America
- Qian-Jie Fu: Department of Head and Neck Surgery, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California, United States of America
33
Evaluation of the Bimodal Benefit in a Large Cohort of Cochlear Implant Subjects Using a Contralateral Hearing Aid. Otol Neurotol 2014; 35:e240-4. [DOI: 10.1097/mao.0000000000000529]
34
Mason M, Kokkinakis K. Perception of consonants in reverberation and noise by adults fitted with bimodal devices. J Speech Lang Hear Res 2014; 57:1512-1520. [PMID: 24686826] [PMCID: PMC4126860] [DOI: 10.1044/2014_jslhr-h-13-0127]
Abstract
PURPOSE The purpose of this study was to evaluate the contribution of a contralateral hearing aid to the perception of consonants, in terms of voicing, manner, and place-of-articulation cues in reverberation and noise by adult cochlear implantees aided by bimodal fittings. METHOD Eight postlingually deafened adult cochlear implant (CI) listeners with a fully inserted CI in 1 ear and low-frequency hearing in the other ear were tested on consonant perception. They were presented with consonant stimuli processed in the following experimental conditions: 1 quiet condition, 2 different reverberation times (0.3 s and 1.0 s), and the combination of 2 reverberation times with a single signal-to-noise ratio (5 dB). RESULTS Consonant perception improved significantly when listening in combination with a contralateral hearing aid as opposed to listening with a CI alone in 0.3 s and 1.0 s of reverberation. Significantly higher scores were also noted when noise was added to 0.3 s of reverberation. CONCLUSIONS A considerable benefit was noted from the additional acoustic information in conditions of reverberation and reverberation plus noise. The bimodal benefit observed was more pronounced for voicing and manner of articulation than for place of articulation.
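Reverberant test conditions like these are usually created by convolving stimuli with room impulse responses. A crude sketch using a synthetic exponentially decaying noise impulse response for a target RT60; a stand-in for the measured or modeled responses a study would actually use:

```python
import numpy as np

def add_reverb(x, fs, rt60_s=0.3):
    # Exponentially decaying noise tail whose energy drops ~60 dB at RT60
    # (amplitude decay constant 6.9 ~= 3*ln(10)).
    n = int(rt60_s * fs)
    t = np.arange(n) / fs
    ir = np.random.randn(n) * np.exp(-6.9 * t / rt60_s)
    return np.convolve(x, ir)[: len(x)]

fs = 16000
dry = np.random.randn(fs)            # stand-in for a consonant stimulus
wet = add_reverb(dry, fs, rt60_s=1.0)
```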
35
Effect of hearing aid bandwidth on speech recognition performance of listeners using a cochlear implant and contralateral hearing aid (bimodal hearing). Ear Hear 2013; 34:553-61. [PMID: 23632973] [DOI: 10.1097/aud.0b013e31828e86e8]
Abstract
OBJECTIVES The purpose of this study was to determine how the bandwidth of the hearing aid (HA) fitting affects bimodal speech recognition of listeners with a cochlear implant (CI) in one ear and severe-to-profound hearing loss in the unimplanted ear (but with residual hearing sufficient for wideband amplification using the National Acoustic Laboratories-Revised, Profound [NAL-RP] prescriptive guidelines; unaided thresholds no poorer than 95 dB HL through 2000 Hz). DESIGN Recognition of sentence material in quiet and in noise was measured with the CI alone and with CI plus HA as the amplification provided by the HA in the high- and mid-frequency regions was systematically reduced from the wideband condition (NAL-RP prescription). Modified bandwidths included upper frequency cutoffs of 2000, 1000, or 500 Hz. RESULTS On average, significant bimodal benefit was obtained when the HA provided amplification at all frequencies with aidable residual hearing. Limiting the HA bandwidth to low-frequency amplification only (below 1000 Hz) did not yield significant improvements in performance over listening with the CI alone. CONCLUSIONS These data suggest the importance of providing amplification in the HA of bimodal users across as wide a frequency region as audiometric thresholds permit.
36
Sheffield SW, Gifford RH. The benefits of bimodal hearing: effect of frequency region and acoustic bandwidth. Audiol Neurootol 2014; 19:151-63. [PMID: 24556850] [DOI: 10.1159/000357588]
Abstract
We examined the effects of acoustic bandwidth on bimodal benefit for speech recognition in adults with a cochlear implant (CI) in one ear and low-frequency acoustic hearing in the contralateral ear. The primary aims were to (1) replicate Zhang et al. [Ear Hear 2010;31:63-69] with a steeper filter roll-off to examine the low-pass bandwidth required to obtain bimodal benefit for speech recognition and expand results to include different signal-to-noise ratios (SNRs) and talker genders, (2) determine whether the bimodal benefit increased with acoustic low-pass bandwidth and (3) determine whether an equivalent bimodal benefit was obtained with acoustic signals of similar low-pass and pass band bandwidth, but different center frequencies. Speech recognition was assessed using words presented in quiet and sentences in noise (+10, +5 and 0 dB SNRs). Acoustic stimuli presented to the nonimplanted ear were filtered into the following bands: <125, 125-250, <250, 250-500, <500, 250-750, <750 Hz and wide-band (full, nonfiltered bandwidth). The primary findings were: (1) the minimum acoustic low-pass bandwidth that produced a significant bimodal benefit was <250 Hz for male talkers in quiet and for female talkers in multitalker babble, but <125 Hz for male talkers in background noise, and the observed bimodal benefit did not vary significantly with SNR; (2) the bimodal benefit increased systematically with acoustic low-pass bandwidth up to <750 Hz for a male talker in quiet and female talkers in noise and up to <500 Hz for male talkers in noise, and (3) a similar bimodal benefit was obtained with low-pass and band-pass-filtered stimuli with different center frequencies (e.g. <250 vs. 250-500 Hz), meaning multiple frequency regions contain useful cues for bimodal benefit. Clinical implications are that (1) all aidable frequencies should be amplified in individuals with bimodal hearing, and (2) verification of audibility at 125 Hz is unnecessary unless it is the only aidable frequency.
Affiliation(s)
- Sterling W Sheffield: Cochlear Implant Research Laboratory, Vanderbilt Bill Wilkerson Center, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
38
Reduced acoustic and electric integration in concurrent-vowel recognition. Sci Rep 2013; 3:1419. [PMID: 23474462] [PMCID: PMC3593224] [DOI: 10.1038/srep01419]
Abstract
The present study used concurrent-vowel recognition to measure the integration efficiency of combined acoustic and electric stimulation in eight actual cochlear-implant subjects who had normal or residual low-frequency acoustic hearing contralaterally. Although these subjects could recognize single vowels (>90% correct) with either electric or combined stimulation, their performance degraded significantly in concurrent-vowel recognition. Compared with previous simulation results from normal-hearing subjects, the present subjects produced similar performance with acoustic or electric stimulation alone, but significantly lower performance with combined stimulation. A probabilistic model found reduced integration efficiency between acoustic and electric stimulation in the present subjects. Integration efficiency was negatively correlated with residual acoustic hearing in the non-implanted ear and with duration of deafness in the implanted ear. The present result suggests a central origin of the integration deficit and indicates that this integration should be evaluated and considered in future management of hearing impairment and design of auditory prostheses.
39
Visram AS, Kluk K, McKay CM. Voice gender differences and separation of simultaneous talkers in cochlear implant users with residual hearing. J Acoust Soc Am 2012; 132:EL135-EL141. [PMID: 22894312] [DOI: 10.1121/1.4737137]
Abstract
Perception of a target voice in the presence of a competing talker, of the same or different gender as the target, was investigated in cochlear implant users in implant-alone and bimodal (acoustic hearing in the non-implanted ear) conditions. Recordings of two male and two female talkers served as targets and maskers, to investigate whether bimodal benefit increased for different-gender compared with same-gender target/masker pairs, owing to an increased ability to perceive and use fundamental frequency and spectral-shape differences. In both listening conditions, participants showed a benefit of target/masker gender difference. There was an overall bimodal benefit, which was independent of target/masker gender difference.
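The low-frequency voice-pitch cue implicated above can be illustrated with a rough autocorrelation-based F0 estimator (illustrative only; real pitch trackers are considerably more robust):

```python
import numpy as np

def estimate_f0(x, fs, fmin=80.0, fmax=300.0):
    # Find the autocorrelation peak within the plausible pitch-period range
    # and convert the lag back to a frequency.
    x = x - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

fs = 16000
t = np.arange(fs) / fs
print(round(estimate_f0(np.sin(2 * np.pi * 120 * t), fs)))  # ~120
```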
Affiliation(s)
- Anisa S Visram: School of Psychological Sciences, University of Manchester, Manchester M13 9PL, United Kingdom