1
Dorman MF, Natale SC, Stohl JS, Felder J. Close approximations to the sound of a cochlear implant. Front Hum Neurosci 2024; 18:1434786. PMID: 39086377; PMCID: PMC11288806; DOI: 10.3389/fnhum.2024.1434786.
Abstract
Cochlear implant (CI) systems differ in electrode design and signal processing, so patients fit with different implant systems likely experience different percepts when presented speech via their implant. The sound quality of speech can be evaluated by asking single-sided-deaf (SSD) listeners fit with a CI to modify clean signals presented to their typically hearing ear to match the sound quality of signals presented to their CI ear. In this paper, we describe very close matches to CI sound quality, i.e., similarity ratings of 9.5 to 10 on a 10-point scale, by ten patients fit with a 28 mm electrode array and MED-EL signal processing. The modifications required to make close approximations to CI sound quality fell into two groups: one consisted of a restricted frequency bandwidth and spectral smearing, while the other was characterized by a wide bandwidth and no spectral smearing. Both sets of modifications differed from those found for patients with shorter electrode arrays, who chose upshifts in voice pitch and formant frequencies to match CI sound quality. These matching-based metrics of CI sound quality document that speech sound quality differs across patients fit with different CIs and among patients fit with the same CI.
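The paper does not include its processing code, but the spectral-smearing manipulation described above can be sketched with a short-time Fourier approach: blur the magnitude spectrum across frequency while keeping the original phase, then resynthesize. A minimal Python sketch, assuming Gaussian smearing of STFT magnitudes (the matching software used in the study may differ):

```python
import numpy as np
from scipy.signal import stft, istft
from scipy.ndimage import gaussian_filter1d

def smear_spectrum(x, fs, smear_bins=8, nperseg=512):
    """Blur the short-time magnitude spectrum across frequency,
    keep the original phase, and resynthesize the waveform."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    mag = gaussian_filter1d(np.abs(Z), sigma=smear_bins, axis=0)
    Z_smeared = mag * np.exp(1j * np.angle(Z))
    _, y = istft(Z_smeared, fs=fs, nperseg=nperseg)
    return y

# Example: smear a 1 s synthetic harmonic (vowel-like) signal
fs = 16000
tvec = np.arange(fs) / fs
x = sum(np.sin(2 * np.pi * f0 * tvec) for f0 in (220, 440, 660))
y = smear_spectrum(np.asarray(x), fs)
```

The `smear_bins` parameter (in STFT bins) is a stand-in for whatever smearing control the listeners adjusted; a bandwidth restriction could be added by zeroing bins above a cutoff before resynthesis.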
Affiliation(s)
- Michael F. Dorman
- College of Health Solutions, Speech and Hearing Science, Arizona State University, Tempe, AZ, United States
- Sarah C. Natale
- College of Health Solutions, Speech and Hearing Science, Arizona State University, Tempe, AZ, United States
- Joshua S. Stohl
- North American Research Laboratory, MED-EL, Durham, NC, United States
- Jenna Felder
- North American Research Laboratory, MED-EL, Durham, NC, United States
2
DeFreese A, Camarata S, Sunderhaus L, Holder J, Berg K, Lighterink M, Gifford R. The impact of spectral and temporal processing on speech recognition in children with cochlear implants. Sci Rep 2024; 14:14094. PMID: 38890428; PMCID: PMC11189542; DOI: 10.1038/s41598-024-63932-w.
Abstract
While the relationships between spectral resolution, temporal resolution, and speech recognition are well defined in adults with cochlear implants (CIs), they are not well defined for prelingually deafened children with CIs, for whom language development is ongoing. This cross-sectional study aimed to better characterize these relationships in a large cohort of prelingually deafened children with CIs (N = 47; mean age = 8.33 years) by comprehensively measuring spectral resolution thresholds (measured via spectral modulation detection), temporal resolution thresholds (measured via sinusoidal amplitude modulation detection), and speech recognition (measured via monosyllabic word recognition, vowel recognition, and sentence recognition in noise at both a fixed signal-to-noise ratio (SNR) and an adaptively varied SNR). Results indicated that neither spectral nor temporal resolution was significantly correlated with speech recognition in quiet or noise for children with CIs. Both age and CI experience had a moderate effect on spectral resolution, with significant effects for spectral modulation detection at a modulation rate of 0.5 cyc/oct, suggesting spectral resolution may improve with maturation. Thus, an emerging relationship between spectral resolution and speech perception may appear over time for children with CIs. While further investigation into this relationship is warranted, these findings demonstrate the need for new investigations to uncover ways of improving spectral resolution for children with CIs.
Affiliation(s)
- Andrea DeFreese
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Avenue South, Nashville, TN, 37232, USA.
- Stephen Camarata
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Avenue South, Nashville, TN, 37232, USA
- Linsey Sunderhaus
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Avenue South, Nashville, TN, 37232, USA
- Jourdan Holder
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Avenue South, Nashville, TN, 37232, USA
- Katelyn Berg
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Avenue South, Nashville, TN, 37232, USA
- Mackenzie Lighterink
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Avenue South, Nashville, TN, 37232, USA
- René Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Avenue South, Nashville, TN, 37232, USA
3
Fagniart S, Delvaux V, Harmegnies B, Huberlant A, Huet K, Piccaluga M, Watterman I, Charlier B. Nasal/Oral Vowel Perception in French-Speaking Children With Cochlear Implants and Children With Typical Hearing. J Speech Lang Hear Res 2024; 67:1243-1267. PMID: 38457658; DOI: 10.1044/2024_jslhr-23-00274.
Abstract
PURPOSE The present study investigates the perception of vowel nasality in French-speaking children with cochlear implants (CIs; CI group) and children with typical hearing (TH; TH group) aged 4-12 years. By investigating the vocalic nasality feature in French, the study aims to document more broadly the effects of the acoustic limitations of CIs on the processing of segments characterized by acoustic cues that require optimal spectral resolution. The effects on performance of factors related to the children's characteristics, such as chronological/auditory age, age at implantation, and exposure to cued speech, were studied, and the acoustic characteristics of the stimuli used in the perceptual tasks were also investigated. METHOD Identification and discrimination tasks involving French nasal and oral vowels were administered to two groups of children: 13 children with CIs (CI group) and 25 children with TH (TH group) divided into three age groups (4-6 years, 7-9 years, and 10-12 years). French nasal vowels were paired with their oral phonological counterparts (phonological pairing) as well as with the closest oral vowels in terms of phonetic proximity (phonetic pairing). Post hoc acoustic analyses of the stimuli were linked to performance in perception. RESULTS The results indicate an effect of auditory status on performance in the two tasks, with the CI group performing at a lower level than the TH group. However, the scores of the children in the CI group were well above chance level, exceeding 80%. The most common errors in identification were substitutions between nasal vowels and phonetically close oral vowels as well as confusions between the phoneme /u/ and other oral vowels. Phonetic pairs showed lower discrimination performance in the CI group, with great variability in the results. Age effects were observed only in TH children for nasal vowel identification, whereas in children with CIs, a positive impact of cued speech practice and early implantation was found. Differential links between performance and acoustic characteristics were found within our groups, suggesting that in children with CIs, selective use of certain acoustic features, presumed to be better transmitted by the implant, leads to better perceptual performance. CONCLUSIONS The study's results reveal specific challenges for children with CIs when processing segments characterized by fine spectral resolution cues. However, the CI children in our study appear to compensate effectively for these difficulties by utilizing various acoustic cues assumed to be well transmitted by the implant, such as cues related to the temporal resolution of stimuli. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.25328704.
Affiliation(s)
- Sophie Fagniart
- Language Sciences and Metrology Unit, University of Mons, Belgium
- Research Institute for Language Science and Technology, University of Mons, Belgium
- Véronique Delvaux
- Language Sciences and Metrology Unit, University of Mons, Belgium
- Research Institute for Language Science and Technology, University of Mons, Belgium
- Université Libre de Bruxelles, Brussels, Belgium
- Fund for Scientific Research (F.R.S.-FNRS), Brussels, Belgium
- Bernard Harmegnies
- Research Institute for Language Science and Technology, University of Mons, Belgium
- Université Libre de Bruxelles, Brussels, Belgium
- Anne Huberlant
- Functional Rehabilitation Center "Comprendre et Parler," Brussels, Belgium
- Kathy Huet
- Language Sciences and Metrology Unit, University of Mons, Belgium
- Research Institute for Language Science and Technology, University of Mons, Belgium
- Myriam Piccaluga
- Language Sciences and Metrology Unit, University of Mons, Belgium
- Research Institute for Language Science and Technology, University of Mons, Belgium
- Isabelle Watterman
- Université Libre de Bruxelles, Brussels, Belgium
- Functional Rehabilitation Center "Comprendre et Parler," Brussels, Belgium
- Brigitte Charlier
- Université Libre de Bruxelles, Brussels, Belgium
- Functional Rehabilitation Center "Comprendre et Parler," Brussels, Belgium
4
Kasdan AV, Butera IM, DeFreese AJ, Rowland J, Hilbun AL, Gordon RL, Wallace MT, Gifford RH. Cochlear implant users experience the sound-to-music effect. Audit Percept Cogn 2024; 7:179-202. PMID: 39391629; PMCID: PMC11463729; DOI: 10.1080/25742442.2024.2313430.
Abstract
Introduction The speech-to-song illusion is a robust effect where repeated speech induces the perception of singing; this effect has been extended to repeated excerpts of environmental sounds (sound-to-music effect). Here we asked whether repetition could elicit musical percepts in cochlear implant (CI) users, who experience challenges with perceiving music due to both physiological and device limitations. Methods Thirty adult CI users and thirty age-matched controls with normal hearing (NH) completed two repetition experiments for speech and nonspeech sounds (water droplets). We hypothesized that CI users would experience the sound-to-music effect from temporal/rhythmic cues alone, but to a lesser magnitude compared to NH controls, given the limited access to spectral information CI users receive from their implants. Results We found that CI users did experience the sound-to-music effect but to a lesser degree compared to NH participants. Musicality ratings were not associated with musical training or frequency resolution, and among CI users, clinical variables like duration of hearing loss also did not influence ratings. Discussion Cochlear implants provide a strong clinical model for disentangling the effects of spectral and temporal information in an acoustic signal; our results suggest that temporal cues are sufficient to perceive the sound-to-music effect when spectral resolution is limited. Additionally, incorporating short repetitions into music specially designed for CI users may provide a promising way for them to experience music.
Affiliation(s)
- Anna V. Kasdan
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Curb Center for Art, Enterprise, and Public Policy, Nashville, TN, USA
- Iliza M. Butera
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Andrea J. DeFreese
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Jess Rowland
- Lewis Center for the Arts, Princeton University, Princeton, NJ, USA
- Reyna L. Gordon
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Curb Center for Art, Enterprise, and Public Policy, Nashville, TN, USA
- Department of Otolaryngology – Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Mark T. Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- René H. Gifford
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
5
Bosen AK. Characterizing correlations in partial credit speech recognition scoring with beta-binomial distributions. JASA Express Lett 2024; 4:025202. PMID: 38299983; PMCID: PMC10848658; DOI: 10.1121/10.0024633.
Abstract
Partial credit scoring for speech recognition tasks can improve measurement precision. However, assessing the magnitude of this improvement with partial credit scoring is challenging because meaningful speech contains contextual cues, which create correlations between the probabilities of correctly identifying each token in a stimulus. Here, beta-binomial distributions were used to estimate recognition accuracy and intraclass correlation for phonemes in words and words in sentences in listeners with cochlear implants (N = 20). Estimates demonstrated substantial intraclass correlation in recognition accuracy within stimuli. These correlations were invariant across individuals. Intraclass correlations should be addressed in power analysis of partial credit scoring.
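The beta-binomial's intraclass correlation has the closed form ρ = 1/(α + β + 1), so the analysis above can be sketched by maximum-likelihood fitting of per-stimulus correct-token counts. A minimal Python sketch, not the author's code; the fitting routine and simulated data are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom

def fit_betabinom(k, n):
    """ML fit of a beta-binomial to k tokens correct out of n per stimulus;
    returns (alpha, beta, icc), where icc = 1 / (alpha + beta + 1)."""
    k, n = np.asarray(k), np.asarray(n)
    def nll(params):
        a, b = np.exp(params)  # log-parameterization enforces positivity
        return -betabinom.logpmf(k, n, a, b).sum()
    res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
    a, b = np.exp(res.x)
    return a, b, 1.0 / (a + b + 1.0)

# Example: correlated word scores within 5-word sentences
rng = np.random.default_rng(0)
p = rng.beta(4, 2, size=200)        # latent per-sentence accuracy
k = rng.binomial(5, p)              # words correct per sentence
a, b, icc = fit_betabinom(k, np.full(200, 5))
```

Under these simulated parameters the true ICC is 1/(4 + 2 + 1) ≈ 0.14; an ICC near zero would indicate tokens can be treated as independent in power analysis, while larger values shrink the effective number of scored items per stimulus.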
Affiliation(s)
- Adam K Bosen
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska 68131, USA
6
Noble AR, Halverson DM, Resnick J, Broncheau M, Rubinstein JT, Horn DL. Spectral Resolution and Speech Perception in Cochlear Implanted School-Aged Children. Otolaryngol Head Neck Surg 2024; 170:230-238. PMID: 37365946; PMCID: PMC10836047; DOI: 10.1002/ohn.408.
Abstract
OBJECTIVE Cochlear implantation of prelingually deaf infants provides auditory input sufficient to develop spoken language; however, outcomes remain variable. Young listeners' inability to participate in speech perception testing limits assessment of device efficacy. In postlingually implanted adults (aCI), speech perception correlates with spectral resolution, an ability that relies independently on frequency resolution (FR) and spectral modulation sensitivity (SMS). Whether spectral resolution correlates with speech perception in prelingually implanted children (cCI) is unknown. In this study, FR and SMS were measured using a spectral ripple discrimination (SRD) task and were correlated with vowel and consonant identification. It was hypothesized that prelingually deaf cCI would show immature SMS relative to postlingually deaf aCI and that FR would correlate with speech identification. STUDY DESIGN Cross-sectional study. SETTING In-person, booth testing. METHODS SRD was used to determine the highest spectral ripple density perceived at various modulation depths. FR and SMS were derived from spectral modulation transfer functions. Vowel and consonant identification was measured; SRD performance and speech identification were analyzed for correlation. RESULTS Fifteen prelingually implanted cCI and 13 postlingually implanted aCI were included. FR and SMS were similar between cCI and aCI. Better FR was associated with better speech identification for most measures. CONCLUSION Prelingually implanted cCI demonstrated adult-like FR and SMS; additionally, FR correlated with speech identification. FR may be a measure of CI efficacy in young listeners.
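Spectral ripple stimuli like those used in SRD tasks are commonly built as a sum of log-spaced tones whose levels follow a sinusoid on a log-frequency axis; discrimination is then tested between a ripple and its phase-inverted counterpart. A minimal sketch under those assumptions (parameter values are illustrative, not the study's):

```python
import numpy as np

def spectral_ripple(fs=16000, dur=0.5, f_lo=200.0, f_hi=8000.0,
                    density=1.0, depth_db=20.0, phase=0.0, n_tones=200):
    """Sum of log-spaced tones with a sinusoidal spectral envelope
    (density in cycles/octave, depth in dB peak-to-valley)."""
    t = np.arange(int(fs * dur)) / fs
    freqs = np.geomspace(f_lo, f_hi, n_tones)
    octaves = np.log2(freqs / f_lo)
    env_db = (depth_db / 2) * np.sin(2 * np.pi * density * octaves + phase)
    amps = 10 ** (env_db / 20)
    rng = np.random.default_rng(0)
    phases = rng.uniform(0, 2 * np.pi, n_tones)  # random tone start phases
    x = (amps[:, None]
         * np.sin(2 * np.pi * freqs[:, None] * t + phases[:, None])).sum(axis=0)
    return x / np.max(np.abs(x))  # peak-normalize

ripple = spectral_ripple(density=2.0, phase=0.0)
inverted = spectral_ripple(density=2.0, phase=np.pi)  # discrimination foil
```

In an adaptive SRD procedure, `density` is raised until the listener can no longer tell `ripple` from `inverted`; lowering `depth_db` across runs traces out the spectral modulation transfer function from which FR and SMS are derived.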
Affiliation(s)
- Anisha R. Noble
- Division of Pediatric Otolaryngology – Head and Neck Surgery, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, USA
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle, WA, USA
- Destinee M. Halverson
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle, WA, USA
- Jesse Resnick
- Department of Internal Medicine, University of Michigan, Ann Arbor, MI, USA
- Mariette Broncheau
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle, WA, USA
- Jay T. Rubinstein
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle, WA, USA
- David L. Horn
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle, WA, USA
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
7
Spitzer ER, Landsberger DM, Lichtl AJ, Waltzman SB. Ceiling effects for speech perception tests in pediatric cochlear implant users. Cochlear Implants Int 2024; 25:69-80. PMID: 37875157; DOI: 10.1080/14670100.2023.2271219.
Abstract
OBJECTIVES The purpose of this study was to determine the prevalence of ceiling effects for commonly used speech perception tests in a large population of children who received a cochlear implant (CI) before the age of four. A secondary goal was to determine the demographic factors that were relevant for predicting which children were more likely to reach ceiling level performance. We hypothesize that ceiling effects are highly prevalent for most tests. DESIGN Retrospective chart review of children receiving a CI between 2002 and 2014. RESULTS 165 children were included. Median scores were above ceiling levels (≥90% correct) for the majority of speech perception tests and all distributions of scores were highly skewed. Children who were implanted earlier, received two implants, and were oral communicators were more likely to reach ceiling-level performance. Age and years of CI listening experience at time of test were negatively correlated with performance, suggesting a non-random assignment of tests. Many children were re-tested on tests for which they had already scored at ceiling. CONCLUSIONS Commonly used speech perception tests for children with CIs are prone to ceiling effects and may not accurately reflect how a child performs in everyday listening situations.
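Ceiling compression of a score distribution can be screened with simple summary statistics: the proportion of scores at or above the ceiling criterion and the skewness of the distribution. A minimal sketch (the ≥90%-correct criterion follows the abstract; the simulated scores are hypothetical, not the study's data):

```python
import numpy as np
from scipy.stats import skew

def ceiling_summary(scores, ceiling=90.0):
    """Summarize how compressed a test's score distribution is
    against its ceiling (scores in percent correct)."""
    s = np.asarray(scores, dtype=float)
    return {
        "median": float(np.median(s)),
        "prop_at_ceiling": float(np.mean(s >= ceiling)),
        "skewness": float(skew(s)),  # strongly negative => ceiling-compressed
    }

# Example: a highly skewed, ceiling-compressed distribution (n = 165)
rng = np.random.default_rng(1)
scores = np.clip(100 - rng.exponential(8, size=165), 0, 100)
summary = ceiling_summary(scores)
```

A median at or above the criterion together with strong negative skew, as reported in the abstract, is the signature of a test that can no longer differentiate the upper range of performers.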
Affiliation(s)
- Emily R Spitzer
- Department of Otolaryngology-Head and Neck Surgery, New York University Grossman School of Medicine, New York, NY, USA
- David M Landsberger
- Department of Otolaryngology-Head and Neck Surgery, New York University Grossman School of Medicine, New York, NY, USA
- Alexandra J Lichtl
- Department of Otolaryngology-Head and Neck Surgery, New York University Grossman School of Medicine, New York, NY, USA
- Susan B Waltzman
- Department of Otolaryngology-Head and Neck Surgery, New York University Grossman School of Medicine, New York, NY, USA
8
Deroche MLD, Wolfe J, Neumann S, Manning J, Towler W, Alemi R, Bien AG, Koirala N, Hanna L, Henry L, Gracco VL. Auditory evoked response to an oddball paradigm in children wearing cochlear implants. Clin Neurophysiol 2023; 149:133-145. PMID: 36965466; DOI: 10.1016/j.clinph.2023.02.179.
Abstract
OBJECTIVE Although children with cochlear implants (CI) achieve remarkable success with their device, considerable variability remains in individual outcomes. Here, we explored whether auditory evoked potentials recorded during an oddball paradigm could provide useful markers of auditory processing in this pediatric population. METHODS High-density electroencephalography (EEG) was recorded in 75 children listening to standard and odd noise stimuli: 25 had normal hearing (NH) and 50 wore a CI, divided between high language (HL) and low language (LL) abilities. Three metrics were extracted: the first negative and second positive components of the standard waveform (N1-P2 complex) close to the vertex, the mismatch negativity (MMN) around Fz and the late positive component (P3) around Pz of the difference waveform. RESULTS While children with CIs generally exhibited a well-formed N1-P2 complex, those with language delays typically lacked reliable MMN and P3 components. But many children with CIs with age-appropriate skills showed MMN and P3 responses similar to those of NH children. Moreover, larger and earlier P3 (but not MMN) was linked to better literacy skills. CONCLUSIONS Auditory evoked responses differentiated children with CIs based on their good or poor skills with language and literacy. SIGNIFICANCE This short paradigm could eventually serve as a clinical tool for tracking the developmental outcomes of implanted children.
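The three metrics can be illustrated on trial-averaged waveforms: the difference waveform is deviant minus standard, the MMN is its most negative deflection in an early window, and the P3 its most positive deflection in a later window. A minimal Python sketch with synthetic ERPs (the window bounds and sampling rate here are illustrative assumptions, not the study's analysis parameters):

```python
import numpy as np

def erp_components(standard, deviant, fs, mmn_win=(0.100, 0.250),
                   p3_win=(0.250, 0.500)):
    """From trial-averaged ERPs (1-D arrays, time-locked at sample 0),
    compute the difference waveform and pick MMN / P3 peaks."""
    diff = deviant - standard
    t = np.arange(len(diff)) / fs

    def peak(win, polarity):
        mask = (t >= win[0]) & (t < win[1])
        idx = np.flatnonzero(mask)
        i = idx[np.argmax(diff[mask] * polarity)]
        return diff[i], t[i]

    mmn_amp, mmn_lat = peak(mmn_win, polarity=-1)  # most negative deflection
    p3_amp, p3_lat = peak(p3_win, polarity=+1)     # most positive deflection
    return diff, (mmn_amp, mmn_lat), (p3_amp, p3_lat)

# Example with synthetic ERPs sampled at 500 Hz
fs = 500
t = np.arange(int(0.6 * fs)) / fs
standard = 0.5 * np.sin(2 * np.pi * 3 * t)
deviant = (standard
           - 2.0 * np.exp(-((t - 0.17) / 0.03) ** 2)    # MMN-like dip
           + 3.0 * np.exp(-((t - 0.35) / 0.05) ** 2))   # P3-like bump
_, (mmn_amp, mmn_lat), (p3_amp, p3_lat) = erp_components(standard, deviant, fs)
```

Absent or small MMN/P3 peaks from this kind of extraction would correspond to the unreliable components the abstract reports in the low-language group.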
Affiliation(s)
- Mickael L D Deroche
- Department of Psychology, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec H4B 1R6, Canada.
- Jace Wolfe
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Sara Neumann
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Jacy Manning
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- William Towler
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Razieh Alemi
- Department of Psychology, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec H4B 1R6, Canada
- Alexander G Bien
- University of Oklahoma College of Medicine, Otolaryngology, 800 Stanton L Young Blvd., Oklahoma City, OK 73117, USA
- Nabin Koirala
- Haskins Laboratories, 300 George St., New Haven, CT 06511, USA
- Lindsay Hanna
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Lauren Henry
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
9
Nittrouer S, Lowenstein JH. Recognition of Sentences With Complex Syntax in Speech Babble by Adolescents With Normal Hearing or Cochlear Implants. J Speech Lang Hear Res 2023; 66:1110-1135. PMID: 36758200; PMCID: PMC10205108; DOI: 10.1044/2022_jslhr-22-00407.
Abstract
PURPOSE General language abilities of children with cochlear implants have been thoroughly investigated, especially at young ages, but far less is known about how well they process language in real-world settings, especially in higher grades. This study addressed this gap in knowledge by examining recognition of sentences with complex syntactic structures in backgrounds of speech babble by adolescents with cochlear implants and by peers with normal hearing. DESIGN Two experiments were conducted. First, new materials were developed using young adults with normal hearing as the normative sample, creating a corpus of sentences with controlled but complex syntactic structures presented in three kinds of babble that varied in voice gender and number of talkers. Second, recognition by adolescents with normal hearing or cochlear implants was examined for these new materials and for sentence materials used with these adolescents at younger ages. Analyses addressed three objectives: (1) to assess the stability of speech recognition across a multiyear age range, (2) to evaluate speech recognition of sentences with complex syntax in babble, and (3) to explore how bottom-up and top-down mechanisms account for performance under these conditions. RESULTS Results showed: (1) Recognition was stable across the ages of 10-14 years for both groups. (2) Adolescents with normal hearing performed similarly to young adults with normal hearing, showing effects of syntactic complexity and background babble; adolescents with cochlear implants showed poorer recognition overall and diminished effects of both factors. (3) Top-down language and working memory primarily explained recognition for adolescents with normal hearing, but the bottom-up process of perceptual organization primarily explained recognition for adolescents with cochlear implants. CONCLUSIONS Comprehension of language in real-world settings relies on different mechanisms for adolescents with cochlear implants than for adolescents with normal hearing. A novel finding was that perceptual organization is a critical factor. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21965228.
Affiliation(s)
- Susan Nittrouer
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville
- Joanna H. Lowenstein
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville
10
Butera IM, Stevenson RA, Gifford RH, Wallace MT. Visually Biased Perception in Cochlear Implant Users: A Study of the McGurk and Sound-Induced Flash Illusions. Trends Hear 2023; 27:23312165221076681. PMID: 37377212; PMCID: PMC10334005; DOI: 10.1177/23312165221076681.
Abstract
The reduction in spectral resolution by cochlear implants often requires complementary visual speech cues to facilitate understanding. Despite substantial clinical characterization of auditory-only speech measures, relatively little is known about the audiovisual (AV) integrative abilities that most cochlear implant (CI) users rely on for daily speech comprehension. In this study, we tested AV integration in 63 CI users and 69 normal-hearing (NH) controls using the McGurk and sound-induced flash illusions. To our knowledge, this study is the largest to date measuring the McGurk effect in this population and the first to test the sound-induced flash illusion (SIFI). When presented with conflicting AV speech stimuli (i.e., the phoneme "ba" dubbed onto the viseme "ga"), we found that 55 CI users (87%) reported a fused percept of "da" or "tha" on at least one trial. After applying an error correction based on unisensory responses, we found that among those susceptible to the illusion, CI users experienced lower fusion than controls, a result concordant with results from the SIFI, where the pairing of a single circle flashing on the screen with multiple beeps resulted in fewer illusory flashes for CI users. While illusion perception in these two tasks appears to be uncorrelated among CI users, we identified a negative correlation in the NH group. Because neither illusion appears to further explain variability in CI outcome measures, further research is needed to determine how these findings relate to CI users' speech understanding, particularly in ecological listening conditions that are naturally multisensory.
Affiliation(s)
- Iliza M. Butera
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Ryan A. Stevenson
- Department of Psychology, University of Western Ontario, London, ON, Canada
- Brain and Mind Institute, University of Western Ontario, London, ON, Canada
- René H. Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Mark T. Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
11
Gianakas SP, Fitzgerald MB, Winn MB. Identifying Listeners Whose Speech Intelligibility Depends on a Quiet Extra Moment After a Sentence. J Speech Lang Hear Res 2022; 65:4852-4865. PMID: 36472938; PMCID: PMC9934912; DOI: 10.1044/2022_jslhr-21-00622.
Abstract
PURPOSE An extra moment after a sentence is spoken may be important for listeners with hearing loss to mentally repair misperceptions during listening. The current audiologic test battery cannot distinguish between a listener who repaired a misperception versus a listener who heard the speech accurately with no need for repair. This study aims to develop a behavioral method to identify individuals who are at risk for relying on a quiet moment after a sentence. METHOD Forty-three individuals with hearing loss (32 cochlear implant users, 11 hearing aid users) heard sentences that were followed by either 2 s of silence or 2 s of babble noise. Both high- and low-context sentences were used in the task. RESULTS Some individuals showed notable benefit in accuracy scores (particularly for high-context sentences) when given an extra moment of silent time following the sentence. This benefit was highly variable across individuals and sometimes absent altogether. However, the group-level patterns of results were mainly explained by the use of context and successful perception of the words preceding sentence-final words. CONCLUSIONS These results suggest that some but not all individuals improve their speech recognition score by relying on a quiet moment after a sentence, and that this fragility of speech recognition cannot be assessed using one isolated utterance at a time. Reliance on a quiet moment to repair perceptions would potentially impede the perception of an upcoming utterance, making continuous communication in real-world scenarios difficult especially for individuals with hearing loss. The methods used in this study-along with some simple modifications if necessary-could potentially identify patients with hearing loss who retroactively repair mistakes by using clinically feasible methods that can ultimately lead to better patient-centered hearing health care. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21644801.
12
Wheeler HJ, Hatch DR, Moody-Antonio SA, Nie Y. Music and Speech Perception in Prelingually Deafened Young Listeners With Cochlear Implants: A Preliminary Study Using Sung Speech. J Speech Lang Hear Res 2022; 65:3951-3965. [PMID: 36179251 DOI: 10.1044/2022_jslhr-21-00271]
Abstract
PURPOSE In the context of music and speech perception, this study aimed to assess the effect of variation in one of two auditory attributes, pitch contour and timbre, on the perception of the other in prelingually deafened young cochlear implant (CI) users, and the relationship between pitch contour perception and two cognitive functions of interest. METHOD Nine prelingually deafened CI users, aged 8.75-22.17 years, completed a melodic contour identification (MCI) task using stimuli of piano notes or sung speech with a fixed timbre (same word for each note) or a mixed timbre (different words for each note); a speech perception task identifying matrix-styled sentences naturally intonated or sung with a fixed pitch (same pitch for each word) or a mixed pitch (different pitches for each word); a forward digit span test indexing auditory short-term memory (STM); and the matrices section of the Kaufman Brief Intelligence Test-Second Edition indexing nonverbal IQ. RESULTS MCI was significantly poorer for the mixed timbre condition. Speech perception was significantly poorer for the fixed and mixed pitch conditions than for the naturally intonated condition. Auditory STM positively correlated with MCI at 2- and 3-semitone note spacings. Relative to their normal-hearing peers from a related study using the same stimuli and tasks, the CI participants showed comparable MCI at 2- or 3-semitone note spacing, and a comparable level of significant decrement in speech perception across the three pitch contour conditions. CONCLUSION Findings suggest that prelingually deafened CI users show trends similar to those of their normal-hearing peers for the effect of variation in pitch contour or timbre on the perception of the other, and that cognitive functions may underlie these outcomes to some extent, at least for the perception of pitch contour. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21217937.
Affiliation(s)
- Harley J Wheeler
- Department of Communication Sciences and Disorders, James Madison University, Harrisonburg, VA
- Debora R Hatch
- Department of Otolaryngology, Eastern Virginia Medical School, Norfolk
- Yingjiu Nie
- Department of Communication Sciences and Disorders, James Madison University, Harrisonburg, VA

13
Application of Signals with Rippled Spectra as a Training Approach for Speech Intelligibility Improvements in Cochlear Implant Users. J Pers Med 2022; 12:1426. [PMID: 36143210 PMCID: PMC9503413 DOI: 10.3390/jpm12091426]
Abstract
In cochlear implant (CI) users, the discrimination of sound signals with rippled spectra correlates with speech discrimination. We suggest that rippled-spectrum signals could be a basis for training CI users to improve speech intelligibility. Fifteen CI users participated in the study. Ten of them used the software for training (the experimental group), and five did not (the control group). Software based on the phase reversal discrimination of rippled spectra was used. The experimental group was also tested for speech discrimination using polysyllabic, phonetically balanced speech material. An improvement in the discrimination of the rippled spectrum was observed in all CI users from the experimental group. There was no significant improvement in the control group. The result of the speech discrimination test showed that the percentage of recognized words increased after training in nine out of ten CI users. For five CI users who participated in the training program, data on word recognition were also obtained earlier (at least eight months before training). The increase in the percentage of recognized words was greater after training compared to the period before training. The results suggest that sound signals with rippled spectra could be used not only for testing rehabilitation outcomes after cochlear implantation but also for training CI users to discriminate sounds with complex spectra.
14
Butera IM, Larson ED, DeFreese AJ, Lee AK, Gifford RH, Wallace MT. Functional localization of audiovisual speech using near infrared spectroscopy. Brain Topogr 2022; 35:416-430. [PMID: 35821542 PMCID: PMC9334437 DOI: 10.1007/s10548-022-00904-1]
Abstract
Visual cues are especially vital for hearing-impaired individuals such as cochlear implant (CI) users to understand speech in noise. Functional near-infrared spectroscopy (fNIRS) is a light-based imaging technology that is ideally suited for measuring the brain activity of CI users due to its compatibility with both the ferromagnetic and electrical components of these implants. In a preliminary step toward better elucidating the behavioral and neural correlates of audiovisual (AV) speech integration in CI users, we designed a speech-in-noise task and measured the extent to which 24 normal-hearing individuals could integrate the audio of spoken monosyllabic words with the corresponding visual signals of a female speaker. In our behavioral task, we found that audiovisual pairings provided average improvements of 103% and 197% over auditory-alone listening conditions in -6 and -9 dB signal-to-noise ratios consisting of multi-talker background noise. In an fNIRS task using similar stimuli, we measured activity during auditory-only listening, visual-only lipreading, and AV listening conditions. We identified cortical activity in all three conditions over regions of middle and superior temporal cortex typically associated with speech processing and audiovisual integration. In addition, three channels active during the lipreading condition showed uncorrected correlations with behavioral measures of audiovisual gain as well as with the McGurk effect. Further work focusing primarily on the regions of interest identified in this study could test how AV speech integration may differ for CI users who rely on this mechanism for daily communication.
Affiliation(s)
- Iliza M Butera
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Eric D Larson
- Institute for Learning & Brain Sciences, University of Washington, Seattle, Washington, USA
- Andrea J DeFreese
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Adrian KC Lee
- Institute for Learning & Brain Sciences, University of Washington, Seattle, Washington, USA
- Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington, USA
- René H Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Mark T Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA

15
Brungart DS, Sherlock LP, Kuchinsky SE, Perry TT, Bieber RE, Grant KW, Bernstein JGW. Assessment methods for determining small changes in hearing performance over time. J Acoust Soc Am 2022; 151:3866. [PMID: 35778214 DOI: 10.1121/10.0011509]
Abstract
Although the behavioral pure-tone threshold audiogram is considered the gold standard for quantifying hearing loss, assessment of speech understanding, especially in noise, is more relevant to quality of life but is only partly related to the audiogram. Metrics of speech understanding in noise are therefore an attractive target for assessing hearing over time. However, speech-in-noise assessments have more potential sources of variability than pure-tone threshold measures, making it a challenge to obtain results reliable enough to detect small changes in performance. This review examines the benefits and limitations of speech-understanding metrics and their application to longitudinal hearing assessment, and identifies potential sources of variability, including learning effects, differences in item difficulty, and between- and within-individual variations in effort and motivation. We conclude by recommending the integration of non-speech auditory tests, which provide information about aspects of auditory health that have reduced variability and fewer central influences than speech tests, in parallel with the traditional audiogram and speech-based assessments.
Affiliation(s)
- Douglas S Brungart
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- LaGuinn P Sherlock
- Hearing Conservation and Readiness Branch, U.S. Army Public Health Center, E1570 8977 Sibert Road, Aberdeen Proving Ground, Maryland 21010, USA
- Stefanie E Kuchinsky
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Trevor T Perry
- Hearing Conservation and Readiness Branch, U.S. Army Public Health Center, E1570 8977 Sibert Road, Aberdeen Proving Ground, Maryland 21010, USA
- Rebecca E Bieber
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Ken W Grant
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Joshua G W Bernstein
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA

16
Dorman MF, Natale SC, Noble JH, Zeitler DM. Upward Shifts in the Internal Representation of Frequency Can Persist Over a 3-Year Period for Cochlear Implant Patients Fit With a Relatively Short Electrode Array. Front Hum Neurosci 2022; 16:863891. [PMID: 35399353 PMCID: PMC8990937 DOI: 10.3389/fnhum.2022.863891]
Abstract
Patients fit with cochlear implants (CIs) commonly indicate, at the time of device fitting and for some time after, that the speech signal sounds abnormal. A high pitch or timbre is one component of the abnormal percept. In this project, our aim was to determine whether a number of years of CI use reduced perceived upshifts in frequency spectrum and/or voice fundamental frequency. The participants were five individuals who were deaf in one ear and who had normal hearing in the other ear. The deafened ears had been implanted with an 18.5 mm electrode array, which resulted in signal input frequencies being directed to locations in the spiral ganglion (SG) that were between one and two octaves higher than the input frequencies. The patients judged the similarity of a clean signal (a male-voice sentence) presented to their implanted ear and candidate implant-like signals presented to their normal-hearing (NH) ear. Matches to implant sound quality were obtained, on average, at 8 months after device activation (Time 1) and at 35 months after activation (Time 2). At Time 1, the matches to CI sound quality were characterized, most generally, by upshifts in the frequency spectrum and in voice pitch. At Time 2, for four of the five patients, frequency spectrum values remained elevated. For all five patients, F0 values remained elevated. Overall, the data offer little support for the proposition that, for patients fit with shorter electrode arrays, cortical plasticity nudges the cortical representation of the CI voice toward more normal, or less upshifted, frequency values between 8 and 35 months after device activation. Cortical plasticity may be limited when there are large differences between the frequencies in the input signal and the locations in the SG stimulated by those frequencies.
Affiliation(s)
- Michael F Dorman
- College of Health Solutions, Speech and Hearing Science, Arizona State University, Tempe, AZ, United States
- Sarah C Natale
- College of Health Solutions, Speech and Hearing Science, Arizona State University, Tempe, AZ, United States
- Jack H Noble
- Department of Electrical Engineering & Computer Science, Vanderbilt University, Nashville, TN, United States
- Daniel M Zeitler
- Otolaryngology, Virginia Mason Medical Center, Seattle, WA, United States

17
Jahn KN, Arenberg JG, Horn DL. Spectral Resolution Development in Children With Normal Hearing and With Cochlear Implants: A Review of Behavioral Studies. J Speech Lang Hear Res 2022; 65:1646-1658. [PMID: 35201848 PMCID: PMC9499384 DOI: 10.1044/2021_jslhr-21-00307]
Abstract
PURPOSE This review article provides a theoretical overview of the development of spectral resolution in children with normal hearing (cNH) and in those who use cochlear implants (CIs), with an emphasis on methodological considerations. The aim was to identify key directions for future research on spectral resolution development in children with CIs. METHOD A comprehensive literature review was conducted to summarize and synthesize previously published behavioral research on spectral resolution development in normal and impaired auditory systems. CONCLUSIONS In cNH, performance on spectral resolution tasks continues to improve through the teenage years and is likely driven by gradual maturation of across-channel intensity resolution. A small but growing body of evidence from children with CIs suggests a more complex relationship between spectral resolution development, patient demographics, and the quality of the CI electrode-neuron interface. Future research should aim to distinguish between the effects of patient-specific variables and the underlying physiology on spectral resolution abilities in children of all ages who are hard of hearing and use auditory prostheses.
Affiliation(s)
- Kelly N. Jahn
- Department of Speech, Language, and Hearing, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson
- Callier Center for Communication Disorders, The University of Texas at Dallas
- Julie G. Arenberg
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston
- David L. Horn
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle
- Division of Otolaryngology, Seattle Children's Hospital, WA

18
Arjmandi MK, Jahn KN, Arenberg JG. Single-Channel Focused Thresholds Relate to Vowel Identification in Pediatric and Adult Cochlear Implant Listeners. Trends Hear 2022; 26:23312165221095364. [PMID: 35505617 PMCID: PMC9073113 DOI: 10.1177/23312165221095364]
Abstract
Speech recognition outcomes are highly variable among pediatric and adult cochlear implant (CI) listeners. Although there is some evidence that the quality of the electrode-neuron interface (ENI) contributes to this large variability in auditory perception, its relationship with speech outcomes is not well understood. Single-channel auditory detection thresholds measured in response to focused electrical fields (i.e., focused thresholds) are sensitive to properties of ENI quality, including electrode-neuron distance, intracochlear resistance, and neural health. In the present study, focused thresholds and speech perception abilities were assessed in 15 children and 21 adult CI listeners. Focused thresholds were measured for all active electrodes using a fast sweep procedure. Speech perception performance was evaluated by assessing listeners’ ability to identify vowels presented in /h-vowel-d/ context. Consistent with prior literature, focused thresholds were lower for children than for adults, but vowel identification did not differ significantly across age groups. Higher across-array average focused thresholds, which may indicate a relatively poor ENI quality, were associated with poorer vowel identification scores in both children and adults. Adult CI listeners with longer durations of deafness had higher focused thresholds. Findings from this study demonstrate that poor-quality ENIs may contribute to reduced speech outcomes for pediatric and adult CI listeners. Estimates of ENI quality (e.g., focused thresholds) may assist in developing customized programming interventions that serve to improve the transmission of spectral cues that are important in vowel identification.
Affiliation(s)
- Meisam K Arjmandi
- Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, MA, USA
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, USA
- Audiology Division, Massachusetts Eye and Ear, Boston, MA, USA
- Kelly N Jahn
- Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, MA, USA
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, USA
- Department of Speech, Language, and Hearing, University of Texas at Dallas, Richardson, TX, USA
- Julie G Arenberg
- Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, MA, USA
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, USA
- Audiology Division, Massachusetts Eye and Ear, Boston, MA, USA

19
Tawdrous MM, D'Onofrio KL, Gifford R, Picou EM. Emotional Responses to Non-Speech Sounds for Hearing-aid and Bimodal Cochlear-Implant Listeners. Trends Hear 2022; 26:23312165221083091. [PMID: 35435773 PMCID: PMC9019384 DOI: 10.1177/23312165221083091]
Abstract
The purpose of this project was to evaluate differences between groups and device configurations for emotional responses to non-speech sounds. Three groups of adults participated: 1) listeners with normal hearing with no history of device use, 2) hearing aid candidates with or without hearing aid experience, and 3) bimodal cochlear-implant listeners with at least 6 months of implant use. Participants (n = 18 in each group) rated valence and arousal of pleasant, neutral, and unpleasant non-speech sounds. Listeners with normal hearing rated sounds without hearing devices. Hearing aid candidates rated sounds while using one or two hearing aids. Bimodal cochlear-implant listeners rated sounds while using a hearing aid alone, a cochlear implant alone, or the hearing aid and cochlear implant simultaneously. Analysis revealed significant differences between groups in ratings of pleasant and unpleasant stimuli; ratings from hearing aid candidates and bimodal cochlear-implant listeners were less extreme (less pleasant and less unpleasant) than were ratings from listeners with normal hearing. Hearing aid candidates' ratings were similar with one and two hearing aids. Bimodal cochlear-implant listeners' ratings of valence were higher (more pleasant) in the configuration without a hearing aid (implant only) than in the two configurations with a hearing aid (alone or with an implant). These data support the need for further investigation into hearing device optimization to improve emotional responses to non-speech sounds for adults with hearing loss.
Affiliation(s)
- Marina M. Tawdrous
- School of Communication Sciences and Disorders, Western University, 1151 Richmond St, London, ON, N6A 3K7
- Kristen L. D'Onofrio
- Department of Hearing and Speech Sciences, Graduate School, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- Department of Hearing and Speech Sciences, School of Medicine, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- René Gifford
- Department of Hearing and Speech Sciences, Graduate School, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- Department of Hearing and Speech Sciences, School of Medicine, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- Erin M. Picou
- Department of Hearing and Speech Sciences, Graduate School, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- Department of Hearing and Speech Sciences, School of Medicine, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN, 37232

20
Davidson LS, Geers AE, Uchanski RM. Spectral Modulation Detection Performance and Speech Perception in Pediatric Cochlear Implant Recipients. Am J Audiol 2021; 30:1076-1087. [PMID: 34670098 PMCID: PMC9126113 DOI: 10.1044/2021_aja-21-00076]
Abstract
PURPOSE The aims of this study were, for pediatric cochlear implant (CI) recipients, (a) to determine the effect of age on their spectral modulation detection (SMD) ability and compare their age effect to that of their typically hearing (TH) peers; (b) to identify demographic, cognitive, and audiological factors associated with SMD ability; and (c) to determine the unique contribution of SMD ability to segmental and suprasegmental speech perception performance. METHOD A total of 104 pediatric CI recipients and 38 TH peers (ages 6-11 years) completed a test of SMD. CI recipients completed tests of segmental (e.g., word recognition in noise and vowels and consonants in quiet) and suprasegmental (e.g., talker discrimination, stress discrimination, and emotion identification) perception, nonverbal intelligence, and working memory. Regression analyses were used to examine the effects of group and age on percent-correct SMD scores. For the CI group, the effects of demographic, audiological, and cognitive variables on SMD performance and the effects of SMD on speech perception were examined. RESULTS The TH group performed significantly better than the CI group on SMD. Both groups showed better performance with increasing age. Significant predictors of SMD performance for the CI group were age and nonverbal intelligence. SMD performance predicted significant variance in segmental and suprasegmental perception. The variance predicted by SMD performance was nearly double for suprasegmental than for segmental perception. CONCLUSIONS Children in the CI group, on average, scored lower than their TH peers. The slopes of improvement in SMD with age did not differ between the groups. The significant effect of nonverbal intelligence on SMD performance in CI recipients indicates that difficulties inherent in the task affect outcomes. SMD ability predicted speech perception scores, with a more prominent role in suprasegmental than in segmental speech perception. SMD ability may provide a useful nonlinguistic tool for predicting speech perception benefit, with cautious interpretation based on age and cognitive function.
Affiliation(s)
- Lisa S. Davidson
- Department of Otolaryngology, Washington University School of Medicine in St. Louis, MO
- Ann E. Geers
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson
- Rosalie M. Uchanski
- Department of Otolaryngology, Washington University School of Medicine in St. Louis, MO

21
McSweeny C, Cushing SL, Campos JL, Papsin BC, Gordon KA. Functional Consequences of Poor Binaural Hearing in Development: Evidence From Children With Unilateral Hearing Loss and Children Receiving Bilateral Cochlear Implants. Trends Hear 2021; 25:23312165211051215. [PMID: 34661482 PMCID: PMC8527588 DOI: 10.1177/23312165211051215]
Abstract
Poor binaural hearing in children was hypothesized to contribute to related cognitive and academic deficits. Children with unilateral hearing have normal hearing in one ear but no access to binaural cues. Their cognitive and academic deficits could differ from those of children receiving bilateral cochlear implants (CIs) at young ages, who have poor access to spectral cues and impaired binaural sensitivity. Both groups are at risk for vestibular/balance deficits which could further contribute to memory and learning challenges. Eighty-eight children (43 male:45 female, aged 9.89 ± 3.40 years), grouped by unilateral hearing loss (n = 20), bilateral CI (n = 32), and typically developing (n = 36), completed a battery of sensory, cognitive, and academic tests. Analyses revealed that children in both hearing loss groups had significantly poorer skills (accounting for age) on most tests than their normal-hearing peers. Children with unilateral hearing loss had more asymmetric speech perception than children with bilateral CIs (p < .0001), but balance and language deficits (p = .0004, p < .0001, respectively) were similar in the two hearing loss groups (p > .05). Visuospatial memory deficits occurred in both hearing loss groups (p = .02) but more consistently across tests in children with unilateral hearing loss. Verbal memory was not significantly different from normal (p > .05). Principal component analyses revealed deficits in a main cluster of visuospatial memory, oral language, mathematics, and reading measures (explaining 46.8% of data variability). The remaining components revealed clusters of self-reported hearing, balance and vestibular function, and speech perception deficits. The findings indicate significant developmental impacts of poor binaural hearing in children.
Affiliation(s)
- Claire McSweeny
- Archie's Cochlear Implant Lab, Hospital for Sick Children, Toronto, Ontario, Canada
- Sharon L Cushing
- Archie's Cochlear Implant Lab, Hospital for Sick Children, Toronto, Ontario, Canada
- Department of Otolaryngology, Head & Neck Surgery, Faculty of Medicine, University of Toronto, Ontario, Canada
- Department of Otolaryngology, Head & Neck Surgery, Hospital for Sick Children, Toronto, Ontario, Canada
- Jennifer L Campos
- KITE-Toronto Rehabilitation Institute, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Blake C Papsin
- Archie's Cochlear Implant Lab, Hospital for Sick Children, Toronto, Ontario, Canada
- Department of Otolaryngology, Head & Neck Surgery, Faculty of Medicine, University of Toronto, Ontario, Canada
- Department of Otolaryngology, Head & Neck Surgery, Hospital for Sick Children, Toronto, Ontario, Canada
- Karen A Gordon
- Archie's Cochlear Implant Lab, Hospital for Sick Children, Toronto, Ontario, Canada
- Department of Otolaryngology, Head & Neck Surgery, Faculty of Medicine, University of Toronto, Ontario, Canada

22
Holder JT, Gifford RH. Effect of Increased Daily Cochlear Implant Use on Auditory Perception in Adults. J Speech Lang Hear Res 2021; 64:4044-4055. [PMID: 34546763 PMCID: PMC9132064 DOI: 10.1044/2021_jslhr-21-00066]
Abstract
Purpose Despite the recommendation for cochlear implant (CI) processor use during all waking hours, variability in average daily wear time remains high. Previous work has shown that objective wear time is significantly correlated with speech recognition outcomes. We aimed to investigate the causal link between daily wear time and speech recognition outcomes and assess one potential underlying mechanism, spectral processing, driving the causal link. We hypothesized that increased CI use would result in improved speech recognition via improved spectral processing. Method Twenty adult CI recipients completed two study visits. The baseline visit included auditory perception testing (speech recognition and spectral processing measures), questionnaire administration, and documentation of data logging from the CI software. Participants watched an educational video, and they were informed of the compensation schedule. Participants were then asked to increase their daily CI use over a 4-week period during everyday life. Baseline measures were reassessed following the 4-week period. Results Seventeen out of 20 participants increased their daily CI use. On average, participants' speech recognition improved by 3.0, 2.4, and 7.0 percentage points per hour of increased average daily CI use for consonant-nucleus-consonant words, AzBio sentences, and AzBio sentences in noise, respectively. Questionnaire scores were similar between visits. Spectral processing showed significant improvement and accounted for a small amount of variance in the change in speech recognition values. Conclusions Improved consistency of processor use over a 4-week period yielded significant improvements in speech recognition scores. Though a significant factor, spectral processing is likely not the only mechanism driving improvement in speech recognition; further research is warranted.
Affiliation(s)
- Jourdan T. Holder
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- René H. Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN

23
Nittrouer S, Lowenstein JH, Sinex DG. The contribution of spectral processing to the acquisition of phonological sensitivity by adolescent cochlear implant users and normal-hearing controls. J Acoust Soc Am 2021; 150:2116. [PMID: 34598601 PMCID: PMC8463097 DOI: 10.1121/10.0006416]
Abstract
This study tested the hypotheses that (1) adolescents with cochlear implants (CIs) experience impaired spectral processing abilities, and (2) those impaired spectral processing abilities constrain acquisition of skills based on sensitivity to phonological structure but not those based on lexical or syntactic (lexicosyntactic) knowledge. To test these hypotheses, spectral modulation detection (SMD) thresholds were measured for 14-year-olds with normal hearing (NH) or CIs. Three measures each of phonological and lexicosyntactic skills were obtained and used to generate latent scores of each kind of skill. Relationships between SMD thresholds and both latent scores were assessed. Mean SMD threshold was poorer for adolescents with CIs than for adolescents with NH. Both latent lexicosyntactic and phonological scores were poorer for the adolescents with CIs, but the latent phonological score was disproportionately so. SMD thresholds were significantly associated with phonological but not lexicosyntactic skill for both groups. The only audiologic factor that also correlated with phonological latent scores for adolescents with CIs was the aided threshold, but it did not explain the observed relationship between SMD thresholds and phonological latent scores. Continued research is required to find ways of enhancing spectral processing for children with CIs to support their acquisition of phonological sensitivity.
Affiliation(s):
- Susan Nittrouer, Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, Florida 32610, USA
- Joanna H Lowenstein, Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, Florida 32610, USA
- Donal G Sinex, Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, Florida 32610, USA

24
Bosen AK, Sevich VA, Cannon SA. Forward Digit Span and Word Familiarity Do Not Correlate With Differences in Speech Recognition in Individuals With Cochlear Implants After Accounting for Auditory Resolution. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:3330-3342. [PMID: 34251908 PMCID: PMC8740688 DOI: 10.1044/2021_jslhr-20-00574] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/25/2020] [Revised: 01/12/2021] [Accepted: 04/09/2021] [Indexed: 06/07/2023]
Abstract
Purpose In individuals with cochlear implants, speech recognition is not associated with tests of working memory that primarily reflect storage, such as forward digit span. In contrast, our previous work found that vocoded speech recognition in individuals with normal hearing was correlated with performance on a forward digit span task. A possible explanation for this difference across groups is that variability in auditory resolution across individuals with cochlear implants could conceal the true relationship between speech and memory tasks. Here, our goal was to determine if performance on forward digit span and speech recognition tasks are correlated in individuals with cochlear implants after controlling for individual differences in auditory resolution. Method We measured sentence recognition ability in 20 individuals with cochlear implants with Perceptually Robust English Sentence Test Open-set sentences. Spectral and temporal modulation detection tasks were used to assess individual differences in auditory resolution, auditory forward digit span was used to assess working memory storage, and self-reported word familiarity was used to assess vocabulary. Results Individual differences in speech recognition were predicted by spectral and temporal resolution. A correlation was found between forward digit span and speech recognition, but this correlation was not significant after controlling for spectral and temporal resolution. No relationship was found between word familiarity and speech recognition. Forward digit span performance was not associated with individual differences in auditory resolution. Conclusions Our findings support the idea that sentence recognition in individuals with cochlear implants is primarily limited by individual differences in working memory processing, not storage. Studies examining the relationship between speech and memory should control for individual differences in auditory resolution.
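The "after controlling for" analysis described above amounts to a partial correlation: correlate the residuals of the two variables once the covariate has been regressed out of both. A minimal numpy sketch; the variable names and simulated data below are illustrative, not the study's data:

```python
import numpy as np

def residualize(y, x):
    """Residuals of y after least-squares regression on x (with an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def partial_corr(a, b, control):
    """Correlation between a and b after regressing `control` out of both."""
    return np.corrcoef(residualize(a, control), residualize(b, control))[0, 1]

# Toy illustration of the reported pattern: if auditory resolution drives both
# digit span and speech recognition, their raw correlation is substantial but
# shrinks toward zero once resolution is partialed out.
rng = np.random.default_rng(0)
resolution = rng.normal(size=500)
speech = resolution + 0.5 * rng.normal(size=500)
digit_span = resolution + 0.5 * rng.normal(size=500)
raw = np.corrcoef(speech, digit_span)[0, 1]
pc = partial_corr(speech, digit_span, resolution)
```

With the simulated data, `raw` is large while `pc` is near zero, mirroring the correlational logic of the abstract.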
Affiliation(s):
- Victoria A. Sevich, Boys Town National Research Hospital, Omaha, NE; The Ohio State University, Columbus

25
Zedan A, Jürgens T, Williges B, Kollmeier B, Wiebe K, Galindo J, Wesarg T. Speech Intelligibility and Spatial Release From Masking Improvements Using Spatial Noise Reduction Algorithms in Bimodal Cochlear Implant Users. Trends Hear 2021; 25:23312165211005931. [PMID: 33926327 PMCID: PMC8113364 DOI: 10.1177/23312165211005931] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
This study investigated the speech intelligibility benefit of using two different spatial noise reduction algorithms in cochlear implant (CI) users who use a hearing aid (HA) on the contralateral side (bimodal CI users). The study controlled for head movements by using head-related impulse responses to simulate a realistic cafeteria scenario and controlled for HA and CI manufacturer differences by using the master hearing aid platform (MHA) to apply both hearing loss compensation and the noise reduction algorithms (beamformers). Ten bimodal CI users with moderate to severe hearing loss contralateral to their CI participated in the study, and data from nine listeners were included in the data analysis. The beamformers evaluated were the adaptive differential microphones (ADM) implemented independently on each side of the listener and the (binaurally implemented) minimum variance distortionless response (MVDR). For frontal speech and stationary noise from either left or right, an improvement (reduction) of the speech reception threshold of 5.4 dB and 5.5 dB was observed using the ADM, and 6.4 dB and 7.0 dB using the MVDR, respectively. As expected, no improvement was observed for either algorithm for colocated speech and noise. In a 20-talker babble noise scenario, the benefit observed was 3.5 dB for ADM and 7.5 dB for MVDR. The binaural MVDR algorithm outperformed the bilaterally applied monaural ADM. These results encourage the use of beamformer algorithms such as the ADM and MVDR by bimodal CI users in everyday life scenarios.
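The MVDR beamformer named above has a standard closed form: choose weights that minimize output noise power subject to a distortionless response toward the target, w = R⁻¹d / (dᴴR⁻¹d), with R the noise covariance and d the steering vector. A hedged numerical sketch of that formula only, not the master hearing aid (MHA) implementation used in the study:

```python
import numpy as np

def mvdr_weights(R, d):
    """MVDR weights: minimize w^H R w subject to the constraint w^H d = 1."""
    Rinv_d = np.linalg.solve(R, d)          # R^{-1} d without forming the inverse
    return Rinv_d / (d.conj() @ Rinv_d)     # denominator d^H R^{-1} d is real > 0

# Toy 4-microphone example with a random Hermitian positive-definite covariance.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
R = A @ A.conj().T + np.eye(4)                     # noise covariance
d = np.exp(1j * rng.uniform(0, 2 * np.pi, 4))      # unit-modulus steering vector
w = mvdr_weights(R, d)
```

By construction the weights pass the target undistorted (wᴴd = 1) and yield no more noise power than any other weight vector meeting that constraint, such as normalized delay-and-sum.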
Affiliation(s):
- Ayham Zedan, Medizinische Physik und Exzellenzcluster "Hearing4all," Carl-von-Ossietzky Universität Oldenburg, Oldenburg, Germany
- Tim Jürgens, Medizinische Physik und Exzellenzcluster "Hearing4all," Carl-von-Ossietzky Universität Oldenburg, Oldenburg, Germany; Institut für Akustik, Technische Hochschule Lübeck, Lübeck, Germany
- Ben Williges, Medizinische Physik und Exzellenzcluster "Hearing4all," Carl-von-Ossietzky Universität Oldenburg, Oldenburg, Germany; Department of Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- Birger Kollmeier, Medizinische Physik und Exzellenzcluster "Hearing4all," Carl-von-Ossietzky Universität Oldenburg, Oldenburg, Germany
- Konstantin Wiebe, Department of Otorhinolaryngology - Head and Neck Surgery, Faculty of Medicine, Medical Center - University of Freiburg, University of Freiburg, Freiburg, Germany
- Julio Galindo, Department of Otorhinolaryngology - Head and Neck Surgery, Faculty of Medicine, Medical Center - University of Freiburg, University of Freiburg, Freiburg, Germany
- Thomas Wesarg, Department of Otorhinolaryngology - Head and Neck Surgery, Faculty of Medicine, Medical Center - University of Freiburg, University of Freiburg, Freiburg, Germany

26
O'Neill ER, Parke MN, Kreft HA, Oxenham AJ. Role of semantic context and talker variability in speech perception of cochlear-implant users and normal-hearing listeners. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 149:1224. [PMID: 33639827 PMCID: PMC7895533 DOI: 10.1121/10.0003532] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/19/2020] [Revised: 01/01/2021] [Accepted: 01/26/2021] [Indexed: 06/12/2023]
Abstract
This study assessed the impact of semantic context and talker variability on speech perception by cochlear-implant (CI) users and compared their overall performance and between-subjects variance with that of normal-hearing (NH) listeners under vocoded conditions. Thirty post-lingually deafened adult CI users were tested, along with 30 age-matched and 30 younger NH listeners, on sentences with and without semantic context, presented in quiet and noise, spoken by four different talkers. Additional measures included working memory, non-verbal intelligence, and spectral-ripple detection and discrimination. Semantic context and between-talker differences influenced speech perception to similar degrees for both CI users and NH listeners. Between-subjects variance for speech perception was greatest in the CI group but remained substantial in both NH groups, despite the uniformly degraded stimuli in these two groups. Spectral-ripple detection and discrimination thresholds in CI users were significantly correlated with speech perception, but a single set of vocoder parameters for NH listeners was not able to capture average CI performance in both speech and spectral-ripple tasks. The lack of difference in the use of semantic context between CI users and NH listeners suggests no overall differences in listening strategy between the groups, when the stimuli are similarly degraded.
Affiliation(s):
- Erin R O'Neill, Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Morgan N Parke, Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Heather A Kreft, Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Andrew J Oxenham, Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA

27
Zhou N, Dixon S, Zhu Z, Dong L, Weiner M. Spectrotemporal Modulation Sensitivity in Cochlear-Implant and Normal-Hearing Listeners: Is the Performance Driven by Temporal or Spectral Modulation Sensitivity? Trends Hear 2020; 24:2331216520948385. [PMID: 32895024 PMCID: PMC7482033 DOI: 10.1177/2331216520948385] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
This study examined the contribution of temporal and spectral modulation sensitivity to discrimination of stimuli modulated in both the time and frequency domains. The spectrotemporally modulated stimuli contained spectral ripples that shifted systematically across frequency over time at a repetition rate of 5 Hz. As the ripple density increased in the stimulus, modulation depth of the 5 Hz amplitude modulation (AM) reduced. Spectrotemporal modulation discrimination was compared with subjects’ ability to discriminate static spectral ripples and the ability to detect slow AM. The general pattern from both the cochlear implant (CI) and normal hearing groups showed that spectrotemporal modulation thresholds were correlated more strongly with AM detection than with static ripple discrimination. CI subjects’ spectrotemporal modulation thresholds were also highly correlated with speech recognition in noise, when partialing out static ripple discrimination, but the correlation was not significant when partialing out AM detection. The results indicated that temporal information was more heavily weighted in spectrotemporal modulation discrimination, and for CI subjects, it was AM sensitivity that drove the correlation between spectrotemporal modulation thresholds and speech recognition. The results suggest that for the rates tested here, temporal information processing may limit performance more than spectral information processing in both CI users and normal hearing listeners.
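The drifting-ripple stimulus described above (spectral ripples that shift across frequency over time at a fixed repetition rate) can be sketched as a sum of log-spaced tones whose amplitudes follow a sinusoid in joint time-frequency: env(t, x) = 1 + m·sin(2π(rate·t + density·x)), with x the tone's position in octaves. The 5 Hz rate matches the abstract; the other parameter defaults are illustrative, not the study's exact stimuli:

```python
import numpy as np

def moving_ripple(dur=0.5, fs=16000, density=2.0, rate=5.0, depth=1.0,
                  f_lo=250.0, f_hi=8000.0, n_tones=100, seed=0):
    """Spectrotemporally modulated complex: tones log-spaced from f_lo to f_hi
    whose amplitudes follow a sinusoidal ripple (density ripples/octave)
    drifting across frequency at `rate` Hz, with modulation depth 0-1."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * fs)) / fs
    freqs = f_lo * (f_hi / f_lo) ** (np.arange(n_tones) / (n_tones - 1))
    x = np.log2(freqs / f_lo)                      # tone position in octaves
    phases = rng.uniform(0, 2 * np.pi, n_tones)    # random carrier phases
    sig = np.zeros_like(t)
    for fc, xi, ph in zip(freqs, x, phases):
        env = 1.0 + depth * np.sin(2 * np.pi * (rate * t + density * xi))
        sig += env * np.sin(2 * np.pi * fc * t + ph)
    return sig / np.max(np.abs(sig))               # normalize to unit peak
```

Increasing `density` packs more ripples per octave while the 5 Hz drift imposes the amplitude modulation whose depth shrinks as density rises in the actual stimuli.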
Affiliation(s):
- Ning Zhou, Department of Communication Sciences and Disorders, East Carolina University, Greenville, North Carolina, United States
- Susannah Dixon, Department of Communication Sciences and Disorders, East Carolina University, Greenville, North Carolina, United States
- Zhen Zhu, Department of Engineering, East Carolina University, Greenville, North Carolina, United States
- Lixue Dong, Department of Communication Sciences and Disorders, East Carolina University, Greenville, North Carolina, United States
- Marti Weiner, Department of Communication Sciences and Disorders, East Carolina University, Greenville, North Carolina, United States

28
DiNino M, Arenberg JG, Duchen ALR, Winn MB. Effects of Age and Cochlear Implantation on Spectrally Cued Speech Categorization. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2020; 63:2425-2440. [PMID: 32552327 PMCID: PMC7838840 DOI: 10.1044/2020_jslhr-19-00127] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/06/2019] [Revised: 08/12/2019] [Accepted: 03/30/2020] [Indexed: 06/11/2023]
Abstract
Purpose Weighting of acoustic cues for perceiving place-of-articulation speech contrasts was measured to determine the separate and interactive effects of age and use of cochlear implants (CIs). It has been found that adults with normal hearing (NH) show reliance on fine-grained spectral information (e.g., formants), whereas adults with CIs show reliance on broad spectral shape (e.g., spectral tilt). In question was whether children with NH and CIs would demonstrate the same patterns as adults, or show differences based on ongoing maturation of hearing and phonetic skills. Method Children and adults with NH and with CIs categorized a /b/-/d/ speech contrast based on two orthogonal spectral cues. Among CI users, phonetic cue weights were compared to vowel identification scores and Spectral-Temporally Modulated Ripple Test thresholds. Results NH children and adults both relied relatively more on the fine-grained formant cue and less on the broad spectral tilt cue compared to participants with CIs. However, early-implanted children with CIs better utilized the formant cue compared to adult CI users. Formant cue weights correlated with CI participants' vowel recognition and in children, also related to Spectral-Temporally Modulated Ripple Test thresholds. Adults and child CI users with very poor phonetic perception showed additive use of the two cues, whereas those with better and/or more mature cue usage showed a prioritized trading relationship, akin to NH listeners. Conclusions Age group and hearing modality can influence phonetic cue-weighting patterns. Results suggest that simple nonlexical categorization tests correlate with more general speech recognition skills of children and adults with CIs.
Affiliation(s):
- Mishaela DiNino, Department of Psychology, Carnegie Mellon University, Pittsburgh, PA
- Julie G. Arenberg, Massachusetts Eye and Ear, Harvard Medical School Department of Otolaryngology, Boston
- Matthew B. Winn, Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis

29
Tejani VD, Brown CJ. Speech masking release in Hybrid cochlear implant users: Roles of spectral and temporal cues in electric-acoustic hearing. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2020; 147:3667. [PMID: 32486815 PMCID: PMC7255813 DOI: 10.1121/10.0001304] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/16/2019] [Revised: 05/05/2020] [Accepted: 05/05/2020] [Indexed: 06/04/2023]
Abstract
When compared with cochlear implant (CI) users utilizing electric-only (E-Only) stimulation, CI users utilizing electric-acoustic stimulation (EAS) in the implanted ear show improved speech recognition in modulated noise relative to steady-state noise (i.e., speech masking release). It has been hypothesized, but not shown, that masking release is attributed to spectral resolution and temporal fine structure (TFS) provided by acoustic hearing. To address this question, speech masking release, spectral ripple density discrimination thresholds, and fundamental frequency difference limens (f0DLs) were evaluated in the acoustic-only (A-Only), E-Only, and EAS listening modes in EAS CI users. The spectral ripple and f0DL tasks are thought to reflect access to spectral and TFS cues, which could impact speech masking release. Performance in all three measures was poorest when EAS CI users were tested using the E-Only listening mode, with significant improvements in A-Only and EAS listening modes. f0DLs, but not spectral ripple density discrimination thresholds, significantly correlated with speech masking release when assessed in the EAS listening mode. Additionally, speech masking release correlated with AzBio sentence recognition in noise. The correlation between speech masking release and f0DLs likely indicates that TFS cues provided by residual hearing were used to obtain speech masking release, which aided sentence recognition in noise.
Affiliation(s):
- Viral D Tejani, Otolaryngology-Head and Neck Surgery, University of Iowa Hospitals and Clinics, 200 Hawkins Drive, 21003 Pomerantz Family Pavilion, Iowa City, Iowa 52242-1078, USA
- Carolyn J Brown, Communication Sciences and Disorders, Wendell Johnson Speech and Hearing Center-127B, University of Iowa, 250 Hawkins Drive, Iowa City, Iowa 52242, USA

30
Abstract
OBJECTIVES The Quick Spectral Modulation Detection (QSMD) test provides a quick and clinically implementable spectral resolution estimate for cochlear implant (CI) users. However, the original QSMD software (QSMD(MySound)) has technical and usability limitations that prevent widespread distribution and implementation. In this article, we introduce EasyQSMD, a new, freely available software package intended to both simplify and standardize spectral resolution measurements. DESIGN QSMD was measured for 20 CI users using both software packages. RESULTS No differences between the two software packages were detected; based on the 95% confidence interval, the difference between the tests is expected to be less than 2 percentage points. The average test duration was under 4 minutes. CONCLUSIONS EasyQSMD is considered functionally equivalent to QSMD(MySound), providing a clinically feasible and quick estimate of spectral resolution for CI users.
31
Holder JT, Taylor AL, Sunderhaus LW, Gifford RH. Effect of Microphone Location and Beamforming Technology on Speech Recognition in Pediatric Cochlear Implant Recipients. J Am Acad Audiol 2020; 31:506-512. [PMID: 32119817 DOI: 10.3766/jaaa.19025] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
BACKGROUND Despite improvements in cochlear implant (CI) technology, pediatric CI recipients continue to have more difficulty understanding speech than their typically hearing peers in background noise. A variety of strategies have been evaluated to help mitigate this disparity, such as signal processing, remote microphone technology, and microphone placement. Previous studies regarding microphone placement used speech processors that are now dated, and most studies investigating the improvement of speech recognition in background noise included adult listeners only. PURPOSE The purpose of the present study was to investigate the effects of microphone location and beamforming technology on speech understanding for pediatric CI recipients in noise. RESEARCH DESIGN A prospective, repeated-measures, within-participant design was used to compare performance across listening conditions. STUDY SAMPLE A total of nine children (aged 6.6 to 15.3 years) with at least one Advanced Bionics CI were recruited for this study. DATA COLLECTION AND ANALYSIS The Basic English Lexicon Sentences and AzBio Sentences were presented at 0° azimuth at 65 dB SPL in +5 dB signal-to-noise ratio noise presented from seven speakers using the R-SPACE system (Advanced Bionics, Valencia, CA). Performance was compared across three omnidirectional microphone configurations (processor microphone, T-Mic 2, and processor + T-Mic 2) and two directional microphone configurations (UltraZoom and auto UltraZoom). The two youngest participants were not tested in the directional microphone configurations. RESULTS No significant differences were found between the various omnidirectional microphone configurations. UltraZoom provided significant benefit over all omnidirectional microphone configurations (T-Mic 2, p = 0.004, processor microphone, p < 0.001, and processor microphone + T-Mic 2, p = 0.018) but was not significantly different from auto UltraZoom (p = 0.176).
CONCLUSIONS All omnidirectional microphone configurations yielded similar performance, suggesting that a child's listening performance in noise will not be compromised by choosing the microphone configuration best suited for the child. UltraZoom (adaptive beamformer) yielded higher performance than all omnidirectional microphones in moderate background noise for adolescents aged 9 to 15 years. The implications of these data suggest that for older children who are able to reliably use manual controls, UltraZoom will yield significantly higher performance in background noise when the target is in front of the listener.
Affiliation(s):
- Jourdan T Holder, Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Adrian L Taylor, Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Linsey W Sunderhaus, Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- René H Gifford, Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN

32
D'Onofrio KL, Caldwell M, Limb C, Smith S, Kessler DM, Gifford RH. Musical Emotion Perception in Bimodal Patients: Relative Weighting of Musical Mode and Tempo Cues. Front Neurosci 2020; 14:114. [PMID: 32174809 PMCID: PMC7054459 DOI: 10.3389/fnins.2020.00114] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2019] [Accepted: 01/29/2020] [Indexed: 11/13/2022] Open
Abstract
Several cues are used to convey musical emotion, the two primary being musical mode and musical tempo. Specifically, major and minor modes tend to be associated with positive and negative valence, respectively, and songs at fast tempi have been associated with more positive valence compared to songs at slow tempi (Balkwill and Thompson, 1999; Webster and Weir, 2005). In Experiment I, we examined the relative weighting of musical tempo and musical mode among adult cochlear implant (CI) users combining electric and contralateral acoustic stimulation, or "bimodal" hearing. Our primary hypothesis was that bimodal listeners would utilize both tempo and mode cues in their musical emotion judgments in a manner similar to normal-hearing listeners. Our secondary hypothesis was that low-frequency (LF) spectral resolution in the non-implanted ear, as quantified via psychophysical tuning curves (PTCs) at 262 and 440 Hz, would be significantly correlated with degree of bimodal benefit for musical emotion perception. In Experiment II, we investigated across-channel spectral resolution using a spectral modulation detection (SMD) task and neural representation of temporal fine structure via the frequency following response (FFR) for a 170-ms /da/ stimulus. Results indicate that CI-alone performance was driven almost exclusively by tempo cues, whereas bimodal listening demonstrated use of both tempo and mode. Additionally, bimodal benefit for musical emotion perception may be correlated with spectral resolution in the non-implanted ear via SMD, as well as neural representation of F0 amplitude via FFR - though further study with a larger sample size is warranted. Thus, contralateral acoustic hearing can offer significant benefit for musical emotion perception, and the degree of benefit may be dependent upon spectral resolution of the non-implanted ear.
Affiliation(s):
- Kristen L D'Onofrio, Cochlear Implant Research Laboratory, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
- Charles Limb, Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, United States
- Spencer Smith, Department of Communication Sciences and Disorders, The University of Texas at Austin, Austin, TX, United States
- David M Kessler, Cochlear Implant Research Laboratory, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
- René H Gifford, Cochlear Implant Research Laboratory, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States

33
Ahmed S, Cho SH. Hand Gesture Recognition Using an IR-UWB Radar with an Inception Module-Based Classifier. SENSORS 2020; 20:s20020564. [PMID: 31968587 PMCID: PMC7014526 DOI: 10.3390/s20020564] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/15/2019] [Revised: 01/16/2020] [Accepted: 01/16/2020] [Indexed: 11/16/2022]
Abstract
The emerging integration of technology in daily lives has increased the need for more convenient methods of human-computer interaction (HCI). Given that existing HCI approaches exhibit various limitations, hand gesture recognition-based HCI may serve as a more natural mode of man-machine interaction in many situations. Inspired by an inception module-based deep-learning network (GoogLeNet), this paper presents a novel hand gesture recognition technique for impulse-radio ultra-wideband (IR-UWB) radars that achieves higher gesture recognition accuracy. First, a methodology for representing radar signals as three-dimensional image patterns is presented; then, an inception module-based variant of GoogLeNet is used to analyze the patterns within the images to recognize different hand gestures. The proposed framework is evaluated on eight different hand gestures with a promising classification accuracy of 95%. To verify the robustness of the proposed algorithm, multiple human subjects were involved in data acquisition.
34
Dorman MF, Natale SC, Baxter L, Zeitler DM, Carlson ML, Lorens A, Skarzynski H, Peters JPM, Torres JH, Noble JH. Approximations to the Voice of a Cochlear Implant: Explorations With Single-Sided Deaf Listeners. Trends Hear 2020; 24:2331216520920079. [PMID: 32339072 PMCID: PMC7225791 DOI: 10.1177/2331216520920079] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2019] [Revised: 03/23/2020] [Accepted: 03/27/2020] [Indexed: 12/14/2022] Open
Abstract
Fourteen single-sided deaf listeners fit with an MED-EL cochlear implant (CI) judged the similarity of clean signals presented to their CI and modified signals presented to their normal-hearing ear. The signals to the normal-hearing ear were created by (a) filtering, (b) spectral smearing, (c) changing overall fundamental frequency (F0), (d) F0 contour flattening, (e) changing formant frequencies, (f) altering resonances and ring times to create a metallic sound quality, (g) using a noise vocoder, or (h) using a sine vocoder. The operations could be used singly or in any combination. On a scale of 1 to 10 where 10 was a complete match to the sound of the CI, the mean match score was 8.8. Over half of the matches were 9.0 or higher. The most common alterations to a clean signal were band-pass or low-pass filtering, spectral peak smearing, and F0 contour flattening. On average, 3.4 operations were used to create a match. Upshifts in formant frequencies were implemented most often for electrode insertion angles less than approximately 500°. A relatively small set of operations can produce signals that approximate the sound of the MED-EL CI. There are large individual differences in the combination of operations needed. The sound files in Supplemental Material approximate the sound of the MED-EL CI for patients fit with 28-mm electrode arrays.
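Of the operations listed, the noise vocoder is the most algorithmically involved: split the signal into frequency bands, extract each band's temporal envelope, and use the envelopes to modulate band-limited noise. A minimal FFT-based sketch assuming log-spaced bands and Hilbert envelopes; it is not MED-EL's actual signal processing:

```python
import numpy as np

def analytic(x):
    """Analytic signal via the FFT (Hilbert transform), for envelope extraction."""
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1.0
    if len(x) % 2 == 0:
        h[len(x) // 2] = 1.0
        h[1:len(x) // 2] = 2.0
    else:
        h[1:(len(x) + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def noise_vocode(x, fs, n_bands=8, f_lo=100.0, f_hi=6000.0, seed=0):
    """Noise vocoder sketch: for each log-spaced band, modulate band-limited
    noise with the band's envelope and sum the results."""
    rng = np.random.default_rng(seed)
    edges = f_lo * (f_hi / f_lo) ** (np.arange(n_bands + 1) / n_bands)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    X = np.fft.rfft(x)
    N = np.fft.rfft(rng.standard_normal(len(x)))   # noise carrier spectrum
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)        # brick-wall band filter
        band = np.fft.irfft(X * mask, n=len(x))
        env = np.abs(analytic(band))               # band envelope
        carrier = np.fft.irfft(N * mask, n=len(x)) # band-limited noise
        out += env * carrier
    return out / (np.max(np.abs(out)) + 1e-12)

fs = 16000
t = np.arange(fs) / fs
vocoded = noise_vocode(np.sin(2 * np.pi * 440 * t), fs)
```

A sine vocoder, also mentioned in the abstract, replaces the noise carrier in each band with a tone at the band's center frequency.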
Affiliation(s):
- Michael F. Dorman, Speech and Hearing Science, College of Health Solutions, Arizona State University
- Sarah Cook Natale, Speech and Hearing Science, College of Health Solutions, Arizona State University
- Leslie Baxter, Department of Clinical Neuropsychology, Mayo Clinic Arizona
- Daniel M. Zeitler, Department of Otolaryngology/Head-Neck Surgery, Virginia Mason Medical Center
- Matthew L. Carlson, Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, Minnesota, United States
- Artur Lorens, World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- Henryk Skarzynski, World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- Jeroen P. M. Peters, Department of Otorhinolaryngology and Head & Neck Surgery, University Medical Center Utrecht
- Jack H. Noble, Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, United States

35
Kreft HA, DeVries LA, Arenberg JG, Oxenham AJ. Comparing Rapid and Traditional Forward-Masked Spatial Tuning Curves in Cochlear-Implant Users. Trends Hear 2019; 23:2331216519851306. [PMID: 31134842 PMCID: PMC6540501 DOI: 10.1177/2331216519851306] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
A rapid forward-masked spatial tuning curve measurement procedure, based on Bekesy tracking, was adapted and evaluated for use with cochlear implants. Twelve postlingually-deafened adult cochlear-implant users participated. Spatial tuning curves using the new procedure and using a traditional forced-choice adaptive procedure resulted in similar estimates of parameters. The Bekesy-tracking method was almost 3 times faster than the forced-choice procedure, but its test-retest reliability was significantly poorer. Although too time-consuming for general clinical use, the new method may have some benefits in individual cases, where identifying electrodes with poor spatial selectivity as candidates for deactivation is deemed necessary.
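The Bekesy-tracking logic is simple to state: the level moves down while the listener responds and up while they do not, and threshold is estimated from the levels at the reversals. A schematic simulation with a hypothetical idealized listener; the study's actual step sizes and stopping rules are not specified here:

```python
import numpy as np

def bekesy_threshold(respond, start=60.0, step=2.0, n_trials=200, drop=2):
    """Track level down while respond(level) is True, up while False; estimate
    threshold as the mean level at reversals (first `drop` reversals discarded)."""
    level, direction = start, -1
    reversals = []
    for _ in range(n_trials):
        new_dir = -1 if respond(level) else 1
        if new_dir != direction:            # track changed direction: a reversal
            reversals.append(level)
            direction = new_dir
        level += direction * step
    return float(np.mean(reversals[drop:]))

# Simulated listener with a hard 40-dB detection threshold: the track should
# converge to oscillate around that level.
thr = bekesy_threshold(lambda level: level > 40.0)
```

With a 2-dB step the track settles into reversals just above and below the simulated 40-dB threshold, so the estimate lands within about one step of it.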
Collapse
Affiliation(s)
- Heather A Kreft
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA
- Lindsay A DeVries
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, USA
- Julie G Arenberg
- Department of Otolaryngology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA
36
Dorman MF, Natale SC, Zeitler DM, Baxter L, Noble JH. Looking for Mickey Mouse™ But Finding a Munchkin: The Perceptual Effects of Frequency Upshifts for Single-Sided Deaf, Cochlear Implant Patients. J Speech Lang Hear Res 2019; 62:3493-3499. [PMID: 31415186] [PMCID: PMC6808340] [DOI: 10.1044/2019_jslhr-h-18-0389]
Abstract
Purpose Our aim was to make audible for normal-hearing listeners the Mickey Mouse™ sound quality of cochlear implants (CIs) often found following device activation. Method The listeners were 3 single-sided-deaf patients fit with a CI who had 6 months or less of CI experience. Computed tomography imaging established the location of each electrode contact in the cochlea and allowed an estimate of the place frequency of the tissue nearest each electrode. For the most apical electrodes, this estimate ranged from 650 to 780 Hz. To determine CI sound quality, a clean signal (a sentence) was presented to the CI ear via a direct-connect cable, and candidate CI-like signals were presented to the ear with normal hearing via an insert receiver. The listeners rated the similarity of the candidate signals to the sound of the CI on a 1- to 10-point scale, with 10 being a complete match. Results To match CI sound quality, all 3 patients needed an upshift in formant frequencies (300-800 Hz) and a metallic sound quality. Two of the 3 patients also needed an upshift in voice pitch (10-80 Hz) and a muffling of sound quality. Similarity scores ranged from 8 to 9.7. Conclusion The formant-frequency upshifts, fundamental-frequency upshifts, and metallic sound quality experienced by the listeners can be linked to the relatively basal locations of the electrode contacts and to the short duration of experience with their devices. The perceptual consequence was not the voice quality of Mickey Mouse™ but rather that of the Munchkins in The Wizard of Oz, for whom both formant frequencies and voice pitch were upshifted. Supplemental Material https://doi.org/10.23641/asha.9341651.
Affiliation(s)
- Michael F Dorman
- Department of Speech and Hearing Science, Arizona State University, Tempe
- Sarah C Natale
- Department of Speech and Hearing Science, Arizona State University, Tempe
- Daniel M Zeitler
- Department of Otolaryngology/HNS, Virginia Mason Medical Center, Seattle, WA
- Leslie Baxter
- Barrow Neurological Institute, St. Joseph's Hospital and Medical Center, Phoenix, AZ
- Jack H Noble
- Department of Electrical Engineering, Vanderbilt University Medical Center, Nashville, TN
37
O'Neill ER, Kreft HA, Oxenham AJ. Cognitive factors contribute to speech perception in cochlear-implant users and age-matched normal-hearing listeners under vocoded conditions. J Acoust Soc Am 2019; 146:195. [PMID: 31370651] [PMCID: PMC6637026] [DOI: 10.1121/1.5116009]
Abstract
This study examined the contribution of perceptual and cognitive factors to speech-perception abilities in cochlear-implant (CI) users. Thirty CI users were tested on word intelligibility in sentences with and without semantic context, presented in quiet and in noise. Performance was compared with measures of spectral-ripple detection and discrimination, thought to reflect peripheral processing, as well as with cognitive measures of working memory and non-verbal intelligence. Thirty age-matched and thirty younger normal-hearing (NH) adults also participated, listening via tone-excited vocoders, adjusted to produce mean performance for speech in noise comparable to that of the CI group. Results suggest that CI users may rely more heavily on semantic context than younger or older NH listeners, and that non-auditory working memory explains significant variance in the CI and age-matched NH groups. Between-subject variability in spectral-ripple detection thresholds was similar across groups, despite the spectral resolution for all NH listeners being limited by the same vocoder, whereas speech perception scores were more variable between CI users than between NH listeners. The results highlight the potential importance of central factors in explaining individual differences in CI users and question the extent to which standard measures of spectral resolution in CIs reflect purely peripheral processing.
Affiliation(s)
- Erin R O'Neill
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Heather A Kreft
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
38
DiNino M, O'Brien G, Bierer SM, Jahn KN, Arenberg JG. The Estimated Electrode-Neuron Interface in Cochlear Implant Listeners Is Different for Early-Implanted Children and Late-Implanted Adults. J Assoc Res Otolaryngol 2019; 20:291-303. [PMID: 30911952] [PMCID: PMC6513958] [DOI: 10.1007/s10162-019-00716-4]
Abstract
Cochlear implant (CI) programming is similar for all CI users despite limited understanding of the electrode-neuron interface (ENI). The ENI refers to the ability of each CI electrode to effectively stimulate target auditory neurons and is influenced by electrode position, neural health, cochlear geometry, and bone and tissue growth in the cochlea. Hearing history likely affects these variables, suggesting that the efficacy of each channel of stimulation differs between children who were implanted at young ages and adults who lost hearing and received a CI later in life. This study examined whether ENI quality differed between early-implanted children and late-implanted adults. Auditory detection thresholds and most comfortable levels (MCLs) were obtained with monopolar and focused electrode configurations. Channel-to-channel variability and dynamic range were calculated for both types of stimulation. Electrical field imaging data were also acquired to estimate levels of intracochlear resistance. Children exhibited lower average auditory perception thresholds and MCLs compared with adults, particularly with focused stimulation. However, neither dynamic range nor channel-to-channel threshold variability differed between groups, suggesting that children’s range of perceptible current was shifted downward. Children also demonstrated increased intracochlear resistance levels relative to the adult group, possibly reflecting greater ossification or tissue growth after CI surgery. These results illustrate physical and perceptual differences related to the ENI of early-implanted children compared with late-implanted adults. Evidence from this study demonstrates a need for further investigation of the ENI in CI users with varying hearing histories.
Affiliation(s)
- Mishaela DiNino
- Department of Psychology, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA, 15213, USA
- Gabrielle O'Brien
- Department of Speech and Hearing Sciences, University of Washington, 1417 NE 42nd St., Box 354875, Seattle, WA, 98105, USA
- Steven M Bierer
- Department of Speech and Hearing Sciences, University of Washington, 1417 NE 42nd St., Box 354875, Seattle, WA, 98105, USA
- Kelly N Jahn
- Department of Speech and Hearing Sciences, University of Washington, 1417 NE 42nd St., Box 354875, Seattle, WA, 98105, USA
- Julie G Arenberg
- Department of Otolaryngology, Massachusetts Eye and Ear, Harvard Medical School, 243 Charles St., Boston, MA, 02114, USA
39
Speech Perception with Spectrally Non-overlapping Maskers as Measure of Spectral Resolution in Cochlear Implant Users. J Assoc Res Otolaryngol 2018; 20:151-167. [PMID: 30456730] [DOI: 10.1007/s10162-018-00702-2]
Abstract
Poor spectral resolution contributes to the difficulties experienced by cochlear implant (CI) users when listening to speech in noise. However, correlations between measures of spectral resolution and speech perception in noise have not always been found to be robust. It may be that the relationship between spectral resolution and speech perception in noise becomes clearer in conditions where the speech and noise are not spectrally matched, so that improved spectral resolution can assist in separating the speech from the masker. To test this prediction, speech intelligibility was measured with noise or tone maskers that were presented either in the same spectral channels as the speech or in interleaved spectral channels. Spectral resolution was estimated via a spectral ripple discrimination task. Results from vocoder simulations in normal-hearing listeners showed increasing differences in speech intelligibility between spectrally overlapped and interleaved maskers as well as improved spectral ripple discrimination with increasing spectral resolution. However, no clear differences were observed in CI users between performance with spectrally interleaved and overlapped maskers, or between tone and noise maskers. The results suggest that spectral resolution in current CIs is too poor to take advantage of the spectral separation produced by spectrally interleaved speech and maskers. Overall, the spectrally interleaved and tonal maskers produce a much larger difference in performance between normal-hearing listeners and CI users than do traditional speech-in-noise measures, and thus provide a more sensitive test of speech perception abilities for current and future implantable devices.