1. Xie Z, Gaskins CR, Tinnemore AR, Shader MJ, Gordon-Salant S, Anderson S, Goupell MJ. Spectral degradation and carrier sentences increase age-related temporal processing deficits in a cue-specific manner. J Acoust Soc Am 2024; 155:3983-3994. PMID: 38934563; PMCID: PMC11213620; DOI: 10.1121/10.0026434.
Abstract
Advancing age is associated with decreased sensitivity to temporal cues in word segments, particularly when target words follow non-informative carrier sentences or are spectrally degraded (e.g., vocoded to simulate cochlear-implant stimulation). This study investigated whether age, carrier sentences, and spectral degradation interacted to cause undue difficulty in processing speech temporal cues. Younger and older adults with normal hearing performed phonemic categorization tasks on two continua: a Buy/Pie contrast with voice onset time changes for the word-initial stop and a Dish/Ditch contrast with silent interval changes preceding the word-final fricative. Target words were presented in isolation or after non-informative carrier sentences, and were unprocessed or degraded via sinewave vocoding (2, 4, and 8 channels). Older listeners exhibited reduced sensitivity to both temporal cues compared to younger listeners. For the Buy/Pie contrast, age, carrier sentence, and spectral degradation interacted such that the largest age effects were seen for unprocessed words in the carrier sentence condition. This pattern differed from the Dish/Ditch contrast, where reducing spectral resolution exaggerated age effects, but introducing carrier sentences largely left the patterns unchanged. These results suggest that certain temporal cues are particularly susceptible to aging when placed in sentences, likely contributing to the difficulties of older cochlear-implant users in everyday environments.
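For readers unfamiliar with how such phonemic categorization data are usually summarized, below is a minimal sketch that fits a logistic psychometric function to hypothetical Buy/Pie responses along a voice onset time continuum. The continuum steps and response proportions are invented for illustration, and this fitting convention is an assumption, not necessarily the analysis used in the study.

```python
# Hedged sketch: fit a logistic psychometric function to phoneme-categorization
# data along a voice-onset-time (VOT) continuum. All numbers are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def logistic(vot, midpoint, slope):
    """Proportion of 'Pie' responses as a function of VOT (ms)."""
    return 1.0 / (1.0 + np.exp(-slope * (vot - midpoint)))

# Hypothetical continuum steps (ms of VOT) and proportion of 'Pie' responses.
vot_ms = np.array([0, 5, 10, 15, 20, 25, 30, 35])
p_pie = np.array([0.02, 0.05, 0.12, 0.35, 0.68, 0.90, 0.97, 0.99])

# Fit the category boundary (midpoint) and the slope (cue sensitivity).
params, _ = curve_fit(logistic, vot_ms, p_pie, p0=[17.0, 0.5])
midpoint, slope = params
print(f"Category boundary ~{midpoint:.1f} ms VOT, slope {slope:.2f}")
# A shallower fitted slope in a given group or condition would indicate reduced
# sensitivity to the temporal cue, the pattern reported here for older listeners.
```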
Affiliation(s)
- Zilong Xie
- School of Communication Science and Disorders, Florida State University, Tallahassee, Florida 32306, USA
- Casey R Gaskins
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Anna R Tinnemore
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland 20742, USA
- Maureen J Shader
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana 47907, USA
- Sandra Gordon-Salant
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland 20742, USA
- Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland 20742, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland 20742, USA
2. Cychosz M, Winn MB, Goupell MJ. How to vocode: Using channel vocoders for cochlear-implant research. J Acoust Soc Am 2024; 155:2407-2437. PMID: 38568143; PMCID: PMC10994674; DOI: 10.1121/10.0025274.
Abstract
The channel vocoder has become a useful tool for understanding the impact of specific forms of auditory degradation, particularly the spectral and temporal degradation that reflects cochlear-implant processing. Vocoders have many parameters that allow researchers to answer questions about cochlear-implant processing in ways that overcome some logistical complications of controlling for factors in individual cochlear-implant users. However, there is such a large variety in the implementation of vocoders that the term "vocoder" is not specific enough to describe the signal processing used in these experiments. Misunderstanding vocoder parameters can result in experimental confounds or unexpected stimulus distortions. This paper highlights the signal-processing parameters that should be specified when describing vocoder construction. The paper also provides guidance on how to determine vocoder parameters within perception experiments, given the experimenter's goals and research questions, to avoid common signal-processing mistakes. Throughout, we will assume that experimenters are interested in vocoders with the specific goal of better understanding cochlear implants.
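As a concrete companion to the parameters this tutorial discusses, here is a minimal noise-excited channel vocoder sketch. The filter orders, band edges, and 50-Hz envelope cutoff are illustrative assumptions, not the paper's recommendations.

```python
# Minimal noise-excited channel vocoder sketch (illustrative parameter choices).
import numpy as np
from scipy.signal import butter, sosfiltfilt

def channel_vocoder(signal, fs, n_channels=8, f_lo=100.0, f_hi=8000.0, env_cutoff=50.0):
    """Vocode `signal` with `n_channels` log-spaced noise bands."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)          # analysis band edges
    env_sos = butter(2, env_cutoff / (fs / 2), btype="low", output="sos")
    carrier = np.random.randn(len(signal))                     # broadband noise carrier
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band", output="sos")
        band = sosfiltfilt(band_sos, signal)                   # analysis filter
        env = sosfiltfilt(env_sos, np.abs(band))               # rectify + low-pass envelope
        env = np.clip(env, 0.0, None)
        out += sosfiltfilt(band_sos, carrier) * env            # modulate band-limited noise
    return out / (np.max(np.abs(out)) + 1e-12)                 # normalize to avoid clipping

# Usage (hypothetical): y_voc = channel_vocoder(y, fs=22050, n_channels=4)
```

A sinewave vocoder would replace the noise carrier with a tone at each band's center frequency; either way, the choices of filter slope, envelope cutoff, and carrier are exactly the parameters the paper argues should be reported.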
Affiliation(s)
- Margaret Cychosz
- Department of Linguistics, University of California, Los Angeles, Los Angeles, California 90095, USA
- Matthew B Winn
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, Minnesota 55455, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
3. Bosen AK, Doria GM. Identifying Links Between Latent Memory and Speech Recognition Factors. Ear Hear 2024; 45:351-369. PMID: 37882100; PMCID: PMC10922378; DOI: 10.1097/aud.0000000000001430.
Abstract
OBJECTIVES The link between memory ability and speech recognition accuracy is often examined by correlating summary measures of performance across various tasks, but interpretation of such correlations critically depends on assumptions about how these measures map onto underlying factors of interest. The present work presents an alternative approach, wherein latent factor models are fit to trial-level data from multiple tasks to directly test hypotheses about the underlying structure of memory and the extent to which latent memory factors are associated with individual differences in speech recognition accuracy. Latent factor models with different numbers of factors were fit to the data and compared to one another to select the structures which best explained vocoded sentence recognition in a two-talker masker across a range of target-to-masker ratios, performance on three memory tasks, and the link between sentence recognition and memory. DESIGN Young adults with normal hearing (N = 52 for the memory tasks, of which 21 participants also completed the sentence recognition task) completed three memory tasks and one sentence recognition task: reading span, auditory digit span, visual free recall of words, and recognition of 16-channel vocoded Perceptually Robust English Sentence Test Open-set sentences in the presence of a two-talker masker at target-to-masker ratios between +10 and 0 dB. Correlations between summary measures of memory task performance and sentence recognition accuracy were calculated for comparison to prior work, and latent factor models were fit to trial-level data and compared against one another to identify the number of latent factors which best explains the data. Models with one or two latent factors were fit to the sentence recognition data and models with one, two, or three latent factors were fit to the memory task data. Based on findings with these models, full models that linked one speech factor to one, two, or three memory factors were fit to the full data set. Models were compared via Expected Log pointwise Predictive Density and post hoc inspection of model parameters. RESULTS Summary measures were positively correlated across memory tasks and sentence recognition. Latent factor models revealed that sentence recognition accuracy was best explained by a single factor that varied across participants. Memory task performance was best explained by two latent factors, of which one was generally associated with performance on all three tasks and the other was specific to digit span recall accuracy at lists of six digits or more. When these models were combined, the general memory factor was closely related to the sentence recognition factor, whereas the factor specific to digit span had no apparent association with sentence recognition. CONCLUSIONS Comparison of latent factor models enables testing hypotheses about the underlying structure linking cognition and speech recognition. This approach showed that multiple memory tasks assess a common latent factor that is related to individual differences in sentence recognition, although performance on some tasks was associated with multiple factors. Thus, while these tasks provide some convergent assessment of common latent factors, caution is needed when interpreting what they tell us about speech recognition.
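The study fits Bayesian latent factor models to trial-level data and compares them via expected log pointwise predictive density (ELPD). As a much simpler, non-Bayesian analogue of that model-comparison workflow, the sketch below fits 1-, 2-, and 3-factor models to a hypothetical participant-by-task score matrix with scikit-learn and compares cross-validated log-likelihood; the data and variable names are invented for illustration.

```python
# Simplified sketch (not the paper's Bayesian ELPD workflow): compare latent
# factor structures by cross-validated log-likelihood with scikit-learn.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_participants = 52
# Hypothetical per-participant summary scores: reading span, digit span,
# free recall, and vocoded sentence recognition (columns z-scored).
scores = rng.standard_normal((n_participants, 4))

for k in (1, 2, 3):
    fa = FactorAnalysis(n_components=k)
    # cross_val_score uses FactorAnalysis.score (mean log-likelihood) by default.
    ll = cross_val_score(fa, scores, cv=5).mean()
    print(f"{k}-factor model: mean held-out log-likelihood = {ll:.2f}")
# The structure with the best held-out fit (and interpretable loadings,
# inspected via fa.components_ after fitting) would be retained.
```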
4. Abramowitz JC, Goupell MJ, Milvae KD. Cochlear-Implant Simulated Signal Degradation Exacerbates Listening Effort in Older Listeners. Ear Hear 2024; 45:441-450. PMID: 37953469; PMCID: PMC10922081; DOI: 10.1097/aud.0000000000001440.
Abstract
OBJECTIVES Individuals with cochlear implants (CIs) often report that listening requires high levels of effort. Listening effort can increase with decreasing spectral resolution, which occurs when listening with a CI, and can also increase with age. What is not clear is whether these factors interact; older CI listeners potentially experience even higher listening effort with greater signal degradation than younger CI listeners. This study used pupillometry as a physiological index of listening effort to examine whether age, spectral resolution, and their interaction affect listening effort in a simulation of CI listening. DESIGN Fifteen younger normal-hearing listeners (ages 18 to 31 years) and 15 older normal-hearing listeners (ages 65 to 75 years) participated in this experiment; they had normal hearing thresholds from 0.25 to 4 kHz. Participants repeated sentences presented in quiet that were either unprocessed or vocoded, simulating CI listening. Stimuli frequency spectra were limited to below 4 kHz (to control for effects of age-related high-frequency hearing loss), and spectral resolution was decreased by decreasing the number of vocoder channels, with 32-, 16-, and 8-channel conditions. Behavioral speech recognition scores and pupil dilation were recorded during this task. In addition, cognitive measures of working memory and processing speed were obtained to examine if individual differences in these measures predicted changes in pupil dilation. RESULTS For trials where the sentence was recalled correctly, there was a significant interaction between age and spectral resolution, with significantly greater pupil dilation in the older normal-hearing listeners for the 8- and 32-channel vocoded conditions. Cognitive measures did not predict pupil dilation. CONCLUSIONS There was a significant interaction between age and spectral resolution, such that older listeners appear to exert relatively higher listening effort than younger listeners when the signal is highly degraded, with the largest effects observed in the eight-channel condition. The clinical implication is that older listeners may be at higher risk for increased listening effort with a CI.
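A rough sketch of how trial-level pupillometry data of this kind are commonly reduced is shown below: the pupil trace is baseline-corrected against a pre-stimulus window and averaged over an analysis window. The sampling rate, window boundaries, and variable names are assumptions, not details taken from the study.

```python
# Hedged sketch: baseline-corrected pupil dilation for one trial.
# Sampling rate and window choices are illustrative assumptions.
import numpy as np

def pupil_dilation(trace, fs=60.0, baseline_s=1.0, analysis_s=(0.0, 3.0)):
    """Mean pupil dilation relative to a pre-stimulus baseline.

    `trace` is a 1-D pupil-diameter time series whose first `baseline_s`
    seconds precede stimulus onset.
    """
    n_base = int(round(baseline_s * fs))
    baseline = np.nanmean(trace[:n_base])                    # pre-stimulus mean
    onset = n_base
    start = onset + int(round(analysis_s[0] * fs))
    stop = onset + int(round(analysis_s[1] * fs))
    return np.nanmean(trace[start:stop]) - baseline          # dilation re: baseline

# Fake 4-s trace at 60 Hz; larger values in a degraded (e.g., 8-channel vocoded)
# condition would be read as greater listening effort.
trial = np.random.default_rng(1).normal(4.0, 0.1, size=240)
print(f"Mean dilation: {pupil_dilation(trial):+.3f} (trace units)")
```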
Affiliation(s)
- Jordan C. Abramowitz
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742
- Matthew J. Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742
- Kristina DeRoy Milvae
- Department of Communicative Disorders and Sciences, University at Buffalo, Buffalo, NY 14214
5. Davidson A, Souza P. Relationships Between Auditory Processing and Cognitive Abilities in Adults: A Systematic Review. J Speech Lang Hear Res 2024; 67:296-345. PMID: 38147487; DOI: 10.1044/2023_jslhr-22-00716.
Abstract
PURPOSE The contributions from the central auditory and cognitive systems play a major role in communication. Understanding the relationship between auditory and cognitive abilities has implications for auditory rehabilitation for clinical patients. The purpose of this systematic review is to address the question, "In adults, what is the relationship between central auditory processing abilities and cognitive abilities?" METHOD Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines were followed to identify, screen, and determine eligibility for articles that addressed the research question of interest. Medical librarians and subject matter experts assisted in search strategy, keyword review, and structuring the systematic review process. To be included, articles needed to have an auditory measure (either behavioral or electrophysiologic), a cognitive measure that assessed individual ability, and the measures needed to be compared to one another. RESULTS Following two rounds of identification and screening, 126 articles were included for full analysis. Central auditory processing (CAP) measures were grouped into categories (behavioral: speech in noise, altered speech, temporal processing, binaural processing; electrophysiologic: mismatch negativity, P50, N200, P200, and P300). The most common CAP measures were sentence recognition in speech-shaped noise and the P300. Cognitive abilities were grouped into constructs, and the most common construct was working memory. The findings were mixed, encompassing both significant and nonsignificant relationships; therefore, the results do not conclusively establish a direct link between CAP and cognitive abilities. Nonetheless, several consistent relationships emerged across different domains. Distorted or noisy speech was related to working memory or processing speed. Auditory temporal order tasks showed significant relationships with working memory, fluid intelligence, or multidomain cognitive measures. For electrophysiology, relationships were observed between some cortical evoked potentials and working memory or executive/inhibitory processes. Significant results were consistent with the hypothesis that assessments of CAP and cognitive processing would be positively correlated. CONCLUSIONS Results from this systematic review summarize relationships between CAP and cognitive processing, but also underscore the complexity of these constructs, the importance of study design, and the need to select an appropriate measure. The relationship between auditory and cognitive abilities is complex but can provide informative context when creating clinical management plans. This review supports a need to develop guidelines and training for audiologists who wish to consider individual central auditory and cognitive abilities in patient care. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.24855174.
6. Anderson SR, Gallun FJ, Litovsky RY. Interaural asymmetry of dynamic range: Abnormal fusion, bilateral interference, and shifts in attention. Front Neurosci 2023; 16:1018190. PMID: 36699517; PMCID: PMC9869277; DOI: 10.3389/fnins.2022.1018190.
Abstract
Speech information in the better ear interferes with the poorer ear in patients with bilateral cochlear implants (BiCIs) who have large asymmetries in speech intelligibility between ears. The goal of the present study was to assess how each ear impacts, and whether one dominates, speech perception using simulated CI processing in older and younger normal-hearing (ONH and YNH) listeners. Dynamic range (DR) was manipulated symmetrically or asymmetrically across spectral bands in a vocoder. We hypothesized that if abnormal integration of speech information occurs with asymmetrical speech understanding, listeners would demonstrate an atypical preference in accuracy when reporting speech presented to the better ear and fusion of speech between the ears (i.e., an increased number of one-word responses when two words were presented). Results from three speech conditions showed that: (1) When the same word was presented to both ears, speech identification accuracy decreased if one or both ears decreased in DR, but listeners usually reported hearing one word. (2) When two words with different vowels were presented to both ears, speech identification accuracy and percentage of two-word responses decreased consistently as DR decreased in one or both ears. (3) When two rhyming words (e.g., bed and led) previously shown to phonologically fuse between ears (e.g., bled) were presented, listeners instead demonstrated interference as DR decreased. The word reported in (2) and (3) came from the right (symmetric) or better (asymmetric) ear, especially in (3) and for ONH listeners in (2). These results suggest that the ear with poorer dynamic range is downweighted by the auditory system, resulting in abnormal fusion and interference, especially for older listeners.
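One simple way to simulate a restricted dynamic range in a vocoder channel, sketched below, is to compress each band envelope so its level range (in dB) cannot exceed a target DR. The function and its parameters are illustrative assumptions rather than the study's actual processing.

```python
# Hedged sketch: limit the dynamic range (in dB) of a per-band envelope,
# one way to simulate a restricted electrical dynamic range in a vocoder.
import numpy as np

def limit_dynamic_range(envelope, dr_db):
    """Compress `envelope` so that its level range does not exceed `dr_db` dB."""
    env = np.maximum(np.asarray(envelope, dtype=float), 1e-12)
    peak = env.max()
    level_db = 20.0 * np.log10(env / peak)           # 0 dB at the peak, negative below
    full_range_db = -level_db.min()                   # current dynamic range in dB
    if full_range_db <= dr_db:
        return env                                    # already within the target range
    scaled_db = level_db * (dr_db / full_range_db)    # map [-full_range, 0] -> [-dr_db, 0]
    return peak * 10.0 ** (scaled_db / 20.0)

# Usage (hypothetical): apply to each channel envelope before modulating the
# carrier; a smaller dr_db in one ear yields an asymmetric-DR condition.
```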
Affiliation(s)
- Sean R. Anderson
- Waisman Center, University of Wisconsin-Madison, Madison, WI, United States
- Frederick J. Gallun
- Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, OR, United States
- Ruth Y. Litovsky
- Waisman Center, University of Wisconsin-Madison, Madison, WI, United States
- Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, United States
- Department of Surgery, Division of Otolaryngology, University of Wisconsin-Madison, Madison, WI, United States
7. Tinnemore AR, Montero L, Gordon-Salant S, Goupell MJ. The recognition of time-compressed speech as a function of age in listeners with cochlear implants or normal hearing. Front Aging Neurosci 2022; 14:887581. PMID: 36247992; PMCID: PMC9557069; DOI: 10.3389/fnagi.2022.887581.
Abstract
Speech recognition is diminished when a listener has an auditory temporal processing deficit. Such deficits occur in listeners over 65 years old with normal hearing (NH) and with age-related hearing loss, but their source is still unclear. These deficits may be especially apparent when speech occurs at a rapid rate and when a listener is mostly reliant on temporal information to recognize speech, such as when listening with a cochlear implant (CI) or to vocoded speech (a CI simulation). Assessment of the auditory temporal processing abilities of adults with CIs across a wide range of ages should better reveal central or cognitive sources of age-related deficits with rapid speech because CI stimulation bypasses much of the cochlear encoding that is affected by age-related peripheral hearing loss. This study used time-compressed speech at four different degrees of time compression (0, 20, 40, and 60%) to challenge the auditory temporal processing abilities of younger, middle-aged, and older listeners with CIs or with NH. Listeners with NH were presented vocoded speech at four degrees of spectral resolution (unprocessed, 16, 8, and 4 channels). Results showed an interaction between age and degree of time compression. The reduction in speech recognition associated with faster rates of speech was greater for older adults than younger adults. The performance of the middle-aged listeners was more similar to that of the older listeners than to that of the younger listeners, especially at higher degrees of time compression. A measure of cognitive processing speed did not predict the effects of time compression. These results suggest that central auditory changes related to the aging process are at least partially responsible for the auditory temporal processing deficits seen in older listeners, rather than solely peripheral age-related changes.
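For readers unfamiliar with uniform time compression, the sketch below produces it with librosa's phase-vocoder-based time stretching. The file names and the mapping from percent compression to stretch rate (e.g., 40% compression keeps 60% of the original duration) are assumptions, and this need not be the exact algorithm used to prepare the study's stimuli.

```python
# Hedged sketch: uniformly time-compress speech by a given percentage using
# librosa's time_stretch (a phase-vocoder approach, not necessarily the
# algorithm used in the study).
import librosa
import soundfile as sf

def time_compress(path_in, path_out, percent):
    """Compress duration by `percent` (e.g., 40 -> output is 60% as long)."""
    y, fs = librosa.load(path_in, sr=None)
    keep = 1.0 - percent / 100.0              # fraction of the original duration kept
    y_fast = librosa.effects.time_stretch(y, rate=1.0 / keep)
    sf.write(path_out, y_fast, fs)

# Hypothetical usage for the four conditions in the abstract:
# for pct in (0, 20, 40, 60):
#     time_compress("sentence.wav", f"sentence_tc{pct}.wav", pct)
```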
Affiliation(s)
- Anna R. Tinnemore
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, MD, United States
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, United States
- Lauren Montero
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, United States
- Sandra Gordon-Salant
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, MD, United States
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, United States
- Matthew J. Goupell
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, MD, United States
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, United States
8. Shader MJ, Kwon BJ, Gordon-Salant S, Goupell MJ. Open-Set Phoneme Recognition Performance With Varied Temporal Cues in Younger and Older Cochlear Implant Users. J Speech Lang Hear Res 2022; 65:1196-1211. PMID: 35133853; PMCID: PMC9150732; DOI: 10.1044/2021_jslhr-21-00299.
Abstract
PURPOSE The goal of this study was to investigate the effect of age on phoneme recognition performance in which the stimuli varied in the amount of temporal information available in the signal. Chronological age is increasingly recognized as a factor that can limit the amount of benefit an individual can receive from a cochlear implant (CI). Central auditory temporal processing deficits in older listeners may contribute to the performance gap between younger and older CI users on recognition of phonemes varying in temporal cues. METHOD Phoneme recognition was measured at three stimulation rates (500, 900, and 1800 pulses per second) and two envelope modulation frequencies (50 Hz and unfiltered) in 20 CI participants ranging in age from 27 to 85 years. Speech stimuli were multiple word pairs differing in temporal contrasts and were presented via direct stimulation of the electrode array using an eight-channel continuous interleaved sampling strategy. Phoneme recognition performance was evaluated at each stimulation rate condition using both envelope modulation frequencies. RESULTS Duration of deafness was the strongest subject-level predictor of phoneme recognition, with participants with longer durations of deafness having poorer performance overall. Chronological age did not predict performance for any stimulus condition. Additionally, duration of deafness interacted with envelope filtering. Participants with shorter durations of deafness were able to take advantage of higher frequency envelope modulations, while participants with longer durations of deafness were not. CONCLUSIONS Age did not significantly predict phoneme recognition performance. In contrast, longer durations of deafness were associated with a reduced ability to utilize available temporal information within the signal to improve phoneme recognition performance.
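As an illustration of the 50-Hz envelope condition, the sketch below extracts a channel envelope and low-pass filters it at 50 Hz before it would be mapped to per-pulse current levels. The filter design choices are assumptions rather than the study's actual processing.

```python
# Hedged sketch: extract a channel envelope and limit its modulation content
# to 50 Hz, as in the filtered-envelope condition (parameter choices assumed).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def envelope_50hz(band_signal, fs, cutoff_hz=50.0):
    """Hilbert envelope of a band-limited signal, low-pass filtered at `cutoff_hz`."""
    env = np.abs(hilbert(band_signal))                       # instantaneous envelope
    sos = butter(4, cutoff_hz / (fs / 2), btype="low", output="sos")
    return np.clip(sosfiltfilt(sos, env), 0.0, None)         # keep non-negative

# Usage (hypothetical): the resulting envelope would set per-pulse current levels
# on one electrode at 500, 900, or 1800 pulses per second in a CIS-style strategy.
```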
Affiliation(s)
- Maureen J. Shader
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN
- Matthew J. Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park
9. Lewis JH, Castellanos I, Moberly AC. The Impact of Neurocognitive Skills on Recognition of Spectrally Degraded Sentences. J Am Acad Audiol 2021; 32:528-536. PMID: 34965599; DOI: 10.1055/s-0041-1732438.
Abstract
BACKGROUND Recent models theorize that neurocognitive resources are deployed differently during speech recognition depending on task demands, such as the severity of degradation of the signal or modality (auditory vs. audiovisual [AV]). This concept is particularly relevant to the adult cochlear implant (CI) population, considering the large amount of variability among CI users in their spectro-temporal processing abilities. However, disentangling the effects of individual differences in spectro-temporal processing and neurocognitive skills on speech recognition in clinical populations of adult CI users is challenging. Thus, this study investigated the relationship between neurocognitive functions and recognition of spectrally degraded speech in a group of young adult normal-hearing (NH) listeners. PURPOSE The aim of this study was to manipulate the degree of spectral degradation and modality of speech presented to young adult NH listeners to determine whether deployment of neurocognitive skills would be affected. RESEARCH DESIGN Correlational study design. STUDY SAMPLE Twenty-one NH college students. DATA COLLECTION AND ANALYSIS Participants listened to sentences in three spectral-degradation conditions: no degradation (clear sentences); moderate degradation (8-channel noise-vocoded); and high degradation (4-channel noise-vocoded). Thirty sentences were presented in an auditory-only (A-only) modality and in an AV modality. Visual assessments from the National Institutes of Health Toolbox Cognitive Battery were completed to evaluate working memory, inhibition-concentration, cognitive flexibility, and processing speed. Analyses of variance compared speech recognition performance across spectral-degradation conditions and modalities. Bivariate correlation analyses were performed between speech recognition performance and the neurocognitive skills in the various test conditions. RESULTS Main effects on sentence recognition were found for degree of degradation (p < 0.001) and modality (p < 0.001). Inhibition-concentration skills moderately correlated (r = 0.45, p = 0.02) with recognition scores for sentences that were moderately degraded in the A-only condition. No correlations were found between neurocognitive scores and AV speech recognition scores. CONCLUSIONS Inhibition-concentration skills are deployed differentially during sentence recognition, depending on the level of signal degradation. Additional studies will be required to examine these relations in actual clinical populations such as adult CI users.
Affiliation(s)
- Jessica H Lewis
- Department of Otolaryngology - Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Department of Speech and Hearing Science, The Ohio State University, Columbus, Ohio
- Irina Castellanos
- Department of Otolaryngology - Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Aaron C Moberly
- Department of Otolaryngology - Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio
10. Goupell MJ, Draves GT, Litovsky RY. Recognition of vocoded words and sentences in quiet and multi-talker babble with children and adults. PLoS One 2020; 15:e0244632. PMID: 33373427; PMCID: PMC7771688; DOI: 10.1371/journal.pone.0244632.
Abstract
A vocoder is used to simulate cochlear-implant sound processing in normal-hearing listeners. Typically, there is rapid improvement in vocoded speech recognition, but it is unclear if the improvement rate differs across age groups and speech materials. Children (8–10 years) and young adults (18–26 years) were trained and tested over 2 days (4 hours) on recognition of eight-channel noise-vocoded words and sentences, in quiet and in the presence of multi-talker babble at signal-to-noise ratios of 0, +5, and +10 dB. Children achieved poorer performance than adults in all conditions, for both word and sentence recognition. With training, vocoded speech recognition improvement rates were not significantly different between children and adults, suggesting that the rate of learning to process speech cues degraded by vocoding does not differ developmentally across these age groups or types of speech materials. Furthermore, this result confirms that the acutely measured age difference in vocoded speech recognition persists after extended training.
Affiliation(s)
- Matthew J. Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, United States of America
- Garrison T. Draves
- Waisman Center, University of Wisconsin, Madison, WI, United States of America
- Ruth Y. Litovsky
- Waisman Center, University of Wisconsin, Madison, WI, United States of America
- Department of Communication Sciences and Disorders, University of Wisconsin, Madison, WI, United States of America