1
Amini AE, Naples JG, Cortina L, Hwa T, Morcos M, Castellanos I, Moberly AC. A Scoping Review and Meta-Analysis of the Relations Between Cognition and Cochlear Implant Outcomes and the Effect of Quiet Versus Noise Testing Conditions. Ear Hear 2024;45:1339-1352. [PMID: 38953851] [PMCID: PMC11493527] [DOI: 10.1097/aud.0000000000001527]
Abstract
OBJECTIVES: Evidence continues to emerge of associations between cochlear implant (CI) outcomes and cognitive functions in postlingually deafened adults. While there are multiple factors that appear to affect these associations, the impact of speech recognition background testing conditions (i.e., in quiet versus noise) has not been systematically explored. The two aims of this study were to (1) identify associations between speech recognition following cochlear implantation and performance on cognitive tasks, and to (2) investigate the impact of speech testing in quiet versus noise on these associations. Ultimately, we want to understand the conditions that impact this complex relationship between CI outcomes and cognition.

DESIGN: A scoping review following Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines was performed on published literature evaluating the relation between outcomes of cochlear implantation and cognition. The current review evaluates 39 papers that reported associations between over 30 cognitive assessments and speech recognition tests in adult patients with CIs. Six cognitive domains were evaluated: Global Cognition, Inhibition-Concentration, Memory and Learning, Controlled Fluency, Verbal Fluency, and Visuospatial Organization. Meta-analysis was conducted on three cognitive assessments among 12 studies to evaluate relations with speech recognition outcomes. Subgroup analyses were performed to identify whether speech recognition testing in quiet versus in background noise impacted its association with cognitive performance.

RESULTS: Significant associations between cognition and speech recognition in a background of quiet or noise were found in 69% of studies. Tests of Global Cognition and Inhibition-Concentration skills resulted in the highest overall frequency of significant associations with speech recognition (45% and 57%, respectively). Despite the modest proportion of significant associations reported, pooling effect sizes across samples through meta-analysis revealed a moderate positive correlation between tests of Global Cognition (r = +0.37, p < 0.01) as well as Verbal Fluency (r = +0.44, p < 0.01) and postoperative speech recognition skills. Tests of Memory and Learning are most frequently utilized in the setting of CI (in 26 of 39 included studies), yet meta-analysis revealed nonsignificant associations with speech recognition performance in a background of quiet (r = +0.30, p = 0.18) and noise (r = -0.06, p = 0.78).

CONCLUSIONS: Background conditions of speech recognition testing may influence the relation between speech recognition outcomes and cognition. The magnitude of this effect of testing conditions on this relationship appears to vary depending on the cognitive construct being assessed. Overall, Global Cognition and Inhibition-Concentration skills are potentially useful in explaining speech recognition skills following cochlear implantation. Future work should continue to evaluate these relations to appropriately unify cognitive testing opportunities in the setting of cochlear implantation.
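The pooled correlations above come from combining per-study effect sizes. The review does not reproduce its estimator here, but a common choice is a random-effects model on Fisher z-transformed correlations (DerSimonian-Laird). The sketch below illustrates that generic method on hypothetical study values; it is not the authors' code or data.

```python
import numpy as np

def pool_correlations(rs, ns):
    """Random-effects pooling of correlations via Fisher z
    (DerSimonian-Laird estimator); a generic sketch, not the
    review's actual analysis."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)                    # Fisher z-transform
    v = 1.0 / (ns - 3.0)                  # within-study variance of z
    w = 1.0 / v
    z_fe = np.sum(w * z) / np.sum(w)      # fixed-effect mean
    q = np.sum(w * (z - z_fe) ** 2)       # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)  # between-study variance
    w_re = 1.0 / (v + tau2)
    z_re = np.sum(w_re * z) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    lo, hi = np.tanh(z_re - 1.96 * se), np.tanh(z_re + 1.96 * se)
    return np.tanh(z_re), (lo, hi)

# Hypothetical per-study correlations and sample sizes:
r, ci = pool_correlations([0.25, 0.41, 0.52], [30, 24, 45])
print(f"pooled r = {r:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```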
Affiliation(s)
- Andrew E Amini
- Department of Otolaryngology Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- These authors contributed equally to this work
- James G Naples
- Division of Otolaryngology-Head and Neck Surgery, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
- These authors contributed equally to this work
- Luis Cortina
- Department of Otolaryngology Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- Tiffany Hwa
- Division of Otology, Neurotology, & Lateral Skull Base Surgery, Department of Otolaryngology-Head and Neck Surgery, Hospital of the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Mary Morcos
- Department of Otolaryngology Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- Irina Castellanos
- Department of Otolaryngology-Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
- Aaron C Moberly
- Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
2
Leibold LJ, Buss E, Miller MK, Cowan T, McCreery RW, Oleson J, Rodriguez B, Calandruccio L. Development of the Children's English and Spanish Speech Recognition Test: Psychometric Properties, Feasibility, Reliability, and Normative Data. Ear Hear 2024;45:860-877. [PMID: 38334698] [PMCID: PMC11178473] [DOI: 10.1097/aud.0000000000001480]
Abstract
OBJECTIVES: The Children's English and Spanish Speech Recognition (ChEgSS) test is a computer-based tool for assessing closed-set word recognition in English and in Spanish, with a masker that is either speech-shaped noise or competing speech. The present study was conducted to (1) characterize the psychometric properties of the ChEgSS test, (2) evaluate feasibility and reliability for a large cohort of Spanish/English bilingual children with normal hearing, and (3) establish normative data.

DESIGN: Three experiments were conducted to evaluate speech perception in children (4-17 years) and adults (19-40 years) with normal hearing using the ChEgSS test. In Experiment 1, data were collected from Spanish/English bilingual and English monolingual adults at multiple, fixed signal-to-noise ratios. Psychometric functions were fitted to the word-level data to characterize variability across target words in each language and in each masker condition. In Experiment 2, Spanish/English bilingual adults were tested using an adaptive tracking procedure to evaluate the influence of different target-word normalization approaches on the reliability of estimates of masked-speech recognition thresholds corresponding to 70.7% correct word recognition and to determine the optimal number of reversals needed to obtain reliable estimates. In Experiment 3, Spanish/English bilingual and English monolingual children completed speech perception testing using the ChEgSS test to (1) characterize feasibility across age and language group, (2) evaluate test-retest reliability, and (3) establish normative data.

RESULTS: Experiments 1 and 2 yielded data that are essential for stimulus normalization, optimizing threshold estimation procedures, and interpreting threshold data across test language and masker type. Findings obtained from Spanish/English bilingual and English monolingual children with normal hearing in Experiment 3 support feasibility and demonstrate reliability for use with children as young as 4 years of age. Equivalent results for testing in English and Spanish were observed for Spanish/English bilingual children, contingent on adequate proficiency in the target language. Regression-based threshold norms were established for Spanish/English bilingual and English monolingual children between 4 and 17 years of age.

CONCLUSIONS: The present findings indicate the ChEgSS test is appropriate for testing a wide age range of children with normal hearing in either Spanish, English, or both languages. The ChEgSS test is currently being evaluated in a large cohort of patients with hearing loss at pediatric audiology clinics across the United States. Results will be compared with normative data established in the present study and with established clinical measures used to evaluate English- and Spanish-speaking children. Questionnaire data from parents and clinician feedback will be used to further improve test procedures.
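The 70.7%-correct threshold target mentioned under DESIGN is the convergence point of a two-down/one-up staircase (Levitt, 1971), so the adaptive tracking procedure presumably follows that rule. A minimal simulation sketch, with a hypothetical logistic listener and an illustrative step size; averaging the SNRs at the later reversals estimates the threshold:

```python
import numpy as np

rng = np.random.default_rng(1)

def p_correct(snr, midpoint=-8.0, slope=0.8):
    """Hypothetical listener: logistic psychometric function over SNR (dB)."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - midpoint)))

def two_down_one_up(start_snr=10.0, step=2.0, n_reversals=8):
    """2-down/1-up staircase; converges on ~70.7% correct because the
    track is stationary where p(correct)^2 = 0.5."""
    snr, streak, last_move = start_snr, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        correct = rng.random() < p_correct(snr)
        streak = streak + 1 if correct else 0
        if streak == 2:          # two consecutive correct -> harder
            move, streak = -1, 0
        elif not correct:        # any miss -> easier
            move = +1
        else:
            continue             # first correct of a pair: no change yet
        if last_move and move != last_move:
            reversals.append(snr)   # track changed direction here
        last_move = move
        snr += move * step
    return float(np.mean(reversals[2:]))  # discard early reversals

print(f"estimated 70.7%-correct SNR: {two_down_one_up():.1f} dB")
```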
Affiliation(s)
- Lori J Leibold
- Boys Town National Research Hospital, Center for Hearing Research, Omaha, Nebraska, USA
- Emily Buss
- Department of Otolaryngology/HNS, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Margaret K Miller
- Boys Town National Research Hospital, Center for Hearing Research, Omaha, Nebraska, USA
- Tiana Cowan
- Boys Town National Research Hospital, Center for Hearing Research, Omaha, Nebraska, USA
- Ryan W McCreery
- Boys Town National Research Hospital, Center for Hearing Research, Omaha, Nebraska, USA
- Jacob Oleson
- Department of Biostatistics, University of Iowa, Iowa City, Iowa, USA
- Barbara Rodriguez
- Department of Speech and Hearing Sciences, University of New Mexico, Albuquerque, New Mexico, USA
- Lauren Calandruccio
- Department of Psychological Sciences, Case Western Reserve University, Cleveland, Ohio, USA
3
Chernyak BR, Bradlow AR, Keshet J, Goldrick M. A perceptual similarity space for speech based on self-supervised speech representations. J Acoust Soc Am 2024;155:3915-3929. [PMID: 38904539] [DOI: 10.1121/10.0026358]
Abstract
Speech recognition by both humans and machines frequently fails in non-optimal yet common situations. For example, word recognition error rates for second-language (L2) speech can be high, especially under conditions involving background noise. At the same time, both human and machine speech recognition sometimes shows remarkable robustness against signal- and noise-related degradation. Which acoustic features of speech explain this substantial variation in intelligibility? Current approaches align speech to text to extract a small set of pre-defined spectro-temporal properties from specific sounds in particular words. However, variation in these properties leaves much cross-talker variation in intelligibility unexplained. We examine an alternative approach utilizing a perceptual similarity space acquired using self-supervised learning. This approach encodes distinctions between speech samples without requiring pre-defined acoustic features or speech-to-text alignment. We show that L2 English speech samples are less tightly clustered in the space than L1 samples, reflecting variability in English proficiency among L2 talkers. Critically, distances in this similarity space are perceptually meaningful: L1 English listeners have lower recognition accuracy for L2 speakers whose speech is more distant in the space from L1 speech. These results indicate that perceptual similarity may form the basis for an entirely new speech and language analysis approach.
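The two measures the abstract leans on, how tightly a group of talkers clusters in the similarity space and how far each talker sits from L1 speech, are straightforward once every utterance is summarized as a fixed-dimensional embedding. The sketch below uses random vectors as stand-ins for real self-supervised features; the model, dimensionality, and group statistics are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for time-averaged self-supervised embeddings, one row per talker;
# real work would extract these from a pretrained speech model.
l1 = rng.normal(0.0, 1.0, size=(50, 256))
l2 = rng.normal(0.3, 1.4, size=(50, 256))   # more variable, shifted group

def dispersion(x):
    """Mean distance to the group centroid: cluster tightness."""
    return np.linalg.norm(x - x.mean(axis=0), axis=1).mean()

def distance_to_group(x, ref):
    """Each talker's distance from the centroid of a reference group."""
    return np.linalg.norm(x - ref.mean(axis=0), axis=1)

print(f"L1 dispersion: {dispersion(l1):.2f}")
print(f"L2 dispersion: {dispersion(l2):.2f}")   # expected to be larger
d = distance_to_group(l2, l1)
# Per the paper's logic, larger d should predict lower recognition accuracy
# by L1 listeners; in practice d would be correlated with intelligibility.
print(f"L2 distances to L1 centroid: {d.min():.2f} to {d.max():.2f}")
```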
Affiliation(s)
- Bronya R Chernyak
- Faculty of Electrical & Computer Engineering, Technion-Israel Institute of Technology, Haifa 3200003, Israel
- Ann R Bradlow
- Department of Linguistics, Northwestern University, Evanston, Illinois 60208, USA
- Joseph Keshet
- Faculty of Electrical & Computer Engineering, Technion-Israel Institute of Technology, Haifa 3200003, Israel
- Matthew Goldrick
- Department of Linguistics, Northwestern University, Evanston, Illinois 60208, USA
4
Nyirjesy SC, Lewis JH, Hallak D, Conroy S, Moberly AC, Tamati TN. Evaluating Listening Effort in Unilateral, Bimodal, and Bilateral Cochlear Implant Users. Otolaryngol Head Neck Surg 2024;170:1147-1157. [PMID: 38104319] [DOI: 10.1002/ohn.609]
Abstract
OBJECTIVE: Evaluate listening effort (LE) in unilateral, bilateral, and bimodal cochlear implant (CI) users. Establish an easy-to-implement task of LE that could be useful for clinical decision making.

STUDY DESIGN: Prospective cohort study.

SETTING: Tertiary neurotology center.

METHODS: The Sentence Final Word Identification and Recall Task, an established measure of LE, was modified to include challenging listening conditions (multitalker babble, gender, and emotional variation; test), in addition to single-talker sentences (control). Participants listened to lists of sentences in each condition and recalled the last word of each sentence. LE was quantified by percentage of words correctly recalled and was compared across conditions, across CI groups, and within subjects (best aided vs monaural).

RESULTS: A total of 24 adults between the ages of 37 and 82 years enrolled, including 4 unilateral CI users (CI), 10 bilateral CI users (CICI), and 10 bimodal CI users (CIHA). Task condition impacted LE (P < .001), but hearing configuration and listener group did not (P = .90). Working memory capacity and contralateral hearing contributed to individual performance.

CONCLUSION: This study adds to the growing body of literature on LE in challenging listening conditions for CI users and demonstrates feasibility of a simple behavioral task that could be implemented clinically to assess LE. This study also highlights the potential benefits of bimodal hearing and individual hearing and cognitive factors in understanding individual differences in performance, which will be evaluated through further research.
Affiliation(s)
- Sarah C Nyirjesy
- Department of Otolaryngology-Head and Neck Surgery, The Ohio State University, Columbus, Ohio, USA
- Jessica H Lewis
- Department of Otolaryngology-Head and Neck Surgery, The Ohio State University, Columbus, Ohio, USA
- Department of Speech and Hearing Science, The Ohio State University, Columbus, Ohio, USA
- Diana Hallak
- Department of Otolaryngology-Head and Neck Surgery, The Ohio State University, Columbus, Ohio, USA
- Sara Conroy
- Department of Biomedical Informatics, Center for Biostatistics, The Ohio State University, Columbus, Ohio, USA
- Aaron C Moberly
- Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Terrin N Tamati
- Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
5
Tamati TN, Jebens A, Başkent D. Lexical effects on talker discrimination in adult cochlear implant users. J Acoust Soc Am 2024;155:1631-1640. [PMID: 38426835] [PMCID: PMC10908561] [DOI: 10.1121/10.0025011]
Abstract
The lexical and phonological content of an utterance impacts the processing of talker-specific details in normal-hearing (NH) listeners. Adult cochlear implant (CI) users demonstrate difficulties in talker discrimination, particularly for same-gender talker pairs, which may alter the reliance on lexical information in talker discrimination. The current study examined the effect of lexical content on talker discrimination in 24 adult CI users. In a remote AX talker discrimination task, word pairs, produced either by the same talker (ST) or by different talkers of the same gender (DT-SG) or mixed genders (DT-MG), were either lexically easy (high frequency, low neighborhood density) or lexically hard (low frequency, high neighborhood density). The task was completed in quiet and in multi-talker babble (MTB). Results showed an effect of lexical difficulty on talker discrimination for same-gender talker pairs in both quiet and MTB. CI users showed greater sensitivity in quiet as well as less response bias in both quiet and MTB for lexically easy words compared to lexically hard words. These results suggest that CI users make use of lexical content in same-gender talker discrimination, providing evidence for the contribution of linguistic information to the processing of degraded talker information by adult CI users.
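The sensitivity and response bias results are the standard signal detection summary of an AX task. A minimal sketch using the simple yes/no d' and criterion formulas with a log-linear correction; note that same-different tasks are often modeled differently (e.g., an independent-observations model), and all counts here are hypothetical:

```python
from scipy.stats import norm

def dprime_and_bias(hits, n_signal, fas, n_noise):
    """Yes/no d' and criterion c: a 'different' response on a
    different-talker trial counts as a hit, on a same-talker trial as a
    false alarm. The +0.5/+1 correction avoids infinite z-scores."""
    h = (hits + 0.5) / (n_signal + 1.0)
    f = (fas + 0.5) / (n_noise + 1.0)
    zh, zf = norm.ppf(h), norm.ppf(f)
    return zh - zf, -0.5 * (zh + zf)

# Hypothetical counts for one CI listener, same-gender pairs in quiet:
d, c = dprime_and_bias(hits=34, n_signal=48, fas=15, n_noise=48)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")
```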
Affiliation(s)
- Terrin N Tamati
- Department of Otolaryngology, Vanderbilt University Medical Center, 1215 21st Ave S, Nashville, Tennessee 37232, USA
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Almut Jebens
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, The Netherlands
6
Bosen AK, Doria GM. Identifying Links Between Latent Memory and Speech Recognition Factors. Ear Hear 2024;45:351-369. [PMID: 37882100] [PMCID: PMC10922378] [DOI: 10.1097/aud.0000000000001430]
Abstract
OBJECTIVES: The link between memory ability and speech recognition accuracy is often examined by correlating summary measures of performance across various tasks, but interpretation of such correlations critically depends on assumptions about how these measures map onto underlying factors of interest. The present work presents an alternative approach, wherein latent factor models are fit to trial-level data from multiple tasks to directly test hypotheses about the underlying structure of memory and the extent to which latent memory factors are associated with individual differences in speech recognition accuracy. Latent factor models with different numbers of factors were fit to the data and compared to one another to select the structures which best explained vocoded sentence recognition in a two-talker masker across a range of target-to-masker ratios, performance on three memory tasks, and the link between sentence recognition and memory.

DESIGN: Young adults with normal hearing (N = 52 for the memory tasks, of which 21 participants also completed the sentence recognition task) completed three memory tasks and one sentence recognition task: reading span, auditory digit span, visual free recall of words, and recognition of 16-channel vocoded Perceptually Robust English Sentence Test Open-set sentences in the presence of a two-talker masker at target-to-masker ratios between +10 and 0 dB. Correlations between summary measures of memory task performance and sentence recognition accuracy were calculated for comparison to prior work, and latent factor models were fit to trial-level data and compared against one another to identify the number of latent factors which best explains the data. Models with one or two latent factors were fit to the sentence recognition data and models with one, two, or three latent factors were fit to the memory task data. Based on findings with these models, full models that linked one speech factor to one, two, or three memory factors were fit to the full data set. Models were compared via Expected Log pointwise Predictive Density and post hoc inspection of model parameters.

RESULTS: Summary measures were positively correlated across memory tasks and sentence recognition. Latent factor models revealed that sentence recognition accuracy was best explained by a single factor that varied across participants. Memory task performance was best explained by two latent factors, of which one was generally associated with performance on all three tasks and the other was specific to digit span recall accuracy at lists of six digits or more. When these models were combined, the general memory factor was closely related to the sentence recognition factor, whereas the factor specific to digit span had no apparent association with sentence recognition.

CONCLUSIONS: Comparison of latent factor models enables testing hypotheses about the underlying structure linking cognition and speech recognition. This approach showed that multiple memory tasks assess a common latent factor that is related to individual differences in sentence recognition, although performance on some tasks was associated with multiple factors. Thus, while these tasks provide some convergent assessment of common latent factors, caution is needed when interpreting what they tell us about speech recognition.
7
Bosen AK. Characterizing correlations in partial credit speech recognition scoring with beta-binomial distributions. JASA Express Lett 2024;4:025202. [PMID: 38299983] [PMCID: PMC10848658] [DOI: 10.1121/10.0024633]
Abstract
Partial credit scoring for speech recognition tasks can improve measurement precision. However, assessing the magnitude of this improvement with partial credit scoring is challenging because meaningful speech contains contextual cues, which create correlations between the probabilities of correctly identifying each token in a stimulus. Here, beta-binomial distributions were used to estimate recognition accuracy and intraclass correlation for phonemes in words and words in sentences in listeners with cochlear implants (N = 20). Estimates demonstrated substantial intraclass correlation in recognition accuracy within stimuli. These correlations were invariant across individuals. Intraclass correlations should be addressed in power analysis of partial credit scoring.
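Under a beta-binomial model with shape parameters α and β, mean recognition accuracy is α/(α + β) and the intraclass correlation among tokens within a stimulus is 1/(α + β + 1). A maximum-likelihood sketch on hypothetical keyword counts (not the paper's data or code):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln, comb

# Hypothetical data: keywords correct out of 5 for each sentence.
k = np.array([5, 4, 0, 5, 3, 1, 5, 2, 4, 0, 5, 5, 3, 1, 4])
n = 5

def neg_log_lik(params):
    """Beta-binomial log-likelihood, parameterized on the log scale
    so the optimizer keeps alpha and beta positive."""
    a, b = np.exp(params)
    ll = np.log(comb(n, k)) + betaln(k + a, n - k + b) - betaln(a, b)
    return -ll.sum()

res = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
a, b = np.exp(res.x)
print(f"mean accuracy:          {a / (a + b):.2f}")
print(f"intraclass correlation: {1.0 / (a + b + 1.0):.2f}")
```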
Affiliation(s)
- Adam K Bosen
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska 68131, USA
8
Tsironis A, Vlahou E, Kontou P, Bagos P, Kopčo N. Adaptation to Reverberation for Speech Perception: A Systematic Review. Trends Hear 2024;28:23312165241273399. [PMID: 39246212] [PMCID: PMC11384524] [DOI: 10.1177/23312165241273399]
Abstract
In everyday acoustic environments, reverberation alters the speech signal received at the ears. Normal-hearing listeners are robust to these distortions, quickly recalibrating to achieve accurate speech perception. Over the past two decades, multiple studies have investigated the various adaptation mechanisms that listeners use to mitigate the negative impacts of reverberation and improve speech intelligibility. Following the PRISMA guidelines, we performed a systematic review of these studies, with the aim to summarize existing research, identify open questions, and propose future directions. Two researchers independently assessed a total of 661 studies, ultimately including 23 in the review. Our results showed that adaptation to reverberant speech is robust across diverse environments, experimental setups, speech units, and tasks, in noise-masked or unmasked conditions. The time course of adaptation is rapid, sometimes occurring in less than 1 s, but this can vary depending on the reverberation and noise levels of the acoustic environment. Adaptation is stronger in moderately reverberant rooms and minimal in rooms with very intense reverberation. While the mechanisms underlying the recalibration are largely unknown, adaptation to changes in amplitude modulation related to the direct-to-reverberant ratio appears to be the predominant candidate. However, additional factors need to be explored to provide a unified theory for the effect and its applications.
Affiliation(s)
- Avgeris Tsironis
- Department of Computer Science and Biomedical Informatics, University of Thessaly, Lamia, Greece
- Eleni Vlahou
- Department of Computer Science and Biomedical Informatics, University of Thessaly, Lamia, Greece
- Panagiota Kontou
- Department of Computer Science and Biomedical Informatics, University of Thessaly, Lamia, Greece
- Pantelis Bagos
- Department of Computer Science and Biomedical Informatics, University of Thessaly, Lamia, Greece
- Norbert Kopčo
- Institute of Computer Science, Faculty of Science, Pavol Jozef Šafárik University, Košice, Slovakia
9
Hasnain F, Herran RM, Henning SC, Ditmars AM, Pisoni DB, Sehgal ST, Kronenberger WG. Verbal Fluency in Prelingually Deaf, Early Implanted Children and Adolescents With Cochlear Implants. J Speech Lang Hear Res 2023;66:1394-1409. [PMID: 36857026] [PMCID: PMC10457083] [DOI: 10.1044/2022_jslhr-22-00383]
Abstract
PURPOSE: Verbal fluency tasks assess the ability to quickly and efficiently retrieve words from the mental lexicon by requiring subjects to rapidly generate words within a phonological or semantic category. This study investigated differences between cochlear implant users and normal-hearing peers in the clustering and time course of word retrieval during phonological and semantic verbal fluency tasks.

METHOD: Twenty-eight children and adolescents (aged 9-17 years) with cochlear implants and 33 normal-hearing peers completed measures of verbal fluency, nonverbal intelligence, speech perception, and verbal short-term/working memory. Phonological and semantic verbal fluency tests were scored for total words generated, words generated in each 10-s interval of the 1-min task, latency to first word generated, number of word clusters, average cluster size, and number of word/cluster switches.

RESULTS: Children and adolescents with cochlear implants generated fewer words than normal-hearing peers throughout the entire 60-s time interval of the phonological and semantic fluency tasks. Cochlear implant users also had slower start latency times and produced fewer clusters and switches than normal-hearing peers during the phonological fluency task. Speech perception and verbal working memory scores were more strongly associated with verbal fluency scores in children and adolescents with cochlear implants than in normal-hearing peers.

CONCLUSIONS: Cochlear implant users show poorer phonological and semantic verbal fluency than normal-hearing peers, and their verbal fluency is significantly associated with speech perception and verbal working memory. These findings suggest deficits in fluent retrieval of phonological and semantic information from long-term lexical memory in cochlear implant users.
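Cluster and switch scoring of fluency output typically follows Troyer-style rules. A deliberately simplified sketch that treats consecutive words sharing a first letter as a phonemic cluster; real scoring uses richer phonological criteria (shared first two phonemes, rhyme, homophones), so this is illustrative only:

```python
def cluster_stats(words):
    """Count clusters (runs of >= 2 related words), mean cluster size,
    and switches (transitions between runs). Relatedness here is just a
    shared first letter; a stand-in for full phonemic clustering rules."""
    runs, run = [], [words[0]]
    for prev, cur in zip(words, words[1:]):
        if cur[0] == prev[0]:
            run.append(cur)
        else:
            runs.append(run)
            run = [cur]
    runs.append(run)
    clusters = [r for r in runs if len(r) >= 2]
    mean_size = (sum(len(c) for c in clusters) / len(clusters)
                 if clusters else 0.0)
    return len(clusters), mean_size, len(runs) - 1

# Hypothetical letter-F fluency response:
print(cluster_stats(["fish", "fog", "farm", "tree", "fan", "fort", "time"]))
# -> (2, 2.5, 3): two clusters, mean cluster size 2.5, three switches
```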
Affiliation(s)
- Fahad Hasnain
- Department of Otolaryngology – Head and Neck Surgery, Indiana University School of Medicine, Indianapolis
- Reid M. Herran
- Department of Otolaryngology – Head and Neck Surgery, Indiana University School of Medicine, Indianapolis
- Shirley C. Henning
- Department of Otolaryngology – Head and Neck Surgery, Indiana University School of Medicine, Indianapolis
- Allison M. Ditmars
- Department of Otolaryngology – Head and Neck Surgery, Indiana University School of Medicine, Indianapolis
- David B. Pisoni
- Department of Otolaryngology – Head and Neck Surgery, Indiana University School of Medicine, Indianapolis
- Department of Psychological and Brain Sciences, Indiana University, Bloomington
- Susan T. Sehgal
- Department of Otolaryngology – Head and Neck Surgery, Indiana University School of Medicine, Indianapolis
- William G. Kronenberger
- Department of Otolaryngology – Head and Neck Surgery, Indiana University School of Medicine, Indianapolis
- Department of Psychiatry, Indiana University School of Medicine, Indianapolis
10
Maillard E, Joyal M, Murray MM, Tremblay P. Are musical activities associated with enhanced speech perception in noise in adults? A systematic review and meta-analysis. Curr Res Neurobiol 2023;4:100083. [PMID: 37397808] [PMCID: PMC10313871] [DOI: 10.1016/j.crneur.2023.100083]
Abstract
The ability to process speech in noise (SPiN) declines with age, with a detrimental impact on quality of life. Music-making activities such as singing and playing a musical instrument have raised interest as potential prevention strategies for SPiN perception decline because of their positive impact on several brain systems, especially the auditory system, which is critical for SPiN. However, the literature on the effect of musicianship on SPiN performance has yielded mixed results. By critically assessing the existing literature with a systematic review and a meta-analysis, we aim to provide a comprehensive portrait of the relationship between music-making activities and SPiN in different experimental conditions. Thirty-eight of 49 articles, most focusing on young adults, were included in the quantitative analysis. The results show a positive relationship between music-making activities and SPiN, with the strongest effects found in the most challenging listening conditions, and little to no effect in less challenging situations. This pattern of results supports the notion of a relative advantage for musicians on SPiN performance and clarifies the scope of this effect. However, further studies, especially with older adults, using adequate randomization methods, are needed to extend the present conclusions and assess the potential for musical activities to be used to mitigate SPiN decline in seniors.
Affiliation(s)
- Elisabeth Maillard
- CERVO Brain Research Center, Quebec City, G1J 2G3, Canada
- Université Laval, Faculté de Médecine, Département de Réadaptation, Quebec City, G1V 0A6, Canada
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Marilyne Joyal
- CERVO Brain Research Center, Quebec City, G1J 2G3, Canada
- Université Laval, Faculté de Médecine, Département de Réadaptation, Quebec City, G1V 0A6, Canada
- Micah M. Murray
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- The Sense Innovation and Research Center, Lausanne, Sion, Switzerland
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Pascale Tremblay
- CERVO Brain Research Center, Quebec City, G1J 2G3, Canada
- Université Laval, Faculté de Médecine, Département de Réadaptation, Quebec City, G1V 0A6, Canada
11
Harris MS, Hamel BL, Wichert K, Kozlowski K, Mleziva S, Ray C, Pisoni DB, Kronenberger WG, Moberly AC. Contribution of Verbal Learning & Memory and Spectro-Temporal Discrimination to Speech Recognition in Cochlear Implant Users. Laryngoscope 2023;133:661-669. [PMID: 35567421] [PMCID: PMC9659673] [DOI: 10.1002/lary.30210]
Abstract
OBJECTIVES: Existing cochlear implant (CI) outcomes research demonstrates a high degree of variability in device effectiveness among experienced CI users. Increasing evidence suggests that verbal learning and memory (VL&M) may have an influence on speech recognition with CIs. This study examined the relations in CI users between visual measures of VL&M and speech recognition in a series of models that also incorporated spectro-temporal discrimination. Predictions were that (1) speech recognition would be associated with VL&M abilities and (2) VL&M would contribute to speech recognition outcomes above and beyond spectro-temporal discrimination in multivariable models of speech recognition.

METHODS: This cross-sectional study included 30 adult postlingually deaf experienced CI users who completed a nonauditory visual version of the California Verbal Learning Test-Second Edition (v-CVLT-II) to assess VL&M, and the Spectral-Temporally Modulated Ripple Test (SMRT), an auditory measure of spectro-temporal processing. Participants also completed a battery of word and sentence recognition tasks.

RESULTS: CI users showed significant correlations between some v-CVLT-II measures (short-delay free- and cued-recall, retroactive interference, and "subjective" organizational recall strategies) and speech recognition measures. Performance on the SMRT was correlated with all speech recognition measures. Hierarchical multivariable linear regression analyses showed that SMRT performance accounted for a significant degree of speech recognition outcome variance. Moreover, for all speech recognition measures, VL&M scores contributed independently in addition to SMRT.

CONCLUSION: Measures of spectro-temporal discrimination and VL&M were associated with speech recognition in CI users. After accounting for spectro-temporal discrimination, VL&M contributed independently to performance on measures of speech recognition for words and sentences produced by single and multiple talkers.

LEVEL OF EVIDENCE: 3.
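The hierarchical regression logic, entering VL&M after spectro-temporal discrimination and testing the increment in explained variance, can be sketched with statsmodels. The data below are simulated stand-ins, not the study's measurements:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 30                                   # matches the reported sample size
smrt = rng.normal(size=n)                # spectro-temporal discrimination
vlm = 0.4 * smrt + rng.normal(size=n)    # correlated VL&M score
speech = 0.5 * smrt + 0.4 * vlm + rng.normal(size=n)

base = sm.OLS(speech, sm.add_constant(smrt)).fit()
full = sm.OLS(speech, sm.add_constant(np.column_stack([smrt, vlm]))).fit()

print(f"step 1 R^2 (SMRT only):   {base.rsquared:.3f}")
print(f"step 2 R^2 (SMRT + VL&M): {full.rsquared:.3f}")
# F-test for the R^2 change attributable to adding VL&M:
f, p, _ = full.compare_f_test(base)
print(f"R^2-change F = {f:.2f}, p = {p:.3f}")
```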
Affiliation(s)
- Michael S. Harris
- Department of Otolaryngology & Communication Sciences, Medical College of Wisconsin, Milwaukee, WI
- Department of Neurosurgery, Medical College of Wisconsin, Milwaukee, WI
- Kristin Wichert
- Department of Communication Sciences & Disorders, University of Wisconsin - Eau Claire, Eau Claire, WI
- Kristin Kozlowski
- Department of Otolaryngology & Communication Sciences, Medical College of Wisconsin, Milwaukee, WI
- Sarah Mleziva
- Department of Otolaryngology & Communication Sciences, Medical College of Wisconsin, Milwaukee, WI
- Christin Ray
- Department of Otolaryngology – Head & Neck Surgery, The Ohio State Wexner Medical Center, Columbus, OH
- David B. Pisoni
- Speech Research Laboratory, Department of Psychology, Indiana University, Bloomington, IN
- Aaron C. Moberly
- Department of Otolaryngology – Head & Neck Surgery, The Ohio State Wexner Medical Center, Columbus, OH
12
Moberly AC, Afreen H, Schneider KJ, Tamati TN. Preoperative Reading Efficiency as a Predictor of Adult Cochlear Implant Outcomes. Otol Neurotol 2022;43:e1100-e1106. [PMID: 36351224] [PMCID: PMC9694592] [DOI: 10.1097/mao.0000000000003722]
Abstract
HYPOTHESES: (1) Scores of reading efficiency (the Test of Word Reading Efficiency, Second Edition) obtained in adults before cochlear implant surgery will be predictive of speech recognition outcomes 6 months after surgery; and (2) cochlear implantation will lead to improvements in language processing as measured through reading efficiency from preimplantation to postimplantation.

BACKGROUND: Adult cochlear implant (CI) users display remarkable variability in speech recognition outcomes. "Top-down" processing, the use of cognitive resources to make sense of degraded speech, contributes to speech recognition abilities in CI users. One area that has received little attention is the efficiency of lexical and phonological processing. In this study, a visual measure of word and nonword reading efficiency (relying on lexical and phonological processing, respectively) was investigated for its ability to predict CI speech recognition outcomes, as well as to identify any improvements after implantation.

METHODS: Twenty-four postlingually deaf adult CI candidates were tested on the Test of Word Reading Efficiency, Second Edition preoperatively and again 6 months post-CI. Six-month post-CI speech recognition measures were also assessed across a battery of word and sentence recognition.

RESULTS: Preoperative nonword reading scores were moderately predictive of sentence recognition outcomes, but real word reading scores were not; word recognition scores were not predicted by either. No 6-month post-CI improvement was demonstrated in either word or nonword reading efficiency.

CONCLUSION: Phonological processing as measured by the Test of Word Reading Efficiency, Second Edition nonword reading predicts to a moderate degree 6-month sentence recognition outcomes in adult CI users. Reading efficiency did not improve after implantation, although this could be because of the relatively short duration of CI use.
Affiliation(s)
- Aaron C Moberly
- Department of Otolaryngology-Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Hajera Afreen
- Department of Otolaryngology-Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Kara J Schneider
- Department of Otolaryngology-Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio
13
Monson BB, Buss E. On the use of the TIMIT, QuickSIN, NU-6, and other widely used bandlimited speech materials for speech perception experiments. J Acoust Soc Am 2022;152:1639. [PMID: 36182310] [PMCID: PMC9473723] [DOI: 10.1121/10.0013993]
Abstract
The use of spectrally degraded speech signals deprives listeners of acoustic information that is useful for speech perception. Several popular speech corpora, recorded decades ago, have spectral degradations, including limited extended high-frequency (EHF; >8 kHz) content. Although frequency content above 8 kHz is often assumed to play little or no role in speech perception, recent research suggests that EHF content in speech can have a significant beneficial impact on speech perception under a wide range of natural listening conditions. This paper provides an analysis of the spectral content of popular speech corpora used for speech perception research to highlight the potential shortcomings of using bandlimited speech materials. Two corpora analyzed here, the TIMIT and NU-6, have substantial low-frequency spectral degradation (<500 Hz) in addition to EHF degradation. We provide an overview of the phenomena potentially missed by using bandlimited speech signals, and the factors to consider when selecting stimuli that are sensitive to these effects.
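A quick way to screen a recording for the degradations described is a long-term average spectrum with band levels referenced to the mid-band. A sketch with scipy, assuming a mono file sampled well above 16 kHz; the file name is hypothetical:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

fs, x = wavfile.read("corpus_sentence.wav")   # hypothetical corpus file
x = x.astype(float) / np.max(np.abs(x))

f, pxx = welch(x, fs=fs, nperseg=4096)        # long-term average spectrum
pxx_db = 10 * np.log10(pxx + 1e-12)

def band_level(lo, hi):
    """Mean spectrum level (dB) within a frequency band."""
    sel = (f >= lo) & (f < hi)
    return pxx_db[sel].mean()

mid = band_level(500, 8000)
print(f"EHF (8-16 kHz) re mid-band: {band_level(8000, 16000) - mid:+.1f} dB")
print(f"LF (<500 Hz) re mid-band:   {band_level(50, 500) - mid:+.1f} dB")
# Strongly negative values flag the kind of bandlimited recordings the
# paper cautions against.
```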
Affiliation(s)
- Brian B Monson
- Department of Speech and Hearing Science, University of Illinois Urbana-Champaign, Champaign, Illinois 61820, USA
- Emily Buss
- Department of Otolaryngology/HNS, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27514, USA
14
Herbert CJ, Pisoni DB, Kronenberger WG, Nelson RF. Exceptional Speech Recognition Outcomes After Cochlear Implantation: Lessons From Two Case Studies. Am J Audiol 2022;31:552-566. [PMID: 35944073] [PMCID: PMC9886164] [DOI: 10.1044/2022_aja-21-00261]
Abstract
PURPOSE: Individual differences and variability in outcomes following cochlear implantation (CI) in patients with hearing loss remain significant unresolved clinical problems. Case reports of specific individuals allow for detailed examination of the information processing mechanisms underlying variability in outcomes. Two adults who displayed exceptionally good postoperative CI outcomes shortly after activation were administered a novel battery of auditory, speech recognition, and neurocognitive processing tests.

METHOD: A case study of two adult CI recipients with postlingually acquired hearing loss who displayed excellent postoperative speech recognition scores within 3 months of initial activation. Preoperative City University of New York (CUNY) sentence testing and a postoperative battery of sensitive speech recognition tests were combined with auditory and visual neurocognitive information processing tests to uncover their strengths, weaknesses, and milestones.

RESULTS: Preactivation CUNY auditory-only (A) scores were <5% correct, while the auditory + visual (A + V) scores were >74%. Acoustically with their CIs, both participants' scores on speech recognition, environmental sound identification, and speech-in-noise tests exceeded average CI users' scores by 1-2 standard deviations. On nonacoustic visual measures of language and neurocognitive functioning, both participants achieved above-average scores compared with normal-hearing adults in vocabulary knowledge, rapid phonological coding of visually presented words and nonwords, verbal working memory, and executive functioning.

CONCLUSIONS: Measures of multisensory (A + V) speech recognition and visual neurocognitive functioning were associated with excellent speech recognition outcomes in two postlingual adult CI recipients. These neurocognitive information processing domains may underlie the exceptional speech recognition performance of these two patients and offer new directions for research explaining variability in postimplant outcomes. Results further suggest that current clinical outcome measures should be expanded beyond the conventional speech recognition measures to include more sensitive robust tests of speech recognition as well as neurocognitive measures of working memory, vocabulary, lexical access, and executive functioning.
Affiliation(s)
- Carolyn J. Herbert
- DeVault Otologic Research Laboratory, Department of Otolaryngology–Head and Neck Surgery, Indiana University School of Medicine, Indianapolis
- David B. Pisoni
- DeVault Otologic Research Laboratory, Department of Otolaryngology–Head and Neck Surgery, Indiana University School of Medicine, Indianapolis
- Department of Psychological and Brain Sciences, Indiana University Bloomington
- William G. Kronenberger
- DeVault Otologic Research Laboratory, Department of Otolaryngology–Head and Neck Surgery, Indiana University School of Medicine, Indianapolis
- Department of Psychiatry, Indiana University School of Medicine, Indianapolis
- Rick F. Nelson
- DeVault Otologic Research Laboratory, Department of Otolaryngology–Head and Neck Surgery, Indiana University School of Medicine, Indianapolis
15
Li MM, Moberly AC, Tamati TN. Factors affecting talker discrimination ability in adult cochlear implant users. J Commun Disord 2022;99:106255. [PMID: 35988314] [PMCID: PMC10659049] [DOI: 10.1016/j.jcomdis.2022.106255]
Abstract
INTRODUCTION: Real-world speech communication involves interacting with many talkers with diverse voices and accents. Many adults with cochlear implants (CIs) demonstrate poor talker discrimination, which may contribute to real-world communication difficulties. However, the factors contributing to talker discrimination ability, and how discrimination ability relates to speech recognition outcomes in adult CI users, are still unknown. The current study investigated talker discrimination ability in adult CI users, and the contributions of age, auditory sensitivity, and neurocognitive skills. In addition, the relation between talker discrimination ability and multiple-talker sentence recognition was explored.

METHODS: Fourteen post-lingually deaf adult CI users (3 female, 11 male) with ≥1 year of CI use completed a talker discrimination task. Participants listened to two monosyllabic English words, produced by the same talker or by two different talkers, and indicated if the words were produced by the same or different talkers. Nine female and nine male native English talkers were paired, resulting in same- and different-talker pairs as well as same-gender and mixed-gender pairs. Participants also completed measures of spectro-temporal processing, neurocognitive skills, and multiple-talker sentence recognition.

RESULTS: CI users showed poor same-gender talker discrimination, but relatively good mixed-gender talker discrimination. Older age and weaker neurocognitive skills, in particular inhibitory control, were associated with less accurate mixed-gender talker discrimination. Same-gender discrimination was significantly related to multiple-talker sentence recognition accuracy.

CONCLUSION: Adult CI users demonstrate overall poor talker discrimination ability. Individual differences in mixed-gender discrimination ability were related to age and neurocognitive skills, suggesting that these factors contribute to the ability to make use of available, degraded talker characteristics. Same-gender talker discrimination was associated with multiple-talker sentence recognition, suggesting that access to subtle talker-specific cues may be important for speech recognition in challenging listening conditions.
Affiliation(s)
- Michael M Li
- The Ohio State University Wexner Medical Center, Department of Otolaryngology - Head & Neck Surgery, Columbus, OH, USA
- Aaron C Moberly
- The Ohio State University Wexner Medical Center, Department of Otolaryngology - Head & Neck Surgery, Columbus, OH, USA
- Terrin N Tamati
- The Ohio State University Wexner Medical Center, Department of Otolaryngology - Head & Neck Surgery, Columbus, OH, USA
- University Medical Center Groningen, University of Groningen, Department of Otorhinolaryngology/Head and Neck Surgery, Groningen, the Netherlands
16
Hisagi M, Baker M, Alvarado E, Shafiro V. Online Assessment of Speech Perception and Auditory Spectrotemporal Processing in Spanish-English Bilinguals. Am J Audiol 2022;31:936-949. [PMID: 35537127] [DOI: 10.1044/2022_aja-21-00225]
Abstract
PURPOSE: There is limited access to audiology services for the growing population of Spanish-English bilinguals in the United States. Online auditory testing can potentially provide a cost-effective alternative to in-person visits. However, even for bilinguals with high English proficiency, age of English acquisition may affect speech perception accuracy. This study used a comprehensive test battery to assess speech perception and spectrotemporal processing abilities in Spanish-English bilinguals and to evaluate susceptibility of different tests to effects of native language.

METHOD: The online battery comprised three tests of speech in quiet (vowel and consonant identification and words in sentences), four tests of speech perception in noise (two for intelligibility and two for comprehension), and three tests of spectrotemporal processing (two tests of stochastically modulated pattern discrimination and one test of spectral resolution). Participants were 28 adult Spanish-English bilinguals whose English acquisition began either early (≤6 years old) or late (≥7 years old) and 18 English monolingual speakers.

RESULTS: Significant differences were found in six of the 10 tests. The differences were most pronounced for vowel perception in quiet, a speech-in-noise test, and the two tests of speech comprehension in noise. Late bilinguals consistently scored lower than native English speakers or early bilinguals. In contrast, no differences between groups were observed for digits-in-noise or the three tests of spectrotemporal processing abilities.

CONCLUSION: The findings suggest initial feasibility of online assessment in this population and can inform selection of tests for auditory assessment of Spanish-English bilinguals.
Affiliation(s)
- Miwako Hisagi
- Department of Communication Disorders, California State University, Los Angeles
- Melissa Baker
- Long Island Doctor of Audiology Consortium, Hofstra University, Hempstead, NY
- Elizabeth Alvarado
- Department of Communication Disorders, California State University, Los Angeles
- Valeriy Shafiro
- Department of Communication Disorders and Sciences, Rush University Medical Center, Chicago, IL
17
Tamati TN, Sevich VA, Clausing EM, Moberly AC. Lexical Effects on the Perceived Clarity of Noise-Vocoded Speech in Younger and Older Listeners. Front Psychol 2022;13:837644. [PMID: 35432072] [PMCID: PMC9010567] [DOI: 10.3389/fpsyg.2022.837644]
Abstract
When listening to degraded speech, such as speech delivered by a cochlear implant (CI), listeners make use of top-down linguistic knowledge to facilitate speech recognition. Lexical knowledge supports speech recognition and enhances the perceived clarity of speech. Yet, the extent to which lexical knowledge can be used to effectively compensate for degraded input may depend on the degree of degradation and the listener's age. The current study investigated lexical effects in the compensation for speech that was degraded via noise-vocoding in younger and older listeners. In an online experiment, younger and older normal-hearing (NH) listeners rated the clarity of noise-vocoded sentences on a scale from 1 ("very unclear") to 7 ("completely clear"). Lexical information was provided by matching text primes and the lexical content of the target utterance. Half of the sentences were preceded by a matching text prime, while half were preceded by a non-matching prime. Each sentence also consisted of three key words of high or low lexical frequency and neighborhood density. Sentences were processed to simulate CI hearing, using an eight-channel noise vocoder with varying filter slopes. Results showed that lexical information impacted the perceived clarity of noise-vocoded speech. Noise-vocoded speech was perceived as clearer when preceded by a matching prime, and when sentences included key words with high lexical frequency and low neighborhood density. However, the strength of the lexical effects depended on the level of degradation. Matching text primes had a greater impact for speech with poorer spectral resolution, but lexical content had a smaller impact for speech with poorer spectral resolution. Finally, lexical information appeared to benefit both younger and older listeners. Findings demonstrate that lexical knowledge can be employed by younger and older listeners in cognitive compensation during the processing of noise-vocoded speech. However, lexical content may not be as reliable when the signal is highly degraded. Clinical implications are that for adult CI users, lexical knowledge might be used to compensate for the degraded speech signal, regardless of age, but some CI users may be hindered by a relatively poor signal.
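An eight-channel noise vocoder like the one described band-pass filters the speech, extracts each band's envelope, and uses it to remodulate band-matched noise. A minimal sketch; the band edges, filter order, and Hilbert envelope are assumptions (the study varied the filter slopes):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_ch=8, f_lo=100.0, f_hi=7000.0, order=4):
    """Channel vocoder: analyze speech into log-spaced bands, take each
    band's envelope, and impose it on noise filtered into the same band."""
    x = np.asarray(x, float)
    edges = np.geomspace(f_lo, f_hi, n_ch + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))                    # band envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))
        out += env * carrier
    return out / np.max(np.abs(out))                   # normalize

# Usage (hypothetical file):
#   fs, x = scipy.io.wavfile.read("sentence.wav")
#   y = noise_vocode(x, fs)
```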
Affiliation(s)
- Terrin N. Tamati
- Department of Otolaryngology – Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, OH, United States
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Victoria A. Sevich
- Department of Speech and Hearing Science, The Ohio State University, Columbus, OH, United States
- Emily M. Clausing
- Department of Otolaryngology – Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, OH, United States
- Aaron C. Moberly
- Department of Otolaryngology – Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, OH, United States
18
Ratnanather JT, Wang LC, Bae SH, O'Neill ER, Sagi E, Tward DJ. Visualization of Speech Perception Analysis via Phoneme Alignment: A Pilot Study. Front Neurol 2022;12:724800. [PMID: 35087462] [PMCID: PMC8787339] [DOI: 10.3389/fneur.2021.724800]
Abstract
Objective: Speech tests assess the ability of people with hearing loss to comprehend speech with a hearing aid or cochlear implant. The tests are usually at the word or sentence level. However, few tests analyze errors at the phoneme level. So, there is a need for an automated program to visualize in real time the accuracy of phonemes in these tests.

Method: The program reads in stimulus-response pairs and obtains their phonemic representations from an open-source digital pronouncing dictionary. The stimulus phonemes are aligned with the response phonemes via a modification of the Levenshtein Minimum Edit Distance algorithm. Alignment is achieved via dynamic programming with modified costs based on phonological features for insertions, deletions, and substitutions. The accuracy for each phoneme is based on the F1-score. Accuracy is visualized with respect to place and manner (consonants) or height (vowels). Confusion matrices for the phonemes are used in an information transfer analysis of ten phonological features. A histogram of the information transfer for the features over a frequency-like range is presented as a phonemegram.

Results: The program was applied to two datasets. One consisted of test data at the sentence and word levels. Stimulus-response sentence pairs from six volunteers with different degrees of hearing loss and modes of amplification were analyzed. Four volunteers listened to sentences from a mobile auditory training app while two listened to sentences from a clinical speech test. Stimulus-response word pairs from three lists were also analyzed. The other dataset consisted of published stimulus-response pairs from experiments of 31 participants with cochlear implants listening to 400 Basic English Lexicon sentences via different talkers at four different SNR levels. In all cases, visualization was obtained in real time. Analysis of 12,400 actual and random pairs showed that the program was robust to the nature of the pairs.

Conclusion: It is possible to automate the alignment of phonemes extracted from stimulus-response pairs from speech tests in real time. The alignment then makes it possible to visualize the accuracy of responses via phonological features in two ways. Such visualization of phoneme alignment and accuracy could aid clinicians and scientists.
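The alignment step described under Method, Levenshtein dynamic programming with substitution costs discounted for phonologically similar phonemes, can be sketched as below. The toy feature table is hypothetical and far smaller than the dictionary-derived feature set the program actually uses:

```python
# Feature-weighted Levenshtein alignment cost between phoneme sequences.
FEATURES = {
    "p": {"stop", "labial"}, "b": {"stop", "labial", "voiced"},
    "t": {"stop", "alveolar"}, "d": {"stop", "alveolar", "voiced"},
    "s": {"fricative", "alveolar"}, "z": {"fricative", "alveolar", "voiced"},
}

def sub_cost(a, b):
    """Substitutions are cheaper when phonemes share more features."""
    if a == b:
        return 0.0
    fa, fb = FEATURES.get(a, set()), FEATURES.get(b, set())
    shared = len(fa & fb) / max(1, len(fa | fb))
    return 1.0 - 0.5 * shared

def align_cost(stim, resp, indel=1.0):
    """Dynamic-programming edit distance with feature-based costs."""
    m, n = len(stim), len(resp)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * indel
    for j in range(1, n + 1):
        d[0][j] = j * indel
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + indel,              # deletion
                          d[i][j - 1] + indel,              # insertion
                          d[i - 1][j - 1]
                          + sub_cost(stim[i - 1], resp[j - 1]))
    return d[m][n]

print(align_cost(list("bs"), list("ps")))  # b->p shares features: cheap sub
```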
Affiliation(s)
- J Tilak Ratnanather
- Center for Imaging Science and Institute for Computational Medicine, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
| | - Lydia C Wang
- Center for Imaging Science and Institute for Computational Medicine, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
| | - Seung-Ho Bae
- Center for Imaging Science and Institute for Computational Medicine, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
| | - Erin R O'Neill
- Center for Applied and Translational Sensory Sciences, University of Minnesota, Minneapolis, MN, United States
| | - Elad Sagi
- Department of Otolaryngology, New York University School of Medicine, New York, NY, United States
| | - Daniel J Tward
- Center for Imaging Science and Institute for Computational Medicine, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States.,Departments of Computational Medicine and Neurology, University of California, Los Angeles, Los Angeles, CA, United States
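The alignment step described in this entry is a weighted minimum-edit-distance problem, so its core can be illustrated compactly. The sketch below is not the authors' released program: the `feature_cost` function is a hypothetical stand-in for their phonological-feature costs, shown only to make the dynamic-programming structure concrete.

```python
# Minimal sketch of phoneme alignment via a weighted Levenshtein
# (minimum edit distance) algorithm, in the spirit of the method
# described above. The feature-based costs are illustrative
# placeholders, not the published program's costs.

def feature_cost(p1: str, p2: str) -> float:
    """Hypothetical substitution cost on ARPAbet-style symbols:
    cheaper within the vowel class or within the consonant class."""
    VOWELS = {"AA", "AE", "AH", "EH", "IH", "IY", "UH", "UW"}
    if p1 == p2:
        return 0.0
    return 0.5 if (p1 in VOWELS) == (p2 in VOWELS) else 1.0

def align(stimulus, response, ins_cost=1.0, del_cost=1.0):
    """Dynamic-programming alignment; returns (total cost, aligned pairs)."""
    n, m = len(stimulus), len(response)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + del_cost
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + ins_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = min(
                dp[i - 1][j - 1] + feature_cost(stimulus[i - 1], response[j - 1]),
                dp[i - 1][j] + del_cost,   # phoneme deleted from stimulus
                dp[i][j - 1] + ins_cost,   # phoneme inserted in response
            )
    # Trace back to recover the aligned phoneme pairs.
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + feature_cost(
                stimulus[i - 1], response[j - 1]):
            pairs.append((stimulus[i - 1], response[j - 1])); i -= 1; j -= 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + del_cost:
            pairs.append((stimulus[i - 1], None)); i -= 1
        else:
            pairs.append((None, response[j - 1])); j -= 1
    return dp[n][m], list(reversed(pairs))

# "cat" /K AE T/ heard as "cap" /K AE P/: cost 0.5, T aligned with P
print(align(["K", "AE", "T"], ["K", "AE", "P"]))
```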
19
O'Neill ER, Basile JD, Nelson P. Individual Hearing Outcomes in Cochlear Implant Users Influence Social Engagement and Listening Behavior in Everyday Life. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:4982-4999. [PMID: 34705529 DOI: 10.1044/2021_jslhr-21-00249] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
PURPOSE The goal of this study was to assess the listening behavior and social engagement of cochlear implant (CI) users and normal-hearing (NH) adults in daily life and relate these actions to objective hearing outcomes. METHOD Ecological momentary assessments (EMAs) collected using a smartphone app were used to probe patterns of listening behavior in CI users and age-matched NH adults to detect differences in social engagement and listening behavior in daily life. Participants completed very short surveys every 2 hr to provide snapshots of typical, everyday listening and socializing, as well as longer, reflective surveys at the end of the day to assess listening strategies and coping behavior. Speech perception testing, with accompanying ratings of task difficulty, was also performed in a lab setting to uncover possible correlations between objective and subjective listening behavior. RESULTS Comparisons between speech intelligibility testing and EMA responses showed that poorer-performing CI users spent more time at home and less time conversing with others than did higher-performing CI users and their NH peers. Perception of listening difficulty also differed markedly between CI users and NH listeners, with CI users reporting little difficulty despite poor speech perception performance. However, both CI users and NH listeners spent most of their time in listening environments they considered "not difficult." CI users also reported using several compensatory listening strategies, such as visual cues, whereas NH listeners did not. CONCLUSION Overall, the data indicate systematic differences between how individual CI users and NH adults navigate and manipulate listening and social environments in everyday life.
Affiliation(s)
- Erin R O'Neill
- Department of Psychology, University of Minnesota, Twin Cities, Minneapolis
- John D Basile
- Department of Biomedical Engineering, University of Minnesota, Twin Cities, Minneapolis
- Peggy Nelson
- Department of Speech-Language-Hearing Sciences, Center for Applied and Translational Sensory Science (CATSS), University of Minnesota, Twin Cities, Minneapolis
20
Moberly AC, Lewis JH, Vasil KJ, Ray C, Tamati TN. Bottom-Up Signal Quality Impacts the Role of Top-Down Cognitive-Linguistic Processing During Speech Recognition by Adults with Cochlear Implants. Otol Neurotol 2021; 42:S33-S41. [PMID: 34766942 PMCID: PMC8597903 DOI: 10.1097/mao.0000000000003377] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
HYPOTHESES Significant variability persists in speech recognition outcomes in adults with cochlear implants (CIs). Sensory ("bottom-up") and cognitive-linguistic ("top-down") processes help explain this variability. However, the interactions of these bottom-up and top-down factors remain unclear. One hypothesis was tested: top-down processes would contribute differentially to speech recognition, depending on the fidelity of bottom-up input. BACKGROUND Bottom-up spectro-temporal processing, assessed using a Spectral-Temporally Modulated Ripple Test (SMRT), is associated with CI speech recognition outcomes. Similarly, top-down cognitive-linguistic skills relate to outcomes, including working memory capacity, inhibition-concentration, speed of lexical access, and nonverbal reasoning. METHODS Fifty-one adult CI users were tested for word and sentence recognition, along with performance on the SMRT and a battery of cognitive-linguistic tests. The group was divided into "low-," "intermediate-," and "high-SMRT" groups, based on SMRT scores. Separate correlation analyses were performed for each subgroup between a composite score of cognitive-linguistic processing and speech recognition. RESULTS Associations of top-down composite scores with speech recognition were not significant for the low-SMRT group. In contrast, these associations were significant and of medium effect size (Spearman's rho = 0.44-0.46) for two sentence types for the intermediate-SMRT group. For the high-SMRT group, top-down scores were associated with both word and sentence recognition, with medium to large effect sizes (Spearman's rho = 0.45-0.58). CONCLUSIONS Top-down processes contribute differentially to speech recognition in CI users based on the quality of bottom-up input. Findings have clinical implications for individualized treatment approaches relying on bottom-up device programming or top-down rehabilitation approaches.
Affiliation(s)
- Aaron C Moberly
- Department of Otolaryngology - Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
- Jessica H Lewis
- Department of Otolaryngology - Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
- Kara J Vasil
- Department of Otolaryngology - Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
- Christin Ray
- Department of Otolaryngology - Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
- Terrin N Tamati
- Department of Otolaryngology - Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
- Department of Otorhinolaryngology - Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
21
Fogerty D, Ahlstrom JB, Dubno JR. Glimpsing keywords across sentences in noise: A microstructural analysis of acoustic, lexical, and listener factors. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 150:1979. [PMID: 34598610 PMCID: PMC8448575 DOI: 10.1121/10.0006238] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
This study investigated how acoustic and lexical word-level factors and listener-level factors of auditory thresholds and cognitive-linguistic processing contribute to the microstructure of sentence recognition in unmodulated and speech-modulated noise. The modulation depth of the modulated masker was changed by expanding and compressing the temporal envelope to control glimpsing opportunities. Younger adults with normal hearing (YNH) and older adults with normal and impaired hearing were tested. A second group of YNH was tested under acoustically identical conditions to the hearing-impaired group, who received spectral shaping. For all of the groups, speech recognition declined and masking release increased for later keywords in the sentence, consistent with the decline in signal-to-noise ratio across word positions. The acoustic glimpse proportion and lexical word frequency of individual keywords predicted recognition under different noise conditions. For the older adults, better auditory thresholds and better working memory abilities facilitated sentence recognition. Vocabulary knowledge contributed more to sentence recognition for younger than for older adults. These results demonstrate that acoustic and lexical factors contribute to the recognition of individual words within a sentence, but relative contributions vary based on the noise modulation characteristics. Taken together, acoustic, lexical, and listener factors contribute to how individuals recognize keywords within sentences.
Affiliation(s)
- Daniel Fogerty
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, Champaign, Illinois 61820, USA
- Jayne B Ahlstrom
- Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina 29425, USA
- Judy R Dubno
- Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina 29425, USA
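The "acoustic glimpse proportion" predictor in this abstract can be approximated as the fraction of time-frequency regions where the speech exceeds the masker by some criterion. A minimal sketch, assuming an STFT analysis and a 3 dB criterion (both are assumptions; the study's exact glimpsing parameters are not reproduced here):

```python
# Sketch: glimpse proportion as the fraction of time-frequency units
# where the local speech-to-masker ratio exceeds a criterion.
# Window length and the 3 dB criterion are illustrative assumptions.
import numpy as np
from scipy.signal import stft

def glimpse_proportion(speech, masker, fs, criterion_db=3.0):
    _, _, S = stft(speech, fs=fs, nperseg=256)
    _, _, M = stft(masker, fs=fs, nperseg=256)
    eps = 1e-12
    local_snr_db = 10 * np.log10((np.abs(S) ** 2 + eps) / (np.abs(M) ** 2 + eps))
    return np.mean(local_snr_db > criterion_db)

# Toy usage with synthetic stand-ins for speech and noise
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 440 * t)
masker = 0.5 * np.random.randn(fs)
print(f"glimpse proportion: {glimpse_proportion(speech, masker, fs):.2f}")
```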
22
Willberg T, Sivonen V, Linder P, Dietz A. Comparing the Speech Perception of Cochlear Implant Users with Three Different Finnish Speech Intelligibility Tests in Noise. J Clin Med 2021; 10:jcm10163666. [PMID: 34441961 PMCID: PMC8397150 DOI: 10.3390/jcm10163666] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2021] [Revised: 08/08/2021] [Accepted: 08/18/2021] [Indexed: 11/16/2022] Open
Abstract
Background: A large number of different speech-in-noise (SIN) tests are available for testing cochlear implant (CI) recipients, but few studies have compared the different tests in the same patient population to assess how well their results correlate. Methods: A clinically representative group of 80 CI users completed the Finnish versions of the matrix sentence test, the simplified matrix sentence test, and the digit triplet test. The results were analyzed for correlations between the different tests and for differences among the participants, including age and device modality. Results: Strong and statistically significant correlations were observed between all of the tests. No floor or ceiling effects were observed with any of the tests when using the adaptive test procedure. Neither age nor length of device use correlated with SIN perception, but bilateral CI users showed slightly better results in comparison to unilateral or bimodal users. Conclusions: Three SIN tests that differ in the length and complexity of their test material provided comparable results in a diverse CI user group.
Affiliation(s)
- Tytti Willberg
- Department of Otorhinolaryngology, Turku University Hospital, 20521 Turku, Finland
- Institute of Clinical Medicine, University of Eastern Finland, 70211 Kuopio, Finland
- Ville Sivonen
- Department of Otorhinolaryngology—Head and Neck Surgery, Head and Neck Center, Helsinki University Hospital and University of Helsinki, 00029 Helsinki, Finland
- Pia Linder
- Department of Otorhinolaryngology, Kuopio University Hospital, 70029 Kuopio, Finland
- Aarno Dietz
- Department of Otorhinolaryngology, Kuopio University Hospital, 70029 Kuopio, Finland
23
MacPhail ME, Connell NT, Totten DJ, Gray MT, Pisoni D, Yates CW, Nelson RF. Speech Recognition Outcomes in Adults With Slim Straight and Slim Modiolar Cochlear Implant Electrode Arrays. Otolaryngol Head Neck Surg 2021; 166:943-950. [PMID: 34399646 DOI: 10.1177/01945998211036339] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
OBJECTIVE To compare differences in audiologic outcomes between slim modiolar electrode (SME) CI532 and slim lateral wall electrode (SLW) CI522 cochlear implant recipients. STUDY DESIGN Retrospective cohort study. SETTING Tertiary academic hospital. METHODS Comparison of postoperative AzBio sentence scores in quiet (percentage correct) in adult cochlear implant recipients with SME or SLW matched for preoperative AzBio sentence scores in quiet and aided and unaided pure tone average. RESULTS Patients with SLW (n = 52) and patients with SME (n = 37) had a similar mean (SD) age (62.0 [18.2] vs 62.6 [14.6] years, respectively), mean preoperative aided pure tone average (55.9 [20.4] vs 58.1 [16.4] dB; P = .59), and mean AzBio score (percentage correct, 11.1% [13.3%] vs 8.0% [11.5%]; P = .25). At last follow-up (SLW vs SME, 9.0 [2.9] vs 9.9 [2.6] months), postoperative mean AzBio scores in quiet were not significantly different (percentage correct, 70.8% [21.3%] vs 65.6% [24.5%]; P = .29), and data log usage was similar (12.9 [4.0] vs 11.3 [4.1] hours; P = .07). In patients with preoperative AzBio <10% correct, the 6-month mean AzBio scores were significantly better with SLW than SME (percentage correct, 70.6% [22.9%] vs 53.9% [30.3%]; P = .02). The intraoperative tip rollover rate was 8% for SME and 0% for SLW. CONCLUSIONS Cochlear implantation with SLW and SME provides comparable improvement in audiologic functioning. SME does not exhibit superior speech recognition outcomes when compared with SLW.
Affiliation(s)
- Nathan T Connell
- Department of Otolaryngology-Head and Neck Surgery, School of Medicine, Indiana University, Indianapolis, Indiana, USA
- Douglas J Totten
- Department of Otolaryngology-Head and Neck Surgery, School of Medicine, Indiana University, Indianapolis, Indiana, USA
- Mitchell T Gray
- School of Medicine, Indiana University, Indianapolis, Indiana, USA
- David Pisoni
- Department of Otolaryngology-Head and Neck Surgery, School of Medicine, Indiana University, Indianapolis, Indiana, USA
- Charles W Yates
- School of Medicine, Indiana University, Indianapolis, Indiana, USA
- Department of Otolaryngology-Head and Neck Surgery, School of Medicine, Indiana University, Indianapolis, Indiana, USA
- Rick F Nelson
- School of Medicine, Indiana University, Indianapolis, Indiana, USA
- Department of Otolaryngology-Head and Neck Surgery, School of Medicine, Indiana University, Indianapolis, Indiana, USA
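The matched-group comparisons above (mean AzBio scores with accompanying P values) are standard two-sample tests. A generic sketch with hypothetical scores, using Welch's t-test as one reasonable choice rather than the specific procedure the authors ran:

```python
# Generic two-sample comparison of the kind reported above.
# The scores are hypothetical, not the study's data.
import numpy as np
from scipy import stats

slw = np.array([72, 68, 81, 65, 74, 70, 66, 77])  # % correct, SLW group
sme = np.array([60, 55, 70, 48, 66, 58, 52, 63])  # % correct, SME group

t, p = stats.ttest_ind(slw, sme, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.3f}")
```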
24
Bosen AK, Sevich VA, Cannon SA. Forward Digit Span and Word Familiarity Do Not Correlate With Differences in Speech Recognition in Individuals With Cochlear Implants After Accounting for Auditory Resolution. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:3330-3342. [PMID: 34251908 PMCID: PMC8740688 DOI: 10.1044/2021_jslhr-20-00574] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/25/2020] [Revised: 01/12/2021] [Accepted: 04/09/2021] [Indexed: 06/07/2023]
Abstract
Purpose In individuals with cochlear implants, speech recognition is not associated with tests of working memory that primarily reflect storage, such as forward digit span. In contrast, our previous work found that vocoded speech recognition in individuals with normal hearing was correlated with performance on a forward digit span task. A possible explanation for this difference across groups is that variability in auditory resolution across individuals with cochlear implants could conceal the true relationship between speech and memory tasks. Here, our goal was to determine if performance on forward digit span and speech recognition tasks are correlated in individuals with cochlear implants after controlling for individual differences in auditory resolution. Method We measured sentence recognition ability in 20 individuals with cochlear implants with Perceptually Robust English Sentence Test Open-set sentences. Spectral and temporal modulation detection tasks were used to assess individual differences in auditory resolution, auditory forward digit span was used to assess working memory storage, and self-reported word familiarity was used to assess vocabulary. Results Individual differences in speech recognition were predicted by spectral and temporal resolution. A correlation was found between forward digit span and speech recognition, but this correlation was not significant after controlling for spectral and temporal resolution. No relationship was found between word familiarity and speech recognition. Forward digit span performance was not associated with individual differences in auditory resolution. Conclusions Our findings support the idea that sentence recognition in individuals with cochlear implants is primarily limited by individual differences in working memory processing, not storage. Studies examining the relationship between speech and memory should control for individual differences in auditory resolution.
Affiliation(s)
- Victoria A. Sevich
- Boys Town National Research Hospital, Omaha, NE
- The Ohio State University, Columbus
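"Controlling for spectral and temporal resolution" in this entry amounts to a partial correlation. A minimal residual-method sketch on synthetic data (the variable names and effect sizes are illustrative, not the study's):

```python
# Partial correlation of speech recognition with digit span,
# controlling for spectral and temporal resolution, via the
# residual method. Data are synthetic, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20
resolution = rng.normal(size=(n, 2))              # spectral, temporal
digit_span = resolution @ [0.6, 0.3] + rng.normal(scale=0.5, size=n)
speech = resolution @ [0.8, 0.5] + rng.normal(scale=0.5, size=n)

def residualize(y, X):
    X1 = np.column_stack([np.ones(len(X)), X])    # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return y - X1 @ beta

r, p = stats.pearsonr(residualize(speech, resolution),
                      residualize(digit_span, resolution))
print(f"partial r = {r:.2f}, p = {p:.3f}")
```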
25
Smits C, De Sousa KC, Swanepoel DW. An analytical method to convert between speech recognition thresholds and percentage-correct scores for speech-in-noise tests. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 150:1321. [PMID: 34470304 DOI: 10.1121/10.0005877] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Accepted: 07/22/2021] [Indexed: 06/13/2023]
Abstract
Speech-in-noise tests use fixed signal-to-noise ratio (SNR) procedures to measure the percentage of correctly recognized speech items at a fixed SNR or use adaptive procedures to measure the SNR corresponding to 50% correct (i.e., the speech recognition threshold, SRT). Until now, a direct comparison of these measures has not been possible. The aim of the present study was to demonstrate that these measures can be converted when the speech-in-noise test meets specific criteria. Formulae to convert between SRT and percentage-correct were derived from basic concepts that underlie standard speech recognition models. The proposed method does not use audiogram information. The method was validated by comparing the direct conversion by these formulae with the conversion using the more elaborate Speech Intelligibility Index model and a representative set of 60 audiograms (r = 0.993 and r = 0.994, respectively). Finally, the method was experimentally validated with the Afrikaans sentence-in-noise test (r = 0.866). The proposed formulae can be used when the speech-in-noise test uses steady-state masking noise that matches the spectrum of the speech. Because pure tone thresholds are not required for these calculations, the method is widely applicable.
Affiliation(s)
- Cas Smits
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, De Boelelaan 1117, Amsterdam, The Netherlands
- Karina C De Sousa
- Department of Speech-Language Pathology and Audiology, University of Pretoria, Pretoria, Gauteng, South Africa
- De Wet Swanepoel
- Department of Speech-Language Pathology and Audiology, University of Pretoria, Pretoria, Gauteng, South Africa
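The conversion idea can be illustrated with a generic psychometric-function model. The sketch below assumes a logistic function with a given slope at the 50% point; this is a textbook simplification for illustration, not the formulae actually derived in the paper:

```python
# Illustrative conversion between percentage-correct at a fixed SNR
# and the SRT (SNR at 50% correct), assuming a logistic psychometric
# function with midpoint slope `slope` (proportion/dB). A generic
# textbook form, not the paper's derived formulae.
import math

def pc_from_snr(snr_db, srt_db, slope=0.15):
    """Proportion correct at snr_db given SRT and midpoint slope."""
    return 1.0 / (1.0 + math.exp(-4.0 * slope * (snr_db - srt_db)))

def srt_from_pc(pc, snr_db, slope=0.15):
    """SRT implied by proportion correct pc measured at snr_db."""
    return snr_db - math.log(pc / (1.0 - pc)) / (4.0 * slope)

# Round trip: a listener scoring 80% correct at 0 dB SNR
srt = srt_from_pc(0.80, snr_db=0.0)
print(f"SRT = {srt:.2f} dB; check: {pc_from_snr(0.0, srt):.2f}")
```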
26
Pisoni DB, Kronenberger WG. Recognizing spoken words in semantically-anomalous sentences: Effects of executive control in early-implanted deaf children with cochlear implants. Cochlear Implants Int 2021; 22:223-236. [PMID: 33673795 PMCID: PMC8392694 DOI: 10.1080/14670100.2021.1884433] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
OBJECTIVES To investigate differences in speech, language, and neurocognitive functioning in normal hearing (NH) children and deaf children with cochlear implants (CIs) using anomalous sentences. Anomalous sentences block the use of downstream predictive coding during speech recognition, allowing for investigation of rapid phonological coding and executive functioning. METHODS Extreme groups were extracted from samples of children with CIs and NH peers (ages 9 to 17) based on the 7 highest and 7 lowest scores on the Harvard-Anomalous sentence test (Harvard-A). The four groups were compared on measures of speech, language, and neurocognitive functioning. RESULTS The 7 highest-scoring CI users and the 7 lowest-scoring NH peers did not differ in Harvard-A scores but did differ significantly on measures of neurocognitive functioning. Compared to low-performing NH peers, high-performing children with CIs had significantly lower nonword repetition scores but higher nonverbal IQ scores, greater verbal WM capacity, and excellent EF skills related to inhibition, shifting attention/mental flexibility, and working memory updating. DISCUSSION High-performing deaf children with CIs are able to compensate for their sensory deficits and weaknesses in automatic phonological coding of speech by engaging in a slow, effortful mode of information processing involving inhibition, working memory, and executive functioning.
Affiliation(s)
- David B. Pisoni
- Department of Psychological and Brain Sciences, Indiana University, Bloomington
- DeVault Otologic Research Laboratory, Department of Otolaryngology—Head and Neck Surgery, Indiana University School of Medicine
- William G. Kronenberger
- DeVault Otologic Research Laboratory, Department of Otolaryngology—Head and Neck Surgery, Indiana University School of Medicine
- Department of Psychiatry, Indiana University School of Medicine
27
Eadie TL, Durr H, Sauder C, Nagle K, Kapsner-Smith M, Spencer KA. Effect of Noise on Speech Intelligibility and Perceived Listening Effort in Head and Neck Cancer. AMERICAN JOURNAL OF SPEECH-LANGUAGE PATHOLOGY 2021; 30:1329-1342. [PMID: 33630664 PMCID: PMC8702834 DOI: 10.1044/2020_ajslp-20-00149] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/26/2020] [Revised: 08/13/2020] [Accepted: 09/22/2020] [Indexed: 05/19/2023]
Abstract
Purpose This study (a) examined the effect of different levels of background noise on speech intelligibility and perceived listening effort in speakers with impaired and intact speech following treatment for head and neck cancer (HNC) and (b) determined the relative contribution of speech intelligibility, speaker group, and background noise to a measure of perceived listening effort. Method Ten speakers diagnosed with nasal, oral, or oropharyngeal HNC provided audio recordings of six sentences from the Sentence Intelligibility Test. All speakers were 100% intelligible in quiet: Five speakers with HNC exhibited mild speech imprecisions (speech impairment group), and five speakers with HNC demonstrated intact speech (HNC control group). Speech recordings were presented to 30 inexperienced listeners, who transcribed the sentences and rated perceived listening effort in quiet and two levels (+7 and +5 dB SNR) of background noise. Results Significant Group × Noise interactions were found for speech intelligibility and perceived listening effort. While no differences in speech intelligibility were found between the speaker groups in quiet, the results showed that, as the signal-to-noise ratio decreased, speakers with intact speech (HNC control) performed significantly better (greater intelligibility, less perceived listening effort) than those with speech imprecisions in the two noise conditions. Perceived listening effort was also shown to be associated with decreased speech intelligibility, imprecise speech, and increased background noise. Conclusions Speakers with HNC who are 100% intelligible in quiet but who exhibit some degree of imprecise speech are particularly vulnerable to the effects of increased background noise in comparison to those with intact speech. Results have implications for speech evaluations, counseling, and rehabilitation.
Affiliation(s)
- Tanya L. Eadie
- Department of Speech and Hearing Sciences, University of Washington, Seattle
- Holly Durr
- Department of Speech and Hearing Sciences, University of Washington, Seattle
- Cara Sauder
- Department of Speech and Hearing Sciences, University of Washington, Seattle
- Kathleen Nagle
- Department of Speech-Language Pathology, Seton Hall University, South Orange, NJ
- Mara Kapsner-Smith
- Department of Speech and Hearing Sciences, University of Washington, Seattle
- Kristie A. Spencer
- Department of Speech and Hearing Sciences, University of Washington, Seattle
28
Fletcher A, McAuliffe M. Comparing Lexical Cues in Listener Processing of Dysarthria and Speech in Noise. AMERICAN JOURNAL OF SPEECH-LANGUAGE PATHOLOGY 2021; 30:1572-1579. [PMID: 33630661 DOI: 10.1044/2020_ajslp-20-00137] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Purpose The frequency of a word and its number of phonologically similar neighbors can dramatically affect how likely it is to be accurately identified in adverse listening conditions. This study compares how these two cues affect listeners' processing of speech in noise and dysarthric speech. Method Seven speakers with moderate hypokinetic dysarthria and eight healthy control speakers were recorded producing the same set of phrases. Statements from control speakers were mixed with noise at a level selected to match the intelligibility range of the speakers with dysarthria. A binomial mixed-effects model quantified the effects of word frequency and phonological density on word identification. Results The model revealed significant effects of word frequency (b = 0.37, SE = 0.12, p = .002) and phonological neighborhood density (b = 0.40, SE = 0.12, p = .001). There was no effect of speaking condition (i.e., dysarthric speech vs. speech in noise). However, a significant interaction was observed between speaking condition and word frequency (b = 0.26, SE = 0.04, p < .001). Conclusions The model's interactions indicated that listeners were more strongly influenced by the effects of word frequency when decoding moderate hypokinetic dysarthria as compared to speech in noise. Differences in listener reliance on lexical cues may have important implications for the selection of communication-based treatment strategies for speakers with dysarthria.
Affiliation(s)
- Annalise Fletcher
- Department of Audiology and Speech-Language Pathology, University of North Texas, Denton
- Megan McAuliffe
- School of Psychology, Speech and Hearing, University of Canterbury, New Zealand
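The binomial mixed-effects model in this abstract can be sketched in simplified form. The version below keeps only the fixed effects of word frequency and neighborhood density (the random effects for listener and item are omitted for brevity), with synthetic data whose true coefficients echo the reported b values:

```python
# Simplified fixed-effects analogue of the binomial model above:
# word identification (0/1) predicted by word frequency and
# phonological neighborhood density. Random effects omitted;
# data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "freq": rng.normal(size=n),      # standardized word frequency
    "density": rng.normal(size=n),   # standardized neighborhood density
})
logit = 0.37 * df["freq"] + 0.40 * df["density"]   # echo reported b values
df["correct"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = smf.glm("correct ~ freq + density", data=df,
                family=sm.families.Binomial()).fit()
print(model.summary().tables[1])
```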
29
DeRoy Milvae K, Alexander JM, Strickland EA. The relationship between ipsilateral cochlear gain reduction and speech-in-noise recognition at positive and negative signal-to-noise ratios. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 149:3449. [PMID: 34241110 PMCID: PMC8411890 DOI: 10.1121/10.0003964] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/22/2020] [Revised: 03/09/2021] [Accepted: 03/11/2021] [Indexed: 06/13/2023]
Abstract
Active mechanisms that regulate cochlear gain are hypothesized to influence speech-in-noise perception. However, evidence of a relationship between the amount of cochlear gain reduction and speech-in-noise recognition is mixed. Findings may conflict across studies because different signal-to-noise ratios (SNRs) were used to evaluate speech-in-noise recognition. Also, there is evidence that ipsilateral elicitation of cochlear gain reduction may be stronger than contralateral elicitation, yet, most studies have investigated the contralateral descending pathway. The hypothesis that the relationship between ipsilateral cochlear gain reduction and speech-in-noise recognition depends on the SNR was tested. A forward masking technique was used to quantify the ipsilateral cochlear gain reduction in 24 young adult listeners with normal hearing. Speech-in-noise recognition was measured with the PRESTO-R sentence test using speech-shaped noise presented at -3, 0, and +3 dB SNR. Interestingly, greater cochlear gain reduction was associated with lower speech-in-noise recognition, and the strength of this correlation increased as the SNR became more adverse. These findings support the hypothesis that the SNR influences the relationship between ipsilateral cochlear gain reduction and speech-in-noise recognition. Future studies investigating the relationship between cochlear gain reduction and speech-in-noise recognition should consider the SNR and both descending pathways.
Affiliation(s)
- Kristina DeRoy Milvae
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana 47907, USA
- Joshua M Alexander
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana 47907, USA
- Elizabeth A Strickland
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana 47907, USA
30
Cognitive Functions in Adults Receiving Cochlear Implants: Predictors of Speech Recognition and Changes After Implantation. Otol Neurotol 2021; 41:e322-e329. [PMID: 31868779 DOI: 10.1097/mao.0000000000002544] [Citation(s) in RCA: 41] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
HYPOTHESES Significant variability in speech recognition outcomes is consistently observed in adults who receive cochlear implants (CIs), some of which may be attributable to cognitive functions. Two hypotheses were tested: 1) preoperative cognitive skills assessed visually would predict postoperative speech recognition at 6 months after CI; and 2) cochlear implantation would result in benefits to cognitive processes at 6 months. BACKGROUND Several executive functioning tasks have been identified as contributors to speech recognition in adults with hearing loss. There is also mounting evidence that cochlear implantation can improve cognitive functioning. This study examined whether preoperative cognitive functions would predict speech recognition after implantation, and whether cognitive skills would improve as a result of CI intervention. METHODS Nineteen post-lingually deafened adult CI candidates were tested preoperatively using a visual battery of tests to assess working memory (WM), processing speed, inhibition-concentration, and nonverbal reasoning. Six months post-implantation, participants were assessed with a battery of word and sentence recognition measures and cognitive tests were repeated. RESULTS Multiple speech measures after 6 months of CI use were correlated with preoperative visual WM (symbol span task) and inhibition ability (Stroop incongruent task) with moderate-to-large effect sizes. Small-to-large effect size improvements in visual WM, concentration, and inhibition tasks were found from pre- to post-CI. Patients with lower baseline cognitive abilities improved the most after implantation. CONCLUSIONS Findings provide evidence that preoperative cognitive factors contribute to speech recognition outcomes for adult CI users, and support the premise that implantation may lead to improvements in some cognitive domains.
31
Tamati TN, Pisoni DB, Moberly AC. The Perception of Regional Dialects and Foreign Accents by Cochlear Implant Users. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:683-690. [PMID: 33493399 PMCID: PMC8632473 DOI: 10.1044/2020_jslhr-20-00496] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/21/2020] [Revised: 10/09/2020] [Accepted: 10/09/2020] [Indexed: 06/12/2023]
Abstract
Purpose This preliminary research examined (a) the perception of two common sources of indexical variability in speech-regional dialects and foreign accents, and (b) the relation between indexical processing and sentence recognition among prelingually deaf, long-term cochlear implant (CI) users and normal-hearing (NH) peers. Method Forty-three prelingually deaf adolescent and adult CI users and 44 NH peers completed a regional dialect categorization task, which consisted of identifying the region of origin of an unfamiliar talker from six dialect regions of the United States. They also completed an intelligibility rating task, which consisted of rating the intelligibility of short sentences produced by native and nonnative (foreign-accented) speakers of American English on a scale from 1 (not intelligible at all) to 7 (very intelligible). Individual performance was compared to demographic factors and sentence recognition scores. Results Both CI and NH groups demonstrated difficulty with regional dialect categorization, but NH listeners significantly outperformed the CI users. In the intelligibility rating task, both CI and NH listeners rated foreign-accented sentences as less intelligible than native sentences; however, CI users perceived smaller differences in intelligibility between native and foreign-accented sentences. Sensitivity to accent differences was related to sentence recognition accuracy in CI users. Conclusions Prelingually deaf, long-term CI users are sensitive to accent variability in speech, but less so than NH peers. Additionally, individual differences in CI users' sensitivity to indexical variability was related to sentence recognition abilities, suggesting a common source of difficulty in the perception and encoding of fine acoustic-phonetic details in speech.
Affiliation(s)
- Terrin N. Tamati
- Department of Otolaryngology, Wexner Medical Center, The Ohio State University, Columbus
- Department of Otorhinolaryngology, The University Medical Center Groningen, University of Groningen, the Netherlands
- David B. Pisoni
- DeVault Otologic Research Laboratory, Department of Otolaryngology—Head and Neck Surgery, Indiana University School of Medicine, Indianapolis
- Aaron C. Moberly
- Department of Otolaryngology, Wexner Medical Center, The Ohio State University, Columbus
32
O'Neill ER, Parke MN, Kreft HA, Oxenham AJ. Role of semantic context and talker variability in speech perception of cochlear-implant users and normal-hearing listeners. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 149:1224. [PMID: 33639827 PMCID: PMC7895533 DOI: 10.1121/10.0003532] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/19/2020] [Revised: 01/01/2021] [Accepted: 01/26/2021] [Indexed: 06/12/2023]
Abstract
This study assessed the impact of semantic context and talker variability on speech perception by cochlear-implant (CI) users and compared their overall performance and between-subjects variance with that of normal-hearing (NH) listeners under vocoded conditions. Thirty post-lingually deafened adult CI users were tested, along with 30 age-matched and 30 younger NH listeners, on sentences with and without semantic context, presented in quiet and noise, spoken by four different talkers. Additional measures included working memory, non-verbal intelligence, and spectral-ripple detection and discrimination. Semantic context and between-talker differences influenced speech perception to similar degrees for both CI users and NH listeners. Between-subjects variance for speech perception was greatest in the CI group but remained substantial in both NH groups, despite the uniformly degraded stimuli in these two groups. Spectral-ripple detection and discrimination thresholds in CI users were significantly correlated with speech perception, but a single set of vocoder parameters for NH listeners was not able to capture average CI performance in both speech and spectral-ripple tasks. The lack of difference in the use of semantic context between CI users and NH listeners suggests no overall differences in listening strategy between the groups, when the stimuli are similarly degraded.
Affiliation(s)
- Erin R O'Neill
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Morgan N Parke
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Heather A Kreft
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
33
Tremblay P, Brisson V, Deschamps I. Brain aging and speech perception: Effects of background noise and talker variability. Neuroimage 2020; 227:117675. [PMID: 33359849 DOI: 10.1016/j.neuroimage.2020.117675] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2020] [Revised: 12/15/2020] [Accepted: 12/17/2020] [Indexed: 10/22/2022] Open
Abstract
Speech perception can be challenging, especially for older adults. Despite the importance of speech perception in social interactions, the mechanisms underlying these difficulties remain unclear and treatment options are scarce. While several studies have suggested that decline within cortical auditory regions may be a hallmark of these difficulties, a growing number of studies have reported decline in regions beyond the auditory processing network, including regions involved in speech processing and executive control, suggesting a potentially diffuse underlying neural disruption, though no consensus exists regarding underlying dysfunctions. To address this issue, we conducted two experiments in which we investigated age differences in speech perception when background noise and talker variability are manipulated, two factors known to be detrimental to speech perception. In Experiment 1, we examined the relationship between speech perception, hearing and auditory attention in 88 healthy participants aged 19 to 87 years. In Experiment 2, we examined cortical thickness and BOLD signal using magnetic resonance imaging (MRI) and related these measures to speech perception performance using a simple mediation approach in 32 participants from Experiment 1. Our results show that, even after accounting for hearing thresholds and two measures of auditory attention, speech perception significantly declined with age. Age-related decline in speech perception in noise was associated with thinner cortex in auditory and speech processing regions (including the superior temporal cortex, ventral premotor cortex and inferior frontal gyrus) as well as in regions involved in executive control (including the dorsal anterior insula, the anterior cingulate cortex and medial frontal cortex). Further, our results show that speech perception performance was associated with reduced brain response in the right superior temporal cortex in older compared to younger adults, and to an increase in response to noise in older adults in the left anterior temporal cortex. Talker variability was not associated with different activation patterns in older compared to younger adults. Together, these results support the notion of a diffuse rather than a focal dysfunction underlying speech perception in noise difficulties in older adults.
Affiliation(s)
- Pascale Tremblay
- CERVO Brain Research Center, Québec City, QC, Canada
- Université Laval, Département de réadaptation, Québec City, QC, Canada
- Valérie Brisson
- CERVO Brain Research Center, Québec City, QC, Canada
- Université Laval, Département de réadaptation, Québec City, QC, Canada
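The "simple mediation approach" mentioned above decomposes a total effect into an indirect path (a·b) and a direct path (c') via two regressions. A minimal sketch on synthetic data, where the variables stand in for age, cortical thickness, and speech-in-noise performance; nothing here reproduces the imaging pipeline:

```python
# Simple mediation sketch: does cortical thickness (M) mediate the
# effect of age (X) on speech-in-noise performance (Y)? Indirect
# effect estimated as a*b from two OLS fits; data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 32
age = rng.uniform(19, 87, n)
thickness = 3.0 - 0.01 * age + rng.normal(scale=0.1, size=n)
speech = 50 + 20 * thickness - 0.05 * age + rng.normal(scale=2, size=n)

X = sm.add_constant(age)
a = sm.OLS(thickness, X).fit().params[1]                # X -> M path

XM = sm.add_constant(np.column_stack([age, thickness]))
fit = sm.OLS(speech, XM).fit()
c_prime, b = fit.params[1], fit.params[2]               # direct, M -> Y

print(f"indirect (a*b) = {a * b:.3f}, direct (c') = {c_prime:.3f}")
```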
34
Moberly AC. A surgeon-scientist's perspective and review of cognitive-linguistic contributions to adult cochlear implant outcomes. Laryngoscope Investig Otolaryngol 2020; 5:1176-1183. [PMID: 33364410 PMCID: PMC7752064 DOI: 10.1002/lio2.494] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2020] [Accepted: 10/29/2020] [Indexed: 11/12/2022] Open
Abstract
OBJECTIVES Enormous variability in speech recognition outcomes persists in adults who receive cochlear implants (CIs), which impedes progress in predicting outcomes before surgery, explaining "poor" outcomes, and determining how to provide tailored rehabilitation therapy for individual CI users. The primary goal of my research program over the past 9 years has been to extend our understanding of the contributions of "top-down" cognitive-linguistic skills to CI outcomes in adults, acknowledging that "bottom-up" sensory processes also contribute substantially. The main objective of this invited narrative review is to provide an overview of this work. A secondary objective is to provide career "guidance points" to budding surgeon-scientists in Otolaryngology. METHODS A narrative, chronological review covers work done by our group to explore top-down and bottom-up processing in adult CI outcomes. A set of ten guidance points is also provided to assist junior Otolaryngology surgeon-scientists. RESULTS Work in our lab has identified substantial contributions of cognitive skills (working memory, inhibition-concentration, speed of lexical access, nonverbal reasoning, verbal learning and memory) as well as linguistic abilities (acoustic cue-weighting, phonological sensitivity) to speech recognition outcomes in adults with CIs. These top-down skills interact with the quality of the bottom-up input. CONCLUSION Although progress has been made in understanding speech recognition variability in adult CI users, future work is needed to predict CI outcomes before surgery, to identify particular patients' strengths and weaknesses, and to tailor rehabilitation approaches for individual CI users. LEVEL OF EVIDENCE 4.
Affiliation(s)
- Aaron C. Moberly
- Department of Otolaryngology Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
35
Strori D, Bradlow AR, Souza PE. Recognising foreign-accented speech of varying intelligibility and linguistic complexity: insights from older listeners with or without hearing loss. Int J Audiol 2020; 60:140-150. [PMID: 32972283 DOI: 10.1080/14992027.2020.1814431] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
OBJECTIVE The goal of this study was to assess recognition of foreign-accented speech of varying intelligibility and linguistic complexity in older adults. It is important to understand the factors that influence the recognition of this commonly encountered type of speech, in a population that remains understudied in this regard. DESIGN A repeated measures design was used. Listeners repeated back linguistically simple and complex sentences heard in noise. The sentences were produced by three talkers of varying intelligibility: one native American English, one foreign-accented talker of high intelligibility and one foreign-accented talker of low intelligibility. Percentage word recognition in sentences was measured. STUDY SAMPLE Twenty-five older listeners with a range of hearing thresholds participated. RESULTS We found a robust interaction between talker intelligibility and linguistic complexity. Recognition accuracy was higher for simple versus complex sentences, but only for the native and high intelligibility foreign-accented talkers. This pattern was present after effects of working memory capacity and hearing acuity were taken into consideration. CONCLUSION Older listeners exhibit qualitatively different speech processing strategies for low versus high intelligibility foreign-accented talkers. Differences in recognition accuracy for words presented in simple versus in complex sentence contexts only emerged for speech over a threshold of intelligibility.
Affiliation(s)
- Dorina Strori
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Department of Linguistics, Northwestern University, Evanston, IL, USA
- Ann R Bradlow
- Department of Linguistics, Northwestern University, Evanston, IL, USA
- Knowles Hearing Center, Northwestern University, Evanston, IL, USA
- Pamela E Souza
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Knowles Hearing Center, Northwestern University, Evanston, IL, USA
36
Shafiro V, Hebb M, Walker C, Oh J, Hsiao Y, Brown K, Sheft S, Li Y, Vasil K, Moberly AC. Development of the Basic Auditory Skills Evaluation Battery for Online Testing of Cochlear Implant Listeners. Am J Audiol 2020; 29:577-590. [PMID: 32946250 DOI: 10.1044/2020_aja-19-00083] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
Purpose Cochlear implant (CI) performance varies considerably across individuals and across domains of auditory function, but clinical testing is typically restricted to speech intelligibility. The goals of this study were (a) to develop a basic auditory skills evaluation battery of tests for comprehensive assessment of ecologically relevant aspects of auditory perception and (b) to compare CI listeners' performance on the battery when tested in the laboratory by an audiologist or independently at home. Method The battery included 17 tests to evaluate (a) basic spectrotemporal processing, (b) processing of music and environmental sounds, and (c) speech perception in both quiet and background noise. The battery was administered online to three groups of adult listeners: two groups of postlingual CI listeners and a group of older normal-hearing (ONH) listeners of similar age. The ONH group and one CI group were tested in a laboratory by an audiologist, whereas the other CI group self-tested independently at home following online instructions. Results Results indicated a wide range in the performance of CI but not ONH listeners. Significant differences were not found between the two CI groups on any test, whereas on all but two tests, CI listeners' performance was lower than that of the ONH participants. Principal component analysis revealed that four components accounted for 82% of the variance in measured results, with component loading indicating that the test battery successfully captures differences across dimensions of auditory perception. Conclusions These results provide initial support for the use of the basic auditory skills evaluation battery for comprehensive online assessment of auditory skills in adult CI listeners.
Affiliation(s)
- Valeriy Shafiro
- Department of Communication Disorders & Sciences, Rush University Medical Center, Chicago, IL
- Megan Hebb
- Department of Communication Disorders & Sciences, Rush University Medical Center, Chicago, IL
- Chad Walker
- Department of Communication Disorders & Sciences, Rush University Medical Center, Chicago, IL
- Jasper Oh
- Department of Communication Disorders & Sciences, Rush University Medical Center, Chicago, IL
- Ying Hsiao
- Department of Communication Disorders & Sciences, Rush University Medical Center, Chicago, IL
- Kelly Brown
- Department of Communication Disorders & Sciences, Rush University Medical Center, Chicago, IL
- Stanley Sheft
- Department of Communication Disorders & Sciences, Rush University Medical Center, Chicago, IL
- Yan Li
- Department of Communication Disorders & Sciences, Rush University Medical Center, Chicago, IL
- Kara Vasil
- Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus
- Aaron C. Moberly
- Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus
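The principal component analysis reported in this entry has a direct scikit-learn analogue. The sketch below uses synthetic scores for a 17-test battery driven by four underlying skills; all numbers are illustrative, not the study's data:

```python
# PCA sketch: how much variance in a battery of test scores is
# captured by the first few components (synthetic stand-in for
# the 17-test battery described above).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
latent = rng.normal(size=(40, 4))         # 4 underlying skills
loadings = rng.normal(size=(4, 17))       # 17 tests
scores = latent @ loadings + 0.5 * rng.normal(size=(40, 17))

pca = PCA().fit(scores)
cum = np.cumsum(pca.explained_variance_ratio_)
print(f"variance explained by 4 components: {cum[3]:.0%}")
```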
37
Abstract
OBJECTIVE To assess the benefits of bimodal listening (i.e., addition of contralateral hearing aid) for cochlear implant (CI) users on real-world tasks involving high-talker variability speech materials, environmental sounds, and self-reported quality of life (quality of hearing) in listeners' own best-aided conditions. STUDY DESIGN Cross-sectional study between groups. SETTING Outpatient hearing clinic. PATIENTS Fifty experienced adult CI users divided into groups based on normal daily listening conditions (i.e., best-aided conditions): unilateral CI (CI), unilateral CI with contralateral HA (bimodal listening; CIHA), or bilateral CI (CICI). INTERVENTION Task-specific measures of speech recognition with low (Harvard Standard Sentences) and high (Perceptually Robust English Sentence Test Open-set corpus) talker variability, environmental sound recognition (Familiar Environmental Sounds Test-Identification), and hearing-related quality of life (Nijmegen Cochlear Implant Questionnaire). MAIN OUTCOME MEASURES Test group differences among CI, CIHA, and CICI conditions. RESULTS No group effect was observed for speech recognition with low or high-talker variability, or hearing-related quality of life. Bimodal listeners demonstrated a benefit in environmental sound recognition compared with unilateral CI listeners, with a trend of greater benefit than the bilateral CI group. There was also a visual trend for benefit on high-talker variability speech recognition. CONCLUSIONS Findings provide evidence that bimodal listeners demonstrate stronger environmental sound recognition compared with unilateral CI listeners, and support the idea that there are additional advantages to bimodal listening after implantation other than speech recognition measures, which are at risk of being lost if considering bilateral implantation.
38
Skidmore JA, Vasil KJ, He S, Moberly AC. Explaining Speech Recognition and Quality of Life Outcomes in Adult Cochlear Implant Users: Complementary Contributions of Demographic, Sensory, and Cognitive Factors. Otol Neurotol 2020; 41:e795-e803. [PMID: 32558759 PMCID: PMC7875311 DOI: 10.1097/mao.0000000000002682] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
HYPOTHESES Adult cochlear implant (CI) outcomes depend on demographic, sensory, and cognitive factors. However, these factors have not been examined together comprehensively for relations to different outcome types, such as speech recognition versus quality of life (QOL). Three hypotheses were tested: 1) speech recognition will be explained most strongly by sensory factors, whereas QOL will be explained more strongly by cognitive factors. 2) Different speech recognition outcome domains (sentences versus words) and different QOL domains (physical versus social versus psychological functioning) will be explained differentially by demographic, sensory, and cognitive factors. 3) Including cognitive factors as predictors will provide more power to explain outcomes than demographic and sensory predictors alone. BACKGROUND A better understanding of the contributors to CI outcomes is needed to prognosticate outcomes before surgery, explain outcomes after surgery, and tailor rehabilitation efforts. METHODS Forty-one adult postlingual experienced CI users were assessed for sentence and word recognition, as well as hearing-related QOL, along with a broad collection of predictors. Partial least squares regression was used to identify factors that were most predictive of outcome measures. RESULTS Supporting our hypotheses, speech recognition abilities were most strongly dependent on sensory skills, while QOL outcomes required a combination of cognitive, sensory, and demographic predictors. The inclusion of cognitive measures increased the ability to explain outcomes, mainly for QOL. CONCLUSIONS Explaining variability in adult CI outcomes requires a broad assessment approach. Identifying the most important predictors depends on the particular outcome domain and even the particular measure of interest.
Affiliation(s)
- Jeffrey A Skidmore
- The Ohio State University Wexner Medical Center, Department of Otolaryngology-Head & Neck Surgery, Columbus, Ohio
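Partial least squares regression, used in this entry to relate many collinear predictors to outcomes, can be sketched with scikit-learn. The data and the 12-predictor set below are synthetic and hypothetical:

```python
# PLS regression sketch: many collinear demographic/sensory/cognitive
# predictors vs. an outcome measure. Synthetic data for illustration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(41, 12))     # 12 predictors, 41 participants
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=41)

pls = PLSRegression(n_components=2).fit(X, y)
print(f"R^2 on training data: {pls.score(X, y):.2f}")
```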
39
Tamati TN, Ray C, Vasil KJ, Pisoni DB, Moberly AC. High- and Low-Performing Adult Cochlear Implant Users on High-Variability Sentence Recognition: Differences in Auditory Spectral Resolution and Neurocognitive Functioning. J Am Acad Audiol 2020; 31:324-335. [PMID: 31580802 DOI: 10.3766/jaaa.18106] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
BACKGROUND Postlingually deafened adult cochlear implant (CI) users routinely display large individual differences in the ability to recognize and understand speech, especially in adverse listening conditions. Although individual differences have been linked to several sensory ("bottom-up") and cognitive ("top-down") factors, little is currently known about the relative contributions of these factors in high- and low-performing CI users. PURPOSE The aim of the study was to investigate differences in sensory functioning and neurocognitive functioning between high- and low-performing CI users on the Perceptually Robust English Sentence Test Open-set (PRESTO), a high-variability sentence recognition test containing sentence materials produced by multiple male and female talkers with diverse regional accents. RESEARCH DESIGN CI users with accuracy scores in the upper (HiPRESTO) or lower quartiles (LoPRESTO) on PRESTO in quiet completed a battery of behavioral tasks designed to assess spectral resolution and neurocognitive functioning. STUDY SAMPLE Twenty-one postlingually deafened adult CI users, with 11 HiPRESTO and 10 LoPRESTO participants. DATA COLLECTION AND ANALYSIS A discriminant analysis was carried out to determine the extent to which measures of spectral resolution and neurocognitive functioning discriminate HiPRESTO and LoPRESTO CI users. Auditory spectral resolution was measured using the Spectral-Temporally Modulated Ripple Test (SMRT). Neurocognitive functioning was assessed with visual measures of working memory (digit span), inhibitory control (Stroop), speed of lexical/phonological access (Test of Word Reading Efficiency), and nonverbal reasoning (Raven's Progressive Matrices). RESULTS HiPRESTO and LoPRESTO CI users were discriminated primarily by performance on the SMRT and secondarily by the Raven's test. No other neurocognitive measures contributed substantially to the discriminant function. CONCLUSIONS High- and low-performing CI users differed by spectral resolution and, to a lesser extent, nonverbal reasoning. These findings suggest that the extreme groups are determined by global factors of richness of sensory information and domain-general, nonverbal intelligence, rather than specific neurocognitive processing operations related to speech perception and spoken word recognition. Thus, although both bottom-up and top-down information contribute to speech recognition performance, low-performing CI users may not be sufficiently able to rely on neurocognitive skills specific to speech recognition to enhance processing of spectrally degraded input in adverse conditions involving high talker variability.
Affiliation(s)
- Terrin N Tamati
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Department of Otolaryngology - Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, OH
- Christin Ray
- Department of Otolaryngology - Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, OH
- Kara J Vasil
- Department of Otolaryngology - Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, OH
- David B Pisoni
- Department of Psychological and Brain Sciences, Indiana University - Bloomington, Bloomington, IN
- Aaron C Moberly
- Department of Otolaryngology - Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, OH
40
|
Strori D, Bradlow AR, Souza PE. Recognition of foreign-accented speech in noise: The interplay between talker intelligibility and linguistic structure. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2020; 147:3765. [PMID: 32611135 PMCID: PMC7275869 DOI: 10.1121/10.0001194] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/13/2019] [Revised: 04/14/2020] [Accepted: 04/17/2020] [Indexed: 06/01/2023]
Abstract
Foreign-accented speech recognition is typically tested with linguistically simple materials, which offer a limited window into realistic speech processing. The present study examined the relationship between linguistic structure and talker intelligibility in several sentence-in-noise recognition experiments. Listeners transcribed simple/short and more complex/longer sentences embedded in noise. The sentences were spoken by three talkers of varying intelligibility: one native English speaker and two non-native speakers of high and low intelligibility. The effect of linguistic structure on sentence recognition accuracy was modulated by talker intelligibility: accuracy declined with increasing complexity only for the native and high-intelligibility foreign-accented talkers, whereas no such effect was found for the low-intelligibility foreign-accented talker. This pattern emerged across low and high signal-to-noise ratios, across mixed and blocked stimulus presentation, and even in the absence of a major cue to prosodic structure, the natural pitch contour of the sentences. Moreover, the pattern generalized to a different set of three talkers matched in intelligibility to the original talkers. Taken together, these results suggest that listeners employ qualitatively different speech processing strategies for low- versus high-intelligibility foreign-accented talkers, with sentence-related linguistic factors emerging only once speech exceeds a threshold of intelligibility. Findings are discussed in the context of alternative accounts.
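As a point of reference for how such stimuli are typically constructed, the sketch below scales a masker so that the speech-to-noise power ratio hits a target SNR; the arrays and sampling rate are placeholders, and the study's actual stimulus preparation is not reproduced here.

```python
# Sketch of the basic stimulus manipulation behind sentence-in-noise testing:
# scaling a masker so the speech-to-noise power ratio equals a target SNR.
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Return speech plus noise scaled to the requested SNR in dB."""
    noise = noise[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise

rng = np.random.default_rng(1)
speech = rng.normal(size=16000)              # placeholder 1-s waveform at 16 kHz
noise = rng.normal(size=16000)               # placeholder masker
harder = mix_at_snr(speech, noise, 0.0)      # low-SNR condition
easier = mix_at_snr(speech, noise, 10.0)     # high-SNR condition
```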
Affiliation(s)
- Dorina Strori: Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, Illinois 60208, USA
- Ann R Bradlow: Department of Linguistics, Northwestern University, 2016 Sheridan Road, Evanston, Illinois 60208, USA
- Pamela E Souza: Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, Illinois 60208, USA
41
|
Bosen AK, Barry MF. Serial Recall Predicts Vocoded Sentence Recognition Across Spectral Resolutions. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2020; 63:1282-1298. [PMID: 32213149 PMCID: PMC7242981 DOI: 10.1044/2020_jslhr-19-00319] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Purpose The goal of this study was to determine how various aspects of cognition predict speech recognition ability across different levels of speech vocoding within a single group of listeners. Method We tested the ability of young adults (N = 32) with normal hearing to recognize Perceptually Robust English Sentence Test Open-set (PRESTO) sentences that were degraded with a vocoder to produce different levels of spectral resolution (16, 8, and 4 carrier channels). Participants also completed tests of cognition (fluid intelligence, short-term memory, and attention), which were used as predictors of sentence recognition. Sentence recognition was compared across vocoder conditions, predictors were correlated with individual differences in sentence recognition, and the relationships between predictors were characterized. Results PRESTO sentence recognition performance declined with a decreasing number of vocoder channels, with no evident floor or ceiling performance in any condition. Individual ability to recognize PRESTO sentences was consistent relative to the group across vocoder conditions. Short-term memory, as measured with serial recall, was a moderate predictor of sentence recognition (ρ = 0.65). Serial recall performance, measured with a digit span task, was constant across vocoder conditions. Fluid intelligence was marginally correlated with serial recall, but not with sentence recognition. Attentional measures had no discernible relationship to sentence recognition and only a marginal relationship with serial recall. Conclusions Verbal serial recall is a substantial predictor of vocoded sentence recognition, and this predictive relationship is independent of spectral resolution. In populations that show variable speech recognition outcomes, such as listeners with cochlear implants, it should be possible to account for the independent effects of spectral resolution and verbal serial recall on their speech recognition ability. Supplemental Material https://doi.org/10.23641/asha.12021051.
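For readers unfamiliar with vocoding, the following is a rough sketch of a noise-excited channel vocoder that can produce the 16-, 8-, and 4-channel degradations described above; the filter orders, log-spaced channel edges, and 50-Hz envelope cutoff are common textbook choices assumed here, not the study's exact parameters.

```python
# Rough sketch of a noise-excited channel vocoder: split the signal into
# frequency bands, extract each band's temporal envelope, and use it to
# modulate band-limited noise. Fewer channels means poorer spectral resolution.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def vocode(signal, fs, n_channels, f_lo=100.0, f_hi=7000.0):
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    env_sos = butter(2, 50.0, btype="lowpass", fs=fs, output="sos")
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)                          # analysis band
        env = np.clip(sosfiltfilt(env_sos, np.abs(band)), 0, None)    # temporal envelope
        carrier = sosfiltfilt(band_sos, rng.normal(size=len(signal))) # band-limited noise
        out += env * carrier                                          # modulated carrier
    return out

fs = 16000
signal = np.random.default_rng(1).normal(size=fs)  # placeholder 1-s waveform
four_channel = vocode(signal, fs, 4)               # most degraded condition
```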
42
|
High-Variability Sentence Recognition in Long-Term Cochlear Implant Users: Associations With Rapid Phonological Coding and Executive Functioning. Ear Hear 2020; 40:1149-1161. [PMID: 30601227 DOI: 10.1097/aud.0000000000000691] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES The objective of the present study was to determine whether long-term cochlear implant (CI) users would show greater variability in rapid phonological coding skills and greater reliance on slow-effortful compensatory executive functioning (EF) skills than normal-hearing (NH) peers on perceptually challenging high-variability sentence recognition tasks. We tested the following three hypotheses: First, CI users would show lower scores on sentence recognition tests involving high speaker and dialect variability than NH controls, even after adjusting for poorer sentence recognition performance by CI users on a conventional low-variability sentence recognition test. Second, variability in fast-automatic rapid phonological coding skills would be more strongly associated with performance on high-variability sentence recognition tasks for CI users than for NH peers. Third, compensatory EF strategies would be more strongly associated with performance on high-variability sentence recognition tasks for CI users than for NH peers. DESIGN Two groups of children, adolescents, and young adults aged 9 to 29 years participated in this cross-sectional study: 49 long-term CI users (≥7 years of device use) and 56 NH controls. All participants were tested on measures of rapid phonological coding (Children's Test of Nonword Repetition), conventional sentence recognition (Harvard Sentence Recognition Test), and two novel high-variability sentence recognition tests that varied the indexical attributes of speech (the Perceptually Robust English Sentence Test Open-set [PRESTO] and PRESTO Foreign-Accented English). Measures of EF included verbal working memory (WM), spatial WM, controlled cognitive fluency, and inhibition-concentration. RESULTS CI users scored lower than NH peers on both tests of high-variability sentence recognition, even after conventional sentence recognition skills were statistically controlled. Correlations between rapid phonological coding and high-variability sentence recognition scores were stronger for the CI sample than for the NH sample, even after basic sentence perception skills were statistically controlled. Scatterplots revealed different ranges and slopes for the relationship between rapid phonological coding skills and high-variability sentence recognition performance in CI users and NH peers. Although no statistically significant correlations between EF strategies and sentence recognition were found in the CI or NH sample after use of a conservative Bonferroni-type correction, medium to high effect sizes for correlations between verbal WM and sentence recognition in the CI sample suggest that further investigation of this relationship is needed. CONCLUSIONS These findings provide converging support for neurocognitive models that propose two channels for speech-language processing: a fast-automatic channel that predominates whenever possible and a compensatory slow-effortful processing channel that is activated during perceptually challenging speech processing tasks that are not fully managed by the fast-automatic channel (ease of language understanding, framework for understanding effortful listening, and auditory neurocognitive model). CI users showed significantly poorer performance on measures of high-variability sentence recognition than NH peers, even after simple sentence recognition was controlled. Nonword repetition scores showed almost no overlap between the CI and NH samples, and correlations between nonword repetition scores and high-variability sentence recognition were consistent with greater reliance on engagement of fast-automatic phonological coding for high-variability sentence recognition in the CI sample than in the NH sample. Further investigation of the verbal WM-sentence recognition relationship in CI users is recommended. Assessment of fast-automatic phonological processing and slow-effortful EF skills may provide a better understanding of speech perception outcomes in CI users in the clinical setting.
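The Bonferroni-type correction mentioned above is simple arithmetic: with m tests, each p value is compared against alpha / m. A minimal sketch with invented p values:

```python
# Bonferroni-type correction across a family of correlation tests.
# The p values below are invented for illustration only.
alpha = 0.05
p_values = {"verbal WM": 0.02, "spatial WM": 0.30,
            "cognitive fluency": 0.15, "inhibition-concentration": 0.40}
m = len(p_values)
for measure, p in p_values.items():
    verdict = "significant" if p < alpha / m else "not significant"
    print(f"{measure}: p = {p:.2f} -> {verdict} at corrected alpha = {alpha / m:.4f}")
```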
43
|
Bologna WJ, Ahlstrom JB, Dubno JR. Contributions of Voice Expectations to Talker Selection in Younger and Older Adults With Normal Hearing. Trends Hear 2020; 24:2331216520915110. [PMID: 32372720 PMCID: PMC7225833 DOI: 10.1177/2331216520915110] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2019] [Revised: 03/02/2020] [Accepted: 03/03/2020] [Indexed: 11/17/2022] Open
Abstract
Focused attention on expected voice features, such as fundamental frequency (F0) and spectral envelope, may facilitate segregation and selection of a target talker in competing-talker backgrounds. Age-related declines in attention may limit these abilities in older adults, resulting in poorer speech understanding in complex environments. To test this hypothesis, younger and older adults with normal hearing listened to sentences with a single competing talker. For most trials, listener attention was directed to the target by a cue phrase that matched the target talker's F0 and spectral envelope. For a small percentage of randomly occurring probe trials, the target's voice unexpectedly differed from the cue phrase in F0 and spectral envelope. Overall, keyword recognition for the target talker was poorer for older adults than for younger adults. Keyword recognition was poorer on probe trials than on standard trials for both groups, and incorrect responses on probe trials contained keywords from the single-talker masker. No interaction was observed between age group and the decline in keyword recognition on probe trials. Thus, the reduced performance of older adults overall could not be attributed to declines in attention to an expected voice. Rather, other cognitive abilities, such as speed of processing and linguistic closure, were predictive of keyword recognition for younger and older adults. Moreover, the effects of age interacted with the sex of the target talker, such that older adults had greater difficulty understanding target keywords from female talkers than from male talkers.
Affiliation(s)
- William J. Bologna: Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina
- Jayne B. Ahlstrom: Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina
- Judy R. Dubno: Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina
44
|
Rodman C, Moberly AC, Janse E, Başkent D, Tamati TN. The impact of speaking style on speech recognition in quiet and multi-talker babble in adult cochlear implant users. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2020; 147:101. [PMID: 32006976 DOI: 10.1121/1.5141370] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/30/2019] [Accepted: 11/30/2019] [Indexed: 06/10/2023]
Abstract
The current study examined sentence recognition across speaking styles (conversational, neutral, and clear) in quiet and multi-talker babble (MTB) for cochlear implant (CI) users and normal-hearing listeners under CI simulation. Listeners demonstrated poorer recognition accuracy in MTB than in quiet but were relatively more accurate with clear speech overall. Within the CI users, higher-performing participants were also more accurate in MTB when listening to clear speech, whereas lower-performing users' accuracy was not affected by speaking style. Clear speech may facilitate recognition in MTB for high-performing users, who may be better able to take advantage of clear speech cues.
Affiliation(s)
- Cole Rodman: Department of Otolaryngology-Head and Neck Surgery, The Ohio State University Wexner Medical Center, 915 Olentangy River Road, Suite 4000, Columbus, Ohio 43212, USA
- Aaron C Moberly: Department of Otolaryngology-Head and Neck Surgery, The Ohio State University Wexner Medical Center, 915 Olentangy River Road, Suite 4000, Columbus, Ohio 43212, USA
- Esther Janse: Centre for Language Studies, Radboud University Nijmegen, Nijmegen, The Netherlands
- Deniz Başkent: Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- Terrin N Tamati: Department of Otolaryngology-Head and Neck Surgery, The Ohio State University Wexner Medical Center, 915 Olentangy River Road, Suite 4000, Columbus, Ohio 43212, USA
45
|
Mattingly JK, Castellanos I, Moberly AC. Nonverbal Reasoning as a Contributor to Sentence Recognition Outcomes in Adults With Cochlear Implants. Otol Neurotol 2019; 39:e956-e963. [PMID: 30444843 DOI: 10.1097/mao.0000000000001998] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
HYPOTHESIS Significant variability in speech recognition persists among postlingually deafened adults with cochlear implants (CIs). We hypothesized that scores on a measure of nonverbal reasoning would predict sentence recognition in adult CI users. BACKGROUND Cognitive functions contribute to speech recognition outcomes in adults with hearing loss. These functions may be particularly important for CI users, who must interpret highly degraded speech signals through their devices. This study used a visual measure of reasoning (the ability to solve novel problems), the Raven's Progressive Matrices (RPM), to predict sentence recognition in CI users. METHODS Participants were 39 postlingually deafened adults with CIs and 43 age-matched normal-hearing (NH) controls. CI users were assessed for recognition of words in sentences in quiet, and NH controls listened to eight-channel vocoded versions to simulate the degraded signal delivered by a CI. A computerized visual version of the RPM, requiring participants to identify the correct missing piece in a 3 × 3 matrix of geometric designs, was also administered. Particular items from the RPM were examined for their associations with sentence recognition abilities, and a subset of RPM items was tested for the ability to predict degraded sentence recognition in the NH controls. RESULTS The overall number of items answered correctly on the 48-item RPM correlated significantly with sentence recognition in CI users (r = 0.35-0.47) and NH controls (r = 0.36-0.57). An abbreviated 12-item version of the RPM was created, and its performance also correlated with sentence recognition in CI users (r = 0.40-0.48) and NH controls (r = 0.49-0.56). CONCLUSIONS Nonverbal reasoning skills correlated with sentence recognition in both CI and NH subjects. Our findings provide further converging evidence that cognitive factors contribute to speech processing by adult CI users and can help explain variability in outcomes. Our abbreviated version of the RPM may serve as a clinically meaningful assessment for predicting sentence recognition outcomes in CI users.
Affiliation(s)
- Jameson K Mattingly: Department of Otolaryngology-Head and Neck Surgery, The Ohio State University, Columbus, Ohio
46
|
Fletcher A, McAuliffe M, Kerr S, Sinex D. Effects of Vocabulary and Implicit Linguistic Knowledge on Speech Recognition in Adverse Listening Conditions. Am J Audiol 2019; 28:742-755. [PMID: 32271121 DOI: 10.1044/2019_aja-heal18-18-0169] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
Purpose This study aims to examine the combined influence of vocabulary knowledge and statistical properties of language on speech recognition in adverse listening conditions. Furthermore, it aims to determine whether any effects identified are more salient at particular levels of signal degradation. Method One hundred three young healthy listeners transcribed phrases presented at 4 different signal-to-noise ratios, which were coded for recognition accuracy. Participants also completed tests of hearing acuity, vocabulary knowledge, nonverbal intelligence, processing speed, and working memory. Results Vocabulary knowledge and working memory demonstrated independent effects on word recognition accuracy when controlling for hearing acuity, nonverbal intelligence, and processing speed. These effects were strongest at the same moderate level of signal degradation. Although listener variables were statistically significant, their effects were subtle in comparison to the influence of word frequency and phonological content. These language-based factors had large effects on word recognition at all signal-to-noise ratios. Discussion Language experience and working memory may have complementary effects on accurate word recognition. However, adequate glimpses of acoustic information appear necessary for speakers to leverage vocabulary knowledge when processing speech in adverse conditions.
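A minimal sketch of testing independent effects of vocabulary and working memory on recognition accuracy while controlling for hearing acuity, nonverbal intelligence, and processing speed follows, via ordinary least squares on simulated data; the study's actual modeling choices may well have differed.

```python
# OLS regression estimating the unique contributions of two predictors of
# interest while holding three control variables constant. Simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 103                                   # matches the reported sample size
controls = rng.normal(size=(n, 3))        # hearing, nonverbal IQ, processing speed
vocab = rng.normal(size=n)
wm = rng.normal(size=n)
accuracy = 0.3 * vocab + 0.2 * wm + controls @ np.full(3, 0.1) + rng.normal(size=n)

X = sm.add_constant(np.column_stack([vocab, wm, controls]))
fit = sm.OLS(accuracy, X).fit()
print(fit.params)   # vocab and WM coefficients, adjusted for the controls
print(fit.pvalues)
```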
Affiliation(s)
- Annalise Fletcher: Department of Audiology & Speech-Language Pathology, University of North Texas, Denton
- Megan McAuliffe: Department of Communication Disorders, University of Canterbury, Christchurch, New Zealand
- Sarah Kerr: Department of Communication Disorders, University of Canterbury, Christchurch, New Zealand
- Donal Sinex: Department of Speech, Language, and Hearing Science, University of Florida, Gainesville
47
|
King G, Corbin NE, Leibold LJ, Buss E. Spatial Release from Masking Using Clinical Corpora: Sentence Recognition in a Colocated or Spatially Separated Speech Masker. J Am Acad Audiol 2019; 31:271-276. [PMID: 31589139 DOI: 10.3766/jaaa.19018] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
BACKGROUND Speech recognition in complex multisource environments is challenging, particularly for listeners with hearing loss. One source of difficulty is the reduced ability of listeners with hearing loss to benefit from spatial separation of the target and masker, an effect called spatial release from masking (SRM). Despite the prevalence of complex multisource environments in everyday life, SRM is not routinely evaluated in the audiology clinic. PURPOSE The purpose of this study was to demonstrate the feasibility of assessing SRM in adults using widely available tests of speech-in-speech recognition that can be conducted using standard clinical equipment. RESEARCH DESIGN Participants were 22 young adults with normal hearing. The task was masked sentence recognition, using each of five clinically available corpora with speech maskers. The target always sounded like it originated from directly in front of the listener, and the masker either sounded like it originated from the front (colocated with the target) or from the side (separated from the target). In the real spatial manipulation conditions, source location was manipulated by routing the target and masker to either a single speaker or to two speakers: one directly in front of the participant, and one mounted in an adjacent corner, 90° to the right. In the perceived spatial separation conditions, the target and masker were presented from both speakers with delays that made them sound as if they were either colocated or separated. RESULTS With real spatial manipulations, the mean SRM ranged from 7.1 to 11.4 dB, depending on the speech corpus. With perceived spatial manipulations, the mean SRM ranged from 1.8 to 3.1 dB. Whereas real separation improves the signal-to-noise ratio in the ear contralateral to the masker, SRM in the perceived spatial separation conditions is based solely on interaural timing cues. CONCLUSIONS The finding of robust SRM with widely available speech corpora supports the feasibility of measuring this important aspect of hearing in the audiology clinic. The finding of a small but significant SRM in the perceived spatial separation conditions suggests that modified materials could be used to evaluate the use of interaural timing cues specifically.
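SRM itself is simple arithmetic: the difference, in dB, between performance with a colocated masker and with a spatially separated masker. A sketch with invented thresholds:

```python
# Spatial release from masking (SRM) from two speech reception thresholds.
def spatial_release(colocated_srt_db: float, separated_srt_db: float) -> float:
    """Positive values mean separation helped (lower threshold when separated)."""
    return colocated_srt_db - separated_srt_db

print(spatial_release(-2.0, -10.5))  # 8.5 dB, within the 7.1-11.4 dB range above
```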
Affiliation(s)
- Grant King: Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, School of Medicine, Chapel Hill, NC
- Nicole E Corbin: Division of Speech and Hearing Sciences, Department of Allied Health Sciences, University of North Carolina at Chapel Hill, School of Medicine, Chapel Hill, NC
- Lori J Leibold: Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Emily Buss: Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, School of Medicine, Chapel Hill, NC
48
|
Dingemanse JG, Goedegebure A. The Important Role of Contextual Information in Speech Perception in Cochlear Implant Users and Its Consequences in Speech Tests. Trends Hear 2019; 23:2331216519838672. [PMID: 30991904 PMCID: PMC6472157 DOI: 10.1177/2331216519838672] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
Abstract
This study investigated the role of contextual information in speech intelligibility, the influence of verbal working memory on the use of contextual information, and the suitability of an ecologically valid sentence test containing contextual information, compared with a CNC (Consonant-Nucleus-Consonant) word test, in cochlear implant (CI) users. Speech intelligibility performance was assessed in 50 postlingual adult CI users on sentence lists and on CNC word lists. Results were compared with a normal-hearing (NH) group. The influence of contextual information was calculated from three different context models. Working memory capacity was measured with a Reading Span Test. CI recipients made significantly more use of contextual information in recognition of CNC words and sentences than NH listeners. Their use of contextual information in sentences was related to verbal working memory capacity but not to age, indicating that the ability to use context depends on cognitive abilities, regardless of age. The presence of context in sentences enhanced the sensitivity to differences in sensory bottom-up information but also increased the risk of a ceiling effect. A sentence test appeared to be suitable in CI users if word scoring is used and noise is added for the best performers.
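One classic way to quantify contextual benefit is the Boothroyd and Nittrouer k factor, which relates the probability of recognizing a word in context (pc) to the same word without context (pi) via 1 - pc = (1 - pi)^k, with k > 1 indicating benefit from context. The sketch below is offered only as an illustration of how such a benefit can be quantified; the abstract does not name the three models actually used.

```python
# Boothroyd & Nittrouer k factor: contextual benefit from paired recognition
# probabilities with and without sentence context. Inputs are illustrative.
import math

def k_factor(p_context: float, p_isolated: float) -> float:
    return math.log(1 - p_context) / math.log(1 - p_isolated)

print(k_factor(0.90, 0.70))  # about 1.9: substantial use of sentence context
```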
Affiliation(s)
- J. Gertjan Dingemanse: Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus University Medical Center, Rotterdam, the Netherlands
- André Goedegebure: Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus University Medical Center, Rotterdam, the Netherlands
49
|
Vasil KJ, Lewis J, Tamati T, Ray C, Moberly AC. How Does Quality of Life Relate to Auditory Abilities? A Subitem Analysis of the Nijmegen Cochlear Implant Questionnaire. J Am Acad Audiol 2019; 31:292-301. [PMID: 31580803 DOI: 10.3766/jaaa.19047] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
BACKGROUND Objective speech recognition tasks are widely used to measure performance of adult cochlear implant (CI) users; however, the relationship of these measures with patient-reported quality of life (QOL) remains unclear. A comprehensive QOL measure, the Nijmegen Cochlear Implant Questionnaire (NCIQ), has historically shown a weak association with speech recognition performance, but closer examination may reveal stronger relations between QOL and objective auditory performance, particularly when a broad range of auditory skills is examined. PURPOSE The aim of the present study was to assess the NCIQ for relations to speech and environmental sound recognition measures. Identifying associations with certain QOL domains, subdomains, and subitems would provide evidence that speech and environmental sound recognition measures are relevant to QOL. A lack of relations between QOL and various auditory abilities would suggest potential areas of patient-reported difficulty that could be better measured or targeted. RESEARCH DESIGN A cross-sectional study was performed in adult CI users to examine relations among subjective QOL ratings on NCIQ domains, subdomains, and subitems and auditory outcome measures. STUDY SAMPLE Participants were 44 experienced adult CI users. All participants were postlingually deafened and had met candidacy requirements for traditional cochlear implantation. DATA COLLECTION AND ANALYSIS Participants completed the NCIQ as well as several speech and environmental sound recognition tasks: monosyllabic word recognition, standard and high-variability sentence recognition, audiovisual sentence recognition, and environmental sound identification. Bivariate correlation analyses were performed to investigate relations between patient-reported NCIQ scores and the functional auditory measures. RESULTS The total NCIQ score was not strongly correlated with any objective auditory outcome measure. The physical domain and the advanced sound perception subdomain related to several measures, in particular monosyllabic word recognition and AzBio sentence recognition. Fourteen of the 60 subitems on the NCIQ were correlated with at least one auditory measure. CONCLUSIONS Several subitems demonstrated moderate-to-strong correlations with auditory measures, indicating that these auditory measures are relevant to QOL. The lack of relations with other subitems suggests a need to develop objective measures that better capture patients' hearing-related obstacles. Clinicians may use information obtained through the NCIQ to better estimate real-world performance, which may support improved counseling and recommendations for CI patients.
Affiliation(s)
- Kara J Vasil: Department of Otolaryngology-Head & Neck Surgery, The Ohio State University, Columbus, OH
- Jessica Lewis: Department of Otolaryngology-Head & Neck Surgery, The Ohio State University, Columbus, OH
- Terrin Tamati: Department of Otolaryngology-Head & Neck Surgery, The Ohio State University, Columbus, OH
- Christin Ray: Department of Otolaryngology-Head & Neck Surgery, The Ohio State University, Columbus, OH
- Aaron C Moberly: Department of Otolaryngology-Head & Neck Surgery, The Ohio State University, Columbus, OH
50
|
"Product" Versus "Process" Measures in Assessing Speech Recognition Outcomes in Adults With Cochlear Implants. Otol Neurotol 2019; 39:e195-e202. [PMID: 29342056 DOI: 10.1097/mao.0000000000001694] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
HYPOTHESES 1) When controlling for age in postlingual adult cochlear implant (CI) users, information-processing functions, as assessed using "process" measures of working memory capacity, inhibitory control, information-processing speed, and fluid reasoning, will predict traditional "product" outcome measures of speech recognition. 2) Demographic/audiologic factors, particularly duration of deafness, duration of CI use, degree of residual hearing, and socioeconomic status, will impact performance on underlying information-processing functions, as assessed using process measures. BACKGROUND Clinicians and researchers rely heavily on endpoint product measures of accuracy in speech recognition to gauge patient outcomes postoperatively. However, these measures are primarily descriptive and were not designed to assess the underlying core information-processing operations that are used during speech recognition. In contrast, process measures reflect the integrity of elementary core subprocesses that are operative during behavioral tests using complex speech signals. METHODS Forty-two experienced adult CI users were tested using three product measures of speech recognition, along with four process measures of working memory capacity, inhibitory control, speed of lexical/phonological access, and nonverbal fluid reasoning. Demographic and audiologic factors were also assessed. RESULTS Scores on product measures were associated with core process measures of speed of lexical/phonological access and nonverbal fluid reasoning. After controlling for participant age, demographic and audiologic factors did not correlate with process measure scores. CONCLUSION Findings provide support for the important foundational roles of information processing operations in speech recognition outcomes of postlingually deaf patients who have received CIs.
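A minimal sketch of the kind of age-controlled association implied by "when controlling for age" follows: regress age out of both variables, then correlate the residuals (a partial correlation). Data and variable names are simulated for illustration only.

```python
# Partial correlation between a "process" measure and a "product" measure,
# controlling for age, computed via residuals of simple linear fits.
import numpy as np

def partial_corr(x, y, covariate):
    """Pearson correlation of x and y after removing linear effects of covariate."""
    def residuals(v):
        slope, intercept = np.polyfit(covariate, v, 1)
        return v - (slope * covariate + intercept)
    return np.corrcoef(residuals(x), residuals(y))[0, 1]

rng = np.random.default_rng(3)
age = rng.uniform(50, 80, size=42)                   # 42 participants, as reported
process = rng.normal(size=42) - 0.02 * age           # e.g., a processing-speed score
product = 0.6 * process + 0.5 * rng.normal(size=42)  # e.g., word recognition score
print(partial_corr(process, product, age))
```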