1. Zhao X, Li Y, Yang X. Aging affects Mandarin speakers' understanding of focus sentences in quiet and noisy environments. J Commun Disord 2024;111:106451. [PMID: 39043003] [DOI: 10.1016/j.jcomdis.2024.106451]
Abstract
INTRODUCTION Older adults experiencing normal aging make up most patients seeking services at audiology clinics. While research acknowledges that aging can diminish lower-level speech identification and discrimination abilities, less attention has been paid to how aging affects higher-level speech understanding, particularly in tonal languages. This study explored the effects of aging on the comprehension of implied intentions conveyed through prosodic features in Mandarin focus sentences, in both quiet and noisy environments. METHODS Twenty-seven younger listeners (aged 17 to 26) and 27 older listeners (aged 58 to 77) completed a focus comprehension task. They interpreted SAVO (subject-adverbial-verb-object) sentences with five focus conditions (initial subject-focus, medial adverbial-focus, medial verb-focus, final object-focus, and neutral non-focus) across five background conditions: quiet, white noise (at 0 and -10-dB signal-to-noise ratios, SNRs), and competing speech (at 0 and -10-dB SNRs). Comprehension was analyzed using accuracy rates, and underlying processing patterns were evaluated using confusion matrices. RESULTS Younger listeners consistently excelled across focus conditions in quiet, but their scores declined in white noise at the -10-dB SNR. Older adults showed variability in scores across focus conditions but not across background conditions. They scored lower than their younger counterparts overall, scoring highest on sentences with a medial adverbial-focus. Confusion matrices revealed that younger adults seldom mistook focus conditions, whereas older adults tended to interpret other focused items as medial adverbials. CONCLUSIONS Older listeners' performance reflects over-reliance on top-down language knowledge, together with declining bottom-up acoustic processing, when interpreting Mandarin focus sentences. These findings provide evidence of active cognitive processing in prosody comprehension among aging adults and offer insights for diagnosing and treating speech disorders in clinical settings.
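The background conditions in this study are defined by signal-to-noise ratio (SNR). As an illustration of how such conditions are typically constructed (a generic sketch, not the authors' code; `mix_at_snr` is a hypothetical helper name), a noise signal can be scaled so that the mixture reaches a target SNR in dB:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`,
    then return the mixture (speech + scaled noise)."""
    speech = np.asarray(speech, dtype=float)
    noise = np.asarray(noise, dtype=float)[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Required noise power for the target SNR: SNR_dB = 10*log10(Ps / Pn)
    target_noise_power = p_speech / (10 ** (snr_db / 10))
    scale = np.sqrt(target_noise_power / p_noise)
    return speech + scale * noise

rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)      # stand-in for a speech waveform
noise = rng.standard_normal(16000)       # stand-in for white noise
mix = mix_at_snr(speech, noise, -10.0)   # -10 dB SNR: noise power is 10x speech power
```

At -10 dB SNR, as in the hardest condition above, the noise carries ten times the power of the speech, which is why performance degrades there first.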
Affiliation(s)
- Xinxian Zhao: School of Foreign Studies, Tongji University, 1239 Siping Road, Shanghai 200092, China
- Yang Li: School of Foreign Studies, Tongji University, 1239 Siping Road, Shanghai 200092, China
- Xiaohu Yang: School of Foreign Studies, Tongji University, 1239 Siping Road, Shanghai 200092, China
2. Gastaldon S, Bonfiglio N, Vespignani F, Peressotti F. Predictive language processing: integrating comprehension and production, and what atypical populations can tell us. Front Psychol 2024;15:1369177. [PMID: 38836235] [PMCID: PMC11148270] [DOI: 10.3389/fpsyg.2024.1369177]
Abstract
Predictive processing, a crucial aspect of human cognition, is also relevant for language comprehension. In everyday situations, we exploit various sources of information to anticipate, and therefore facilitate processing of, upcoming linguistic input. The literature offers a variety of models that aim to account for this ability; one group of models proposes a strict relationship between prediction and language production mechanisms. In this review, we first briefly introduce the concept of predictive processing during language comprehension. Second, we focus on models that attribute a prominent role to language production and sensorimotor processing in language prediction ("prediction-by-production" models). In that context, we summarize studies that investigated the role of speech production and auditory perception in language comprehension/prediction tasks in healthy, typical participants. We then provide an overview of the limited existing literature on specific atypical/clinical populations that may represent suitable testing grounds for such models, i.e., populations with impaired speech production and auditory perception mechanisms. Ultimately, we call for wider and more in-depth testing of prediction-by-production accounts, and for the involvement of atypical populations both for model testing and as targets for possible novel speech/language treatment approaches.
Affiliation(s)
- Simone Gastaldon: Dipartimento di Psicologia dello Sviluppo e della Socializzazione, University of Padua, Padua, Italy; Padova Neuroscience Center, University of Padua, Padua, Italy
- Noemi Bonfiglio: Dipartimento di Psicologia dello Sviluppo e della Socializzazione, University of Padua, Padua, Italy; BCBL-Basque Center on Cognition, Brain and Language, Donostia-San Sebastián, Spain
- Francesco Vespignani: Dipartimento di Psicologia dello Sviluppo e della Socializzazione, University of Padua, Padua, Italy; Centro Interdipartimentale di Ricerca "I-APPROVE-International Auditory Processing Project in Venice", University of Padua, Padua, Italy
- Francesca Peressotti: Dipartimento di Psicologia dello Sviluppo e della Socializzazione, University of Padua, Padua, Italy; Padova Neuroscience Center, University of Padua, Padua, Italy; Centro Interdipartimentale di Ricerca "I-APPROVE-International Auditory Processing Project in Venice", University of Padua, Padua, Italy
3. Nyirjesy SC, Lewis JH, Hallak D, Conroy S, Moberly AC, Tamati TN. Evaluating Listening Effort in Unilateral, Bimodal, and Bilateral Cochlear Implant Users. Otolaryngol Head Neck Surg 2024;170:1147-1157. [PMID: 38104319] [DOI: 10.1002/ohn.609]
Abstract
OBJECTIVE To evaluate listening effort (LE) in unilateral, bilateral, and bimodal cochlear implant (CI) users, and to establish an easy-to-implement task of LE that could be useful for clinical decision making. STUDY DESIGN Prospective cohort study. SETTING Tertiary neurotology center. METHODS The Sentence Final Word Identification and Recall Task, an established measure of LE, was modified to include challenging listening conditions (multitalker babble, and talker gender and emotional variation; test conditions) in addition to single-talker sentences (control). Participants listened to lists of sentences in each condition and recalled the last word of each sentence. LE was quantified as the percentage of words correctly recalled and was compared across conditions, across CI groups, and within subjects (best aided vs monaural). RESULTS A total of 24 adults between the ages of 37 and 82 years enrolled, including 4 unilateral CI users (CI), 10 bilateral CI users (CICI), and 10 bimodal CI users (CIHA). Task condition affected LE (P < .001), but hearing configuration and listener group did not (P = .90). Working memory capacity and contralateral hearing contributed to individual performance. CONCLUSION This study adds to the growing body of literature on LE in challenging listening conditions for CI users and demonstrates the feasibility of a simple behavioral task that could be implemented clinically to assess LE. It also highlights the potential benefits of bimodal hearing and the role of individual hearing and cognitive factors in understanding individual differences in performance, which will be evaluated in further research.
Affiliation(s)
- Sarah C Nyirjesy: Department of Otolaryngology-Head and Neck Surgery, The Ohio State University, Columbus, Ohio, USA
- Jessica H Lewis: Department of Otolaryngology-Head and Neck Surgery, The Ohio State University, Columbus, Ohio, USA; Department of Speech and Hearing Science, The Ohio State University, Columbus, Ohio, USA
- Diana Hallak: Department of Otolaryngology-Head and Neck Surgery, The Ohio State University, Columbus, Ohio, USA
- Sara Conroy: Department of Biomedical Informatics, Center for Biostatistics, The Ohio State University, Columbus, Ohio, USA
- Aaron C Moberly: Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Terrin N Tamati: Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
4. Tamati TN, Jebens A, Başkent D. Lexical effects on talker discrimination in adult cochlear implant users. J Acoust Soc Am 2024;155:1631-1640. [PMID: 38426835] [PMCID: PMC10908561] [DOI: 10.1121/10.0025011]
Abstract
The lexical and phonological content of an utterance impacts the processing of talker-specific details in normal-hearing (NH) listeners. Adult cochlear implant (CI) users demonstrate difficulties in talker discrimination, particularly for same-gender talker pairs, which may alter their reliance on lexical information in talker discrimination. The current study examined the effect of lexical content on talker discrimination in 24 adult CI users. In a remote AX talker discrimination task, word pairs, produced either by the same talker (ST) or by different talkers of the same gender (DT-SG) or mixed genders (DT-MG), were either lexically easy (high frequency, low neighborhood density) or lexically hard (low frequency, high neighborhood density). The task was completed in quiet and in multi-talker babble (MTB). Results showed an effect of lexical difficulty on talker discrimination for same-gender talker pairs in both quiet and MTB. CI users showed greater sensitivity in quiet, as well as less response bias in both quiet and MTB, for lexically easy words compared to lexically hard words. These results suggest that CI users make use of lexical content in same-gender talker discrimination, providing evidence for the contribution of linguistic information to the processing of degraded talker information by adult CI users.
Affiliation(s)
- Terrin N Tamati: Department of Otolaryngology, Vanderbilt University Medical Center, 1215 21st Ave S, Nashville, Tennessee 37232, USA; Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Almut Jebens: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Deniz Başkent: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, The Netherlands
5. Everhardt MK, Jung DE, Stiensma B, Lowie W, Başkent D, Sarampalis A. Foreign Language Acquisition in Adolescent Cochlear Implant Users. Ear Hear 2024;45:174-185. [PMID: 37747307] [PMCID: PMC10718217] [DOI: 10.1097/aud.0000000000001410]
Abstract
OBJECTIVES This study explores to what degree adolescent cochlear implant (CI) users can learn a foreign language in a school setting similarly to their normal-hearing (NH) peers, despite the degraded auditory input. DESIGN A group of native Dutch adolescent CI users (age range 13 to 17 years) learning English as a foreign language at secondary school and a group of NH controls (age range 12 to 15 years) were assessed on their Dutch and English language skills using various language tasks that relied either on the processing of auditory information (i.e., a listening task) or on the processing of orthographic information (i.e., reading and/or gap-fill tasks). The test battery also included various auditory and cognitive tasks to assess whether the auditory and cognitive functioning of the learners could explain potential variation in language skills. RESULTS Adolescent CI users can learn English as a foreign language: the English language skills of the CI users and their NH peers were comparable when assessed with reading or gap-fill tasks. However, the performance of the adolescent CI users was lower on English listening tasks. This discrepancy between tasks was not observed in their native language, Dutch. The auditory tasks confirmed that the adolescent CI users had coarser temporal and spectral resolution than their NH peers, supporting the notion that the difference in foreign language listening skills may be due to a difference in auditory functioning. No differences in the cognitive functioning of the CI users and their NH peers were found that could explain the variation in the foreign language listening tasks. CONCLUSIONS In short, acquiring a foreign language with degraded auditory input appears to affect foreign language listening skills, yet does not appear to impact foreign language skills assessed with tasks that rely on the processing of orthographic information. CI users could take advantage of orthographic information to facilitate foreign language acquisition and potentially support the development of listening-based foreign language skills.
Affiliation(s)
- Marita K. Everhardt: Center for Language and Cognition Groningen, University of Groningen, Netherlands; Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands; Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Dorit Enja Jung: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands; Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands; Department of Psychology, University of Groningen, Netherlands
- Berrit Stiensma: Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Wander Lowie: Center for Language and Cognition Groningen, University of Groningen, Netherlands; Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Deniz Başkent: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands; Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands; W.J. Kolff Institute for Biomedical Engineering and Materials Science, University Medical Center Groningen, University of Groningen, Netherlands
- Anastasios Sarampalis: Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands; Department of Psychology, University of Groningen, Netherlands
6. Koelewijn T, Gaudrain E, Shehab T, Treczoks T, Başkent D. The Role of Word Content, Sentence Information, and Vocoding for Voice Cue Perception. J Speech Lang Hear Res 2023;66:3665-3676. [PMID: 37556819] [DOI: 10.1044/2023_jslhr-22-00491]
Abstract
PURPOSE In voice perception, two voice cues, fundamental frequency (fo) and vocal tract length (VTL), contribute largely to the identification of voices and speaker characteristics. Acoustic content related to these voice cues is altered in cochlear-implant-transmitted speech, rendering voice perception difficult for the implant user. In everyday listening, top-down compensatory mechanisms, such as the use of linguistic content, could provide some facilitation. Recently, we showed a lexical content benefit on just-noticeable differences (JNDs) in VTL perception, which was not affected by vocoding. This study investigated whether that benefit relates to lexicality or phonemic content, and whether additional sentence information can affect voice cue perception as well. METHOD This study examined the lexical benefit on VTL perception by comparing words, time-reversed words, and nonwords, to separate the contributions of lexical (words vs. nonwords) and phonetic (nonwords vs. reversed words) information. In addition, we investigated the effect of the amount of speech (auditory) information on fo and VTL voice cue perception by comparing words to sentences. In both experiments, nonvocoded and vocoded auditory stimuli were presented. RESULTS The outcomes replicated the detrimental effect of reversed words on VTL perception: JNDs were smaller for stimuli containing lexical and/or phonemic information. Experiment 2 showed a benefit of processing full sentences compared to single words in both fo and VTL perception. In both experiments there was an effect of vocoding, which interacted with sentence information only for fo. CONCLUSIONS In addition to previous findings suggesting a lexical benefit, the current results show, more specifically, that lexical and phonemic information improves VTL perception, and that fo and VTL perception benefit from sentences compared to single words. These results indicate that cochlear implant users may be able to partially compensate for voice cue perception difficulties by relying on the linguistic content and rich acoustic cues of everyday speech. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.23796405.
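JNDs for voice cues such as fo and VTL are commonly expressed in semitones relative to a reference voice. A minimal conversion between a frequency ratio and semitones (an illustrative helper, not code from the study):

```python
import math

def semitones(f_ref, f_test):
    """Express the ratio f_test/f_ref in semitones (12 semitones per octave)."""
    return 12.0 * math.log2(f_test / f_ref)

# A doubling of fo is one octave, i.e. 12 semitones:
one_octave = semitones(120.0, 240.0)   # -> 12.0
```

Expressing JNDs this way makes thresholds comparable across talkers with different baseline fo values, since equal semitone steps correspond to equal frequency ratios.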
Affiliation(s)
- Thomas Koelewijn: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands; Research School of Behavioural and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, the Netherlands
- Etienne Gaudrain: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands; Research School of Behavioural and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, the Netherlands; Lyon Neuroscience Research Center, CNRS UMR5292, Inserm U1028, UCBL, UJM, Lyon, France
- Thawab Shehab: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands; Neurolinguistics, Faculty of Arts, University of Groningen, the Netherlands
- Tobias Treczoks: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands; Medical Physics and Cluster of Excellence "Hearing4all," Department of Medical Physics and Acoustics, Faculty VI Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Germany
- Deniz Başkent: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands; Research School of Behavioural and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, the Netherlands
7. Moffat R, Başkent D, Luke R, McAlpine D, Van Yper L. Cortical haemodynamic responses predict individual ability to recognise vocal emotions with uninformative pitch cues but do not distinguish different emotions. Hum Brain Mapp 2023;44:3684-3705. [PMID: 37162212] [PMCID: PMC10203806] [DOI: 10.1002/hbm.26305]
Abstract
We investigated the cortical representation of emotional prosody in normal-hearing listeners using functional near-infrared spectroscopy (fNIRS) and behavioural assessments. Consistent with previous reports, listeners relied most heavily on F0 cues when recognizing emotions; performance was relatively poor, and highly variable between listeners, when only intensity and speech-rate cues were available. Using fNIRS to image cortical activity to speech utterances containing natural and reduced prosodic cues, we found the right superior temporal gyrus (STG) to be most sensitive to emotional prosody, but no emotion-specific cortical activations, suggesting that while fNIRS might be suited to investigating cortical mechanisms supporting speech processing, it is less suited to investigating cortical haemodynamic responses to individual vocal emotions. Manipulating emotional speech to render F0 cues less informative, we found the amplitude of the haemodynamic response in right STG to be significantly correlated with listeners' abilities to recognise vocal emotions with uninformative F0 cues. Specifically, listeners better able to assign emotions to speech with degraded F0 cues showed lower haemodynamic responses to these degraded signals. This suggests a potential objective measure of behavioural sensitivity to vocal emotions that might benefit neurodiverse populations less sensitive to emotional prosody, as well as hearing-impaired listeners, many of whom rely on listening technologies such as hearing aids and cochlear implants, neither of which restore, and which often further degrade, the F0 cues essential to parsing the emotional prosody conveyed in speech.
Affiliation(s)
- Ryssa Moffat: School of Psychological Sciences, Macquarie University, Sydney, New South Wales, Australia; International Doctorate of Experimental Approaches to Language and Brain (IDEALAB), Universities of Potsdam (Germany), Groningen (Netherlands), Newcastle University (UK), and Macquarie University (Australia); Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Deniz Başkent: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Research School of Behavioral and Cognitive Neuroscience, Graduate School of Medical Sciences, University of Groningen, Groningen, The Netherlands
- Robert Luke: Macquarie University Hearing, and Department of Linguistics, Macquarie University, Sydney, New South Wales, Australia; Bionics Institute, East Melbourne, Victoria, Australia
- David McAlpine: Macquarie University Hearing, and Department of Linguistics, Macquarie University, Sydney, New South Wales, Australia
- Lindsey Van Yper: Macquarie University Hearing, and Department of Linguistics, Macquarie University, Sydney, New South Wales, Australia; Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
8. Tamati TN, Janse E, Başkent D. The relation between speaking-style categorization and speech recognition in adult cochlear implant users. JASA Express Lett 2023;3:035201. [PMID: 37003708] [DOI: 10.1121/10.0017439]
Abstract
The current study examined the relation between speaking-style categorization and speech recognition in post-lingually deafened adult cochlear implant users and normal-hearing listeners tested under 4- and 8-channel acoustic noise-vocoder cochlear implant simulations. Across all listeners, better speaking-style categorization of careful read and casual conversation speech was associated with more accurate recognition of speech across those same two speaking styles. Findings suggest that some cochlear implant users and normal-hearing listeners under cochlear implant simulation may benefit from stronger encoding of indexical information in speech, enabling both better categorization and recognition of speech produced in different speaking styles.
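The 4- and 8-channel acoustic noise-vocoder simulations mentioned above follow a standard recipe: split the speech into frequency bands, extract each band's temporal envelope, and use the envelopes to modulate band-limited noise. A rough sketch of that general technique (the band edges, filter orders, and envelope cutoff here are illustrative defaults, not the study's exact settings):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(x, fs, n_channels=8, f_lo=100.0, f_hi=7000.0, env_cut=30.0):
    """Simple n-channel noise vocoder (a sketch of this kind of CI simulation).

    1. Split the input into n log-spaced analysis bands.
    2. Extract each band's temporal envelope (rectify + low-pass).
    3. Use each envelope to modulate band-limited noise.
    4. Sum the modulated noise bands.
    """
    x = np.asarray(x, dtype=float)
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    env_sos = butter(2, env_cut, btype="lowpass", fs=fs, output="sos")
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, x)                   # analysis band
        env = sosfiltfilt(env_sos, np.abs(band))          # temporal envelope
        env = np.clip(env, 0.0, None)
        carrier = sosfiltfilt(band_sos, rng.standard_normal(len(x)))
        out += env * carrier                              # modulated noise band
    return out

fs = 16000
t = np.arange(fs) / fs
signal_in = np.sin(2 * np.pi * 440 * t)   # stand-in for a speech waveform
vocoded = noise_vocode(signal_in, fs, n_channels=4)
```

Fewer channels discard more spectral detail, which is why the 4-channel condition is a harder listening condition than the 8-channel one.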
Affiliation(s)
- Terrin N Tamati: Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee 37232, USA
- Esther Janse: Centre for Language Studies, Radboud University Nijmegen, Nijmegen, The Netherlands
- Deniz Başkent: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, The Netherlands
|
9
|
Beckers L, Tromp N, Philips B, Mylanus E, Huinck W. Exploring neurocognitive factors and brain activation in adult cochlear implant recipients associated with speech perception outcomes-A scoping review. Front Neurosci 2023; 17:1046669. [PMID: 36816114 PMCID: PMC9932917 DOI: 10.3389/fnins.2023.1046669] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2022] [Accepted: 01/05/2023] [Indexed: 02/05/2023] Open
Abstract
Background Cochlear implants (CIs) are considered an effective treatment for severe-to-profound sensorineural hearing loss. However, speech perception outcomes are highly variable among adult CI recipients. Top-down neurocognitive factors have been hypothesized to contribute to this variation, which is currently only partly explained by biological and audiological factors. Studies investigating this use varying methods and observe varying outcomes, and their relevance has yet to be evaluated in a review. Gathering and structuring this evidence in this scoping review provides a clear overview of where this research line currently stands, with the aim of guiding future research. Objective To understand to what extent different neurocognitive factors influence speech perception in adult CI users with a postlingual onset of hearing loss, by systematically reviewing the literature. Methods A systematic scoping review was performed according to the PRISMA guidelines. Studies investigating the influence of one or more neurocognitive factors on speech perception post-implantation were included. Word and sentence perception in quiet and noise were included as speech perception outcome metrics, and six key neurocognitive domains, as defined by the DSM-5, were covered during the literature search (protocol registered in open science registries: 10.17605/OSF.IO/Z3G7W; searches in June 2020 and April 2022). Results From 5,668 retrieved articles, 54 were included and grouped into three categories by the measures used to relate to speech perception outcomes: (1) nineteen studies investigating brain activation, (2) thirty-one investigating performance on cognitive tests, and (3) eighteen investigating linguistic skills. Conclusion The use of cognitive functions (recruiting the frontal cortex), the use of visual cues (recruiting the occipital cortex), and a temporal cortex still available for language processing are beneficial for adult CI users. Cognitive assessments indicate that performance on non-verbal intelligence tasks correlated positively with speech perception outcomes. Performance on auditory or visual working memory, learning, memory, and vocabulary tasks was unrelated to speech perception outcomes, and performance on the Stroop task was unrelated to word perception in quiet. However, many uncertainties remain regarding the inconsistent results between papers, and more comprehensive studies are needed, e.g., including different assessment times or combining neuroimaging and behavioral measures. Systematic review registration https://doi.org/10.17605/OSF.IO/Z3G7W.
Affiliation(s)
- Loes Beckers: Cochlear Ltd., Mechelen, Belgium; Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
- Nikki Tromp: Cochlear Ltd., Mechelen, Belgium; Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
- Emmanuel Mylanus: Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
- Wendy Huinck: Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
|
10
|
Short Implicit Voice Training Affects Listening Effort During a Voice Cue Sensitivity Task With Vocoder-Degraded Speech. Ear Hear 2023:00003446-990000000-00113. [PMID: 36695603 PMCID: PMC10262993 DOI: 10.1097/aud.0000000000001335] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
Abstract
OBJECTIVES Understanding speech in real life, such as in multiple-talker listening conditions, can be challenging and effortful. Fundamental frequency (fo) and vocal-tract length (vtl) voice cues can help listeners segregate talkers, enhancing speech perception in adverse listening conditions. Previous research showed lower sensitivity to fo and vtl voice cues when the speech signal was degraded, as in cochlear implant hearing and vocoder listening, compared to normal hearing, likely contributing to difficulties in understanding speech in adverse listening conditions. Nevertheless, when multiple talkers are present, familiarity with a talker's voice, via training or exposure, could provide a speech intelligibility benefit. The objective of this study was to assess how implicit short-term voice training affects perceptual discrimination of voice cues (fo+vtl), measured as sensitivity and listening effort, with and without vocoder degradations. DESIGN Voice training was provided by listening to a recording of a book segment for approximately 30 min and answering text-related questions, to ensure engagement. Just-noticeable differences (JNDs) for fo+vtl were measured with an odd-one-out task implemented as a 3-alternative forced-choice adaptive paradigm, while pupil data were collected simultaneously. The reference voice belonged either to the trained voice or to an untrained voice. Effects of voice training (trained vs untrained voice), vocoding (non-vocoded vs vocoded), and item variability (fixed vs variable consonant-vowel triplets presented across three items) on voice cue sensitivity (fo+vtl JNDs) and listening effort (pupillometry measurements) were analyzed. RESULTS Voice training did not have a significant effect on voice cue discrimination. As expected, fo+vtl JNDs were significantly larger for vocoded than for non-vocoded conditions, and with variable than with fixed item presentations. Generalized additive mixed-model analysis of pupil dilation over the time course of stimulus presentation showed that pupil dilation was significantly larger during fo+vtl discrimination when listening to untrained voices compared to trained voices, but only for vocoder-degraded speech. Peak pupil dilation was significantly larger for vocoded than for non-vocoded conditions, and variable items increased the pupil baseline relative to fixed items, which could suggest higher anticipated task difficulty. CONCLUSIONS Even though short voice training did not improve sensitivity to small fo+vtl voice cue differences at the discrimination threshold level, it did reduce the listening effort of discriminating among vocoded voice cues.
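JND measurement via an adaptive forced-choice paradigm, as used above, typically adjusts the stimulus difference with a staircase rule and averages the final reversal points. A toy simulation of a 2-down/1-up staircase with a 3-AFC guessing floor (a generic sketch of such procedures with made-up parameters, not the study's implementation):

```python
import random

def simulate_jnd(true_jnd, start=12.0, step=2.0, min_step=0.25,
                 n_reversals=8, seed=1):
    """2-down/1-up adaptive staircase for a 3-AFC odd-one-out task.
    The simulated listener answers correctly when the voice-cue difference
    exceeds their true JND, and otherwise guesses (1-in-3 chance)."""
    rng = random.Random(seed)
    delta, correct_run, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        correct = delta > true_jnd or rng.random() < 1 / 3   # guessing floor
        if correct:
            correct_run += 1
            if correct_run == 2:                 # 2 correct -> make it harder
                correct_run = 0
                if direction == +1:              # direction change = reversal
                    reversals.append(delta)
                    step = max(step / 2, min_step)
                direction = -1
                delta = max(delta - step, 0.01)
        else:                                    # 1 wrong -> make it easier
            correct_run = 0
            if direction == -1:
                reversals.append(delta)
                step = max(step / 2, min_step)
            direction = +1
            delta += step
    return sum(reversals[-4:]) / 4               # mean of last reversals ~ JND

estimate = simulate_jnd(true_jnd=3.0)
```

A 2-down/1-up rule converges near the 70.7%-correct point of the psychometric function, so the averaged reversals settle close to the simulated listener's threshold.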
11
Moberly AC, Varadarajan VV, Tamati TN. Noise-Vocoded Sentence Recognition and the Use of Context in Older and Younger Adult Listeners. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2023; 66:365-381. [PMID: 36475738] [PMCID: PMC10023188] [DOI: 10.1044/2022_jslhr-22-00184]
Abstract
PURPOSE When listening to speech under adverse conditions, older adults, even with "age-normal" hearing, face challenges that may lead to poorer speech recognition than their younger peers. Older listeners generally demonstrate poorer suprathreshold auditory processing along with aging-related declines in neurocognitive functioning that may impair their ability to compensate using "top-down" cognitive-linguistic functions. This study explored top-down processing in older and younger adult listeners, specifically the use of semantic context during noise-vocoded sentence recognition. METHOD Eighty-four adults with age-normal hearing (45 young normal-hearing [YNH] and 39 older normal-hearing [ONH] adults) participated. Participants were tested for recognition accuracy for two sets of noise-vocoded sentence materials: one that was semantically meaningful and the other that was syntactically appropriate but semantically anomalous. Participants were also tested for hearing ability and for neurocognitive functioning to assess working memory capacity, speed of lexical access, inhibitory control, and nonverbal fluid reasoning, as well as vocabulary knowledge. RESULTS The ONH and YNH listeners made use of semantic context to a similar extent. Nonverbal reasoning predicted recognition of both meaningful and anomalous sentences, whereas pure-tone average contributed additionally to anomalous sentence recognition. None of the hearing, neurocognitive, or language measures significantly predicted the amount of context gain, computed as the difference score between meaningful and anomalous sentence recognition. However, exploratory cluster analyses demonstrated four listener profiles and suggested that individuals may vary in the strategies used to recognize speech under adverse listening conditions. CONCLUSIONS Older and younger listeners made use of sentence context to similar degrees. Nonverbal reasoning was found to be a contributor to noise-vocoded sentence recognition. 
However, different listeners may approach the problem of recognizing meaningful speech under adverse conditions using different strategies based on their hearing, neurocognitive, and language profiles. These findings provide support for the complexity of bottom-up and top-down interactions during speech recognition under adverse listening conditions.
Affiliation(s)
- Aaron C. Moberly
- Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus
- Terrin N. Tamati
- Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
12
Burleson AM, Souza PE. Cognitive and linguistic abilities and perceptual restoration of missing speech: Evidence from online assessment. Front Psychol 2022; 13:1059192. [PMID: 36571056] [PMCID: PMC9773209] [DOI: 10.3389/fpsyg.2022.1059192]
Abstract
When speech is clear, speech understanding is a relatively simple and automatic process. However, when the acoustic signal is degraded, top-down cognitive and linguistic abilities, such as working memory capacity, lexical knowledge (i.e., vocabulary), inhibitory control, and processing speed, can often support speech understanding. This study examined whether listeners aged 22-63 (mean age 42 years) with better cognitive and linguistic abilities would be better able to perceptually restore missing speech information than those with poorer scores. Additionally, the roles of context and everyday speech were investigated using high-context, low-context, and realistic speech corpora. Sixty-three adult participants with self-reported normal hearing completed a short cognitive and linguistic battery before listening to sentences interrupted by silent gaps or noise bursts. Results indicated that working memory was the most reliable predictor of perceptual restoration ability, followed by lexical knowledge, inhibitory control, and processing speed. Generally, silent gap conditions were related to and predicted by a broader range of cognitive abilities, whereas noise burst conditions were related to working memory capacity and inhibitory control. These findings suggest that higher-order cognitive and linguistic abilities facilitate the top-down restoration of missing speech information and contribute to individual variability in perceptual restoration.
13
Jebens A, Başkent D, Rachman L. Phonological effects on the perceptual weighting of voice cues for voice gender categorization. JASA EXPRESS LETTERS 2022; 2:125202. [PMID: 36586964] [DOI: 10.1121/10.0016601]
Abstract
Voice perception and speaker identification interact with linguistic processing. This study investigated whether lexicality and/or phonological effects alter the perceptual weighting of voice pitch (F0) and vocal-tract length (VTL) cues for perceived voice gender categorization. F0 and VTL of forward words and nonwords (for the lexicality effect), and time-reversed nonwords (for the phonological effect through phonetic alterations) were manipulated. Participants provided binary "man"/"woman" judgements of the different voice conditions. Cue weights for time-reversed nonwords were significantly lower than cue weights for both forward words and nonwords, but there was no significant difference between forward words and nonwords. Hence, voice cue utilization for voice gender judgements seems to be affected by phonological, rather than lexicality, effects.
Affiliation(s)
- Almut Jebens
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Laura Rachman
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
14
Deshpande P, Brandt C, Debener S, Neher T. Comparing Clinically Applicable Behavioral and Electrophysiological Measures of Speech Detection, Discrimination, and Comprehension. Trends Hear 2022; 26:23312165221139733. [PMID: 36423251] [PMCID: PMC9703531] [DOI: 10.1177/23312165221139733]
Abstract
Effective communication requires good speech perception abilities. Speech perception can be assessed with behavioral and electrophysiological methods. Relating these two types of measures to each other can provide a basis for new clinical tests. In audiological practice, speech detection and discrimination are routinely assessed, whereas comprehension-related aspects are ignored. The current study compared behavioral and electrophysiological measures of speech detection, discrimination, and comprehension. Thirty young normal-hearing native Danish speakers participated. All measurements were carried out with digits and stationary speech-shaped noise as the stimuli. The behavioral measures included speech detection thresholds (SDTs), speech recognition thresholds (SRTs), and speech comprehension scores (i.e., response times). For the electrophysiological measures, multichannel electroencephalography (EEG) recordings were performed. N100 and P300 responses were evoked using an active auditory oddball paradigm. N400 and Late Positive Complex (LPC) responses were evoked using a paradigm based on congruent and incongruent digit triplets, with the digits presented either all acoustically or first visually (digits 1-2) and then acoustically (digit 3). While no correlations between the SDTs and SRTs and the N100 and P300 responses were found, the response times were correlated with the EEG responses to the congruent and incongruent triplets. Furthermore, significant differences between the response times (but not EEG responses) obtained with auditory and visual-then-auditory stimulus presentation were observed. This pattern of results could reflect a faster recall mechanism when the first two digits are presented visually rather than acoustically. The visual-then-auditory condition may facilitate the assessment of comprehension-related processes in hard-of-hearing individuals.
Affiliation(s)
- Pushkar Deshpande
- Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
- Research Unit for ORL – Head & Neck Surgery and Audiology, Odense University Hospital & University of Southern Denmark, Odense, Denmark
- Christian Brandt
- Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
- Research Unit for ORL – Head & Neck Surgery and Audiology, Odense University Hospital & University of Southern Denmark, Odense, Denmark
- Stefan Debener
- Department of Psychology, University of Oldenburg, Oldenburg, Germany
- Tobias Neher
- Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
- Research Unit for ORL – Head & Neck Surgery and Audiology, Odense University Hospital & University of Southern Denmark, Odense, Denmark
15
Drouin JR, Theodore RM. Many tasks, same outcome: Role of training task on learning and maintenance of noise-vocoded speech. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2022; 152:981. [PMID: 36050170] [PMCID: PMC9553285] [DOI: 10.1121/10.0013507]
Abstract
Listeners who use cochlear implants show variability in speech recognition. Research suggests that structured auditory training can improve speech recognition outcomes in cochlear implant users, and a central goal in the rehabilitation literature is to identify factors that maximize training. Here, we examined factors that may influence perceptual learning for noise-vocoded speech in normal hearing listeners as a foundational step towards clinical recommendations. Three groups of listeners were exposed to anomalous noise-vocoded sentences and completed one of three training tasks: transcription with feedback, transcription without feedback, or talker identification. Listeners completed a word transcription test at three time points: immediately before training, immediately after training, and one week following training. Accuracy at test was indexed by keyword accuracy at the sentence-initial and sentence-final position for high and low predictability noise-vocoded sentences. Following training, listeners showed improved transcription for both sentence-initial and sentence-final items, and for both low and high predictability sentences. The training groups showed robust and equivalent learning of noise-vocoded sentences immediately after training. Critically, gains were largely maintained equivalently among training groups one week later. These results converge with evidence pointing towards the utility of non-traditional training tasks to maximize perceptual learning of noise-vocoded speech.
Affiliation(s)
- Julia R Drouin
- Department of Communication Sciences and Disorders, California State University Fullerton, Fullerton, California 92831, USA
- Rachel M Theodore
- Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, Connecticut 06269, USA
16
Gray R, Sarampalis A, Başkent D, Harding EE. Working-Memory, Alpha-Theta Oscillations and Musical Training in Older Age: Research Perspectives for Speech-on-speech Perception. Front Aging Neurosci 2022; 14:806439. [PMID: 35645774] [PMCID: PMC9131017] [DOI: 10.3389/fnagi.2022.806439]
Abstract
During the normal course of aging, perception of speech-on-speech or “cocktail party” speech and use of working memory (WM) abilities change. Musical training, which is a complex activity that integrates multiple sensory modalities and higher-order cognitive functions, reportedly benefits both WM performance and speech-on-speech perception in older adults. This mini-review explores the relationship between musical training, WM and speech-on-speech perception in older age (> 65 years) through the lens of the Ease of Language Understanding (ELU) model. Linking neural-oscillation literature associating speech-on-speech perception and WM with alpha-theta oscillatory activity, we propose that two stages of speech-on-speech processing in the ELU are underpinned by WM-related alpha-theta oscillatory activity, and that effects of musical training on speech-on-speech perception may be reflected in these frequency bands among older adults.
Affiliation(s)
- Ryan Gray
- Department of Experimental Psychology, University of Groningen, Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, Netherlands
- Department of Psychology, Centre for Applied Behavioural Sciences, School of Social Sciences, Heriot-Watt University, Edinburgh, United Kingdom
- Anastasios Sarampalis
- Department of Experimental Psychology, University of Groningen, Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, Netherlands
- Deniz Başkent
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, Netherlands
- Department of Otorhinolaryngology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Eleanor E. Harding
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, Netherlands
- Department of Otorhinolaryngology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
17
Abdel-Latif KHA, Meister H. Speech Recognition and Listening Effort in Cochlear Implant Recipients and Normal-Hearing Listeners. Front Neurosci 2022; 15:725412. [PMID: 35221883] [PMCID: PMC8867819] [DOI: 10.3389/fnins.2021.725412]
Abstract
The outcome of cochlear implantation is typically assessed by speech recognition tests in quiet and in noise. Many cochlear implant recipients reveal satisfactory speech recognition, especially in quiet situations. However, since cochlear implants provide only limited spectro-temporal cues, the effort associated with understanding speech might be increased. In this respect, measures of listening effort could give important extra information regarding the outcome of cochlear implantation. In order to shed light on this topic and to gain knowledge for clinical applications, we compared speech recognition and listening effort in cochlear implant (CI) recipients and age-matched normal-hearing (NH) listeners while considering potential influential factors, such as cognitive abilities. Importantly, we estimated speech recognition functions for both listener groups and compared listening effort at similar performance levels. Therefore, a subjective listening effort test (adaptive scaling, "ACALES") as well as an objective test (dual-task paradigm) were applied and compared. Regarding speech recognition, CI users needed an approximately 4 dB better signal-to-noise ratio (SNR) to reach the same 50% performance level as NH listeners, and an even 5 dB better SNR to reach 80% speech recognition, revealing shallower psychometric functions in the CI listeners. However, when targeting a fixed speech intelligibility of 50 and 80%, respectively, CI users and NH listeners did not differ significantly in terms of listening effort. This applied to both the subjective and the objective estimation. Outcomes for subjective and objective listening effort were not correlated with each other, nor with the age or cognitive abilities of the listeners. This study did not give evidence that CI users and NH listeners differ in terms of listening effort – at least when the same performance level is considered.
In contrast, both listener groups showed large inter-individual differences in effort determined with the subjective scaling and the objective dual-task. Potential clinical implications of how to assess listening effort as an outcome measure for hearing rehabilitation are discussed.
18
Luo M, Debelak R, Schneider G, Martin M, Demiray B. With a little help from familiar interlocutors: real-world language use in young and older adults. Aging Ment Health 2021; 25:2310-2319. [PMID: 32981344] [DOI: 10.1080/13607863.2020.1822288]
Abstract
OBJECTIVES Functional psychologists are concerned with the performance of cognitive activities in the real world in relation to cognitive changes in older age. Conversational contexts may mitigate the influence of cognitive aging on the cognitive activity of language production. This study examined effects of familiarity with interlocutors, as a context, on language production in the real world. METHOD We collected speech samples using iPhones on which an audio recording app (i.e. Electronically Activated Recorder [EAR]) was installed. Over 31,300 brief audio files (each 30 seconds long) were randomly collected across four days from 61 young and 48 healthy older adults in Switzerland. We transcribed the audio files that included participants' speech and manually coded for familiar interlocutors (i.e. significant other, friends, family members) and strangers. We computed scores of vocabulary richness and grammatical complexity from the transcripts using computational linguistics techniques. RESULTS Bayesian multilevel analyses showed that participants used richer vocabulary and more complex grammar when talking with familiar interlocutors than with strangers. Young adults used more diverse vocabulary than older adults, and the age effects remained stable across contexts. Furthermore, older adults produced equally complex grammar as young adults did with the significant other, but simpler grammar than young adults with friends and family members. CONCLUSION Familiarity with interlocutors is a promising contextual factor for research on aging and language complexity in the real world. Results were discussed in the context of cognitive aging.
Affiliation(s)
- Minxia Luo
- Department of Psychology, University of Zurich, Zurich, Switzerland
- University Research Priority Program "Dynamics of Healthy Aging", University of Zurich, Zurich, Switzerland
- Rudolf Debelak
- Department of Psychology, University of Zurich, Zurich, Switzerland
- Gerold Schneider
- English Department, University of Zurich, Zurich, Switzerland
- Institute of Computational Linguistics, University of Zurich, Zurich, Switzerland
- Mike Martin
- Department of Psychology, University of Zurich, Zurich, Switzerland
- University Research Priority Program "Dynamics of Healthy Aging", University of Zurich, Zurich, Switzerland
- Burcu Demiray
- Department of Psychology, University of Zurich, Zurich, Switzerland
- University Research Priority Program "Dynamics of Healthy Aging", University of Zurich, Zurich, Switzerland
19
Kommajosyula SP, Bartlett EL, Cai R, Ling L, Caspary DM. Corticothalamic projections deliver enhanced responses to medial geniculate body as a function of the temporal reliability of the stimulus. J Physiol 2021; 599:5465-5484. [PMID: 34783016] [PMCID: PMC10630908] [DOI: 10.1113/jp282321]
Abstract
Ageing and challenging signal-in-noise conditions are known to engage the use of cortical resources to help maintain speech understanding. Extensive corticothalamic projections are thought to provide attentional, mnemonic and cognitive-related inputs in support of sensory inferior colliculus (IC) inputs to the medial geniculate body (MGB). Here we show that a decrease in modulation depth, a temporally less distinct periodic acoustic signal, leads to a jittered ascending temporal code, changing MGB unit responses from adapting responses to responses showing repetition enhancement, posited to aid identification of important communication and environmental sounds. Young-adult male Fischer Brown Norway rats, injected with the inhibitory opsin archaerhodopsin T (ArchT) into the primary auditory cortex (A1), were subsequently studied using optetrodes to record single-units in MGB. Decreasing the modulation depth of acoustic stimuli significantly increased repetition enhancement. Repetition enhancement was blocked by optical inactivation of corticothalamic terminals in MGB. These data support a role for corticothalamic projections in repetition enhancement, implying that predictive anticipation could be used to improve neural representation of weakly modulated sounds. KEY POINTS: In response to a less temporally distinct repeating sound with low modulation depth, medial geniculate body (MGB) single units show a switch from adaptation towards repetition enhancement. Repetition enhancement was reversed by blockade of MGB inputs from the auditory cortex. Collectively, these data argue that diminished acoustic temporal cues such as weak modulation engage cortical processes to enhance coding of those cues in auditory thalamus.
Affiliation(s)
- Srinivasa P Kommajosyula
- Department of Pharmacology, Southern Illinois University School of Medicine, Springfield, IL, USA
- Edward L Bartlett
- Department of Biological Sciences and the Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Rui Cai
- Department of Pharmacology, Southern Illinois University School of Medicine, Springfield, IL, USA
- Lynne Ling
- Department of Pharmacology, Southern Illinois University School of Medicine, Springfield, IL, USA
- Donald M Caspary
- Department of Pharmacology, Southern Illinois University School of Medicine, Springfield, IL, USA
20
Koelewijn T, Gaudrain E, Tamati T, Başkent D. The effects of lexical content, acoustic and linguistic variability, and vocoding on voice cue perception. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 150:1620. [PMID: 34598602] [DOI: 10.1121/10.0005938]
Abstract
Perceptual differences in voice cues, such as fundamental frequency (F0) and vocal tract length (VTL), can facilitate speech understanding in challenging conditions. Yet, we hypothesized that in the presence of spectrotemporal signal degradations, as imposed by cochlear implants (CIs) and vocoders, acoustic cues that overlap for voice perception and phonemic categorization could be mistaken for one another, leading to a strong interaction between linguistic and indexical (talker-specific) content. Fifteen normal-hearing participants performed an odd-one-out adaptive task measuring just-noticeable differences (JNDs) in F0 and VTL. Items used were words (lexical content) or time-reversed words (no lexical content). The use of lexical content was either promoted (by using variable items across comparison intervals) or not (fixed item). Finally, stimuli were presented without or with vocoding. Results showed that JNDs for both F0 and VTL were significantly smaller (better) for non-vocoded compared with vocoded speech and for fixed compared with variable items. Lexical content (forward vs reversed) affected VTL JNDs in the variable item condition, but F0 JNDs only in the non-vocoded, fixed condition. In conclusion, lexical content had a positive top-down effect on VTL perception when acoustic and linguistic variability was present but not on F0 perception. Lexical advantage persisted in the most degraded conditions and vocoding even enhanced the effect of item variability, suggesting that linguistic content could support compensation for poor voice perception in CI users.
Affiliation(s)
- Thomas Koelewijn
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Etienne Gaudrain
- CNRS Unité Mixte de Recherche 5292, Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics, Institut National de la Santé et de la Recherche Médicale, UMRS 1028, Université Claude Bernard Lyon 1, Université de Lyon, Lyon, France
- Terrin Tamati
- Department of Otolaryngology-Head & Neck Surgery, The Ohio State University Wexner Medical Center, The Ohio State University, Columbus, Ohio, USA
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
21
Nagels L, Gaudrain E, Vickers D, Hendriks P, Başkent D. School-age children benefit from voice gender cue differences for the perception of speech in competing speech. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 149:3328. [PMID: 34241121] [DOI: 10.1121/10.0004791]
Abstract
Differences in speakers' voice characteristics, such as mean fundamental frequency (F0) and vocal-tract length (VTL), that primarily define speakers' so-called perceived voice gender facilitate the perception of speech in competing speech. Perceiving speech in competing speech is particularly challenging for children, which may relate to their lower sensitivity to differences in voice characteristics compared with adults. This study investigated the development of the benefit from F0 and VTL differences in school-age children (4-12 years) for separating two competing speakers while tasked with comprehending one of them, and also the relationship between this benefit and their corresponding voice discrimination thresholds. Children benefited from differences in F0, VTL, or both cues at all ages tested. This benefit remained proportionally the same across age, although overall accuracy continued to differ from that of adults. Additionally, children's benefit from F0 and VTL differences and their overall accuracy were not related to their discrimination thresholds. Hence, although children's voice discrimination thresholds and speech-in-competing-speech perception abilities develop throughout the school-age years, children already show a benefit from voice gender cue differences early on. Factors other than children's discrimination thresholds seem to relate more closely to their developing speech-in-competing-speech perception abilities.
Affiliation(s)
- Leanne Nagels
- Center for Language and Cognition Groningen (CLCG), University of Groningen, Groningen 9712EK, Netherlands
- Etienne Gaudrain
- CNRS UMR 5292, Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics, Inserm UMRS 1028, Université Claude Bernard Lyon 1, Université de Lyon, Lyon, France
- Deborah Vickers
- Sound Lab, Cambridge Hearing Group, Clinical Neurosciences Department, University of Cambridge, Cambridge CB2 0SZ, United Kingdom
- Petra Hendriks
- Center for Language and Cognition Groningen (CLCG), University of Groningen, Groningen 9712EK, Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen 9713GZ, Netherlands
22
Jaekel BN, Weinstein S, Newman RS, Goupell MJ. Access to semantic cues does not lead to perceptual restoration of interrupted speech in cochlear-implant users. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 149:1488. [PMID: 33765790] [PMCID: PMC7935498] [DOI: 10.1121/10.0003573]
Abstract
Cochlear-implant (CI) users experience less success in understanding speech in noisy, real-world listening environments than normal-hearing (NH) listeners. Perceptual restoration is one method NH listeners use to repair noise-interrupted speech. Whereas previous work has reported that CI users can use perceptual restoration in certain cases, they failed to do so under listening conditions in which NH listeners can successfully restore. Providing increased opportunities to use top-down linguistic knowledge is one possible method to increase perceptual restoration use in CI users. This work tested perceptual restoration abilities in 18 CI users and varied whether a semantic cue (presented visually) was available prior to the target sentence (presented auditorily). Results showed that whereas access to a semantic cue generally improved performance with interrupted speech, CI users failed to perceptually restore speech regardless of the semantic cue availability. The lack of restoration in this population directly contradicts previous work in this field and raises questions of whether restoration is possible in CI users. One reason for speech-in-noise understanding difficulty in CI users could be that they are unable to use tools like restoration to process noise-interrupted speech effectively.
Collapse
Affiliation(s)
- Brittany N Jaekel
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Sarah Weinstein
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Rochelle S Newman
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
23
O'Neill ER, Parke MN, Kreft HA, Oxenham AJ. Role of semantic context and talker variability in speech perception of cochlear-implant users and normal-hearing listeners. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 149:1224. [PMID: 33639827 PMCID: PMC7895533 DOI: 10.1121/10.0003532] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/19/2020] [Revised: 01/01/2021] [Accepted: 01/26/2021] [Indexed: 06/12/2023]
Abstract
This study assessed the impact of semantic context and talker variability on speech perception by cochlear-implant (CI) users and compared their overall performance and between-subjects variance with that of normal-hearing (NH) listeners under vocoded conditions. Thirty post-lingually deafened adult CI users were tested, along with 30 age-matched and 30 younger NH listeners, on sentences with and without semantic context, presented in quiet and noise, spoken by four different talkers. Additional measures included working memory, non-verbal intelligence, and spectral-ripple detection and discrimination. Semantic context and between-talker differences influenced speech perception to similar degrees for both CI users and NH listeners. Between-subjects variance for speech perception was greatest in the CI group but remained substantial in both NH groups, despite the uniformly degraded stimuli in these two groups. Spectral-ripple detection and discrimination thresholds in CI users were significantly correlated with speech perception, but a single set of vocoder parameters for NH listeners was not able to capture average CI performance in both speech and spectral-ripple tasks. The lack of difference in the use of semantic context between CI users and NH listeners suggests no overall differences in listening strategy between the groups, when the stimuli are similarly degraded.
Affiliation(s)
- Erin R O'Neill
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Morgan N Parke
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Heather A Kreft
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
24
Nagels L, Bastiaanse R, Başkent D, Wagner A. Individual Differences in Lexical Access Among Cochlear Implant Users. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2020; 63:286-304. [PMID: 31855606 DOI: 10.1044/2019_jslhr-19-00192] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Purpose The current study investigates how individual differences in cochlear implant (CI) users' sensitivity to word-nonword differences, reflecting lexical uncertainty, relate to their reliance on sentential context for lexical access in processing continuous speech. Method Fifteen CI users and 14 normal-hearing (NH) controls participated in an auditory lexical decision task (Experiment 1) and a visual-world paradigm task (Experiment 2). Experiment 1 tested participants' reliance on lexical statistics, and Experiment 2 studied how sentential context affects the time course and patterns of lexical competition leading to lexical access. Results In Experiment 1, CI users had lower accuracy scores and longer reaction times than NH listeners, particularly for nonwords. In Experiment 2, CI users' lexical competition patterns were, on average, similar to those of NH listeners, but the patterns of individual CI users varied greatly. Individual CI users' word-nonword sensitivity (Experiment 1) explained differences in the reliance on sentential context to resolve lexical competition, whereas clinical speech perception scores explained competition with phonologically related words. Conclusions The general analysis of CI users' lexical competition patterns showed merely quantitative differences with NH listeners in the time course of lexical competition, but our additional analysis revealed more qualitative differences in CI users' strategies to process speech. Individuals' word-nonword sensitivity explained different parts of individual variability than clinical speech perception scores. These results stress, particularly for heterogeneous clinical populations such as CI users, the importance of investigating individual differences in addition to group averages, as they can be informative for clinical rehabilitation. Supplemental Material https://doi.org/10.23641/asha.11368106.
Affiliation(s)
- Leanne Nagels
- Department of Otorhinolaryngology-Head & Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Center for Language and Cognition Groningen, University of Groningen, the Netherlands
- Roelien Bastiaanse
- Center for Language and Cognition Groningen, University of Groningen, the Netherlands
- National Research University Higher School of Economics, Moscow, Russia
- Deniz Başkent
- Department of Otorhinolaryngology-Head & Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Research School of Behavioural and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, the Netherlands
- Anita Wagner
- Department of Otorhinolaryngology-Head & Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Research School of Behavioural and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, the Netherlands
25
Winn MB. Accommodation of gender-related phonetic differences by listeners with cochlear implants and in a variety of vocoder simulations. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2020; 147:174. [PMID: 32006986 PMCID: PMC7341679 DOI: 10.1121/10.0000566] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/30/2019] [Revised: 12/06/2019] [Accepted: 12/13/2019] [Indexed: 06/01/2023]
Abstract
Speech perception requires accommodation of a wide range of acoustic variability across talkers. A classic example is the perception of "sh" and "s" fricative sounds, which are categorized according to spectral details of the consonant itself, and also by the context of the voice producing it. Because women's and men's voices occupy different frequency ranges, a listener is required to make a corresponding adjustment of acoustic-phonetic category space for these phonemes when hearing different talkers. This pattern is commonplace in everyday speech communication, and yet might not be captured in accuracy scores for whole words, especially when word lists are spoken by a single talker. Phonetic accommodation for fricatives "s" and "sh" was measured in 20 cochlear implant (CI) users and in a variety of vocoder simulations, including those with noise carriers with and without peak picking, simulated spread of excitation, and pulsatile carriers. CI listeners showed strong phonetic accommodation as a group. Each vocoder produced phonetic accommodation except the 8-channel noise vocoder, despite its historically good match with CI users in word intelligibility. Phonetic accommodation is largely independent of linguistic factors and thus might offer information complementary to speech intelligibility tests which are partially affected by language processing.
Affiliation(s)
- Matthew B Winn
- Department of Speech & Hearing Sciences, University of Minnesota, 164 Pillsbury Drive Southeast, Minneapolis, Minnesota 55455, USA
26
Rodman C, Moberly AC, Janse E, Başkent D, Tamati TN. The impact of speaking style on speech recognition in quiet and multi-talker babble in adult cochlear implant users. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2020; 147:101. [PMID: 32006976 DOI: 10.1121/1.5141370] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/30/2019] [Accepted: 11/30/2019] [Indexed: 06/10/2023]
Abstract
The current study examined sentence recognition across speaking styles (conversational, neutral, and clear) in quiet and multi-talker babble (MTB) for cochlear implant (CI) users and normal-hearing listeners under CI simulations. Listeners demonstrated poorer recognition accuracy in MTB than in quiet, but were relatively more accurate with clear speech overall. Within CI users, higher-performing participants were also more accurate in MTB when listening to clear speech. Lower-performing users' accuracy was not impacted by speaking style. Clear speech may facilitate recognition in MTB for high-performing users, who may be better able to take advantage of clear speech cues.
Affiliation(s)
- Cole Rodman
- Department of Otolaryngology-Head and Neck Surgery, The Ohio State University Wexner Medical Center, 915 Olentangy River Road, Suite 4000, Columbus, Ohio 43212, USA
- Aaron C Moberly
- Department of Otolaryngology-Head and Neck Surgery, The Ohio State University Wexner Medical Center, 915 Olentangy River Road, Suite 4000, Columbus, Ohio 43212, USA
- Esther Janse
- Centre for Language Studies, Radboud University Nijmegen, Nijmegen, The Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- Terrin N Tamati
- Department of Otolaryngology-Head and Neck Surgery, The Ohio State University Wexner Medical Center, 915 Olentangy River Road, Suite 4000, Columbus, Ohio 43212, USA
27
Dingemanse JG, Goedegebure A. The Important Role of Contextual Information in Speech Perception in Cochlear Implant Users and Its Consequences in Speech Tests. Trends Hear 2019; 23:2331216519838672. [PMID: 30991904 PMCID: PMC6472157 DOI: 10.1177/2331216519838672] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
Abstract
This study investigated the role of contextual information in speech intelligibility, the influence of verbal working memory on the use of contextual information, and the suitability of an ecologically valid sentence test containing contextual information, compared with a CNC (Consonant-Nucleus-Consonant) word test, in cochlear implant (CI) users. Speech intelligibility performance was assessed in 50 postlingual adult CI users on sentence lists and on CNC word lists. Results were compared with a normal-hearing (NH) group. The influence of contextual information was calculated from three different context models. Working memory capacity was measured with a Reading Span Test. CI recipients made significantly more use of contextual information in recognition of CNC words and sentences than NH listeners. Their use of contextual information in sentences was related to verbal working memory capacity but not to age, indicating that the ability to use context is dependent on cognitive abilities, regardless of age. The presence of context in sentences enhanced the sensitivity to differences in sensory bottom-up information but also increased the risk of a ceiling effect. A sentence test appeared to be suitable in CI users if word scoring is used and noise is added for the best performers.
Affiliation(s)
- J. Gertjan Dingemanse
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus University Medical Center, Rotterdam, the Netherlands
- André Goedegebure
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus University Medical Center, Rotterdam, the Netherlands
28
O'Neill ER, Kreft HA, Oxenham AJ. Cognitive factors contribute to speech perception in cochlear-implant users and age-matched normal-hearing listeners under vocoded conditions. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2019; 146:195. [PMID: 31370651 PMCID: PMC6637026 DOI: 10.1121/1.5116009] [Citation(s) in RCA: 37] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
This study examined the contribution of perceptual and cognitive factors to speech-perception abilities in cochlear-implant (CI) users. Thirty CI users were tested on word intelligibility in sentences with and without semantic context, presented in quiet and in noise. Performance was compared with measures of spectral-ripple detection and discrimination, thought to reflect peripheral processing, as well as with cognitive measures of working memory and non-verbal intelligence. Thirty age-matched and thirty younger normal-hearing (NH) adults also participated, listening via tone-excited vocoders, adjusted to produce mean performance for speech in noise comparable to that of the CI group. Results suggest that CI users may rely more heavily on semantic context than younger or older NH listeners, and that non-auditory working memory explains significant variance in the CI and age-matched NH groups. Between-subject variability in spectral-ripple detection thresholds was similar across groups, despite the spectral resolution for all NH listeners being limited by the same vocoder, whereas speech perception scores were more variable between CI users than between NH listeners. The results highlight the potential importance of central factors in explaining individual differences in CI users and question the extent to which standard measures of spectral resolution in CIs reflect purely peripheral processing.
Affiliation(s)
- Erin R O'Neill
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Heather A Kreft
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
29
Winn MB, Moore AN. Pupillometry Reveals That Context Benefit in Speech Perception Can Be Disrupted by Later-Occurring Sounds, Especially in Listeners With Cochlear Implants. Trends Hear 2019; 22:2331216518808962. [PMID: 30375282 PMCID: PMC6207967 DOI: 10.1177/2331216518808962] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
Contextual cues can be used to improve speech recognition, especially for people with hearing impairment. However, previous work has suggested that when the auditory signal is degraded, context might be used more slowly than when the signal is clear. This potentially puts the hearing-impaired listener in a dilemma of continuing to process the last sentence when the next sentence has already begun. This study measured the time course of the benefit of context using pupillary responses to high- and low-context sentences that were followed by silence or various auditory distractors (babble noise, ignored digits, or attended digits). Participants were listeners with cochlear implants or normal hearing using a 12-channel noise vocoder. Context-related differences in pupil dilation were greater for normal hearing than for cochlear implant listeners, even when scaled for differences in pupil reactivity. The benefit of context was systematically reduced for both groups by the presence of the later-occurring sounds, including virtually complete negation when sentences were followed by another attended utterance. These results challenge how we interpret the benefit of context in experiments that present just one utterance at a time. If a listener uses context to “repair” part of a sentence, and later-occurring auditory stimuli interfere with that repair process, the benefit of context might not survive outside the idealized laboratory or clinical environment. Elevated listening effort in hearing-impaired listeners might therefore result not just from poor auditory encoding but also inefficient use of context and prolonged processing of misperceived utterances competing with perception of incoming speech.
Affiliation(s)
- Matthew B Winn
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- Ashley N Moore
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
30
Kaandorp MW, Smits C, Merkus P, Festen JM, Goverts ST. Lexical-Access Ability and Cognitive Predictors of Speech Recognition in Noise in Adult Cochlear Implant Users. Trends Hear 2019; 21:2331216517743887. [PMID: 29205095 PMCID: PMC5721962 DOI: 10.1177/2331216517743887] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023] Open
Abstract
Not all of the variance in speech-recognition performance of cochlear implant (CI) users can be explained by biographic and auditory factors. In normal-hearing listeners, linguistic and cognitive factors determine most of speech-in-noise performance. The current study explored specifically the influence of visually measured lexical-access ability compared with other cognitive factors on speech recognition of 24 postlingually deafened CI users. Speech-recognition performance was measured with monosyllables in quiet (consonant-vowel-consonant [CVC]), sentences-in-noise (SIN), and digit-triplets in noise (DIN). In addition to a composite variable of lexical-access ability (LA), measured with a lexical-decision test (LDT) and word-naming task, vocabulary size, working-memory capacity (Reading Span test [RSpan]), and a visual analogue of the SIN test (text reception threshold test) were measured. The DIN test was used to correct for auditory factors in SIN thresholds by taking the difference between SIN and DIN: SRTdiff. Correlation analyses revealed that duration of hearing loss (dHL) was related to SIN thresholds. Better working-memory capacity was related to SIN and SRTdiff scores. LDT reaction time was positively correlated with SRTdiff scores. No significant relationships were found for CVC or DIN scores with the predictor variables. Regression analyses showed that together with dHL, RSpan explained 55% of the variance in SIN thresholds. When controlling for auditory performance, LA, LDT, and RSpan separately explained, together with dHL, respectively 37%, 36%, and 46% of the variance in SRTdiff outcome. The results suggest that poor verbal working-memory capacity and to a lesser extent poor lexical-access ability limit speech-recognition ability in listeners with a CI.
Affiliation(s)
- Marre W Kaandorp
- Department of Otolaryngology-Head and Neck Surgery, Section Ear & Hearing and EMGO Institute for Health and Care Research, VU University Medical Center, Amsterdam, The Netherlands
- Cas Smits
- Department of Otolaryngology-Head and Neck Surgery, Section Ear & Hearing and EMGO Institute for Health and Care Research, VU University Medical Center, Amsterdam, The Netherlands
- Paul Merkus
- Department of Otolaryngology-Head and Neck Surgery, Section Ear & Hearing and EMGO Institute for Health and Care Research, VU University Medical Center, Amsterdam, The Netherlands
- Joost M Festen
- Department of Otolaryngology-Head and Neck Surgery, Section Ear & Hearing and EMGO Institute for Health and Care Research, VU University Medical Center, Amsterdam, The Netherlands
- S Theo Goverts
- Department of Otolaryngology-Head and Neck Surgery, Section Ear & Hearing and EMGO Institute for Health and Care Research, VU University Medical Center, Amsterdam, The Netherlands
31
Wagner AE, Nagels L, Toffanin P, Opie JM, Başkent D. Individual Variations in Effort: Assessing Pupillometry for the Hearing Impaired. Trends Hear 2019; 23:2331216519845596. [PMID: 31131729 PMCID: PMC6537294 DOI: 10.1177/2331216519845596] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2018] [Revised: 03/19/2019] [Accepted: 03/25/2019] [Indexed: 12/20/2022] Open
Abstract
Assessing effort in speech comprehension for hearing-impaired (HI) listeners is important, as effortful processing of speech can limit their hearing rehabilitation. We examined the measure of pupil dilation in its capacity to accommodate the heterogeneity that is present within clinical populations by studying lexical access in users with sensorineural hearing loss, who perceive speech via cochlear implants (CIs). We compared the pupillary responses of 15 experienced CI users and 14 age-matched normal-hearing (NH) controls during auditory lexical decision. A growth curve analysis was applied to compare the responses between the groups. NH listeners showed a coherent pattern of pupil dilation that reflects the task demands of the experimental manipulation and a homogenous time course of dilation. CI listeners showed more variability in the morphology of pupil dilation curves, potentially reflecting variable sources of effort across individuals. In follow-up analyses, we examined how speech perception, a task that relies on multiple stages of perceptual analyses, poses multiple sources of increased effort for HI listeners, wherefore we might not be measuring the same source of effort for HI as for NH listeners. We argue that interindividual variability among HI listeners can be clinically meaningful in attesting not only the magnitude but also the locus of increased effort. The understanding of individual variations in effort requires experimental paradigms that (a) differentiate the task demands during speech comprehension, (b) capture pupil dilation in its time course per individual listeners, and (c) investigate the range of individual variability present within clinical and NH populations.
Affiliation(s)
- Anita E. Wagner
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Graduate School of Medical Sciences, School of Behavioral and Cognitive Neuroscience, University of Groningen, the Netherlands
- Leanne Nagels
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Center for Language and Cognition Groningen, University of Groningen, the Netherlands
- Paolo Toffanin
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Graduate School of Medical Sciences, School of Behavioral and Cognitive Neuroscience, University of Groningen, the Netherlands