1
Fama ME, McCall JD, DeMarco AT, Turkeltaub PE. Evidence from aphasia suggests a bidirectional relationship between inner speech and executive function. Neuropsychologia 2024;204:108997. PMID: 39251107. DOI: 10.1016/j.neuropsychologia.2024.108997.
Abstract
Research over the past several decades has revealed that non-linguistic cognitive impairments can appear alongside language deficits in individuals with aphasia. One vulnerable cognitive domain is executive function, an umbrella term for the higher-level cognitive processes that allow us to direct our behavior towards a goal. Studies in healthy adults reveal that executive function abilities are supported by inner speech, the ability to use language silently in one's head. Therefore, inner speech may mediate the connection between language and executive function deficits in individuals with aphasia. Here, we investigated whether inner speech ability may link language and cognitive impairments in 59 adults with chronic, post-stroke aphasia. We used two approaches to measure inner speech: one based on internal retrieval of words and one based on internal retrieval plus silent manipulation of the retrieved phonological forms. Then, we examined relationships between these two approaches to measuring inner speech and five aspects of executive function ability: response inhibition, conflict monitoring/resolution, general task-switching ability, phonological control, and semantic control. We also looked for dissociations between inner speech ability and executive function ability. Our results show tentative relationships between inner speech (across multiple measurement approaches) and all aspects of executive function except for response inhibition. We also found evidence for a double dissociation: many participants show intact executive function despite poor inner speech, and vice versa, so neither process is strictly reliant on the other. We suggest that this work provides preliminary evidence of a bidirectional relationship between inner speech and executive function: inner speech supports some aspects of executive function via internal self-cueing and certain aspects of executive function support performance on complex inner speech tasks.
Affiliation(s)
- Mackenzie E Fama
- Department of Speech, Language, and Hearing Sciences, The George Washington University, Washington, DC, USA
- Joshua D McCall
- Department of Neurology, Georgetown University Medical Center, Washington, DC, USA
- Andrew T DeMarco
- Department of Neurology, Georgetown University Medical Center, Washington, DC, USA; Department of Rehabilitation Medicine, Georgetown University Medical Center, Washington, DC, USA
- Peter E Turkeltaub
- Department of Neurology, Georgetown University Medical Center, Washington, DC, USA; Department of Rehabilitation Medicine, Georgetown University Medical Center, Washington, DC, USA; Research Division, MedStar National Rehabilitation Hospital, Washington, DC, USA
2
Nittrouer S. How Hearing Loss and Cochlear Implantation Affect Verbal Working Memory: Evidence From Adolescents. J Speech Lang Hear Res 2024;67:1850-1867. PMID: 38713817. PMCID: PMC11192562. DOI: 10.1044/2024_jslhr-23-00446.
Abstract
PURPOSE Verbal working memory is poorer for children with hearing loss than for peers with normal hearing (NH), even with cochlear implantation and early intervention. Poor verbal working memory can affect academic performance, especially in higher grades, making this deficit a significant problem. This study examined the stability of verbal working memory across middle childhood, tested working memory in adolescents with NH or cochlear implants (CIs), explored whether signal enhancement can improve verbal working memory, and tested two hypotheses proposed to explain the poor verbal working memory of children with hearing loss: (a) diminished auditory experience directly affects executive functions, including working memory; (b) degraded auditory inputs inhibit children's abilities to recover the phonological structure needed for encoding verbal material into storage. DESIGN Fourteen-year-olds served as subjects: 55 with NH and 52 with CIs. Immediate serial recall tasks were used to assess working memory. Stimuli consisted of nonverbal, spatial stimuli and four kinds of verbal, acoustic stimuli: nonrhyming words, rhyming words, and nonrhyming words with two kinds of signal enhancement (audiovisual and indexical). Analyses examined (a) stability of verbal working memory across middle childhood, (b) differences in verbal and nonverbal working memory, (c) effects of signal enhancement on recall, (d) phonological processing abilities, and (e) the source of the diminished verbal working memory in adolescents with cochlear implants. RESULTS Verbal working memory remained stable across middle childhood. Adolescents across groups performed similarly for nonverbal stimuli, but those with CIs displayed poorer recall accuracy for verbal stimuli; signal enhancement did not improve recall. Poor phonological sensitivity largely accounted for the group effect. CONCLUSIONS The central executive for working memory is not affected by hearing loss or cochlear implantation. Instead, the phonological deficit faced by adolescents with CIs degrades the representation in storage, and augmenting the signal does not help.
Affiliation(s)
- Susan Nittrouer
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville
3
Chen Y, Wang S, Yang L, Liu Y, Fu X, Wang Y, Zhang X, Wang S. Features of the speech processing network in post- and prelingually deaf cochlear implant users. Cereb Cortex 2024;34:bhad417. PMID: 38163443. DOI: 10.1093/cercor/bhad417.
Abstract
The onset of hearing loss can lead to altered brain structure and function, and hearing restoration may in turn drive distinct cortical reorganization. A differential pattern of functional remodeling has been observed between post- and prelingual cochlear implant users, but it remains unclear how their speech processing networks are reorganized after cochlear implantation. To explore the impact of language acquisition and hearing restoration on speech perception in cochlear implant users, we conducted assessments of brain activation, functional connectivity, and graph theory-based analysis using functional near-infrared spectroscopy. We examined the effects of speech-in-noise stimuli on three groups: postlingual cochlear implant users (n = 12), prelingual cochlear implant users (n = 10), and age-matched hearing controls (HC) (n = 22). Auditory-related areas in cochlear implant users showed lower activation compared with the HC group. Wernicke's area and Broca's area demonstrated differences in network attributes within the speech processing networks of post- and prelingual cochlear implant users. In addition, cochlear implant users maintained high efficiency of the speech processing network when processing speech information. Taken together, our results characterize the speech processing networks of post- and prelingual cochlear implant users in varying noise environments and provide new insights into how implantation modes affect remodeling of the speech processing functional networks.
Affiliation(s)
- Younuo Chen
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing 100005, China
- Songjian Wang
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing 100005, China
- Liu Yang
- School of Biomedical Engineering, Capital Medical University, No. 10, Xitoutiao, YouAnMen, Fengtai District, Beijing 100069, China
- Yi Liu
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing 100005, China
- Xinxing Fu
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing 100005, China
- Yuan Wang
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing 100005, China
- Xu Zhang
- School of Biomedical Engineering, Capital Medical University, No. 10, Xitoutiao, YouAnMen, Fengtai District, Beijing 100069, China
- Shuo Wang
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing 100005, China
4
Huang H, Ricketts TA, Hornsby BWY, Picou EM. Effects of Critical Distance and Reverberation on Listening Effort in Adults. J Speech Lang Hear Res 2022;65:4837-4851. PMID: 36351258. DOI: 10.1044/2022_jslhr-22-00109.
Abstract
PURPOSE Mixed historical data on how listening effort is affected by reverberation and listener-to-speaker distance challenge existing models of listening effort. This study investigated the effects of reverberation and listener-to-speaker distance on behavioral and subjective measures of listening effort (a) when listening at a fixed signal-to-noise ratio (SNR) and (b) at SNRs manipulated so that word recognition would be comparable across reverberation times and listening distances. It was expected that increased reverberation would increase listening effort, but only when listening outside the critical distance. METHOD Nineteen adults (21-40 years) with no hearing loss completed a dual-task paradigm. The primary task was word recognition and the secondary task was timed word categorization; response times indexed behavioral listening effort. Additionally, participants provided subjective ratings in each condition. Testing was completed at two reverberation levels (moderate and high; RT30 = 469 and 1,223 ms, respectively) and at two listener-to-speaker distances (inside and outside the critical distance for the test room; 1.25 and 4 m, respectively). RESULTS Increased reverberation and listening distance worsened word recognition performance and increased both behavioral and subjective listening effort. The effect of reverberation was exacerbated when listeners were outside the critical distance. The subjective experience of listening effort persisted even when word recognition was comparable across conditions. CONCLUSIONS Longer reverberation times and listening outside the room's critical distance negatively affected behavioral and subjective listening effort. This study extends understanding of listening effort in reverberant rooms by highlighting the effect of the listener's position relative to the room's critical distance.
Affiliation(s)
- Haiping Huang
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Todd A Ricketts
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Benjamin W Y Hornsby
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Erin M Picou
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
5
Völter C, Oberländer K, Haubitz I, Carroll R, Dazert S, Thomas JP. Poor Performer: A Distinct Entity in Cochlear Implant Users? Audiol Neurootol 2022;27:356-367. PMID: 35533653. PMCID: PMC9533457. DOI: 10.1159/000524107.
Abstract
INTRODUCTION Several factors are known to influence speech perception in cochlear implant (CI) users, but the underlying mechanisms have not yet been fully clarified. Although many CI users achieve a high level of speech perception, a small percentage of patients do not benefit from the CI, or benefit only slightly (poor performers, PP). In a previous study, PP showed significantly poorer results on nonauditory cognitive and linguistic tests than CI users with a very high level of speech understanding (star performers, SP). Here we investigate whether PP also differ from CI users with average performance (average performers, AP) in cognitive and linguistic abilities. METHODS Seventeen adult postlingually deafened CI users with speech perception scores in quiet of 55 (9.32)% (AP) on the German Freiburg monosyllabic speech test at 65 dB underwent neurocognitive testing (attention, working memory, short- and long-term memory, verbal fluency, inhibition) and linguistic testing (word retrieval, lexical decision, phonological input lexicon). The results were compared to those of 15 PP (speech perception score of 15 [11.80]%) and 19 SP (speech perception score of 80 [4.85]%). U tests and discriminant analyses were used for statistical analysis. RESULTS Significant differences between PP and AP were observed on linguistic tests: Rapid Automatized Naming (RAN: p = 0.0026), lexical decision (LexDec: p = 0.026), phonological input lexicon (LEMO: p = 0.0085), and understanding of incomplete words (TRT: p = 0.0024). AP also had significantly better neurocognitive results than PP in the domains of attention (M3: p = 0.009) and working memory (OSPAN: p = 0.041; RST: p = 0.015), but not in delayed recall (p = 0.22), verbal fluency (p = 0.084), or inhibition (Flanker: p = 0.35). In contrast, no differences were found between AP and SP. Based on the TRT and the RAN, AP and PP could be separated with 100% accuracy. DISCUSSION The results indicate that PP constitute a distinct entity of CI users that differs even in nonauditory abilities from CI users with average speech perception, especially with regard to rapid word retrieval, whether due to reduced phonological abilities or limited storage. Further studies should investigate whether improving word retrieval through phonological and semantic training leads to better speech perception in these CI users.
Affiliation(s)
- Christiane Völter
- Department of Otorhinolaryngology, Head and Neck Surgery, Cochlear Implant Center Ruhrgebiet, St Elisabeth-Hospital, Ruhr University Bochum, Bochum, Germany
- Kirsten Oberländer
- Department of Otorhinolaryngology, Head and Neck Surgery, Cochlear Implant Center Ruhrgebiet, St Elisabeth-Hospital, Ruhr University Bochum, Bochum, Germany
- Imme Haubitz
- Department of Otorhinolaryngology, Head and Neck Surgery, Cochlear Implant Center Ruhrgebiet, St Elisabeth-Hospital, Ruhr University Bochum, Bochum, Germany
- Rebecca Carroll
- Institute of English and American Studies, Technical University Braunschweig, Braunschweig, Germany
- Stefan Dazert
- Department of Otorhinolaryngology, Head and Neck Surgery, Cochlear Implant Center Ruhrgebiet, St Elisabeth-Hospital, Ruhr University Bochum, Bochum, Germany
- Jan Peter Thomas
- Department of Otorhinolaryngology, Head and Neck Surgery, St-Johannes-Hospital, Dortmund, Germany
6
Cogmed Training Does Not Generalize to Real-World Benefits for Adult Hearing Aid Users: Results of a Blinded, Active-Controlled Randomized Trial. Ear Hear 2021;43:741-763. PMID: 34524150. PMCID: PMC9007089. DOI: 10.1097/aud.0000000000001096.
Abstract
Objectives: Performance on working memory tasks is positively associated with speech-in-noise perception performance, particularly where auditory inputs are degraded. It has been suggested that interventions designed to improve working memory capacity may improve domain-general working memory performance for people with hearing loss, to the benefit of their real-world listening. We examined whether a 5-week training program that primarily targets the storage component of working memory (Cogmed RM, adaptive) could improve cognition, speech-in-noise perception, and self-reported hearing in a randomized controlled trial of adult hearing aid users with mild to moderate hearing loss, compared with an active control (Cogmed RM, nonadaptive) group of adults from the same population. Design: A preregistered randomized controlled trial of 57 adult hearing aid users (n = 27 experimental, n = 30 active control), recruited from a dedicated database of research volunteers, examined on-task learning and generalized improvements in measures of trained and untrained cognition, untrained speech-in-noise perception, and self-reported hearing abilities, pre- to post-training. Participants and the outcome assessor were both blinded to intervention allocation. Retention of training-related improvements was examined at a 6-month follow-up assessment. Results: Per-protocol analyses showed improvements in trained tasks (Cogmed Index Improvement) that transferred to improvements in a trained working memory task tested outside of the training software (Backward Digit Span) and a small improvement in self-reported hearing ability (Glasgow Hearing Aid Benefit Profile, Initial Disability subscale). Both of these improvements were maintained 6 months post-training. There was no transfer of learning to untrained measures of cognition (working memory or attention), speech-in-noise perception, or self-reported hearing in everyday life. An assessment of individual differences showed that participants with better baseline working memory performance achieved greater learning on the trained tasks. Post-training performance for untrained outcomes was largely predicted by individuals' pretraining performance on those measures. Conclusions: Despite significant on-task learning, generalized improvements of working memory training in this trial were limited to (a) improvements in a trained working memory task tested outside of the training software and (b) a small improvement in self-reported hearing ability for those in the experimental group, compared with active controls. We found no evidence that training which primarily targets storage aspects of working memory can produce domain-general improvements that benefit everyday communication for adult hearing aid users. These findings are consistent with a significant body of evidence showing that Cogmed training only improves performance on tasks that resemble Cogmed training. Future research should focus on the benefits of interventions that enhance cognition in the context in which it is employed in everyday communication, such as training that targets dynamic aspects of cognitive control important for successful speech-in-noise perception.
7
Unger N, Heim S, Hilger DI, Bludau S, Pieperhoff P, Cichon S, Amunts K, Mühleisen TW. Identification of Phonology-Related Genes and Functional Characterization of Broca's and Wernicke's Regions in Language and Learning Disorders. Front Neurosci 2021;15:680762. PMID: 34539327. PMCID: PMC8446646. DOI: 10.3389/fnins.2021.680762.
Abstract
Impaired phonological processing is a leading symptom of multifactorial language and learning disorders, suggesting a common biological basis. Here we evaluated studies of dyslexia, dyscalculia, specific language impairment (SLI), and the logopenic variant of primary progressive aphasia (lvPPA), searching for shared risk genes in Broca's and Wernicke's regions, which are key for phonological processing within the complex language network. The "phonology-related genes" identified from the literature were functionally characterized using atlas-based expression mapping (JuGEx) and gene set enrichment. From 643 publications spanning the last decade, we extracted 21 candidate genes, of which 13 overlapped between dyslexia and SLI, six between dyslexia and dyscalculia, and two among dyslexia, dyscalculia, and SLI. No overlap was observed between the childhood disorders and the late-onset lvPPA, which often shows symptoms of learning disorders earlier in life. Multiple genes were enriched in Gene Ontology terms related to learning (CNTNAP2, CYFIP1, DCDC2, DNAAF4, FOXP2) and neuronal development (CCDC136, CNTNAP2, CYFIP1, DCDC2, KIAA0319, RBFOX2, ROBO1). Twelve genes showed above-average expression across both regions, indicating moderate-to-high gene activity in the investigated cortical part of the language network. Of these, three genes were differentially expressed, suggesting potential regional specializations: ATP2C2 was upregulated in Broca's region, while DNAAF4 and FOXP2 were upregulated in Wernicke's region. ATP2C2 encodes a magnesium-dependent calcium transporter, which fits with reports of disturbed calcium and magnesium levels in dyslexia and other communication disorders. DNAAF4 (formerly known as DYX1C1) is involved in neuronal migration, supporting the hypothesis of disturbed migration in dyslexia. FOXP2 is a transcription factor that regulates a number of genes involved in the development of speech and language. Overall, our interdisciplinary, multi-tiered approach provides evidence that genetic and transcriptional variation of ATP2C2, DNAAF4, and FOXP2 may play a role in physiological and pathological aspects of phonological processing.
Affiliation(s)
- Nina Unger
- Cécile and Oskar Vogt Institute for Brain Research, Medical Faculty, University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Department of Neurology, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Stefan Heim
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- JARA-Brain, Jülich-Aachen Research Alliance, Jülich, Germany
- Dominique I. Hilger
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Sebastian Bludau
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Peter Pieperhoff
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Sven Cichon
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Department of Biomedicine, University of Basel, Basel, Switzerland
- Institute of Medical Genetics and Pathology, University Hospital Basel, Basel, Switzerland
- Katrin Amunts
- Cécile and Oskar Vogt Institute for Brain Research, Medical Faculty, University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- JARA-Brain, Jülich-Aachen Research Alliance, Jülich, Germany
- Thomas W. Mühleisen
- Cécile and Oskar Vogt Institute for Brain Research, Medical Faculty, University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Department of Biomedicine, University of Basel, Basel, Switzerland
8
Kwak C, Han W. Age-Related Difficulty of Listening Effort in Elderly. Int J Environ Res Public Health 2021;18:8845. PMID: 34444593. PMCID: PMC8391845. DOI: 10.3390/ijerph18168845.
Abstract
The present study identifies the combined effects of aging and listening-environment factors, such as directionality, type of stimulus, and the presence of background noise. A total of 50 listeners with normal hearing (25 older adults and 25 young adults) participated in a series of tasks. A detection task using tone and speech stimuli and a speech segregation task with two levels of background noise were conducted while sound was randomly presented via eight directional speakers. After completing each task, a subjective questionnaire using a seven-point Likert scale was administered to measure the subjects' listening effort in terms of speech, spatial, and hearing quality. As expected, the amount of listening effort required in all the experiments was significantly higher for the older group than for their young counterparts. The effects of aging and stimulus type (tone vs. speech) also showed different patterns of listening effort for older and younger adults. The combined interaction of aging, directionality, and presence of background noise led to a significantly different amount of listening effort for the older group (90.1%) compared to the younger group (53.1%), even in the same listening situation. Taken together, these results indicate that weak tone detection at high frequencies occurs in the elderly population, but that the elderly can compensate when presented with speech sounds containing broadband spectral energy. We suggest that a speech-based warning signal, rather than a single tone, is more advantageous for the elderly in public environments. It is also better, when conversing with the elderly, to avoid situations in which noise from behind can interfere.
Affiliation(s)
- Chanbeom Kwak
- Laboratory of Hearing and Technology, Research Institute of Audiology and Speech Pathology, College of Natural Sciences, Hallym University, Chuncheon 24252, Korea
- Division of Speech Pathology and Audiology, College of Natural Sciences, Hallym University, Chuncheon 24252, Korea
- Woojae Han
- Laboratory of Hearing and Technology, Research Institute of Audiology and Speech Pathology, College of Natural Sciences, Hallym University, Chuncheon 24252, Korea
- Division of Speech Pathology and Audiology, College of Natural Sciences, Hallym University, Chuncheon 24252, Korea
- Correspondence: ; Tel.: +82-33-248-2216
9
Abstract
INTRODUCTION Despite the substantial benefits of cochlear implantation (CI), there is high variability in speech recognition, the reasons for which are not fully understood. The group of low-performing CI users is especially under-researched. Because of limited perceptual quality, top-down mechanisms play an important role in decoding the speech signal transmitted by the CI; differences in cognitive functioning and linguistic skills may therefore explain speech outcomes in these CI subjects. MATERIAL AND METHODS Fifteen postlingually deaf CI recipients with a maximum speech perception of 30% on the Freiburg monosyllabic test (low performers, LP) underwent visually presented neurocognitive and linguistic test batteries assessing attention, memory, inhibition, working memory, lexical access, phonological input, and automatic naming. Nineteen high performers (HP) with a speech perception of more than 70% were included as a control group. Pairwise comparison of the two extreme groups and discriminant analysis were carried out. RESULTS Significant differences were found between LP and HP in phonological input lexicon and word retrieval (p = 0.0039). HP were faster in lexical access (p = 0.017) and distinguished more reliably between non-existing and existing words (p = 0.0021). Furthermore, HP outperformed LP on neurocognitive subtests, most prominently in attention (p = 0.003). LP and HP were discriminated primarily by linguistic performance and to a smaller extent by cognitive functioning (canonical r = 0.68, p = 0.0075). Poor rapid automatized naming of numbers discriminated LP from HP CI users 91.7% of the time. CONCLUSION Severe phonologically based deficits in fast automatic speech processing contribute significantly to distinguishing LP from HP CI users. Cognitive functions might partially help to overcome these difficulties.
10
Tamati TN, Vasil KJ, Kronenberger WG, Pisoni DB, Moberly AC, Ray C. Word and Nonword Reading Efficiency in Postlingually Deafened Adult Cochlear Implant Users. Otol Neurotol 2021;42:e272-e278. PMID: 33306660. PMCID: PMC7874984. DOI: 10.1097/mao.0000000000002925.
Abstract
HYPOTHESIS This study tested the hypotheses that 1) experienced adult cochlear implant (CI) users demonstrate poorer reading efficiency relative to normal-hearing controls, 2) reading efficiency reflects basic, underlying neurocognitive skills, and 3) reading efficiency relates to speech recognition outcomes in CI users. BACKGROUND Weak phonological processing skills have been associated with poor speech recognition outcomes in postlingually deaf adult CI users. Phonological processing can be captured in nonauditory measures of reading efficiency, which may have wide use in patients with hearing loss. This study examined reading efficiency in adult CI users and its relation to speech recognition outcomes. METHODS Forty-eight experienced, postlingually deaf adult CI users (ECIs) and 43 older age-matched peers with age-normal hearing (ONHs) completed the Test of Word Reading Efficiency (TOWRE-2), which measures word and nonword reading efficiency. Participants also completed a battery of nonauditory neurocognitive measures and auditory sentence recognition tasks. RESULTS ECIs and ONHs did not differ in word (ECIs: M = 78.2, SD = 11.4; ONHs: M = 83.3, SD = 10.2) or nonword reading efficiency (ECIs: M = 42.0, SD = 11.2; ONHs: M = 43.7, SD = 10.3). For ECIs, both scores were related to untimed word reading with moderate to strong effect sizes (r = 0.43-0.69) but demonstrated differing relations with other nonauditory neurocognitive measures, with weak to moderate effect sizes (word: r = 0.11-0.44; nonword: r = -0.15 to -0.42). Word reading efficiency was moderately related to sentence recognition outcomes in ECIs (r = 0.36-0.40). CONCLUSION Findings suggest that postlingually deaf adult CI users demonstrate neither impaired word nor impaired nonword reading efficiency, and that these measures reflect different underlying mechanisms involved in language processing. The relation between sentence recognition and word reading efficiency, a measure of lexical access speed, suggests that this measure may be useful for explaining outcome variability in adult CI users.
Affiliation(s)
- Terrin N. Tamati
- Department of Otolaryngology—Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Kara J. Vasil
- Department of Otolaryngology—Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
- William G. Kronenberger
- Department of Otolaryngology—Head and Neck Surgery, DeVault Otologic Research Laboratory, Indianapolis
- David B. Pisoni
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana, USA
- Aaron C. Moberly
- Department of Otolaryngology—Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
- Christin Ray
- Department of Otolaryngology—Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
|
11
|
Rönnberg J, Holmer E, Rudner M. Cognitive Hearing Science: Three Memory Systems, Two Approaches, and the Ease of Language Understanding Model. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:359-370. [PMID: 33439747 DOI: 10.1044/2020_jslhr-20-00007] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Purpose The purpose of this study was to conceptualize the subtle balancing act between language input and prediction (cognitive priming of future input) to achieve understanding of communicated content. When understanding fails, reconstructive postdiction is initiated. Three memory systems play important roles: working memory (WM), episodic long-term memory (ELTM), and semantic long-term memory (SLTM). The axiom of the Ease of Language Understanding (ELU) model is that explicit WM resources are invoked by a mismatch between language input, in the form of rapid automatic multimodal binding of phonology, and multimodal phonological and lexical representations in SLTM. However, if there is a match between the rapid automatic multimodal binding of phonology output and SLTM/ELTM representations, language processing continues rapidly and implicitly. Method and Results In our first ELU approach, we focused on experimental manipulations of signal processing in hearing aids and background noise to cause a mismatch with LTM representations; both resulted in increased dependence on WM. Our second approach, the main one relevant for this review article, focuses on the relative effects of age-related hearing loss on the three memory systems. According to the ELU, WM is predicted to be frequently occupied with reconstruction of what was actually heard, resulting in relative disuse of phonological/lexical representations in the ELTM and SLTM systems. The prediction and results do not depend on test modality per se but rather on the particular memory system. This will be discussed further. Conclusions Given the literature on ELTM decline as a precursor of dementia, and the substantially increased risk of Alzheimer's disease associated with hearing loss over time, lowered ELTM due to hearing loss and disuse may be part of the causal chain linking hearing loss and dementia. Future ELU research will focus on this possibility.
Affiliation(s)
- Jerker Rönnberg
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Emil Holmer
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Mary Rudner
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
|
12
|
Williams A, Pulsifer M, Tissera K, Mankarious LA. Cognitive and Behavioral Functioning in Hearing-Impaired Children with and without Language Delay. Otolaryngol Head Neck Surg 2020; 163:588-590. [PMID: 32284003 DOI: 10.1177/0194599820915741] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
Poor language development in patients with sensorineural hearing loss (SNHL) may be related to an auditory deficit and/or other neurologic condition that influences the ability to communicate. A retrospective chart review of children (mean age = 4.0 years) with congenital, bilateral SNHL was performed to assess for linguistic and nonlinguistic neurodevelopmental differences between those who were language-impaired (LI) versus non-language-impaired (NLI). Language, neurodevelopmental functioning, and behavior were assessed. Twenty-two patients were identified: 12 were LI and 10 were NLI. Average pure-tone thresholds and nonverbal intelligence were not different between the language groups, but the LI group demonstrated significantly lower median overall adaptive skills, personal living skills, and motor skills. Behavioral dysregulation was significantly higher in the LI versus NLI group (58% vs 10%; P = .031), although the median neurodevelopmental scores did not differ significantly. These findings introduce the possibility that nonlinguistic processing deficit(s) may be confounding the ability to develop language.
Affiliation(s)
- Alisha Williams
- Massachusetts Eye and Ear Infirmary, Pediatric Otolaryngology, Boston, Massachusetts, USA
- Margaret Pulsifer
- Massachusetts Eye and Ear Infirmary, Pediatric Otolaryngology, Boston, Massachusetts, USA
- Massachusetts General Hospital, Boston, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Kristin Tissera
- Massachusetts Eye and Ear Infirmary, Pediatric Otolaryngology, Boston, Massachusetts, USA
- Leila A Mankarious
- Massachusetts Eye and Ear Infirmary, Pediatric Otolaryngology, Boston, Massachusetts, USA
- Massachusetts General Hospital, Boston, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
|
13
|
Loughrey DG, Pakhomov SVS, Lawlor BA. Altered verbal fluency processes in older adults with age-related hearing loss. Exp Gerontol 2019; 130:110794. [PMID: 31790801 DOI: 10.1016/j.exger.2019.110794] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2019] [Revised: 10/27/2019] [Accepted: 11/24/2019] [Indexed: 11/28/2022]
Abstract
Epidemiological studies have linked age-related hearing loss (ARHL) with an increased risk of neurocognitive decline. Difficulties in speech perception, with subsequent changes in brain morphometry including regions important for lexical-semantic memory, are thought to be a possible mechanism for this relationship. This study investigated differences in automatic and executive lexical-semantic processes on verbal fluency tasks in individuals with acquired hearing loss. The primary outcomes were indices of automatic (clustering/word retrieval at the start of the task) and executive (switching/word retrieval after the start of the task) processes from semantic and phonemic fluency tasks. To extract indices of clustering and switching, we used both manual and computerised methods. There were no differences between the hearing loss and control groups on indices of executive fluency processes or on any indices from the semantic fluency task. The hearing loss group demonstrated weaker automatic processes on the phonemic fluency task. Further research into differences in lexical-semantic processes with ARHL is warranted.
Affiliation(s)
- David G Loughrey
- Global Brain Health Institute, Trinity College Dublin, Ireland
- Global Brain Health Institute, University of California, San Francisco, USA
- Trinity College Institute of Neuroscience, Trinity College Dublin
- Brian A Lawlor
- Global Brain Health Institute, Trinity College Dublin, Ireland
- Global Brain Health Institute, University of California, San Francisco, USA
- Mercer's Institute for Successful Ageing, St James Hospital, Dublin, Ireland
|
14
|
Moberly AC, Reed J. Making Sense of Sentences: Top-Down Processing of Speech by Adult Cochlear Implant Users. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2019; 62:2895-2905. [PMID: 31330118 PMCID: PMC6802905 DOI: 10.1044/2019_jslhr-h-18-0472] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/30/2018] [Revised: 03/20/2019] [Accepted: 04/12/2019] [Indexed: 05/03/2023]
Abstract
Purpose Speech recognition relies upon a listener's successful pairing of acoustic-phonetic details from the bottom-up input with top-down linguistic processing of the incoming speech stream. When the speech is spectrally degraded, such as through a cochlear implant (CI), the role of top-down processing is poorly understood. This study explored the interactions of top-down processing, specifically the use of semantic context during sentence recognition, and the relative contributions of different neurocognitive functions during speech recognition in adult CI users. Method Data from 41 experienced adult CI users were collected and used in analyses. Participants were tested for recognition and immediate repetition of speech materials in the clear. They were asked to repeat 2 sets of sentence materials, 1 that was semantically meaningful and 1 that was syntactically appropriate but semantically anomalous. Participants also were tested on 4 visual measures of neurocognitive functioning to assess working memory capacity (Digit Span; Wechsler, 2004), speed of lexical access (Test of Word Reading Efficiency; Torgesen, Wagner, & Rashotte, 1999), inhibitory control (Stroop; Stroop, 1935), and nonverbal fluid reasoning (Raven's Progressive Matrices; Raven, 2000). Results Individual listeners' inhibitory control predicted recognition of meaningful sentences when controlling for performance on anomalous sentences, our proxy for the quality of the bottom-up input. Additionally, speed of lexical access and nonverbal reasoning predicted recognition of anomalous sentences. Conclusions Findings from this study identified inhibitory control as a potential mechanism at work when listeners make use of semantic context during sentence recognition. Moreover, speed of lexical access and nonverbal reasoning were associated with recognition of sentences that lacked semantic context. 
These results motivate the development of improved comprehensive rehabilitative approaches for adult patients with CIs to optimize use of top-down processing and underlying core neurocognitive functions.
Affiliation(s)
- Aaron C. Moberly
- Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus
- Jessa Reed
- Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus
|
15
|
Mattsson TS, Lind O, Follestad T, Grøndahl K, Wilson W, Nicholas J, Nordgård S, Andersson S. Electrophysiological characteristics in children with listening difficulties, with or without auditory processing disorder. Int J Audiol 2019; 58:704-716. [PMID: 31154863 DOI: 10.1080/14992027.2019.1621396] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
Abstract
Objective: To determine whether the auditory middle latency response (AMLR), auditory late latency response (ALLR), and auditory P300 are sensitive to auditory processing disorder (APD) and listening difficulties in children, and further to elucidate the level of neurobiological dysfunction in the central auditory nervous system. Design: Three-group, repeated-measures design. Study sample: Forty-six children aged 8-14 years were divided into three groups: children with reported listening difficulties fulfilling APD diagnostic criteria, children with reported listening difficulties not fulfilling APD diagnostic criteria, and normally hearing children. Results: AMLR Na latency and P300 latency and amplitude were sensitive to listening difficulties. No other auditory evoked potential (AEP) measures were sensitive to listening difficulties, and no AEP measures were sensitive to APD alone. Moderate correlations were observed between P300 latency and amplitude and the behavioural auditory processing measures of competing words, frequency patterns, duration patterns, and dichotic digits. Conclusions: Impaired thalamo-cortical (bottom-up) and neurocognitive (top-down) function may contribute to difficulties discriminating speech and non-speech sounds. Cognitive processes involved in conscious recognition, attention, and discrimination of the acoustic characteristics of the stimuli could contribute to listening difficulties in general, and to APD in particular.
Affiliation(s)
- Tone Stokkereit Mattsson
- Department of Otorhinolaryngology, Head and Neck Surgery, Ålesund Hospital, Ålesund, Norway
- Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology, Trondheim, Norway
- Ola Lind
- Department of Otorhinolaryngology, Head and Neck Surgery, Haukeland University Hospital, Bergen, Norway
- Turid Follestad
- Department of Public Health and General Practice, Norwegian University of Science and Technology, Trondheim, Norway
- Kjell Grøndahl
- Department of Clinical Engineering, Haukeland University Hospital, Bergen, Norway
- Wayne Wilson
- School of Health and Rehabilitation Sciences, The University of Queensland, Brisbane, Australia
- Jude Nicholas
- Statped National Service Center for Special Needs Education, Bergen, Norway
- Department of Occupational Medicine, Haukeland University Hospital, Bergen, Norway
- Ståle Nordgård
- Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology, Trondheim, Norway
- Department of Otorhinolaryngology, Head and Neck Surgery, St. Olavs University Hospital, Trondheim, Norway
- Stein Andersson
- Department of Psychology, University of Oslo, Oslo, Norway
|
16
|
Rudner M, Danielsson H, Lyxell B, Lunner T, Rönnberg J. Visual Rhyme Judgment in Adults With Mild-to-Severe Hearing Loss. Front Psychol 2019; 10:1149. [PMID: 31191388 PMCID: PMC6546845 DOI: 10.3389/fpsyg.2019.01149] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2018] [Accepted: 05/01/2019] [Indexed: 12/23/2022] Open
Abstract
Adults with poorer peripheral hearing have slower phonological processing speed measured using visual rhyme tasks, and it has been suggested that this is due to fading of phonological representations stored in long-term memory. Representations of both vowels and consonants are likely to be important for determining whether or not two printed words rhyme. However, it is not known whether the relation between phonological processing speed and hearing loss is specific to the lower frequency ranges that characterize vowels or the higher frequency ranges that characterize consonants. We tested the visual rhyme ability of 212 adults with hearing loss. As in previous studies, we found that rhyme judgments were slower and less accurate when there was a mismatch between phonological and orthographic information. A substantial portion of the variance in the speed of making correct rhyme judgments was explained by lexical access speed. Reading span, a measure of working memory, explained further variance in match but not mismatch conditions, and no additional variance was explained by auditory variables. This pattern of findings suggests possible reliance on a lexico-semantic word-matching strategy for solving the rhyme judgment task. Future work should investigate the relation between adoption of a lexico-semantic strategy during phonological processing tasks and hearing aid outcome.
Affiliation(s)
- Mary Rudner
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Henrik Danielsson
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Björn Lyxell
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Department of Special Needs Education, University of Oslo, Oslo, Norway
- Thomas Lunner
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Jerker Rönnberg
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
|
17
|
Working Memory and Extended High-Frequency Hearing in Adults: Diagnostic Predictors of Speech-in-Noise Perception. Ear Hear 2019; 40:458-467. [DOI: 10.1097/aud.0000000000000640] [Citation(s) in RCA: 37] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
|
18
|
Effects of Additional Low-Pass-Filtered Speech on Listening Effort for Noise-Band-Vocoded Speech in Quiet and in Noise. Ear Hear 2019; 40:3-17. [PMID: 29757801 PMCID: PMC6319586 DOI: 10.1097/aud.0000000000000587] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
Abstract
Objectives: Residual acoustic hearing in electric–acoustic stimulation (EAS) can benefit cochlear implant (CI) users through increased sound quality, improved speech intelligibility, and greater tolerance to noise. The goal of this study was to investigate whether the low-pass–filtered acoustic speech in simulated EAS can provide the additional benefit of reducing listening effort for the spectrotemporally degraded signal of noise-band–vocoded speech. Design: Listening effort was investigated using a dual-task paradigm as a behavioral measure and the NASA Task Load indeX as a subjective self-report measure. The primary task of the dual-task paradigm was identification of sentences presented in three experiments at three fixed intelligibility levels: near-ceiling, 50%, and 79% intelligibility, achieved by manipulating the presence and level of speech-shaped noise in the background. Listening effort for the primary intelligibility task was reflected in performance on the secondary, visual response-time task. Experimental speech processing conditions included monaural or binaural vocoder, with added low-pass–filtered speech (to simulate EAS) or without (to simulate CI). Results: In Experiment 1, in quiet with intelligibility near ceiling, additional low-pass–filtered speech reduced listening effort compared with binaural vocoder, in line with our expectations, although not compared with monaural vocoder. In Experiments 2 and 3, for speech in noise, added low-pass–filtered speech allowed the desired intelligibility levels to be reached at less favorable speech-to-noise ratios, as expected. Interestingly, this came without the cost of increased listening effort usually associated with poor speech-to-noise ratios; at 50% intelligibility, a reduction in listening effort was even observed on top of the increased tolerance to noise. The NASA Task Load indeX did not capture these differences. 
Conclusions: The dual-task results provide partial evidence for a potential decrease in listening effort as a result of adding low-frequency acoustic speech to noise-band–vocoded speech. Whether these findings translate to CI users with residual acoustic hearing will need to be addressed in future research because the quality and frequency range of low-frequency acoustic sound available to listeners with hearing loss may differ from our idealized simulations, and additional factors, such as advanced age and varying etiology, may also play a role.
|
19
|
Zhou X, Seghouane AK, Shah A, Innes-Brown H, Cross W, Litovsky R, McKay CM. Cortical Speech Processing in Postlingually Deaf Adult Cochlear Implant Users, as Revealed by Functional Near-Infrared Spectroscopy. Trends Hear 2019; 22:2331216518786850. [PMID: 30022732 PMCID: PMC6053859 DOI: 10.1177/2331216518786850] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
Abstract
An experiment was conducted to investigate the feasibility of using functional near-infrared spectroscopy (fNIRS) to image cortical activity in the language areas of cochlear implant (CI) users and to explore the association between the activity and their speech understanding ability. Using fNIRS, 15 experienced CI users and 14 normal-hearing participants were imaged while presented with either visual speech or auditory speech. Brain activation was measured from the prefrontal, temporal, and parietal lobe in both hemispheres, including the language-associated regions. In response to visual speech, the activation levels of CI users in an a priori region of interest (ROI)—the left superior temporal gyrus or sulcus—were negatively correlated with auditory speech understanding. This result suggests that increased cross-modal activity in the auditory cortex is predictive of poor auditory speech understanding. In another two ROIs, in which CI users showed significantly different mean activation levels in response to auditory speech compared with normal-hearing listeners, activation levels were significantly negatively correlated with CI users’ auditory speech understanding. These ROIs were located in the right anterior temporal lobe (including a portion of prefrontal lobe) and the left middle superior temporal lobe. In conclusion, fNIRS successfully revealed activation patterns in CI users associated with their auditory speech understanding.
Affiliation(s)
- Xin Zhou
- Bionics Institute of Australia, East Melbourne, Australia
- Department of Medical Bionics, University of Melbourne, Australia
- Abd-Krim Seghouane
- Department of Electrical and Electronic Engineering, University of Melbourne, Australia
- Adnan Shah
- Department of Electrical and Electronic Engineering, University of Melbourne, Australia
- Hamish Innes-Brown
- Bionics Institute of Australia, East Melbourne, Australia
- Department of Medical Bionics, University of Melbourne, Australia
- Will Cross
- Bionics Institute of Australia, East Melbourne, Australia
- Ruth Litovsky
- Waisman Center, University of Wisconsin-Madison, Wisconsin, USA
- Colette M McKay
- Bionics Institute of Australia, East Melbourne, Australia
- Department of Medical Bionics, University of Melbourne, Australia
|
20
|
Rönnberg J, Holmer E, Rudner M. Cognitive hearing science and ease of language understanding. Int J Audiol 2019; 58:247-261. [DOI: 10.1080/14992027.2018.1551631] [Citation(s) in RCA: 52] [Impact Index Per Article: 10.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/30/2023]
Affiliation(s)
- Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Emil Holmer
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Mary Rudner
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
|
21
|
Zekveld AA, Pronk M, Danielsson H, Rönnberg J. Reading Behind the Lines: The Factors Affecting the Text Reception Threshold in Hearing Aid Users. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2018; 61:762-775. [PMID: 29450534 DOI: 10.1044/2017_jslhr-h-17-0196] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/23/2017] [Accepted: 10/12/2017] [Indexed: 06/08/2023]
Abstract
PURPOSE The visual Text Reception Threshold (TRT) test (Zekveld et al., 2007) was designed to assess modality-general factors relevant for speech perception in noise. In the last decade, the test has been adopted in audiology labs worldwide. The first aim of this study was to examine which factors best predict interindividual differences in the TRT. The second aim was to assess the relationships between the TRT and the speech reception thresholds (SRTs) estimated in various conditions. METHOD First, we reviewed studies reporting relationships between the TRT and auditory and/or cognitive factors and formulated specific hypotheses regarding the TRT predictors. These hypotheses were tested using a prediction model applied to a rich data set of 180 hearing aid users. In separate association models, we tested the relationships between the TRT and the various SRTs and subjective hearing difficulties, while taking into account potential confounding variables. RESULTS The results of the prediction model indicate that the TRT is predicted by the ability to fill in missing words in incomplete sentences, by lexical access speed, and by working memory capacity. Furthermore, in line with previous studies, a moderate association between higher age, poorer pure-tone hearing acuity, and poorer TRTs was observed. Better TRTs were associated with better SRTs for correct perception of 50% of Hagerman matrix sentences in 4-talker babble, as well as with better subjective ratings of speech perception. Age and pure-tone hearing thresholds significantly confounded these associations. The associations of the TRT with SRTs estimated in other conditions and with subjective qualities of hearing were not statistically significant when adjusting for age and pure-tone average. 
CONCLUSIONS We conclude that the abilities tapped into by the TRT test include processes relevant for speeded lexical decision making when completing partly masked sentences and that these processes require working memory capacity. Furthermore, the TRT is associated with the SRT of hearing aid users as estimated in a challenging condition that includes informational masking and with experienced difficulties with speech perception in daily-life conditions. The current results underline the value of using the TRT test in studies involving speech perception and aid in the interpretation of findings acquired using the test.
Affiliation(s)
- Adriana A Zekveld
- Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping, Sweden
- Section Ear & Hearing, Department of Otolaryngology/Head & Neck Surgery, Amsterdam Public Health Research Institute, VU University Medical Center, the Netherlands
- Marieke Pronk
- Section Ear & Hearing, Department of Otolaryngology/Head & Neck Surgery, Amsterdam Public Health Research Institute, VU University Medical Center, the Netherlands
- Henrik Danielsson
- Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping, Sweden
- Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping, Sweden
|
22
|
Finke M, Strauß-Schier A, Kludt E, Büchner A, Illg A. Speech intelligibility and subjective benefit in single-sided deaf adults after cochlear implantation. Hear Res 2017; 348:112-119. [DOI: 10.1016/j.heares.2017.03.002] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/09/2016] [Revised: 02/21/2017] [Accepted: 03/01/2017] [Indexed: 12/18/2022]
|
23
|
Moberly AC, Harris MS, Boyce L, Nittrouer S. Speech Recognition in Adults With Cochlear Implants: The Effects of Working Memory, Phonological Sensitivity, and Aging. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2017; 60:1046-1061. [PMID: 28384805 PMCID: PMC5548076 DOI: 10.1044/2016_jslhr-h-16-0119] [Citation(s) in RCA: 46] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/25/2016] [Revised: 08/30/2016] [Accepted: 10/14/2016] [Indexed: 05/12/2023]
Abstract
Purpose Models of speech recognition suggest that "top-down" linguistic and cognitive functions, such as use of phonotactic constraints and working memory, facilitate recognition under conditions of degradation, such as in noise. The question addressed in this study was what happens to these functions when a listener who has experienced years of hearing loss obtains a cochlear implant. Method Thirty adults with cochlear implants and 30 age-matched controls with age-normal hearing underwent testing of verbal working memory using digit span and serial recall of words. Phonological capacities were assessed using a lexical decision task and nonword repetition. Recognition of words in sentences in speech-shaped noise was measured. Results Implant users had only slightly poorer working memory accuracy than did controls and only on serial recall of words; however, phonological sensitivity was highly impaired. Working memory did not facilitate speech recognition in noise for either group. Phonological sensitivity predicted sentence recognition for implant users but not for listeners with normal hearing. Conclusion Clinical speech recognition outcomes for adult implant users relate to the ability of these users to process phonological information. Results suggest that phonological capacities may serve as potential clinical targets through rehabilitative training. Such novel interventions may be particularly helpful for older adult implant users.
Affiliation(s)
- Aaron C. Moberly
- Department of Otolaryngology–Head and Neck Surgery, Wexner Medical Center, The Ohio State University, Columbus
- Michael S. Harris
- Department of Otolaryngology–Head and Neck Surgery, Wexner Medical Center, The Ohio State University, Columbus
- Lauren Boyce
- Department of Otolaryngology–Head and Neck Surgery, Wexner Medical Center, The Ohio State University, Columbus
- Susan Nittrouer
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville
|
24
|
Nittrouer S, Lowenstein JH, Wucinich T, Moberly AC. Verbal Working Memory in Older Adults: The Roles of Phonological Capacities and Processing Speed. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2016; 59:1520-1532. [PMID: 27936265 PMCID: PMC5399767 DOI: 10.1044/2016_jslhr-h-15-0404] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/23/2015] [Revised: 03/29/2016] [Accepted: 04/22/2016] [Indexed: 05/23/2023]
Abstract
PURPOSE This study examined the potential roles of phonological sensitivity and processing speed in age-related declines of verbal working memory. METHOD Twenty younger and 25 older adults with age-normal hearing participated. Two measures of verbal working memory were collected: digit span and serial recall of words. Processing speed was indexed using response times during those tasks. Three other measures were also obtained, assessing phonological awareness, processing, and recoding. RESULTS Forward and reverse digit spans were similar across groups. Accuracy on the serial recall task was poorer for older than for younger adults, and response times were slower. When response time served as a covariate, the age effect for accuracy was reduced. Phonological capacities were equivalent across groups, so they could not account for the age-group differences in verbal working memory. Nonetheless, when outcomes for only the older adults were considered, phonological awareness and processing speed explained significant proportions of variance in serial recall accuracy. CONCLUSION Slowing of processing abilities accounts for the primary trajectory of age-related declines in verbal working memory. However, individual differences in phonological capacities explain variability among individual older adults.
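The covariate step reported in this abstract (testing whether the age effect on recall accuracy shrinks once response time is controlled) can be sketched as a pair of regressions. The data below are simulated for illustration and are not the study's dataset; only the group sizes match the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
n_young, n_older = 20, 25                            # group sizes from the abstract
group = np.r_[np.zeros(n_young), np.ones(n_older)]   # 0 = younger, 1 = older

# Simulated data: older adults respond more slowly, and accuracy is
# driven entirely by response time (illustrative, not the study's data).
rt = np.where(group == 1, 1.2, 0.9) + 0.1 * rng.normal(size=group.size)
acc = 0.9 - 0.4 * rt + 0.05 * rng.normal(size=group.size)

def coef(X, y):
    # ordinary least squares via lstsq; returns the coefficient vector
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones_like(group)
b_age_only = coef(np.column_stack([ones, group]), acc)[1]     # age effect alone
b_with_rt = coef(np.column_stack([ones, group, rt]), acc)[1]  # age effect with RT covaried

print(f"age effect {b_age_only:.3f} shrinks to {b_with_rt:.3f} once RT is covaried")
```

Because accuracy here is fully mediated by response time, the age coefficient collapses toward zero once RT enters the model, which is the pattern the abstract reports in attenuated form.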
Affiliation(s)
- Susan Nittrouer, Department of Otolaryngology–Head and Neck Surgery, The Ohio State University, Columbus (now at the University of Florida, Gainesville)
- Joanna H. Lowenstein, Department of Otolaryngology–Head and Neck Surgery, The Ohio State University, Columbus (now at the University of Florida, Gainesville)
- Taylor Wucinich, Department of Otolaryngology–Head and Neck Surgery, The Ohio State University, Columbus
- Aaron C. Moberly, Department of Otolaryngology–Head and Neck Surgery, The Ohio State University, Columbus
25
Finke M, Sandmann P, Bönitz H, Kral A, Büchner A. Consequences of Stimulus Type on Higher-Order Processing in Single-Sided Deaf Cochlear Implant Users. Audiol Neurootol 2016; 21:305-315. [DOI: 10.1159/000452123]
Abstract
Single-sided deaf subjects with a cochlear implant (CI) provide the unique opportunity to compare central auditory processing of the electrical input (CI ear) and the acoustic input (normal-hearing, NH, ear) within the same individual. In these individuals, sensory processing differs between the two ears, while cognitive abilities are the same irrespective of the sensory input. To better understand the perceptual-cognitive factors modulating speech intelligibility with a CI, this electroencephalography study examined the central auditory processing of words, the cognitive abilities, and the speech intelligibility in 10 postlingually single-sided deaf CI users. We found lower hit rates and prolonged response times for word classification during an oddball task for the CI ear compared with the NH ear. Event-related potentials reflecting sensory (N1) and higher-order (N2/N4) processing were also prolonged for word classification (targets versus nontargets) with the CI ear compared with the NH ear. Our results suggest that speech processing via the CI ear and the NH ear differs at both sensory (N1) and cognitive (N2/N4) processing stages, thereby affecting behavioral performance on speech discrimination. These results provide objective evidence that cognition is a key factor for speech perception under adverse listening conditions, such as the degraded speech signal provided by the CI.
26
Rönnberg J, Lunner T, Ng EHN, Lidestam B, Zekveld AA, Sörqvist P, Lyxell B, Träff U, Yumba W, Classon E, Hällgren M, Larsby B, Signoret C, Pichora-Fuller MK, Rudner M, Danielsson H, Stenfelt S. Hearing impairment, cognition and speech understanding: exploratory factor analyses of a comprehensive test battery for a group of hearing aid users, the n200 study. Int J Audiol 2016; 55:623-42. [PMID: 27589015] [PMCID: PMC5044772] [DOI: 10.1080/14992027.2016.1219775]
Abstract
OBJECTIVE The aims of the n200 study were to assess the structural relations between three classes of test variables (HEARING, COGNITION, and aided speech-in-noise OUTCOMES) and to describe the theoretical implications of these relations for the Ease of Language Understanding (ELU) model. STUDY SAMPLE Participants were 200 hard-of-hearing hearing-aid users with a mean age of 60.8 years. Forty-three percent were female, and the mean hearing threshold in the better ear was 37.4 dB HL. DESIGN LEVEL 1 factor analyses extracted one factor per test and/or cognitive function based on a priori conceptualizations. The more abstract LEVEL 2 factor analyses were performed separately for the three classes of test variables. RESULTS The HEARING test variables yielded two LEVEL 2 factors, labelled SENSITIVITY and TEMPORAL FINE STRUCTURE; the COGNITIVE variables yielded a single COGNITION factor; and the OUTCOMES variables yielded two factors, NO CONTEXT and CONTEXT. COGNITION predicted the NO CONTEXT factor more strongly than the CONTEXT factor. TEMPORAL FINE STRUCTURE and SENSITIVITY were associated with COGNITION, and all three contributed significantly and independently to the outcome scores, especially NO CONTEXT (R² = 0.40). CONCLUSIONS All LEVEL 2 factors are important both theoretically and for clinical assessment.
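A minimal sketch of the two-level factor-analytic design described in this abstract, using simulated data; the test counts, latent structure, and variable names below are illustrative stand-ins, not the actual n200 battery.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 200  # participants, mirroring the n200 sample size

# Simulate 4 HEARING tests, each measured under 3 conditions.
# Two latent traits stand in for SENSITIVITY and TEMPORAL FINE STRUCTURE.
latent = rng.normal(size=(n, 2))
tests = []
for t in range(4):
    trait = latent[:, [t % 2]]  # each test loads on one of the two traits
    conditions = trait @ rng.normal(size=(1, 3)) + 0.5 * rng.normal(size=(n, 3))
    tests.append(conditions)

# LEVEL 1: one factor per test, extracted from that test's conditions
level1 = np.hstack([
    FactorAnalysis(n_components=1).fit_transform(cond) for cond in tests
])

# LEVEL 2: factor the per-test scores within the HEARING class
fa2 = FactorAnalysis(n_components=2).fit(level1)
print(fa2.components_.shape)  # loading matrix: 2 factors x 4 LEVEL 1 scores
```

The same two-stage pattern would be repeated for the COGNITION and OUTCOMES classes; the LEVEL 2 factor scores could then be entered into the regressions the abstract summarizes.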
Affiliation(s)
- Jerker Rönnberg, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Thomas Lunner, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden; Eriksholm Research Centre, Oticon A/S, Rørtangvej 20, 3070 Snekkersten, Denmark
- Elaine Hoi Ning Ng, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Björn Lidestam, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Adriana Agatha Zekveld, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Section Ear & Hearing, Department of Otolaryngology-Head and Neck Surgery and EMGO Institute, VU University Medical Center, Amsterdam, The Netherlands
- Patrik Sörqvist, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Building, Energy and Environmental Engineering, University of Gävle, Gävle, Sweden
- Björn Lyxell, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Ulf Träff, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Wycliffe Yumba, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Elisabet Classon, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Mathias Hällgren, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Birgitta Larsby, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Carine Signoret, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- M. Kathleen Pichora-Fuller, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Psychology, University of Toronto, Toronto, Ontario, Canada; The Toronto Rehabilitation Institute, University Health Network, Toronto, Ontario, Canada; The Rotman Research Institute, Baycrest Hospital, Toronto, Ontario, Canada
- Mary Rudner, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Henrik Danielsson, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Stefan Stenfelt, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
27
Henricson C, Frölander HE, Möller C, Lyxell B. Theory of Mind and Cognitive Function in Adults with Alström or Usher Syndrome. J Vis Impair Blind 2016. [DOI: 10.1177/0145482x1611000506]
Abstract
Objective Theory of mind (ToM) refers to the ability to impute mental states to one's self and others. ToM was investigated in adults with Usher syndrome type 2 (USH2) or Alström syndrome (AS). Both syndromes cause deafblindness but differ with regard to onset and degree of sensory loss; individuals with AS furthermore display additional physical diseases. Comparisons were made with individuals with typical hearing and vision. Methods Thirteen people with USH2, 12 people with AS, and 33 people with typical hearing and vision performed tests of working memory capacity and verbal ability. ToM was tested via Happé's Strange Stories, assessing the ability to understand the emotions and actions of story characters. The test also included matched physical stories to evaluate understanding of the logical outcomes of everyday situations. Results Significant differences were identified in problem solving regarding physical conditions, with higher scores for the typical hearing and vision group, H(2) = 22.91, p < 0.01. The two groups with deafblindness also demonstrated poorer ToM than the typical hearing and vision group, H(2) = 21.61, p < 0.01, and the USH2 group outperformed the AS group, U(34), z = 2.42, p = 0.016. Intra-group variability was related to working memory capacity, verbal ability, visual status, and, to a minor extent, auditory capacity. The prevalence of additional physical diseases was not related to ToM performance. Conclusions Limited access to information due to visual loss may have reduced the degree of social experience, thereby negatively affecting the development of ToM. The impact of working memory capacity and verbal ability implies that hearing also contributes to ToM development. Differences between the two groups might be a function of the genetic conditions: the gene causing USH2 affects only the ears and the eyes, whereas AS has a multisystemic pathology.
Implications for practitioners Advice and support technology should emphasize ease of communication and boost the development of the communication required to develop ToM.
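The nonparametric group comparisons reported in this abstract (a Kruskal-Wallis omnibus test followed by a pairwise Mann-Whitney test) can be reproduced in outline with invented scores; only the group sizes match the abstract, and the score distributions are illustrative.

```python
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(1)

# Illustrative ToM scores (not the study's data); sizes match the abstract.
ush2 = rng.normal(14, 2, size=13)
als = rng.normal(11, 2, size=12)
control = rng.normal(17, 2, size=33)

# Omnibus comparison across the three groups (df = k - 1 = 2)
H, p = kruskal(ush2, als, control)

# Pairwise follow-up between the two deafblind groups
U, p_pair = mannwhitneyu(ush2, als)

print(f"H(2) = {H:.2f}, p = {p:.4f}; U = {U:.0f}, p = {p_pair:.4f}")
```

These rank-based tests are the natural choice here because the small, unequal groups give no guarantee of normally distributed scores.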
Affiliation(s)
- Cecilia Henricson, Clinical psychologist, Department of Behavioral Science and Learning, Linköping University, Linköping SE 581 83, Sweden; The Swedish Institute for Disability Research, Linköping, Sweden; The Linnaeus Centre HEAD, Linköping, Sweden; Research on Hearing and Deafness (HEAD) Graduate School, Linköping
- Hans-Erik Frölander, Clinical psychologist, School of Health, Örebro University, Örebro SE 701 85, Sweden; Audiological Research Centre, Örebro University Hospital, Örebro SE 701 85, Sweden
- Claes Möller, Professor, School of Health, Örebro University, Örebro, Sweden; Audiological Research Centre, Örebro University Hospital, Örebro, Sweden
- Björn Lyxell, Professor, Department of Behavioral Science and Learning, Linköping University, Linköping, Sweden; The Swedish Institute for Disability Research, Linköping, Sweden; The Linnaeus Centre HEAD, Linköping, Sweden
28
Füllgrabe C, Rosen S. On The (Un)importance of Working Memory in Speech-in-Noise Processing for Listeners with Normal Hearing Thresholds. Front Psychol 2016; 7:1268. [PMID: 27625615] [PMCID: PMC5003928] [DOI: 10.3389/fpsyg.2016.01268]
Abstract
With the advent of cognitive hearing science, increased attention has been given to individual differences in cognitive functioning and their explanatory power in accounting for inter-listener variability in the processing of speech in noise (SiN). The psychological construct that has received much interest in recent years is working memory (WM). Empirical evidence indeed confirms the association between WM capacity (WMC) and SiN identification in older hearing-impaired listeners. However, some theoretical models propose that variation in WMC is an important predictor of speech processing abilities in adverse perceptual conditions for all listeners, and this notion has become widely accepted within the field. To assess whether WMC also plays a role when listeners without hearing loss process speech in adverse listening conditions, we surveyed published and unpublished studies in which the Reading-Span test (a widely used measure of WMC) was administered in conjunction with a measure of SiN identification, using sentence material routinely used in audiological and hearing research. A meta-analysis revealed that, for young listeners with audiometrically normal hearing, individual variation in WMC is estimated to account for, on average, less than 2% of the variance in SiN identification scores. This result cautions against the (intuitively appealing) assumption that individual variation in WMC predicts SiN identification independently of the age and hearing status of the listener.
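A sketch of the variance-explained aggregation behind such a meta-analysis, assuming illustrative per-study correlations and sample sizes (not the paper's actual data): correlations are Fisher-transformed, averaged with inverse-variance weights, and back-transformed before squaring.

```python
import numpy as np

# Hypothetical per-study correlations between Reading-Span (WMC) scores and
# speech-in-noise identification in young normal-hearing listeners;
# the r values and sample sizes are illustrative, not the paper's data.
r = np.array([0.10, -0.05, 0.20, 0.08, 0.12])
n = np.array([25, 30, 20, 40, 35])

# Fisher r-to-z transform, inverse-variance weighted mean, back-transform
z = np.arctanh(r)
w = n - 3                         # weight of each study's Fisher z
z_mean = np.sum(w * z) / np.sum(w)
r_mean = np.tanh(z_mean)

print(f"mean r = {r_mean:.3f}, variance explained = {r_mean**2:.1%}")
```

With small, mixed-sign correlations like these, the pooled r² lands well under 2%, which is the order of magnitude the abstract reports for young normal-hearing listeners.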
Affiliation(s)
- Christian Füllgrabe, Medical Research Council Institute of Hearing Research, The University of Nottingham, Nottingham, UK
- Stuart Rosen, Speech, Hearing and Phonetic Sciences, University College London, London, UK
29
30
Rudner M, Mishra S, Stenfelt S, Lunner T, Rönnberg J. Seeing the Talker's Face Improves Free Recall of Speech for Young Adults With Normal Hearing but Not Older Adults With Hearing Loss. J Speech Lang Hear Res 2016; 59:590-599. [PMID: 27280873] [DOI: 10.1044/2015_jslhr-h-15-0014]
Abstract
PURPOSE Seeing the talker's face improves speech understanding in noise, possibly releasing resources for cognitive processing. We investigated whether it improves free recall of spoken two-digit numbers. METHOD Twenty younger adults with normal hearing and 24 older adults with hearing loss listened to and subsequently recalled lists of 13 two-digit numbers, with alternating male and female talkers. Lists were presented in quiet as well as in stationary and speech-like noise at a signal-to-noise ratio giving approximately 90% intelligibility. Amplification compensated for loss of audibility. RESULTS Seeing the talker's face improved free recall performance for the younger but not the older group. Poorer performance in background noise was contingent on individual differences in working memory capacity. The effect of seeing the talker's face did not differ in quiet and noise. CONCLUSIONS We have argued that the absence of an effect of seeing the talker's face for older adults with hearing loss may be due to modulation of audiovisual integration mechanisms caused by an interaction between task demands and participant characteristics. In particular, we suggest that executive task demands and interindividual executive skills may play a key role in determining the benefit of seeing the talker's face during a speech-based cognitive task.
31
Holmer E, Heimann M, Rudner M. Imitation, Sign Language Skill and the Developmental Ease of Language Understanding (D-ELU) Model. Front Psychol 2016; 7:107. [PMID: 26909050] [PMCID: PMC4754574] [DOI: 10.3389/fpsyg.2016.00107]
Abstract
Imitation and language processing are closely connected. According to the Ease of Language Understanding (ELU) model (Rönnberg et al., 2013) pre-existing mental representation of lexical items facilitates language understanding. Thus, imitation of manual gestures is likely to be enhanced by experience of sign language. We tested this by eliciting imitation of manual gestures from deaf and hard-of-hearing (DHH) signing and hearing non-signing children at a similar level of language and cognitive development. We predicted that the DHH signing children would be better at imitating gestures lexicalized in their own sign language (Swedish Sign Language, SSL) than unfamiliar British Sign Language (BSL) signs, and that both groups would be better at imitating lexical signs (SSL and BSL) than non-signs. We also predicted that the hearing non-signing children would perform worse than DHH signing children with all types of gestures the first time (T1) we elicited imitation, but that the performance gap between groups would be reduced when imitation was elicited a second time (T2). Finally, we predicted that imitation performance on both occasions would be associated with linguistic skills, especially in the manual modality. A split-plot repeated measures ANOVA demonstrated that DHH signers imitated manual gestures with greater precision than non-signing children when imitation was elicited the second but not the first time. Manual gestures were easier to imitate for both groups when they were lexicalized than when they were not; but there was no difference in performance between familiar and unfamiliar gestures. For both groups, language skills at T1 predicted imitation at T2. Specifically, for DHH children, word reading skills, comprehension and phonological awareness of sign language predicted imitation at T2. 
For the hearing participants, language comprehension predicted imitation at T2, even after the effects of working memory capacity and motor skills were taken into account. These results demonstrate that experience of sign language enhances the ability to imitate manual gestures once representations have been established, and suggest that the inherent motor patterns of lexical manual gestures are better suited for representation than those of non-signs. This set of findings prompts a developmental version of the ELU model, D-ELU.
Affiliation(s)
- Emil Holmer, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Mikael Heimann, Swedish Institute for Disability Research and Division of Psychology, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Mary Rudner, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
32
Preexisting semantic representation improves working memory performance in the visuospatial domain. Mem Cognit 2016; 44:608-20. [DOI: 10.3758/s13421-016-0585-z]
33
Henricson C, Lidestam B, Lyxell B, Möller C. Cognitive skills and reading in adults with Usher syndrome type 2. Front Psychol 2015; 6:326. [PMID: 25859232] [PMCID: PMC4373271] [DOI: 10.3389/fpsyg.2015.00326]
Abstract
OBJECTIVE To investigate working memory (WM), phonological skills, lexical skills, and reading comprehension in adults with Usher syndrome type 2 (USH2). DESIGN The participants performed tests of phonological processing, lexical access, WM, and reading comprehension. The design of the test situation and tests was specifically considered for use with persons with low vision in combination with hearing impairment. The performance of the group with USH2 on the different cognitive measures was compared to that of a matched control group with normal hearing and vision (NVH). STUDY SAMPLE Thirteen participants with USH2 aged 21-60 years and a control group of 10 individuals with NVH, matched on age and level of education. RESULTS The group with USH2 displayed significantly lower performance on tests of phonological processing, and on measures requiring both fast visual judgment and phonological processing. There was a larger variation in performance among the individuals with USH2 than in the matched control group. CONCLUSION The performance of the group with USH2 indicated similar problems with phonological processing skills and phonological WM as in individuals with long-term hearing loss. The group with USH2 also had significantly longer reaction times, indicating that processing of visual stimuli is difficult due to the visual impairment. These findings point toward the difficulties in accessing information that persons with USH2 experience, and could be part of the explanation of why individuals with USH2 report high levels of fatigue and feelings of stress (Wahlqvist et al., 2013).
Affiliation(s)
- Cecilia Henricson, Swedish Institute for Disability Research (SIDR), Linköping, Sweden; Linnaeus Centre for Research on Hearing and Deafness (HEAD), Linköping, Sweden; Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Björn Lidestam, Linnaeus Centre for Research on Hearing and Deafness (HEAD), Linköping, Sweden; Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Björn Lyxell, Swedish Institute for Disability Research (SIDR), Linköping, Sweden; Linnaeus Centre for Research on Hearing and Deafness (HEAD), Linköping, Sweden; Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Claes Möller, Swedish Institute for Disability Research (SIDR), Linköping, Sweden; Audiological Research Centre, Örebro University Hospital, Örebro, Sweden; School of Medicine and Health, Örebro University, Örebro, Sweden
34
Cognitive spare capacity and speech communication: a narrative overview. Biomed Res Int 2014; 2014:869726. [PMID: 24971355] [PMCID: PMC4058272] [DOI: 10.1155/2014/869726]
Abstract
Background noise can make speech communication tiring and cognitively taxing, especially for individuals with hearing impairment. It is now well established that better working memory capacity is associated with better ability to understand speech under adverse conditions as well as better ability to benefit from the advanced signal processing in modern hearing aids. Recent work has shown that although such processing cannot overcome hearing handicap, it can increase cognitive spare capacity, that is, the ability to engage in higher level processing of speech. This paper surveys recent work on cognitive spare capacity and suggests new avenues of investigation.
35
Mishra S, Stenfelt S, Lunner T, Rönnberg J, Rudner M. Cognitive spare capacity in older adults with hearing loss. Front Aging Neurosci 2014; 6:96. [PMID: 24904409] [PMCID: PMC4033040] [DOI: 10.3389/fnagi.2014.00096]
Abstract
Individual differences in working memory capacity (WMC) are associated with speech recognition in adverse conditions, reflecting the need to maintain and process speech fragments until lexical access can be achieved. When working memory resources are engaged in unlocking the lexicon, there is less Cognitive Spare Capacity (CSC) available for higher level processing of speech. CSC is essential for interpreting the linguistic content of speech input and preparing an appropriate response, that is, engaging in conversation. Previously, we showed, using a Cognitive Spare Capacity Test (CSCT) that in young adults with normal hearing, CSC was not generally related to WMC and that when CSC decreased in noise it could be restored by visual cues. In the present study, we investigated CSC in 24 older adults with age-related hearing loss, by administering the CSCT and a battery of cognitive tests. We found generally reduced CSC in older adults with hearing loss compared to the younger group in our previous study, probably because they had poorer cognitive skills and deployed them differently. Importantly, CSC was not reduced in the older group when listening conditions were optimal. Visual cues improved CSC more for this group than for the younger group in our previous study. CSC of older adults with hearing loss was not generally related to WMC but it was consistently related to episodic long term memory, suggesting that the efficiency of this processing bottleneck is important for executive processing of speech in this group.
Affiliation(s)
- Sushmit Mishra, Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Stefan Stenfelt, Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Thomas Lunner, Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden; Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Jerker Rönnberg, Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Mary Rudner, Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
36
Classon E, Löfkvist U, Rudner M, Rönnberg J. Verbal fluency in adults with postlingually acquired hearing impairment. Speech Lang Hear 2014. [DOI: 10.1179/205057113x13781290153457]
37
Mishra S, Lunner T, Stenfelt S, Rönnberg J, Rudner M. Seeing the talker's face supports executive processing of speech in steady state noise. Front Syst Neurosci 2013; 7:96. [PMID: 24324411] [PMCID: PMC3840300] [DOI: 10.3389/fnsys.2013.00096]
Abstract
Listening to speech in noise depletes cognitive resources, affecting speech processing. The present study investigated how the remaining resources, or cognitive spare capacity (CSC), can be deployed by young adults with normal hearing. We administered a test of CSC (CSCT; Mishra et al., 2013) along with a battery of established cognitive tests to 20 participants with normal hearing. In the CSCT, lists of two-digit numbers were presented with and without visual cues in quiet, as well as in steady-state and speech-like noise at a high intelligibility level. In low load conditions, two numbers were recalled according to instructions inducing executive processing (updating, inhibition); in high load conditions, the participants were additionally instructed to recall one extra number, which was always the first item in the list. In line with previous findings, results showed that CSC was sensitive to memory load and executive function but generally not related to working memory capacity (WMC). Furthermore, CSCT scores in quiet were lowered by visual cues, probably due to distraction. In steady-state noise, the presence of visual cues improved CSCT scores, probably by enabling better encoding. Contrary to our expectation, CSCT performance was disrupted more in steady-state than in speech-like noise, although only without visual cues, possibly because selective attention could be used to ignore the speech-like background and provide an enriched representation of target items in working memory similar to that obtained in quiet. This interpretation is supported by a consistent association between CSCT scores and updating skills.
Affiliation(s)
- Sushmit Mishra, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
38
Rönnberg J, Lunner T, Zekveld A, Sörqvist P, Danielsson H, Lyxell B, Dahlström O, Signoret C, Stenfelt S, Pichora-Fuller MK, Rudner M. The Ease of Language Understanding (ELU) model: theoretical, empirical, and clinical advances. Front Syst Neurosci 2013; 7:31. [PMID: 23874273] [PMCID: PMC3710434] [DOI: 10.3389/fnsys.2013.00031]
Abstract
Working memory is important for online language processing during conversation. We use it to maintain relevant information, to inhibit or ignore irrelevant information, and to attend to conversation selectively. Working memory helps us to keep track of and actively participate in conversation, including taking turns and following the gist. This paper examines the Ease of Language Understanding model (i.e., the ELU model, Rönnberg, 2003; Rönnberg et al., 2008) in light of new behavioral and neural findings concerning the role of working memory capacity (WMC) in uni-modal and bimodal language processing. The new ELU model is a meaning prediction system that depends on phonological and semantic interactions in rapid implicit and slower explicit processing mechanisms that both depend on WMC albeit in different ways. It is based on findings that address the relationship between WMC and (a) early attention processes in listening to speech, (b) signal processing in hearing aids and its effects on short-term memory, (c) inhibition of speech maskers and its effect on episodic long-term memory, (d) the effects of hearing impairment on episodic and semantic long-term memory, and finally, (e) listening effort. New predictions and clinical implications are outlined. Comparisons with other WMC and speech perception models are made.
Collapse
Affiliation(s)
- Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
40
Besser J, Koelewijn T, Zekveld AA, Kramer SE, Festen JM. How linguistic closure and verbal working memory relate to speech recognition in noise--a review. Trends Amplif 2013; 17:75-93. [PMID: 23945955 PMCID: PMC4070613 DOI: 10.1177/1084713813495459] [Citation(s) in RCA: 92] [Impact Index Per Article: 8.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
Abstract
The ability to recognize masked speech, commonly measured with a speech reception threshold (SRT) test, is associated with cognitive processing abilities. Two cognitive factors frequently assessed in speech recognition research are the capacity of working memory (WM), measured by means of a reading span (Rspan) or listening span (Lspan) test, and the ability to read masked text (linguistic closure), measured by the text reception threshold (TRT). The current article provides a review of recent hearing research that examined the relationship of TRT and WM span to SRTs in various maskers. Furthermore, modality differences in WM capacity assessed with the Rspan compared to the Lspan test were examined and related to speech recognition abilities in an experimental study with young adults with normal hearing (NH). Span scores were strongly associated with each other, but were higher in the auditory modality. The results of the reviewed studies suggest that TRT and WM span are related to each other, but differ in their relationships with SRT performance. In NH adults of middle age or older, both TRT and Rspan were associated with SRTs in speech maskers, whereas TRT better predicted speech recognition in fluctuating nonspeech maskers. The associations with SRTs in steady-state noise were inconclusive for both measures. WM span was positively related to benefit from contextual information in speech recognition, but better TRTs related to less interference from unrelated cues. Data for individuals with impaired hearing are limited, but larger WM span seems to give a general advantage in various listening situations.
Affiliation(s)
- Jana Besser
- VU University Medical Center, Amsterdam, Netherlands
- Adriana A. Zekveld
- VU University Medical Center, Amsterdam, Netherlands
- The Swedish Institute for Disability Research, Sweden
- Linköping University, Linköping, Sweden
|