101
Cahana-Amitay D, Spiro A, Sayers JT, Oveis AC, Higby E, Ojo EA, Duncan S, Goral M, Hyun J, Albert ML, Obler LK. How older adults use cognition in sentence-final word recognition. Aging Neuropsychol Cogn 2015; 23:418-44. [PMID: 26569553] [DOI: 10.1080/13825585.2015.1111291]
Abstract
This study examined the effects of executive control and working memory on older adults' sentence-final word recognition. We asked how important executive functions are to this process and how their contribution is modulated by the predictability of the speech material. To this end, we tested 173 neurologically intact adult native English speakers aged 55-84 years. Participants completed a sentence-final word recognition test, in which sentential context was manipulated and sentences were presented at different levels of babble, together with multiple tests of executive functioning (assessing inhibition, shifting, and efficient access to long-term memory) and working memory. Using a generalized linear mixed model, we found that better inhibition was associated with higher word recognition accuracy, whereas increased age and greater hearing loss were associated with poorer performance. Findings are discussed within the framework of semantic control and are interpreted as supporting a theoretical view of executive control that emphasizes functional diversity among executive components.
Affiliation(s)
- Dalia Cahana-Amitay
- Department of Neurology, Boston University School of Medicine, Boston, MA, USA; Veterans Affairs Boston Healthcare System, Boston, MA, USA
- Avron Spiro
- Department of Epidemiology, School of Public Health, Boston University, Boston, MA, USA; Department of Psychiatry, Boston University School of Medicine, Boston, MA, USA; Veterans Affairs Boston Healthcare System, Boston, MA, USA
- Jesse T Sayers
- Department of Neurology, Boston University School of Medicine, Boston, MA, USA; Veterans Affairs Boston Healthcare System, Boston, MA, USA
- Abigail C Oveis
- Department of Neurology, Boston University School of Medicine, Boston, MA, USA; Veterans Affairs Boston Healthcare System, Boston, MA, USA
- Eve Higby
- Department of Neurology, Boston University School of Medicine, Boston, MA, USA; Veterans Affairs Boston Healthcare System, Boston, MA, USA; The Graduate Center, City University of New York, New York, NY, USA
- Emmanuel A Ojo
- Department of Neurology, Boston University School of Medicine, Boston, MA, USA; Veterans Affairs Boston Healthcare System, Boston, MA, USA
- Susan Duncan
- The Graduate Center, City University of New York, New York, NY, USA; Department of Cognitive Sciences and Neurology, University of California, Irvine, CA, USA
- Mira Goral
- Department of Neurology, Boston University School of Medicine, Boston, MA, USA; Veterans Affairs Boston Healthcare System, Boston, MA, USA; The Graduate Center, City University of New York, New York, NY, USA; Lehman College, City University of New York, New York, NY, USA
- Jungmoon Hyun
- Department of Neurology, Boston University School of Medicine, Boston, MA, USA; Veterans Affairs Boston Healthcare System, Boston, MA, USA; The Graduate Center, City University of New York, New York, NY, USA
- Martin L Albert
- Department of Neurology, Boston University School of Medicine, Boston, MA, USA; Veterans Affairs Boston Healthcare System, Boston, MA, USA
- Loraine K Obler
- Department of Neurology, Boston University School of Medicine, Boston, MA, USA; Veterans Affairs Boston Healthcare System, Boston, MA, USA; The Graduate Center, City University of New York, New York, NY, USA
102
Wilsch A, Obleser J. What works in auditory working memory? A neural oscillations perspective. Brain Res 2015; 1640:193-207. [PMID: 26556773] [DOI: 10.1016/j.brainres.2015.10.054]
Abstract
Working memory is a limited resource: brains can only maintain small amounts of sensory input (memory load) over a brief period of time (memory decay). The dynamics of slow neural oscillations, as recorded using magneto- and electroencephalography (M/EEG), provide a window into the neural mechanics of these limitations. Oscillations in the alpha range (8-13 Hz) in particular are a sensitive marker of memory load. Moreover, according to current models, the resultant working memory load is determined by the relative noise in the neural representation of maintained information. The auditory domain allows memory researchers to apply and test the concept of noise quite literally: employing degraded stimulus acoustics increases memory load and, at the same time, allows the cognitive resources required to process speech in noise to be assessed in an ecologically valid and clinically relevant way. The present review first summarizes recent findings on neural oscillations, especially alpha power, and how they reflect memory load and memory decay in auditory working memory. The focus is specifically on memory load resulting from acoustic degradation. These findings are then contrasted with contextual factors that benefit neural as well as behavioral markers of memory performance by reducing representational noise. We end by discussing the functional role of alpha power in auditory working memory and suggest extensions of the current methodological toolkit. This article is part of a Special Issue entitled SI: Auditory working memory.
Affiliation(s)
- Anna Wilsch
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jonas Obleser
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Psychology, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
103
Ellis RJ, Rönnberg J. How does susceptibility to proactive interference relate to speech recognition in aided and unaided conditions? Front Psychol 2015; 6:1017. [PMID: 26283981] [PMCID: PMC4522515] [DOI: 10.3389/fpsyg.2015.01017]
Abstract
Proactive interference (PI) is the capacity to resist interference to the acquisition of new memories from information stored in long-term memory. Previous research has shown that PI correlates significantly with the speech-in-noise recognition scores of younger adults with normal hearing. In this study, we report the results of an experiment designed to investigate the extent to which tests of visual PI relate to the speech-in-noise recognition scores of older adults with hearing loss, in aided and unaided conditions. The results suggest that measures of PI correlate significantly with speech-in-noise recognition only in the unaided condition. Furthermore, the relation between PI and speech-in-noise recognition differs from that observed in younger listeners without hearing loss. The findings suggest that the relation between PI tests and the speech-in-noise recognition scores of older adults with hearing loss relates to the capability of the test to index cognitive flexibility.
Affiliation(s)
- Rachel J Ellis
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
104
Rudner M, Toscano E, Holmer E. Load and distinctness interact in working memory for lexical manual gestures. Front Psychol 2015; 6:1147. [PMID: 26321979] [PMCID: PMC4535352] [DOI: 10.3389/fpsyg.2015.01147]
Abstract
The Ease of Language Understanding model (Rönnberg et al., 2013) predicts that decreasing the distinctness of language stimuli increases working memory load; in the speech domain this notion is supported by empirical evidence. Our aim was to determine whether such an over-additive interaction can be generalized to sign processing in sign-naïve individuals and whether it is modulated by experience of computer gaming. Twenty young adults with no knowledge of sign language performed an n-back working memory task based on manual gestures lexicalized in sign language; the visual resolution of the signs and working memory load were manipulated. Performance was poorer when load was high and resolution was low. These two effects interacted over-additively, demonstrating that reducing the resolution of signed stimuli increases working memory load when there is no pre-existing semantic representation. This suggests that load and distinctness are handled by a shared amodal mechanism which can be revealed empirically when stimuli are degraded and load is high, even without pre-existing semantic representation. There was some evidence that the mechanism is influenced by computer gaming experience. Future work should explore how the shared mechanism is influenced by pre-existing semantic representation and sensory factors together with computer gaming experience.
Affiliation(s)
- Mary Rudner
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Elena Toscano
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Emil Holmer
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
105
Osman H, Sullivan JR. An analysis of error patterns in children's backward digit recall in noise. Noise Health 2015; 17:191-7. [PMID: 26168949] [PMCID: PMC4900480] [DOI: 10.4103/1463-1741.160684]
Abstract
The purpose of the study was to determine whether perceptual masking or cognitive processing accounts for a decline in working memory performance in the presence of competing speech. The types and patterns of errors made on the backward digit span in quiet and in multitalker babble at a -5 dB signal-to-noise ratio (SNR) were analyzed. The errors were classified into two categories: item errors (digits that were not presented in a list were repeated) and order errors (correct digits were repeated but in an incorrect order). Fifty-five children with normal hearing, aged between 7 and 10 years, were included. Repeated-measures analysis of variance (RM-ANOVA) revealed main effects of error type and digit span length. An interaction with listening condition showed that order errors occurred more frequently than item errors in the degraded listening condition compared with quiet. In addition, children had more difficulty recalling the correct order of intermediate items, supporting strong primacy and recency effects. The decline in children's working memory performance was not primarily related to perceptual difficulties alone. The majority of errors were related to the maintenance of sequential order information, which suggests that reduced performance in competing speech may result from increased cognitive processing demands in noise.
Affiliation(s)
- Homira Osman
- Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington, USA
106
Best V, Keidser G, Buchholz JM, Freeston K. Development and preliminary evaluation of a new test of ongoing speech comprehension. Int J Audiol 2015; 55:45-52. [PMID: 26158403] [DOI: 10.3109/14992027.2015.1055835]
Abstract
OBJECTIVE The overall goal of this work is to create new speech perception tests that more closely resemble real world communication and offer an alternative or complement to the commonly used sentence recall test. DESIGN We describe the development of a new ongoing speech comprehension test based on short everyday passages and on-the-go questions. We also describe the results of an experiment conducted to compare the psychometric properties of this test to those of a sentence test. STUDY SAMPLE Both tests were completed by a group of listeners that included normal hearers as well as hearing-impaired listeners who participated with and without their hearing aids. RESULTS Overall, the psychometric properties of the two tests were similar, and thresholds were significantly correlated. However, there was some evidence of age/cognitive effects in the comprehension test that were not revealed by the sentence test. CONCLUSIONS This new comprehension test promises to be useful for the larger goal of creating laboratory tests that combine realistic acoustic environments with realistic communication tasks. Further efforts will be required to assess whether the test can ultimately improve predictions of real-world outcomes.
Affiliation(s)
- Virginia Best
- National Acoustic Laboratories and the HEARing Cooperative Research Centre, Australian Hearing Hub, Macquarie University, Australia; Department of Speech, Language and Hearing Sciences, Boston University, Boston, USA
- Gitte Keidser
- National Acoustic Laboratories and the HEARing Cooperative Research Centre, Australian Hearing Hub, Macquarie University, Australia
- Jörg M Buchholz
- National Acoustic Laboratories and the HEARing Cooperative Research Centre, Australian Hearing Hub, Macquarie University, Australia; Audiology Section, Department of Linguistics, Australian Hearing Hub, Macquarie University, Australia
- Katrina Freeston
- National Acoustic Laboratories and the HEARing Cooperative Research Centre, Australian Hearing Hub, Macquarie University, Australia
107
Wingfield A, Amichetti NM, Lash A. Cognitive aging and hearing acuity: modeling spoken language comprehension. Front Psychol 2015; 6:684. [PMID: 26124724] [PMCID: PMC4462993] [DOI: 10.3389/fpsyg.2015.00684]
Abstract
The comprehension of spoken language has been characterized by a number of "local" theories that have focused on specific aspects of the task: models of word recognition, models of selective attention, accounts of thematic role assignment at the sentence level, and so forth. The ease of language understanding (ELU) model (Rönnberg et al., 2013) stands as one of the few attempts to offer a fully encompassing framework for language understanding. In this paper we discuss interactions between perceptual, linguistic, and cognitive factors in spoken language understanding. Central to our presentation is an examination of aspects of the ELU model that apply especially to spoken language comprehension in adult aging, where speed of processing, working memory capacity, and hearing acuity are often compromised. We discuss, in relation to the ELU model, conceptions of working memory and its capacity limitations, the use of linguistic context to aid in speech recognition and the importance of inhibitory control, and language comprehension at the sentence level. Throughout this paper we offer a constructive look at the ELU model: where it is strong and where there are gaps to be filled.
Affiliation(s)
- Arthur Wingfield
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
108
Doherty KA, Desjardins JL. The benefit of amplification on auditory working memory function in middle-aged and young-older hearing impaired adults. Front Psychol 2015; 6:721. [PMID: 26097461] [PMCID: PMC4456569] [DOI: 10.3389/fpsyg.2015.00721]
Abstract
Untreated hearing loss can interfere with an individual’s cognitive abilities and intellectual function. Specifically, hearing loss has been shown to negatively impact working memory function, which is important for speech understanding, especially in difficult or noisy listening conditions. The purpose of the present study was to assess the effect of hearing aid use on auditory working memory function in middle-aged and young-older adults with mild to moderate sensorineural hearing loss. Participants completed two objective measures of auditory working memory in aided and unaided listening conditions. An age-matched control group followed the same experimental protocol except that they were not fitted with hearing aids. Participants’ scores on the auditory working memory tests improved significantly when they were wearing hearing aids. Thus, hearing aids worn during the early stages of an age-related hearing loss can improve a person’s performance on auditory working memory tests.
Affiliation(s)
- Karen A Doherty
- Department of Communication Sciences and Disorders, Syracuse University, Syracuse, NY, USA
- Jamie L Desjardins
- Department of Rehabilitation Sciences, University of Texas at El Paso, El Paso, TX, USA
109
Lunner T. About Cognitive Outcome Measures at Ecological Signal-to-Noise Ratios and Cognitive-Driven Hearing Aid Signal Processing. Am J Audiol 2015; 24:121-3. [PMID: 25863715] [DOI: 10.1044/2015_aja-14-0066]
Abstract
PURPOSE The purpose of this article is to discuss 2 questions concerning how hearing aids interact with hearing and cognition: Can signal processing in hearing aids improve memory? Can attention be used for top-down control of hearing aids? METHOD Memory recall of sentences, presented at 95% correct speech recognition, was assessed with and without binary mask noise reduction. A short literature review was performed on recent findings on new brain-imaging techniques showing potential for hearing aid control. CONCLUSIONS Two experiments indicate that it is possible to show improved memory with an experimental noise reduction algorithm at ecological signal-to-noise ratios and that it is possible to replicate these findings in a new language. The literature indicates that attention-controlled hearing aids may be developed in the future.
Affiliation(s)
- Thomas Lunner
- Linköping University, Sweden; Eriksholm Research Centre, Oticon A/S, Smørum, Denmark
110
Marsh JE, Ljung R, Nöstl A, Threadgold E, Campbell TA. Failing to get the gist of what's being said: background noise impairs higher-order cognitive processing. Front Psychol 2015; 6:548. [PMID: 26052289] [PMCID: PMC4439538] [DOI: 10.3389/fpsyg.2015.00548]
Abstract
A dynamic interplay is known to exist between auditory processing and human cognition. For example, prior investigations of speech-in-noise have revealed there is more to learning than just listening: Even if all words within a spoken list are correctly heard in noise, later memory for those words is typically impoverished. These investigations supported a view that there is a "gap" between the intelligibility of speech and memory for that speech. Here, the notion was that this gap between speech intelligibility and memorability is a function of the extent to which the spoken message seizes limited immediate memory resources (e.g., Kjellberg et al., 2008). Accordingly, the more difficult the processing of the spoken message, the fewer resources are available for elaboration, storage, and recall of that spoken material. However, it was not previously known how increasing that difficulty affected the memory processing of semantically rich spoken material. This investigation showed that noise impairs higher levels of cognitive analysis. A variant of the Deese-Roediger-McDermott procedure that encourages semantic elaborative processes was deployed. On each trial, participants listened to a 36-item list comprising 12 words blocked by each of 3 different themes. Each of those 12 words (e.g., bed, tired, snore…) was associated with a "critical" lure theme word that was not presented (e.g., sleep). Word lists were presented either without noise or at an A-weighted signal-to-noise ratio of 5 decibels. Noise reduced false recall of the critical words and decreased the semantic clustering of recall. Theoretical and practical implications are discussed.
Affiliation(s)
- John E Marsh
- Department of Building, Energy, and Environmental Engineering, Faculty of Engineering and Sustainable Development, University of Gävle, Gävle, Sweden; School of Psychology, University of Central Lancashire, Preston, Lancashire, UK
- Robert Ljung
- Department of Building, Energy, and Environmental Engineering, Faculty of Engineering and Sustainable Development, University of Gävle, Gävle, Sweden
- Anatole Nöstl
- Department of Building, Energy, and Environmental Engineering, Faculty of Engineering and Sustainable Development, University of Gävle, Gävle, Sweden
- Tom A Campbell
- Neuroscience Center, University of Helsinki, Helsinki, Finland
111
Dundon NM, Dockree SP, Buckley V, Merriman N, Carton M, Clarke S, Roche RAP, Lalor EC, Robertson IH, Dockree PM. Impaired auditory selective attention ameliorated by cognitive training with graded exposure to noise in patients with traumatic brain injury. Neuropsychologia 2015; 75:74-87. [PMID: 26004059] [DOI: 10.1016/j.neuropsychologia.2015.05.012]
Abstract
Patients who suffer traumatic brain injury frequently report difficulty concentrating on tasks and completing routine activities in noisy and distracting environments. Such impairments can have long-term negative psychosocial consequences. A cognitive control function that may underlie this impairment is the capacity to select a goal-relevant signal for further processing while safeguarding it from irrelevant noise. A paradigmatic investigation of this problem was undertaken using a dichotic listening task (study 1) in which comprehension of a stream of speech to one ear was measured in the context of increasing interference from a second stream of irrelevant speech to the other ear. Controls showed an initial decline in performance in the presence of competing speech but thereafter showed adaptation to increasing audibility of irrelevant speech, even at the highest levels of noise. By contrast, patients showed linear decline in performance with increasing noise. Subsequently attempts were made to ameliorate this deficit (study 2) using a cognitive training procedure based on attention process training (APT) that included graded exposure to irrelevant noise over the course of training. Patients were assigned to adaptive and non-adaptive training schedules or to a no-training control group. Results showed that both types of training drove improvements in the dichotic listening and in naturalistic tasks of performance in noise. Improvements were also seen on measures of selective attention in the visual domain suggesting transfer of training. We also observed augmentation of event-related potentials (ERPs) linked to target processing (P3b) but no change in ERPs evoked by distractor stimuli (P3a) suggesting that training heightened tuning of target signals, as opposed to gating irrelevant noise. No changes in any of the above measures were observed in a no-training control group. Together these findings present an ecologically valid approach to measure selective attention difficulties after brain injury, and provide a means to ameliorate these deficits.
Affiliation(s)
- Neil M Dundon
- Headway Ireland, Blackhall Place, Dublin 7, Ireland; Università di Bologna, Dipartimento di Psicologia, Viale Berti Pichat, 5, Bologna, Italy
- Suvi P Dockree
- Headway Ireland, Blackhall Place, Dublin 7, Ireland; National Rehabilitation Hospital, Dun Laoghaire, Ireland
- Vanessa Buckley
- Headway Ireland, Blackhall Place, Dublin 7, Ireland; Trinity College Institute of Neuroscience, The University of Dublin, Dublin 2, Ireland
- Niamh Merriman
- Trinity College Institute of Neuroscience, The University of Dublin, Dublin 2, Ireland
- Mary Carton
- Headway Ireland, Blackhall Place, Dublin 7, Ireland
- Sarah Clarke
- Headway Ireland, Blackhall Place, Dublin 7, Ireland; Department of Psychology, Beaumont Hospital, Dublin 9, Ireland
- Richard A P Roche
- Department of Psychology, National University of Ireland, Maynooth, Ireland
- Edmund C Lalor
- Trinity College Institute of Neuroscience, The University of Dublin, Dublin 2, Ireland
- Ian H Robertson
- Trinity College Institute of Neuroscience, The University of Dublin, Dublin 2, Ireland
- Paul M Dockree
- Trinity College Institute of Neuroscience, The University of Dublin, Dublin 2, Ireland
112
Wendt D, Kollmeier B, Brand T. How hearing impairment affects sentence comprehension: using eye fixations to investigate the duration of speech processing. Trends Hear 2015; 19:2331216515584149. [PMID: 25910503] [PMCID: PMC4409940] [DOI: 10.1177/2331216515584149]
Abstract
The main objective of this study was to investigate the extent to which hearing impairment influences the duration of sentence processing. An eye-tracking paradigm is introduced that provides an online measure of how hearing impairment prolongs processing of linguistically complex sentences; this measure uses eye fixations recorded while the participant listens to a sentence. Eye fixations toward a target picture (which matches the aurally presented sentence) were measured in the presence of a competitor picture. Based on the recorded eye fixations, the single target detection amplitude, which reflects the tendency of the participant to fixate the target picture, was used as a metric to estimate the duration of sentence processing. The single target detection amplitude was calculated for sentence structures with different levels of linguistic complexity and for different listening conditions: in quiet and in two different noise conditions. Participants with hearing impairment spent more time processing sentences, even at high levels of speech intelligibility. In addition, the relationship between the proposed online measure and listener-specific factors, such as hearing aid use and cognitive abilities, was investigated. Longer processing durations were measured for participants with hearing impairment who were not accustomed to using a hearing aid. Moreover, significant correlations were found between sentence processing duration and individual cognitive abilities (such as working memory capacity or susceptibility to interference). These findings are discussed with respect to audiological applications.
Affiliation(s)
- Dorothea Wendt
- Medizinische Physik, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Hearing Systems, Department of Electrical Engineering, Technical University of Denmark, Kgs. Lyngby, Denmark
- Birger Kollmeier
- Medizinische Physik, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4all, Oldenburg, Germany
- Thomas Brand
- Medizinische Physik, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4all, Oldenburg, Germany
113
Kilman L, Zekveld A, Hällgren M, Rönnberg J. Native and Non-native Speech Perception by Hearing-Impaired Listeners in Noise- and Speech Maskers. Trends Hear 2015; 19:2331216515579127. [PMID: 25910504] [PMCID: PMC4409938] [DOI: 10.1177/2331216515579127]
Abstract
This study evaluated how hearing-impaired listeners perceive native (Swedish) and nonnative (English) speech in the presence of noise- and speech maskers. Speech reception thresholds were measured for four different masker types for each target language. The maskers consisted of stationary and fluctuating noise and two-talker babble in Swedish and English. Twenty-three hearing-impaired native Swedish listeners participated, aged between 28 and 65 years. The participants also performed cognitive tests of working memory capacity in Swedish and English, nonverbal reasoning, and an English proficiency test. Results indicated that the speech maskers were more interfering than the noise maskers in both target languages. The larger need for phonetic and semantic cues in a nonnative language makes a stationary masker relatively more challenging than a fluctuating-noise masker. Better hearing acuity (pure tone average) was associated with better perception of the target speech in Swedish, and better English proficiency was associated with better speech perception in English. Larger working memory and better pure tone averages were related to the better perception of speech masked with fluctuating noise in the nonnative language. This suggests that both are relevant in highly taxing conditions. A large variance in performance between the listeners was observed, especially for speech perception in the nonnative language.
Affiliation(s)
- Lisa Kilman
- Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Adriana Zekveld
- Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Audiology/ENT & EMGO+ Institute for Health and Care Research, VU University Medical Center, Amsterdam, the Netherlands
- Mathias Hällgren
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Otorhinolaryngology/Section of Audiology, Linköping University Hospital, Linköping, Sweden
- Jerker Rönnberg
- Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
114
Banks B, Gowen E, Munro KJ, Adank P. Cognitive predictors of perceptual adaptation to accented speech. J Acoust Soc Am 2015; 137:2015-2024. [PMID: 25920852] [DOI: 10.1121/1.4916265]
Abstract
The present study investigated the effects of inhibition, vocabulary knowledge, and working memory on perceptual adaptation to accented speech. One hundred young, normal-hearing adults listened to sentences spoken in a constructed, unfamiliar accent presented in speech-shaped background noise. Speech Reception Thresholds (SRTs) corresponding to 50% speech recognition accuracy provided a measurement of adaptation to the accented speech. Stroop, vocabulary knowledge, and working memory tests were performed to measure cognitive ability. Participants adapted to the unfamiliar accent as revealed by a decrease in SRTs over time. Better inhibition (lower Stroop scores) predicted greater and faster adaptation to the unfamiliar accent. Vocabulary knowledge predicted better recognition of the unfamiliar accent, while working memory had a smaller, indirect effect on speech recognition mediated by vocabulary score. Results support a top-down model for successful adaptation to, and recognition of, accented speech; they add to recent theories that allocate a prominent role for executive function to effective speech comprehension in adverse listening conditions.
Affiliation(s)
- Briony Banks
- School of Psychological Sciences, University of Manchester, Manchester M13 9PL, United Kingdom
- Emma Gowen
- Faculty of Life Sciences, University of Manchester, Manchester M13 9PL, United Kingdom
- Kevin J Munro
- School of Psychological Sciences, University of Manchester, Manchester M13 9PL, United Kingdom
- Patti Adank
- School of Psychological Sciences, University of Manchester, Manchester M13 9PL, United Kingdom
115
Ellis RJ, Munro KJ. Predictors of aided speech recognition, with and without frequency compression, in older adults. Int J Audiol 2015; 54:467-75. [DOI: 10.3109/14992027.2014.996825]
116
Meister H, Rählmann S, Walger M, Margolf-Hackl S, Kießling J. Hearing aid fitting in older persons with hearing impairment: the influence of cognitive function, age, and hearing loss on hearing aid benefit. Clin Interv Aging 2015; 10:435-43. [PMID: 25709417] [PMCID: PMC4330028] [DOI: 10.2147/cia.s77096]
Abstract
PURPOSE To examine the association of cognitive function, age, and hearing loss with clinically assessed hearing aid benefit in older hearing-impaired persons. METHODS Hearing aid benefit was assessed using objective measures regarding speech recognition in quiet and noisy environments as well as a subjective measure reflecting everyday situations captured using a standardized questionnaire. A broad range of general cognitive functions such as attention, memory, and intelligence were determined using different neuropsychological tests. Linear regression analyses were conducted with the outcome of the neuropsychological tests as well as age and hearing loss as independent variables and the benefit measures as dependent variables. Thirty experienced older hearing aid users with typical age-related hearing impairment participated. RESULTS Most of the benefit measures revealed that the participants obtained significant improvement with their hearing aids. Regression models showed a significant relationship between a fluid intelligence measure and objective hearing aid benefit. When individual hearing thresholds were considered as an additional independent variable, hearing loss was the only significant contributor to the benefit models. Lower cognitive capacity - as determined by the fluid intelligence measure - was significantly associated with greater hearing loss. Subjective benefit could not be predicted by any of the variables considered. CONCLUSION The present study does not give evidence that hearing aid benefit is critically associated with cognitive function in experienced hearing aid users. However, it was found that lower fluid intelligence scores were related to higher hearing thresholds. Since greater hearing loss was associated with a greater objective benefit, these results strongly support the advice of using hearing aids regardless of age and cognitive function to counter hearing loss and the adverse effects of age-related hearing impairment. Still, individual cognitive capacity might be relevant for hearing aid benefit during an initial phase of hearing aid provision if acclimatization has not yet taken place.
Affiliation(s)
- Hartmut Meister
- Jean Uhrmacher Institute for Clinical ENT-Research, University of Cologne, Cologne, Germany
- Sebastian Rählmann
- Jean Uhrmacher Institute for Clinical ENT-Research, University of Cologne, Cologne, Germany
- Martin Walger
- Department of Otorhinolaryngology, Head and Neck Surgery, University of Cologne, Cologne, Germany
- Sabine Margolf-Hackl
- Department of Otorhinolaryngology, Head and Neck Surgery, University of Giessen, Giessen, Germany
- Jürgen Kießling
- Department of Otorhinolaryngology, Head and Neck Surgery, University of Giessen, Giessen, Germany
117
Rönnberg N, Rudner M, Lunner T, Stenfelt S. Memory performance on the Auditory Inference Span Test is independent of background noise type for young adults with normal hearing at high speech intelligibility. Front Psychol 2014; 5:1490. [PMID: 25566159] [PMCID: PMC4273615] [DOI: 10.3389/fpsyg.2014.01490]
Abstract
Listening in noise is often perceived to be effortful. This is partly because cognitive resources are engaged in separating the target signal from background noise, leaving fewer resources for storage and processing of the content of the message in working memory. The Auditory Inference Span Test (AIST) is designed to assess listening effort by measuring the ability to maintain and process heard information. The aim of this study was to use AIST to investigate the effect of background noise types and signal-to-noise ratio (SNR) on listening effort, as a function of working memory capacity (WMC) and updating ability (UA). The AIST was administered in three types of background noise: steady-state speech-shaped noise, amplitude modulated speech-shaped noise, and unintelligible speech. Three SNRs targeting 90% speech intelligibility or better were used in each of the three noise types, giving nine different conditions. The reading span test assessed WMC, while UA was assessed with the letter memory test. Twenty young adults with normal hearing participated in the study. Results showed that AIST performance was not influenced by noise type at the same intelligibility level, but became worse with worse SNR when background noise was speech-like. Performance on AIST also decreased with increasing memory load level. Correlations between AIST performance and the cognitive measurements suggested that WMC is of more importance for listening when SNRs are worse, while UA is of more importance for listening in easier SNRs. The results indicated that in young adults with normal hearing, the effort involved in listening in noise at high intelligibility levels is independent of the noise type. However, when noise is speech-like and intelligibility decreases, listening effort increases, probably due to extra demands on cognitive resources added by the informational masking created by the speech fragments and vocal sounds in the background noise.
Affiliation(s)
- Niklas Rönnberg
- Technical Audiology, Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Mary Rudner
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Thomas Lunner
- Technical Audiology, Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Oticon Research Centre Eriksholm, Snekkersten, Denmark
- Stefan Stenfelt
- Technical Audiology, Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
118
Neher T. Relating hearing loss and executive functions to hearing aid users' preference for, and speech recognition with, different combinations of binaural noise reduction and microphone directionality. Front Neurosci 2014; 8:391. [PMID: 25538547] [PMCID: PMC4255521] [DOI: 10.3389/fnins.2014.00391]
Abstract
Knowledge of how executive functions relate to preferred hearing aid (HA) processing is sparse and seemingly inconsistent with related knowledge for speech recognition outcomes. This study thus aimed to find out if (1) performance on a measure of reading span (RS) is related to preferred binaural noise reduction (NR) strength, (2) similar relations exist for two different, non-verbal measures of executive function, (3) pure-tone average hearing loss (PTA), signal-to-noise ratio (SNR), and microphone directionality (DIR) also influence preferred NR strength, and (4) preference and speech recognition outcomes are similar. Sixty elderly HA users took part. Six HA conditions consisting of omnidirectional or cardioid microphones followed by inactive, moderate, or strong binaural NR as well as linear amplification were tested. Outcome was assessed at fixed SNRs using headphone simulations of a frontal target talker in a busy cafeteria. Analyses showed positive effects of active NR and DIR on preference, and negative and positive effects of, respectively, strong NR and DIR on speech recognition. Also, while moderate NR was the most preferred NR setting overall, preference for strong NR increased with SNR. No relation between RS and preference was found. However, larger PTA was related to weaker preference for inactive NR and stronger preference for strong NR for both microphone modes. Equivalent (but weaker) relations between worse performance on one non-verbal measure of executive function and the HA conditions without DIR were found. For speech recognition, there were relations between HA condition, PTA, and RS, but their pattern differed from that for preference. Altogether, these results indicate that, while moderate NR works well in general, a notable proportion of HA users prefer stronger NR. Furthermore, PTA and executive functions can account for some of the variability in preference for, and speech recognition with, different binaural NR and DIR settings.
Affiliation(s)
- Tobias Neher
- Medical Physics and Cluster of Excellence Hearing4all, Oldenburg University, Germany
119
Ng EHN, Classon E, Larsby B, Arlinger S, Lunner T, Rudner M, Rönnberg J. Dynamic relation between working memory capacity and speech recognition in noise during the first 6 months of hearing aid use. Trends Hear 2014; 18:2331216514558688. [PMID: 25421088] [PMCID: PMC4271770] [DOI: 10.1177/2331216514558688]
Abstract
The present study aimed to investigate the changing relationship between aided speech recognition and cognitive function during the first 6 months of hearing aid use. Twenty-seven first-time hearing aid users with symmetrical mild to moderate sensorineural hearing loss were recruited. Aided speech recognition thresholds in noise were obtained in the hearing aid fitting session as well as at 3 and 6 months postfitting. Cognitive abilities were assessed using a reading span test, which is a measure of working memory capacity, and a cognitive test battery. Results showed a significant correlation between reading span and speech reception threshold during the hearing aid fitting session. This relation was significantly weakened over the first 6 months of hearing aid use. Multiple regression analysis showed that reading span was the main predictor of speech recognition thresholds in noise when hearing aids were first fitted, but that the pure-tone average hearing threshold was the main predictor 6 months later. One way of explaining the results is that working memory capacity plays a more important role in speech recognition in noise initially rather than after 6 months of use. We propose that new hearing aid users engage working memory capacity to recognize unfamiliar processed speech signals because the phonological form of these signals cannot be automatically matched to phonological representations in long-term memory. As familiarization proceeds, the mismatch effect is alleviated, and the engagement of working memory capacity is reduced.
Affiliation(s)
- Elaine H N Ng
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Sweden; Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Elisabet Classon
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Sweden; Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Birgitta Larsby
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Sweden; Department of Clinical and Experimental Medicine, Linköping University, Sweden
- Stig Arlinger
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Sweden; Department of Clinical and Experimental Medicine, Linköping University, Sweden
- Thomas Lunner
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Sweden; Department of Behavioural Sciences and Learning, Linköping University, Sweden; Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Mary Rudner
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Sweden; Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Jerker Rönnberg
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Sweden; Department of Behavioural Sciences and Learning, Linköping University, Sweden
120
Schoof T, Rosen S. The role of auditory and cognitive factors in understanding speech in noise by normal-hearing older listeners. Front Aging Neurosci 2014; 6:307. [PMID: 25429266] [PMCID: PMC4228854] [DOI: 10.3389/fnagi.2014.00307]
Abstract
Normal-hearing older adults often experience increased difficulties understanding speech in noise. In addition, they benefit less from amplitude fluctuations in the masker. These difficulties may be attributed to an age-related auditory temporal processing deficit. However, a decline in cognitive processing likely also plays an important role. This study examined the relative contribution of declines in both auditory and cognitive processing to the speech in noise performance in older adults. Participants included older (60–72 years) and younger (19–29 years) adults with normal hearing. Speech reception thresholds (SRTs) were measured for sentences in steady-state speech-shaped noise (SS), 10-Hz sinusoidally amplitude-modulated speech-shaped noise (AM), and two-talker babble. In addition, auditory temporal processing abilities were assessed by measuring thresholds for gap, amplitude-modulation, and frequency-modulation detection. Measures of processing speed, attention, working memory, Text Reception Threshold (a visual analog of the SRT), and reading ability were also obtained. Of primary interest was the extent to which the various measures correlate with listeners' abilities to perceive speech in noise. SRTs were significantly worse for older adults in the presence of two-talker babble but not SS and AM noise. In addition, older adults showed some cognitive processing declines (working memory and processing speed) although no declines in auditory temporal processing. However, working memory and processing speed did not correlate significantly with SRTs in babble. Despite declines in cognitive processing, normal-hearing older adults do not necessarily have problems understanding speech in noise as SRTs in SS and AM noise did not differ significantly between the two groups. Moreover, while older adults had higher SRTs in two-talker babble, this could not be explained by age-related cognitive declines in working memory or processing speed.
Affiliation(s)
- Tim Schoof
- Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Stuart Rosen
- Speech, Hearing and Phonetic Sciences, University College London, London, UK
121
Kilman L, Zekveld A, Hällgren M, Rönnberg J. The influence of non-native language proficiency on speech perception performance. Front Psychol 2014; 5:651. [PMID: 25071630] [PMCID: PMC4078910] [DOI: 10.3389/fpsyg.2014.00651]
Abstract
The present study examined to what extent proficiency in a non-native language influences speech perception in noise. We explored how English proficiency affected native (Swedish) and non-native (English) speech perception in four speech reception threshold (SRT) conditions, including two energetic maskers (stationary and fluctuating noise) and two informational maskers (two-talker babble in Swedish and in English). Twenty-three normal-hearing native Swedish listeners participated, aged between 28 and 64 years. The participants also performed standardized tests of English proficiency, non-verbal reasoning, and working memory capacity. Our approach, with its focus on proficiency and the assessment of external as well as internal, listener-related factors, allowed us to examine which variables explained intra- and interindividual differences in native and non-native speech perception performance. The main result was that for the non-native target, the level of English proficiency is a decisive factor for speech intelligibility in noise. High English proficiency improved performance in all four conditions when the target language was English. The informational maskers interfered more with perception than the energetic maskers did, specifically for the non-native target. The study also confirmed that SRTs were better when the target language was native rather than non-native.
Affiliation(s)
- Lisa Kilman
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Adriana Zekveld
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Department of Audiology/ENT, EMGO Institute for Health and Care Research, VU University Medical Center, Amsterdam, Netherlands
- Mathias Hällgren
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Department of Otorhinolaryngology, Section of Audiology, Linköping University Hospital, Linköping, Sweden
- Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
122
Moradi S, Lidestam B, Saremi A, Rönnberg J. Gated auditory speech perception: effects of listening conditions and cognitive capacity. Front Psychol 2014; 5:531. [PMID: 24926274] [PMCID: PMC4040882] [DOI: 10.3389/fpsyg.2014.00531]
Abstract
This study aimed to measure the initial portion of signal required for the correct identification of auditory speech stimuli (or isolation points, IPs) in silence and noise, and to investigate the relationships between auditory and cognitive functions in silence and noise. Twenty-one university students were presented with auditory stimuli in a gating paradigm for the identification of consonants, words, and final words in highly predictable and low predictable sentences. The Hearing in Noise Test (HINT), the reading span test, and the Paced Auditory Serial Attention Test were also administered to measure speech-in-noise ability, working memory and attentional capacities of the participants, respectively. The results showed that noise delayed the identification of consonants, words, and final words in highly predictable and low predictable sentences. HINT performance correlated with working memory and attentional capacities. In the noise condition, there were correlations between HINT performance, cognitive task performance, and the IPs of consonants and words. In the silent condition, there were no correlations between auditory and cognitive tasks. In conclusion, a combination of hearing-in-noise ability, working memory capacity, and attention capacity is needed for the early identification of consonants and words in noise.
Affiliation(s)
- Shahram Moradi
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Björn Lidestam
- Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Amin Saremi
- Division of Technical Audiology, Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden; Cluster of Excellence “Hearing4all”, Department for Neuroscience, Computational Neuroscience Group, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Jerker Rönnberg
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
123
|
Cognitive spare capacity and speech communication: a narrative overview. BIOMED RESEARCH INTERNATIONAL 2014; 2014:869726. [PMID: 24971355 PMCID: PMC4058272 DOI: 10.1155/2014/869726] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 02/10/2014] [Accepted: 05/13/2014] [Indexed: 01/27/2023]
Abstract
Background noise can make speech communication tiring and cognitively taxing, especially for individuals with hearing impairment. It is now well established that better working memory capacity is associated with better ability to understand speech under adverse conditions as well as better ability to benefit from the advanced signal processing in modern hearing aids. Recent work has shown that although such processing cannot overcome hearing handicap, it can increase cognitive spare capacity, that is, the ability to engage in higher level processing of speech. This paper surveys recent work on cognitive spare capacity and suggests new avenues of investigation.
Collapse
|
124
|
Mishra S, Stenfelt S, Lunner T, Rönnberg J, Rudner M. Cognitive spare capacity in older adults with hearing loss. Front Aging Neurosci 2014; 6:96. [PMID: 24904409 PMCID: PMC4033040 DOI: 10.3389/fnagi.2014.00096] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2013] [Accepted: 04/29/2014] [Indexed: 12/28/2022] Open
Abstract
Individual differences in working memory capacity (WMC) are associated with speech recognition in adverse conditions, reflecting the need to maintain and process speech fragments until lexical access can be achieved. When working memory resources are engaged in unlocking the lexicon, there is less Cognitive Spare Capacity (CSC) available for higher level processing of speech. CSC is essential for interpreting the linguistic content of speech input and preparing an appropriate response, that is, engaging in conversation. Previously, we showed, using a Cognitive Spare Capacity Test (CSCT) that in young adults with normal hearing, CSC was not generally related to WMC and that when CSC decreased in noise it could be restored by visual cues. In the present study, we investigated CSC in 24 older adults with age-related hearing loss, by administering the CSCT and a battery of cognitive tests. We found generally reduced CSC in older adults with hearing loss compared to the younger group in our previous study, probably because they had poorer cognitive skills and deployed them differently. Importantly, CSC was not reduced in the older group when listening conditions were optimal. Visual cues improved CSC more for this group than for the younger group in our previous study. CSC of older adults with hearing loss was not generally related to WMC but it was consistently related to episodic long term memory, suggesting that the efficiency of this processing bottleneck is important for executive processing of speech in this group.
Collapse
Affiliation(s)
- Sushmit Mishra
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | - Stefan Stenfelt
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
| | - Thomas Lunner
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden; Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
| | - Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | - Mary Rudner
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| |
Collapse
|
125
|
Zekveld AA, Rudner M, Kramer SE, Lyzenga J, Rönnberg J. Cognitive processing load during listening is reduced more by decreasing voice similarity than by increasing spatial separation between target and masker speech. Front Neurosci 2014; 8:88. [PMID: 24808818 PMCID: PMC4010736 DOI: 10.3389/fnins.2014.00088] [Citation(s) in RCA: 53] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2013] [Accepted: 04/07/2014] [Indexed: 11/24/2022] Open
Abstract
We investigated changes in speech recognition and cognitive processing load due to the masking release attributable to decreasing similarity between target and masker speech. This was achieved by using masker voices with either the same (female) gender as the target speech or different gender (male) and/or by spatially separating the target and masker speech using HRTFs. We assessed the relation between the signal-to-noise ratio required for 50% sentence intelligibility, the pupil response and cognitive abilities. We hypothesized that the pupil response, a measure of cognitive processing load, would be larger for co-located maskers and for same-gender compared to different-gender maskers. We further expected that better cognitive abilities would be associated with better speech perception and larger pupil responses as the allocation of larger capacity may result in more intense mental processing. In line with previous studies, the performance benefit from different-gender compared to same-gender maskers was larger for co-located masker signals. The performance benefit of spatially-separated maskers was larger for same-gender maskers. The pupil response was larger for same-gender than for different-gender maskers, but was not reduced by spatial separation. We observed associations between better perception performance and better working memory, better information updating, and better executive abilities when applying no corrections for multiple comparisons. The pupil response was not associated with cognitive abilities. Thus, although both gender and location differences between target and masker facilitate speech perception, only gender differences lower cognitive processing load. Presenting a more dissimilar masker may facilitate target-masker separation at a later (cognitive) processing stage than increasing the spatial separation between the target and masker. The pupil response provides information about speech perception that complements intelligibility data.
Collapse
Affiliation(s)
- Adriana A Zekveld
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre for Hearing and Deafness Research, The Swedish Institute for Disability Research, Linköping and Örebro Universities, Linköping, Sweden; Section Audiology, Department of Otolaryngology-Head and Neck Surgery and EMGO Institute for Health and Care Research, VU University Medical Center, Amsterdam, Netherlands
| | - Mary Rudner
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre for Hearing and Deafness Research, The Swedish Institute for Disability Research, Linköping and Örebro Universities, Linköping, Sweden
| | - Sophia E Kramer
- Section Audiology, Department of Otolaryngology-Head and Neck Surgery and EMGO Institute for Health and Care Research, VU University Medical Center, Amsterdam, Netherlands
| | - Johannes Lyzenga
- Section Audiology, Department of Otolaryngology-Head and Neck Surgery and EMGO Institute for Health and Care Research, VU University Medical Center, Amsterdam, Netherlands
| | - Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre for Hearing and Deafness Research, The Swedish Institute for Disability Research, Linköping and Örebro Universities, Linköping, Sweden
| |
Collapse
|
126
|
Classon E, Löfkvist U, Rudner M, Rönnberg J. Verbal fluency in adults with postlingually acquired hearing impairment. SPEECH LANGUAGE AND HEARING 2014. [DOI: 10.1179/205057113x13781290153457] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
|
127
|
Michalek AMP, Watson SM, Ash I, Ringleb S, Raymer A. Effects of noise and audiovisual cues on speech processing in adults with and without ADHD. Int J Audiol 2014; 53:145-52. [DOI: 10.3109/14992027.2013.866282] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
|
128
|
Abstract
In the present study, we investigated the effect of initial-consonant intensity on lexical decisions. Amplification was selectively applied to the initial consonant of monosyllabic words. In Experiment 1, young adults with normal hearing completed an auditory lexical decision task with words that either had the natural or amplified initial consonant. The results demonstrated faster reaction times for amplified words when listeners randomly heard words spoken by two unfamiliar talkers. The same pattern of results was found when comparing words in which the initial consonant was naturally higher in intensity than the low-intensity consonants, across all amplification conditions. In Experiment 2, listeners were familiarized with the talkers and tested on each talker in separate blocks, to minimize talker uncertainty. The effect of initial-consonant intensity was reversed, with faster reaction times being obtained for natural than for amplified consonants. In Experiment 3, nonlinguistic processing of the amplitude envelope was assessed using noise modulated by the word envelope. The results again demonstrated faster reaction times for natural than for amplified words. Across all experiments, the results suggest that the acoustic-phonetic structure of the word influences the speed of lexical decisions and interacts with the familiarity and predictability of the talker. In unfamiliar and less-predictable listening contexts, initial-consonant amplification increases lexical decision speed, even if sufficient audibility is available without amplification. In familiar contexts with adequate audibility, an acoustic match of the stimulus with the stored mental representation of the word is more important, possibly along with general auditory properties related to loudness perception.
Collapse
|
129
|
Gilbert RC, Chandrasekaran B, Smiljanic R. Recognition memory in noise for speech of varying intelligibility. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2014; 135:389-99. [PMID: 24437779 DOI: 10.1121/1.4838975] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
This study investigated the extent to which noise impacts normal-hearing young adults' speech processing of sentences that vary in intelligibility. Intelligibility and recognition memory in noise were examined for conversational and clear speech sentences recorded in quiet (quiet speech, QS) and in response to the environmental noise (noise-adapted speech, NAS). Results showed that (1) increased intelligibility through conversational-to-clear speech modifications led to improved recognition memory and (2) NAS presented a more naturalistic speech adaptation to noise compared to QS, leading to more accurate word recognition and enhanced sentence recognition memory. These results demonstrate that acoustic-phonetic modifications implemented in listener-oriented speech enhance speech-in-noise processing beyond word recognition. Effortful speech processing in challenging listening environments can thus be improved by speaking style adaptations on the part of the talker. In addition to enhanced intelligibility, a substantial improvement in recognition memory can be achieved through speaker adaptations to the environment and to the listener when in adverse conditions.
Collapse
Affiliation(s)
- Rachael C Gilbert
- Department of Linguistics, University of Texas at Austin, Austin, Texas 78712
| | - Bharath Chandrasekaran
- Department of Communication Sciences and Disorders, University of Texas at Austin, Austin, Texas 78712
| | - Rajka Smiljanic
- Department of Linguistics, University of Texas at Austin, Austin, Texas 78712
| |
Collapse
|
130
|
Mishra S, Lunner T, Stenfelt S, Rönnberg J, Rudner M. Seeing the talker's face supports executive processing of speech in steady state noise. Front Syst Neurosci 2013; 7:96. [PMID: 24324411 PMCID: PMC3840300 DOI: 10.3389/fnsys.2013.00096] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2013] [Accepted: 11/09/2013] [Indexed: 11/21/2022] Open
Abstract
Listening to speech in noise depletes cognitive resources, affecting speech processing. The present study investigated how remaining resources or cognitive spare capacity (CSC) can be deployed by young adults with normal hearing. We administered a test of CSC (CSCT; Mishra et al., 2013) along with a battery of established cognitive tests to 20 participants with normal hearing. In the CSCT, lists of two-digit numbers were presented with and without visual cues in quiet, as well as in steady-state and speech-like noise at a high intelligibility level. In low load conditions, two numbers were recalled according to instructions inducing executive processing (updating, inhibition), and in high load conditions the participants were additionally instructed to recall one extra number, which was always the first item in the list. In line with previous findings, results showed that CSC was sensitive to memory load and executive function but generally not related to working memory capacity (WMC). Furthermore, CSCT scores in quiet were lowered by visual cues, probably due to distraction. In steady-state noise, the presence of visual cues improved CSCT scores, probably by enabling better encoding. Contrary to our expectation, CSCT performance was disrupted more in steady-state than speech-like noise, although only without visual cues, possibly because selective attention could be used to ignore the speech-like background and provide an enriched representation of target items in working memory similar to that obtained in quiet. This interpretation is supported by a consistent association between CSCT scores and updating skills.
Collapse
Affiliation(s)
- Sushmit Mishra
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
| | | | | | | | | |
Collapse
|
131
|
Ng EHN, Rudner M, Lunner T, Rönnberg J. Relationships between self-report and cognitive measures of hearing aid outcome. SPEECH LANGUAGE AND HEARING 2013. [PMID: 26213622 PMCID: PMC4500453 DOI: 10.1179/205057113x13782848890774] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
Abstract
The present study examined the relationship between cognitive measures and self-report hearing aid outcome. A sentence-final word identification and recall (SWIR) test was used to investigate how hearing aid use may relate to experienced explicit cognitive processing. A visually based cognitive test battery was also administered. To measure self-report hearing aid outcome, the International Outcome Inventory – Hearing Aids (IOI-HA) and the Speech, Spatial and Qualities of Hearing Scale (SSQ) were employed. Twenty-six experienced hearing aid users (mean age of 59 years) with symmetrical moderate-to-moderately severe sensorineural hearing loss were recruited. Free recall performance in the SWIR test correlated negatively with item 3 of IOI-HA, which measures residual difficulty in adverse listening situations. Cognitive abilities related to verbal information processing were correlated positively with self-reported hearing aid use and overall success. The present study showed that reported residual difficulty with hearing aids may relate to experienced explicit processing in difficult listening conditions, such that individuals with better cognitive capacity tended to report more remaining difficulty in challenging listening situations. The possibility of using cognitive measures to predict hearing aid outcome in real life should be explored in future research.
Collapse
Affiliation(s)
- Elaine Hoi Ning Ng
- Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Swedish Institute for Disability Research, Linköping University, Sweden
| | - Mary Rudner
- Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Swedish Institute for Disability Research, Linköping University, Sweden
| | - Thomas Lunner
- Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Swedish Institute for Disability Research, Linköping University, Sweden; Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark; and Department of Clinical and Experimental Medicine, Linköping University, Sweden
| | - Jerker Rönnberg
- Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Swedish Institute for Disability Research, Linköping University, Sweden
| |
Collapse
|
132
|
Zekveld AA, George ELJ, Houtgast T, Kramer SE. Cognitive abilities relate to self-reported hearing disability. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2013; 56:1364-1372. [PMID: 23838985 DOI: 10.1044/1092-4388(2013/12-0268)] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
PURPOSE In this explorative study, the authors investigated the relationship between auditory and cognitive abilities and self-reported hearing disability. METHOD Thirty-two adults with mild to moderate hearing loss completed the Amsterdam Inventory for Auditory Disability and Handicap (AIADH; Kramer, Kapteyn, Festen, & Tobi, 1996) and performed the Text Reception Threshold (TRT; Zekveld, George, Kramer, Goverts, & Houtgast, 2007) test as well as tests of spatial working memory (SWM) and visual sustained attention. Regression analyses examined the predictive value of age, hearing thresholds (pure-tone averages [PTAs]), speech perception in noise (speech reception thresholds in noise [SRTNs]), and the cognitive tests for the 5 AIADH factors. RESULTS Besides the variance explained by age, PTA, and SRTN, cognitive abilities were related to each hearing factor. The reported difficulties with sound detection and speech perception in quiet were less severe for participants with higher age, lower PTAs, and better TRTs. Fewer sound localization and speech perception in noise problems were reported by participants with better SRTNs and smaller SWM. Fewer sound discrimination difficulties were reported by subjects with better SRTNs and TRTs and smaller SWM. CONCLUSIONS The results suggest a general role of the ability to read partly masked text in subjective hearing. Large working memory was associated with more reported hearing difficulties. This study shows that besides auditory variables and age, cognitive abilities are related to self-reported hearing disability.
Collapse
|
133
|
Zekveld AA, Rudner M, Johnsrude IS, Rönnberg J. The effects of working memory capacity and semantic cues on the intelligibility of speech in noise. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2013; 134:2225-2234. [PMID: 23967952 DOI: 10.1121/1.4817926] [Citation(s) in RCA: 68] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
This study examined how semantically related information facilitates the intelligibility of spoken sentences in the presence of masking sound, and how this facilitation is influenced by masker type and by individual differences in cognitive functioning. Dutch sentences were masked by stationary noise, fluctuating noise, or an interfering talker. Each sentence was preceded by a text cue; cues were either three words that were semantically related to the sentence or three unpronounceable nonwords. Speech reception thresholds were adaptively measured. Additional measures included working memory capacity (reading span and size comparison span), linguistic closure ability (text reception threshold), and delayed sentence recognition. Word cues facilitated speech perception in noise similarly for all masker types. Cue benefit was related to reading span performance when the masker was interfering speech, but not when other maskers were used, and it did not correlate with text reception threshold or size comparison span. Better reading span performance was furthermore associated with enhanced delayed recognition of sentences preceded by word relative to nonword cues, across masker types. The results suggest that working memory capacity is associated with release from informational masking by semantically related information, and additionally with the encoding, storage, or retrieval of speech content in memory.
Collapse
Affiliation(s)
- Adriana A Zekveld
- Linnaeus Centre HEAD, Department of Behavioral Sciences and Learning, Linköping University, SE 581 83 Linköping, Sweden.
| | | | | | | |
Collapse
|
134
|
Mishra S, Lunner T, Stenfelt S, Rönnberg J, Rudner M. Visual information can hinder working memory processing of speech. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2013; 56:1120-1132. [PMID: 23785180 DOI: 10.1044/1092-4388(2012/12-0033)] [Citation(s) in RCA: 36] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
PURPOSE The purpose of the present study was to evaluate the new Cognitive Spare Capacity Test (CSCT), which measures aspects of working memory capacity for heard speech in the audiovisual and auditory-only modalities of presentation. METHOD In Experiment 1, 20 young adults with normal hearing performed the CSCT and an independent battery of cognitive tests. In the CSCT, they listened to and recalled 2-digit numbers according to instructions inducing executive processing at 2 different memory loads. In Experiment 2, 10 participants performed a less executively demanding free recall task using the same stimuli. RESULTS CSCT performance demonstrated an effect of memory load and was associated with independent measures of executive function and inference making but not with general working memory capacity. Audiovisual presentation was associated with lower CSCT scores but higher free recall performance scores. CONCLUSIONS CSCT is an executively challenging test of the ability to process heard speech. It captures cognitive aspects of listening related to sentence comprehension that are quantitatively and qualitatively different from working memory capacity. Visual information provided in the audiovisual modality of presentation can hinder executive processing in working memory of nondegraded speech material.
Collapse
Affiliation(s)
- Sushmit Mishra
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden.
| | | | | | | | | |
Collapse
|
135
|
Rönnberg J, Lunner T, Zekveld A, Sörqvist P, Danielsson H, Lyxell B, Dahlström O, Signoret C, Stenfelt S, Pichora-Fuller MK, Rudner M. The Ease of Language Understanding (ELU) model: theoretical, empirical, and clinical advances. Front Syst Neurosci 2013; 7:31. [PMID: 23874273 PMCID: PMC3710434 DOI: 10.3389/fnsys.2013.00031] [Citation(s) in RCA: 566] [Impact Index Per Article: 51.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2013] [Accepted: 06/24/2013] [Indexed: 12/28/2022] Open
Abstract
Working memory is important for online language processing during conversation. We use it to maintain relevant information, to inhibit or ignore irrelevant information, and to attend to conversation selectively. Working memory helps us to keep track of and actively participate in conversation, including taking turns and following the gist. This paper examines the Ease of Language Understanding model (i.e., the ELU model, Rönnberg, 2003; Rönnberg et al., 2008) in light of new behavioral and neural findings concerning the role of working memory capacity (WMC) in uni-modal and bimodal language processing. The new ELU model is a meaning prediction system that depends on phonological and semantic interactions in rapid implicit and slower explicit processing mechanisms that both depend on WMC albeit in different ways. It is based on findings that address the relationship between WMC and (a) early attention processes in listening to speech, (b) signal processing in hearing aids and its effects on short-term memory, (c) inhibition of speech maskers and its effect on episodic long-term memory, (d) the effects of hearing impairment on episodic and semantic long-term memory, and finally, (e) listening effort. New predictions and clinical implications are outlined. Comparisons with other WMC and speech perception models are made.
Collapse
Affiliation(s)
- Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | | | | | | | | | | | | | | | | | | | | |
Collapse
|
137
|
Besser J, Koelewijn T, Zekveld AA, Kramer SE, Festen JM. How linguistic closure and verbal working memory relate to speech recognition in noise--a review. Trends Amplif 2013; 17:75-93. [PMID: 23945955 PMCID: PMC4070613 DOI: 10.1177/1084713813495459] [Citation(s) in RCA: 92] [Impact Index Per Article: 8.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
Abstract
The ability to recognize masked speech, commonly measured with a speech reception threshold (SRT) test, is associated with cognitive processing abilities. Two cognitive factors frequently assessed in speech recognition research are the capacity of working memory (WM), measured by means of a reading span (Rspan) or listening span (Lspan) test, and the ability to read masked text (linguistic closure), measured by the text reception threshold (TRT). The current article provides a review of recent hearing research that examined the relationship of TRT and WM span to SRTs in various maskers. Furthermore, modality differences in WM capacity assessed with the Rspan compared to the Lspan test were examined and related to speech recognition abilities in an experimental study with young adults with normal hearing (NH). Span scores were strongly associated with each other, but were higher in the auditory modality. The results of the reviewed studies suggest that TRT and WM span are related to each other, but differ in their relationships with SRT performance. In NH adults of middle age or older, both TRT and Rspan were associated with SRTs in speech maskers, whereas TRT better predicted speech recognition in fluctuating nonspeech maskers. The associations with SRTs in steady-state noise were inconclusive for both measures. WM span was positively related to benefit from contextual information in speech recognition, but better TRTs related to less interference from unrelated cues. Data for individuals with impaired hearing are limited, but larger WM span seems to give a general advantage in various listening situations.
Collapse
Affiliation(s)
- Jana Besser
- VU University Medical Center, Amsterdam, Netherlands
| | | | - Adriana A. Zekveld
- VU University Medical Center, Amsterdam, Netherlands
- The Swedish Institute for Disability Research, Sweden
- Linköping University, Linköping, Sweden
| | | | | |
Collapse
|
138
|
Kramer SE, Lorens A, Coninx F, Zekveld AA, Piotrowska A, Skarzynski H. Processing load during listening: The influence of task characteristics on the pupil response. LANGUAGE AND COGNITIVE PROCESSES 2013. [DOI: 10.1080/01690965.2011.642267] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
|
139
|
Ng EHN, Rudner M, Lunner T, Pedersen MS, Rönnberg J. Effects of noise and working memory capacity on memory processing of speech for hearing-aid users. Int J Audiol 2013; 52:433-41. [PMID: 23550584 DOI: 10.3109/14992027.2013.776181] [Citation(s) in RCA: 142] [Impact Index Per Article: 12.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
OBJECTIVES It has been shown that noise reduction algorithms can reduce the negative effects of noise on memory processing in persons with normal hearing. The objective of the present study was to investigate whether a similar effect can be obtained for persons with hearing impairment and whether such an effect is dependent on individual differences in working memory capacity. DESIGN A sentence-final word identification and recall (SWIR) test was conducted in two noise backgrounds with and without noise reduction as well as in quiet. Working memory capacity was measured using a reading span (RS) test. STUDY SAMPLE Twenty-six experienced hearing-aid users with moderate to moderately severe sensorineural hearing loss. RESULTS Noise impaired recall performance. Competing speech disrupted memory performance more than speech-shaped noise. For late list items the disruptive effect of the competing speech background was virtually cancelled out by noise reduction for persons with high working memory capacity. CONCLUSIONS Noise reduction can reduce the adverse effect of noise on memory for speech for persons with good working memory capacity. We argue that the mechanism behind this is faster word identification that enhances encoding into working memory.
Collapse
Affiliation(s)
- Elaine Hoi Ning Ng
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, SE-581 83 Linköping, Sweden.
| | | | | | | | | |
Collapse
|
140
|
Manchaiah VKC, Stephens D. Perspectives on defining ‘hearing loss’ and its consequences. HEARING BALANCE AND COMMUNICATION 2013. [DOI: 10.3109/21695717.2012.756624] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
|
141
|
Classon E, Rudner M, Rönnberg J. Working memory compensates for hearing related phonological processing deficit. JOURNAL OF COMMUNICATION DISORDERS 2013; 46:17-29. [PMID: 23157731 DOI: 10.1016/j.jcomdis.2012.10.001] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/15/2012] [Revised: 10/08/2012] [Accepted: 10/24/2012] [Indexed: 06/01/2023]
Abstract
UNLABELLED Acquired hearing impairment is associated with gradually declining phonological representations. According to the Ease of Language Understanding (ELU) model, poorly defined representations lead to mismatch in phonologically challenging tasks. To resolve the mismatch, reliance on working memory capacity (WMC) increases. This study investigated whether WMC modulated performance in a phonological task in individuals with hearing impairment. A visual rhyme judgment task with congruous or incongruous orthography, followed by an incidental episodic recognition memory task, was used. In participants with hearing impairment, WMC modulated both rhyme judgment performance and recognition memory in the orthographically similar non-rhyming condition; those with high WMC performed exceptionally well in the judgment task, but later recognized few of the words. For participants with hearing impairment and low WMC the pattern was reversed; they performed poorly in the judgment task but later recognized a surprisingly large proportion of the words. Results indicate that good WMC can compensate for the negative impact of auditory deprivation on phonological processing abilities by allowing for efficient use of phonological processing skills. They also suggest that individuals with hearing impairment and low WMC may use a non-phonological approach to written words, which can have the beneficial side effect of improving memory encoding. LEARNING OUTCOMES Readers will be able to: (1) describe cognitive processes involved in rhyme judgment, (2) explain how acquired hearing impairment affects phonological processing and (3) discuss how reading strategies at encoding impact memory performance.
Collapse
Affiliation(s)
- Elisabet Classon
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, SE-581 83 Linköping, Sweden.
| | | | | |
Collapse
|
142
|
Abstract
Purpose
To summarize existing data on the interactions of cognitive function and hearing technology in older adults.
Method
A narrative review was used to summarize previous data for the short-term interactions of cognition and hearing technology on measured outcomes. For long-term outcomes, typically for 3–24 months of hearing aid use, a computerized database search was conducted.
Results
There is accumulating evidence that cognitive function can impact outcomes following immediate or short-term use of hearing aids and that hearing aids can impact immediate cognitive function. There is limited evidence regarding the long-term impact of hearing aids on cognition, and the most rigorous studies in this area have not observed a positive effect.
Conclusions
Although interactions have been observed between cognition and use of hearing aids for measures obtained following immediate or short-term usage of hearing technology, limited evidence is available following long-term usage, and that evidence that is available does not support an effect of hearing aids on cognitive function. More research is needed, however, including rigorous studies of older adults following longer periods of hearing aid usage.
Collapse
Affiliation(s)
| | - Larry E. Humes
- Department of Speech and Hearing Sciences, Indiana University, Bloomington
| |
Collapse
|
143
|
Processing load induced by informational masking is related to linguistic abilities. Int J Otolaryngol 2012; 2012:865731. [PMID: 23091495 PMCID: PMC3471442 DOI: 10.1155/2012/865731] [Citation(s) in RCA: 80] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2012] [Revised: 08/31/2012] [Accepted: 09/04/2012] [Indexed: 11/18/2022] Open
Abstract
It is often assumed that the benefit of hearing aids is not primarily reflected in better speech performance, but that it is reflected in less effortful listening in the aided than in the unaided condition. Before being able to assess such a hearing aid benefit, the present study examined how processing load while listening to masked speech relates to inter-individual differences in cognitive abilities relevant for language processing. Pupil dilation was measured in thirty-two normal hearing participants while listening to sentences masked by fluctuating noise or interfering speech at either 50% or 84% intelligibility. Additionally, working memory capacity, inhibition of irrelevant information, and written text reception were tested. Pupil responses were larger during interfering speech as compared to fluctuating noise. This effect was independent of intelligibility level. Regression analysis revealed that high working memory capacity, better inhibition, and better text reception were related to better speech reception thresholds. Apart from a positive relation to speech recognition, better inhibition and better text reception were also positively related to larger pupil dilation in the single-talker masker conditions. We conclude that better cognitive abilities not only relate to better speech perception, but also partly explain higher processing load in complex listening conditions.
Collapse
|
144
|
Aging and implantable hearing solutions. Abstracts from the Cochlear Science and Research Seminar. Paris, France. March 19-20, 2012. Audiol Neurootol 2012; 17 Suppl 1:1-26. [PMID: 22922653 DOI: 10.1159/000341356] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022] Open
|
145
|
Zekveld AA, Rudner M, Johnsrude IS, Heslenfeld DJ, Rönnberg J. Behavioral and fMRI evidence that cognitive ability modulates the effect of semantic context on speech intelligibility. BRAIN AND LANGUAGE 2012; 122:103-13. [PMID: 22728131 DOI: 10.1016/j.bandl.2012.05.006] [Citation(s) in RCA: 61] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/29/2011] [Revised: 05/10/2012] [Accepted: 05/21/2012] [Indexed: 05/12/2023]
Abstract
Text cues facilitate the perception of spoken sentences to which they are semantically related (Zekveld, Rudner, et al., 2011). In this study, semantically related and unrelated cues preceding sentences evoked more activation in middle temporal gyrus (MTG) and inferior frontal gyrus (IFG) than nonword cues, regardless of acoustic quality (speech in noise or speech in quiet). Larger verbal working memory (WM) capacity (reading span) was associated with greater intelligibility benefit obtained from related cues, with less speech-related activation in the left superior temporal gyrus and left anterior IFG, and with more activation in right medial frontal cortex for related versus unrelated cues. Better ability to comprehend masked text was associated with greater ability to disregard unrelated cues, and with more activation in left angular gyrus (AG). We conclude that individual differences in cognitive abilities are related to activation in a speech-sensitive network including left MTG, IFG and AG during cued speech perception.
Collapse
Affiliation(s)
- Adriana A Zekveld
- Department of Behavioural Sciences and Learning, Linköping University, Sweden.
| | | | | | | | | |
Collapse
|
146
|
Piquado T, Benichov JI, Brownell H, Wingfield A. The hidden effect of hearing acuity on speech recall, and compensatory effects of self-paced listening. Int J Audiol 2012; 51:576-83. [PMID: 22731919 DOI: 10.3109/14992027.2012.684403] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
OBJECTIVE The purpose of this research was to determine whether negative effects of hearing loss on recall accuracy for spoken narratives can be mitigated by allowing listeners to control the rate of speech input. DESIGN Paragraph-length narratives were presented for recall under two listening conditions in a within-participants design: presentation without interruption (continuous) at an average speech-rate of 150 words per minute; and presentation interrupted at periodic intervals at which participants were allowed to pause before initiating the next segment (self-paced). STUDY SAMPLE Participants were 24 adults ranging from 21 to 33 years of age. Half had age-normal hearing acuity and half had mild-to-moderate hearing loss. The two groups were comparable for age, years of formal education, and vocabulary. RESULTS When narrative passages were presented continuously, without interruption, participants with hearing loss recalled significantly fewer story elements, both main ideas and narrative details, than those with age-normal hearing. The recall difference was eliminated when the two groups were allowed to self-pace the speech input. CONCLUSION Results support the hypothesis that the listening effort associated with reduced hearing acuity can slow processing operations and increase demands on working memory, with consequent negative effects on accuracy of narrative recall.
Collapse
|
147
|
Sörqvist P, Rönnberg J. Episodic long-term memory of spoken discourse masked by speech: what is the role for working memory capacity? JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2012; 55:210-218. [PMID: 22199182 DOI: 10.1044/1092-4388(2011/10-0353)] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
PURPOSE To investigate whether working memory capacity (WMC) modulates the effects of to-be-ignored speech on the memory of materials conveyed by to-be-attended speech. METHOD Two tasks (reading span, Daneman & Carpenter, 1980; Rönnberg et al., 2008; and size-comparison span, Sörqvist, Ljungberg, & Ljung, 2010) were used to measure individual differences in WMC. Episodic long-term memory of spoken discourse was measured by requesting participants to listen to stories masked either by normal speech or by a rotated version of that speech and to subsequently answer questions on the content of the stories. RESULTS Normal speech impaired performance on the episodic long-term memory test, and both WMC tasks were negatively related to this effect, indicating that individuals with high WMC are less susceptible to disruption. Moreover, further analyses revealed that size-comparison span (a task that requires resolution of semantic confusion by inhibition processes) is a stronger predictor of the effect than is reading span. CONCLUSIONS Cognitive control processes support listening in adverse conditions. In particular, inhibition processes acting to resolve semantic confusion seem to underlie the relationship between WMC and susceptibility to distraction from masking speech.
Collapse
|
148
|
Besser J, Zekveld AA, Kramer SE, Rönnberg J, Festen JM. New measures of masked text recognition in relation to speech-in-noise perception and their associations with age and cognitive abilities. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2012; 55:194-209. [PMID: 22199191 DOI: 10.1044/1092-4388(2011/11-0008)] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/16/2023]
Abstract
PURPOSE In this research, the authors aimed to increase the analogy between Text Reception Threshold (TRT; Zekveld, George, Kramer, Goverts, & Houtgast, 2007) and Speech Reception Threshold (SRT; Plomp & Mimpen, 1979) and to examine the TRT's value in estimating cognitive abilities that are important for speech comprehension in noise. METHOD The authors administered 5 TRT versions, SRT tests in stationary (SRT(STAT)) and modulated (SRT(MOD)) noise, and 2 cognitive tests: a reading span (RSpan) test for working memory capacity and a letter-digit substitution test for information-processing speed. Fifty-five adults with normal hearing (18-78 years, M = 44 years) participated. The authors examined mutual associations of the tests and their predictive value for the SRTs with correlation and linear regression analyses. RESULTS SRTs and TRTs were well associated, also when controlling for age. Correlations for the SRT(STAT) were generally lower than for the SRT(MOD). The cognitive tests were correlated to the SRTs only when age was not controlled for. Age and the TRTs were the only significant predictors of SRT(MOD). SRT(STAT) was predicted by level of education and some of the TRT versions. CONCLUSIONS TRTs and SRTs are robustly associated, nearly independent of age. The association between SRTs and RSpan is largely age dependent. The TRT test and the RSpan test measure different nonauditory components of linguistic processing relevant for speech perception in noise.
Collapse
Affiliation(s)
- Jana Besser
- VU University Medical Center, Institute for Health and Care Research, Amsterdam, The Netherlands.
| | | | | | | | | |
Collapse
|
149
|
Goverts ST, Huysmans E, Kramer SE, de Groot AMB, Houtgast T. On the use of the distortion-sensitivity approach in examining the role of linguistic abilities in speech understanding in noise. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2011; 54:1702-1708. [PMID: 22180022 DOI: 10.1044/1092-4388(2011/09-0268)] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
PURPOSE Researchers have used the distortion-sensitivity approach in the psychoacoustical domain to investigate the role of auditory processing abilities in speech perception in noise (van Schijndel, Houtgast, & Festen, 2001; Goverts & Houtgast, 2010). In this study, the authors examined the potential applicability of the distortion-sensitivity approach for investigating the role of linguistic abilities in speech understanding in noise. METHOD The authors applied the distortion-sensitivity approach by measuring the processing of visually presented masked text in a condition with manipulated syntactic, lexical, and semantic cues and while using the Text Reception Threshold (George et al., 2007; Kramer, Zekveld, & Houtgast, 2009; Zekveld, George, Kramer, Goverts, & Houtgast, 2007) method. Two groups that differed in linguistic abilities were studied: 13 native and 10 non-native speakers of Dutch, all typically hearing university students. RESULTS As expected, the non-native subjects showed substantially reduced performance. The results of the distortion-sensitivity approach yielded differentiated results on the use of specific linguistic cues in the 2 groups. CONCLUSION The results show the potential value of the distortion-sensitivity approach in studying the role of linguistic abilities in speech understanding in noise of individuals with hearing impairment.
Collapse
Affiliation(s)
- S Theo Goverts
- VU University Medical Center, Amsterdam, The Netherlands.
| | | | | | | | | |
Collapse
|
150
|
The Influence of Semantically Related and Unrelated Text Cues on the Intelligibility of Sentences in Noise. Ear Hear 2011; 32:e16-25. [DOI: 10.1097/aud.0b013e318228036a] [Citation(s) in RCA: 47] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
|