1
Fernandez LB, Pickering MJ, Naylor G, Hadley LV. Uses of Linguistic Context in Speech Listening: Does Acquired Hearing Loss Lead to Reduced Engagement of Prediction? Ear Hear 2024; 45:1107-1114. PMID: 38880953; PMCID: PMC11325976; DOI: 10.1097/aud.0000000000001515
Abstract
Research investigating the complex interplay of cognitive mechanisms involved in speech listening for people with hearing loss has been gaining prominence. In particular, linguistic context allows the use of several cognitive mechanisms that are not well distinguished in hearing science, namely those relating to "postdiction", "integration", and "prediction". We offer the perspective that an unacknowledged impact of hearing loss is the differential use of predictive mechanisms relative to age-matched individuals with normal hearing. As evidence, we first review how degraded auditory input leads to reduced prediction in people with normal hearing, then consider the literature exploring context use in people with acquired postlingual hearing loss. We argue that no research on hearing loss has directly assessed prediction. Because current interventions for hearing do not fully alleviate difficulty in conversation, and avoidance of spoken social interaction may be a mediator between hearing loss and cognitive decline, this perspective could lead to greater understanding of cognitive effects of hearing loss and provide insight regarding new targets for intervention.
Affiliation(s)
- Leigh B. Fernandez
- Department of Social Sciences, Psycholinguistics Group, University of Kaiserslautern-Landau, Kaiserslautern, Germany
- Martin J. Pickering
- Department of Psychology, University of Edinburgh, Edinburgh, United Kingdom
- Graham Naylor
- Hearing Sciences—Scottish Section, School of Medicine, University of Nottingham, Glasgow, United Kingdom
- Lauren V. Hadley
- Hearing Sciences—Scottish Section, School of Medicine, University of Nottingham, Glasgow, United Kingdom
2
Zhu M, Qiao Y, Sun W, Sun Y, Long Y, Guo H, Cai C, Shen H, Shang Y. Visual selective attention in individuals with age-related hearing loss. Neuroimage 2024; 298:120787. PMID: 39147293; DOI: 10.1016/j.neuroimage.2024.120787
Abstract
Evidence from epidemiological studies suggests that hearing loss is associated with an accelerated decline in cognitive function, but the underlying pathophysiological mechanism remains poorly understood. Studies using auditory tasks have suggested that degraded auditory input increases the cognitive load for auditory perceptual processing and thereby reduces the resources available for other cognitive tasks. Attention-related networks are among the systems overrecruited to support degraded auditory perception, but it is unclear how they function when no excessive recruitment of cognitive resources for auditory processing is needed. Here, we implemented an EEG study using a nonauditory visual attentional selection task in 30 individuals with age-related hearing loss (ARHLs, 60-73 years) and compared them with aged (N = 30, 60-70 years) and young (N = 35, 22-29 years) normal-hearing controls. Compared with their normal-hearing peers, ARHLs demonstrated a significant amplitude reduction for the posterior contralateral N2 (N2pc) component, a well-validated index of the allocation of selective visual attention, despite comparable behavioral performance. Furthermore, N2pc amplitudes correlated significantly with hearing acuity (pure-tone audiometry thresholds) and higher-order hearing abilities (speech-in-noise thresholds) in aged individuals. The target-elicited alpha lateralization, another mechanism of visuospatial attention, was demonstrated in the control groups but was not observed in ARHLs. Although behavioral performance was comparable, the significant decrease in N2pc amplitude in ARHLs provides neurophysiological evidence of a visual attentional deficit even without extra recruitment of cognitive resources by auditory processing. It supports the hypothesis that constantly degraded auditory input in ARHLs adversely affects the function of cognitive control systems, a possible mechanism mediating the relationship between hearing loss and cognitive decline.
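The N2pc measure described above is conventionally computed as a contralateral-minus-ipsilateral difference wave at posterior electrodes, averaged within a post-stimulus window. The study's exact pipeline is not given here; the following is a minimal sketch of that standard computation on synthetic data (the electrode pairing, window, and signal shapes are illustrative assumptions, not the authors' parameters):

```python
import numpy as np

def n2pc_difference(contra, ipsi):
    """Contralateral-minus-ipsilateral difference wave, averaged over trials.

    contra, ipsi: (n_trials, n_samples) arrays in microvolts from posterior
    electrodes (e.g., PO7/PO8), sorted by target side on each trial.
    """
    return contra.mean(axis=0) - ipsi.mean(axis=0)

def n2pc_amplitude(diff_wave, times, window=(0.2, 0.3)):
    """Mean amplitude of the difference wave within the N2pc window (seconds)."""
    mask = (times >= window[0]) & (times <= window[1])
    return diff_wave[mask].mean()

# Synthetic data: a ~1 microvolt contralateral negativity at 200-300 ms
times = np.linspace(0.0, 0.5, 251)
rng = np.random.default_rng(0)
contra = rng.normal(0, 0.1, (40, times.size)) - 1.0 * ((times >= 0.2) & (times <= 0.3))
ipsi = rng.normal(0, 0.1, (40, times.size))

diff = n2pc_difference(contra, ipsi)
amp = n2pc_amplitude(diff, times)  # clearly negative for this synthetic effect
```

A reduced (less negative) `amp` in one group relative to another is the kind of amplitude reduction the abstract reports.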
Affiliation(s)
- Min Zhu
- Department of Otorhinolaryngology, Peking Union Medical College Hospital, Beijing, People's Republic of China
- Yufei Qiao
- Department of Otorhinolaryngology, Peking Union Medical College Hospital, Beijing, People's Republic of China
- Wen Sun
- Department of Otorhinolaryngology, Peking Union Medical College Hospital, Beijing, People's Republic of China
- Yang Sun
- School of Educational Science, Shenyang Normal University, Shenyang, People's Republic of China
- Yuanshun Long
- National Engineering Research Center for E-Learning, Central China Normal University, Wuhan, People's Republic of China
- Hua Guo
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, People's Republic of China
- Chang Cai
- National Engineering Research Center for E-Learning, Central China Normal University, Wuhan, People's Republic of China
- Hang Shen
- Department of Neurology, Peking Union Medical College Hospital, Beijing, People's Republic of China
- Yingying Shang
- Department of Otorhinolaryngology, Peking Union Medical College Hospital, Beijing, People's Republic of China
3
Svirsky MA, Neukam JD, Capach NH, Amichetti NM, Lavender A, Wingfield A. Communication Under Sharply Degraded Auditory Input and the "2-Sentence" Problem. Ear Hear 2024; 45:1045-1058. PMID: 38523125; DOI: 10.1097/aud.0000000000001500
Abstract
OBJECTIVES Despite performing well in standard clinical assessments of speech perception, many cochlear implant (CI) users report experiencing significant difficulties when listening in real-world environments. We hypothesize that this disconnect may be related, in part, to the limited ecological validity of tests that are currently used clinically and in research laboratories. The challenges that arise from degraded auditory information provided by a CI, combined with the listener's finite cognitive resources, may lead to difficulties when processing speech material that is more demanding than the single words or single sentences that are used in clinical tests. DESIGN Here, we investigate whether speech identification performance and processing effort (indexed by pupil dilation measures) are affected when CI users or normal-hearing control subjects are asked to repeat two sentences presented sequentially instead of just one sentence. RESULTS Response accuracy was minimally affected in normal-hearing listeners, but CI users showed a wide range of outcomes, from no change to decrements of up to 45 percentage points. The amount of decrement was not predictable from the CI users' performance in standard clinical tests. Pupillometry measures tracked closely with task difficulty in both the CI group and the normal-hearing group, even though the latter had speech perception scores near ceiling levels for all conditions. CONCLUSIONS Speech identification performance is significantly degraded in many (but not all) CI users in response to input that is only slightly more challenging than standard clinical tests; specifically, when two sentences are presented sequentially before requesting a response, instead of presenting just a single sentence at a time. This potential "2-sentence problem" represents one of the simplest possible scenarios that go beyond presentation of the single words or sentences used in most clinical tests of speech perception, and it raises the possibility that even good performers in single-sentence tests may be seriously impaired by other ecologically relevant manipulations. The present findings also raise the possibility that a clinical version of a 2-sentence test may provide actionable information for counseling and rehabilitating CI users, and for people who interact with them closely.
Affiliation(s)
- Mario A Svirsky
- Department of Otolaryngology Head and Neck Surgery, New York University Grossman School of Medicine, New York, New York, USA
- Neuroscience Institute, New York University School of Medicine, New York, New York, USA
- Jonathan D Neukam
- Department of Otolaryngology Head and Neck Surgery, New York University Grossman School of Medicine, New York, New York, USA
- Nicole Hope Capach
- Department of Otolaryngology Head and Neck Surgery, New York University Grossman School of Medicine, New York, New York, USA
- Nicole M Amichetti
- Department of Psychology, Brandeis University, Waltham, Massachusetts, USA
- Annette Lavender
- Department of Otolaryngology Head and Neck Surgery, New York University Grossman School of Medicine, New York, New York, USA
- Cochlear Americas, Denver, Colorado, USA
- Arthur Wingfield
- Department of Psychology, Brandeis University, Waltham, Massachusetts, USA
4
Brown VA, Sewell K, Villanueva J, Strand JF. Noisy speech impairs retention of previously heard information only at short time scales. Mem Cognit 2024. PMID: 38758512; DOI: 10.3758/s13421-024-01583-y
Abstract
When speech is presented in noise, listeners must recruit cognitive resources to resolve the mismatch between the noisy input and representations in memory. A consequence of this effortful listening is impaired memory for content presented earlier. In the first study on effortful listening, Rabbitt (The Quarterly Journal of Experimental Psychology, 20, 241-248, 1968; Experiment 2) found that recall for a list of digits was poorer when subsequent digits were presented with masking noise than without. Experiment 3 of that study extended this effect to more naturalistic, passage-length materials. Although the findings of Rabbitt's Experiment 2 have been replicated multiple times, no work has assessed the robustness of Experiment 3. We conducted a replication attempt of Rabbitt's Experiment 3 at three signal-to-noise ratios (SNRs). Results at one of the SNRs (Experiment 1a of the current study) were in the opposite direction from what Rabbitt reported - that is, speech was recalled more accurately when it was followed by speech presented in noise rather than in the clear - and results at the other two SNRs showed no effect of noise (Experiments 1b and 1c). In addition, reanalysis of a replication of Rabbitt's seminal finding in his second experiment showed that the effect of effortful listening on previously presented information is transient. Thus, effortful listening caused by noise appears to impair memory only for information presented immediately before the noise, which may account for our finding that noise in the second half of a long passage did not impair recall of information presented in the first half of the passage.
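Manipulating SNR, as in the replication above, is typically done by scaling the masker relative to the speech so that their power ratio hits a target value in dB. A minimal sketch of that computation (the `mix_at_snr` helper and the synthetic signals are hypothetical, not taken from the study):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so that speech power / noise power equals the target
    SNR (in dB), then return the mixture."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise

rng = np.random.default_rng(1)
speech = rng.normal(0.0, 1.0, 16000)  # stand-in for 1 s of speech at 16 kHz
noise = rng.normal(0.0, 0.5, 16000)   # stand-in for a noise masker

mixed = mix_at_snr(speech, noise, snr_db=0.0)  # equal speech and noise power
```

Lower `snr_db` values (e.g., the harder conditions in a multi-SNR design) correspond to a larger noise gain and a more degraded mixture.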
Affiliation(s)
- Violet A Brown
- Department of Psychology, Carleton College, Northfield, MN, USA
- Katrina Sewell
- Department of Psychology, Carleton College, Northfield, MN, USA
- Jed Villanueva
- Department of Psychology, Carleton College, Northfield, MN, USA
- Julia F Strand
- Department of Psychology, Carleton College, Northfield, MN, USA
5
Sherafati A, Dwyer N, Bajracharya A, Hassanpour MS, Eggebrecht AT, Firszt JB, Culver JP, Peelle JE. Prefrontal cortex supports speech perception in listeners with cochlear implants. eLife 2022; 11:e75323. PMID: 35666138; PMCID: PMC9225001; DOI: 10.7554/elife.75323
Abstract
Cochlear implants are neuroprosthetic devices that can restore hearing in people with severe to profound hearing loss by electrically stimulating the auditory nerve. Because of physical limitations on the precision of this stimulation, the acoustic information delivered by a cochlear implant does not convey the same level of acoustic detail as that conveyed by normal hearing. As a result, speech understanding in listeners with cochlear implants is typically poorer and more effortful than in listeners with normal hearing. The brain networks supporting speech understanding in listeners with cochlear implants are not well understood, partly due to difficulties obtaining functional neuroimaging data in this population. In the current study, we assessed the brain regions supporting spoken word understanding in adult listeners with right unilateral cochlear implants (n=20) and matched controls (n=18) using high-density diffuse optical tomography (HD-DOT), a quiet and non-invasive imaging modality with spatial resolution comparable to that of functional MRI. We found that while listening to spoken words in quiet, listeners with cochlear implants showed greater activity in the left prefrontal cortex than listeners with normal hearing, specifically in a region engaged in a separate spatial working memory task. These results suggest that listeners with cochlear implants require greater cognitive processing during speech understanding than listeners with normal hearing, supported by compensatory recruitment of the left prefrontal cortex.
Affiliation(s)
- Arefeh Sherafati
- Department of Radiology, Washington University in St. Louis, St. Louis, United States
- Noel Dwyer
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, United States
- Aahana Bajracharya
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, United States
- Adam T Eggebrecht
- Department of Radiology, Washington University in St. Louis, St. Louis, United States
- Department of Electrical & Systems Engineering, Washington University in St. Louis, St. Louis, United States
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, United States
- Division of Biology and Biomedical Sciences, Washington University in St. Louis, St. Louis, United States
- Jill B Firszt
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, United States
- Joseph P Culver
- Department of Radiology, Washington University in St. Louis, St. Louis, United States
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, United States
- Division of Biology and Biomedical Sciences, Washington University in St. Louis, St. Louis, United States
- Department of Physics, Washington University in St. Louis, St. Louis, United States
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, United States
6
McClannahan KS, Mainardi A, Luor A, Chiu YF, Sommers MS, Peelle JE. Spoken Word Recognition in Listeners with Mild Dementia Symptoms. J Alzheimers Dis 2022; 90:749-759. PMID: 36189586; PMCID: PMC9885492; DOI: 10.3233/jad-215606
Abstract
BACKGROUND Difficulty understanding speech is a common complaint of older adults. In quiet, speech perception is often assumed to be relatively automatic. However, higher-level cognitive processes play a key role in successful communication in noise. Limited cognitive resources in adults with dementia may therefore hamper word recognition. OBJECTIVE The goal of this study was to determine the impact of mild dementia on spoken word recognition in quiet and noise. METHODS Participants were aged 53 to 86 years, with (n = 16) or without (n = 32) dementia symptoms as classified by the Clinical Dementia Rating scale. Participants performed a word identification task with two levels of word difficulty (few and many similar sounding words) in quiet and in noise at two signal-to-noise ratios, +6 and +3 dB. Our hypothesis was that listeners with mild dementia symptoms would have more difficulty with speech perception in noise under conditions that tax cognitive resources. RESULTS Listeners with mild dementia symptoms had poorer task accuracy in both quiet and noise, which held after accounting for differences in age and hearing level. Notably, even in quiet, adults with dementia symptoms correctly identified words only about 80% of the time. However, word difficulty was not a factor in task performance for either group. CONCLUSION These results affirm the difficulty that listeners with mild dementia may have with spoken word recognition, both in quiet and in background noise, consistent with a role of cognitive resources in spoken word identification.
Affiliation(s)
- Amelia Mainardi
- Department of Otolaryngology, Washington University in St. Louis
- Austin Luor
- Department of Otolaryngology, Washington University in St. Louis
- Yi-Fang Chiu
- Department of Speech, Language and Hearing Sciences, Saint Louis University
- Mitchell S. Sommers
- Department of Psychological and Brain Sciences, Washington University in St. Louis
7
Pupillometry reveals cognitive demands of lexical competition during spoken word recognition in young and older adults. Psychon Bull Rev 2021; 29:268-280. PMID: 34405386; DOI: 10.3758/s13423-021-01991-0
Abstract
In most contemporary activation-competition frameworks for spoken word recognition, candidate words compete against phonological "neighbors" with similar acoustic properties (e.g., "cap" vs. "cat"). Thus, recognizing words with more competitors should come at a greater cognitive cost relative to recognizing words with fewer competitors, due to increased demands for selecting the correct item and inhibiting incorrect candidates. Importantly, these processes should operate even in the absence of differences in accuracy. In the present study, we tested this proposal by examining differences in processing costs associated with neighborhood density for highly intelligible items presented in quiet. A second goal was to examine whether the cognitive demands associated with increased neighborhood density were greater for older adults compared with young adults. Using pupillometry as an index of cognitive processing load, we compared the cognitive demands associated with spoken word recognition for words with many or fewer neighbors, presented in quiet, for young (n = 67) and older (n = 69) adult listeners. Growth curve analysis of the pupil data indicated that older adults showed a greater evoked pupil response for spoken words than did young adults, consistent with increased cognitive load during spoken word recognition. Words from dense neighborhoods were marginally more demanding to process than words from sparse neighborhoods. There was also an interaction between age and neighborhood density, indicating larger effects of density in young adult listeners. These results highlight the importance of assessing both cognitive demands and accuracy when investigating the mechanisms underlying spoken word recognition.
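The growth curve analysis mentioned above models the pupil time course with orthogonal polynomial time terms. The authors' exact model specification (random effects, degree, software) is not given in this abstract; the following is a minimal least-squares sketch on a synthetic averaged trace, with basis construction via QR as an illustrative choice:

```python
import numpy as np

def orthogonal_time_basis(n_samples, degree=2):
    """Orthogonal polynomial time terms, as used in growth curve analysis."""
    t = np.linspace(-1.0, 1.0, n_samples)
    raw = np.vander(t, degree + 1, increasing=True)  # columns: 1, t, t^2, ...
    q, _ = np.linalg.qr(raw)  # orthogonalize the columns
    return q

def fit_growth_curve(trace, basis):
    """Least-squares weight of each time term for one averaged pupil trace."""
    coef, *_ = np.linalg.lstsq(basis, trace, rcond=None)
    return coef

# Synthetic evoked pupil response: an inverted-U (rise then fall) plus noise
n = 200
t = np.linspace(-1.0, 1.0, n)
pupil = 0.5 - 0.4 * t ** 2 + np.random.default_rng(2).normal(0, 0.01, n)

basis = orthogonal_time_basis(n, degree=2)
coef = fit_growth_curve(pupil, basis)
fitted = basis @ coef  # quadratic basis captures the inverted-U shape
```

In a full analysis these time terms enter a mixed-effects model so that condition effects (e.g., neighborhood density, age group) on the intercept and curvature terms can be tested; the sketch shows only the basis and a single-trace fit.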
8
Text Captioning Buffers Against the Effects of Background Noise and Hearing Loss on Memory for Speech. Ear Hear 2021; 43:115-127. PMID: 34260436; DOI: 10.1097/aud.0000000000001079
Abstract
OBJECTIVE Everyday speech understanding frequently occurs in perceptually demanding environments, for example, due to background noise and normal age-related hearing loss. The resulting degraded speech signals increase listening effort, which gives rise to negative downstream effects on subsequent memory and comprehension, even when speech is intelligible. In two experiments, we explored whether the presentation of realistic assistive text captioned speech offsets the negative effects of background noise and hearing impairment on multiple measures of speech memory. DESIGN In Experiment 1, young normal-hearing adults (N = 48) listened to sentences for immediate recall and delayed recognition memory. Speech was presented in quiet or in two levels of background noise. Sentences were either presented as speech only or as text captioned speech. Thus, the experiment followed a 2 (caption vs no caption) × 3 (no noise, +7 dB signal-to-noise ratio, +3 dB signal-to-noise ratio) within-subjects design. In Experiment 2, a group of older adults (age range: 61 to 80, N = 31), with varying levels of hearing acuity completed the same experimental task as in Experiment 1. For both experiments, immediate recall, recognition memory accuracy, and recognition memory confidence were analyzed via general(ized) linear mixed-effects models. In addition, we examined individual differences as a function of hearing acuity in Experiment 2. RESULTS In Experiment 1, we found that the presentation of realistic text-captioned speech in young normal-hearing listeners showed improved immediate recall and delayed recognition memory accuracy and confidence compared with speech alone. Moreover, text captions attenuated the negative effects of background noise on all speech memory outcomes. In Experiment 2, we replicated the same pattern of results in a sample of older adults with varying levels of hearing acuity. Moreover, we showed that the negative effects of hearing loss on speech memory in older adulthood were attenuated by the presentation of text captions. CONCLUSIONS Collectively, these findings strongly suggest that the simultaneous presentation of text can offset the negative effects of effortful listening on speech memory. Critically, captioning benefits extended from immediate word recall to long-term sentence recognition memory, a benefit that was observed not only for older adults with hearing loss but also young normal-hearing listeners. These findings suggest that the text captioning benefit to memory is robust and has potentially wide applications for supporting speech listening in acoustically challenging environments.
9
Koelewijn T, Zekveld AA, Lunner T, Kramer SE. The effect of monetary reward on listening effort and sentence recognition. Hear Res 2021; 406:108255. PMID: 33964552; DOI: 10.1016/j.heares.2021.108255
Abstract
Recently we showed that higher reward results in increased pupil dilation during listening (listening effort). Remarkably, this effect was not accompanied by improved speech reception. Still, increased listening effort may reflect more in-depth processing, potentially resulting in a better memory representation of speech. Here, we investigated this hypothesis by also testing the effect of monetary reward on recognition memory performance. Twenty-four young adults performed speech reception threshold (SRT) tests, either hard or easy, in which they repeated sentences uttered by a female talker masked by a male talker. We recorded the pupil dilation response during listening. Participants could earn a high or low reward, and the four conditions were presented in a blocked fashion. After each SRT block, participants performed a visual sentence recognition task in which the sentences presented in the preceding SRT task appeared visually in random order, intermixed with unfamiliar sentences. Participants had to indicate whether they had previously heard each sentence. SRT and sentence recognition were affected by task difficulty but not by reward. Contrary to our previous results, peak pupil dilation did not reflect effects of reward. However, post hoc time-course analysis using generalized additive mixed models (GAMMs) revealed that in the hard SRT task, the pupil response was larger for high than for low reward. We did not observe an effect of reward on visual sentence recognition. Hence, the current results provide no conclusive evidence that the effect of monetary reward on the pupil response relates to the memory encoding of speech.
Affiliation(s)
- Thomas Koelewijn
- Department of Otolaryngology - Head and Neck Surgery, Amsterdam UMC, Vrije Universiteit Amsterdam, Ear & Hearing, Amsterdam Public Health research institute, De Boelelaan 1117, Amsterdam, the Netherlands; Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, the Netherlands; Research School of Behavioral and Cognitive Neuroscience, Graduate School of Medical Sciences, University of Groningen, Groningen, the Netherlands
- Adriana A Zekveld
- Department of Otolaryngology - Head and Neck Surgery, Amsterdam UMC, Vrije Universiteit Amsterdam, Ear & Hearing, Amsterdam Public Health research institute, De Boelelaan 1117, Amsterdam, the Netherlands
- Thomas Lunner
- Eriksholm Research Centre, Snekkersten, Denmark; Hearing Systems group, Department of Electrical Engineering, Technical University of Denmark, Kgs. Lyngby, Denmark; Division of Technical Audiology, Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden; Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Sophia E Kramer
- Department of Otolaryngology - Head and Neck Surgery, Amsterdam UMC, Vrije Universiteit Amsterdam, Ear & Hearing, Amsterdam Public Health research institute, De Boelelaan 1117, Amsterdam, the Netherlands
10
Abstract
Sequences of phonologically similar words are more difficult to remember than phonologically distinct sequences. This study investigated whether this difficulty arises in the acoustic similarity of auditory stimuli or in the corresponding phonological labels in memory. Participants reconstructed sequences of words which were degraded with a vocoder. We manipulated the phonological similarity of response options across two groups. One group was trained to map stimulus words onto phonologically similar response labels which matched the recorded word; the other group was trained to map words onto a set of plausible responses which were mismatched from the original recordings but were selected to have less phonological overlap. Participants trained on the matched responses were able to learn responses with less training and recall sequences more accurately than participants trained on the mismatched responses, even though the mismatched responses were more phonologically distinct from one another and participants were unaware of the mismatch. The relative difficulty of recalling items in the correct position was the same across both sets of response labels. Mismatched responses impaired recall accuracy across all positions except the final item in each list. These results are consistent with the idea that increased difficulty of mapping acoustic stimuli onto phonological forms impairs serial recall. Increased mapping difficulty could impair retention of memoranda and impede consolidation into phonological forms, which would impair recall in adverse listening conditions.
Affiliation(s)
- Adam K Bosen
- Hearing and Speech Perception, Boys Town National Research Hospital, Omaha, NE, USA
- Elizabeth Monzingo
- Hearing and Speech Perception, Boys Town National Research Hospital, Omaha, NE, USA
- Angela M AuBuchon
- Hearing and Speech Perception, Boys Town National Research Hospital, Omaha, NE, USA
11
Abstract
OBJECTIVES Serial recall of digits is frequently used to measure short-term memory span in various listening conditions. However, the use of digits may mask the effect of low quality auditory input. Digits have high frequency and are phonologically distinct relative to one another, so they should be easy to identify even with low quality auditory input. In contrast, larger item sets reduce listeners' ability to strategically constrain their expectations, which should reduce identification accuracy and increase the time and/or cognitive resources needed for identification when auditory quality is low. This diminished accuracy and increased cognitive load should interfere with memory for sequences of items drawn from large sets. The goal of this work was to determine whether this predicted interaction between auditory quality and stimulus set in short-term memory exists, and if so, whether this interaction is associated with processing speed, vocabulary, or attention. DESIGN We compared immediate serial recall within young adults with normal hearing across unprocessed and vocoded listening conditions for multiple stimulus sets. Stimulus sets were lists of digits (1 to 9), consonant-vowel-consonant (CVC) words (chosen from a list of 60 words), and CVC nonwords (chosen from a list of 50 nonwords). Stimuli were unprocessed or vocoded with an eight-channel noise vocoder. To support interpretation of responses, words and nonwords were selected to minimize inclusion of multiple phonemes from within a confusion cluster. We also measured receptive vocabulary (Peabody Picture Vocabulary Test [PPVT-4]), sustained attention (Test of Variables of Attention [TOVA]), and repetition speed for individual items from each stimulus set under both listening conditions. RESULTS Vocoding the stimuli had no impact on serial recall of digits, but reduced memory span for words and nonwords. This reduction in memory span was attributed to an increase in phonological confusions for nonwords. However, memory span for vocoded word lists remained reduced even after accounting for common phonetic confusions, indicating that lexical status played an additional role across listening conditions. Principal components analysis found two components that explained 84% of the variance in memory span across conditions. Component one had similar loadings across all conditions, indicating that participants had an underlying memory capacity, which was common to all conditions. Component two was loaded by performance in the vocoded word and nonword conditions, representing the sensitivity of memory span to vocoding of these stimuli. The order in which participants completed listening conditions had a small effect on memory span that could not account for the effect of listening condition. Repetition speed was fastest for digits, slower for words, and slowest for nonwords. On average, vocoding slowed repetition speed for all stimuli, but repetition speed was not predictive of individual memory span. Vocabulary and attention showed no correlation with memory span. CONCLUSIONS Our results replicated previous findings that low quality auditory input can impair short-term memory, and demonstrated that this impairment is sensitive to stimulus set. Using multiple stimulus sets in degraded listening conditions can isolate memory capacity (in digit span) from impaired item identification (in word and nonword span), which may help characterize the relationship between memory and speech recognition in difficult listening conditions.
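An eight-channel noise vocoder like the one described splits the signal into frequency bands, extracts each band's amplitude envelope, and uses it to modulate band-limited noise before summing the channels. A minimal SciPy sketch; the filter order, log-spaced band edges, and the absence of envelope low-pass smoothing are simplifying assumptions, not the study's exact parameters:

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(signal, fs, n_channels=8, lo=100.0, hi=7000.0):
    """Noise-vocode a signal: impose each band's amplitude envelope on
    noise filtered into the same band, then sum the channels."""
    edges = np.geomspace(lo, hi, n_channels + 1)  # log-spaced band edges
    noise = np.random.default_rng(0).normal(0.0, 1.0, signal.size)
    out = np.zeros(signal.size)
    for low, high in zip(edges[:-1], edges[1:]):
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        envelope = np.abs(hilbert(band))   # Hilbert amplitude envelope
        carrier = sosfilt(sos, noise)      # noise restricted to this band
        out += envelope * carrier
    return out

fs = 16000
t = np.arange(fs) / fs
stand_in = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)  # stand-in for speech
vocoded = noise_vocode(stand_in, fs)
```

The vocoded output preserves the temporal envelope in each band while discarding fine spectral detail, which is why it degrades identification of confusable words more than phonologically distinct digits.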
Affiliation(s)
- Adam K. Bosen
- Boys Town National Research Hospital, Omaha, NE, USA

12
Guang C, Lefkowitz E, Dillman-Hasso N, Brown VA, Strand JF. Recall of Speech is Impaired by Subsequent Masking Noise: A Replication of Rabbitt (1968) Experiment 2. Audit Percept Cogn 2020; 3:158-167. [PMID: 34240010] [DOI: 10.1080/25742442.2021.1896908]
Abstract
Introduction The presence of masking noise can impair speech intelligibility and increase the attentional and cognitive resources necessary to understand speech. The first study to demonstrate the negative cognitive effects of noisy speech found that participants had poorer recall for aurally presented digits early in a list when later digits were presented in noise relative to quiet (Rabbitt, 1968). However, despite being cited nearly 500 times and providing the foundation for a wealth of subsequent research on the topic, the original study has never been directly replicated. Methods This study replicated Rabbitt (1968) with a large online sample and tested its robustness to a variety of analytical and scoring techniques. Results We replicated Rabbitt's key finding that listening to speech in noise impairs recall for items that came earlier in the list. The results were consistent when we used the original analytical technique (an ANOVA) and a more powerful analytical technique (generalized linear mixed effects models) that was not available when the original paper was published. Discussion These findings support the claim that effortful listening can interfere with encoding or rehearsal of previously presented information.
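The subject-level analysis the replication revisits reduces to a paired contrast of per-subject recall across the quiet and noise conditions. The sketch below runs such a contrast on simulated data; all values and variable names are hypothetical, and the trial-level generalized mixed-effects analysis the authors also report is not reproduced here.

```python
# Simulated paired contrast of early-list recall in quiet vs. noise.
# Data are invented for illustration only, not the study's materials.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(42)
n_subjects = 100

# Per-subject proportion of early-list digits recalled in each condition,
# simulated as poorer when later digits are masked by noise.
recall_quiet = np.clip(rng.normal(0.80, 0.10, n_subjects), 0, 1)
recall_noise = np.clip(recall_quiet - rng.normal(0.10, 0.05, n_subjects), 0, 1)

t_stat, p_value = ttest_rel(recall_quiet, recall_noise)
mean_diff = np.mean(recall_quiet - recall_noise)
```

A within-subjects design like this is sensitive to small condition effects because each listener serves as their own control.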
Affiliation(s)
- Violet A Brown
- Washington University in St. Louis, Department of Psychological and Brain Sciences

13
Peelle JE. Listening Effort: How the Cognitive Consequences of Acoustic Challenge Are Reflected in Brain and Behavior. Ear Hear 2018; 39:204-214. [PMID: 28938250] [PMCID: PMC5821557] [DOI: 10.1097/aud.0000000000000494]
Abstract
Everyday conversation frequently includes challenges to the clarity of the acoustic speech signal, including hearing impairment, background noise, and foreign accents. Although an obvious problem is the increased risk of making word identification errors, extracting meaning from a degraded acoustic signal is also cognitively demanding, which contributes to increased listening effort. The concepts of cognitive demand and listening effort are critical in understanding the challenges listeners face in comprehension, which are not fully predicted by audiometric measures. In this article, the authors review converging behavioral, pupillometric, and neuroimaging evidence that understanding acoustically degraded speech requires additional cognitive support and that this cognitive load can interfere with other operations such as language processing and memory for what has been heard. Behaviorally, acoustic challenge is associated with increased errors in speech understanding, poorer performance on concurrent secondary tasks, more difficulty processing linguistically complex sentences, and reduced memory for verbal material. Measures of pupil dilation support the challenge associated with processing a degraded acoustic signal, indirectly reflecting an increase in neural activity. Finally, functional brain imaging reveals that the neural resources required to understand degraded speech extend beyond traditional perisylvian language networks, most commonly including regions of prefrontal cortex, premotor cortex, and the cingulo-opercular network. Far from being exclusively an auditory problem, acoustic degradation presents listeners with a systems-level challenge that requires the allocation of executive cognitive resources. An important point is that a number of dissociable processes can be engaged to understand degraded speech, including verbal working memory and attention-based performance monitoring. 
The specific resources required likely differ as a function of the acoustic, linguistic, and cognitive demands of the task, as well as individual differences in listeners' abilities. A greater appreciation of cognitive contributions to processing degraded speech is critical in understanding individual differences in comprehension ability, variability in the efficacy of assistive devices, and guiding rehabilitation approaches to reducing listening effort and facilitating communication.
Affiliation(s)
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in Saint Louis, Saint Louis, Missouri, USA

14
Ayasse ND, Wingfield A. A Tipping Point in Listening Effort: Effects of Linguistic Complexity and Age-Related Hearing Loss on Sentence Comprehension. Trends Hear 2018; 22:2331216518790907. [PMID: 30235973] [PMCID: PMC6154259] [DOI: 10.1177/2331216518790907]
Abstract
In recent years, there has been a growing interest in the relationship between effort and performance. Early formulations implied that, as the challenge of a task increases, individuals will exert more effort, with resultant maintenance of stable performance. We report an experiment in which normal-hearing young adults, normal-hearing older adults, and older adults with age-related mild-to-moderate hearing loss were tested for comprehension of recorded sentences that varied the comprehension challenge in two ways. First, sentences were constructed that expressed their meaning either with a simpler subject-relative syntactic structure or a more computationally demanding object-relative structure. Second, for each sentence type, an adjectival phrase was inserted that created either a short or long gap in the sentence between the agent performing an action and the action being performed. The measurement of pupil dilation as an index of processing effort showed effort to increase with task difficulty until a difficulty tipping point was reached. Beyond this point, the measurement of pupil size revealed a commitment of effort by the two groups of older adults who failed to keep pace with task demands as evidenced by reduced comprehension accuracy. We take these pupillometry data as revealing a complex relationship between task difficulty, effort, and performance that might not otherwise appear from task performance alone.
Affiliation(s)
- Nicole D Ayasse
- Department of Psychology and Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
- Arthur Wingfield
- Department of Psychology and Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA

15
Panza F, Lozupone M, Sardone R, Battista P, Piccininni M, Dibello V, La Montagna M, Stallone R, Venezia P, Liguori A, Giannelli G, Bellomo A, Greco A, Daniele A, Seripa D, Quaranta N, Logroscino G. Sensorial frailty: age-related hearing loss and the risk of cognitive impairment and dementia in later life. Ther Adv Chronic Dis 2018; 10:2040622318811000. [PMID: 31452865] [PMCID: PMC6700845] [DOI: 10.1177/2040622318811000]
Abstract
The peripheral hearing alterations and central auditory processing disorder (CAPD) associated with age-related hearing loss (ARHL) may impact cognitive disorders in older age. In older age, ARHL is also a significant marker for frailty, another age-related multidimensional clinical condition involving a nonspecific state of vulnerability, reduced multisystem physiological reserve, and decreased resistance to different stressors (i.e., sensorial impairments, psychosocial stress, diseases, injuries). The multidimensional nature of frailty requires an approach based on different pathogeneses, because this clinical condition may include sensorial, physical, social, nutritional, cognitive, and psychological phenotypes. In the present narrative review, cumulative epidemiological evidence from several longitudinal population-based studies suggests convincing links between peripheral ARHL and incident cognitive decline and dementia. Moreover, a few longitudinal case-control and population-based studies also suggest that age-related CAPD in ARHL may be central in determining an increased risk of incident cognitive decline, dementia, and Alzheimer's disease (AD). Cumulative meta-analytic evidence confirms cross-sectional and longitudinal associations of both peripheral ARHL and age-related CAPD with different domains of cognitive function, mild cognitive impairment, and dementia, while the association with dementia subtypes such as AD and vascular dementia remains unclear. However, ARHL may represent a modifiable condition and a possible target for secondary prevention of cognitive impairment in older age, social isolation, late-life depression, and frailty. Further research is required to determine whether broader hearing rehabilitative interventions, including coordinated counseling and environmental accommodations, could delay or halt cognitive and global decline in the oldest old with both ARHL and dementia.
Affiliation(s)
- Francesco Panza
- Department of Basic Medical Sciences, Neurosciences, and Sense Organs, Neurodegenerative Disease Unit, University of Bari ‘Aldo Moro’, Piazza Giulio Cesare 11, 70100, Bari, Italy
- Madia Lozupone
- Neurodegenerative Disease Unit, Department of Basic Medicine, Neuroscience, and Sense Organs, University of Bari Aldo Moro, Bari, Italy
- Rodolfo Sardone
- National Institute of Gastroenterology ‘Saverio de Bellis’, Research Hospital, Castellana Grotte, Bari, Italy
- Petronilla Battista
- Neurodegenerative Disease Unit, Department of Basic Medicine, Neuroscience, and Sense Organs, University of Bari Aldo Moro, Bari, Italy
- Istituti Clinici Scientifici Maugeri SPA SB, IRCCS, Institute of Cassano Murge, Bari, Italy
- Marco Piccininni
- Neurodegenerative Disease Unit, Department of Basic Medicine, Neuroscience, and Sense Organs, University of Bari Aldo Moro, Bari, Italy
- Vittorio Dibello
- National Institute of Gastroenterology ‘Saverio de Bellis’, Research Hospital, Castellana Grotte, Bari, Italy
- Interdisciplinary Department of Medicine (DIM), Section of Dentistry, University of Bari Aldo Moro, Bari, Italy
- Maddalena La Montagna
- Psychiatric Unit, Department of Clinical and Experimental Medicine, University of Foggia, Foggia, Italy
- Roberta Stallone
- Neurodegenerative Disease Unit, Department of Basic Medicine, Neuroscience, and Sense Organs, University of Bari Aldo Moro, Bari, Italy
- National Institute of Gastroenterology ‘Saverio de Bellis’, Research Hospital, Castellana Grotte, Bari, Italy
- Pietro Venezia
- Department of Prosthodontics, Section of Dentistry, University of Catania, Catania, Italy
- Angelo Liguori
- Neurodegenerative Disease Unit, Department of Basic Medicine, Neuroscience, and Sense Organs, University of Bari Aldo Moro, Bari, Italy
- Gianluigi Giannelli
- National Institute of Gastroenterology ‘Saverio de Bellis’, Research Hospital, Castellana Grotte, Bari, Italy
- Antonello Bellomo
- Psychiatric Unit, Department of Clinical and Experimental Medicine, University of Foggia, Foggia, Italy
- Antonio Greco
- Geriatric Unit, Fondazione IRCCS ‘Casa Sollievo della Sofferenza’, San Giovanni Rotondo, Foggia, Italy
- Antonio Daniele
- Institute of Neurology, Catholic University of Sacred Heart, Rome, Italy
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Davide Seripa
- Geriatric Unit, Fondazione IRCCS ‘Casa Sollievo della Sofferenza’, San Giovanni Rotondo, Foggia, Italy
- Nicola Quaranta
- Otolaryngology Unit, University of Bari Aldo Moro, Bari, Italy
- Giancarlo Logroscino
- Neurodegenerative Disease Unit, Department of Basic Medicine, Neuroscience, and Sense Organs, University of Bari Aldo Moro, Bari, Italy
- Neurodegenerative Disease Unit, Department of Clinical Research in Neurology, University of Bari Aldo Moro, ‘Pia Fondazione Cardinale G. Panico’, Tricase, Lecce, Italy

16
Claes AJ, Van de Heyning P, Gilles A, Hofkens-Van den Brandt A, Van Rompaey V, Mertens G. Impaired Cognitive Functioning in Cochlear Implant Recipients Over the Age of 55 Years: A Cross-Sectional Study Using the Repeatable Battery for the Assessment of Neuropsychological Status for Hearing-Impaired Individuals (RBANS-H). Front Neurosci 2018; 12:580. [PMID: 30197584] [PMCID: PMC6117382] [DOI: 10.3389/fnins.2018.00580]
Abstract
Primary Objective: To compare cognitive functioning between experienced, unilateral cochlear implant (CI) recipients and normal-hearing (NH) controls by means of the Repeatable Battery for the Assessment of Neuropsychological Status for Hearing-impaired individuals (RBANS-H). Methods: Sixty-one post-lingually and bilaterally severely hearing-impaired CI recipients (median age: 71.0, range: 58.3 to 93.9 years) with at least 1 year of CI experience (median: 12.4, range: 1.1 to 18.6 years) and 81 NH control participants (median age: 69.9, range: 50.1 to 87.1 years) took part in this cross-sectional study. The RBANS-H was administered, along with an audiometric assessment including best-aided speech audiometry in quiet (monosyllabic words) and in noise (Leuven Intelligibility Sentences test). Results: The RBANS-H performance of the CI recipients (mean: 88.1 ± 14.9) was significantly poorer than that of the NH participants (mean: 100.5 ± 13.2) after correction for age, sex, and education differences (general linear model: p = 0.001). The mean difference, corrected for the effects of these three demographic factors, was 8.8 (± 2.5) points. Additionally, in both groups a significant correlation was established between overall cognition and speech perception, both in quiet and in noise, independently of age. Conclusion: Experienced, unilateral CI recipients present subnormal cognitive functioning beyond the effects of age, sex, and education. This has implications for auditory rehabilitation after CI and may highlight the need for additional cognitive rehabilitation in the long term after implantation. Long-term prospective and longitudinal investigations are imperative to improve our understanding of cognitive aging in severely hearing-impaired individuals receiving CIs and its association with CI outcomes.
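A group comparison corrected for demographic covariates, as in the general linear model reported in this abstract, can be sketched as an ordinary least-squares fit with the covariates in the design matrix. The data below are simulated and the effect sizes are illustrative only, not the study's values.

```python
# Sketch of a covariate-adjusted group comparison (simulated data).
import numpy as np

rng = np.random.default_rng(7)
n = 142  # 61 CI recipients + 81 controls, matching the reported sample sizes
group = np.r_[np.ones(61), np.zeros(81)]           # 1 = CI recipient
age = rng.uniform(50, 94, n)
sex = rng.integers(0, 2, n).astype(float)
edu = rng.integers(8, 19, n).astype(float)         # hypothetical years of education

# Simulated cognitive score: lower for the CI group, declining with age
score = 100 - 8.8 * group - 0.3 * (age - 70) + rng.normal(0, 10, n)

# Design matrix: intercept, group indicator, and the three covariates
X = np.column_stack([np.ones(n), group, age, sex, edu])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
group_effect = beta[1]   # group difference adjusted for age, sex, and education
```

Including the covariates in the same regression yields a group coefficient that is already adjusted for demographic differences, which is what "correction for age, sex, and education" amounts to.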
Affiliation(s)
- Annes J Claes
- Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital, Antwerp, Belgium; Experimental Lab of Translational Neurosciences and Dento-Otolaryngology, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Paul Van de Heyning
- Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital, Antwerp, Belgium; Experimental Lab of Translational Neurosciences and Dento-Otolaryngology, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Annick Gilles
- Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital, Antwerp, Belgium; Experimental Lab of Translational Neurosciences and Dento-Otolaryngology, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium; Department of Human and Social Welfare, University College Ghent, Ghent, Belgium
- Vincent Van Rompaey
- Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital, Antwerp, Belgium; Experimental Lab of Translational Neurosciences and Dento-Otolaryngology, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Griet Mertens
- Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital, Antwerp, Belgium; Experimental Lab of Translational Neurosciences and Dento-Otolaryngology, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium

17
Koeritzer MA, Rogers CS, Van Engen KJ, Peelle JE. The Impact of Age, Background Noise, Semantic Ambiguity, and Hearing Loss on Recognition Memory for Spoken Sentences. J Speech Lang Hear Res 2018; 61:740-751. [PMID: 29450493] [PMCID: PMC5963044] [DOI: 10.1044/2017_jslhr-h-17-0077]
Abstract
PURPOSE The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension. METHOD We tested 30 young adults and 30 older adults. Participants heard lists of sentences in quiet and in 8-talker babble at signal-to-noise ratios of +15 dB and +5 dB, which increased acoustic challenge but left the speech largely intelligible. Half of the sentences contained semantically ambiguous words to additionally manipulate cognitive challenge. Following each list, participants performed a visual recognition memory task in which they viewed written sentences and indicated whether they remembered hearing the sentence previously. RESULTS Recognition memory (indexed by d') was poorer for acoustically challenging sentences, poorer for sentences containing ambiguous words, and differentially poorer for noisy high-ambiguity sentences. Similar patterns were observed for Z-transformed response time data. There were no main effects of age, but age interacted with both acoustic clarity and semantic ambiguity such that older adults' recognition memory was poorer for acoustically degraded high-ambiguity sentences than the young adults'. Within the older adult group, exploratory correlation analyses suggested that poorer hearing ability was associated with poorer recognition memory for sentences in noise, and better verbal working memory was associated with better recognition memory for sentences in noise. CONCLUSIONS Our results demonstrate listeners' reliance on domain-general cognitive processes when listening to acoustically challenging speech, even when speech is highly intelligible. Acoustic challenge and semantic ambiguity both reduce the accuracy of listeners' recognition memory for spoken sentences. SUPPLEMENTAL MATERIALS https://doi.org/10.23641/asha.5848059.
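Recognition memory indexed by d' combines hit and false-alarm rates via the inverse normal transform. A minimal scoring sketch, assuming a standard log-linear correction for extreme rates (one common convention, not necessarily the authors' scoring choice):

```python
# Sensitivity index d' from hit and false-alarm counts.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction keeps rates away from 0 and 1 so that
    # the z-transform (inverse normal CDF) stays finite.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)
```

Higher d' means better discrimination of old from new sentences; a listener responding at chance yields d' near zero regardless of response bias.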
Affiliation(s)
- Margaret A Koeritzer
- Program in Audiology and Communication Sciences, Washington University in St. Louis, MO
- Chad S Rogers
- Department of Otolaryngology, Washington University in St. Louis, MO
- Kristin J Van Engen
- Department of Psychological and Brain Sciences and Program in Linguistics, Washington University in St. Louis, MO
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, MO

18
Abstract
The objective of this study was to review research regarding sensory and cognitive interactions in older adults published since 2009, the approximate date of the most recent reviews on this topic. After an electronic database search of articles published in English since 2009 on measures of hearing and cognition or vision and cognition in older adults, a total of 437 articles were identified. Screening by title and abstract for appropriateness of topic and for articles presenting original research in peer-reviewed journals reduced the final number of articles reviewed to 34. These articles were qualitatively evaluated and synthesized with the existing knowledge base. Additional evidence has been obtained since 2009 associating declines in vision, hearing, or both with declines in cognition among older adults. The observed sensory-cognitive associations are generally stronger when more than one sensory domain is measured and when the sensory measures involve more than simple threshold sensitivity. Evidence continues to accumulate supporting a link between decline in sensory function and cognitive decline in older adults.
19
Processing Mechanisms in Hearing-Impaired Listeners: Evidence from Reaction Times and Sentence Interpretation. Ear Hear 2016; 37:e391-e401. [PMID: 27748664] [DOI: 10.1097/aud.0000000000000339]
Abstract
OBJECTIVE The authors aimed to determine whether hearing impairment affects sentence comprehension beyond phoneme or word recognition (i.e., on the sentence level), and to distinguish grammatically induced processing difficulties in structurally complex sentences from perceptual difficulties associated with listening to degraded speech. Effects of hearing impairment or speech in noise were expected to reflect hearer-specific speech recognition difficulties. Any additional processing time caused by the sustained perceptual challenges across the sentence may either be independent of or interact with top-down processing mechanisms associated with grammatical sentence structure. DESIGN Forty-nine participants listened to canonical subject-initial or noncanonical object-initial sentences that were presented either in quiet or in noise. Twenty-four participants had mild-to-moderate hearing impairment and received hearing-loss-specific amplification. Twenty-five participants were age-matched peers with normal hearing status. Reaction times were measured on-line at syntactically critical processing points as well as two control points to capture differences in processing mechanisms. An off-line comprehension task served as an additional indicator of sentence (mis)interpretation, and enforced syntactic processing. RESULTS The authors found general effects of hearing impairment and speech in noise that negatively affected perceptual processing, and an effect of word order, where complex grammar locally caused processing difficulties for the noncanonical sentence structure. Listeners with hearing impairment were hardly affected by noise at the beginning of the sentence, but were affected markedly toward the end of the sentence, indicating a sustained perceptual effect of speech recognition. Comprehension of sentences with noncanonical word order was negatively affected by degraded signals even after sentence presentation. 
CONCLUSION Hearing impairment adds perceptual processing load during sentence processing, but affects grammatical processing beyond the word level to the same degree as in normal hearing, with minor differences in processing mechanisms. The data contribute to our understanding of individual differences in speech perception and language understanding. The authors interpret their results within the framework of the Ease of Language Understanding model.
20
Vaden KI, Teubner-Rhodes S, Ahlstrom JB, Dubno JR, Eckert MA. Cingulo-opercular activity affects incidental memory encoding for speech in noise. Neuroimage 2017. [PMID: 28624645] [DOI: 10.1016/j.neuroimage.2017.06.028]
Abstract
Correctly understood speech in difficult listening conditions is often difficult to remember. A long-standing hypothesis for this observation is that the engagement of cognitive resources to aid speech understanding can limit resources available for memory encoding. This hypothesis is consistent with evidence that speech presented in difficult conditions typically elicits greater activity throughout cingulo-opercular regions of frontal cortex that are proposed to optimize task performance through adaptive control of behavior and tonic attention. However, successful memory encoding of items for delayed recognition memory tasks is consistently associated with increased cingulo-opercular activity when perceptual difficulty is minimized. The current study used a delayed recognition memory task to test competing predictions that memory encoding for words is enhanced or limited by the engagement of cingulo-opercular activity during challenging listening conditions. An fMRI experiment was conducted with twenty healthy adult participants who performed a word identification in noise task that was immediately followed by a delayed recognition memory task. Consistent with previous findings, word identification trials in the poorer signal-to-noise ratio condition were associated with increased cingulo-opercular activity and poorer recognition memory scores on average. However, cingulo-opercular activity decreased for correctly identified words in noise that were not recognized in the delayed memory test. These results suggest that memory encoding in difficult listening conditions is poorer when elevated cingulo-opercular activity is not sustained. Although increased attention to speech when presented in difficult conditions may detract from more active forms of memory maintenance (e.g., sub-vocal rehearsal), we conclude that task performance monitoring and/or elevated tonic attention supports incidental memory encoding in challenging listening conditions.
Affiliation(s)
- Kenneth I Vaden
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, United States
- Susan Teubner-Rhodes
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, United States
- Jayne B Ahlstrom
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, United States
- Judy R Dubno
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, United States
- Mark A Eckert
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, United States

21
Ward CM, Rogers CS, Van Engen KJ, Peelle JE. Effects of Age, Acoustic Challenge, and Verbal Working Memory on Recall of Narrative Speech. Exp Aging Res 2016; 42:97-111. [PMID: 26683044] [DOI: 10.1080/0361073x.2016.1108785]
Abstract
BACKGROUND/STUDY CONTEXT A common goal during speech comprehension is to remember what we have heard. Encoding speech into long-term memory frequently requires processes such as verbal working memory that may also be involved in processing degraded speech. Here the authors tested whether young and older adult listeners' memory for short stories was worse when the stories were acoustically degraded, or whether the additional contextual support provided by a narrative would protect against these effects. METHODS The authors tested 30 young adults (aged 18-28 years) and 30 older adults (aged 65-79 years) with good self-reported hearing. Participants heard short stories that were presented as normal (unprocessed) speech or acoustically degraded using a noise vocoding algorithm with 24 or 16 channels. The degraded stories were still fully intelligible. Following each story, participants were asked to repeat the story in as much detail as possible. Recall was scored using a modified idea unit scoring approach, which included separately scoring hierarchical levels of narrative detail. RESULTS Memory for acoustically degraded stories was significantly worse than for normal stories at some levels of narrative detail. Older adults' memory for the stories was significantly worse overall, but there was no interaction between age and acoustic clarity or level of narrative detail. Verbal working memory (assessed by reading span) significantly correlated with recall accuracy for both young and older adults, whereas hearing ability (better ear pure tone average) did not. CONCLUSION The present findings are consistent with a framework in which the additional cognitive demands caused by a degraded acoustic signal use resources that would otherwise be available for memory encoding for both young and older adults. Verbal working memory is a likely candidate for supporting both of these processes.
Affiliation(s)
- Caitlin M Ward
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri, USA
- Chad S Rogers
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri, USA
- Kristin J Van Engen
- Department of Psychology, Washington University in St. Louis, St. Louis, Missouri, USA
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri, USA

22
Wöstmann M, Obleser J. Acoustic Detail But Not Predictability of Task-Irrelevant Speech Disrupts Working Memory. Front Hum Neurosci 2016; 10:538. [PMID: 27826235] [PMCID: PMC5078496] [DOI: 10.3389/fnhum.2016.00538]
Abstract
Attended speech is comprehended better not only if more acoustic detail is available, but also if it is semantically highly predictable. But can more acoustic detail or higher predictability turn into disadvantages and distract a listener if the speech signal is to be ignored? Also, does the degree of distraction increase for older listeners who typically show a decline in attentional control ability? Adopting the irrelevant-speech paradigm, we tested whether younger (age 23–33 years) and older (60–78 years) listeners’ working memory for the serial order of spoken digits would be disrupted by the presentation of task-irrelevant speech varying in its acoustic detail (using noise-vocoding) and its semantic predictability (of sentence endings). More acoustic detail, but not higher predictability, of task-irrelevant speech aggravated memory interference. This pattern of results did not differ between younger and older listeners, despite generally lower performance in older listeners. Our findings suggest that the focus of attention determines how acoustics and predictability affect the processing of speech: first, as more acoustic detail is known to enhance speech comprehension and memory for speech, we here demonstrate that more acoustic detail of ignored speech enhances the degree of distraction. Second, while higher predictability of attended speech is known to also enhance speech comprehension under acoustically adverse conditions, higher predictability of ignored speech is unable to exert any distracting effect upon working memory performance in younger or older listeners. These findings suggest that features that make attended speech easier to comprehend do not necessarily enhance distraction by ignored speech.
Affiliation(s)
- Malte Wöstmann, Department of Psychology, University of Lübeck, Lübeck, Germany
- Jonas Obleser, Department of Psychology, University of Lübeck, Lübeck, Germany

23
Thiel CM, Özyurt J, Nogueira W, Puschmann S. Effects of Age on Long Term Memory for Degraded Speech. Front Hum Neurosci 2016; 10:473. PMID: 27708570; PMCID: PMC5030220; DOI: 10.3389/fnhum.2016.00473.
Abstract
Prior research suggests that acoustical degradation impacts encoding of items into memory, especially in elderly subjects. We here aimed to investigate whether acoustically degraded items that are initially encoded into memory are more prone to forgetting as a function of age. Young and old participants were tested with a vocoded and unvocoded serial list learning task involving immediate and delayed free recall. We found that degraded auditory input increased forgetting of previously encoded items, especially in older participants. We further found that working memory capacity predicted forgetting of degraded information in young participants. In old participants, verbal IQ was the most important predictor for forgetting acoustically degraded information. Our data provide evidence that acoustically degraded information, even if encoded, is especially vulnerable to forgetting in old age.
Affiliation(s)
- Christiane M Thiel, Biological Psychology Lab, Cluster of Excellence "Hearing4all", Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Research Center Neurosensory Science, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Jale Özyurt, Biological Psychology Lab, Cluster of Excellence "Hearing4all", Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Waldo Nogueira, Cluster of Excellence "Hearing4all", Department of Otolaryngology, Medical University Hannover, Hannover, Germany
- Sebastian Puschmann, Biological Psychology Lab, Cluster of Excellence "Hearing4all", Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany

24
Peelle JE, Wingfield A. The Neural Consequences of Age-Related Hearing Loss. Trends Neurosci 2016; 39:486-497. PMID: 27262177; DOI: 10.1016/j.tins.2016.05.001.
Abstract
During hearing, acoustic signals travel up the ascending auditory pathway from the cochlea to auditory cortex; efferent connections provide descending feedback. In human listeners, although auditory and cognitive processing have sometimes been viewed as separate domains, a growing body of work suggests they are intimately coupled. Here, we review the effects of hearing loss on neural systems supporting spoken language comprehension, beginning with age-related physiological decline. We suggest that listeners recruit domain general executive systems to maintain successful communication when the auditory signal is degraded, but that this compensatory processing has behavioral consequences: even relatively mild levels of hearing loss can lead to cascading cognitive effects that impact perception, comprehension, and memory, leading to increased listening effort during speech comprehension.
Affiliation(s)
- Jonathan E Peelle, Department of Otolaryngology, Washington University in St Louis, St Louis, MO, USA
- Arthur Wingfield, Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA

25
|
Amichetti NM, White AG, Wingfield A. Multiple Solutions to the Same Problem: Utilization of Plausibility and Syntax in Sentence Comprehension by Older Adults with Impaired Hearing. Front Psychol 2016; 7:789. PMID: 27303346; PMCID: PMC4884746; DOI: 10.3389/fpsyg.2016.00789.
Abstract
A fundamental question in psycholinguistic theory is whether equivalent success in sentence comprehension may come about by different underlying operations. Of special interest is whether adult aging, especially when accompanied by reduced hearing acuity, may shift the balance of reliance on formal syntax vs. plausibility in determining sentence meaning. In two experiments participants were asked to identify the thematic roles in grammatical sentences that contained either plausible or implausible semantic relations. Comprehension of sentence meanings was indexed by the ability to correctly name the agent or the recipient of an action represented in the sentence. In Experiment 1 young and older adults’ comprehension was tested for plausible and implausible sentences with the meaning expressed with either an active-declarative or a passive syntactic form. In Experiment 2 comprehension performance was examined for young adults with age-normal hearing, older adults with good hearing acuity, and age-matched older adults with mild-to-moderate hearing loss for plausible or implausible sentences with meaning expressed with either a subject-relative (SR) or an object-relative (OR) syntactic structure. Experiment 1 showed that the likelihood of interpreting a sentence according to its literal meaning was reduced when that meaning expressed an implausible relationship. Experiment 2 showed that this likelihood was further decreased for OR as compared to SR sentences, and especially so for older adults whose hearing impairment added to the perceptual challenge. Experiment 2 also showed that working memory capacity as measured with a letter-number sequencing task contributed to the likelihood that listeners would base their comprehension responses on the literal syntax even when this processing scheme yielded an implausible meaning. 
Taken together, the results of both experiments support the postulate that listeners may use more than a single uniform processing strategy for successful sentence comprehension, with the existence of these alternative solutions only revealed when literal syntax and plausibility do not coincide.
Affiliation(s)
- Nicole M Amichetti, Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
- Alison G White, Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
- Arthur Wingfield, Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA

26
|
Verhaegen C, Poncelet M. The Effects of Aging on the Components of Auditory-Verbal Short-Term Memory. Psychol Belg 2015; 55:175-195. PMID: 30479423; PMCID: PMC5854219; DOI: 10.5334/pb.bm.
Abstract
This study aimed at exploring the effects of aging on the multiple components of the auditory-verbal short-term memory (STM). Participants of 45-54, 55-64, 65-74 and 75-84 years of age were presented STM tasks assessing short-term retention of order and item information, and of phonological and lexical-semantic information separately. Because older participants often present reduced hearing levels, we sought to control for an effect of hearing status on performance on STM tasks. Participants' hearing thresholds were measured with a pure-tone audiometer. The results showed age-related effects on all STM components. However, after hearing status was controlled for in analyses of covariance, the age-related differences became non-significant for all STM processes. The fact that age-related hearing loss may in large part explain decreases in performance on STM tasks with aging is discussed.
Affiliation(s)
- Clémence Verhaegen, Department of Psychology: Cognition and Behavior, University of Liège, Liège, Belgium
- Martine Poncelet, Department of Psychology: Cognition and Behavior, University of Liège, Liège, Belgium

27
|
Wayne RV, Johnsrude IS. A review of causal mechanisms underlying the link between age-related hearing loss and cognitive decline. Ageing Res Rev 2015; 23:154-66. PMID: 26123097; DOI: 10.1016/j.arr.2015.06.002.
Abstract
Accumulating evidence points to a link between age-related hearing loss and cognitive decline, but their relationship is not clear. Does one cause the other, or does some third factor produce both? The answer has critical implications for prevention, rehabilitation, and health policy but has been difficult to establish for several reasons. First, determining a causal relationship in natural, correlational samples is problematic, and hearing and cognition are difficult to measure independently. Here, we critically review the evidence for a link between hearing loss and cognitive decline. We conclude that the evidence is convincing, but that the effects are small when hearing is measured audiometrically. We review four different directional hypotheses that have been offered as explanations for such a link, and conclude that no single hypothesis is sufficient. We introduce a framework that highlights that hearing and cognition rely on shared neurocognitive resources, and relate to each other in several different ways. We also discuss interventions for sensory and cognitive decline that may permit more causal inferences.
28
Wingfield A, Amichetti NM, Lash A. Cognitive aging and hearing acuity: modeling spoken language comprehension. Front Psychol 2015; 6:684. PMID: 26124724; PMCID: PMC4462993; DOI: 10.3389/fpsyg.2015.00684.
Abstract
The comprehension of spoken language has been characterized by a number of "local" theories that have focused on specific aspects of the task: models of word recognition, models of selective attention, accounts of thematic role assignment at the sentence level, and so forth. The ease of language understanding (ELU) model (Rönnberg et al., 2013) stands as one of the few attempts to offer a fully encompassing framework for language understanding. In this paper we discuss interactions between perceptual, linguistic, and cognitive factors in spoken language understanding. Central to our presentation is an examination of aspects of the ELU model that apply especially to spoken language comprehension in adult aging, where speed of processing, working memory capacity, and hearing acuity are often compromised. We discuss, in relation to the ELU model, conceptions of working memory and its capacity limitations, the use of linguistic context to aid in speech recognition and the importance of inhibitory control, and language comprehension at the sentence level. Throughout this paper we offer a constructive look at the ELU model; where it is strong and where there are gaps to be filled.
Affiliation(s)
- Arthur Wingfield, Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA

29
|
Rönnberg J, Hygge S, Keidser G, Rudner M. The effect of functional hearing loss and age on long- and short-term visuospatial memory: evidence from the UK Biobank resource. Front Aging Neurosci 2014; 6:326. PMID: 25538617; PMCID: PMC4260513; DOI: 10.3389/fnagi.2014.00326.
Abstract
The UK Biobank offers cross-sectional epidemiological data collected on >500,000 individuals in the UK between 40 and 70 years of age. Using the UK Biobank data, the aim of this study was to investigate the effects of functional hearing loss and hearing aid usage on visuospatial memory function. This selection of variables resulted in a sub-sample of 138,098 participants after discarding extreme values. A digit triplets functional hearing test was used to divide the participants into three groups: poor, insufficient and normal hearers. We found negative relationships between functional hearing loss and both visuospatial working memory (i.e., a card pair matching task) and visuospatial, episodic long-term memory (i.e., a prospective memory task), with the strongest association for episodic long-term memory. The use of hearing aids showed a small positive effect for working memory performance for the poor hearers, but did not have any influence on episodic long-term memory. Age also showed strong main effects for both memory tasks and interacted with gender and education for the long-term memory task. Broader theoretical implications based on a memory systems approach will be discussed and compared to theoretical alternatives.
Affiliation(s)
- Jerker Rönnberg, Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Staffan Hygge, Environmental Psychology, Faculty of Engineering and Sustainable Development, University of Gävle, Gävle, Sweden
- Mary Rudner, Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden

30
|
Monitoring the capacity of working memory: executive control and effects of listening effort. Mem Cognit 2014; 41:839-49. PMID: 23400826; DOI: 10.3758/s13421-013-0302-0.
Abstract
In two experiments, we used an interruption-and-recall (IAR) task to explore listeners' ability to monitor the capacity of working memory as new information arrived in real time. In this task, listeners heard recorded word lists with instructions to interrupt the input at the maximum point that would still allow for perfect recall. Experiment 1 demonstrated that the most commonly selected segment size closely matched participants' memory span, as measured in a baseline span test. Experiment 2 showed that reducing the sound level of presented word lists to a suprathreshold but effortful listening level disrupted the accuracy of matching selected segment sizes with participants' memory spans. The results are discussed in terms of whether online capacity monitoring may be subsumed under other, already enumerated working memory executive functions (inhibition, set shifting, and memory updating).
31
Cousins KAQ, Dar H, Wingfield A, Miller P. Acoustic masking disrupts time-dependent mechanisms of memory encoding in word-list recall. Mem Cognit 2014; 42:622-38. PMID: 24838269; PMCID: PMC4030694; DOI: 10.3758/s13421-013-0377-7.
Abstract
Recall of recently heard words is affected by the clarity of presentation: Even if all words are presented with sufficient clarity for successful recognition, those that are more difficult to hear are less likely to be recalled. Such a result demonstrates that memory processing depends on more than whether a word is simply "recognized" versus "not recognized." More surprising is that, when a single item in a list of spoken words is acoustically masked, prior words that were heard with full clarity are also less likely to be recalled. To account for such a phenomenon, we developed the linking-by-active-maintenance model (LAMM). This computational model of perception and encoding predicts that these effects will be time dependent. Here we challenged our model by investigating whether and how the impact of acoustic masking on memory depends on presentation rate. We found that a slower presentation rate causes a more disruptive impact of stimulus degradation on prior, clearly heard words than does a fast rate. These results are unexpected according to prior theories of effortful listening, but we demonstrated that they can be accounted for by LAMM.
Affiliation(s)
- Katheryn A Q Cousins, Volen National Center for Complex Systems, Brandeis University, Waltham, MA 02454-9110, USA

32
Erb J, Obleser J. Upregulation of cognitive control networks in older adults' speech comprehension. Front Syst Neurosci 2013; 7:116. PMID: 24399939; PMCID: PMC3871967; DOI: 10.3389/fnsys.2013.00116.
Abstract
Speech comprehension abilities decline with age and with age-related hearing loss, but it is unclear how this decline expresses in terms of central neural mechanisms. The current study examined neural speech processing in a group of older adults (aged 56–77, n = 16, with varying degrees of sensorineural hearing loss), and compared them to a cohort of young adults (aged 22–31, n = 30, self-reported normal hearing). In a functional MRI experiment, listeners heard and repeated back degraded sentences (4-band vocoded, where the temporal envelope of the acoustic signal is preserved, while the spectral information is substantially degraded). Behaviorally, older adults adapted to degraded speech at the same rate as young listeners, although their overall comprehension of degraded speech was lower. Neurally, both older and young adults relied on the left anterior insula for degraded more than clear speech perception. However, anterior insula engagement in older adults was dependent on hearing acuity. Young adults additionally employed the anterior cingulate cortex (ACC). Interestingly, this age group × degradation interaction was driven by a reduced dynamic range in older adults who displayed elevated levels of ACC activity for both degraded and clear speech, consistent with a persistent upregulation in cognitive control irrespective of task difficulty. For correct speech comprehension, older adults relied on the middle frontal gyrus in addition to a core speech comprehension network recruited by younger adults suggestive of a compensatory mechanism. Taken together, the results indicate that older adults increasingly recruit cognitive control networks, even under optimal listening conditions, at the expense of these systems’ dynamic range.
Affiliation(s)
- Julia Erb, Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jonas Obleser, Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

33
|
Verhaegen C, Collette F, Majerus S. The impact of aging and hearing status on verbal short-term memory. Aging, Neuropsychology, and Cognition 2013; 21:464-82. PMID: 24007209; DOI: 10.1080/13825585.2013.832725.
Abstract
The aim of this study is to assess the impact of hearing status on age-related decrease in verbal short-term memory (STM) performance. This was done by administering a battery of verbal STM tasks to elderly and young adult participants matched for hearing thresholds, as well as to young normal-hearing control participants. The matching procedure allowed us to assess the importance of hearing loss as an explanatory factor of age-related STM decline. We observed that elderly participants and hearing-matched young participants showed equal levels of performance in all verbal STM tasks, and performed overall lower than the normal-hearing young control participants. This study provides evidence for recent theoretical accounts considering reduced hearing level as an important explanatory factor of poor auditory-verbal STM performance in older adults.
Affiliation(s)
- Clémence Verhaegen, Department of Psychology: Cognition and Behavior, University of Liège, Liège, Belgium

34
|
Zekveld AA, Festen JM, Kramer SE. Task difficulty differentially affects two measures of processing load: the pupil response during sentence processing and delayed cued recall of the sentences. J Speech Lang Hear Res 2013; 56:1156-1165. PMID: 23785182; DOI: 10.1044/1092-4388(2012/12-0058).
Abstract
PURPOSE: In this study, the authors assessed the influence of masking level (29% or 71% sentence perception) and test modality on the processing load during language perception as reflected by the pupil response. In addition, the authors administered a delayed cued stimulus recall test to examine whether processing load affected the encoding of the stimuli in memory. METHOD: Participants performed speech and text reception threshold tests, during which the pupil response was measured. In the cued recall test, the first half of correctly perceived sentences was presented, and participants were asked to complete the sentences. Reading and listening span tests of working memory capacity were presented as well. RESULTS: Regardless of test modality, the pupil response indicated higher processing load in the 29% condition than in the 71% correct condition. Cued recall was better for the 29% condition. CONCLUSIONS: The consistent effect of masking level on the pupil response during listening and reading supports the validity of the pupil response as a measure of processing load during language perception. The absent relation between pupil response and cued recall may suggest that cued recall is not directly related to processing load, as reflected by the pupil response.
Affiliation(s)
- Adriana A Zekveld, The EMGO+ Institute for Health and Care Research, VU University Medical Center, Amsterdam, the Netherlands

35
|
Strauß A, Kotz SA, Obleser J. Narrowed Expectancies under Degraded Speech: Revisiting the N400. J Cogn Neurosci 2013; 25:1383-95. DOI: 10.1162/jocn_a_00389.
Abstract
Under adverse listening conditions, speech comprehension profits from the expectancies that listeners derive from the semantic context. However, the neurocognitive mechanisms of this semantic benefit are unclear: How are expectancies formed from context and adjusted as a sentence unfolds over time under various degrees of acoustic degradation? In an EEG study, we modified auditory signal degradation by applying noise-vocoding (severely degraded: four-band, moderately degraded: eight-band, and clear speech). Orthogonal to that, we manipulated the extent of expectancy: strong or weak semantic context (±con) and context-based typicality of the sentence-last word (high or low: ±typ). This allowed calculation of two distinct effects of expectancy on the N400 component of the evoked potential. The sentence-final N400 effect was taken as an index of the neural effort of automatic word-into-context integration; it varied in peak amplitude and latency with signal degradation and was not reliably observed in response to severely degraded speech. Under clear speech conditions in a strong context, typical and untypical sentence completions seemed to fulfill the neural prediction, as indicated by N400 reductions. In response to moderately degraded signal quality, however, the formed expectancies appeared more specific: Only typical (+con +typ), but not the less typical (+con −typ) context–word combinations led to a decrease in the N400 amplitude. The results show that adverse listening “narrows,” rather than broadens, the expectancies about the perceived speech signal: limiting the perceptual evidence forces the neural system to rely on signal-driven expectancies, rather than more abstract expectancies, while a sentence unfolds over time.
36
Classon E, Rudner M, Rönnberg J. Working memory compensates for hearing related phonological processing deficit. J Commun Disord 2013; 46:17-29. PMID: 23157731; DOI: 10.1016/j.jcomdis.2012.10.001.
Abstract
Acquired hearing impairment is associated with gradually declining phonological representations. According to the Ease of Language Understanding (ELU) model, poorly defined representations lead to mismatch in phonologically challenging tasks. To resolve the mismatch, reliance on working memory capacity (WMC) increases. This study investigated whether WMC modulated performance in a phonological task in individuals with hearing impairment. A visual rhyme judgment task with congruous or incongruous orthography, followed by an incidental episodic recognition memory task, was used. In participants with hearing impairment, WMC modulated both rhyme judgment performance and recognition memory in the orthographically similar non-rhyming condition; those with high WMC performed exceptionally well in the judgment task, but later recognized few of the words. For participants with hearing impairment and low WMC the pattern was reversed; they performed poorly in the judgment task but later recognized a surprisingly large proportion of the words. Results indicate that good WMC can compensate for the negative impact of auditory deprivation on phonological processing abilities by allowing for efficient use of phonological processing skills. They also suggest that individuals with hearing impairment and low WMC may use a non-phonological approach to written words, which can have the beneficial side effect of improving memory encoding. LEARNING OUTCOMES: Readers will be able to: (1) describe cognitive processes involved in rhyme judgment, (2) explain how acquired hearing impairment affects phonological processing, and (3) discuss how reading strategies at encoding impact memory performance.
Affiliation(s)
- Elisabet Classon, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, SE-581 83 Linköping, Sweden

37
Abstract
How does acoustic degradation affect the neural mechanisms of working memory? Enhanced alpha oscillations (8-13 Hz) during retention of items in working memory are often interpreted to reflect increased demands on storage and inhibition. We hypothesized that auditory signal degradation poses an additional challenge to human listeners partly because it draws on the same neural mechanisms. In an adapted Sternberg paradigm, auditory memory load and acoustic degradation were parametrically varied and the magnetoencephalographic response was analyzed in the time-frequency domain. Notably, during the stimulus-free delay interval, alpha power monotonically increased at central-parietal sensors as functions of memory load (higher alpha power with more memory load) and of acoustic degradation (also higher alpha power with more severe acoustic degradation). This alpha effect was superadditive when highest load was combined with most severe degradation. Moreover, alpha oscillatory dynamics during stimulus-free delay were predictive of response times to the probe item. Source localization of alpha power during stimulus-free delay indicated that alpha generators in right parietal, cingulate, supramarginal, and superior temporal cortex were sensitive to combined memory load and acoustic degradation. In summary, both challenges of memory load and acoustic degradation increase activity in a common alpha-frequency network. The results set the stage for future studies on how chronic or acute degradations of sensory input affect mechanisms of executive control.
38
Auditory skills and brain morphology predict individual differences in adaptation to degraded speech. Neuropsychologia 2012; 50:2154-64. DOI: 10.1016/j.neuropsychologia.2012.05.013.
39
Abstract
Hearing loss is one of the most common complaints in adults over the age of 60 and a major contributor to difficulties in speech comprehension. To examine the effects of hearing ability on the neural processes supporting spoken language processing in humans, we used functional magnetic resonance imaging to monitor brain activity while older adults with age-normal hearing listened to sentences that varied in their linguistic demands. Individual differences in hearing ability predicted the degree of language-driven neural recruitment during auditory sentence comprehension in bilateral superior temporal gyri (including primary auditory cortex), thalamus, and brainstem. In a second experiment, we examined the relationship of hearing ability to cortical structural integrity using voxel-based morphometry, demonstrating a significant linear relationship between hearing ability and gray matter volume in primary auditory cortex. Together, these results suggest that even moderate declines in peripheral auditory acuity lead to a systematic downregulation of neural activity during the processing of higher-level aspects of speech, and may also contribute to loss of gray matter volume in primary auditory cortex. More generally, these findings support a resource-allocation framework in which individual differences in sensory ability help define the degree to which brain regions are recruited in service of a particular task.