1.
Shen J, Sun J, Zhang Z, Sun B, Li H, Liu Y. The Effect of Hearing Loss and Working Memory Capacity on Context Use and Reliance on Context in Older Adults. Ear Hear 2024; 45:787-800. PMID: 38273447; DOI: 10.1097/aud.0000000000001470.
Abstract
OBJECTIVES: Older adults often complain of difficulty communicating in noisy environments. Contextual information is considered an important cue for identifying everyday speech. To date, it has not been clear exactly how context use (CU) and reliance on context in older adults are affected by hearing status and cognitive function. The present study examined the effects of semantic context on speech recognition, recall, perceived listening effort (LE), and noise tolerance, and further explored the impacts of hearing loss and working memory capacity on CU and reliance on context among older adults.
DESIGN: Fifty older adults with normal hearing and 56 older adults with mild-to-moderate hearing loss, aged 60 to 95 years, participated in this study. A median split of the backward digit span further classified the participants into high working memory (HWM) and low working memory (LWM) capacity groups. Each participant performed high- and low-context Repeat and Recall tests, comprising a sentence repeat task, a delayed recall task, subjective assessments of LE, and tolerable time under seven signal-to-noise ratios (SNRs). CU was calculated as the difference between high- and low-context sentences for each outcome measure. The proportion of context use (PCU) in high-context performance was taken as the reliance on context, i.e., the degree to which participants relied on context when they repeated and recalled high-context sentences.
RESULTS: Semantic context improved speech recognition and delayed recall, reduced perceived LE, and prolonged noise tolerance in older adults with and without hearing loss. In addition, the adverse effects of hearing loss on repeat-task performance were more pronounced in low context than in high context, whereas the effects on recall tasks and noise tolerance time were more significant in high context than in low context. Compared with other tasks, CU and PCU in repeat tasks were more affected by hearing status and working memory capacity. In the repeat phase, hearing loss increased older adults' reliance on context in relatively challenging listening environments: when the SNR was 0 or -5 dB, the PCU (repeat) of the hearing loss group was significantly greater than that of the normal-hearing group, whereas there was no significant difference between the two hearing groups at the remaining SNRs. In addition, older adults with LWM had significantly greater CU and PCU in repeat tasks than those with HWM, especially at SNRs with moderate task demands.
CONCLUSIONS: Taken together, semantic context not only improved speech perception intelligibility but also released cognitive resources for memory encoding in older adults. Mild-to-moderate hearing loss and LWM capacity in older adults significantly increased the use of and reliance on semantic context, which was also modulated by the level of SNR.
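The two context metrics defined in this abstract reduce to simple arithmetic. A minimal sketch, assuming CU is the raw high-minus-low difference and PCU is that difference as a fraction of the high-context score (function names and the example scores below are illustrative, not taken from the study):

```python
# Context use (CU): high-context score minus low-context score.
# Proportion of context use (PCU): CU as a fraction of the high-context
# score, i.e. how much of high-context performance is attributable to
# context. Definitions follow the abstract; the numbers are hypothetical.

def context_use(high_context_score: float, low_context_score: float) -> float:
    return high_context_score - low_context_score

def proportion_context_use(high_context_score: float,
                           low_context_score: float) -> float:
    if high_context_score == 0:
        raise ValueError("high-context score must be nonzero")
    return (high_context_score - low_context_score) / high_context_score

# Hypothetical repeat scores (% correct) at a single SNR:
cu = context_use(80.0, 60.0)               # 20.0 percentage points
pcu = proportion_context_use(80.0, 60.0)   # 0.25
print(cu, pcu)
```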
Affiliation(s)
- Jiayuan Shen, School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Zhejiang, China
- Jiayu Sun, Department of Otolaryngology, Head and Neck Surgery, Shanghai Ninth People's Hospital, Shanghai JiaoTong University School of Medicine, Shanghai, China
- Zhikai Zhang, Department of Otolaryngology, Head and Neck Surgery, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Baoxuan Sun, Training Department, Widex Hearing Aid (Shanghai) Co., Ltd, Shanghai, China
- Haitao Li, Department of Neurology, Beijing Friendship Hospital, Capital Medical University, Beijing, China (contributed equally; co-corresponding author)
- Yuhe Liu, Department of Otolaryngology, Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China (contributed equally; co-corresponding author)

2.
Schauwecker N, Tamati TN, Moberly AC. Predicting Early Cochlear Implant Performance: Can Cognitive Testing Help? Otol Neurotol Open 2024; 4:e050. PMID: 38533348; PMCID: PMC10962885; DOI: 10.1097/ono.0000000000000050.
Abstract
Introduction: There is significant variability in speech recognition outcomes in adults who receive cochlear implants (CIs). Little is known regarding cognitive influences on very early CI performance, during which significant neural plasticity occurs.
Methods: Prospective study of 15 postlingually deafened adult CI candidates tested preoperatively with a battery of cognitive assessments. The Mini-Mental State Exam (MMSE), forward digit span, Stroop measure of inhibition-concentration, and Test of Word Reading Efficiency were used to assess cognition. Consonant-nucleus-consonant (CNC) words, AzBio sentences in quiet, and AzBio sentences in noise (+10 dB SNR) were used to assess speech recognition at 1 and 3 months of CI use.
Results: Performance on all speech measures at 1 month was moderately correlated with preoperative MMSE, but these correlations did not remain strong after correcting for multiple comparisons. There were large correlations of forward digit span with 1-month AzBio quiet (P ≤ 0.001, rho = 0.762) and AzBio noise (P ≤ 0.001, rho = 0.860), both of which remained strong after correction. At 3 months, forward digit span was strongly predictive of AzBio noise (P ≤ 0.001, rho = 0.786), which remained strongly correlated after correction. Changes in speech recognition scores were not correlated with preoperative cognitive test scores.
Conclusions: Working memory capacity significantly predicted early CI sentence recognition performance in our small cohort, while the other cognitive functions assessed did not. These results differ from prior studies predicting longer-term outcomes. These findings and further studies may lead to better preoperative counseling and help identify patients who require closer evaluation to ensure optimal CI performance.
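The step of checking which correlations survive multiple-comparison correction can be sketched with a Holm-Bonferroni procedure. This is an illustration only: the abstract does not state which correction the authors applied, and the data below are fabricated.

```python
# Sketch: Spearman correlations of a cognitive predictor (digit span) with
# several speech measures, followed by Holm-Bonferroni correction.
# All data are fabricated; the study's actual scores are not reproduced.
import numpy as np
from scipy.stats import spearmanr

def holm_reject(pvals, alpha=0.05):
    """Holm-Bonferroni step-down correction.
    Returns booleans (reject H0?) in the original order of pvals."""
    order = np.argsort(pvals)
    m = len(pvals)
    reject = [False] * m
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # step-down: once one test fails, all larger p-values fail
    return reject

rng = np.random.default_rng(1)
digit_span = rng.integers(4, 10, size=15).astype(float)
measures = {
    "CNC_1mo": digit_span + rng.normal(0, 3, 15),
    "AzBio_quiet_1mo": digit_span + rng.normal(0, 1, 15),
    "AzBio_noise_1mo": rng.normal(0, 1, 15),  # unrelated on purpose
}
names, pvals = [], []
for name, y in measures.items():
    rho, p = spearmanr(digit_span, y)
    names.append(name)
    pvals.append(p)
    print(f"{name}: rho={rho:.2f}, p={p:.4f}")
print(dict(zip(names, holm_reject(pvals))))
```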
Affiliation(s)
- Natalie Schauwecker, Department of Otolaryngology – Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee
- Terrin N. Tamati, Department of Otolaryngology – Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee; Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- Aaron C. Moberly, Department of Otolaryngology – Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee

3.
Carolan PJ, Heinrich A, Munro KJ, Millman RE. Divergent effects of listening demands and evaluative threat on listening effort in online and laboratory settings. Front Psychol 2024; 15:1171873. PMID: 38333064; PMCID: PMC10850315; DOI: 10.3389/fpsyg.2024.1171873.
Abstract
Objective: Listening effort (LE) varies as a function of listening demands, motivation, and resource availability, among other things. Motivation is posited to have a greater influence on listening effort under high, compared to low, listening demands.
Methods: To test this prediction, we manipulated the listening demands of a speech recognition task using tone vocoders to create moderate and high listening demand conditions. We manipulated motivation using evaluative threat, i.e., informing participants that they must reach a particular "score" for their results to be usable. Resource availability was assessed by means of working memory span and included as a fixed-effects predictor. Outcome measures were indices of LE, including reaction times (RTs), self-rated work, and self-rated tiredness, in addition to task performance (correct response rates). Given the recent popularity of online studies, we also wanted to examine the effect of experimental context (online vs. laboratory) on the efficacy of manipulations of listening demands and motivation. We carried out two highly similar experiments with two groups of 37 young adults: a laboratory experiment and an online experiment. To make listening demands comparable between the two studies, vocoder settings had to differ. All results were analysed using linear mixed models.
Results: Under laboratory conditions, listening demands affected all outcomes, with significantly lower correct response rates, slower RTs, and greater self-rated work under higher listening demands. In the online study, listening demands only affected RTs. In addition, motivation affected self-rated work. Resource availability was a significant predictor only for RTs in the online study.
Discussion: These results show that the influence of motivation and listening demands on LE depends on the type of outcome measures used and the experimental context. It may also depend on the exact vocoder settings. A controlled laboratory setting and/or particular vocoder settings may be necessary to observe all expected effects of listening demands and motivation.
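The analysis style described here (linear mixed models with listening demand as a fixed effect and participants as a grouping factor) can be sketched as follows. The data, effect size, and variable names are fabricated for illustration; they are not the study's.

```python
# Sketch: linear mixed model of reaction time (an LE index) with listening
# demand as a fixed effect and a per-participant random intercept.
# All data are simulated; the true simulated effect is +120 ms for high demand.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_subj, n_trials = 37, 20
rows = []
for subj in range(n_subj):
    subj_offset = rng.normal(0, 50)        # random intercept per listener
    for demand_high in (0, 1):             # 0 = moderate, 1 = high demand
        for _ in range(n_trials):
            rt = 900 + subj_offset + 120 * demand_high + rng.normal(0, 80)
            rows.append({"subject": subj, "demand_high": demand_high,
                         "rt": rt})
df = pd.DataFrame(rows)

fit = smf.mixedlm("rt ~ demand_high", df, groups=df["subject"]).fit()
print(fit.params["demand_high"])  # recovered demand effect, near 120 ms
```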
Affiliation(s)
- Peter J. Carolan, School of Health Sciences, Manchester Centre for Audiology and Deafness, University of Manchester, Manchester, United Kingdom; Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, United Kingdom
- Antje Heinrich, School of Health Sciences, Manchester Centre for Audiology and Deafness, University of Manchester, Manchester, United Kingdom; Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, United Kingdom
- Kevin J. Munro, School of Health Sciences, Manchester Centre for Audiology and Deafness, University of Manchester, Manchester, United Kingdom; Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, United Kingdom
- Rebecca E. Millman, School of Health Sciences, Manchester Centre for Audiology and Deafness, University of Manchester, Manchester, United Kingdom; Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, United Kingdom

4.
Zheng Y, Gao P, Li X. The modulating effect of musical expertise on lexical-semantic prediction in speech-in-noise comprehension: Evidence from an EEG study. Psychophysiology 2023; 60:e14371. PMID: 37350401; DOI: 10.1111/psyp.14371.
Abstract
Musical expertise has been proposed to facilitate speech perception and comprehension in noisy environments. This study further examined the open question of whether musical expertise modulates high-level lexical-semantic prediction to aid online speech comprehension in noisy backgrounds. Musicians and nonmusicians listened to semantically strongly/weakly constraining sentences during EEG recording. At verbs prior to target nouns, both groups showed a positivity-ERP effect (Strong vs. Weak) associated with the predictability of incoming nouns; this correlation effect was stronger in musicians than in nonmusicians. After the target nouns appeared, both groups showed an N400 reduction effect (Strong vs. Weak) associated with noun predictability, but musicians exhibited an earlier onset latency and stronger effect size of this correlation effect than nonmusicians. To determine whether musical expertise enhances anticipatory semantic processing in general, the same group of participants participated in a control reading comprehension experiment. The results showed that, compared with nonmusicians, musicians demonstrated more delayed ERP correlation effects of noun predictability at words preceding the target nouns; musicians also exhibited more delayed and reduced N400 decrease effects correlated with noun predictability at the target nouns. Taken together, these results suggest that musical expertise enhances lexical-semantic predictive processing in speech-in-noise comprehension. This musical-expertise effect may be related to the strengthened hierarchical speech processing in particular.
Affiliation(s)
- Yuanyi Zheng, CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Panke Gao, CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Xiaoqing Li, CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University, Xuzhou, China

5.
Park JJ, Baek SC, Suh MW, Choi J, Kim SJ, Lim Y. The effect of topic familiarity and volatility of auditory scene on selective auditory attention. Hear Res 2023; 433:108770. PMID: 37104990; DOI: 10.1016/j.heares.2023.108770.
Abstract
Selective auditory attention has been shown to modulate the cortical representation of speech. This effect has been well documented in acoustically more challenging environments. However, the influence of top-down factors, in particular topic familiarity, on this process remains unclear, despite evidence that semantic information can promote speech-in-noise perception. Apart from individual features forming a static listening condition, dynamic and irregular changes of auditory scenes-volatile listening environments-have been less studied. To address these gaps, we explored the influence of topic familiarity and volatile listening on the selective auditory attention process during dichotic listening using electroencephalography. When stories with unfamiliar topics were presented, participants' comprehension was severely degraded. However, their cortical activity selectively tracked the speech of the target story well. This implies that topic familiarity hardly influences the speech tracking neural index, possibly when the bottom-up information is sufficient. However, when the listening environment was volatile and the listeners had to re-engage in new speech whenever auditory scenes altered, the neural correlates of the attended speech were degraded. In particular, the cortical response to the attended speech and the spatial asymmetry of the response to the left and right attention were significantly attenuated around 100-200 ms after the speech onset. These findings suggest that volatile listening environments could adversely affect the modulation effect of selective attention, possibly by hampering proper attention due to increased perceptual load.
Affiliation(s)
- Jonghwa Jeonglok Park, Center for Intelligent & Interactive Robotics, Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, Seoul 02792, South Korea; Department of Electrical and Computer Engineering, College of Engineering, Seoul National University, Seoul 08826, South Korea
- Seung-Cheol Baek, Center for Intelligent & Interactive Robotics, Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, Seoul 02792, South Korea; Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, Frankfurt am Main 60322, Germany
- Myung-Whan Suh, Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Hospital, Seoul 03080, South Korea
- Jongsuk Choi, Center for Intelligent & Interactive Robotics, Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, Seoul 02792, South Korea; Department of AI Robotics, KIST School, Korea University of Science and Technology, Seoul 02792, South Korea
- Sung June Kim, Department of Electrical and Computer Engineering, College of Engineering, Seoul National University, Seoul 08826, South Korea
- Yoonseob Lim, Center for Intelligent & Interactive Robotics, Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, Seoul 02792, South Korea; Department of HY-KIST Bio-convergence, Hanyang University, Seoul 04763, South Korea

6.
Wang H, Chen R, Yan Y, McGettigan C, Rosen S, Adank P. Perceptual Learning of Noise-Vocoded Speech Under Divided Attention. Trends Hear 2023; 27:23312165231192297. PMID: 37547940; PMCID: PMC10408355; DOI: 10.1177/23312165231192297.
Abstract
Speech perception performance for degraded speech can improve with practice or exposure. Such perceptual learning is thought to be reliant on attention and theoretical accounts like the predictive coding framework suggest a key role for attention in supporting learning. However, it is unclear whether speech perceptual learning requires undivided attention. We evaluated the role of divided attention in speech perceptual learning in two online experiments (N = 336). Experiment 1 tested the reliance of perceptual learning on undivided attention. Participants completed a speech recognition task where they repeated forty noise-vocoded sentences in a between-group design. Participants performed the speech task alone or concurrently with a domain-general visual task (dual task) at one of three difficulty levels. We observed perceptual learning under divided attention for all four groups, moderated by dual-task difficulty. Listeners in easy and intermediate visual conditions improved as much as the single-task group. Those who completed the most challenging visual task showed faster learning and achieved similar ending performance compared to the single-task group. Experiment 2 tested whether learning relies on domain-specific or domain-general processes. Participants completed a single speech task or performed this task together with a dual task aiming to recruit domain-specific (lexical or phonological), or domain-general (visual) processes. All secondary task conditions produced patterns and amount of learning comparable to the single speech task. Our results demonstrate that the impact of divided attention on perceptual learning is not strictly dependent on domain-general or domain-specific processes and speech perceptual learning persists under divided attention.
Affiliation(s)
- Han Wang, Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Rongru Chen, Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Yu Yan, Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Carolyn McGettigan, Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Stuart Rosen, Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Patti Adank, Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK

7.
Vella Azzopardi R, Beyer I, De Raedemaeker K, Foulon I, Vermeiren S, Petrovic M, Van Den Noortgate N, Bautmans I, Gorus E. Hearing aid use and gender differences in the auditory-cognitive cascade in the oldest old. Aging Ment Health 2023; 27:184-192. DOI: 10.1080/13607863.2021.2007355.
Abstract
OBJECTIVES: This study analyzed cognitive differences between hearing-aid (HA) and non-HA users. We hypothesized that HA use attenuates the auditory-cognitive cascade, and that the cascade is therefore more conspicuous in non-HA users. Since hearing impairment (HI) shows male predominance, we hypothesized gender differences within the auditory-cognitive relationship.
METHODS: Non-frail community-dwellers aged ≥ 80 years were assessed for HI (pure-tone audiogram, PTA; speech reception threshold, SRT) and for global and domain-specific cognitive impairments (Mini-Mental State Examination, MMSE; Montreal Cognitive Assessment, MOCA; Reaction Time Tests, RT1-RT4). Pearson and partial correlations (correcting for age and PTA) assessed auditory-cognitive associations within gender and HA subgroups. Fisher's z test compared correlations between HA and non-HA users.
RESULTS: 126 participants (age range 80-91 years) were included. HA-use prevalence was 21%. HA users were older, with worse HI (mean PTA 49.5 dB HL). HA users exhibited no significant auditory (PTA, SRT) and cognitive (MMSE, MOCA, RT1-RT4) correlations. Male non-HA users displayed a significant association between HI and global cognition, processing speed, and selective and alternating attention. Significant differences in the correlations of MMSE with PTA and SRT (z-scores 2.28 and 3.33; p = 0.02 and < 0.01, respectively) were noted between HA and non-HA users.
CONCLUSION: Male non-HA users displayed an association between HI and global and domain-specific (processing speed; selective and alternating attention) cognitive decline. Associations between global cognition and HI were significantly different between HA and non-HA users. This may be partially attributable to the underlying subgroup sample sizes and statistical power disparity. If larger-scale longitudinal or interventional studies confirm these findings, timely HI assessment and management may be the cornerstone for delaying cognitive decline.
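Fisher's z test used here to compare correlations between HA and non-HA users is the standard r-to-z procedure for two independent correlations; a sketch follows. The sample sizes and correlation values in the example are illustrative, not the study's data.

```python
# Fisher's r-to-z test: are two correlations from independent groups
# (e.g., HA users vs. non-HA users) significantly different?
import math

def fisher_z_compare(r1: float, n1: int, r2: float, n2: int):
    """Two-sided comparison of two independent Pearson correlations."""
    z1, z2 = math.atanh(r1), math.atanh(r2)      # r-to-z transform
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p = math.erfc(abs(z) / math.sqrt(2.0))       # two-sided normal p-value
    return z, p

# Illustrative: r = -0.5 in 100 non-HA users vs r = 0.0 in 26 HA users
z, p = fisher_z_compare(-0.5, 100, 0.0, 26)
print(f"z = {z:.2f}, p = {p:.3f}")
```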
Affiliation(s)
- Roberta Vella Azzopardi, Gerontology Department, Vrije Universiteit Brussel (VUB), Brussels, Belgium; Frailty in Ageing (FRIA) Research Department, Vrije Universiteit Brussel (VUB), Brussels, Belgium
- Ingo Beyer, Gerontology Department, Vrije Universiteit Brussel (VUB), Brussels, Belgium; Frailty in Ageing (FRIA) Research Department, Vrije Universiteit Brussel (VUB), Brussels, Belgium; Geriatrics Department, Universitair Ziekenhuis Brussel (UZ Brussel), Brussels, Belgium
- Kaat De Raedemaeker, Department of Otolaryngology - Head and Neck Surgery, UZ Brussel, Brussels, Belgium
- Ina Foulon, Department of Otolaryngology - Head and Neck Surgery, UZ Brussel, Brussels, Belgium
- Sofie Vermeiren, Gerontology Department, Vrije Universiteit Brussel (VUB), Brussels, Belgium; Frailty in Ageing (FRIA) Research Department, Vrije Universiteit Brussel (VUB), Brussels, Belgium
- Mirko Petrovic, Geriatrics Department, Ghent University Hospital (UZ Gent), Ghent, Belgium
- Ivan Bautmans, Gerontology Department, Vrije Universiteit Brussel (VUB), Brussels, Belgium; Frailty in Ageing (FRIA) Research Department, Vrije Universiteit Brussel (VUB), Brussels, Belgium; Geriatrics Department, Universitair Ziekenhuis Brussel (UZ Brussel), Brussels, Belgium
- Ellen Gorus, Gerontology Department, Vrije Universiteit Brussel (VUB), Brussels, Belgium; Frailty in Ageing (FRIA) Research Department, Vrije Universiteit Brussel (VUB), Brussels, Belgium
- Members of the Gerontopole Brussels Study group: Ivan Bautmans (FRIA, VUB), Dominque Verté (Belgian Ageing Studies BAST, VUB), Ingo Beyer (Geriatrics Department, UZ Brussel), Mirko Petrovic (ReFrail, UGhent), Liesbeth De Donder (Belgian Ageing Studies BAST, VUB), Tinie Kardol (Leerstoel Bevordering Active Ageing, VUB), Gina Rossi (Clinical and Lifespan Psychology KLEP, VUB), Peter Clarys (Physical Activity and Nutrition PANU, VUB), Aldo Scafoglieri (Experimental Anatomy EXAN, VUB), Erik Cattrysse (Experimental Anatomy EXAN, VUB), Eugenio Mantovani (Fundamental Rights and Constitutionalism Research group FRC, VUB), Bart Jansen (Department of Electronics and Informatics ETRO, VUB)

8.
Sun J, Zhang Z, Sun B, Liu H, Wei C, Liu Y. The effect of aging on context use and reliance on context in speech: A behavioral experiment with Repeat–Recall Test. Front Aging Neurosci 2022; 14:924193. PMID: 35936762; PMCID: PMC9354826; DOI: 10.3389/fnagi.2022.924193.
Abstract
Purpose: To elucidate how aging affects the extent of semantic context use and the reliance on semantic context, measured with the Repeat–Recall Test (RRT).
Methods: A younger adult group (YA) aged between 18 and 25 and an older adult group (OA) aged between 50 and 65 were recruited. Participants from both groups performed the RRT (sentence repeat and delayed recall tasks, subjective listening effort, and noise tolerable time) under two noise types and seven signal-to-noise ratios (SNRs). Performance–intensity curves were fitted, and performance at SRT50 and SRT75 was predicted.
Results: For the repeat task, the OA group used more semantic context and relied more on semantic context than the YA group. For the recall task, the OA group used less semantic context but relied more on context than the YA group. Age did not affect subjective listening effort but significantly affected noise tolerable time. Participants in both age groups could use more context at SRT75 than at SRT50 on the four tasks of the RRT. At the same SRT, however, the YA group could use more context in the repeat and recall tasks than the OA group.
Conclusion: Age affected the use of and reliance on semantic context. Even though the OA group used more context in speech recognition, they failed in speech information maintenance (recall) even with the help of semantic context. The OA group relied more on context while performing repeat and recall tasks. The amount of context used was also influenced by SRT.
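The performance-intensity fitting step mentioned here (predicting performance at SRT50 and SRT75) can be sketched with a logistic fit. The data points and parameter choices below are fabricated, and the RRT's actual fitting procedure may differ:

```python
# Sketch: fit a logistic performance-intensity (P-I) function to percent-
# correct scores across SNRs, then solve for SRT50 and SRT75 (the SNRs
# yielding 50% and 75% correct). Data are fabricated for illustration.
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, midpoint, slope):
    return 100.0 / (1.0 + np.exp(-slope * (snr - midpoint)))

snrs = np.array([-15.0, -10.0, -5.0, 0.0, 5.0, 10.0, 15.0])
pct_correct = np.array([5.0, 15.0, 35.0, 60.0, 82.0, 93.0, 98.0])

(midpoint, slope), _ = curve_fit(logistic, snrs, pct_correct, p0=(0.0, 0.3))

srt50 = midpoint                          # logistic midpoint is the 50% point
srt75 = midpoint + np.log(3.0) / slope    # solve 100 / (1 + e^-s(x-m)) = 75
print(f"SRT50 = {srt50:.1f} dB SNR, SRT75 = {srt75:.1f} dB SNR")
```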
Affiliation(s)
- Jiayu Sun, Department of Otolaryngology Head and Neck Surgery, Peking University First Hospital, Beijing, China; Department of Otorhinolaryngology, Head and Neck Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Zhikai Zhang, Department of Otolaryngology Head and Neck Surgery, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Baoxuan Sun, Widex Hearing Aid (Shanghai) Co., Ltd, Shanghai, China
- Haotian Liu, Department of Otolaryngology Head and Neck Surgery, West China Hospital of Sichuan University, Chengdu, China
- Chaogang Wei, Department of Otolaryngology Head and Neck Surgery, Peking University First Hospital, Beijing, China
- Yuhe Liu, Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China (corresponding author)

9.
Vaden KI, Teubner-Rhodes S, Ahlstrom JB, Dubno JR, Eckert MA. Evidence for cortical adjustments to perceptual decision criteria during word recognition in noise. Neuroimage 2022; 253:119042. PMID: 35259524; PMCID: PMC9082296; DOI: 10.1016/j.neuroimage.2022.119042.
Abstract
Extensive increases in cingulo-opercular frontal activity are typically observed during speech recognition in noise tasks. This elevated activity has been linked to a word recognition benefit on the next trial, termed "adaptive control," but how this effect might be implemented has been unclear. The established link between perceptual decision making and cingulo-opercular function may provide an explanation for how those regions benefit subsequent word recognition. In this case, processes that support recognition such as raising or lowering the decision criteria for more accurate or faster recognition may be adjusted to optimize performance on the next trial. The current neuroimaging study tested the hypothesis that pre-stimulus cingulo-opercular activity reflects criterion adjustments that determine how much information to collect for word recognition on subsequent trials. Participants included middle-age and older adults (N = 30; age = 58.3 ± 8.8 years; m ± sd) with normal hearing or mild sensorineural hearing loss. During a sparse fMRI experiment, words were presented in multitalker babble at +3 dB or +10 dB signal-to-noise ratio (SNR), which participants were instructed to repeat aloud. Word recognition was significantly poorer with increasing participant age and lower SNR compared to higher SNR conditions. A perceptual decision-making model was used to characterize processing differences based on task response latency distributions. The model showed that significantly less sensory evidence was collected (i.e., lower criteria) for lower compared to higher SNR trials. Replicating earlier observations, pre-stimulus cingulo-opercular activity was significantly predictive of correct recognition on a subsequent trial. Individual differences showed that participants with higher criteria also benefitted the most from pre-stimulus activity. Moreover, trial-level criteria changes were significantly linked to higher versus lower pre-stimulus activity. 
These results suggest cingulo-opercular cortex contributes to criteria adjustments to optimize speech recognition task performance.
Collapse
Affiliation(s)
- Kenneth I. Vaden
- Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States. Corresponding author: K.I. Vaden Jr.
- Susan Teubner-Rhodes
- Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States; Department of Psychological Sciences, 226 Thach Hall, Auburn University, AL 36849-9027
- Jayne B. Ahlstrom
- Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
- Judy R. Dubno
- Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
- Mark A. Eckert
- Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
10
Völter C, Oberländer K, Haubitz I, Carroll R, Dazert S, Thomas JP. Poor Performer: A Distinct Entity in Cochlear Implant Users? Audiol Neurootol 2022; 27:356-367. [PMID: 35533653 PMCID: PMC9533457 DOI: 10.1159/000524107] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2021] [Accepted: 03/10/2022] [Indexed: 11/19/2022] Open
Abstract
INTRODUCTION Several factors are known to influence speech perception in cochlear implant (CI) users. To date, the underlying mechanisms have not yet been fully clarified. Although many CI users achieve a high level of speech perception, a small percentage of patients do not benefit, or benefit only slightly, from the CI (poor performers, PP). In a previous study, PP showed significantly poorer results on nonauditory-based cognitive and linguistic tests than CI users with a very high level of speech understanding (star performers, SP). We now investigate whether PP also differ from CI users with average performance (average performers, AP) in cognitive and linguistic abilities. METHODS Seventeen adult postlingually deafened CI users with speech perception scores in quiet of 55 (9.32)% (AP) on the German Freiburg monosyllabic speech test at 65 dB underwent neurocognitive (attention, working memory, short- and long-term memory, verbal fluency, inhibition) and linguistic testing (word retrieval, lexical decision, phonological input lexicon). The results were compared to the performance of 15 PP (speech perception score of 15 [11.80]%) and 19 SP (speech perception score of 80 [4.85]%). For statistical analysis, U tests and discriminant analyses were performed. RESULTS Significant differences between PP and AP were observed on linguistic tests: Rapid Automatized Naming (RAN: p = 0.0026), lexical decision (LexDec: p = 0.026), phonological input lexicon (LEMO: p = 0.0085), and understanding of incomplete words (TRT: p = 0.0024). AP also had significantly better neurocognitive results than PP in the domains of attention (M3: p = 0.009) and working memory (OSPAN: p = 0.041; RST: p = 0.015), but not in delayed recall (p = 0.22), verbal fluency (p = 0.084), or inhibition (Flanker: p = 0.35). In contrast, no differences were found between AP and SP. Based on the TRT and the RAN, AP and PP could be separated with 100% accuracy.
DISCUSSION The results indicate that PP constitute a distinct entity of CI users that differs even in nonauditory abilities from CI users with average speech perception, especially with regard to rapid word retrieval, whether due to reduced phonological abilities or limited storage. Further studies should investigate whether improving word retrieval through phonological and semantic training results in better speech perception in these CI users.
Affiliation(s)
- Christiane Völter
- Department of Otorhinolaryngology, Head and Neck Surgery, Cochlear Implant Center Ruhrgebiet, St Elisabeth-Hospital, Ruhr University Bochum, Bochum, Germany
- Kirsten Oberländer
- Department of Otorhinolaryngology, Head and Neck Surgery, Cochlear Implant Center Ruhrgebiet, St Elisabeth-Hospital, Ruhr University Bochum, Bochum, Germany
- Imme Haubitz
- Department of Otorhinolaryngology, Head and Neck Surgery, Cochlear Implant Center Ruhrgebiet, St Elisabeth-Hospital, Ruhr University Bochum, Bochum, Germany
- Rebecca Carroll
- Institute of English and American Studies, Technical University Braunschweig, Braunschweig, Germany
- Stefan Dazert
- Department of Otorhinolaryngology, Head and Neck Surgery, Cochlear Implant Center Ruhrgebiet, St Elisabeth-Hospital, Ruhr University Bochum, Bochum, Germany
- Jan Peter Thomas
- Department of Otorhinolaryngology, Head and Neck Surgery, St-Johannes-Hospital, Dortmund, Germany
11
Predictive Sentence Context Reduces Listening Effort in Older Adults With and Without Hearing Loss and With High and Low Working Memory Capacity. Ear Hear 2022; 43:1164-1177. [PMID: 34983897 PMCID: PMC9232842 DOI: 10.1097/aud.0000000000001192] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
Abstract
OBJECTIVES Listening effort is needed to understand speech that is degraded by hearing loss, a noisy environment, or both. This in turn reduces cognitive spare capacity, the amount of cognitive resources available for allocation to concurrent tasks. Predictive sentence context enables older listeners to perceive speech more accurately, but how does contextual information affect older adults' listening effort? The current study examines the impacts of sentence context and cognitive (memory) load on sequential dual-task behavioral performance in older adults. To assess whether effects of context and memory load differ as a function of older listeners' hearing status, baseline working memory capacity, or both, effects were compared across separate groups of participants with and without hearing loss and with high and low working memory capacity. DESIGN Participants were older adults (age 60-84 years; n = 63) who passed a screen for cognitive impairment. A median split classified participants into groups with high and low working memory capacity. On each trial, participants listened to spoken sentences in noise and reported sentence-final words that were either predictable or unpredictable based on sentence context, and also recalled short (low-load) or long (high-load) sequences of digits that were presented visually before each spoken sentence. Speech intelligibility was quantified as word identification accuracy, and measures of listening effort included digit recall accuracy, and response time to words and digits. Correlations of context benefit in each dependent measure with working memory and vocabulary were also examined. RESULTS Across all participant groups, accuracy and response time for both word identification and digit recall were facilitated by predictive context, indicating that in addition to an improvement in intelligibility, listening effort was also reduced when sentence-final words were predictable. 
Effects of predictability on all listening effort measures were observed whether or not trials with an incorrect word identification response were excluded, indicating that the effects of predictability on listening effort did not depend on speech intelligibility. In addition, although cognitive load did not affect word identification accuracy, response time for word identification and digit recall, as well as accuracy for digit recall, were impaired under the high-load condition, indicating that cognitive load reduced the amount of cognitive resources available for speech processing. Context benefit in speech intelligibility was positively correlated with vocabulary. However, context benefit was not related to working memory capacity. CONCLUSIONS Predictive sentence context reduces listening effort in cognitively healthy older adults, resulting in greater cognitive spare capacity available for other mental tasks, irrespective of the presence or absence of hearing loss and baseline working memory capacity.
12
Hunter CR. Dual-Task Accuracy and Response Time Index Effects of Spoken Sentence Predictability and Cognitive Load on Listening Effort. Trends Hear 2021; 25:23312165211018092. [PMID: 34674579 PMCID: PMC8543634 DOI: 10.1177/23312165211018092] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
A sequential dual-task design was used to assess the impacts of spoken sentence context and cognitive load on listening effort. Young adults with normal hearing listened to sentences masked by multitalker babble in which sentence-final words were either predictable or unpredictable. Each trial began with visual presentation of a short (low-load) or long (high-load) sequence of to-be-remembered digits. Words were identified more quickly and accurately in predictable than unpredictable sentence contexts. In addition, digits were recalled more quickly and accurately on trials on which the sentence was predictable, indicating reduced listening effort for predictable compared to unpredictable sentences. For word and digit recall response time but not for digit recall accuracy, the effect of predictability remained significant after exclusion of trials with incorrect word responses and was thus independent of speech intelligibility. In addition, under high cognitive load, words were identified more slowly and digits were recalled more slowly and less accurately than under low load. Participants’ working memory and vocabulary were not correlated with the sentence context benefit in either word recognition or digit recall. Results indicate that listening effort is reduced when sentences are predictable and that cognitive load affects the processing of spoken words in sentence contexts.
Affiliation(s)
- Cynthia R Hunter
- Speech Perception, Cognition, and Hearing Laboratory, Department of Speech-Language-Hearing: Sciences and Disorders, University of Kansas, Lawrence, United States
13
Reduced Semantic Context and Signal-to-Noise Ratio Increase Listening Effort As Measured Using Functional Near-Infrared Spectroscopy. Ear Hear 2021; 43:836-848. [PMID: 34623112 DOI: 10.1097/aud.0000000000001137] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
OBJECTIVES Understanding speech-in-noise can be highly effortful. Decreasing the signal-to-noise ratio (SNR) of speech increases listening effort, but it is relatively unclear if decreasing the level of semantic context does as well. The current study used functional near-infrared spectroscopy to evaluate two primary hypotheses: (1) listening effort (operationalized as oxygenation of the left lateral PFC) increases as the SNR decreases and (2) listening effort increases as context decreases. DESIGN Twenty-eight younger adults with normal hearing completed the Revised Speech Perception in Noise Test, in which they listened to sentences and reported the final word. These sentences either had an easy SNR (+4 dB) or a hard SNR (-2 dB), and were either low in semantic context (e.g., "Tom could have thought about the sport") or high in context (e.g., "She had to vacuum the rug"). PFC oxygenation was measured throughout using functional near-infrared spectroscopy. RESULTS Accuracy on the Revised Speech Perception in Noise Test was worse when the SNR was hard than when it was easy, and worse for sentences low in semantic context than high in context. Similarly, oxygenation across the entire PFC (including the left lateral PFC) was greater when the SNR was hard, and left lateral PFC oxygenation was greater when context was low. CONCLUSIONS These results suggest that activation of the left lateral PFC (interpreted here as reflecting listening effort) increases to compensate for acoustic and linguistic challenges. This may reflect the increased engagement of domain-general and domain-specific processes subserved by the dorsolateral prefrontal cortex (e.g., cognitive control) and inferior frontal gyrus (e.g., predicting the sensory consequences of articulatory gestures), respectively.
14
Cogmed Training Does Not Generalize to Real-World Benefits for Adult Hearing Aid Users: Results of a Blinded, Active-Controlled Randomized Trial. Ear Hear 2021; 43:741-763. [PMID: 34524150 PMCID: PMC9007089 DOI: 10.1097/aud.0000000000001096] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Objectives: Performance on working memory tasks is positively associated with speech-in-noise perception performance, particularly where auditory inputs are degraded. It is suggested that interventions designed to improve working memory capacity may improve domain-general working memory performance for people with hearing loss, to benefit their real-world listening. We examined whether a 5-week training program that primarily targets the storage component of working memory (Cogmed RM, adaptive) could improve cognition, speech-in-noise perception and self-reported hearing in a randomized controlled trial of adult hearing aid users with mild to moderate hearing loss, compared with an active control (Cogmed RM, nonadaptive) group of adults from the same population. Design: A preregistered randomized controlled trial of 57 adult hearing aid users (n = 27 experimental, n = 30 active control), recruited from a dedicated database of research volunteers, examined on-task learning and generalized improvements in measures of trained and untrained cognition, untrained speech-in-noise perception and self-reported hearing abilities, pre- to post-training. Participants and the outcome assessor were both blinded to intervention allocation. Retention of training-related improvements was examined at a 6-month follow-up assessment. Results: Per-protocol analyses showed improvements in trained tasks (Cogmed Index Improvement) that transferred to improvements in a trained working memory task tested outside of the training software (Backward Digit Span) and a small improvement in self-reported hearing ability (Glasgow Hearing Aid Benefit Profile, Initial Disability subscale). Both of these improvements were maintained 6 months post-training. There was no transfer of learning shown to untrained measures of cognition (working memory or attention), speech-in-noise perception, or self-reported hearing in everyday life. 
An assessment of individual differences showed that participants with better baseline working memory performance achieved greater learning on the trained tasks. Post-training performance for untrained outcomes was largely predicted by individuals’ pretraining performance on those measures. Conclusions: Despite significant on-task learning, generalized improvements of working memory training in this trial were limited to (a) improvements for a trained working memory task tested outside of the training software and (b) a small improvement in self-reported hearing ability for those in the experimental group, compared with active controls. We found no evidence to suggest that training which primarily targets storage aspects of working memory can result in domain-general improvements that benefit everyday communication for adult hearing aid users. These findings are consistent with a significant body of evidence showing that Cogmed training only improves performance for tasks that resemble Cogmed training. Future research should focus on the benefits of interventions that enhance cognition in the context in which it is employed within everyday communication, such as training that targets dynamic aspects of cognitive control important for successful speech-in-noise perception.
15
Tracking Cognitive Spare Capacity During Speech Perception With EEG/ERP: Effects of Cognitive Load and Sentence Predictability. Ear Hear 2021; 41:1144-1157. [PMID: 32282402 DOI: 10.1097/aud.0000000000000856] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
OBJECTIVES Listening to speech in adverse listening conditions is effortful. Objective assessment of cognitive spare capacity during listening can serve as an index of the effort needed to understand speech. Cognitive spare capacity is influenced both by signal-driven demands posed by listening conditions and top-down demands intrinsic to spoken language processing, such as memory use and semantic processing. Previous research indicates that electrophysiological responses, particularly alpha oscillatory power, may index listening effort. However, it is not known how these indices respond to memory and semantic processing demands during spoken language processing in adverse listening conditions. The aim of the present study was twofold: first, to assess the impact of memory demands on electrophysiological responses during recognition of degraded, spoken sentences, and second, to examine whether predictable sentence contexts increase or decrease cognitive spare capacity during listening. DESIGN Cognitive demand was varied in a memory load task in which young adult participants (n = 20) viewed either low-load (one digit) or high-load (seven digits) sequences of digits, then listened to noise-vocoded spoken sentences that were either predictable or unpredictable, and then reported the final word of the sentence and the digits. Alpha oscillations in the frequency domain and event-related potentials in the time domain of the electrophysiological data were analyzed, as was behavioral accuracy for both words and digits. RESULTS Measured during sentence processing, event-related desynchronization of alpha power was greater (more negative) under high load than low load and was also greater for unpredictable than predictable sentences. A complementary pattern was observed for the P300/late positive complex (LPC) to sentence-final words, such that P300/LPC amplitude was reduced under high load compared with low load and for unpredictable compared with predictable sentences. 
Both words and digits were identified more quickly and accurately on trials in which spoken sentences were predictable. CONCLUSIONS Results indicate that during a sentence-recognition task, both cognitive load and sentence predictability modulate electrophysiological indices of cognitive spare capacity, namely alpha oscillatory power and P300/LPC amplitude. Both electrophysiological and behavioral results indicate that a predictive sentence context reduces cognitive demands during listening. Findings contribute to a growing literature on objective measures of cognitive demand during listening and indicate predictable sentence context as a top-down factor that can support ease of listening.
16
White BE, Langdon C. The cortical organization of listening effort: New insight from functional near-infrared spectroscopy. Neuroimage 2021; 240:118324. [PMID: 34217787 DOI: 10.1016/j.neuroimage.2021.118324] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2021] [Revised: 06/17/2021] [Accepted: 06/28/2021] [Indexed: 10/21/2022] Open
Abstract
Everyday challenges impact our ability to hear and comprehend spoken language with ease, such as accented speech (source factors), spectral degradation (transmission factors), complex or unfamiliar language use (message factors), and predictability (context factors). Auditory degradation and linguistic complexity in the brain and behavior have been well investigated, and several computational models have emerged. The work here provides a novel test of the hypotheses that listening effort is partially reliant on higher cognitive auditory attention and working memory mechanisms in the frontal lobe, and partially reliant on hierarchical linguistic computation in the brain's left hemisphere. We specifically hypothesize that these models are robust and can be applied in ecologically relevant and coarse-grained contexts that rigorously control for acoustic and linguistic listening challenges. Using functional near-infrared spectroscopy during an auditory plausibility judgment task, we show the hierarchical cortical organization for listening effort in the frontal and left temporal-parietal brain regions. In response to increasing levels of cognitive demand, we found (i) poorer comprehension, (ii) slower reaction times, (iii) increasing levels of perceived mental effort, (iv) increasing levels of brain activity in the prefrontal cortex, (v) hierarchical modulation of core language processing regions that reflect increasingly higher-order auditory-linguistic processing, and (vi) a correlation between participants' mental effort ratings and their performance on the task. Our results demonstrate that listening effort is partly reliant on higher cognitive auditory attention and working memory mechanisms in the frontal lobe and partly reliant on hierarchical linguistic computation in the brain's left hemisphere. 
Further, listening effort is driven by a voluntary, motivation-based attention system for which our results validate the use of a single-item post-task questionnaire for measuring perceived levels of mental effort and predicting listening performance. We anticipate our study to be a starting point for more sophisticated models of listening effort and even cognitive neuroplasticity in hearing aid and cochlear implant users.
Affiliation(s)
- Bradley E White
- Brain and Language Center for Neuroimaging, Gallaudet University, Washington, DC, USA
- Clifton Langdon
- Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA
17
Guediche S, de Bruin A, Caballero-Gaudes C, Baart M, Samuel AG. Second-language word recognition in noise: Interdependent neuromodulatory effects of semantic context and crosslinguistic interactions driven by word form similarity. Neuroimage 2021; 237:118168. [PMID: 34000398 DOI: 10.1016/j.neuroimage.2021.118168] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Revised: 05/05/2021] [Accepted: 05/12/2021] [Indexed: 11/17/2022] Open
Abstract
Spoken language comprehension is a fundamental component of our cognitive skills. We are quite proficient at deciphering words from the auditory input despite the fact that the speech we hear is often masked by noise such as background babble originating from talkers other than the one we are attending to. To perceive spoken language as intended, we rely on prior linguistic knowledge and context. Prior knowledge includes all sounds and words that are familiar to a listener and depends on linguistic experience. For bilinguals, the phonetic and lexical repertoire encompasses two languages, and the degree of overlap between word forms across languages affects the degree to which they influence one another during auditory word recognition. To support spoken word recognition, listeners often rely on semantic information (i.e., the words we hear are usually related in a meaningful way). Although the number of multilinguals across the globe is increasing, little is known about how crosslinguistic effects (i.e., word overlap) interact with semantic context and affect the flexible neural systems that support accurate word recognition. The current multi-echo functional magnetic resonance imaging (fMRI) study addresses this question by examining how prime-target word pair semantic relationships interact with the target word's form similarity (cognate status) to the translation equivalent in the dominant language (L1) during accurate word recognition of a non-dominant (L2) language. We tested 26 early-proficient Spanish-Basque (L1-L2) bilinguals. When L2 targets matching L1 translation-equivalent phonological word forms were preceded by unrelated semantic contexts that drive lexical competition, a flexible language control (fronto-parietal-subcortical) network was upregulated, whereas when they were preceded by related semantic contexts that reduce lexical competition, it was downregulated. 
We conclude that an interplay between semantic and crosslinguistic effects regulates flexible control mechanisms of speech processing to facilitate L2 word recognition in noise.
Affiliation(s)
- Sara Guediche
- Basque Center on Cognition, Brain and Language, Donostia-San Sebastian 20009, Spain
- Martijn Baart
- Basque Center on Cognition, Brain and Language, Donostia-San Sebastian 20009, Spain; Department of Cognitive Neuropsychology, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, the Netherlands
- Arthur G Samuel
- Basque Center on Cognition, Brain and Language, Donostia-San Sebastian 20009, Spain; Stony Brook University, NY 11794-2500, United States; Ikerbasque Foundation, Spain
18
Burton H, Reeder RM, Holden T, Agato A, Firszt JB. Cortical Regions Activated by Spectrally Degraded Speech in Adults With Single Sided Deafness or Bilateral Normal Hearing. Front Neurosci 2021; 15:618326. [PMID: 33897343 PMCID: PMC8058229 DOI: 10.3389/fnins.2021.618326] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2020] [Accepted: 03/04/2021] [Indexed: 11/13/2022] Open
Abstract
Those with profound sensorineural hearing loss from single-sided deafness (SSD) generally experience greater cognitive effort and fatigue in adverse sound environments. We studied cases with right-ear SSD compared to normal-hearing (NH) individuals. SSD cases were significantly less correct in naming last words in spectrally degraded 8- and 16-band vocoded sentences, despite high semantic predictability. Group differences were not significant for less intelligible 4-band sentences, irrespective of predictability. SSD cases also had diminished BOLD percent signal changes to these same sentences in left hemisphere (LH) cortical regions of early auditory, association auditory, inferior frontal, premotor, inferior parietal, dorsolateral prefrontal, posterior cingulate, temporal-parietal-occipital junction, and posterior opercular cortex. Cortical regions with lower amplitude responses in SSD than NH were mostly components of a LH language network previously noted as concerned with speech recognition. Recorded BOLD signal magnitudes were averages from all vertices within predefined parcels from these cortical regions. In SSD, parcels from different regions showed significantly larger signal magnitudes to sentences of greater intelligibility (e.g., 8- or 16- vs. 4-band) in all except early auditory and posterior cingulate cortex. Response magnitudes were significantly lower in SSD than NH in regions that prior studies found responsible for phonetics and phonology of speech, cognitive extraction of meaning, controlled retrieval of word meaning, and semantics. The findings suggest that reduced activation of a LH fronto-temporo-parietal network in SSD contributed to difficulty processing speech for word meaning and sentence semantics. The effortful listening experienced by SSD cases might reflect diminished activation to degraded speech in the affected LH language network parcels. SSD cases showed no compensatory activity in matched right hemisphere parcels.
Affiliation(s)
- Harold Burton
- Department of Neuroscience, Washington University School of Medicine, Saint Louis, MO, United States
- Ruth M Reeder
- Department of Otolaryngology-Head and Neck Surgery, Washington University School of Medicine, Saint Louis, MO, United States
- Tim Holden
- Department of Otolaryngology-Head and Neck Surgery, Washington University School of Medicine, Saint Louis, MO, United States
- Alvin Agato
- Department of Neuroscience, Washington University School of Medicine, Saint Louis, MO, United States
- Jill B Firszt
- Department of Otolaryngology-Head and Neck Surgery, Washington University School of Medicine, Saint Louis, MO, United States
19
Effects of temporal order and intentionality on reflective attention to words in noise. PSYCHOLOGICAL RESEARCH 2021; 86:544-557. [PMID: 33683449 DOI: 10.1007/s00426-021-01494-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2020] [Accepted: 02/15/2021] [Indexed: 10/22/2022]
Abstract
Speech perception in noise is a cognitively demanding process that challenges not only the auditory sensory system, but also cognitive networks involved in attention. The predictive coding theory has been influential in characterizing the influence of prior context on processing incoming auditory stimuli, with comparatively less research dedicated to "postdictive" processes and subsequent context effects on speech perception. Effects of subsequent semantic context were evaluated while manipulating the relationship of three target words presented in noise and the temporal position of targets compared to the subsequent contextual cue, demonstrating that subsequent context benefits were present regardless of whether the targets were related to each other and did not depend on the position of the target. However, participants instructed to focus on the relation between target and cue performed worse than those who did not receive this instruction, suggesting a disruption of a natural process of continuous speech recognition. We discuss these findings in relation to lexical commitment and stimulus-driven attention to short-term memory as mechanisms of subsequent context integration.
20
Rönnberg J, Holmer E, Rudner M. Cognitive Hearing Science: Three Memory Systems, Two Approaches, and the Ease of Language Understanding Model. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:359-370. [PMID: 33439747 DOI: 10.1044/2020_jslhr-20-00007] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Purpose The purpose of this study was to conceptualize the subtle balancing act between language input and prediction (cognitive priming of future input) to achieve understanding of communicated content. When understanding fails, reconstructive postdiction is initiated. Three memory systems play important roles: working memory (WM), episodic long-term memory (ELTM), and semantic long-term memory (SLTM). The axiom of the Ease of Language Understanding (ELU) model is that explicit WM resources are invoked by a mismatch between language input (in the form of rapid automatic multimodal binding of phonology) and multimodal phonological and lexical representations in SLTM. However, if there is a match between the output of rapid automatic multimodal binding of phonology and SLTM/ELTM representations, language processing continues rapidly and implicitly. Method and Results In our first ELU approach, we focused on experimental manipulations of signal processing in hearing aids and background noise to cause a mismatch with LTM representations; both resulted in increased dependence on WM. Our second approach, the main one relevant for this review article, focuses on the relative effects of age-related hearing loss on the three memory systems. According to the ELU, WM is predicted to be frequently occupied with reconstruction of what was actually heard, resulting in a relative disuse of phonological/lexical representations in the ELTM and SLTM systems. The prediction and results do not depend on test modality per se but rather on the particular memory system. This will be further discussed. Conclusions Given the literature on ELTM decline as a precursor of dementia and the fact that the risk for Alzheimer's disease increases substantially over time due to hearing loss, it is possible that lowered ELTM due to hearing loss and disuse is part of the causal chain linking hearing loss and dementia. Future ELU research will focus on this possibility.
Affiliation(s)
- Jerker Rönnberg
- Linnaeus Centre HEAD, Swedish Institute for Disability Research Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Emil Holmer
- Linnaeus Centre HEAD, Swedish Institute for Disability Research Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Mary Rudner
- Linnaeus Centre HEAD, Swedish Institute for Disability Research Department of Behavioural Sciences and Learning, Linköping University, Sweden
21
O'Neill ER, Parke MN, Kreft HA, Oxenham AJ. Role of semantic context and talker variability in speech perception of cochlear-implant users and normal-hearing listeners. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 149:1224. [PMID: 33639827 PMCID: PMC7895533 DOI: 10.1121/10.0003532] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/19/2020] [Revised: 01/01/2021] [Accepted: 01/26/2021] [Indexed: 06/12/2023]
Abstract
This study assessed the impact of semantic context and talker variability on speech perception by cochlear-implant (CI) users and compared their overall performance and between-subjects variance with that of normal-hearing (NH) listeners under vocoded conditions. Thirty post-lingually deafened adult CI users were tested, along with 30 age-matched and 30 younger NH listeners, on sentences with and without semantic context, presented in quiet and noise, spoken by four different talkers. Additional measures included working memory, non-verbal intelligence, and spectral-ripple detection and discrimination. Semantic context and between-talker differences influenced speech perception to similar degrees for both CI users and NH listeners. Between-subjects variance for speech perception was greatest in the CI group but remained substantial in both NH groups, despite the uniformly degraded stimuli in these two groups. Spectral-ripple detection and discrimination thresholds in CI users were significantly correlated with speech perception, but a single set of vocoder parameters for NH listeners was not able to capture average CI performance in both speech and spectral-ripple tasks. The lack of difference in the use of semantic context between CI users and NH listeners suggests no overall differences in listening strategy between the groups, when the stimuli are similarly degraded.
Affiliation(s)
- Erin R O'Neill
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Morgan N Parke
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Heather A Kreft
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
22
Kessler M, Schierholz I, Mamach M, Wilke F, Hahne A, Büchner A, Geworski L, Bengel FM, Sandmann P, Berding G. Combined Brain-Perfusion SPECT and EEG Measurements Suggest Distinct Strategies for Speech Comprehension in CI Users With Higher and Lower Performance. Front Neurosci 2020; 14:787. [PMID: 32848560 PMCID: PMC7431776 DOI: 10.3389/fnins.2020.00787] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2020] [Accepted: 07/06/2020] [Indexed: 11/29/2022] Open
Abstract
Cochlear implantation constitutes a successful therapy of inner ear deafness, with the majority of patients showing good outcomes. There is, however, still some unexplained variability in outcomes, with a number of cochlear-implant (CI) users showing major limitations in speech comprehension. The current study used a multimodal diagnostic approach combining single-photon emission computed tomography (SPECT) and electroencephalography (EEG) to examine the mechanisms underlying speech processing in postlingually deafened CI users (N = 21). In one session, the participants performed a speech discrimination task, during which a 96-channel EEG was recorded and the perfusion marker 99mTc-HMPAO was injected intravenously. The SPECT scan was acquired 1.5 h after injection to measure the cortical activity during the speech task. The second session included a SPECT scan after injection without stimulation at rest. Analysis of EEG and SPECT data showed N400 and P600 event-related potentials (ERPs) particularly evoked by semantic violations in the sentences, and enhanced perfusion in a temporo-frontal network during the task compared to rest, involving the auditory cortex bilaterally and Broca's area. Moreover, higher performance in testing for word recognition and verbal intelligence correlated strongly with activation in this network during the speech task. However, comparing CI users with lower and higher speech intelligibility [median split with cutoff +7.6 dB signal-to-noise ratio (SNR) in the Göttinger sentence test] revealed additional activations of parietal and occipital regions for CI users with higher performance, and stronger activation of superior frontal areas for those with lower performance.
Furthermore, SPECT activity was tightly coupled with EEG and cognitive abilities, as indicated by correlations between (1) cortical activation and the amplitudes in EEG, N400 (temporal and occipital areas)/P600 (parietal and occipital areas) and (2) between cortical activation in left-sided temporal and bilateral occipital/parietal areas and working memory capacity. These results suggest the recruitment of a temporo-frontal network in CI users during speech processing and a close connection between ERP effects and cortical activation in CI users. The observed differences in speech-evoked cortical activation patterns for CI users with higher and lower speech intelligibility suggest distinct processing strategies during speech rehabilitation with CI.
Affiliation(s)
- Mariella Kessler
- Department of Nuclear Medicine, Hannover Medical School, Hanover, Germany
- Cluster of Excellence Hearing4all, Hannover Medical School, University of Oldenburg, Oldenburg, Germany
- Irina Schierholz
- Cluster of Excellence Hearing4all, Hannover Medical School, University of Oldenburg, Oldenburg, Germany
- Department of Otorhinolaryngology, Hannover Medical School, Hanover, Germany
- Department of Otorhinolaryngology, University of Cologne, Cologne, Germany
- Martin Mamach
- Cluster of Excellence Hearing4all, Hannover Medical School, University of Oldenburg, Oldenburg, Germany
- Department of Medical Physics and Radiation Protection, Hannover Medical School, Hanover, Germany
- Florian Wilke
- Department of Medical Physics and Radiation Protection, Hannover Medical School, Hanover, Germany
- Anja Hahne
- Department of Otorhinolaryngology, Faculty of Medicine Carl Gustav Carus, Saxonian Cochlear Implant Center, Technical University Dresden, Dresden, Germany
- Andreas Büchner
- Cluster of Excellence Hearing4all, Hannover Medical School, University of Oldenburg, Oldenburg, Germany
- Department of Otorhinolaryngology, Hannover Medical School, Hanover, Germany
- Lilli Geworski
- Department of Medical Physics and Radiation Protection, Hannover Medical School, Hanover, Germany
- Frank M. Bengel
- Department of Nuclear Medicine, Hannover Medical School, Hanover, Germany
- Pascale Sandmann
- Department of Otorhinolaryngology, University of Cologne, Cologne, Germany
- Georg Berding
- Department of Nuclear Medicine, Hannover Medical School, Hanover, Germany
- Cluster of Excellence Hearing4all, Hannover Medical School, University of Oldenburg, Oldenburg, Germany
23
Weber S, Hausmann M, Kane P, Weis S. The relationship between language ability and brain activity across language processes and modalities. Neuropsychologia 2020; 146:107536. [PMID: 32590019 DOI: 10.1016/j.neuropsychologia.2020.107536] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2019] [Revised: 03/03/2020] [Accepted: 06/12/2020] [Indexed: 01/22/2023]
Abstract
Existing neuroimaging studies on the relationship between language ability and brain activity have found contradictory evidence: On the one hand, increased activity with higher language ability has been interpreted as deeper or more adaptive language processing. On the other hand, decreased activity with higher language ability has been interpreted as more efficient language processing. In contrast to previous studies, the current study investigated the relationship between language ability and neural activity across different language processes and modalities while keeping non-linguistic cognitive task demands to a minimum. fMRI data were collected from 22 healthy adults performing a sentence listening task, a sentence reading task and a phonological production task. Outside the MRI scanner, language ability was assessed with the verbal scale of the Wechsler Abbreviated Scale of Intelligence (WASI-II) and a verbal fluency task. As expected, sentence comprehension activated the left anterior temporal lobe while phonological processing activated the left inferior frontal gyrus. Higher language ability was associated with increased activity in the left temporal lobe during auditory sentence processing and with increased activity in the left frontal lobe during phonological processing, reflected in both higher intensity and greater extent of activations. Evidence for decreased activity with higher language ability was less consistent and restricted to verbal fluency. Together, the results predominantly support the hypothesis of deeper language processing in individuals with higher language ability. The consistency of results across language processes, modalities, and brain regions suggests a general positive link between language abilities and brain activity within the core language network. However, a negative relationship seems to exist for non-linguistic cognitive functions located outside the language network.
Affiliation(s)
- Sarah Weber
- Department of Psychology, Durham University, UK; Department of Biological and Medical Psychology, University of Bergen, Norway.
- Susanne Weis
- Institute of Systems Neuroscience, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany; Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7), Research Centre Jülich, Jülich, Germany
24
Zekveld AA, Kramer SE, Rönnberg J, Rudner M. In a Concurrent Memory and Auditory Perception Task, the Pupil Dilation Response Is More Sensitive to Memory Load Than to Auditory Stimulus Characteristics. Ear Hear 2019; 40:272-286. [PMID: 29923867 PMCID: PMC6400496 DOI: 10.1097/aud.0000000000000612] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2017] [Accepted: 04/10/2018] [Indexed: 11/30/2022]
Abstract
OBJECTIVES Speech understanding may be cognitively demanding, but it can be enhanced when semantically related text cues precede auditory sentences. The present study aimed to determine whether (a) providing text cues reduces pupil dilation, a measure of cognitive load, during listening to sentences, (b) repeating the sentences aloud affects recall accuracy and pupil dilation during recall of cue words, and (c) semantic relatedness between cues and sentences affects recall accuracy and pupil dilation during recall of cue words. DESIGN Sentence repetition following text cues and recall of the text cues were tested. Twenty-six participants (mean age, 22 years) with normal hearing listened to masked sentences. On each trial, a set of four-word cues was presented visually as text preceding the auditory presentation of a sentence whose meaning was either related or unrelated to the cues. On each trial, participants first read the cue words, then listened to a sentence. Following this they spoke aloud either the cue words or the sentence, according to instruction, and finally on all trials orally recalled the cues. Peak pupil dilation was measured throughout listening and recall on each trial. Additionally, participants completed a test measuring the ability to perceive degraded verbal text information and three working memory tests (a reading span test, a size-comparison span test, and a test of memory updating). RESULTS Cue words that were semantically related to the sentence facilitated sentence repetition but did not reduce pupil dilation. Recall was poorer and there were more intrusion errors when the cue words were related to the sentences. Recall was also poorer when sentences were repeated aloud. Both behavioral effects were associated with greater pupil dilation. Larger reading span capacity and smaller size-comparison span were associated with larger peak pupil dilation during listening. 
Furthermore, larger reading span and greater memory updating ability were both associated with better cue recall overall. CONCLUSIONS Although sentence-related word cues facilitate sentence repetition, our results indicate that they do not reduce cognitive load during listening in noise with a concurrent memory load. As expected, higher working memory capacity was associated with better recall of the cues. Unexpectedly, however, semantic relatedness with the sentence reduced word cue recall accuracy and increased intrusion errors, suggesting an effect of semantic confusion. Further, speaking the sentence aloud also reduced word cue recall accuracy, probably due to articulatory suppression. Importantly, imposing a memory load during listening to sentences resulted in the absence of formerly established strong effects of speech intelligibility on the pupil dilation response. This nullified intelligibility effect demonstrates that the pupil dilation response to a cognitive (memory) task can completely overshadow the effect of perceptual factors on the pupil dilation response. This highlights the importance of taking cognitive task load into account during auditory testing.
Affiliation(s)
- Adriana A. Zekveld
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping and Örebro Universities, Linköping, Sweden
- Section Ear & Hearing, Department of Otolaryngology-Head and Neck Surgery and Amsterdam Public Health research institute VU University Medical Center, Amsterdam, The Netherlands
- Sophia E. Kramer
- Section Ear & Hearing, Department of Otolaryngology-Head and Neck Surgery and Amsterdam Public Health research institute VU University Medical Center, Amsterdam, The Netherlands
- Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping and Örebro Universities, Linköping, Sweden
- Mary Rudner
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping and Örebro Universities, Linköping, Sweden
25
Moberly AC, Reed J. Making Sense of Sentences: Top-Down Processing of Speech by Adult Cochlear Implant Users. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2019; 62:2895-2905. [PMID: 31330118 PMCID: PMC6802905 DOI: 10.1044/2019_jslhr-h-18-0472] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/30/2018] [Revised: 03/20/2019] [Accepted: 04/12/2019] [Indexed: 05/03/2023]
Abstract
Purpose Speech recognition relies upon a listener's successful pairing of the acoustic-phonetic details from the bottom-up input with top-down linguistic processing of the incoming speech stream. When the speech is spectrally degraded, such as through a cochlear implant (CI), this role of top-down processing is poorly understood. This study explored the interactions of top-down processing, specifically the use of semantic context during sentence recognition, and the relative contributions of different neurocognitive functions during speech recognition in adult CI users. Method Data from 41 experienced adult CI users were collected and used in analyses. Participants were tested for recognition and immediate repetition of speech materials in the clear. They were asked to repeat 2 sets of sentence materials, 1 that was semantically meaningful and 1 that was syntactically appropriate but semantically anomalous. Participants also were tested on 4 visual measures of neurocognitive functioning to assess working memory capacity (Digit Span; Wechsler, 2004), speed of lexical access (Test of Word Reading Efficiency; Torgesen, Wagner, & Rashotte, 1999), inhibitory control (Stroop; Stroop, 1935), and nonverbal fluid reasoning (Raven's Progressive Matrices; Raven, 2000). Results Individual listeners' inhibitory control predicted recognition of meaningful sentences when controlling for performance on anomalous sentences, our proxy for the quality of the bottom-up input. Additionally, speed of lexical access and nonverbal reasoning predicted recognition of anomalous sentences. Conclusions Findings from this study identified inhibitory control as a potential mechanism at work when listeners make use of semantic context during sentence recognition. Moreover, speed of lexical access and nonverbal reasoning were associated with recognition of sentences that lacked semantic context.
These results motivate the development of improved comprehensive rehabilitative approaches for adult patients with CIs to optimize use of top-down processing and underlying core neurocognitive functions.
Affiliation(s)
- Aaron C. Moberly
- Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus
- Jessa Reed
- Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus
26
O'Neill ER, Kreft HA, Oxenham AJ. Cognitive factors contribute to speech perception in cochlear-implant users and age-matched normal-hearing listeners under vocoded conditions. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2019; 146:195. [PMID: 31370651 PMCID: PMC6637026 DOI: 10.1121/1.5116009] [Citation(s) in RCA: 37] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
This study examined the contribution of perceptual and cognitive factors to speech-perception abilities in cochlear-implant (CI) users. Thirty CI users were tested on word intelligibility in sentences with and without semantic context, presented in quiet and in noise. Performance was compared with measures of spectral-ripple detection and discrimination, thought to reflect peripheral processing, as well as with cognitive measures of working memory and non-verbal intelligence. Thirty age-matched and thirty younger normal-hearing (NH) adults also participated, listening via tone-excited vocoders, adjusted to produce mean performance for speech in noise comparable to that of the CI group. Results suggest that CI users may rely more heavily on semantic context than younger or older NH listeners, and that non-auditory working memory explains significant variance in the CI and age-matched NH groups. Between-subject variability in spectral-ripple detection thresholds was similar across groups, despite the spectral resolution for all NH listeners being limited by the same vocoder, whereas speech perception scores were more variable between CI users than between NH listeners. The results highlight the potential importance of central factors in explaining individual differences in CI users and question the extent to which standard measures of spectral resolution in CIs reflect purely peripheral processing.
Affiliation(s)
- Erin R O'Neill
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Heather A Kreft
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
27
Rudner M, Seeto M, Keidser G, Johnson B, Rönnberg J. Poorer Speech Reception Threshold in Noise Is Associated With Lower Brain Volume in Auditory and Cognitive Processing Regions. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2019; 62:1117-1130. [PMID: 31026199 DOI: 10.1044/2018_jslhr-h-ascc7-18-0142] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Purpose Hearing loss is associated with changes in brain volume in regions supporting auditory and cognitive processing. The purpose of this study was to determine whether there is a systematic association between hearing ability and brain volume in cross-sectional data from a large nonclinical cohort of middle-aged adults available from the UK Biobank Resource ( http://www.ukbiobank.ac.uk ). Method We performed a set of regression analyses to determine the association between speech reception threshold in noise (SRTn) and global brain volume as well as predefined regions of interest (ROIs) based on T1-weighted structural images, controlling for hearing-related comorbidities and cognition as well as demographic factors. In a 2nd set of analyses, we additionally controlled for hearing aid (HA) use. We predicted statistically significant associations globally and in ROIs including auditory and cognitive processing regions, possibly modulated by HA use. Results Whole-brain gray matter volume was significantly lower for individuals with poorer SRTn. Furthermore, the volume of 9 predicted ROIs including both auditory and cognitive processing regions was lower for individuals with poorer SRTn. The greatest percentage difference (-0.57%) in ROI volume relating to a 1 SD worsening of SRTn was found in the left superior temporal gyrus. HA use did not substantially modulate the pattern of association between brain volume and SRTn. Conclusions In a large middle-aged nonclinical population, poorer hearing ability is associated with lower brain volume globally as well as in cortical and subcortical regions involved in auditory and cognitive processing, but there was no conclusive evidence that this effect is moderated by HA use. This pattern of results supports the notion that poor hearing leads to reduced volume in brain regions recruited during speech understanding under challenging conditions. These findings should be tested in future longitudinal, experimental studies. 
Supplemental Material https://doi.org/10.23641/asha.7949357.
Affiliation(s)
- Mary Rudner
- Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Mark Seeto
- National Acoustic Laboratories and the HEARing CRC, Sydney, New South Wales, Australia
- Gitte Keidser
- National Acoustic Laboratories and the HEARing CRC, Sydney, New South Wales, Australia
- Blake Johnson
- Department of Cognitive Science, Macquarie University, Sydney, New South Wales, Australia
- Jerker Rönnberg
- Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Linköping University, Sweden
28
Extrinsic Cognitive Load Impairs Spoken Word Recognition in High- and Low-Predictability Sentences. Ear Hear 2019; 39:378-389. [PMID: 28945658 DOI: 10.1097/aud.0000000000000493] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
OBJECTIVES Listening effort (LE) induced by speech degradation reduces performance on concurrent cognitive tasks. However, a converse effect of extrinsic cognitive load on recognition of spoken words in sentences has not been shown. The aims of the present study were to (a) examine the impact of extrinsic cognitive load on spoken word recognition in a sentence recognition task and (b) determine whether cognitive load and/or LE needed to understand spectrally degraded speech would differentially affect word recognition in high- and low-predictability sentences. Downstream effects of speech degradation and sentence predictability on the cognitive load task were also examined. DESIGN One hundred twenty young adults identified sentence-final spoken words in high- and low-predictability Speech Perception in Noise sentences. Cognitive load consisted of a preload of short (low-load) or long (high-load) sequences of digits, presented visually before each spoken sentence and reported either before or after identification of the sentence-final word. LE was varied by spectrally degrading sentences with four-, six-, or eight-channel noise vocoding. Level of spectral degradation and order of report (digits first or words first) were between-participants variables. Effects of cognitive load, sentence predictability, and speech degradation on accuracy of sentence-final word identification as well as recall of preload digit sequences were examined. RESULTS In addition to anticipated main effects of sentence predictability and spectral degradation on word recognition, we found an effect of cognitive load, such that words were identified more accurately under low load than high load. However, load differentially affected word identification in high- and low-predictability sentences depending on the level of sentence degradation. 
Under severe spectral degradation (four-channel vocoding), the effect of cognitive load on word identification was present for high-predictability sentences but not for low-predictability sentences. Under mild spectral degradation (eight-channel vocoding), the effect of load was present for low-predictability sentences but not for high-predictability sentences. There were also reliable downstream effects of speech degradation and sentence predictability on recall of the preload digit sequences. Long digit sequences were more easily recalled following spoken sentences that were less spectrally degraded. When digits were reported after identification of sentence-final words, short digit sequences were recalled more accurately when the spoken sentences were predictable. CONCLUSIONS Extrinsic cognitive load can impair recognition of spectrally degraded spoken words in a sentence recognition task. Cognitive load affected word identification in both high- and low-predictability sentences, suggesting that load may impact both context use and lower-level perceptual processes. Consistent with prior work, LE also had downstream effects on memory for visual digit sequences. Results support the proposal that extrinsic cognitive load and LE induced by signal degradation both draw on a central, limited pool of cognitive resources that is used to recognize spoken words in sentences under adverse listening conditions.
29
Rönnberg J, Holmer E, Rudner M. Cognitive hearing science and ease of language understanding. Int J Audiol 2019; 58:247-261. [DOI: 10.1080/14992027.2018.1551631] [Citation(s) in RCA: 52] [Impact Index Per Article: 10.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/30/2023]
Affiliation(s)
- Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Emil Holmer
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Mary Rudner
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
30
Abstract
OBJECTIVES It is well known from previous research that when listeners are told what they are about to hear before a degraded or partially masked auditory signal is presented, the speech signal "pops out" of the background and becomes considerably more intelligible. The goal of this research was to explore whether this priming effect is as strong in older adults as in younger adults. DESIGN Fifty-six adults (28 older and 28 younger) listened to "nonsense" sentences spoken by a female talker in the presence of a 2-talker speech masker (also female) or a fluctuating speech-like noise masker at 5 signal-to-noise ratios. Just before, or just after, the auditory signal was presented, a typed caption was displayed on a computer screen. The caption sentence was either identical to the auditory sentence or differed by one key word. The subjects' task was to decide whether the caption and auditory messages were the same or different. Discrimination performance was reported in d'. The strength of the pop-out perception was inferred from the improvement in performance that was expected from the caption-before order of presentation. A subset of 12 subjects from each group made confidence judgments as they gave their responses, and also completed several cognitive tests. RESULTS Data showed a clear order effect for both subject groups and both maskers, with better same-different discrimination performance for the caption-before condition than the caption-after condition. However, for the two-talker masker, the younger adults obtained a larger and more consistent benefit from the caption-before order than the older adults across signal-to-noise ratios. Especially at the poorer signal-to-noise ratios, older subjects showed little evidence that they experienced the pop-out effect that is presumed to make the discrimination task easier. 
On average, older subjects also appeared to approach the task differently, being more reluctant than younger subjects to report that the captions and auditory sentences were the same. Correlation analyses indicated a significant negative association between age and priming benefit in the two-talker masker and nonsignificant associations between priming benefit in this masker and either high-frequency hearing loss or performance on the cognitive tasks. CONCLUSIONS Previous studies have shown that older adults are at least as good, if not better, at exploiting context in speech recognition, as compared with younger adults. The current results are not in disagreement with those findings but suggest that, under some conditions, the automatic priming process that may contribute to benefits from context is not as strong in older as in younger adults.
31
Alain C, Du Y, Bernstein LJ, Barten T, Banai K. Listening under difficult conditions: An activation likelihood estimation meta-analysis. Hum Brain Mapp 2018. [PMID: 29536592 DOI: 10.1002/hbm.24031] [Citation(s) in RCA: 73] [Impact Index Per Article: 12.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023] Open
Abstract
The brain networks supporting speech identification and comprehension under difficult listening conditions are not well specified. The networks hypothesized to underlie effortful listening include regions responsible for executive control. We conducted meta-analyses of auditory neuroimaging studies to determine whether a common activation pattern of the frontal lobe supports effortful listening under different speech manipulations. Fifty-three functional neuroimaging studies investigating speech perception were divided into three independent Activation Likelihood Estimate analyses based on the type of speech manipulation paradigm used: Speech-in-noise (SIN, 16 studies, involving 224 participants); spectrally degraded speech using filtering techniques (15 studies involving 270 participants); and linguistic complexity (i.e., levels of syntactic, lexical and semantic intricacy/density, 22 studies, involving 348 participants). Meta-analysis of the SIN studies revealed higher effort was associated with activation in left inferior frontal gyrus (IFG), left inferior parietal lobule, and right insula. Studies using spectrally degraded speech demonstrated increased activation of the insula bilaterally and the left superior temporal gyrus (STG). Studies manipulating linguistic complexity showed activation in the left IFG, right middle frontal gyrus, left middle temporal gyrus and bilateral STG. Planned contrasts revealed left IFG activation in linguistic complexity studies, which differed from activation patterns observed in SIN or spectral degradation studies. Although there was no significant overlap in prefrontal activation across these three speech manipulation paradigms, SIN and spectral degradation showed overlapping regions in left and right insula. These findings provide evidence that there is regional specialization within the left IFG and that differential executive networks underlie effortful listening.
Affiliation(s)
- Claude Alain: Rotman Research Institute, Baycrest Health Centre, Toronto, Ontario, Canada; Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Yi Du: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Lori J Bernstein: Department of Supportive Care, Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada; Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
- Thijs Barten: Rotman Research Institute, Baycrest Health Centre, Toronto, Ontario, Canada
- Karen Banai: Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
32
Is Listening in Noise Worth It? The Neurobiology of Speech Recognition in Challenging Listening Conditions. Ear Hear 2018; 37 Suppl 1:101S-10S. [PMID: 27355759 DOI: 10.1097/aud.0000000000000300] [Citation(s) in RCA: 80] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
This review examines findings from functional neuroimaging studies of speech recognition in noise to provide a neural systems level explanation for the effort and fatigue that can be experienced during speech recognition in challenging listening conditions. Neuroimaging studies of speech recognition consistently demonstrate that challenging listening conditions engage neural systems that are used to monitor and optimize performance across a wide range of tasks. These systems appear to improve speech recognition in younger and older adults, but sustained engagement of these systems also appears to produce an experience of effort and fatigue that may affect the value of communication. When considered in the broader context of the neuroimaging and decision making literature, the speech recognition findings from functional imaging studies indicate that the expected value, or expected level of speech recognition given the difficulty of listening conditions, should be considered when measuring effort and fatigue. The authors propose that the behavioral economics or neuroeconomics of listening can provide a conceptual and experimental framework for understanding effort and fatigue that may have clinical significance.
33
Hakonen M, May PJC, Jääskeläinen IP, Jokinen E, Sams M, Tiitinen H. Predictive processing increases intelligibility of acoustically distorted speech: Behavioral and neural correlates. Brain Behav 2017; 7:e00789. [PMID: 28948083 PMCID: PMC5607552 DOI: 10.1002/brb3.789] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/16/2017] [Revised: 06/10/2017] [Accepted: 06/26/2017] [Indexed: 11/08/2022] Open
Abstract
INTRODUCTION We examined which brain areas are involved in the comprehension of acoustically distorted speech using an experimental paradigm where the same distorted sentence can be perceived at different levels of intelligibility. This change in intelligibility occurs via a single intervening presentation of the intact version of the sentence, and the effect lasts at least on the order of minutes. Since the acoustic structure of the distorted stimulus is kept fixed and only intelligibility is varied, this allows one to study brain activity related to speech comprehension specifically. METHODS In a functional magnetic resonance imaging (fMRI) experiment, a stimulus set contained a block of six distorted sentences. This was followed by the intact counterparts of the sentences, after which the sentences were presented in distorted form again. A total of 18 such sets were presented to 20 human subjects. RESULTS The blood oxygenation level dependent (BOLD) responses elicited by the distorted sentences which came after the disambiguating, intact sentences were contrasted with the responses to the sentences presented before disambiguation. This revealed increased activity in the bilateral frontal pole, the dorsal anterior cingulate/paracingulate cortex, and the right frontal operculum. Decreased BOLD responses were observed in the posterior insula, Heschl's gyrus, and the posterior superior temporal sulcus. CONCLUSIONS The brain areas that showed BOLD-enhancement for increased sentence comprehension have been associated with executive functions and with the mapping of incoming sensory information to representations stored in episodic memory. Thus, the comprehension of acoustically distorted speech may be associated with the engagement of memory-related subsystems. Further, activity in the primary auditory cortex was modulated by prior experience, possibly in a predictive coding framework. Our results suggest that memory biases the perception of ambiguous sensory information toward interpretations that have the highest probability to be correct based on previous experience.
Affiliation(s)
- Maria Hakonen: Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, Aalto, Finland; Department of Physiology, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Patrick J. C. May: Medical Research Council Institute of Hearing Research, School of Medicine, The University of Nottingham, Nottingham, UK; Special Laboratory Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology, Magdeburg, Germany
- Iiro P. Jääskeläinen: Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, Aalto, Finland
- Emma Jokinen: Department of Signal Processing and Acoustics, School of Electrical Engineering, Aalto University, Aalto, Finland
- Mikko Sams: Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, Aalto, Finland
34
Moberly AC, Houston DM, Harris MS, Adunka OF, Castellanos I. Verbal working memory and inhibition-concentration in adults with cochlear implants. Laryngoscope Investig Otolaryngol 2017; 2:254-261. [PMID: 29094068 PMCID: PMC5655567 DOI: 10.1002/lio2.90] [Citation(s) in RCA: 31] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2017] [Revised: 05/19/2017] [Accepted: 06/20/2017] [Indexed: 11/27/2022] Open
Abstract
Objectives Neurocognitive functions contribute to speech recognition in postlingual adults with cochlear implants (CIs). In particular, better verbal working memory (WM) on modality-specific (auditory) WM tasks predicts better speech recognition. It remains unclear, however, whether this association can be attributed to basic underlying modality-general neurocognitive functions, or whether it is solely a result of the degraded nature of auditory signals delivered by the CI. Three hypotheses were tested: 1) both modality-specific and modality-general tasks of verbal WM would predict scores of sentence recognition in speech-shaped noise; 2) basic modality-general neurocognitive functions of controlled fluency and inhibition-concentration would predict both modality-specific and modality-general verbal WM; and 3) scores on both tasks of verbal WM would mediate the effects of more basic neurocognitive functions on sentence recognition. Study Design Cross-sectional study of 30 postlingual adults with CIs and 30 age-matched normal-hearing (NH) controls. Materials and Methods Participants were tested for sentence recognition in speech-shaped noise, along with verbal WM using a modality-general task (Reading Span) and an auditory modality-specific task (Listening Span). Participants were also assessed for controlled fluency and inhibition-concentration abilities. Results For CI users only, Listening Span scores predicted sentence recognition, and Listening Span scores mediated the effects of inhibition-concentration on speech recognition. Scores on Reading Span were not related to sentence recognition for either group. Conclusion Inhibition-concentration skills play an important role in CI users' sentence recognition, with effects mediated by modality-specific verbal WM. Further studies will examine inhibition-concentration and WM skills as novel targets for clinical intervention. Level of Evidence 4.
Affiliation(s)
- Aaron C Moberly: Department of Otolaryngology, The Ohio State University Wexner Medical Center
- Derek M Houston: Department of Otolaryngology, The Ohio State University Wexner Medical Center
- Michael S Harris: Department of Otolaryngology, The Ohio State University Wexner Medical Center
- Oliver F Adunka: Department of Otolaryngology, The Ohio State University Wexner Medical Center
- Irina Castellanos: Department of Otolaryngology, The Ohio State University Wexner Medical Center
35
Rönnberg J, Lunner T, Ng EHN, Lidestam B, Zekveld AA, Sörqvist P, Lyxell B, Träff U, Yumba W, Classon E, Hällgren M, Larsby B, Signoret C, Pichora-Fuller MK, Rudner M, Danielsson H, Stenfelt S. Hearing impairment, cognition and speech understanding: exploratory factor analyses of a comprehensive test battery for a group of hearing aid users, the n200 study. Int J Audiol 2016; 55:623-42. [PMID: 27589015 PMCID: PMC5044772 DOI: 10.1080/14992027.2016.1219775] [Citation(s) in RCA: 63] [Impact Index Per Article: 7.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2016] [Revised: 07/29/2016] [Accepted: 07/29/2016] [Indexed: 02/08/2023]
Abstract
OBJECTIVE The aims of the current n200 study were to assess the structural relations between three classes of test variables (i.e., HEARING, COGNITION, and aided speech-in-noise OUTCOMES) and to describe the theoretical implications of these relations for the Ease of Language Understanding (ELU) model. STUDY SAMPLE Participants were 200 hard-of-hearing hearing-aid users, with a mean age of 60.8 years. Forty-three percent were females and the mean hearing threshold in the better ear was 37.4 dB HL. DESIGN LEVEL 1 factor analyses extracted one factor per test and/or cognitive function based on a priori conceptualizations. The more abstract LEVEL 2 factor analyses were performed separately for the three classes of test variables. RESULTS The HEARING test variables resulted in two LEVEL 2 factors, which we labelled SENSITIVITY and TEMPORAL FINE STRUCTURE; the COGNITIVE variables in one COGNITION factor only; and OUTCOMES in two factors, NO CONTEXT and CONTEXT. COGNITION predicted the NO CONTEXT factor more strongly than the CONTEXT outcome factor. TEMPORAL FINE STRUCTURE and SENSITIVITY were associated with COGNITION, and all three contributed significantly and independently to the outcome scores, especially NO CONTEXT (R² = 0.40). CONCLUSIONS All LEVEL 2 factors are important theoretically as well as for clinical assessment.
Affiliation(s)
- Jerker Rönnberg: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Thomas Lunner: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden; Eriksholm Research Centre, Oticon A/S, Rørtangvej 20, 3070 Snekkersten, Denmark
- Elaine Hoi Ning Ng: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Björn Lidestam: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Adriana Agatha Zekveld: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Section Ear & Hearing, Dept. of Otolaryngology-Head and Neck Surgery and EMGO Institute, VU University Medical Center, Amsterdam, The Netherlands
- Patrik Sörqvist: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Building, Energy and Environmental Engineering, University of Gävle, Gävle, Sweden
- Björn Lyxell: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Ulf Träff: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Wycliffe Yumba: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Elisabet Classon: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Mathias Hällgren: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Birgitta Larsby: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Carine Signoret: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- M. Kathleen Pichora-Fuller: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Psychology, University of Toronto, Toronto, Ontario, Canada; The Toronto Rehabilitation Institute, University Health Network, Toronto, Ontario, Canada; The Rotman Research Institute, Baycrest Hospital, Toronto, Ontario, Canada
- Mary Rudner: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Henrik Danielsson: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Stefan Stenfelt: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
36
An fMRI study investigating effects of conceptually related sentences on the perception of degraded speech. Cortex 2016; 79:57-74. [PMID: 27100909 DOI: 10.1016/j.cortex.2016.03.014] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2015] [Revised: 01/06/2016] [Accepted: 03/15/2016] [Indexed: 11/20/2022]
Abstract
Prior research has shown that the perception of degraded speech is influenced by within sentence meaning and recruits one or more components of a frontal-temporal-parietal network. The goal of the current study is to examine whether the overall conceptual meaning of a sentence, made up of one set of words, influences the perception of a second acoustically degraded sentence, made up of a different set of words. Using functional magnetic resonance imaging (fMRI), we presented an acoustically clear sentence followed by an acoustically degraded sentence and manipulated the semantic relationship between them: Related in meaning (but consisting of different content words), Unrelated in meaning, or Same. Results showed that listeners' word recognition accuracy for the acoustically degraded sentences was significantly higher when the target sentence was preceded by a conceptually related compared to a conceptually unrelated sentence. Sensitivity to conceptual relationships was associated with enhanced activity in middle and inferior frontal, temporal, and parietal areas. In addition, the left middle frontal gyrus (LMFG), left inferior frontal gyrus (LIFG), and left middle temporal gyrus (LMTG) showed activity that correlated with individual performance on the Related condition. The superior temporal gyrus (STG) showed increased activation in the Same condition suggesting that it is sensitive to perceptual similarity rather than the integration of meaning between the sentence pairs. A fronto-temporo-parietal network appears to consolidate information sources across multiple levels of language (acoustic, lexical, syntactic, semantic) to build, and ultimately integrate conceptual information across sentences and facilitate the perception of a degraded speech signal. However, the nature of the sources of information that are available differentially recruit specific regions and modulate their activity within this network. Implications of these findings for the functional architecture of the network are considered.
37
Wayne RV, Hamilton C, Jones Huyck J, Johnsrude IS. Working Memory Training and Speech in Noise Comprehension in Older Adults. Front Aging Neurosci 2016; 8:49. [PMID: 27047370 PMCID: PMC4801856 DOI: 10.3389/fnagi.2016.00049] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2015] [Accepted: 02/22/2016] [Indexed: 11/16/2022] Open
Abstract
Understanding speech in the presence of background sound can be challenging for older adults. Speech comprehension in noise appears to depend on working memory and executive-control processes (e.g., Heald and Nusbaum, 2014), and their augmentation through training may have rehabilitative potential for age-related hearing loss. We examined the efficacy of adaptive working-memory training (Cogmed; Klingberg et al., 2002) in 24 older adults, assessing generalization to other working-memory tasks (near-transfer) and to other cognitive domains (far-transfer) using a cognitive test battery, including the Reading Span test, sensitive to working memory (e.g., Daneman and Carpenter, 1980). We also assessed far transfer to speech-in-noise performance, including a closed-set sentence task (Kidd et al., 2008). To examine the effect of cognitive training on benefit obtained from semantic context, we also assessed transfer to open-set sentences; half were semantically coherent (high-context) and half were semantically anomalous (low-context). Subjects completed 25 sessions (0.5–1 h each; 5 sessions/week) of both adaptive working memory training and placebo training over 10 weeks in a crossover design. Subjects' scores on the adaptive working-memory training tasks improved as a result of training. However, training did not transfer to other working memory tasks, nor to tasks recruiting other cognitive domains. We did not observe any training-related improvement in speech-in-noise performance. Measures of working memory correlated with the intelligibility of low-context, but not high-context, sentences, suggesting that sentence context may reduce the load on working memory. The Reading Span test significantly correlated only with a test of visual episodic memory, suggesting that the Reading Span test is not a pure-test of working memory, as is commonly assumed.
Affiliation(s)
- Rachel V Wayne: Department of Psychology, Queen's University, Kingston, ON, Canada
- Cheryl Hamilton: Department of Psychology, Queen's University, Kingston, ON, Canada
- Ingrid S Johnsrude: Department of Psychology, Queen's University, Kingston, ON, Canada; Department of Psychology, School of Communication Sciences and Disorders, The Brain and Mind Institute, University of Western Ontario, London, ON, Canada
38
Rudner M, Toscano E, Holmer E. Load and distinctness interact in working memory for lexical manual gestures. Front Psychol 2015; 6:1147. [PMID: 26321979 PMCID: PMC4535352 DOI: 10.3389/fpsyg.2015.01147] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2015] [Accepted: 07/23/2015] [Indexed: 11/13/2022] Open
Abstract
The Ease of Language Understanding model (Rönnberg et al., 2013) predicts that decreasing the distinctness of language stimuli increases working memory load; in the speech domain this notion is supported by empirical evidence. Our aim was to determine whether such an over-additive interaction can be generalized to sign processing in sign-naïve individuals and whether it is modulated by experience of computer gaming. Twenty young adults with no knowledge of sign language performed an n-back working memory task based on manual gestures lexicalized in sign language; the visual resolution of the signs and working memory load were manipulated. Performance was poorer when load was high and resolution was low. These two effects interacted over-additively, demonstrating that reducing the resolution of signed stimuli increases working memory load when there is no pre-existing semantic representation. This suggests that load and distinctness are handled by a shared amodal mechanism which can be revealed empirically when stimuli are degraded and load is high, even without pre-existing semantic representation. There was some evidence that the mechanism is influenced by computer gaming experience. Future work should explore how the shared mechanism is influenced by pre-existing semantic representation and sensory factors together with computer gaming experience.
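As a concrete illustration of the n-back task used in the study above: a target is any item that matches the one presented n positions earlier, and performance is scored against those target positions. The sketch below uses a toy letter coding for the stimuli (not the study's lexical gestures) purely to show the scoring logic:

```python
def nback_targets(stimuli, n):
    """Indices where the current item matches the item n steps back --
    the target positions in an n-back working memory task (sketch)."""
    return [i for i in range(n, len(stimuli)) if stimuli[i] == stimuli[i - n]]

def nback_hit_rate(stimuli, n, responses):
    """responses: set of indices at which a 'match' response was given.
    Returns the proportion of targets correctly detected."""
    targets = nback_targets(stimuli, n)
    if not targets:
        return 0.0
    return sum(i in responses for i in targets) / len(targets)

# 2-back example: positions 2, 3, 6, and 7 are targets in this sequence.
seq = list("ABABCACA")
targets = nback_targets(seq, 2)
```

Raising n increases the number of items that must be simultaneously maintained and updated, which is how the task manipulates working memory load.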
Affiliation(s)
- Mary Rudner: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Elena Toscano: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Emil Holmer: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
39
Dopamine receptor D4 (DRD4) gene modulates the influence of informational masking on speech recognition. Neuropsychologia 2014; 67:121-31. [PMID: 25497692 DOI: 10.1016/j.neuropsychologia.2014.12.013] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2014] [Revised: 12/09/2014] [Accepted: 12/10/2014] [Indexed: 12/30/2022]
Abstract
Listeners vary substantially in their ability to recognize speech in noisy environments. Here we examined the role of genetic variation on individual differences in speech recognition in various noise backgrounds. Background noise typically varies in the levels of energetic masking (EM) and informational masking (IM) imposed on target speech. Relative to EM, release from IM is hypothesized to place greater demand on executive function to selectively attend to target speech while ignoring competing noises. Recent evidence suggests that the long allele variant in exon III of the DRD4 gene, primarily expressed in the prefrontal cortex, may be associated with enhanced selective attention to goal-relevant high-priority information even in the face of interference. We investigated the extent to which this polymorphism is associated with speech recognition in IM and EM conditions. In an unscreened adult sample (Experiment 1) and a larger screened replication sample (Experiment 2), we demonstrate that individuals with the DRD4 long variant show better recognition performance in noise conditions involving significant IM, but not in EM conditions. In Experiment 2, we also obtained neuropsychological measures to assess the underlying mechanisms. Mediation analysis revealed that this listening condition-specific advantage was mediated by enhanced executive attention/working memory capacity in individuals with the long allele variant. These findings suggest that DRD4 may contribute specifically to individual differences in speech recognition ability in noise conditions that place demands on executive function.
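The mediation analysis described above follows the standard product-of-coefficients logic: the genotype-to-mediator slope (a path) multiplied by the mediator-to-outcome slope controlling for genotype (b path) gives the indirect effect. The sketch below illustrates that arithmetic on simulated data; the 0/1 allele coding, effect sizes, and covariance helpers are invented for illustration and are not the study's actual data or analysis:

```python
import random

random.seed(1)
n = 4000

# Assumed toy structure: genotype X (0 = short, 1 = long DRD4 allele)
# -> executive attention/WM capacity M -> speech recognition in IM, Y.
x = [float(random.randint(0, 1)) for _ in range(n)]
m = [0.5 * xi + random.gauss(0, 1) for xi in x]   # a path = 0.5
y = [0.7 * mi + random.gauss(0, 1) for mi in m]   # b path = 0.7, no direct effect

def mean(v):
    return sum(v) / len(v)

def cov(u, v):
    mu, mv = mean(u), mean(v)
    return sum((a_ - mu) * (b_ - mv) for a_, b_ in zip(u, v)) / len(u)

# Product-of-coefficients mediation:
a = cov(x, m) / cov(x, x)                         # slope of M on X
# Partial slope of Y on M controlling for X (normal equations, 2 predictors):
b = (cov(m, y) * cov(x, x) - cov(x, y) * cov(x, m)) / (
    cov(m, m) * cov(x, x) - cov(x, m) ** 2)
indirect = a * b                                  # expected near 0.5 * 0.7 = 0.35
```

In the actual study the significance of such an indirect effect would be assessed with bootstrap or Sobel-type inference, which this sketch omits.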
40
Zekveld AA, Heslenfeld DJ, Johnsrude IS, Versfeld NJ, Kramer SE. The eye as a window to the listening brain: Neural correlates of pupil size as a measure of cognitive listening load. Neuroimage 2014; 101:76-86. [DOI: 10.1016/j.neuroimage.2014.06.069] [Citation(s) in RCA: 100] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2014] [Revised: 06/26/2014] [Accepted: 06/28/2014] [Indexed: 11/26/2022] Open
41
Moradi S, Lidestam B, Saremi A, Rönnberg J. Gated auditory speech perception: effects of listening conditions and cognitive capacity. Front Psychol 2014; 5:531. [PMID: 24926274 PMCID: PMC4040882 DOI: 10.3389/fpsyg.2014.00531] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2014] [Accepted: 05/13/2014] [Indexed: 11/13/2022] Open
Abstract
This study aimed to measure the initial portion of signal required for the correct identification of auditory speech stimuli (or isolation points, IPs) in silence and noise, and to investigate the relationships between auditory and cognitive functions in silence and noise. Twenty-one university students were presented with auditory stimuli in a gating paradigm for the identification of consonants, words, and final words in highly predictable and low predictable sentences. The Hearing in Noise Test (HINT), the reading span test, and the Paced Auditory Serial Addition Test were also administered to measure speech-in-noise ability, working memory and attentional capacities of the participants, respectively. The results showed that noise delayed the identification of consonants, words, and final words in highly predictable and low predictable sentences. HINT performance correlated with working memory and attentional capacities. In the noise condition, there were correlations between HINT performance, cognitive task performance, and the IPs of consonants and words. In the silent condition, there were no correlations between auditory and cognitive tasks. In conclusion, a combination of hearing-in-noise ability, working memory capacity, and attention capacity is needed for the early identification of consonants and words in noise.
Affiliation(s)
- Shahram Moradi: Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Björn Lidestam: Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Amin Saremi: Division of Technical Audiology, Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden; Cluster of Excellence "Hearing4all", Department for Neuroscience, Computational Neuroscience Group, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Jerker Rönnberg: Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
42
Cognitive spare capacity and speech communication: a narrative overview. Biomed Res Int 2014; 2014:869726. [PMID: 24971355] [PMCID: PMC4058272] [DOI: 10.1155/2014/869726] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 02/10/2014] [Accepted: 05/13/2014] [Indexed: 01/27/2023]
Abstract
Background noise can make speech communication tiring and cognitively taxing, especially for individuals with hearing impairment. It is now well established that better working memory capacity is associated with better ability to understand speech under adverse conditions as well as better ability to benefit from the advanced signal processing in modern hearing aids. Recent work has shown that although such processing cannot overcome hearing handicap, it can increase cognitive spare capacity, that is, the ability to engage in higher level processing of speech. This paper surveys recent work on cognitive spare capacity and suggests new avenues of investigation.
43
Mishra S, Stenfelt S, Lunner T, Rönnberg J, Rudner M. Cognitive spare capacity in older adults with hearing loss. Front Aging Neurosci 2014; 6:96. [PMID: 24904409 PMCID: PMC4033040 DOI: 10.3389/fnagi.2014.00096] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2013] [Accepted: 04/29/2014] [Indexed: 12/28/2022] Open
Abstract
Individual differences in working memory capacity (WMC) are associated with speech recognition in adverse conditions, reflecting the need to maintain and process speech fragments until lexical access can be achieved. When working memory resources are engaged in unlocking the lexicon, there is less Cognitive Spare Capacity (CSC) available for higher level processing of speech. CSC is essential for interpreting the linguistic content of speech input and preparing an appropriate response, that is, engaging in conversation. Previously, we showed, using a Cognitive Spare Capacity Test (CSCT) that in young adults with normal hearing, CSC was not generally related to WMC and that when CSC decreased in noise it could be restored by visual cues. In the present study, we investigated CSC in 24 older adults with age-related hearing loss, by administering the CSCT and a battery of cognitive tests. We found generally reduced CSC in older adults with hearing loss compared to the younger group in our previous study, probably because they had poorer cognitive skills and deployed them differently. Importantly, CSC was not reduced in the older group when listening conditions were optimal. Visual cues improved CSC more for this group than for the younger group in our previous study. CSC of older adults with hearing loss was not generally related to WMC but it was consistently related to episodic long term memory, suggesting that the efficiency of this processing bottleneck is important for executive processing of speech in this group.
Affiliation(s)
- Sushmit Mishra
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University Linköping, Sweden
| | - Stefan Stenfelt
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University Linköping, Sweden ; Department of Clinical and Experimental Medicine, Linköping University Linköping, Sweden
| | - Thomas Lunner
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University Linköping, Sweden ; Department of Clinical and Experimental Medicine, Linköping University Linköping, Sweden ; Eriksholm Research Centre, Oticon A/S Snekkersten, Denmark
| | - Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University Linköping, Sweden
| | - Mary Rudner
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University Linköping, Sweden
| |
44
Janse E, Jesse A. Working memory affects older adults' use of context in spoken-word recognition. Q J Exp Psychol (Hove) 2014; 67:1842-62. PMID: 24443921. DOI: 10.1080/17470218.2013.879391.
Abstract
Many older listeners report difficulties in understanding speech in noisy situations. Working memory and other cognitive skills may modulate older listeners' ability to use context information to alleviate the effects of noise on spoken-word recognition. In the present study, we investigated whether verbal working memory predicts older adults' ability to immediately use context information in the recognition of words embedded in sentences, presented in different listening conditions. In a phoneme-monitoring task, older adults were asked to detect, as quickly and accurately as possible, target phonemes in sentences spoken by a target speaker. Target speech was presented without noise, with fluctuating speech-shaped noise, or with competing speech from a single distractor speaker. The gradient measure of contextual probability (derived from a separate offline rating study) affected the speed of recognition. Contextual facilitation was modulated by older listeners' verbal working memory (measured with a backward digit span task) and age across listening conditions. Working memory and age, as well as hearing loss, were also the most consistent predictors of overall listening performance. Older listeners' immediate benefit from context in spoken-word recognition thus relates to their ability to keep and update a semantic representation of the sentence content in working memory.
Affiliation(s)
- Esther Janse
- Psychology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
45
Sörqvist P, Hurtig A, Ljung R, Rönnberg J. High second-language proficiency protects against the effects of reverberation on listening comprehension. Scand J Psychol 2014; 55:91-6. PMID: 24646043. PMCID: PMC4211359. DOI: 10.1111/sjop.12115.
Abstract
The purpose of this experiment was to investigate whether classroom reverberation influences second-language (L2) listening comprehension. Moreover, we investigated whether individual differences in baseline L2 proficiency and in working memory capacity (WMC) modulate the effect of reverberation time on L2 listening comprehension. The results showed that L2 listening comprehension decreased as reverberation time increased. Participants with higher baseline L2 proficiency were less susceptible to this effect. WMC was also related to the effect of reverberation (although just barely significant), but the effect of WMC was eliminated when baseline L2 proficiency was statistically controlled. Taken together, the results suggest that top-down cognitive capabilities support listening in adverse conditions. Potential implications for the Swedish national tests in English are discussed.
Affiliation(s)
- Patrik Sörqvist
- Department of Building, Energy and Environmental Engineering, University of Gävle, Gävle, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
46
Rönnberg N, Rudner M, Lunner T, Stenfelt S. Assessing listening effort by measuring short-term memory storage and processing of speech in noise. Speech Lang Hear 2014. DOI: 10.1179/2050572813y.0000000033.
47
Guediche S, Blumstein SE, Fiez JA, Holt LL. Speech perception under adverse conditions: insights from behavioral, computational, and neuroscience research. Front Syst Neurosci 2014; 7:126. PMID: 24427119. PMCID: PMC3879477. DOI: 10.3389/fnsys.2013.00126.
Abstract
Adult speech perception reflects the long-term regularities of the native language, but it is also flexible such that it accommodates and adapts to adverse listening conditions and short-term deviations from native-language norms. The purpose of this article is to examine how the broader neuroscience literature can inform and advance research efforts in understanding the neural basis of flexibility and adaptive plasticity in speech perception. Specifically, we highlight the potential role of learning algorithms that rely on prediction error signals and discuss specific neural structures that are likely to contribute to such learning. To this end, we review behavioral studies, computational accounts, and neuroimaging findings related to adaptive plasticity in speech perception. Already, a few studies have alluded to a potential role of these mechanisms in adaptive plasticity in speech perception. Furthermore, we consider research topics in neuroscience that offer insight into how perception can be adaptively tuned to short-term deviations while balancing the need to maintain stability in the perception of learned long-term regularities. Consideration of the application and limitations of these algorithms in characterizing flexible speech perception under adverse conditions promises to inform theoretical models of speech.
Affiliation(s)
- Sara Guediche
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- Sheila E. Blumstein
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- Department of Cognitive, Linguistic, and Psychological Sciences, Brain Institute, Brown University, Providence, RI, USA
- Julie A. Fiez
- Department of Neuroscience, Center for Neuroscience at the University of Pittsburgh, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Psychology at Carnegie Mellon University and Department of Neuroscience at the University of Pittsburgh, Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Lori L. Holt
- Department of Neuroscience, Center for Neuroscience at the University of Pittsburgh, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Psychology at Carnegie Mellon University and Department of Neuroscience at the University of Pittsburgh, Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA
48
Becker R, Pefkou M, Michel CM, Hervais-Adelman AG. Left temporal alpha-band activity reflects single word intelligibility. Front Syst Neurosci 2013; 7:121. PMID: 24416001. PMCID: PMC3873629. DOI: 10.3389/fnsys.2013.00121.
Abstract
The electroencephalographic (EEG) correlates of degraded speech perception have been explored in a number of recent studies. However, such investigations have often been inconclusive as to whether observed differences in brain responses between conditions result from different acoustic properties of more or less intelligible stimuli or whether they relate to cognitive processes implicated in comprehending challenging stimuli. In this study we used noise vocoding to spectrally degrade monosyllabic words in order to manipulate their intelligibility. We used spectral rotation to generate incomprehensible control conditions matched in terms of spectral detail. We recorded EEG from 14 volunteers who listened to a series of noise-vocoded (NV) and noise-vocoded spectrally-rotated (rNV) words, while they carried out a detection task. We specifically sought components of the EEG response that showed an interaction between spectral rotation and spectral degradation. This reflects those aspects of the brain electrical response that are related to the intelligibility of acoustically degraded monosyllabic words, while controlling for spectral detail. An interaction between spectral complexity and rotation was apparent in both evoked and induced activity. Analyses of event-related potentials showed an interaction effect for a P300-like component at several centro-parietal electrodes. Time-frequency analysis of the EEG signal revealed a monotonic increase in event-related desynchronization (ERD) in the alpha band for the NV but not the rNV stimuli at a left temporo-central electrode cluster from 420-560 ms, reflecting a direct relationship between the strength of alpha-band ERD and intelligibility. By matching NV words with their incomprehensible rNV homologues, we reveal the spatiotemporal pattern of evoked and induced processes involved in degraded speech perception, largely uncontaminated by purely acoustic effects.
Affiliation(s)
- Robert Becker
- Functional Brain Mapping Lab, Department of Fundamental Neuroscience, University of Geneva, Geneva, Switzerland
- Maria Pefkou
- Brain and Language Lab, Department of Clinical Neuroscience, University of Geneva, Geneva, Switzerland
- Christoph M Michel
- Functional Brain Mapping Lab, Department of Fundamental Neuroscience, University of Geneva, Geneva, Switzerland
- Alexis G Hervais-Adelman
- Brain and Language Lab, Department of Clinical Neuroscience, University of Geneva, Geneva, Switzerland
49
Mishra S, Lunner T, Stenfelt S, Rönnberg J, Rudner M. Seeing the talker's face supports executive processing of speech in steady state noise. Front Syst Neurosci 2013; 7:96. PMID: 24324411. PMCID: PMC3840300. DOI: 10.3389/fnsys.2013.00096.
Abstract
Listening to speech in noise depletes cognitive resources, affecting speech processing. The present study investigated how remaining resources or cognitive spare capacity (CSC) can be deployed by young adults with normal hearing. We administered a test of CSC (CSCT; Mishra et al., 2013) along with a battery of established cognitive tests to 20 participants with normal hearing. In the CSCT, lists of two-digit numbers were presented with and without visual cues in quiet, as well as in steady-state and speech-like noise at a high intelligibility level. In low load conditions, two numbers were recalled according to instructions inducing executive processing (updating, inhibition) and in high load conditions the participants were additionally instructed to recall one extra number, which was always the first item in the list. In line with previous findings, results showed that CSC was sensitive to memory load and executive function but generally not related to working memory capacity (WMC). Furthermore, CSCT scores in quiet were lowered by visual cues, probably due to distraction. In steady-state noise, the presence of visual cues improved CSCT scores, probably by enabling better encoding. Contrary to our expectation, CSCT performance was disrupted more in steady-state than speech-like noise, although only without visual cues, possibly because selective attention could be used to ignore the speech-like background and provide an enriched representation of target items in working memory similar to that obtained in quiet. This interpretation is supported by a consistent association between CSCT scores and updating skills.
Affiliation(s)
- Sushmit Mishra
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
50
|
Miller N. Measuring up to speech intelligibility. INTERNATIONAL JOURNAL OF LANGUAGE & COMMUNICATION DISORDERS 2013; 48:601-612. [PMID: 24119170 DOI: 10.1111/1460-6984.12061] [Citation(s) in RCA: 45] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Improvement or maintenance of speech intelligibility is a central aim in a whole range of conditions in speech-language therapy, both developmental and acquired. Best clinical practice and pursuance of the evidence base for interventions would suggest that measurement of intelligibility plays a vital role in clinical decision-making and monitoring. However, what should be measured to gauge intelligibility and how this is achieved and relates to clinical planning continues to be a topic of debate. This review considers the strengths and weaknesses of selected clinical approaches to intelligibility assessment, stressing the importance of explanatory, diagnostic testing as both a more sensitive and a clinically informative method. The worth of this, and any approach, is predicated, though, on awareness and control of key design, elicitation, transcription and listening/listener variables to maximize validity and reliability of assessments. These are discussed. A distinction is drawn between signal-dependent and -independent factors in intelligibility evaluation. Discussion broaches how these different perspectives might be reconciled to deliver comprehensive insights into intelligibility levels and their clinical/educational significance. The paper ends with a call for wider implementation of best practice around intelligibility assessment.
Affiliation(s)
- Nick Miller
- Institute of Health and Society, Speech and Language Sciences, Newcastle University, Newcastle upon Tyne, UK