1
Akashi DA, Martinelli MC. Study of Speech Recognition in Noise and Working Memory in Adults and Elderly with Normal Hearing. Int Arch Otorhinolaryngol 2024;28:e473-e480. PMID: 38974622; PMCID: PMC11226241; DOI: 10.1055/s-0044-1779432.
Abstract
Introduction In clinical practice, patients with the same degree and configuration of hearing loss, or even with normal audiometric thresholds, present substantially different speech perception performance. This is probably because other factors, in addition to auditory sensitivity, interfere with speech perception. Thus, studies are needed that investigate listeners' performance under unfavorable listening conditions to identify the processes that interfere with speech perception in these subjects. Objective To verify the influence of age, temporal processing, and working memory on speech recognition in noise. Methods Thirty-eight adult and elderly individuals with normal hearing thresholds participated in the study. Participants were divided into two groups: the adult group (G1), composed of 10 individuals aged 21 to 33 years, and the elderly group (G2), with 28 participants aged 60 to 81 years. They underwent audiological assessment with the Portuguese Sentence List Test, Gaps-in-Noise test, Digit Span Memory test, Running Span Task, Corsi Block-Tapping test, and Visual Pattern test. Results The Running Span Task score proved to be a statistically significant predictor of the listening-in-noise variable. This result showed that the difference in listening-in-noise performance between groups G1 and G2 is due not only to aging but also to changes in working memory. Conclusion The study showed that working memory is a predictor of listening-in-noise performance in individuals with normal hearing, and that this task can provide important information when evaluating individuals who have difficulty hearing in unfavorable environments.
Affiliation(s)
- Daniela Aiko Akashi
- Department of Speech, Language and Hearing Sciences, Universidade Federal de São Paulo (UNIFESP), São Paulo, SP, Brazil
- Maria Cecília Martinelli
- Department of Speech, Language and Hearing Sciences, Universidade Federal de São Paulo (UNIFESP), São Paulo, SP, Brazil
2
Bhatt IS, Garay JAR, Bhagavan SG, Ingalls V, Dias R, Torkamani A. A genome-wide association study reveals a polygenic architecture of speech-in-noise deficits in individuals with self-reported normal hearing. Sci Rep 2024;14:13089. PMID: 38849415; PMCID: PMC11161523; DOI: 10.1038/s41598-024-63972-2.
Abstract
Speech-in-noise (SIN) perception is a primary complaint of individuals with audiometric hearing loss. SIN performance varies drastically, even among individuals with normal hearing. The present genome-wide association study (GWAS) investigated the genetic basis of SIN deficits in individuals with self-reported normal hearing in quiet situations. GWAS was performed on 279,911 individuals from the UK Biobank cohort, 58,847 of whom reported SIN deficits despite reporting normal hearing in quiet. GWAS identified 996 single-nucleotide polymorphisms (SNPs) achieving genome-wide significance (p < 5 × 10⁻⁸) across four genomic loci; 720 SNPs across 21 loci achieved suggestive significance (p < 10⁻⁶). GWAS signals were enriched in brain tissues, such as the anterior cingulate cortex, dorsolateral prefrontal cortex, entorhinal cortex, frontal cortex, hippocampus, and inferior temporal cortex. Cochlear cell types revealed no significant association with SIN deficits. SIN deficits were associated with various health traits, including neuropsychiatric, sensory, cognitive, metabolic, cardiovascular, and inflammatory conditions. A replication analysis was conducted on 242 healthy young adults, using self-reported speech perception, hearing thresholds (0.25-16 kHz), and distortion-product otoacoustic emissions (1-16 kHz). 73 SNPs were replicated with the self-reported speech perception measure; 211 were replicated with at least one, and 66 with at least two, audiological measures. 12 SNPs near or within MAPT, GRM3, and HLA-DQA1 were replicated for all audiological measures. The present study highlights a polygenic architecture underlying SIN deficits in individuals with self-reported normal hearing.
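As an illustrative aside (not the authors' pipeline), the two-tier significance filtering described above amounts to partitioning summary statistics by p-value; the SNP ids and p-values below are hypothetical:

```python
# Two-tier significance filter for GWAS summary statistics.
# Thresholds follow the conventional genome-wide (5e-8) and
# suggestive (1e-6) cutoffs; SNP ids and p-values are hypothetical.

GENOME_WIDE = 5e-8
SUGGESTIVE = 1e-6

def classify_snps(summary_stats):
    """Partition SNPs into genome-wide-significant and suggestive sets.

    summary_stats: dict mapping SNP id -> association p-value.
    """
    significant = {snp for snp, p in summary_stats.items() if p < GENOME_WIDE}
    suggestive = {snp for snp, p in summary_stats.items()
                  if GENOME_WIDE <= p < SUGGESTIVE}
    return significant, suggestive

stats = {"rs0001": 3e-9, "rs0002": 2e-7, "rs0003": 0.04}
sig, sug = classify_snps(stats)  # sig = {"rs0001"}, sug = {"rs0002"}
```

In real pipelines the suggestive tier is usually reported separately, as here, rather than merged with the genome-wide-significant hits.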
Affiliation(s)
- Ishan Sunilkumar Bhatt
- Department of Communication Sciences and Disorders, University of Iowa, 250 Hawkins Dr, Iowa City, IA, 52242, USA.
- Juan Antonio Raygoza Garay
- Department of Communication Sciences and Disorders, University of Iowa, 250 Hawkins Dr, Iowa City, IA, 52242, USA
- Holden Comprehensive Cancer Center, University of Iowa, Iowa City, IA, 52242, USA
- Srividya Grama Bhagavan
- Department of Communication Sciences and Disorders, University of Iowa, 250 Hawkins Dr, Iowa City, IA, 52242, USA
- Valerie Ingalls
- Department of Communication Sciences and Disorders, University of Iowa, 250 Hawkins Dr, Iowa City, IA, 52242, USA
- Raquel Dias
- Department of Microbiology and Cell Science, University of Florida, Gainesville, FL, 32608, USA
- Ali Torkamani
- Department of Integrative Structural and Computational Biology, Scripps Research Institute, La Jolla, CA, 92037, USA
3
Lu K, Dutta K, Mohammed A, Elhilali M, Shamma S. Temporal-Coherence Induces Binding of Responses to Sound Sequences in Ferret Auditory Cortex. bioRxiv [Preprint] 2024:2024.05.21.595170. PMID: 38854125; PMCID: PMC11160575; DOI: 10.1101/2024.05.21.595170.
Abstract
Binding the attributes of a sensory source is necessary to perceive it as a unified entity, one that can be attended to and extracted from its surrounding scene. In auditory perception, this is the essence of the cocktail party problem, in which a listener segregates one speaker from a mixture of voices, or one musical stream from simultaneous others. It is postulated that coherence of the temporal modulations of a source's features is necessary to bind them. The focus of this study is the role of temporal coherence in binding and segregation, specifically as evidenced by the neural correlates of rapid plasticity that enhances cortical responses among synchronized neurons while suppressing them among asynchronized ones. In a first experiment, we find that attention to a sound sequence rapidly binds it to other coherent sequences while suppressing nearby incoherent sequences, thus enhancing the contrast between the two groups. In a second experiment, a sequence of synchronized multi-tone complexes, embedded in a background cloud of randomly dispersed, desynchronized tones, perceptually and neurally pops out after a fraction of a second, highlighting the binding among its coherent tones against the incoherent background. These findings demonstrate the role of temporal coherence in binding and segregation.
Affiliation(s)
- Kai Lu
- Emory University Medical School
- Kelsey Dutta
- Electrical and Computer Engineering Department & Institute for Systems Research, University of Maryland College Park
- Ali Mohammed
- Electrical and Computer Engineering Department & Institute for Systems Research, University of Maryland College Park
- Mounya Elhilali
- Electrical and Computer Engineering, The Johns Hopkins University
- Shihab Shamma
- Electrical and Computer Engineering Department & Institute for Systems Research, University of Maryland College Park
- Département d'études cognitives, École normale supérieure, PSL
4
Jiang K, Albert MS, Coresh J, Couper DJ, Gottesman RF, Hayden KM, Jack CR, Knopman DS, Mosley TH, Pankow JS, Pike JR, Reed NS, Sanchez VA, Sharrett AR, Lin FR, Deal JA. Cross-Sectional Associations of Peripheral Hearing, Brain Imaging, and Cognitive Performance With Speech-in-Noise Performance: The Aging and Cognitive Health Evaluation in Elders Brain Magnetic Resonance Imaging Ancillary Study. Am J Audiol 2024:1-12. PMID: 38748919; DOI: 10.1044/2024_aja-23-00108.
Abstract
PURPOSE Population-based evidence on the interrelationships among hearing, brain structure, and cognition is limited. This study investigates the cross-sectional associations of peripheral hearing, brain imaging measures, and cognitive function with speech-in-noise performance among older adults. METHOD We studied 602 participants in the Aging and Cognitive Health Evaluation in Elders (ACHIEVE) brain magnetic resonance imaging (MRI) ancillary study, including 427 ACHIEVE baseline (2018-2020) participants with hearing loss and 175 Atherosclerosis Risk in Communities Neurocognitive Study Visit 6/7 (2016-2017/2018-2019) participants with normal hearing. Speech-in-noise performance, the outcome of interest, was assessed by the Quick Speech-in-Noise (QuickSIN) test (range: 0-30; higher = better). Predictors of interest included (a) peripheral hearing assessed by pure-tone audiometry; (b) brain imaging measures: structural MRI measures, white matter hyperintensities, and diffusion tensor imaging measures; and (c) cognitive performance assessed by a battery of 10 cognitive tests. All predictors were standardized to z scores. We estimated the difference in QuickSIN associated with every standard deviation (SD) of worse performance on each predictor (peripheral hearing, brain imaging, and cognition) using multivariable-adjusted linear regression, adjusting for demographic variables, lifestyle, and disease factors (Model 1) and, additionally, for the other predictors to assess independent associations (Model 2). RESULTS Participants were aged 70-84 years, 56% female, and 17% Black. Every SD worse in better-ear 4-frequency pure-tone average was associated with worse QuickSIN (-4.89, 95% confidence interval, CI [-5.57, -4.21]) when participants had peripheral hearing loss, independent of other predictors. Smaller temporal lobe volume was associated with worse QuickSIN, but the association was not independent of other predictors (-0.30, 95% CI [-0.86, 0.26]). Every SD worse in global cognitive performance was independently associated with worse QuickSIN (-0.90, 95% CI [-1.30, -0.50]). CONCLUSIONS Peripheral hearing and cognitive performance are independently associated with speech-in-noise performance among dementia-free older adults. The ongoing ACHIEVE trial will elucidate the effect of a hearing intervention that includes amplification and auditory rehabilitation on speech-in-noise understanding in older adults. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.25733679.
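The per-SD coefficients above follow from standardizing each predictor before regression. A minimal sketch with hypothetical data and a single predictor for clarity (the study fit multivariable-adjusted models):

```python
# Minimal sketch of per-SD regression coefficients: z-score a predictor so
# the OLS slope reads as "difference in QuickSIN per SD of the predictor".
# Data are hypothetical, not from the ACHIEVE study.
from statistics import mean, pstdev

def zscore(values):
    """Standardize values to mean 0, SD 1 (population SD)."""
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

def ols_slope(x, y):
    """Slope of y regressed on x (single predictor, for illustration)."""
    mx, my = mean(x), mean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

pta = [20.0, 30.0, 40.0, 50.0]       # hypothetical pure-tone averages (dB HL)
quicksin = [25.0, 22.0, 19.0, 16.0]  # hypothetical QuickSIN scores
slope_per_sd = ols_slope(zscore(pta), quicksin)  # QuickSIN change per SD of PTA
```

Because the predictor is in z-units, the slope is directly comparable across predictors measured on different scales, which is what makes the hearing, imaging, and cognition coefficients in the abstract commensurable.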
Affiliation(s)
- Kening Jiang
- Cochlear Center for Hearing and Public Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD
- Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD
- Marilyn S Albert
- Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD
- Josef Coresh
- Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD
- David J Couper
- Department of Biostatistics, Gillings School of Global Public Health, The University of North Carolina at Chapel Hill
- Rebecca F Gottesman
- Stroke Branch, National Institute of Neurological Disorders and Stroke Intramural Research Program, National Institutes of Health, Bethesda, MD
- Kathleen M Hayden
- Department of Social Sciences and Health Policy, Wake Forest School of Medicine, Winston-Salem, NC
- Thomas H Mosley
- The MIND Center, University of Mississippi Medical Center, Jackson, MS
- James S Pankow
- Division of Epidemiology and Community Health, University of Minnesota School of Public Health, Minneapolis
- James R Pike
- Department of Biostatistics, Gillings School of Global Public Health, The University of North Carolina at Chapel Hill
- Nicholas S Reed
- Cochlear Center for Hearing and Public Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD
- Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins School of Medicine, Baltimore, MD
- Victoria A Sanchez
- Department of Otolaryngology, Morsani College of Medicine, University of South Florida, Tampa
- A Richey Sharrett
- Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD
- Frank R Lin
- Cochlear Center for Hearing and Public Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD
- Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins School of Medicine, Baltimore, MD
- Jennifer A Deal
- Cochlear Center for Hearing and Public Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD
- Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins School of Medicine, Baltimore, MD
5
Boncz Á, Szalárdy O, Velősy PK, Béres L, Baumgartner R, Winkler I, Tóth B. The effects of aging and hearing impairment on listening in noise. iScience 2024;27:109295. PMID: 38558934; PMCID: PMC10981015; DOI: 10.1016/j.isci.2024.109295.
Abstract
The study investigates age-related decline in listening abilities, particularly in noisy environments, where the challenge lies in extracting meaningful information from variable sensory input (figure-ground segregation). The research focuses on peripheral and central factors contributing to this decline using a tone-cloud-based figure detection task. Results based on behavioral measures and event-related brain potentials (ERPs) indicate that, despite delayed perceptual processes and some deterioration in attention and executive functions with aging, the ability to detect sound sources in noise remains relatively intact. However, even mild hearing impairment significantly hampers the segregation of individual sound sources within a complex auditory scene. The severity of the hearing deficit correlates with an increased susceptibility to masking noise. The study underscores the impact of hearing impairment on auditory scene analysis and highlights the need for personalized interventions based on individual abilities.
Affiliation(s)
- Ádám Boncz
- Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
- Orsolya Szalárdy
- Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
- Institute of Behavioural Sciences, Faculty of Medicine, Semmelweis University, Budapest, Hungary
- Péter Kristóf Velősy
- Department of Cognitive Science, Budapest University of Technology and Economics, Budapest, Hungary
- Luca Béres
- Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
- Department of Cognitive Science, Budapest University of Technology and Economics, Budapest, Hungary
- Robert Baumgartner
- Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria
- István Winkler
- Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
- Brigitta Tóth
- Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
6
Colak H, Sendesen E, Turkyilmaz MD. Subcortical auditory system in tinnitus with normal hearing: insights from electrophysiological perspective. Eur Arch Otorhinolaryngol 2024. PMID: 38555317; DOI: 10.1007/s00405-024-08583-3.
Abstract
PURPOSE The mechanism of tinnitus remains poorly understood; however, studies have underscored the significance of the subcortical auditory system in tinnitus perception. In this study, our aim was to investigate the subcortical auditory system using electrophysiological measurements in individuals with tinnitus and normal hearing. Additionally, we aimed to assess speech-in-noise (SiN) perception to determine whether individuals with tinnitus exhibit SiN deficits despite having normal hearing thresholds. METHODS A total of 42 normal-hearing participants, 22 individuals with chronic subjective tinnitus and 20 controls, took part in the study. We recorded the auditory brainstem response (ABR) and the speech-evoked frequency-following response (sFFR) from all participants. SiN perception was assessed using the Matrix test. RESULTS Our results revealed a significant prolongation of the O peak, which encodes sound offset in the sFFR, in the tinnitus group (p < 0.01). Greater non-stimulus-evoked activity was also found in individuals with tinnitus (p < 0.01). In the ABR, the tinnitus group showed reduced wave I amplitude and prolonged absolute wave I, III, and V latencies (p ≤ 0.02). Our findings suggest that individuals with tinnitus had poorer SiN perception than controls (p < 0.05). CONCLUSION The deficit in encoding sound offset may indicate an impaired inhibitory mechanism in tinnitus. The greater non-stimulus-evoked activity observed in the tinnitus group suggests increased neural noise at the subcortical level. Additionally, individuals with tinnitus may experience speech-in-noise deficits despite having a normal audiogram. Taken together, these findings suggest that a lack of inhibition and increased neural noise may be associated with tinnitus perception.
Affiliation(s)
- Hasan Colak
- Biosciences Institute, Newcastle University, Newcastle Upon Tyne, UK.
- Eser Sendesen
- Department of Audiology, Hacettepe University, Ankara, Turkey
7
Çolak H, Aydemir BE, Sakarya MD, Çakmak E, Alniaçik A, Türkyilmaz MD. Subcortical Auditory Processing and Speech Perception in Noise Among Individuals With and Without Extended High-Frequency Hearing Loss. J Speech Lang Hear Res 2024;67:221-231. DOI: 10.1044/2023_jslhr-23-00023.
Abstract
PURPOSE The significance of extended high-frequency (EHF) hearing (> 8 kHz) is not yet well understood. In this study, we aimed to understand the relationship between EHF hearing loss (EHFHL) and speech perception in noise (SPIN), and the associated physiological signatures, using the speech-evoked frequency-following response (sFFR). METHOD Sixteen young adults with EHFHL and 16 age- and sex-matched individuals with normal hearing participated in the study. SPIN performance in right speech-right noise, left speech-left noise, and binaural listening conditions was evaluated using the Turkish Matrix Test. Additionally, subcortical auditory processing was assessed by recording sFFRs elicited by a 40-ms /da/ stimulus. RESULTS Individuals with EHFHL demonstrated poorer SPIN performance in all listening conditions (p < .01). Longer latencies were observed for the V (onset) and O (offset) peaks in these individuals (p ≤ .01); however, only the V/A peak amplitude was found to be significantly reduced in individuals with EHFHL (p < .01). CONCLUSIONS Our findings highlight the importance of EHF hearing and suggest that EHF hearing should be considered among the key elements in SPIN. Individuals with EHFHL show a tendency toward weaker subcortical auditory processing, which likely contributes to their poorer SPIN performance. Thus, routine assessment of EHF hearing should be implemented in clinical settings, alongside the evaluation of standard audiometric frequencies (0.25-8 kHz).
Affiliation(s)
- Hasan Çolak
- Department of Audiology, Baskent University, Ankara, Turkey
- Department of Audiology, Hacettepe University, Ankara, Turkey
- Eda Çakmak
- Department of Audiology, Baskent University, Ankara, Turkey
8
Choi I, Gander PE, Berger JI, Woo J, Choy MH, Hong J, Colby S, McMurray B, Griffiths TD. Spectral Grouping of Electrically Encoded Sound Predicts Speech-in-Noise Performance in Cochlear Implantees. J Assoc Res Otolaryngol 2023;24:607-617. PMID: 38062284; PMCID: PMC10752853; DOI: 10.1007/s10162-023-00918-x.
Abstract
OBJECTIVES Cochlear implant (CI) users exhibit large variability in understanding speech in noise. Past work in CI users found that spectral and temporal resolution correlate with speech-in-noise ability, but a large portion of variance remains unexplained. Recent work on normal-hearing listeners showed that the ability to group temporally and spectrally coherent tones in a complex auditory scene predicts speech-in-noise ability independently of the audiogram, highlighting a central mechanism for auditory scene analysis that contributes to speech-in-noise understanding. The current study examined whether this auditory grouping ability also contributes to speech-in-noise understanding in CI users. DESIGN Forty-seven post-lingually deafened CI users were tested with psychophysical measures of spectral and temporal resolution, a stochastic figure-ground task that depends on detecting a figure by grouping multiple fixed-frequency elements against a random background, and a sentence-in-noise measure. Multiple linear regression was used to predict sentence-in-noise performance from the other tasks. RESULTS No collinearity was found among the predictor variables. All three predictors (spectral and temporal resolution plus the figure-ground task) made significant contributions to the multiple linear regression model, indicating that auditory grouping ability in a complex auditory scene explains a further proportion of variance in CI users' speech-in-noise performance beyond that explained by spectral and temporal resolution. CONCLUSION Measures of cross-frequency grouping reflect an auditory cognitive mechanism that determines speech-in-noise understanding independently of cochlear function. Such measures are easily implemented clinically as predictors of CI success and suggest potential strategies for rehabilitation based on training with non-speech stimuli.
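As a hedged sketch of the collinearity screen mentioned above (not the authors' code; the data, variable names, and threshold are illustrative), pairwise Pearson correlations between predictors can be checked before fitting the regression:

```python
# Illustrative collinearity screen: flag predictor pairs whose Pearson
# correlation is high enough to destabilize a multiple linear regression.
# Variable names, values, and the 0.8 threshold are assumptions.
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

predictors = {
    "spectral_resolution": [1.2, 0.8, 1.5, 0.9, 1.1],
    "temporal_resolution": [1.0, 1.2, 0.9, 1.4, 0.8],
    "figure_ground":       [0.6, 0.8, 0.9, 0.4, 0.5],
}

# Flag any pair with |r| above the (assumed) threshold; none are flagged here,
# mirroring the "no collinearity" finding that licenses the joint model.
names = list(predictors)
flagged = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
           if abs(pearson_r(predictors[a], predictors[b])) > 0.8]
```

When a pair is flagged, one member is usually dropped or the variables are combined before interpreting individual coefficients.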
Affiliation(s)
- Inyong Choi
- Department of Communication Sciences and Disorders, University of Iowa, 250 Hawkins Dr., Iowa City, IA, 52242, USA.
- Department of Otolaryngology-Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA.
- Phillip E Gander
- Department of Otolaryngology-Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Department of Radiology, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Joel I Berger
- Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Jihwan Woo
- Department of Biomedical Engineering, University of Ulsan, Ulsan, Republic of Korea
- Matthew H Choy
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE1 7RU, UK
- Jean Hong
- Department of Otolaryngology-Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Sarah Colby
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, 52242, USA
- Bob McMurray
- Department of Communication Sciences and Disorders, University of Iowa, 250 Hawkins Dr., Iowa City, IA, 52242, USA
- Department of Otolaryngology-Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, 52242, USA
- Timothy D Griffiths
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE1 7RU, UK
9
Colby SE, McMurray B. Efficiency of spoken word recognition slows across the adult lifespan. Cognition 2023;240:105588. PMID: 37586157; PMCID: PMC10530619; DOI: 10.1016/j.cognition.2023.105588.
Abstract
Spoken word recognition is a critical hub during language processing, linking hearing and perception to meaning and syntax. Words must be recognized quickly and efficiently as speech unfolds to be successfully integrated into conversation. This makes word recognition a computationally challenging process even for young, normal-hearing adults. Older adults often experience declines in hearing and cognition, which could be linked by age-related declines in the cognitive processes specific to word recognition. However, it is unclear whether changes in word recognition across the lifespan can be accounted for by hearing or domain-general cognition. Participants (N = 107) responded to spoken words in a Visual World Paradigm task while their eyes were tracked to assess the real-time dynamics of word recognition. We examined several indices of word recognition from early adolescence through older adulthood (ages 11-78). The timing and proportion of eye fixations to target and competitor images reveal that spoken word recognition became more efficient through age 25 and began to slow in middle age, accompanied by declines in the ability to resolve competition (e.g., suppressing sandwich to recognize sandal). There was a unique effect of age even after accounting for differences in inhibitory control, processing speed, and hearing thresholds. This suggests a limited age range where listeners are peak performers.
Affiliation(s)
- Sarah E Colby
- Department of Psychological and Brain Sciences, University of Iowa, Psychological and Brain Sciences Building, Iowa City, IA, 52242, USA; Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA.
- Bob McMurray
- Department of Psychological and Brain Sciences, University of Iowa, Psychological and Brain Sciences Building, Iowa City, IA, 52242, USA; Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA; Department of Communication Sciences and Disorders, University of Iowa, Wendell Johnson Speech and Hearing Center, Iowa City, IA, 52242, USA; Department of Linguistics, University of Iowa, Phillips Hall, Iowa City, IA 52242, USA
10
Bhatt IS, Ramadugu SK, Goodman S, Bhagavan SG, Ingalls V, Dias R, Torkamani A. Polygenic Risk Score-Based Association Analysis of Speech-in-Noise and Hearing Threshold Measures in Healthy Young Adults with Self-reported Normal Hearing. J Assoc Res Otolaryngol 2023;24:513-525. PMID: 37783963; PMCID: PMC10695896; DOI: 10.1007/s10162-023-00911-4.
Abstract
PURPOSE Speech-in-noise (SIN) traits exhibit high inter-subject variability, even for healthy young adults reporting normal hearing. Emerging evidence suggests that genetic variability could influence inter-subject variability in SIN traits. Genome-wide association studies (GWAS) have uncovered the polygenic architecture of various adult-onset complex human conditions. Polygenic risk scores (PRS) summarize complex genetic susceptibility to quantify the degree of genetic risk for health conditions. The present study conducted PRS-based association analyses to identify PRS risk factors for SIN and hearing threshold measures in 255 healthy young adults (18-40 years) with self-reported normal hearing. METHODS Self-reported SIN perception abilities were assessed with the Speech, Spatial, and Qualities of Hearing Scale (SSQ12). QuickSIN and audiometry (0.25-16 kHz) were performed on 218 participants. Saliva-derived DNA was used for low-pass whole-genome sequencing, and 2620 PRS variables for various traits were calculated using models derived from the Polygenic Score (PGS) Catalog. Regression analysis was conducted to identify predictors of SSQ12, QuickSIN, and better-ear pure-tone averages at conventional (PTA0.5-2), high (PTA4-8), and extended-high (PTA12.5-16) frequency ranges. RESULTS Participants with a higher genetic predisposition to HDL cholesterol reported better SSQ12 scores. Participants with a high PRS for dementia showed significantly elevated PTA4-8, and those with a high PRS for atrial fibrillation and flutter showed significantly elevated PTA12.5-16. CONCLUSION These results indicate that healthy individuals with polygenic risk for certain health conditions can exhibit a subclinical decline in hearing health measures at young ages, decades before clinically meaningful SIN deficits and hearing loss can be observed. PRS could be used to identify high-risk individuals and help prevent hearing health conditions by promoting a healthy lifestyle.
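A polygenic risk score of the kind described above is, at its core, a weighted sum of allele dosages over the variants in a scoring model. A minimal sketch with hypothetical variants and weights (real scoring uses PGS Catalog models with many thousands of variants):

```python
# Hedged sketch of polygenic risk scoring: a weighted sum of allele dosages.
# Variant ids, dosages, and effect weights are hypothetical, not from the study.

def polygenic_risk_score(dosages, weights):
    """Compute a PRS for one individual.

    dosages: variant id -> effect-allele count (0, 1, or 2).
    weights: variant id -> per-allele effect size from a scoring model.
    Variants absent from the weight model contribute nothing.
    """
    return sum(weights[v] * d for v, d in dosages.items() if v in weights)

person = {"rs111": 2, "rs222": 1, "rs333": 0}   # hypothetical genotype dosages
model = {"rs111": 0.13, "rs222": -0.05}         # hypothetical effect weights
score = polygenic_risk_score(person, model)
```

In practice the raw score is then standardized against a reference population so that "high PRS" can be defined as, e.g., a top percentile.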
Affiliation(s)
- Ishan Sunilkumar Bhatt
- Department of Communication Sciences & Disorders, University of Iowa, 250 Hawkins Dr, Iowa City, IA, 52242, USA.
| | - Sai Kumar Ramadugu
- Department of Communication Sciences & Disorders, University of Iowa, 250 Hawkins Dr, Iowa City, IA, 52242, USA
| | - Shawn Goodman
- Department of Communication Sciences & Disorders, University of Iowa, 250 Hawkins Dr, Iowa City, IA, 52242, USA
| | - Srividya Grama Bhagavan
- Department of Communication Sciences & Disorders, University of Iowa, 250 Hawkins Dr, Iowa City, IA, 52242, USA
| | - Valerie Ingalls
- Department of Communication Sciences & Disorders, University of Iowa, 250 Hawkins Dr, Iowa City, IA, 52242, USA
| | - Raquel Dias
- Department of Microbiology and Cell Science, University of Florida, Gainesville, FL, 32608, USA
| | - Ali Torkamani
- Department of Integrative Structural and Computational Biology, Scripps Science Institute, La Jolla, CA, 92037, USA
11
Wasiuk PA, Calandruccio L, Oleson JJ, Buss E. Predicting speech-in-speech recognition: Short-term audibility and spatial separation. J Acoust Soc Am 2023; 154:1827-1837. [PMID: 37728286] [DOI: 10.1121/10.0021069] [Received: 04/18/2023] [Accepted: 08/28/2023]
Abstract
Quantifying the factors that predict variability in speech-in-speech recognition represents a fundamental challenge in auditory science. Stimulus factors associated with energetic and informational masking (IM) modulate variability in speech-in-speech recognition, but energetic effects can be difficult to estimate in spectro-temporally dynamic speech maskers. The current experiment characterized the effects of short-term audibility and differences in target and masker location (or perceived location) on the horizontal plane for sentence recognition in two-talker speech. Thirty young adults with normal hearing (NH) participated. Speech reception thresholds and keyword recognition at a fixed signal-to-noise ratio (SNR) were measured in each spatial condition. Short-term audibility for each keyword was quantified using a glimpsing model. Results revealed that speech-in-speech recognition depended on the proportion of audible glimpses available in the target + masker keyword stimulus in each spatial condition, even across stimuli presented at a fixed global SNR. Short-term audibility requirements were greater for colocated than spatially separated speech-in-speech recognition, and keyword recognition improved more rapidly as a function of increases in target audibility with spatial separation. Results indicate that spatial cues enhance glimpsing efficiency in competing speech for young adults with NH and provide a quantitative framework for estimating IM for speech-in-speech recognition in different spatial configurations.
Affiliation(s)
- Peter A Wasiuk
- Department of Communication Disorders, 493 Fitch Street, Southern Connecticut State University, New Haven, Connecticut 06515, USA
- Lauren Calandruccio
- Department of Psychological Sciences, 11635 Euclid Avenue, Case Western Reserve University, Cleveland, Ohio 44106, USA
- Jacob J Oleson
- Department of Biostatistics, 145 North Riverside Drive N300, College of Public Health, University of Iowa, Iowa City, Iowa 52242, USA
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, 170 Manning Drive, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599, USA
12
Cancel VE, McHaney JR, Milne V, Palmer C, Parthasarathy A. A data-driven approach to identify a rapid screener for auditory processing disorder testing referrals in adults. Sci Rep 2023; 13:13636. [PMID: 37604867] [PMCID: PMC10442397] [DOI: 10.1038/s41598-023-40645-0] [Received: 03/27/2023] [Accepted: 08/16/2023]
Abstract
Hearing thresholds form the gold-standard assessment in audiology clinics. However, ~10% of adult patients seeking audiological care for self-perceived hearing deficits have thresholds that are normal. Currently, a diagnostic assessment for auditory processing disorder (APD) remains one of the few viable avenues of further care for this patient population, yet there are no standard guidelines for referrals. Here, we identified tests within the APD testing battery that could provide a rapid screener to inform APD referrals in adults. We first analyzed records from the University of Pittsburgh Medical Center (UPMC) Audiology database to identify adult patients with self-perceived hearing difficulties despite normal audiometric thresholds. We then examined the patients who were referred for APD testing, analyzing test performances, correlational relationships, and classification accuracies. Patients experienced the most difficulty within the dichotic domain of testing. Additionally, accuracies calculated from sensitivities and specificities revealed that the Words-in-Noise (WIN), Random Dichotic Digits Task (RDDT), and Quick Speech-in-Noise (QuickSIN) tests had the highest classification accuracies. The addition of these tests holds the greatest promise as a quick screener during routine audiological assessments to help identify adult patients who may be referred for APD assessment and resulting treatment plans.
Affiliation(s)
- Victoria E Cancel
- Department of Communication Science and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh, 5060A Forbes Tower, Pittsburgh, PA, 15260, USA
- Jacie R McHaney
- Department of Communication Science and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh, 5060A Forbes Tower, Pittsburgh, PA, 15260, USA
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Virginia Milne
- Department of Communication Science and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh, 5060A Forbes Tower, Pittsburgh, PA, 15260, USA
- Department of Otolaryngology, School of Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- University of Pittsburgh Medical Center, University of Pittsburgh, Pittsburgh, PA, USA
- Catherine Palmer
- Department of Communication Science and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh, 5060A Forbes Tower, Pittsburgh, PA, 15260, USA
- Department of Otolaryngology, School of Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- University of Pittsburgh Medical Center, University of Pittsburgh, Pittsburgh, PA, USA
- Aravindakshan Parthasarathy
- Department of Communication Science and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh, 5060A Forbes Tower, Pittsburgh, PA, 15260, USA
- Department of Otolaryngology, School of Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- University of Pittsburgh Medical Center, University of Pittsburgh, Pittsburgh, PA, USA
- Department of BioEngineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, USA
13
McHaney JR, Hancock KE, Polley DB, Parthasarathy A. Sensory representations and pupil-indexed listening effort provide complementary contributions to multi-talker speech intelligibility. bioRxiv 2023:2023.08.13.553131. [PMID: 37645975] [PMCID: PMC10462058] [DOI: 10.1101/2023.08.13.553131]
Abstract
Optimal speech perception in noise requires successful separation of the target speech stream from multiple competing background speech streams. The ability to segregate these competing speech streams depends on the fidelity of bottom-up neural representations of sensory information in the auditory system and top-down influences of effortful listening. Here, we use objective neurophysiological measures of bottom-up temporal processing, envelope-following responses (EFRs) to amplitude-modulated tones, and investigate their interactions with pupil-indexed listening effort, as they relate to performance on the Quick Speech-in-Noise (QuickSIN) test in young adult listeners with clinically normal hearing thresholds. We developed an approach using ear-canal electrodes and adjusted electrode montages for different modulation-rate ranges, which extended the range of reliable EFR measurements as high as 1024 Hz. Pupillary responses revealed changes in listening effort at the two most difficult signal-to-noise ratios (SNRs), but behavioral deficits at the hardest SNR only. Neither pupil-indexed listening effort nor the slope of the EFR decay function was independently related to QuickSIN performance. However, a linear model combining EFR and pupil metrics significantly explained variance in QuickSIN performance. These results suggest a synergistic interaction between bottom-up sensory coding and top-down measures of listening effort as they relate to speech perception in noise. These findings can inform the development of next-generation tests for hearing deficits in listeners with normal hearing thresholds that incorporate a multi-dimensional approach to understanding speech intelligibility deficits.
Affiliation(s)
- Jacie R. McHaney
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA
- Kenneth E. Hancock
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA
- Daniel B. Polley
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA
- Aravindakshan Parthasarathy
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA
14
Trau-Margalit A, Fostick L, Harel-Arbeli T, Nissanholtz-Gannot R, Taitelbaum-Swead R. Speech recognition in noise task among children and young-adults: a pupillometry study. Front Psychol 2023; 14:1188485. [PMID: 37425148] [PMCID: PMC10328119] [DOI: 10.3389/fpsyg.2023.1188485] [Received: 03/17/2023] [Accepted: 06/05/2023]
Abstract
Introduction Children experience unique challenges when listening to speech in noisy environments. The present study used pupillometry, an established method for quantifying listening and cognitive effort, to detect temporal changes in pupil dilation during a speech-recognition-in-noise task among school-aged children and young adults. Methods Thirty school-aged children and 31 young adults listened to sentences amidst four-talker babble noise in two signal-to-noise ratio (SNR) conditions: a high-accuracy condition (+10 dB and +6 dB for children and adults, respectively) and a low-accuracy condition (+5 dB and +2 dB for children and adults, respectively). They were asked to repeat the sentences while pupil size was measured continuously during the task. Results During the auditory processing phase, both groups displayed pupil dilation; however, adults exhibited greater dilation than children, particularly in the low-accuracy condition. In the second phase (retention), only children demonstrated increased pupil dilation, whereas adults consistently exhibited a decrease in pupil size. Additionally, the children's group showed increased pupil dilation during the response phase. Discussion Although adults and school-aged children produce similar behavioural scores, group differences in dilation patterns indicate that their underlying auditory processing differs. A second peak of pupil dilation among the children suggests that their cognitive effort during speech recognition in noise lasts longer than in adults, continuing past the first auditory-processing peak dilation. These findings support the notion of effortful listening among children and highlight the need to identify and alleviate listening difficulties in school-aged children in order to provide proper intervention strategies.
Affiliation(s)
- Avital Trau-Margalit
- Department of Communication Disorders, Speech Perception and Listening Effort Lab in the Name of Prof. Mordechai Himelfarb, Ariel University, Ariel, Israel
- Leah Fostick
- Department of Communication Disorders, Auditory Perception Lab in the Name of Laurent Levy, Ariel University, Ariel, Israel
- Tami Harel-Arbeli
- Department of Gerontology, University of Haifa, Haifa, Israel
- Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Riki Taitelbaum-Swead
- Department of Communication Disorders, Speech Perception and Listening Effort Lab in the Name of Prof. Mordechai Himelfarb, Ariel University, Ariel, Israel
- Meuhedet Health Services, Tel Aviv, Israel
15
Pearson DV, Shen Y, McAuley JD, Kidd GR. Differential sensitivity to speech rhythms in young and older adults. Front Psychol 2023; 14:1160236. [PMID: 37251054] [PMCID: PMC10213510] [DOI: 10.3389/fpsyg.2023.1160236] [Received: 02/06/2023] [Accepted: 04/19/2023]
Abstract
Sensitivity to the temporal properties of auditory patterns tends to be poorer in older listeners, and this has been hypothesized to be one factor contributing to their poorer speech understanding. This study examined sensitivity to speech rhythms in young and older normal-hearing subjects, using a task designed to measure the effect of speech rhythmic context on the detection of changes in the timing of word onsets in spoken sentences. A temporal-shift detection paradigm was used in which listeners were presented with an intact sentence followed by two versions of the sentence in which a portion of speech was replaced with a silent gap: one with correct gap timing (the same duration as the missing speech) and one with altered gap timing (shorter or longer than the duration of the missing speech), resulting in an early or late resumption of the sentence after the gap. The sentences were presented with either an intact rhythm or an altered rhythm preceding the silent gap. Listeners judged which sentence had the altered gap timing, and thresholds for the detection of deviations from the correct timing were calculated separately for shortened and lengthened gaps. Both young and older listeners demonstrated lower thresholds in the intact rhythm condition than in the altered rhythm conditions. However, shortened gaps led to lower thresholds than lengthened gaps for the young listeners, while older listeners were not sensitive to the direction of the change in timing. These results show that both young and older listeners rely on speech rhythms to generate temporal expectancies for upcoming speech events. However, the absence of lower thresholds for shortened gaps among the older listeners indicates a change in speech-timing expectancies with age. A further examination of individual differences within the older group revealed that those with better rhythm-discrimination abilities (from a separate study) tended to show the same heightened sensitivity to early events observed with the young listeners.
Affiliation(s)
- Dylan V. Pearson
- Department of Speech, Language, and Hearing Sciences, Indiana University, Bloomington, IN, United States
- Yi Shen
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, United States
- J. Devin McAuley
- Department of Psychology, Michigan State University, East Lansing, MI, United States
- Gary R. Kidd
- Department of Speech, Language, and Hearing Sciences, Indiana University, Bloomington, IN, United States
16
Griffiths TD. Predicting speech-in-noise ability in normal and impaired hearing based on auditory cognitive measures. Front Neurosci 2023; 17:1077344. [PMID: 36824211] [PMCID: PMC9941633] [DOI: 10.3389/fnins.2023.1077344] [Received: 10/22/2022] [Accepted: 01/23/2023]
Abstract
Problems with speech-in-noise (SiN) perception are extremely common in hearing loss. Clinical tests have generally been based on direct measurement of SiN. My group has developed an approach to SiN based on the auditory cognitive mechanisms that subserve it, which might be relevant to speakers of any language. I describe how well these mechanisms predict SiN, the brain systems that support them, and tests of auditory cognition based on them that might be used to characterise SiN deficits in the clinic.
Affiliation(s)
- Timothy D. Griffiths
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, United Kingdom
17
Bianco R, Chait M. No Link Between Speech-in-Noise Perception and Auditory Sensory Memory - Evidence From a Large Cohort of Older and Younger Listeners. Trends Hear 2023; 27:23312165231190688. [PMID: 37828868] [PMCID: PMC10576936] [DOI: 10.1177/23312165231190688] [Received: 07/12/2022] [Accepted: 07/11/2023]
Abstract
A growing literature demonstrates a link between working memory (WM) and speech-in-noise (SiN) perception. However, the nature of this correlation, and which components of WM might underlie it, are being debated. We investigated how SiN reception is linked with auditory sensory memory (aSM) - the low-level processes that support the short-term maintenance of temporally unfolding sounds. A large sample of older (N = 199, 60-79 years old) and younger (N = 149, 20-35 years old) participants was recruited online and performed a coordinate response measure-based speech-in-babble task that taps listeners' ability to track a speech target in background noise. We used two tasks to investigate implicit and explicit aSM. Both were based on tone patterns overlapping in processing time scales with speech (presentation rate of tones, 20 Hz; of patterns, 2 Hz). We hypothesised that a link between SiN and aSM might be particularly apparent in older listeners due to age-related reductions in both SiN reception and aSM. We confirmed impaired SiN reception in the older cohort and demonstrated reduced aSM performance in those listeners. However, SiN and aSM did not share variability. Across the two age groups, SiN performance was predicted by a binaural processing test and age. The results suggest that previously observed links between WM and SiN may relate to the executive components and other cognitive demands of the tasks used. This finding helps to constrain the search for the perceptual and cognitive factors that explain individual variability in SiN performance.
Affiliation(s)
- Roberta Bianco
- Ear Institute, University College London, London, UK
- Neuroscience of Perception and Action Lab, Italian Institute of Technology (IIT), Rome, Italy
- Maria Chait
- Ear Institute, University College London, London, UK
18
Shim H, Kim S, Hong J, Na Y, Woo J, Hansen M, Gantz B, Choi I. Differences in neural encoding of speech in noise between cochlear implant users with and without preserved acoustic hearing. Hear Res 2023; 427:108649. [PMID: 36462377] [PMCID: PMC9842477] [DOI: 10.1016/j.heares.2022.108649] [Received: 01/31/2022] [Accepted: 11/12/2022]
Abstract
Cochlear implants (CIs) have evolved to combine residual acoustic hearing with electric hearing. It has been expected that CI users with residual acoustic hearing experience better speech-in-noise perception than CI-only listeners because preserved acoustic cues aid in unmasking speech from background noise. This study sought the neural substrate of better speech unmasking in CI users with preserved acoustic hearing compared to those with a lower degree of acoustic hearing. Cortical evoked responses to speech in multi-talker babble noise were compared between 29 Hybrid (i.e., electric-acoustic stimulation, or EAS) and 29 electric-only CI users. The amplitude ratio of evoked responses to speech and noise, or internal SNR, was significantly larger in the CI users with EAS. This result indicates that CI users with better residual acoustic hearing exhibit enhanced unmasking of speech from background noise.
Affiliation(s)
- Hwan Shim
- Dept. Electrical and Computer Engineering Technology, Rochester Institute of Technology, Rochester, NY 14623, United States
- Subong Kim
- Dept. Communication Sciences and Disorders, Montclair State University, Montclair, NJ 07043, United States
- Jean Hong
- Dept. Communication Sciences and Disorders, University of Iowa, Iowa City, IA 52242, United States
- Youngmin Na
- Dept. Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, United States
- Jihwan Woo
- Dept. Biomedical Engineering, University of Ulsan, Ulsan, Republic of Korea
- Marlan Hansen
- Dept. Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, United States
- Bruce Gantz
- Dept. Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, United States
- Inyong Choi
- Dept. Communication Sciences and Disorders, University of Iowa, Iowa City, IA 52242, United States
- Dept. Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, United States
19
Johns MA, Calloway RC, Phillips I, Karuzis VP, Dutta K, Smith E, Shamma SA, Goupell MJ, Kuchinsky SE. Performance on stochastic figure-ground perception varies with individual differences in speech-in-noise recognition and working memory capacity. J Acoust Soc Am 2023; 153:286. [PMID: 36732241] [PMCID: PMC9851714] [DOI: 10.1121/10.0016756] [Received: 03/18/2022] [Accepted: 12/10/2022]
Abstract
Speech recognition in noisy environments can be challenging and requires listeners to accurately segregate a target speaker from irrelevant background noise. Stochastic figure-ground (SFG) tasks, in which temporally coherent inharmonic pure tones must be identified against a background, have been used to probe the non-linguistic auditory stream segregation processes important for speech-in-noise processing. However, little is known about the relationship between performance on SFG tasks and speech-in-noise tasks, nor about the individual differences that may modulate such relationships. In this study, 37 younger normal-hearing adults performed an SFG task with target figure chords consisting of four, six, eight, or ten temporally coherent tones amongst a background of randomly varying tones. Stimuli were designed to be spectrally and temporally flat. An increased number of temporally coherent tones resulted in higher accuracy and faster reaction times (RTs). For ten target tones, faster RTs were associated with better scores on the Quick Speech-in-Noise task. Individual differences in working memory capacity and self-reported musicianship further modulated these relationships. Overall, results demonstrate that the SFG task could serve as an assessment of auditory stream segregation accuracy and RT that is sensitive to individual differences in cognitive and auditory abilities, even among younger normal-hearing adults.
Affiliation(s)
- Michael A Johns
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742, USA
- Regina C Calloway
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742, USA
- Ian Phillips
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
- Valerie P Karuzis
- Applied Research Laboratory of Intelligence and Security, University of Maryland, College Park, Maryland 20742, USA
- Kelsey Dutta
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742, USA
- Ed Smith
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Shihab A Shamma
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland 20742, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Stefanie E Kuchinsky
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
20
Lee JH, Shim H, Gantz B, Choi I. Strength of Attentional Modulation on Cortical Auditory Evoked Responses Correlates with Speech-in-Noise Performance in Bimodal Cochlear Implant Users. Trends Hear 2022; 26:23312165221141143. [PMID: 36464791] [PMCID: PMC9726851] [DOI: 10.1177/23312165221141143]
Abstract
Auditory selective attention is a crucial top-down cognitive mechanism for understanding speech in noise. Cochlear implant (CI) users display great variability in speech-in-noise performance that is not easily explained by peripheral auditory profile or demographic factors. Thus, it is imperative to understand whether auditory cognitive processes such as selective attention explain such variability. The present study directly addressed this question by quantifying attentional modulation of cortical auditory responses during an attention task and comparing individual differences in this modulation with speech-in-noise performance. In our attention experiment, participants with CIs were given a pre-stimulus visual cue that directed their attention to either of two speech streams and were asked to select a deviant syllable in the target stream. The two speech streams consisted of a female voice saying "Up" five times every 800 ms and a male voice saying "Down" four times every 1 s. The onset of each syllable elicited distinct event-related potentials (ERPs). At each syllable onset, the difference in the amplitudes of ERPs between the two attentional conditions (attended - ignored) was computed. This ERP amplitude difference served as a proxy for attentional modulation strength. Our group-level analysis showed that the amplitude of ERPs was greater when the syllable was attended than when it was ignored, demonstrating that attention modulated cortical auditory responses. Moreover, the strength of attentional modulation showed a significant correlation with speech-in-noise performance. These results suggest that the attentional modulation of cortical auditory responses may provide a neural marker for predicting CI users' success in clinical tests of speech-in-noise listening.
Affiliation(s)
- Jae-Hee Lee
- Dept. Communication Sciences and Disorders, University of Iowa, Iowa City, IA, 52242, USA
- Dept. Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Hwan Shim
- Dept. Electrical and Computer Engineering Technology, Rochester Institute of Technology, Rochester, NY, 14623, USA
- Bruce Gantz
- Dept. Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Inyong Choi
- Dept. Communication Sciences and Disorders, University of Iowa, Iowa City, IA, 52242, USA
- Dept. Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Correspondence: Inyong Choi, 250 Hawkins Dr., Iowa City, IA 52242, USA
21
Schumann A, Ross B. Adaptive Syllable Training Improves Phoneme Identification in Older Listeners with and without Hearing Loss. Audiol Res 2022; 12:653-673. [PMID: 36412658] [PMCID: PMC9680330] [DOI: 10.3390/audiolres12060063] [Received: 09/02/2022] [Accepted: 11/13/2022]
Abstract
Acoustic-phonetic speech training mitigates confusion between consonants and improves phoneme identification in noise. A novel training paradigm addressed two principles of perceptual learning. First, training benefits are often specific to the trained material; therefore, stimulus variability was reduced by training small sets of phonetically similar consonant-vowel-consonant syllables. Second, training is most efficient at an optimal difficulty level; accordingly, the noise level was adapted to the participant's competency. Fifty-two adults aged between sixty and ninety years with normal hearing or moderate hearing loss participated in five training sessions within two weeks. Training sets of phonetically similar syllables contained voiced and voiceless stop and fricative consonants, as well as voiced nasals and liquids. Listeners identified consonants at the onset or the coda syllable position by matching the syllables with their orthographic equivalent within a closed set of three alternative symbols. The noise level was adjusted in a staircase procedure. Pre-post-training benefits were quantified as increased accuracy and a decrease in the required signal-to-noise ratio (SNR) and analyzed with regard to the stimulus sets and the participant's hearing abilities. The adaptive training was feasible for older adults with various degrees of hearing loss. Normal-hearing listeners performed with high accuracy at lower SNR after the training. Participants with hearing loss improved consonant accuracy but still required a high SNR. Phoneme identification improved for all stimulus sets. However, syllables within a set required noticeably different SNRs. The most significant gains occurred for voiced and voiceless stop and (af)fricative consonants. The training was beneficial for difficult consonants, but the consonants that were easiest to identify improved most prominently. The training enabled older listeners with different capabilities to train and improve at an individual 'edge of competence'.
Affiliation(s)
- Annette Schumann
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON M6A 2E1, Canada
- Bernhard Ross
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON M6A 2E1, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, ON M5G 1L7, Canada
- Correspondence: ; Tel.: +1-416-785-2500 (ext. 2690)
22
Wasiuk PA, Buss E, Oleson JJ, Calandruccio L. Predicting speech-in-speech recognition: Short-term audibility, talker sex, and listener factors. J Acoust Soc Am 2022; 152:3010. [PMID: 36456289] [DOI: 10.1121/10.0015228] [Received: 07/24/2022] [Accepted: 11/01/2022]
Abstract
Speech-in-speech recognition can be challenging, and listeners vary considerably in their ability to accomplish this complex auditory-cognitive task. Variability in performance can be related to intrinsic listener factors as well as stimulus factors associated with energetic and informational masking. The current experiments characterized the effects of short-term audibility of the target, differences in target and masker talker sex, and intrinsic listener variables on sentence recognition in two-talker speech and speech-shaped noise. Participants were young adults with normal hearing. Each condition included the adaptive measurement of speech reception thresholds, followed by testing at a fixed signal-to-noise ratio (SNR). Short-term audibility for each keyword was quantified using a computational glimpsing model for target+masker mixtures. Scores on a psychophysical task of auditory stream segregation predicted speech recognition, with stronger effects for speech-in-speech than speech-in-noise. Both speech-in-speech and speech-in-noise recognition depended on the proportion of audible glimpses available in the target+masker mixture, even across stimuli presented at the same global SNR. Short-term audibility requirements varied systematically across stimuli, providing an estimate of the greater informational masking for speech-in-speech than speech-in-noise recognition and quantifying informational masking for matched and mismatched talker sex.
Affiliation(s)
- Peter A Wasiuk
- Department of Psychological Sciences, 11635 Euclid Avenue, Case Western Reserve University, Cleveland, Ohio 44106, USA
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, 170 Manning Drive, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599, USA
- Jacob J Oleson
- Department of Biostatistics, 145 North Riverside Drive, University of Iowa, Iowa City, Iowa 52242, USA
- Lauren Calandruccio
- Department of Psychological Sciences, 11635 Euclid Avenue, Case Western Reserve University, Cleveland, Ohio 44106, USA
23
Holtze B, Rosenkranz M, Jaeger M, Debener S, Mirkovic B. Ear-EEG Measures of Auditory Attention to Continuous Speech. Front Neurosci 2022; 16:869426. [PMID: 35592265 PMCID: PMC9111016 DOI: 10.3389/fnins.2022.869426] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2022] [Accepted: 03/25/2022] [Indexed: 11/13/2022] Open
Abstract
Auditory attention is an important cognitive function used to separate relevant from irrelevant auditory information. However, most findings on attentional selection have been obtained in highly controlled laboratory settings using bulky recording setups and unnaturalistic stimuli. Recent advances in electroencephalography (EEG) facilitate the measurement of brain activity outside the laboratory, and around-the-ear sensors such as the cEEGrid promise unobtrusive acquisition. In parallel, methods such as speech envelope tracking, intersubject correlations, and spectral entropy measures have emerged that allow attentional effects to be studied in the neural processing of natural, continuous auditory scenes. In the current study, we investigated whether these three attentional measures can be reliably obtained when using around-the-ear EEG. To this end, we analyzed the cEEGrid data of 36 participants who attended to one of two simultaneously presented speech streams. Speech envelope tracking results confirmed a reliable identification of the attended speaker from cEEGrid data. The accuracies in identifying the attended speaker increased when fitting the classification model to the individual. Artifact correction of the cEEGrid data with artifact subspace reconstruction did not increase the classification accuracy. Intersubject correlations were higher for those participants attending to the same speech stream than for those attending to different speech streams, replicating previously obtained results with high-density cap-EEG. We also found that spectral entropy decreased over time, possibly reflecting the decrease in the listener's level of attention. Overall, these results support the idea of using ear-EEG measurements to unobtrusively monitor auditory attention to continuous speech. This knowledge may help to develop assistive devices that support listeners in separating relevant from irrelevant information in complex auditory environments.
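The envelope-tracking decision this abstract describes ultimately reduces to comparing an envelope decoded from EEG against each candidate speech envelope and picking the best match. Below is a minimal sketch of that final correlate-and-pick step only, with synthetic envelopes standing in for real cEEGrid reconstructions; the `attended_speaker` helper and the signals are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

def attended_speaker(decoded_env, candidate_envs):
    """Return the index of the candidate speech envelope that correlates
    best with the envelope decoded from EEG, plus all correlations."""
    corrs = [float(np.corrcoef(decoded_env, env)[0, 1]) for env in candidate_envs]
    return int(np.argmax(corrs)), corrs

# Synthetic amplitude envelopes for two competing speech streams
t = np.linspace(0.0, 10.0, 1000)
env_a = np.abs(np.sin(2.1 * t))
env_b = np.abs(np.sin(3.7 * t))
# Pretend the backward model reconstructed a noisy copy of stream A
decoded = env_a + 0.3 * np.random.default_rng(1).normal(size=t.size)
idx, corrs = attended_speaker(decoded, [env_a, env_b])
print(idx)  # → 0
```

Fitting the backward model per listener (as the study reports) amounts to tailoring how `decoded` is reconstructed before this comparison; the decision rule itself stays the same.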
Affiliation(s)
- Björn Holtze
- Neuropsychology Lab, Department of Psychology, University of Oldenburg, Oldenburg, Germany
- Marc Rosenkranz
- Neurophysiology of Everyday Life Group, Department of Psychology, University of Oldenburg, Oldenburg, Germany
- Manuela Jaeger
- Neuropsychology Lab, Department of Psychology, University of Oldenburg, Oldenburg, Germany
- Division Hearing, Speech and Audio Technology, Fraunhofer Institute for Digital Media Technology IDMT, Oldenburg, Germany
- Stefan Debener
- Neuropsychology Lab, Department of Psychology, University of Oldenburg, Oldenburg, Germany
- Research Center for Neurosensory Science, University of Oldenburg, Oldenburg, Germany
- Cluster of Excellence Hearing4all, University of Oldenburg, Oldenburg, Germany
- Bojana Mirkovic
- Neuropsychology Lab, Department of Psychology, University of Oldenburg, Oldenburg, Germany
24
Guo MX. EEG Responses to Auditory Figure-Ground Perception. Hear Res 2022; 422:108524. [DOI: 10.1016/j.heares.2022.108524] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/02/2022] [Revised: 05/05/2022] [Accepted: 05/14/2022] [Indexed: 11/25/2022]
25
Bhatt IS, Washnik N, Torkamani A. Suprathreshold Auditory Measures for Detecting Early-Stage Noise-Induced Hearing Loss in Young Adults. J Am Acad Audiol 2022; 33:185-195. [PMID: 36195294 PMCID: PMC10858682 DOI: 10.1055/s-0041-1740362] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/10/2022]
Abstract
BACKGROUND Over 1 billion young adults are at risk of developing noise-induced hearing loss (NIHL) due to their habit of listening to music at loud levels. The gold standard for detecting NIHL is the audiometric notch around 3,000 to 6,000 Hz observed in the pure-tone audiogram. However, recent studies suggest that suprathreshold auditory measures might be more sensitive for detecting early-stage NIHL in young adults. PURPOSE The present study compared suprathreshold measures in individuals with high and low noise exposure backgrounds (NEBs). We hypothesized that individuals with high NEB would exhibit poorer performance on suprathreshold measures than those with low NEB. STUDY SAMPLE An initial sample of 100 English-speaking healthy adults (18-35 years; females = 70) was obtained from five university classes. We identified 15 participants with the lowest NEB scores (10 females) and 15 participants with the highest NEB scores (10 females). We selected a sample of healthy young adults with no history of middle ear infection, and those in the low NEB group were selected with no history of impulse noise exposure. DATA COLLECTION AND ANALYSIS The study included conventional audiometry, extended high-frequency audiometry, middle ear muscle reflex (MEMR) thresholds, distortion-product otoacoustic emissions (DPOAEs), QuickSIN, and suprathreshold auditory brainstem response (ABR) measures. We used independent-sample t-tests, correlation coefficients, and linear mixed model analysis to compare the audiometric measures between the NEB groups. RESULTS The prevalence of an audiometric notch was low in the study sample, even for individuals with high NEB. We found that: (1) individuals with high NEB showed significantly poorer QuickSIN performance than those with low NEB; (2) music exposure via earphones showed a significant association with QuickSIN; (3) individuals with high NEB showed significantly reduced DPOAEs and ABR wave I amplitude compared with individuals with low NEB; (4) MEMR and ABR latency measures showed a modest association with NEB; and (5) audiometric thresholds across the frequency range did not show a statistically significant association with NEB. CONCLUSION Our results suggest that young adults with high NEB might exhibit peripheral neural coding deficits leading to reduced speech-in-noise (SIN) performance despite clinically normal hearing thresholds. SIN measures might be more sensitive than the audiometric notch for detecting early-stage NIHL in young adults.
Affiliation(s)
- Ishan S Bhatt
- Department of Communication Sciences and Disorders, The University of Iowa, Iowa City, Iowa
- Nilesh Washnik
- Department of Communication Sciences & Disorders, Ohio University, Athens, Ohio
- Ali Torkamani
- Department of Integrative Structural and Computational Biology, Scripps Translational Science Institute, La Jolla, California
26
The Intelligibility of Time-Compressed Speech Is Correlated with the Ability to Listen in Modulated Noise. J Assoc Res Otolaryngol 2022; 23:413-426. [DOI: 10.1007/s10162-021-00832-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2021] [Accepted: 12/15/2021] [Indexed: 10/18/2022] Open
27
Zhao X, Zhou Y, Wei K, Bai X, Zhang J, Zhou M, Sun X. Associations of sensory impairment and cognitive function in middle-aged and older Chinese population: The China Health and Retirement Longitudinal Study. J Glob Health 2021; 11:08008. [PMID: 34956639 PMCID: PMC8684796 DOI: 10.7189/jogh.11.08008] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022] Open
Abstract
Background Little is known about the associations between vision impairment, hearing impairment, and cognitive function. The aim of this study was to examine whether vision and hearing impairment were associated with a high risk for cognitive impairment in middle-aged and older Chinese adults. Methods A total of 13 914 Chinese adults from the China Health and Retirement Longitudinal Study (CHARLS) baseline were selected for analysis. Sensory impairment was assessed from a single self-report question, and we categorized sensory impairment into four groups: no sensory impairment, vision impairment, hearing impairment, and dual sensory impairment. Cognitive assessment covered memory, mental state, and cognition, and the data were obtained through a questionnaire. Results Memory was negatively associated with hearing impairment (β = -0.043, 95% confidence interval (CI) = -0.076, -0.043) and dual sensory impairment (β = -0.033, 95% CI = -0.049, -0.017); mental status was negatively associated with vision impairment (β = -0.034, 95% CI = -0.049, -0.018), hearing impairment (β = -0.070, 95% CI = -0.086, -0.055), and dual sensory impairment (β = -0.054, 95% CI = -0.070, -0.039); and cognition was negatively associated with vision impairment (β = -0.028, 95% CI = -0.044, -0.013), hearing impairment (β = -0.074, 95% CI = -0.090, -0.059), and dual sensory impairment (β = -0.052, 95% CI = -0.067, -0.036), even after adjusting for demographics, socioeconomic factors, and lifestyle behavior. Conclusions Vision and hearing impairment are negatively associated with memory, mental status, and cognition for middle-aged and elderly Chinese adults. There were stronger negative associations between sensory impairment and cognitive-related indicators in the elderly compared to the middle-aged.
Affiliation(s)
- Xiaohuan Zhao
- Department of Ophthalmology, Shanghai General Hospital (Shanghai First People's Hospital), Shanghai Jiao Tong University School of Medicine, Shanghai, China.,National Clinical Research Center for Eye Diseases, Shanghai, China.,Shanghai Key Laboratory of Fundus Diseases, Shanghai, China
- Yifan Zhou
- Putuo People's Hospital, Tongji University, Shanghai 200060, China
- Kunchen Wei
- Department of Plastic and Reconstructive Surgery, Shanghai Ninth People's Hospital, Shanghai Jiaotong University School of Medicine, Shanghai, China
- Xinyue Bai
- Department of Ophthalmology, Shanghai General Hospital (Shanghai First People's Hospital), Shanghai Jiao Tong University School of Medicine, Shanghai, China.,National Clinical Research Center for Eye Diseases, Shanghai, China.,Shanghai Key Laboratory of Fundus Diseases, Shanghai, China
- Jingfa Zhang
- Department of Ophthalmology, Shanghai General Hospital (Shanghai First People's Hospital), Shanghai Jiao Tong University School of Medicine, Shanghai, China.,National Clinical Research Center for Eye Diseases, Shanghai, China.,Shanghai Key Laboratory of Fundus Diseases, Shanghai, China
- Minwen Zhou
- Department of Ophthalmology, Shanghai General Hospital (Shanghai First People's Hospital), Shanghai Jiao Tong University School of Medicine, Shanghai, China.,National Clinical Research Center for Eye Diseases, Shanghai, China.,Shanghai Key Laboratory of Fundus Diseases, Shanghai, China
- Xiaodong Sun
- Department of Ophthalmology, Shanghai General Hospital (Shanghai First People's Hospital), Shanghai Jiao Tong University School of Medicine, Shanghai, China.,National Clinical Research Center for Eye Diseases, Shanghai, China.,Shanghai Key Laboratory of Fundus Diseases, Shanghai, China
28
Brisson V, Tremblay P. Improving speech perception in noise in young and older adults using transcranial magnetic stimulation. BRAIN AND LANGUAGE 2021; 222:105009. [PMID: 34425411 DOI: 10.1016/j.bandl.2021.105009] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/28/2020] [Revised: 08/06/2021] [Accepted: 08/12/2021] [Indexed: 06/13/2023]
Abstract
Normal aging is associated with speech perception in noise (SPiN) difficulties. The objective of this study was to determine whether SPiN performance can be enhanced by intermittent theta-burst stimulation (iTBS) in young and older adults. METHOD We developed a sub-lexical SPiN test to evaluate the contribution of age, hearing, and cognition to SPiN performance in young and older adults. iTBS was applied to the left posterior superior temporal sulcus (pSTS) and the left ventral premotor cortex (PMv) to examine its impact on SPiN performance. RESULTS Aging was associated with reduced SPiN accuracy. TMS-induced performance gain was greater after stimulation of the PMv than of the pSTS. Participants with lower scores in the baseline condition improved the most. DISCUSSION SPiN difficulties can be reduced by enhancing activity within the left speech-processing network in adults. This study paves the way for the development of TMS-based interventions to reduce SPiN difficulties in adults.
Affiliation(s)
- Valérie Brisson
- Département de réadaptation, Université Laval, Québec, Canada; Centre de recherche CERVO, Québec, Canada
- Pascale Tremblay
- Département de réadaptation, Université Laval, Québec, Canada; Centre de recherche CERVO, Québec, Canada.
29
Smith KG, Fogerty D. Older adult recognition error patterns when listening to interrupted speech and speech in steady-state noise. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 150:3428. [PMID: 34852602 PMCID: PMC8577864 DOI: 10.1121/10.0006975] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/23/2021] [Revised: 10/07/2021] [Accepted: 10/11/2021] [Indexed: 05/29/2023]
Abstract
This study examined sentence recognition errors made by older adults in degraded listening conditions compared to a previous sample of younger adults. We examined speech recognition errors made by older normal-hearing adults who repeated sentences that were corrupted by steady-state noise (SSN) or periodically interrupted by noise to preserve 33%, 50%, or 66% of the sentence. Responses were transcribed and coded for the number and type of keyword errors. Errors increased with decreasing preservation of the sentence. Similar sentence recognition was observed between SSN and the greatest amount of interruption (33%). Errors were predominately at the word level rather than at the phoneme level and consisted of omission or substitution of keywords. Compared to younger listeners, older listeners made more total errors and omitted more whole words when speech was highly degraded. They also made more whole word substitutions when speech was more preserved. In addition, the semantic relatedness of the substitution errors to the sentence context varied according to the distortion condition, with greater context effects in SSN than interruption. Overall, older listeners made errors reflecting poorer speech representations. Error analyses provide a more detailed account of speech recognition by identifying changes in the type of errors made across listening conditions and listener groups.
Affiliation(s)
- Kimberly G Smith
- Department of Speech Pathology and Audiology, University of South Alabama, 5721 USA Drive North, Alabama 36688, USA
- Daniel Fogerty
- Department of Speech and Hearing Science, University of Illinois Urbana-Champaign, 901 S. Sixth St., Champaign, Illinois 61820, USA
30
Herrmann B, Maess B, Johnsrude IS. A neural signature of regularity in sound is reduced in older adults. Neurobiol Aging 2021; 109:1-10. [PMID: 34634748 DOI: 10.1016/j.neurobiolaging.2021.09.011] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Revised: 09/03/2021] [Accepted: 09/08/2021] [Indexed: 01/21/2023]
Abstract
Sensitivity to repetitions in sound amplitude and frequency is crucial for sound perception. As with other aspects of sound processing, sensitivity to such patterns may change with age, and such changes may help explain some age-related hearing difficulties, such as segregating speech from background sound. We recorded magnetoencephalography to characterize differences in the processing of sound patterns between younger and older adults. We presented tone sequences that either contained a pattern (made of a repeated set of tones) or did not contain a pattern. We show that auditory cortex in older, compared to younger, adults is hyperresponsive to sound onsets, but that sustained neural activity in auditory cortex, indexing the processing of a sound pattern, is reduced. Hence, the sensitivity of neural populations in auditory cortex fundamentally differs between younger and older individuals, overresponding to sound onsets, while underresponding to patterns in sounds. This may help to explain some age-related changes in hearing such as increased sensitivity to distracting sounds and difficulties tracking speech in the presence of other sound.
Affiliation(s)
- Björn Herrmann
- Department of Psychology & Brain and Mind Institute, The University of Western Ontario, London, ON, Canada; Rotman Research Institute, Baycrest, North York, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada.
- Burkhard Maess
- Brain Networks Unit, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Ingrid S Johnsrude
- Department of Psychology & Brain and Mind Institute, The University of Western Ontario, London, ON, Canada; School of Communication Sciences & Disorders, The University of Western Ontario, London, ON, Canada
31
Patro C, Kreft HA, Wojtczak M. The search for correlates of age-related cochlear synaptopathy: Measures of temporal envelope processing and spatial release from speech-on-speech masking. Hear Res 2021; 409:108333. [PMID: 34425347 PMCID: PMC8424701 DOI: 10.1016/j.heares.2021.108333] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/02/2020] [Revised: 07/17/2021] [Accepted: 08/04/2021] [Indexed: 01/13/2023]
Abstract
Older adults often experience difficulties understanding speech in adverse listening conditions. It has been suggested that for listeners with normal and near-normal audiograms, these difficulties may, at least in part, arise from age-related cochlear synaptopathy. The aim of this study was to assess whether performance on auditory tasks relying on temporal envelope processing reveals age-related deficits consistent with those expected from cochlear synaptopathy. Listeners aged 20 to 66 years were tested using a series of psychophysical, electrophysiological, and speech-perception measures using stimulus configurations that promote coding by medium- and low-spontaneous-rate auditory-nerve fibers. Cognitive measures of executive function were obtained to control for age-related cognitive decline. Results from the different tests were not significantly correlated with each other despite a presumed reliance on common mechanisms involved in temporal envelope processing. Only gap-detection thresholds for a tone in noise and spatial release from speech-on-speech masking were significantly correlated with age. Increasing age was related to impaired cognitive executive function. Multivariate regression analyses showed that individual differences in hearing sensitivity, envelope-based measures, and scores from nonauditory cognitive tests did not significantly contribute to the variability in spatial release from speech-on-speech masking for small target/masker spatial separation, while age was a significant contributor.
Affiliation(s)
- Chhayakanta Patro
- Department of Psychology, University of Minnesota, N640 Elliott Hall, 75 East River Parkway, Minneapolis, MN 55455, USA.
- Heather A Kreft
- Department of Psychology, University of Minnesota, N640 Elliott Hall, 75 East River Parkway, Minneapolis, MN 55455, USA
- Magdalena Wojtczak
- Department of Psychology, University of Minnesota, N640 Elliott Hall, 75 East River Parkway, Minneapolis, MN 55455, USA
32
Knopke S, Schubert A, Häussler SM, Gräbel S, Szczepek AJ, Olze H. Improvement of Working Memory and Processing Speed in Patients over 70 with Bilateral Hearing Impairment Following Unilateral Cochlear Implantation. J Clin Med 2021; 10:3421. [PMID: 34362204 PMCID: PMC8347702 DOI: 10.3390/jcm10153421] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2021] [Revised: 07/21/2021] [Accepted: 07/28/2021] [Indexed: 12/03/2022] Open
Abstract
Several studies demonstrated the association of hearing disorders with neurocognitive deficits and dementia disorders, but little is known about the effects of auditory rehabilitation on the cognitive performance of the elderly. Therefore, the research question of the present study was whether cochlear implantation, performed in 21 patients over 70 with bilateral severe hearing impairment, could influence their cognitive skills. The measuring points were before implantation and 12 months after the first cochlear implant (CI) fitting. Evaluation of the working memory (WMI) and processing speed (PSI) was performed using the Wechsler Adult Intelligence Scale 4th edition (WAIS-IV). The audiological assessment included speech perception (SP) in quiet (Freiburg monosyllabic test; FMT), noise (Oldenburg sentence test; OLSA), and self-assessment inventory (Oldenburg Inventory; OI). Twelve months after the first CI fitting, not only the auditory parameters (SP and OI), but also the WMI and PSI, improved significantly (p < 0.05) in the cohort. The presented results imply that cochlear implantation of bilaterally hearing-impaired patients over 70 positively influences their cognitive skills.
Affiliation(s)
- Steffen Knopke
- Department of Otorhinolaryngology, Head and Neck Surgery, Campus Virchow-Klinikum, Charité—Universitätsmedizin Berlin, 13353 Berlin, Germany; (A.S.); (S.M.H.); (S.G.)
- Arvid Schubert
- Department of Otorhinolaryngology, Head and Neck Surgery, Campus Virchow-Klinikum, Charité—Universitätsmedizin Berlin, 13353 Berlin, Germany; (A.S.); (S.M.H.); (S.G.)
- Sophia Marie Häussler
- Department of Otorhinolaryngology, Head and Neck Surgery, Campus Virchow-Klinikum, Charité—Universitätsmedizin Berlin, 13353 Berlin, Germany; (A.S.); (S.M.H.); (S.G.)
- Stefan Gräbel
- Department of Otorhinolaryngology, Head and Neck Surgery, Campus Virchow-Klinikum, Charité—Universitätsmedizin Berlin, 13353 Berlin, Germany; (A.S.); (S.M.H.); (S.G.)
- Agnieszka J. Szczepek
- Department of Otorhinolaryngology, Head and Neck Surgery, Campus Charité Mitte, Charité—Universitätsmedizin Berlin, 10117 Berlin, Germany
- Heidi Olze
- Department of Otorhinolaryngology, Head and Neck Surgery, Campus Virchow-Klinikum, Charité—Universitätsmedizin Berlin, 13353 Berlin, Germany; (A.S.); (S.M.H.); (S.G.)
- Department of Otorhinolaryngology, Head and Neck Surgery, Campus Charité Mitte, Charité—Universitätsmedizin Berlin, 10117 Berlin, Germany
33
Herrmann B, Butler BE. Hearing loss and brain plasticity: the hyperactivity phenomenon. Brain Struct Funct 2021; 226:2019-2039. [PMID: 34100151 DOI: 10.1007/s00429-021-02313-9] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2020] [Accepted: 06/03/2021] [Indexed: 12/22/2022]
Abstract
Many aging adults experience some form of hearing problems that may arise from auditory peripheral damage. However, it has been increasingly acknowledged that hearing loss is not only a dysfunction of the auditory periphery but also results from changes within the entire auditory system, from periphery to cortex. Damage to the auditory periphery is associated with an increase in neural activity at various stages throughout the auditory pathway. Here, we review neurophysiological evidence of hyperactivity and the auditory perceptual difficulties that may result from it, and outline open conceptual and methodological questions related to the study of hyperactivity. We suggest that hyperactivity alters all aspects of hearing, including spectral, temporal, and spatial hearing, and, in turn, impairs speech comprehension when background sound is present. By focusing on the perceptual consequences of hyperactivity and the potential challenges of investigating hyperactivity in humans, we hope to bring animal and human electrophysiologists closer together to better understand hearing problems in older adulthood.
Affiliation(s)
- Björn Herrmann
- Rotman Research Institute, Baycrest, Toronto, ON, M6A 2E1, Canada. .,Department of Psychology, University of Toronto, Toronto, ON, Canada.
- Blake E Butler
- Department of Psychology & The Brain and Mind Institute, University of Western Ontario, London, ON, Canada.,National Centre for Audiology, University of Western Ontario, London, ON, Canada
34
Neuronal figure-ground responses in primate primary auditory cortex. Cell Rep 2021; 35:109242. [PMID: 34133935 PMCID: PMC8220257 DOI: 10.1016/j.celrep.2021.109242] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2020] [Revised: 12/09/2020] [Accepted: 05/20/2021] [Indexed: 11/22/2022] Open
Abstract
Figure-ground segregation, the brain's ability to group related features into stable perceptual entities, is crucial for auditory perception in noisy environments. The neuronal mechanisms for this process are poorly understood in the auditory system. Here, we report figure-ground modulation of multi-unit activity (MUA) in the primary and non-primary auditory cortex of rhesus macaques. Across both regions, MUA increases upon presentation of auditory figures, which consist of coherent chord sequences. We show increased activity even in the absence of any perceptual decision, suggesting that neural mechanisms for perceptual grouping are, to some extent, independent of behavioral demands. Furthermore, we demonstrate differences in figure encoding between more anterior and more posterior regions; perceptual saliency is represented in anterior cortical fields only. Our results suggest an encoding of auditory figures from the earliest cortical stages by a rate code.
- Neuronal figure-ground modulation in primary auditory cortex
- A rate code is used to signal the presence of auditory figures
- Anteriorly located recording sites encode perceptual saliency
- Figure-ground modulation is present without perceptual detection
35
Ross B, Dobri S, Schumann A. Psychometric function for speech-in-noise tests accounts for word-recognition deficits in older listeners. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 149:2337. [PMID: 33940923 DOI: 10.1121/10.0003956] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/04/2020] [Accepted: 03/10/2021] [Indexed: 06/12/2023]
Abstract
Speech-in-noise (SIN) understanding in older age is affected by hearing loss, impaired central auditory processing, and cognitive deficits. SIN-tests measure these factors' compound effects by a speech reception threshold, defined as the signal-to-noise ratio required for 50% word understanding (SNR50). This study compared two standard SIN tests, QuickSIN (n = 354) in young and older adults and BKB-SIN (n = 139) in older adults (>60 years). The effects of hearing loss and age on SIN understanding were analyzed to identify auditory and nonauditory contributions to SIN loss. Word recognition in noise was modelled with individual psychometric functions using a logistic fit with three parameters: the midpoint (SNRα), slope (β), and asymptotic word-recognition deficit at high SNR (λ). The parameters SNRα and λ formally separate SIN loss into two components. SNRα characterizes the steep slope of the psychometric function at which a slight SNR increase provides a considerable improvement in SIN understanding. SNRα was discussed as being predominantly affected by audibility and low-level central auditory processing. The parameter λ describes a shallow segment of the psychometric function at which a further increase in the SNR provides modest improvement in SIN understanding. Cognitive factors in aging may contribute to the SIN loss indicated by λ.
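The three-parameter decomposition this abstract describes can be written out explicitly. The parameterization below is one common logistic form with midpoint SNRα, slope β, and asymptotic high-SNR deficit λ, chosen for illustration; the paper's exact fitting equation may differ, and the numeric parameter values are made up:

```python
import math

def p_correct(snr, alpha, beta, lam):
    """Probability of correct word recognition at a given SNR (dB):
    logistic with midpoint alpha (SNRα), slope beta (β), and an
    asymptotic high-SNR word-recognition deficit lam (λ)."""
    return (1.0 - lam) / (1.0 + math.exp(-beta * (snr - alpha)))

def snr50(alpha, beta, lam):
    """SNR at which recognition reaches 50% (requires lam < 0.5);
    solves p_correct(snr) = 0.5 in closed form."""
    return alpha - math.log(1.0 - 2.0 * lam) / beta

# A listener with midpoint -2 dB SNR, slope 0.8 per dB, 5% high-SNR deficit:
print(round(snr50(-2.0, 0.8, 0.05), 2))  # → -1.87
```

This makes the abstract's separation concrete: a nonzero λ caps performance below 100% no matter how favorable the SNR, and it also shifts the measured SNR50 slightly away from the midpoint α, so a threshold-only SIN score mixes the two components that the psychometric fit keeps apart.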
Affiliation(s)
- Bernhard Ross
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
- Simon Dobri
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
- Annette Schumann
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
36
Johnson JCS, Marshall CR, Weil RS, Bamiou DE, Hardy CJD, Warren JD. Hearing and dementia: from ears to brain. Brain 2021; 144:391-401. [PMID: 33351095 PMCID: PMC7940169 DOI: 10.1093/brain/awaa429] [Citation(s) in RCA: 79] [Impact Index Per Article: 26.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2020] [Revised: 10/02/2020] [Accepted: 10/17/2020] [Indexed: 12/19/2022] Open
Abstract
The association between hearing impairment and dementia has emerged as a major public health challenge, with significant opportunities for earlier diagnosis, treatment and prevention. However, the nature of this association has not been defined. We hear with our brains, particularly within the complex soundscapes of everyday life: neurodegenerative pathologies target the auditory brain, and are therefore predicted to damage hearing function early and profoundly. Here we present evidence for this proposition, based on structural and functional features of auditory brain organization that confer vulnerability to neurodegeneration, the extensive, reciprocal interplay between 'peripheral' and 'central' hearing dysfunction, and recently characterized auditory signatures of canonical neurodegenerative dementias (Alzheimer's disease, Lewy body disease and frontotemporal dementia). Moving beyond any simple dichotomy of ear and brain, we argue for a reappraisal of the role of auditory cognitive dysfunction and the critical coupling of brain to peripheral organs of hearing in the dementias. We call for a clinical assessment of real-world hearing in these diseases that moves beyond pure tone perception to the development of novel auditory 'cognitive stress tests' and proximity markers for the early diagnosis of dementia and management strategies that harness retained auditory plasticity.
Affiliation(s)
- Jeremy C S Johnson
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London, UK
- Charles R Marshall
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London, UK
- Preventive Neurology Unit, Wolfson Institute of Preventive Medicine, Queen Mary University of London, London, UK
- Rimona S Weil
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London, UK
- Movement Disorders Centre, Department of Clinical and Movement Neurosciences, UCL Queen Square Institute of Neurology, University College London, London, UK
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, UK
- Doris-Eva Bamiou
- UCL Ear Institute and UCL/UCLH Biomedical Research Centre, National Institute for Health Research, University College London, London, UK
- Chris J D Hardy
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London, UK
- Jason D Warren
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London, UK
|
37
|
Varnet L, Léger AC, Boucher S, Bonnet C, Petit C, Lorenzi C. Contributions of Age-Related and Audibility-Related Deficits to Aided Consonant Identification in Presbycusis: A Causal-Inference Analysis. Front Aging Neurosci 2021; 13:640522. [PMID: 33732140 PMCID: PMC7956988 DOI: 10.3389/fnagi.2021.640522] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2020] [Accepted: 02/08/2021] [Indexed: 12/05/2022] Open
Abstract
The decline of speech intelligibility in presbycusis can be regarded as resulting from the combined contribution of two main groups of factors: (1) audibility-related factors and (2) age-related factors. In particular, there is now an abundant scientific literature on the crucial role of suprathreshold auditory abilities and cognitive functions, which have been found to decline with age even in the absence of audiometric hearing loss. However, researchers investigating the direct effect of aging in presbycusis have to deal with the methodological issue that age and peripheral hearing loss covary to a large extent. In the present study, we analyzed a dataset of consonant-identification scores measured in quiet and in noise for a large cohort (n = 459, age = 42-92) of hearing-impaired (HI) and normal-hearing (NH) listeners. HI listeners were provided with a frequency-dependent amplification adjusted to their audiometric profile. Their scores in the two conditions were predicted from their pure-tone average (PTA) and age, as well as from their Extended Speech Intelligibility Index (ESII), a measure of the impact of audibility loss on speech intelligibility. We relied on a causal-inference approach combined with Bayesian modeling to disentangle the direct causal effects of age and audibility on intelligibility from the indirect effect of age on hearing loss. The analysis revealed that the direct effect of PTA on HI intelligibility scores was 5 times larger than the effect of age. This overwhelming effect of PTA was not due to residual audibility loss despite amplification, as confirmed by an ESII-based model. More plausibly, the marginal role of age could be a consequence of the relatively low cognitive demands of the task used in this study. Furthermore, the amount of variance in intelligibility scores was smaller for NH than for HI listeners, even after accounting for age and audibility, reflecting the presence of additional suprathreshold deficits in the latter group. Although the nonsense-syllable materials and the particular amplification settings used in this study potentially restrict the generalization of the findings, we think that these promising results call for a wider use of causal-inference analysis in audiology, e.g., as a way to disentangle the influence of the various cognitive factors and suprathreshold deficits associated with presbycusis.
Affiliation(s)
- Léo Varnet
- Laboratoire des Systèmes Perceptifs, UMR CNRS 8248, Département d'Études Cognitives, École normale supérieure, Université Paris Sciences & Lettres, Paris, France
- Agnès C. Léger
- Manchester Centre for Audiology and Deafness, Division of Human Communication, Development & Hearing, School of Health Sciences, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, University of Manchester, Manchester, United Kingdom
- Sophie Boucher
- Complexité du Vivant, Sorbonne Universités, Université Pierre et Marie Curie, Université Paris VI, Paris, France
- Institut de l'Audition, Institut Pasteur, INSERM, Paris, France
- Centre Hospitalier Universitaire d'Angers, Angers, France
- Crystel Bonnet
- Complexité du Vivant, Sorbonne Universités, Université Pierre et Marie Curie, Université Paris VI, Paris, France
- Institut de l'Audition, Institut Pasteur, INSERM, Paris, France
- Christine Petit
- Institut de l'Audition, Institut Pasteur, INSERM, Paris, France
- Collège de France, Paris, France
- Christian Lorenzi
- Laboratoire des Systèmes Perceptifs, UMR CNRS 8248, Département d'Études Cognitives, École normale supérieure, Université Paris Sciences & Lettres, Paris, France
|
38
|
Kim S, Schwalje AT, Liu AS, Gander PE, McMurray B, Griffiths TD, Choi I. Pre- and post-target cortical processes predict speech-in-noise performance. Neuroimage 2021; 228:117699. [PMID: 33387631 PMCID: PMC8291856 DOI: 10.1016/j.neuroimage.2020.117699] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2020] [Revised: 11/06/2020] [Accepted: 12/23/2020] [Indexed: 12/19/2022] Open
Abstract
Understanding speech in noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals vary in their ability to understand SiN in ways that cannot be explained by simple hearing profiles, which suggests that central factors may underlie this variance. Here, we elucidated several cortical functions engaged during a SiN task and their contributions to individual variance, using both within- and across-subject approaches. Through our within-subject analysis of source-localized electroencephalography, we investigated how acoustic signal-to-noise ratio (SNR) alters cortical evoked responses to a target word across the speech recognition areas, finding stronger responses in left supramarginal gyrus (SMG; BA40, the dorsal lexicon area) with quieter noise. Through an individual-differences approach, we found that listeners show different neural sensitivity to the background noise and target speech, reflected in the amplitude ratio of earlier auditory-cortical responses to speech and noise, termed the internal SNR. Listeners with better internal SNR showed better SiN performance. Further, we found that post-speech SMG activity explains additional variance in SiN performance that is not accounted for by internal SNR. This result demonstrates that at least two cortical processes contribute independently to SiN performance: pre-target processing that attenuates the neural representation of background noise, and post-target processing that extracts information from speech sounds.
Affiliation(s)
- Subong Kim
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN 47907, USA
- Adam T Schwalje
- Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Andrew S Liu
- Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Phillip E Gander
- Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Bob McMurray
- Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA; Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA 52242, USA; Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA 52242, USA
- Timothy D Griffiths
- Biosciences Institute, Newcastle University, Newcastle upon Tyne NE1 7RU, UK
- Inyong Choi
- Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA; Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA 52242, USA
|
39
|
James CJ, Graham PL, Betances Reinoso FA, Breuning SN, Durko M, Huarte Irujo A, Royo López J, Müller L, Perenyi A, Jaramillo Saffon R, Salinas Garcia S, Schüssler M, Schwarz Langer MJ, Skarzynski PH, Mecklenburg DJ. The Listening Network and Cochlear Implant Benefits in Hearing-Impaired Adults. Front Aging Neurosci 2021; 13:589296. [PMID: 33716706 PMCID: PMC7947658 DOI: 10.3389/fnagi.2021.589296] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2020] [Accepted: 01/28/2021] [Indexed: 01/10/2023] Open
Abstract
Older adults with mild or no hearing loss make more errors and expend more effort listening to speech. Cochlear implants (CI) restore hearing to deaf patients, but with limited fidelity. We hypothesized that patient-reported hearing and health-related quality of life in CI patients may similarly vary according to age. The Speech, Spatial and Qualities of Hearing Scale (SSQ) and Health Utilities Index Mark III (HUI) questionnaires were administered to 543 unilaterally implanted adults across Europe, South Africa, and South America. Data were acquired before surgery and at 1, 2, and 3 years post-surgery. Data were analyzed using linear mixed models with visit, age group (18–34, 35–44, 45–54, 55–64, and 65+), and side of implant as main factors and adjusted for other covariates. Tinnitus and dizziness prevalence did not vary with age, but older groups had more preoperative hearing. Preoperatively and postoperatively, SSQ scores were significantly higher (Δ0.75–0.82) for those aged <45 than for those aged 55+. However, gains in SSQ scores were equivalent across age groups, although postoperative SSQ scores were higher in right-ear implanted subjects. All age groups benefited equally in terms of HUI gain (0.18), with no decrease in scores with age. Overall, younger adults appeared to cope better with degraded hearing before and after CI, leading to better subjective hearing performance.
Affiliation(s)
- Petra L Graham
- Department of Mathematics and Statistics, Macquarie University, North Ryde, NSW, Australia
- Marcin Durko
- Department of Otolaryngology, Head and Neck Oncology, Medical University of Lodz, Lodz, Poland
- Alicia Huarte Irujo
- Department of Otorhinolaryngology, Clínica Universidad de Navarra, Pamplona, Spain
- Juan Royo López
- Servicio de Otorrinolaringología, Hospital Clínico Universitario Lozano Blesa, Zaragoza, Spain
- Lida Müller
- Tygerberg Hospital-Stellenbosch University Cochlear Implant Unit, Tygerberg, South Africa
- Adam Perenyi
- Department of Otolaryngology and Head Neck Surgery, Albert Szent Györgyi Medical Center, University of Szeged, Szeged, Hungary
- Sandra Salinas Garcia
- Servicio de Otorrinolaringología y Patología Cérvico-Facial, Fundación Jiménez Díaz University Hospital, Madrid, Spain
- Mark Schüssler
- Deutsches HörZentrum Hannover der HNO-Klinik, Medizinische Hochschule Hannover, Hannover, Germany
|
40
|
Holmes E, Utoomprurkporn N, Hoskote C, Warren JD, Bamiou DE, Griffiths TD. Simultaneous auditory agnosia: Systematic description of a new type of auditory segregation deficit following a right hemisphere lesion. Cortex 2021; 135:92-107. [PMID: 33360763 PMCID: PMC7856551 DOI: 10.1016/j.cortex.2020.10.023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2020] [Revised: 09/17/2020] [Accepted: 10/22/2020] [Indexed: 11/27/2022]
Abstract
We investigated auditory processing in a young patient who experienced a single embolus causing an infarct in the right middle cerebral artery territory. This led to damage to auditory cortex including planum temporale that spared medial Heschl's gyrus, and included damage to the posterior insula and inferior parietal lobule. She reported chronic difficulties with segregating speech from noise and segregating elements of music. Clinical tests showed no evidence for abnormal cochlear function. Follow-up tests confirmed difficulties with auditory segregation in her left ear that spanned multiple domains, including words-in-noise and music streaming. Testing with a stochastic figure-ground task (a way of estimating generic acoustic foreground and background segregation) demonstrated that this was also abnormal. This is the first demonstration of an acquired deficit in the segregation of complex acoustic patterns due to cortical damage, which we argue is a causal explanation for the symptomatic deficits in the segregation of speech and music. These symptoms are analogous to the visual symptom of simultaneous agnosia. Consistent with functional imaging studies on normal listeners, the work implicates non-primary auditory cortex. Further, the work demonstrates a (partial) lateralisation of the necessary anatomical substrate for segregation that has not been previously highlighted.
Affiliation(s)
- Emma Holmes
- Wellcome Centre for Human Neuroimaging, UCL, London, UK
- Nattawan Utoomprurkporn
- UCL Ear Institute, UCL, London, UK; NIHR University College London Hospitals Biomedical Research Centre, University College London Hospitals NHS Foundation Trust, UCL, London, UK; Faculty of Medicine, Chulalongkorn University, King Chulalongkorn Memorial Hospital, Bangkok, Thailand
- Chandrashekar Hoskote
- Lysholm Department of Neuroradiology, University College London Hospitals NHS Foundation Trust, UCL, London, UK
- Doris-Eva Bamiou
- UCL Ear Institute, UCL, London, UK; NIHR University College London Hospitals Biomedical Research Centre, University College London Hospitals NHS Foundation Trust, UCL, London, UK
- Timothy D Griffiths
- Wellcome Centre for Human Neuroimaging, UCL, London, UK; Biosciences Institute, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne, UK
|
41
|
Griffiths TD, Lad M, Kumar S, Holmes E, McMurray B, Maguire EA, Billig AJ, Sedley W. How Can Hearing Loss Cause Dementia? Neuron 2020; 108:401-412. [PMID: 32871106 PMCID: PMC7664986 DOI: 10.1016/j.neuron.2020.08.003] [Citation(s) in RCA: 151] [Impact Index Per Article: 37.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2020] [Revised: 07/31/2020] [Accepted: 08/05/2020] [Indexed: 12/11/2022]
Abstract
Epidemiological studies identify midlife hearing loss as an independent risk factor for dementia, estimated to account for 9% of cases. We evaluate candidate brain bases for this relationship. These bases include a common pathology affecting the ascending auditory pathway and multimodal cortex, depletion of cognitive reserve due to an impoverished listening environment, and the occupation of cognitive resources when listening in difficult conditions. We also put forward an alternate mechanism, drawing on new insights into the role of the medial temporal lobe in auditory cognition. In particular, we consider how aberrant activity in the service of auditory pattern analysis, working memory, and object processing may interact with dementia pathology in people with hearing loss. We highlight how the effect of hearing interventions on dementia depends on the specific mechanism and suggest avenues for work at the molecular, neuronal, and systems levels to pin this down.
Affiliation(s)
- Timothy D Griffiths
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK; Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK; Human Brain Research Laboratory, Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Meher Lad
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK
- Sukhbinder Kumar
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK
- Emma Holmes
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Bob McMurray
- Departments of Psychological and Brain Sciences, Communication Sciences and Disorders, Otolaryngology, University of Iowa, Iowa City, IA 52242, USA
- Eleanor A Maguire
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- William Sedley
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK
|
42
|
Holmes E, Zeidman P, Friston KJ, Griffiths TD. Difficulties with Speech-in-Noise Perception Related to Fundamental Grouping Processes in Auditory Cortex. Cereb Cortex 2020; 31:1582-1596. [PMID: 33136138 PMCID: PMC7869094 DOI: 10.1093/cercor/bhaa311] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2020] [Revised: 08/04/2020] [Accepted: 09/22/2020] [Indexed: 01/05/2023] Open
Abstract
In our everyday lives, we are often required to follow a conversation when background noise is present (“speech-in-noise” [SPIN] perception). SPIN perception varies widely—and people who are worse at SPIN perception are also worse at fundamental auditory grouping, as assessed by figure-ground tasks. Here, we examined the cortical processes that link difficulties with SPIN perception to difficulties with figure-ground perception using functional magnetic resonance imaging. We found strong evidence that the earliest stages of the auditory cortical hierarchy (left core and belt areas) are similarly disinhibited when SPIN and figure-ground tasks are more difficult (i.e., at target-to-masker ratios corresponding to 60% rather than 90% performance)—consistent with increased cortical gain at lower levels of the auditory hierarchy. Overall, our results reveal a common neural substrate for these basic (figure-ground) and naturally relevant (SPIN) tasks—which provides a common computational basis for the link between SPIN perception and fundamental auditory grouping.
Affiliation(s)
- Emma Holmes
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, UCL, London WC1N 3AR, UK
- Peter Zeidman
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, UCL, London WC1N 3AR, UK
- Karl J Friston
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, UCL, London WC1N 3AR, UK
- Timothy D Griffiths
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, UCL, London WC1N 3AR, UK; Biosciences Institute, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne NE2 4HH, UK
|
43
|
Abstract
Being able to pick out particular sounds, such as speech, against a background of other sounds represents one of the key tasks performed by the auditory system. Understanding how this happens is important because speech recognition in noise is particularly challenging for older listeners and for people with hearing impairments. Central to this ability is the capacity of neurons to adapt to the statistics of sounds reaching the ears, which helps to generate noise-tolerant representations of sounds in the brain. In more complex auditory scenes, such as a cocktail party, where the background noise comprises other voices, sound features associated with each source have to be grouped together and segregated from those belonging to other sources. This depends on precise temporal coding and modulation of cortical response properties when attending to a particular speaker in a multi-talker environment. Furthermore, the neural processing underlying auditory scene analysis is shaped by experience over multiple timescales.
|
44
|
Bidelman GM, Yoo J. Musicians Show Improved Speech Segregation in Competitive, Multi-Talker Cocktail Party Scenarios. Front Psychol 2020; 11:1927. [PMID: 32973610 PMCID: PMC7461890 DOI: 10.3389/fpsyg.2020.01927] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2020] [Accepted: 07/13/2020] [Indexed: 12/05/2022] Open
Abstract
Studies suggest that long-term music experience enhances the brain's ability to segregate speech from noise. However, evidence for musicians' "speech-in-noise (SIN) benefit" is based largely on simple figure-ground tasks rather than competitive, multi-talker scenarios that offer realistic spatial cues for segregation and engage binaural processing. We aimed to investigate whether musicians show perceptual advantages in cocktail party speech segregation in a competitive, multi-talker environment. We used the coordinate response measure (CRM) paradigm to measure speech recognition and localization performance in musicians vs. non-musicians in a simulated 3D cocktail party environment conducted in an anechoic chamber. Speech was delivered through a 16-channel speaker array distributed around the horizontal soundfield surrounding the listener. Participants recalled the color, number, and perceived location of target callsign sentences. We manipulated task difficulty by varying the number of additional maskers presented at other spatial locations in the horizontal soundfield (0–1–2–3–4–6–8 multi-talkers). Musicians obtained faster and more accurate speech recognition amid up to eight simultaneous talkers and showed less noise-related decline in performance with increasing interferers than their non-musician peers. Correlations revealed associations between listeners' years of musical training and both CRM recognition and working memory; better working memory, in turn, correlated with better speech streaming. Basic (QuickSIN) but not more complex (speech streaming) SIN processing was still predicted by music training after controlling for working memory. Our findings confirm a relationship between musicianship and naturalistic cocktail party speech streaming, but also suggest that cognitive factors at least partially drive musicians' SIN advantage.
Affiliation(s)
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States; Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, United States
- Jessica Yoo
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States
|
45
|
Erb J, Schmitt LM, Obleser J. Temporal selectivity declines in the aging human auditory cortex. eLife 2020; 9:55300. [PMID: 32618270 PMCID: PMC7410487 DOI: 10.7554/elife.55300] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2020] [Accepted: 07/02/2020] [Indexed: 12/03/2022] Open
Abstract
Current models successfully describe the auditory cortical response to natural sounds with a set of spectro-temporal features. However, these models have hardly been linked to the ill-understood neurobiological changes that occur in the aging auditory cortex. Modelling the hemodynamic response to a rich natural sound mixture in N = 64 listeners of varying age, we here show that in older listeners’ auditory cortex, the key feature of temporal rate is represented with a markedly broader tuning. This loss of temporal selectivity is most prominent in primary auditory cortex and planum temporale, with no such changes in adjacent auditory or other brain areas. Amongst older listeners, we observe a direct relationship between chronological age and temporal-rate tuning, unconfounded by auditory acuity or model goodness of fit. In line with senescent neural dedifferentiation more generally, our results highlight decreased selectivity to temporal information as a hallmark of the aging auditory cortex. It can often be difficult for an older person to understand what someone is saying, particularly in noisy environments. Exactly how and why this age-related change occurs is not clear, but it is thought that older individuals may become less able to tune in to certain features of sound. Newer tools are making it easier to study age-related changes in hearing in the brain. For example, functional magnetic resonance imaging (fMRI) can allow scientists to ‘see’ and measure how certain parts of the brain react to different features of sound. Using fMRI data, researchers can compare how younger and older people process speech. They can also track how speech processing in the brain changes with age. Now, Erb et al. show that older individuals have a harder time tuning into the rhythm of speech. In the experiments, 64 people between the ages of 18 to 78 were asked to listen to speech in a noisy setting while they underwent fMRI. 
The researchers then tested a computer model using the data. In the older individuals, the brain's tuning to the timing or rhythm of speech was broader, while the younger participants were more able to finely tune into this feature of sound. The older a person was, the less able their brain was to distinguish rhythms in speech, likely making it harder to understand what had been said. This hearing change likely occurs because brain cells become less specialised over time, which can contribute to many kinds of age-related cognitive decline. This new information about why understanding speech becomes more difficult with age may help scientists develop better hearing aids that are individualised to a person's specific needs.
Affiliation(s)
- Julia Erb
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck, Germany
|