1.
Jiang K, Albert MS, Coresh J, Couper DJ, Gottesman RF, Hayden KM, Jack CR, Knopman DS, Mosley TH, Pankow JS, Pike JR, Reed NS, Sanchez VA, Sharrett AR, Lin FR, Deal JA. Cross-Sectional Associations of Peripheral Hearing, Brain Imaging, and Cognitive Performance With Speech-in-Noise Performance: The Aging and Cognitive Health Evaluation in Elders Brain Magnetic Resonance Imaging Ancillary Study. Am J Audiol 2024:1-12. PMID: 38748919. DOI: 10.1044/2024_aja-23-00108.
Abstract
PURPOSE Population-based evidence on the interrelationships among hearing, brain structure, and cognition is limited. This study aimed to investigate the cross-sectional associations of peripheral hearing, brain imaging measures, and cognitive function with speech-in-noise performance among older adults. METHOD We studied 602 participants in the Aging and Cognitive Health Evaluation in Elders (ACHIEVE) brain magnetic resonance imaging (MRI) ancillary study, including 427 ACHIEVE baseline (2018-2020) participants with hearing loss and 175 Atherosclerosis Risk in Communities Neurocognitive Study Visit 6/7 (2016-2017/2018-2019) participants with normal hearing. Speech-in-noise performance, the outcome of interest, was assessed by the Quick Speech-in-Noise (QuickSIN) test (range: 0-30; higher = better). Predictors of interest included (a) peripheral hearing, assessed by pure-tone audiometry; (b) brain imaging measures: structural MRI measures, white matter hyperintensities, and diffusion tensor imaging measures; and (c) cognitive performance, assessed by a battery of 10 cognitive tests. All predictors were standardized to z scores. We estimated the difference in QuickSIN associated with every standard deviation (SD) worse in each predictor (peripheral hearing, brain imaging, and cognition) using multivariable-adjusted linear regression, adjusting for demographic, lifestyle, and disease factors (Model 1) and, additionally, for the other predictors to assess independent associations (Model 2). RESULTS Participants were aged 70-84 years; 56% were female and 17% were Black. Every SD worse in better-ear 4-frequency pure-tone average was associated with worse QuickSIN (-4.89, 95% confidence interval, CI [-5.57, -4.21]) among participants with peripheral hearing loss, independent of other predictors. Smaller temporal lobe volume was associated with worse QuickSIN, but the association was not independent of other predictors (-0.30, 95% CI [-0.86, 0.26]).
Every SD worse in global cognitive performance was independently associated with worse QuickSIN (-0.90, 95% CI [-1.30, -0.50]). CONCLUSIONS Peripheral hearing and cognitive performance are independently associated with speech-in-noise performance among dementia-free older adults. The ongoing ACHIEVE trial will elucidate the effect of a hearing intervention that includes amplification and auditory rehabilitation on speech-in-noise understanding in older adults. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.25733679.
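The per-SD estimates reported in this abstract come from multivariable-adjusted linear regression on z-scored predictors. A minimal sketch of that estimation step in Python (simulated data; the variable names, effect sizes, and covariate set are illustrative, not the study's):

```python
import numpy as np

def per_sd_association(y, x, covariates):
    """Change in outcome y per 1-SD difference in predictor x, adjusting
    for covariates, via ordinary least squares."""
    z = (x - x.mean()) / x.std()                      # standardize to z score
    X = np.column_stack([np.ones_like(z), z, covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]                                    # coefficient on z

# Illustrative simulation: speech-in-noise score worsens with hearing loss
rng = np.random.default_rng(1)
n = 2000
pta = rng.normal(50, 10, n)                           # pure-tone average, dB HL
age = rng.normal(75, 4, n)                            # covariate
quicksin = 25 - 0.5 * (pta - 50) - 0.1 * (age - 75) + rng.normal(0, 1, n)
est = per_sd_association(quicksin, pta, age[:, None])  # roughly -5 per SD
```

Because the true slope is -0.5 points per dB and the simulated SD of the predictor is 10 dB, the recovered per-SD coefficient lands near -5, mirroring how a per-SD association scales with the predictor's spread.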
Affiliation(s)
- Kening Jiang
  - Cochlear Center for Hearing and Public Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD
  - Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD
- Marilyn S Albert
  - Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD
- Josef Coresh
  - Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD
- David J Couper
  - Department of Biostatistics, Gillings School of Global Public Health, The University of North Carolina at Chapel Hill
- Rebecca F Gottesman
  - Stroke Branch, National Institute of Neurological Disorders and Stroke Intramural Research Program, National Institutes of Health, Bethesda, MD
- Kathleen M Hayden
  - Department of Social Sciences and Health Policy, Wake Forest School of Medicine, Winston-Salem, NC
- Thomas H Mosley
  - The MIND Center, University of Mississippi Medical Center, Jackson, MS
- James S Pankow
  - Division of Epidemiology and Community Health, University of Minnesota School of Public Health, Minneapolis
- James R Pike
  - Department of Biostatistics, Gillings School of Global Public Health, The University of North Carolina at Chapel Hill
- Nicholas S Reed
  - Cochlear Center for Hearing and Public Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD
  - Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD
  - Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins School of Medicine, Baltimore, MD
- Victoria A Sanchez
  - Department of Otolaryngology, Morsani College of Medicine, University of South Florida, Tampa
- A Richey Sharrett
  - Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD
- Frank R Lin
  - Cochlear Center for Hearing and Public Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD
  - Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD
  - Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins School of Medicine, Baltimore, MD
- Jennifer A Deal
  - Cochlear Center for Hearing and Public Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD
  - Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD
  - Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins School of Medicine, Baltimore, MD
2.
Boncz Á, Szalárdy O, Velősy PK, Béres L, Baumgartner R, Winkler I, Tóth B. The effects of aging and hearing impairment on listening in noise. iScience 2024; 27:109295. PMID: 38558934; PMCID: PMC10981015. DOI: 10.1016/j.isci.2024.109295.
Abstract
The study investigates age-related decline in listening abilities, particularly in noisy environments, where the challenge lies in extracting meaningful information from variable sensory input (figure-ground segregation). The research focuses on peripheral and central factors contributing to this decline using a tone-cloud-based figure detection task. Results based on behavioral measures and event-related brain potentials (ERPs) indicate that, despite delayed perceptual processes and some deterioration in attention and executive functions with aging, the ability to detect sound sources in noise remains relatively intact. However, even mild hearing impairment significantly hampers the segregation of individual sound sources within a complex auditory scene. The severity of the hearing deficit correlates with an increased susceptibility to masking noise. The study underscores the impact of hearing impairment on auditory scene analysis and highlights the need for personalized interventions based on individual abilities.
Affiliation(s)
- Ádám Boncz
  - Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
- Orsolya Szalárdy
  - Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
  - Institute of Behavioural Sciences, Faculty of Medicine, Semmelweis University, Budapest, Hungary
- Péter Kristóf Velősy
  - Department of Cognitive Science, Budapest University of Technology and Economics, Budapest, Hungary
- Luca Béres
  - Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
  - Department of Cognitive Science, Budapest University of Technology and Economics, Budapest, Hungary
- Robert Baumgartner
  - Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria
- István Winkler
  - Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
- Brigitta Tóth
  - Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
3.
Leaver AM. Perceptual and cognitive effects of focal tDCS of auditory cortex in tinnitus. medRxiv [Preprint] 2024:2024.01.31.24302093. PMID: 38352362; PMCID: PMC10863023. DOI: 10.1101/2024.01.31.24302093.
Abstract
OBJECTIVES Noninvasive brain stimulation continues to grow as an effective, low-risk way of improving the symptoms of brain conditions. Transcranial direct current stimulation (tDCS) is particularly well tolerated, with benefits including low cost and potential portability. Nevertheless, continued study of perceptual and cognitive side effects is warranted, given the complexity of functional brain organization. This paper describes the results of a brief battery of tablet-based tasks used in a recent pilot study of auditory-cortex tDCS in people with chronic tinnitus. METHODS Volunteers with chronic tinnitus (n=20) completed two hearing tasks (pure-tone thresholds, Words in Noise) and two cognitive tasks (Flanker, Dimensional Change Card Sort) from the NIH Toolbox. Volunteers were randomized to active or sham 4×1 Ag/AgCl tDCS of auditory cortex, and tasks were completed immediately before and after the first tDCS session and after the fifth/final tDCS session. Statistics included linear mixed-effects models for change in task performance over time. RESULTS Before tDCS, performance on both auditory tasks was highly correlated with clinical audiometry, supporting the external validity of these measures (r2>0.89 for all). Although overall auditory task performance did not change after active or sham tDCS, detection of right-ear Words in Noise stimuli modestly improved after five active tDCS sessions (t(34)=-2.07, p=0.05). On cognitive tasks, reaction times were quicker after sham tDCS, reflecting expected practice effects (e.g., t(88)=3.22, p=0.002 after five sessions on the Flanker task). However, reaction times did not improve over repeated sessions in the active group, suggesting that tDCS interfered with these practice effects. CONCLUSIONS Repeated sessions of auditory-cortex tDCS do not appear to adversely affect hearing or cognition, but may modestly improve hearing in noisy environments and interfere with some types of motor learning.
Low-burden cognitive/perceptual test batteries could be a powerful way to identify adverse effects and new treatment targets in brain stimulation research.
Affiliation(s)
- Amber M. Leaver
  - Department of Radiology, Northwestern University, Chicago, IL, USA
4.
Choi I, Gander PE, Berger JI, Woo J, Choy MH, Hong J, Colby S, McMurray B, Griffiths TD. Spectral Grouping of Electrically Encoded Sound Predicts Speech-in-Noise Performance in Cochlear Implantees. J Assoc Res Otolaryngol 2023; 24:607-617. PMID: 38062284; PMCID: PMC10752853. DOI: 10.1007/s10162-023-00918-x.
Abstract
OBJECTIVES Cochlear implant (CI) users exhibit large variability in understanding speech in noise. Past work in CI users found that spectral and temporal resolution correlate with speech-in-noise ability, but a large portion of variance remains unexplained. Recent work on normal-hearing listeners showed that the ability to group temporally and spectrally coherent tones in a complex auditory scene predicts speech-in-noise ability independently of the audiogram, highlighting a central auditory scene analysis mechanism that contributes to speech-in-noise perception. The current study examined whether this auditory grouping ability also contributes to speech-in-noise understanding in CI users. DESIGN Forty-seven post-lingually deafened CI users were tested with psychophysical measures of spectral and temporal resolution, a stochastic figure-ground task that depends on detecting a figure by grouping multiple fixed-frequency elements against a random background, and a sentence-in-noise measure. Multiple linear regression was used to predict sentence-in-noise performance from the other tasks. RESULTS No collinearity was found among the predictor variables. All three predictors (spectral and temporal resolution plus the figure-ground task) contributed significantly to the multiple linear regression model, indicating that auditory grouping ability in a complex auditory scene explains a further proportion of variance in CI users' speech-in-noise performance beyond that explained by spectral and temporal resolution. CONCLUSION Measures of cross-frequency grouping reflect an auditory cognitive mechanism that determines speech-in-noise understanding independently of cochlear function. Such measures are easily implemented clinically as predictors of CI success and suggest potential strategies for rehabilitation based on training with non-speech stimuli.
Affiliation(s)
- Inyong Choi
  - Department of Communication Sciences and Disorders, University of Iowa, 250 Hawkins Dr., Iowa City, IA, 52242, USA
  - Department of Otolaryngology-Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Phillip E Gander
  - Department of Otolaryngology-Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
  - Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
  - Department of Radiology, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Joel I Berger
  - Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Jihwan Woo
  - Department of Biomedical Engineering, University of Ulsan, Ulsan, Republic of Korea
- Matthew H Choy
  - Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE1 7RU, UK
- Jean Hong
  - Department of Otolaryngology-Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Sarah Colby
  - Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, 52242, USA
- Bob McMurray
  - Department of Communication Sciences and Disorders, University of Iowa, 250 Hawkins Dr., Iowa City, IA, 52242, USA
  - Department of Otolaryngology-Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
  - Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, 52242, USA
- Timothy D Griffiths
  - Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE1 7RU, UK
5.
Zhuang Q, Qiao L, Xu L, Yao S, Chen S, Zheng X, Li J, Fu M, Li K, Vatansever D, Ferraro S, Kendrick KM, Becker B. The right inferior frontal gyrus as pivotal node and effective regulator of the basal ganglia-thalamocortical response inhibition circuit. Psychoradiology 2023; 3:kkad016. PMID: 38666118; PMCID: PMC10917375. DOI: 10.1093/psyrad/kkad016.
Abstract
Background The involvement of specific basal ganglia-thalamocortical circuits in response inhibition has been extensively mapped in animal models. However, the pivotal nodes and the directed causal regulation within this inhibitory circuit in humans remain controversial. Objective The main aim of the present study was to determine the causal information flow and critical nodes in the basal ganglia-thalamocortical inhibitory circuit, and to examine whether these are modulated by biological factors (i.e., sex) and behavioral performance. Methods Here, we capitalize on recent progress in robust and biologically plausible directed causal modeling (DCM-PEB) and a large response inhibition dataset (n = 250) acquired with concomitant functional magnetic resonance imaging to determine key nodes, their causal regulation, and their modulation by biological variables (sex) and inhibitory performance in the inhibitory circuit encompassing the right inferior frontal gyrus (rIFG), caudate nucleus (rCau), globus pallidus (rGP), and thalamus (rThal). Results The entire neural circuit exhibited high intrinsic connectivity, and response inhibition critically increased causal projections from the rIFG to both rCau and rThal. Direct comparison further demonstrated that response inhibition increased rIFG inflow and increased the causal regulation of this region over the rCau and rThal. In addition, sex and performance influenced the functional architecture of the regulatory circuits, such that women displayed increased rThal self-inhibition and decreased rThal-to-rGP modulation, while better inhibitory performance was associated with stronger rThal-to-rIFG communication. Furthermore, control analyses did not reveal a similar key communication in a left-lateralized model. Conclusions Together, these findings indicate a pivotal role of the rIFG as input to and causal regulator of subcortical response inhibition nodes.
Affiliation(s)
- Qian Zhuang
  - The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, The University of Electronic Science and Technology of China, Chengdu, Sichuan Province 611731, China
  - Center for Cognition and Brain Disorders, The Affiliated Hospital of Hangzhou Normal University, Hangzhou, Zhejiang Province 311121, China
- Lei Qiao
  - School of Psychology, Shenzhen University, Shenzhen 518060, China
- Lei Xu
  - The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, The University of Electronic Science and Technology of China, Chengdu, Sichuan Province 611731, China
  - Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, 610068, China
- Shuxia Yao
  - The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, The University of Electronic Science and Technology of China, Chengdu, Sichuan Province 611731, China
- Shuaiyu Chen
  - Center for Cognition and Brain Disorders, The Affiliated Hospital of Hangzhou Normal University, Hangzhou, Zhejiang Province 311121, China
- Xiaoxiao Zheng
  - The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, The University of Electronic Science and Technology of China, Chengdu, Sichuan Province 611731, China
  - Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Jialin Li
  - The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, The University of Electronic Science and Technology of China, Chengdu, Sichuan Province 611731, China
- Meina Fu
  - The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, The University of Electronic Science and Technology of China, Chengdu, Sichuan Province 611731, China
- Keshuang Li
  - The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, The University of Electronic Science and Technology of China, Chengdu, Sichuan Province 611731, China
  - School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- Deniz Vatansever
  - Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Stefania Ferraro
  - The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, The University of Electronic Science and Technology of China, Chengdu, Sichuan Province 611731, China
- Keith M Kendrick
  - The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, The University of Electronic Science and Technology of China, Chengdu, Sichuan Province 611731, China
  - Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Benjamin Becker
  - State Key Laboratory of Brain and Cognitive Sciences, The University of Hong Kong, Hong Kong 999077, China
  - Department of Psychology, The University of Hong Kong, Hong Kong 999077, China
6.
Jiang J, Johnson JCS, Requena-Komuro MC, Benhamou E, Sivasathiaseelan H, Chokesuwattanaskul A, Nelson A, Nortley R, Weil RS, Volkmer A, Marshall CR, Bamiou DE, Warren JD, Hardy CJD. Comprehension of acoustically degraded speech in Alzheimer's disease and primary progressive aphasia. Brain 2023; 146:4065-4076. PMID: 37184986; PMCID: PMC10545509. DOI: 10.1093/brain/awad163.
Abstract
Successful communication in daily life depends on accurate decoding of speech signals that are acoustically degraded by challenging listening conditions. This process presents the brain with a demanding computational task that is vulnerable to neurodegenerative pathologies. However, despite recent intense interest in the link between hearing impairment and dementia, comprehension of acoustically degraded speech in these diseases has been little studied. Here we addressed this issue in a cohort of 19 patients with typical Alzheimer's disease and 30 patients representing the three canonical syndromes of primary progressive aphasia (non-fluent/agrammatic variant primary progressive aphasia; semantic variant primary progressive aphasia; logopenic variant primary progressive aphasia), compared to 25 healthy age-matched controls. As a paradigm for the acoustically degraded speech signals of daily life, we used noise-vocoding: synthetic division of the speech signal into frequency channels constituted from amplitude-modulated white noise, such that fewer channels convey less spectrotemporal detail thereby reducing intelligibility. We investigated the impact of noise-vocoding on recognition of spoken three-digit numbers and used psychometric modelling to ascertain the threshold number of noise-vocoding channels required for 50% intelligibility by each participant. Associations of noise-vocoded speech intelligibility threshold with general demographic, clinical and neuropsychological characteristics and regional grey matter volume (defined by voxel-based morphometry of patients' brain images) were also assessed. Mean noise-vocoded speech intelligibility threshold was significantly higher in all patient groups than healthy controls, and significantly higher in Alzheimer's disease and logopenic variant primary progressive aphasia than semantic variant primary progressive aphasia (all P < 0.05). 
In a receiver operating characteristic analysis, vocoded intelligibility threshold discriminated Alzheimer's disease, non-fluent variant and logopenic variant primary progressive aphasia patients very well from healthy controls. Further, this central hearing measure correlated with overall disease severity but not with peripheral hearing or clear speech perception. Neuroanatomically, after correcting for multiple voxel-wise comparisons in predefined regions of interest, impaired noise-vocoded speech comprehension across syndromes was significantly associated (P < 0.05) with atrophy of left planum temporale, angular gyrus and anterior cingulate gyrus: a cortical network that has previously been widely implicated in processing degraded speech signals. Our findings suggest that the comprehension of acoustically altered speech captures an auditory brain process relevant to daily hearing and communication in major dementia syndromes, with novel diagnostic and therapeutic implications.
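The noise-vocoding paradigm this abstract describes (splitting speech into frequency channels, extracting each channel's amplitude envelope, and using it to modulate band-limited noise) can be sketched as a generic channel vocoder. The 4th-order Butterworth filters, the 100-7000 Hz analysis range, and the RMS matching below are assumptions for illustration, not the authors' exact stimulus parameters:

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(signal, fs, n_channels, f_lo=100.0, f_hi=7000.0):
    """Noise-vocode a signal: band-split on a log-frequency axis, take each
    band's Hilbert envelope, and remodulate band-limited white noise."""
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        env = np.abs(hilbert(band))            # amplitude envelope of the band
        noise = rng.standard_normal(len(signal))
        carrier = sosfilt(sos, noise)          # band-limited noise carrier
        out += env * carrier
    # match the overall RMS of the input so loudness is comparable
    out *= np.sqrt(np.mean(signal**2) / (np.mean(out**2) + 1e-12))
    return out

# Example: vocode a 440-Hz tone with 4 channels (illustrative input)
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
vocoded = noise_vocode(tone, fs, n_channels=4)
```

Fewer channels preserve less spectrotemporal detail, which is exactly the manipulation the study used to lower intelligibility.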
Affiliation(s)
- Jessica Jiang
  - Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Jeremy C S Johnson
  - Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Maï-Carmen Requena-Komuro
  - Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
  - Kidney Cancer Program, UT Southwestern Medical Centre, Dallas, TX 75390, USA
- Elia Benhamou
  - Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Harri Sivasathiaseelan
  - Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Anthipa Chokesuwattanaskul
  - Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
  - Division of Neurology, Department of Internal Medicine, King Chulalongkorn Memorial Hospital, Thai Red Cross Society, Bangkok 10330, Thailand
- Annabel Nelson
  - Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Ross Nortley
  - Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
  - Wexham Park Hospital, Frimley Health NHS Foundation Trust, Slough SL2 4HL, UK
- Rimona S Weil
  - Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Anna Volkmer
  - Division of Psychology and Language Sciences, University College London, London WC1H 0AP, UK
- Charles R Marshall
  - Preventive Neurology Unit, Wolfson Institute of Population Health, Queen Mary University of London, London EC1M 6BQ, UK
- Doris-Eva Bamiou
  - UCL Ear Institute and UCL/UCLH Biomedical Research Centre, National Institute of Health Research, University College London, London WC1X 8EE, UK
- Jason D Warren
  - Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Chris J D Hardy
  - Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
7.
Zhang Y, Rennig J, Magnotti JF, Beauchamp MS. Multivariate fMRI responses in superior temporal cortex predict visual contributions to, and individual differences in, the intelligibility of noisy speech. Neuroimage 2023; 278:120271. PMID: 37442310; PMCID: PMC10460966. DOI: 10.1016/j.neuroimage.2023.120271.
Abstract
Humans have the unique ability to decode the rapid stream of language elements that constitute speech, even when it is contaminated by noise. Two reliable observations about noisy speech perception are that seeing the face of the talker improves intelligibility and that individuals differ in their ability to perceive noisy speech. We introduce a multivariate BOLD fMRI measure that explains both observations. In two independent fMRI studies, clear and noisy speech was presented in visual, auditory, and audiovisual formats to thirty-seven participants who rated intelligibility. An event-related design was used to sort noisy speech trials by their intelligibility. Individual-differences multidimensional scaling was applied to fMRI response patterns in superior temporal cortex, and the dissimilarity between responses to clear speech and noisy (but intelligible) speech was measured. Neural dissimilarity was less for audiovisual speech than for auditory-only speech, corresponding to the greater intelligibility of noisy audiovisual speech. Dissimilarity was also less in participants with better noisy speech perception, corresponding to individual differences. These relationships held for both single-word and entire-sentence stimuli, suggesting that they were driven by intelligibility rather than by the specific stimuli tested. A neural measure of perceptual intelligibility may aid in the development of strategies for helping those with impaired speech perception.
Affiliation(s)
- Yue Zhang
  - Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
  - Department of Neurosurgery, Baylor College of Medicine, Houston, TX, United States
- Johannes Rennig
  - Division of Neuropsychology, Center of Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- John F Magnotti
  - Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Michael S Beauchamp
  - Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
8.
Bianco R, Chait M. No Link Between Speech-in-Noise Perception and Auditory Sensory Memory - Evidence From a Large Cohort of Older and Younger Listeners. Trends Hear 2023; 27:23312165231190688. PMID: 37828868; PMCID: PMC10576936. DOI: 10.1177/23312165231190688.
Abstract
A growing literature demonstrates a link between working memory (WM) and speech-in-noise (SiN) perception. However, the nature of this correlation, and which components of WM might underlie it, are still debated. We investigated how SiN reception links with auditory sensory memory (aSM) - the low-level processes that support the short-term maintenance of temporally unfolding sounds. A large sample of older (N = 199, aged 60-79 years) and younger (N = 149, aged 20-35 years) participants was recruited online and performed a coordinate response measure-based speech-in-babble task that taps listeners' ability to track a speech target in background noise. We used two tasks to investigate implicit and explicit aSM. Both were based on tone patterns overlapping in processing time scales with speech (presentation rate of tones: 20 Hz; of patterns: 2 Hz). We hypothesised that a link between SiN and aSM may be particularly apparent in older listeners due to age-related reductions in both SiN reception and aSM. We confirmed impaired SiN reception in the older cohort and demonstrated reduced aSM performance in those listeners. However, SiN and aSM did not share variability. Across the two age groups, SiN performance was predicted by a binaural processing test and by age. The results suggest that previously observed links between WM and SiN may relate to the executive components and other cognitive demands of the tasks used. This finding helps to constrain the search for the perceptual and cognitive factors that explain individual variability in SiN performance.
Affiliations:
- Roberta Bianco: Ear Institute, University College London, London, UK; Neuroscience of Perception and Action Lab, Italian Institute of Technology (IIT), Rome, Italy
- Maria Chait: Ear Institute, University College London, London, UK
9
Johns MA, Calloway RC, Phillips I, Karuzis VP, Dutta K, Smith E, Shamma SA, Goupell MJ, Kuchinsky SE. Performance on stochastic figure-ground perception varies with individual differences in speech-in-noise recognition and working memory capacity. J Acoust Soc Am 2023; 153:286. PMID: 36732241; PMCID: PMC9851714; DOI: 10.1121/10.0016756.
Abstract
Speech recognition in noisy environments can be challenging and requires listeners to accurately segregate a target speaker from irrelevant background noise. Stochastic figure-ground (SFG) tasks, in which temporally coherent inharmonic pure tones must be identified against a background, have been used to probe the non-linguistic auditory stream segregation processes important for speech-in-noise processing. However, little is known about the relationship between performance on SFG tasks and speech-in-noise tasks, or about the individual differences that may modulate such relationships. In this study, 37 younger normal-hearing adults performed an SFG task with target figure chords consisting of four, six, eight, or ten temporally coherent tones amongst a background of randomly varying tones. Stimuli were designed to be spectrally and temporally flat. An increased number of temporally coherent tones resulted in higher accuracy and faster reaction times (RTs). For ten target tones, faster RTs were associated with better scores on the Quick Speech-in-Noise task. Individual differences in working memory capacity and self-reported musicianship further modulated these relationships. Overall, the results demonstrate that the SFG task could serve as an assessment of auditory stream segregation accuracy and RT that is sensitive to individual differences in cognitive and auditory abilities, even among younger normal-hearing adults.
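As a concrete illustration of the SFG stimulus design described in this abstract - fixed "figure" tones repeated coherently across chords, embedded in randomly redrawn background tones - the following NumPy sketch generates one such stimulus. This is a hypothetical reconstruction, not the authors' stimulus code; the frequency pool, chord duration, sampling rate, and tone counts are assumed values chosen only for demonstration.

```python
import numpy as np

def make_sfg_stimulus(n_chords=20, chord_dur=0.05, n_figure=6,
                      n_background=10, fs=16000, seed=0):
    """Return a 1-D waveform of n_chords chords: each chord mixes the same
    'figure' tones (coherent across chords) with freshly drawn background
    tones, as in a stochastic figure-ground (SFG) task."""
    rng = np.random.default_rng(seed)
    # Log-spaced frequency pool (assumed range, roughly 180 Hz - 7 kHz)
    pool = np.geomspace(180.0, 7000.0, 129)
    figure = rng.choice(pool, size=n_figure, replace=False)  # held fixed
    t = np.arange(int(chord_dur * fs)) / fs
    chords = []
    for _ in range(n_chords):
        # Background tones are redrawn for every chord
        background = rng.choice(pool, size=n_background, replace=False)
        freqs = np.concatenate([figure, background])
        chord = np.sin(2 * np.pi * freqs[:, None] * t).sum(axis=0)
        chords.append(chord / len(freqs))  # keep amplitude within [-1, 1]
    return np.concatenate(chords)

wave = make_sfg_stimulus()
```

Detecting the figure then amounts to spotting the frequency components that repeat across chords; raising `n_figure` (four to ten coherent tones in the study) makes the figure more salient, which is the manipulation the accuracy and RT effects above track.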
Affiliations:
- Michael A Johns: Institute for Systems Research, University of Maryland, College Park, Maryland 20742, USA
- Regina C Calloway: Institute for Systems Research, University of Maryland, College Park, Maryland 20742, USA
- Ian Phillips: Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
- Valerie P Karuzis: Applied Research Laboratory of Intelligence and Security, University of Maryland, College Park, Maryland 20742, USA
- Kelsey Dutta: Institute for Systems Research, University of Maryland, College Park, Maryland 20742, USA
- Ed Smith: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Shihab A Shamma: Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland 20742, USA
- Matthew J Goupell: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Stefanie E Kuchinsky: Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
10
Simon KR, Merz EC, He X, Noble KG. Environmental noise, brain structure, and language development in children. Brain Lang 2022; 229:105112. PMID: 35398600; PMCID: PMC9126644; DOI: 10.1016/j.bandl.2022.105112.
Abstract
While excessive noise exposure in childhood has been associated with reduced language ability, few studies have examined the underlying neurobiological mechanisms that may account for noise-related differences in language skills. In this study, we tested the hypotheses that higher everyday noise exposure would be associated with (1) poorer language skills and (2) differences in language-related cortical structure. A socioeconomically diverse sample of children aged 5-9 years (N = 94) completed standardized language assessments. High-resolution T1-weighted magnetic resonance imaging (MRI) scans were acquired, and the surface area and cortical thickness of the left inferior frontal gyrus (IFG) and left superior temporal gyrus (STG) were extracted. Language Environment Analysis (LENA) was used to measure levels of exposure to excessive environmental noise over the course of a typical day (n = 43 with complete LENA, MRI, and behavioral data). Results indicated that children exposed to excessive levels of noise exhibited reduced cortical thickness in the left IFG. These findings add to a growing literature exploring the extent to which home environmental factors, such as environmental noise, are associated with the neurobiological development that underlies language in children.
Affiliations:
- Katrina R Simon: Department of Human Development, Teachers College, Columbia University, New York, NY, USA
- Emily C Merz: Department of Psychology, Colorado State University, Fort Collins, CO, USA
- Xiaofu He: Department of Psychiatry, The Vagelos College of Physicians and Surgeons, Columbia University and the New York State Psychiatric Institute, New York, NY, USA
- Kimberly G Noble: Department of Human Development, Teachers College, Columbia University, New York, NY, USA; Department of Biobehavioral Sciences, Teachers College, Columbia University, USA
11
Guo MX. EEG Responses to Auditory Figure-Ground Perception. Hear Res 2022; 422:108524. DOI: 10.1016/j.heares.2022.108524.
12
Herrmann B, Maess B, Johnsrude IS. A neural signature of regularity in sound is reduced in older adults. Neurobiol Aging 2021; 109:1-10. PMID: 34634748; DOI: 10.1016/j.neurobiolaging.2021.09.011.
Abstract
Sensitivity to repetitions in sound amplitude and frequency is crucial for sound perception. As with other aspects of sound processing, sensitivity to such patterns may change with age and may help explain some age-related changes in hearing, such as difficulty segregating speech from background sound. We recorded magnetoencephalography to characterize differences in the processing of sound patterns between younger and older adults. We presented tone sequences that either contained a pattern (made of a repeated set of tones) or did not. We show that auditory cortex in older, compared with younger, adults is hyperresponsive to sound onsets, but that sustained neural activity in auditory cortex, indexing the processing of a sound pattern, is reduced. Hence, the sensitivity of neural populations in auditory cortex fundamentally differs between younger and older individuals: it overresponds to sound onsets while underresponding to patterns in sounds. This may help to explain some age-related changes in hearing, such as increased sensitivity to distracting sounds and difficulty tracking speech in the presence of other sound.
Affiliations:
- Björn Herrmann: Department of Psychology & Brain and Mind Institute, The University of Western Ontario, London, ON, Canada; Rotman Research Institute, Baycrest, North York, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada
- Burkhard Maess: Brain Networks Unit, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Ingrid S Johnsrude: Department of Psychology & Brain and Mind Institute, The University of Western Ontario, London, ON, Canada; School of Communication Sciences & Disorders, The University of Western Ontario, London, ON, Canada
13
Zhao Y, Zhang L, Rütgen M, Sladky R, Lamm C. Neural dynamics between anterior insular cortex and right supramarginal gyrus dissociate genuine affect sharing from perceptual saliency of pretended pain. eLife 2021; 10:e69994. PMID: 34409940; PMCID: PMC8443248; DOI: 10.7554/elife.69994.
Abstract
Empathy for pain engages both shared affective responses and self-other distinction. In this study, we addressed the highly debated question of whether neural responses previously linked to affect sharing could result from the perception of salient affective displays. Moreover, we investigated how the brain network involved in affect sharing and self-other distinction underpins responses to pain perceived as either genuine or pretended (while, in fact, both were acted for reasons of experimental control). We found stronger activations in regions associated with affect sharing (anterior insula [aIns] and anterior mid-cingulate cortex), as well as with affective self-other distinction (right supramarginal gyrus [rSMG]), in participants watching video clips of genuine vs. pretended facial expressions of pain. Using dynamic causal modeling, we then assessed the neural dynamics between the right aIns and rSMG in these two conditions. This revealed a reduced inhibitory effect on the aIns-to-rSMG connection for genuine pain compared with pretended pain. For genuine pain only, brain-to-behavior regression analyses highlighted a linkage between this inhibitory effect on the one hand, and pain ratings as well as empathic traits on the other. These findings imply that when the pain of others is genuine, and thus calls for an appropriate empathic response, neural responses in the aIns are indeed related to affect sharing, and self-other distinction is engaged to avoid empathic over-arousal. In contrast, when others merely pretend to be in pain, the perceptual salience of their painful expression results in neural responses that are down-regulated to avoid inappropriate affect sharing and social support.
Affiliations:
- Yili Zhao: Social, Cognitive and Affective Neuroscience Unit, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Lei Zhang: Social, Cognitive and Affective Neuroscience Unit, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Markus Rütgen: Social, Cognitive and Affective Neuroscience Unit, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria; Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
- Ronald Sladky: Social, Cognitive and Affective Neuroscience Unit, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Claus Lamm: Social, Cognitive and Affective Neuroscience Unit, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria; Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
14
Neuronal figure-ground responses in primate primary auditory cortex. Cell Rep 2021; 35:109242. PMID: 34133935; PMCID: PMC8220257; DOI: 10.1016/j.celrep.2021.109242.
Abstract
Figure-ground segregation, the brain's ability to group related features into stable perceptual entities, is crucial for auditory perception in noisy environments. The neuronal mechanisms for this process are poorly understood in the auditory system. Here, we report figure-ground modulation of multi-unit activity (MUA) in the primary and non-primary auditory cortex of rhesus macaques. Across both regions, MUA increases upon presentation of auditory figures, which consist of coherent chord sequences. We show increased activity even in the absence of any perceptual decision, suggesting that neural mechanisms for perceptual grouping are, to some extent, independent of behavioral demands. Furthermore, we demonstrate differences in figure encoding between more anterior and more posterior regions; perceptual saliency is represented in anterior cortical fields only. Our results suggest an encoding of auditory figures from the earliest cortical stages by a rate code.
Highlights:
- Neuronal figure-ground modulation in primary auditory cortex
- A rate code is used to signal the presence of auditory figures
- Anteriorly located recording sites encode perceptual saliency
- Figure-ground modulation is present without perceptual detection
15
Holmes E, Johnsrude IS. Speech-evoked brain activity is more robust to competing speech when it is spoken by someone familiar. Neuroimage 2021; 237:118107. PMID: 33933598; DOI: 10.1016/j.neuroimage.2021.118107.
Abstract
When speech is masked by competing sound, people are better at understanding what is said if the talker is familiar rather than unfamiliar. The benefit is robust, but how does processing of familiar voices facilitate intelligibility? We combined high-resolution fMRI with representational similarity analysis to quantify the difference in distributed activity between clear and masked speech. We demonstrate that brain representations of spoken sentences are less affected by a competing sentence when they are spoken by a friend or partner than by someone unfamiliar - effectively showing a cortical signal-to-noise ratio (SNR) enhancement for familiar voices. This effect correlated with the familiar-voice intelligibility benefit. We functionally parcellated auditory cortex and found that the most prominent familiar-voice advantage was manifest along the posterior superior and middle temporal gyri. Overall, our results demonstrate that experience-driven improvements in intelligibility are associated with enhanced multivariate pattern activity in posterior temporal cortex.
Affiliations:
- Emma Holmes: The Brain and Mind Institute, University of Western Ontario, London, Ontario, N6A 3K7, Canada
- Ingrid S Johnsrude: The Brain and Mind Institute, University of Western Ontario, London, Ontario, N6A 3K7, Canada; School of Communication Sciences and Disorders, University of Western Ontario, London, Ontario, N6G 1H1, Canada
16
Holmes E, Utoomprurkporn N, Hoskote C, Warren JD, Bamiou DE, Griffiths TD. Simultaneous auditory agnosia: Systematic description of a new type of auditory segregation deficit following a right hemisphere lesion. Cortex 2021; 135:92-107. PMID: 33360763; PMCID: PMC7856551; DOI: 10.1016/j.cortex.2020.10.023.
Abstract
We investigated auditory processing in a young patient who experienced a single embolus causing an infarct in the right middle cerebral artery territory. This damaged auditory cortex, including the planum temporale but sparing medial Heschl's gyrus, and extended to the posterior insula and inferior parietal lobule. She reported chronic difficulties with segregating speech from noise and segregating elements of music. Clinical tests showed no evidence of abnormal cochlear function. Follow-up tests confirmed difficulties with auditory segregation in her left ear that spanned multiple domains, including words-in-noise perception and music streaming. Testing with a stochastic figure-ground task (a way of estimating generic acoustic foreground and background segregation) demonstrated that this was also abnormal. This is the first demonstration of an acquired deficit in the segregation of complex acoustic patterns due to cortical damage, which we argue is a causal explanation for the symptomatic deficits in the segregation of speech and music. These symptoms are analogous to the visual symptom of simultaneous agnosia. Consistent with functional imaging studies of normal listeners, the work implicates non-primary auditory cortex. Further, it demonstrates a (partial) lateralisation of the anatomical substrate necessary for segregation that has not been previously highlighted.
Affiliations:
- Emma Holmes: Wellcome Centre for Human Neuroimaging, UCL, London, UK
- Nattawan Utoomprurkporn: UCL Ear Institute, UCL, London, UK; NIHR University College London Hospitals Biomedical Research Centre, University College London Hospitals NHS Foundation Trust, UCL, London, UK; Faculty of Medicine, Chulalongkorn University, King Chulalongkorn Memorial Hospital, Bangkok, Thailand
- Chandrashekar Hoskote: Lysholm Department of Neuroradiology, University College London Hospitals NHS Foundation Trust, UCL, London, UK
- Doris-Eva Bamiou: UCL Ear Institute, UCL, London, UK; NIHR University College London Hospitals Biomedical Research Centre, University College London Hospitals NHS Foundation Trust, UCL, London, UK
- Timothy D Griffiths: Wellcome Centre for Human Neuroimaging, UCL, London, UK; Biosciences Institute, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne, UK