1
Pandey PR, Herrmann B. The Influence of Semantic Context on the Intelligibility Benefit From Speech Glimpses in Younger and Older Adults. J Speech Lang Hear Res 2025:1-18. [PMID: 40233803 DOI: 10.1044/2025_jslhr-24-00588]
Abstract
PURPOSE Speech is often masked by background sound that fluctuates over time. Fluctuations in masker intensity can reveal glimpses of speech that support speech intelligibility, but older adults have frequently been shown to benefit less from speech glimpses than younger adults when listening to sentences. Recent work, however, suggests that older adults may leverage speech glimpses as much, or more, when listening to naturalistic stories, potentially because of the availability of semantic context in stories. The current study directly investigated whether semantic context helps older adults benefit from speech glimpses released by a fluctuating (modulated) masker more than younger adults. METHOD In two experiments, we reduced and extended semantic information of sentence stimuli in modulated and unmodulated speech maskers for younger and older adults. Speech intelligibility was assessed. RESULTS We found that semantic context improves speech intelligibility in both younger and older adults. Both age groups also exhibit better speech intelligibility for a modulated than an unmodulated (stationary) masker, but the benefit from the speech glimpses was reduced in older compared to younger adults. Semantic context amplified the benefit gained from the speech glimpses, but there was no indication that the amplification by the semantic context led to a greater benefit in older adults. If anything, younger adults benefitted more. CONCLUSIONS The current results suggest that the deficit in the masking-release benefit in older adults generalizes to situations in which extended speech context is available. That previous research found a greater benefit in older than younger adults during story listening may suggest that other factors, such as thematic knowledge, motivation, or cognition, may amplify the benefit from speech glimpses under naturalistic listening conditions.
Affiliation(s)
- Priya R Pandey
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Ontario, Canada
- Björn Herrmann
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Ontario, Canada
2
Herrmann B. Enhanced neural speech tracking through noise indicates stochastic resonance in humans. eLife 2025; 13:RP100830. [PMID: 40100253 PMCID: PMC11919254 DOI: 10.7554/elife.100830]
Abstract
Neural activity in auditory cortex tracks the amplitude-onset envelope of continuous speech, but recent work counterintuitively suggests that neural tracking increases when speech is masked by background noise, despite reduced speech intelligibility. Noise-related amplification could indicate that stochastic resonance - the response facilitation through noise - supports neural speech tracking, but a comprehensive account is lacking. In five human electroencephalography experiments, the current study demonstrates a generalized enhancement of neural speech tracking due to minimal background noise. Results show that (1) neural speech tracking is enhanced for speech masked by background noise at very high signal-to-noise ratios (~30 dB SNR) where speech is highly intelligible; (2) this enhancement is independent of attention; (3) it generalizes across different stationary background maskers, but is strongest for 12-talker babble; and (4) it is present for headphone and free-field listening, suggesting that the neural-tracking enhancement generalizes to real-life listening. The work paints a clear picture that minimal background noise enhances the neural representation of the speech onset-envelope, suggesting that stochastic resonance contributes to neural speech tracking. The work further highlights non-linearities of neural tracking induced by background noise that make its use as a biological marker for speech processing challenging.
Affiliation(s)
- Björn Herrmann
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Canada
- Department of Psychology, University of Toronto, Toronto, Canada
3
Borges H, Zaar J, Alickovic E, Christensen C, Kidmose P. Speech Reception Threshold Estimation via EEG-Based Continuous Speech Envelope Reconstruction. Eur J Neurosci 2025; 61:e70083. [PMID: 40145625 PMCID: PMC11948451 DOI: 10.1111/ejn.70083]
Abstract
This study investigates the potential of speech reception threshold (SRT) estimation through electroencephalography (EEG) based envelope reconstruction techniques with continuous speech. Additionally, we investigate the influence of the stimuli's signal-to-noise ratio (SNR) on the temporal response function (TRF). Twenty young normal-hearing participants listened to audiobook excerpts with varying background noise levels while EEG was recorded. A linear decoder was trained to reconstruct the speech envelope from the EEG data. The reconstruction accuracy was calculated as the Pearson's correlation between the reconstructed and actual speech envelopes. An EEG SRT estimate (SRTneuro) was obtained as the midpoint of a sigmoid function fitted to the reconstruction accuracy versus SNR data points. Additionally, the TRF was estimated at each SNR level, followed by a statistical analysis to reveal significant effects of SNR levels on the latencies and amplitudes of the most prominent components. The SRTneuro was within 3 dB of the behavioral SRT for all participants. The TRF analysis showed a significant latency decrease for N1 and P2 and a significant amplitude magnitude increase for N1 and P2 with increasing SNR. The results suggest that both envelope reconstruction accuracy and the TRF components are influenced by changes in SNR, indicating they may be linked to the same underlying neural process.
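As a rough illustration of the SRTneuro estimation step described above, the sketch below fits a sigmoid to reconstruction-accuracy-versus-SNR points and reads off its midpoint. The logistic form, variable names, and example values are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: estimate an "SRTneuro" as the midpoint of a sigmoid fitted to
# envelope-reconstruction accuracy (Pearson r) as a function of SNR.
# Hypothetical data and parameter names; not the paper's actual pipeline.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(snr, midpoint, slope, floor, ceiling):
    """Logistic function rising from `floor` to `ceiling` with inflection at `midpoint`."""
    return floor + (ceiling - floor) / (1.0 + np.exp(-slope * (snr - midpoint)))

# Example reconstruction accuracies at several SNRs (made-up values).
snrs = np.array([-12.0, -8.0, -4.0, 0.0, 4.0, 8.0])          # dB SNR
accuracy = np.array([0.02, 0.04, 0.09, 0.16, 0.20, 0.21])    # Pearson r

# Fit the sigmoid; p0 gives rough starting values (midpoint, slope, floor, ceiling).
params, _ = curve_fit(sigmoid, snrs, accuracy, p0=[-4.0, 1.0, 0.0, 0.2], maxfev=10000)
srt_neuro = params[0]
print(f"SRTneuro (sigmoid midpoint): {srt_neuro:.1f} dB SNR")
```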
Affiliation(s)
- Heidi B. Borges
- Eriksholm Research Centre, Snekkersten, Denmark
- Department of Electrical and Computer Engineering, Aarhus University, Aarhus, Denmark
- Johannes Zaar
- Eriksholm Research Centre, Snekkersten, Denmark
- Hearing Systems, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
- Emina Alickovic
- Eriksholm Research Centre, Snekkersten, Denmark
- Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Preben Kidmose
- Department of Electrical and Computer Engineering, Aarhus University, Aarhus, Denmark
4
Herrmann B, Cui ME. Impaired Prosodic Processing but Not Hearing Function Is Associated with an Age-Related Reduction in AI Speech Recognition. Audiol Res 2025; 15:14. [PMID: 39997158 PMCID: PMC11852301 DOI: 10.3390/audiolres15010014]
Abstract
BACKGROUND/OBJECTIVES Voice artificial intelligence (AI) technology is becoming increasingly common. Recent work indicates that middle-aged to older adults are less able to identify modern AI speech compared to younger adults, but the underlying causes are unclear. METHODS The current study with younger and middle-aged to older adults investigated factors that could explain the age-related reduction in AI speech identification. Experiment 1 investigated whether high-frequency information in speech (to which middle-aged to older adults often have less access due to sensitivity loss at high frequencies) contributes to age-group differences. Experiment 2 investigated whether an age-related reduction in the ability to process prosodic information in speech predicts the reduction in AI speech identification. RESULTS Results for Experiment 1 show that middle-aged to older adults are less able to identify AI speech for both full-bandwidth speech and speech for which information above 4 kHz is removed, making the contribution of high-frequency hearing loss unlikely. Experiment 2 shows that the ability to identify AI speech is greater in individuals who also show a greater ability to identify emotions from prosodic speech information, after accounting for hearing function and self-rated experience with voice-AI systems. CONCLUSIONS The current results suggest that the ability to identify AI speech is related to the accurate processing of prosodic information.
Affiliation(s)
- Björn Herrmann
- Rotman Research Institute, Baycrest Academy for Research and Education, 3560 Bathurst St., North York, ON M6A 2E1, Canada;
- Department of Psychology, University of Toronto, Toronto, ON M5S 1A1, Canada
- Mo Eric Cui
- Rotman Research Institute, Baycrest Academy for Research and Education, 3560 Bathurst St., North York, ON M6A 2E1, Canada;
- Department of Psychology, University of Toronto, Toronto, ON M5S 1A1, Canada
5
Dapper K, Wolpert SM, Schirmer J, Fink S, Gaudrain E, Başkent D, Singer W, Verhulst S, Braun C, Dalhoff E, Rüttiger L, Munk MHJ, Knipper M. Age dependent deficits in speech recognition in quiet and noise are reflected in MGB activity and cochlear onset coding. Neuroimage 2025; 305:120958. [PMID: 39622462 DOI: 10.1016/j.neuroimage.2024.120958]
Abstract
The slowing and reduction of auditory responses in the brain are recognized side effects of increased pure tone thresholds, impaired speech recognition, and aging. However, it remains controversial whether central slowing is primarily linked to brain processes such as atrophy, or is also associated with the slowing of temporal neural processing from the periphery. Here we analyzed electroencephalogram (EEG) responses that most likely reflect medial geniculate body (MGB) responses to passive listening to phonemes in 80 subjects ranging in age from 18 to 76 years, in whom the peripheral auditory responses had been analyzed in detail (Schirmer et al., 2024). We observed that passive listening to vowels and phonemes, specifically designed to rely on either temporal fine structure (TFS) for frequencies below the phase locking limit (<1500 Hz), or on the temporal envelope (TENV) for frequencies above the phase locking limit, entrained lower or higher neural EEG responses. While previous views predict speech content, particularly in noise, to be encoded through TENV, here a decreasing phoneme-induced EEG amplitude over age in response to phonemes relying on TENV coding could also be linked to poorer speech-recognition thresholds in quiet. In addition, increased phoneme-evoked EEG delay could be correlated with elevated extended high-frequency threshold (EHF) for phoneme changes that relied on TFS and TENV coding. This may suggest a role of pure-tone threshold averages (PTA) of EHF for TENV and TFS beyond sound localization that is reflected in likely MGB delays. When speech recognition thresholds were normalized for pure-tone thresholds, however, the EEG amplitudes remained insignificant, and thereby became independent of age. Under these conditions, poor speech recognition in quiet was found together with a delay in EEG response for phonemes that relied on TFS coding, while poor speech recognition in ipsilateral noise was observed as a trend of shortened EEG delays for phonemes that relied on TENV coding. Based on previous analyses performed in these same subjects, elevated thresholds in extended high-frequency regions were linked to cochlear synaptopathy and auditory brainstem delays. Also, independent of hearing loss, poor speech-performing groups in quiet or with ipsilateral noise during TFS or TENV coding could be linked to lower or better outer hair cell performance and delayed or steeper auditory nerve responses at stimulus onset. The amplitude and latency of MGB responses to phonemes requiring TFS or TENV coding, dependent or independent of hearing loss, may thus be a new predictor of poor speech recognition in quiet and ipsilateral noise that links deficits in synchronicity at stimulus onset to neocortical activity. Amplitudes and delays of speech EEG responses to syllables should be reconsidered for future hearing-aid studies.
Affiliation(s)
- Konrad Dapper
- Department of Otolaryngology, Head and Neck, University of Tübingen, Tübingen 72076, Germany; Department of Biology, Technical University of Darmstadt, 64287 Darmstadt, Germany
- Stephan M Wolpert
- Department of Otolaryngology, Head and Neck, University of Tübingen, Tübingen 72076, Germany
- Jakob Schirmer
- Department of Otolaryngology, Head and Neck, University of Tübingen, Tübingen 72076, Germany
- Stefan Fink
- Department of Otolaryngology, Head and Neck, University of Tübingen, Tübingen 72076, Germany
- Etienne Gaudrain
- Lyon Neuroscience Research Center, Université Claude Bernard Lyon 1, CNRS UMR5292, INSERM U1028, Centre Hospitalier Le Vinatier, Bâtiment 462, Neurocampus, 95 boulevard Pinel, Lyon, France
- Deniz Başkent
- Department of Otorhinolaryngology, University Medical Center Groningen (UMCG), Hanzeplein 1, BB21, Groningen 9700RB, the Netherlands
- Wibke Singer
- Department of Otolaryngology, Head and Neck, University of Tübingen, Tübingen 72076, Germany
- Sarah Verhulst
- Department of Information Technology, Ghent University, Zwijnaarde 9052, Belgium
- Christoph Braun
- MEG-Center, University of Tübingen, Tübingen 72076, Germany; HIH, Hertie Institute for Clinical Brain Research, Tübingen 72076, Germany; CIMeC, Center for Mind and Brain Research, University of Trento, Rovereto 38068, Italy
- Ernst Dalhoff
- Department of Otolaryngology, Head and Neck, University of Tübingen, Tübingen 72076, Germany
- Lukas Rüttiger
- Department of Otolaryngology, Head and Neck, University of Tübingen, Tübingen 72076, Germany
- Matthias H J Munk
- Department of Otolaryngology, Head and Neck, University of Tübingen, Tübingen 72076, Germany; Department of Biology, Technical University of Darmstadt, 64287 Darmstadt, Germany
- Marlies Knipper
- Department of Otolaryngology, Head and Neck, University of Tübingen, Tübingen 72076, Germany.
6
Brisson V, Tremblay P. Assessing the Impact of Transcranial Magnetic Stimulation on Speech Perception in Noise. J Cogn Neurosci 2024; 36:2184-2207. [PMID: 39023366 DOI: 10.1162/jocn_a_02224]
Abstract
Healthy aging is associated with reduced speech perception in noise (SPiN) abilities. The etiology of these difficulties remains elusive, which prevents the development of new strategies to optimize the speech processing network and reduce these difficulties. The objective of this study was to determine if sublexical SPiN performance can be enhanced by applying TMS to three regions involved in processing speech: the left posterior temporal sulcus, the left superior temporal gyrus, and the left ventral premotor cortex. The second objective was to assess the impact of several factors (age, baseline performance, target, brain structure, and activity) on post-TMS SPiN improvement. The results revealed that participants with lower baseline performance were more likely to improve. Moreover, in older adults, cortical thickness within the target areas was negatively associated with performance improvement, whereas this association was null in younger individuals. No differences between the targets were found. This study suggests that TMS can modulate sublexical SPiN performance, but that the strength and direction of the effects depend on a complex combination of contextual and individual factors.
Affiliation(s)
- Valérie Brisson
- Université Laval, School of Rehabilitation Sciences, Québec, Canada
- Centre de recherche CERVO, Québec, Canada
- Pascale Tremblay
- Université Laval, School of Rehabilitation Sciences, Québec, Canada
- Centre de recherche CERVO, Québec, Canada
7
Bolt E, Giroud N. Neural encoding of linguistic speech cues is unaffected by cognitive decline, but decreases with increasing hearing impairment. Sci Rep 2024; 14:19105. [PMID: 39154048 PMCID: PMC11330478 DOI: 10.1038/s41598-024-69602-1]
Abstract
The multivariate temporal response function (mTRF) is an effective tool for investigating the neural encoding of acoustic and complex linguistic features in natural continuous speech. In this study, we investigated how neural representations of speech features derived from natural stimuli are related to early signs of cognitive decline in older adults, taking into account the effects of hearing. Participants without (n = 25) and with (n = 19) early signs of cognitive decline listened to an audiobook while their electroencephalography responses were recorded. Using the mTRF framework, we modeled the relationship between speech input and neural response via different acoustic, segmented and linguistic encoding models and examined the response functions in terms of encoding accuracy, signal power, peak amplitudes and latencies. Our results showed no significant effect of cognitive decline or hearing ability on the neural encoding of acoustic and linguistic speech features. However, we found a significant interaction between hearing ability and the word-level segmentation model, suggesting that hearing impairment specifically affects encoding accuracy for this model, while other features were not affected by hearing ability. These results suggest that while speech processing markers remain unaffected by cognitive decline and hearing loss per se, neural encoding of word-level segmented speech features in older adults is affected by hearing loss but not by cognitive decline. This study emphasises the effectiveness of mTRF analysis in studying the neural encoding of speech and argues for an extension of research to investigate its clinical impact on hearing loss and cognition.
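For readers unfamiliar with the mTRF framework mentioned above, the sketch below shows the general idea of a forward (encoding) model: time-lagged stimulus features are regressed onto the EEG with ridge regression, and encoding accuracy is the correlation between predicted and recorded responses. The lag range, regularization value, and array names are generic assumptions, not the study's pipeline.

```python
# Minimal sketch of a forward temporal-response-function (TRF) model:
# regress time-lagged stimulus features onto EEG and score prediction accuracy.
# Generic illustration; not the study's actual parameters.
import numpy as np
from numpy.linalg import solve

def lagged_design(feature, lags):
    """Build a design matrix whose columns are the feature shifted by each lag (in samples)."""
    n = len(feature)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = feature[:n - lag]
        else:
            X[:lag, j] = feature[-lag:]
    return X

fs = 128                                    # sampling rate (Hz), assumed
rng = np.random.default_rng(0)
envelope = rng.standard_normal(fs * 60)     # stand-in for a speech-envelope feature
eeg = np.convolve(envelope, np.hanning(20), mode="same") + rng.standard_normal(len(envelope))

lags = np.arange(0, int(0.4 * fs))          # 0-400 ms lags
X = lagged_design(envelope, lags)
lam = 1.0                                   # ridge regularization strength
trf = solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ eeg)   # ridge solution

predicted = X @ trf
encoding_accuracy = np.corrcoef(predicted, eeg)[0, 1]
print(f"Encoding accuracy (Pearson r): {encoding_accuracy:.2f}")
```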
Affiliation(s)
- Elena Bolt
- Computational Neuroscience of Speech and Hearing, Department of Computational Linguistics, University of Zurich, 8050, Zurich, Switzerland.
- International Max Planck Research School on the Life Course (IMPRS LIFE), University of Zurich, 8050, Zurich, Switzerland.
- Nathalie Giroud
- Computational Neuroscience of Speech and Hearing, Department of Computational Linguistics, University of Zurich, 8050, Zurich, Switzerland
- International Max Planck Research School on the Life Course (IMPRS LIFE), University of Zurich, 8050, Zurich, Switzerland
- Language and Medicine Centre Zurich, Competence Centre of Medical Faculty and Faculty of Arts and Sciences, University of Zurich, 8050, Zurich, Switzerland
8
Kaya E, Kotz SA, Henry MJ. A novel method for estimating properties of attentional oscillators reveals an age-related decline in flexibility. eLife 2024; 12:RP90735. [PMID: 38904659 PMCID: PMC11192533 DOI: 10.7554/elife.90735]
Abstract
Dynamic attending theory proposes that the ability to track temporal cues in the auditory environment is governed by entrainment, the synchronization between internal oscillations and regularities in external auditory signals. Here, we focused on two key properties of internal oscillators: their preferred rate, the default rate in the absence of any input; and their flexibility, how they adapt to changes in rhythmic context. We developed methods to estimate oscillator properties (Experiment 1) and compared the estimates across tasks and individuals (Experiment 2). Preferred rates, estimated as the stimulus rates with peak performance, showed a harmonic relationship across measurements and were correlated with individuals' spontaneous motor tempo. Estimates from motor tasks were slower than those from the perceptual task, and the degree of slowing was consistent for each individual. Task performance decreased with trial-to-trial changes in stimulus rate, and responses on individual trials were biased toward the preceding trial's stimulus properties. Flexibility, quantified as an individual's ability to adapt to faster-than-previous rates, decreased with age. These findings show domain-specific rate preferences for the assumed oscillatory system underlying rhythm perception and production, and that this system loses its ability to flexibly adapt to changes in the external rhythmic context during aging.
Affiliation(s)
- Ece Kaya
- Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Maastricht University, Maastricht, Netherlands
- Sonja A Kotz
- Maastricht University, Maastricht, Netherlands
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Molly J Henry
- Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Toronto Metropolitan University, Toronto, Canada
9
Herrmann B, Ryan JD. Pupil Size and Eye Movements Differently Index Effort in Both Younger and Older Adults. J Cogn Neurosci 2024; 36:1325-1340. [PMID: 38683698 DOI: 10.1162/jocn_a_02172]
Abstract
The assessment of mental effort is increasingly relevant in neurocognitive and life span domains. Pupillometry, the measure of the pupil size, is often used to assess effort but has disadvantages. Analysis of eye movements may provide an alternative, but research has been limited to easy and difficult task demands in younger adults. An effort measure must be sensitive to the whole effort profile, including "giving up" effort investment, and capture effort in different age groups. The current study comprised three experiments in which younger (n = 66) and older (n = 44) adults listened to speech masked by background babble at different signal-to-noise ratios associated with easy, difficult, and impossible speech comprehension. We expected individuals to invest little effort for easy and impossible speech (giving up) but to exert effort for difficult speech. Indeed, pupil size was largest for difficult but lower for easy and impossible speech. In contrast, gaze dispersion decreased with increasing speech masking in both age groups. Critically, gaze dispersion during difficult speech returned to levels similar to easy speech after sentence offset, when acoustic stimulation was similar across conditions, whereas gaze dispersion during impossible speech continued to be reduced. These findings show that a reduction in eye movements is not a byproduct of acoustic factors, but instead suggest that neurocognitive processes, different from arousal-related systems regulating the pupil size, drive reduced eye movements during high task demands. The current data thus show that effort in one sensory domain (audition) differentially impacts distinct functional properties in another sensory domain (vision).
Affiliation(s)
- Björn Herrmann
- Rotman Research Institute, North York, Ontario, Canada
- University of Toronto, Ontario, Canada
- Jennifer D Ryan
- Rotman Research Institute, North York, Ontario, Canada
- University of Toronto, Ontario, Canada
10
Temboury-Gutierrez M, Märcher-Rørsted J, Bille M, Yde J, Encina-Llamas G, Hjortkjær J, Dau T. Electrocochleographic frequency-following responses as a potential marker of age-related cochlear neural degeneration. Hear Res 2024; 446:109005. [PMID: 38598943 DOI: 10.1016/j.heares.2024.109005]
Abstract
Auditory nerve (AN) fibers that innervate inner hair cells in the cochlea degenerate with advancing age. It has been proposed that age-related reductions in brainstem frequency-following responses (FFR) to the carrier of low-frequency, high-intensity pure tones may partially reflect this neural loss in the cochlea (Märcher-Rørsted et al., 2022). If the loss of AN fibers is the primary factor contributing to age-related changes in the brainstem FFR, then the FFR could serve as an indicator of cochlear neural degeneration. In this study, we employed electrocochleography (ECochG) to investigate the effects of age on frequency-following neurophonic potentials, i.e., neural responses phase-locked to the carrier frequency of the tone stimulus. We compared these findings to the brainstem-generated FFRs obtained simultaneously using the same stimulation. We conducted recordings in young and older individuals with normal hearing. Responses to pure tones (250 ms, 516 and 1086 Hz, 85 dB SPL) and clicks were recorded using both ECochG at the tympanic membrane and traditional scalp electroencephalographic (EEG) recordings of the FFR. Distortion product otoacoustic emissions (DPOAE) were also collected. In the ECochG recordings, sustained AN neurophonic (ANN) responses to tonal stimulation, as well as the click-evoked compound action potential (CAP) of the AN, were significantly reduced in the older listeners compared to young controls, despite normal audiometric thresholds. In the EEG recordings, brainstem FFRs to the same tone stimulation were also diminished in the older participants. Unlike the reduced AN CAP response, the transient-evoked wave-V remained unaffected. These findings could indicate that a decreased number of AN fibers contributes to the response in the older participants. The results suggest that the scalp-recorded FFR, as opposed to the clinical standard wave-V of the auditory brainstem response, may serve as a more reliable indicator of age-related cochlear neural degeneration.
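To make the frequency-following measure concrete, the sketch below estimates the response magnitude at a tone's carrier frequency from an averaged recording and compares it with the noise floor in neighbouring frequency bins. The sampling rate, epoch length, and noise-floor window are illustrative assumptions, not the study's ECochG/EEG analysis.

```python
# Minimal sketch: FFR-style spectral magnitude at a carrier frequency (e.g., 516 Hz)
# from an averaged response epoch, with a noise floor from neighbouring bins.
# Illustrative only; not the paper's analysis parameters.
import numpy as np

fs = 16000                                   # sampling rate (Hz), assumed
t = np.arange(0, 0.25, 1 / fs)               # 250-ms analysis epoch, assumed
rng = np.random.default_rng(1)
avg_response = 0.1 * np.sin(2 * np.pi * 516 * t) + rng.standard_normal(len(t))  # stand-in average

spectrum = np.abs(np.fft.rfft(avg_response)) / len(t)
freqs = np.fft.rfftfreq(len(t), d=1 / fs)

carrier = 516.0
target_bin = np.argmin(np.abs(freqs - carrier))
# Noise floor: mean magnitude in bins 10-40 Hz away on either side of the carrier.
side = (np.abs(freqs - carrier) > 10) & (np.abs(freqs - carrier) < 40)
snr_db = 20 * np.log10(spectrum[target_bin] / spectrum[side].mean())
print(f"Response magnitude at {carrier:.0f} Hz: {spectrum[target_bin]:.4f} (SNR {snr_db:.1f} dB)")
```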
Affiliation(s)
- Miguel Temboury-Gutierrez
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Ørsteds Plads, Building 352, DK-2800 Kgs. Lyngby, Denmark.
- Jonatan Märcher-Rørsted
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Ørsteds Plads, Building 352, DK-2800 Kgs. Lyngby, Denmark
- Michael Bille
- Copenhagen Hearing and Balance Center, Ear, Nose and Throat (ENT) and Audiology Clinic, Rigshospitalet, Copenhagen University Hospital, Denmark, Inge Lehmanns Vej 8, DK-2100 København Ø, Denmark
- Jesper Yde
- Copenhagen Hearing and Balance Center, Ear, Nose and Throat (ENT) and Audiology Clinic, Rigshospitalet, Copenhagen University Hospital, Denmark, Inge Lehmanns Vej 8, DK-2100 København Ø, Denmark
- Gerard Encina-Llamas
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Ørsteds Plads, Building 352, DK-2800 Kgs. Lyngby, Denmark; Copenhagen Hearing and Balance Center, Ear, Nose and Throat (ENT) and Audiology Clinic, Rigshospitalet, Copenhagen University Hospital, Denmark, Inge Lehmanns Vej 8, DK-2100 København Ø, Denmark; Faculty of Medicine. University of Vic - Central University of Catalonia (UVic-UCC), Vic, 08500, Catalonia - Spain
- Jens Hjortkjær
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Ørsteds Plads, Building 352, DK-2800 Kgs. Lyngby, Denmark; Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, Kettegård Allé 30, DK-2650 Hvidovre, Denmark
- Torsten Dau
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Ørsteds Plads, Building 352, DK-2800 Kgs. Lyngby, Denmark
11
Yang L, Wang S, Chen Y, Liang Y, Chen T, Wang Y, Fu X, Wang S. Effects of Age on the Auditory Cortex During Speech Perception in Noise: Evidence From Functional Near-Infrared Spectroscopy. Ear Hear 2024; 45:742-752. [PMID: 38268081 PMCID: PMC11008455 DOI: 10.1097/aud.0000000000001460]
Abstract
OBJECTIVES Age-related speech perception difficulties may be related to a decline in central auditory processing abilities, particularly in noisy or challenging environments. However, how the activation patterns related to speech stimulation in different noise situations change with normal aging has yet to be elucidated. In this study, we aimed to investigate the effects of noisy environments and aging on patterns of auditory cortical activation. DESIGN We analyzed the functional near-infrared spectroscopy signals of 20 young adults, 21 middle-aged adults, and 21 elderly adults, and evaluated their cortical response patterns to speech stimuli under five different signal-to-noise ratios (SNRs). In addition, we analyzed the behavior score, activation intensity, oxyhemoglobin variability, and dominant hemisphere, to investigate the effects of aging and noisy environments on auditory cortical activation. RESULTS Activation intensity and oxyhemoglobin variability both showed a decreasing trend with aging at an SNR of 0 dB; we also identified a strong correlation between activation intensity and age under this condition. However, we observed an inconsistent activation pattern when the SNR was 5 dB. Furthermore, our analysis revealed that the left hemisphere may be more susceptible to aging than the right hemisphere. Activation in the right hemisphere was more evident in older adults than in the left hemisphere; in contrast, younger adults showed leftward lateralization. CONCLUSIONS Our analysis showed that with aging, auditory cortical regions gradually become inflexible in noisy environments. Furthermore, changes in cortical activation patterns with aging may be related to SNR conditions, and speech at a low SNR that is still understandable may induce the highest level of activation. We also found that the left hemisphere was more affected by aging than the right hemisphere in speech perception tasks; the left-sided dominance observed in younger individuals gradually shifted to the right hemisphere with aging.
Affiliation(s)
- Liu Yang
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- These authors contributed equally to this work
- Songjian Wang
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- These authors contributed equally to this work
- Younuo Chen
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Ying Liang
- School of Biomedical Engineering, Capital Medical University, Beijing, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing, China
- Ting Chen
- School of Biomedical Engineering, Capital Medical University, Beijing, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing, China
- Yuan Wang
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Xinxing Fu
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Shuo Wang
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
12
Tune S, Obleser J. Neural attentional filters and behavioural outcome follow independent individual trajectories over the adult lifespan. eLife 2024; 12:RP92079. [PMID: 38470243 DOI: 10.7554/elife.92079]
Abstract
Preserved communication abilities promote healthy ageing. To this end, the age-typical loss of sensory acuity might in part be compensated for by an individual's preserved attentional neural filtering. Is such a compensatory brain-behaviour link longitudinally stable? Can it predict individual change in listening behaviour? Modelling electroencephalographic and behavioural data from N = 105 ageing individuals (39-82 y), we here show that individual listening behaviour and neural filtering ability follow largely independent developmental trajectories. First, despite the expected decline in hearing-threshold-derived sensory acuity, listening-task performance proved stable over 2 y. Second, neural filtering and behaviour were correlated only within each separate measurement timepoint (T1, T2). Longitudinally, however, our results raise caution on attention-guided neural filtering metrics as predictors of individual trajectories in listening behaviour: neither neural filtering at T1 nor its 2-year change could predict individual 2-year behavioural change, under a combination of modelling strategies.
Affiliation(s)
- Sarah Tune
- Center of Brain, Behavior, and Metabolism, University of Lübeck, Lübeck, Germany
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Jonas Obleser
- Center of Brain, Behavior, and Metabolism, University of Lübeck, Lübeck, Germany
- Department of Psychology, University of Lübeck, Lübeck, Germany
13
Ershaid H, Lizarazu M, McLaughlin D, Cooke M, Simantiraki O, Koutsogiannaki M, Lallier M. Contributions of listening effort and intelligibility to cortical tracking of speech in adverse listening conditions. Cortex 2024; 172:54-71. [PMID: 38215511 DOI: 10.1016/j.cortex.2023.11.018]
Abstract
Cortical tracking of speech is vital for speech segmentation and is linked to speech intelligibility. However, there is no clear consensus as to whether reduced intelligibility leads to a decrease or an increase in cortical speech tracking, warranting further investigation of the factors influencing this relationship. One such factor is listening effort, defined as the cognitive resources necessary for speech comprehension, and reported to have a strong negative correlation with speech intelligibility. Yet, no studies have examined the relationship between speech intelligibility, listening effort, and cortical tracking of speech. The aim of the present study was thus to examine these factors in quiet and distinct adverse listening conditions. Forty-nine normal hearing adults listened to sentences produced casually, presented in quiet and two adverse listening conditions: cafeteria noise and reverberant speech. Electrophysiological responses were registered with electroencephalogram, and listening effort was estimated subjectively using self-reported scores and objectively using pupillometry. Results indicated varying impacts of adverse conditions on intelligibility, listening effort, and cortical tracking of speech, depending on the preservation of the speech temporal envelope. The more distorted envelope in the reverberant condition led to higher listening effort, as reflected in higher subjective scores, increased pupil diameter, and stronger cortical tracking of speech in the delta band. These findings suggest that using measures of listening effort in addition to those of intelligibility is useful for interpreting cortical tracking of speech results. Moreover, reading and phonological skills of participants were positively correlated with listening effort in the cafeteria condition, suggesting a special role of expert language skills in processing speech in this noisy condition. Implications for future research and theories linking atypical cortical tracking of speech and reading disorders are further discussed.
Affiliation(s)
- Hadeel Ershaid
- Basque Center on Cognition, Brain and Language, San Sebastian, Spain.
- Mikel Lizarazu
- Basque Center on Cognition, Brain and Language, San Sebastian, Spain.
- Drew McLaughlin
- Basque Center on Cognition, Brain and Language, San Sebastian, Spain.
- Martin Cooke
- Ikerbasque, Basque Science Foundation, Bilbao, Spain.
- Marie Lallier
- Basque Center on Cognition, Brain and Language, San Sebastian, Spain; Ikerbasque, Basque Science Foundation, Bilbao, Spain.
14
Panela RA, Copelli F, Herrmann B. Reliability and generalizability of neural speech tracking in younger and older adults. Neurobiol Aging 2024; 134:165-180. [PMID: 38103477 DOI: 10.1016/j.neurobiolaging.2023.11.007]
Abstract
Neural tracking of spoken speech is considered a potential clinical biomarker for speech-processing difficulties, but the reliability of neural speech tracking is unclear. Here, younger and older adults listened to stories in two sessions while electroencephalography was recorded to investigate the reliability and generalizability of neural speech tracking. Speech tracking amplitude was larger for older than younger adults, consistent with an age-related loss of inhibition. The reliability of neural speech tracking was moderate (ICC ∼0.5-0.75) and tended to be higher for older adults. However, reliability was lower for speech tracking than for neural responses to noise bursts (ICC >0.8), which we used as a benchmark for maximum reliability. Neural speech tracking generalized moderately across different stories (ICC ∼0.5-0.6), which appeared greatest for audiobook-like stories spoken by the same person. Hence, a variety of stories could possibly be used for clinical assessments. Overall, the current data are important for developing a biomarker of speech processing but suggest that further work is needed to increase the reliability to meet clinical standards.
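As context for the test-retest reliability values quoted above, the sketch below computes a consistency-type intraclass correlation, ICC(3,1), for two sessions per participant. The formula is the standard two-way mixed-model version; the data and array names are made up for illustration, not taken from the study.

```python
# Minimal sketch: ICC(3,1) (two-way mixed, consistency, single measurement) for
# test-retest data with two sessions per participant. Made-up values for illustration.
import numpy as np

def icc_3_1(data):
    """data: (n_subjects, k_sessions) array. Returns consistency ICC(3,1)."""
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()     # between-subject SS
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()     # between-session SS
    ss_total = ((data - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)                                 # between-subject MS
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))   # residual MS
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Hypothetical speech-tracking amplitudes for 8 participants in 2 sessions.
tracking = np.array([[0.52, 0.49], [0.61, 0.66], [0.43, 0.40], [0.71, 0.68],
                     [0.55, 0.59], [0.38, 0.41], [0.66, 0.63], [0.47, 0.50]])
print(f"ICC(3,1) = {icc_3_1(tracking):.2f}")
```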
Affiliation(s)
- Ryan A Panela
- Rotman Research Institute, Baycrest Academy for Research and Education, M6A 2E1 North York, ON, Canada; Department of Psychology, University of Toronto, M5S 1A1 Toronto, ON, Canada
- Francesca Copelli
- Rotman Research Institute, Baycrest Academy for Research and Education, M6A 2E1 North York, ON, Canada; Department of Psychology, University of Toronto, M5S 1A1 Toronto, ON, Canada
- Björn Herrmann
- Rotman Research Institute, Baycrest Academy for Research and Education, M6A 2E1 North York, ON, Canada; Department of Psychology, University of Toronto, M5S 1A1 Toronto, ON, Canada.
15
McClaskey CM. Neural hyperactivity and altered envelope encoding in the central auditory system: Changes with advanced age and hearing loss. Hear Res 2024; 442:108945. [PMID: 38154191 PMCID: PMC10942735 DOI: 10.1016/j.heares.2023.108945]
Abstract
Temporal modulations are ubiquitous features of sound signals that are important for auditory perception. The perception of temporal modulations, or temporal processing, is known to decline with aging and hearing loss and negatively impact auditory perception in general and speech recognition specifically. However, neurophysiological literature also provides evidence of exaggerated or enhanced encoding of specifically temporal envelopes in aging and hearing loss, which may arise from changes in inhibitory neurotransmission and neuronal hyperactivity. This review paper describes the physiological changes to the neural encoding of temporal envelopes that have been shown to occur with age and hearing loss and discusses the role of disinhibition and neural hyperactivity in contributing to these changes. Studies in both humans and animal models suggest that aging and hearing loss are associated with stronger neural representations of both periodic amplitude modulation envelopes and of naturalistic speech envelopes, but primarily for low-frequency modulations (<80 Hz). Although the frequency dependence of these results is generally taken as evidence of amplified envelope encoding at the cortex and impoverished encoding at the midbrain and brainstem, there is additional evidence to suggest that exaggerated envelope encoding may also occur subcortically, though only for envelopes with low modulation rates. A better understanding of how temporal envelope encoding is altered in aging and hearing loss, and the contexts in which neural responses are exaggerated/diminished, may aid in the development of interventions, assistive devices, and treatment strategies that work to ameliorate age- and hearing-loss-related auditory perceptual deficits.
Affiliation(s)
- Carolyn M McClaskey
- Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave, MSC 550, Charleston, SC 29425, United States.
16
Zhou M, Soleimanpour R, Mahajan A, Anderson S. Hearing Aid Delay Effects on Neural Phase Locking. Ear Hear 2024; 45:142-150. [PMID: 37434283 PMCID: PMC10718218 DOI: 10.1097/aud.0000000000001408]
Abstract
OBJECTIVES This study was designed to examine the effects of hearing aid delay on the neural representation of the temporal envelope. It was hypothesized that the comb-filter effect would disrupt neural phase locking, and that shorter hearing aid delays would minimize this effect. DESIGN Twenty-one participants, ages 50 years and older, with bilateral mild-to-moderate sensorineural hearing loss were recruited through print advertisements in local senior newspapers. They were fitted with three different sets of hearing aids with average processing delays that ranged from 0.5 to 7 msec. Envelope-following responses (EFRs) were recorded to a 50-msec /da/ syllable presented through a speaker placed 1 meter in front of the participants while they wore the three sets of hearing aids with open tips. Phase-locking factor (PLF) and stimulus-to-response (STR) correlations were calculated from these recordings. RESULTS Recordings obtained while wearing hearing aids with a 0.5-msec processing delay showed higher PLF and STR correlations compared with those with either 5-msec or 7-msec delays. No differences were noted between recordings of hearing aids with 5-msec and 7-msec delays. The degree of difference between hearing aids was greater for individuals who had milder degrees of hearing loss. CONCLUSIONS Hearing aid processing delays disrupt phase locking due to mixing of processed and unprocessed sounds in the ear canal when using open domes. Given previous work showing that better phase locking correlates with better speech-in-noise performance, consideration should be given to reducing hearing aid processing delay in the design of hearing aid algorithms.
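For readers unfamiliar with the two metrics named above, the sketch below computes a phase-locking factor (inter-trial consistency of instantaneous phase) and a stimulus-to-response correlation over candidate lags. The sampling rate, epoch length, lag range, and signals are assumptions for illustration, not the study's parameters.

```python
# Minimal sketch: phase-locking factor (PLF) across trials and a lagged
# stimulus-to-response (STR) correlation. Illustrative parameters only.
import numpy as np
from scipy.signal import hilbert

fs = 2000
rng = np.random.default_rng(2)
n_trials, n_samples = 100, int(0.05 * fs)                 # 50-ms epoch, assumed
stim_envelope = np.sin(2 * np.pi * 100 * np.arange(n_samples) / fs)   # stand-in envelope
trials = 0.5 * stim_envelope + rng.standard_normal((n_trials, n_samples))

# PLF: magnitude of the mean unit phase vector across trials, per time point.
phases = np.angle(hilbert(trials, axis=1))
plf = np.abs(np.exp(1j * phases).mean(axis=0))
print(f"Mean PLF over the epoch: {plf.mean():.2f}")

# STR correlation: Pearson r between stimulus envelope and averaged response,
# maximised over small positive lags (neural delay).
avg_response = trials.mean(axis=0)
best_r = max(
    np.corrcoef(stim_envelope[:n_samples - lag], avg_response[lag:])[0, 1]
    for lag in range(0, int(0.01 * fs))                   # 0-10 ms lags
)
print(f"Best STR correlation: {best_r:.2f}")
```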
Affiliation(s)
- Mary Zhou
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, USA
- Roksana Soleimanpour
- Department of Biological Sciences, University of Maryland, College Park, Maryland, USA
- Aakriti Mahajan
- Department of Biological Sciences, University of Maryland, College Park, Maryland, USA
- Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, USA
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland, USA
17
Wang S, Chen Y, Liu Y, Yang L, Wang Y, Fu X, Hu J, Pugh E, Wang S. Aging effects on dual-route speech processing networks during speech perception in noise. Hum Brain Mapp 2024; 45:e26577. [PMID: 38224542 PMCID: PMC10789214 DOI: 10.1002/hbm.26577]
Abstract
Healthy aging leads to complex changes in the functional network of speech processing in a noisy environment. The dual-route neural architecture has been applied to the study of speech processing. Although evidence suggests that senescence increases activity in brain regions across the dorsal and ventral streams to offset reduced peripheral input, the regulatory mechanism of dual-route functional networks underlying such compensation remains largely unknown. Here, by utilizing functional near-infrared spectroscopy (fNIRS), we investigated the compensatory mechanism of dual-route functional connectivity and its relationship with healthy aging, using a speech perception task at varying signal-to-noise ratios (SNR) in healthy individuals (young adults, middle-aged adults, and older adults). Results showed a significant age-related decrease in speech perception scores as the SNR decreased. Analysis of the dual-route speech processing networks showed age-related increases in the functional connectivity of Wernicke's area and the homologous Wernicke's area. To further clarify the age-related characteristics of the dual-route speech processing networks, graph-theoretical network analysis revealed an age-related increase in the efficiency of the networks, and age-related differences in nodal characteristics were found in both Wernicke's area and the homologous Wernicke's area in noisy environments. Thus, Wernicke's area might be a key network hub for maintaining efficient information transfer across the speech processing network with healthy aging. Moreover, older adults would recruit more resources from the homologous Wernicke's area in a noisy environment. Recruitment of the homologous Wernicke's area might provide a means of compensation for older adults when decoding speech in an adverse listening environment. Together, our results characterized dual-route speech processing networks in varying noise environments and provided new insight into compensatory theories of how aging modulates the dual-route speech processing functional networks.
Affiliation(s)
- Songjian Wang
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Younuo Chen
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Yi Liu
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Liu Yang
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Yuan Wang
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Xinxing Fu
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Jiong Hu
- Department of Audiology, University of the Pacific, San Francisco, California, USA
- Shuo Wang
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
18
Wang B, Xu X, Niu Y, Wu C, Wu X, Chen J. EEG-based auditory attention decoding with audiovisual speech for hearing-impaired listeners. Cereb Cortex 2023; 33:10972-10983. [PMID: 37750333 DOI: 10.1093/cercor/bhad325]
Abstract
Auditory attention decoding (AAD) was used to determine the attended speaker during an auditory selective attention task. However, the auditory factors modulating AAD remained unclear for hearing-impaired (HI) listeners. In this study, scalp electroencephalogram (EEG) was recorded with an auditory selective attention paradigm, in which HI listeners were instructed to attend one of the two simultaneous speech streams with or without congruent visual input (articulation movements), and at a high or low target-to-masker ratio (TMR). Meanwhile, behavioral hearing tests (i.e. audiogram, speech reception threshold, temporal modulation transfer function) were used to assess listeners' individual auditory abilities. The results showed that both visual input and increasing TMR could significantly enhance the cortical tracking of the attended speech and AAD accuracy. Further analysis revealed that the audiovisual (AV) gain in attended speech cortical tracking was significantly correlated with listeners' auditory amplitude modulation (AM) sensitivity, and the TMR gain in attended speech cortical tracking was significantly correlated with listeners' hearing thresholds. Temporal response function analysis revealed that subjects with higher AM sensitivity demonstrated more AV gain over the right occipitotemporal and bilateral frontocentral scalp electrodes.
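To illustrate the decoding step that the AAD accuracy above refers to, the sketch below shows the common correlation-based decision rule: reconstruct an envelope from the EEG with a pre-trained linear decoder and attribute attention to whichever talker's envelope correlates more with it. The decoder output and signals here are placeholders, not the study's trained model.

```python
# Minimal sketch of correlation-based auditory attention decoding (AAD):
# compare a decoder-reconstructed envelope against the two talkers' envelopes.
# Placeholder decoder output and signals; not the study's model or data.
import numpy as np

rng = np.random.default_rng(3)
fs, dur = 64, 30                               # 64-Hz envelope rate, 30-s trial (assumed)
n = fs * dur

env_attended = rng.standard_normal(n)          # stand-in envelopes of the two talkers
env_ignored = rng.standard_normal(n)

# Pretend reconstruction: the attended envelope plus noise, as a trained
# backward model would ideally produce from the EEG.
reconstructed = env_attended + 1.5 * rng.standard_normal(n)

r_attended = np.corrcoef(reconstructed, env_attended)[0, 1]
r_ignored = np.corrcoef(reconstructed, env_ignored)[0, 1]
decoded = "talker 1" if r_attended > r_ignored else "talker 2"
print(f"r_attended = {r_attended:.2f}, r_ignored = {r_ignored:.2f} -> decoded: {decoded}")
```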
Affiliation(s)
- Bo Wang
- Speech and Hearing Research Center, Key Laboratory of Machine Perception (Ministry of Education), School of Intelligence Science and Technology, Peking University, Beijing 100871, China
- Xiran Xu
- Speech and Hearing Research Center, Key Laboratory of Machine Perception (Ministry of Education), School of Intelligence Science and Technology, Peking University, Beijing 100871, China
- Yadong Niu
- Speech and Hearing Research Center, Key Laboratory of Machine Perception (Ministry of Education), School of Intelligence Science and Technology, Peking University, Beijing 100871, China
- Chao Wu
- School of Nursing, Peking University, Beijing 100191, China
- Xihong Wu
- Speech and Hearing Research Center, Key Laboratory of Machine Perception (Ministry of Education), School of Intelligence Science and Technology, Peking University, Beijing 100871, China
- National Biomedical Imaging Center, College of Future Technology, Beijing 100871, China
- Jing Chen
- Speech and Hearing Research Center, Key Laboratory of Machine Perception (Ministry of Education), School of Intelligence Science and Technology, Peking University, Beijing 100871, China
- National Biomedical Imaging Center, College of Future Technology, Beijing 100871, China
19
David W, Verwaerde E, Gransier R, Wouters J. Effects of analysis window on 40-Hz auditory steady-state responses in cochlear implant users. Hear Res 2023; 438:108882. [PMID: 37688847 DOI: 10.1016/j.heares.2023.108882]
Abstract
Auditory steady-state responses (ASSRs) are phase-locked responses of the auditory system to the envelope of a stimulus. These responses can be used as an objective proxy to assess temporal envelope processing and its related functional outcomes such as hearing thresholds and speech perception, in normal-hearing listeners, in persons with hearing impairment, as well as in cochlear-implant (CI) users. While ASSRs are traditionally measured using a continuous stimulation paradigm, an alternative is the intermittent stimulation paradigm, whereby stimuli are presented with silence intervals in between. This paradigm could be more useful in a clinical setting as it allows for other neural responses to be analysed concurrently. One clinical use case of the intermittent paradigm is to objectively program CIs during an automatic fitting session whereby electrically evoked ASSRs (eASSRs) as well as other evoked potentials are used to predict behavioural thresholds. However, there is no consensus yet about the optimal analysis parameters for an intermittent paradigm in order to detect and measure eASSRs reliably. In this study, we used the intermittent paradigm to evoke eASSRs in adult CI users and investigated whether the early response buildup affects the response measurement outcomes. To this end, we varied the starting timepoint and length of the analysis window within which the responses were analysed. We used the amplitude, signal-to-noise ratio (SNR), phase, and pairwise phase consistency (PPC) to characterize the responses. Moreover, we set out to find the optimal stimulus duration for efficient and reliable eASSR measurements. These analyses were performed at two stimulation levels, i.e., 100% and 50% of the dynamic range of each participant. Results revealed that inclusion of the first 300 ms in the analysis window leads to overestimation of response amplitude and underestimation of response phase. Additionally, the response SNR and PPC were not affected by the inclusion of the first 300 ms in the analysis window. However, the latter two metrics were highly dependent on the stimulus duration which complicates comparisons across studies. Finally, the optimal stimulus duration for quick and reliable characterization of eASSRs was found to be around 800 ms for the stimulation level of 100% DR. These findings suggest that inclusion of the early onset period of eASSR recordings negatively influences the response measurement outcomes and that efficient and reliable eASSR measurements are possible using stimuli of around 800 ms long. This will pave the path for the development of a clinically feasible eASSR measurement in CI users.
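The response metrics discussed above can be made concrete with the sketch below, which computes a 40-Hz spectral SNR (target bin versus neighbouring bins) and the pairwise phase consistency (the bias-free variant of inter-trial phase coherence) for a set of trial phases. All signal properties, window lengths, and bin choices are illustrative assumptions, not the study's analysis parameters.

```python
# Minimal sketch: 40-Hz ASSR spectral SNR and pairwise phase consistency (PPC).
# Illustrative signals and window lengths; not the study's parameters.
import numpy as np

fs = 1000
rng = np.random.default_rng(4)
n_trials, n_samples = 60, int(0.8 * fs)                  # 800-ms analysis window, assumed
t = np.arange(n_samples) / fs
trials = 0.3 * np.sin(2 * np.pi * 40 * t) + rng.standard_normal((n_trials, n_samples))

# Spectral SNR at 40 Hz: power in the target bin vs. mean power of neighbouring bins.
spectrum = np.abs(np.fft.rfft(trials.mean(axis=0))) ** 2
freqs = np.fft.rfftfreq(n_samples, d=1 / fs)
target = np.argmin(np.abs(freqs - 40.0))
neighbours = (np.abs(freqs - 40.0) > 1.5) & (np.abs(freqs - 40.0) < 10.0)
snr_db = 10 * np.log10(spectrum[target] / spectrum[neighbours].mean())

# PPC: bias-free consistency of the 40-Hz phase across trials (Vinck et al., 2010).
phases = np.angle(np.fft.rfft(trials, axis=1)[:, target])
resultant = np.abs(np.exp(1j * phases).sum())
ppc = (resultant ** 2 - n_trials) / (n_trials * (n_trials - 1))

print(f"40-Hz SNR: {snr_db:.1f} dB, PPC: {ppc:.2f}")
```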
Collapse
Affiliation(s)
- Wouter David
- ExpORL, Dept. of Neurosciences, KU Leuven, Herestraat 49 box 721, 3000 Leuven, Belgium.
| | - Elise Verwaerde
- ExpORL, Dept. of Neurosciences, KU Leuven, Herestraat 49 box 721, 3000 Leuven, Belgium
| | - Robin Gransier
- ExpORL, Dept. of Neurosciences, KU Leuven, Herestraat 49 box 721, 3000 Leuven, Belgium
| | - Jan Wouters
- ExpORL, Dept. of Neurosciences, KU Leuven, Herestraat 49 box 721, 3000 Leuven, Belgium
| |
Collapse
|
20
|
Xu N, Qin X, Zhou Z, Shan W, Ren J, Yang C, Lu L, Wang Q. Age differentially modulates the cortical tracking of the lower and higher level linguistic structures during speech comprehension. Cereb Cortex 2023; 33:10463-10474. [PMID: 37566910 DOI: 10.1093/cercor/bhad296] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2022] [Revised: 07/23/2023] [Accepted: 07/24/2023] [Indexed: 08/13/2023] Open
Abstract
Speech comprehension requires listeners to rapidly parse continuous speech into hierarchically organized linguistic structures (i.e., syllable, word, phrase, and sentence) and entrain neural activity to the rhythms of the different linguistic levels. Aging is accompanied by changes in speech processing, but it remains unclear how aging affects different levels of linguistic representation. Here, we recorded magnetoencephalography signals in older and younger groups while subjects actively and passively listened to continuous speech in which the hierarchical linguistic structures of word, phrase, and sentence were tagged at 4, 2, and 1 Hz, respectively. A newly developed parameterization algorithm was applied to separate the periodic linguistic tracking from the aperiodic component. We found enhanced lower-level (word-level) tracking, reduced higher-level (phrasal- and sentential-level) tracking, and a reduced aperiodic offset in older compared with younger adults. Furthermore, attentional modulation of sentential-level tracking was larger for younger than for older adults. Notably, neuro-behavioral analyses showed that subjects' behavioral accuracy was positively correlated with higher-level linguistic tracking and negatively correlated with lower-level linguistic tracking. Overall, these results suggest that enhanced lower-level linguistic tracking, reduced higher-level linguistic tracking, and less flexible attentional modulation may underpin the aging-related decline in speech comprehension.
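The frequency-tagging logic can be illustrated with a toy sketch: compute the spectrum of the trial-averaged response, fit a 1/f-like aperiodic trend in log-log space while excluding the tagged bins, and read the residual peaks at the word (4 Hz), phrase (2 Hz), and sentence (1 Hz) rates as linguistic tracking. This is a simplified stand-in for the parameterization algorithm used in the study; the simulated signal, fitting range, and exclusion width are assumptions.

```python
import numpy as np

def tagged_tracking(evoked, fs, tag_freqs=(1.0, 2.0, 4.0)):
    """Power above a 1/f-like aperiodic fit at the tagged frequencies.
    'evoked' is the trial-averaged response (1-D). Simplified illustration only."""
    n = evoked.size
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    power = np.abs(np.fft.rfft(evoked) / n) ** 2

    # Fit the aperiodic component as a line in log-log space,
    # excluding bins close to the tagged frequencies.
    keep = (freqs > 0.3) & (freqs < 20)
    for f in tag_freqs:
        keep &= np.abs(freqs - f) > 0.2
    slope, intercept = np.polyfit(np.log10(freqs[keep]), np.log10(power[keep]), 1)
    aperiodic = 10 ** (intercept + slope * np.log10(freqs[1:]))

    # Periodic (tagged) tracking = power above the aperiodic fit at each rate
    return {f: power[np.argmin(np.abs(freqs - f))]
               - aperiodic[np.argmin(np.abs(freqs - f)) - 1]
            for f in tag_freqs}

# Toy response with sentence- (1 Hz), phrase- (2 Hz), and word-rate (4 Hz) peaks
fs, dur = 200, 40
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(1)
sig = (0.5 * np.sin(2 * np.pi * 1 * t) + 0.8 * np.sin(2 * np.pi * 2 * t)
       + 1.0 * np.sin(2 * np.pi * 4 * t) + rng.normal(size=t.size))
print(tagged_tracking(sig, fs))
```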
Collapse
Affiliation(s)
- Na Xu
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- National Clinical Research Center for Neurological Diseases, Beijing 100070, China
| | - Xiaoxiao Qin
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- National Clinical Research Center for Neurological Diseases, Beijing 100070, China
| | - Ziqi Zhou
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- National Clinical Research Center for Neurological Diseases, Beijing 100070, China
| | - Wei Shan
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- National Clinical Research Center for Neurological Diseases, Beijing 100070, China
| | - Jiechuan Ren
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- National Clinical Research Center for Neurological Diseases, Beijing 100070, China
| | - Chunqing Yang
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- National Clinical Research Center for Neurological Diseases, Beijing 100070, China
| | - Lingxi Lu
- Center for the Cognitive Science of Language, Beijing Language and Culture University, Beijing 100083, China
| | - Qun Wang
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- National Clinical Research Center for Neurological Diseases, Beijing 100070, China
- Beijing Institute of Brain Disorders, Collaborative Innovation Center for Brain Disorders, Capital Medical University, Beijing 100069, China
| |
Collapse
|
21
|
Yasmin S, Irsik VC, Johnsrude IS, Herrmann B. The effects of speech masking on neural tracking of acoustic and semantic features of natural speech. Neuropsychologia 2023; 186:108584. [PMID: 37169066 DOI: 10.1016/j.neuropsychologia.2023.108584] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Revised: 04/30/2023] [Accepted: 05/08/2023] [Indexed: 05/13/2023]
Abstract
Listening environments contain background sounds that mask speech and lead to communication challenges. Sensitivity to slow acoustic fluctuations in speech can help segregate speech from background noise. Semantic context can also facilitate speech perception in noise, for example, by enabling prediction of upcoming words. However, not much is known about how different degrees of background masking affect the neural processing of acoustic and semantic features during naturalistic speech listening. In the current electroencephalography (EEG) study, participants listened to engaging, spoken stories masked at different levels of multi-talker babble to investigate how neural activity in response to acoustic and semantic features changes with acoustic challenges, and how such effects relate to speech intelligibility. The pattern of neural response amplitudes associated with both acoustic and semantic speech features across masking levels was U-shaped, such that amplitudes were largest for moderate masking levels. This U-shape may be due to increased attentional focus when speech comprehension is challenging, but manageable. The latency of the neural responses increased linearly with increasing background masking, and neural latency change associated with acoustic processing most closely mirrored the changes in speech intelligibility. Finally, tracking responses related to semantic dissimilarity remained robust until severe speech masking (-3 dB SNR). The current study reveals that neural responses to acoustic features are highly sensitive to background masking and decreasing speech intelligibility, whereas neural responses to semantic features are relatively robust, suggesting that individuals track the meaning of the story well even in moderate background sound.
Collapse
Affiliation(s)
- Sonia Yasmin
- Department of Psychology & the Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada.
| | - Vanessa C Irsik
- Department of Psychology & the Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada
| | - Ingrid S Johnsrude
- Department of Psychology & the Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada; School of Communication and Speech Disorders, The University of Western Ontario, London, ON, N6A 5B7, Canada
| | - Björn Herrmann
- Rotman Research Institute, Baycrest, M6A 2E1, Toronto, ON, Canada; Department of Psychology, University of Toronto, M5S 1A1, Toronto, ON, Canada
| |
Collapse
|
22
|
Karunathilake IMD, Dunlap JL, Perera J, Presacco A, Decruy L, Anderson S, Kuchinsky SE, Simon JZ. Effects of aging on cortical representations of continuous speech. J Neurophysiol 2023; 129:1359-1377. [PMID: 37096924 PMCID: PMC10202479 DOI: 10.1152/jn.00356.2022] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Revised: 04/04/2023] [Accepted: 04/20/2023] [Indexed: 04/26/2023] Open
Abstract
Understanding speech in a noisy environment is crucial in day-to-day interactions and yet becomes more challenging with age, even for healthy aging. Age-related changes in the neural mechanisms that enable speech-in-noise listening have been investigated previously; however, the extent to which age affects the timing and fidelity of encoding of target and interfering speech streams is not well understood. Using magnetoencephalography (MEG), we investigated how continuous speech is represented in auditory cortex in the presence of interfering speech in younger and older adults. Cortical representations were obtained from neural responses that time-locked to the speech envelopes with speech envelope reconstruction and temporal response functions (TRFs). TRFs showed three prominent peaks corresponding to auditory cortical processing stages: early (∼50 ms), middle (∼100 ms), and late (∼200 ms). Older adults showed exaggerated speech envelope representations compared with younger adults. Temporal analysis revealed both that the age-related exaggeration starts as early as ∼50 ms and that older adults needed a substantially longer integration time window to achieve their better reconstruction of the speech envelope. As expected, with increased speech masking, envelope reconstruction for the attended talker decreased and all three TRF peaks were delayed, with aging contributing additionally to the reduction. Interestingly, for older adults the late peak was delayed, suggesting that this late peak may receive contributions from multiple sources. Together, these results suggest that several mechanisms are at play that compensate for age-related temporal processing deficits at several stages but are not able to fully reestablish unimpaired speech perception. NEW & NOTEWORTHY We observed age-related changes in cortical temporal processing of continuous speech that may be related to older adults' difficulty in understanding speech in noise. These changes occur in both timing and strength of the speech representations at different cortical processing stages and depend on both noise condition and selective attention. Critically, their dependence on noise condition changes dramatically among the early, middle, and late cortical processing stages, underscoring how aging differentially affects these stages.
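Envelope reconstruction of this kind is typically a regularized backward (decoding) model: time-lagged neural channels are regressed onto the speech envelope, and fidelity is the correlation between the reconstructed and actual envelopes on held-out data. A minimal sketch under that assumption follows; the lag range, regularization, and simulated data are illustrative, not the study's analysis code.

```python
import numpy as np

def lagged_design(neural, lags):
    """Decoder design matrix: column block i holds the neural channels advanced
    by lags[i] samples (the neural response lags the stimulus it encodes)."""
    T, C = neural.shape
    X = np.zeros((T, C * len(lags)))
    for i, L in enumerate(lags):
        if L == 0:
            X[:, i * C:(i + 1) * C] = neural
        else:
            X[:-L, i * C:(i + 1) * C] = neural[L:]
    return X

def reconstruct_envelope(neural, envelope, fs, lam=1e2, max_lag_s=0.25):
    """Backward model: ridge-regress time-lagged neural channels onto the speech
    envelope; return Pearson r between reconstructed and actual envelope on a
    held-out half of the data."""
    lags = np.arange(int(max_lag_s * fs))
    X = lagged_design(neural, lags)
    half = X.shape[0] // 2
    Xtr, Xte, ytr, yte = X[:half], X[half:], envelope[:half], envelope[half:]
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(X.shape[1]), Xtr.T @ ytr)
    return np.corrcoef(Xte @ w, yte)[0, 1]

# Toy data: two "channels" carrying a delayed, noisy copy of the envelope
fs, T = 100, 6000
rng = np.random.default_rng(2)
envelope = np.convolve(rng.random(T), np.ones(20) / 20, mode="same")
neural = np.stack([np.roll(envelope, 8), np.roll(envelope, 12)], axis=1)
neural += 0.5 * rng.normal(size=neural.shape)
print("reconstruction r =", round(reconstruct_envelope(neural, envelope, fs), 2))
```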
Collapse
Affiliation(s)
- I M Dushyanthi Karunathilake
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, United States
| | - Jason L Dunlap
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, United States
| | - Janani Perera
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, United States
| | - Alessandro Presacco
- Institute for Systems Research, University of Maryland, College Park, Maryland, United States
| | - Lien Decruy
- Institute for Systems Research, University of Maryland, College Park, Maryland, United States
| | - Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, United States
| | - Stefanie E Kuchinsky
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland, United States
| | - Jonathan Z Simon
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, United States
- Institute for Systems Research, University of Maryland, College Park, Maryland, United States
- Department of Biology, University of Maryland, College Park, Maryland, United States
| |
Collapse
|
23
|
Ding N, Gao J, Wang J, Sun W, Fang M, Liu X, Zhao H. Speech recognition in echoic environments and the effect of aging and hearing impairment. Hear Res 2023; 431:108725. [PMID: 36931021 DOI: 10.1016/j.heares.2023.108725] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 02/12/2023] [Accepted: 02/23/2023] [Indexed: 03/01/2023]
Abstract
Temporal modulations provide critical cues for speech recognition. When the temporal modulations are distorted by, e.g., reverberation, speech intelligibility drops, and the drop in speech intelligibility can be explained by the amount of distortion to the speech modulation spectrum, i.e., the spectrum of temporal modulations. Here, we test a condition in which speech is contaminated by a single echo. Speech is delayed by either 0.125 s or 0.25 s to create an echo, and these two conditions notch out the temporal modulations at 4 or 2 Hz, respectively. We evaluate how well young and older listeners can recognize such echoic speech. For young listeners, the speech recognition rate is not influenced by the echo, even when they are exposed to the first echoic sentence. For older listeners, the speech recognition rate drops to less than 60% when listening to the first echoic sentence but rapidly recovers to above 75% with exposure to a few sentences. Further analyses reveal that both age and hearing threshold influence the recognition of echoic speech in the older listeners. These results show that the recognition of echoic speech cannot be fully explained by distortions to the modulation spectrum, and they suggest that the auditory system has mechanisms to effectively compensate for the influence of single echoes.
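The effect of a single echo on the modulation spectrum follows from adding a delayed copy of the signal: the envelope is effectively filtered by 1 + exp(-i2πfτ), whose magnitude has its first null at f = 1/(2τ), i.e., 4 Hz for a 0.125-s delay and 2 Hz for a 0.25-s delay. A short sketch of that relationship (illustrative frequencies, not the study's stimuli):

```python
import numpy as np

def echo_modulation_gain(delay_s, freqs_hz):
    """Gain applied to the temporal envelope when a signal is summed with a
    single echo delayed by delay_s: |1 + exp(-i * 2*pi * f * tau)|."""
    return np.abs(1 + np.exp(-2j * np.pi * np.asarray(freqs_hz) * delay_s))

freqs = [1.0, 2.0, 4.0, 8.0]
for tau in (0.125, 0.25):
    gains = echo_modulation_gain(tau, freqs)
    print(f"echo delay {tau:5.3f} s: " +
          ", ".join(f"{f:g} Hz -> {g:.2f}" for f, g in zip(freqs, gains)))
# A 0.125-s echo nulls the 4-Hz modulation; a 0.25-s echo nulls the 2-Hz modulation.
```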
Collapse
Affiliation(s)
- Nai Ding
- College of Biomedical Engineering and Instrument Science, Department of Nursing, The Second Affiliated Hospital of Zhejiang University School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China.
| | - Jiaxin Gao
- College of Biomedical Engineering and Instrument Science, Department of Nursing, The Second Affiliated Hospital of Zhejiang University School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
| | - Jing Wang
- College of Biomedical Engineering and Instrument Science, Department of Nursing, The Second Affiliated Hospital of Zhejiang University School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
| | - Wenhui Sun
- Research Center for Applied Mathematics and Machine Intelligence, Research Institute of Basic Theories, Zhejiang Lab, Hangzhou, Zhejiang, China
| | - Mingxuan Fang
- College of Biomedical Engineering and Instrument Science, Department of Nursing, The Second Affiliated Hospital of Zhejiang University School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
| | - Xiaoling Liu
- College of Biomedical Engineering and Instrument Science, Department of Nursing, The Second Affiliated Hospital of Zhejiang University School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
| | - Hua Zhao
- College of Biomedical Engineering and Instrument Science, Department of Nursing, The Second Affiliated Hospital of Zhejiang University School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China.
| |
Collapse
|
24
|
Gillis M, Kries J, Vandermosten M, Francart T. Neural tracking of linguistic and acoustic speech representations decreases with advancing age. Neuroimage 2023; 267:119841. [PMID: 36584758 PMCID: PMC9878439 DOI: 10.1016/j.neuroimage.2022.119841] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Revised: 12/21/2022] [Accepted: 12/26/2022] [Indexed: 12/29/2022] Open
Abstract
BACKGROUND Older adults process speech differently, but it is not yet clear how aging affects different levels of processing natural, continuous speech, both in terms of bottom-up acoustic analysis and top-down generation of linguistic-based predictions. We studied natural speech processing across the adult lifespan via electroencephalography (EEG) measurements of neural tracking. GOALS Our goals are to analyze the unique contribution of linguistic speech processing across the adult lifespan using natural speech, while controlling for the influence of acoustic processing. Moreover, we also studied acoustic processing across age. In particular, we focus on changes in spatial and temporal activation patterns in response to natural speech across the lifespan. METHODS 52 normal-hearing adults between 17 and 82 years of age listened to a naturally spoken story while the EEG signal was recorded. We investigated the effect of age on acoustic and linguistic processing of speech. Because age correlated with hearing capacity and measures of cognition, we investigated whether the observed age effect is mediated by these factors. Furthermore, we investigated whether there is an effect of age on hemisphere lateralization and on spatiotemporal patterns of the neural responses. RESULTS Our EEG results showed that linguistic speech processing declines with advancing age. Moreover, as age increased, the neural response latency to certain aspects of linguistic speech processing increased. Acoustic neural tracking (NT) also decreased with increasing age, which is at odds with the literature. In contrast to linguistic processing, older subjects showed shorter latencies for early acoustic responses to speech. No evidence was found for hemispheric lateralization in either younger or older adults during linguistic speech processing. Most of the observed aging effects on acoustic and linguistic processing were not explained by age-related decline in hearing capacity or cognition. However, our results suggest that the decrease in linguistic neural tracking with advancing age at the word level is partially due to an age-related decline in cognition rather than a robust effect of age. CONCLUSION Spatial and temporal characteristics of the neural responses to continuous speech change across the adult lifespan for both acoustic and linguistic speech processing. These changes may be traces of structural and/or functional change that occurs with advancing age.
Collapse
Affiliation(s)
- Marlies Gillis
- Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Belgium.
| | - Jill Kries
- Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Belgium.
| | - Maaike Vandermosten
- Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Belgium
| | - Tom Francart
- Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Belgium
| |
Collapse
|
25
|
Lai J, Alain C, Bidelman GM. Cortical-brainstem interplay during speech perception in older adults with and without hearing loss. Front Neurosci 2023; 17:1075368. [PMID: 36816123 PMCID: PMC9932544 DOI: 10.3389/fnins.2023.1075368] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2022] [Accepted: 01/17/2023] [Indexed: 02/05/2023] Open
Abstract
Introduction Real-time modulation of brainstem frequency-following responses (FFRs) by online changes in cortical arousal state via the corticofugal (top-down) pathway has been demonstrated previously in young adults and is more prominent in the presence of background noise. FFRs during high cortical arousal states also have a stronger relationship with speech perception. Aging is associated with increased auditory brain responses, which might reflect degraded inhibitory processing within the peripheral and ascending pathways, or changes in attentional control regulation via descending auditory pathways. Here, we tested the hypothesis that online corticofugal interplay is impacted by age-related hearing loss. Methods We measured EEG in older adults with normal hearing (NH) and mild-to-moderate hearing loss (HL) while they performed speech identification tasks in different noise backgrounds. We measured α power to index online cortical arousal states during task engagement. Subsequently, we split brainstem speech-FFRs, on a trial-by-trial basis, according to fluctuations in concomitant cortical α power into low- or high-α FFRs to index cortical-brainstem modulation. Results We found that cortical α power was smaller in the HL than in the NH group. In NH listeners, α-FFR modulation for clear speech (i.e., without noise) also resembled that previously observed in younger adults for speech in noise. Cortical-brainstem modulation was further diminished in HL older adults in the clear condition and by noise in NH older adults. Machine-learning classification showed that low-α FFR frequency spectra yielded higher accuracy for classifying listeners' perceptual performance in both NH and HL participants. Moreover, low-α FFRs decreased with increased hearing thresholds at 0.5-2 kHz for clear speech, but noise generally reduced low-α FFRs in the HL group. Discussion Collectively, our study reveals that cortical arousal state actively shapes brainstem speech representations and provides a potential new mechanism for older listeners' difficulties perceiving speech in cocktail party-like listening situations, in the form of a miscoordination between cortical and subcortical levels of auditory processing.
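The trial-splitting step can be sketched simply: estimate single-trial cortical α power (roughly 8 to 12 Hz), median-split trials, and average the concurrently recorded FFR separately for low- and high-α trials. The filter settings, sampling rate, and simulated data below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def split_ffr_by_alpha(cortical, ffr, fs, band=(8.0, 12.0)):
    """Median-split trials by single-trial cortical alpha power and return the
    trial-averaged FFR for low- and high-alpha trials.
    cortical, ffr: arrays of shape (n_trials, n_samples)."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    alpha = sosfiltfilt(sos, cortical, axis=1)
    alpha_power = (alpha ** 2).mean(axis=1)            # per-trial alpha power
    low = alpha_power <= np.median(alpha_power)
    return ffr[low].mean(axis=0), ffr[~low].mean(axis=0)

# Toy data: 100 trials, 0.5 s at 2 kHz, with a 100-Hz FFR-like component
fs, n_trials, n_samp = 2000, 100, 1000
rng = np.random.default_rng(3)
t = np.arange(n_samp) / fs
cortical = rng.normal(size=(n_trials, n_samp))
ffr = 0.1 * np.sin(2 * np.pi * 100 * t) + rng.normal(size=(n_trials, n_samp))
ffr_low, ffr_high = split_ffr_by_alpha(cortical, ffr, fs)
print(ffr_low.shape, ffr_high.shape)
```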
Collapse
Affiliation(s)
- Jesyin Lai
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States; Department of Diagnostic Imaging, St. Jude Children’s Research Hospital, Memphis, TN, United States
| | - Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada
| | - Gavin M. Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States; Department of Speech, Language, and Hearing Sciences, Indiana University, Bloomington, IN, United States; Program in Neuroscience, Indiana University, Bloomington, IN, United States
| |
Collapse
|
26
|
Herrmann B, Maess B, Johnsrude IS. Sustained responses and neural synchronization to amplitude and frequency modulation in sound change with age. Hear Res 2023; 428:108677. [PMID: 36580732 DOI: 10.1016/j.heares.2022.108677] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/28/2022] [Revised: 12/09/2022] [Accepted: 12/16/2022] [Indexed: 12/23/2022]
Abstract
Perception of speech requires sensitivity to features, such as amplitude and frequency modulations, that are often temporally regular. Previous work suggests age-related changes in neural responses to temporally regular features, but little work has focused on age differences for different types of modulations. We recorded magnetoencephalography in younger (21-33 years) and older adults (53-73 years) to investigate age differences in neural responses to slow (2-6 Hz sinusoidal and non-sinusoidal) modulations in amplitude, frequency, or combined amplitude and frequency. Audiometric pure-tone average thresholds were elevated in older compared to younger adults, indicating subclinical hearing impairment in the recruited older-adult sample. Neural responses to sound onset (independent of temporal modulations) were increased in magnitude in older compared to younger adults, suggesting hyperresponsivity and a loss of inhibition in the aged auditory system. Analyses of neural activity to modulations revealed greater neural synchronization with amplitude, frequency, and combined amplitude-frequency modulations for older compared to younger adults. This potentiated response generalized across different degrees of temporal regularity (sinusoidal and non-sinusoidal), although neural synchronization was generally lower for non-sinusoidal modulation. Despite greater synchronization, sustained neural activity was reduced in older compared to younger adults for sounds modulated both sinusoidally and non-sinusoidally in frequency. Our results suggest age differences in the sensitivity of the auditory system to features present in speech and other natural sounds.
Collapse
Affiliation(s)
- Björn Herrmann
- Rotman Research Institute, Baycrest, North York, ON M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, ON M5S 1A1, Canada; Department of Psychology & Brain and Mind Institute, The University of Western Ontario, London, ON N6A 3K7, Canada.
| | - Burkhard Maess
- Max Planck Institute for Human Cognitive and Brain Sciences, Brain Networks Unit, Leipzig 04103, Germany
| | - Ingrid S Johnsrude
- Department of Psychology & Brain and Mind Institute, The University of Western Ontario, London, ON N6A 3K7, Canada; School of Communication Sciences & Disorders, The University of Western Ontario, London, ON N6A 5B7, Canada
| |
Collapse
|
27
|
Becker R, Hervais-Adelman A. Individual theta-band cortical entrainment to speech in quiet predicts word-in-noise comprehension. Cereb Cortex Commun 2023; 4:tgad001. [PMID: 36726796 PMCID: PMC9883620 DOI: 10.1093/texcom/tgad001] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2022] [Revised: 12/17/2022] [Accepted: 12/18/2022] [Indexed: 01/09/2023] Open
Abstract
Speech elicits brain activity time-locked to its amplitude envelope. The resulting speech-brain synchrony (SBS) is thought to be crucial to speech parsing and comprehension. It has been shown that higher speech-brain coherence is associated with increased speech intelligibility. However, studies depending on the experimental manipulation of speech stimuli do not allow conclusions about the causality of the observed tracking. Here, we investigate whether individual differences in the intrinsic propensity to track the speech envelope when listening to speech in quiet are predictive of individual differences in speech recognition in noise in an independent task. We evaluated the cerebral tracking of speech in source-localized magnetoencephalography, at timescales corresponding to phrases, words, syllables, and phonemes. We found that individual differences in syllabic tracking in the right superior temporal gyrus and in the left middle temporal gyrus (MTG) were positively associated with recognition accuracy in an independent words-in-noise task. Furthermore, directed connectivity analysis showed that this relationship is partially mediated by top-down connectivity from premotor cortex (associated with speech processing and active sensing in the auditory domain) to left MTG. Thus, the extent of SBS, even during clear speech, reflects an active mechanism of the speech processing system that may confer resilience to noise.
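Speech-brain synchrony at a given timescale is commonly quantified as the magnitude-squared coherence between the speech amplitude envelope and the neural signal, averaged over the frequency band associated with that linguistic unit (for example, roughly 4 to 8 Hz for syllables). A minimal sketch of that quantity, with assumed band limits and simulated signals rather than the study's MEG data:

```python
import numpy as np
from scipy.signal import coherence

def band_coherence(envelope, neural, fs, band):
    """Mean magnitude-squared coherence between the speech envelope and a neural
    signal within a frequency band (e.g., a syllabic-rate band)."""
    f, cxy = coherence(envelope, neural, fs=fs, nperseg=int(4 * fs))
    sel = (f >= band[0]) & (f <= band[1])
    return cxy[sel].mean()

# Toy example: the "neural" signal is a delayed, noisy copy of the envelope
fs, dur = 100, 120
rng = np.random.default_rng(4)
envelope = np.convolve(rng.random(fs * dur), np.ones(10) / 10, mode="same")
neural = np.roll(envelope, 10) + 0.8 * rng.normal(size=envelope.size)
print(round(band_coherence(envelope, neural, fs, band=(4.0, 8.0)), 3))
```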
Collapse
Affiliation(s)
- Robert Becker
- Corresponding author: Neurolinguistics, Department of Psychology, University of Zurich (UZH), Zurich, Switzerland.
| | - Alexis Hervais-Adelman
- Neurolinguistics, Department of Psychology, University of Zurich, Zurich 8050, Switzerland; Neuroscience Center Zurich, University of Zurich and Eidgenössische Technische Hochschule Zurich, Zurich 8057, Switzerland
| |
Collapse
|
28
|
Kulasingham JP, Simon JZ. Algorithms for Estimating Time-Locked Neural Response Components in Cortical Processing of Continuous Speech. IEEE Trans Biomed Eng 2023; 70:88-96. [PMID: 35727788 PMCID: PMC9946293 DOI: 10.1109/tbme.2022.3185005] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
OBJECTIVE The Temporal Response Function (TRF) is a linear model of neural activity time-locked to continuous stimuli, including continuous speech. TRFs based on speech envelopes typically have distinct components that have provided remarkable insights into the cortical processing of speech. However, current methods may lead to less than reliable estimates of single-subject TRF components. Here, we compare two established methods of TRF component estimation and also propose novel algorithms that utilize prior knowledge of these components, bypassing the full TRF estimation. METHODS We compared two established algorithms, ridge and boosting, and two novel algorithms based on Subspace Pursuit (SP) and Expectation Maximization (EM), which directly estimate TRF components given plausible assumptions regarding component characteristics. Single-channel, multi-channel, and source-localized TRFs were fit on simulations and real magnetoencephalographic data. Performance metrics included model fit and component estimation accuracy. RESULTS Boosting and ridge have comparable performance in component estimation. The novel algorithms outperformed the others in simulations, but not on real data, possibly because the assumed component characteristics were not actually met. Ridge had slightly better model fits on real data compared to boosting, but also more spurious TRF activity. CONCLUSION Results indicate that both smooth (ridge) and sparse (boosting) algorithms perform comparably at TRF component estimation. The SP and EM algorithms may be accurate, but they rely on assumptions about component characteristics. SIGNIFICANCE This systematic comparison establishes the suitability of widely used and novel algorithms for estimating robust TRF components, which is essential for improved subject-specific investigations into the cortical processing of speech.
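For orientation, the two established estimators can be written compactly: ridge solves an L2-regularized least-squares problem over a lagged stimulus matrix and yields a smooth TRF, whereas boosting greedily adds small increments to individual TRF coefficients for as long as they reduce the training error, yielding a sparse TRF. The sketch below is a generic illustration of those two ideas on simulated data, not the paper's implementation; the sampling rate, lag range, and step size are assumptions.

```python
import numpy as np

def lagged_design(stim, n_lags):
    """Forward-model design matrix: column j holds the stimulus delayed by j samples."""
    X = np.zeros((stim.size, n_lags))
    for j in range(n_lags):
        X[j:, j] = stim[:stim.size - j]
    return X

def ridge_trf(X, y, lam=1.0):
    """Smooth TRF estimate: solve (X'X + lam*I) w = X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def boosting_trf(X, y, n_iter=500, delta=0.005):
    """Sparse TRF estimate: greedily nudge one coefficient by +/-delta per
    iteration for as long as that reduces the squared training error."""
    w = np.zeros(X.shape[1])
    resid = y.copy()
    col_norm = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        corr = X.T @ resid
        gain = 2 * delta * np.abs(corr) - delta ** 2 * col_norm
        j = int(np.argmax(gain))
        if gain[j] <= 0:
            break
        step = np.sign(corr[j]) * delta
        w[j] += step
        resid -= step * X[:, j]
    return w

# Simulated response from a sparse true TRF with components at 50 and 100 ms
fs, T, n_lags = 100, 20000, 30
rng = np.random.default_rng(5)
stim = rng.normal(size=T)
true_trf = np.zeros(n_lags)
true_trf[[5, 10]] = [1.0, -0.7]                       # 50-ms and 100-ms peaks
X = lagged_design(stim, n_lags)
y = X @ true_trf + rng.normal(size=T)
for name, est in (("ridge", ridge_trf(X, y)), ("boosting", boosting_trf(X, y))):
    lags_ms = np.sort(np.argsort(np.abs(est))[-2:]) * 1000 // fs
    print(name, "recovered component lags (ms):", lags_ms)
```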
Collapse
|
29
|
Niesen M, Bourguignon M, Bertels J, Vander Ghinst M, Wens V, Goldman S, De Tiège X. Cortical tracking of lexical speech units in a multi-talker background is immature in school-aged children. Neuroimage 2023; 265:119770. [PMID: 36462732 DOI: 10.1016/j.neuroimage.2022.119770] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Revised: 11/09/2022] [Accepted: 11/23/2022] [Indexed: 12/03/2022] Open
Abstract
Children have more difficulty perceiving speech in noise than adults. Whether this difficulty relates to an immature processing of prosodic or linguistic elements of the attended speech is still unclear. To address the impact of noise on linguistic processing per se, we assessed how babble noise impacts the cortical tracking of intelligible speech devoid of prosody in school-aged children and adults. Twenty adults and twenty children (7-9 years) listened to synthesized French monosyllabic words presented at 2.5 Hz, either randomly or in 4-word hierarchical structures wherein 2 words formed a phrase at 1.25 Hz, and 2 phrases formed a sentence at 0.625 Hz, with or without babble noise. Neuromagnetic responses to words, phrases and sentences were identified and source-localized. Children and adults displayed significant cortical tracking of words in all conditions, and of phrases and sentences only when words formed meaningful sentences. In children compared with adults, the cortical tracking was lower for all linguistic units in conditions without noise. In the presence of noise, the cortical tracking was similarly reduced for sentence units in both groups, but remained stable for phrase units. Critically, when there was noise, adults increased the cortical tracking of monosyllabic words in the inferior frontal gyri and supratemporal auditory cortices but children did not. This study demonstrates that the difficulties of school-aged children in understanding speech in a multi-talker background might be partly due to an immature tracking of lexical but not supra-lexical linguistic units.
Collapse
Affiliation(s)
- Maxime Niesen
- Université libre de Bruxelles (ULB), UNI - ULB Neurosciences Institute, Laboratoire de Neuroanatomie et de Neuroimagerie translationnelles (LN2T), 1070 Brussels, Belgium; Université libre de Bruxelles (ULB), Hôpital Universitaire de Bruxelles (HUB), CUB Hôpital Erasme, Department of Otorhinolaryngology, 1070 Brussels, Belgium.
| | - Mathieu Bourguignon
- Université libre de Bruxelles (ULB), UNI - ULB Neurosciences Institute, Laboratoire de Neuroanatomie et de Neuroimagerie translationnelles (LN2T), 1070 Brussels, Belgium; Université libre de Bruxelles (ULB), UNI-ULB Neuroscience Institute, Laboratory of Neurophysiology and Movement Biomechanics, 1070 Brussels, Belgium; BCBL, Basque Center on Cognition, Brain and Language, 20009 San Sebastian, Spain
| | - Julie Bertels
- Université libre de Bruxelles (ULB), UNI - ULB Neurosciences Institute, Laboratoire de Neuroanatomie et de Neuroimagerie translationnelles (LN2T), 1070 Brussels, Belgium; Université libre de Bruxelles (ULB), UNI-ULB Neuroscience Institute, Cognition and Computation group, ULBabyLab - Consciousness, Brussels, Belgium
| | - Marc Vander Ghinst
- Université libre de Bruxelles (ULB), UNI - ULB Neurosciences Institute, Laboratoire de Neuroanatomie et de Neuroimagerie translationnelles (LN2T), 1070 Brussels, Belgium; Université libre de Bruxelles (ULB), Hôpital Universitaire de Bruxelles (HUB), CUB Hôpital Erasme, Department of Otorhinolaryngology, 1070 Brussels, Belgium
| | - Vincent Wens
- Université libre de Bruxelles (ULB), UNI - ULB Neurosciences Institute, Laboratoire de Neuroanatomie et de Neuroimagerie translationnelles (LN2T), 1070 Brussels, Belgium; Université libre de Bruxelles (ULB), Hôpital Universitaire de Bruxelles (HUB), CUB Hôpital Erasme, Department of translational Neuroimaging, 1070 Brussels, Belgium
| | - Serge Goldman
- Université libre de Bruxelles (ULB), UNI - ULB Neurosciences Institute, Laboratoire de Neuroanatomie et de Neuroimagerie translationnelles (LN2T), 1070 Brussels, Belgium; Université libre de Bruxelles (ULB), Hôpital Universitaire de Bruxelles (HUB), CUB Hôpital Erasme, Department of Nuclear Medicine, 1070 Brussels, Belgium
| | - Xavier De Tiège
- Université libre de Bruxelles (ULB), UNI - ULB Neurosciences Institute, Laboratoire de Neuroanatomie et de Neuroimagerie translationnelles (LN2T), 1070 Brussels, Belgium; Université libre de Bruxelles (ULB), Hôpital Universitaire de Bruxelles (HUB), CUB Hôpital Erasme, Department of translational Neuroimaging, 1070 Brussels, Belgium
| |
Collapse
|
30
|
Johns MA, Calloway RC, Phillips I, Karuzis VP, Dutta K, Smith E, Shamma SA, Goupell MJ, Kuchinsky SE. Performance on stochastic figure-ground perception varies with individual differences in speech-in-noise recognition and working memory capacity. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2023; 153:286. [PMID: 36732241 PMCID: PMC9851714 DOI: 10.1121/10.0016756] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Revised: 12/07/2022] [Accepted: 12/10/2022] [Indexed: 06/18/2023]
Abstract
Speech recognition in noisy environments can be challenging and requires listeners to accurately segregate a target speaker from irrelevant background noise. Stochastic figure-ground (SFG) tasks, in which temporally coherent, inharmonic pure tones must be identified from a background, have been used to probe the non-linguistic auditory stream segregation processes important for speech-in-noise processing. However, little is known about the relationship between performance on SFG tasks and speech-in-noise tasks, or about the individual differences that may modulate such relationships. In this study, 37 younger normal-hearing adults performed an SFG task with target figure chords consisting of four, six, eight, or ten temporally coherent tones amongst a background of randomly varying tones. Stimuli were designed to be spectrally and temporally flat. An increased number of temporally coherent tones resulted in higher accuracy and faster reaction times (RTs). For ten target tones, faster RTs were associated with better scores on the Quick Speech-in-Noise task. Individual differences in working memory capacity and self-reported musicianship further modulated these relationships. Overall, results demonstrate that the SFG task could serve as an assessment of auditory stream segregation accuracy and RT that is sensitive to individual differences in cognitive and auditory abilities, even among younger normal-hearing adults.
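A stochastic figure-ground stimulus of this kind can be sketched as a sequence of short chords of randomly drawn pure tones, with the 'figure' created by repeating a fixed set of tone frequencies across consecutive chords. The synthesis below is a toy illustration; the frequency pool, chord duration, and tone counts are assumptions, not the study's stimulus parameters.

```python
import numpy as np

def make_sfg(fs=16000, n_chords=40, chord_dur=0.05, n_background=10,
             n_figure=4, figure_span=(10, 25), seed=0):
    """Stochastic figure-ground stimulus: each 50-ms chord contains random
    background tones; during figure_span the same n_figure tones repeat across
    chords, forming a temporally coherent 'figure'."""
    rng = np.random.default_rng(seed)
    pool = np.geomspace(200, 7000, 60)               # candidate tone frequencies
    t = np.arange(int(fs * chord_dur)) / fs
    ramp = np.hanning(t.size)                        # smooth chord on/offsets
    figure_freqs = rng.choice(pool, n_figure, replace=False)
    chords = []
    for i in range(n_chords):
        freqs = list(rng.choice(pool, n_background, replace=False))
        if figure_span[0] <= i < figure_span[1]:
            freqs += list(figure_freqs)              # coherent tones -> figure
        chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)
        chords.append(ramp * chord / len(freqs))
    return np.concatenate(chords)

stim = make_sfg()
print(stim.shape, stim.min().round(2), stim.max().round(2))
```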
Collapse
Affiliation(s)
- Michael A Johns
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742, USA
| | - Regina C Calloway
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742, USA
| | - Ian Phillips
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
| | - Valerie P Karuzis
- Applied Research Laboratory of Intelligence and Security, University of Maryland, College Park, Maryland 20742, USA
| | - Kelsey Dutta
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742, USA
| | - Ed Smith
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
| | - Shihab A Shamma
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland 20742, USA
| | - Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
| | - Stefanie E Kuchinsky
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
| |
Collapse
|
31
|
Mai G, Howell P. The possible role of early-stage phase-locked neural activities in speech-in-noise perception in human adults across age and hearing loss. Hear Res 2023; 427:108647. [PMID: 36436293 DOI: 10.1016/j.heares.2022.108647] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/21/2022] [Revised: 10/26/2022] [Accepted: 11/04/2022] [Indexed: 11/11/2022]
Abstract
Ageing affects auditory neural phase-locked activities which could increase the challenges experienced during speech-in-noise (SiN) perception by older adults. However, evidence for how ageing affects SiN perception through these phase-locked activities is still lacking. It is also unclear whether influences of ageing on phase-locked activities in response to different acoustic properties have similar or different mechanisms to affect SiN perception. The present study addressed these issues by measuring early-stage phase-locked encoding of speech under quiet and noisy backgrounds (speech-shaped noise (SSN) and multi-talker babbles) in adults across a wide age range (19-75 years old). Participants passively listened to a repeated vowel whilst the frequency-following response (FFR) to fundamental frequency that has primary subcortical sources and cortical phase-locked response to slowly-fluctuating acoustic envelopes were recorded. We studied how these activities are affected by age and age-related hearing loss and how they are related to SiN performances (word recognition in sentences in noise). First, we found that the effects of age and hearing loss differ for the FFR and slow-envelope phase-locking. FFR was significantly decreased with age and high-frequency (≥ 2 kHz) hearing loss but increased with low-frequency (< 2 kHz) hearing loss, whilst the slow-envelope phase-locking was significantly increased with age and hearing loss across frequencies. Second, potential relationships between the types of phase-locked activities and SiN perception performances were also different. We found that the FFR and slow-envelope phase-locking positively corresponded to SiN performance under multi-talker babbles and SSN, respectively. Finally, we investigated how age and hearing loss affected SiN perception through phase-locked activities via mediation analyses. We showed that both types of activities significantly mediated the relation between age/hearing loss and SiN perception but in distinct manners. Specifically, FFR decreased with age and high-frequency hearing loss which in turn contributed to poorer SiN performance but increased with low-frequency hearing loss which in turn contributed to better SiN performance under multi-talker babbles. Slow-envelope phase-locking increased with age and hearing loss which in turn contributed to better SiN performance under both SSN and multi-talker babbles. Taken together, the present study provided evidence for distinct neural mechanisms of early-stage auditory phase-locked encoding of different acoustic properties through which ageing affects SiN perception.
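The mediation logic (age or hearing loss influencing SiN performance through phase-locked activity) can be illustrated with a simple product-of-coefficients sketch: regress the mediator on the predictor (path a), regress the outcome on predictor and mediator (path b), and take a*b as the indirect effect, with a bootstrap for its confidence interval. This is a generic illustration with simulated data, not the analysis code used in the study.

```python
import numpy as np

def indirect_effect(x, m, y):
    """Product-of-coefficients mediation: a = effect of x on m,
    b = effect of m on y controlling for x, indirect effect = a * b."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), x, m])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a * coefs[2]

def bootstrap_ci(x, m, y, n_boot=2000, seed=0):
    """Percentile bootstrap confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = x.size
    est = [indirect_effect(x[idx], m[idx], y[idx])
           for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.percentile(est, [2.5, 97.5])

# Toy data: older age -> weaker phase-locking -> poorer SiN performance
rng = np.random.default_rng(6)
age = rng.uniform(19, 75, 120)
phase_locking = 1.5 - 0.01 * age + rng.normal(0, 0.1, age.size)
sin_score = 60 + 20 * phase_locking + rng.normal(0, 3, age.size)
print("indirect effect:", round(indirect_effect(age, phase_locking, sin_score), 3))
print("95% bootstrap CI:", bootstrap_ci(age, phase_locking, sin_score).round(3))
```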
Collapse
Affiliation(s)
- Guangting Mai
- National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham NG1 5DU, UK; Academic Unit of Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham NG7 2UH, UK; Department of Experimental Psychology, University College London, London WC1H 0AP, UK.
| | - Peter Howell
- Department of Experimental Psychology, University College London, London WC1H 0AP, UK
| |
Collapse
|
32
|
van Wieringen A, Van Wilderode M, Van Humbeeck N, Krampe R. Coupling of sensorimotor and cognitive functions in middle- and late adulthood. Front Neurosci 2022; 16:1049639. [PMID: 36532286 PMCID: PMC9752872 DOI: 10.3389/fnins.2022.1049639] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2022] [Accepted: 11/08/2022] [Indexed: 11/03/2023] Open
Abstract
Introduction The present study explored age effects and the coupling of sensorimotor and cognitive functions in a stratified sample of 96 middle-aged and older adults (age 45-86 years) with no indication of mild cognitive decline. In our sensorimotor tasks, we had an emphasis on listening in noise and postural control, but we also assessed functional mobility and tactile sensitivity. Methods Our cognitive measures comprised processing speed and assessments of core cognitive control processes (executive functions), notably inhibition, task switching, and working memory updating. We explored whether our measures of sensorimotor functioning mediated age differences in cognitive variables and compared their effect to processing speed. Subsequently, we examined whether individuals who had poorer (or better) than median cognitive performance for their age group also performed relatively poorer (or better) on sensorimotor tasks. Moreover, we examined whether the link between cognitive and sensorimotor functions becomes more pronounced in older age groups. Results Except for tactile sensitivity, we observed substantial age-related differences in all sensorimotor and cognitive variables from middle age onward. Processing speed and functional mobility were reliable mediators of age in task switching and inhibitory control. Regarding coupling between sensorimotor and cognition, we observed that individuals with poor cognitive control do not necessarily have poor listening in noise skills or poor postural control. Discussion As most conditions do not show an interdependency between sensorimotor and cognitive performance, other domain-specific factors that were not accounted for must also play a role. These need to be researched in order to gain a better understanding of how rehabilitation may impact cognitive functioning in aging persons.
Collapse
Affiliation(s)
- Astrid van Wieringen
- Research Group Experimental Oto-Rhino-Laryngology, Department of Neurosciences, KU Leuven, Leuven, Belgium
| | - Mira Van Wilderode
- Research Group Experimental Oto-Rhino-Laryngology, Department of Neurosciences, KU Leuven, Leuven, Belgium
| | - Nathan Van Humbeeck
- Research Group Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
| | - Ralf Krampe
- Research Group Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
| |
Collapse
|
33
|
Liu Y, Luo C, Zheng J, Liang J, Ding N. Working memory asymmetrically modulates auditory and linguistic processing of speech. Neuroimage 2022; 264:119698. [PMID: 36270622 DOI: 10.1016/j.neuroimage.2022.119698] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Revised: 10/11/2022] [Accepted: 10/17/2022] [Indexed: 11/09/2022] Open
Abstract
Working memory (WM) load can modulate speech perception. However, since speech perception and working memory are both complex functions, it remains elusive how each component of the working memory system interacts with each speech processing stage. To investigate this issue, we concurrently measure how working memory load modulates neural activity tracking three levels of linguistic units, i.e., syllables, phrases, and sentences, using a multiscale frequency-tagging approach. Participants engage in a sentence comprehension task, and the working memory load is manipulated by asking them to memorize either auditory verbal sequences or visual patterns. It is found that verbal and visual working memory load modulate speech processing in similar manners: higher working memory load attenuates neural activity tracking of phrases and sentences but enhances neural activity tracking of syllables. Since verbal and visual WM load similarly influence the neural responses to speech, such influences may derive from the domain-general component of the WM system. More importantly, working memory load asymmetrically modulates lower-level auditory encoding and higher-level linguistic processing of speech, possibly reflecting a reallocation of attention induced by mnemonic load.
Collapse
Affiliation(s)
- Yiguang Liu
- Research Center for Applied Mathematics and Machine Intelligence, Research Institute of Basic Theories, Zhejiang Lab, Hangzhou 311121, China
| | - Cheng Luo
- Research Center for Applied Mathematics and Machine Intelligence, Research Institute of Basic Theories, Zhejiang Lab, Hangzhou 311121, China
| | - Jing Zheng
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou 310027, China
| | - Junying Liang
- Department of Linguistics, School of International Studies, Zhejiang University, Hangzhou 310058, China
| | - Nai Ding
- Research Center for Applied Mathematics and Machine Intelligence, Research Institute of Basic Theories, Zhejiang Lab, Hangzhou 311121, China; Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou 310027, China; The MOE Frontier Science Center for Brain Science & Brain-machine Integration, Zhejiang University, Hangzhou 310012, China.
| |
Collapse
|
34
|
Suess N, Hauswald A, Reisinger P, Rösch S, Keitel A, Weisz N. Cortical tracking of formant modulations derived from silently presented lip movements and its decline with age. Cereb Cortex 2022; 32:4818-4833. [PMID: 35062025 PMCID: PMC9627034 DOI: 10.1093/cercor/bhab518] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2021] [Revised: 12/15/2021] [Accepted: 12/16/2021] [Indexed: 11/26/2022] Open
Abstract
The integration of visual and auditory cues is crucial for successful processing of speech, especially under adverse conditions. Recent reports have shown that when participants watch muted videos of speakers, the phonological information about the acoustic speech envelope, which is associated with but independent from the speakers' lip movements, is tracked by the visual cortex. However, the speech signal also carries richer acoustic details, for example, about the fundamental frequency and the resonant frequencies, whose visuophonological transformation could aid speech processing. Here, we investigated the neural basis of the visuo-phonological transformation processes of these more fine-grained acoustic details and assessed how they change as a function of age. We recorded whole-head magnetoencephalographic (MEG) data while the participants watched silent normal (i.e., natural) and reversed videos of a speaker and paid attention to their lip movements. We found that the visual cortex is able to track the unheard natural modulations of resonant frequencies (or formants) and the pitch (or fundamental frequency) linked to lip movements. Importantly, only the processing of natural unheard formants decreases significantly with age in the visual and also in the cingulate cortex. This is not the case for the processing of the unheard speech envelope, the fundamental frequency, or the purely visual information carried by lip movements. These results show that unheard spectral fine details (along with the unheard acoustic envelope) are transformed from a mere visual to a phonological representation. Aging affects especially the ability to derive spectral dynamics at formant frequencies. As listening in noisy environments should capitalize on the ability to track spectral fine details, our results provide a novel focus on compensatory processes in such challenging situations.
Collapse
Affiliation(s)
- Nina Suess
- Department of Psychology, Centre for Cognitive Neuroscience, University of Salzburg, Salzburg 5020, Austria
| | - Anne Hauswald
- Department of Psychology, Centre for Cognitive Neuroscience, University of Salzburg, Salzburg 5020, Austria
| | - Patrick Reisinger
- Department of Psychology, Centre for Cognitive Neuroscience, University of Salzburg, Salzburg 5020, Austria
| | - Sebastian Rösch
- Department of Otorhinolaryngology, Head and Neck Surgery, Paracelsus Medical University Salzburg, University Hospital Salzburg, Salzburg 5020, Austria
| | - Anne Keitel
- School of Social Sciences, University of Dundee, Dundee DD1 4HN, UK
| | - Nathan Weisz
- Department of Psychology, Centre for Cognitive Neuroscience, University of Salzburg, Salzburg 5020, Austria
- Department of Psychology, Neuroscience Institute, Christian Doppler University Hospital, Paracelsus Medical University, Salzburg 5020, Austria
| |
Collapse
|
35
|
Tinnemore AR, Montero L, Gordon-Salant S, Goupell MJ. The recognition of time-compressed speech as a function of age in listeners with cochlear implants or normal hearing. Front Aging Neurosci 2022; 14:887581. [PMID: 36247992 PMCID: PMC9557069 DOI: 10.3389/fnagi.2022.887581] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2022] [Accepted: 08/29/2022] [Indexed: 11/13/2022] Open
Abstract
Speech recognition is diminished when a listener has an auditory temporal processing deficit. Such deficits occur in listeners over 65 years old with normal hearing (NH) and with age-related hearing loss, but their source is still unclear. These deficits may be especially apparent when speech occurs at a rapid rate and when a listener is mostly reliant on temporal information to recognize speech, such as when listening with a cochlear implant (CI) or to vocoded speech (a CI simulation). Assessment of the auditory temporal processing abilities of adults with CIs across a wide range of ages should better reveal central or cognitive sources of age-related deficits with rapid speech because CI stimulation bypasses much of the cochlear encoding that is affected by age-related peripheral hearing loss. This study used time-compressed speech at four different degrees of time compression (0, 20, 40, and 60%) to challenge the auditory temporal processing abilities of younger, middle-aged, and older listeners with CIs or with NH. Listeners with NH were presented vocoded speech at four degrees of spectral resolution (unprocessed, 16, 8, and 4 channels). Results showed an interaction between age and degree of time compression. The reduction in speech recognition associated with faster rates of speech was greater for older adults than younger adults. The performance of the middle-aged listeners was more similar to that of the older listeners than to that of the younger listeners, especially at higher degrees of time compression. A measure of cognitive processing speed did not predict the effects of time compression. These results suggest that central auditory changes related to the aging process are at least partially responsible for the auditory temporal processing deficits seen in older listeners, rather than solely peripheral age-related changes.
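Time compression of the kind described removes a fixed percentage of the duration while preserving pitch, typically with a phase-vocoder or overlap-add time-stretch; for example, 40% compression keeps 60% of the original duration, a stretch rate of 1/0.6 ≈ 1.67. A hedged sketch assuming librosa is available (the synthesized tone is a stand-in for recorded sentences):

```python
import numpy as np
import librosa

def time_compress(y, percent):
    """Time-compress audio by `percent` while preserving pitch:
    e.g., 40% compression keeps 60% of the duration (stretch rate = 1/0.6)."""
    keep = 1.0 - percent / 100.0
    return librosa.effects.time_stretch(y, rate=1.0 / keep)

# Stand-in signal (a real study would use recorded sentences instead)
sr = 22050
t = np.arange(2 * sr) / sr
y = np.sin(2 * np.pi * 220 * t).astype(np.float32)
for pct in (20, 40, 60):
    y_fast = time_compress(y, pct)
    print(f"{pct}% compression: {y_fast.size / sr:.2f} s "
          f"(original {y.size / sr:.2f} s)")
```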
Collapse
Affiliation(s)
- Anna R. Tinnemore
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, College Park, MD, United States
- Department of Hearing and Speech Sciences, University of Maryland, College Park, College Park, MD, United States
| | - Lauren Montero
- Department of Hearing and Speech Sciences, University of Maryland, College Park, College Park, MD, United States
| | - Sandra Gordon-Salant
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, College Park, MD, United States
- Department of Hearing and Speech Sciences, University of Maryland, College Park, College Park, MD, United States
| | - Matthew J. Goupell
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, College Park, MD, United States
- Department of Hearing and Speech Sciences, University of Maryland, College Park, College Park, MD, United States
| |
Collapse
|
36
|
Kuruvilla-Mathew A, Thorne PR, Purdy SC. Effects of aging on neural processing during an active listening task. PLoS One 2022; 17:e0273304. [PMID: 36070253 PMCID: PMC9451064 DOI: 10.1371/journal.pone.0273304] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Accepted: 08/06/2022] [Indexed: 11/18/2022] Open
Abstract
Factors affecting successful listening in older adults and the corresponding electrophysiological signatures are not well understood. The present study investigated age-related differences in attention and temporal processing, as well as differences in the neural activity related to signal degradation during a number comparison task. Participants listened to digits presented in background babble and were tested at two levels of signal clarity, clear and degraded. Behavioral and electrophysiological measures were examined in 30 older and 20 younger neurologically-healthy adults. Relationships between performance on the number comparison task, behavioral measures, and neural activity were used to determine correlates of listening deficits associated with aging. While older participants showed poorer performance overall on all behavioral measures, their scores on the number comparison task were largely predicted (based on regression analyses) by their sensitivity to temporal fine structure cues. Compared to younger participants, older participants required higher signal-to-noise ratios (SNRs) to achieve equivalent performance on the number comparison task. With increasing listening demands, age-related changes were observed in neural processing represented by the early-N1 and later-P3 time windows. Source localization analyses revealed age differences in source activity for the degraded listening condition that was located in the left prefrontal cortex. In addition, this source activity negatively correlated with task performance in the older group. Together, these results suggest that older adults exhibit reallocation of processing resources to complete a demanding listening task. However, this effect was evident only for poorer performing older adults who showed greater posterior to anterior shift in P3 response amplitudes than older adults who were good performers and younger adults. These findings might reflect less efficient recruitment of neural resources that is associated with aging during effortful listening performance.
Collapse
Affiliation(s)
- Abin Kuruvilla-Mathew
- Speech Science, School of Psychology, University of Auckland, Auckland, New Zealand
- Eisdell Moore Centre, University of Auckland, Auckland, New Zealand
| | - Peter R. Thorne
- Eisdell Moore Centre, University of Auckland, Auckland, New Zealand
- Faculty of Medical and Health Science, University of Auckland, Auckland, New Zealand
- Brain Research New Zealand, University of Auckland, Auckland, New Zealand
| | - Suzanne C. Purdy
- Speech Science, School of Psychology, University of Auckland, Auckland, New Zealand
- Eisdell Moore Centre, University of Auckland, Auckland, New Zealand
- Brain Research New Zealand, University of Auckland, Auckland, New Zealand
| |
Collapse
|
37
|
Sauvé SA, Bolt ELW, Nozaradan S, Zendel BR. Aging effects on neural processing of rhythm and meter. Front Aging Neurosci 2022; 14:848608. [PMID: 36118692 PMCID: PMC9475293 DOI: 10.3389/fnagi.2022.848608] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Accepted: 08/01/2022] [Indexed: 11/13/2022] Open
Abstract
When listening to musical rhythm, humans can perceive and move to beat-like metrical pulses. Recently, it has been hypothesized that meter perception is related to brain activity responding to the acoustic fluctuation of the rhythmic input, with selective enhancement of the brain response elicited at meter-related frequencies. In the current study, electroencephalography (EEG) was recorded while younger (<35) and older (>60) adults listened to rhythmic patterns presented at two different tempi while intermittently performing a tapping task. Despite significant hearing loss compared to younger adults, older adults showed preserved brain activity to the rhythms. However, age effects were observed in the distribution of amplitude across frequencies. Specifically, in contrast with younger adults, older adults showed relatively larger amplitude at the frequency corresponding to the rate of individual events making up the rhythms as compared to lower meter-related frequencies. This difference is compatible with larger N1-P2 potentials as generally observed in older adults in response to acoustic onsets, irrespective of meter perception. These larger low-level responses to sounds have been linked to processes by which age-related hearing loss would be compensated by cortical sensory mechanisms. Importantly, this low-level effect would be associated here with relatively reduced neural activity at lower frequencies corresponding to higher-level metrical grouping of the acoustic events, as compared to younger adults.
Collapse
|
38
|
Gnanateja GN, Devaraju DS, Heyne M, Quique YM, Sitek KR, Tardif MC, Tessmer R, Dial HR. On the Role of Neural Oscillations Across Timescales in Speech and Music Processing. Front Comput Neurosci 2022; 16:872093. [PMID: 35814348 PMCID: PMC9260496 DOI: 10.3389/fncom.2022.872093] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Accepted: 05/24/2022] [Indexed: 11/25/2022] Open
Abstract
This mini review is aimed at a clinician-scientist seeking to understand the role of oscillations in neural processing and their functional relevance in speech and music perception. We present an overview of neural oscillations, methods used to study them, and their functional relevance with respect to music processing, aging, hearing loss, and disorders affecting speech and language. We first review the oscillatory frequency bands and their associations with speech and music processing. Next we describe commonly used metrics for quantifying neural oscillations, briefly touching upon the still-debated mechanisms underpinning oscillatory alignment. Following this, we highlight key findings from research on neural oscillations in speech and music perception, as well as contributions of this work to our understanding of disordered perception in clinical populations. Finally, we conclude with a look toward the future of oscillatory research in speech and music perception, including promising methods and potential avenues for future work. We note that the intention of this mini review is not to systematically review all literature on cortical tracking of speech and music. Rather, we seek to provide the clinician-scientist with foundational information that can be used to evaluate and design research studies targeting the functional role of oscillations in speech and music processing in typical and clinical populations.
Collapse
Affiliation(s)
- G. Nike Gnanateja
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
| | - Dhatri S. Devaraju
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
| | - Matthias Heyne
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
| | - Yina M. Quique
- Center for Education in Health Sciences, Northwestern University, Chicago, IL, United States
| | - Kevin R. Sitek
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
| | - Monique C. Tardif
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
| | - Rachel Tessmer
- Department of Speech, Language, and Hearing Sciences, The University of Texas at Austin, Austin, TX, United States
| | - Heather R. Dial
- Department of Speech, Language, and Hearing Sciences, The University of Texas at Austin, Austin, TX, United States
- Department of Communication Sciences and Disorders, University of Houston, Houston, TX, United States
| |
Collapse
|
39
|
Trpchevska N, Freidin MB, Broer L, Oosterloo BC, Yao S, Zhou Y, Vona B, Bishop C, Bizaki-Vallaskangas A, Canlon B, Castellana F, Chasman DI, Cherny S, Christensen K, Concas MP, Correa A, Elkon R, Mengel-From J, Gao Y, Giersch ABS, Girotto G, Gudjonsson A, Gudnason V, Heard-Costa NL, Hertzano R, Hjelmborg JVB, Hjerling-Leffler J, Hoffman HJ, Kaprio J, Kettunen J, Krebs K, Kähler AK, Lallemend F, Launer LJ, Lee IM, Leonard H, Li CM, Lowenheim H, Magnusson PKE, van Meurs J, Milani L, Morton CC, Mäkitie A, Nalls MA, Nardone GG, Nygaard M, Palviainen T, Pratt S, Quaranta N, Rämö J, Saarentaus E, Sardone R, Satizabal CL, Schweinfurth JM, Seshadri S, Shiroma E, Shulman E, Simonsick E, Spankovich C, Tropitzsch A, Lauschke VM, Sullivan PF, Goedegebure A, Cederroth CR, Williams FMK, Nagtegaal AP. Genome-wide association meta-analysis identifies 48 risk variants and highlights the role of the stria vascularis in hearing loss. Am J Hum Genet 2022; 109:1077-1091. [PMID: 35580588 PMCID: PMC9247887 DOI: 10.1016/j.ajhg.2022.04.010] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Accepted: 04/15/2022] [Indexed: 02/08/2023] Open
Abstract
Hearing loss is one of the top contributors to years lived with disability and is a risk factor for dementia. Molecular evidence on the cellular origins of hearing loss in humans is growing. Here, we performed a genome-wide association meta-analysis of clinically diagnosed and self-reported hearing impairment on 723,266 individuals and identified 48 significant loci, 10 of which are novel. A large proportion of associations comprised missense variants, half of which lie within known familial hearing loss loci. We used single-cell RNA-sequencing data from mouse cochlea and brain and mapped common-variant genomic results to spindle, root, and basal cells from the stria vascularis, a structure in the cochlea necessary for normal hearing. Our findings indicate the importance of the stria vascularis in the mechanism of hearing impairment, providing future paths for developing targets for therapeutic intervention in hearing loss.
Collapse
Affiliation(s)
- Natalia Trpchevska
- Department of Physiology and Pharmacology, Karolinska Institutet, 17177 Stockholm, Sweden
| | - Maxim B Freidin
- Department of Twin Research and Genetic Epidemiology, King's College London, London, UK
| | - Linda Broer
- Department of Internal Medicine, Erasmus Medical Center, 3015 CE Rotterdam, the Netherlands
| | - Berthe C Oosterloo
- Department of Otorhinolaryngology, Erasmus Medical Center, 3015 CE Rotterdam, the Netherlands
| | - Shuyang Yao
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, 17177 Stockholm, Sweden
| | - Yitian Zhou
- Department of Physiology and Pharmacology, Karolinska Institutet, 17177 Stockholm, Sweden
| | - Barbara Vona
- Institute of Human Genetics, University Medical Center Göttingen, 37073 Göttingen, Germany; Institute for Auditory Neuroscience and InnerEarLab, University Medical Center Göttingen, 37075 Göttingen, Germany; Department of Otolaryngology-Head & Neck Surgery, University of Tübingen Medical Center, 72076 Tübingen, Germany
| | - Charles Bishop
- Department of Otolaryngology and Communicative Sciences, The University of Mississippi Medical Center, Jackson, MS 39216, USA
| | - Argyro Bizaki-Vallaskangas
- Department of Otolaryngology, University of Tampere, 33100 Tampere, Finland; Pirkanmaan Sairaanhoitopiiri, 33520 Tampere, Finland
| | - Barbara Canlon
- Department of Physiology and Pharmacology, Karolinska Institutet, 17177 Stockholm, Sweden
| | - Fabio Castellana
- Unit of Data Sciences and Technology Innovation for Population Health, National Institute of Gastroenterology "Saverio de Bellis", Research Hospital, Castellana Grotte, 70124 Bari, Italy
| | - Daniel I Chasman
- Division of Preventative Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA; Broad Institute of MIT and Harvard, Cambridge, MA 02142, USA
| | - Stacey Cherny
- Department of Anatomy and Anthropology and Department of Epidemiology and Preventive Medicine, Sackler Faculty of Medicine, Tel Aviv University, 69978 Tel Aviv, Israel
| | - Kaare Christensen
- The Danish Twin Registry, Department of Public Health, University of Southern Denmark, 5000 Odense C, Denmark; Department of Clinical Genetics, Odense University Hospital, 5000 Odense C, Denmark; Department of Clinical Biochemistry and Pharmacology, Odense University Hospital, 5000 Odense C, Denmark
| | - Maria Pina Concas
- Institute for Maternal and Child Health - IRCCS, Burlo Garofolo, 34127 Trieste, Italy
| | - Adolfo Correa
- Jackson Heart Study, The University of Mississippi Medical Center, Jackson, MS 39216, USA
| | - Ran Elkon
- Department of Human Molecular Genetics & Biochemistry, Sackler School of Medicine, Tel Aviv University, 69978 Tel Aviv, Israel
| | - Jonas Mengel-From
- The Danish Twin Registry, Department of Public Health, University of Southern Denmark, 5000 Odense C, Denmark; Department of Clinical Genetics, Odense University Hospital, 5000 Odense C, Denmark
| | - Yan Gao
- Jackson Heart Study, The University of Mississippi Medical Center, Jackson, MS 39216, USA; Department of Population Health Science, University of Mississippi Medical Center, Jackson, MS 39216, USA
| | - Anne B S Giersch
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA
| | - Giorgia Girotto
- Institute for Maternal and Child Health - IRCCS, Burlo Garofolo, 34127 Trieste, Italy; Department of Medicine, Surgery and Health Sciences, University of Trieste, 34139 Trieste, Italy
| | | | - Vilmundur Gudnason
- Icelandic Heart Association, 201 Kopavogur, Iceland; Faculty of Medicine, University of Iceland, 101 Reykjavik, Iceland
| | - Nancy L Heard-Costa
- Department of Neurology, Boston University School of Medicine, Boston, MA 02118, USA; Framingham Heart Study, Framingham, MA 01702, USA
| | - Ronna Hertzano
- Department of Otorhinolaryngology-Head and Neck Surgery, University of Maryland Baltimore, Baltimore, MD 21201, USA; Department of Anatomy and Neurobiology, University of Maryland Baltimore, Baltimore, MD 21201, USA; Institute for Genome Sciences, University of Maryland Baltimore, Baltimore, MD 21201, USA
| | - Jacob V B Hjelmborg
- The Danish Twin Registry, Department of Public Health, University of Southern Denmark, 5000 Odense C, Denmark
| | - Jens Hjerling-Leffler
- Department of Medical Biochemistry and Biophysics, Karolinska Institutet, 17177 Stockholm, Sweden
| | - Howard J Hoffman
- Division of Scientific Programs, Epidemiology and Statistics Program, National Institute on Deafness and Other Communications Disorders (NIDCD), NIH, Bethesda, MD 20892, USA
| | - Jaakko Kaprio
- Institute for Molecular Medicine Finland (FIMM), University of Helsinki, 00014 Helsinki, Finland
| | - Johannes Kettunen
- Computational Medicine, Center for Life Course Health Research, Faculty of Medicine, University of Oulu, 90220 Oulu, Finland; Biocenter Oulu, University of Oulu, 90220 Oulu, Finland; Finnish Institute for Health and Welfare, 00271 Helsinki, Finland
| | - Kristi Krebs
- Estonian Genome Centre, Institute of Genomics, University of Tartu, Tartu, Estonia
| | - Anna K Kähler
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, 17177 Stockholm, Sweden
| | - Francois Lallemend
- Department of Neuroscience, Karolinska Institutet, 17177 Stockholm, Sweden
| | - Lenore J Launer
- Laboratory of Epidemiology and Population Sciences, Intramural Research Program National Institute on Aging, Bethesda, MD 20892, USA
| | - I-Min Lee
- Division of Preventative Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA
| | - Hampton Leonard
- Laboratory of Neurogenetics, National Institute on Aging, National Institutes of Health, Bethesda, MD 20892, USA; Center for Alzheimer's and Related Dementias, National Institutes of Health, Bethesda, MD 20892, USA; Data Tecnica International, Glen Echo, MD 20812, USA
| | - Chuan-Ming Li
- Division of Scientific Programs, Epidemiology and Statistics Program, National Institute on Deafness and Other Communications Disorders (NIDCD), NIH, Bethesda, MD 20892, USA
| | - Hubert Lowenheim
- Department of Otolaryngology-Head & Neck Surgery, University of Tübingen Medical Center, 72076 Tübingen, Germany
| | - Patrik K E Magnusson
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, 17177 Stockholm, Sweden
| | - Joyce van Meurs
- Department of Internal Medicine, Erasmus Medical Center, 3015 CE Rotterdam, the Netherlands
| | - Lili Milani
- Estonian Genome Centre, Institute of Genomics, University of Tartu, Tartu, Estonia
| | - Cynthia C Morton
- Broad Institute of MIT and Harvard, Cambridge, MA 02142, USA; Department of Obstetrics and Gynecology and of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA; Manchester Centre for Audiology and Deafness, University of Manchester, Manchester M13 9PL, UK
| | - Antti Mäkitie
- Department of Otorhinolaryngology - Head and Neck Surgery, University of Helsinki and Helsinki University Hospital, 00029 Helsinki, Finland
| | - Mike A Nalls
- Laboratory of Neurogenetics, National Institute on Aging, National Institutes of Health, Bethesda, MD 20892, USA; Center for Alzheimer's and Related Dementias, National Institutes of Health, Bethesda, MD 20892, USA; Data Tecnica International, Glen Echo, MD 20812, USA
| | | | - Marianne Nygaard
- The Danish Twin Registry, Department of Public Health, University of Southern Denmark, 5000 Odense C, Denmark; Department of Clinical Genetics, Odense University Hospital, 5000 Odense C, Denmark
| | - Teemu Palviainen
- Institute for Molecular Medicine Finland (FIMM), University of Helsinki, 00014 Helsinki, Finland
| | - Sheila Pratt
- Department of Communication Science & Disorders, University of Pittsburgh, Pittsburgh, PA 15260, USA
| | - Nicola Quaranta
- Otolaryngology Unit, Department of Basic Medical Science, Neuroscience and Sense Organs, University of Bari Aldo Moro, 70121 Bari, Italy
| | - Joel Rämö
- Institute for Molecular Medicine Finland (FIMM), University of Helsinki, 00014 Helsinki, Finland
| | - Elmo Saarentaus
- Institute for Molecular Medicine Finland (FIMM), University of Helsinki, 00014 Helsinki, Finland
| | - Rodolfo Sardone
- Unit of Data Sciences and Technology Innovation for Population Health, National Institute of Gastroenterology "Saverio de Bellis", Research Hospital, Castellana Grotte, 70124 Bari, Italy
| | - Claudia L Satizabal
- Department of Neurology, Boston University School of Medicine, Boston, MA 02118, USA; Framingham Heart Study, Framingham, MA 01702, USA; Glenn Biggs Institute for Alzheimer's & Neurodegenerative Diseases and Department of Population Health Sciences, University of Texas Health Sciences Center, San Antonio, TX 78229, USA
| | - John M Schweinfurth
- Department of Otolaryngology and Communicative Sciences, The University of Mississippi Medical Center, Jackson, MS 39216, USA
| | - Sudha Seshadri
- Department of Neurology, Boston University School of Medicine, Boston, MA 02118, USA; Framingham Heart Study, Framingham, MA 01702, USA; Glenn Biggs Institute for Alzheimer's & Neurodegenerative Diseases and Department of Population Health Sciences, University of Texas Health Sciences Center, San Antonio, TX 78229, USA
| | - Eric Shiroma
- Laboratory of Epidemiology and Population Sciences, National Institute on Aging, Baltimore, MD 21224, USA
| | - Eldad Shulman
- Department of Human Molecular Genetics & Biochemistry, Sackler School of Medicine, Tel Aviv University, 69978 Tel Aviv, Israel
| | - Eleanor Simonsick
- Longitudinal Studies Section, Translational Gerontology Branch, National Institute on Aging, Baltimore, MD 21224, USA
| | - Christopher Spankovich
- Department of Otolaryngology and Communicative Sciences, The University of Mississippi Medical Center, Jackson, MS 39216, USA
| | - Anke Tropitzsch
- Department of Otolaryngology-Head & Neck Surgery, University of Tübingen Medical Center, 72076 Tübingen, Germany
| | - Volker M Lauschke
- Department of Physiology and Pharmacology, Karolinska Institutet, 17177 Stockholm, Sweden
| | - Patrick F Sullivan
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, 17177 Stockholm, Sweden; Department of Genetics, University of North Carolina, Chapel Hill, NC 27516, USA
| | - Andre Goedegebure
- Department of Otorhinolaryngology, Erasmus Medical Center, 3015 CE Rotterdam, the Netherlands
| | - Christopher R Cederroth
- Department of Physiology and Pharmacology, Karolinska Institutet, 17177 Stockholm, Sweden; National Institute for Health Research (NIHR) Nottingham Biomedical Research Centre, Nottingham University Hospitals NHS Trust, Ropewalk House, NG1 5DU Nottingham, UK; Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, NG7 2UH Nottingham, UK.
| | - Frances M K Williams
- Department of Twin Research and Genetic Epidemiology, King's College London, London, UK
| | - Andries Paul Nagtegaal
- Department of Otorhinolaryngology, Erasmus Medical Center, 3015 CE Rotterdam, the Netherlands
| |
Collapse
|
40
|
Smeal M, Snapp H, Ausili S, Holcomb M, Prentiss S. Effects of Bilateral Cochlear Implantation on Binaural Listening Tasks for Younger and Older Adults. Audiol Neurootol 2022; 27:377-387. [PMID: 35636400 DOI: 10.1159/000523914] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2021] [Accepted: 02/24/2022] [Indexed: 11/19/2022] Open
Abstract
PURPOSE This study investigated the objective and subjective benefit of a second cochlear implant (CI) on binaural listening tasks of speech understanding in noise and localization in younger and older adults. We aimed to determine whether the aging population can utilize binaural cues and obtain comparable benefits from bilateral CI (BIL_CI) when compared to the younger population. METHODS Twenty-nine adults with severe to profound bilateral sensorineural hearing loss were included. Participants were evaluated in two conditions, better CI (BE_CI) alone and BIL_CI, using AzBio and Bamford-Kowal-Bench (BKB) sentence-in-noise tests. Localization tasks were completed in the BIL_CI condition using a broadband stimulus, low-frequency stimuli, and high-frequency stimuli. A subjective questionnaire was administered to assess satisfaction with CI. RESULTS Older age was significantly associated with poorer performance on AzBio +5 dB signal-to-noise ratio (SNR) and BKB speech in noise (SIN); however, improvements from BE_CI to BIL_CI were observed across all ages. In the AzBio +5 condition, nearly half of all participants achieved a significant improvement from BE_CI to BIL_CI, with the majority of those occurring in patients younger than 65 years of age. Conversely, the majority of participants who achieved a significant improvement in BKB-SIN were adults >65 years of age. Years of BIL_CI experience and time between implants were not associated with performance. For localization, mean absolute error increased with age for low and high narrowband noise, but not for the broadband noise. Response gain was negatively correlated with age for all localization stimuli. Neither BIL_CI listening experience nor time between implants significantly impacted localization ability. Subjectively, participants reported a reduction in disability with the addition of the second CI. There was no observed relationship between age or speech recognition score and satisfaction with BIL_CI. CONCLUSION Overall performance on binaural listening tasks was poorer in older adults than in younger adults. However, older adults were able to achieve significant benefit from the addition of a second CI, and performance on binaural tasks was not correlated with overall device satisfaction. The significance of the improvement was task- and stimulus-dependent but suggested that a critical limit may exist for optimal performance on SIN tasks for CI users. Specifically, older adults require at least a +8 dB SNR to understand 50% of speech postoperatively; therefore, solely utilizing a fixed +5 dB SNR preoperatively to qualify CI candidates is not recommended, as this test condition may introduce limitations in demonstrating CI benefit.
Collapse
Affiliation(s)
- Molly Smeal
- Department of Otolaryngology, University of Miami, Miami, Florida, USA
| | - Hillary Snapp
- Department of Otolaryngology, University of Miami, Miami, Florida, USA
| | - Sebastian Ausili
- Department of Otolaryngology, University of Miami, Miami, Florida, USA
| | - Meredith Holcomb
- Department of Otolaryngology, University of Miami, Miami, Florida, USA
| | - Sandra Prentiss
- Department of Otolaryngology, University of Miami, Miami, Florida, USA
| |
Collapse
|
41
|
Gillis M, Decruy L, Vanthornhout J, Francart T. Hearing loss is associated with delayed neural responses to continuous speech. Eur J Neurosci 2022; 55:1671-1690. [PMID: 35263814 DOI: 10.1111/ejn.15644] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2021] [Revised: 02/21/2022] [Accepted: 02/23/2022] [Indexed: 11/28/2022]
Abstract
We investigated the impact of hearing loss on the neural processing of speech. Using a forward modeling approach, we compared the neural responses to continuous speech of 14 adults with sensorineural hearing loss with those of age-matched normal-hearing peers. Compared to their normal-hearing peers, hearing-impaired listeners had increased neural tracking and delayed neural responses to continuous speech in quiet. The latency also increased with the degree of hearing loss. As speech understanding decreased, neural tracking decreased in both populations; however, a significantly different trend was observed for the latency of the neural responses. For normal-hearing listeners, the latency increased with increasing background noise level. However, for hearing-impaired listeners, this increase was not observed. Our results support the idea that the neural response latency indicates the efficiency of neural speech processing: more or different brain regions are involved in processing speech, which causes longer communication pathways in the brain. These longer communication pathways hamper information integration among these brain regions, which is reflected in longer processing times. Altogether, this suggests decreased neural speech processing efficiency in hearing-impaired listeners, as more time and more or different brain regions are required to process speech. Our results suggest that this reduction in neural speech processing efficiency occurs gradually as hearing deteriorates. From our results, it is apparent that sound amplification does not solve hearing loss. Even when listening to speech in silence at a comfortable loudness, hearing-impaired listeners process speech less efficiently.
Collapse
Affiliation(s)
- Marlies Gillis
- KU Leuven, Department of Neurosciences, ExpORL, Leuven, Belgium
| | - Lien Decruy
- Institute for Systems Research, University of Maryland, College Park, MD, USA
| | | | - Tom Francart
- KU Leuven, Department of Neurosciences, ExpORL, Leuven, Belgium
| |
Collapse
|
42
|
Schmitt R, Meyer M, Giroud N. Better speech-in-noise comprehension is associated with enhanced neural speech tracking in older adults with hearing impairment. Cortex 2022; 151:133-146. [DOI: 10.1016/j.cortex.2022.02.017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2021] [Revised: 12/19/2021] [Accepted: 02/03/2022] [Indexed: 11/27/2022]
|
43
|
Gordon-Salant S, Schwartz MS, Oppler KA, Yeni-Komshian GH. Detection and Recognition of Asynchronous Auditory/Visual Speech: Effects of Age, Hearing Loss, and Talker Accent. Front Psychol 2022; 12:772867. [PMID: 35153900 PMCID: PMC8832148 DOI: 10.3389/fpsyg.2021.772867] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2021] [Accepted: 12/21/2021] [Indexed: 11/13/2022] Open
Abstract
This investigation examined age-related differences in auditory-visual (AV) integration as reflected on perceptual judgments of temporally misaligned AV English sentences spoken by native English and native Spanish talkers. In the detection task, it was expected that slowed auditory temporal processing of older participants, relative to younger participants, would be manifest as a shift in the range over which participants would judge asynchronous stimuli as synchronous (referred to as the "AV simultaneity window"). The older participants were also expected to exhibit greater declines in speech recognition for asynchronous AV stimuli than younger participants. Talker accent was hypothesized to influence listener performance, with older listeners exhibiting a greater narrowing of the AV simultaneity window and much poorer recognition of asynchronous AV foreign-accented speech compared to younger listeners. Participant groups included younger and older participants with normal hearing and older participants with hearing loss. Stimuli were video recordings of sentences produced by native English and native Spanish talkers. The video recordings were altered in 50 ms steps by delaying either the audio or video onset. Participants performed a detection task in which they judged whether the sentences were synchronous or asynchronous, and performed a recognition task for multiple synchronous and asynchronous conditions. Both the detection and recognition tasks were conducted at the individualized signal-to-noise ratio (SNR) corresponding to approximately 70% correct speech recognition performance for synchronous AV sentences. Older listeners with and without hearing loss generally showed wider AV simultaneity windows than younger listeners, possibly reflecting slowed auditory temporal processing in auditory lead conditions and reduced sensitivity to asynchrony in auditory lag conditions. However, older and younger listeners were affected similarly by misalignment of auditory and visual signal onsets on the speech recognition task. This suggests that older listeners are negatively impacted by temporal misalignments for speech recognition, even when they do not notice that the stimuli are asynchronous. Overall, the findings show that when listener performance is equated for simultaneous AV speech signals, age effects are apparent in detection judgments but not in recognition of asynchronous speech.
Collapse
Affiliation(s)
- Sandra Gordon-Salant
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, United States
| | | | | | | |
Collapse
|
44
|
Scheuregger O, Hjortkjær J, Dau T. Identification and Discrimination of Sound Textures in Hearing-Impaired and Older Listeners. Trends Hear 2021; 25:23312165211065608. [PMID: 34939472 PMCID: PMC8721370 DOI: 10.1177/23312165211065608] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Sound textures are a broad class of sounds defined by their homogeneous temporal structure. It has been suggested that sound texture perception is mediated by time-averaged summary statistics measured from early stages of the auditory system. The ability of young normal-hearing (NH) listeners to identify synthetic sound textures increases as the statistics of the synthetic texture approach those of its real-world counterpart. In sound texture discrimination, young NH listeners utilize the fine temporal stimulus information for short-duration stimuli, whereas they switch to a time-averaged statistical representation as stimulus duration increases. The present study investigated how younger and older listeners with a sensorineural hearing impairment perform in the corresponding texture identification and discrimination tasks in which the stimuli were amplified to compensate for the individual listeners' loss of audibility. In both hearing-impaired (HI) listeners and NH controls, sound texture identification performance increased as the number of statistics imposed during the synthesis stage increased, but hearing impairment was accompanied by a significant reduction in overall identification accuracy. Sound texture discrimination performance was measured across listener groups categorized by age and hearing loss. Sound texture discrimination performance was unaffected by hearing loss at all excerpt durations. The older listeners' sound texture and exemplar discrimination performance decreased for signals of short excerpt duration, with older HI listeners performing better than older NH listeners. The results suggest that the time-averaged statistical representations of sound textures provide listeners with cues that are robust to the effects of age and sensorineural hearing loss.
Collapse
Affiliation(s)
- Oliver Scheuregger
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
| | - Jens Hjortkjær
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark; Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, Kettegård Allé 30, DK-2650 Hvidovre, Denmark
| | - Torsten Dau
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
| |
Collapse
|
45
|
Palana J, Schwartz S, Tager-Flusberg H. Evaluating the Use of Cortical Entrainment to Measure Atypical Speech Processing: A Systematic Review. Neurosci Biobehav Rev 2021; 133:104506. [PMID: 34942267 DOI: 10.1016/j.neubiorev.2021.12.029] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2020] [Revised: 12/12/2021] [Accepted: 12/18/2021] [Indexed: 11/30/2022]
Abstract
BACKGROUND Cortical entrainment has emerged as a promising means for measuring continuous speech processing in young, neurotypical adults. However, its utility for capturing atypical speech processing has not been systematically reviewed. OBJECTIVES Synthesize evidence regarding the merit of measuring cortical entrainment to capture atypical speech processing and recommend avenues for future research. METHOD We systematically reviewed publications investigating entrainment to continuous speech in populations with auditory processing differences. RESULTS In the 25 publications reviewed, most studies were conducted on older and/or hearing-impaired adults, for whom slow-wave entrainment to speech was often heightened compared to controls. Research conducted on populations with neurodevelopmental disorders, in whom slow-wave entrainment was often reduced, was less common. Across publications, findings highlighted associations between cortical entrainment and speech processing performance differences. CONCLUSIONS Measures of cortical entrainment offer a useful means of capturing speech processing differences, and future research should leverage them more extensively when studying populations with neurodevelopmental disorders.
Collapse
Affiliation(s)
- Joseph Palana
- Department of Psychological and Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA, 02215, USA; Laboratories of Cognitive Neuroscience, Division of Developmental Medicine, Harvard Medical School, Boston Children's Hospital, 1 Autumn Street, Boston, MA, 02215, USA
| | - Sophie Schwartz
- Department of Psychological and Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA, 02215, USA
| | - Helen Tager-Flusberg
- Department of Psychological and Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA, 02215, USA.
| |
Collapse
|
46
|
Märcher-Rørsted J, Encina-Llamas G, Dau T, Liberman MC, Wu PZ, Hjortkjær J. Age-related reduction in frequency-following responses as a potential marker of cochlear neural degeneration. Hear Res 2021; 414:108411. [PMID: 34929535 DOI: 10.1016/j.heares.2021.108411] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Revised: 12/03/2021] [Accepted: 12/06/2021] [Indexed: 11/28/2022]
Abstract
Healthy aging may be associated with neural degeneration in the cochlea even before clinical hearing loss emerges. Reduction in frequency-following responses (FFRs) to tonal carriers in older clinically normal-hearing listeners has previously been reported, and has been argued to reflect an age-dependent decline in temporal processing in the central auditory system. Alternatively, age-dependent loss of auditory nerve fibers (ANFs) may have little effect on audiometric sensitivity and yet compromise the precision of neural phase-locking relying on joint activity across populations of fibers. This peripheral loss may, in turn, contribute to reduced neural synchrony in the brainstem as reflected in the FFR. Here, we combined human electrophysiology and auditory nerve (AN) modeling to investigate whether age-related changes in the FFR would be consistent with peripheral neural degeneration. FFRs elicited by pure tones and frequency sweeps at carrier frequencies between 200 and 1200 Hz were obtained in older (ages 48-76) and younger (ages 20-30) listeners, both groups having clinically normal audiometric thresholds up to 6 kHz. The same stimuli were presented to a computational model of the AN in which age-related loss of hair cells or ANFs was modelled using human histopathological data. In the older human listeners, the measured FFRs to both sweeps and pure tones were found to be reduced across the carrier frequencies examined. These FFR reductions were consistent with model simulations of age-related ANF loss. In model simulations, the phase-locked response produced by the population of remaining fibers decreased proportionally with increasing loss of the ANFs. Basal-turn loss of inner hair cells also reduced synchronous activity at lower frequencies, albeit to a lesser degree. Model simulations of age-related threshold elevation further indicated that outer hair cell dysfunction had no negative effect on phase-locked AN responses. These results are consistent with a peripheral source of the FFR reductions observed in older normal-hearing listeners, and indicate that FFRs at lower carrier frequencies may potentially be a sensitive marker of peripheral neural degeneration.
Collapse
Affiliation(s)
- Jonatan Märcher-Rørsted
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Ørsteds Plads, Building 352, DK-2800 Kgs. Lyngby, Denmark
| | - Gerard Encina-Llamas
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Ørsteds Plads, Building 352, DK-2800 Kgs. Lyngby, Denmark
| | - Torsten Dau
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Ørsteds Plads, Building 352, DK-2800 Kgs. Lyngby, Denmark
| | - M Charles Liberman
- Eaton-Peabody Laboratories and Department of Otolaryngology, Head and Neck Surgery, Massachusetts Eye and Ear, Boston, MA 02114 USA
| | - Pei-Zhe Wu
- Eaton-Peabody Laboratories and Department of Otolaryngology, Head and Neck Surgery, Massachusetts Eye and Ear, Boston, MA 02114 USA
| | - Jens Hjortkjær
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Ørsteds Plads, Building 352, DK-2800 Kgs. Lyngby, Denmark; Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, Kettegård Allé 30, DK-2650 Hvidovre, Denmark.
| |
Collapse
|
47
|
Gransier R, Wouters J. Neural auditory processing of parameterized speech envelopes. Hear Res 2021; 412:108374. [PMID: 34800800 DOI: 10.1016/j.heares.2021.108374] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/13/2021] [Revised: 10/01/2021] [Accepted: 10/13/2021] [Indexed: 10/19/2022]
Abstract
Speech perception depends strongly on the neural processing of the speech envelope. Several auditory processing deficits are hypothesized to result in a reduction in fidelity of the neural representation of the speech envelope across the auditory pathway. Furthermore, this reduction in fidelity is associated with supra-threshold speech processing deficits. Investigating the mechanisms that affect the neural encoding of the speech envelope can be of great value to gain insight into the different mechanisms that account for this reduced neural representation, and to develop stimulation strategies for hearing prostheses that aim to restore it. In this perspective, we discuss the importance of neural assessment of phase-locking to the speech envelope from an audiological view and introduce the Temporal Envelope Speech Tracking (TEMPEST) stimulus framework, which enables the electrophysiological assessment of envelope processing across the auditory pathway in a systematic and standardized way. We postulate that this framework can be used to gain insight into the salience of speech-like temporal envelopes in the neural code and to evaluate the effectiveness of stimulation strategies that aim to restore temporal processing across the auditory pathway with auditory prostheses.
Collapse
Affiliation(s)
- Robin Gransier
- ExpORL, Department of Neurosciences, KU Leuven, 3000 Leuven, Belgium; Leuven Brain Institute, KU Leuven, 3000 Leuven, Belgium.
| | - Jan Wouters
- ExpORL, Department of Neurosciences, KU Leuven, 3000 Leuven, Belgium; Leuven Brain Institute, KU Leuven, 3000 Leuven, Belgium
| |
Collapse
|
48
|
Gnanateja GN, Rupp K, Llanos F, Remick M, Pernia M, Sadagopan S, Teichert T, Abel TJ, Chandrasekaran B. Frequency-Following Responses to Speech Sounds Are Highly Conserved across Species and Contain Cortical Contributions. eNeuro 2021; 8:ENEURO.0451-21.2021. [PMID: 34799409 PMCID: PMC8704423 DOI: 10.1523/eneuro.0451-21.2021] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2021] [Accepted: 11/02/2021] [Indexed: 11/21/2022] Open
Abstract
Time-varying pitch is a vital cue for human speech perception. Neural processing of time-varying pitch has been extensively assayed using scalp-recorded frequency-following responses (FFRs), an electrophysiological signal thought to reflect integrated phase-locked neural ensemble activity from subcortical auditory areas. Emerging evidence increasingly points to a putative contribution of auditory cortical ensembles to the scalp-recorded FFRs. However, the properties of cortical FFRs and precise characterization of laminar sources are still unclear. Here we used direct human intracortical recordings as well as extracranial and intracranial recordings from macaques and guinea pigs to characterize the properties of cortical sources of FFRs to time-varying pitch patterns. We found robust FFRs in the auditory cortex across all species. We leveraged representational similarity analysis as a translational bridge to characterize similarities between the human and animal models. Laminar recordings in animal models showed FFRs emerging primarily from the thalamorecipient layers of the auditory cortex. FFRs arising from these cortical sources significantly contributed to the scalp-recorded FFRs via volume conduction. Our research paves the way for a wide array of studies to investigate the role of cortical FFRs in auditory perception and plasticity.
Collapse
Affiliation(s)
- G Nike Gnanateja
- Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
| | - Kyle Rupp
- Department of Neurological Surgery, UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania 15213
| | - Fernando Llanos
- Department of Linguistics, The University of Texas at Austin, Austin, Texas 78712
| | - Madison Remick
- Department of Neurological Surgery, UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania 15213
| | - Marianny Pernia
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
| | - Srivatsun Sadagopan
- Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
| | - Tobias Teichert
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Department of Psychiatry, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
| | - Taylor J Abel
- Department of Neurological Surgery, UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
| | - Bharath Chandrasekaran
- Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
| |
Collapse
|
49
|
Herrmann B, Maess B, Johnsrude IS. A neural signature of regularity in sound is reduced in older adults. Neurobiol Aging 2021; 109:1-10. [PMID: 34634748 DOI: 10.1016/j.neurobiolaging.2021.09.011] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Revised: 09/03/2021] [Accepted: 09/08/2021] [Indexed: 01/21/2023]
Abstract
Sensitivity to repetitions in sound amplitude and frequency is crucial for sound perception. As with other aspects of sound processing, sensitivity to such patterns may change with age, and may help explain some age-related changes in hearing, such as difficulty segregating speech from background sound. We recorded magnetoencephalography to characterize differences in the processing of sound patterns between younger and older adults. We presented tone sequences that either contained a pattern (made of a repeated set of tones) or did not contain a pattern. We show that auditory cortex in older, compared to younger, adults is hyperresponsive to sound onsets, but that sustained neural activity in auditory cortex, indexing the processing of a sound pattern, is reduced. Hence, the sensitivity of neural populations in auditory cortex fundamentally differs between younger and older individuals, overresponding to sound onsets while underresponding to patterns in sounds. This may help to explain some age-related changes in hearing, such as increased sensitivity to distracting sounds and difficulties tracking speech in the presence of other sound.
Collapse
Affiliation(s)
- Björn Herrmann
- Department of Psychology & Brain and Mind Institute, The University of Western Ontario, London, ON, Canada; Rotman Research Institute, Baycrest, North York, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada.
| | - Burkhard Maess
- Brain Networks Unit, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Ingrid S Johnsrude
- Department of Psychology & Brain and Mind Institute, The University of Western Ontario, London, ON, Canada; School of Communication Sciences & Disorders, The University of Western Ontario, London, ON, Canada
| |
Collapse
|
50
|
Patro C, Kreft HA, Wojtczak M. The search for correlates of age-related cochlear synaptopathy: Measures of temporal envelope processing and spatial release from speech-on-speech masking. Hear Res 2021; 409:108333. [PMID: 34425347 PMCID: PMC8424701 DOI: 10.1016/j.heares.2021.108333] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/02/2020] [Revised: 07/17/2021] [Accepted: 08/04/2021] [Indexed: 01/13/2023]
Abstract
Older adults often experience difficulties understanding speech in adverse listening conditions. It has been suggested that for listeners with normal and near-normal audiograms, these difficulties may, at least in part, arise from age-related cochlear synaptopathy. The aim of this study was to assess whether performance on auditory tasks relying on temporal envelope processing reveals age-related deficits consistent with those expected from cochlear synaptopathy. Listeners aged 20 to 66 years were tested on a series of psychophysical, electrophysiological, and speech-perception measures using stimulus configurations that promote coding by medium- and low-spontaneous-rate auditory-nerve fibers. Cognitive measures of executive function were obtained to control for age-related cognitive decline. Results from the different tests were not significantly correlated with each other despite a presumed reliance on common mechanisms involved in temporal envelope processing. Only gap-detection thresholds for a tone in noise and spatial release from speech-on-speech masking were significantly correlated with age. Increasing age was related to impaired cognitive executive function. Multivariate regression analyses showed that individual differences in hearing sensitivity, envelope-based measures, and scores from nonauditory cognitive tests did not significantly contribute to the variability in spatial release from speech-on-speech masking for small target/masker spatial separation, while age was a significant contributor.
Collapse
Affiliation(s)
- Chhayakanta Patro
- Department of Psychology, University of Minnesota, N640 Elliott Hall, 75 East River Parkway, Minneapolis, MN 55455, USA.
| | - Heather A Kreft
- Department of Psychology, University of Minnesota, N640 Elliott Hall, 75 East River Parkway, Minneapolis, MN 55455, USA
| | - Magdalena Wojtczak
- Department of Psychology, University of Minnesota, N640 Elliott Hall, 75 East River Parkway, Minneapolis, MN 55455, USA
| |
Collapse
|