1. Mai G, Jiang Z, Wang X, Tachtsidis I, Howell P. Neuroplasticity of Speech-in-Noise Processing in Older Adults Assessed by Functional Near-Infrared Spectroscopy (fNIRS). Brain Topogr 2024; 37:1139-1157. PMID: 39042322; PMCID: PMC11408581; DOI: 10.1007/s10548-024-01070-2.
Abstract
Functional near-infrared spectroscopy (fNIRS), a non-invasive optical neuroimaging technique that is portable and acoustically silent, has become a promising tool for evaluating auditory brain functions in hearing-vulnerable individuals. This study used fNIRS for the first time to evaluate neuroplasticity of speech-in-noise processing in older adults. Ten older adults, most of whom had mild-to-moderate hearing loss, completed a 4-week speech-in-noise training programme. Their speech-in-noise performance and fNIRS brain responses to speech (auditory sentences in noise), non-speech (spectrally rotated speech in noise) and visual (flashing chequerboards) stimuli were evaluated pre-training (T0) and post-training (immediately after training, T1; and after a 4-week retention period, T2). Behaviourally, speech-in-noise performance improved after retention (T2 vs. T0) but not immediately after training (T1 vs. T0). Neurally, brain responses to speech vs. non-speech decreased significantly in the left auditory cortex after retention (T2 vs. T0 and T2 vs. T1), which we interpret as suppressed processing of background noise during speech listening, accompanying the significant behavioural improvements. Meanwhile, functional connectivity within and between multiple regions of the temporal, parietal and frontal lobes was significantly enhanced in the speech condition after retention (T2 vs. T0). We also found neural changes before the emergence of significant behavioural improvements. Compared to pre-training, responses to speech vs. non-speech in the left frontal/prefrontal cortex decreased significantly both immediately after training (T1 vs. T0) and after retention (T2 vs. T0), possibly reflecting reduced listening effort. Finally, connectivity between auditory and higher-level non-auditory (parietal and frontal) cortices in response to visual stimuli decreased significantly immediately after training (T1 vs. T0), indicating reduced cross-modal takeover of speech-related regions during visual processing. The results thus show that neuroplasticity can be observed not only alongside, but also before, behavioural changes in speech-in-noise perception. To our knowledge, this is the first fNIRS study to evaluate speech-based auditory neuroplasticity in older adults. It thus has important implications for current research, illustrating the promise of fNIRS for detecting neuroplasticity in hearing-vulnerable individuals.
Affiliation(s)
- Guangting Mai: National Institute for Health and Care Research Nottingham Biomedical Research Centre, Nottingham, UK; Academic Unit of Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK; Division of Psychology and Language Sciences, University College London, London, UK; Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Zhizhao Jiang: Division of Psychology and Language Sciences, University College London, London, UK; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Xinran Wang: Division of Psychology and Language Sciences, University College London, London, UK
- Ilias Tachtsidis: Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Peter Howell: Division of Psychology and Language Sciences, University College London, London, UK
2. Zekveld AA, Kramer SE, Heslenfeld DJ, Versfeld NJ, Vriend C. Hearing Impairment: Reduced Pupil Dilation Response and Frontal Activation During Degraded Speech Perception. J Speech Lang Hear Res 2024:1-18. PMID: 39392910; DOI: 10.1044/2024_jslhr-24-00017.
Abstract
PURPOSE: A relevant aspect of listening is the effort required during speech processing, which can be assessed by pupillometry. Here, we assessed the pupil dilation response of normal-hearing (NH) and hard-of-hearing (HH) individuals while they listened to clear, masked, or degraded sentences. We combined this assessment with functional magnetic resonance imaging (fMRI) to investigate the neural correlates of the pupil dilation response. METHOD: Seventeen NH participants (mean age = 46 years) were compared to 17 HH participants (mean age = 45 years) who were individually matched in age and educational level. Participants repeated sentences that were presented clearly, distorted, or masked. The intelligibility of masked and distorted sentences was set at 50% correct. Silent baseline trials were presented as well. Performance measures, pupil dilation responses, and fMRI data were acquired. RESULTS: HH individuals had overall poorer speech reception than NH participants, except for noise-vocoded speech. In addition, an interaction effect was observed, with smaller pupil dilation responses in HH than in NH listeners for the degraded speech conditions. Hearing impairment was associated with higher activation across conditions in the left superior temporal gyrus, as compared to the silent baseline. However, a region-of-interest analysis indicated lower activation during degraded speech relative to clear speech in bilateral frontal regions and the insular cortex for HH compared to NH listeners. Hearing impairment was also associated with a weaker relation between the pupil response and activation in the right inferior frontal gyrus. Overall, degraded speech evoked higher frontal activation than clear speech. CONCLUSION: Brain areas associated with attentional and cognitive-control processes may be increasingly recruited when speech is degraded and are related to the pupil dilation response, but this relationship is weaker in HH listeners.
Supplemental material: https://doi.org/10.23641/asha.27162135
Affiliation(s)
- Adriana A Zekveld: Otolaryngology-Head and Neck Surgery, Amsterdam UMC location Vrije Universiteit Amsterdam, the Netherlands; Amsterdam Public Health Research Institute, the Netherlands; Institute of Psychology, Leiden University, the Netherlands
- Sophia E Kramer: Otolaryngology-Head and Neck Surgery, Amsterdam UMC location Vrije Universiteit Amsterdam, the Netherlands; Amsterdam Public Health Research Institute, the Netherlands
- Dirk J Heslenfeld: Faculty of Behavioural and Movement Sciences, Experimental and Applied Psychology, VU University, Amsterdam, the Netherlands
- Niek J Versfeld: Otolaryngology-Head and Neck Surgery, Amsterdam UMC location Vrije Universiteit Amsterdam, the Netherlands; Amsterdam Public Health Research Institute, the Netherlands
- Chris Vriend: Department of Psychiatry and Department of Anatomy and Neuroscience, Amsterdam UMC, Vrije Universiteit Amsterdam, the Netherlands; Brain Imaging, Amsterdam Neuroscience, the Netherlands
3. Brisson V, Tremblay P. Assessing the Impact of Transcranial Magnetic Stimulation on Speech Perception in Noise. J Cogn Neurosci 2024; 36:2184-2207. PMID: 39023366; DOI: 10.1162/jocn_a_02224.
Abstract
Healthy aging is associated with reduced speech perception in noise (SPiN) abilities. The etiology of these difficulties remains elusive, which prevents the development of new strategies to optimize the speech processing network and reduce these difficulties. The objective of this study was to determine whether sublexical SPiN performance can be enhanced by applying transcranial magnetic stimulation (TMS) to three regions involved in speech processing: the left posterior temporal sulcus, the left superior temporal gyrus, and the left ventral premotor cortex. The second objective was to assess the impact of several factors (age, baseline performance, stimulation target, brain structure, and activity) on post-TMS SPiN improvement. The results revealed that participants with lower baseline performance were more likely to improve. Moreover, in older adults, cortical thickness within the target areas was negatively associated with performance improvement, whereas no such association was present in younger individuals. No differences between the targets were found. This study suggests that TMS can modulate sublexical SPiN performance, but that the strength and direction of the effects depend on a complex combination of contextual and individual factors.
Affiliation(s)
- Valérie Brisson: Université Laval, School of Rehabilitation Sciences, Québec, Canada; Centre de recherche CERVO, Québec, Canada
- Pascale Tremblay: Université Laval, School of Rehabilitation Sciences, Québec, Canada; Centre de recherche CERVO, Québec, Canada
4. Kuang C, Chen X, Chen F. Recognition of Emotional Prosody in Mandarin-Speaking Children: Effects of Age, Noise, and Working Memory. J Psycholinguist Res 2024; 53:68. PMID: 39180569; DOI: 10.1007/s10936-024-10108-2.
Abstract
Age, babble noise, and working memory have been found to affect the recognition of emotional prosody in non-tonal languages, yet little is known about how they influence tone-language-speaking children's recognition of emotional prosody. Drawing on the tectonic theory of Stroop effects and the Ease of Language Understanding (ELU) model, this study explored the effects of age, babble noise, and working memory on Mandarin-speaking children's understanding of emotional prosody. Sixty Mandarin-speaking children aged three to eight years and 20 Mandarin-speaking adults participated in this study. They were asked to recognize the happy or sad prosody of short sentences with different semantics (negative, neutral, or positive) produced by a male speaker. The results revealed that prosody-semantics congruity played a larger role in children than in adults for accurate recognition of emotional prosody in quiet, but a smaller role in children than in adults in noise. Furthermore, the effect of working memory on children's recognition accuracy was trivial regardless of listening condition, whereas for adults it was prominent in babble noise. The findings partially support the tectonic theory of Stroop effects, which highlights the perceptual enhancement generated by cross-channel congruity, and the ELU model, which underlines the importance of working memory for speech processing in noise. These results suggest that the development of emotional prosody recognition is a complex process influenced by the interplay among age, background noise, and working memory.
Affiliation(s)
- Chen Kuang: School of Foreign Languages, Hunan University, Lushannan Road No. 2, Yuelu District, Changsha City, Hunan Province, China
- Xiaoxiang Chen: School of Foreign Languages, Hunan University, Lushannan Road No. 2, Yuelu District, Changsha City, Hunan Province, China
- Fei Chen: School of Foreign Languages, Hunan University, Lushannan Road No. 2, Yuelu District, Changsha City, Hunan Province, China
5. MacLean J, Stirn J, Bidelman GM. Auditory-motor entrainment and listening experience shape the perceptual learning of concurrent speech. bioRxiv 2024:2024.07.18.604167 [Preprint]. PMID: 39071391; PMCID: PMC11275804; DOI: 10.1101/2024.07.18.604167.
Abstract
Background: Plasticity from auditory experience shapes the brain's encoding and perception of sound. Although prior research demonstrates that neural entrainment (i.e., brain-to-acoustic synchronization) aids speech perception, how long- and short-term plasticity influence entrainment to concurrent speech has not been investigated. Here, we explored neural entrainment mechanisms and the interplay between short- and long-term neuroplasticity for rapid auditory perceptual learning of concurrent speech sounds in young, normal-hearing musicians and nonmusicians. Method: Participants learned to identify double-vowel mixtures during ∼45 min training sessions with concurrent high-density EEG recordings. We examined the degree to which brain responses entrained to the speech-stimulus train (∼9 Hz) to investigate whether entrainment to speech prior to the behavioral decision predicted task performance. Source and directed functional connectivity analyses of the EEG probed whether behavior was driven by group differences in auditory-motor coupling. Results: Both musicians and nonmusicians showed rapid perceptual learning in accuracy with training. Interestingly, listeners' neural entrainment strength prior to target speech mixtures predicted behavioral identification performance; stronger neural synchronization was observed preceding incorrect compared to correct trial responses. We also found stark hemispheric biases in auditory-motor coupling during speech entrainment, with greater auditory-motor connectivity in the right than the left hemisphere for musicians (R>L) but not for nonmusicians (R=L). Conclusions: Our findings confirm stronger neuroacoustic synchronization and auditory-motor coupling during speech processing in musicians. Stronger neural entrainment to rapid stimulus trains preceding incorrect behavioral responses supports the notion that alpha-band (∼10 Hz) arousal/suppression in brain activity is an important modulator of trial-by-trial success in perceptual processing.
6. Jin X, Zhang L, Wu G, Wang X, Du Y. Compensation or Preservation? Different Roles of Functional Lateralization in Speech Perception of Older Non-musicians and Musicians. Neurosci Bull 2024. PMID: 38839688; DOI: 10.1007/s12264-024-01234-x.
Abstract
Musical training can counteract age-related decline in speech perception in noisy environments. However, it remains unclear whether older non-musicians and musicians rely on functional compensation or functional preservation to counteract the adverse effects of aging. This study utilized resting-state functional connectivity (FC) to investigate functional lateralization, a fundamental organizational feature of the brain, in older musicians (OM), older non-musicians (ONM), and young non-musicians (YNM). Results showed that OM outperformed ONM and achieved performance comparable to YNM in speech-in-noise and speech-in-speech tasks. ONM exhibited reduced lateralization compared with YNM in the lateralization index (LI) of intrahemispheric FC (LI_intra) in the cingulo-opercular network (CON) and in the LI of interhemispheric heterotopic FC (LI_he) in the language network (LAN). Conversely, OM showed higher neural alignment to YNM (i.e., a more similar lateralization pattern) than ONM in the CON, LAN, frontoparietal network (FPN), dorsal attention network (DAN), and default mode network (DMN), indicating preservation of youth-like lateralization patterns due to musical experience. Furthermore, in ONM, stronger left lateralization and lower alignment-to-young of LI_intra in the somatomotor network (SMN) and DAN, and of LI_he in the DMN, correlated with better speech performance, indicating a functional compensation mechanism. In contrast, stronger right-lateralized LI_intra in the FPN and DAN and higher alignment-to-young of LI_he in the LAN correlated with better performance in OM, suggesting a functional preservation mechanism. These findings highlight the differential roles of functional preservation and compensation of lateralization in speech perception in noise among elderly individuals with and without musical expertise, offering insights into theories of successful aging through the lens of functional lateralization and speech perception.
Affiliation(s)
- Xinhu Jin: Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China
- Lei Zhang: Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Guowei Wu: Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Xiuyi Wang: Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China
- Yi Du: Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, 200031, China; Chinese Institute for Brain Research, Beijing, 102206, China
7. Lai CYY, Ng PS, Chan AHD, Wong FCK. Effects of Auditory Training in Older Adults. J Speech Lang Hear Res 2023; 66:4137-4149. PMID: 37656601; DOI: 10.1044/2023_jslhr-22-00621.
Abstract
PURPOSE: This study examines the effects of an auditory training program on the auditory and cognitive abilities of older adults. Auditory rehabilitation programs are generally designed for hearing aid users, and studies have demonstrated benefits for them. In this study, we sought to understand whether such a training program can also benefit older adults who do not wear hearing aids, and whether cognitive benefits can be observed as a result of the training. METHOD: Sixty-four older adults were recruited and assigned to three groups: an experimental group (n = 20), an active control group (n = 21), and a no-training control group (n = 23). The experimental group underwent an auditory training program (Listening and Communication Enhancement [LACE]) during the training phase, the active control group listened to short audio clips, and the no-training control group did not participate in any program. An auditory test (Quick Speech-in-Noise [QuickSIN]) and a battery of cognitive tests were administered before and after training to assess participants' auditory ability, short-term memory, and attention. RESULTS: The results showed improvements in auditory and cognitive abilities during the training period. When assessing training effects by comparing pre- and posttraining performance, a significant improvement on the QuickSIN task was found in the training group but not in the other two groups. The other cognitive tests did not show any significant improvement; that is, LACE training did not benefit short-term memory or attention. The improved performance on short-term memory during training was not maintained in the posttraining session. CONCLUSION: Overall, the study extends the auditory benefit of LACE training to the typical aging population in terms of improved communication ability, but the effect of training on auditory abilities did not transfer to gains in cognitive abilities.
Affiliation(s)
- C-Y Yvonne Lai: Linguistics and Multilingual Studies, School of Humanities, Nanyang Technological University, Singapore; Institute of Population Health Sciences, National Health Research Institutes, Taiwan
- P S Ng: Linguistics and Multilingual Studies, School of Humanities, Nanyang Technological University, Singapore
- Alice H D Chan: Linguistics and Multilingual Studies, School of Humanities, Nanyang Technological University, Singapore
- Francis C K Wong: Linguistics and Multilingual Studies, School of Humanities, Nanyang Technological University, Singapore
8. Herrera C, Whittle N, Leek MR, Brodbeck C, Lee G, Barcenas C, Barnes S, Holshouser B, Yi A, Venezia JH. Cortical networks for recognition of speech with simultaneous talkers. Hear Res 2023; 437:108856. PMID: 37531847; DOI: 10.1016/j.heares.2023.108856.
Abstract
The relative contributions of superior temporal vs. inferior frontal and parietal networks to recognition of speech in a background of competing speech remain unclear, although the contributions themselves are well established. Here, we use fMRI with spectrotemporal modulation transfer function (ST-MTF) modeling to examine the speech information represented in temporal vs. frontoparietal networks for two speech recognition tasks with and without a competing talker. Specifically, 31 listeners completed two versions of a three-alternative forced choice competing speech task: "Unison" and "Competing", in which a female (target) and a male (competing) talker uttered identical or different phrases, respectively. Spectrotemporal modulation filtering (i.e., acoustic distortion) was applied to the two-talker mixtures and ST-MTF models were generated to predict brain activation from differences in spectrotemporal-modulation distortion on each trial. Three cortical networks were identified based on differential patterns of ST-MTF predictions and the resultant ST-MTF weights across conditions (Unison, Competing): a bilateral superior temporal (S-T) network, a frontoparietal (F-P) network, and a network distributed across cortical midline regions and the angular gyrus (M-AG). The S-T network and the M-AG network responded primarily to spectrotemporal cues associated with speech intelligibility, regardless of condition, but the S-T network responded to a greater range of temporal modulations, suggesting a more acoustically driven response. The F-P network responded to the absence of intelligibility-related cues in both conditions, but also to the absence (presence) of target-talker (competing-talker) vocal pitch in the Competing condition, suggesting a generalized response to signal degradation. Task performance was best predicted by activation in the S-T and F-P networks, but in opposite directions (S-T: more activation = better performance; F-P: vice versa). Moreover, S-T network predictions were entirely ST-MTF mediated, while F-P network predictions were ST-MTF mediated only in the Unison condition, suggesting an influence from non-acoustic sources (e.g., informational masking) in the Competing condition. Activation in the M-AG network was weakly positively correlated with performance, and this relation was entirely superseded by those in the S-T and F-P networks. Regarding contributions to speech recognition, we conclude: (a) superior temporal regions play a bottom-up, perceptual role that is not qualitatively dependent on the presence of competing speech; (b) frontoparietal regions play a top-down role that is modulated by competing speech and scales with listening effort; and (c) performance ultimately relies on dynamic interactions between these networks, with ancillary contributions from networks not involved in speech processing per se (e.g., the M-AG network).
Affiliation(s)
- Nicole Whittle: VA Loma Linda Healthcare System, Loma Linda, CA, United States
- Marjorie R Leek: VA Loma Linda Healthcare System, Loma Linda, CA, United States; Loma Linda University, Loma Linda, CA, United States
- Grace Lee: Loma Linda University, Loma Linda, CA, United States
- Samuel Barnes: Loma Linda University, Loma Linda, CA, United States
- Alex Yi: VA Loma Linda Healthcare System, Loma Linda, CA, United States; Loma Linda University, Loma Linda, CA, United States
- Jonathan H Venezia: VA Loma Linda Healthcare System, Loma Linda, CA, United States; Loma Linda University, Loma Linda, CA, United States
9. Zhang Y, Rennig J, Magnotti JF, Beauchamp MS. Multivariate fMRI responses in superior temporal cortex predict visual contributions to, and individual differences in, the intelligibility of noisy speech. Neuroimage 2023; 278:120271. PMID: 37442310; PMCID: PMC10460966; DOI: 10.1016/j.neuroimage.2023.120271.
Abstract
Humans have the unique ability to decode the rapid stream of language elements that constitute speech, even when it is contaminated by noise. Two reliable observations about noisy speech perception are that seeing the talker's face improves intelligibility and that individuals differ in their ability to perceive noisy speech. We introduce a multivariate BOLD fMRI measure that explains both observations. In two independent fMRI studies, clear and noisy speech was presented in visual, auditory, and audiovisual formats to thirty-seven participants who rated intelligibility. An event-related design was used to sort noisy speech trials by their intelligibility. Individual-differences multidimensional scaling was applied to fMRI response patterns in superior temporal cortex, and the dissimilarity between responses to clear speech and noisy (but intelligible) speech was measured. Neural dissimilarity was lower for audiovisual speech than for auditory-only speech, corresponding to the greater intelligibility of noisy audiovisual speech. Dissimilarity was also lower in participants with better noisy speech perception, corresponding to individual differences. These relationships held for both single-word and entire-sentence stimuli, suggesting that they were driven by intelligibility rather than the specific stimuli tested. A neural measure of perceptual intelligibility may aid in the development of strategies for helping those with impaired speech perception.
Affiliation(s)
- Yue Zhang: Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States; Department of Neurosurgery, Baylor College of Medicine, Houston, TX, United States
- Johannes Rennig: Division of Neuropsychology, Center of Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- John F Magnotti: Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Michael S Beauchamp: Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
10. Shatzer HE, Russo FA. Brightening the Study of Listening Effort with Functional Near-Infrared Spectroscopy: A Scoping Review. Semin Hear 2023; 44:188-210. PMID: 37122884; PMCID: PMC10147513; DOI: 10.1055/s-0043-1766105.
Abstract
Listening effort is a long-standing area of interest in auditory cognitive neuroscience. Prior research has used multiple techniques to shed light on the neurophysiological mechanisms underlying listening under challenging conditions. Functional near-infrared spectroscopy (fNIRS) is growing in popularity as a tool for cognitive neuroscience research, and its recent advances offer many potential advantages over other neuroimaging modalities for research on listening effort. This review introduces the basic science of fNIRS and its uses in auditory cognitive neuroscience. We also discuss its application in recently published studies of listening effort and consider future opportunities for studying effortful listening with fNIRS. After reading this article, the learner will know how fNIRS works, be able to summarize its uses for listening effort research, and be able to apply this knowledge toward the generation of future research in this area.
Affiliation(s)
- Hannah E. Shatzer: Department of Psychology, Toronto Metropolitan University, Toronto, Canada
- Frank A. Russo: Department of Psychology, Toronto Metropolitan University, Toronto, Canada
11. Zhang L, Wang X, Alain C, Du Y. Successful aging of musicians: Preservation of sensorimotor regions aids audiovisual speech-in-noise perception. Sci Adv 2023; 9:eadg7056. PMID: 37126550; PMCID: PMC10132752; DOI: 10.1126/sciadv.adg7056.
Abstract
Musicianship can mitigate age-related declines in audiovisual speech-in-noise perception. We tested whether this benefit originates from functional preservation or functional compensation by comparing fMRI responses of older musicians, older nonmusicians, and young nonmusicians identifying noise-masked audiovisual syllables. Older musicians outperformed older nonmusicians and showed comparable performance to young nonmusicians. Notably, older musicians retained similar neural specificity of speech representations in sensorimotor areas to young nonmusicians, while older nonmusicians showed degraded neural representations. In the same region, older musicians showed higher neural alignment to young nonmusicians than older nonmusicians, which was associated with their training intensity. In older nonmusicians, the degree of neural alignment predicted better performance. In addition, older musicians showed greater activation in frontal-parietal, speech motor, and visual motion regions and greater deactivation in the angular gyrus than older nonmusicians, which predicted higher neural alignment in sensorimotor areas. Together, these findings suggest that musicianship-related benefit in audiovisual speech-in-noise processing is rooted in preserving youth-like representations in sensorimotor regions.
Affiliation(s)
- Lei Zhang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
| | - Xiuyi Wang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
| | - Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON M6A 2E1, Canada
- Department of Psychology, University of Toronto, ON M8V 2S4, Canada
| | - Yi Du
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai 200031, China
- Chinese Institute for Brain Research, Beijing 102206, China
12
MacGregor LJ, Gilbert RA, Balewski Z, Mitchell DJ, Erzinçlioğlu SW, Rodd JM, Duncan J, Fedorenko E, Davis MH. Causal Contributions of the Domain-General (Multiple Demand) and the Language-Selective Brain Networks to Perceptual and Semantic Challenges in Speech Comprehension. Neurobiol Lang 2022; 3:665-698. [PMID: 36742011 PMCID: PMC9893226 DOI: 10.1162/nol_a_00081]
Abstract
Listening to spoken language engages domain-general multiple demand (MD; frontoparietal) regions of the human brain, in addition to domain-selective (frontotemporal) language regions, particularly when comprehension is challenging. However, there is limited evidence that the MD network makes a functional contribution to core aspects of understanding language. In a behavioural study of volunteers (n = 19) with chronic brain lesions, but without aphasia, we assessed the causal role of these networks in perceiving, comprehending, and adapting to spoken sentences made more challenging by acoustic-degradation or lexico-semantic ambiguity. We measured perception of and adaptation to acoustically degraded (noise-vocoded) sentences with a word report task before and after training. Participants with greater damage to MD but not language regions required more vocoder channels to achieve 50% word report, indicating impaired perception. Perception improved following training, reflecting adaptation to acoustic degradation, but adaptation was unrelated to lesion location or extent. Comprehension of spoken sentences with semantically ambiguous words was measured with a sentence coherence judgement task. Accuracy was high and unaffected by lesion location or extent. Adaptation to semantic ambiguity was measured in a subsequent word association task, which showed that availability of lower-frequency meanings of ambiguous words increased following their comprehension (word-meaning priming). Word-meaning priming was reduced for participants with greater damage to language but not MD regions. Language and MD networks make dissociable contributions to challenging speech comprehension: Using recent experience to update word meaning preferences depends on language-selective regions, whereas the domain-general MD network plays a causal role in reporting words from degraded speech.
Affiliation(s)
- Lucy J. MacGregor
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Rebecca A. Gilbert
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Zuzanna Balewski
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA
- Daniel J. Mitchell
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Jennifer M. Rodd
- Psychology and Language Sciences, University College London, London, UK
- John Duncan
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA
- Matthew H. Davis
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
13
Impact of Effortful Word Recognition on Supportive Neural Systems Measured by Alpha and Theta Power. Ear Hear 2022; 43:1549-1562. [DOI: 10.1097/aud.0000000000001211]
14
Ritz H, Wild CJ, Johnsrude IS. Parametric Cognitive Load Reveals Hidden Costs in the Neural Processing of Perfectly Intelligible Degraded Speech. J Neurosci 2022; 42:4619-4628. [PMID: 35508382 PMCID: PMC9186799 DOI: 10.1523/jneurosci.1777-21.2022]
Abstract
Speech is often degraded by environmental noise or hearing impairment. People can compensate for degradation, but this requires cognitive effort. Previous research has identified frontotemporal networks involved in effortful perception, but the materials in these studies were also less intelligible, so it is not clear whether activity reflected effort or intelligibility differences. We used functional magnetic resonance imaging to assess the degree to which spoken sentences were processed under distraction and whether this depended on speech quality, even when intelligibility of degraded speech was matched to that of clear speech (close to 100%). On each trial, male and female human participants either attended to a sentence or to a concurrent multiple object tracking (MOT) task that imposed parametric cognitive load. Activity in bilateral anterior insula reflected task demands; during the MOT task, activity increased as cognitive load increased, and during speech listening, activity increased as speech became more degraded. In marked contrast, activity in bilateral anterior temporal cortex was speech selective and gated by attention when speech was degraded. In this region, performance of the MOT task with a trivial load blocked processing of degraded speech, whereas processing of clear speech was unaffected. As load increased, responses to clear speech in these areas declined, consistent with reduced capacity to process it. This result dissociates cognitive control from speech processing; substantially less cognitive control is required to process clear speech than to understand even very mildly degraded, 100% intelligible speech. Perceptual and control systems clearly interact dynamically during real-world speech comprehension.
SIGNIFICANCE STATEMENT Speech is often perfectly intelligible even when degraded, for example, by background sound, phone transmission, or hearing loss. How does degradation alter cognitive demands? Here, we use fMRI to demonstrate a novel and critical role for cognitive control in the processing of mildly degraded but perfectly intelligible speech. We compare speech that is matched for intelligibility but differs in putative control demands, dissociating cognitive control from speech processing. We also impose a parametric cognitive load during perception, dissociating processes that depend on tasks from those that depend on available capacity. Our findings distinguish between frontal and temporal contributions to speech perception and reveal a hidden cost to processing mildly degraded speech, underscoring the importance of cognitive control for everyday speech comprehension.
Affiliation(s)
- Harrison Ritz
- Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, Rhode Island 02912
- Conor J Wild
- Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada
- Ingrid S Johnsrude
- Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada
- Departments of Psychology and Communication Sciences and Disorders, University of Western Ontario, London, Ontario N6A 3K7, Canada
15
Vaden KI, Teubner-Rhodes S, Ahlstrom JB, Dubno JR, Eckert MA. Evidence for cortical adjustments to perceptual decision criteria during word recognition in noise. Neuroimage 2022; 253:119042. [PMID: 35259524 PMCID: PMC9082296 DOI: 10.1016/j.neuroimage.2022.119042]
Abstract
Extensive increases in cingulo-opercular frontal activity are typically observed during speech recognition in noise tasks. This elevated activity has been linked to a word recognition benefit on the next trial, termed "adaptive control," but how this effect might be implemented has been unclear. The established link between perceptual decision making and cingulo-opercular function may provide an explanation for how those regions benefit subsequent word recognition. In this case, processes that support recognition such as raising or lowering the decision criteria for more accurate or faster recognition may be adjusted to optimize performance on the next trial. The current neuroimaging study tested the hypothesis that pre-stimulus cingulo-opercular activity reflects criterion adjustments that determine how much information to collect for word recognition on subsequent trials. Participants included middle-age and older adults (N = 30; age = 58.3 ± 8.8 years; m ± sd) with normal hearing or mild sensorineural hearing loss. During a sparse fMRI experiment, words were presented in multitalker babble at +3 dB or +10 dB signal-to-noise ratio (SNR), which participants were instructed to repeat aloud. Word recognition was significantly poorer with increasing participant age and lower SNR compared to higher SNR conditions. A perceptual decision-making model was used to characterize processing differences based on task response latency distributions. The model showed that significantly less sensory evidence was collected (i.e., lower criteria) for lower compared to higher SNR trials. Replicating earlier observations, pre-stimulus cingulo-opercular activity was significantly predictive of correct recognition on a subsequent trial. Individual differences showed that participants with higher criteria also benefitted the most from pre-stimulus activity. Moreover, trial-level criteria changes were significantly linked to higher versus lower pre-stimulus activity. 
These results suggest that the cingulo-opercular cortex contributes to criterion adjustments that optimize speech recognition task performance.
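The criterion adjustments described here can be illustrated with a minimal sequential-sampling sketch. This is not the authors' fitted model; the drift rate, criterion values, and noise level below are illustrative assumptions. Evidence accumulates step by step until it crosses a symmetric criterion; lowering the criterion means collecting less evidence, trading accuracy for speed.

```python
import numpy as np

def simulate_trials(drift, criterion, n_trials=2000, noise_sd=1.0, seed=0):
    """Random-walk evidence accumulation: noisy evidence steps with
    mean `drift` are summed until |evidence| exceeds `criterion`;
    the sign at threshold gives the response, the step count the
    decision time."""
    rng = np.random.default_rng(seed)
    rts, hits = [], []
    for _ in range(n_trials):
        evidence, steps = 0.0, 0
        while abs(evidence) < criterion:
            evidence += drift + noise_sd * rng.standard_normal()
            steps += 1
        rts.append(steps)
        hits.append(evidence > 0)          # upper boundary = correct
    return float(np.mean(rts)), float(np.mean(hits))

# A lower criterion collects less sensory evidence: faster, less accurate.
rt_low, acc_low = simulate_trials(drift=0.2, criterion=1.0)
rt_high, acc_high = simulate_trials(drift=0.2, criterion=3.0)
```

In this toy setting, raising the criterion lengthens decision times and raises accuracy, the same speed-accuracy trade-off that the study links to pre-stimulus cingulo-opercular activity.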
Affiliation(s)
- Kenneth I. Vaden
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States. Corresponding author: K.I. Vaden Jr
- Susan Teubner-Rhodes
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
- Department of Psychological Sciences, 226 Thach Hall, Auburn University, AL 36849-9027
- Jayne B. Ahlstrom
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
- Judy R. Dubno
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
- Mark A. Eckert
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
16
Age-related differences in the neural network interactions underlying the predictability gain. Cortex 2022; 154:269-286. [DOI: 10.1016/j.cortex.2022.05.020]
17
Hong L, Zeng Q, Li K, Luo X, Xu X, Liu X, Li Z, Fu Y, Wang Y, Zhang T, Chen Y, Liu Z, Huang P, Zhang M. Intrinsic Brain Activity of Inferior Temporal Region Increased in Prodromal Alzheimer's Disease With Hearing Loss. Front Aging Neurosci 2022; 13:772136. [PMID: 35153717 PMCID: PMC8831745 DOI: 10.3389/fnagi.2021.772136]
Abstract
Background and Objective Hearing loss (HL) is one of the modifiable risk factors for Alzheimer's disease (AD). However, the mechanism underlying HL's role in AD remains elusive. One candidate is the cognitive load hypothesis, which postulates that over-processing of degraded auditory signals in the auditory cortex leads to deficits in other cognitive functions. Given that mild cognitive impairment (MCI) is a prodromal stage of AD, untangling the association between HL and MCI might provide insight into the potential mechanism behind HL. Methods We included 85 cognitively normal (CN) subjects with no hearing loss (NHL), 24 CN subjects with HL, 103 MCI patients with NHL, and 23 MCI patients with HL from the ADNI database. All subjects underwent resting-state functional MRI and neuropsychological scale assessments. Fractional amplitude of low-frequency fluctuation (fALFF) was used to reflect spontaneous brain activity. A mixed-effects analysis was applied to explore the interactive effects between HL and cognitive status (GRF corrected, voxel p < 0.005, cluster p < 0.05, two-tailed). Then, FDG-PET data were included to further reflect regional neuronal abnormalities. Finally, Pearson correlation analysis was performed between imaging metrics and cognitive scores to explore the clinical significance (Bonferroni corrected, p < 0.05). Results The interactive effects were primarily located in the left superior temporal gyrus (STG) and bilateral inferior temporal gyrus (ITG). Post-hoc analysis showed that CN subjects with HL had lower fALFF in bilateral ITG than CN subjects with NHL. Compared to MCI patients with HL, CN subjects with HL had higher fALFF in the left STG and lower fALFF in bilateral ITG. In addition, CN subjects with HL had lower fALFF in the right ITG than MCI patients with NHL. Correlation analysis revealed that fALFF was associated with MMSE and ADNI-VS scores, while SUVR was associated with MMSE, MoCA, ADNI-EF and ADNI-Lan scores.
Conclusion HL had different effects at the CN and MCI stages. CN subjects showed increased spontaneous brain activity in the auditory cortex alongside decreased activity in the ITG. This pattern reversed with disease progression, manifesting in MCI as decreased activity in the auditory cortex alongside increased activity in the ITG. This is consistent with the cognitive load hypothesis as the mechanism underlying HL's contribution.
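fALFF, the spontaneous-activity metric used above, is the fraction of a voxel time series' amplitude spectrum that falls in a low-frequency band (commonly 0.01-0.08 Hz), which normalizes ALFF by broadband power. A minimal sketch on a single time series; the band edges and TR below are conventional illustrative values, not parameters taken from this study:

```python
import numpy as np

def falff(ts, tr, low=0.01, high=0.08):
    """Fractional ALFF: amplitude summed over a low-frequency band,
    divided by amplitude summed over the full spectrum (DC excluded)."""
    ts = np.asarray(ts, dtype=float)
    ts = ts - ts.mean()                      # remove DC offset
    freqs = np.fft.rfftfreq(ts.size, d=tr)   # frequencies in Hz
    amp = np.abs(np.fft.rfft(ts))            # amplitude spectrum
    band = (freqs >= low) & (freqs <= high)
    total = amp[1:].sum()                    # skip the DC bin
    return amp[band].sum() / total if total > 0 else 0.0
```

A series dominated by slow fluctuations yields fALFF near 1, while one dominated by faster fluctuations yields a value near 0; in practice this is computed voxel-wise after standard resting-state preprocessing.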
Affiliation(s)
- Luwei Hong
- Department of Radiology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Qingze Zeng
- Department of Radiology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Kaicheng Li
- Department of Radiology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Xiao Luo
- Department of Radiology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Xiaopei Xu
- Department of Radiology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Xiaocao Liu
- Department of Radiology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Zheyu Li
- Department of Neurology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Yanv Fu
- Department of Neurology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Yanbo Wang
- Department of Neurology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Tianyi Zhang
- Department of Neurology, Tongde Hospital of Zhejiang Province, Hangzhou, China
- Yanxing Chen
- Department of Neurology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Zhirong Liu
- Department of Neurology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Peiyu Huang
- Department of Radiology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Minming Zhang
- Department of Radiology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
18
Fitzhugh MC, Pa J. Longitudinal Changes in Resting-State Functional Connectivity and Gray Matter Volume Are Associated with Conversion to Hearing Impairment in Older Adults. J Alzheimers Dis 2022; 86:905-918. [PMID: 35147536 PMCID: PMC10796152 DOI: 10.3233/jad-215288]
Abstract
BACKGROUND Hearing loss was recently identified as a modifiable risk factor for dementia although the potential mechanisms explaining this relationship are unknown. OBJECTIVE The current study examined longitudinal change in resting-state fMRI functional connectivity and gray matter volume in individuals who developed a hearing impairment compared to those whose hearing remained normal. METHODS This study included 440 participants from the UK Biobank: 163 who had normal hearing at baseline and impaired hearing at follow-up (i.e., converters, mean age = 63.11±6.33, 53% female) and 277 who had normal hearing at baseline and maintained normal hearing at follow-up (i.e., non-converters, age = 63.31±5.50, 50% female). Functional connectivity was computed between a priori selected auditory seed regions (left and right Heschl's gyrus and cytoarchitectonic subregions Te1.0, Te1.1, and Te1.2) and select higher-order cognitive brain networks. Gray matter volume within these same regions was also obtained. RESULTS Converters had increased connectivity from left Heschl's gyrus to left anterior insula and from right Heschl's gyrus to right anterior insula, and decreased connectivity between right Heschl's gyrus and right hippocampus, compared to non-converters. Converters also had reduced gray matter volume in left hippocampus and left lateral visual cortex compared to non-converters. CONCLUSION These findings suggest that conversion to a hearing impairment is associated with altered brain functional connectivity and gray matter volume in the attention, memory, and visual processing regions that were examined in this study.
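Seed-based functional connectivity of the kind computed here is, at its core, a Pearson correlation between a seed region's time series and each target's, usually Fisher z-transformed before group statistics. A schematic sketch with synthetic series standing in for real regional signals (the seed/target labels and noise levels are assumptions for illustration, not this study's data or preprocessing):

```python
import numpy as np

def seed_connectivity(seed_ts, region_ts_list):
    """Correlate a seed time series with each target region's series
    and return Fisher z-transformed coefficients."""
    r = np.array([np.corrcoef(seed_ts, ts)[0, 1] for ts in region_ts_list])
    return np.arctanh(r)                   # Fisher r-to-z

rng = np.random.default_rng(0)
seed = rng.standard_normal(200)                 # stand-in seed (e.g., Heschl's gyrus)
coupled = seed + 0.3 * rng.standard_normal(200) # functionally coupled target
independent = rng.standard_normal(200)          # uncoupled target
z = seed_connectivity(seed, [coupled, independent])
```

Group comparisons (e.g., converters vs. non-converters) then test these z values per seed-target pair across participants.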
Affiliation(s)
- Megan C. Fitzhugh
- Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Judy Pa
- Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Department of Neurology, Alzheimer's Disease Research Center, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
19
Eckert MA, Teubner-Rhodes S, Vaden KI, Ahlstrom JB, McClaskey CM, Dubno JR. Unique patterns of hearing loss and cognition in older adults' neural responses to cues for speech recognition difficulty. Brain Struct Funct 2022; 227:203-218. [PMID: 34632538 PMCID: PMC9044122 DOI: 10.1007/s00429-021-02398-2]
Abstract
Older adults with hearing loss experience significant difficulties understanding speech in noise, perhaps due in part to limited benefit from supporting executive functions that enable the use of environmental cues signaling changes in listening conditions. Here we examined the degree to which 41 older adults (60.56-86.25 years) exhibited cortical responses to informative listening difficulty cues that communicated the listening difficulty for each trial compared to neutral cues that were uninformative of listening difficulty. Word recognition was significantly higher for informative compared to uninformative cues in a +10 dB signal-to-noise ratio (SNR) condition, and response latencies were significantly shorter for informative cues in the +10 dB SNR and the more challenging +2 dB SNR conditions. Informative cues were associated with elevated blood oxygenation level-dependent contrast in visual and parietal cortex. A cue-SNR interaction effect was observed in the cingulo-opercular (CO) network, such that activity only differed between SNR conditions when an informative cue was presented. That is, participants used the informative cues to prepare for changes in listening difficulty from one trial to the next. This cue-SNR interaction effect was driven by older adults with more low-frequency hearing loss and was not observed for those with more high-frequency hearing loss, poorer set-shifting task performance, and lower frontal operculum gray matter volume. These results suggest that proactive strategies for engaging CO adaptive control may be important for older adults with high-frequency hearing loss to optimize speech recognition in changing and challenging listening conditions.
Affiliation(s)
- Mark A Eckert
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC, 29425-5500, USA
- Kenneth I Vaden
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC, 29425-5500, USA
- Jayne B Ahlstrom
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC, 29425-5500, USA
- Carolyn M McClaskey
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC, 29425-5500, USA
- Judy R Dubno
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC, 29425-5500, USA
20
Nuttall HE, Maegherman G, Devlin JT, Adank P. Speech motor facilitation is not affected by ageing but is modulated by task demands during speech perception. Neuropsychologia 2021; 166:108135. [PMID: 34958833 DOI: 10.1016/j.neuropsychologia.2021.108135]
Abstract
Motor areas for speech production activate during speech perception. Such activation may assist speech perception in challenging listening conditions. It is not known how ageing affects the recruitment of articulatory motor cortex during active speech perception. This study aimed to determine the effect of ageing on recruitment of speech motor cortex during speech perception. Single-pulse Transcranial Magnetic Stimulation (TMS) was applied to the lip area of left primary motor cortex (M1) to elicit lip Motor Evoked Potentials (MEPs). The M1 hand area was tested as a control site. TMS was applied whilst participants perceived syllables presented with noise (-10, 0, +10 dB SNRs) and without noise (clear). Participants detected and counted syllables throughout MEP recording. Twenty younger adults (aged 18-25) and twenty older adults (aged 65-80) participated in this study. Results indicated a significant interaction between age and noise condition in the syllable task. Specifically, older adults misidentified significantly more syllables in the 0 dB SNR condition, and missed significantly more syllables in the -10 dB SNR condition, relative to the clear condition. There were no differences between conditions for younger adults. There was a significant main effect of noise level on lip MEPs. Lip MEPs were unexpectedly inhibited in the 0 dB SNR condition relative to the clear condition. There was no interaction between age group and noise condition. There was no main effect of noise or age group on control hand MEPs. These data suggest that speech-induced facilitation in articulatory motor cortex is abolished when performing a challenging secondary task, irrespective of age.
Affiliation(s)
- Helen E Nuttall
- Department of Psychology, Lancaster University, Fylde College, Fylde Avenue, Lancaster, LA1 4YF, UK
- Gwijde Maegherman
- Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London, WC1N 1PF, UK
- Joseph T Devlin
- Department of Experimental Psychology, University College London, 26 Bedford Way, London, WC1H 0AP, UK
- Patti Adank
- Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London, WC1N 1PF, UK
21
Brisson V, Tremblay P. Improving speech perception in noise in young and older adults using transcranial magnetic stimulation. Brain Lang 2021; 222:105009. [PMID: 34425411 DOI: 10.1016/j.bandl.2021.105009]
Abstract
Normal aging is associated with speech perception in noise (SPiN) difficulties. The objective of this study was to determine if SPiN performance can be enhanced by intermittent theta-burst stimulation (iTBS) in young and older adults. METHOD We developed a sub-lexical SPiN test to evaluate the contribution of age, hearing, and cognition to SPiN performance in young and older adults. iTBS was applied to the left posterior superior temporal sulcus (pSTS) and the left ventral premotor cortex (PMv) to examine its impact on SPiN performance. RESULTS Aging was associated with reduced SPiN accuracy. TMS-induced performance gain was greater after stimulation of the PMv than of the pSTS. Participants with lower scores in the baseline condition improved the most. DISCUSSION SPiN difficulties can be reduced by enhancing activity within the left speech-processing network in adults. This study paves the way for the development of TMS-based interventions to reduce SPiN difficulties in adults.
Affiliation(s)
- Valérie Brisson
- Département de réadaptation, Université Laval, Québec, Canada; Centre de recherche CERVO, Québec, Canada
- Pascale Tremblay
- Département de réadaptation, Université Laval, Québec, Canada; Centre de recherche CERVO, Québec, Canada
22
Reduced Semantic Context and Signal-to-Noise Ratio Increase Listening Effort As Measured Using Functional Near-Infrared Spectroscopy. Ear Hear 2021; 43:836-848. [PMID: 34623112 DOI: 10.1097/aud.0000000000001137]
Abstract
OBJECTIVES Understanding speech-in-noise can be highly effortful. Decreasing the signal-to-noise ratio (SNR) of speech increases listening effort, but it is relatively unclear if decreasing the level of semantic context does as well. The current study used functional near-infrared spectroscopy to evaluate two primary hypotheses: (1) listening effort (operationalized as oxygenation of the left lateral prefrontal cortex [PFC]) increases as the SNR decreases and (2) listening effort increases as context decreases. DESIGN Twenty-eight younger adults with normal hearing completed the Revised Speech Perception in Noise Test, in which they listened to sentences and reported the final word. These sentences either had an easy SNR (+4 dB) or a hard SNR (-2 dB), and were either low in semantic context (e.g., "Tom could have thought about the sport") or high in context (e.g., "She had to vacuum the rug"). PFC oxygenation was measured throughout using functional near-infrared spectroscopy. RESULTS Accuracy on the Revised Speech Perception in Noise Test was worse when the SNR was hard than when it was easy, and worse for sentences low in semantic context than for those high in context. Similarly, oxygenation across the entire PFC (including the left lateral PFC) was greater when the SNR was hard, and left lateral PFC oxygenation was greater when context was low. CONCLUSIONS These results suggest that activation of the left lateral PFC (interpreted here as reflecting listening effort) increases to compensate for acoustic and linguistic challenges. This may reflect increased engagement of domain-general and domain-specific processes subserved by the dorsolateral prefrontal cortex (e.g., cognitive control) and inferior frontal gyrus (e.g., predicting the sensory consequences of articulatory gestures), respectively.
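An SNR manipulation like the easy (+4 dB) versus hard (-2 dB) conditions above amounts to scaling the masker so that the speech-to-noise power ratio hits a target value in dB. A minimal sketch with synthetic signals (the tone and white-noise stand-ins below are illustrative, not the study's stimuli):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that 10*log10(P_speech / P_noise) equals
    `snr_db`, then return the speech + noise mixture."""
    speech = np.asarray(speech, dtype=float)
    noise = np.asarray(noise, dtype=float)
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / 10 ** (snr_db / 10)  # required noise power
    return speech + noise * np.sqrt(target_p_noise / p_noise)

rng = np.random.default_rng(0)
t = np.arange(16000) / 16000.0
speech = np.sin(2 * np.pi * 220 * t)     # stand-in for a sentence
babble = rng.standard_normal(t.size)     # stand-in for a masker
easy = mix_at_snr(speech, babble, +4.0)  # easy SNR condition
hard = mix_at_snr(speech, babble, -2.0)  # hard SNR condition
```

Recomputing the power ratio of the mixed components recovers the target SNR exactly, which is a useful sanity check when generating stimuli.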
23
Bhandari P, Demberg V, Kray J. Semantic Predictability Facilitates Comprehension of Degraded Speech in a Graded Manner. Front Psychol 2021; 12:714485. [PMID: 34566795 PMCID: PMC8459870 DOI: 10.3389/fpsyg.2021.714485]
Abstract
Previous studies have shown that at moderate levels of spectral degradation, semantic predictability facilitates language comprehension. It is argued that when speech is degraded, listeners have narrowed expectations about the sentence endings; i.e., semantic prediction may be limited to only most highly predictable sentence completions. The main objectives of this study were to (i) examine whether listeners form narrowed expectations or whether they form predictions across a wide range of probable sentence endings, (ii) assess whether the facilitatory effect of semantic predictability is modulated by perceptual adaptation to degraded speech, and (iii) use and establish a sensitive metric for the measurement of language comprehension. For this, we created 360 German Subject-Verb-Object sentences that varied in semantic predictability of a sentence-final target word in a graded manner (high, medium, and low) and levels of spectral degradation (1, 4, 6, and 8 channels noise-vocoding). These sentences were presented auditorily to two groups: One group (n =48) performed a listening task in an unpredictable channel context in which the degraded speech levels were randomized, while the other group (n =50) performed the task in a predictable channel context in which the degraded speech levels were blocked. The results showed that at 4 channels noise-vocoding, response accuracy was higher in high-predictability sentences than in the medium-predictability sentences, which in turn was higher than in the low-predictability sentences. This suggests that, in contrast to the narrowed expectations view, comprehension of moderately degraded speech, ranging from low- to high- including medium-predictability sentences, is facilitated in a graded manner; listeners probabilistically preactivate upcoming words from a wide range of semantic space, not limiting only to highly probable sentence endings. 
Additionally, in both channel contexts we did not observe learning effects; i.e., response accuracy did not increase over the course of the experiment, and response accuracy was higher in the predictable than in the unpredictable channel context. We speculate from these observations that when there is no trial-by-trial variation in the level of speech degradation, listeners adapt to speech quality on a long timescale; however, when there is trial-by-trial variation in a high-level semantic feature (e.g., sentence predictability), listeners do not adapt to a low-level perceptual property (e.g., speech quality) on a short timescale.
Affiliation(s)
- Pratik Bhandari
- Department of Psychology, Saarland University, Saarbrücken, Germany
- Department of Language Science and Technology, Saarland University, Saarbrücken, Germany
- Vera Demberg
- Department of Language Science and Technology, Saarland University, Saarbrücken, Germany
- Department of Computer Science, Saarland University, Saarbrücken, Germany
- Jutta Kray
- Department of Psychology, Saarland University, Saarbrücken, Germany

24
White BE, Langdon C. The cortical organization of listening effort: New insight from functional near-infrared spectroscopy. Neuroimage 2021; 240:118324. [PMID: 34217787 DOI: 10.1016/j.neuroimage.2021.118324]
Abstract
Everyday challenges impact our ability to hear and comprehend spoken language with ease, such as accented speech (source factors), spectral degradation (transmission factors), complex or unfamiliar language use (message factors), and predictability (context factors). The effects of auditory degradation and linguistic complexity on the brain and behavior have been well investigated, and several computational models have emerged. The work here provides a novel test of the hypotheses that listening effort is partially reliant on higher cognitive auditory attention and working memory mechanisms in the frontal lobe, and partially reliant on hierarchical linguistic computation in the brain's left hemisphere. We specifically hypothesize that these models are robust and can be applied in ecologically relevant and coarse-grained contexts that rigorously control for acoustic and linguistic listening challenges. Using functional near-infrared spectroscopy during an auditory plausibility judgment task, we show the hierarchical cortical organization of listening effort in the frontal and left temporal-parietal brain regions. In response to increasing levels of cognitive demand, we found (i) poorer comprehension, (ii) slower reaction times, (iii) increasing levels of perceived mental effort, (iv) increasing levels of brain activity in the prefrontal cortex, (v) hierarchical modulation of core language processing regions that reflects increasingly higher-order auditory-linguistic processing, and (vi) a correlation between participants' mental effort ratings and their performance on the task. Our results demonstrate that listening effort is partly reliant on higher cognitive auditory attention and working memory mechanisms in the frontal lobe and partly reliant on hierarchical linguistic computation in the brain's left hemisphere.
Further, listening effort is driven by a voluntary, motivation-based attention system; our results validate the use of a single-item post-task questionnaire for measuring perceived levels of mental effort and predicting listening performance. We anticipate our study to be a starting point for more sophisticated models of listening effort and even cognitive neuroplasticity in hearing aid and cochlear implant users.
Affiliation(s)
- Bradley E White
- Brain and Language Center for Neuroimaging, Gallaudet University, Washington, DC, USA
- Clifton Langdon
- Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA

25
Kim S, Schwalje AT, Liu AS, Gander PE, McMurray B, Griffiths TD, Choi I. Pre- and post-target cortical processes predict speech-in-noise performance. Neuroimage 2021; 228:117699. [PMID: 33387631 PMCID: PMC8291856 DOI: 10.1016/j.neuroimage.2020.117699]
Abstract
Understanding speech in noise (SiN) is a complex task that recruits multiple cortical subsystems. There is variance in individuals' ability to understand SiN that cannot be explained by simple hearing profiles, which suggests that central factors may underlie it. Here, we elucidated several cortical functions engaged during a SiN task and their contributions to individual variance, using both within- and across-subject approaches. Through our within-subject analysis of source-localized electroencephalography, we investigated how the acoustic signal-to-noise ratio (SNR) alters cortical evoked responses to a target word across the speech recognition areas, finding stronger responses in the left supramarginal gyrus (SMG, BA40; the dorsal lexicon area) with quieter noise. Through an individual-differences approach, we found that listeners show different neural sensitivity to the background noise and target speech, reflected in the amplitude ratio of earlier auditory-cortical responses to speech and noise, termed the internal SNR. Listeners with a better internal SNR showed better SiN performance. Further, we found that post-speech SMG activity explains additional variance in SiN performance that is not accounted for by internal SNR. This result demonstrates that at least two cortical processes contribute to SiN performance independently: pre-target processing to attenuate the neural representation of background noise and post-target processing to extract information from speech sounds.
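As a toy illustration only (this is not the authors' estimator), the "internal SNR" idea of an amplitude ratio between early evoked responses to the target speech and to the background noise could be sketched as:

```python
def internal_snr(speech_resp, noise_resp):
    """Toy 'internal SNR': ratio of the mean absolute evoked amplitude to the
    target speech vs. to the background noise. Higher values would indicate a
    stronger neural representation of speech relative to noise."""
    s = sum(abs(v) for v in speech_resp) / len(speech_resp)  # mean |amplitude| to speech
    n = sum(abs(v) for v in noise_resp) / len(noise_resp)    # mean |amplitude| to noise
    return s / n
```

On this reading, a listener whose early auditory response to the target is twice as large as the response to the noise would have an internal SNR of 2.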
Affiliation(s)
- Subong Kim
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN 47907, USA
- Adam T Schwalje
- Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Andrew S Liu
- Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Phillip E Gander
- Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Bob McMurray
- Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA; Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA 52242, USA; Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA 52242, USA
- Timothy D Griffiths
- Biosciences Institute, Newcastle University, Newcastle upon Tyne NE1 7RU, UK
- Inyong Choi
- Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA; Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA 52242, USA

26
Tremblay P, Brisson V, Deschamps I. Brain aging and speech perception: Effects of background noise and talker variability. Neuroimage 2020; 227:117675. [PMID: 33359849 DOI: 10.1016/j.neuroimage.2020.117675]
Abstract
Speech perception can be challenging, especially for older adults. Despite the importance of speech perception in social interactions, the mechanisms underlying these difficulties remain unclear and treatment options are scarce. While several studies have suggested that decline within cortical auditory regions may be a hallmark of these difficulties, a growing number of studies have reported decline in regions beyond the auditory processing network, including regions involved in speech processing and executive control, suggesting a potentially diffuse underlying neural disruption, though no consensus exists regarding underlying dysfunctions. To address this issue, we conducted two experiments in which we investigated age differences in speech perception when background noise and talker variability are manipulated, two factors known to be detrimental to speech perception. In Experiment 1, we examined the relationship between speech perception, hearing and auditory attention in 88 healthy participants aged 19 to 87 years. In Experiment 2, we examined cortical thickness and BOLD signal using magnetic resonance imaging (MRI) and related these measures to speech perception performance using a simple mediation approach in 32 participants from Experiment 1. Our results show that, even after accounting for hearing thresholds and two measures of auditory attention, speech perception significantly declined with age. Age-related decline in speech perception in noise was associated with thinner cortex in auditory and speech processing regions (including the superior temporal cortex, ventral premotor cortex and inferior frontal gyrus) as well as in regions involved in executive control (including the dorsal anterior insula, the anterior cingulate cortex and medial frontal cortex). 
Further, our results show that speech perception performance was associated with a reduced brain response in the right superior temporal cortex in older compared to younger adults, and with an increased response to noise in the left anterior temporal cortex in older adults. Talker variability was not associated with different activation patterns in older compared to younger adults. Together, these results support the notion of a diffuse rather than a focal dysfunction underlying speech-perception-in-noise difficulties in older adults.
Affiliation(s)
- Pascale Tremblay
- CERVO Brain Research Center, Québec City, QC, Canada; Université Laval, Département de réadaptation, Québec City, QC, Canada
- Valérie Brisson
- CERVO Brain Research Center, Québec City, QC, Canada; Université Laval, Département de réadaptation, Québec City, QC, Canada

27
|
Fitzhugh MC, Schaefer SY, Baxter LC, Rogalsky C. Cognitive and neural predictors of speech comprehension in noisy backgrounds in older adults. Lang Cogn Neurosci 2020; 36:269-287. [PMID: 34250179 PMCID: PMC8261331 DOI: 10.1080/23273798.2020.1828946]
Abstract
Older adults often experience difficulties comprehending speech in noisy backgrounds, which hearing loss does not fully explain. It remains unknown how cognitive abilities, brain networks, and age-related hearing loss may uniquely contribute to speech-in-noise comprehension at the sentence level. In 31 older adults, using cognitive measures and resting-state fMRI, we investigated the cognitive and neural predictors of speech comprehension under energetic (broadband noise) and informational (multi-talker) masking. Better hearing thresholds and greater working memory abilities were associated with better speech comprehension under energetic masking. Conversely, faster processing speed and stronger functional connectivity between frontoparietal and language networks were associated with better speech comprehension under informational masking. Our findings highlight the importance of the frontoparietal network in older adults' ability to comprehend speech in multi-speaker backgrounds, and show that hearing loss and working memory contribute to speech comprehension under energetic, but not informational, masking.
Affiliation(s)
- Megan C. Fitzhugh
- Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA
- College of Health Solutions, Arizona State University, Tempe, AZ
- Sydney Y. Schaefer
- School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ

28
Rysop AU, Schmitt LM, Obleser J, Hartwigsen G. Neural modelling of the semantic predictability gain under challenging listening conditions. Hum Brain Mapp 2020; 42:110-127. [PMID: 32959939 PMCID: PMC7721236 DOI: 10.1002/hbm.25208]
Abstract
When speech intelligibility is reduced, listeners exploit constraints posed by semantic context to facilitate comprehension. The left angular gyrus (AG) has been argued to drive this semantic predictability gain. Taking a network perspective, we ask how the connectivity within language-specific and domain-general networks flexibly adapts to the predictability and intelligibility of speech. During continuous functional magnetic resonance imaging (fMRI), participants repeated sentences, which varied in semantic predictability of the final word and in acoustic intelligibility. At the neural level, highly predictable sentences led to stronger activation of left-hemispheric semantic regions including subregions of the AG (PGa, PGp) and posterior middle temporal gyrus when speech became more intelligible. The behavioural predictability gain of single participants mapped onto the same regions but was complemented by increased activity in frontal and medial regions. Effective connectivity from PGa to PGp increased for more intelligible sentences. In contrast, inhibitory influence from pre-supplementary motor area to left insula was strongest when predictability and intelligibility of sentences were either lowest or highest. This interactive effect was negatively correlated with the behavioural predictability gain. Together, these results suggest that successful comprehension in noisy listening conditions relies on an interplay of semantic regions and concurrent inhibition of cognitive control regions when semantic cues are available.
Affiliation(s)
- Anna Uta Rysop
- Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Lea-Maria Schmitt
- Department of Psychology, University of Lübeck, Lübeck, Germany; Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, Lübeck, Germany
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck, Germany; Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, Lübeck, Germany
- Gesa Hartwigsen
- Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

29
Slade K, Plack CJ, Nuttall HE. The Effects of Age-Related Hearing Loss on the Brain and Cognitive Function. Trends Neurosci 2020; 43:810-821. [PMID: 32826080 DOI: 10.1016/j.tins.2020.07.005]
Abstract
Age-related hearing loss (ARHL) is a common problem for older adults, leading to communication difficulties, isolation, and cognitive decline. Recently, hearing loss has been identified as potentially the most modifiable risk factor for dementia. Listening in challenging situations, or when the auditory system is damaged, strains cortical resources, and this may change how the brain responds to cognitively demanding situations more generally. We review the effects of ARHL on brain areas involved in speech perception, from the auditory cortex, through attentional networks, to the motor system. We explore current perspectives on the possible causal relationship between hearing loss, neural reorganisation, and cognitive impairment. Through this synthesis we aim to inspire innovative research and novel interventions for alleviating hearing loss and cognitive decline.
Affiliation(s)
- Kate Slade
- Department of Psychology, Lancaster University, Lancaster, UK
- Christopher J Plack
- Department of Psychology, Lancaster University, Lancaster, UK; Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, Manchester, UK
- Helen E Nuttall
- Department of Psychology, Lancaster University, Lancaster, UK

30
Zekveld AA, van Scheepen JAM, Versfeld NJ, Kramer SE, van Steenbergen H. The Influence of Hearing Loss on Cognitive Control in an Auditory Conflict Task: Behavioral and Pupillometry Findings. J Speech Lang Hear Res 2020; 63:2483-2492. [PMID: 32610026 DOI: 10.1044/2020_jslhr-20-00107]
Abstract
Purpose: The pupil dilation response is sensitive not only to auditory task demand but also to cognitive conflict. Conflict is induced by incompatible trials in auditory Stroop tasks in which participants have to identify the presentation location (left or right ear) of the words "left" or "right." Previous studies demonstrated that the compatibility effect is reduced if the trial is preceded by another incompatible trial (conflict adaptation). Here, we investigated the influence of hearing status on cognitive conflict and conflict adaptation in an auditory Stroop task.
Method: Two age-matched groups consisting of 32 normal-hearing participants (M age = 52 years, age range: 25-67 years) and 28 participants with hearing impairment (M age = 52 years, age range: 23-64 years) performed an auditory Stroop task. We assessed the effects of hearing status and stimulus compatibility on reaction times (RTs) and pupil dilation responses. We furthermore analyzed the Pearson correlation coefficients between age, degree of hearing loss, and the compatibility effects on the RT and pupil response data across all participants.
Results: As expected, the RTs were longer and pupil dilation was larger for incompatible relative to compatible trials. Furthermore, these effects were reduced for trials following incompatible (as compared to compatible) trials (conflict adaptation). No general effect of hearing status was observed, but the correlations suggested that higher age and a larger degree of hearing loss were associated with more interference of current incompatibility on RTs.
Conclusions: Conflict processing and adaptation effects were observed in the RTs and pupil dilation responses in an auditory Stroop task. No general effects of hearing status were observed, but the correlations suggested that higher age and a greater degree of hearing loss were related to reduced conflict-processing ability. The current study underlines the relevance of taking cognitive control and conflict adaptation processes into account.
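The compatibility and conflict-adaptation effects in this design reduce to simple RT contrasts; a minimal sketch over hypothetical trial-level data (not the authors' analysis code) is:

```python
def conflict_adaptation(rts, compat):
    """Conflict adaptation from trial-level reaction times.

    rts: reaction time per trial; compat[i]: True if trial i was compatible.
    The compatibility effect (incompatible minus compatible mean RT) is
    computed separately for trials preceded by compatible vs. incompatible
    trials; conflict adaptation is the reduction of that effect after
    incompatible trials."""
    buckets = {}
    for i in range(1, len(rts)):  # skip the first trial (no predecessor)
        buckets.setdefault((compat[i - 1], compat[i]), []).append(rts[i])
    mean = lambda xs: sum(xs) / len(xs)
    effect_after_compat = mean(buckets[(True, False)]) - mean(buckets[(True, True)])
    effect_after_incompat = mean(buckets[(False, False)]) - mean(buckets[(False, True)])
    return effect_after_compat - effect_after_incompat
```

A positive return value means the compatibility effect shrank after incompatible trials, i.e., conflict adaptation occurred.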
Affiliation(s)
- Adriana A Zekveld
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, De Boelelaan, Amsterdam, the Netherlands
- J A M van Scheepen
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, De Boelelaan, Amsterdam, the Netherlands
- Niek J Versfeld
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, De Boelelaan, Amsterdam, the Netherlands
- Sophia E Kramer
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, De Boelelaan, Amsterdam, the Netherlands
- Henk van Steenbergen
- Cognitive Psychology Unit, Institute of Psychology, University of Leiden, the Netherlands
- Leiden Institute for Brain and Cognition, the Netherlands

31
Mahmud MS, Ahmed F, Al-Fahad R, Moinuddin KA, Yeasin M, Alain C, Bidelman GM. Decoding Hearing-Related Changes in Older Adults' Spatiotemporal Neural Processing of Speech Using Machine Learning. Front Neurosci 2020; 14:748. [PMID: 32765215 PMCID: PMC7378401 DOI: 10.3389/fnins.2020.00748]
Abstract
Speech perception in noisy environments depends on complex interactions between sensory and cognitive systems. In older adults, such interactions may be affected, especially in those individuals who have more severe age-related hearing loss. Using a data-driven approach, we assessed the temporal (when in time) and spatial (where in the brain) characteristics of cortical speech-evoked responses that distinguish older adults with or without mild hearing loss. We performed source analyses to estimate cortical surface signals from the EEG recordings during a phoneme discrimination task conducted under clear and noise-degraded conditions. We computed source-level ERPs (i.e., mean activation within each ROI) from each of the 68 ROIs of the Desikan-Killiany (DK) atlas, averaged over 100 randomly chosen trials without replacement, to form feature vectors. We adopted a multivariate feature selection method called stability selection and control to choose features that are consistent over a range of model parameters. We used a parameter-optimized support vector machine (SVM) as the classifier to investigate the time course and brain regions that segregate groups and speech clarity. For clear speech perception, whole-brain data revealed a classification accuracy of 81.50% [area under the curve (AUC) 80.73%; F1-score 82.00%], distinguishing groups within ∼60 ms after speech onset (i.e., as early as the P1 wave). We observed a lower accuracy of 78.12% [AUC 77.64%; F1-score 78.00%] and delayed classification performance when speech was embedded in noise, with group segregation at 80 ms. A separate analysis using left-hemisphere (LH) and right-hemisphere (RH) regions showed that LH speech activity was better at distinguishing hearing groups than activity measured in the RH.
Moreover, stability selection analysis identified 12 brain regions (among 1428 total spatiotemporal features from 68 regions) where source activity segregated groups with >80% accuracy (clear speech); whereas 16 regions were critical for noise-degraded speech to achieve a comparable level of group segregation (78.7% accuracy). Our results identify critical time-courses and brain regions that distinguish mild hearing loss from normal hearing in older adults and confirm a larger number of active areas, particularly in RH, when processing noise-degraded speech information.
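The feature-selection step can be illustrated with a stdlib-only toy version of stability selection (repeatedly subsample the trials, keep features that rank highly in most rounds); the scoring rule and all parameters here are simplifications for illustration, not the pipeline used in the study:

```python
import random

def stability_select(X, y, n_rounds=100, top_k=3, threshold=0.6, seed=0):
    """Toy stability selection: on each round, take a random half of the
    trials, score every feature by the absolute difference of class means,
    and mark the top_k features. Features marked in at least `threshold`
    fraction of rounds are returned as 'stable'."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    counts = [0] * d
    for _ in range(n_rounds):
        idx = rng.sample(range(n), n // 2)  # random half of the trials
        scores = []
        for j in range(d):
            a = [X[i][j] for i in idx if y[i] == 0]
            b = [X[i][j] for i in idx if y[i] == 1]
            # |mean(class 1) - mean(class 0)| on this subsample
            scores.append(abs(sum(b) / len(b) - sum(a) / len(a)) if a and b else 0.0)
        for j in sorted(range(d), key=lambda j: -scores[j])[:top_k]:
            counts[j] += 1
    return [j for j in range(d) if counts[j] / n_rounds >= threshold]
```

Features that survive many random subsamples are the analogue of the "consistent over a range of model parameters" criterion in the abstract, though the real method scores features via the classifier rather than a mean difference.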
Affiliation(s)
- Md Sultan Mahmud
- Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN, United States
- Faruk Ahmed
- Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN, United States
- Rakib Al-Fahad
- Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN, United States
- Kazi Ashraf Moinuddin
- Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN, United States
- Mohammed Yeasin
- Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN, United States
- Claude Alain
- Rotman Research Institute-Baycrest Centre for Geriatric Care, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada; Institute of Medical Sciences, University of Toronto, Toronto, ON, Canada
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States; Department of Anatomy and Neurobiology, University of Tennessee Health Science Center, Memphis, TN, United States

32
|
Erb J, Schmitt LM, Obleser J. Temporal selectivity declines in the aging human auditory cortex. eLife 2020; 9:55300. [PMID: 32618270 PMCID: PMC7410487 DOI: 10.7554/elife.55300]
Abstract
Current models successfully describe the auditory cortical response to natural sounds with a set of spectro-temporal features. However, these models have hardly been linked to the ill-understood neurobiological changes that occur in the aging auditory cortex. Modelling the hemodynamic response to a rich natural sound mixture in N = 64 listeners of varying age, we here show that in older listeners' auditory cortex, the key feature of temporal rate is represented with a markedly broader tuning. This loss of temporal selectivity is most prominent in primary auditory cortex and planum temporale, with no such changes in adjacent auditory or other brain areas. Amongst older listeners, we observe a direct relationship between chronological age and temporal-rate tuning, unconfounded by auditory acuity or model goodness of fit. In line with senescent neural dedifferentiation more generally, our results highlight decreased selectivity to temporal information as a hallmark of the aging auditory cortex.
It can often be difficult for an older person to understand what someone is saying, particularly in noisy environments. Exactly how and why this age-related change occurs is not clear, but it is thought that older individuals may become less able to tune in to certain features of sound. Newer tools are making it easier to study age-related changes in hearing in the brain. For example, functional magnetic resonance imaging (fMRI) can allow scientists to 'see' and measure how certain parts of the brain react to different features of sound. Using fMRI data, researchers can compare how younger and older people process speech. They can also track how speech processing in the brain changes with age. Now, Erb et al. show that older individuals have a harder time tuning into the rhythm of speech. In the experiments, 64 people between the ages of 18 to 78 were asked to listen to speech in a noisy setting while they underwent fMRI. The researchers then tested a computer model using the data. In the older individuals, the brain's tuning to the timing or rhythm of speech was broader, while the younger participants were more able to finely tune into this feature of sound. The older a person was, the less able their brain was to distinguish rhythms in speech, likely making it harder to understand what had been said. This hearing change likely occurs because brain cells become less specialised over time, which can contribute to many kinds of age-related cognitive decline. This new information about why understanding speech becomes more difficult with age may help scientists develop better hearing aids that are individualised to a person's specific needs.
Affiliation(s)
- Julia Erb
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck, Germany

33
Vaden KI, Eckert MA, Dubno JR, Harris KC. Cingulo-opercular adaptive control for younger and older adults during a challenging gap detection task. J Neurosci Res 2020; 98:680-691. [PMID: 31385349 PMCID: PMC7000297 DOI: 10.1002/jnr.24506]
Abstract
Cingulo-opercular activity is hypothesized to reflect an adaptive control function that optimizes task performance through adjustments in attention and behavior, and outcome monitoring. While auditory perceptual task performance appears to benefit from elevated activity in cingulo-opercular regions of frontal cortex before stimuli are presented, this association appears reduced for older adults compared to younger adults. However, adaptive control function may be limited by difficult task conditions for older adults. An fMRI study was used to characterize adaptive control differences while 15 younger (average age = 24 years) and 15 older adults (average age = 68 years) performed a gap detection in noise task designed to limit age-related differences. During the fMRI study, participants listened to a noise recording and indicated with a button-press whether it contained a gap. Stimuli were presented between sparse fMRI scans (TR = 8.6 s) and BOLD measurements were collected during separate listening and behavioral response intervals. Age-related performance differences were limited by presenting gaps in noise with durations calibrated at or above each participant's detection threshold. Cingulo-opercular BOLD increased significantly throughout listening and behavioral response intervals, relative to a resting baseline. Correct behavioral responses were significantly more likely on trials with elevated pre-stimulus cingulo-opercular BOLD, consistent with an adaptive control framework. Cingulo-opercular adaptive control estimates appeared higher for participants with better gap sensitivity and lower response bias, irrespective of age, which suggests that this mechanism can benefit performance across the lifespan under conditions that limit age-related performance differences.
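The "gap sensitivity" and "response bias" measures mentioned above come from signal detection theory; a standard yes/no computation of d' and criterion c (an assumed textbook formulation, not necessarily the paper's exact computation) is:

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity (d') and response bias (criterion c)
    from a yes/no gap-detection contingency table. Rates of exactly 0 or 1
    would need a correction (omitted here for brevity)."""
    z = NormalDist().inv_cdf                       # inverse standard-normal CDF
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    d_prime = z(hit_rate) - z(fa_rate)             # separation of signal/noise distributions
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # >0 conservative, <0 liberal responding
    return d_prime, criterion
```

For example, 80 hits / 20 misses with 20 false alarms / 80 correct rejections gives d' ≈ 1.68 and c = 0 (no bias).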
Affiliation(s)
- Kenneth I Vaden
- Hearing Research Program, Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina
- Mark A Eckert
- Hearing Research Program, Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina
- Judy R Dubno
- Hearing Research Program, Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina
- Kelly C Harris
- Hearing Research Program, Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina

34
Ratnanather JT. Structural neuroimaging of the altered brain stemming from pediatric and adolescent hearing loss - Scientific and clinical challenges. Wiley Interdiscip Rev Syst Biol Med 2020; 12:e1469. [PMID: 31802640 PMCID: PMC7307271 DOI: 10.1002/wsbm.1469]
Abstract
There has been a spurt in structural neuroimaging studies of the effect of hearing loss on the brain. Specifically, magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI) technologies provide an opportunity to quantify changes in gray and white matter structures at the macroscopic scale. To date, there have been 32 MRI and 23 DTI studies that have analyzed structural differences accruing from pre- or peri-lingual pediatric hearing loss with congenital or early onset etiology and postlingual hearing loss in pre-to-late adolescence. Additionally, there have been 15 prospective clinical structural neuroimaging studies of children and adolescents being evaluated for cochlear implants. The results of the 70 studies are summarized in two figures and three tables. Plastic changes in the brain are seen to be multifocal rather than diffuse; that is, differences are consistent across regions implicated in the hearing, speech and language networks regardless of modes of communication and amplification. Structures that play an important role in cognition are affected to a lesser extent. A limitation of these studies is the emphasis on volumetric measures and on homogeneous groups of subjects with hearing loss. It is suggested that additional measures of morphometry and connectivity could contribute to a greater understanding of the effect of hearing loss on the brain. An interpretation of the observed macroscopic structural differences is then given, followed by a discussion of how structural imaging can be combined with functional imaging to provide biomarkers for longitudinal tracking of amplification. This article is categorized under: Developmental Biology > Developmental Processes in Health and Disease; Translational, Genomic, and Systems Medicine > Translational Medicine; Laboratory Methods and Technologies > Imaging.
Affiliation(s)
- J. Tilak Ratnanather
- Center for Imaging Science, Johns Hopkins University, Baltimore, Maryland
- Institute for Computational Medicine, Johns Hopkins University, Baltimore, Maryland
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland
35. Rosemann S, Thiel CM. Neural Signatures of Working Memory in Age-related Hearing Loss. Neuroscience 2020; 429:134-142. [PMID: 31935488] [DOI: 10.1016/j.neuroscience.2019.12.046]
Abstract
Age-related hearing loss affects the ability to hear high frequencies and therefore leads to difficulties in understanding speech, particularly under adverse listening conditions. This decrease in hearing can be partly compensated by the recruitment of executive functions, such as working memory. The compensatory effort may, however, lead to a decrease in available neural resources compromising cognitive abilities. We here aim to investigate whether mild to moderate hearing loss impacts prefrontal functions and related executive processes and whether these are related to speech-in-noise perception abilities. Nineteen hard of hearing and nineteen age-matched normal-hearing participants performed a working memory task to drive prefrontal activity, which was gauged with functional magnetic resonance imaging. In addition, speech-in-noise understanding, cognitive flexibility and inhibition control were assessed. Our results showed no differences in frontoparietal activation patterns and working memory performance between normal-hearing and hard of hearing participants. The behavioral assessment of further executive functions, however, provided evidence of lower cognitive flexibility in hard of hearing participants. Cognitive flexibility and hearing abilities further predicted speech-in-noise perception. We conclude that neural and behavioral signatures of working memory are intact in mild to moderate hearing loss. Moreover, cognitive flexibility seems to be closely related to hearing impairment and speech-in-noise perception and should, therefore, be investigated in future studies assessing age-related hearing loss and its implications on prefrontal functions.
Affiliation(s)
- Stephanie Rosemann
- Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany.
- Christiane M Thiel
- Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany
36. Fitzhugh MC, Hemesath A, Schaefer SY, Baxter LC, Rogalsky C. Functional Connectivity of Heschl's Gyrus Associated With Age-Related Hearing Loss: A Resting-State fMRI Study. Front Psychol 2019; 10:2485. [PMID: 31780994] [PMCID: PMC6856672] [DOI: 10.3389/fpsyg.2019.02485]
Abstract
A large proportion of older adults experience hearing loss. Yet, the impact of hearing loss on the aging brain, particularly on large-scale brain networks that support cognition and language, is relatively unknown. We used resting-state functional magnetic resonance imaging (fMRI) to identify hearing loss-related changes in the functional connectivity of primary auditory cortex to determine if these changes are distinct from age and cognitive measures known to decline with age (e.g., working memory and processing speed). We assessed the functional connectivity of Heschl's gyrus in 31 older adults (60-80 years) who expressed a range of hearing abilities from normal hearing to a moderate hearing loss. Our results revealed that both left and right Heschl's gyri were significantly connected to regions within auditory, sensorimotor, and visual cortices, as well as to regions within the cingulo-opercular network known to support attention. Participant age, working memory, and processing speed did not significantly correlate with any connectivity measures once variance due to hearing loss was removed. However, hearing loss was associated with increased connectivity between right Heschl's gyrus and the dorsal anterior cingulate in the cingulo-opercular network even once variance due to age, working memory, and processing speed was removed. This greater connectivity was not driven by high frequency hearing loss, but rather by hearing loss measured in the 0.5-2 kHz range, particularly in the left ear. We conclude that hearing loss-related differences in functional connectivity in older adults are distinct from other aging-related differences and provide insight into a possible neural mechanism of compensation for hearing loss in older adults.
Affiliation(s)
- Megan C. Fitzhugh
- College of Health Solutions, Arizona State University, Tempe, AZ, United States
- Angela Hemesath
- College of Health Solutions, Arizona State University, Tempe, AZ, United States
- Sydney Y. Schaefer
- School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ, United States
- Leslie C. Baxter
- Department of Psychology, Mayo Clinic, Scottsdale, AZ, United States
- Corianne Rogalsky
- College of Health Solutions, Arizona State University, Tempe, AZ, United States
37. Belkhiria C, Vergara RC, San Martín S, Leiva A, Marcenaro B, Martinez M, Delgado C, Delano PH. Cingulate Cortex Atrophy Is Associated With Hearing Loss in Presbycusis With Cochlear Amplifier Dysfunction. Front Aging Neurosci 2019; 11:97. [PMID: 31080411] [PMCID: PMC6497796] [DOI: 10.3389/fnagi.2019.00097]
Abstract
Age-related hearing loss is associated with cognitive decline and has been proposed as a risk factor for dementia. However, the mechanisms that relate hearing loss to cognitive decline remain elusive. Here, we propose that impairment of the cochlear amplifier mechanism is associated with structural brain changes and cognitive impairment. Ninety-six subjects aged over 65 years (63 female and 33 male) were evaluated using brain magnetic resonance imaging and neuropsychological and audiological assessments, including distortion product otoacoustic emissions as a measure of cochlear amplifier function. All analyses were adjusted for age, gender and education. The group with cochlear amplifier dysfunction showed greater brain atrophy in the cingulate cortex and in the parahippocampus. In addition, atrophy of the cingulate cortex was associated with cognitive impairment in episodic and working memories and in language and visuoconstructive abilities. We conclude that the neural abnormalities observed in presbycusis subjects with cochlear amplifier dysfunction extend beyond the core auditory network and are associated with cognitive decline in multiple domains. These results suggest that cochlear amplifier dysfunction in presbycusis is an important mechanism relating hearing impairments to brain atrophy in the extended network of effortful hearing.
Affiliation(s)
- Chama Belkhiria
- Department of Neuroscience, Faculty of Medicine, University of Chile, Santiago, Chile; Biomedical Neuroscience Institute, Faculty of Medicine, University of Chile, Santiago, Chile
- Rodrigo C Vergara
- Department of Neuroscience, Faculty of Medicine, University of Chile, Santiago, Chile; Biomedical Neuroscience Institute, Faculty of Medicine, University of Chile, Santiago, Chile
- Simón San Martín
- Department of Neuroscience, Faculty of Medicine, University of Chile, Santiago, Chile; Biomedical Neuroscience Institute, Faculty of Medicine, University of Chile, Santiago, Chile
- Alexis Leiva
- Department of Neuroscience, Faculty of Medicine, University of Chile, Santiago, Chile; Biomedical Neuroscience Institute, Faculty of Medicine, University of Chile, Santiago, Chile
- Bruno Marcenaro
- Department of Neuroscience, Faculty of Medicine, University of Chile, Santiago, Chile; Biomedical Neuroscience Institute, Faculty of Medicine, University of Chile, Santiago, Chile
- Melissa Martinez
- Department of Neurology and Neurosurgery, Clinical Hospital of the University of Chile, Santiago, Chile
- Carolina Delgado
- Department of Neuroscience, Faculty of Medicine, University of Chile, Santiago, Chile; Department of Neurology and Neurosurgery, Clinical Hospital of the University of Chile, Santiago, Chile
- Paul H Delano
- Department of Neuroscience, Faculty of Medicine, University of Chile, Santiago, Chile; Biomedical Neuroscience Institute, Faculty of Medicine, University of Chile, Santiago, Chile; Department of Otolaryngology, Clinical Hospital of the University of Chile, Santiago, Chile
38. Presacco A, Simon JZ, Anderson S. Speech-in-noise representation in the aging midbrain and cortex: Effects of hearing loss. PLoS One 2019; 14:e0213899. [PMID: 30865718] [PMCID: PMC6415857] [DOI: 10.1371/journal.pone.0213899]
Abstract
Age-related deficits in speech-in-noise understanding pose a significant problem for older adults. Despite the vast number of studies conducted to investigate the neural mechanisms responsible for these communication difficulties, the role of central auditory deficits, beyond peripheral hearing loss, remains unclear. The current study builds upon our previous work that investigated the effect of aging on normal-hearing individuals and aims to estimate the effect of peripheral hearing loss on the representation of speech in noise in two critical regions of the aging auditory pathway: the midbrain and cortex. Data from 14 hearing-impaired older adults were added to a previously published dataset of 17 normal-hearing younger adults and 15 normal-hearing older adults. The midbrain response, measured by the frequency-following response (FFR), and the cortical response, measured with magnetoencephalography (MEG), were recorded from subjects listening to speech in quiet and noise conditions at four signal-to-noise ratios (SNRs): +3, 0, -3, and -6 dB. Both groups of older listeners showed weaker midbrain response amplitudes and overrepresentation of cortical responses compared to younger listeners. No significant differences were found between the two older groups when the midbrain and cortical measurements were analyzed independently. However, significant differences between the older groups were found when investigating the midbrain-cortex relationships; that is, only hearing-impaired older adults showed significant correlations between midbrain and cortical measurements, suggesting that hearing loss may alter reciprocal connections between lower and higher levels of the auditory pathway. The overall paucity of differences in midbrain or cortical responses between the two older groups suggests that age-related temporal processing deficits may contribute to older adults' communication difficulties beyond what might be predicted from peripheral hearing loss alone; however, hearing loss does seem to alter the connectivity between midbrain and cortex. These results may have important ramifications for the field of audiology, as they indicate that algorithms in clinical devices, such as hearing aids, should consider age-related temporal processing deficits to maximize user benefit.
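As a side note on the SNR values quoted in this abstract: an SNR in decibels relates signal and noise power by SNR = 10·log10(P_signal/P_noise). A minimal illustrative sketch of that arithmetic (not code from the study; the function names are our own):

```python
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    """Signal-to-noise ratio in decibels from linear powers."""
    return 10.0 * math.log10(signal_power / noise_power)

def noise_power_for_snr(signal_power: float, target_snr_db: float) -> float:
    """Noise power needed to mask a signal at a target SNR (dB)."""
    return signal_power / (10.0 ** (target_snr_db / 10.0))

# At 0 dB SNR, signal and noise powers are equal; at -6 dB the noise
# is roughly four times more powerful than the signal.
print(round(snr_db(1.0, 1.0), 1))                # 0.0
print(round(noise_power_for_snr(1.0, -6.0), 2))  # 3.98
```

This is why the -6 dB condition is so much harder than the +3 dB condition: each 3 dB step halves or doubles the signal-to-noise power ratio.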
Affiliation(s)
- Alessandro Presacco
- Department of Otolaryngology, University of California, Irvine, CA, United States of America
- Center for Hearing Research, University of California, Irvine, CA, United States of America
- Jonathan Z. Simon
- Department of Electrical & Computer Engineering, University of Maryland, College Park, MD, United States of America
- Department of Biology, University of Maryland, College Park, MD, United States of America
- Institute for Systems Research, University of Maryland, College Park, MD, United States of America
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, MD, United States of America
- Samira Anderson
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, MD, United States of America
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, United States of America
39. Peelle JE. Listening Effort: How the Cognitive Consequences of Acoustic Challenge Are Reflected in Brain and Behavior. Ear Hear 2019; 39:204-214. [PMID: 28938250] [PMCID: PMC5821557] [DOI: 10.1097/aud.0000000000000494]
Abstract
Everyday conversation frequently includes challenges to the clarity of the acoustic speech signal, including hearing impairment, background noise, and foreign accents. Although an obvious problem is the increased risk of making word identification errors, extracting meaning from a degraded acoustic signal is also cognitively demanding, which contributes to increased listening effort. The concepts of cognitive demand and listening effort are critical in understanding the challenges listeners face in comprehension, which are not fully predicted by audiometric measures. In this article, the authors review converging behavioral, pupillometric, and neuroimaging evidence that understanding acoustically degraded speech requires additional cognitive support and that this cognitive load can interfere with other operations such as language processing and memory for what has been heard. Behaviorally, acoustic challenge is associated with increased errors in speech understanding, poorer performance on concurrent secondary tasks, more difficulty processing linguistically complex sentences, and reduced memory for verbal material. Measures of pupil dilation support the challenge associated with processing a degraded acoustic signal, indirectly reflecting an increase in neural activity. Finally, functional brain imaging reveals that the neural resources required to understand degraded speech extend beyond traditional perisylvian language networks, most commonly including regions of prefrontal cortex, premotor cortex, and the cingulo-opercular network. Far from being exclusively an auditory problem, acoustic degradation presents listeners with a systems-level challenge that requires the allocation of executive cognitive resources. An important point is that a number of dissociable processes can be engaged to understand degraded speech, including verbal working memory and attention-based performance monitoring. 
The specific resources required likely differ as a function of the acoustic, linguistic, and cognitive demands of the task, as well as individual differences in listeners' abilities. A greater appreciation of cognitive contributions to processing degraded speech is critical in understanding individual differences in comprehension ability, variability in the efficacy of assistive devices, and guiding rehabilitation approaches to reducing listening effort and facilitating communication.
Affiliation(s)
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in Saint Louis, Saint Louis, Missouri, USA
40. Rosemann S, Thiel CM. The effect of age-related hearing loss and listening effort on resting state connectivity. Sci Rep 2019; 9:2337. [PMID: 30787339] [PMCID: PMC6382886] [DOI: 10.1038/s41598-019-38816-z]
Abstract
Age-related hearing loss is associated with a decrease in hearing abilities for high frequencies. This increases not only the difficulty of understanding speech but also the experienced listening effort. Task-based neuroimaging studies in normal-hearing and hearing-impaired participants show increased frontal activation during effortful speech perception in the hearing-impaired. Whether the increased effort of everyday listening in the hearing-impaired also impacts functional brain connectivity at rest is unknown. Nineteen normal-hearing and nineteen hearing-impaired participants with mild to moderate hearing loss participated in the study. Hearing abilities, listening effort and resting state functional connectivity were assessed. Our results indicate no differences in functional connectivity between hearing-impaired and normal-hearing participants. Increased listening effort, however, was related to significantly decreased functional connectivity between the dorsal attention network and the precuneus and superior parietal lobule, as well as between the auditory and the inferior frontal cortex. We conclude that even mild to moderate age-related hearing loss can impact resting state functional connectivity. It is, however, not the hearing loss itself but the individually perceived listening effort that relates to functional connectivity changes.
Affiliation(s)
- Stephanie Rosemann
- Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Christiane M Thiel
- Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
41. Dias JW, McClaskey CM, Harris KC. Time-Compressed Speech Identification Is Predicted by Auditory Neural Processing, Perceptuomotor Speed, and Executive Functioning in Younger and Older Listeners. J Assoc Res Otolaryngol 2019; 20:73-88. [PMID: 30456729] [PMCID: PMC6364265] [DOI: 10.1007/s10162-018-00703-1]
Abstract
Older adults typically have difficulty identifying speech that is temporally distorted, such as reverberant, accented, time-compressed, or interrupted speech. These difficulties occur even when hearing thresholds fall within a normal range. Auditory neural processing speed, which we have previously found to predict auditory temporal processing (auditory gap detection), may interfere with the ability to recognize phonetic features as they rapidly unfold over time in spoken speech. Further, declines in perceptuomotor processing speed and executive functioning may interfere with the ability to track, access, and process information. The current investigation examined the extent to which age-related differences in time-compressed speech identification were predicted by auditory neural processing speed, perceptuomotor processing speed, and executive functioning. Groups of normal-hearing (up to 3000 Hz) younger and older adults identified 40, 50, and 60 % time-compressed sentences. Auditory neural processing speed was defined as the P1 and N1 latencies of click-induced auditory-evoked potentials. Perceptuomotor processing speed and executive functioning were measured behaviorally using the Connections Test. Compared to younger adults, older adults exhibited poorer time-compressed speech identification and slower perceptuomotor processing. Executive functioning, P1 latency, and N1 latency did not differ between age groups. Time-compressed speech identification was independently predicted by P1 latency, perceptuomotor processing speed, and executive functioning in younger and older listeners. Results of model testing suggested that declines in perceptuomotor processing speed mediated age-group differences in time-compressed speech identification. 
The current investigation joins a growing body of literature suggesting that the processing of temporally distorted speech is impacted by lower-level auditory neural processing and higher-level perceptuomotor and executive processes.
Affiliation(s)
- James W Dias
- Department of Otolaryngology, Medical University of South Carolina, 135 Rutledge Avenue, MSC 550, Charleston, SC, 29425-5500, USA.
- Carolyn M McClaskey
- Department of Otolaryngology, Medical University of South Carolina, 135 Rutledge Avenue, MSC 550, Charleston, SC, 29425-5500, USA
- Kelly C Harris
- Department of Otolaryngology, Medical University of South Carolina, 135 Rutledge Avenue, MSC 550, Charleston, SC, 29425-5500, USA
42. Modular reconfiguration of an auditory control brain network supports adaptive listening behavior. Proc Natl Acad Sci U S A 2018; 116:660-669. [PMID: 30587584] [PMCID: PMC6329957] [DOI: 10.1073/pnas.1815321116]
Abstract
How do brain networks shape our listening behavior? We here develop and test the hypothesis that, during challenging listening situations, intrinsic brain networks are reconfigured to adapt to the listening demands and thus to enable successful listening. We find that, relative to a task-free resting state, networks of the listening brain show higher segregation of temporal auditory, ventral attention, and frontal control regions known to be involved in speech processing, sound localization, and effortful listening. Importantly, the relative change in modularity of this auditory control network predicts individuals’ listening success. Our findings shed light on how cortical communication dynamics tune selection and comprehension of speech in challenging listening situations and suggest modularity as the network principle of auditory attention.

Speech comprehension in noisy, multitalker situations poses a challenge. Successful behavioral adaptation to a listening challenge often requires stronger engagement of auditory spatial attention and context-dependent semantic predictions. Human listeners differ substantially in the degree to which they adapt behaviorally and can listen successfully under such circumstances. How cortical networks embody this adaptation, particularly at the individual level, is currently unknown. We here explain this adaptation from reconfiguration of brain networks for a challenging listening task (i.e., a linguistic variant of the Posner paradigm with concurrent speech) in an age-varying sample of n = 49 healthy adults undergoing resting-state and task fMRI. We provide evidence for the hypothesis that more successful listeners exhibit stronger task-specific reconfiguration (hence, better adaptation) of brain networks. From rest to task, brain networks become reconfigured toward more localized cortical processing characterized by higher topological segregation. This reconfiguration is dominated by the functional division of an auditory and a cingulo-opercular module and the emergence of a conjoined auditory and ventral attention module along bilateral middle and posterior temporal cortices. Supporting our hypothesis, the degree to which modularity of this frontotemporal auditory control network is increased relative to resting state predicts individuals’ listening success in states of divided and selective attention. Our findings elucidate how fine-tuned cortical communication dynamics shape selection and comprehension of speech. Our results highlight modularity of the auditory control network as a key organizational principle in cortical implementation of auditory spatial attention in challenging listening situations.
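The "modularity" on which this prediction rests is the standard Newman-Girvan graph metric: the fraction of edges falling within communities minus the fraction expected by chance. The study applies it to weighted fMRI connectivity graphs; as a rough illustration of the quantity only (an unweighted sketch, not the authors' pipeline), it can be computed for a binary graph as:

```python
def modularity(adj, communities):
    """Newman-Girvan modularity Q for an undirected binary graph.

    adj: symmetric 0/1 adjacency matrix (list of lists).
    communities: community label for each node.
    """
    n = len(adj)
    degrees = [sum(row) for row in adj]
    two_m = sum(degrees)  # twice the edge count
    q = 0.0
    for i in range(n):
        for j in range(n):
            if communities[i] == communities[j]:
                # observed edge minus chance expectation for this pair
                q += adj[i][j] - degrees[i] * degrees[j] / two_m
    return q / two_m

# Two triangles joined by a single edge: a strongly modular partition.
adj = [
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
]
print(round(modularity(adj, [0, 0, 0, 1, 1, 1]), 3))  # 0.357
```

Higher Q means edges concentrate within modules, i.e., the topological segregation that the study reports increasing from rest to task.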
43. Wang Y, Kramer SE, Wendt D, Naylor G, Lunner T, Zekveld AA. The Pupil Dilation Response During Speech Perception in Dark and Light: The Involvement of the Parasympathetic Nervous System in Listening Effort. Trends Hear 2018. [PMCID: PMC6291871] [DOI: 10.1177/2331216518816603]
Abstract
Recently, the measurement of the pupil dilation response has been applied in many studies to assess listening effort. However, the mechanisms underlying this response are still largely unknown. We present the results of a method that separates the influence of the parasympathetic and sympathetic branches of the autonomic nervous system on the pupil response during speech perception. This is achieved by changing the background illumination level. In darkness, the influence of the parasympathetic nervous system on the pupil response is minimal, whereas in light, there is an additional component from the parasympathetic nervous system. Nineteen hearing-impaired and 27 age-matched normal-hearing listeners performed speech reception threshold tests targeting a 50% correct performance level while pupil responses were recorded. The target speech was masked with a competing talker. The test was conducted twice, once in a dark and once in a light condition. Need for Recovery and Checklist Individual Strength questionnaires were administered as indices of daily-life fatigue. In dark, the peak pupil dilation (PPD) did not differ between the two groups, but in light, the normal-hearing group showed a larger PPD than the hearing-impaired group. Listeners with better hearing acuity showed larger differences in dilation between dark and light. These results indicate a larger effect of parasympathetic inhibition on the pupil dilation response of listeners with better hearing acuity, and relatively high parasympathetic activity in those with worse hearing. Previously observed differences in PPD between normal and impaired listeners are probably not solely because of differences in listening effort.
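The peak pupil dilation (PPD) compared between groups above is conventionally the maximum of the baseline-corrected pupil trace. A minimal sketch under that assumption (function name, trace, and window length are illustrative, not taken from the paper):

```python
def peak_pupil_dilation(trace, baseline_samples):
    """Peak pupil dilation: maximum of the trace after subtracting
    the mean pupil size over a pre-stimulus baseline window."""
    baseline = sum(trace[:baseline_samples]) / baseline_samples
    return max(v - baseline for v in trace)

# Illustrative trace (arbitrary units): a 4-sample pre-stimulus
# baseline around 3.0, then a dilation peaking at 3.4.
trace = [3.0, 3.0, 3.1, 2.9, 3.2, 3.4, 3.3, 3.1]
print(round(peak_pupil_dilation(trace, baseline_samples=4), 2))  # 0.4
```

Baseline correction is what lets PPD be compared across conditions (dark vs. light) in which absolute pupil size differs substantially.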
Affiliation(s)
- Yang Wang
- Section Ear & Hearing, Department of Otolaryngology-Head and Neck Surgery, VU University Medical Center and Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Sophia E. Kramer
- Section Ear & Hearing, Department of Otolaryngology-Head and Neck Surgery, VU University Medical Center and Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
- Dorothea Wendt
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Department of Electrical Engineering, Technical University of Denmark, Lyngby, Denmark
- Graham Naylor
- Hearing Sciences—Scottish Section, School of Medicine, University of Nottingham, Glasgow, UK
- Thomas Lunner
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Department of Electrical Engineering, Technical University of Denmark, Lyngby, Denmark
- Department of Behavioral Sciences and Learning, Linköping University, Sweden
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping and Örebro Universities, Sweden
- Adriana A. Zekveld
- Section Ear & Hearing, Department of Otolaryngology-Head and Neck Surgery, VU University Medical Center and Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
- Department of Behavioral Sciences and Learning, Linköping University, Sweden
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping and Örebro Universities, Sweden
44. Billings CJ, Madsen BM. A perspective on brain-behavior relationships and effects of age and hearing using speech-in-noise stimuli. Hear Res 2018; 369:90-102. [PMID: 29661615] [PMCID: PMC6636926] [DOI: 10.1016/j.heares.2018.03.024]
Abstract
Understanding speech in background noise is often more difficult for individuals who are older and have hearing impairment than for younger, normal-hearing individuals. In fact, speech-understanding abilities among older individuals with hearing impairment vary greatly. Researchers have hypothesized that some of that variability can be explained by how the brain encodes speech signals in the presence of noise, and that brain measures may be useful for predicting behavioral performance in difficult-to-test patients. In a series of experiments, we have explored the effects of age and hearing impairment in both brain and behavioral domains with the goal of using brain measures to improve our understanding of speech-in-noise difficulties. The behavioral measures examined showed effect sizes for hearing impairment that were 6-10 dB larger than the effects of age when tested in steady-state noise, whereas electrophysiological age effects were similar in magnitude to those of hearing impairment. Both age and hearing status influence neural responses to speech as well as speech understanding in background noise. These effects can in turn be modulated by other factors, such as the characteristics of the background noise itself. Finally, the use of electrophysiology to predict performance on receptive speech-in-noise tasks holds promise, demonstrating root-mean-square prediction errors as small as 1-2 dB. An important next step in this field of inquiry is to sample the aging and hearing impairment variables continuously (rather than categorically), across the whole lifespan and audiogram, to improve effect estimates.
Affiliation(s)
- Curtis J Billings
- National Center for Rehabilitative Auditory Research, Veterans Affairs Portland Health Care System, 3710 SW US Veterans Hospital Road (NCRAR), Portland, OR 97239, USA; Department of Otolaryngology, Oregon Health & Science University, 3181 SW Sam Jackson Park Road, Portland, OR 97239, USA.
- Brandon M Madsen
- National Center for Rehabilitative Auditory Research, Veterans Affairs Portland Health Care System, 3710 SW US Veterans Hospital Road (NCRAR), Portland, OR 97239, USA
|
45
|
Panouillères MTN, Möttönen R. Decline of auditory-motor speech processing in older adults with hearing loss. Neurobiol Aging 2018; 72:89-97. [PMID: 30240945 DOI: 10.1016/j.neurobiolaging.2018.07.013] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2017] [Revised: 07/20/2018] [Accepted: 07/20/2018] [Indexed: 10/28/2022]
Abstract
Older adults often experience difficulties in understanding speech, partly because of age-related hearing loss (HL). In young adults, activity of the left articulatory motor cortex is enhanced and it interacts with the auditory cortex via the left-hemispheric dorsal stream during speech processing. Little is known about the effect of aging and age-related HL on this auditory-motor interaction and speech processing in the articulatory motor cortex. It has been proposed that upregulation of the motor system during speech processing could compensate for HL and auditory processing deficits in older adults. Alternatively, age-related auditory deficits could reduce and distort the input from the auditory cortex to the articulatory motor cortex, suppressing recruitment of the motor system during listening to speech. The aim of the present study was to investigate the effects of aging and age-related HL on the excitability of the tongue motor cortex during listening to spoken sentences using transcranial magnetic stimulation and electromyography. Our results show that the excitability of the tongue motor cortex was facilitated during listening to speech in young and older adults with normal hearing. This facilitation was significantly reduced in older adults with HL. These findings suggest a decline of auditory-motor processing of speech in adults with age-related HL.
Affiliation(s)
- Muriel T N Panouillères
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; School of Sports Sciences and Human Movement, CIAMS, Université Paris-Sud, Université Paris-Saclay, Orsay, France; UFR Collegium Sciences et Techniques, CIAMS, Université d'Orléans, Orléans, France.
- Riikka Möttönen
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; School of Psychology, University of Nottingham, Nottingham, UK
|
46
|
Differences in Hearing Acuity among "Normal-Hearing" Young Adults Modulate the Neural Basis for Speech Comprehension. eNeuro 2018; 5:eN-NWR-0263-17. [PMID: 29911176 PMCID: PMC6001266 DOI: 10.1523/eneuro.0263-17.2018] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2017] [Revised: 04/17/2018] [Accepted: 04/18/2018] [Indexed: 12/11/2022] Open
Abstract
In this paper, we investigate how subtle differences in hearing acuity affect the neural systems supporting speech processing in young adults. Auditory sentence comprehension requires perceiving a complex acoustic signal and performing linguistic operations to extract the correct meaning. We used functional MRI to monitor human brain activity while adults aged 18–41 years listened to spoken sentences. The sentences varied in their level of syntactic processing demands, containing either a subject-relative or object-relative center-embedded clause. All participants self-reported normal hearing, confirmed by audiometric testing, with some variation within a clinically normal range. We found that participants showed activity related to sentence processing in a left-lateralized frontotemporal network. Although accuracy was generally high, participants still made some errors, which were associated with increased activity in bilateral cingulo-opercular and frontoparietal attention networks. A whole-brain regression analysis revealed that activity in a right anterior middle frontal gyrus (aMFG) component of the frontoparietal attention network was related to individual differences in hearing acuity, such that listeners with poorer hearing showed greater recruitment of this region when successfully understanding a sentence. The activity in right aMFG for listeners with poor hearing did not differ as a function of sentence type, suggesting a general mechanism that is independent of linguistic processing demands. Our results suggest that even modest variations in hearing ability impact the systems supporting auditory speech comprehension, and that auditory sentence comprehension entails the coordination of a left perisylvian network that is sensitive to linguistic variation with an executive attention network that responds to acoustic challenge.
|
47
|
Rosemann S, Thiel CM. Audio-visual speech processing in age-related hearing loss: Stronger integration and increased frontal lobe recruitment. Neuroimage 2018; 175:425-437. [PMID: 29655940 DOI: 10.1016/j.neuroimage.2018.04.023] [Citation(s) in RCA: 41] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2017] [Revised: 03/09/2018] [Accepted: 04/09/2018] [Indexed: 11/19/2022] Open
Abstract
Hearing loss is associated with difficulties in understanding speech, especially under adverse listening conditions. In these situations, seeing the speaker improves speech intelligibility in hearing-impaired participants. On the neuronal level, previous research has shown cross-modal plastic reorganization in the auditory cortex following hearing loss leading to altered processing of auditory, visual and audio-visual information. However, how reduced auditory input affects audio-visual speech perception in hearing-impaired subjects is largely unknown. We here investigated the impact of mild to moderate age-related hearing loss on processing audio-visual speech using functional magnetic resonance imaging. Normal-hearing and hearing-impaired participants performed two audio-visual speech integration tasks: a sentence detection task inside the scanner and the McGurk illusion outside the scanner. Both tasks consisted of congruent and incongruent audio-visual conditions, as well as auditory-only and visual-only conditions. We found a significantly stronger McGurk illusion in the hearing-impaired participants, which indicates stronger audio-visual integration. Neurally, hearing loss was associated with an increased recruitment of frontal brain areas when processing incongruent audio-visual, auditory and also visual speech stimuli, which may reflect the increased effort to perform the task. Hearing loss modulated both the audio-visual integration strength measured with the McGurk illusion and brain activation in frontal areas in the sentence task, showing stronger integration and higher brain activation with increasing hearing loss. Incongruent compared to congruent audio-visual speech revealed an opposite brain activation pattern in left ventral postcentral gyrus in both groups, with higher activation in hearing-impaired participants in the incongruent condition.
Our results indicate that already mild to moderate hearing loss impacts audio-visual speech processing accompanied by changes in brain activation particularly involving frontal areas. These changes are modulated by the extent of hearing loss.
Affiliation(s)
- Stephanie Rosemann
- Biological Psychology, Department of Psychology, Department for Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany.
- Christiane M Thiel
- Biological Psychology, Department of Psychology, Department for Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
|
48
|
Koeritzer MA, Rogers CS, Van Engen KJ, Peelle JE. The Impact of Age, Background Noise, Semantic Ambiguity, and Hearing Loss on Recognition Memory for Spoken Sentences. J Speech Lang Hear Res 2018; 61:740-751. [PMID: 29450493 PMCID: PMC5963044 DOI: 10.1044/2017_jslhr-h-17-0077] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/27/2017] [Revised: 08/28/2017] [Accepted: 09/20/2017] [Indexed: 05/20/2023]
Abstract
PURPOSE The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension. METHOD We tested 30 young adults and 30 older adults. Participants heard lists of sentences in quiet and in 8-talker babble at signal-to-noise ratios of +15 dB and +5 dB, which increased acoustic challenge but left the speech largely intelligible. Half of the sentences contained semantically ambiguous words to additionally manipulate cognitive challenge. Following each list, participants performed a visual recognition memory task in which they viewed written sentences and indicated whether they remembered hearing the sentence previously. RESULTS Recognition memory (indexed by d') was poorer for acoustically challenging sentences, poorer for sentences containing ambiguous words, and differentially poorer for noisy high-ambiguity sentences. Similar patterns were observed for Z-transformed response time data. There were no main effects of age, but age interacted with both acoustic clarity and semantic ambiguity such that older adults' recognition memory was poorer for acoustically degraded high-ambiguity sentences than the young adults'. Within the older adult group, exploratory correlation analyses suggested that poorer hearing ability was associated with poorer recognition memory for sentences in noise, and better verbal working memory was associated with better recognition memory for sentences in noise. CONCLUSIONS Our results demonstrate listeners' reliance on domain-general cognitive processes when listening to acoustically challenging speech, even when speech is highly intelligible. Acoustic challenge and semantic ambiguity both reduce the accuracy of listeners' recognition memory for spoken sentences. SUPPLEMENTAL MATERIALS https://doi.org/10.23641/asha.5848059.
Affiliation(s)
- Margaret A Koeritzer
- Program in Audiology and Communication Sciences, Washington University in St. Louis, MO
- Chad S Rogers
- Department of Otolaryngology, Washington University in St. Louis, MO
- Kristin J Van Engen
- Department of Psychological and Brain Sciences and Program in Linguistics, Washington University in St. Louis, MO
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, MO
|
49
|
Alain C, Du Y, Bernstein LJ, Barten T, Banai K. Listening under difficult conditions: An activation likelihood estimation meta-analysis. Hum Brain Mapp 2018. [PMID: 29536592 DOI: 10.1002/hbm.24031] [Citation(s) in RCA: 73] [Impact Index Per Article: 12.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023] Open
Abstract
The brain networks supporting speech identification and comprehension under difficult listening conditions are not well specified. The networks hypothesized to underlie effortful listening include regions responsible for executive control. We conducted meta-analyses of auditory neuroimaging studies to determine whether a common activation pattern of the frontal lobe supports effortful listening under different speech manipulations. Fifty-three functional neuroimaging studies investigating speech perception were divided into three independent Activation Likelihood Estimation analyses based on the type of speech manipulation paradigm used: speech-in-noise (SIN, 16 studies, involving 224 participants); spectrally degraded speech using filtering techniques (15 studies, involving 270 participants); and linguistic complexity (i.e., levels of syntactic, lexical and semantic intricacy/density; 22 studies, involving 348 participants). Meta-analysis of the SIN studies revealed that higher effort was associated with activation in left inferior frontal gyrus (IFG), left inferior parietal lobule, and right insula. Studies using spectrally degraded speech demonstrated increased activation of the insula bilaterally and the left superior temporal gyrus (STG). Studies manipulating linguistic complexity showed activation in the left IFG, right middle frontal gyrus, left middle temporal gyrus and bilateral STG. Planned contrasts revealed left IFG activation in linguistic complexity studies, which differed from activation patterns observed in SIN or spectral degradation studies. Although there was no significant overlap in prefrontal activation across these three speech manipulation paradigms, SIN and spectral degradation showed overlapping regions in left and right insula. These findings provide evidence that there is regional specialization within the left IFG and that differential executive networks underlie effortful listening.
Affiliation(s)
- Claude Alain
- Rotman Research Institute, Baycrest Health Centre, Toronto, Ontario, Canada; Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Yi Du
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Lori J Bernstein
- Department of Supportive Care, Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada; Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
- Thijs Barten
- Rotman Research Institute, Baycrest Health Centre, Toronto, Ontario, Canada
- Karen Banai
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
|
50
|
Is Listening in Noise Worth It? The Neurobiology of Speech Recognition in Challenging Listening Conditions. Ear Hear 2018; 37 Suppl 1:101S-10S. [PMID: 27355759 DOI: 10.1097/aud.0000000000000300] [Citation(s) in RCA: 80] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
This review examines findings from functional neuroimaging studies of speech recognition in noise to provide a neural systems level explanation for the effort and fatigue that can be experienced during speech recognition in challenging listening conditions. Neuroimaging studies of speech recognition consistently demonstrate that challenging listening conditions engage neural systems that are used to monitor and optimize performance across a wide range of tasks. These systems appear to improve speech recognition in younger and older adults, but sustained engagement of these systems also appears to produce an experience of effort and fatigue that may affect the value of communication. When considered in the broader context of the neuroimaging and decision making literature, the speech recognition findings from functional imaging studies indicate that the expected value, or expected level of speech recognition given the difficulty of listening conditions, should be considered when measuring effort and fatigue. The authors propose that the behavioral economics or neuroeconomics of listening can provide a conceptual and experimental framework for understanding effort and fatigue that may have clinical significance.
|