1. He J, Frances C, Creemers A, Brehm L. Effects of irrelevant unintelligible and intelligible background speech on spoken language production. Q J Exp Psychol (Hove) 2024;77:1745-1769. PMID: 38044368; PMCID: PMC11295403; DOI: 10.1177/17470218231219971.
Abstract
Earlier work has explored spoken word production during irrelevant background speech such as intelligible and unintelligible word lists. The present study compared how different types of irrelevant background speech (word lists vs. sentences) influenced spoken word production relative to a quiet control condition, and whether the influence depended on the intelligibility of the background speech. Experiment 1 presented native Dutch speakers with Chinese word lists and sentences. Experiment 2 presented a similar group with Dutch word lists and sentences. In both experiments, the lexical selection demands in speech production were manipulated by varying name agreement (high vs. low) of the to-be-named pictures. Results showed that background speech, regardless of its intelligibility, disrupted spoken word production relative to a quiet condition, but no effects of word lists versus sentences in either language were found. Moreover, the disruption by intelligible background speech compared with the quiet condition was eliminated when planning low name agreement pictures. These findings suggest that any speech, even unintelligible speech, interferes with production, which implies that the disruption of spoken word production is mainly phonological in nature. The disruption by intelligible background speech can be reduced or eliminated via top-down attentional engagement.
Affiliation(s)
- Jieying He
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- International Max Planck Research School for Language Sciences, Nijmegen, The Netherlands
- Candice Frances
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Ava Creemers
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Laurel Brehm
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Department of Linguistics, University of California, Santa Barbara, Santa Barbara, CA, USA
2. Alavash M, Obleser J. Brain Network Interconnectivity Dynamics Explain Metacognitive Differences in Listening Behavior. J Neurosci 2024;44:e2322232024. PMID: 38839303; PMCID: PMC11293451; DOI: 10.1523/jneurosci.2322-23.2024.
Abstract
Complex auditory scenes pose a challenge to attentive listening, rendering listeners slower and more uncertain in their perceptual decisions. How can we explain such behaviors from the dynamics of cortical networks that pertain to the control of listening behavior? We here follow up on the hypothesis that human adaptive perception in challenging listening situations is supported by modular reconfiguration of auditory-control networks in a sample of N = 40 participants (13 males) who underwent resting-state and task functional magnetic resonance imaging (fMRI). Individual titration of a spatial selective auditory attention task maintained an average accuracy of ∼70% but yielded considerable interindividual differences in listeners' response speed and reported confidence in their own perceptual decisions. Whole-brain network modularity increased from rest to task by reconfiguring auditory, cinguloopercular, and dorsal attention networks. Specifically, interconnectivity between the auditory network and cinguloopercular network decreased during the task relative to the resting state. Additionally, interconnectivity between the dorsal attention network and cinguloopercular network increased. These interconnectivity dynamics were predictive of individual differences in response confidence, the degree of which was more pronounced after incorrect judgments. Our findings uncover the behavioral relevance of functional cross talk between auditory and attentional-control networks during metacognitive assessment of one's own perception in challenging listening situations and suggest two functionally dissociable cortical networked systems that shape the considerable metacognitive differences between individuals in adaptive listening behavior.
Affiliation(s)
- Mohsen Alavash
- Department of Psychology, University of Lübeck, Lübeck 23562, Germany
- Center for Brain, Behavior, and Metabolism, University of Lübeck, Lübeck 23562, Germany
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck 23562, Germany
- Center for Brain, Behavior, and Metabolism, University of Lübeck, Lübeck 23562, Germany
3. Herrmann B, Ryan JD. Pupil Size and Eye Movements Differently Index Effort in Both Younger and Older Adults. J Cogn Neurosci 2024;36:1325-1340. PMID: 38683698; DOI: 10.1162/jocn_a_02172.
Abstract
The assessment of mental effort is increasingly relevant in neurocognitive and life span domains. Pupillometry, the measurement of pupil size, is often used to assess effort but has disadvantages. Analysis of eye movements may provide an alternative, but research has been limited to easy and difficult task demands in younger adults. An effort measure must be sensitive to the whole effort profile, including the withdrawal of effort investment ("giving up"), and must capture effort in different age groups. The current study comprised three experiments in which younger (n = 66) and older (n = 44) adults listened to speech masked by background babble at different signal-to-noise ratios associated with easy, difficult, and impossible speech comprehension. We expected individuals to invest little effort for easy and impossible speech (giving up) but to exert effort for difficult speech. Indeed, pupil size was largest for difficult but lower for easy and impossible speech. In contrast, gaze dispersion decreased with increasing speech masking in both age groups. Critically, gaze dispersion during difficult speech returned to levels similar to easy speech after sentence offset, when acoustic stimulation was similar across conditions, whereas gaze dispersion during impossible speech remained reduced. These findings show that the reduction in eye movements is not a byproduct of acoustic factors, but instead suggest that neurocognitive processes, distinct from the arousal-related systems regulating pupil size, drive reduced eye movements during high task demands. The current data thus show that effort in one sensory domain (audition) differentially impacts distinct functional properties in another sensory domain (vision).
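The abstract does not define how gaze dispersion was quantified. One common choice, shown here purely as an illustrative sketch (the function name and the metric itself are our assumptions, not taken from the paper), is the root-mean-square distance of gaze samples from their centroid:

```python
import math

def gaze_dispersion(xs, ys):
    """RMS distance of gaze samples from their centroid -- one possible
    dispersion metric; the paper's exact definition is not given in the
    abstract."""
    cx = sum(xs) / len(xs)
    cy = sum(ys) / len(ys)
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2
                         for x, y in zip(xs, ys)) / len(xs))

# Tightly clustered gaze (reduced exploration) yields lower dispersion
focused = gaze_dispersion([0.0, 0.1, -0.1, 0.0], [0.0, 0.1, 0.0, -0.1])
roaming = gaze_dispersion([0.0, 2.0, -2.0, 1.0], [0.0, 1.5, -1.0, 2.0])
```

Under such a metric, the reduced eye movements reported for impossible speech would register as a lower dispersion value than for easy speech.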
Affiliation(s)
- Björn Herrmann
- Rotman Research Institute, North York, Ontario, Canada
- University of Toronto, Ontario, Canada
- Jennifer D Ryan
- Rotman Research Institute, North York, Ontario, Canada
- University of Toronto, Ontario, Canada
4. Kim SG, De Martino F, Overath T. Linguistic modulation of the neural encoding of phonemes. Cereb Cortex 2024;34:bhae155. PMID: 38687241; PMCID: PMC11059272; DOI: 10.1093/cercor/bhae155.
Abstract
Speech comprehension entails the neural mapping of the acoustic speech signal onto learned linguistic units. This acousto-linguistic transformation is bi-directional, whereby higher-level linguistic processes (e.g. semantics) modulate the acoustic analysis of individual linguistic units. Here, we investigated the cortical topography and linguistic modulation of the most fundamental linguistic unit, the phoneme. We presented natural speech and "phoneme quilts" (pseudo-randomly shuffled phonemes) in either a familiar (English) or unfamiliar (Korean) language to native English speakers while recording functional magnetic resonance imaging. This allowed us to dissociate the contribution of acoustic vs. linguistic processes toward phoneme analysis. We show (i) that the acoustic analysis of phonemes is modulated by linguistic analysis and (ii) that this modulation requires both acoustic and phonetic information. These results suggest that the linguistic modulation of cortical sensitivity to phoneme classes minimizes prediction error during natural speech perception, thereby aiding speech comprehension in challenging listening situations.
Affiliation(s)
- Seung-Goo Kim
- Department of Psychology and Neuroscience, Duke University, 308 Research Dr, Durham, NC 27708, United States
- Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, Frankfurt am Main 60322, Germany
- Federico De Martino
- Faculty of Psychology and Neuroscience, University of Maastricht, Universiteitssingel 40, 6229 ER Maastricht, Netherlands
- Tobias Overath
- Department of Psychology and Neuroscience, Duke University, 308 Research Dr, Durham, NC 27708, United States
- Duke Institute for Brain Sciences, Duke University, 308 Research Dr, Durham, NC 27708, United States
- Center for Cognitive Neuroscience, Duke University, 308 Research Dr, Durham, NC 27708, United States
5. Alqudah S, Zuriekat M, Shatarah A. Impact of hearing impairment on the mental status of the adults and older adults in Jordanian society. PLoS One 2024;19:e0298616. PMID: 38437235; PMCID: PMC10911586; DOI: 10.1371/journal.pone.0298616.
Abstract
BACKGROUND: Hearing loss is a common disorder, affecting both children and adults worldwide. Individuals with hearing loss suffer from mental health problems that affect their quality of life. OBJECTIVE: This study aimed to investigate the social and emotional consequences of hearing loss in a Jordanian population using Arabic versions of the Hearing Handicap Inventory for Adults (HHIA) and the Hearing Handicap Inventory for the Elderly (HHIE). METHODS: This study included 300 Jordanian participants aged 18-90 years with hearing loss. Each participant underwent a complete audiological evaluation before answering the questionnaires. RESULTS: The median overall scores of the HHIA and HHIE groups were 39 and 65, respectively. Both the HHIA (Cronbach's alpha = 0.79, p < 0.001) and the HHIE (Cronbach's alpha = 0.78, p < 0.001) were significantly associated with the social, emotional, and overall scores. The median emotional and social scores of the older adult group were significantly higher than those of the adult group (Mann-Whitney test, Z = -4.721, p = 0.001). CONCLUSION: The present research revealed that psychological disabilities associated with hearing loss in the adult Jordanian population are more frequent and severe than in other nations. This may be attributed to the lack of awareness of the mental consequences of hearing loss among Jordanian healthcare providers and the public.
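The internal-consistency statistic reported here, Cronbach's alpha, can be computed from per-item scores with the standard formula. The sketch below is a generic illustration, not code from the study; the function and data are our own:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of items, each a list of respondents'
    scores: alpha = k/(k-1) * (1 - sum of item variances / variance of
    the total score)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_variance = sum(pvariance(item) for item in items)
    return k / (k - 1) * (1 - item_variance / pvariance(totals))

# Two perfectly consistent items give alpha = 1.0; real questionnaire
# data, like the HHIA/HHIE here, lands somewhere below that
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]])
```

Values around 0.78-0.79, as reported for the Arabic HHIA and HHIE, are conventionally read as acceptable reliability.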
Affiliation(s)
- Safa Alqudah
- Department of Rehabilitation Sciences, Faculty of Applied Medical Sciences, Jordan University of Science and Technology, Irbid, Jordan
- Margaret Zuriekat
- Department of Special Surgery, School of Medicine, The University of Jordan & Jordan University Hospital, Amman, Jordan
- Aya Shatarah
- Bachelor in Speech and Hearing, Jordan University of Science and Technology, Irbid, Jordan
6. Johns MA, Calloway RC, Karunathilake IMD, Decruy LP, Anderson S, Simon JZ, Kuchinsky SE. Attention Mobilization as a Modulator of Listening Effort: Evidence From Pupillometry. Trends Hear 2024;28:23312165241245240. PMID: 38613337; PMCID: PMC11015766; DOI: 10.1177/23312165241245240.
Abstract
Listening to speech in noise can require substantial mental effort, even among younger normal-hearing adults. The task-evoked pupil response (TEPR) has been shown to track the increased effort exerted to recognize words or sentences in increasing noise. However, few studies have examined the trajectory of listening effort across longer, more natural, stretches of speech, or the extent to which expectations about upcoming listening difficulty modulate the TEPR. Seventeen younger normal-hearing adults listened to 60-s-long audiobook passages, repeated three times in a row, at two different signal-to-noise ratios (SNRs) while pupil size was recorded. There was a significant interaction between SNR, repetition, and baseline pupil size on sustained listening effort. At lower baseline pupil sizes, potentially reflecting lower attention mobilization, TEPRs were more sustained in the harder SNR condition, particularly when attention mobilization remained low by the third presentation. At intermediate baseline pupil sizes, differences between conditions were largely absent, suggesting these listeners had optimally mobilized their attention for both SNRs. Lastly, at higher baseline pupil sizes, potentially reflecting overmobilization of attention, the effect of SNR was initially reversed for the second and third presentations: participants initially appeared to disengage in the harder SNR condition, resulting in reduced TEPRs that recovered in the second half of the story. Together, these findings suggest that the unfolding of listening effort over time depends critically on the extent to which individuals have successfully mobilized their attention in anticipation of difficult listening conditions.
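Task-evoked pupil responses (TEPRs) like those analysed here are conventionally expressed relative to a pre-stimulus baseline. The snippet below is a minimal, hypothetical illustration of subtractive baseline correction; the study's actual preprocessing pipeline is not described in the abstract:

```python
def baseline_correct(pupil_trace, n_baseline):
    """Subtract the mean of the first n_baseline (pre-stimulus) samples,
    so the trace expresses the task-evoked change in pupil size."""
    baseline = sum(pupil_trace[:n_baseline]) / n_baseline
    return [sample - baseline for sample in pupil_trace]

# A trace that sits at 2 (arbitrary units) pre-stimulus, then dilates
tepr = baseline_correct([2.0, 2.0, 2.0, 3.0, 4.0], n_baseline=3)
```

Separating the baseline from the evoked change in this way is what allows baseline pupil size (the abstract's proxy for attention mobilization) and the TEPR to be analysed as distinct quantities.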
Affiliation(s)
- M. A. Johns
- Institute for Systems Research, University of Maryland, College Park, MD 20742, USA
- R. C. Calloway
- Institute for Systems Research, University of Maryland, College Park, MD 20742, USA
- I. M. D. Karunathilake
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA
- L. P. Decruy
- Institute for Systems Research, University of Maryland, College Park, MD 20742, USA
- S. Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742, USA
- J. Z. Simon
- Institute for Systems Research, University of Maryland, College Park, MD 20742, USA
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA
- Department of Biology, University of Maryland, College Park, MD 20742, USA
- S. E. Kuchinsky
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742, USA
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD 20889, USA
7. Kraus F, Obleser J, Herrmann B. Pupil Size Sensitivity to Listening Demand Depends on Motivational State. eNeuro 2023;10:ENEURO.0288-23.2023. PMID: 37989588; PMCID: PMC10734370; DOI: 10.1523/eneuro.0288-23.2023.
Abstract
Motivation plays a role when a listener needs to understand speech under acoustically demanding conditions. Previous work has demonstrated pupil-linked arousal being sensitive to both listening demands and motivational state during listening. It is less clear how motivational state affects the temporal evolution of the pupil size and its relation to subsequent behavior. We used an auditory gap detection task (N = 33) to study the joint impact of listening demand and motivational state on the pupil size response and examine its temporal evolution. Task difficulty and a listener's motivational state were orthogonally manipulated through changes in gap duration and monetary reward prospect. We show that participants' performance decreased with task difficulty, but that reward prospect enhanced performance under hard listening conditions. Pupil size increased with both increased task difficulty and higher reward prospect, and this reward prospect effect was largest under difficult listening conditions. Moreover, pupil size time courses differed between detected and missed gaps, suggesting that the pupil response indicates upcoming behavior. Larger pre-gap pupil size was further associated with faster response times on a trial-by-trial within-participant level. Our results reiterate the utility of pupil size as an objective and temporally sensitive measure in audiology. However, such assessments of cognitive resource recruitment need to consider the individual's motivational state.
Affiliation(s)
- Frauke Kraus
- Department of Psychology, University of Lübeck, 23562 Lübeck, Germany
- Center of Brain, Behavior, and Metabolism, University of Lübeck, 23562 Lübeck, Germany
- Jonas Obleser
- Department of Psychology, University of Lübeck, 23562 Lübeck, Germany
- Center of Brain, Behavior, and Metabolism, University of Lübeck, 23562 Lübeck, Germany
- Björn Herrmann
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto M6A 2E1, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto M5S 3G3, Ontario, Canada
8. Park MH, Kim JS, Lee S, Kim DH, Oh SH. Increased Resting-State Positron Emission Tomography Activity After Cochlear Implantation in Adult Deafened Cats. Clin Exp Otorhinolaryngol 2023;16:326-333. PMID: 36397262; DOI: 10.21053/ceo.2022.00423.
Abstract
OBJECTIVES: Cochlear implants are widely used for hearing rehabilitation in patients with profound sensorineural hearing loss. However, cochlear implant outcomes are variable, and central neural plasticity is considered to be a reason for this variability. We hypothesized that resting-state cortical networks play a role in conditions of profound hearing loss and are affected by cochlear implants. To investigate the resting-state neuronal networks after cochlear implantation, we acquired 18F-fluorodeoxyglucose (FDG)-positron emission tomography (PET) images in experimental animals. METHODS: Eight adult domestic cats were enrolled in this study. The hearing threshold of the animals was within the normal range, as measured by auditory evoked potential. They were divided into control (n=4) and hearing loss (n=4) groups. Hearing loss was induced by co-administration of ethacrynic acid and kanamycin. FDG-PET was performed in a normal hearing state and 4 and 11 months after the deafening procedure. Cochlear implantation was performed in the right ear, and electrical cochlear stimulation was applied for 7 months (from 4 to 11 months after the deafening procedure). PET images were compared between the two groups at the three time points. RESULTS: Four months after hearing loss, activity in the auditory cortical area decreased and activity in the associated visual area increased. After 7 months of cochlear stimulation, the superior marginal gyrus and cingulate gyrus, which are components of the default mode network, showed hypermetabolism, while the inferior colliculi showed hypometabolism. CONCLUSION: Resting-state cortical activity in default mode network components was elevated after cochlear stimulation, suggesting that the animals' awareness level was elevated after hearing restoration by cochlear implantation.
Affiliation(s)
- Min-Hyun Park
- Department of Otorhinolaryngology, Seoul National University College of Medicine, Seoul, Korea
- Department of Otorhinolaryngology, Seoul Metropolitan Government-Seoul National University Boramae Medical Center, Seoul, Korea
- Jin Su Kim
- Division of RI Application, Korea Institute of Radiological and Medical Sciences, Seoul, Korea
- Seonhwa Lee
- Division of RI Application, Korea Institute of Radiological and Medical Sciences, Seoul, Korea
- Doo Hee Kim
- Department of Otorhinolaryngology, Seoul National University College of Medicine, Seoul, Korea
- Seung Ha Oh
- Department of Otorhinolaryngology, Seoul National University College of Medicine, Seoul, Korea
- Sensory Organ Research Institute, Seoul National University Medical Research Center, Seoul, Korea
9. Shetty HN, Raju S, Singh S S. The relationship between age, acceptable noise level, and listening effort in middle-aged and older-aged individuals. J Otol 2023;18:220-229. PMID: 37877073; PMCID: PMC10593579; DOI: 10.1016/j.joto.2023.09.004.
Abstract
Objective: The purpose of the study was to evaluate listening effort in adults who experience varied annoyance towards noise. Materials and methods: Fifty native Kannada-speaking adults aged 41-68 years participated. We evaluated each participant's acceptable noise level while listening to speech. Further, a sentence-final word-identification and recall test at 0 dB SNR (less favorable condition) and 4 dB SNR (relatively favorable condition) was used to assess listening effort. The repeat and recall scores were obtained for each condition. Results: The regression model revealed that listening effort increased by 0.6% at 0 dB SNR and by 0.5% at 4 dB SNR with every one-year advancement in age, and by 0.9% at 0 dB SNR and 0.7% at 4 dB SNR with every one-dB change in acceptable noise level (ANL). At 0 dB SNR and 4 dB SNR, moderate and mild negative correlations, respectively, were noted between listening effort and annoyance towards noise when age was controlled. Conclusion: Listening effort increases with age, and its effect is greater in less favorable than in relatively favorable conditions. However, when annoyance towards noise was controlled, the impact of age on listening effort was reduced. Listening effort correlated with the level of annoyance once the age effect was controlled, and listening effort was predicted from the ANL to a moderate degree.
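The per-year and per-dB increments reported in the Results are regression slopes. An ordinary least-squares slope of effort on a predictor can be sketched as follows; this is illustrative only, with invented data points, not the study's model or data:

```python
def ols_slope(x, y):
    """Ordinary least-squares slope of y on x: cov(x, y) / var(x)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    var = sum((xi - mean_x) ** 2 for xi in x)
    return cov / var

# A 0.6%-per-year relationship, like the one reported at 0 dB SNR,
# recovered from hypothetical noise-free (age, effort %) points
slope = ols_slope([40, 50, 60, 70], [10.0, 16.0, 22.0, 28.0])
```

A slope of 0.6 on this scale means a listener ten years older is predicted to show roughly 6% more listening effort, other predictors held constant.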
Affiliation(s)
- Suma Raju
- Department of Speech-Language Pathology, JSS Institute of Speech and Hearing, Mysuru, Karnataka, India
- Sanjana Singh S
- Department of Audiology, JSS Institute of Speech and Hearing, Mysuru, Karnataka, India
10. Zhang Y, Rennig J, Magnotti JF, Beauchamp MS. Multivariate fMRI responses in superior temporal cortex predict visual contributions to, and individual differences in, the intelligibility of noisy speech. Neuroimage 2023;278:120271. PMID: 37442310; PMCID: PMC10460966; DOI: 10.1016/j.neuroimage.2023.120271.
Abstract
Humans have the unique ability to decode the rapid stream of language elements that constitute speech, even when it is contaminated by noise. Two reliable observations about noisy speech perception are that seeing the face of the talker improves intelligibility and that individuals differ in their ability to perceive noisy speech. We introduce a multivariate BOLD fMRI measure that explains both observations. In two independent fMRI studies, clear and noisy speech was presented in visual, auditory and audiovisual formats to thirty-seven participants who rated intelligibility. An event-related design was used to sort noisy speech trials by their intelligibility. Individual-differences multidimensional scaling was applied to fMRI response patterns in superior temporal cortex, and the dissimilarity between responses to clear speech and noisy (but intelligible) speech was measured. Neural dissimilarity was lower for audiovisual speech than for auditory-only speech, corresponding to the greater intelligibility of noisy audiovisual speech. Dissimilarity was also lower in participants with better noisy speech perception, corresponding to individual differences. These relationships held for both single-word and entire-sentence stimuli, suggesting that they were driven by intelligibility rather than by the specific stimuli tested. A neural measure of perceptual intelligibility may aid in the development of strategies for helping those with impaired speech perception.
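The study quantified dissimilarity via individual-differences multidimensional scaling; as a much simpler stand-in for intuition, the distance between two response patterns is often taken as 1 minus their Pearson correlation. The sketch below illustrates only that correlation-distance idea and is not the paper's method:

```python
import math

def pattern_dissimilarity(a, b):
    """1 - Pearson r between two voxel response patterns; lower values
    mean the noisy-speech pattern more closely resembles the
    clear-speech pattern."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return 1 - cov / (sa * sb)
```

On this scale, identical patterns score 0 and perfectly anti-correlated patterns score 2, so "neural dissimilarity was lower for audiovisual speech" translates to audiovisual responses sitting closer to the clear-speech pattern.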
Affiliation(s)
- Yue Zhang
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, United States
- Johannes Rennig
- Division of Neuropsychology, Center of Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- John F Magnotti
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Michael S Beauchamp
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
11. Yasmin S, Irsik VC, Johnsrude IS, Herrmann B. The effects of speech masking on neural tracking of acoustic and semantic features of natural speech. Neuropsychologia 2023;186:108584. PMID: 37169066; DOI: 10.1016/j.neuropsychologia.2023.108584.
Abstract
Listening environments contain background sounds that mask speech and lead to communication challenges. Sensitivity to slow acoustic fluctuations in speech can help segregate speech from background noise. Semantic context can also facilitate speech perception in noise, for example, by enabling prediction of upcoming words. However, not much is known about how different degrees of background masking affect the neural processing of acoustic and semantic features during naturalistic speech listening. In the current electroencephalography (EEG) study, participants listened to engaging, spoken stories masked at different levels of multi-talker babble to investigate how neural activity in response to acoustic and semantic features changes with acoustic challenges, and how such effects relate to speech intelligibility. The pattern of neural response amplitudes associated with both acoustic and semantic speech features across masking levels was U-shaped, such that amplitudes were largest for moderate masking levels. This U-shape may be due to increased attentional focus when speech comprehension is challenging, but manageable. The latency of the neural responses increased linearly with increasing background masking, and neural latency change associated with acoustic processing most closely mirrored the changes in speech intelligibility. Finally, tracking responses related to semantic dissimilarity remained robust until severe speech masking (-3 dB SNR). The current study reveals that neural responses to acoustic features are highly sensitive to background masking and decreasing speech intelligibility, whereas neural responses to semantic features are relatively robust, suggesting that individuals track the meaning of the story well even in moderate background sound.
Affiliation(s)
- Sonia Yasmin
- Department of Psychology & the Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada
- Vanessa C Irsik
- Department of Psychology & the Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada
- Ingrid S Johnsrude
- Department of Psychology & the Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada
- School of Communication and Speech Disorders, The University of Western Ontario, London, ON, N6A 5B7, Canada
- Björn Herrmann
- Rotman Research Institute, Baycrest, M6A 2E1, Toronto, ON, Canada
- Department of Psychology, University of Toronto, M5S 1A1, Toronto, ON, Canada
12. Cartocci G, Inguscio BMS, Giliberto G, Vozzi A, Giorgi A, Greco A, Babiloni F, Attanasio G. Listening Effort in Tinnitus: A Pilot Study Employing a Light EEG Headset and Skin Conductance Assessment during the Listening to a Continuous Speech Stimulus under Different SNR Conditions. Brain Sci 2023;13:1084. PMID: 37509014; PMCID: PMC10377270; DOI: 10.3390/brainsci13071084.
Abstract
Background noise elicits listening effort. What else is tinnitus if not an endogenous background noise? From such reasoning, we hypothesized increased listening effort in tinnitus patients during listening tasks. This hypothesis was tested by investigating indices of listening effort derived from electroencephalography and skin conductance, particularly parietal and frontal alpha activity and electrodermal activity (EDA). Furthermore, tinnitus distress questionnaires (THI and TQ12-I) were employed. Parietal alpha values were positively correlated with TQ12-I scores, and both were negatively correlated with EDA; pre-stimulus frontal alpha correlated with the THI score in our pilot study; finally, results showed a general trend of increased frontal alpha activity in the tinnitus group in comparison to the control group. Parietal alpha during listening to the stimuli, positively correlated with the TQ12-I, appears to reflect a higher listening effort in tinnitus patients and the perception of tinnitus symptoms. The negative correlation of both listening effort (parietal alpha) and tinnitus symptom perception (TQ12-I scores) with EDA levels could be explained by a sympathetic nervous system that is less responsive in preparing the body to expend increased energy during the "fight or flight" response, owing to the depletion of energy by tinnitus perception.
Affiliation(s)
- Giulia Cartocci
- Department of Molecular Medicine, Sapienza University of Rome, 00161 Rome, Italy
- Department of Research and Development, BrainSigns Ltd., 00198 Rome, Italy
- Bianca Maria Serena Inguscio
- Department of Research and Development, BrainSigns Ltd., 00198 Rome, Italy
- Department of Human Neuroscience, Sapienza University of Rome, 00185 Rome, Italy
- Giovanna Giliberto
- Department of Molecular Medicine, Sapienza University of Rome, 00161 Rome, Italy
- Alessia Vozzi
- Department of Research and Development, BrainSigns Ltd., 00198 Rome, Italy
- SAIMLAL Department, Sapienza University of Rome, 00185 Rome, Italy
- Andrea Giorgi
- Department of Research and Development, BrainSigns Ltd., 00198 Rome, Italy
- SAIMLAL Department, Sapienza University of Rome, 00185 Rome, Italy
- Antonio Greco
- Department of Sense Organs, Sapienza University of Rome, 00161 Rome, Italy
- Fabio Babiloni
- Department of Molecular Medicine, Sapienza University of Rome, 00161 Rome, Italy
- Department of Research and Development, BrainSigns Ltd., 00198 Rome, Italy
- Department of Computer Science, Hangzhou Dianzi University, Hangzhou 310005, China
13
Perea Pérez F, Hartley DEH, Kitterick PT, Zekveld AA, Naylor G, Wiggins IM. Listening efficiency in adult cochlear-implant users compared with normally-hearing controls at ecologically relevant signal-to-noise ratios. Front Hum Neurosci 2023; 17:1214485. [PMID: 37520928] [PMCID: PMC10379644] [DOI: 10.3389/fnhum.2023.1214485]
Abstract
Introduction: Due to having to work with an impoverished auditory signal, cochlear-implant (CI) users may experience reduced speech intelligibility and/or increased listening effort in real-world listening situations, compared to their normally-hearing (NH) peers. These two challenges to perception may be usefully integrated in a measure of listening efficiency: conceptually, the amount of accuracy achieved for a certain amount of effort expended.
Methods: We describe a novel approach to quantifying listening efficiency based on the rate of evidence accumulation toward a correct response in a linear ballistic accumulator (LBA) model of choice decision-making. Estimating this objective measure within a hierarchical Bayesian framework confers further benefits, including full quantification of uncertainty in the parameter estimates. We applied this approach to examine the speech-in-noise performance of a group of 24 CI users (mean age: 60.3, range: 20-84 years) and a group of 25 approximately age-matched NH controls (mean age: 55.8, range: 20-79 years). In a laboratory experiment, participants listened to reverberant target sentences in cafeteria noise at ecologically relevant signal-to-noise ratios (SNRs) of +20, +10, and +4 dB. Individual differences in cognition and self-reported listening experiences were also characterised by means of cognitive tests and hearing questionnaires.
Results: At the group level, the CI group showed much lower listening efficiency than the NH group, even in favourable acoustic conditions. At the individual level, within the CI group (but not the NH group), higher listening efficiency was associated with better cognition (i.e., working memory and linguistic closure) and with more positive self-reported listening experiences, both in the laboratory and in daily life.
Discussion: We argue that listening efficiency, measured using the approach described here, is: (i) conceptually well motivated, in that it is theoretically impervious to differences in how individuals approach the speed-accuracy trade-off inherent to all perceptual decision-making; and (ii) of practical utility, in that it is sensitive to differences in task demand, and to differences between groups, even when speech intelligibility remains at or near ceiling. Further research is needed to explore the sensitivity and practical utility of this metric across diverse listening situations.
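The hierarchical Bayesian LBA fit described in this abstract cannot be reproduced in a few lines, but the core notion it formalises, efficiency as the rate of evidence accumulation toward a correct response, can be sketched with a deliberately crude estimate. The function name, the fixed threshold, and the non-decision time below are illustrative assumptions, not the authors' method:

```python
# Toy "listening efficiency" score: evidence units accumulated per
# second toward the correct response. In a linear ballistic
# accumulator (LBA), a response fires when evidence rising at a
# constant drift rate reaches threshold b after non-decision time t0.
# Here that relationship is inverted crudely from accuracy and mean
# response time; this is a didactic simplification, NOT the paper's
# hierarchical Bayesian fit.

def listening_efficiency(correct, rts, threshold=1.0, t0=0.3):
    """correct   -- list of 0/1 trial outcomes
    rts       -- response times in seconds (same length as correct)
    threshold -- assumed evidence threshold b (arbitrary units)
    t0        -- assumed non-decision time in seconds
    """
    assert len(correct) == len(rts)
    accuracy = sum(correct) / len(correct)
    mean_decision_time = sum(rt - t0 for rt in rts) / len(rts)
    # Drift toward the correct accumulator: distance over time,
    # weighted by how often that accumulator actually won.
    return accuracy * threshold / mean_decision_time

# Two listeners with identical accuracy: the slower one gets a lower
# score, folding the speed-accuracy trade-off into one number.
fast = listening_efficiency([1, 1, 1, 0], [0.8, 0.9, 1.0, 1.1])
slow = listening_efficiency([1, 1, 1, 0], [1.6, 1.8, 2.0, 2.2])
```

Because the score divides accuracy by decision time, a participant who keeps accuracy at ceiling by slowing down is not scored as equivalent to one who is both fast and accurate, which is the property the abstract highlights.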
Affiliation(s)
- Francisca Perea Pérez
- National Institute for Health and Care Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, United Kingdom
- Hearing Sciences, Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Douglas E. H. Hartley
- National Institute for Health and Care Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, United Kingdom
- Hearing Sciences, Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Nottingham University Hospitals NHS Trust, Nottingham, United Kingdom
- Pádraig T. Kitterick
- Hearing Sciences, Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- National Acoustic Laboratories, Sydney, NSW, Australia
- Adriana A. Zekveld
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, Amsterdam, Netherlands
- Graham Naylor
- National Institute for Health and Care Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, United Kingdom
- Hearing Sciences, Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Ian M. Wiggins
- National Institute for Health and Care Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, United Kingdom
- Hearing Sciences, Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom
14
Shatzer HE, Russo FA. Brightening the Study of Listening Effort with Functional Near-Infrared Spectroscopy: A Scoping Review. Semin Hear 2023; 44:188-210. [PMID: 37122884] [PMCID: PMC10147513] [DOI: 10.1055/s-0043-1766105]
Abstract
Listening effort is a long-standing area of interest in auditory cognitive neuroscience. Prior research has used multiple techniques to shed light on the neurophysiological mechanisms underlying listening during challenging conditions. Functional near-infrared spectroscopy (fNIRS) is growing in popularity as a tool for cognitive neuroscience research, and its recent advances offer many potential advantages over other neuroimaging modalities for research related to listening effort. This review introduces the basic science of fNIRS and its uses for auditory cognitive neuroscience. We also discuss its application in recently published studies on listening effort and consider future opportunities for studying effortful listening with fNIRS. After reading this article, the learner will be able to explain how fNIRS works, summarize its uses for listening effort research, and apply this knowledge toward the generation of future research in this area.
Affiliation(s)
- Hannah E. Shatzer
- Department of Psychology, Toronto Metropolitan University, Toronto, Canada
- Frank A. Russo
- Department of Psychology, Toronto Metropolitan University, Toronto, Canada
15
Ryan DB, Eckert MA, Sellers EW, Schairer KS, McBee MT, Ridley EA, Smith SL. Performance Monitoring and Cognitive Inhibition during a Speech-in-Noise Task in Older Listeners. Semin Hear 2023; 44:124-139. [PMID: 37122879] [PMCID: PMC10147504] [DOI: 10.1055/s-0043-1767695]
Abstract
The goal of this study was to examine the effect of hearing loss on theta and alpha electroencephalography (EEG) frequency power, as measures of performance monitoring and cognitive inhibition, respectively, during a speech-in-noise task. It was hypothesized that, compared with normal-hearing adults, hearing loss would be associated with a shift of peak theta and alpha power toward easier listening conditions, reflecting how hearing loss modulates the recruitment of listening effort. Nine older adults with normal hearing (ONH) and 10 older adults with hearing loss (OHL) participated in this study. EEG data were collected from all participants while they completed the words-in-noise task. The ONH group showed an inverted U-shape effect of signal-to-noise ratio (SNR), but there were limited effects of SNR on theta or alpha power in the OHL group. The results of the ONH group support the growing body of literature showing effects of listening conditions on alpha and theta power. The null results of listening condition in the OHL group add to a smaller body of literature and suggest that listening effort research should use conditions with near-ceiling performance.
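For readers unfamiliar with the band-power measures used in this and several other entries, the following is a minimal sketch of extracting theta (4-8 Hz) and alpha (8-12 Hz) power from a single EEG channel. It assumes a plain FFT periodogram; published pipelines typically epoch, detrend, window, and average segments (e.g., Welch's method), but the idea is the same:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean periodogram power of `signal` between lo and hi Hz.
    Plain rFFT periodogram of the whole trace, for illustration only."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

# Synthetic check: a 10 Hz oscillation buried in noise should show
# up as alpha-band power, not theta-band power.
np.random.seed(0)
fs = 256
t = np.arange(0, 4, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(len(t))

alpha = band_power(eeg, fs, 8, 12)   # alpha band, contains the 10 Hz peak
theta = band_power(eeg, fs, 4, 8)    # theta band, noise only
```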
Affiliation(s)
- David B. Ryan
- Hearing and Balance Research Program, James H. Quillen VA Medical Center, Mountain Home, Tennessee
- Department of Psychology, East Tennessee State University, Johnson City, Tennessee
- Department of Head and Neck Surgery and Communication Sciences, Duke University School of Medicine, Durham, North Carolina
- Mark A. Eckert
- Department of Otolaryngology - Head and Neck Surgery, Hearing Research Program, Medical University of South Carolina, Charleston, South Carolina
- Eric W. Sellers
- Department of Psychology, East Tennessee State University, Johnson City, Tennessee
- Kim S. Schairer
- Hearing and Balance Research Program, James H. Quillen VA Medical Center, Mountain Home, Tennessee
- Department of Audiology and Speech Language Pathology, East Tennessee State University, Johnson City, Tennessee
- Matthew T. McBee
- Department of Psychology, East Tennessee State University, Johnson City, Tennessee
- Elizabeth A. Ridley
- Department of Psychology, East Tennessee State University, Johnson City, Tennessee
- Sherri L. Smith
- Department of Head and Neck Surgery and Communication Sciences, Duke University School of Medicine, Durham, North Carolina
- Center for the Study of Aging and Human Development, Duke University, Durham, North Carolina
- Department of Population Health Sciences, Duke University School of Medicine, Durham, North Carolina
- Audiology and Speech Pathology Service, Durham Veterans Affairs Healthcare System, Durham, North Carolina
16
Su Y, MacGregor LJ, Olasagasti I, Giraud AL. A deep hierarchy of predictions enables online meaning extraction in a computational model of human speech comprehension. PLoS Biol 2023; 21:e3002046. [PMID: 36947552] [PMCID: PMC10079236] [DOI: 10.1371/journal.pbio.3002046]
Abstract
Understanding speech requires mapping fleeting and often ambiguous soundwaves to meaning. While humans are known to exploit their capacity to contextualize to facilitate this process, how internal knowledge is deployed online remains an open question. Here, we present a model that extracts multiple levels of information from continuous speech online. The model applies linguistic and nonlinguistic knowledge to speech processing, by periodically generating top-down predictions and incorporating bottom-up incoming evidence in a nested temporal hierarchy. We show that a nonlinguistic context level provides semantic predictions informed by sensory inputs, which are crucial for disambiguating among multiple meanings of the same word. The explicit knowledge hierarchy of the model enables a more holistic account of the neurophysiological responses to speech compared to using lexical predictions generated by a neural network language model (GPT-2). We also show that hierarchical predictions reduce peripheral processing via minimizing uncertainty and prediction error. With this proof-of-concept model, we demonstrate that the deployment of hierarchical predictions is a possible strategy for the brain to dynamically utilize structured knowledge and make sense of the speech input.
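The model in this abstract is a deep generative hierarchy with nested timescales, well beyond a short sketch. A one-level toy loop nonetheless illustrates the claim that good top-down predictions shrink the prediction error that lower levels must transmit; the function and its learning-rate parameter are illustrative assumptions, not the authors' model:

```python
# Minimal single-level predictive-update loop: a top-down prediction
# is compared with bottom-up input, and the resulting prediction
# error updates the prediction. The paper nests several such levels
# over different timescales; collapsing it to one level shows why
# accurate predictions reduce the "surprise" to be processed.

def predictive_updates(inputs, prediction=0.0, lr=0.5):
    """Return the sequence of absolute prediction errors as the
    prediction tracks a stream of inputs (lr = update weight)."""
    errors = []
    for x in inputs:
        error = x - prediction          # bottom-up prediction error
        prediction += lr * error        # top-down belief update
        errors.append(abs(error))
    return errors

# A stable context (the same input repeated): error decays toward
# zero, so ever less unexplained signal needs to be passed upward.
errs = predictive_updates([1.0] * 6)
```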
Affiliation(s)
- Yaqing Su
- Department of Fundamental Neuroscience, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Swiss National Centre of Competence in Research "Evolving Language" (NCCR EvolvingLanguage), Geneva, Switzerland
- Lucy J MacGregor
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
- Itsaso Olasagasti
- Department of Fundamental Neuroscience, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Swiss National Centre of Competence in Research "Evolving Language" (NCCR EvolvingLanguage), Geneva, Switzerland
- Anne-Lise Giraud
- Department of Fundamental Neuroscience, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Swiss National Centre of Competence in Research "Evolving Language" (NCCR EvolvingLanguage), Geneva, Switzerland
- Institut Pasteur, Université Paris Cité, Inserm, Institut de l'Audition, Paris, France
17
Bsharat-Maalouf D, Degani T, Karawani H. The Involvement of Listening Effort in Explaining Bilingual Listening Under Adverse Listening Conditions. Trends Hear 2023; 27:23312165231205107. [PMID: 37941413] [PMCID: PMC10637154] [DOI: 10.1177/23312165231205107]
Abstract
The current review examines listening effort to uncover how it is implicated in bilingual performance under adverse listening conditions. Various measures of listening effort, including physiological, behavioral, and subjective measures, have been employed to examine listening effort in bilingual children and adults. Adverse listening conditions, stemming from environmental factors, as well as factors related to the speaker or listener, have been examined. The existing literature, although relatively limited to date, points to increased listening effort among bilinguals in their nondominant second language (L2) compared to their dominant first language (L1) and relative to monolinguals. Interestingly, increased effort is often observed even when speech intelligibility remains unaffected. These findings emphasize the importance of considering listening effort alongside speech intelligibility. Building upon the insights gained from the current review, we propose that various factors may modulate the observed effects. These include the particular measure selected to examine listening effort, the characteristics of the adverse condition, as well as factors related to the particular linguistic background of the bilingual speaker. Critically, further research is needed to better understand the impact of these factors on listening effort. The review outlines avenues for future research that would promote a comprehensive understanding of listening effort in bilingual individuals.
Affiliation(s)
- Dana Bsharat-Maalouf
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Tamar Degani
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Hanin Karawani
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
18
Zhang M, Siegle GJ. Linking Affective and Hearing Sciences-Affective Audiology. Trends Hear 2023; 27:23312165231208377. [PMID: 37904515] [PMCID: PMC10619363] [DOI: 10.1177/23312165231208377]
Abstract
A growing number of health-related sciences, including audiology, have increasingly recognized the importance of affective phenomena. However, in audiology, affective phenomena are mostly studied as a consequence of hearing status. This review first addresses anatomical and functional bidirectional connections between auditory and affective systems that support a reciprocal affect-hearing relationship. We then postulate, by focusing on four practical examples (public hearing campaigns, hearing intervention uptake, thorough hearing evaluation, and tinnitus), that some important challenges in audiology are likely affect-related and that potential solutions could be developed by drawing inspiration from advances in affective science. We continue by introducing useful resources from affective science that could help audiology professionals learn about the wide range of affective constructs and integrate them into hearing research and clinical practice in structured and applicable ways. Six important considerations for good quality affective audiology research are summarized. We conclude that it is worthwhile and feasible to explore the explanatory power of emotions, feelings, motivations, attitudes, moods, and other affective processes in depth when trying to understand and predict how people with hearing difficulties perceive, react, and adapt to their environment.
Affiliation(s)
- Min Zhang
- Shanghai Key Laboratory of Clinical Geriatric Medicine, Huadong Hospital, Fudan University, Shanghai, China
- Greg J. Siegle
- Department of Psychiatry, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA, USA
19
Jakobsen Y, Christensen Andersen LA, Schmidt JH. Study protocol for a randomised controlled trial evaluating the benefits from bimodal solution with cochlear implant and hearing aid versus bilateral hearing aids in patients with asymmetric speech identification scores. BMJ Open 2022; 12:e070296. [PMID: 36581413] [PMCID: PMC9806092] [DOI: 10.1136/bmjopen-2022-070296]
Abstract
INTRODUCTION Cochlear implant (CI) and hearing aid (HA) in a bimodal solution (CI+HA) is compared with bilateral HAs (HA+HA) to test whether the bimodal solution results in better speech intelligibility and self-reported quality of life. METHODS AND ANALYSIS This randomised controlled trial is conducted at Odense University Hospital, Denmark. Sixty adult bilateral HA users referred for CI surgery are enrolled if eligible and undergo audiometry, speech perception in noise (HINT: Hearing in Noise Test), Speech Identification Scores, and the video head impulse test. All participants will receive new replacement HAs. After 1 month they will be randomly assigned (1:1) to the intervention group (CI+HA) or to the delayed-intervention control group (HA+HA). The intervention group (CI+HA) will receive a CI on the ear with the poorer speech recognition score and continue using the HA on the other ear. The control group (HA+HA) will receive a CI after a total of 4 months of bilateral HA use. The primary outcome measures are speech intelligibility measured objectively with HINT (sentences in noise) and DANTALE I (words) and subjectively with the Speech, Spatial and Qualities of Hearing scale questionnaire. Secondary outcomes are patient-reported Health-Related Quality of Life scores assessed with the Nijmegen Cochlear Implant Questionnaire, the Tinnitus Handicap Inventory, and the Dizziness Handicap Inventory. The third outcome is listening effort assessed with pupil dilation during HINT. In conclusion, the purpose is to improve clinical decision-making for CI candidacy and to optimise bimodal solutions. ETHICS AND DISSEMINATION This study protocol was approved by the Ethics Committee Southern Denmark, project ID S-20200074G. All participants are required to sign an informed consent form. This study will be published on completion in peer-reviewed publications and at scientific conferences. TRIAL REGISTRATION NUMBER NCT04919928.
Affiliation(s)
- Yeliz Jakobsen
- Department of Oto-Rhino-Laryngology, Odense University Hospital, Odense C, Denmark
- Department of Audiology, Odense University Hospital, Odense C, Denmark
- Jesper Hvass Schmidt
- Department of Oto-Rhino-Laryngology, Odense University Hospital, Odense C, Denmark
- Department of Audiology, Odense University Hospital, Odense C, Denmark
20
Burg EA, Thakkar TD, Litovsky RY. Interaural speech asymmetry predicts bilateral speech intelligibility but not listening effort in adults with bilateral cochlear implants. Front Neurosci 2022; 16:1038856. [PMID: 36570844] [PMCID: PMC9768552] [DOI: 10.3389/fnins.2022.1038856]
Abstract
Introduction: Bilateral cochlear implants (BiCIs) can facilitate improved speech intelligibility in noise and sound localization abilities compared to a unilateral implant in individuals with bilateral severe-to-profound hearing loss. Still, many individuals with BiCIs do not benefit from binaural hearing to the same extent that normal-hearing (NH) listeners do. For example, binaural redundancy, a speech intelligibility benefit derived from having access to duplicate copies of a signal, is highly variable among BiCI users. Additionally, patients with hearing loss commonly report elevated listening effort compared to NH listeners. There is some evidence to suggest that BiCIs may reduce listening effort compared to a unilateral CI, but the limited existing literature has not shown this consistently. Critically, no studies to date have investigated this question using pupillometry to quantify listening effort, where large pupil sizes indicate high effort and small pupil sizes indicate low effort. Thus, the present study aimed to build on the existing literature by investigating the potential benefits of BiCIs for both speech intelligibility and listening effort.
Methods: Twelve adults with BiCIs were tested in three listening conditions: Better Ear, Poorer Ear, and Bilateral. Stimuli were IEEE sentences presented from a loudspeaker at 0° azimuth in quiet. Participants were asked to repeat back the sentences, and responses were scored by an experimenter while changes in pupil dilation were measured.
Results: On average, participants demonstrated similar speech intelligibility in the Better Ear and Bilateral conditions, and significantly worse speech intelligibility in the Poorer Ear condition. Despite similar speech intelligibility in the Better Ear and Bilateral conditions, pupil dilation was significantly larger in the Bilateral condition.
Discussion: These results suggest that the BiCI users tested in this study did not demonstrate binaural redundancy in quiet. The large interaural speech asymmetries demonstrated by participants may have precluded them from obtaining binaural redundancy, as shown by the inverse relationship between the two variables. Further, participants did not obtain a release from effort when listening with two ears versus their better ear only. Instead, results indicate that bilateral listening elicited increased effort compared to better-ear listening, which may be due to poor integration of asymmetric inputs.
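As a minimal sketch of the task-evoked pupillometry logic used in studies like this one: dilation is measured relative to a pre-stimulus baseline, and the peak of the baseline-corrected trace indexes effort. The window lengths and units below are simplifying assumptions, and real pipelines also blink-interpolate and low-pass filter the trace first:

```python
import numpy as np

def peak_pupil_dilation(trace, fs, baseline_s=1.0):
    """Peak pupil dilation relative to a pre-stimulus baseline.

    trace      -- pupil diameter samples; stimulus onset is assumed
                  to occur baseline_s seconds into the trace
    fs         -- sampling rate in Hz
    baseline_s -- length of the pre-stimulus baseline window (s)
    """
    n_base = int(baseline_s * fs)
    baseline = np.mean(trace[:n_base])          # pre-stimulus mean
    return np.max(trace[n_base:] - baseline)    # peak task-evoked change

# Synthetic trace at 60 Hz: a flat 4.0 mm baseline for 1 s, then a
# smooth 0.3 mm dilation over the following 2 s.
fs = 60
trace = np.concatenate([np.full(60, 4.0),
                        4.0 + 0.3 * np.hanning(120)])
peak = peak_pupil_dilation(trace, fs)
```

Larger baseline-corrected peaks in one condition than another (here, Bilateral versus Better Ear) are then read as greater listening effort.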
Affiliation(s)
- Emily A. Burg
- Waisman Center, University of Wisconsin-Madison, Madison, WI, United States
- Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, United States
- Tanvi D. Thakkar
- Department of Psychology, University of Wisconsin-La Crosse, La Crosse, WI, United States
- Ruth Y. Litovsky
- Waisman Center, University of Wisconsin-Madison, Madison, WI, United States
- Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, United States
- Division of Otolaryngology, Department of Surgery, University of Wisconsin-Madison, Madison, WI, United States
21
Lanzilotti C, Andéol G, Micheyl C, Scannella S. Cocktail party training induces increased speech intelligibility and decreased cortical activity in bilateral inferior frontal gyri. A functional near-infrared study. PLoS One 2022; 17:e0277801. [PMID: 36454948] [PMCID: PMC9714910] [DOI: 10.1371/journal.pone.0277801]
Abstract
The human brain networks responsible for selectively listening to a voice amid other talkers remain to be clarified. The present study aimed to investigate relationships between cortical activity and performance in a speech-in-speech task, before (Experiment I) and after training-induced improvements (Experiment II). In Experiment I, 74 participants performed a speech-in-speech task while their cortical activity was measured using a functional near-infrared spectroscopy (fNIRS) device. One target talker and one masker talker were simultaneously presented at three different target-to-masker ratios (TMRs): adverse, intermediate, and favorable. Behavioral results show that performance increased monotonically with TMR in some participants but, for others, failed to decrease, or even improved, in the adverse-TMR condition. On the neural level, an extensive brain network including frontal (left prefrontal cortex, right dorsolateral prefrontal cortex, and bilateral inferior frontal gyri) and temporal (bilateral auditory cortex) regions was more solicited by the intermediate condition than by the other two. Additionally, bilateral frontal gyri and left auditory cortex activities were found to be positively correlated with behavioral performance in the adverse-TMR condition. In Experiment II, 27 participants, whose performance was the poorest in the adverse-TMR condition of Experiment I, were trained to improve performance in that condition. Results show significant performance improvements along with decreased activity in bilateral inferior frontal gyri, the right dorsolateral prefrontal cortex, the left inferior parietal cortex, and the right auditory cortex in the adverse-TMR condition after training. Arguably, lower neural activity reflects higher efficiency in processing masker inhibition after speech-in-speech training. As speech-in-noise tasks also engage frontal and temporal regions, we suggest that, regardless of the type of masker (speech or noise), the complexity of the task will recruit a similar brain network. Furthermore, the initial significant cognitive recruitment is reduced following training, leading to an economy of cognitive resources.
Affiliation(s)
- Cosima Lanzilotti
- Département Neuroscience et Sciences Cognitives, Institut de Recherche Biomédicale des Armées, Brétigny sur Orge, France
- ISAE-SUPAERO, Université de Toulouse, Toulouse, France
- Thales SIX GTS France, Gennevilliers, France
- Guillaume Andéol
- Département Neuroscience et Sciences Cognitives, Institut de Recherche Biomédicale des Armées, Brétigny sur Orge, France
22
Objective and Subjective Hearing Difficulties Are Associated With Lower Inhibitory Control. Ear Hear 2022; 43:1904-1916. [PMID: 35544449] [DOI: 10.1097/aud.0000000000001227]
Abstract
OBJECTIVE Evidence suggests that hearing loss increases the risk of cognitive impairment. However, the relationship between hearing loss and cognition can vary considerably across studies, which may be partially explained by demographic and health factors that are not systematically accounted for in statistical models. DESIGN Middle-aged to older adult participants (N = 149) completed a web-based assessment that included speech-in-noise (SiN) and self-report measures of hearing, as well as auditory and visual cognitive interference (Stroop) tasks. Correlations between hearing and cognitive interference measures were performed with and without controlling for age, sex, education, depression, anxiety, and self-rated health. RESULTS The risk of having objective SiN difficulties differed between males and females. All demographic and health variables, except education, influenced the likelihood of reporting hearing difficulties. Small but significant relationships between objective and reported hearing difficulties and the measures of cognitive interference were observed when analyses were controlled for demographic and health factors. Furthermore, when stratifying analyses for males and females, different relationships between hearing and cognitive interference measures were found. Self-reported difficulty with spatial hearing and objective SiN performance were better predictors of inhibitory control in females, whereas self-reported difficulty with speech was a better predictor of inhibitory control in males. This suggests that inhibitory control is associated with different listening abilities in males and females. CONCLUSIONS The results highlight the importance of controlling for participant characteristics when assessing the relationship between hearing and cognitive interference; the same may hold for other cognitive functions, but this requires further investigation. Furthermore, this study is the first to show that the relationship between hearing and cognitive interference can be captured using web-based tasks that are simple to implement and administer at home without any assistance, paving the way for future online screening tests assessing the effects of hearing loss on cognition.
23
Tarawneh HY, Jayakody DM, Sohrabi HR, Martins RN, Mulders WH. Understanding the Relationship Between Age-Related Hearing Loss and Alzheimer’s Disease: A Narrative Review. J Alzheimers Dis Rep 2022; 6:539-556. [PMID: 36275417] [PMCID: PMC9535607] [DOI: 10.3233/adr-220035]
Abstract
Evidence suggests that hearing loss (HL), even at mild levels, increases the long-term risk of cognitive decline and incident dementia. Hearing loss is one of the modifiable risk factors for dementia, with approximately 4 million of the 50 million cases of dementia worldwide possibly attributable to untreated HL. This paper describes four possible mechanisms that have been suggested for the relationship between age-related hearing loss (ARHL) and Alzheimer’s disease (AD), which is the most common form of dementia. The first mechanism suggests mitochondrial dysfunction and altered signal pathways due to aging as a possible link between ARHL and AD. The second mechanism proposes that sensory degradation in hearing-impaired people could explain the relationship between ARHL and AD. The third mechanism, occupation of cognitive resources, indicates that the association between ARHL and AD results from the increased cognitive processing required to compensate for degraded sensory input. The fourth mechanism expands on the third: the interaction of function and structure involves both cognitive resource occupation (neural activity) and AD pathology as the link between ARHL and AD. Exploring the specific mechanisms that link ARHL and AD has the potential to lead to innovative ideas for the diagnosis, prevention, and/or treatment of AD. This paper also reviews the current evidence for the use of hearing treatments as a possible treatment/prevention for AD, and considers whether auditory assessments could provide an avenue for early detection of cognitive impairment associated with AD.
Affiliation(s)
- Hadeel Y. Tarawneh: School of Human Sciences, The University of Western Australia, Crawley, WA, Australia; Ear Science Institute Australia, Subiaco, WA, Australia
- Dona M.P. Jayakody: Ear Science Institute Australia, Subiaco, WA, Australia; Centre of Ear Science, Medical School, The University of Western Australia, Crawley, WA, Australia
- Hamid R. Sohrabi: Centre for Healthy Ageing, College of Science, Health, Engineering and Education, Murdoch University, WA, Australia; School of Medical and Health Sciences, Edith Cowan University, Joondalup, WA, Australia; Department of Biomedical Sciences, Faculty of Medicine and Health Sciences, Macquarie University, NSW, Australia
- Ralph N. Martins: School of Medical and Health Sciences, Edith Cowan University, Joondalup, WA, Australia; Department of Biomedical Sciences, Faculty of Medicine and Health Sciences, Macquarie University, NSW, Australia
24
Impact of Effortful Word Recognition on Supportive Neural Systems Measured by Alpha and Theta Power. Ear Hear 2022; 43:1549-1562. [DOI: 10.1097/aud.0000000000001211] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
25
Francis AL. Adding noise is a confounded nuisance. J Acoust Soc Am 2022; 152:1375. [PMID: 36182286 DOI: 10.1121/10.0013874] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/05/2022] [Accepted: 08/15/2022] [Indexed: 06/16/2023]
Abstract
A wide variety of research and clinical assessments involve presenting speech stimuli in the presence of some kind of noise. Here, I selectively review two theoretical perspectives and discuss ways in which these perspectives may help researchers understand the consequences for listeners of adding noise to a speech signal. I argue that adding noise changes more about the listening task than merely making the signal more difficult to perceive. To fully understand the effects of an added noise on speech perception, we must consider not just how much the noise affects task difficulty, but also how it affects all of the systems involved in understanding speech: increasing message uncertainty, modifying attentional demand, altering affective response, and changing motivation to perform the task.
Affiliation(s)
- Alexander L Francis: Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, Indiana 47907, USA
26
Ritz H, Wild CJ, Johnsrude IS. Parametric Cognitive Load Reveals Hidden Costs in the Neural Processing of Perfectly Intelligible Degraded Speech. J Neurosci 2022; 42:4619-4628. [PMID: 35508382 PMCID: PMC9186799 DOI: 10.1523/jneurosci.1777-21.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2021] [Revised: 03/08/2022] [Accepted: 03/10/2022] [Indexed: 11/21/2022] Open
Abstract
Speech is often degraded by environmental noise or hearing impairment. People can compensate for degradation, but this requires cognitive effort. Previous research has identified frontotemporal networks involved in effortful perception, but the materials in these studies were also less intelligible, so it is not clear whether activity reflected effort or intelligibility differences. We used functional magnetic resonance imaging to assess the degree to which spoken sentences were processed under distraction and whether this depended on speech quality, even when the intelligibility of degraded speech was matched to that of clear speech (close to 100%). On each trial, male and female human participants either attended to a sentence or to a concurrent multiple object tracking (MOT) task that imposed parametric cognitive load. Activity in bilateral anterior insula reflected task demands; during the MOT task, activity increased as cognitive load increased, and during speech listening, activity increased as speech became more degraded. In marked contrast, activity in bilateral anterior temporal cortex was speech selective and gated by attention when speech was degraded. In this region, performance of the MOT task with a trivial load blocked processing of degraded speech, whereas processing of clear speech was unaffected. As load increased, responses to clear speech in these areas declined, consistent with reduced capacity to process it. This result dissociates cognitive control from speech processing; substantially less cognitive control is required to process clear speech than to understand even very mildly degraded, 100% intelligible speech. Perceptual and control systems clearly interact dynamically during real-world speech comprehension.
SIGNIFICANCE STATEMENT: Speech is often perfectly intelligible even when degraded, for example, by background sound, phone transmission, or hearing loss. How does degradation alter cognitive demands? Here, we use fMRI to demonstrate a novel and critical role for cognitive control in the processing of mildly degraded but perfectly intelligible speech. We compare speech that is matched for intelligibility but differs in putative control demands, dissociating cognitive control from speech processing. We also impose a parametric cognitive load during perception, dissociating processes that depend on tasks from those that depend on available capacity. Our findings distinguish between frontal and temporal contributions to speech perception and reveal a hidden cost to processing mildly degraded speech, underscoring the importance of cognitive control for everyday speech comprehension.
Affiliation(s)
- Harrison Ritz: Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada; Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, Rhode Island 02912
- Conor J Wild: Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada
- Ingrid S Johnsrude: Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada; Departments of Psychology and Communication Sciences and Disorders, University of Western Ontario, London, Ontario N6A 3K7, Canada
27
Vaden KI, Teubner-Rhodes S, Ahlstrom JB, Dubno JR, Eckert MA. Evidence for cortical adjustments to perceptual decision criteria during word recognition in noise. Neuroimage 2022; 253:119042. [PMID: 35259524 PMCID: PMC9082296 DOI: 10.1016/j.neuroimage.2022.119042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2021] [Revised: 02/23/2022] [Accepted: 02/26/2022] [Indexed: 01/31/2023] Open
Abstract
Extensive increases in cingulo-opercular frontal activity are typically observed during speech recognition in noise tasks. This elevated activity has been linked to a word recognition benefit on the next trial, termed "adaptive control," but how this effect might be implemented has been unclear. The established link between perceptual decision making and cingulo-opercular function may provide an explanation for how those regions benefit subsequent word recognition. In this case, processes that support recognition, such as raising or lowering the decision criteria for more accurate or faster recognition, may be adjusted to optimize performance on the next trial. The current neuroimaging study tested the hypothesis that pre-stimulus cingulo-opercular activity reflects criterion adjustments that determine how much information to collect for word recognition on subsequent trials. Participants included middle-age and older adults (N = 30; age = 58.3 ± 8.8 years, mean ± SD) with normal hearing or mild sensorineural hearing loss. During a sparse fMRI experiment, words were presented in multitalker babble at +3 dB or +10 dB signal-to-noise ratio (SNR), which participants were instructed to repeat aloud. Word recognition was significantly poorer with increasing participant age and in the lower compared to the higher SNR condition. A perceptual decision-making model was used to characterize processing differences based on task response latency distributions. The model showed that significantly less sensory evidence was collected (i.e., lower criteria) for lower compared to higher SNR trials. Replicating earlier observations, pre-stimulus cingulo-opercular activity was significantly predictive of correct recognition on a subsequent trial. Individual differences showed that participants with higher criteria also benefitted the most from pre-stimulus activity. Moreover, trial-level criteria changes were significantly linked to higher versus lower pre-stimulus activity. These results suggest cingulo-opercular cortex contributes to criteria adjustments that optimize speech recognition task performance.
Affiliation(s)
- Kenneth I. Vaden (corresponding author): Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
- Susan Teubner-Rhodes: Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States; Department of Psychological Sciences, 226 Thach Hall, Auburn University, AL 36849-9027
- Jayne B. Ahlstrom: Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
- Judy R. Dubno: Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
- Mark A. Eckert: Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
28
Irsik VC, Johnsrude IS, Herrmann B. Age-related deficits in dip-listening evident for isolated sentences but not for spoken stories. Sci Rep 2022; 12:5898. [PMID: 35393472 PMCID: PMC8991280 DOI: 10.1038/s41598-022-09805-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2021] [Accepted: 03/23/2022] [Indexed: 12/03/2022] Open
Abstract
Fluctuating background sounds facilitate speech intelligibility by providing speech ‘glimpses’ (masking release). Older adults benefit less from glimpses, but masking release is typically investigated using isolated sentences. Recent work indicates that using engaging, continuous speech materials (e.g., spoken stories) may qualitatively alter speech-in-noise listening. Moreover, neural sensitivity to different amplitude envelope profiles (ramped, damped) changes with age, but whether this affects speech listening is unknown. In three online experiments, we investigate how masking release in younger and older adults differs for masked sentences and stories, and how speech intelligibility varies with masker amplitude profile. Intelligibility was generally greater for damped than ramped maskers. Masking release was reduced in older relative to younger adults for disconnected sentences, and stories with a randomized sentence order. Critically, when listening to stories with an engaging and coherent narrative, older adults demonstrated equal or greater masking release compared to younger adults. Older adults thus appear to benefit from ‘glimpses’ as much as, or more than, younger adults when the speech they are listening to follows a coherent topical thread. Our results highlight the importance of cognitive and motivational factors for speech understanding, and suggest that previous work may have underestimated speech-listening abilities in older adults.
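Masker levels in studies like this one are specified as signal-to-noise ratios in dB. As background, here is a minimal sketch of scaling a masker to mix with speech at a target SNR; the function name and the simple RMS-based calibration are illustrative assumptions, not the authors' stimulus pipeline:

```python
import numpy as np

def mix_at_snr(speech, masker, snr_db):
    """Scale `masker` so the speech-to-masker power ratio equals `snr_db`
    (in dB), then return the mixture. Illustrative sketch using RMS power;
    real experiments typically also calibrate absolute presentation level.
    """
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    target_ratio = 10 ** (snr_db / 20)  # dB to amplitude ratio
    masker = masker * (rms(speech) / (target_ratio * rms(masker)))
    return speech + masker

# Stand-ins for 1 s of audio at 16 kHz; a -3 dB SNR mixture as in the study.
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)
babble = rng.standard_normal(16000)
mix = mix_at_snr(speech, babble, snr_db=-3.0)
```

A negative SNR (e.g. -3 dB) means the masker is more intense than the speech, which is where intelligibility typically drops.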
Affiliation(s)
- Vanessa C Irsik: Department of Psychology & The Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada
- Ingrid S Johnsrude: Department of Psychology & The Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada; School of Communication and Speech Disorders, The University of Western Ontario, London, ON, N6A 5B7, Canada
- Björn Herrmann: Department of Psychology & The Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada; Rotman Research Institute, Baycrest, Toronto, ON, M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, ON, M5S 1A1, Canada
29
Irsik VC, Johnsrude IS, Herrmann B. Neural Activity during Story Listening Is Synchronized across Individuals Despite Acoustic Masking. J Cogn Neurosci 2022; 34:933-950. [PMID: 35258555 DOI: 10.1162/jocn_a_01842] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Older people with hearing problems often experience difficulties understanding speech in the presence of background sound. As a result, they may disengage in social situations, which has been associated with negative psychosocial health outcomes. Measuring listening (dis)engagement during challenging listening situations has received little attention thus far. We recruit young, normal-hearing human adults (both sexes) and investigate how speech intelligibility and engagement during naturalistic story listening is affected by the level of acoustic masking (12-talker babble) at different signal-to-noise ratios (SNRs). In Experiment 1, we observed that word-report scores were above 80% for all but the lowest SNR (-3 dB SNR) we tested, at which performance dropped to 54%. In Experiment 2, we calculated intersubject correlation (ISC) using EEG data to identify dynamic spatial patterns of shared neural activity evoked by the stories. ISC has been used as a neural measure of participants' engagement with naturalistic materials. Our results show that ISC was stable across all but the lowest SNRs, despite reduced speech intelligibility. Comparing ISC and intelligibility demonstrated that word-report performance declined more strongly with decreasing SNR compared to ISC. Our measure of neural engagement suggests that individuals remain engaged in story listening despite missing words because of background noise. Our work provides a potentially fruitful approach to investigate listener engagement with naturalistic, spoken stories that may be used to investigate (dis)engagement in older adults with hearing impairment.
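The intersubject correlation (ISC) measure used in Experiment 2 is, at its core, a correlation of neural time courses across listeners hearing the same story. A minimal sketch of that idea follows; note this pairwise-Pearson version on a single time course per subject is a deliberate simplification, since published EEG ISC methods typically operate on spatial components across channels:

```python
import numpy as np

def intersubject_correlation(data):
    """Mean pairwise Pearson correlation across subjects.

    data: array of shape (n_subjects, n_timepoints), e.g. one neural
    time course per subject recorded while listening to the same story.
    Higher values indicate more strongly shared (stimulus-driven) activity.
    """
    # z-score each subject's time course, then correlate all pairs at once
    z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
    r = (z @ z.T) / data.shape[1]              # matrix of pairwise Pearson r
    pairs = r[np.triu_indices(data.shape[0], k=1)]  # upper triangle, no diagonal
    return pairs.mean()

# Five "subjects" sharing one signal plus small individual noise: ISC near 1.
rng = np.random.default_rng(0)
shared = rng.standard_normal(1000)
subjects = shared + 0.01 * rng.standard_normal((5, 1000))
isc = intersubject_correlation(subjects)
```

With fully independent noise instead of a shared signal, the same function returns values near zero, which is the contrast the measure exploits.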
Affiliation(s)
- Björn Herrmann: The University of Western Ontario; Rotman Research Institute, Toronto, ON, Canada; University of Toronto
30
Hong L, Zeng Q, Li K, Luo X, Xu X, Liu X, Li Z, Fu Y, Wang Y, Zhang T, Chen Y, Liu Z, Huang P, Zhang M. Intrinsic Brain Activity of Inferior Temporal Region Increased in Prodromal Alzheimer's Disease With Hearing Loss. Front Aging Neurosci 2022; 13:772136. [PMID: 35153717 PMCID: PMC8831745 DOI: 10.3389/fnagi.2021.772136] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Accepted: 12/31/2021] [Indexed: 01/13/2023] Open
Abstract
Background and Objective: Hearing loss (HL) is one of the modifiable risk factors for Alzheimer's disease (AD). However, the mechanism underlying HL in AD remains elusive. One possible mechanism is the cognitive load hypothesis, which postulates that over-processing of degraded auditory signals in the auditory cortex leads to deficits in other cognitive functions. Given that mild cognitive impairment (MCI) is a prodromal stage of AD, untangling the association between HL and MCI might provide insight into the potential mechanism behind HL. Methods: We included 85 cognitively normal (CN) subjects with no hearing loss (NHL), 24 CN subjects with HL, 103 MCI patients with NHL, and 23 MCI patients with HL from the ADNI database. All subjects underwent resting-state functional MRI and neuropsychological assessments. Fractional amplitude of low-frequency fluctuation (fALFF) was used to index spontaneous brain activity. A mixed-effects analysis was applied to explore the interactive effects between HL and cognitive status (GRF corrected, voxel p < 0.005, cluster p < 0.05, two-tailed). FDG data were then included to further reflect regional neuronal abnormalities. Finally, Pearson correlation analysis was performed between imaging metrics and cognitive scores to explore the clinical significance (Bonferroni corrected, p < 0.05). Results: The interactive effects were primarily located in the left superior temporal gyrus (STG) and bilateral inferior temporal gyrus (ITG). Post-hoc analysis showed that CN subjects with HL had lower fALFF in bilateral ITG than CN subjects with NHL, and had higher fALFF in the left STG but lower fALFF in bilateral ITG than MCI patients with HL. In addition, CN subjects with HL had lower fALFF in the right ITG than MCI patients with NHL. Correlation analysis revealed that fALFF was associated with MMSE and ADNI-VS scores, while SUVR was associated with MMSE, MoCA, ADNI-EF, and ADNI-Lan scores.
Conclusion: HL had different effects at the CN and MCI stages. CN subjects showed increased spontaneous brain activity in the auditory cortex and decreased activity in the ITG. This pattern changed with disease stage, manifesting as decreased activity in the auditory cortex along with increased activity in the ITG in MCI. This suggests that the cognitive load hypothesis may be the mechanism underlying HL.
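fALFF, the activity measure used in this study, is conventionally defined as the amplitude of the BOLD signal in the 0.01-0.08 Hz band divided by the amplitude across the whole frequency range. A minimal sketch under that standard definition follows; this is illustrative and not the study's exact preprocessing pipeline:

```python
import numpy as np

def falff(signal, tr):
    """Fractional ALFF: amplitude in the 0.01-0.08 Hz band divided by
    the amplitude summed over all non-DC frequencies.

    signal: 1-D BOLD time series from one voxel; tr: repetition time (s).
    """
    signal = signal - signal.mean()
    amp = np.abs(np.fft.rfft(signal))            # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=tr)   # frequency of each bin (Hz)
    band = (freqs >= 0.01) & (freqs <= 0.08)
    return amp[band].sum() / amp[1:].sum()       # skip the DC bin

# A slow 0.05 Hz oscillation concentrates its amplitude inside the band,
# so its fALFF is high; white noise spreads amplitude and scores lower.
t = np.arange(200) * 2.0                         # 200 volumes, TR = 2 s
slow = np.sin(2 * np.pi * 0.05 * t)
score = falff(slow, tr=2.0)
```

Because it is a ratio, fALFF is less sensitive to overall signal scaling than raw ALFF, which is one reason it is widely used for between-group comparisons such as the CN versus MCI contrast here.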
Affiliation(s)
- Luwei Hong: Department of Radiology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Qingze Zeng: Department of Radiology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Kaicheng Li: Department of Radiology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Xiao Luo: Department of Radiology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Xiaopei Xu: Department of Radiology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Xiaocao Liu: Department of Radiology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Zheyu Li: Department of Neurology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Yanv Fu: Department of Neurology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Yanbo Wang: Department of Neurology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Tianyi Zhang: Department of Neurology, Tongde Hospital of Zhejiang Province, Hangzhou, China
- Yanxing Chen: Department of Neurology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Zhirong Liu: Department of Neurology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Peiyu Huang: Department of Radiology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Minming Zhang: Department of Radiology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
31
Sun PW, Hines A. Listening Effort Informed Quality of Experience Evaluation. Front Psychol 2022; 12:767840. [PMID: 35069342 PMCID: PMC8766726 DOI: 10.3389/fpsyg.2021.767840] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Accepted: 10/31/2021] [Indexed: 11/15/2022] Open
Abstract
Perceived quality of experience for speech listening is influenced by cognitive processing and can affect a listener's comprehension, engagement, and responsiveness. Quality of Experience (QoE) is a paradigm used within the media technology community to assess media quality by linking quantifiable media parameters to perceived quality. The established QoE framework provides a general definition of QoE, categories of possible quality-influencing factors, and an identified QoE formation pathway, which assist researchers in designing experiments and evaluating perceived quality for any application. The QoE formation pathways in the current framework do not attempt to capture the effects of cognitive effort, and standard experimental assessments of QoE minimize the influence of cognitive processes. The impact of cognitive processes, and how they can be captured within the QoE framework, has not been systematically studied by the QoE research community. This article reviews research from audiology and cognitive science on how cognitive processes influence the quality of the listening experience. Theories of the cognitive listening mechanism are compared with the QoE formation mechanism in terms of quality-contributing factors, experience formation pathways, and measures of experience. The review prompts a proposal to integrate mechanisms from audiology and cognitive science into the existing QoE framework in order to properly account for cognitive load in speech listening. The article concludes with a discussion of how an extended framework could facilitate measurement of QoE in broader and more realistic application scenarios where cognitive effort is a material consideration.
Affiliation(s)
- Pheobe Wenyi Sun: QxLab, School of Computer Science, University College Dublin, Dublin, Ireland
- Andrew Hines: QxLab, School of Computer Science, University College Dublin, Dublin, Ireland
32
Perea Pérez F, Hartley DE, Kitterick PT, Wiggins IM. Perceived Listening Difficulties of Adult Cochlear-Implant Users Under Measures Introduced to Combat the Spread of COVID-19. Trends Hear 2022; 26:23312165221087011. [PMID: 35440245 PMCID: PMC9024163 DOI: 10.1177/23312165221087011] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2021] [Revised: 02/17/2022] [Accepted: 02/24/2022] [Indexed: 11/23/2022] Open
Abstract
Following the outbreak of the COVID-19 pandemic, public-health measures introduced to stem the spread of the disease caused profound changes to patterns of daily-life communication. This paper presents the results of an online survey conducted to document adult cochlear-implant (CI) users' perceived listening difficulties under four communication scenarios commonly experienced during the pandemic, specifically when talking: with someone wearing a facemask, under social/physical distancing guidelines, via telephone, and via video call. Results from ninety-four respondents indicated that people considered their in-person listening experiences in some common everyday scenarios to have been significantly worsened by the introduction of mask-wearing and physical distancing. Participants reported experiencing an array of listening difficulties, including reduced speech intelligibility and increased listening effort, which resulted in many people actively avoiding certain communication scenarios at least some of the time. Participants also found listening effortful during remote communication, which became rapidly more prevalent following the outbreak of the pandemic. Potential solutions identified by participants to ease the burden of everyday listening with a CI may have applicability beyond the context of the COVID-19 pandemic. Specifically, the results emphasized the importance of visual cues, including lipreading and live speech-to-text transcriptions, to improve in-person and remote communication for people with a CI.
Affiliation(s)
- Francisca Perea Pérez: National Institute for Health Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, UK; Hearing Sciences, Division of Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK
- Douglas E.H. Hartley: National Institute for Health Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, UK; Hearing Sciences, Division of Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK; Nottingham University Hospitals NHS Trust, Nottingham, UK
- Pádraig T. Kitterick: Hearing Sciences, Division of Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK; National Acoustic Laboratories, Sydney, Australia
- Ian M. Wiggins: National Institute for Health Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, UK; Hearing Sciences, Division of Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK
33
Eckert MA, Teubner-Rhodes S, Vaden KI, Ahlstrom JB, McClaskey CM, Dubno JR. Unique patterns of hearing loss and cognition in older adults' neural responses to cues for speech recognition difficulty. Brain Struct Funct 2022; 227:203-218. [PMID: 34632538 PMCID: PMC9044122 DOI: 10.1007/s00429-021-02398-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2021] [Accepted: 09/26/2021] [Indexed: 01/31/2023]
Abstract
Older adults with hearing loss experience significant difficulties understanding speech in noise, perhaps due in part to limited benefit from supporting executive functions that enable the use of environmental cues signaling changes in listening conditions. Here we examined the degree to which 41 older adults (60.56-86.25 years) exhibited cortical responses to informative listening difficulty cues that communicated the listening difficulty for each trial compared to neutral cues that were uninformative of listening difficulty. Word recognition was significantly higher for informative compared to uninformative cues in a +10 dB signal-to-noise ratio (SNR) condition, and response latencies were significantly shorter for informative cues in the +10 dB SNR and the more-challenging +2 dB SNR conditions. Informative cues were associated with elevated blood oxygenation level-dependent contrast in visual and parietal cortex. A cue-SNR interaction effect was observed in the cingulo-opercular (CO) network, such that activity only differed between SNR conditions when an informative cue was presented. That is, participants used the informative cues to prepare for changes in listening difficulty from one trial to the next. This cue-SNR interaction effect was driven by older adults with more low-frequency hearing loss and was not observed for those with more high-frequency hearing loss, poorer set-shifting task performance, and lower frontal operculum gray matter volume. These results suggest that proactive strategies for engaging CO adaptive control may be important for older adults with high-frequency hearing loss to optimize speech recognition in changing and challenging listening conditions.
Affiliation(s)
- Mark A. Eckert: Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
- Kenneth I. Vaden: Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
- Jayne B. Ahlstrom: Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
- Carolyn M. McClaskey: Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
- Judy R. Dubno: Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
34
Amichetti NM, Neukam J, Kinney AJ, Capach N, March SU, Svirsky MA, Wingfield A. Adults with cochlear implants can use prosody to determine the clausal structure of spoken sentences. J Acoust Soc Am 2021; 150:4315. [PMID: 34972310 PMCID: PMC8674009 DOI: 10.1121/10.0008899] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/24/2021] [Revised: 11/04/2021] [Accepted: 11/08/2021] [Indexed: 06/14/2023]
Abstract
Speech prosody, including pitch contour, word stress, pauses, and vowel lengthening, can aid detection of the clausal structure of a multi-clause sentence, which in turn can help listeners determine its meaning. For cochlear implant (CI) users, however, the reduced acoustic richness of the signal raises the question of whether they have difficulty using sentence prosody to detect syntactic clause boundaries within sentences, or whether this ability is rescued by the redundancy of the prosodic features that normally co-occur at clause boundaries. Twenty-two CI users, ranging in age from 19 to 77 years, recalled three types of sentences: sentences in which the prosodic pattern was appropriate to the location of a clause boundary within the sentence (congruent prosody), sentences with reduced prosodic information, and sentences in which the location of the clause boundary and the prosodic marking of a clause boundary were placed in conflict. The presence of congruent prosody was associated with superior sentence recall and reduced processing effort, as indexed by pupil dilation. Individual differences on a standard test of word recognition (consonant-nucleus-consonant score) were related to both recall accuracy and processing effort. The outcomes are discussed in terms of the redundancy of the prosodic features that normally accompany a clause boundary, and in terms of processing effort.
Affiliation(s)
- Nicole M Amichetti
- Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
- Jonathan Neukam
- Department of Otolaryngology, New York University (NYU) Langone Medical Center, New York, New York 10016, USA
- Alexander J Kinney
- Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
- Nicole Capach
- Department of Otolaryngology, New York University (NYU) Langone Medical Center, New York, New York 10016, USA
- Samantha U March
- Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
- Mario A Svirsky
- Department of Otolaryngology, New York University (NYU) Langone Medical Center, New York, New York 10016, USA
- Arthur Wingfield
- Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
35
Murai S, Yang AN, Hiryu S, Kobayasi KI. Music in Noise: Neural Correlates Underlying Noise Tolerance in Music-Induced Emotion. Cereb Cortex Commun 2021; 2:tgab061. [PMID: 34746792 PMCID: PMC8564766 DOI: 10.1093/texcom/tgab061] [Received: 11/11/2020] [Revised: 09/25/2021] [Accepted: 09/26/2021] [Indexed: 11/14/2022]
Abstract
Music can be experienced in various acoustic qualities. In this study, we investigated how the acoustic quality of music can influence strong emotional experiences, such as musical chills, and the associated neural activity. Acoustic quality was controlled by adding noise to musical pieces. Participants listened to clear and noisy musical pieces and pressed a button when they experienced chills. We estimated neural activity in response to chills under both clear and noisy conditions using functional magnetic resonance imaging (fMRI). The behavioral data revealed that, compared with the clear condition, the noisy condition dramatically decreased the number and duration of chills. The fMRI results showed that under both noisy and clear conditions the supplementary motor area, insula, and superior temporal gyrus were similarly activated when participants experienced chills. The involvement of these brain regions may be crucial for music-induced emotional processes under the noisy as well as the clear condition. In addition, we found a decrease in the activation of the right superior temporal sulcus when experiencing chills under the noisy condition, which suggests that music-induced emotional processing is sensitive to acoustic quality.
Affiliation(s)
- Shota Murai
- Graduate School of Life and Medical Sciences, Doshisha University, 1-3 Miyakodani, Tatara, Kyotanabe, Kyoto 610-0321, Japan
- Ae Na Yang
- Graduate School of Life and Medical Sciences, Doshisha University, 1-3 Miyakodani, Tatara, Kyotanabe, Kyoto 610-0321, Japan
- Shizuko Hiryu
- Graduate School of Life and Medical Sciences, Doshisha University, 1-3 Miyakodani, Tatara, Kyotanabe, Kyoto 610-0321, Japan
- Kohta I Kobayasi
- Graduate School of Life and Medical Sciences, Doshisha University, 1-3 Miyakodani, Tatara, Kyotanabe, Kyoto 610-0321, Japan
36
Brisson V, Tremblay P. Improving speech perception in noise in young and older adults using transcranial magnetic stimulation. Brain Lang 2021; 222:105009. [PMID: 34425411 DOI: 10.1016/j.bandl.2021.105009] [Received: 10/28/2020] [Revised: 08/06/2021] [Accepted: 08/12/2021] [Indexed: 06/13/2023]
Abstract
Normal aging is associated with speech perception in noise (SPiN) difficulties. The objective of this study was to determine if SPiN performance can be enhanced by intermittent theta-burst stimulation (iTBS) in young and older adults. METHOD We developed a sub-lexical SPiN test to evaluate the contribution of age, hearing, and cognition to SPiN performance in young and older adults. iTBS was applied to the left posterior superior temporal sulcus (pSTS) and the left ventral premotor cortex (PMv) to examine its impact on SPiN performance. RESULTS Aging was associated with reduced SPiN accuracy. TMS-induced performance gain was greater after stimulation of the PMv compared to the pSTS. Participants with lower scores in the baseline condition improved the most. DISCUSSION SPiN difficulties can be reduced by enhancing activity within the left speech-processing network in adults. This study paves the way for the development of TMS-based interventions to reduce SPiN difficulties in adults.
Affiliation(s)
- Valérie Brisson
- Département de réadaptation, Université Laval, Québec, Canada; Centre de recherche CERVO, Québec, Canada
- Pascale Tremblay
- Département de réadaptation, Université Laval, Québec, Canada; Centre de recherche CERVO, Québec, Canada
37
Defenderfer J, Forbes S, Wijeakumar S, Hedrick M, Plyler P, Buss AT. Frontotemporal activation differs between perception of simulated cochlear implant speech and speech in background noise: An image-based fNIRS study. Neuroimage 2021; 240:118385. [PMID: 34256138 PMCID: PMC8503862 DOI: 10.1016/j.neuroimage.2021.118385] [Received: 02/26/2021] [Revised: 06/10/2021] [Accepted: 07/09/2021] [Indexed: 10/27/2022]
Abstract
In this study we used functional near-infrared spectroscopy (fNIRS) to investigate neural responses in normal-hearing adults as a function of speech recognition accuracy, intelligibility of the speech stimulus, and the manner in which speech is distorted. Participants listened to sentences and reported aloud what they heard. Speech quality was distorted artificially by vocoding (simulated cochlear implant speech) or naturally by adding background noise. Each type of distortion included high- and low-intelligibility conditions. Sentences in quiet were used as a baseline comparison. fNIRS data were analyzed using a newly developed image reconstruction approach. First, elevated cortical responses in the middle temporal gyrus (MTG) and middle frontal gyrus (MFG) were associated with speech recognition during the low-intelligibility conditions. Second, activation in the MTG was associated with recognition of vocoded speech with low intelligibility, whereas MFG activity was largely driven by recognition of speech in background noise, suggesting that the cortical response varies as a function of distortion type. Lastly, an accuracy effect in the MFG demonstrated significantly higher activation during correct perception relative to incorrect perception of speech. These results suggest that normal-hearing adults (i.e., untrained listeners of vocoded stimuli) do not exploit the same attentional mechanisms of the frontal cortex used to resolve naturally degraded speech and may instead rely on segmental and phonetic analyses in the temporal lobe to discriminate vocoded speech.
Affiliation(s)
- Jessica Defenderfer
- Speech and Hearing Science, University of Tennessee Health Science Center, Knoxville, TN, United States
- Samuel Forbes
- Psychology, University of East Anglia, Norwich, England
- Mark Hedrick
- Speech and Hearing Science, University of Tennessee Health Science Center, Knoxville, TN, United States
- Patrick Plyler
- Speech and Hearing Science, University of Tennessee Health Science Center, Knoxville, TN, United States
- Aaron T Buss
- Psychology, University of Tennessee, Knoxville, TN, United States
38
De Groote E, Eqlimi E, Bockstael A, Botteldooren D, Santens P, De Letter M. Parkinson's disease affects the neural alpha oscillations associated with speech-in-noise processing. Eur J Neurosci 2021; 54:7355-7376. [PMID: 34617350 DOI: 10.1111/ejn.15477] [Received: 05/27/2021] [Revised: 09/03/2021] [Accepted: 09/21/2021] [Indexed: 11/29/2022]
Abstract
Parkinson's disease (PD) has increasingly been associated with auditory dysfunction, including alterations regarding the control of auditory information processing. Although these alterations may interfere with the processing of speech in degraded listening conditions, behavioural studies have generally found preserved speech-in-noise recognition in PD. However, behavioural speech audiometry does not capture the neurophysiological mechanisms supporting speech-in-noise processing. Therefore, the aim of this study was to investigate the neural oscillatory mechanisms associated with speech-in-noise processing in PD. Twelve persons with PD and 12 age- and gender-matched healthy controls (HCs) were included in this study. Persons with PD were studied in the medication-off condition. All subjects underwent an audiometric screening and performed a sentence-in-noise recognition task under simultaneous electroencephalography (EEG) recording. Behavioural speech recognition scores and self-reported ratings of effort, performance, and motivation were collected. Time-frequency analysis of EEG data revealed no significant difference between persons with PD and HCs regarding delta-theta (2-8 Hz) inter-trial phase coherence to noise and sentence onset. In contrast, significantly increased alpha (8-12 Hz) power was found in persons with PD compared with HCs during the sentence-in-noise recognition task. Behaviourally, persons with PD demonstrated significantly decreased speech recognition scores, whereas no significant differences were found regarding effort, performance, and motivation ratings. These results suggest that persons with PD allocate more cognitive resources to support speech-in-noise processing. The interpretation of this finding is discussed in the context of a top-down mediated compensation mechanism for inefficient filtering and degradation of auditory input in PD.
Affiliation(s)
- Evelien De Groote
- Department of Rehabilitation Sciences, BrainComm Research Group, Ghent University, Ghent, Belgium
- Ehsan Eqlimi
- Department of Information Technology, WAVES Research Group, Ghent University, Ghent, Belgium
- Annelies Bockstael
- Department of Information Technology, WAVES Research Group, Ghent University, Ghent, Belgium
- Dick Botteldooren
- Department of Information Technology, WAVES Research Group, Ghent University, Ghent, Belgium
- Patrick Santens
- Department of Neurology, Ghent University Hospital, Ghent, Belgium
- Miet De Letter
- Department of Rehabilitation Sciences, BrainComm Research Group, Ghent University, Ghent, Belgium
39
Reduced Semantic Context and Signal-to-Noise Ratio Increase Listening Effort As Measured Using Functional Near-Infrared Spectroscopy. Ear Hear 2021; 43:836-848. [PMID: 34623112 DOI: 10.1097/aud.0000000000001137] [Indexed: 11/27/2022]
Abstract
OBJECTIVES Understanding speech in noise can be highly effortful. Decreasing the signal-to-noise ratio (SNR) of speech increases listening effort, but it is relatively unclear if decreasing the level of semantic context does as well. The current study used functional near-infrared spectroscopy to evaluate two primary hypotheses: (1) listening effort (operationalized as oxygenation of the left lateral prefrontal cortex [PFC]) increases as the SNR decreases and (2) listening effort increases as context decreases. DESIGN Twenty-eight younger adults with normal hearing completed the Revised Speech Perception in Noise Test, in which they listened to sentences and reported the final word. These sentences either had an easy SNR (+4 dB) or a hard SNR (-2 dB), and were either low in semantic context (e.g., "Tom could have thought about the sport") or high in context (e.g., "She had to vacuum the rug"). PFC oxygenation was measured throughout using functional near-infrared spectroscopy. RESULTS Accuracy on the Revised Speech Perception in Noise Test was worse when the SNR was hard than when it was easy, and worse for sentences low in semantic context than high in context. Similarly, oxygenation across the entire PFC (including the left lateral PFC) was greater when the SNR was hard, and left lateral PFC oxygenation was greater when context was low. CONCLUSIONS These results suggest that activation of the left lateral PFC (interpreted here as reflecting listening effort) increases to compensate for acoustic and linguistic challenges. This may reflect the increased engagement of domain-general and domain-specific processes subserved by the dorsolateral prefrontal cortex (e.g., cognitive control) and inferior frontal gyrus (e.g., predicting the sensory consequences of articulatory gestures), respectively.
40
Bhandari P, Demberg V, Kray J. Semantic Predictability Facilitates Comprehension of Degraded Speech in a Graded Manner. Front Psychol 2021; 12:714485. [PMID: 34566795 PMCID: PMC8459870 DOI: 10.3389/fpsyg.2021.714485] [Received: 05/25/2021] [Accepted: 08/06/2021] [Indexed: 01/02/2023]
Abstract
Previous studies have shown that at moderate levels of spectral degradation, semantic predictability facilitates language comprehension. It is argued that when speech is degraded, listeners have narrowed expectations about the sentence endings; i.e., semantic prediction may be limited to only the most highly predictable sentence completions. The main objectives of this study were to (i) examine whether listeners form narrowed expectations or whether they form predictions across a wide range of probable sentence endings, (ii) assess whether the facilitatory effect of semantic predictability is modulated by perceptual adaptation to degraded speech, and (iii) use and establish a sensitive metric for the measurement of language comprehension. For this, we created 360 German Subject-Verb-Object sentences that varied in semantic predictability of a sentence-final target word in a graded manner (high, medium, and low) and in level of spectral degradation (1-, 4-, 6-, and 8-channel noise-vocoding). These sentences were presented auditorily to two groups: One group (n = 48) performed a listening task in an unpredictable channel context in which the degraded speech levels were randomized, while the other group (n = 50) performed the task in a predictable channel context in which the degraded speech levels were blocked. The results showed that at 4-channel noise-vocoding, response accuracy was higher in high-predictability sentences than in medium-predictability sentences, which in turn was higher than in low-predictability sentences. This suggests that, in contrast to the narrowed expectations view, comprehension of moderately degraded speech, ranging from low- through medium- to high-predictability sentences, is facilitated in a graded manner; listeners probabilistically preactivate upcoming words from a wide range of the semantic space rather than only the most highly probable sentence endings. Additionally, in both channel contexts we did not observe learning effects; i.e., response accuracy did not increase over the course of the experiment. Response accuracy was higher in the predictable than in the unpredictable channel context. We speculate from these observations that when there is no trial-by-trial variation in the level of speech degradation, listeners adapt to speech quality over a long timescale; however, when there is trial-by-trial variation in a high-level semantic feature (e.g., sentence predictability), listeners do not adapt to a low-level perceptual property (e.g., speech quality) over a short timescale.
Affiliation(s)
- Pratik Bhandari
- Department of Psychology, Saarland University, Saarbrücken, Germany
- Department of Language Science and Technology, Saarland University, Saarbrücken, Germany
- Vera Demberg
- Department of Language Science and Technology, Saarland University, Saarbrücken, Germany
- Department of Computer Science, Saarland University, Saarbrücken, Germany
- Jutta Kray
- Department of Psychology, Saarland University, Saarbrücken, Germany
41
Abstract
Listening effort is a valuable and important notion to measure because it is among the primary complaints of people with hearing loss. It is tempting and intuitive to accept speech intelligibility scores as a proxy for listening effort, but this link is likely oversimplified and lacks actionable explanatory power. This study was conducted to explain the mechanisms of listening effort that are not captured by intelligibility scores, using sentence-repetition tasks in which specific kinds of mistakes were prospectively planned or analyzed retrospectively. Effort was measured as changes in pupil size in 20 listeners with normal hearing and 19 listeners with cochlear implants. Experiment 1 demonstrates that mental correction of misperceived words increases effort even when responses are correct. Experiment 2 shows that for incorrect responses, listening effort is not a function of the proportion of words correct but is rather driven by the types of errors, the position of errors within a sentence, and the need to resolve ambiguity, reflecting how easily the listener can make sense of a perception. A simple taxonomy of error types is provided that is both intuitive and consistent with data from these two experiments. The diversity of errors in these experiments implies that speech perception tasks can be designed prospectively to elicit the mistakes that are more closely linked with effort. Although mental corrective action and number of mistakes can scale together in many experiments, it is possible to dissociate them to advance toward a more explanatory (rather than correlational) account of listening effort.
Affiliation(s)
- Matthew B. Winn
- University of Minnesota, Twin Cities, 164 Pillsbury Dr SE, Minneapolis, MN 55455, United States
42
Jafari Z, Kolb BE, Mohajerani MH. Age-related hearing loss and cognitive decline: MRI and cellular evidence. Ann N Y Acad Sci 2021; 1500:17-33. [PMID: 34114212 DOI: 10.1111/nyas.14617] [Received: 11/13/2020] [Revised: 04/30/2021] [Accepted: 05/07/2021] [Indexed: 12/16/2022]
Abstract
Extensive evidence supports the association between age-related hearing loss (ARHL) and cognitive decline. It is, however, unknown whether a causal relationship exists between the two, or whether they both result from shared mechanisms. This paper examines this relationship through a comprehensive review of MRI findings as well as evidence of cellular alterations. Our review of structural MRI studies demonstrates that ARHL is independently linked to accelerated atrophy of total and regional brain volumes and reduced white matter integrity. Resting-state fMRI studies of ARHL show changes in spontaneous neural activity and brain functional connectivity, and task-based fMRI studies show alterations, independent of age, in brain areas supporting auditory, language, cognitive, and affective processing. Although MRI findings support a causal relationship between ARHL and cognitive decline, the contribution of potential shared mechanisms should also be considered. In this regard, the review of cellular evidence indicates their role as possible common mechanisms underlying both age-related changes in hearing and cognition. Considering existing evidence, no single hypothesis can explain the link between ARHL and cognitive decline, and contributions from both causal (i.e., the sensory hypothesis) and shared (i.e., the common cause hypothesis) mechanisms are expected.
Affiliation(s)
- Zahra Jafari
- Department of Neuroscience, Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada
- Bryan E Kolb
- Department of Neuroscience, Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada
- Majid H Mohajerani
- Department of Neuroscience, Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada
43
Murai SA, Riquimaroux H. Neural correlates of subjective comprehension of noise-vocoded speech. Hear Res 2021; 405:108249. [PMID: 33894680 DOI: 10.1016/j.heares.2021.108249] [Received: 08/26/2020] [Revised: 03/28/2021] [Accepted: 04/06/2021] [Indexed: 10/21/2022]
Abstract
Under an acoustically degraded condition, the degree of speech comprehension fluctuates within individuals. Understanding the relationship between such fluctuations in comprehension and neural responses might reveal perceptual processing for distorted speech. In this study we investigated the cerebral activity associated with the degree of subjective comprehension of noise-vocoded speech sounds (NVSS) using functional magnetic resonance imaging. Our results indicate that higher comprehension of NVSS sentences was associated with greater activation in the right superior temporal cortex, and that activity in the left inferior frontal gyrus (Broca's area) was increased when a listener recognized words in a sentence they did not fully comprehend. In addition, results of laterality analysis demonstrated that recognition of words in an NVSS sentence led to less lateralized responses in the temporal cortex, though a left-lateralization was observed when no words were recognized. The data suggest that variation in comprehension within individuals can be associated with changes in lateralization in the temporal auditory cortex.
Affiliation(s)
- Shota A Murai
- Faculty of Life and Medical Sciences, Doshisha University, 1-3 Miyakodani, Tatara, Kyotanabe 610-0321, Kyoto, Japan
- Hiroshi Riquimaroux
- Faculty of Life and Medical Sciences, Doshisha University, 1-3 Miyakodani, Tatara, Kyotanabe 610-0321, Kyoto, Japan
44
Kadem M, Herrmann B, Rodd JM, Johnsrude IS. Pupil Dilation Is Sensitive to Semantic Ambiguity and Acoustic Degradation. Trends Hear 2021; 24:2331216520964068. [PMID: 33124518 PMCID: PMC7607724 DOI: 10.1177/2331216520964068] [Indexed: 12/29/2022]
Abstract
Speech comprehension is challenged by background noise, acoustic interference, and linguistic factors, such as the presence of words with more than one meaning (homonyms and homophones). Previous work suggests that homophony in spoken language increases cognitive demand. Here, we measured pupil dilation—a physiological index of cognitive demand—while listeners heard high-ambiguity sentences, containing words with more than one meaning, or well-matched low-ambiguity sentences without ambiguous words. This semantic-ambiguity manipulation was crossed with an acoustic manipulation in two experiments. In Experiment 1, sentences were masked with 30-talker babble at 0 and +6 dB signal-to-noise ratio (SNR), and in Experiment 2, sentences were heard with or without a pink noise masker at –2 dB SNR. Speech comprehension was measured by asking listeners to judge the semantic relatedness of a visual probe word to the previous sentence. In both experiments, comprehension was lower for high- than for low-ambiguity sentences when SNRs were low. Pupils dilated more when sentences included ambiguous words, even when no noise was added (Experiment 2). Pupils also dilated more when SNRs were low. The effect of masking was larger than the effect of ambiguity for performance and pupil responses. This work demonstrates that the presence of homophones, a condition that is ubiquitous in natural language, increases cognitive demand and reduces intelligibility of speech heard with a noisy background.
Affiliation(s)
- Mason Kadem
- Department of Psychology, The University of Western Ontario, London, Ontario, Canada; School of Biomedical Engineering, McMaster University, Hamilton, Ontario, Canada
- Björn Herrmann
- Department of Psychology, The University of Western Ontario, London, Ontario, Canada; Rotman Research Institute, Baycrest, Toronto, Ontario, Canada; Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Jennifer M Rodd
- Department of Experimental Psychology, University College London, London, United Kingdom
- Ingrid S Johnsrude
- Department of Psychology, The University of Western Ontario, London, Ontario, Canada; School of Communication and Speech Disorders, The University of Western Ontario, London, Ontario, Canada
45
Ayasse ND, Hodson AJ, Wingfield A. The Principle of Least Effort and Comprehension of Spoken Sentences by Younger and Older Adults. Front Psychol 2021; 12:629464. [PMID: 33796047 PMCID: PMC8007979 DOI: 10.3389/fpsyg.2021.629464] [Received: 11/14/2020] [Accepted: 02/22/2021] [Indexed: 01/18/2023]
Abstract
There is considerable evidence that listeners' understanding of a spoken sentence need not always follow from a full analysis of the words and syntax of the utterance. Rather, listeners may instead conduct a superficial analysis, sampling some words and using presumed plausibility to arrive at an understanding of the sentence meaning. Because this latter strategy occurs more often for sentences with complex syntax that place a heavier processing burden on the listener than sentences with simpler syntax, shallow processing may represent a resource-conserving strategy reflected in reduced processing effort. This factor may be even more important for older adults who as a group are known to have more limited working memory resources. In the present experiment, 40 older adults (M age = 75.5 years) and 20 younger adults (M age = 20.7 years) were tested for comprehension of plausible and implausible sentences with a simpler subject-relative embedded clause structure or a more complex object-relative embedded clause structure. Dilation of the pupil of the eye was recorded as an index of processing effort. Results confirmed greater comprehension accuracy for plausible than implausible sentences, and for sentences with simpler than more complex syntax, with both effects amplified for the older adults. Analysis of peak pupil dilations for implausible sentences revealed a complex three-way interaction between age, syntactic complexity, and plausibility. Results are discussed in terms of models of sentence comprehension, and pupillometry as an index of intentional task engagement.
Affiliation(s)
- Arthur Wingfield
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, United States
46
Zhang Y, Lehmann A, Deroche M. Disentangling listening effort and memory load beyond behavioural evidence: Pupillary response to listening effort during a concurrent memory task. PLoS One 2021; 16:e0233251. [PMID: 33657100 PMCID: PMC7928507 DOI: 10.1371/journal.pone.0233251] [Received: 04/30/2020] [Accepted: 02/15/2021] [Indexed: 11/18/2022]
Abstract
Recent research has demonstrated that pupillometry is a robust measure for quantifying listening effort. However, pupillary responses in listening situations where multiple cognitive functions are engaged and sustained over a period of time remain hard to interpret. This limits our conceptualisation and understanding of listening effort in realistic situations, because rarely in everyday life are people challenged by one task at a time. Therefore, the purpose of this experiment was to reveal the dynamics of listening effort in a sustained listening condition using a word repeat-and-recall task. Words were presented in speech-shaped noise at signal-to-noise ratios (SNRs) of 0 dB, 7 dB, and 14 dB, and in quiet. Participants were presented with lists of 10 words and repeated each word after its presentation. At the end of each list, participants either recalled as many words as possible or moved on to the next list. Pupil dilation was recorded throughout the experiment. When only word repetition was required, peak pupil dilation (PPD) was larger at 0 dB than in the other conditions; when recall was also required, PPD showed no difference among SNR levels, and PPD at 0 dB was smaller than in the repeat-only condition. Baseline pupil diameter and PPD followed different patterns of variation across the 10 serial positions within a block in the conditions requiring recall: baseline pupil diameter built up progressively and plateaued in the later positions (but shot up when listeners recalled the previously heard words from memory), whereas PPD decreased more quickly than in the repeat-only condition. The current findings demonstrate that additional cognitive load during a speech intelligibility task can disturb the well-established relation between pupillary response and listening effort. Both the magnitude and the temporal pattern of the task-evoked pupillary response differ greatly in complex listening conditions, calling for more listening-effort studies in complex and realistic listening situations.
Affiliation(s)
- Yue Zhang
- Department of Otolaryngology, McGill University, Montreal, Canada
- Centre for Research on Brain, Language and Music, Montreal, Canada
- Laboratory for Brain, Music and Sound Research, Montreal, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montreal, Canada
- Alexandre Lehmann
- Department of Otolaryngology, McGill University, Montreal, Canada
- Centre for Research on Brain, Language and Music, Montreal, Canada
- Laboratory for Brain, Music and Sound Research, Montreal, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montreal, Canada
- Mickael Deroche
- Department of Otolaryngology, McGill University, Montreal, Canada
- Centre for Research on Brain, Language and Music, Montreal, Canada
- Laboratory for Brain, Music and Sound Research, Montreal, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montreal, Canada
- Department of Psychology, Concordia University, Montreal, Canada
47
Holmes E, Utoomprurkporn N, Hoskote C, Warren JD, Bamiou DE, Griffiths TD. Simultaneous auditory agnosia: Systematic description of a new type of auditory segregation deficit following a right hemisphere lesion. Cortex 2021; 135:92-107. [PMID: 33360763 PMCID: PMC7856551 DOI: 10.1016/j.cortex.2020.10.023] [Received: 02/27/2020] [Revised: 09/17/2020] [Accepted: 10/22/2020] [Indexed: 11/27/2022]
Abstract
We investigated auditory processing in a young patient who experienced a single embolus causing an infarct in the right middle cerebral artery territory. This led to damage to auditory cortex including planum temporale that spared medial Heschl's gyrus, and included damage to the posterior insula and inferior parietal lobule. She reported chronic difficulties with segregating speech from noise and segregating elements of music. Clinical tests showed no evidence for abnormal cochlear function. Follow-up tests confirmed difficulties with auditory segregation in her left ear that spanned multiple domains, including words-in-noise and music streaming. Testing with a stochastic figure-ground task (a way of estimating generic acoustic foreground and background segregation) demonstrated that this was also abnormal. This is the first demonstration of an acquired deficit in the segregation of complex acoustic patterns due to cortical damage, which we argue is a causal explanation for the symptomatic deficits in the segregation of speech and music. These symptoms are analogous to the visual symptom of simultaneous agnosia. Consistent with functional imaging studies on normal listeners, the work implicates non-primary auditory cortex. Further, the work demonstrates a (partial) lateralisation of the necessary anatomical substrate for segregation that has not been previously highlighted.
Affiliation(s)
- Emma Holmes: Wellcome Centre for Human Neuroimaging, UCL, London, UK
- Nattawan Utoomprurkporn: UCL Ear Institute, UCL, London, UK; NIHR University College London Hospitals Biomedical Research Centre, University College London Hospitals NHS Foundation Trust, UCL, London, UK; Faculty of Medicine, Chulalongkorn University, King Chulalongkorn Memorial Hospital, Bangkok, Thailand
- Chandrashekar Hoskote: Lysholm Department of Neuroradiology, University College London Hospitals NHS Foundation Trust, UCL, London, UK
- Doris-Eva Bamiou: UCL Ear Institute, UCL, London, UK; NIHR University College London Hospitals Biomedical Research Centre, University College London Hospitals NHS Foundation Trust, UCL, London, UK
- Timothy D Griffiths: Wellcome Centre for Human Neuroimaging, UCL, London, UK; Biosciences Institute, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne, UK

48
Choi HG, Hong SK, Lee HJ, Chang J. Acute Alcohol Intake Deteriorates Hearing Thresholds and Speech Perception in Noise. Audiol Neurootol 2020; 26:218-225. [PMID: 33341812 DOI: 10.1159/000510694] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2020] [Accepted: 08/05/2020] [Indexed: 11/19/2022] Open
Abstract
OBJECTIVES The hearing process involves complex peripheral and central auditory pathways and can be influenced by various conditions and medications. To date, little is known about the effects of alcohol on auditory performance. The purpose of the present study was to evaluate how acute alcohol administration affects various aspects of hearing performance in human subjects, from the auditory perception threshold to the cognitively demanding speech-in-noise task. METHODS A total of 43 healthy volunteers were recruited, and each participant received an amount of alcohol calculated from body weight and sex, using the Widmark formula, to target a blood alcohol content of 0.05%. Hearing was tested in an alcohol-free condition (no alcohol intake within the previous 24 h) and an acute alcohol condition. A test battery composed of pure-tone audiometry, speech reception threshold (SRT), word recognition score (WRS), distortion product otoacoustic emissions (DPOAE), the gaps-in-noise (GIN) test, and the Korean matrix sentence test (testing speech perception in noise) was administered in the two conditions. RESULTS Acute alcohol intake elevated pure-tone hearing thresholds and SRT but did not affect WRS. Neither otoacoustic emissions recorded with DPOAE nor the temporal resolution measured with the GIN test was influenced by alcohol intake. Hearing performance in noise was decreased by alcohol in both the easy (-2 dB signal-to-noise ratio [SNR]) and difficult (-8 dB SNR) conditions. CONCLUSIONS Acute alcohol intake elevated auditory perception thresholds and impaired performance on complex, difficult auditory tasks rather than on simple ones.
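The Widmark dosing rule mentioned in the methods can be sketched as follows. This is a simplified illustration, not the paper's exact procedure: the function name is hypothetical, the distribution ratios (0.68 for males, 0.55 for females) are the commonly cited textbook values, and alcohol elimination during drinking is ignored.

```python
def widmark_alcohol_dose(weight_kg: float, sex: str, target_bac_percent: float = 0.05) -> float:
    """Grams of pure ethanol needed to reach a target blood alcohol content,
    per the classic Widmark relation A = c * r * W, where c is the target
    concentration in g/kg, r the body-water distribution ratio, W body weight.
    """
    # Commonly cited Widmark distribution ratios (assumed, not from the paper).
    r = {"male": 0.68, "female": 0.55}[sex]
    # 0.05% w/v blood alcohol corresponds to roughly 0.5 g/kg (per mille).
    target_g_per_kg = target_bac_percent * 10
    return target_g_per_kg * r * weight_kg
```

For a 70 kg male targeting 0.05% BAC this yields about 24 g of pure ethanol, on the order of two standard drinks, which is consistent with a per-participant dose scaled by weight and sex as described in the abstract.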
Affiliation(s)
- Hyo Geun Choi: Department of Otorhinolaryngology-Head & Neck Surgery, Hallym University College of Medicine, Chuncheon, Republic of Korea; Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Republic of Korea
- Sung Kwang Hong: Department of Otorhinolaryngology-Head & Neck Surgery, Hallym University College of Medicine, Chuncheon, Republic of Korea; Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Republic of Korea
- Hyo-Jeong Lee: Department of Otorhinolaryngology-Head & Neck Surgery, Hallym University College of Medicine, Chuncheon, Republic of Korea; Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Republic of Korea
- Jiwon Chang: Department of Otorhinolaryngology-Head & Neck Surgery, Hallym University College of Medicine, Chuncheon, Republic of Korea

49
Griffiths TD, Lad M, Kumar S, Holmes E, McMurray B, Maguire EA, Billig AJ, Sedley W. How Can Hearing Loss Cause Dementia? Neuron 2020; 108:401-412. [PMID: 32871106 PMCID: PMC7664986 DOI: 10.1016/j.neuron.2020.08.003] [Citation(s) in RCA: 159] [Impact Index Per Article: 39.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2020] [Revised: 07/31/2020] [Accepted: 08/05/2020] [Indexed: 12/11/2022]
Abstract
Epidemiological studies identify midlife hearing loss as an independent risk factor for dementia, estimated to account for 9% of cases. We evaluate candidate brain bases for this relationship. These bases include a common pathology affecting the ascending auditory pathway and multimodal cortex, depletion of cognitive reserve due to an impoverished listening environment, and the occupation of cognitive resources when listening in difficult conditions. We also put forward an alternate mechanism, drawing on new insights into the role of the medial temporal lobe in auditory cognition. In particular, we consider how aberrant activity in the service of auditory pattern analysis, working memory, and object processing may interact with dementia pathology in people with hearing loss. We highlight how the effect of hearing interventions on dementia depends on the specific mechanism and suggest avenues for work at the molecular, neuronal, and systems levels to pin this down.
Affiliation(s)
- Timothy D Griffiths: Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK; Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK; Human Brain Research Laboratory, Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Meher Lad: Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK
- Sukhbinder Kumar: Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK
- Emma Holmes: Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Bob McMurray: Departments of Psychological and Brain Sciences, Communication Sciences and Disorders, and Otolaryngology, University of Iowa, Iowa City, IA 52242, USA
- Eleanor A Maguire: Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- William Sedley: Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK

50
50
|
Herrmann B, Johnsrude IS. Absorption and Enjoyment During Listening to Acoustically Masked Stories. Trends Hear 2020; 24:2331216520967850. [PMID: 33143565 DOI: 10.1177/2331216520967850] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Comprehension of speech masked by background sound requires increased cognitive processing, which makes listening effortful. Research in hearing has focused on such challenging listening experiences, in part because they are thought to contribute to social withdrawal in people with hearing impairment. Research has focused less on positive listening experiences, such as enjoyment, despite their potential importance in motivating effortful listening. Moreover, the artificial speech materials, such as disconnected, brief sentences, commonly used to investigate speech intelligibility and listening effort may be ill-suited to capture positive experiences when listening is challenging. Here, we investigate how listening to naturalistic spoken stories under acoustic challenges influences the quality of listening experiences. We assess absorption (the feeling of being immersed/engaged in a story), enjoyment, and listening effort and show that (a) story absorption and enjoyment are only minimally affected by moderate speech masking although listening effort increases, (b) thematic knowledge increases absorption and enjoyment and reduces listening effort when listening to a story presented in multitalker babble, and (c) absorption and enjoyment increase and effort decreases over time as individuals listen to several stories successively in multitalker babble. Our research indicates that naturalistic spoken stories can reveal several concurrent listening experiences and that expertise in a topic can increase engagement and reduce effort. Our work also demonstrates that, although listening effort may increase with speech masking, listeners may still find the experience both absorbing and enjoyable.
Affiliation(s)
- Björn Herrmann: Rotman Research Institute, Baycrest, Toronto, Ontario, Canada; Department of Psychology, University of Toronto, Toronto, Ontario, Canada; Department of Psychology, University of Western Ontario, London, Canada
- Ingrid S Johnsrude: Department of Psychology, University of Western Ontario, London, Canada; School of Communication Sciences & Disorders, University of Western Ontario, London, Canada